October 07, 2021
Kannan Subbiah
FCA | CISA | CGEIT | CCISO | GRC Consulting | Independent Director | Enterprise & Solution Architecture | Former Sr. VP & CTO of MF Utilities | BU Soft Tech | itTrident
This application of AI became a valuable source of IT expertise that multiplied staff bandwidth to manage the solution and enabled full, in-depth monitoring of the entire networked environment. With Flowmon ADS in place, the institute has a comprehensive yet noise-free overview of suspicious behaviours in the partner networks, reliable detection capability, and a platform for validating indicators of compromise. Flowmon’s solution works at scale too. GéANT, a pan-European data network for the research and education community, is one of the world’s largest data networks, transferring over 1,000 terabytes of data per day over the GéANT IP backbone. At that scale there is simply no way to manually monitor the entire network for aberrant data. With two redundant Flowmon collectors deployed in parallel, GéANT had a pilot security solution capable of managing data flow of this scale live in just a few hours. After a few months of further testing, integration and algorithmic learning, the solution was ready to protect GéANT’s entire network from encrypted data threats.
“As digital transformation accelerates and we experience generational shifts, professionals will increasingly desire better work-life balance and freedom from legacy in-office models,” says Saum Mathur, chief product, technology and AI officer with Paro. “Consultancies and others that are reliant on legacy models are struggling to adapt to this new reality, and marketplaces are only furthering these models’ disruption. Three to five years ago, the gig economy pioneers offered customers finite, task-based services that didn’t require extensive experience and enabled flexible scheduling. With continued shifts in the technical and cultural landscape, the gig economy has been extended into professional services, which is powered by highly experienced subject matter experts of all levels.” Corporate culture needs to be receptive to the changes wrought by digital transformation. Forty-one percent of executives in the Alliantgroup survey have encountered employee resistance, while 32% say they have had “the wrong team or department overseeing initiatives.”
The Future Forum Pulse survey echoed a sentiment that has been voiced repeatedly over the past 18 or so months: employees have embraced remote working, and see it as a pillar of their future working preferences. Yet executives are more likely than lower-level workers to be in favour of a working week based heavily around an office. Of those surveyed, 44% of executives said they wanted to work from the office every day, compared to just 17% of employees. Three-quarters (75%) of executives said they wanted to work from the office 3-5 days a week, versus 34% of employees. This disconnect between employer and employee preferences risks being entrenched into new workplace policies, researchers found. Two-thirds (66%) of executives reported they were designing post-pandemic workforce plans with little to no direct input from employees – and yet 94% said they were "moderately confident" that the policies they had created matched employee expectations. What's more, more than half (56%) of executives reported they had finalized their plans on how employees can work in the future.
"CSPs' cloud and digital services have given them access to the enormous amounts of data required to effectively train AI models," the authors concluded. Such economies of scale have been an asset to the cloud providers for years. Years ago, RedMonk analyst Stephen O'Grady highlighted the "relentless economies of scale" that the cloud providers brought to hardware–they could simply build more cheaply than any enterprise could hope to replicate in their own data centers. Now the CSPs enjoy a similar advantage with data. But it's not merely a matter of raw data. The CSPs also have more experience using that data on a large scale. The CSPs have products (e.g., Amazon Alexa to assist with natural language processing, or Google Search to help with recommendation systems). Lots of data feeding ever-smarter applications feeding more data into the applications... it's a self-reinforcing cycle. Oh, and that hardware mentioned earlier? The CSPs also have more experience tuning hardware to process machine learning workloads at scale.?
Operationalizing ML is data-centric: the main challenge isn’t identifying a sequence of steps to automate but finding quality data that the underlying algorithms can analyze and learn from. This is often a question of data management and quality, for example when companies have multiple legacy systems and data are not rigorously cleaned and maintained across the organization. However, even if a company has high-quality data, it may not be able to use the data to train the ML model, particularly during the early stages of model design. Typically, deployments span three distinct, sequential environments: the developer environment, where systems are built and can be easily modified; a test environment (also known as user-acceptance testing, or UAT), where users can test system functionalities but the system can’t be modified; and, finally, the production environment, where the system is live and available at scale to end users.
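To make that sequence concrete, here is a minimal sketch of how the three environments might be kept explicit in configuration so a model can be promoted from development to UAT to production without code changes. All names and data paths below are hypothetical, not from the article.

from dataclasses import dataclass

@dataclass(frozen=True)
class EnvConfig:
    name: str
    data_source: str        # where this environment reads its data (placeholder paths)
    modifiable: bool        # dev systems can be changed freely; UAT and prod cannot
    serves_end_users: bool  # only production is live at scale

ENVIRONMENTS = {
    "dev":  EnvConfig("dev",  "s3://example-ml/dev-sample",   True,  False),
    "uat":  EnvConfig("uat",  "s3://example-ml/uat-snapshot", False, False),
    "prod": EnvConfig("prod", "s3://example-ml/full",         False, True),
}

def promote(current: str) -> str:
    # Enforce the dev -> uat -> prod sequence described above.
    order = ["dev", "uat", "prod"]
    position = order.index(current)
    if position == len(order) - 1:
        raise ValueError("already in production")
    return order[position + 1]

Keeping the environment properties in data rather than scattered through the code is one way to make the promotion path auditable.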
Managing code in machine learning applications is a complex matter. Let’s see why! Collaboration on model experiments among data scientists is not as easy as sharing traditional code files: Jupyter Notebooks are JSON documents that bundle code, outputs, and metadata together, which makes the git chores of keeping code synchronized between users more difficult and merge conflicts frequent. Developers must also code on different sub-projects: ETL jobs, model logic, training and validation, inference logic, and Infrastructure-as-Code templates. All of these separate projects must be centrally managed and adequately versioned! For modern software applications there are many consolidated version-control practices, such as conventional commits, feature branching, squash and rebase, and continuous integration. These techniques, however, are not always applicable to Jupyter Notebooks since, as stated above, they are not simple text files. Data scientists also need to try many combinations of datasets, features, modeling techniques, algorithms, and parameter configurations to find the solution that best extracts business value, as the sketch below illustrates.
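As one illustration of that combinatorial search, here is a minimal sketch using scikit-learn’s GridSearchCV. The dataset and parameter grid are placeholders chosen for the example; the point is that the trial-and-error the paragraph describes can live in a plain, versionable script rather than a pile of hand-edited notebook cells.

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = load_iris(return_X_y=True)

# Every combination of the parameters below is trained and cross-validated.
param_grid = {
    "n_estimators": [50, 100, 200],
    "max_depth": [None, 5, 10],
}

search = GridSearchCV(RandomForestClassifier(random_state=0), param_grid, cv=5)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))

Because this is an ordinary text file, the consolidated version-control practices mentioned above (feature branching, conventional commits, continuous integration) apply to it directly.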