DataOps and MLOps - Hype or Inevitable Reality?

As often mentioned both on this site and elsewhere, the recent explosion of data due to smartphones and IoT has been a goldmine for companies that are seeking to better serve their customers. The availability of masses of high-quality data enables companies to keep a finger on the pulse of both external and internal factors that could affect their delivery of services to customers. The rise of cloud computing and increased computing power has, in turn, enabled companies to make use of cutting-edge machine learning algorithms to draw insights from this data, further improving the customer experience.

These developments have also, however, given rise to the need for robust data management and ML model development. As traditional software development was disrupted and transformed by DevOps, so will data engineering and production machine learning be transformed by DataOps and MLOps.

DevOps

The term "DevOps", a portmanteau of Development and Operations, was coined by Patrick Debois in 2009. Gartner defines DevOps as follows:

“DevOps represents a change in IT culture, focusing on rapid IT service delivery through the adoption of agile, lean practices in the context of a system-oriented approach. DevOps emphasizes people (and culture), and seeks to improve collaboration between operations and development teams. DevOps implementations utilize technology—especially automation tools that can leverage an increasingly programmable and dynamic infrastructure from a life cycle perspective.”

Traditional IT service delivery teams comprised several sub-teams with clearly defined and delineated responsibilities. The development team wrote code and handed it over to quality assurance (QA). The QA team then performed rigorous testing to uncover bugs, which were fed back to development. Several iterations between development and QA were usually required before the software met quality standards, after which the code was passed to the operations team to build and deploy.

The silo mentality of traditional IT service delivery led to a slow, difficult, and error-prone process. The software development lifecycle took months, and releases had to be scheduled at regular intervals instead of as and when required. DevOps changed all that.

There are three key qualities that set modern DevOps apart from traditional IT service delivery. Firstly, a single team is responsible for the entire product, from development to deployment. Secondly, as a necessary consequence of this increased responsibility, teams are multi-disciplinary and develop a wider range of skills. Finally, templating and automation replace the previously manual methods of building, deploying, and testing software.

Making a single team accountable for the end-to-end product increases collaboration and greatly shortens the development lifecycle. Integrating development and QA personnel into a single team speeds up bug-fixing iterations, producing software that is ready for build and deployment more quickly than was previously possible.

The meteoric rise of cloud computing has also enabled the automation of software integration and deployment, known as continuous integration/continuous deployment (CI/CD). CI/CD allows new software to be delivered to the end user at much more frequent intervals than previously possible. A further step that automates testing, known as continuous testing, ensures that the quality of CI/CD projects is maintained despite their compressed release cycles.
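To make continuous testing concrete, here is a minimal sketch of the kind of automated check a CI pipeline could run on every commit. The pricing module and its rules are hypothetical, invented purely for illustration; pytest is assumed as the test runner.

```python
# test_pricing.py -- a tiny automated test suite a CI pipeline could run
# on every commit before allowing a deployment to proceed.
# apply_discount is a hypothetical example function, not a real module.
import pytest

def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_apply_discount():
    assert apply_discount(100.0, 25) == 75.0

def test_apply_discount_rejects_bad_input():
    with pytest.raises(ValueError):
        apply_discount(100.0, 150)
```

In a CI/CD setup, a failing test automatically blocks the deployment, which is what keeps quality up despite the faster release cadence.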

The modern DevOps process affords dramatic improvements to an organisation's ability to release software by reducing development lifecycle times, increasing deployment velocity, improving quality, and reducing mean time to recovery.

“The foundation of DevOps success is how well teams and individuals collaborate across the enterprise to get things done more rapidly, efficiently and effectively.”

—Tony Bradley, “Scaling Collaboration in DevOps,” DevOps.com

In the next two sections, we look at how DevOps translates into the data and ML spaces.

DataOps

Data platform teams are accountable for ingesting, storing, and processing an organisation's data in a safe, secure, and scalable fashion. Modern data can be ingested from a bewildering variety of sources, including IoT devices, clickstreams, and batch loads via scripts or APIs. This data is then stored en masse in operational data stores or data warehouses. The processing or transformation of stored data is carried out on shared or dedicated compute clusters.

The sheer volume and diversity of data is growing at an exponential rate. The ability to rapidly process and analyse data to draw insights, and then act on those insights, provides a significant competitive advantage to data-driven organisations. Like DevOps, DataOps is an invaluable tool for driving home this competitive advantage by getting the right information to the right people at the right time. Jack Vaughan defines DataOps as follows:

"A DataOps strategy, which is inspired by the DevOps movement, strives to speed the production of applications running on big data processing frameworks. Like DevOps, DataOps seeks to break down silos across IT operations and software development teams, encouraging line-of-business stakeholders to also work with data engineers, data scientists and analysts so that the organization’s data can be used in the most flexible, effective manner possible to achieve positive business outcomes.”

DataOps automates the day-to-day management of large databases and the extraction, transformation and loading (ETL) of data from these databases. In particular, the automation of the ETL process has massive flow-on effects on the productivity of the end-users of the data, typically data scientists and data analysts. The personal experience of the Deep Blue AI team has been that data scientists spend a significant amount of their time transforming badly formed data. DataOps automates this transformation and ensures that transformation algorithms need only be developed once for the whole organisation to benefit.
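As a minimal sketch of what this looks like in practice, the hypothetical ETL job below encodes a cleaning routine once so every downstream consumer receives well-formed data. The table names, columns, and connection string are assumptions; pandas and SQLAlchemy are assumed to be available.

```python
# A sketch of an automated ETL step: extract raw events, apply the
# organisation's standard cleaning rules once, and load curated data.
import pandas as pd
from sqlalchemy import create_engine

def clean_customer_events(df: pd.DataFrame) -> pd.DataFrame:
    """Standardise badly formed raw data for all downstream users."""
    df = df.drop_duplicates(subset="event_id")
    df["event_time"] = pd.to_datetime(df["event_time"], errors="coerce")
    df = df.dropna(subset=["event_time", "customer_id"])
    df["customer_id"] = df["customer_id"].str.strip().str.upper()
    return df

if __name__ == "__main__":
    engine = create_engine("postgresql://user:pass@warehouse/analytics")  # hypothetical DSN
    raw = pd.read_sql("SELECT * FROM raw.customer_events", engine)        # extract
    curated = clean_customer_events(raw)                                  # transform
    curated.to_sql("customer_events", engine, schema="curated",
                   if_exists="replace", index=False)                      # load
```

Scheduled by an orchestrator, a job like this runs without manual intervention, so data scientists receive data that has already been cleaned once, consistently, for the whole organisation.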

Another less-appreciated but still important function of the DataOps team is to monitor incoming data quality and alert end users when the data changes in some fundamental way. By monitoring data quality, the team can quickly and reliably detect failing IoT devices, changes in the environment, or changes in any other upstream process, enabling the organisation to react appropriately.
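A hedged sketch of such a check might look like the following: compare each incoming batch against simple expectations and alert when something shifts. The thresholds, column name, and alerting mechanism are all assumptions made for illustration.

```python
# A simple daily data-quality check over an incoming batch of IoT data.
import pandas as pd

EXPECTED_MIN_ROWS = 10_000   # assumed typical daily volume
EXPECTED_NULL_RATE = 0.02    # assumed acceptable fraction of missing readings

def check_batch(df: pd.DataFrame) -> list[str]:
    """Return a list of human-readable data-quality issues, if any."""
    issues = []
    if len(df) < EXPECTED_MIN_ROWS:
        issues.append(f"low volume: {len(df)} rows (expected >= {EXPECTED_MIN_ROWS})")
    null_rate = df["sensor_reading"].isna().mean()
    if null_rate > EXPECTED_NULL_RATE:
        issues.append(f"null rate {null_rate:.1%} exceeds {EXPECTED_NULL_RATE:.0%}, "
                      "possibly failing IoT devices")
    return issues

def alert(issues: list[str]) -> None:
    # A real pipeline would notify a channel or ticketing system;
    # printing stands in for that here.
    for issue in issues:
        print(f"DATA QUALITY ALERT: {issue}")
```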

Of course, as with any data-related work, DataOps must take the security of its data pipelines and stores seriously. Modern infrastructure is mostly cloud-based, where an external supplier such as Microsoft or AWS manages physical security, but access controls and digital security remain the responsibility of the DataOps team.

MLOps

The massive increase in data availability in the past few years has driven an increase in demand for data science teams, comprising data scientists and machine learning engineers. Data science teams leverage data and data models to draw insights and make predictions about items as varied as customer behavior, market responses, and legal documents. These insights and predictions can be used to enhance the customer experience, increase revenue, and drive efficiencies in business and technology processes.

Traditional data science operating models involved data scientists coding and training models on local machines or purpose-built workstations. The trained models were then tested manually before being sent to the operations team for deployment. The increased volume of modern data and quickly changing customer preferences have made such operating models obsolete. Today, data science needs to follow a CI/CD operating model like that of DevOps in order to iterate rapidly in production.

The introduction of machine learning platforms by the major cloud providers has made this possible. Platforms such as Azure Cognitive Services and Amazon SageMaker provide easy-to-use APIs for several canonical machine learning tasks, such as sentiment analysis, facial recognition, and content moderation. Software engineers with little-to-no ML expertise can leverage these APIs to build working solutions without needing to understand the intricacies of the underlying models.
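As an illustration of how little code this requires, the snippet below calls AWS Comprehend, a comparable managed service, for sentiment analysis via boto3; the same pattern applies to the other platforms mentioned above. Valid AWS credentials and region configuration are assumed.

```python
# Sentiment analysis through a managed cloud API: no model training,
# no ML expertise required.
import boto3

comprehend = boto3.client("comprehend")

response = comprehend.detect_sentiment(
    Text="The new release fixed every issue I reported. Fantastic support!",
    LanguageCode="en",
)
print(response["Sentiment"])       # e.g. "POSITIVE"
print(response["SentimentScore"])  # per-class confidence scores
```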

For more complex or niche applications where an off-the-shelf model doesn't quite cut it, these platforms also provide frameworks for data scientists and ML engineers to develop custom models. Functionality such as experiment tracking, model and dataset versioning, and collaborative tools allows data science teams to dramatically shorten delivery timelines. Cloud ML platforms can also track both input and output data to detect data drift, keeping data scientists informed of changing trends in the environment and triggering model re-training whenever appropriate.
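To illustrate the data-drift idea, here is a minimal sketch that compares a feature's live distribution against its training distribution using a two-sample Kolmogorov-Smirnov test, one common (though not the only) drift check. The 0.05 threshold and the retraining trigger are assumptions; NumPy and SciPy are assumed to be available.

```python
# Flag data drift by testing whether live data still matches the
# distribution the model was trained on.
import numpy as np
from scipy.stats import ks_2samp

def drift_detected(train_values, live_values, alpha: float = 0.05) -> bool:
    """Return True if the two samples likely come from different distributions."""
    statistic, p_value = ks_2samp(train_values, live_values)
    return p_value < alpha  # small p-value: distributions likely differ

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, size=5_000)  # stand-in for training-time feature values
live = rng.normal(0.5, 1.0, size=5_000)   # shifted live feature values

if drift_detected(train, live):
    print("Data drift detected: consider triggering model re-training.")
```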

"MLOps follows a similar pattern to DevOps. The practices that drive a seamless integration between your development cycle and your overall operations process can also transform how your organization handles big data. Just like DevOps shortens production life cycles by creating better products with each iteration, MLOps drives insights you can trust and put into play more quickly."

—Elizabeth Wallace, "What is MLOps and Why Does it Matter?", opendatascience.com

This article was originally published at Deep Blue AI