5 modelops capabilities that boost data science productivity

Organizations are hiring data scientists to develop ML models and experiment with AI, but the business impact is lagging for many large enterprises.


In the State of Modelops 2022 report, 51% of large enterprises said they had run early-stage pilots or experiments in artificial intelligence but had yet to put them into production. Only 38% reported that they can answer executive questions about the return on investment of AI, and 43% said their company is ineffective at finding and fixing issues in a timely manner.

These challenges raise the question of how to improve the productivity of developing, delivering, and managing ML models in production.

MLops or modelops? You may need both

Data scientists now have plenty of analytics tools to choose from when developing models, including Alteryx, AWS SageMaker, Dataiku, DataRobot, Google Vertex AI, KNIME, Microsoft Azure Machine Learning, SAS, and others. There are also MLops platforms that help data science teams integrate their analytics tools, run experiments, and deploy ML models during the development process.

Rohit Tandon, general manager for ReadyAI and managing director at Deloitte Consulting, explains the role of MLops in large-scale AI deployments. “As enterprises seek to scale AI development capacity from dozens to hundreds or even thousands of ML models, they can benefit from the same engineering and operational discipline that devops brought to software development. MLops can help automate manual, inefficient workflows and streamline all steps of model construction and management.”

Although many MLops platforms support deploying and monitoring models in production, their primary function is to serve data scientists during development, testing, and improvement. Modelops platforms and practices aim to fill a gap by providing collaboration, orchestration, and reporting tools around which ML models are running in production and how well they perform from operational, compliance, and business perspectives.

One way to think about MLops versus modelops is that MLops serves data scientists much as devops tools serve software developers, while modelops provides governance, collaboration, and reporting across the ML life cycle, with a focus on operations, monitoring, and support.

Example modelops use cases include banks developing credit approval models, hospitals using ML to identify patient anomalies, and retailers using ML to balance production throughput with customer demand. In these cases, business stakeholders seek explainable ML and need to trust the predictions. In some cases, regulators require model transparency.

There’s certainly some confusing overlap in terminology and capabilities between MLops, modelops, and even dataops. In thinking about how to help data scientists deploy, manage, and provide business reporting on compliant models, I offer five modelops capabilities to improve data science productivity.

1. Collaborate using a catalog of machine learning models

Do data science teams know what machine learning models are running in production and how well they perform? Much like data governance and dataops use data catalogs as a go-to source for available data sets, modelops can provide operational transparency into ML models.

Dmitry Petrov, cofounder and CEO of Iterative, says, “Productivity of data scientists can be measured in how quickly they can bring models to market into their organization’s apps and services. To accomplish that, I recommend improving the visibility and collaboration across data science teams.”

Petrov suggests “having a central place to store all model-related information, such as data, experiments, metrics, and hyperparameters, and connecting to devops-oriented tools so that putting models into production goes more smoothly.”
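
To make that concrete, here is a minimal sketch of publishing model metadata to a central store, using the open source MLflow tracking server and model registry as one example (not necessarily the tooling Petrov has in mind). The experiment, run, and model names are hypothetical, and a tracking server with the model registry enabled is assumed.

import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

# Train a toy model so the sketch is self-contained
X, y = make_classification(n_samples=500, random_state=42)
params = {"max_depth": 3, "learning_rate": 0.1}
model = GradientBoostingClassifier(**params).fit(X, y)

mlflow.set_experiment("credit-approval")  # hypothetical experiment name
with mlflow.start_run(run_name="gbm-baseline"):
    # Log the model-related information Petrov lists: hyperparameters and metrics
    mlflow.log_params(params)
    mlflow.log_metric("train_auc", roc_auc_score(y, model.predict_proba(X)[:, 1]))
    # Registering the model is what makes it discoverable in the shared catalog
    mlflow.sklearn.log_model(model, "model", registered_model_name="credit-approval")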

2. Establish a consistent and automated path to production

The devops tools Petrov mentions refer specifically to CI/CD tools that help push code, parameters, and data artifacts to runtime environments. Implementing continuous deployment to production environments involves additional business stakeholders, especially when predictive models require compliance reviews.

Manasi Vartak, founder and CEO of Verta, suggests, “Modelops platforms with readiness checklists, automated workflows, and inbuilt access controls for governance can facilitate and expedite handover.” She continues, “Data science teams hand over models to their model risk management, ML engineering, SRE, and devops teams to ensure operational reliability, governance, security, and scalability of mission-critical, real-time deployments of AI.”
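
As a sketch of what an automated readiness gate in a CI/CD pipeline might look like, the snippet below blocks promotion until checklist items pass. The ModelRecord fields, check names, and AUC threshold are illustrative assumptions, not any specific platform's API.

from dataclasses import dataclass

@dataclass
class ModelRecord:
    name: str
    version: str
    validation_auc: float
    risk_review_signed_off: bool
    monitoring_configured: bool

def readiness_failures(m: ModelRecord, min_auc: float = 0.75) -> list:
    """Return the checklist items that should block promotion to production."""
    failures = []
    if m.validation_auc < min_auc:
        failures.append(f"validation AUC {m.validation_auc:.2f} is below {min_auc}")
    if not m.risk_review_signed_off:
        failures.append("model risk management sign-off is missing")
    if not m.monitoring_configured:
        failures.append("drift monitoring is not configured")
    return failures

candidate = ModelRecord("credit-approval", "3", 0.87, True, True)
blockers = readiness_failures(candidate)
if blockers:
    raise SystemExit("promotion blocked: " + "; ".join(blockers))
print(f"{candidate.name} v{candidate.version} cleared for deployment")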

3. Monitor ML models for operations and compliance

Helping data scientists automate and deploy more models faster can create business issues if an operational modelops practice isn't keeping pace.

A key operational need is model monitoring, as Kjell Carlsson, head of data science strategy and evangelism at Domino Data Lab, explains. “With the help of modelops platforms, data scientists can develop models faster. In the best instances, these platforms streamline deployment and monitoring of, for example, model drift across the different environments where the business applications reside, whether in the cloud or on-prem.”

John Wills, field CTO at Alation, shared an easy-to-understand definition of model drift. “Model drift is the platform’s ability to measure the situation where the distribution of model inputs changes,” he says. “Early identification of this shift allows data scientists to get ahead of problems and negative business impacts related to loss of precision.”
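
Here is a minimal sketch of the kind of input-drift check Wills describes, comparing a feature's training-time distribution against recent production traffic with a two-sample Kolmogorov-Smirnov test. The feature, the synthetic data, and the 0.05 alert threshold are illustrative assumptions.

import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(7)
training_income = rng.normal(60_000, 15_000, 10_000)   # feature as seen at training time
production_income = rng.normal(66_000, 15_000, 2_000)  # recent production inputs, shifted

statistic, p_value = ks_2samp(training_income, production_income)
if p_value < 0.05:
    print(f"drift detected (KS statistic {statistic:.3f}): alert the model owner")
else:
    print("input distribution looks stable")

Catching the shift early, as Wills notes, gives data scientists time to retrain or recalibrate before precision loss turns into business impact.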

4. Provide executive reporting on business impacts

When data scientists deploy ML models to production and business users experience the benefits, how will executives sponsoring the AI investments know when they are paying off?

Krishna Kallakuri, CEO of Diwo, says, “The goal is rapid and accurate decisions, so companies should measure a data scientist’s productivity in tandem with the productivity of the analysts and business users that the AI serves.”

Iterative’s Petrov adds that modelops platforms should visualize the “progress around model building and improvements and share it amongst team members and leadership.”

The bottom line is that the impact of production AI and ML isn’t always visible to executives. It’s often an ingredient in a customer experience, employee workflow, or application integration that delivers the impact. Modelops platforms with executive-level reporting aim to close this gap.
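
One sketch of what rolling per-model operational data up into a single stakeholder summary could look like is below; the models, metrics, and dollar figures are invented for illustration and would come from the modelops platform in practice.

import pandas as pd

report = pd.DataFrame([
    {"model": "credit-approval", "predictions_30d": 120_000, "est_value_usd": 310_000, "drift_alerts": 0},
    {"model": "churn-risk", "predictions_30d": 45_000, "est_value_usd": 95_000, "drift_alerts": 2},
])
# Normalize business value so executives can compare models of different scale
report["value_per_1k_predictions"] = (report["est_value_usd"] / report["predictions_30d"] * 1_000).round(2)
print(report.to_string(index=False))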

5. Provide capabilities to support the ML model life cycle

Let’s consider some of the capabilities of modelops platforms that improve data science productivity:

  • Manage production deployments with versioning and rollback capabilities (see the sketch after this list)
  • Enable collaboration with other data scientists, promote knowledge sharing, and enable reuse
  • Identify and help prioritize which models in production are underperforming or require support
  • Improve model auditability and audit reporting so data scientists don’t lose precious time responding to regulators
  • Automate business reporting so that data scientists have a single source to share with stakeholders and business executives that demonstrates the business impacts of their models
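
As one concrete illustration of the versioning-and-rollback item above, here is a sketch using the MLflow model registry’s stage transitions; the model name and version numbers are hypothetical, and other registries offer equivalent operations.

from mlflow.tracking import MlflowClient

client = MlflowClient()

# Promote the new version to production
client.transition_model_version_stage(name="credit-approval", version="4", stage="Production")

# If monitoring flags a regression, roll back by restoring the prior version
client.transition_model_version_stage(name="credit-approval", version="3", stage="Production")
client.transition_model_version_stage(name="credit-approval", version="4", stage="Archived")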

These are some of the capabilities AI leaders want from modelops platforms—the outcomes that are important to organizations aiming to deliver business impacts from ML investments.

More organizations will experiment with ML and AI. The question remains whether MLops, modelops, or other emerging best practices will help data scientists deploy, manage, and demonstrate business outcomes from models in production.

Copyright © 2022 IDG Communications, Inc.