Assess before Adoption - A Technology Maturity Framework

Technology underpins any digital transformation (DX). When driving a digital transformation in an organization with a given technology (IoT, AI, analytics, and so on), a good starting point is to assess the organization's maturity in that technology.

What are the five stages of maturity? (Throughout this framework, TUC refers to the technology under consideration, i.e., the technology whose maturity is being assessed.)

The five stages of maturity of any technology for effective DX are Exploring, Experimenting, Formalizing, Optimizing, and Transforming.

For example, let's consider AI as one of the underpinning technologies of DX. We want to assess the organization's maturity in this field. How do we go about it using this framework?

First, what do we mean by AI maturity? We employ the term "maturity" to denote the extent of formalized, operational processes within the organization, particularly concerning AI implementation. In the realm of software development, capability maturity models traditionally assess an organization's capacity to consistently deliver software projects in a streamlined and reproducible manner, often relying on metrics for control. We posit that in the domain of AI, organizational maturity progresses beyond simply delivering AI solutions to the stage where operations are conducted in tandem with AI, particularly evident in the Transforming stage of maturity.

Level-1 Exploring: Initially, organizations embark on the exploration phase as they transition from a general understanding of AI to targeted inquiries about its potential applications. This phase may commence with limited financial resources or with a formal commitment to integrating AI. Regardless, teams are in the process of discovering the specific advantages AI offers to their industry and grappling with how to leverage them effectively.

Subsequently, exploration is typically spearheaded by motivated individuals or teams dedicated to fostering informed interest and support. Progress is made through the assessment of business use cases, evaluating costs and benefits. While technical teams may initiate AI experiments, their primary focus is on learning and generating internal enthusiasm and awareness.

As organizations progress, they reach a critical juncture where they develop the capability to distinguish between promising AI opportunities and less viable ones. This enables teams to formulate a roadmap outlining the necessary steps to define compelling AI solutions.

Level-2 Experimenting: Initially, organizations transition into the Experimenting stage as they begin to test hypotheses regarding the potential value derived from specific AI solutions and the methods for achieving it. Typically, this involves conducting a Proof of Concept (POC), which may be initiated either through collaboration with an AI software vendor or by an internal team operating autonomously.

As experiments unfold, progress is achieved through the clarification of how AI can create business impact within the organization's unique context of resources, opportunities, and challenges. This iterative learning process entails not only verifying AI capabilities but also identifying additional requirements necessary for achieving desired outcomes. Teams that advance most rapidly maintain a keen focus on identifying obstacles and facilitators for deploying AI models in production, with particular emphasis on governance aspects such as reliability, safety, trustworthiness, and accountability.

In later stages of experimentation, business value may be realized through the strategic deployment of AI into specific application areas as a calculated risk. At this juncture, it becomes paramount for teams to discern which projects are suitable for production and to establish clear metrics for measuring success.

Use Case: An insurance company grappled with the increasing complexity of processing insurance claims at scale. To address this challenge, they explored the potential of new Optical Character Recognition (OCR) techniques driven by deep learning algorithms to expedite claim form intake. Additionally, they investigated predictive techniques to enhance the efficiency of claim approval processes. However, before implementing these solutions, the company needed to ascertain the achievable performance levels and associated costs specific to their market niche. They meticulously curated a set of test data and performance metrics to evaluate various trade-offs, including the rates of false negatives and positives. The experiment culminated in the development of a gradient boosting model, capable of significantly improving straight-through processing rates while reducing current processing costs by up to 27%. This successful Proof of Concept (POC) empowered the insurer to initiate a pilot project for full-scale implementation in the next phase of maturity: Formalizing.
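The trade-off analysis described in this use case can be sketched as a small evaluation harness. The field names, label convention, and toy data below are hypothetical illustrations, not the insurer's actual pipeline or model:

```python
# Hypothetical evaluation harness for a claim-approval classifier.
# Label convention (assumed): 1 = auto-approve (straight-through), 0 = human review.

def evaluate(y_true, y_pred):
    """Return false-positive rate, false-negative rate, and the
    straight-through processing (STP) rate implied by the predictions."""
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    negatives = sum(1 for t in y_true if t == 0)
    positives = sum(1 for t in y_true if t == 1)
    return {
        "fp_rate": fp / negatives if negatives else 0.0,
        "fn_rate": fn / positives if positives else 0.0,
        "stp_rate": sum(y_pred) / len(y_pred),  # share of claims auto-approved
    }

# Toy data: six claims, of which the model auto-approves four.
truth = [1, 1, 1, 0, 0, 1]
preds = [1, 1, 0, 0, 1, 1]
report = evaluate(truth, preds)
print(report)
```

A curated test set like this lets the team compare candidate models (gradient boosting among them) on the same metrics before committing to a pilot.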

Level-3 Formalizing: Initially, organizations transition into the Formalizing stage when they successfully deploy their initial AI projects into production, often as limited pilot programs. The objective shifts from experimentation to leveraging the insights gained from these experiments to achieve tangible business outcomes.

The process of implementing AI solutions into production demands significant resources and effort, requiring each solution to be supported by a well-defined business case and agreed-upon performance metrics. Moreover, adherence to internal risk policies and industry regulations is essential, as AI projects cannot be launched without adequate processes and software tools to ensure responsible use. Organizations at this stage may encounter gaps in AI Governance if they have not yet matured in this area.

Subsequently, while initial AI solutions may have been developed and deployed in an ad hoc manner, Formalizing organizations utilize their experience to refine future plans for standardizing or streamlining AI delivery. This strategic focus prompts the organization to address any deficiencies, such as the need for more integrated data strategy to support AI solution deployment.

In later stages, executive-level sponsorship becomes crucial for adopting more complex AI applications in critical business processes. This support translates into increased budgets, mandates, and strategic plans, with a specific emphasis on ensuring the safety, responsibility, and maintainability of AI models over time.

Use Case: Formalizing a machine learning solution to minimize delays in transportation and logistics

In the transportation and logistics sector, the efficient loading and unloading of cargo ships by trucks is essential. However, scheduling trucks has become increasingly challenging, leading to significant wait times for drivers. While traditional statistical methods identified some causes of delays, a Proof of Concept demonstrated that implementing an AI model could double the accuracy of predicted wait times. To effectively reduce wait times using AI, it was crucial to consider the diverse needs of truckers, workers, planners, and transportation operating systems.

Initially, a thorough data audit was conducted to ensure the availability and quality of data for training and deploying machine learning models. This process also helped outline requirements for technical system integrations. Additionally, in-person interviews were conducted to understand the problem from various perspectives and garner support for AI-driven solutions.

Subsequently, a machine learning model was trained to predict the behavior of multiple agents and processes, providing actionable insights for users. The solution was then piloted in a limited capacity, with a production environment established for the model and a system implemented to monitor metrics on model quality and value throughout operational and seasonal changes. These measures ensured that the system could be gradually expanded as its benefits were validated and stakeholders became more confident in its use. With the successful implementation of its first AI solution, the organization laid the groundwork for scaling impact in the next phase: Optimizing.

Level-4 Optimizing: Initially, organizations begin transitioning into the Optimizing stage when they have successfully deployed at least one AI solution into production and can effectively select, deliver, and manage additional AI projects with a positive return on investment (ROI).

Subsequently, as the number of deployed AI solutions increases, there are new opportunities to enhance the efficiency of delivering AI projects. This includes aspects such as the reusability of AI solution components and alignment between different organizational roadmaps, leading to greater cost savings and faster deployment. However, with this growth comes new challenges, particularly regarding the complexity of supporting AI models in production. This necessitates the development of new infrastructure and programs to integrate data, train users, and effectively measure and control AI model performance at scale.

In later stages, the organization has made significant investments to streamline the development and management of AI systems. Additionally, formalized policies and guidelines for the responsible use of AI have been established. Typically, C-level sponsorship has played a crucial role in driving integration across the organization at this stage.

Use Case: Streamlining agile AI development at an insurance company

After successfully deploying several AI models to production, an insurance company aimed to extend their success throughout the organization. They identified data preparation as the primary obstacle, noting that data scientists and engineers were dedicating considerable time to organizing and analyzing data for their AI solutions. To overcome this hurdle, the company conducted interviews and workshops to pinpoint the problem and devise a scalable solution. Their analysis revealed a lack of standardized methods and documentation for data analysis as a major bottleneck. Consequently, they developed a comprehensive strategy to address this challenge, outlining new tools, processes, and both technical and non-technical roles. This strategy aimed to empower personnel, enhance data management practices, and strengthen governance for AI initiatives.

By implementing this strategy, the insurer not only bridged skill gaps within teams but also prioritized investments in a data lake platform to streamline AI model development. Each step taken to optimize AI delivery brings the insurer closer to the transformative stage of their journey.

Level-5 Transforming: Initially, organizations progress into the Transforming stage when all necessary organizational structures are established for the integration of AI, and a significant portion of business decisions are facilitated by or made with artificial intelligence. With widespread AI literacy and effective communication of the AI vision and roadmap, support is garnered for cross-team collaboration, leading to the development of advanced AI solutions.

Subsequently, the organization actively employs AI to shape or reshape business models, products, and services, alongside operational strategies. AI becomes a primary budgetary focus, with executives predominantly relying on AI-driven insights to inform decision-making. The strategic direction of the company becomes closely intertwined with its utilization of AI, and organizational silos continue to dissolve to further integrate data, infrastructure, talent, and operations for AI advancement.

In later stages, as transformative AI maturity reaches its peak, the technology becomes deeply embedded in business operations and throughout entire value chains, serving as a cornerstone for the conception and implementation of new strategic opportunities. Organizations committed to ongoing transformation must continually push the boundaries of AI science and engineering while ensuring its ethical application in society.

Numerous Routes to Revolutionary AI

Currently, only a handful of organizations worldwide have attained the transformative stage of AI deployment, and it remains uncertain whether any have fully realized its potential. These organizations typically fall into two broad categories, each following distinct paths toward AI integration.

The first category comprises firms that have either been founded with AI as a central focus or have undergone (re)construction around digital frameworks before strategically embracing AI. Examples of these "AI-first" entities include renowned platforms such as Uber and Airbnb, along with emerging research and development entities in cutting-edge sectors like aerospace and biotechnology.

In contrast, the second category encompasses "AI-focused" firms like Google and Amazon, which have been digitally oriented since their inception during the dot-com era. Additionally, established players like Microsoft have transitioned into AI-focused entities after making substantial investments in digital transformation. Presently, the majority of large organizations fall into this latter category, requiring a significant pivot toward digital-centric operations before fully harnessing the potential of transformative AI.

The five dimensions behind DX maturity in a technology are:

  1. Strategy
  2. Data
  3. Technology
  4. People
  5. Governance

Let's continue with the same use case of AI (as the TUC) and explain how each dimension varies by maturity.

Strategy

At its essence, strategy revolves around the decisions a business makes to achieve success. In the context of AI maturity, strategic planning entails devising a course of action to attain the desired level of AI advancement within your organization.

Your strategy should provide clear guidance on the steps necessary for AI implementation, detailing the what, where, when, and why—including how the organization intends to leverage AI for competitive advantage post-implementation. Crafting this plan entails making choices that strike a balance between short-term objectives and long-term goals, considering factors such as the organization's current AI maturity stage, competitive environment, strategic aspirations, and leadership's desired pace of advancement.

Neglecting the strategic dimension can undermine AI initiatives, leaving experiments without the necessary business direction and rationale to surmount deployment obstacles or maintain relevance post-implementation.

1.1 Exploration Phase

At this stage, the organization lacks strategic alignment regarding its AI objectives and implementation methods. Typically, internal enthusiasts or experts are exploring potential use cases or experimenting with personal projects related to AI. However, these early visions often suffer from either being overly narrow, focusing on non-critical areas of the business, or overly broad and unrealistic, lacking a clear value proposition or sufficient resources to progress.

To progress to the Experimentation Phase: Foster alignment between business and technical leaders regarding the necessity of developing an AI strategy.

1.2 Experimentation Phase

Although an overarching AI strategy or vision is not yet established, organizations are beginning to move in that direction through two primary avenues. Firstly, they are planning the utilization of AI within specific subsets of the organization, such as individual business units or teams. Secondly, they are refining and testing hypotheses regarding potential AI solutions for business problems through trials and Proof of Concepts (POCs). While some executive sponsorship exists to fund POCs, project owners bear the responsibility of proving the viability of investment opportunities.

To advance to the Formalization Phase: Align and rally leadership around AI investments by showcasing successful Proof of Concepts (POCs).

1.3 Formalization Phase

Executive sponsorship facilitates the definition of the AI strategy for the organization, typically emanating from a VP-level executive or higher. Although immediate returns from AI investments may be limited, the organization can forecast ROI with clarity, enabling the unlocking of budgets and mandates necessary for strategy execution.

To progress to the Optimization Phase: Document the AI strategy to ensure shared understanding within the organization. Secure budget allocation and C-suite sponsorship for AI projects.

1.4 Optimization Phase

The organization begins implementing its AI strategy and mandate with clarity, supported by C-level sponsorship to integrate AI across various facets of the enterprise. The AI roadmap aligns with broader strategies such as digital transformation and innovation, with pre-approved budgets allocated for AI initiatives across business units. ROI for AI solutions is systematically measured, informing fiscal planning processes.

To move forward to the Transformation Phase: Align the AI strategy with other organizational roadmaps. Identify opportunities for coordinating AI efforts across functions to maximize impact.

1.5 Transformation Phase

AI becomes seamlessly integrated into the organization's overall strategy, with unified budgeting schemes and indicators encompassing both business and AI technology domains. This integration enables organizations to swiftly identify and capitalize on new AI-driven operational improvements and business models. Armed with extensive experience, the organization can envision significant innovations in its work, products, and services over longer time horizons.

To sustain progress: Maintain momentum to perpetuate innovation and transformation efforts.

Data

Data is the lifeblood of AI maturity, providing the foundation for training AI models. Simply put, without data, there can be no AI. However, determining the optimal quantity of data is not a one-size-fits-all endeavor. Different AI methodologies necessitate varying types and volumes of data. For instance, simulation-based modeling can commence with modest datasets, while synthetic data can supplement smaller pools of data. Consequently, the organization's data landscape should inform the development of the AI roadmap, with specific data requirements dictated by the needs of AI solutions, rather than vice versa.

Typically, these data prerequisites encompass factors such as cleanliness, comprehensiveness, labeling (for supervised machine learning methods), integration, security, and mitigation of bias. These criteria span the entire AI lifecycle, encompassing training, testing, maintenance, and retraining in operational settings, necessitating collaboration between technical and business stakeholders. For instance, business users responsible for data-generating systems must grasp the downstream implications of altering system usage patterns over time.

Presently, the primary hurdle for many organizations lies not in the scarcity of data but rather in the accessibility and utility of data pertinent to their desired AI implementations. Progressing through the Experimenting stage, a majority encounter challenges in data collection and cleaning, often resorting to ad hoc methods. As organizations transition to the Formalizing stage, a slightly smaller majority leverage insights from previous experiences to establish specialized practices and infrastructure supporting multiple AI initiatives.

When readying data for AI implementation within an organization, take into account the following considerations:

  • Volume: Assess whether the available data volume aligns with the requirements of the AI techniques outlined in the AI roadmap.
  • Representativeness: Ensure that the data encompasses a diverse range of scenarios reflective of those encountered by the use cases identified in the roadmap.
  • Quality: Verify that the data is well-structured, devoid of gaps and inaccuracies, to uphold its reliability and efficacy.
  • Labeling: If employing supervised learning methods, confirm that the data is appropriately labeled, facilitating the AI models' comprehension of examples.
  • Accessibility: Guarantee that the data is easily accessible for utilization in both development and production environments, facilitating seamless integration into AI systems.
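As a rough illustration of three of these checks (volume, quality, and labeling), a pre-flight validation over a dataset might look like the sketch below. The record structure, field names, and row threshold are assumptions for illustration, not a prescribed schema:

```python
# Hypothetical data-readiness checks for an AI use case.
# Records are assumed to be dicts; fields and thresholds are illustrative.

def readiness_report(records, required_fields, label_field, min_rows=1000):
    """Return a list of human-readable readiness issues (empty if none)."""
    issues = []
    if len(records) < min_rows:  # Volume
        issues.append(f"only {len(records)} rows; need at least {min_rows}")
    for field in required_fields:  # Quality: completeness per field
        missing = sum(1 for r in records if r.get(field) in (None, ""))
        if missing:
            issues.append(f"{field}: {missing} missing value(s)")
    unlabeled = sum(1 for r in records if r.get(label_field) is None)  # Labeling
    if unlabeled:
        issues.append(f"{unlabeled} row(s) lack a '{label_field}' label")
    return issues

sample = [
    {"claim_id": 1, "amount": 120.0, "approved": 1},
    {"claim_id": 2, "amount": None, "approved": None},
]
issues = readiness_report(sample, ["claim_id", "amount"], "approved", min_rows=2)
for issue in issues:
    print(issue)
```

Representativeness and accessibility are harder to automate and typically require review by domain experts and data owners rather than a script.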

2.1 Exploring

In the Exploring phase, organizations face three primary challenges regarding data utilization for AI. Firstly, internal datasets lack visibility, making it challenging to identify and access relevant data. Secondly, interpreting the data often requires specialized expertise, posing a barrier to its effective utilization. Thirdly, there is a lack of standard infrastructure or processes for seamless data access. Typically, structured data resides in various departmental silos, hindering comprehensive analysis. Additionally, the organization struggles to define effective data requirements for AI and lacks clarity on available unstructured data sources.

To progress to the Experimenting phase: Gain insights into data requirements for different AI techniques. Identify unique organizational elements captured in data to inform the strategic AI roadmap.

2.2 Experimenting

During this phase, teams begin assembling usable data in accessible formats. Efforts may be underway to establish common data stores or data lakes, albeit with limited connectivity and periodic data refreshes. Specialized tools for data preparation, such as labeling, play a crucial role in readying data for AI models.

To advance to the Formalizing phase: Utilize initial AI experiments to advocate for breaking down data silos and consolidating data.

2.3 Formalizing

In the Formalizing phase, the organization possesses a core set of accessible data suitable for building AI solutions. Targeted data collection based on the strategic AI roadmap is prioritized over a generic data-gathering approach. Data enablement emerges as a strategic priority, unlocking resources for infrastructure development or data acquisition. The organization can effectively measure data quality for specific AI techniques and use cases.

To proceed to the Optimizing phase: Continue dismantling data silos with AI use cases in mind. Establish metrics, processes, and technologies for managing data quality for AI.

2.4 Optimizing

In the Optimizing phase, organizations have extensive, up-to-date data to develop complex AI solutions. Most strategic systems are integrated into a common data platform, facilitating synchronized information flow. The data platform enjoys wide adoption within the organization, with real-time access for priority use cases. Data cleaning and preparation align with quality metrics tied to the AI roadmap.

To transition to the Transforming phase: Further automate data aggregation and accessibility. Identify technologies, processes, or partnerships to acquire new data.

2.5 Transforming

In the Transforming phase, the data platform becomes integral to core business functions. Automated infrastructure and tools streamline data consolidation, enabling teams to ingest new datasets effortlessly. Both internal and external datasets receive high visibility and are well-documented. Strategic investments ensure a self-service data access process, supported by automated health monitoring of the central data repository.

To sustain progress: Explore new AI techniques to maximize existing data potential. Continuously seek new sources of actionable data beyond existing systems.

Use Case: An insurance company sought to strengthen its AI capabilities in order to enhance the value of its products in an evolving digital landscape. Amidst a significant IT infrastructure overhaul and a plan to double its data science team, the company embarked on a collaborative effort across departments to craft an AI Data Strategy. This strategy aimed to establish the groundwork necessary to leverage diverse data sources in accordance with industry best practices, ensuring scalability, expedited implementation, and optimized AI value realization. Through this collaborative endeavor, the company fostered buy-in and alignment among business and technical stakeholders, facilitating swift progress. Some recommendations from the strategy were implemented within weeks of its formulation.

Technology

Tools, infrastructure, and workflows are pivotal for driving AI initiatives across the solution lifecycle. Technology for AI maturity encompasses the necessary resources to facilitate every phase, from initial development and testing to deployment, ongoing monitoring, and retraining. Regardless of whether AI solutions are acquired externally or developed internally, they adhere to this standardized lifecycle. Leaders must grasp how technology underpins each stage of this process and acknowledge the inherent trade-offs as the organization progresses. For instance, an infrastructure supporting a single AI model in production may prove inadequate for scaling to multiple models cost-effectively.

The foremost areas of technological evolution for most organizations revolve around development tools and computing hardware. Emerging development tools encompass AI frameworks like TensorFlow and PyTorch, alongside software categories such as DevOps, MLOps, and AIOps. These tools facilitate closer collaboration between engineering and infrastructure management, a trend accentuated by the iterative nature of AI model training. Additionally, new computing infrastructure, such as purpose-built AI chips or GPUs, leverages architectures optimized for AI algorithms, surpassing traditional processors' capabilities.

Presently, while commencing AI experiments on personal computers is increasingly accessible, 45% of organizations in the Experimenting stage have already deployed dedicated servers for AI solutions. Some have initiated AI applications to forecast variations in server workload, enabling automatic resource scaling. However, at the Formalizing stage, only around a third (35%) monitor AI models for governance concerns like concept drift, and merely 13% have established protocols for retraining and updating AI models in production. These metrics substantially improve at higher stages.

When crafting technology for AI, it's imperative to consider various factors:

  • Requirements: Assess current needs and anticipate how they will evolve over time.
  • Flexibility: Ensure tools can seamlessly integrate with diverse data types, support various modeling approaches, and accommodate different AI frameworks.
  • Scale: Assess the scalability of technology across different production scenarios.
  • Policies: Establish requisite policies to govern the functioning and success of the technology.

Unlike traditional rule-based software, which operates on predefined instructions, modern AI solutions are driven by goals or objectives that guide machine learning processes. Consequently, AI solutions necessitate iterative training and testing during development, alongside continuous monitoring and retraining in production. As the business landscape evolves, machine learning models may experience performance degradation if left unattended. Addressing this challenge entails designing models capable of adapting to new data continuously, albeit at the cost of increased complexity in AI governance. This includes implementing techniques for monitoring AI models in operational environments.
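One simple proxy for the production monitoring described above is to track model accuracy over a rolling window of recent predictions and flag the model once it falls below an agreed floor. The class below is a minimal sketch under that assumption; real MLOps setups typically monitor input distributions and business metrics as well:

```python
from collections import deque

# Minimal sketch of a production monitor: flags a deployed model when its
# rolling accuracy degrades, a simple stand-in for concept-drift detection.
# The window size and accuracy floor are illustrative choices.

class DriftMonitor:
    def __init__(self, window=100, min_accuracy=0.9):
        self.outcomes = deque(maxlen=window)  # True where prediction matched actual
        self.min_accuracy = min_accuracy

    def record(self, prediction, actual):
        self.outcomes.append(prediction == actual)

    @property
    def accuracy(self):
        return sum(self.outcomes) / len(self.outcomes) if self.outcomes else 1.0

    def drifting(self):
        """True once the window is full and accuracy is below the floor."""
        return (len(self.outcomes) == self.outcomes.maxlen
                and self.accuracy < self.min_accuracy)

monitor = DriftMonitor(window=10, min_accuracy=0.8)
for pred, actual in [(1, 1)] * 7 + [(1, 0)] * 3:  # 70% of predictions correct
    monitor.record(pred, actual)
print(monitor.drifting())  # True: accuracy 0.7 is below the 0.8 floor
```

In practice the `drifting()` signal would feed an alerting or retraining workflow rather than a print statement.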

3.1 Exploring

At this stage, organizations typically lack specialized AI or machine learning solutions, even if they have invested in related technologies like DevOps, robotic process automation (RPA), or advanced analytics. Business leaders are uncertain about the requirements, and any initial experiments are usually conducted on personal computers or cloud-based environments.

To progress to Experimenting: Determine the necessary technology for conducting initial AI experiments, starting with personal computers and cloud development environments.

3.2 Experimenting

Data scientists and developers begin utilizing cloud infrastructure to collaborate on know-how and results, harnessing GPU power beyond their laptops. Cloud-based or on-premise servers may be provisioned, but AI model training still occurs manually without automated resource management. DevOps teams, if present, are likely unfamiliar with deploying AI models, and there's a lack of standard processes or deployment architectures.

To advance to Formalizing: Formalize deployment architectures and explore automation opportunities.

3.3 Formalizing

Technical controls are established to facilitate human oversight and incorporate explainability features outlined by AI governance practices for production deployment. Standardized AI deployment architecture and development tools are implemented, with automated management of computing resources. As AI development and deployment processes become more standardized, departments experiment with more complex AI solution designs, including reusing AI models across different business units.

To move toward Optimizing: Continue refining development tools and managing computing resources.

3.4 Optimizing

With an increasing number of deployed AI models, organizations invest in new infrastructure to streamline AI development, deployment, and management. This includes retraining models on new data and centralizing tasks like monitoring, auditing, and performance management. Challenges are addressed through centralization and support for code reuse.

To progress toward Transforming: Invest in a centralized platform for tracking, deploying, and retraining AI models.

3.5 Transforming

AI deployment architecture becomes standardized and efficient, aligning with the organization's strategic objectives. New use cases drive technological advancements, such as scaling to new locations or personalizing AI models for individual customers. This necessitates leveraging AI to manage technology infrastructure automatically, optimizing resource provisioning and enabling innovative use cases.

To sustain progress: Define novel use cases to expand technological boundaries.

People

The Human dimension of AI maturity revolves around aligning leadership and change management to ensure individuals are prepared, motivated, and equipped to utilize AI effectively. Even the most advanced AI solutions will falter if individuals are not organized and incentivized to engage with them. Thus, it falls upon executive leaders to empower both business and technical teams to implement and utilize AI successfully.

To effectively lead teams in the realm of AI, leaders must facilitate the convergence of expertise, enabling them to develop optimal visions, roadmaps, and daily operational decisions concerning AI. This necessitates guiding individuals at all levels through a series of mindset shifts: transitioning from constructing rule-based systems with established development processes to embracing learning systems that demand iterative refinement and continual attention over time; and from executing tasks independently to collaborating with AI systems that actively participate in the workflow. Moreover, leaders themselves must possess a comprehensive understanding of AI's implications for their business, enabling them to provide adept guidance and make decisive decisions as necessary.

For individuals to adeptly construct and interact with AI solutions, they require comprehensive training, on-the-job support, and meaningful involvement in the design and deployment phases. Training initiatives should encompass both business and technical facets of AI, empowering employees to comprehend and contribute to the organization's distinct AI vision. Job support entails regular communication of the AI roadmap and assistance with skill enhancement and adaptation as needed. Engaging users in the design and deployment processes of AI solutions fosters trust and ensures that solutions leverage the most pertinent information at each decision point. Organization-wide, this entails distinguishing between AI myths and realities rather than universally upskilling all employees on AI algorithms.

A recurring concern for business leaders is the potential impact of AI on jobs. While the exact implications remain uncertain, research indicates that the choices made by leaders play a pivotal role. For instance, AI solutions can facilitate automation as well as collaboration between humans and machines. Even when employed for automation, AI need not replace workers but can instead complement and potentially bolster demand for their skills. This underscores the opportunity for leaders to identify AI applications that align with the unique people, culture, and values of their organization.

When preparing individuals for AI integration, consider the following factors:

  • Leadership Persona: Assess the individual spearheading the initiative to enable or scale AI. Is the appropriate leader in place, equipped with the necessary knowledge to make informed decisions?
  • AI Literacy: Beyond technical proficiency, evaluate individuals' capacity to actively learn and adapt to AI technologies. Are they equipped with the aptitude to comprehend and navigate AI solutions effectively?
  • Job Skills and Resources: Identify the personnel and organizational segments necessitating reskilling or upskilling to accommodate evolving demands and roles. Additionally, determine the supplementary resources employees will require upon engaging with AI.
  • Talent Strategy: Determine the requisite new talent and the criteria for acquisition. Consider the need for partnerships or external assistance to fulfill talent requirements effectively.
  • Operating Model: Define the framework for managing AI resources, projects, and solutions throughout the AI roadmap. Consider whether management of AI should be centralized or decentralized to optimize operational efficiency and efficacy.
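The checklist above can be turned into a lightweight self-assessment. A minimal sketch, assuming a hypothetical 1–5 score per People factor and a "weakest factor gates the stage" rule (the factor names mirror the list; the scoring rule is an illustration, not part of the framework):

```python
# Illustrative self-assessment for the People dimension.
# The 1-5 scale and the "weakest factor sets the stage" rule are
# assumptions made for this sketch.

PEOPLE_FACTORS = [
    "leadership_persona",
    "ai_literacy",
    "job_skills_and_resources",
    "talent_strategy",
    "operating_model",
]

STAGES = {1: "Exploring", 2: "Experimenting", 3: "Formalizing",
          4: "Optimizing", 5: "Transforming"}

def people_maturity(scores: dict) -> str:
    """Map per-factor scores (1-5) to a maturity stage, gated by the weakest factor."""
    missing = [f for f in PEOPLE_FACTORS if f not in scores]
    if missing:
        raise ValueError(f"missing scores for: {missing}")
    return STAGES[min(scores[f] for f in PEOPLE_FACTORS)]

scores = {"leadership_persona": 3, "ai_literacy": 2,
          "job_skills_and_resources": 3, "talent_strategy": 4,
          "operating_model": 2}
print(people_maturity(scores))  # the weakest factors (score 2) gate the stage
```

Gating on the minimum reflects the article's point that strength in one factor cannot compensate for neglect of another.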

4.1 Exploring

Currently, the organization lacks defined roles and responsibilities for AI implementation and struggles with understanding how to establish them. Business teams require assistance in absorbing pertinent insights from technical literature to formulate viable AI use cases. Additionally, data science teams seek collaboration from business partners to align AI techniques with significant business challenges and comprehend technical aspects of AI methodologies.

To progress to Experimenting: Enhance AI literacy among both business and technical teams to foster confidence and support. Facilitate knowledge exchange among teams to ensure widespread accessibility to AI. Engage AI specialists to swiftly identify and address knowledge gaps.

4.2 Experimenting

While some roles and responsibilities related to AI exist, the organization is still in the experimental phase to determine the most effective organizational structure for AI integration. Typically, small teams comprising internal experts in data science, business intelligence, or advanced analytics initiate experiments with Proofs of Concept (POCs). However, these teams should not operate in isolation; instead, POCs should inform the organization about additional AI literacy requirements. Leaders should communicate the AI vision and roadmap to employees and involve individuals from diverse levels and functions in defining and conducting AI experiments.

To advance to Formalizing: Form cross-functional, adaptable, networked teams to spearhead AI experimentation. Organize educational activities for AI, such as workshops, hackathons, or temporary assignments. Identify AI career paths and their implications for workforce planning.

4.3 Formalizing

At this stage, new AI roles, like machine learning engineer, have surfaced and are being delineated at the enterprise level. While performance metrics are established, they are not yet integrated into formal performance management processes. Typically, organic Communities of Influence (CoIs) or a dedicated Center of Excellence (CoE) have been established to furnish skills and resources for new roles, offer guidance on acquiring external talent, and provide education for other organizational members. Business leaders play a pivotal role in communicating the AI vision and motivating and educating employees to align with it.

To progress to Optimizing: Define AI responsibilities for executive leadership, team roles, structure, and budgets to execute the AI roadmap effectively. Revise rewards, recognition, and performance standards to attract and retain AI talent. Foster Communities of Influence (CoIs) or a Center of Excellence (CoE) to engage individuals beyond the formal AI organization.

4.4 Optimizing

Use case: Overcoming obstacles to foster collaboration between business and technical teams on AI product development

In a financial institution, a team of machine learning engineers hit a barrier: although they had analyzed financial data and begun coding experiments, they struggled to collaborate with the financial analysts. After fielding ad hoc questions about how they used data in their daily tasks, the analysts hesitated to provide further input and escalated their concerns to management, stalling progress. On investigation, it became evident that the analysts had misconstrued the purpose of the AI-based tools, fearing their jobs would be automated away. In reality, the engineers aimed to help analysts overcome challenges, not replace their roles. Once leaders addressed these concerns and facilitated collaboration between the business and technical teams, the project regained momentum.

At this stage, organizations have clearly outlined responsibilities and Key Performance Indicators (KPIs) for new AI-related roles. The talent strategy supports all employees in their learning journey to enhance AI literacy and adapt to evolving work structures. Furthermore, plans are in place to develop specific AI competencies and reskill or transition existing staff as needed. Leaders actively support organizational adaptation, and structures like CoIs or a CoE are formalized to manage the organization's broader AI ecosystem relationships.

To advance to Transforming: Ensure representation of the AI organization at the executive level, holding accountability for Enterprise KPIs related to AI. Establish sustainable learning pathways for individuals responsible for implementing and utilizing AI.

4.5 Transforming

At this stage, all teams and employees possess advanced AI literacy and foster a culture of collaboration with AI systems. AI integration spans across all roles, including executive positions, likely aiding HR and talent teams in planning and operations. Consequently, the organization's delivery model undergoes transformation, reshaping role definitions and work expectations.

To continue progressing: Communicate self-directed career paths for AI to guide professional development across various AI expertise domains. Empower HR and talent teams to leverage AI as a tool for business transformation.

Governance

AI governance encompasses more than just risk management; however, business leaders should be aware of distinct new risks associated with AI systems. These risks stem from the unique characteristics of AI, where systems are not configured with step-by-step instructions but rather by setting goals that guide a process of machine learning. Failure in this process can occur if machines learn from biased or incomplete data, resulting in errors in real-world applications. Additionally, AI models may be effectively trained but not updated to adapt to changing real-world conditions.
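The drift risk described above is routinely caught through monitoring. A minimal sketch, assuming a simple mean-shift check of a feature's live distribution against its training baseline (production systems would typically use richer statistics, such as PSI or Kolmogorov-Smirnov tests; all values here are illustrative):

```python
import statistics

def drift_alert(baseline, live, threshold=2.0):
    """Flag drift when the live mean moves more than `threshold`
    baseline standard deviations away from the training-time mean."""
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    shift = abs(statistics.mean(live) - mu) / sigma
    return shift > threshold

# Training-time feature values vs. values later seen in production.
baseline = [10.0, 10.5, 9.8, 10.2, 9.9, 10.1, 10.3, 9.7]
stable   = [10.0, 10.4, 9.9, 10.1]
drifted  = [13.0, 13.5, 12.8, 13.2]   # real-world conditions changed

print(drift_alert(baseline, stable))   # False: model still matches reality
print(drift_alert(baseline, drifted))  # True: retraining is warranted
```

A check like this, run on a schedule, turns "the model was not updated to match changing conditions" from a silent failure into an actionable alert.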

To ensure responsible and safe AI, trust is paramount, forming the foundation of every organizational interaction. Governance for AI maturity entails establishing policies, processes, and relevant technology components to ensure AI solutions are safe, reliable, accountable, and trustworthy.

To achieve these qualities, collaboration between business, technical, and risk teams is essential. Practices should be interconnected from the design of AI solutions to the development of policies, process controls, and supporting technologies. For instance, ensuring algorithmic decisions are traceable back to the data and models is crucial for debugging, compliance, and continuous improvement.
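Traceability of this kind amounts to recording, with every prediction, enough metadata to reconstruct what produced it. A minimal sketch, assuming a hypothetical audit log keyed by a hash of the input record and a model version tag (the field names and the "credit-risk-v1.4" identifier are illustrative):

```python
import hashlib
import json
from datetime import datetime, timezone

def log_decision(audit_log, model_version, features, prediction):
    """Append an audit record linking a prediction to its inputs and model.

    The input hash lets reviewers later verify exactly which record was
    scored, without storing sensitive raw data in the log itself.
    """
    payload = json.dumps(features, sort_keys=True)
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "input_hash": hashlib.sha256(payload.encode()).hexdigest(),
        "prediction": prediction,
    }
    audit_log.append(record)
    return record

audit_log = []
rec = log_decision(audit_log, "credit-risk-v1.4",
                   {"income": 52000, "tenure_months": 18}, "approve")
print(rec["model_version"], rec["input_hash"][:12])
```

With the model version and input hash captured per decision, debugging, compliance reviews, and retraining comparisons can all start from the same trail.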

Currently, organizations exhibit less maturity in AI governance compared to other dimensions, with a significant gap between the most and least mature organizations. While many respondents lack awareness or are just beginning to explore governance efforts, organizations at higher maturity levels prioritize governance beyond regulatory requirements, considering it a competitive differentiator.

When developing AI governance strategies, organizations should consider various factors:

  • Risk: Identify potential risks at each stage of AI solution maturity as it scales in data, users, and impact.
  • Regulation: Understand and comply with relevant regulations in each country and jurisdiction of operation.
  • Safety: Ensure AI solutions prioritize and protect personal safety.
  • Explainability: Demonstrate the rationale behind AI predictions or decisions to enhance transparency and accountability.

5.1 Exploring

Board members, management teams, and employees are initiating their understanding of responsible AI to grasp the emerging risks, obligations, and opportunities. Collaborative efforts in crafting a strategic roadmap enable the identification of major risks associated with priority use cases.

To progress to Experimenting: Gain insight into new AI risks like model bias and drift. Identify specific risks along the AI roadmap and consider additional governance practices beyond existing ones. Begin formulating overarching principles to guide responsible AI adoption.

5.2 Experimenting

Business, technical, and risk teams share a comprehensive understanding of the legal obligations concerning AI compliance throughout the solution lifecycle. The organization is crafting high-level principles to guide AI usage beyond minimal legal requirements. To foster trust, internal stakeholders involved or affected by AI systems play a role in testing and refining system designs. Explorations into techniques like explainable AI (XAI) aim to enhance trustworthiness as complex models transition to production.

To move forward to Formalizing: Explore current discussions on AI ethics and Fairness, Accountability, and Transparency (FAccT). Engage diverse stakeholders to comprehensively address reliability, safety, trustworthiness, and accountability. Translate principles into defined role responsibilities, processes, and measurable metrics.

5.3 Formalizing

Guiding principles for AI governance are integrated into daily practices, tracking specific performance metrics in safety, reliability, trustworthiness, and accountability. Reporting is centralized, granting key stakeholders access to pertinent data. Typically, a dedicated model evaluation function exists separately, akin to a QA team. AI governance is formalized as a critical component of the overall strategy, with reliability and trustworthiness practices integrated into the standard development cycle. External perspectives on AI ethics are appropriately incorporated into discussions.

To move forward to Optimizing: Synthesize existing practices into generalizable guidance for broader use cases. Explore supporting technologies for governance, such as reporting tools.

5.4 Optimizing

With the increasing deployment of AI models in production, responsible AI practices evolve to manage complex interactions and scrutiny from stakeholders and regulators. Standardized guidance enforces centralized and auditable processes, policies, and technologies. Risk considerations extend to the model and portfolio levels, reflecting a sophisticated understanding of dependencies and feedback loops between people, AI applications, and the business environment.

To move forward to Transforming: Establish organizational structures to bolster the strength and scalability of AI ethics and governance across multiple domains, such as an ethics board.

5.5 Transforming

Robust governance positions the organization beyond regulatory compliance, offering a competitive edge in applying AI effectively. Multiple lines of risk defense and stakeholder trust become assets, even as novel challenges related to AI ethics or Fairness, Accountability, and Transparency emerge. The organization may invest formally in capabilities to foster multi-stakeholder consensus on navigating these challenges, disseminating the technologies and approaches it develops.

To continue progressing: Engage with the broader AI ecosystem to shape AI governance standards and best practices industry-wide.

Use case: Unlocking the AI "black box" for enhanced trust and auditability

A manufacturer developed a sophisticated AI model for anomaly detection with the goal of identifying all defects before shipping products to customers. The new model significantly improved defect detection capabilities at plants. However, the data science teams had deployed the model in a "black box" manner, meaning it couldn't transparently explain to quality inspectors at the plants why certain parts were rejected while others were accepted. This lack of interpretability caused resistance to the model's implementation among inspectors. Although data scientists had validated the model's accuracy to meet risk and compliance requirements, additional investment in AI governance was necessary to instill trust and accountability among stakeholders. To progress to production, the manufacturer is implementing AI explainability techniques to incorporate new features that visually elucidate the rationale behind the anomaly detection model's decisions.
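One widely used model-agnostic explainability technique the manufacturer could start from is permutation importance: shuffle one input feature at a time and measure how much the model's accuracy drops. A minimal pure-Python sketch, with a stand-in rule-based "model" in place of the real anomaly detector (the data, labels, and model here are all illustrative):

```python
import random

def model_predict(row):
    # Stand-in for the black-box anomaly detector: flags a part as
    # defective when vibration is high; temperature is ignored.
    vibration, temperature = row
    return 1 if vibration > 0.5 else 0

# (vibration, temperature) readings with true defect labels.
data = [(0.9, 20), (0.8, 25), (0.2, 22), (0.1, 30),
        (0.7, 21), (0.3, 28), (0.6, 24), (0.4, 26)]
labels = [1, 1, 0, 0, 1, 0, 1, 0]

def accuracy(rows):
    return sum(model_predict(r) == y for r, y in zip(rows, labels)) / len(labels)

def permutation_importance(feature_idx, trials=200, seed=0):
    """Average accuracy drop when one feature's column is shuffled."""
    rng = random.Random(seed)
    base = accuracy(data)
    drops = []
    for _ in range(trials):
        column = [row[feature_idx] for row in data]
        rng.shuffle(column)
        shuffled = [tuple(column[j] if i == feature_idx else v
                          for i, v in enumerate(row))
                    for j, row in enumerate(data)]
        drops.append(base - accuracy(shuffled))
    return sum(drops) / trials

print("vibration importance:  ", round(permutation_importance(0), 3))
print("temperature importance:", round(permutation_importance(1), 3))  # ~0.0
```

Shuffling vibration destroys accuracy while shuffling temperature changes nothing, so inspectors can see which signal actually drives rejections, without the detector's internals being exposed.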

Snapshot of the 5 maturity levels across the 5 dimensions

Maturity Levels for the dimension: Strategy & Data
Maturity Levels for the dimension: Technology & People
Maturity Levels for the dimension: Governance

To sum up:

  • Implementing AI into operations is a complex endeavor. Many organizations stumble by either underestimating the challenges in Strategy, Data, Technology, People, or Governance, or by overemphasizing one dimension at the expense of others. Both approaches can impede progress, potentially hindering the ability to effectively compete with AI in the long run.
  • The key is to initiate the journey, starting with one use case at a time, and maintaining momentum until operations can scale, exploring new products, services, and business models for transformative impact.
  • Follow the systematic pathway through the five maturity stages, assessing each of the five dimensions along the way.
  • Consider these 3 cases: (1) A financial institution successfully identified AI opportunities amidst a sea of analytics and automation projects, unblocking progress through strategic alignment. (2) A manufacturing company utilized explainable AI techniques to garner user trust and support, facilitating the deployment of AI models into production through robust governance practices. (3) An insurance company formulated a comprehensive data strategy, enabling the scalable integration of AI across various facets of their business operations, facilitated by robust data management practices.
  • In each case, advancements in less mature dimensions paved the way for leveraging strengths in other areas, propelling AI initiatives forward. Identifying the areas in need of improvement is half the battle.
  • Before adopting AI or analytics as part of DX, ensure the two most important dimensions, Data and Technology, have reached a reasonable maturity level (Level 3, Formalizing, or above). If they have not, put in place a preparatory program that addresses them first; the other three dimensions can be improved as the journey unfolds.
  • Likewise, the AI Maturity Framework can assist any organization, regardless of industry, in pinpointing the dimensions hindering their progress and devising actionable strategies to move forward.
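The Data-and-Technology gate described in the summary can be expressed as a simple readiness check. A minimal sketch, assuming dimension scores on the framework's 1–5 scale (the score values are illustrative):

```python
# Readiness gate: Data and Technology must reach Level 3 (Formalizing)
# before adoption; the other dimensions can mature during the journey.
GATED = {"data": 3, "technology": 3}

def adoption_check(scores: dict) -> dict:
    """Return the gaps a preparatory program must close before adoption."""
    return {dim: need - scores.get(dim, 1)
            for dim, need in GATED.items()
            if scores.get(dim, 1) < need}

scores = {"strategy": 2, "data": 2, "technology": 3,
          "people": 2, "governance": 1}
gaps = adoption_check(scores)
if gaps:
    print("Preparatory program needed:", gaps)  # {'data': 1}
else:
    print("Data and Technology are mature enough to proceed.")
```

Here Technology already meets the bar while Data is one level short, so the preparatory program would focus on the Data dimension before scaling AI adoption.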



