Amaris AWS Big Data Solution: How Managing Complexity Reverses Success Rate to 100%

That's a disaster: according to the International Institute of Business Analysis (IIBA), 85% of big data, analytics, and artificial intelligence projects fail. To drive the nail home, Gartner confirms that big data practices have not improved; the failure rate was already 85% in 2021.

What is going wrong? What does it take to make them successful?

In this article, I share the philosophy and principles of the Amaris Big Data Implementation Framework for AWS and outline the solutions we have developed at Amaris to meet big data implementation challenges.

Succeeding in Your Big Data Project is an Obligation

Because of the moderate 2024 economic growth forecast by the OECD, CIOs will have no choice but to place big data, data governance, and machine learning among their top priorities.

Big data will help streamline business and IT operations, increase customer service quality and revenues, spot innovation opportunities, and drive other actions that boost profits.

Data governance will provide the framework, including the people, processes, tools, policies, and standards, needed to continuously monitor and adjust the relevance of the organization's data assets.

Finally, machine learning will ease the automation of processes as key as data quality assessment, data quality improvement, and the proper distribution of the precious data quality scorecards.

That is the cost to stay in business in 2024!

Lack of Tactical Thinking Paired With Weak Engineering Practice is Why They Fail

Many fellow consultants cite the scarcity of big data and cloud experts; the volume, variety, velocity, and complexity of data; and challenging tools as the causes of failure. Although they're not totally wrong, I think the root causes have more to do with a lack of hindsight, the absence of a structured approach, and dysfunctional IT transformation practices.

Let me share with you, in order of importance, the mistakes I spotted across the dozens of AWS consulting engagements I delivered in 2023 for Amaris Consulting, in industries as varied as automotive, telecom, beverage, energy, freight, and transportation:

  1. Big data projects aren't tackled in project mode: They're seen and managed as sets of scattered, individual tasks coordinated through makeshift practices.
  2. Delivery doesn't follow an incremental approach: Engineering activities are delivered in "big-bang" mode, with no intermediate releases, following a waterfall lifecycle.
  3. Big data projects are improperly planned: Technical tasks are stressed at the expense of equally important activities such as modeling the future big data infrastructure architecture, managing technical risks, and developing adequate implementation and testing scenarios.
  4. Big data architecture design is improvised: The indispensable logical architecture design step, and the logical architecture diagram needed to collaboratively define the infrastructure's scope and architecture, are ignored in favor of makeshift approaches where stakeholders work on their part of the job in silos. This leads to deficient capabilities.

Proponents of the "Don't Plan, Just Do!" perspective would get nervous and protest, "We're talking about technology implementation, not project management. What matters is whether the implemented systems work or not." That's where they fail. First, they're in denial: the 85% of projects not delivering on business expectations is a fact, and those failures have devastating impacts on business operations and CIOs' credibility. Second, and more importantly, what matters is how well you deal with the complexity surrounding not only big data concepts, infrastructure, and technologies but also the cloud as a computing environment.

I am asking: how can you succeed when you have no shared plan and no shared idea of how to deal with so many diverse challenges? Examples include choosing among 250+ AWS services, the myriad configuration options of each AWS service, integration with the multitude of third-party solutions, designing and managing AWS networking (VPCs, subnets, routing, security, latency, edge computing, and more), the complexity of advanced analytics and machine learning algorithms, big data processing velocity, and big data quality and cleansing.
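To make that configuration surface concrete, here is a minimal sketch, in Python with boto3, of standing up just the networking foundation of a big data platform. The region, CIDR blocks, and availability zone are assumptions chosen for the example; a real platform would still need security groups, private subnets, NAT gateways, VPC endpoints, and more.

```python
# Minimal sketch (illustrative values only): even the bare networking
# layer of a big data platform forces explicit decisions about region,
# CIDR plan, AZ placement, and routing.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")  # assumed region

# A dedicated VPC for the platform (assumed CIDR plan).
vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]

# One public subnet for ingestion endpoints (one placement choice of many).
subnet_id = ec2.create_subnet(
    VpcId=vpc_id,
    CidrBlock="10.0.1.0/24",
    AvailabilityZone="eu-west-1a",
)["Subnet"]["SubnetId"]

# Internet connectivity: gateway, attachment, route table, default route.
igw_id = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)

rt_id = ec2.create_route_table(VpcId=vpc_id)["RouteTable"]["RouteTableId"]
ec2.create_route(RouteTableId=rt_id, DestinationCidrBlock="0.0.0.0/0", GatewayId=igw_id)
ec2.associate_route_table(RouteTableId=rt_id, SubnetId=subnet_id)
```

Each of those calls hides further options (tenancy, IPv6, DNS settings, tags), and networking is only one concern among seven building blocks; without a shared plan, every such decision is made ad hoc.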

Let's be clear about it: successful big data projects build on three IT engineering best practices that help meet these challenges. These best practices are:

  • Relying on a Big Data Reference Architecture Model,
  • Leveraging an Implementation Planning Model,
  • Building on an Advanced Execution Plan.

I grouped the combined benefits of these engineering practices into the concept of Implementation Tactical Thinking.

Let's see how it helps!

Implementation Tactical Thinking Turns AWS Big Data Failures Into a 100% Success Rate

Big data Implementation Tactical Thinking for AWS is the belief that, unless a holistic plan is set up to anticipate technological, technical, organizational, and human challenges and to proactively navigate the complexities of both the big data architecture and the AWS computing environment, big data projects are doomed to failure.

The Big Data Implementation Framework by Philippe Abdoulaye

As illustrated, the framework is structured around three pillars: the big data reference architecture for AWS, the big data implementation planning model, and the big data advanced implementation plan.

Let's discuss them, starting in this article with the big data reference architecture for AWS!

Understanding Amaris Big Data Reference Architecture for AWS

The big data reference architecture for AWS is an architectural pattern that I developed based on the big data reference architecture model of the ISO 20547-3 standard. Its primary purpose is to create the conditions for a more effective, easier, and faster design of big data infrastructures. These conditions include the holistic design of the big data infrastructure, the collaborative design approach, and, more importantly, the AWS solution libraries.

Collaborative design reinforces the conditions for fit-for-purpose, accurate, and robust AWS-based big data infrastructure

Collaborative design as we deploy it, using and adapting the work of Axel Feredj, Amaris Agile Offer Manager, is a cloud transformation and implementation practice that mobilizes the stakeholders' collective effort, intelligence, expertise, and skills.

Its primary purpose is to remove through extensive cross-functional collaboration the failure factors of big data implementation projects. It capitalizes on Extreme Programming (XP) and Scrum agile practices to put together relevant stakeholders—data scientists, data engineers, data analysts, DBAs, big data architects, business analysts, security experts—in an effort to encourage commitment, break down silos, facilitate the sharing of concerns, feedback, and solutions, and accelerate problem-solving and conflict resolution.

The role of agile practices, especially those of the Scrum methodology, in the success of big data implementation projects is increasingly important. Two factors explain their importance: flexibility and adaptability on the one hand, and the iterative approach on the other.

The flexibility and adaptability provided by agile practices make it possible to manage frequent changes in business requirements and to integrate feedback more easily. As for the iterative approach, it is particularly well suited to a big data implementation context, where insights and improvements are gained progressively throughout the project life cycle.

Holistic design creates the conditions for fit-for-purpose, accurate, and robust AWS-based big data infrastructure

Holistic design is an approach that considers the organization's entire big data ecosystem to be implemented, including the AWS infrastructure components and their interconnectedness, setting and integration options, the external data sources, and data governance. It is primarily used to leverage the AWS-based big data infrastructure reference architecture and derive from it the AWS services, AWS solutions, associated settings, and third-party solutions needed to implement the expected big data ecosystem.

Holistic design emphasizes the big picture, the entire ecosystem, rather than focusing on isolated pieces. It creates a context where participants stress the optimization of interactions and dependencies among the big data ecosystem's building blocks to improve the reliability of the overall infrastructure. Additional benefits include effective problem-solving, superior user experience, reduced friction, and productive communication across the project lifecycle.
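As a toy illustration of what considering the whole ecosystem can mean in practice, the sketch below models the building blocks of the reference architecture (introduced in the next section) as a dependency graph and flags blocks that nothing feeds. The wiring shown is an assumption made for the example, not a prescribed topology.

```python
# Toy sketch of holistic design: treat the ecosystem as a dependency
# graph among building blocks and look for integration gaps before
# any AWS service is chosen. The edges below are illustrative only.
feeds = {
    "External Big Data Sources": ["Ingestion"],
    "Ingestion": ["Storage"],
    "Storage": ["Processing Pipelines", "Cataloging"],
    "Processing Pipelines": ["Consumption"],
    "Cataloging": ["Consumption", "Data Governance"],
    "Data Governance": [],
    "Consumption": [],
}

def unfed_blocks(graph: dict[str, list[str]]) -> list[str]:
    """Blocks no other block feeds: likely integration gaps, unless
    they are genuine entry points such as external data sources."""
    fed = {dst for dsts in graph.values() for dst in dsts}
    return [blk for blk in graph if blk not in fed]

print(unfed_blocks(feeds))  # -> ['External Big Data Sources']
```

Siloed design tends to produce graphs with several orphaned blocks; holistic design makes those gaps visible while they are still cheap to fix.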

AWS Solution Libraries speed up AWS-based big data infrastructure architecture

AWS Solution Libraries are combinations of AWS services and AWS-based solutions developed to accelerate the implementation of the 7 building blocks suggested by the AWS big data reference architecture. These building blocks are External Big Data Sources, Ingestion, Storage, Processing Pipelines, Consumption, Cataloging, and Data Governance.

The primary objective of the libraries is to eliminate the significant time wasted navigating the 250+ AWS services and their options by giving easy access to the proper AWS big data services and solutions.

The AWS big data solution libraries map each of the big data infrastructure's 7 functions to the AWS service options likely to support their implementation.
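As a simple illustration of that mapping, a minimal version is just a lookup table. The candidate services below are common, publicly documented AWS options named for the example, not the actual contents of the Amaris libraries:

```python
# Illustrative solution-library lookup: each of the 7 building blocks
# maps to candidate AWS services. Candidates are assumptions for this
# sketch; the real libraries also carry settings and best practices.
SOLUTION_LIBRARY: dict[str, list[str]] = {
    "External Big Data Sources": ["AWS DMS", "AWS DataSync", "AWS Transfer Family"],
    "Ingestion": ["Amazon Kinesis Data Streams", "Amazon MSK"],
    "Storage": ["Amazon S3", "Amazon Redshift", "Amazon DynamoDB"],
    "Processing Pipelines": ["AWS Glue", "Amazon EMR", "AWS Lambda"],
    "Consumption": ["Amazon Athena", "Amazon QuickSight", "Amazon SageMaker"],
    "Cataloging": ["AWS Glue Data Catalog"],
    "Data Governance": ["AWS Lake Formation", "Amazon Macie"],
}

def candidates(building_block: str) -> list[str]:
    """Return the pre-identified AWS services for one building block,
    sparing the team a search through 250+ services."""
    return SOLUTION_LIBRARY.get(building_block, [])

print(candidates("Ingestion"))  # -> ['Amazon Kinesis Data Streams', 'Amazon MSK']
```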

The pre-identified AWS services and the pre-defined solutions and best practices they provide contribute to reducing the design time from 4-6 months on average to only 1-2 months.

The Key Takeaways

The key lesson of this article is that complexity management is the central challenge in today's IT. Rapid technological change, the extensive interconnectedness of systems, not to mention the increasing volume of data to process, all contribute to that complexity.

If your business users keep complaining about not getting the expected insights, the inability to make informed decisions, missing KPIs, missed deadlines, insufficient integration, data quality issues, or resistance to change, chances are you are among the 85% Gartner is talking about.

Do not kid yourself: you failed!

The reason for the failure is that you, your teams, or your AWS big data consultants underestimated, and did not manage, the complexity inherent in IT transformation projects.

In the next article, I will share insights about the Amaris Big Data Reference Architecture for AWS.

Happy New Year 2024!

