BigData to BigAI: What This Transition Means for Industry Cloud Platforms

In an article I published a couple of weeks ago, I introduced the topic of Industry Cloud Platforms (ICPs). In a nutshell, an ICP is a cloud service or solution tailored to a specific industry. This progressive development has significantly shaped how cloud computing transforms the use of technology across industries. Back to the topic at hand: in this series, the focus is on how the transition from Big Data to Big AI is affecting ICPs.

According to Hirenkumar (2024), cloud computing, and subsequently ICPs, have been major contributors to most of the technological advancements of the past few years, largely because they removed the barrier of upfront infrastructure costs that once stood in the way of contributing to the field. This revolution has been fundamental to some of the major achievements in Big Data and AI. With Big Data, ICPs pushed innovation to limits we never thought possible. As we moved from limited data ecosystems to rich data ecosystems, our understanding of systems deepened. When directed toward industry-specific solutions in healthcare, finance, and retail, these insights led to an improved understanding of operational efficiencies, customer behaviour, and market trends. In healthcare, for instance, Big Data platforms now analyse patient data to enhance treatment plans, augmenting the role of healthcare professionals. Big AI has taken these capabilities even further, with advancements occurring almost daily.

The Big AI effect on Industry Cloud Platforms

To better frame this topic, I have organised the discussion around three major industries: healthcare, finance, and manufacturing.

Healthcare

Let’s focus on the impact of Big AI on medical research.

Medical Research

We have recently seen great advancements in healthcare thanks to AI. One of the most notable is the AlphaFold database, developed by Google DeepMind and EMBL's European Bioinformatics Institute. AlphaFold is an AI system built to predict the 3D structure of a protein from its amino acid sequence. The science behind this groundbreaking technology won Demis Hassabis and John Jumper of Google DeepMind, alongside David Baker of the University of Washington, the 2024 Nobel Prize in Chemistry. The solution cracked a 50-year problem in biology, greatly advancing the field of medical research.

To put this into perspective, the traditional methods for determining protein structures were X-ray crystallography, nuclear magnetic resonance spectroscopy, and cryo-electron microscopy. With these methods, scientists took weeks, months, and sometimes years to work out the structure of a single protein. With AlphaFold, the same task is done in minutes, or at most hours.

And interestingly, in line with Google's free-access model, AlphaFold is freely available to the scientific community via a number of cloud options. You can run the AlphaFold models in Google Colab, the Google Cloud Console, or even on Amazon Web Services.
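Beyond running the models yourself, the AlphaFold Protein Structure Database also exposes precomputed predictions over a public REST API. The sketch below shows the general shape of such a lookup; the endpoint path, response fields, and the UniProt accession used are illustrative assumptions, so check the AlphaFold DB documentation before relying on them. (A real call would use `urllib` or `requests`; here a mocked response keeps the example self-contained.)

```python
# Sketch: looking up a precomputed AlphaFold prediction by UniProt accession.
# Endpoint shape and response fields are illustrative, not guaranteed.

def prediction_url(uniprot_accession: str) -> str:
    """Build the AlphaFold DB prediction endpoint for a UniProt accession."""
    return f"https://alphafold.ebi.ac.uk/api/prediction/{uniprot_accession}"

def extract_pdb_link(api_response: list) -> str:
    """Pull the PDB-file download link out of a prediction response entry."""
    return api_response[0]["pdbUrl"]

# Mocked response standing in for a real HTTP call:
mock_response = [{
    "entryId": "AF-P69905-F1",
    "pdbUrl": "https://alphafold.ebi.ac.uk/files/AF-P69905-F1-model_v4.pdb",
}]

print(prediction_url("P69905"))
print(extract_pdb_link(mock_response))
```

The point is that a structure prediction, which once took a wet lab months, is now one HTTP request away.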

Forget the technicalities: what is the actual, quantifiable impact of AlphaFold on science?

  1. MIT researchers have already built an antibiotic called abaucin, a highly effective drug against Acinetobacter baumannii, a drug-resistant bacterium common in hospitals and responsible for diseases like pneumonia and meningitis.
  2. We can now have personalized medicine. By quickly predicting the structure of a protein, physicians can understand the structure of a patient’s specific protein and build a solution just for that. This particular use case will come in handy in cases of disease mutation.
  3. We can now design and build custom proteins for therapeutics. Yes, you read that right. David Baker's contribution to computational protein design can be used to build entirely new proteins that do not exist in nature. Amazing! According to the Nobel Prize organisation, this could be the future of novel protein-based therapies.


Finance

According to Deloitte, data is the single most important driver of the financial services industry today. With the Big Data explosion, customers interact with their financial providers digitally at a larger scale than ever before. Big AI has propelled the finance sector forward in one of its most important areas of concern:

Security and Fraud

According to Mandiant's 2024 report, the financial sector is the most targeted industry, accounting for 17.3% of all cyber attacks. Previously, financial institutions relied on traditional rule-based systems for security and fraud detection. This approach was brittle: an individual with an intricate understanding of these systems could circumvent them simply by operating outside their base rules. Big AI-powered systems, by contrast, evolve and adapt to new forms of fraud as they emerge, mitigating attacks that financial institutions did not even know existed. In the same spirit, these systems can run automated audits across an organisation's entire infrastructure to find weak links, and even fix them where applicable, reporting the actions taken to security personnel. One breach that such a system might have prevented is the 2019 First American Financial Corp. data leak, in which 885 million financial and personal records were exposed, not through a cyber attack but through an internal infrastructure error on the company's website.
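The contrast between a static rule and a model that adapts to observed behaviour can be sketched in a few lines. This is a toy illustration, not any institution's actual fraud system: the hard-coded limit, the z-score threshold, and the sample transaction history are all made up for the example.

```python
# Toy contrast: a fixed rule vs. a detector that adapts to each account's history.
from statistics import mean, stdev

def rule_based_flag(amount: float, limit: float = 10_000.0) -> bool:
    """Static rule: flag any transaction above a hard-coded limit."""
    return amount > limit

def adaptive_flag(amount: float, history: list, k: float = 3.0) -> bool:
    """Flag amounts more than k standard deviations from the account's own history."""
    mu, sigma = mean(history), stdev(history)
    return abs(amount - mu) > k * sigma

history = [120.0, 95.0, 110.0, 130.0, 105.0, 98.0]  # typical spend for this account
print(rule_based_flag(2_500.0))        # under the static limit, so it slips through
print(adaptive_flag(2_500.0, history)) # but wildly anomalous for this account
```

A fraudster who knows the static limit simply keeps transactions under it; the adaptive check has no fixed rule to operate around, which is the property the paragraph above is describing.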

However, one of the biggest concerns with the use of Big AI in the finance sector is AI governance. According to Mapple (2023), this spans issues like transparency, accountability, fairness, interpretability, and trustworthiness, raising critical questions about data privacy and security. It is important to note that AI governance is something everyone is grappling with, because we never had governance structures in place in anticipation of the AI boom. Nevertheless, a number of experts believe it is impossible to apply traditional data governance methodology to AI, and thus propose using AI to govern AI. In a recent Tucker Carlson interview, Elon Musk stated that it would be impossible to fully govern AI, but that this does not negate the need for AI governance bodies; in his own words, "something above nothing." He argues that the only way to govern AI is by instilling moral and humanistic values during the training and development of these models. This is an interesting and broad conversation that deserves a dedicated issue; we will return to it in the future.


Manufacturing

The manufacturing sector has made the most of Big AI in recent years. It is a field that relies heavily on machines and devices, and so collects a tremendous amount of data; and as a rule of thumb, wherever there is data, machine learning and AI can thrive. One AI use case that has particularly propelled the manufacturing sector is predictive maintenance.

Predictive Maintenance

Predictive maintenance has become a cornerstone of efficiency and reliability in manufacturing, driven by the convergence of IoT (Internet of Things), AI, and cloud computing. Predictive maintenance leverages real-time data from sensors embedded in machinery, enabling companies to monitor equipment health, anticipate potential failures, and schedule repairs before breakdowns occur. This approach not only minimises unexpected downtime but also extends equipment life, reduces maintenance costs, and enhances safety.

In oilfield operations, predictive maintenance plays a particularly crucial role. Remote oilfields often operate in harsh, inaccessible environments, making regular manual inspections challenging and costly. Through IoT-enabled devices and sensors, oilfield operators now gather machinery data such as temperature, pressure, vibration, and fluid levels. This information is sent to cloud-based systems, where AI models analyse patterns and detect anomalies that could indicate impending equipment failure. For instance, if a pump's vibration levels are consistently above a normal threshold, the AI model might flag it for maintenance before it breaks down, potentially avoiding costly production stoppages and safety hazards.
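The pump example above can be reduced to a minimal sketch: flag the pump only when vibration stays above a threshold for several consecutive samples, so a single noisy reading does not trigger a false alarm. The threshold value, window size, and sample stream are invented for illustration; real systems would learn these from historical data rather than hard-code them.

```python
from collections import deque

# Minimal sketch of the pump-vibration example: flag maintenance when readings
# stay above a threshold for a full window of consecutive samples.
# Threshold and window size are made-up illustrative values.

class VibrationMonitor:
    def __init__(self, threshold_mm_s: float = 7.1, window: int = 5):
        self.threshold = threshold_mm_s     # alarm level, e.g. RMS velocity in mm/s
        self.readings = deque(maxlen=window)  # rolling window of recent samples

    def add_reading(self, rms_velocity: float) -> bool:
        """Record one sensor sample; return True when maintenance should be flagged."""
        self.readings.append(rms_velocity)
        full = len(self.readings) == self.readings.maxlen
        return full and all(r > self.threshold for r in self.readings)

monitor = VibrationMonitor()
samples = [3.2, 4.1, 7.5, 7.8, 8.0, 8.2, 8.5]  # vibration drifting upward
flags = [monitor.add_reading(s) for s in samples]
print(flags)  # only the last reading, with a fully elevated window, trips the flag
```

In a production setting, the "consistently above threshold" rule would itself be replaced by a learned anomaly model, but the rolling-window structure is the same.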

A real-world example is Shell, which implemented predictive maintenance systems across its oil refineries and drilling platforms. Using AI and machine learning, Shell's systems monitor assets, predict equipment failures, and schedule proactive maintenance, saving the company millions in maintenance costs and downtime.

The benefits of predictive maintenance extend beyond oilfields. In sectors like automotive, aerospace, and heavy machinery, manufacturers have adopted AI-based predictive maintenance to optimise their production lines. For example, BMW uses predictive maintenance for assembly line robots, preventing failures that could halt production. By analysing historical data on component wear, usage patterns, and environmental conditions, AI models help BMW and other manufacturers develop maintenance schedules tailored to each piece of equipment, minimising disruptions and ensuring consistent output.


By harnessing Big AI's capabilities, ICPs have pushed the boundaries of what's possible, creating systems that don't just store and process data but actively learn, predict, guide decision-making, and sometimes make decisions outright. From personalised healthcare treatments and fraud prevention to precise predictive maintenance, Big AI is giving industries the power to be more proactive, accurate, and efficient.
