A.I. in Manufacturing Industries: the Opportunity and the Risk
Andrew Sanderson
B2B Marketing Consultant | Creating better Processes to "get B2B Marketing done" | Shares Tools & Methods, automates Processes
What are the implications of generative Artificial Intelligence for manufacturing? Is it an opportunity or a threat? And if it's an opportunity, how can we adopt it, adapt it and turn it into a strength?
Your company has probably been using AI for a while already: in quality control, predictive maintenance, fault detection and diagnosis systems. These are Discriminative AI systems, based on supervised learning in which the model learns from labelled data. Programmers label the training data that associates input features with output categories (this is where the potential for bias comes in). By learning the mapping, the system can then make predictions about how to categorise new input data.
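As a minimal sketch of that idea (using scikit-learn and invented vibration / temperature readings, not data from a real machine), a discriminative model is essentially a learned mapping from labelled inputs to categories:

```python
# Minimal sketch of a discriminative (supervised) model: learn a mapping
# from labelled sensor readings to pass/fail categories, then predict
# the category of new, unseen readings.
from sklearn.ensemble import RandomForestClassifier

# Hypothetical training data: [vibration, temperature] readings,
# labelled by humans as 0 = "OK" or 1 = "faulty".
X_train = [[0.2, 61.0], [0.3, 63.5], [1.8, 82.0], [2.1, 85.5]]
y_train = [0, 0, 1, 1]  # the labels are where human bias can creep in

model = RandomForestClassifier(random_state=0)
model.fit(X_train, y_train)

# Categorise a new reading the system has never seen before.
print(model.predict([[1.9, 80.0]]))  # -> [1], i.e. "faulty"
```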
What Generative AI does differently is learn to generate new data samples that resemble the training data. It does this by combining supervised learning with unsupervised learning, which examines the underlying structure and distribution of data without explicit labels. These models then generate new examples that reflect the statistical properties of the training data.
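For contrast, here is an equally minimal sketch of the generative idea, again with invented sensor readings: model the distribution of the training data, then draw new samples that share its statistical properties.

```python
# Minimal sketch of the generative idea: learn the distribution of the
# training data (no labels needed), then draw new samples that resemble it.
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical unlabelled sensor readings: [vibration, temperature].
X_train = np.array([[0.2, 61.0], [0.3, 63.5], [0.25, 62.0], [0.35, 64.0]])

gmm = GaussianMixture(n_components=1, random_state=0).fit(X_train)

# Generate new, synthetic readings that reflect the statistical
# properties (mean, spread, correlation) of the training data.
new_samples, _ = gmm.sample(5)
print(new_samples)
```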
So far, so good. But what are the implications of Artificial Intelligence for manufacturing? Or more specifically for us as professionals: the people who carry the responsibility for our company, our products and staff?
The current status of AI
The goal of AI extends beyond mimicking human cognitive processes. It aims to enhance decision-making, automate tasks, and solve complex problems. This involves leveraging AI to process vast amounts of data, identify trends, make predictions, and even understand natural language and visual information. The analytical techniques encompass both pattern recognition (positive cognition) and anomaly detection (negative cognition).
But … (and it’s a big ‘but’) while AI can recognise patterns and the outputs may support decision-making, these systems fall short of human-like intelligence and consciousness. In spite of the advances made, AI lacks the holistic cognition and adaptability that are characteristic of human intelligence.
Precisely because they are created by humans, the current status of AI is optimistic, forward-looking ... and occasionally plagued by curious errors.
In practice, most AIs today can be described as Digital Automata. They are programmed entities, designed to exhibit specific behaviour and implemented to perform clearly defined functions. The algorithms within the AI perform arithmetic calculations, logical comparisons and data transformations. They process input data and produce output results. Like automata, they exhibit deterministic behaviour: the same input data and algorithmic operations always result in the same (class of) output response, ensuring consistency and predictability. Any flexibility and customisation of output is based purely on application requirements and occurs only within permitted parameters.
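To make the "digital automaton" point concrete, here is a deliberately trivial sketch (the thresholds are invented): deterministic rules, applied to input data, always producing the same output for the same input.

```python
# Deliberately simple "digital automaton": arithmetic, comparison and
# transformation of input data, with fully deterministic output.
def classify_reading(vibration_mm_s: float, temperature_c: float) -> str:
    # Flexibility exists only within these pre-defined parameters.
    if vibration_mm_s > 1.5 or temperature_c > 80.0:
        return "ALARM"
    if vibration_mm_s > 1.0 or temperature_c > 70.0:
        return "WARNING"
    return "OK"

# The same input always yields the same (class of) output.
assert classify_reading(1.9, 80.0) == "ALARM"
assert classify_reading(1.9, 80.0) == "ALARM"  # identical, every time
```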
While AIs often excel at specific tasks, they frequently struggle with diverse contexts and tasks that require adaptability or an understanding of complex nuances. For example: language translation has come a long way, but AI struggles with idiomatic expressions, cultural references and wordplay. Similarly, word-for-word voice transcription is improving but usually lacks the emotional intelligence necessary to interpret tonal cues such as sarcasm or empathy. This limits an AI's ability to respond appropriately in human interaction. Flexible, contextually rich reasoning and decision-making abilities continue to be a challenge for AIs.
Where we’re going in this article
To understand the current status of AIs, I’ll characterise them by usage and by implementation. This 2x2 matrix becomes our foundation for discussing future directions and their implications …
Characterising AIs by Usage
Here are two ways that AIs are currently used …
User-Initiated
Think of systems like ChatGPT or Gemini for text; Deep Dream, DALL-E or StyleGAN for images; or the Chatbot icon in the corner of a webpage. The sequence is straightforward: the user logs into the system, provides an input, gets a response. The user has direct control over the how and when of the input. This offers high levels of autonomy: the pace and frequency are completely based on the users’ needs. On the other hand, this type of system has low levels of productivity, because every output must be individually triggered. In fact, the user may need to experiment with multiple iterations of input before they get the output that satisfies their requirements.
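The interaction pattern is simple enough to sketch in a few lines (the endpoint and response fields below are hypothetical, not any vendor's real API): one request, one response, repeated as often as the user cares to iterate.

```python
# Sketch of the user-initiated pattern: the user triggers every single
# output, and may iterate several times before the result is acceptable.
import requests

API_URL = "https://example.com/api/generate"  # hypothetical endpoint

def ask(prompt: str) -> str:
    response = requests.post(API_URL, json={"prompt": prompt}, timeout=30)
    response.raise_for_status()
    return response.json()["text"]  # hypothetical response field

# Each output must be individually triggered by the user.
draft = ask("Write a 50-word product description for a CNC lathe.")
better_draft = ask("Rewrite it for a purchasing manager, more formal.")
```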
Continuously online
Think of smart home assistants like Alexa or Siri; or wearable health monitoring systems like Fitbits, smartwatches or medical devices; for Industry 4.0 / IoT, think of all those embedded sensors that monitor machinery. System setup is simple: link up the input feed, connect the AI and switch it on. From there on, the input is always available. The advantage here is the "set it and forget it" approach. It's always on. When the user wants output, they interact with the system and typically choose from a series of pre-defined responses on demand. You can set up real-time analysis to monitor performance within parameters, or exception reporting to flag errors. You can set up periodic reporting to measure success. This is the path to optimisation of processes for efficiency and reliability.
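A continuously online feed looks different in code. This minimal sketch (with an invented sensor function and an assumed vibration limit) shows the pattern: the feed is always on, values are checked against parameters in real time, and exceptions are flagged for reporting.

```python
# Sketch of the "set it and forget it" pattern: a continuous input feed,
# real-time checking against parameters, and exception reporting.
import time

VIBRATION_LIMIT_MM_S = 1.5  # assumed operating parameter

def read_vibration_sensor() -> float:
    """Stand-in for the real machine-monitoring feed (IoT sensor, OPC UA, etc.)."""
    return 0.4  # placeholder value

exceptions = []

while True:  # the feed runs continuously; the user looks in when needed
    value = read_vibration_sensor()
    if value > VIBRATION_LIMIT_MM_S:
        exceptions.append((time.time(), value))  # flag for the exception report
        print(f"Exception: vibration {value} mm/s exceeds limit")
    time.sleep(1)
```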
Characterising AIs by Implementation
Here are two ways that AIs are currently implemented …
Stand-alone use
This is the single-purpose health tracker you wear on your wrist, or an individual login for ChatGPT. The AI is a free-standing application that has nothing to do with any other system. It can serve as an individual productivity tool or feedback device, according to your needs. Once again, the user is in full control here: oversight, frequency and depth of interaction. But note that the style of control is typically pre-defined and limited: the Fitbit has buttons only; there is no text input for Alexa or Siri; there is no voice input for ChatGPT or Gemini. You might want the AI for its creative / productive ability, or for its feedback / optimisation insights. If it does what you need it to, then the world is a sunny place. But that's pretty much it.
Connected and Integrated?
This is where the powerful stuff happens. The output from one AI-enabled system becomes the automated, integrated input for another AI-enabled system. Systems can be joined to form a sequence. For example: the website chatbot identifies a conversation as requiring human intervention and automatically transfers the enquirer to the Customer Support team. Or the AI enables the AGV to pick up components from the warehouse and deliver them to an AI-enabled production machine in real time. Data collection systems can also be integrated to provide wider or deeper insights into an environment. For example: you could connect weather station data from the roof of the factory to production and quality data, to extend the scope of the dataset when looking for root causes of quality fluctuations.
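The weather-station example is easy to sketch (assuming pandas and invented column names): merge the two data feeds on a shared timestamp and you have a wider dataset to interrogate for root causes.

```python
# Sketch of integrating two data collection systems: production/quality
# records joined with rooftop weather data on a shared hourly timestamp.
import pandas as pd

production = pd.DataFrame({
    "timestamp": pd.to_datetime(["2024-05-01 08:00", "2024-05-01 09:00"]),
    "reject_rate_pct": [0.8, 2.4],
})
weather = pd.DataFrame({
    "timestamp": pd.to_datetime(["2024-05-01 08:00", "2024-05-01 09:00"]),
    "humidity_pct": [55, 78],
})

# One wider dataset to search for root causes of quality fluctuations.
combined = production.merge(weather, on="timestamp", how="inner")
print(combined.corr(numeric_only=True))
```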
Current AI Use Cases
Now it’s time to put the matrix together and look at a few typical systems that are already in use in machinery manufacturing:
What opportunities does AI offer?
Generally speaking, the simplest applications and approaches sit in the bottom-left quadrant; the most sophisticated in the top right. The top-right quadrant is where the productivity and optimisation derived from closed-loop or autonomous systems overlap with deeper insights and faster response, and where the bigger payoff potentially lies.
Adopting continuous data feeds in a manufacturing environment offers a way to find the operational sweet spot that combines quality with efficiency and effectiveness. Optical checking leads to more accurate rejection of defective product for higher quality. Smoother, more continuous operations mean that production volumes can be maximised. Optimising operational speeds can minimise wear and tear, which supports preventive maintenance and reduces unplanned downtime.
Integrating AI-enabled applications, devices or systems into a multi-system environment also offers powerful advantages. Closed-loop systems include self-monitoring functions to ensure continual operation. Autonomous devices 'know' what needs to be done and simply get on with it. The opportunity here lies in approaches like Adaptive Production Lines, in which AI enables machinery to adapt quickly to changes in production requirements. The ability to switch between different products without significant downtime enhances the flexibility of manufacturing processes.
Implications and risks with AI
However, any transition towards integrated implementation or continuous operation also reduces human oversight – and the direct consequence of that is to increase risk. A classic risk matrix comprises two dimensions: frequency of occurrence; and gravity (or cost) of consequence.
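Here is a minimal sketch of that matrix in code; the scoring bands are illustrative, not an industry standard.

```python
# Classic two-dimensional risk matrix: frequency of occurrence x
# gravity (cost) of consequence. Scoring bands are illustrative only.
def risk_rating(frequency: int, gravity: int) -> str:
    """Both inputs scored 1 (low) to 5 (high)."""
    score = frequency * gravity
    if score >= 15:
        return "high: mitigate before go-live"
    if score >= 8:
        return "medium: mitigate or monitor"
    return "low: accept and review periodically"

print(risk_rating(frequency=4, gravity=4))  # frequent and costly -> "high"
print(risk_rating(frequency=1, gravity=5))  # rare but catastrophic -> scores "low"
```

Note how a rare-but-catastrophic event scores deceptively low on a simple multiplicative matrix; keep that in mind for the Outside Context Problem discussed below.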
Ask ChatGPT or Gemini “what is the biggest threat to an AI?” and they will tell you something like this:
"The biggest threat to an AI involves a complex array of factors, each interlinked with technological, ethical, and societal dimensions. At its core, the misuse of AI technology poses a profound challenge. This encompasses a range of issues from the creation of biased algorithms, which can perpetuate and exacerbate social inequalities, to ..." Blah, blah, blah.
And although it is grammatically correct and factually credible, this answer illustrates the major problem with Generative AIs for text creation. They are designed to string words together without any real understanding of the outside world. (And in spite of this, some organisations believe that Generative AI is mature enough to be used in Customer-facing applications.)
In my view, the biggest threats to systems that involve AIs are: loss of operational functionality, disruption of data processing capabilities, compromise of data integrity, and degradation of system performance.
Humans are instinctively aware that they can only survive 3 minutes without air, 3 days without water, 3 weeks without food. Artificial Intelligence systems have to be told that they need a power source.
The Outside Context Problem
An OCP is a completely unexpected external event with the potential to cause partial or total failure of a system.
An easy-to-understand example of an AI-managed closed-loop system is a real-time traffic flow system. Through rain and shine, data from cameras feeds into an AI that adjusts traffic light timings to continuously optimise safe traffic flow through a complex junction.
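A stripped-down sketch of that closed loop might look like this (the queue detection and timing rule are invented stand-ins for the real camera pipeline and AI):

```python
# Sketch of a closed-loop controller: camera-derived queue lengths in,
# adjusted green-light timings out, repeated continuously.
def detect_queue_lengths() -> dict:
    """Stand-in for the camera/AI pipeline; vehicles waiting per approach."""
    return {"north": 12, "south": 4, "east": 7, "west": 3}

def compute_green_times(queues: dict, cycle_s: int = 90) -> dict:
    """Allocate the cycle in proportion to demand (a crude optimisation rule)."""
    total = sum(queues.values()) or 1
    return {approach: round(cycle_s * n / total) for approach, n in queues.items()}

queues = detect_queue_lengths()
print(compute_green_times(queues))
# The loop only ever responds *within* these design parameters;
# it has no concept of why a queue might suddenly never clear.
```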
Everything runs smoothly until an Outside Context Problem happens …
The consequence? The system suffers total or partial collapse, but the traffic continues to arrive … At this point the AI can only respond within its design parameters and modify the flow of traffic. It cannot identify root causes, much less suggest an effective course of corrective action.
From this it should be immediately clear that we - the designers - need to put human oversight back into the system, because the AI will not. In this sense all AI systems are fundamentally flawed because they are by nature self-referential. They cannot take into account a wider context that is beyond their training.
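In code terms, putting the human back in can be as blunt as an envelope check: if the inputs drift outside anything the system was designed or trained for, stop optimising, fail safe and escalate. A minimal sketch, with invented thresholds and an assumed notify_operator() hook:

```python
# Sketch of a human-oversight guard: if inputs leave the design envelope,
# fall back to a safe state and hand control to a person.
def notify_operator(message: str) -> None:
    """Assumed hook into your alerting system (SMS, SCADA alarm, pager...)."""
    print(f"OPERATOR ALERT: {message}")

DESIGN_ENVELOPE = {"queue_max": 50, "sensor_dropout_max": 0.2}

def supervised_step(queues: dict, sensor_dropout: float) -> str:
    out_of_context = (
        max(queues.values()) > DESIGN_ENVELOPE["queue_max"]
        or sensor_dropout > DESIGN_ENVELOPE["sensor_dropout_max"]
    )
    if out_of_context:
        notify_operator("Inputs outside design envelope; manual control required")
        return "failsafe"   # e.g. revert the lights to fixed timings
    return "optimise"       # normal closed-loop operation

print(supervised_step({"north": 120, "south": 4}, sensor_dropout=0.05))
```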
Outside Context Problems are by nature unpredictable. They can happen at any time. And we have no way of knowing what they look like, so there’s not much point in trying to guess what they are. They will, by definition, be infrequent, but that’s not the issue, either.
The issue is not the cause of the OCP, but the consequence: “what must we ensure happens, subsequent to the partial or total collapse of an AI and/or the system it is connected to”?
Designing for Disaster Recovery
If we have to prepare for systemic collapse before switching the AI system on, how does that change our design thinking? Here, a couple of thoughts for business managers:
What about Generative AI?
Generative AI offers the potential to add another layer of functionality on top of existing systems. The examples discussed above – especially language and voice – hold the promise of richer and more meaningful interfaces to operational systems.
Where I think we're headed with Gen AI is multiple input methods for integrated systems. In the future, users will be able to choose between giving inputs by buttons, text, or language and voice. And AIs will be able to offer a variety of outputs or report formats: by voice or text; as an image, a chart, a dataset ... whatever is most appropriate for rapid comprehension.
In practice, however, the current status of generative AI is that applications for language or text are probably not reliable or robust enough for immediate, trouble-free integration into productive commercial systems. Business managers should allow plenty of time for iteration and gaining experience before going live.
The potential for A.I. getting things wrong is still high. If the consequences are small in scope and limited in financial impact, the risk may be assessed as “part of the learning curve”. If the consequences are far-reaching and high impact, a more cautious approach to staged development and roll-out may well be safer.
Related issues are management flexibility and staff processes. [See how the Air Canada chatbot story went global: Washington Post, Forbes, Guardian.] If an AI-enabled system does make a mistake, how will the error be identified, how will it be escalated and corrected? At some point there has to be a case-by-case hand-over from AI to human. The Air Canada example could have been low-impact in financial terms, but lack of management preparation turned it into an unnecessary public relations embarrassment. We're back to the oversight issue (again).
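One practical pattern is to design the hand-over in from day one: log every answer, and route anything the model is not confident about to a person. A minimal sketch, with a hypothetical answer_with_confidence() standing in for whichever generative model and scoring method you use:

```python
# Sketch of a case-by-case AI-to-human hand-over for a customer-facing bot.
import logging

logging.basicConfig(level=logging.INFO)
CONFIDENCE_THRESHOLD = 0.8  # assumed cut-off, tuned with experience

def answer_with_confidence(question: str) -> tuple[str, float]:
    """Hypothetical stand-in for the generative model plus a confidence score."""
    return ("Our bereavement fare policy is ...", 0.55)

def handle_enquiry(question: str) -> str:
    answer, confidence = answer_with_confidence(question)
    logging.info("Q=%r confidence=%.2f", question, confidence)  # audit trail
    if confidence < CONFIDENCE_THRESHOLD:
        # Escalate: a human answers now, and the case is reviewed later.
        return "Let me connect you with a member of our support team."
    return answer

print(handle_enquiry("Can I get a refund under your bereavement policy?"))
```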
My experience of AIs for text, images & programming:
Observations
Implications
From Operational System to Product
So far I've only talked about operational systems that a company might choose to use to increase the effectiveness of its own operations.
If product designers want to add and embed AI elements into the products themselves, I respectfully suggest that the OCP and design issues are even more important. In addition to the potential for liability to customers and end-users, there is the potential for liability to bystanders and the general public.
The case in point is the self-driving Uber car that killed Elaine Herzberg in March 2018. The vehicle’s cameras observed a shape, but the AI did not ‘recognise’ it. Because it could not classify the shape as a ‘pedestrian’, the AI did not actuate the brakes. Elaine was pushing a bicycle with shopping bags hanging from the handlebars. She died of her injuries in the local hospital.
The origin of the term “Outside Context Problem”
An OCP is similar to a Black Swan event. It doesn't exist, it can't happen – until it's right there before your eyes. What are you going to do now?
Science fiction author Iain M Banks coined the term for the novel Excession and described it like this:
An Outside Context Problem was the sort of thing most civilizations encountered just once, and which they tended to encounter rather in the same way a sentence encountered a full stop. The usual example given to illustrate an Outside Context Problem was imagining you were a tribe on a largish, fertile island; you'd tamed the land, invented the wheel or writing or whatever, the neighbors were cooperative or enslaved but at any rate peaceful and you were busy raising temples to yourself with all the excess productive capacity you had. You were in a position of near-absolute power and control which your hallowed ancestors could hardly have dreamed of and the whole situation was just running along nicely like a canoe on wet grass... when suddenly this bristling lump of iron appears sail-less and trailing steam in the bay and these guys carrying long funny-looking sticks come ashore and announce you've just been discovered, you're all subjects of the Emperor now, he's keen on presents called tax and these bright-eyed holy men would like a word with your priests.
A great book, by the way.
Please add your comments, experiences or opinions below ...
Update: Fraunhofer has published a whitepaper on compliance with the EU AI Act. PDF available here: https://www.iks.fraunhofer.de/content/dam/iks/documents/Whitepaper-EU-AI-Act-Fraunhofer-IKS.pdf
Update: the Council of the EU approved the European AI Act on 21 May 2024. Overview here: https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence#next-steps-7 Key points: specific applications are prohibited; other applications are categorised by risk; "high-risk" applications must include Human Oversight (Article 14).