Using artificial intelligence to control digital manufacturing
Researchers train a machine-learning model to monitor and adjust the 3D printing process to correct errors in real-time.
Scientists and engineers are constantly developing new materials with unique properties that can be used for 3D printing, but figuring out how to print with these materials can be a complex, costly conundrum.
Often, an expert operator must use manual trial-and-error — possibly making thousands of prints — to determine ideal parameters that consistently print a new material effectively. These parameters include printing speed and how much material the printer deposits.
MIT researchers have now used artificial intelligence to streamline this procedure. They developed a machine-learning system that uses computer vision to watch the manufacturing process and then correct errors in how it handles the material in real-time.
They used simulations to teach a neural network how to adjust printing parameters to minimize error, and then applied that controller to a real 3D printer. Their system printed objects more accurately than all the other 3D printing controllers they compared it to.
The work avoids the prohibitively expensive process of printing thousands or millions of real objects to train the neural network. And it could enable engineers to more easily incorporate novel materials into their prints, which could help them develop objects with special electrical or chemical properties. It could also help technicians make adjustments to the printing process on-the-fly if material or environmental conditions change unexpectedly.
“This project is really the first demonstration of building a manufacturing system that uses machine learning to learn a complex control policy,” says senior author Wojciech Matusik, professor of electrical engineering and computer science at MIT who leads the Computational Design and Fabrication Group (CDFG) within the Computer Science and Artificial Intelligence Laboratory (CSAIL). “If you have manufacturing machines that are more intelligent, they can adapt to the changing environment in the workplace in real-time, to improve the yields or the accuracy of the system. You can squeeze more out of the machine.”
The co-lead authors on the research are Mike Foshey, a mechanical engineer and project manager in the CDFG, and Michal Piovarci, a postdoc at the Institute of Science and Technology Austria. MIT co-authors include Jie Xu, a graduate student in electrical engineering and computer science, and Timothy Erps, a former technical associate with the CDFG.
Picking parameters
Determining the ideal parameters of a digital manufacturing process can be one of its most expensive steps because so much trial-and-error is required. And once a technician finds a combination that works well, those parameters are only ideal for one specific situation. She has little data on how the material will behave in other environments, on different hardware, or if a new batch exhibits different properties.
Using a machine-learning system is fraught with challenges, too. First, the researchers needed to measure what was happening on the printer in real-time.
To do this, they developed a machine-vision system using two cameras aimed at the nozzle of the 3D printer. The system shines light at material as it is deposited and, based on how much light passes through, calculates the material’s thickness.
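The article does not publish the actual algorithm, but a light-transmission measurement of this kind is often modeled with a Beer-Lambert-style relationship, in which transmitted intensity decays exponentially with thickness. Here is a minimal sketch under that assumption; the function name and coefficients are hypothetical:

```python
import numpy as np

# Illustrative sketch only: assumes a Beer-Lambert-style model in which
# transmitted light decays exponentially with thickness: I = I0 * exp(-mu * t).

def thickness_from_transmission(image, i0, mu):
    """Estimate per-pixel material thickness from transmitted light.

    image: 2D array of measured intensities behind the deposited material
    i0:    calibrated intensity with no material present
    mu:    attenuation coefficient of the material (per mm, from calibration)
    """
    transmission = np.clip(image / i0, 1e-6, 1.0)  # guard against log(0)
    return -np.log(transmission) / mu              # thickness in mm

# A brighter pixel (more light passing through) maps to a thinner deposit.
frame = np.array([[0.9, 0.5], [0.7, 0.2]])
print(thickness_from_transmission(frame, i0=1.0, mu=2.0))
```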
“You can think of the vision system as a set of eyes watching the process in real-time,” Foshey says.
The controller would then process images it receives from the vision system and, based on any error it sees, adjust the feed rate and the direction of the printer.
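The researchers' actual controller is a trained neural network, but a simple proportional feedback loop conveys the sense of the adjustment: deposits that measure too thick slow the feed, and too-thin deposits speed it up. The sketch below is purely illustrative, and every name and gain is hypothetical:

```python
# Purely illustrative: the researchers' controller is a trained neural
# network, not this hand-written rule.

def adjust_feed_rate(measured, target, feed_rate, gain=0.5):
    """Slow the feed when deposits run thick, speed it up when they run thin."""
    error = (measured - target) / target          # relative thickness error
    new_rate = feed_rate * (1.0 - gain * error)
    return max(new_rate, 0.0)                     # feed rate cannot go negative

rate = 10.0  # mm/s, hypothetical starting feed rate
for measured in [1.2, 1.1, 1.0]:                  # simulated readings (mm)
    rate = adjust_feed_rate(measured, target=1.0, feed_rate=rate)
    print(f"adjusted feed rate: {rate:.2f} mm/s")
```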
But training a neural network-based controller to understand this manufacturing process is data-intensive, and would require making millions of prints. So, the researchers built a simulator instead.
Successful simulation
To train their controller, they used a process known as reinforcement learning in which the model learns through trial-and-error with a reward. The model was tasked with selecting printing parameters that would create a certain object in a simulated environment. After being shown the expected output, the model was rewarded when the parameters it chose minimized the error between its print and the expected outcome.
In this case, an “error” means the model either dispensed too much material, placing it in areas that should have been left open, or did not dispense enough, leaving open spots that should be filled in. As the model performed more simulated prints, it updated its control policy to maximize the reward, becoming more and more accurate.
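To make that reward concrete, here is a minimal sketch under the assumptions above: the print and the target are occupancy maps, and the reward penalizes both excess and missing material. The exact reward used in the paper may differ:

```python
import numpy as np

# Hypothetical reward in the spirit described above: penalize both
# over-deposition (material where the target is empty) and under-deposition
# (gaps where the target is filled).

def print_reward(printed, target):
    """printed, target: 2D occupancy maps with values in [0, 1]."""
    over = np.clip(printed - target, 0.0, None).sum()   # excess material
    under = np.clip(target - printed, 0.0, None).sum()  # missing material
    return -(over + under)  # smaller error -> larger reward

target = np.array([[1.0, 0.0], [1.0, 1.0]])
printed = np.array([[0.8, 0.3], [1.0, 0.9]])
print(print_reward(printed, target))  # -0.6
```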
However, the real world is messier than a simulation. In practice, conditions typically change due to slight variations or noise in the printing process. So the researchers created a numerical model that approximates noise from the 3D printer. They used this model to add noise to the simulation, which led to more realistic results.
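The paper's numerical noise model is not reproduced here, but the idea can be sketched as a random perturbation of the simulated deposition, so the controller never trains on unrealistically clean data. The Gaussian form and the magnitudes below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the paper's numerical noise model: perturb the simulated
# deposition. The Gaussian form and the magnitudes are assumptions.

def noisy_deposition(ideal_width, flow_std=0.05, jitter_std=0.02):
    flow_error = rng.normal(1.0, flow_std)    # over-/under-extrusion factor
    jitter = rng.normal(0.0, jitter_std)      # nozzle positioning error (mm)
    return ideal_width * flow_error + jitter

print([round(noisy_deposition(0.4), 3) for _ in range(3)])
```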
“The interesting thing we found was that, by implementing this noise model, we were able to transfer the control policy that was purely trained in simulation onto hardware without training with any physical experimentation,” Foshey says. “We didn’t need to do any fine-tuning on the actual equipment afterwards.”
When they tested the controller, it printed objects more accurately than any other control method they evaluated. It performed especially well at infill printing, which is printing the interior of an object. Some other controllers deposited so much material that the printed object bulged up, but the researchers’ controller adjusted the printing path so the object stayed level.
Their control policy can even learn how materials spread after being deposited and adjust parameters accordingly.
“We were also able to design control policies that could control for different types of materials on-the-fly. So if you had a manufacturing process out in the field and you wanted to change the material, you wouldn’t have to revalidate the manufacturing process. You could just load the new material and the controller would automatically adjust,” Foshey says.
Now that they have shown the effectiveness of this technique for 3D printing, the researchers want to develop controllers for other manufacturing processes. They’d also like to see how the approach can be modified for scenarios where there are multiple layers of material, or multiple materials being printed at once. In addition, their approach assumed each material has a fixed viscosity (“syrupiness”), but a future iteration could use AI to recognize and adjust for viscosity in real-time.
Additional co-authors on this work include Vahid Babaei, who leads the Artificial Intelligence Aided Design and Manufacturing Group at the Max Planck Institute; Piotr Didyk, associate professor at the University of Lugano in Switzerland; Szymon Rusinkiewicz, the David M. Siegel ’83 Professor of computer science at Princeton University; and Bernd Bickel, professor at the Institute of Science and Technology Austria.
The work was supported, in part, by the FWF Lise-Meitner program, a European Research Council starting grant, and the U.S. National Science Foundation.
Protecting students: School district adds artificial intelligence to increase security
PEORIA, Ill. (WEEK/Gray News) - A school district in Illinois has approved a new security system officials say should better protect students.
WEEK reports Peoria Public Schools will be getting a new security system that will operate without being noticed. The security system is known as IntelliSee, and the company says it is a real-time artificial intelligence security platform.
The system consists of 64 cameras that would reportedly learn over time to protect kids as they go about their day at school.
According to an IntelliSee representative, data will be analyzed in real-time to identify objects and people while recording footage for future playback. The system will also alert staff when it detects a problem that poses a danger.
The security system is expected to notify a custodian if it detects a puddle, for example, or alert police if it detects a shooting in progress.
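IntelliSee has not published its internals, so the routing behavior described above can only be pictured generically, as a mapping from detection types to responders. Every name and threshold in this sketch is hypothetical:

```python
# Generic sketch of the routing behavior described in the article: different
# detections go to different responders. All names here are invented.

ALERT_ROUTES = {
    "puddle": "custodial staff",
    "weapon": "police and administrators",
    "fall": "school nurse",
}

def route_alert(detection, confidence, threshold=0.8):
    if confidence < threshold:
        return None  # suppress low-confidence detections
    recipient = ALERT_ROUTES.get(detection)
    return f"Alert {recipient}: {detection} detected" if recipient else None

print(route_alert("puddle", 0.93))
print(route_alert("weapon", 0.99))
```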
“This is just simply adapting with the current climate of what a school day could look like in 2022,” said Mike Murphy, with the Peoria Public Schools school board.
Members of the school district shared that none of the data or video collected would leave the servers, meaning no third parties would have access.
During Tuesday’s meeting on the new security system, members of the school board brought up the deadly shooting in Uvalde, Texas, an incident district leaders said they want to do everything they can to keep from happening locally.
“As a district, we need to learn from that situation, and learning from that means putting different things in place that could potentially stop the threat of violence inside of our schools,” said Gregory Wilson, with the Peoria Public Schools board.
Officials said the system is expected to be active at the beginning of the upcoming school year, with payment coming from state funds and grants focused on student support.
Peoria Public Schools board members said the district has agreed to a one-year contract with IntelliSee.
“We want to make sure it’s safe, and we’re going to do our job as a district to ensure school safety,” Wilson said. “A top priority is the safety of our students, staff, teachers and administrators.”
ARTIFICIAL INTELLIGENCE IS THE FUTURE OF THE BANKING INDUSTRY — ARE YOU PREPARED FOR IT?
Our world is moving at a fast pace. Though banks originally built their foundations to be run solely by humans, the time has come for artificial intelligence in the banking industry. In 2020, the global AI banking market was valued at $3.88 billion, and it is projected to reach $64.03 billion by the end of the decade, with a compound annual growth rate of 32.6%. However, even the best strategies for applying artificial intelligence in the banking industry can be undermined by weak core technology and a poor data backbone.
By my count, there were 20,000 new banking regulatory requirements created in 2015 alone. Chances are your business won’t find a one-size-fits-all solution to dealing with this. The next-best option is to be nimble. You need to be able to break down the business process into small chunks. By doing so, you can come up with digital strategies that work with new and existing regulations.
AI can take you a long way in this process, but you must know how to harness its power. Take originating home loans, for instance. This can be an important, sometimes tedious, process for the loan seeker and bank. With an AI solution, loan origination can happen quicker and be more beneficial to both parties.
As the world of banking moves toward AI, it is integral to note that the crucial working element for AI is data. The trick to using that data is to understand how to leverage it best for your business’ value. Data with no direction won’t lead to progress, nor will it lead to the proper deployment of your AI. That is one of the top reasons it is so challenging to implement AI in banks — there has to be a plan.
Even if you come up with a poor strategy, those mistakes can be course-corrected over time. It takes some time and effort, but it is doable. If you home in on how customer information can be used, you can utilize AI for banking services in a way that is scalable and actionable. Once you understand how to use the data you collect, you can develop technical solutions that work with each other, identify specific needs, and build data pipelines that will lead you down the road to AI.
How is artificial intelligence changing the banking sector?
In an increasingly digital world, customers have more access to their banking information than ever. Of course, this can lead to other problems. Because there is so much access to data, there are also prime opportunities for fraudulent activities, and fraud detection is one example of how AI is changing the banking sector. With AI, you can train systems to learn, understand, and recognize when these activities happen. In fact, the number of exposed records decreased by 5% from 2020 to 2021.
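As one illustration of the kind of technique involved, here is a minimal anomaly-detection sketch using scikit-learn's IsolationForest. Production fraud systems are far more elaborate, and the features and values below are invented:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Toy anomaly-detection sketch; real fraud systems are far more elaborate.
# Each row is a transaction: [amount, hour_of_day, km_from_home].
transactions = np.array([
    [25.0, 12, 2.0],
    [40.0, 18, 5.0],
    [12.5, 9, 1.0],
    [30.0, 20, 3.0],
    [9800.0, 3, 4200.0],  # the suspicious outlier
])

model = IsolationForest(contamination=0.2, random_state=0).fit(transactions)
print(model.predict(transactions))  # -1 flags the anomalous transaction
```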
AI also safeguards against data theft and abuse. Not only can AI recognize breaches from outside sources, but it can also recognize internal threats. Once an AI system is trained, it can identify these problems and even offer solutions to them. For instance, AI can direct traffic at a customer support call center to absorb an influx of calls during high-volume periods.
Another great example of this is the development of conversational AI platforms. Data from ubiquitous social media and other online platforms can be used to tailor customer experiences directly, led by AI. By using the data gathered from all sources, AI can greatly improve the customer experience overall.
For example, a loan might take anywhere from seven to 45 days to be granted. But with AI, the process can be expedited not only for the customer, but also the bank. By using AI in a situation such as this, your bank can assess the risk it is taking on by servicing loans. It can also make the process faster by performing underwriting, document scanning, and other manual processes previously associated with data collection. On top of all that, AI can gather and analyze data about your customers’ behaviors throughout their banking lives.
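As a toy illustration of AI-assisted underwriting, a simple model can score default risk from a few applicant features. Real underwriting uses far richer data and must meet fair-lending and explainability requirements; the features and labels below are invented:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Toy underwriting sketch with hypothetical data.
# Each row: [credit_score, debt_to_income_ratio, years_employed].
X = np.array([
    [720, 0.25, 6], [650, 0.45, 2], [580, 0.60, 1],
    [780, 0.15, 10], [600, 0.55, 1], [700, 0.30, 4],
])
y = np.array([0, 1, 1, 0, 1, 0])  # 1 = defaulted (hypothetical labels)

model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, y)
applicant = [[690, 0.35, 3]]
print(model.predict_proba(applicant)[0][1])  # estimated default probability
```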
In the past, much of this work was done solely by people. Although automation has certainly helped speed up and simplify tasks, it handles tedium and lacks the flexibility of AI. AI saves time and money by freeing up your employees for other work, and it provides valuable insights to your customers, who can budget better and get a clearer idea of where their money is going.
Even the most traditional banks will want to adopt AI to save time and money and allow employees more opportunities to have positive one-on-one relationships with customers. Look no further than fintech companies — such as Credijusto, Nubank, and Monzo — that have digitized traditional banking services through the power of cutting-edge tech.
Are you ready to put AI to work for your business?
Today, it’s not a question of how AI is impacting financial services. Now, it’s about how to implement it. That all starts with you. You must ask the right questions: What are your goals for implementing AI? Do you want to improve your internal processes? Simply provide a better customer service experience? If so, how should you implement AI for your banking services? Start with these strategies:
1. Set short-term goals. Instead of diving in deep, zero in on some features you think would be nice up front so you can build an infrastructure that you can fully realize later. For example, you could set up AI to identify the type of credit your customers seek. Eventually, you could use that same technology to predict if that line of credit will be successful.
By making realistic short-term goals, you set yourself up for future success. These are the solutions that will be the building blocks for the type of AI everyone will aspire to use.
2. Understand your readiness for implementation. Here, you are going to need a little self-awareness. If you want to implement AI, you need to make sure that you have the proper data collection mechanisms in place. If not, then you need to start from there.
You want to ensure that you know how you currently use data and how you plan on using it in the future. Again, this sets your organization up for success in the long run. If you don’t have the right practices now, you certainly won’t have them going forward.
3. Equip yourself with the right tools. Once you have done some self-reflection, it is time to set yourself up with the tools required to implement AI functions. To do so, you need a team gathered and ready, so you can hit the ground running.
As you implement AI into your banking practices, you should know how exactly you generate data. Then, you must understand how you interpret it. What is the best use for it? After that, you can make decisions that will be scalable, useful, and seamless.
Technology has not only made the world around us move faster but has also made it better in so many ways. Traditional institutions such as banks might be slow to adopt, but we’ve already seen how artificial intelligence is changing the banking sector. By taking the proper steps, you could be moving right along with it into the future.
DATA FABRIC: SIMPLIFYING DATA MANAGEMENT IN AN INCREASINGLY DATA-RELIANT WORLD
Businesses all over the world are seeking to become more data-driven in their decision-making and extract as much value as possible from the data available to them. But given the rise of various data-related technologies, devices and storage environments, data management has become tougher than ever. One architecture that is now gaining traction within such organisations to help them effectively manage this problem is data fabric.
Indeed, today’s global organisation is almost certain to have much of its data deployed across a vast geographical area—on-site and off-premise, and perhaps even across various physical and cloud environments. Sensors for the internet of things (IoT), cloud computing and edge computing represent just a few common ways in which data is generated across various remote locations. What’s more, an expanding array of data facilities has also emerged, including data lakes, relational databases, data mesh and flat files, and as such, managing, processing, storing, integrating and securing data across all these data types and platforms can present a major headache to enterprise organisations.
Data fabric can be utilised to consolidate disparate data sources and locations into one significantly more manageable environment. Sometimes confused with data lakes, which involve centralised storage of large quantities of raw data, data fabric supports a multitude of storage locations, making the process of managing the data across all these locations as simple as possible. “Previously, software development teams went with their own implementation for data storage and retrieval,” explained Palo Alto-based data-integration firm Striim. “A typical enterprise data centre stores data in relational databases (e.g., Microsoft SQL Server), non-relational databases (e.g., MongoDB), data repositories (e.g., a data warehouse), flat files, and other platforms. As a result, data is spread across rigid and isolated data silos, which creates issues for modern businesses.” The data fabric thus simplifies data access across such remote data locations to facilitate efficient self-service data storage and consumption.
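Conceptually, the fabric acts as one virtual access layer in front of many physical stores. The sketch below illustrates that idea only; commercial data-fabric products and their connectors are far more capable, and every class and method name here is hypothetical:

```python
# Minimal sketch of the idea only: one virtual access layer in front of many
# physical stores. All class and method names are hypothetical.

class SqlConnector:
    def fetch(self, request):
        return f"rows from SQL Server for: {request}"

class DocumentConnector:
    def fetch(self, request):
        return f"documents from MongoDB for: {request}"

class DataFabric:
    def __init__(self):
        self._sources = {}  # logical name -> connector

    def register(self, name, connector):
        self._sources[name] = connector

    def query(self, name, request):
        # Consumers address data by logical name, not physical location.
        return self._sources[name].fetch(request)

fabric = DataFabric()
fabric.register("orders", SqlConnector())
fabric.register("reviews", DocumentConnector())
print(fabric.query("orders", "monthly totals by region"))
```

The net effect is the simplification the article describes: consumers see one interface regardless of where the data physically lives.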
This simplification crucially prevents organisations from having to overhaul their entire data infrastructure; instead, data fabrics can make existing data architectures more efficient. According to IBM, the architecture thus becomes agnostic to data environments, processes, utilities and geographies, all while integrating end-to-end data-management capabilities. “A data fabric automates data discovery, governance and consumption, enabling enterprises to use data to maximize their value chain,” the US tech giant explained. “With a data fabric, enterprises elevate the value of their data by providing the right data, at the right time, regardless of where it resides.”
Specifically, the data fabric improves upon existing data infrastructure, often by adding automation to the data-management process. It operates as an integrated layer—the fabric—of data and connecting processes, and it implements analytics over metadata assets (that is, the data that provides more information about other data), which, according to Gartner, supports “the design, deployment and utilization of integrated and reusable data across all environments, including hybrid and multi-cloud platforms.” With in-built analytics able to interpret this metadata, data fabric can learn what data is being used as well as make recommendations for more, different and better data, which, in turn, can lower data-management requirements by up to 70 percent.
Perhaps not surprisingly, then, machine learning (ML) plays a crucial role in learning from existing data and making further recommendations. Exposing data to ML models facilitates improvement in their learning capabilities, with ML algorithms connected to the data fabric used to monitor data pipelines, identify valuable relationships and make appropriate recommendations. According to the Dallas-based data-fabric firm K2View, this ML capability can be broken down into three stages, sketched in code after the list:
Passive learning: Data fabric learns what data exists and applies artificial intelligence (AI) to add any missing metadata.
Behaviour analysis: Data fabric uses metadata to learn where and how data is being used and then analyses other data for similar behaviour. “If the behaviour is the same, it’s probably already usable,” explained K2View.
Active recommendations: Data fabric relies on active metadata to generate recommendations for data engineers, such as new data, more data or best data to deliver to data consumers.
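To make these three stages concrete, here is a hypothetical set of stubs over a toy metadata catalog; a real data fabric would operate on live, active metadata rather than a hard-coded dictionary:

```python
# Hypothetical stubs mapping K2View's three stages onto code.

catalog = {
    "orders.csv": {"owner": None, "reads_per_day": 120},
    "events.log": {"owner": "ops", "reads_per_day": 3},
}

def passive_learning(catalog):
    """Stage 1: discover what exists and fill in missing metadata."""
    for meta in catalog.values():
        if meta["owner"] is None:
            meta["owner"] = "unassigned"  # stand-in for AI-inferred metadata

def behaviour_analysis(catalog):
    """Stage 2: learn where and how data is actually being used."""
    return {name: meta["reads_per_day"] for name, meta in catalog.items()}

def active_recommendations(usage, threshold=10):
    """Stage 3: recommend which data to deliver to data consumers."""
    return [name for name, reads in usage.items() if reads >= threshold]

passive_learning(catalog)
print(active_recommendations(behaviour_analysis(catalog)))  # ['orders.csv']
```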
Data fabric leverages both human and machine capabilities to access the data in place or support its consolidation where appropriate, as Gartner’s Ashutosh Gupta explained in May 2021. “It continuously identifies and connects data from disparate applications to discover unique, business-relevant relationships between the available data points. The insight supports re-engineered decision-making, providing more value through rapid access and comprehension than traditional data management practices.”
Gartner has ranked data fabric as the top technology trend for 2022 and predicted that data-fabric deployments will quadruple efficiency in data utilisation while cutting human-driven data-management tasks in half by 2024. It has identified four key pillars of a data-fabric architecture that data and analytics leaders must know:
1. Data fabric must collect and analyse all forms of metadata: This contextual information provides the data-fabric design’s foundation, and as such, there should be a mechanism to enable the data fabric to identify, connect and analyse all kinds of metadata.
2. Data fabric must convert passive metadata to active metadata: Data fabric should analyse available metadata for key metrics and statistics, graphically depict metadata in an easy-to-understand manner and leverage key metadata metrics to enable AI/ML algorithms.
3. Data fabric must create and curate knowledge graphs: Knowledge graphs enrich data with semantics and thus enable businesses to derive value from the data. The semantic layer of the knowledge graph adds depth and meaning to the data usage and content graph, allowing AI/ML algorithms to use the information for analytics and other operational use cases (see the sketch after this list).
4. Data fabric must have a robust data-integration backbone: Data fabric should be compatible with various data-delivery styles, such as ETL (extract, transform, and load), streaming, replication and messaging, and it should support all types of data users, including IT (information technology) users and business users.
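As a small illustration of pillar 3, a knowledge graph can be reduced to subject-predicate-object triples, and walking them surfaces the semantic context that AI/ML algorithms consume. The entities and relations below are invented:

```python
# Toy knowledge graph reduced to subject-predicate-object triples so it runs
# without extra libraries. The entities and relations here are invented.

triples = [
    ("customer_table", "contains", "email_column"),
    ("email_column", "is_a", "personal_data"),
    ("personal_data", "governed_by", "gdpr_policy"),
]

def related(entity, triples, depth=3):
    """Walk outgoing edges to surface the semantic context of an entity."""
    found, frontier = set(), {entity}
    for _ in range(depth):
        frontier = {o for s, _, o in triples if s in frontier}
        found |= frontier
    return found

# The semantic layer lets a governance engine see that customer_table
# ultimately falls under gdpr_policy.
print(related("customer_table", triples))
```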
The data analytics and ML firm AtScale has also outlined the necessary capabilities a “true data fabric solution” should have, including autonomous data engineering; unified data semantics; centralised data security and governance; data-management visibility; and platform and application agnosticism. “The need for speed is the competitive differentiator for global enterprises,” AtScale explained in September 2019. “A data fabric can serve to minimize disruption by creating a highly adaptable data management environment that automatically adjusts to changing technology.”
And to underline just how vital data fabric is expected to prove to businesses over the next few years, a recent report published by market-research firm StrategyR projected that the global data-fabric market will reach $3.7 billion by 2026 from $1.6 billion this year at a compound annual growth rate (CAGR) of 22.2 percent over the forecast period. “The market is slated to receive a noteworthy push from a number of favourable factors like increasing acceptance of big data and analytics for deriving business decisions,” according to the report. “The increasing penetration of connected devices and systems has resulted in a notable spike in the amount and variety of data. The deployment of sensors and video cameras across facilities for gaining location-related and other information is leading to significant data volumes that can be used for deriving business decisions. These trends are creating strong demand for data fabric solutions.”