Hype: ML ≠ AI

Photo credit: Metrix, LLC - The future of security

Over the past few weeks, artificial intelligence (AI) news has been dominated by discussions of the "democratization of AI." I find the discourse rather humorous because real AI, defined as having general intelligence, is still a distant reality. If only the hype cheerleaders knew that most machine learning (ML) tools can be downloaded for free in several programming languages, and that ML programs often do not work very well on complex problems. It is even more comical that the marketing and sales teams at major tech companies have settled on the formula machine learning = AI, a conflation that has become a running joke in several IEEE publications. Everyone should take a deep breath and relax: AI, cough... cough... machine learning, is not going to emerge as fast as everyone thinks because of bottlenecks in two key technologies it depends on: broadband and electricity.

What do AI and ML need to function?

AI and ML need data, lots of it. Not only do these processes require large volumes of data, but the data must be valid and reliable, and there are many potential pitfalls in analysis and causal inference. Here are three articles on the concept of garbage in, garbage out, followed by a short data-sanity sketch:

  1. https://www.capgemini.com/2017/10/quality-data-a-must-have-for-ai/
  2. https://www.cio.com/article/3254693/ais-biggest-risk-factor-data-gone-wrong.html
  3. https://www.forbes.com/sites/cognitiveworld/2019/03/07/the-achilles-heel-of-ai/#4624a1dc7be7
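
To make the garbage-in/garbage-out point concrete, here is a minimal sketch of the kinds of sanity checks worth running before any modeling. The dataset, column names, and thresholds are illustrative assumptions, not details drawn from the articles above:

```python
import numpy as np
import pandas as pd

# Illustrative sensor data with deliberately injected problems;
# the columns and values are assumptions, not from this article.
df = pd.DataFrame({
    "temperature_c": [21.3, 22.1, np.nan, -999.0, 21.9, 21.9],
    "vibration_g":   [0.02, 0.03, 0.02, 0.02, np.nan, np.nan],
})

# 1. Missing values: gaps that get silently imputed later become
#    silent errors in any downstream model.
print("Fraction missing per column:")
print(df.isna().mean())

# 2. Physically impossible values: a temperature below absolute zero
#    (-273.15 C) is a broken sensor, not information.
impossible = df[df["temperature_c"] < -273.15]
print("Impossible temperature readings:", len(impossible))

# 3. Duplicate records: double-counted rows bias whatever is fit next.
print("Duplicate rows:", df.duplicated().sum())
```

Checks like these are cheap, and skipping them is how garbage ends up feeding a model.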

After you have valid and reliable data, ask yourself this question: is AI or ML the best tool for the job? When I speak with customers or with other scientists and engineers, they often think they want AI or ML, but simpler analytical techniques can frequently provide the information they need. The figure below depicts how to think about the AI and ML analytics process:

[Figure: The AI Hierarchy of Needs. Photo credit: Monica Rogati, https://hackernoon.com/the-ai-hierarchy-of-needs-18f111fcc007]
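
One practical way to answer the "best tool for the job" question is to score a simple, interpretable baseline before reaching for heavier ML. The sketch below uses synthetic data and scikit-learn purely for illustration; the data, models, and numbers are my assumptions, not a prescribed workflow:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

# Synthetic data with a mostly linear signal (an assumption for
# illustration; in practice this is your validated dataset).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(scale=0.1, size=500)

# Score the simple, interpretable baseline first...
baseline = cross_val_score(LinearRegression(), X, y, cv=5).mean()
# ...then the heavier ML model.
forest = cross_val_score(RandomForestRegressor(n_estimators=100), X, y, cv=5).mean()

print(f"Linear baseline R^2: {baseline:.3f}")
print(f"Random forest R^2:   {forest:.3f}")
```

If the baseline already answers the question, the extra compute, complexity, and opacity of ML may not buy you anything.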

AI and ML need fast processors. Deep learning (DL), a type of ML, is computationally intensive and requires fast hardware. Over the years I have built several DL workstations, and despite careful research I made plenty of mistakes; for example, I burned out three NVIDIA GTX 290 boards analyzing spatial data for my master's thesis. Here is a great article discussing the hardware landscape and requirements for AI and ML: https://www.designnews.com/electronics-test/battle-ai-processors-begins-2018/212131505757984
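
For a rough sense of why DL demands fast hardware, the back-of-envelope sketch below counts the floating-point operations in a stack of dense layers. Every number here (layer sizes, depth, step count, hardware throughput) is an assumed round figure, not a benchmark:

```python
# One dense layer is a matrix multiply of (batch x n_in) by
# (n_in x n_out), costing roughly 2 * batch * n_in * n_out FLOPs.
batch, n_in, n_out = 256, 4096, 4096
flops_per_layer = 2 * batch * n_in * n_out

layers = 50        # assumed network depth
steps = 100_000    # assumed number of training steps
# The backward pass costs roughly twice the forward pass, so
# multiply the forward cost by three for a full training step.
total_flops = flops_per_layer * layers * 3 * steps

cpu_flops_per_s = 1e11   # ~100 GFLOP/s, a rough desktop CPU figure
gpu_flops_per_s = 1e13   # ~10 TFLOP/s, a rough modern GPU figure

print(f"Total training compute: {total_flops:.2e} FLOPs")
print(f"CPU time: {total_flops / cpu_flops_per_s / 3600:.0f} hours")
print(f"GPU time: {total_flops / gpu_flops_per_s / 3600:.1f} hours")
```

Under these assumptions the job takes roughly two weeks on a CPU and an afternoon on a GPU, which is the whole argument for specialized hardware.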

AI and ML processors need electricity. Processors for machine learning frequently consume over 250 watts per chip, and even "energy-efficient" Tensor Processing Units (TPUs) draw around 75 watts. For applications like sensor networks, power draw needs to be below 10 milliwatts. Likewise, any AI or ML chip that works at the edge cannot consume too much electricity or produce too much heat, although the next year or two may bring significant progress in low-power AI and ML chips. Below is a figure created by NVIDIA to demonstrate how great their hardware is (a claim that is much debated). The main takeaway is that these devices need a lot of electricity to run AI and ML: more than is available from technologies like Power over Ethernet (PoE; 30 watts).

[Figure: NVIDIA comparison of AI and ML hardware power and performance]
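
Using the wattages quoted above, the tiny sketch below makes the power mismatch explicit; the only thing added here is the comparison itself:

```python
# Power budgets cited in this article, in watts.
devices_w = {
    "ML training processor (per chip)": 250.0,
    "Tensor Processing Unit": 75.0,
    "sensor-network node budget": 0.010,   # 10 milliwatts
}
poe_budget_w = 30.0  # Power over Ethernet budget cited above

for name, watts in devices_w.items():
    verdict = "fits within" if watts <= poe_budget_w else "exceeds"
    print(f"{name}: {watts} W -- {verdict} the {poe_budget_w} W PoE budget")
```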

AI and ML need fast networks. Integrating AI and ML into business and consumer workloads will increase demand on networks, and because many AI and ML applications will be offered as Software as a Service (SaaS) or Infrastructure as a Service (IaaS), most of that load will be cloud based. Networks running AI and ML will also carry a considerably different type of workload: the new traffic will be highly unpredictable and come with significantly higher bandwidth requirements. The over-hyped 5G networks will not be fast enough for cloud-based AI and ML applications to process data in real time at factories or other businesses.

How much data are captured by these systems?

At our company, Metrix, we are developing a system to track and classify human behavior in industrial manufacturing environments to identify security and safety threats. In developing the technology and working through these problems, we have learned a great deal about AI, sensor arrays, and high-speed networks. Sensor arrays can capture large amounts of data very quickly; to put this in perspective, the sensor arrays on self-driving cars collect and process about 10 GB of data per second.
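
A quick calculation shows why shipping that data stream off the vehicle in real time is unrealistic. The 10 GB per second figure is from the paragraph above; the network throughput numbers are rough assumptions added for comparison:

```python
# 10 gigabytes per second of sensor data is 80 gigabits per second.
sensor_rate_gbps = 10 * 8

# Rough, assumed throughput figures for comparison (Gbit/s).
networks_gbps = {
    "typical 4G LTE": 0.05,
    "real-world 5G": 1.0,
    "theoretical 5G peak": 20.0,
}

for name, rate in networks_gbps.items():
    shortfall = sensor_rate_gbps / rate
    print(f"{name}: {rate} Gbit/s -> the sensors produce "
          f"{shortfall:.0f}x more data than the link can carry")
```

Even at its theoretical peak, 5G cannot keep up with a single vehicle's raw sensor output, which is why the processing happens onboard.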

Where are the data processed?

Data are typically processed at the edge or in the cloud. In the previous example, collecting and processing data on the self-driving car itself works well for obvious reasons: the data travel a short distance, are processed rapidly, the results are communicated back to the vehicle's control system, and the car hopefully stays on the road. The electricity, data, and processing all live in a relatively small, confined package. Most companies that use AI and ML, however, process their data in the cloud, where performance bottlenecks are increasingly becoming an issue. That is fine for mundane tasks like scanning job applications and scoring candidates' goodness of fit, or determining which button color online shoppers click most to maximize return on investment. It is not acceptable in use cases where latency cannot be tolerated because AI or ML is making critical decisions rapidly (e.g., self-driving cars, security systems, or surgeries).
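
To see why latency decides edge versus cloud, here is a rough latency budget comparing on-board inference with a cloud round trip. All of the timings and the vehicle speed are illustrative assumptions, not measurements:

```python
# Assumed latency components, in milliseconds.
edge_inference_ms = 20.0    # model running on the vehicle itself
cloud_inference_ms = 10.0   # faster accelerator in a data center
network_rtt_ms = 60.0       # round trip between vehicle and cloud

edge_total_ms = edge_inference_ms
cloud_total_ms = cloud_inference_ms + network_rtt_ms

# Distance a car travels at highway speed (~30 m/s) while waiting.
speed_m_per_s = 30.0
for name, ms in [("edge", edge_total_ms), ("cloud", cloud_total_ms)]:
    meters = speed_m_per_s * ms / 1000.0
    print(f"{name}: {ms:.0f} ms to decide -> {meters:.1f} m traveled blind")
```

The cloud path may use faster hardware, but the network round trip dominates the budget, and those extra meters are exactly what a safety-critical system cannot afford.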

Summary

For the next technological revolution to occur, industrial manufacturers, healthcare providers, and communities will have to make significant investments in their telecommunications infrastructure to take advantage of ML applications. Consequently, over the next five years we will see more AI applications pushed to the edge of the network, especially as technologies that partially offset the cost of infrastructure upgrades, like Power over Ethernet (PoE), which combines power and broadband in a single cable, become more prevalent. The downsides of PoE as a short-term solution are that a PoE run is limited to about 100 meters, and the electricity carried over the line is insufficient to power many AI and ML processors. Without sufficient electricity, or the ability to move large amounts of data rapidly, ML and AI applications will be slow to emerge.

Despite the current limitations and the hype surrounding AI, the technology will continue to grow and improve. For AI to expand deeper into society, faster networks need to be deployed, and many parts of the world are investing heavily in fiber-optic networks. This is exciting because of an emerging technology called Power over Fiber (PoF), which can transmit energy over fiber-optic cables across long distances and thereby overcomes the technical challenges of upgrading infrastructure to support AI and ML. PoF currently has two downsides: cost (about five times that of PoE) and the scarcity of cables and equipment. Although AI is not living up to the hype today, it holds great promise for improving society in the future.


Andrew Huff is a technologist. He brings people together from different backgrounds to find innovative solutions to complex problems.


