#7: AI & IoT

AI is the ability of machines (computers, robots, drones, …) to perform tasks that normally require human intelligence. In 2018, the US Department of Defense established the Joint Artificial Intelligence Center (JAIC) to explore the use of AI in combat. In Article-#5, we saw how the Internet originated as a solution to risks in war. Today, AI, which originated in computer science, has likewise found utility in war alongside its other applications.

Spike in scientific publications & patent families (source: WIPO Technology Trends 2019)

In 2019, the World Intellectual Property Organization (WIPO), an agency of the UN, reported that AI patent filings grew at an average annual rate of 28% between 2012 and 2017. It was one of the first reports to provide an international assessment of AI for policymakers, researchers and businesses seeking to understand how AI was shaping the globe.

Publications and patents had increased partly because of the use of AI in computational neuroscience, as a tool to understand how the brain works and how it processes information in parallel, unlike the sequential processing inside computers. The knowledge gained in this process was being used to solve practical problems in the world.

Brain as a network of neurons (source: Wikipedia)

Neuroscience is the study of the nervous system, and the brain is an important part of that system. The brain is made up of roughly 100 billion brain cells (neurons), each capable of forming around 10,000 connections with other neurons. This huge network of connections makes it a complex thing to study; additionally, it is fragile and dies when experimented on directly. Hence, to accelerate understanding of the brain, a strategic path has been chosen: study its behavior by comparing it with the outcomes of brain simulations done using AI.

Number of AI patents granted from 2010 to 2022 (source: AI-index Report 2024)

This year in June, a growth trend in AI patents was published in the AI Index Report by the Stanford Institute for Human-Centered Artificial Intelligence (HAI). In October this year, the Nobel Prize in Physics was awarded to John Hopfield (a physicist) and Geoffrey Hinton (a computer scientist) for foundational discoveries and inventions that enable machine learning with artificial neural networks. The prize endorses AI as an important achievement for mankind, not just another milestone for computer science.

Brain & Computers? (process information differently)

The prize was awarded for the discovery of how learning happens inside the brain and the invention of how it could be taught to a machine. Any computing machine can now be programmed to perform some (though not all) tasks that earlier required human intervention.

ImageNet

The principle of artificial learning was proposed in 1943; however, it was only around 2007, when GPUs became available for general-purpose computing, that writing a machine learning program like AlexNet became practical. AlexNet won a large-scale visual recognition contest in 2012 by recognizing objects in a scene, and caught the fancy of the industry.

Brain: appearance and functions

From a distance the brain looks the same all over; however, its general-purpose brain cells (neurons) can transform themselves into special-purpose functions in response to activations coming from sensory neurons. This ability of neurons to learn generates different adaptive (learning) regions inside the brain that perform different functions (like audio, vision, speech, ...), as highlighted in the image on the right, above.

Structure of Neuron (source: Wikipedia)

A neuron maps to the structure shown above. On the left is the cell body, with branched structures called dendrites that receive input signals from other neurons connected to it. These signals influence the electrical potential inside the cell body; once a certain threshold is crossed, the cell body fires an electrical signal as output. This signal propagates along a trunk-like structure called the axon to branched axon terminals, which in turn connect with the dendrites of other neurons. The output signal is either excitatory (a spike) or inhibitory (a drop).

Artificial Neuron (source: Wikipedia)

The image above shows a primitive, abstract artificial neuron: a mathematical function inspired by a neuron in the brain. The inputs (x1, x2, x3, … xn) on the left map to the dendrites. The weights (w1j, w2j, w3j, … wnj) map to the capacity (strength) of the input synapses. The activation (oj), the output of the function, maps to the electrical signal that travels over the axon. The weights are adjusted continuously as the artificial neuron learns (is trained) from more (sensory) experiences.
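This mapping can be sketched in a few lines of code. The snippet below is a minimal illustration, not any particular library's implementation; the sigmoid is just one common choice of activation function for squashing the weighted sum into an output:

```python
import math

def artificial_neuron(inputs, weights, bias=0.0):
    """One artificial neuron: a weighted sum of inputs (the synapses),
    squashed by a sigmoid activation (the signal sent down the axon)."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # activation o_j, always in (0, 1)
```

With zero inputs (or zero weights) the weighted sum is 0 and the sigmoid returns exactly 0.5, the midpoint of its range; training would consist of nudging the weights so the output moves toward the desired value.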

Artificial Neural Network (ANN)

A framework of artificial neurons (nodes) creates an Artificial Neural Network (ANN), as shown in the figure above. Research has identified patterns inside the brain that inspire models of ANNs; the pattern in the image above is called a feed-forward neural network. These ANNs do not correspond directly to the neural network patterns found in the brain, but their algorithmic outcomes mimic responses of the brain.
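A feed-forward network simply chains such neurons into layers, each layer's outputs becoming the next layer's inputs, with no feedback loops. The sketch below is a toy illustration (sigmoid activation, hand-written weights, no training), not a production model:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def layer(inputs, weights, biases):
    """One fully connected layer: each node computes a weighted sum
    of all inputs plus its bias, then applies the activation."""
    return [sigmoid(sum(x * w for x, w in zip(inputs, row)) + b)
            for row, b in zip(weights, biases)]

def feed_forward(x, network):
    """Pass the input vector through each layer in turn."""
    for weights, biases in network:
        x = layer(x, weights, biases)
    return x

# A tiny 2-input, 2-hidden-node, 1-output network with fixed weights.
net = [
    ([[0.5, -0.5], [0.3, 0.3]], [0.0, 0.1]),  # hidden layer
    ([[1.0, 1.0]], [0.0]),                    # output layer
]
output = feed_forward([1.0, 2.0], net)
```

In a real system the weights would be learned from data (e.g. by backpropagation) rather than fixed by hand.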

These AI techniques are good at things the brain is good at, like recognizing objects seen by the eyes (vision), where it is acceptable to be occasionally wrong (an object may be identified incorrectly when the view is unclear). But they are bad at things brains are bad at, like multiplying two numbers such as 256 × 138, where the real world has zero tolerance for error. In certain situations, however, the serial programming model of a computer, with its precision in mathematical operations, can complement machine learning algorithms to provide an intelligent solution.
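One way to picture this complement: let a learned component handle fuzzy perception and a programmed component handle exact arithmetic. In the toy sketch below, `recognize_digits` is a hypothetical, stubbed stand-in for an ML vision model; everything after it is ordinary, exact computation:

```python
def recognize_digits(image):
    """Hypothetical learned component: reads numbers out of an image.
    Stubbed for illustration; a real system would call a trained
    vision model, whose answer may occasionally be wrong."""
    return "256", "138"

def solve(image):
    a, b = recognize_digits(image)   # fuzzy perception: tolerant of ambiguity
    return int(a) * int(b)           # exact arithmetic: zero tolerance for error

result = solve(None)  # → 35328, computed exactly once the digits are read
```

The perception step may err, but the arithmetic never does; this division of labor is the pattern behind many "AI plus tools" systems.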

AI application fields (source: WIPO Technology Trends 2019)

In the diagram above, referenced from the WIPO report, the Internet of Things (IoT) appears attached to Networks as a field of application for AI. A network here means an interconnected system of people or things. Most of the value in the IT industry so far has been generated by networks: in the online world, places like Twitter, LinkedIn and WhatsApp demonstrate networks of people.

Networks (source: Nfx)

In the real world, radios, landline phones and televisions are some of the early examples of networks of things; PCs, the Internet and smartphones are among the most recently formed. Operating system platforms like Windows, Mac, Linux, iOS and Android form networks of users. Religions and currencies form belief networks.

Emerging networks of consumer IoT devices

Anticipating that technology is headed towards such a world of networked things, Meta has launched devices like Quest, a VR headset, and Ray-Ban smart glasses. Apple is leading the consumer IoT ecosystem with multiple networks built around an array of things: Mac, iPad, iPhone, Watch, AirPods, TV, HomePod and Vision. Amazon has built devices like Kindle and Echo, and its subsidiary Ring has launched things like doorbells and cameras. Google, under its Pixel brand, has launched its own array of devices: phones, tablets, laptops, smartwatches, accessories and more.

A few startups are building networks of things with the vision that AI will be the central technology around which everyday human life revolves. A company called Humane is building a network of AI Pins, a screenless device whose users interact with it through speech commands, directing it to do their digital chores using its AI services. Another company, Rabbit Inc, has launched a device called R1 with its own set of AI services.

Without creative control over the device held in the user's hand, software solution providers are vulnerable to competition from device makers and also risk losing the opportunity to shape the market according to their vision. To eliminate such a risk for its AI chat service, OpenAI, the company behind ChatGPT, intends to build a new device.

Today there are AI-driven experiences in quite a few applications, and considering the range of application fields outlined in the diagram from the WIPO report, the impact of AI seems to be everywhere. However, before utilizing AI techniques in any solution, it is important to analyze and minimize the errors they bring along, like hallucinations and biases, or to focus on building solutions for niche jobs where AI can do better than human intelligence.

The adoption of AI as a technology has just begun; the adoption of IoT, however, has already moved ahead and prepared a sizeable market ready for useful things.

As mentioned in Article-#1, the IoT ecosystem, with different kinds of devices each belonging to a different network, presents a fringe opportunity for takers. AI as a technology will intersect with the IoT ecosystem to enable AI-driven experiences; in the long run, however, networks of devices that are useful, well designed and supported by continuously improving software services will endure to become an essential part of human surroundings.

Stay tuned for future articles published here!

Reference:

  1. Webpage of Geoffrey Hinton
  2. WIPO Technology Trends 2019 (PDF)
  3. The AI Index Report 2024 (PDF)
  4. History of Artificial Intelligence (Wikipedia)
  5. The Network Effects
