#238: Sensing Systems, Not Just AI
When designing for the age of AI, it's useful to think about the entire sensing and nervous system of your business, not just the central decision-making intelligence.
Put very simply, intelligence is the ability to process information, whether that information is quantitative or linguistic, explicit or implicit, human or artificial. In that sense even a calculator has some intelligence. Of course, the implied meaning of intelligence involves a higher order of information processing - typically one that involves making more complex decisions. Information (or data) is therefore the fundamental ingredient for any intelligence, which is why such a significant part of the AI discussion is dominated by the availability of data.
Artificial Intelligence is fast approaching human levels of decision-making capability; in many areas it's already ahead, and AGI is being discussed in earnest in many parts of the world. But when an AI system works in the real world, it requires not just the core decision system but also the right data and information flowing into it. This is where a critical, constantly improving part of the AI ecosystem needs a closer look, and I don't think we focus on it nearly enough: Artificial Sensing.
The term is a bit self-serving. On the one hand, we all know about sensors, and various forms of sensors have been around for ages. On the other hand, human sensing is well understood. What I mean is that we are moving from individual sensors and disconnected bits of data to large, complex, coordinated sensing systems: multi-sensory systems that connect to a single artificial brain / AI so that better decisions can be made, including decisions about where to focus or magnify the sensing itself.
Consider the example of Media Production.
I've recently joined the advisory board of CoStar, the UK's national laboratory for creative technologies. At the Royal Holloway campus, a state-of-the-art studio has a giant curved LED screen that can replicate a very lifelike scene - let's say an Antarctic landscape. So rather than risk shooting in Antarctica, or even having to fly a crew to some remote island that might resemble the South Pole (with all the attendant financial and environmental cost), you can shoot the whole thing in the studio. But what if you're shooting an action film and you also need special effects - say a sleigh chase sequence against a setting sun? Now you've got lights, sounds, and effects all needing to work together. Fear not: the cameras, lights, and sound systems are all connected to an intelligent system (using technology from Disguise) that keeps everything synchronised. No human input is required to make sure the lighting is exactly right; the angles of light and shadow, and even the directions of the sounds, are all managed through a common system.
I was told that for the movie Gravity, the camera, an LED screen depicting the sky, and the actor Sandra Bullock were each mounted on individual robotic arms that could rotate through 360 degrees on multiple axes. An intelligent system managed how these three elements rotated relative to one another as the scene shows her being flung into space and spinning round and round, preserving, for example, the fidelity of the reflections of celestial objects on her visor. See the clip below from 2:30 - 4 mins.
Here’s another example: Healthcare.
You may remember that last year the Apple Watch was approved by the FDA for heart health monitoring; it can track heart rhythm and detect atrial fibrillation. The Dexcom glucose monitor continuously tracks your blood glucose levels to help manage diabetes. The Withings ScanWatch tracks respiratory patterns and helps with sleep apnea. The Empatica Embrace Plus tracks electrodermal activity, heart rate, and motion, and is approved for seizure detection - useful for epilepsy. The Masimo watch helps manage sleep disorders and chronic respiratory diseases. The Cala Trio tracks hand tremors. The Ava fertility tracker monitors skin temperature, heart rate variability, and other parameters to help track menstrual cycles. All of these examples are wearables, so they individually generate small amounts of data - from a few hundred kilobytes up to about 5MB per day - and they are fine-tuned for accuracy. They are also piecemeal. For a more consolidated view, you can go to Neko and do a full body scan, which generates about 15GB of health data from a single scan. It has over 70 sensors that produce 6,000+ high-resolution images to create a detailed 3D model of your skin, a range of cardiovascular assessments, blood tests, and overall some 50 million data points about you in a single scan. Not satisfied? You can upgrade to Prenuvo, which costs about 10x as much and captures a billion data points from a single scan. Or Ezra, which uses 3T MRI technology for better detailing of the brain and spinal cord, cardiology, and cancer detection - and is particularly good at early-stage detection.
Let’s look at a third example: Autonomous Vehicles.
AVs use monocular cameras for basic vision and stereo cameras to triangulate depth. They use infra-red cameras for low-light and night-time situations, and a spinning Lidar system to build 3D images of their environment. They use short- and long-range Radar to detect objects around them, especially in low-visibility conditions such as fog, and to track the movements of vehicles further away, particularly during adaptive cruise control. They use ultrasonic sensors (high frequency, short range) for parking, GPS for location, and Inertial Measurement Units (IMUs) for acceleration, angular velocity, and orientation - critical for stabilisation and responses. V2X (vehicle-to-everything - typically other vehicles, infrastructure, and a command centre) communication systems, both cellular and short range, provide traffic updates and hazard warnings, and high-definition maps cover everything from lane markings to traffic signs and landmarks. Collectively this set of sensors can generate upwards of 20GB of data every minute, the bulk of it from the cameras and Lidar. By comparison, human senses generate about 1GB of raw data every minute - and, like the AVs', this is dominated by visual data.
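To put that data volume in perspective, here's a rough back-of-envelope tally in Python. The per-sensor rates are my own illustrative assumptions rather than figures from any particular vehicle; the point is simply that cameras and Lidar dominate, and the total lands around the 20GB-per-minute mark.

```python
# Back-of-envelope tally of per-minute data volumes for a hypothetical AV
# sensor suite. The individual rates below are illustrative assumptions,
# not measurements from any specific vehicle.

SENSOR_RATES_MB_PER_MIN = {
    "cameras (multiple, video)":      12_000,  # assumed ~12 GB/min combined
    "lidar (spinning, point clouds)":  8_000,  # assumed ~8 GB/min
    "radar (short + long range)":        600,
    "ultrasonic (parking)":               20,
    "gps + imu":                          10,
    "v2x messages":                       50,
}

total_gb = sum(SENSOR_RATES_MB_PER_MIN.values()) / 1_000
human_senses_gb = 1.0  # the article's figure for raw human sensory data per minute

print(f"AV sensor suite: ~{total_gb:.0f} GB/min")     # ~21 GB/min
print(f"Human senses:    ~{human_senses_gb:.0f} GB/min")
print(f"Ratio:           ~{total_gb / human_senses_gb:.0f}x")
```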
The point of these three examples is that in order to make better decisions, you need better data and information. Any 'intelligence' is dependent on the quality of the information stream feeding it - otherwise you have garbage in, garbage out (GIGO).
Human vs Automated Sensing Systems
This is where we see another key difference between human intelligence and AI: the way our brain has been designed to work with our sensory system. We said earlier that our five human senses - sight, touch, hearing, smell, and taste - generate about 1GB of data per minute. But according to neurologists, the human brain consciously processes only about 375 bytes per minute. Which means we consciously process roughly 0.00003% of all the data our senses generate. This involves a range of compression strategies (reduced detail in peripheral vision), prioritisation, and predictive coding (familiar patterns are not reprocessed). And while this is by itself a marvel of bio-engineering, and has worked well to keep us alive and progressing through life, it also introduces errors, biases, and many other performance constraints that lead to suboptimal outcomes. In the world of film and media, the impact is relatively benign: poorly produced effects in older films, be it the faintly visible wires on Superman, the mismatched lighting between foreground and background in Casablanca, inconsistent shadows, trees that don't move naturally in the wind, or the limited screen time for the shark in Jaws - it's a long list. But in healthcare and road driving, it translates into missed diagnoses of thousands if not millions of early-stage health problems, and thousands of automobile accidents, many of them fatal.
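As a quick sanity check on those figures, the arithmetic works out roughly like this, taking the 1GB and 375 bytes per minute at face value:

```python
# Sanity check on the compression ratio quoted above, using the article's
# own figures: ~1 GB of raw sensory data per minute versus ~375 bytes of
# consciously processed information per minute.

raw_sensory_bytes_per_min = 1 * 2**30  # 1 GiB, per the ~1 GB/min figure
conscious_bytes_per_min = 375          # roughly 50 bits per second

fraction = conscious_bytes_per_min / raw_sensory_bytes_per_min
print(f"Consciously processed fraction: {fraction:.2e}")        # ~3.49e-07
print(f"As a percentage:                {fraction * 100:.5f}%")  # ~0.00003%
```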
In these healthcare and driving failures, it isn't that the signs weren't there or visible; rather, they were often eliminated somewhere between the eye and the brain, or the response simply wasn't fast enough. We're not engineered for that kind of precision. When you look at a sea of people at a busy street corner, you might miss a family member or friend who is just 10 feet away. A person dressed in black crossing the road in front of your car at night might not register early enough for you to slow down gently. This is all changing - not just with AI, but with AS, Artificial Sensing: the phenomenon by which we are starting to collect and use an order of magnitude more data for better decisions.
Automated Business Sensing Systems
How might this translate to business decisions? In discussions with colleagues who work in utilities, for example, I remember coming across the statistic that a typical water company throws away more than 90% of all the data it captures through sensors across its network, simply because it doesn't have the means to store or use that data. There is also a category of products - data historians - built to work with exactly this kind of time-series data. But how would your business work if you could have super-power sensing? What if you could be intensely aware of any anomaly in your clients' behaviour as it was happening? What if, as a health insurer, you could let your clients know about elevated risks or conditions very early? Or if the aggregated data from autonomous vehicles painted a much clearer real-time picture of any road incident for auto insurers? Digital twins are really a version of this ability: creating real-time versions of your data and running what-if scenarios against them. Some of this is already happening, but we are likely to see this kind of situational awareness elevated much further in the near future.
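To make "being intensely aware of any anomaly as it happens" a little more concrete, here is a minimal sketch of a rolling anomaly check over a single time-series sensor feed. The sensor, readings, and threshold are entirely hypothetical; real data historians and anomaly engines are of course far more sophisticated.

```python
# A minimal sketch of continuous anomaly detection over a time-series
# sensor feed: flag readings that deviate sharply from a rolling window,
# instead of discarding the raw data. All names and values are hypothetical.

from collections import deque
import statistics

def detect_anomalies(readings, window=60, threshold=3.0):
    """Yield (index, value) for readings far outside the recent rolling window,
    e.g. a sudden flow spike in a water network that might indicate a burst."""
    history = deque(maxlen=window)
    for i, value in enumerate(readings):
        if len(history) >= window // 2:
            mean = statistics.fmean(history)
            stdev = statistics.pstdev(history) or 1e-9
            if abs(value - mean) / stdev > threshold:
                yield i, value
        history.append(value)

# Example: a steady flow reading with one sudden spike
flow = [5.0 + 0.1 * (i % 3) for i in range(120)]
flow[90] = 9.5
for idx, val in detect_anomalies(flow):
    print(f"Anomaly at minute {idx}: reading {val}")
```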
As humans we think in terms of our five senses. This is a limitation, because we are bounded by human physiology: the number of sensing organs, and their constraints in space and time relative to our brains, gives us a very finite and localised sensing capability. Artificial Sensing can significantly surpass this. Instead of two eyes, we can have thousands of cameras, such as the ones mounted on AVs. Instead of being fixed, they can move independently of us. Instead of being limited to the visual spectrum, we can sense infra-red and ultra-violet, and indeed many other wavelengths. We can develop bat-like Sonar or Lidar capabilities. Humans don't have radar; artificial sensing does.
AI has given us a way to connect these multiple sensing modes into consolidated decision systems. We've had Lidar, Sonar, Radar, and cameras for a while now, but combining them into one sensing system for a moving car that can process, decide, and act in near real time is the 'miracle' of the AI universe. That involves edge capabilities and a whole lot of other architectural work, but it's happening. So could we see an entire hospital connected to a single 'brain' - continuously processing information from devices, rooms, patients, heat signatures, and sounds to provide inputs on possible infections, contamination risks, better use of infrastructure and resources, faster handling of patients' needs, and overall healthier outcomes? Where a fleet of small unmanned vehicles constantly delivers consumables wherever they are running low, or patients are triaged by a quick automated scan as they enter the hospital? Why not? Why not similarly intelligent retail stores, or office buildings? Or supply chains and warehouses?
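As a toy illustration of that edge-plus-central-brain idea, the sketch below has each device summarise its own stream locally and forward only salient events to a shared centre, so the centre can reason over thousands of feeds without drowning in raw data. The device names, readings, and thresholds are made up purely for illustration.

```python
# Toy sketch: edge devices filter their own streams and forward only
# salient events to a central "brain". All names and values are hypothetical.

from dataclasses import dataclass

@dataclass
class Event:
    device: str
    kind: str
    value: float

def edge_filter(device: str, readings: list[float], limit: float) -> list[Event]:
    """Runs at the edge: forward only readings that cross a local threshold."""
    return [Event(device, "threshold_exceeded", r) for r in readings if r > limit]

def central_brain(events: list[Event]) -> None:
    """Runs centrally: reasons over the trickle of salient events, not raw streams."""
    for e in sorted(events, key=lambda e: e.value, reverse=True):
        print(f"[ALERT] {e.device}: {e.kind} ({e.value})")

# e.g. ward thermometers reporting once a minute; only fevers reach the centre
events = edge_filter("ward-3-bed-12-temp", [36.6, 36.7, 38.9, 37.0], limit=38.0)
events += edge_filter("ward-3-bed-14-temp", [36.5, 36.4, 36.6, 36.5], limit=38.0)
central_brain(events)
```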
Much of the discussion and excitement around AI focuses on the decision making and the central 'brain', and rightly so. But it might be more instructive to think of intelligent systems. In other words, when you think about AI for your business, you're not designing just a decision system or a brain, but an entire sensing and nervous system.