Future Proof Computing Lives at the Edge


WHAT DOES EDGE COMPUTING REALLY MEAN?

What do we really mean by “edge” computing? Does anyone have a consistent, agreed-upon definition for it? And how does this trendy new term relate to distributed computing?

When speaking about computing and computer networks, “the Edge” still means different things to different people in different contexts. Everyone agrees that it begins with the collection of data from the physical world. But even real-time data captured and transmitted electronically doesn’t automatically constitute the Edge. For example, weather monitoring devices have been collecting data about air temperature, wind speed and barometric pressure for more than a century, transmitting it first via hard-wired connections and now via wireless ones. And yet no one thinks of the weather as “the Edge.”

As remote devices became smaller, more powerful and network-connected, they could sample more and more real-time data from the physical world. For example, atmospheric weather data could now be supplemented by road-condition data captured by sensors in the tires of commercial trucks.

This evolutionary step sometimes led people to assume they were fully “at the Edge” before they really were. When smartphones first enabled users to speak words and have them transcribed into text, it was natural to think that the phone was doing all that magic on its own.

However, if users tried speaking when no Internet connection was available, the transcription suddenly stopped working. Powerful as the “computer in your pocket” was, it needed a “real” computer (in a real data center) to accomplish that task. Today, your phone can do speech-to-text all by itself.

Before Edge computing, the acts of data collection, management and analytics functioned primarily as a passive monitoring tool and data historian. This was useful in its way, but it lacked the capability to truly impact the business and adapt experiences in real time.

This leads us to a simple but effective definition. The Edge exists wherever the digital and physical worlds intersect, and it involves securely generating, collecting and analyzing data to create new value where that data actually lives.


A HISTORY OF EDGE COMPUTING

[Figure: A History of Edge Computing. Source: Harbor Research]


EDGE ORIGINS

For the last ten years or so, the evolution of digital systems has largely consisted of moving data and workloads from physical hardware to virtual platforms. Developers have moved the data center to the cloud, and businesses have transitioned from managing their own computing and network assets to “everything as a service.”

Enabling these digital innovations has forced developers to move compute, storage and networking resources from traditionally centralized locations, such as data centers and clouds, to decentralized or distributed (edge) locations closer to where data is generated and consumed. This architectural shift is changing the economics of information systems, reducing costs and significantly increasing the flexibility of computing and networking resources.

The underlying technologies enabling more complex, adaptive and distributed systems are all, in some way, trying to break from today’s software and computing paradigms. Reaching this goal requires a new view of architecture that overcomes today’s performance limitations. What are the major obstacles?

  • First, we need more distributed and far more intelligent infrastructure, with powerful, pervasive networking and compute resources deeply embedded into our everyday lives. Today’s infrastructural limitations constrain the more dynamic software applications and services now emerging.
  • Second, emerging applications increasingly run on specialized hardware platforms: wearables and VR headsets, application-specific integrated circuits and accelerators for AI and inferencing workloads, and field-programmable gate arrays for specialized use cases such as software-defined networking devices. This requires new development tools that abstract away complexity so that application-specific hardware works seamlessly with software.
  • Third, software applications have become distributed, needing to work across multiple clouds, at the edge, and on diverse devices supporting multi-modal interfaces including voice, wearables, touch and AR/VR in addition to web and mobile. Emerging applications will increasingly need to support new and novel user experiences.
  • Fourth, we need better ways to manage data interactions and eliminate data boundaries: getting application data close to its point of use while propagating changes quickly and consistently, so applications stay fast, reliable and trusted. Connecting users with the right code and the right data at the right time requires intelligent orchestration of traffic and workloads across dynamic, distributed users and applications; a simple sketch of such placement logic follows this list.
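
As a minimal, hypothetical illustration of that fourth point, the sketch below picks an edge node for a request by balancing network latency against replica freshness. The node names, fields and thresholds are invented for this example, not taken from any particular orchestration product.

```python
from dataclasses import dataclass

@dataclass
class EdgeNode:
    name: str
    rtt_ms: float          # measured round-trip time from the user
    replica_age_s: float   # staleness of this node's copy of the data

def place_workload(nodes, max_staleness_s=5.0):
    """Pick the lowest-latency node whose data replica is fresh enough.

    Falls back to the freshest replica if no node meets the bound,
    so the request is never dropped outright.
    """
    fresh = [n for n in nodes if n.replica_age_s <= max_staleness_s]
    if fresh:
        return min(fresh, key=lambda n: n.rtt_ms)
    return min(nodes, key=lambda n: n.replica_age_s)

nodes = [
    EdgeNode("on-prem-gateway", rtt_ms=2.0, replica_age_s=9.0),
    EdgeNode("metro-edge-pop", rtt_ms=12.0, replica_age_s=1.5),
    EdgeNode("regional-cloud", rtt_ms=48.0, replica_age_s=0.1),
]
print(place_workload(nodes).name)  # -> metro-edge-pop
```

A real orchestrator would weigh many more signals (cost, load, compliance), but the core trade-off between close data and fresh data is the same.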

So, what do we mean by Edge Computing? Edge technologies enable real-time temporal, spatial and state-based processing where systems can act on any type of information from any device, storage or streaming source, and can be flexibly and reliably deployed. As organizations become increasingly digital, the way they incorporate the edge becomes crucial to their future.
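
To make “temporal, spatial and state-based processing” concrete, here is a minimal sketch of the kind of stateful stream logic an edge node might run. It keeps a short sliding window of readings per sensor and flags values that deviate sharply from the recent average, so only alerts, not the raw stream, need to leave the site. The sensor names, window size and threshold are all assumptions made for illustration.

```python
from collections import defaultdict, deque

WINDOW = 20          # readings of state kept per sensor (assumption)
THRESHOLD = 3.0      # flag readings this many std-devs from the mean

windows = defaultdict(lambda: deque(maxlen=WINDOW))

def on_reading(sensor_id, value):
    """Stateful, per-sensor anomaly check, run where the data is produced."""
    w = windows[sensor_id]
    if len(w) >= 5:  # need a little history before judging
        mean = sum(w) / len(w)
        var = sum((x - mean) ** 2 for x in w) / len(w)
        std = var ** 0.5 or 1e-9
        if abs(value - mean) / std > THRESHOLD:
            return f"ALERT {sensor_id}: {value} deviates from recent mean {mean:.2f}"
    w.append(value)
    return None

# Simulated stream: a tire-temperature sensor that suddenly spikes.
for v in [21.0, 21.2, 20.9, 21.1, 21.0, 35.0]:
    alert = on_reading("truck-17/tire-3/temp", v)
    if alert:
        print(alert)
```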


EDGE COMPUTING

[Figure: Edge Computing Technical Infrastructure. Source: Harbor Research]


EDGE CHALLENGES

In the course of the last two decades, the world has become so dependent upon the existing ways computing is organized that most people, inside IT and out, cannot bring themselves to think about it with any critical detachment.

IT professionals talk these days about the need for ever-evolving information services available anywhere, anytime, for any kind of information, yet IT rarely has the experience and background to architect and integrate real-world physical systems.

With each additional layer of engineering and administration based on historical IT design principles, computing systems come closer and closer to resembling a fantastically jury-rigged Rube Goldberg contraption with development and maintenance costs that inhibit the growth of new Smart Systems applications. The reason is simple. Today’s computing systems were not really designed for a world driven by pervasive and diverse information flow and interaction.

The Edge model is evolving rapidly, which forces system developers to shift how they think about system design. It follows that next-generation network solutions will not be a rigidly organized, centrally planned development story. The network now emerging will not be “designed” in any traditional sense. It, and the solutions that grow from it, may appear unplanned and disorganized when viewed from a classic central-computing vantage point. But like all human-made things, it will be “designed” in some sense.

So, what will it look like?

As distributed systems evolve, sensor and actuator devices will all become smart themselves, and the connectivity between them (devices that, for the most part, have never been connected) will become more and more complex. As the number of smart devices grows, the existing client-server hierarchy and all of these “edge” or “middle world” boxes acting as gateways, hubs and interfaces will quickly start to blur. In this future, the need for any kind of client-server architecture becomes superfluous; the days of hierarchical models are numbered.

Now, imagine a future Smart Systems world where sensors and devices that were once connected by twisted pair, current loops or hardwired connections become networked, with all devices integrated onto one IP-based network (wired or wireless). In this new world, the “middle world” boxes don’t need traditional input/output (I/O) hardware or interfaces. They begin to look just like network computers running applications designed to interact with peer devices and carry out functions with their “herd” or “clusters” of smart sensors and devices.

We can readily imagine an application environment with several “networked processors,” some embedded into equipment and some acting autonomously on a given machine. These networked processors will run applications, sharing their sensors and actuators, and some will even “share” a whole herd. In a smart building, for example, the processor in an occupancy sensor could turn the lights on, change the heating or cooling profile, or alert security.
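
As a toy sketch of that smart-building scenario, the snippet below wires three peer applications to one occupancy sensor through a minimal publish/subscribe loop. The topic name, event fields and handlers are all invented for illustration; a real deployment would use a networked messaging layer rather than an in-process dictionary.

```python
# Hypothetical peer-to-peer wiring for the smart-building example above.
subscribers = {}

def subscribe(topic, handler):
    subscribers.setdefault(topic, []).append(handler)

def publish(topic, event):
    # Fan one sensor event out to every peer application sharing it.
    for handler in subscribers.get(topic, []):
        handler(event)

# Peer applications sharing the occupancy sensor's data:
def lighting(event):
    print(f"lights {'on' if event['occupied'] else 'off'} in {event['zone']}")

def hvac(event):
    profile = "comfort" if event["occupied"] else "setback"
    print(f"HVAC profile -> {profile} for {event['zone']}")

def security(event):
    if event["occupied"] and event["after_hours"]:
        print(f"security alert: movement in {event['zone']} after hours")

for peer in (lighting, hvac, security):
    subscribe("occupancy", peer)

# The occupancy sensor's processor publishes once; every peer reacts.
publish("occupancy", {"zone": "floor-3-east", "occupied": True, "after_hours": True})
```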

In this evolving architecture, the network essentially flattens until the end-point devices are merely peers and a variety of applications reside on one or more networked application processors. For all intents and purposes, these will look like today’s router/modems, industrial PCs or small “headless” high-availability distributed servers, but they will be increasingly powerful and able to host applications through embedded functions. In truth, if we are to achieve truly distributed computing, we must enable the actual devices and equipment at a truly embedded level, such as on a microcontroller within a device or machine.

We believe it will resemble something closer to an organic system, with an architecture that matches the structure of the physical world.

Value-added applications will spring to life in environments where the right resources (data) find the right nutrients (software applications) that deliver financial value. In reality, these applications will occur at all levels of the architecture, from the device to the cloud. Exactly where they originate, and to what extent the data remains there or is post-processed elsewhere, will be dictated on the fly by the nature of the data and the application requirements from moment to moment. Accepting this reality is essential to designing effective software for data management, analytics and collaborative systems.
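
To make “dictated on the fly” concrete, here is a minimal, hypothetical placement policy that routes each piece of work to the device, edge or cloud tier based on its latency budget and payload size. The tier names and thresholds are assumptions for illustration, not a reference design.

```python
def choose_tier(latency_budget_ms, payload_mb, needs_fleet_context):
    """Decide, per workload, where processing should run.

    Thresholds are illustrative only; a real system would derive them
    from measured network conditions and cost models.
    """
    if needs_fleet_context:
        return "cloud"    # must aggregate across many sites
    if latency_budget_ms < 20:
        return "device"   # too tight for any network hop
    if payload_mb > 100 or latency_budget_ms < 200:
        return "edge"     # keep bulky or time-sensitive data local
    return "cloud"

print(choose_tier(latency_budget_ms=5, payload_mb=0.1, needs_fleet_context=False))    # device
print(choose_tier(latency_budget_ms=150, payload_mb=500, needs_fleet_context=False))  # edge
print(choose_tier(latency_budget_ms=2000, payload_mb=1, needs_fleet_context=True))    # cloud
```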

FUTURE PERFECT COMPUTING SOLUTIONS

Distributed systems and applications are dynamic by nature, needing to work across different cloud configurations, physical locations, hardware and devices. Smart Systems development needs a completely new approach, one in which new development frameworks simplify application development and shorten time to market by better organizing developer tools and run-time support for multi-cloud, edge and IoT applications, while contending with the sheer diversity of hardware and devices.


Click here to access our free Growth Opportunity Insight, “Edge Computing Evolution and Opportunities.”
