Predictions for Edge Computing in 2019

Edge computing is the future of the cloud: the ecosystem in which future devices will operate, engaging with users and with each other to optimize our lives 24x7.

Edge computing encompasses far more than the Internet of Things (IoT). It refers to the evolution of cloud-computing ecosystems toward more distributed environments in which most compute, storage, memory, bandwidth, and other hardware resources are physically held by, colocated with, or located near their ultimate users. Just as important, edge computing is the new automation fabric within which every device will incorporate artificial intelligence (AI) that enables it to achieve desired outcomes nonstop with varying degrees of autonomy.

As we look ahead to 2019, we can expect to see the following dominant trends in edge computing:

  • Cloud-to-edge interoperability frameworks will begin to take hold: Over the past several years, many industry initiatives have been building vendor-neutral, open-source, loosely coupled frameworks for distributing microservices all the way to the edge. In 2018, we saw initiatives such as EdgeX Foundry gain traction while leveraging the work already accomplished by IoT-related standards groups such as the Industrial Internet Consortium, the OpenFog Consortium, and the Automotive Edge Computing Consortium. We also saw the Eclipse Foundation team up with the Cloud Native Computing Foundation on a new Kubernetes IoT Edge Working Group. And a growing range of edge-AI performance benchmarking frameworks was in development. In 2019, we expect that demand for cloud-centric interoperability among AI-infused mobile, embedded, IoT, and robotics products will spur vendors to incorporate these emerging frameworks into useful cloud-to-edge solutions.
  • Edge-based appliances will give public clouds a foothold in on-premises deployments: For many enterprises, cloud-to-edge means deploying some public cloud compute, storage, and other resources in their private data centers to reduce latencies for performance-sensitive applications. In 2018, we saw AWS step into this market with its announcement of Outposts, a preconfigured on-premises hybrid-cloud appliance slated for availability in late 2019. AWS’ offering will compete with several rival cloud-to-edge on-premises appliances already in the market, including Microsoft Azure Stack, IBM Cloud Private, and Oracle Cloud at Customer. In 2019, we expect these vendors to ramp up the competition among their respective edge appliances. They will optimize these appliances for the most latency-sensitive cloud applications, especially those involving stream computing, real-time full-motion video analysis and processing, and fast local AI inferencing. And they will package them into architectures that enable users to start by deploying on-premises micro-instances that can be rapidly scaled on demand.
  • Edge management tools will be folded into enterprise cloud management suites: Over the past several years, enterprises have distributed more compute, storage, applications, and workloads onto disparate IoT and other edge platforms. In 2019, we expect to see enterprise IT managers step up efforts to rein in these investments through new edge-facing management consoles that monitor, manage, secure, and control it all in real time within their broader multicloud environments. Public cloud providers will give customers the flexibility to run edge management consoles in hybridized on-premises and SaaS environments. These management environments will use AI to automate most of the real-time monitoring, anomaly detection, root-cause analysis, predictive remediation, and other closed-loop functionality needed to manage complex edge environments. (A toy sketch of this kind of streaming anomaly detection appears after this list.)
  • Edge-based peer-to-peer environments will deliver on-demand cloud spot instances: As edge computing takes hold, we’re seeing more cloud-computing initiatives in which users can rent out endpoints’ compute, storage, memory, and bandwidth resources individually or in larger virtualized clusters. In 2019, we expect more of these cloud-to-edge virtualization fabrics to come to market, enabling on-demand elastic access to IaaS and PaaS spot instances to support dynamic workloads. We expect that the public cloud providers—such as AWS, Microsoft Azure, and Google Cloud Platform—will add edge-resource peer-to-peer spot-instance serving to their online IaaS/PaaS marketplaces as a complement to instances they offer from their standard inventories. To support these capabilities, cloud providers will invest heavily in the network virtualization backplane services that support agile mesh interoperability patterns across peer-based resource-serving environments. (A sketch of today’s cloud-side spot-request mechanics, which this trend would extend to the edge, appears after this list.)
  • Edge environments will become the cloud’s core transactional platforms: Edge commerce is rapidly becoming part of our lives. As every new IoT device comes online and expands its automation capabilities, our personal edge nodes, such as Alexa-powered smart speakers, will take center stage in our lives. In 2018, we saw the IoT take on more transactional applications, though much of the enabling trust infrastructure has not yet coalesced around ubiquitous standards. In 2019, we expect online marketplaces to emerge that drive the edge economy further into mainstream commerce. To enable this evolution, device manufacturers and service providers will leverage IoT, blockchain, smart contracts, artificial intelligence, streaming, and cloud computing to automate 24x7 commerce among edge devices operating on consumers’ behalf. Many of the use cases for these blockchain-based edge-commerce environments will involve chatbots and other conversational front-end device interfaces, but a growing range will incorporate edge devices that are partially or entirely autonomous, especially in the industrial IoT.
  • DevOps toolchains will automatically optimize AI models for fast edge inferencing: Developers of AI applications for edge deployment are doing their work in a growing range of frameworks and deploying their models to myriad hardware, software, and cloud environments. This complicates the task of making sure that each new AI model is optimized for fast inferencing on its target platform, a burden that has traditionally required manual tuning. Over the past several years, open-source AI-model compilers have come to market to ensure that the toolchain automatically optimizes AI models for fast, efficient edge execution without compromising model accuracy. These model-once, run-anywhere compilers now include the AWS NNVM Compiler, Intel nGraph, Google XLA, and NVIDIA TensorRT 3. In the past year, AWS announced SageMaker Neo, which it plans to open source, while Google integrated TensorRT with TensorFlow for inferencing optimization on GPU-based targets. In 2019, we expect to see other cloud providers roll out their own edge-AI model-compiler managed services, while vendors of data-science toolchain solutions build hooks into the most widely adopted open-source projects in this segment. (A sketch of kicking off a SageMaker Neo compilation job appears after this list.)
  • New edge-AI hardware-accelerator systems-on-chip will flood the market: Over the past several years, both startups and established chip vendors have introduced an impressive new generation of hardware architectures optimized for machine learning, deep learning, natural language processing, and other AI workloads. Chief among these new AI-optimized chipset architectures—in addition to new generations of GPUs—are tensor processing units, field-programmable gate arrays, and application-specific integrated circuits. Edge requirements are driving the introduction of AI accelerators optimized for greater autonomy in mobile, embedded, robotics, and IoT devices. In 2019, we’ll see a flood of new systems-on-chip come to market to support workloads that demand stringent real-time processing of sensor-driven video, audio, speech, motion, locomotion, grappling, and other complex AI tasks. These systems-on-chip will be configured out of the box with diverse algorithms to help edge nodes autonomously sense environments, respond effectively, and operate safely in immersive 3-D environments that drive human users’ productivity. Low-cost, low-power, embedded, and ruggedized operation will be a must for these systems-on-chip to prevail in a fast-moving, competitive market. (A sketch of quantizing a model down to such a chip’s budget appears after this list.)
  • Reinforcement learning will begin to rule AI at the edge: Reinforcement learning has heretofore played a central role chiefly in modeling and training the AI in gaming, robotics, and other edge applications. Over the past year, however, it has matured into a mainstream approach for building and training statistical models even in operational circumstances where there is little opportunity to simulate the edge domain before putting its AI into production. Reinforcement learning plays a growing role in many industries, often to drive autonomous robotics, computer vision, digital assistants, and natural language processing in edge applications. In 2019, we expect to see more open-source reinforcement learning workbenches and libraries come to market, following the lead taken by AWS with its recent release of SageMaker RL and RoboMaker. What’s likely to gain broad acceptance over the year is Dopamine, a TensorFlow-based framework and codebase for fast, iterative prototyping of RL algorithms in Python 2.7. And more reinforcement learning will be done online in operational edge-AI applications in production, rather than in the traditional offline mode involving simulators. (A minimal Q-learning sketch appears after this list.)
  • Smart edge objects will become a principal AI development and training workbench: Autonomous operation is the AI magic behind edge devices such as self-driving vehicles, smart drones, android-like robots, and intelligent consumer goods in the IoT. Developer-ready smart objects such as AWS’ DeepRacer, DeepLens, and the Echo family represent a paradigm shift in AI development for the edge. Going forward, more AI-infused edge applications, including robotics for consumer and business uses, will be developed on workbenches that sprawl across both physical platforms such as these devices and virtual workspaces in the cloud. As this trend intensifies, more data scientists will begin to litter their physical workspaces with a menagerie of AI-infused devices for demonstration, prototyping, and production development purposes. In 2019, we’ll see IoT edge devices become an important workbench for advanced AI applications that can operate autonomously. Over the year, AI practitioners will shift toward new AI workbenches that execute all or most DevOps pipeline functions—including distributed training—in the smart objects themselves.
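
A few of these predictions lend themselves to concrete illustration. First, for the edge-management trend: below is a minimal sketch of the kind of streaming anomaly detection an AI-driven management console might run against device telemetry. It is a toy rolling z-score detector; all names, metrics, and thresholds are illustrative, not any vendor’s actual API.

```python
from collections import deque
import math

class RollingAnomalyDetector:
    """Flag metric readings that deviate sharply from recent history.

    A toy stand-in for the AI-driven monitoring an edge management
    console might apply to thousands of device telemetry streams.
    """

    def __init__(self, window: int = 60, threshold: float = 3.0):
        self.values = deque(maxlen=window)  # recent readings
        self.threshold = threshold          # z-score cutoff

    def observe(self, reading: float) -> bool:
        """Return True if `reading` looks anomalous versus the window."""
        anomalous = False
        if len(self.values) >= 10:  # need some history first
            mean = sum(self.values) / len(self.values)
            var = sum((v - mean) ** 2 for v in self.values) / len(self.values)
            std = math.sqrt(var)
            if std > 0 and abs(reading - mean) / std > self.threshold:
                anomalous = True
        self.values.append(reading)
        return anomalous

# Hypothetical usage: watch CPU temperature on one edge node.
detector = RollingAnomalyDetector(window=120, threshold=3.0)
for temp in [41.2, 40.8, 41.5, 41.0, 40.9, 41.1, 41.3, 40.7, 41.0, 41.2, 55.0]:
    if detector.observe(temp):
        print(f"anomaly: {temp} C")  # trigger a remediation workflow here
```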
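
For the peer-to-peer spot-instance trend, the cloud-side mechanics already exist today. Here is a sketch of requesting an EC2 spot instance through the boto3 SDK; the AMI ID and bid price are placeholders. The prediction is that this request model extends outward to peer-contributed edge resources.

```python
import boto3

# Today's cloud-side analog of edge spot capacity: requesting a spot
# instance from AWS EC2. The AMI ID and bid price are placeholders.
ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.request_spot_instances(
    SpotPrice="0.05",   # max hourly bid in USD
    InstanceCount=1,
    Type="one-time",
    LaunchSpecification={
        "ImageId": "ami-0123456789abcdef0",  # placeholder AMI
        "InstanceType": "m5.large",
    },
)
print(response["SpotInstanceRequests"][0]["State"])
```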
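
For the model-compiler trend, here is a sketch of submitting an edge-compilation job to AWS SageMaker Neo via the boto3 SDK. The bucket paths, role ARN, and job name are placeholders, and the framework, input shape, and target device would vary with your model and hardware.

```python
import boto3

# Hypothetical job parameters -- the bucket, role, and model artifact
# are placeholders, not real resources.
sm = boto3.client("sagemaker", region_name="us-east-1")

sm.create_compilation_job(
    CompilationJobName="edge-resnet50-neo-demo",
    RoleArn="arn:aws:iam::123456789012:role/SageMakerNeoRole",
    InputConfig={
        "S3Uri": "s3://my-models/resnet50/model.tar.gz",   # trained model artifact
        "DataInputConfig": '{"input": [1, 224, 224, 3]}',  # input tensor shape
        "Framework": "TENSORFLOW",                         # source framework
    },
    OutputConfig={
        "S3OutputLocation": "s3://my-models/compiled/",
        "TargetDevice": "jetson_tx2",  # edge hardware target
    },
    StoppingCondition={"MaxRuntimeInSeconds": 900},
)
```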
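
For the edge-SoC trend, fitting models into tight memory and power budgets is half the battle. Here is a sketch of post-training quantization with the TensorFlow Lite converter; the SavedModel path and output filename are placeholders.

```python
import tensorflow as tf

# Post-training quantization with the TensorFlow Lite converter: shrink a
# trained model so it fits the memory and power budget of an edge SoC.
# "saved_model_dir" is a placeholder path to an exported SavedModel.
converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # enables weight quantization
tflite_model = converter.convert()

with open("model_quantized.tflite", "wb") as f:
    f.write(tflite_model)  # deploy this artifact to the edge device
```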
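
Finally, for the reinforcement learning trend: frameworks such as Dopamine industrialize the same trial-and-error loop as classic tabular Q-learning. The self-contained toy below, using a made-up four-state corridor environment rather than Dopamine’s own API, shows that loop end to end.

```python
import random

# Toy deterministic environment: states 0..3 in a row; action 0 moves
# left, action 1 moves right; reaching state 3 yields reward 1.
N_STATES, N_ACTIONS, GOAL = 4, 2, 3

def step(state, action):
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward, nxt == GOAL

Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.1, 0.9, 0.2  # learning rate, discount, exploration

for episode in range(500):
    state, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
        if random.random() < epsilon:
            action = random.randrange(N_ACTIONS)
        else:
            action = max(range(N_ACTIONS), key=lambda a: Q[state][a])
        nxt, reward, done = step(state, action)
        # Q-learning update: nudge toward reward plus discounted future value.
        target = reward + gamma * max(Q[nxt])
        Q[state][action] += alpha * (target - Q[state][action])
        state = nxt

print(Q)  # learned values: moving right should dominate in every state
```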

Am I overlooking anything important in edge computing? I would like to hear my readers’ predictions on this topic.


Jeffrey Goldsmith

FRACTIONAL CMO. Strategy, Brand Awareness, Lead Gen, Partnerships, Advertising & Growth Hacks.

5 years ago

Great piece - we're working on this at https://Chooch.AI

Marcosiris A. O. Pessoa, DSc.

Engineer | Laboratory Specialist | Researcher | Industry 4.0 | Cyber-Physical Systems | Digital Twin

5 years ago

James Kobielus, great article, thanks for sharing, congrats.

Leo Kluger

Digital Marketing Analytics Leader | Performance Marketing Director | Product Management Leader | Data Science

6 years ago

Is Dopamine aligned with Python 2.7 as stated in the article, or should it perhaps be Python 3.7? Very interesting overview!

David C Martin

Simple smart home automation: github.com/ageathome

6 years ago

You’re missing the incorporation of feedback from the users of the AI predictions; that feedback needs to be collected from the execution of automations based on each prediction and the results of those actions, especially when they’re wrong.

