Edge-Native App Design When You Can’t See Behind the Curtain
By Jim Blakley, Living Edge Lab @ Carnegie Mellon University

This blog is a reposting of my blog here.


In the mid-1980s, Bell Labs conducted a human factors study to understand how telephone users conceptualized the telephone network. Within the switching engineering division, we thought of the network as a complex collection of switching, transmission and operations systems interconnected by a web of copper and fiber “outside plant”. But our customers, who only used the network to make phone calls, had a much simpler mental model: two telephones attached to an opaque cloud. Those two phones were nebulously connected to each other by dialing digits or speaking to an operator who resided somewhere in that cloud. The network itself was unimportant to users and, in their minds, had little impact on the quality of their experience. And even if they wanted to know more, information about the network was not available to them. But this infrastructure opaqueness was not a major problem because Bell Labs engineers understood and could design for user experience metrics like dial tone wait time and voice quality.

In today’s world of mobile device applications accessing cloud and edge computing resources over mobile networks, infrastructure opaqueness persists but is compounded by application opaqueness. Old-fashioned telephone calling applications were limited and well understood. Today’s mobile applications like gaming, video conferencing and video surveillance are unbounded in number and diversity. They rely on multi-generational, multi-operator networks connected to multiple cloud and edge computing service providers. Mobile network designers and mobile application developers face significant gaps in their knowledge of each other’s worlds, just as those 1980s telephone users knew little about the telephone network.

But the user’s quality of experience depends on assuring acceptable application performance in the face of opaque and diverse network and compute infrastructure. We see this firsthand when our video streaming service begins to “spin” while waiting for a download over a congested network. Content Delivery Networks, the first widespread use of edge computing, were overlaid on existing networks to address the spin problem. Standards like MPEG-DASH and HLS emerged to detect and mitigate bandwidth limitations in video delivery applications. General purpose edge computing is enabling applications well beyond video delivery, but new tools and approaches will be needed to enable application developers to design for diverse network and computing environments and network operators to design for the many applications their networks must support.
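The core idea behind that DASH/HLS-style mitigation can be sketched as a throughput-based bitrate selector: estimate available bandwidth from recent downloads, then pick the highest rendition that fits. This is a minimal illustration only; the bitrate ladder, safety margin and smoothing factor below are assumptions, not values from any real player.

```python
# Sketch of throughput-based adaptive bitrate selection, in the spirit of
# MPEG-DASH / HLS clients. Ladder and constants are illustrative assumptions.

LADDER_KBPS = [250, 750, 1500, 3000, 6000]  # available renditions
SAFETY = 0.8   # spend only 80% of estimated throughput
ALPHA = 0.3    # EWMA smoothing factor for throughput estimates

def update_estimate(prev_kbps, measured_kbps, alpha=ALPHA):
    """Exponentially weighted moving average of observed throughput."""
    if prev_kbps is None:
        return measured_kbps
    return alpha * measured_kbps + (1 - alpha) * prev_kbps

def pick_bitrate(est_kbps):
    """Choose the highest rendition that fits within the safety margin."""
    budget = est_kbps * SAFETY
    candidates = [r for r in LADDER_KBPS if r <= budget]
    return candidates[-1] if candidates else LADDER_KBPS[0]

# Simulate congestion: as measured throughput drops, the player downshifts.
est = None
for measured in [5000, 4800, 1200, 900, 4000]:
    est = update_estimate(est, measured)
    print(pick_bitrate(est), "kbps at estimated", round(est), "kbps")
```

The smoothing step is what keeps the player from thrashing between renditions on every momentary dip, which matters on exactly the kind of opaque networks discussed above.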


At the Living Edge Lab, we have been working on approaches that use benchmarking and simulation of edge-native applications in networks to understand the interplay between applications and their environments. This work starts from the premise that application experience, quantitatively measured at the user device, is key to understanding the value and limitations of the infrastructure that supports that application. Our work uses an instrumented version of the OpenRTIST edge-native application to collect key application experience metrics like framerate and end-to-end (E2E) latency in different network and computing environments. The first work, by Shilpa George, Thomas Eiszler, Roger Iyengar, Haithem Turki, Ziqiang Feng and Junjue Wang from Carnegie Mellon University and Padmanabhan Pillai from Intel Labs, measured the E2E application latency in a variety of physical edge and cloud-based environments and WiFi and LTE networks. As shown at left, this work found substantial differences in E2E latency depending on what infrastructure the application ran on.
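The kind of device-side instrumentation described above can be sketched as follows: timestamp each frame on send, match the server's response by frame id, and derive per-frame E2E latency and delivered framerate at the client. The class and method names are hypothetical illustrations, not the actual OpenRTiST client code.

```python
# Sketch of client-side experience measurement in the spirit of the
# OpenRTiST benchmarking work. Names are illustrative assumptions.

import time
from statistics import mean

class ExperienceMeter:
    def __init__(self):
        self.sent = {}          # frame_id -> send timestamp
        self.latencies_ms = []  # per-frame E2E latency
        self.recv_times = []    # response arrival times, for framerate

    def on_send(self, frame_id):
        """Record the moment a frame leaves the device."""
        self.sent[frame_id] = time.monotonic()

    def on_receive(self, frame_id):
        """Record the moment the processed frame arrives back."""
        now = time.monotonic()
        t0 = self.sent.pop(frame_id)
        self.latencies_ms.append((now - t0) * 1000.0)
        self.recv_times.append(now)

    def summary(self):
        """Mean E2E latency and delivered framerate, measured at the device."""
        elapsed = self.recv_times[-1] - self.recv_times[0]
        fps = (len(self.recv_times) - 1) / elapsed if elapsed > 0 else 0.0
        return {"mean_latency_ms": mean(self.latencies_ms), "fps": fps}
```

Measuring at the device is the point: the meter sees the sum of radio, backhaul, interconnect and compute delays without needing visibility into any of them.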


The second work by me, Roger Iyengar and Michel Roy from InterDigital, extended the first work by running a series of simulations on an environment built around the AdvantEDGE network emulator. These simulations varied the location and characteristics of users, edge nodes and network interconnect points while the application ran. We also looked at the impacts of network handoff, cell type and mobility on application performance. As an example, the figure below shows the impact on OpenRTIST round trip time and framerate at different simulated carrier interconnect points. 

So far, this work has focused on a single user and a single, specific application, but we do have some key learnings.

  • For applications with high levels of user interactivity, low latency networking to “backend” computing is critical. In many of our experiments, connection to cloud infrastructure outside of our metro area caused unacceptable E2E latency of >150ms.
  • Carrier interconnect points caused similar E2E latency challenges. Without a local metro carrier interconnect, application sessions from an LTE mobile device to an edge node on a local wired carrier were routed through an interconnect point outside the metro area, resulting in the same 150ms+ E2E latency.
  • When the network path stays within the metro area, E2E latency begins to be driven as much by application compute latency as by network latency. This effect is, of course, highly application specific. But it suggests that attention to the CPU and GPU compute resources available on edge nodes is important for experience acceptability.
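The learnings above amount to a simple additive latency budget: network round trip plus server compute plus client codec time, measured against an interactivity threshold. The decomposition and every number below are hypothetical illustrations, not measurements from our experiments.

```python
# Hypothetical E2E latency budget for an interactive edge application.
# All numbers are illustrative assumptions, not Living Edge Lab data.

def e2e_latency_ms(network_rtt_ms, compute_ms, encode_decode_ms=10):
    """E2E latency = network round trip + server compute + client codec time."""
    return network_rtt_ms + compute_ms + encode_decode_ms

BUDGET_MS = 150  # rough acceptability threshold for high interactivity

scenarios = {
    "distant cloud":        e2e_latency_ms(network_rtt_ms=120, compute_ms=40),
    "remote interconnect":  e2e_latency_ms(network_rtt_ms=110, compute_ms=40),
    "metro edge, slow GPU": e2e_latency_ms(network_rtt_ms=20, compute_ms=90),
    "metro edge, fast GPU": e2e_latency_ms(network_rtt_ms=20, compute_ms=40),
}

for name, latency in scenarios.items():
    verdict = "OK" if latency <= BUDGET_MS else "over budget"
    print(f"{name}: {latency} ms ({verdict})")
```

Note how the "metro edge, slow GPU" row stays under budget but is dominated by compute rather than network delay: once the path is local, the edge node's hardware becomes the lever that matters.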

Going forward, we’ll focus on diversifying the number and types of applications under study to include multi-user interactive, IoT and edge analytics applications. We also plan to test in more real and simulated network environments to further validate and extend our learnings.

This work shows the value of using application benchmarking and system simulation to better understand what’s behind the network and computing curtain imposed by infrastructure opaqueness. These are not the only tools developers will need to design and characterize their applications but they can provide key insights into application acceptability in the face of widely varying environments.

For more information on our work in this area, see the following.

REFERENCES

  1. S. George et al., "OpenRTiST: End-to-End Benchmarking for Edge Computing," in IEEE Pervasive Computing, vol. 19, no. 4, pp. 10-18, 1 Oct.-Dec. 2020, doi: 10.1109/MPRV.2020.3028781.
  2. J. Blakley et al., “Simulating Edge Computing Environments to Optimize Application Experience”, Carnegie Mellon University Technical Report, CMU-CS-20-135, November 2020.
COMMENTS

Douglas Castor (Head of Wireless Research, InterDigital):
Nice blog Jim and great example of how emulators like AdvantEDGE can help mobile application programmers understand edge networks!

Kreig D.:
Ah those crafty Bell Labs folks, using human factors before all the Cx/Dx/Ux buzz. Curious Jim what part of the overall E2E was attributable to each layer of the solution stack such as device metrics (network, IO, compute) and cloud stack metrics also?
