Serverless + edge = EDGELESS

Some time ago Prime Video posted a blog article that captured the attention of the whole community of cloud computing researchers and developers.

The article discusses their recent shift from a serverless architecture to a so-called monolithic system, which is claimed to cut costs by up to 90%. The blog post went viral, and countless discussions have sprouted around the topic, in some cases followed by headlines declaring serverless dead and rotting.

This pessimistic angle is counterbalanced by many reports showing how serverless allows a service provider to significantly cut costs. For instance, Bustle claims an 84% cost saving from shifting to serverless.

But that’s not really the point. Of course, serverless is not dead (nor dying) and, certainly, it is widely known that no one-size-fits-all software architecture exists: serverless can be good for some customers, in some products, at some stage of their lifecycle; for everything else, there are other solutions better suited to the job.

What characteristics make a service unfit for serverless computing?

No simple answer exists for such a simple question but, as a first-order approximation, we could say that a service matching any of the following characteristics will give software engineers and developers some headaches when implemented through serverless computing:

  1. Long-running processes: serverless computing platforms enforce a maximum duration for the execution of a function, typically ranging from tens of seconds to minutes. That is because the underlying orchestration system is designed to schedule relatively short-lived executions: mixing short-lived and long-lived functions breaks the design assumptions and leads to inefficient use of resources.
  2. Stateful applications: serverless computing was born with FaaS in mind, which is inherently stateless. Not all applications are stateless, though: studies carried out on open-source serverless computing software hint that most applications are, indeed, stateful. Cloud providers therefore offer developers the option of accessing external state, stored in a key-value store (KVS) or another persistence service, so that the function looks stateless from the point of view of the platform because its state lives somewhere else. That is not a major concern for functions that only access a small amount of state, but it can become a performance choke point, or a cost showstopper, if the application requires managing large blobs.
  3. Bounded service latency: elasticity is a key feature offered by serverless computing to its customers. The platform scales the number of active instances up and down automagically: one less thing the system architects and developers of the application have to care about. However, this comes at a cost. The orchestration system can only follow demand, not anticipate it. Granted, there are research studies suggesting that proactively upscaling resources can yield performance benefits, but it is not clear how aggressively cloud providers are willing to consume resources for traffic that may never arrive (our bet: not much). Therefore, cold-start effects, which happen when a sudden spike of function calls requires resource upscaling or some other reconfiguration of the inner workings of the serverless platform, are here to stay. This also means that latency is not predictable and, in particular, applications will experience high tail latencies.

Characteristics that make a service unfit for serverless computing.
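To make point 2 above concrete, here is a minimal sketch of the external-state pattern, where a plain Python dict stands in for the external persistence service (a real deployment would use a managed KVS such as Redis or DynamoDB; the function name and event shape are illustrative, not any platform's actual API):

```python
# Minimal sketch of the external-state pattern: the function holds no state
# between invocations; everything lives in an external key-value store.
# A plain dict stands in for the managed KVS here.

kvs = {}  # stand-in for the external persistence service


def increment_counter(event):
    """A 'stateless' function: its only state is fetched from, and written
    back to, the external store on every invocation."""
    key = f"counter:{event['user_id']}"
    count = kvs.get(key, 0)   # fetch state (a network round-trip in reality)
    count += 1
    kvs[key] = count          # persist state back
    return {"user_id": event["user_id"], "count": count}
```

Note that each invocation pays two store round-trips: negligible for a small counter like this, but exactly the choke point described above once the state grows to large blobs.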

OK, now: if what we said above is true, then it is very easy to draw guidelines for choosing whether or not to use serverless computing for an application; see the flow diagram below.

Should you use serverless computing for your service?
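The decision flow can be sketched as a simple predicate (an illustrative reading of the diagram, not a formal rule; the function and parameter names are our own):

```python
def serverless_is_a_good_fit(long_running: bool,
                             large_state: bool,
                             needs_bounded_latency: bool) -> bool:
    """Illustrative reading of the flow diagram: a service matching any of
    the three problematic characteristics is a poor fit for today's
    serverless platforms."""
    return not (long_running or large_state or needs_bounded_latency)
```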

The problem is:

  1. Many edge computing applications of practical interest fall into the class of Internet of Things (IoT) services, which could benefit significantly from an on-demand service model because they are inherently bursty in nature.
  2. The functional programming model offered by FaaS matches very well many IoT applications and can also be a good fit for low-code applications to be deployed quickly by non-programmers.

For these reasons, in the EDGELESS project we stubbornly insist on making serverless a viable paradigm at the edge for all applications, and in particular those that would fall into the red box of “NOT serverless computing” above: long-running stateful jobs which may have bounded latency requirements.

We have the ambition of achieving this goal by means of a fresh bottom-up design of the key elements of a serverless computing system at the edge, supported by innovative enabling technologies:

  • The ε-controller will enforce the Quality of Service and Service Level Agreements of workflows of elementary functions based on real-time measurements captured from the edge nodes and other system services.
  • The ε-orchestrator will provide a uniform interface to manage heterogeneous resources, small vs. big edge nodes, and specialized hardware, such as GPUs for the fast execution of some tasks, Trusted Execution Environments (TEEs) for secure computation offloading, and hardware security modules for edge node identity authentication.
  • The ε-balancer will dispatch functions at run-time by closely following the fast time-varying system dynamics to provide stable data-plane performance.
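To give a flavor of what latency-aware dispatching means in practice, here is a toy sketch in the spirit of the ε-balancer: route each call to the node with the lowest smoothed observed latency. All names and the smoothing logic are our own illustration, not the actual EDGELESS design:

```python
class LatencyAwareBalancer:
    """Toy sketch: dispatch each function call to the node with the lowest
    smoothed observed latency (an exponentially weighted moving average).
    Illustrative only; the real ε-balancer tracks system dynamics far more
    closely."""

    def __init__(self, nodes, alpha=0.2):
        self.alpha = alpha                        # EWMA smoothing factor
        self.latency = {node: 0.0 for node in nodes}

    def pick_node(self):
        # Choose the node with the lowest smoothed latency estimate.
        return min(self.latency, key=self.latency.get)

    def report(self, node, observed_ms):
        # Blend the new observation into the running estimate.
        old = self.latency[node]
        self.latency[node] = (1 - self.alpha) * old + self.alpha * observed_ms
```

Even this crude feedback loop illustrates the core idea: dispatch decisions follow measurements at run-time, rather than a static placement.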

So, is serverless a good idea for edge computing applications? Maybe not now, but stay tuned if you want to see this change!

Remember you can check the current status of development in the public EDGELESS GitHub repository.

Should you use EDGELESS for your service?

