Should we go serverless-first?

  1. What is serverless-first
  2. How can we apply serverless-first
     • Changes in the organization
     • Must have
     • Achievements
  3. Real benefits

Spoiler

Speed to market = 30x faster development

Total Cost of Ownership = Up to 60% lower

Autonomy = Less Dev + Ops, more DevOps

Scalability = Out of the box

Disclaimer

Serverless still relies on servers; however, they are abstracted from us.

Serverless-first

Let’s define the serverless concept as a cloud-native model for developing and running applications without having to manage the infrastructure ourselves.

As a concept, this is very attractive from a developer’s perspective when building modern applications. Naturally, the demand to embrace serverless is high, with fewer Ops dependencies and very high autonomy for Devs. The results of these advantages are also keenly pursued at the management level.

Every option has two sides, and the other side is the disruption such an adoption causes: the organization needs to build a mindset and processes where autonomy, empowerment, and accountability are in place to support serverless-first.

Before we continue, let me introduce a buzzword often associated with serverless: FaaS (Functions as a Service). FaaS is very common across modern CSPs (Cloud Service Providers) and has many distinct implementations, often associated with the Kubernetes ecosystem. However, there are differences between the two concepts.

Back to the meaning of serverless-first: since this is a mindset, it has to blend with the organization’s culture, which is always the hard part of any adoption.

The decision process

The reasons for a developer or a team to embark on this journey are simple to understand, based on the principle of autonomy and the appeal of grasping new technology. The same reasons can be harder to pin down when we think at an organizational level.

Adopting serverless independently of the core business often looks very similar: it starts by introducing a small task and then incrementally adds new functionality and promotes integration with core business applications. Many times it is seen as a process to migrate a monolith into microservices using the Strangler-Fig pattern, as sketched below.
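To make the pattern concrete, here is a minimal routing sketch in Python; all URLs and paths are hypothetical, and in a real setup this routing usually lives in an API gateway or load balancer rather than in application code. Migrated paths are forwarded to a serverless function, and everything else still hits the monolith.

```python
# Minimal Strangler-Fig routing sketch: one new capability is served by a
# serverless function, everything else still goes to the monolith.
# URLs and paths below are hypothetical.
import urllib.request

MONOLITH_URL = "https://legacy.example.com"                     # assumption: existing monolith
SERVERLESS_URL = "https://abc123.lambda-url.eu-west-1.on.aws"   # assumption: new function URL

# Paths already migrated to the serverless implementation.
STRANGLED_PATHS = {"/invoices/export"}

def route(path: str) -> bytes:
    """Forward the request to the serverless function if the path was migrated,
    otherwise fall back to the monolith."""
    base = SERVERLESS_URL if path in STRANGLED_PATHS else MONOLITH_URL
    with urllib.request.urlopen(base + path) as response:
        return response.read()
```

Over time, more paths move into STRANGLED_PATHS until the monolith can be retired.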

Once the results surface and the relation between cost/profit/time is beneficial, the adoption becomes natural.

What are the serverless deliverables?

Putting them on a list, the attraction is immediate:

  • Speed, Velocity, Time to Market, Faster
  • Scalability, Agility, Elasticity
  • Cheaper, lower TCO
  • Low Complexity
  • Based in code only, automated, repeatable, auditable
  • Modern, Trendy, Challenging

With such predicates, who can say no?

How can we apply serverless-first

Applying this technology is relatively easy, whether with internal resources or with external support to introduce the foundations and put the CI/CD processes in place, which gives the comfort needed for an easy decision.

Changes in the organization

I’m repeating myself, but the organization, as a whole, needs to assess what stage of evolution it is currently at. This assessment is fundamental. The organization must reach a level of maturity where providing all management levels with training in leadership and mentorship capabilities is ordinary and does not raise constraints; no one is born knowing.

This maturity will encourage employees to take part in decision processes and elevate anyone’s opinion to all levels of the organization. This environment builds critical mass, increasing the perceived value and gathering the much-wanted buy-in.

Promote or contract SMEs (Subject-Matter Experts), accelerating the adoption and avoiding painful pitfalls. Allowing more people to be in touch with the technology will become a success factor.

Show off the benefits of the adoption to the entire organization; it’s a powerful tool and a loud call to evolution.

Putting all these moving parts together is much easier said than done; like any other cultural mindset change, it is painful and hard to break with some of the current status quo.

Must have

  1. Executive sponsor

  • Quick and hard rule: start the adoption of serverless-first only with clear executive sponsorship.
  • This sponsorship will be perceived as the strategy, the direction, and the vision; together, these factors promote focus.
  • It will also grant the ownership and autonomy needed to achieve the goal.

  2. Operating Model and Guardrails

  • In the current state, pre-defining the security guardrails is a top priority (or at least it should be); shifting the guardrails higher in the stack is the way to go, taking into account:
  • The CSP shared responsibility model.
  • The impact radius of exposed vulnerabilities.
  • Security best practices, which are still valid.

[Image: CSP shared responsibility model, based on AWS documentation.]

  • Implement the base framework for development, with simple and clear guidance, easily integrated into a standardized CI/CD (Continuous Integration, Delivery, Deployment) pipeline. If the organization already has a CCoE (Cloud Center of Excellence), this fits its scope like a glove:
  • Common tooling leverages knowledge.
  • A framework for the development lifecycle.
  • Increased quality, driven by testing and integration.

[Image: serverless development lifecycle, based on AWS documentation.]

  • The current methods and tools used to monitor serverless must evolve to integrate observability; the metrics serverless needs in order to deliver visibility differ from those of IaaS and PaaS, so here, too, there is a need to evolve. More or less consensually, they should cover the following:
  • Operational metrics: track operational performance and provide the basis for alerting thresholds, for example counts of aggregated executions and aggregated errors grouped by type.
  • Load metrics: centered on execution metrics, mostly execution duration. These metrics feed back into development and encourage performance improvements, since performance is directly related to operational costs. One simple warning about reading these metrics: they need to be perceived from a statistical perspective, not as nominal values; percentile-based metrics (P90, P95, and P99) are commonly used.
  • Business metrics: based on how the business operates, these metrics should be developed and embedded within the application/function, giving evidence of customer-facing actions that jeopardize customer satisfaction or elevate churn probability. A minimal sketch of emitting such custom metrics follows below.
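As an illustration, here is a minimal sketch of a Lambda handler emitting one metric of each kind with boto3’s CloudWatch `put_metric_data` call. The namespace, metric names, and checkout logic are assumptions for the example; in production, an embedded-metric-format logger or a dedicated observability platform would usually be preferred.

```python
# Minimal sketch: emitting custom operational, load, and business metrics from
# a Lambda handler. Namespace, metric names, and the checkout logic are
# hypothetical; real projects often use the Embedded Metric Format instead.
import time
import boto3

cloudwatch = boto3.client("cloudwatch")

def handler(event, context):
    started = time.time()
    try:
        order_total = process_checkout(event)                    # hypothetical business logic
        _put_metric("CheckoutValue", order_total, "None")        # business metric
        return {"statusCode": 200}
    except Exception:
        _put_metric("CheckoutErrors", 1, "Count")                # operational metric
        raise
    finally:
        duration_ms = (time.time() - started) * 1000
        _put_metric("CheckoutDuration", duration_ms, "Milliseconds")  # load metric

def _put_metric(name, value, unit):
    cloudwatch.put_metric_data(
        Namespace="Shop/Checkout",                               # hypothetical namespace
        MetricData=[{"MetricName": name, "Value": value, "Unit": unit}],
    )

def process_checkout(event):
    # Placeholder for the real business logic.
    return float(event.get("amount", 0))
```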

The serverless paradigm pushes the boundaries of understanding the user experience. The fact that most resources are short-lived (a single execution), plus the need to consider other metrics and variables (devices, services, volume of data, deployment frequency), raises a new set of challenges in collecting, correlating, and visualizing those findings.

  3. Embrace early-stage projects

  • Applying the Well-Architected Framework must be a practice.
  • Show off metrics to gain sponsorship and adoption.
  • Documentation is going to be the hard part for any tech, but do not go without a log of decisions and lessons learned.
  • Reuse, reuse, reuse before even trying to build; the capability to use proven architectures and patterns will boost productivity.
  • Actively encourage ongoing serverless-first projects to support other projects in becoming serverless, also known as building communities.

Achievements

A direct relation between execution cost and on-demand scaling capacity allows organizations to focus more on business value than on technology and operational costs.

The evolution of the underlying infrastructure contributes directly to some of the principles any organization pursues, allowing it to adapt to changes quickly and take advantage of market evolution. The resulting acceleration reduces time to market, reduces latency, delivers better user experiences, and allows higher quality, accomplished through constant iteration.

A reduced Ops team can maintain the infrastructure, since a big chunk of it is transferred to the CSP’s responsibility scope.

More and better skills are available for development, since serverless imposes very few limitations on language and runtime choices.

The maturity of the development lifecycle varies from organization to organization; nowadays, many frameworks and tools are available. The providers supply these tools and frameworks, and the community actively contributes with others, which brings solidity to the process. Some examples for future exploration: Serverless Framework, AWS SAM, AWS CDK, and SST.
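As a taste of what these frameworks look like, here is a minimal AWS CDK (v2, Python) sketch defining a single function behind an HTTP entry point. The stack name, handler name, and asset path are made up for the example; SAM, Serverless Framework, and SST express the same idea in their own syntax.

```python
# Minimal AWS CDK (v2, Python) sketch: one Lambda function behind an API
# Gateway REST endpoint. Names and the ./src asset path are hypothetical.
from aws_cdk import App, Stack, aws_lambda as _lambda, aws_apigateway as apigw
from constructs import Construct

class HelloServerlessStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # The function: runtime, handler entry point, and code location.
        handler = _lambda.Function(
            self, "HelloHandler",
            runtime=_lambda.Runtime.PYTHON_3_11,
            handler="app.handler",              # app.py -> def handler(event, context)
            code=_lambda.Code.from_asset("src"),
        )

        # A REST API that proxies every request to the function.
        apigw.LambdaRestApi(self, "HelloApi", handler=handler)

app = App()
HelloServerlessStack(app, "HelloServerlessStack")
app.synth()
```

The whole deployment then becomes code: automated, repeatable, and auditable, exactly as listed among the deliverables above.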

Real benefits

Simplicity

Serverless-first deliveries have a much less complex setup: they reduce the toolchain considerably and do not impose (although they do suggest) implementation in more friendly programming languages. This last point can be a matter of disagreement, but being able to avoid the compilation step for every single code change can significantly reduce the time to test, deliver, and deploy.

Costs

The economic benefits of this technology’s capacity to scale according to demand turn it into a much more efficient investment.

Depending on the complexity, budgeting a serverless solution can be a challenging task. However, adopting new budgeting strategies and realigning the processes to variables more specific to serverless makes it possible to perform cost calculations based on consumption predictions during the decision phase.

To give an example, let’s take an AWS Lambda function, where we can set the memory size available at runtime, which relates directly to performance; combined with the execution duration and the number of invocations, this math returns the cost of execution.

[Image: cost table based on AWS documentation, calculated for 1,000 Lambda function invocations.]

This financial advantage will be lost or become negligible when the number of executions (the workload) overcomes certain thresholds. To confirm the economic benefit of running with a serverless architecture, the execution-cost calculations need to be compared directly with the cost of running the same workload on VM instances or containers.

To simplify this calculation, using this documentation we can establish that when the function memory is set to 1,792MB, we get one full vCPU. So if we allocate only 1,024MB to a function, we can use roughly 57% of a vCPU (1,024/1,792 ≈ 0.57). From this perspective, we are sharing the rest of the vCPU capacity with other functions.

The more memory we allocate, the faster we can finish a task.
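To make the trade-off tangible, here is a small cost sketch in Python for 1,000 invocations at different memory sizes. The unit prices are illustrative placeholders roughly matching public on-demand Lambda pricing at the time of writing, and the durations are assumptions; always check the current price list for your region.

```python
# Illustrative Lambda cost estimate for a batch of invocations.
# Unit prices are placeholders (roughly public us-east-1 on-demand rates at the
# time of writing); check the current AWS price list before relying on them.
PRICE_PER_REQUEST = 0.20 / 1_000_000        # USD per invocation
PRICE_PER_GB_SECOND = 0.0000166667          # USD per GB-second

def lambda_cost(invocations: int, memory_mb: int, avg_duration_ms: float) -> float:
    """Estimated on-demand cost: request charge + GB-seconds charge."""
    gb_seconds = invocations * (memory_mb / 1024) * (avg_duration_ms / 1000)
    return invocations * PRICE_PER_REQUEST + gb_seconds * PRICE_PER_GB_SECOND

# 1,000 invocations; assume more memory finishes the task faster.
for memory_mb, duration_ms in [(512, 800), (1024, 450), (1792, 260)]:
    vcpu_share = memory_mb / 1792            # fraction of one vCPU at this size
    cost = lambda_cost(1_000, memory_mb, duration_ms)
    print(f"{memory_mb:>5} MB  ~{vcpu_share:.0%} vCPU  est. ${cost:.6f}")
```

The point of the exercise is that doubling the memory is not simply doubling the cost: if the extra vCPU share finishes the task proportionally faster, the GB-seconds billed can stay roughly flat.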

Here is a minor update from this blog note on how the memory/vCPU ratios have become more flexible and new capacities have been announced.

I’m also introducing another topic, which points to the way the organization’s financial departments think about the current procedures for calculating operational costs. These calculations need to deliver the right visibility to evaluate serverless-first performance.

In IaaS and PaaS, fees are charged to the producer; most of the time the financial department follows a simple principle of dividing the producer’s costs between whoever is consuming the resources. When the organization deploys shared services, this split becomes a financial decision about how to separate them. In a serverless implementation, the consumption data must be collected by embedded code to distinguish who is using the resources and therefore benefiting from them. The request-traceability effort needed for observability is almost the same as the one needed to calculate the cost of a request; a minimal sketch of such per-request attribution follows below.

This process will bring new tools to the FinOps team. The correlation between execution and the profit generated will become the new decision-maker for management.
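Here is what that embedded attribution could look like. The tenant field, the log shape, and the idea of joining the records with billing data later are all assumptions for illustration; real setups often lean on structured logs or tags plus cost-allocation tooling.

```python
# Minimal per-request cost-attribution sketch: each invocation logs who
# consumed it and how much capacity it used, so FinOps can later join this
# with billing data. Field names and the tenant concept are hypothetical.
import json
import os
import time

def handler(event, context):
    started = time.time()
    tenant_id = event.get("tenant_id", "unknown")    # hypothetical consumer identifier
    result = do_work(event)                          # placeholder business logic
    duration_ms = (time.time() - started) * 1000

    # One structured record per request; a log subscription or query engine
    # can aggregate these into cost-per-tenant reports.
    print(json.dumps({
        "type": "usage",
        "tenant_id": tenant_id,
        "function": context.function_name,
        "memory_mb": int(os.environ.get("AWS_LAMBDA_FUNCTION_MEMORY_SIZE", 0)),
        "duration_ms": round(duration_ms, 2),
        "request_id": context.aws_request_id,
    }))
    return result

def do_work(event):
    return {"statusCode": 200}
```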

Adaptability

The cost-saving requirements imposed by the current state of the economy, still struggling with the effects of the pandemic, the war, the energy crisis, distributed workforces, remote work, and global teams, fit perfectly with the advantages serverless-first offers, making it more of a norm than a niche option.

Retention

We can highlight some points from serverless-first adoption related to retaining talent and becoming more appealing for recruitment. The framework, the approach, and the tools simplify the entire development lifecycle and promote autonomy, and the feeling of quickly overcoming challenges is very gratifying.



Credits:

Shared Responsibility Model based on AWS documentation

Application Owner icon created by Freepik - Flaticon

Cloud Provider icon created by Eucalyp - Flaticon

Development lifecycle based on AWS documentation

Table based on AWS documentation

The spoiler numbers are based on an AWS re:Invent presentation by Jessica Feng.

#serverless #cloudnative #cloudcomputing
