AWS Serverless: Final Thoughts

This is the seventh article in a series about implementing an AWS Serverless Web Application for global clients of a large enterprise. If you have not already done so, please read the first article.

Writing six short articles was an enjoyable way to share my technical experience with Serverless Architecture for large (Global 2000) enterprises. In this seventh and last article, I want to write about the “non-technical” experience and list a few key focus areas that helped us achieve massive success.

Transformation

Throughout the implementation, serverless caused us to incrementally rewrite our entire playbook. Our customer achieved much better time to market, allowing them to promise many “long tail” features to broader audiences instead of fewer “big bang” features to a small number of users. Storing and using data became much easier and more meaningful. We built disaster recovery and high availability into the application very early, at much lower cost and with much easier recovery. All of these benefits changed the conversation from “How do we do it?” to “What should we do?”. That is a really big change in mindset for the organization: it shifts the focus from the inside to the outside, that is, from capability to possibility, and it moves thinking away from constraints and toward abilities.

Budgeting & Staffing

The annual budget conversation turns from a technical discussion about servers and datacenters to one about business parameters such as the number of customers, new products and geographic spread. It is still important to establish a baseline of costs before such business parameters can be used. Once the baseline is set (for example: 100,000 users and 100 products cost $10,000), future costs can be estimated from the expected number of users and products. Further, budgeting (and provisioning) need not be precise or treated as critical: if requirements overrun, the serverless infrastructure simply expands to meet them. That means any cost overrun comes with a good reason, namely unexpected growth in the business. Notice that I have not even mentioned staffing yet, because the entire skill set for cloud-native serverless coalesces around developers. As long as there is some way to wake up a developer for level 1 support, all application support can land in the developer’s lap. Because all infrastructure is “code”, there are no cables to fix, no racks, no power supplies or any of that. This significantly changes the staffing equation.
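To make the baseline idea concrete, here is a minimal sketch of that kind of back-of-the-envelope estimation. The numbers and the assumption of linear, equally weighted scaling are purely illustrative (taken from the hypothetical baseline above, not from any real billing model):

```python
# Illustrative only: assumes cost scales roughly linearly with users and products.
# The baseline figures are the hypothetical example from the article, not real invoices.

BASELINE_USERS = 100_000
BASELINE_PRODUCTS = 100
BASELINE_MONTHLY_COST = 10_000  # USD, hypothetical

def estimate_monthly_cost(expected_users: int, expected_products: int) -> float:
    """Extrapolate cost from the measured baseline using simple proportional scaling."""
    user_factor = expected_users / BASELINE_USERS
    product_factor = expected_products / BASELINE_PRODUCTS
    # Weight users and products equally; tune the weights once real usage data arrives.
    return BASELINE_MONTHLY_COST * (0.5 * user_factor + 0.5 * product_factor)

if __name__ == "__main__":
    # Business asks: "What if we double users and add 50 products next year?"
    print(f"Estimated monthly cost: ${estimate_monthly_cost(200_000, 150):,.0f}")
```

The point is not the arithmetic itself but that the conversation now happens in business terms (users, products) rather than in servers and racks.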

Launch Pad

We used this project as a launch pad for many other initiatives. This serverless implementation was simply the trail blazer, followed by many other initiatives that used cloud-native technologies in one way or another. In other words, this serverless project was, in some ways, a sort of experiment. This is crucial. When the mindset is to “conduct an experiment”, people tend to take risks and try new things. If the mindset is to “disallow failure” in the middle of the stream of work (as opposed to at the head of the stream), then people do not run; they walk, safely. It is important for staff to let go of known norms and practices and launch new ways of working.

Koolaid

When we first started this serverless project, I seemed to announce a new product or methodology name every day. Developers learnt new things every day. It sounded like we were dancing to the tune of our cloud vendor (AWS). It almost felt suspect, as if we were walking into the den of something we did not know (almost sinister). It was only after a few cycles of development that we all realized we must drink the koolaid. Product announcements, enhancements and advertisements from AWS helped us design our functionality as well as our infrastructure. Every time we ran into issues and problems, AWS Support guided us in the right direction, and we kept folding the cutting edge into our work. In other words, there is no use holding back. Once on the serverless path, do not drive in reverse; keep going forward.

Handle Naysayers, or don’t

There was a vocal group of developers and architects who did not understand, or flat out did not agree with, many of the new principles. There will be plenty of questions ranging from the fundamental to the mundane, including “How do we trust the cloud vendor?”, “What about performance?”, “How do we do transactions in NoSQL?”, “How do we roll back safely?” and “What if the Lambda function fails?”. I decided to pick the battles to fight. Instead of lobbing ourselves into the lava of endless discussions, I decided to prove out the technology to the naysayers. With serverless, it is decidedly easier and faster to prove things out than to wallow in analysis paralysis. In some cases, I just plain ignored questions until the product was ready to be tested. For example, the performance question is easily settled in formal performance testing rather than one-off tests. On this specific question, it is easy to mistake the perceived poor performance of Lambda “cold starts” for a permanent ailment of the serverless world. On the contrary, under the sustained load of a formal performance test, functions stay warm, cold starts all but disappear and no such performance issues show up.
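To make the cold-start point concrete, here is a minimal sketch (a hypothetical handler, not from our codebase) of how a Python Lambda behaves: module-level initialization runs once per execution environment, so a sustained performance test that keeps containers warm only pays that cost on the very first invocations.

```python
# Hypothetical handler illustrating why cold starts fade under sustained load:
# module-level code runs once per container; warm containers reuse it on every call.

import time

CONTAINER_STARTED_AT = time.time()  # executed only during a cold start
_invocation_count = 0

def handler(event, context):
    global _invocation_count
    _invocation_count += 1
    return {
        "coldStart": _invocation_count == 1,  # True only for the first call in this container
        "containerAgeSeconds": round(time.time() - CONTAINER_STARTED_AT, 2),
        "invocationsInThisContainer": _invocation_count,
    }
```

Run a one-off test and you are likely to hit the first, cold invocation; run a steady load and the warm path dominates, which is exactly why the formal test told a different story than the naysayers expected.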

Teamwork & Management

Much of our success came from adopting the microservices paradigm at the outset. When adopting microservices, it is essential to rally teams around specific services, or even specific sub-products (features) within the main product. Instead of managing build, delivery and code branching at the product level, it is essential to manage those things at the team level. This means that if one team wants to run its build using one tool and another wants to do it differently, then so be it. Middle management, such as PMs and Development Managers, should not dictate tool sets, languages or any of that to teams. Instead, middle management should focus on bugs, availability, performance and other such outputs of the team’s product. The user interface binds all the backend services together, and that is where most of the middle-management focus should lie. Instead of asking questions such as “what is your unit test coverage?”, middle managers should ask “what is the frequency of high/medium/low bugs found in the QA environment and in production?”. Instead of asking “is the development done?”, ask “how many users are using this feature in production?”.

Conclusion

In general, we have had great success with AWS Serverless. Since we started our work in March 2017, AWS has made many improvements on their side. I am no longer the protagonist on that application, and I am pleased with how the new owners are putting their own imprint on it.

This series has focussed on the positives of our experience building a large, high-volume, global serverless application. Individual experiences building other applications will vary, and I am eager to hear from others how they have fared.

I must share credit with all my associates and leadership who worked with me on this project. Even though I was the protagonist, a project of this magnitude cannot be completed without the help of some very brilliant people.
