Serverless Computing Drawbacks
Serverless computing is a method of providing backend services on an as-used basis. A serverless provider allows users to write and deploy code without the hassle of worrying about the underlying infrastructure. A company that gets backend services from a serverless vendor is charged based on its computation and does not have to reserve and pay for a fixed amount of bandwidth or a fixed number of servers, as the service is auto-scaling. Note that despite the name serverless, physical servers are still used, but developers do not need to be aware of them.
Potential Drawbacks of Serverless Computing
While switching to serverless can benefit your organization in various ways, it also has some possible drawbacks that you should carefully consider before implementing this type of solution. These are some problems you can run into while leveraging serverless architecture:
Deploying a Lot of Functions
FaaS follows the pay-as-you-go approach; deployed functions are billed only when they are run. As there are no costs for inactive serverless functions, deploying as many functions as you want might be tempting. Nevertheless, this may not be the best approach, as it increases the size of the system and its complexity—not to mention that maintenance becomes more difficult. Instead, analyze whether there is a need for a new function; you may be able to modify an existing function to match the change in the requirements, but make sure it does not break its current functionality.
Calling a Function Synchronously
Calling one function synchronously from another increases debugging complexity, and the isolation of the implemented feature is lost. Cost also increases, because the caller keeps running (and being billed) while it waits for the called function to finish, so both functions are billed at the same time. If the second function is not used anywhere else, combine the two functions into one instead.
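The advice above can be sketched in plain Python. The function names (`validate_order`, `price_order`) and the event shape are illustrative, not from any real platform; the point is that merging a synchronously called helper into its only caller yields one invocation and one bill:

```python
# Hypothetical example: if validate_order synchronously invoked price_order
# as a separate serverless function, both would be billed while the caller
# waits. Since price_order is used nowhere else, merging them into one
# handler removes the double billing.

def validate_order(order):
    """Reject orders that contain no items."""
    if not order.get("items"):
        raise ValueError("order has no items")
    return order

def price_order(order):
    """Sum the line totals of an already-validated order."""
    return sum(item["qty"] * item["unit_price"] for item in order["items"])

def handle_order(event, context=None):
    """Combined handler: one function, one invocation, one bill."""
    order = validate_order(event)
    return {"total": price_order(order)}
```

Both pieces of logic still exist as separate, testable functions inside the code; only the deployment unit is merged.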
Calling a Function Asynchronously
It is well known that asynchronous calls increase the complexity of a system. Costs will increase, as a response channel and a serverless message queue will be required to notify the caller when an operation has been completed. Nevertheless, calling a function asynchronously can be a feasible approach for one-time operations; e.g., to run a long process such as a backup in the background.
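A minimal sketch of the fire-and-forget pattern described above. In a real serverless setting the platform would provide the asynchronous invocation and the response channel; here a thread pool and a future stand in for them so the sketch runs locally, and `run_backup` is an illustrative name:

```python
# Fire-and-forget: the caller triggers a long one-time operation (a backup)
# and returns immediately; the future acts as the response channel that
# signals when the operation has completed.
from concurrent.futures import ThreadPoolExecutor

def run_backup(dataset):
    """Long-running one-time operation; the caller does not wait for it."""
    return f"backup of {dataset} complete"

def trigger_backup_async(executor, dataset):
    """Return a handle immediately instead of blocking the caller."""
    return executor.submit(run_backup, dataset)

executor = ThreadPoolExecutor(max_workers=1)
future = trigger_backup_async(executor, "orders-db")  # returns at once
# ... the caller keeps serving requests; the future notifies it
# when the backup has finished.
result = future.result()
```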
Employing Many Libraries
There is a limit to the image size, and employing many libraries increases the size of the application. A larger image also increases the function's warm-up time. To avoid this, employ only the necessary libraries. If library X offers functionality A, and library Y offers functionality B, spend time investigating whether a library Z exists that offers both A and B.
Using Many Technologies
Using too many frameworks, libraries, and programming languages can be costly in the long term, as it requires people with skills in all of them. This approach also increases the complexity of the system, its maintenance, and its documentation. Try limiting the use of different technologies, especially those that do not have a broad developer community and a well-documented API.
Not Documenting Functions
Failing to document functions is one of the oldest and worst bad practices. Some people say that good code is like a good joke: it needs no explanation. However, this is not always the case. Functions can have a certain level of complexity, and the people who wrote them may not always be around to maintain them. Hence, documenting a function is always a good idea. Future developers working on the system and maintaining the functions will be happy you did it.
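As a concrete illustration, a docstring that states the inputs, output, and failure modes is usually enough. The function and its tiers are invented for this example:

```python
def apply_discount(total, customer_tier):
    """Return the order total after applying the customer's tier discount.

    Args:
        total: Pre-discount order total in the shop's base currency.
        customer_tier: One of "standard", "silver", or "gold".

    Returns:
        The discounted total, never below zero.

    Raises:
        KeyError: If customer_tier is not a known tier.
    """
    rates = {"standard": 0.0, "silver": 0.05, "gold": 0.10}
    return max(0.0, total * (1 - rates[customer_tier]))
```

A future maintainer can now change the discount rates without first reverse-engineering what the function promises its callers.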
Loss of Control
One of the biggest criticisms of serverless is loss of control. What does this mean? In serverless applications, we tend to use a lot of services managed by third parties (called BaaS—backend as a service, in serverless slang) and a lot of function platforms (FaaS—functions as a service). These BaaS and FaaS are both developed and operated by third parties.
In the non-serverless world, we control the whole stack; we have control over which version of what software goes into each of our services, and we choose and operate our own queues, databases, and authentication systems. The more we move to serverless, the more control we lose, because we give up ownership of the software stack that our services use. The positive side is that we can focus more time and energy on providing business value.
One important thing you should do if you are really concerned about losing control of your infrastructure is a risk assessment. This will allow you to analyze what is most important for your business and how losing control would affect it. For example, if you are in the business of selling cakes online, it may not make sense to spend a lot of time building the authentication system that your e-commerce platform uses. A third-party authentication system can provide the same value as one you write yourself, or even more, since it is an already tested and secured system. Remember that when you choose to write a system yourself, you need to own it, and maintain it, for the duration of its life.
Security
One of the biggest risks of serverless is poorly configured applications. Poor configuration can lead to many issues, including (but not limited to) security-related ones. If you are using AWS, for example, it is important to correctly configure the permissions that your services have for accessing other AWS services. When permissions are not specific enough, a function can end up with more privileges than it needs, leaving room for a security breach. Another problem is that the security mechanisms inside the cloud do not extend beyond it, so if we connect to third-party services over HTTP, we need to make sure that those connections are secure and the data is encrypted.
To overcome security issues, the most important thing is to give your AWS resources exactly the permissions they need to perform their tasks. Define permissions per function, and be very strict about them. Then make sure that you encrypt all your data at rest. For third-party connections, make sure every request takes place over a secure, encrypted connection.
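A least-privilege policy for a single function might look like the sketch below, written here as a Python dictionary following the AWS IAM JSON policy grammar (Version, Statement, Effect, Action, Resource). The table ARN and account number are placeholders, and `grants_wildcard` is a simple illustrative check, not an AWS API:

```python
# Least privilege: this function may only read one specific DynamoDB table.
# The ARN below is a placeholder for illustration.
READ_ORDERS_POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["dynamodb:GetItem", "dynamodb:Query"],
            "Resource": "arn:aws:dynamodb:eu-west-1:123456789012:table/orders",
        }
    ],
}

def grants_wildcard(policy):
    """Flag Allow statements that cover every action or every resource."""
    for stmt in policy["Statement"]:
        actions = stmt["Action"]
        if not isinstance(actions, list):
            actions = [actions]
        if stmt["Effect"] == "Allow" and ("*" in actions or stmt.get("Resource") == "*"):
            return True
    return False
```

A check like this can run in CI to catch overly broad policies before they are deployed.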
Architecture Complexity
When developing serverless applications, even the simplest application has a complicated architecture diagram. In general, the code for a function tends to be simple and to do only one task, which leads to a lot of functions per application. In addition, we use many managed services for all kinds of tasks. When you combine these two things, the architecture tends to get complicated: the complexity of an application moves from the code to the architectural level. You have to be careful here to follow solid architectural patterns, or you can end up with a tangled architecture.
The simplest way to avoid this is to educate yourself on how to build distributed systems. Learn the most common architectural patterns for designing event-driven applications and become familiar with asynchronous messaging. Developers and architects often think in terms of synchronous communication, but in a distributed system, asynchronous messaging is more efficient. It is very important to understand how the different services integrate with each other, how much latency a request has, and where the bottlenecks are. Sometimes, by slightly rethinking the architecture, you can improve the performance of the entire set of applications. For example, if you see that one service is taking an inordinate amount of time to respond, you might ask yourself: can I add a cache in front? Or can some of the synchronous messaging be switched to asynchronous connections?
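The switch from synchronous to asynchronous messaging can be sketched with the standard library. A plain `queue.Queue` stands in for a managed message queue (such as SQS), and the order-processing names are illustrative:

```python
# The producer no longer waits for the consumer: it drops a message on the
# queue and returns immediately, decoupling the two services' latencies.
import queue
import threading

orders = queue.Queue()
processed = []

def enqueue_order(order):
    """Producer: returns immediately instead of waiting for processing."""
    orders.put(order)

def worker():
    """Consumer: drains the queue at its own pace, independent of callers."""
    while True:
        order = orders.get()
        if order is None:  # sentinel value tells the worker to stop
            break
        processed.append({"id": order["id"], "status": "done"})

t = threading.Thread(target=worker)
t.start()
enqueue_order({"id": 1})
enqueue_order({"id": 2})
orders.put(None)  # shut the worker down after the pending messages
t.join()
```

In a serverless architecture the queue and the worker would be a managed queue and a function triggered by it, but the decoupling principle is the same.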
Difficult to Test
Because of their distributed nature, serverless applications tend to be hard to test. Developers generally like to run local tests, because that is what they were accustomed to before serverless was available. But in the serverless world, local tests are complicated, as we need to somehow mock the cloud services on our local machine. In non-serverless applications, one of the biggest risks tends to be in the code; in serverless applications, the greatest risks are configurations and integrations. So we need to make sure that we perform a decent amount of integration tests as well.
To overcome the difficulty of testing serverless applications, it is important to invest the time and effort upfront to architect your application correctly, so you can test all your business logic running unit tests, and then create good integration tests that run in the cloud.
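One common way to architect for testability is to keep the business logic in pure functions and make the handler a thin adapter. The shipping calculation and event shape below are invented for illustration:

```python
def calculate_shipping(weight_kg, express=False):
    """Pure business logic: no cloud dependencies, trivially unit-testable."""
    base = 4.0 + 1.5 * weight_kg
    return round(base * (2.0 if express else 1.0), 2)

def handler(event, context=None):
    """Thin adapter: parses the event, delegates, and shapes the response.
    The platform integration itself is covered by separate tests that run
    in the cloud."""
    cost = calculate_shipping(event["weight_kg"], event.get("express", False))
    return {"statusCode": 200, "body": {"shipping": cost}}
```

All the logic can now be unit-tested with no mocks at all, leaving only the thin adapter and the service configuration for cloud-based integration tests.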
Difficult to Monitor
Again, for the same reason that our system is difficult to test, it is also difficult to monitor. Monitoring serverless applications is very complex, and the tooling available is not yet well developed. In a traditional application, we usually focus on monitoring the execution of code, while in serverless applications, we also need to monitor the integrations between the different services and make sure that we can follow a request end to end in our distributed system.
In order to deal with this effectively, it is crucial that you find a good monitoring tool that works for your application. There are many out there, and they all have different features. As a best practice, choose one that supports the ELK Stack, offers features for visualizing the different logs and metrics (not only from your functions, but also from your other resources), and, additionally, supports distributed tracing.
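The building block that makes a request traceable end to end is a correlation ID carried through every service's structured logs. A minimal sketch, with invented field and service names:

```python
# Every log line for one request carries the same correlation_id, so a
# tracing tool (or a simple log search) can reassemble the request's path
# across services.
import json
import uuid

def log_event(correlation_id, service, message):
    """Emit one structured (JSON) log line tagged with the request id."""
    return json.dumps({
        "correlation_id": correlation_id,
        "service": service,
        "message": message,
    })

request_id = str(uuid.uuid4())
line_a = log_event(request_id, "checkout", "order received")
line_b = log_event(request_id, "payments", "charge authorized")
```

Managed tracing services automate this propagation, but emitting and forwarding a correlation ID yourself works with any log aggregation stack.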