A Review of "Serverless"? Tech

A Review of "Serverless" Tech

WOT NO SERVER?

David Lee from Blueberry reviews the advances in “serverless” cloud technology and what benefits it brings to the enterprise.

Since the emergence of cloud technology around 15 years ago, more and more computing has inexorably migrated from the company server room (the so-called “on premise” model of computing) to the anonymous, gargantuan racks of servers provided by Microsoft, Amazon Web Services, Google, IBM and other more niche providers. Collectively, this is “The Cloud”. It’s your corporate computer, much as you always knew it, but, in the words of Captain Kirk, “out there somewhere”. In spite of a misconception still often held in the Board Room, cloud computing is MORE secure, MORE cost-effective, MORE reliable and EASIER to maintain than an (albeit virtualised) chunk of tin in a (hopefully) dry, (hopefully) locked, (hopefully) maintained basement at HQ.

“Virtualisation” was the key technology advance that unlocked the advent of the cloud. Before virtual computing, a program ran “literally” on a server machine with a fixed physical amount of processing power (CPU), memory (RAM) and disk space. This had many drawbacks. It was inefficient: if the business need for computing power grew, software would begin to run slowly or become unresponsive, infuriating users, and upgrading could be a costly and time-consuming matter, often requiring in-house I.T. resource. God forbid the server “went down”; the business went down with it, an unscheduled coffee break for all while customers queued on the phone. Virtualisation began to change that. Now a single powerful computer, or cluster of computers, could pretend to be different types of computing resource to suit varying purposes. Effectively, the combined computing power of the installed devices is divided up more efficiently between the software applications that need it, with spare shared resource available so that power can be added relatively easily when required.

Before virtualisation, one could rent a server in a data centre. This would be like your traditional office server, just not in your offices, and maintained by someone somewhere else. But with virtualisation the cloud was truly born, because now the data centre provider could fill a large purpose-built warehouse with as many servers as they could keep cool and share out all that lovely computing power amongst thousands of customers; and it mattered not whether a customer was a small entrepreneur or a corporate giant. The traditional I.T. giants like IBM and Microsoft recognised the potential, as did a certain online book retailer, who realised that if they could service millions of online customers worldwide for their retail therapy, they could perhaps offer other corporations massive computing power as well. Amazon Web Services (AWS) was created, and challenged the traditional “big boys” with better, clearer cost models and a wide range of tools for increasing computing efficiency.

For relatively small software applications, such as those of an average SME, the cloud is nice and simple. There are standard server architectures that you buy “off the shelf”, analogous to buying a server for the office in the old world. This typical architecture would consist primarily of a “web server” and a database. The former is the interface to the Internet (or private network), optimising the receipt of requests from software running in a browser and the serving of data, documents and images back to the end user. The database is the storage for the data. In the simplest example, the web server and the database could live on a single virtual server (sometimes called an “instance”) or, for more efficiency, could be split across two, or more, virtual servers.
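
To make the simplest example concrete, here is a minimal sketch of that single-instance architecture in Python, using Flask as the web server and SQLite as the database (all names and fields here are hypothetical, purely for illustration):

    import sqlite3
    from flask import Flask, jsonify

    app = Flask(__name__)      # the "web server": receives browser requests
    DB_PATH = "customers.db"   # the "database": a file on the same instance

    # Create the storage on first run so the sketch is self-contained.
    conn = sqlite3.connect(DB_PATH)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS customers (id INTEGER PRIMARY KEY, name TEXT)"
    )
    conn.commit()
    conn.close()

    @app.route("/customers/<int:customer_id>")
    def get_customer(customer_id):
        # Serve data back to the end user from local storage.
        conn = sqlite3.connect(DB_PATH)
        row = conn.execute("SELECT id, name FROM customers WHERE id = ?",
                           (customer_id,)).fetchone()
        conn.close()
        if row is None:
            return jsonify(error="not found"), 404
        return jsonify(id=row[0], name=row[1])

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=8080)  # one virtual server doing both jobs

Splitting this across two (or more) virtual servers would simply mean pointing the application at a database host on another instance rather than at a local file.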

For larger demands, other architectures can be provided, splitting centres of processing across multiple virtual machines and enabling degrees of “auto scaling”, so that spikes or longer-term increases in demand for computing resource are met automatically by more power or more machines becoming active. In this way, any scale of demand can be serviced, up to the hungriest banking application or retail site with hundreds of thousands of concurrent users, usually with no slowdown or failure. On such foundations is modern society now based.
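
To make “auto scaling” concrete, here is a minimal sketch of attaching a target-tracking scaling policy to an AWS auto scaling group with boto3 (the group name and target value are hypothetical assumptions, not from the article):

    import boto3

    autoscaling = boto3.client("autoscaling")

    # Keep average CPU across the group at roughly 50%; AWS then adds or
    # removes virtual machines automatically as demand spikes or subsides.
    autoscaling.put_scaling_policy(
        AutoScalingGroupName="web-tier",      # hypothetical group name
        PolicyName="cpu-target-tracking",
        PolicyType="TargetTrackingScaling",
        TargetTrackingConfiguration={
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ASGAverageCPUUtilization"
            },
            "TargetValue": 50.0,
        },
    )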

These models offer many benefits. Businesses no longer need the server room, and therefore need a much smaller I.T. department. There are cost benefits, as you only pay for the servers that are “switched on” (virtually, remember, not literally at the hardware level), so if you are running an in-house customer management application you only need it “on” during office hours; when you’re tucked up in bed, so is your server. Servers can also be configured to be “on demand”, so if you have a training server, for example, you only pay for it when someone needs to be trained.
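
As a sketch of the “switched on only when needed” idea, a scheduled job could stop a virtual server at night and start it in the morning using boto3 (the instance ID is hypothetical):

    import boto3

    ec2 = boto3.client("ec2")
    INSTANCE_ID = "i-0123456789abcdef0"  # hypothetical instance ID

    def office_closed():
        # Stop the server at the end of the working day;
        # compute charges for the instance stop with it.
        ec2.stop_instances(InstanceIds=[INSTANCE_ID])

    def office_open():
        # Start it again before the staff arrive.
        ec2.start_instances(InstanceIds=[INSTANCE_ID])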

There are benefits for software developers too. In the 1980s, developing a program required the programmer to know the detailed architecture of the computer it was targeted to run on: how much RAM, how much disk space and so on. Further back (I was there!), the programmer had to know about the “bits and bytes” at the lowest level. Nowadays, the computing power can be assumed, and software applications can be built with no knowledge of the underlying hardware or where they will reside. Developers do still need to understand server architecture, though.

This brings us nicely to the era of so-called “serverless” technology. This does not refer to peer-to-peer networks that literally have no server, but to the further virtualisation of processing. It’s a logical next step from the standard cloud paradigms and is particularly useful for highly transactional computing, such as payment processing, messaging and performing calculations: repetitive, atomic tasks. Actually, most things we ask computers to do are repetitive! The idea is to break software down into small functions that each do a specific task. This is nothing new in principle, as “modularity” has been a thing in programming for over 30 years, but now the emphasis is more on the “function” (the process we want to execute) than on the “object” (the data thing we are working with). AWS calls such a function a “lambda”.

For example, we may design some code to “send an invoice”. With a serverless model, we don’t have a web server permanently running on a defined virtual computer, waiting to send invoices. Instead, we send the request to the cloud provider, and the cloud provider is in complete control of where that piece of code is executed. So now we not only don’t know what hardware our program is running on, we don’t even know which virtual server it’s running on! It just doesn’t matter. All we care about is that our request gets processed efficiently and quickly. Notice something exciting? In the traditional cloud model, requests are processed more or less in a queue (albeit this queue might be served by several machines at the same time); with serverless, however, if we suddenly threw 100 invoice-sending requests at the cloud at the same time, they could in theory ALL be executed in parallel! So the scaling and speed, especially for sudden peaks of demand, go to a whole new level.
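
A minimal sketch of the “send an invoice” lambda in Python might look like the following; the handler signature is AWS Lambda’s convention, while the request shape and invoicing logic are hypothetical:

    import json

    def lambda_handler(event, context):
        # AWS invokes this on demand, somewhere in its fleet; we never
        # know, or care, which machine or virtual server runs it.
        invoice = json.loads(event["body"])   # hypothetical request shape
        send_invoice(invoice["customer"], invoice["amount"])
        return {"statusCode": 200,
                "body": json.dumps({"status": "sent"})}

    def send_invoice(customer, amount):
        # Hypothetical placeholder for the real invoicing work
        # (render a PDF, email it, record it, and so on).
        print(f"Invoice for {amount} sent to {customer}")

A hundred simultaneous requests would simply mean the provider running a hundred copies of this handler at once.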

There are other benefits too. Again, there is potentially a cost benefit: pay only for the computing power used, with no paying for idle time at all. There are also benefits for support. Even with a virtual server, you need someone to look after it specifically. It still needs operating system updates, backup, security provision, fault monitoring and so on. Of course, so do the virtual servers and actual hardware underlying serverless computing, BUT this is now entirely the responsibility of the cloud provider, not you or your “I.T. crowd”. You no longer need to worry about designing the server architecture, either.

There are some disadvantages, however. Whilst cost can be a benefit, serverless cost is less controlled: there is no spending limit unless one is explicitly imposed, so, for example, a denial-of-service style cyber attack could force a huge spike in cost if left unchecked. Security is arguably reduced, or at least harder to control.
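
One common mitigation is to cap how many copies of a function may run at once. A minimal sketch using boto3 (the function name and limit are hypothetical):

    import boto3

    lam = boto3.client("lambda")

    # Cap the function at 100 simultaneous executions; beyond that,
    # extra requests are throttled rather than executed and billed,
    # which bounds the cost of a sudden (or malicious) spike.
    lam.put_function_concurrency(
        FunctionName="send-invoice",     # hypothetical function name
        ReservedConcurrentExecutions=100,
    )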

Initially, serverless computing was confined to the processing (i.e. the web server function), still requiring a database behind it in a known location, which itself still required attention to architecture, licensing, backup and so on. More recently, however, models have emerged that ALSO make the database serverless, providing a complete “serverless” model.
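
As a sketch of the fully serverless picture, the invoice lambda above could keep its data in a serverless database such as AWS’s DynamoDB, leaving no database server to size, patch or back up (the table and field names are hypothetical):

    import boto3

    dynamodb = boto3.resource("dynamodb")
    table = dynamodb.Table("invoices")   # hypothetical table name

    def record_invoice(invoice_id, customer, amount):
        # No database server in a known location: the provider stores
        # the item wherever it likes and bills per request. Numeric
        # values should be int or Decimal (boto3 rejects float).
        table.put_item(Item={
            "invoice_id": invoice_id,
            "customer": customer,
            "amount": amount,
        })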

Serverless may not be the answer for all applications, but we will see this model used increasingly. It may even be that your business application is using serverless technology behind the scenes without you even knowing it!

At Blueberry, the solid cloud technology underlying our bespoke software applications is BBWT3 (Blueberry Web Template, third generation), which can be deployed on Microsoft Azure, AWS or any other cloud, as well as on premise where needed. We now also offer a serverless variant of BBWT3 for AWS that can reduce the cost and increase the performance of your critical business software applications.
