The Tao of Modern Application Delivery

The recent AWS outage again raises the question of public vs. private cloud technologies. How do I architect my systems to provide maximum flexibility with regard to where they run? Microservices are one step toward this goal, but the workflow for deploying and managing microservices across public and private clouds is still an open problem.

The prevailing architecture in the software industry has evolved from monolithic (mainframe) to two-tier (client-server) to multi-tier (web/app server) and is currently in the midst of another evolution towards highly componentized distributed applications (containerized microservices).

These new microservice-based or “hyperscale” applications are designed to support massive user volumes on web and mobile. The smaller code base of each service is easier to understand, and the resulting systems are generally easier to scale and more resilient. Additionally, packaging applications as smaller, distributed components makes moving between public and private cloud platforms more achievable.

BUT, distributed systems are incredibly hard to build, deploy, debug and manage. Having personally built software (albeit decades ago, mostly in C/C++), I can attest that writing clean and safe concurrent programs has historically been incredibly daunting (mutexes and semaphores, ugh!).
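
To make that pain concrete, here is a minimal sketch (in Go rather than the C/C++ of my era, chosen purely for brevity) of the discipline even a trivial shared counter demands. Everything in it is illustrative, not production code:

```go
// A minimal sketch of why shared-state concurrency is hard: even a
// trivial counter needs a lock, and forgetting that lock anywhere
// silently corrupts data instead of failing loudly.
package main

import (
	"fmt"
	"sync"
)

type Counter struct {
	mu sync.Mutex // guards n
	n  int
}

func (c *Counter) Inc() {
	c.mu.Lock()
	defer c.mu.Unlock() // defer releases the lock even if we panic
	c.n++
}

func main() {
	var c Counter
	var wg sync.WaitGroup
	for i := 0; i < 1000; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			c.Inc() // without the mutex, these increments race
		}()
	}
	wg.Wait()
	fmt.Println(c.n) // prints 1000; drop the lock and it usually won't
}
```

Forget the Lock/Unlock pair in just one code path and the program still compiles and usually still passes tests; it simply corrupts data under load. That is what makes concurrent and distributed code so daunting.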

AND, building and managing private cloud infrastructure is also hard and expensive. Leaving capex aside, just configuring servers, networks and storage is time-consuming, not to mention the ongoing costs of operations, power, and cooling.

VMware helped popularize virtualization and enabled the packaging of applications (plus operating systems) into virtual machines. However, while virtual machine packaging may have worked for multi-tier or Windows applications, it is not suitable for hyperscale applications handling modern traffic levels. Hyperscale apps require a new packaging metaphor, hence the rise of Docker and the container movement.

The rise of containers adds another layer of complexity to the scenario described above. Now a company is responsible for managing the full software development, deployment and monitoring lifecycle, as well as the underlying infrastructure on public and private clouds. Building a Docker container is easy. Deploying thousands of containers on hundreds of hosts using a secure, technology-agnostic workflow is hard.

The developer workflow is relatively well defined at this point: most organizations use GitHub as their platform for collaborative software development. But how does a company manage the rest of the application delivery workflow? How can development, operations and security teams collaborate to safely and securely deploy and manage thousands of microservices in production?

Over the past several years, the team at HashiCorp has been systematically breaking the application delivery workflow problem down into separate components (please do read the Tao of HashiCorp). Each of their open source projects (Packer, Consul, Terraform, and Vault) is a powerful, discrete element in application delivery; a small sketch of Consul in action follows the list below. Even more exciting are the two newest additions to the HashiCorp lineup:

  • Nomad - a distributed, highly available, datacenter-aware scheduler
  • Otto - application definition and deployment
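
As a concrete taste of one of those building blocks, here is a minimal, hypothetical sketch of registering a microservice with Consul via its Go client (github.com/hashicorp/consul/api); the service name, port, and health-check endpoint are assumptions for illustration:

```go
// A minimal sketch of service registration with Consul's Go client.
// The service name, port, and health-check endpoint are illustrative
// assumptions, not taken from any real deployment.
package main

import (
	"log"

	"github.com/hashicorp/consul/api"
)

func main() {
	// Connect to the local Consul agent (default: 127.0.0.1:8500).
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	// Register this instance so other services can discover it by name.
	reg := &api.AgentServiceRegistration{
		Name: "web", // hypothetical service name
		Port: 8080,  // hypothetical port
		Check: &api.AgentServiceCheck{
			HTTP:     "http://127.0.0.1:8080/health", // hypothetical endpoint
			Interval: "10s",
		},
	}
	if err := client.Agent().ServiceRegister(reg); err != nil {
		log.Fatal(err)
	}
	log.Println("registered service 'web' with the local Consul agent")
}
```

Each tool exposes this same kind of small, scriptable surface, which is what makes them composable into a larger delivery workflow rather than a monolithic platform.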

Atlas unites these components to provide cross-functional teams with a "hub" for secure application delivery. Just as GitHub has become the de facto collaboration solution for software development, the goal of Atlas is to become the de facto solution for the deployment, maintenance and security of applications across both public and private clouds.

Congratulations to Mitchell, Armon and the entire HashiCorp team on continuing to fulfill your mission of modernizing application delivery!
