A Comprehensive Guide to Building & Deploying Spring Boot Applications on AWS
Shanoj Kumar V
VP - Senior Technology Architecture Manager @ Citi | LLMs, AI Agents & RAG | Cloud & Big Data | Author
AWS (Amazon Web Services) is a cloud platform that lets you build, deploy, and run applications and services. Among its many offerings are the building blocks needed to host a Spring Boot application backend. This article walks through how to set one up on AWS.
Part - 1
The first step in configuring an AWS Spring Boot application backend is to configure a source system to send files to S3. S3 (Simple Storage Service) is an object-based storage service that is highly scalable and used to store and retrieve data. Any system capable of sending files to S3, such as a database or an application, can serve as the source system. We can set up an event trigger that will be triggered when new files are added to S3 once the files are in S3. This event trigger can be configured using AWS Lambda. Lambda is a serverless compute service that allows you to run code without the need for server provisioning or management.
import json
import boto3

s3 = boto3.client('s3')
lambda_client = boto3.client('lambda')

def lambda_handler(event, context):
    # Extract the bucket and object key from the S3 event notification
    s3_bucket = event['Records'][0]['s3']['bucket']['name']
    s3_key = event['Records'][0]['s3']['object']['key']

    # Asynchronously invoke the downstream function that processes the file
    lambda_client.invoke(
        FunctionName='your_lambda_function',
        InvocationType='Event',
        Payload=json.dumps(event),
    )
When the event trigger is triggered, the Lambda function will be executed. The Lambda function will read the data from S3 and load it into an RDS (Relational Database Service) instance. RDS is a managed relational database service that makes it easy to set up, operate, and scale a relational database in the cloud.
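As a rough sketch of that loader (the table, column, and endpoint names are placeholders, and a MySQL-compatible driver such as pymysql is assumed to be packaged with the function), the S3-to-RDS step might look like:

```python
import csv
import io

def rows_from_csv(body_text):
    """Parse the CSV body of an S3 object into a list of row dicts."""
    return list(csv.DictReader(io.StringIO(body_text)))

def lambda_handler(event, context):
    # boto3 is available in the Lambda runtime; pymysql is a
    # hypothetical driver choice bundled with the deployment package.
    import boto3
    import pymysql

    bucket = event['Records'][0]['s3']['bucket']['name']
    key = event['Records'][0]['s3']['object']['key']

    body = boto3.client('s3').get_object(
        Bucket=bucket, Key=key)['Body'].read().decode('utf-8')
    rows = rows_from_csv(body)

    # The 'orders' table and its columns are illustrative placeholders
    conn = pymysql.connect(host='my-rds-endpoint', user='admin',
                           password='change-me', database='mydb')
    with conn.cursor() as cur:
        cur.executemany(
            "INSERT INTO orders (id, amount) VALUES (%s, %s)",
            [(r['id'], r['amount']) for r in rows],
        )
    conn.commit()
```

Keeping the parsing in a separate function (`rows_from_csv`) makes the transformation testable without any AWS dependencies.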
After the data has been loaded into the RDS instance, the Lambda function will send a status message to an SNS (Simple Notification Service) topic. SNS is a fully managed messaging service that makes it easy to send notifications to various endpoints, such as email, SMS, and HTTP/S.
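The status notification itself is a single `sns.publish` call. A minimal sketch, assuming the topic ARN is passed in from configuration (the payload fields are placeholders):

```python
import json

def build_status_message(s3_key, row_count, status):
    """Assemble the JSON status payload published after a load attempt."""
    return json.dumps({
        'source_file': s3_key,
        'rows_loaded': row_count,
        'status': status,
    })

def publish_status(topic_arn, s3_key, row_count, status='SUCCESS'):
    # boto3 is available in the Lambda runtime; the topic ARN
    # would come from an environment variable in practice.
    import boto3
    boto3.client('sns').publish(
        TopicArn=topic_arn,
        Subject='Data load status',
        Message=build_status_message(s3_key, row_count, status),
    )
```

Subscribers on the topic (email, SMS, HTTP/S endpoints) all receive the same message, so keeping it machine-readable JSON is a convenient default.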
End users send requests to an Application Load Balancer (ALB). ALB is a fully managed service that routes incoming traffic to the correct service or application. The ALB forwards the request into the EKS (Elastic Kubernetes Service) cluster via the AWS Load Balancer Controller. EKS is a fully managed Kubernetes service that makes it easy to deploy, manage, and scale containerized applications using Kubernetes. The AWS Load Balancer Controller routes the request to the Spring Boot application service, whose pods in the EKS cluster are responsible for handling the request and generating a response. The Spring Boot pods query Amazon Aurora to generate that response. Amazon Aurora is a fully managed relational database service compatible with MySQL and PostgreSQL.
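The routing chain above is wired up by exposing the Spring Boot service through a Kubernetes Ingress that the AWS Load Balancer Controller translates into an ALB. A minimal sketch of that manifest, expressed here as a Python dict so it stays in the article's language (the service name, port, and ingress name are placeholders):

```python
import json

# Hypothetical Ingress for the Spring Boot service. The 'alb'
# ingress class tells the AWS Load Balancer Controller to
# provision an internet-facing ALB targeting the pods directly.
ingress = {
    'apiVersion': 'networking.k8s.io/v1',
    'kind': 'Ingress',
    'metadata': {
        'name': 'spring-boot-ingress',
        'annotations': {
            'alb.ingress.kubernetes.io/scheme': 'internet-facing',
            'alb.ingress.kubernetes.io/target-type': 'ip',
        },
    },
    'spec': {
        'ingressClassName': 'alb',
        'rules': [{
            'http': {
                'paths': [{
                    'path': '/',
                    'pathType': 'Prefix',
                    'backend': {
                        'service': {
                            'name': 'spring-boot-service',  # placeholder
                            'port': {'number': 8080},
                        }
                    },
                }]
            }
        }],
    },
}

# Serialized to JSON (valid YAML), this can be piped to kubectl apply -f -
manifest = json.dumps(ingress, indent=2)
```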
Part - 2
A deployment pipeline is a set of automated processes that are used to build, test, and deploy software:
Committing the code to an AWS CodeCommit repository is the first step in creating a deployment pipeline. CodeCommit is a fully managed source control service for storing, managing, and tracking code changes.
Once the code is committed to the CodeCommit repository, it will trigger an AWS CodePipeline. CodePipeline is a fully managed continuous delivery service that allows you to automate your application's build, test, and deployment.
The first phase of the CodePipeline is the build phase. In this phase, the code is built and a Docker image is created. The Docker image is then pushed to the Amazon Elastic Container Registry (ECR) at the end of the task. ECR is a fully managed Docker container registry that makes it easy to store, manage, and deploy Docker images.

When the build task succeeds, a post-build phase is triggered. In the post-build phase, a container image manifest is created on top of the Docker images; the manifest alias is used to route the container image based on the architecture of the requester. The post-build task commits the manifest to ECR, then runs kubectl to apply Kubernetes config changes and update the Spring Boot service image to the newly created manifest alias.
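To make the post-build phase concrete, here is a hedged sketch of a helper that assembles those CLI calls; in practice these commands live in the CodeBuild buildspec, and the registry, repository, and deployment names below are placeholders:

```python
def post_build_commands(registry, repo, tag, arch_tags=('amd64', 'arm64')):
    """Build the docker-manifest and kubectl commands run after a
    successful build. All names are illustrative placeholders."""
    alias = f'{registry}/{repo}:{tag}'
    per_arch = [f'{alias}-{a}' for a in arch_tags]
    return [
        # Create a multi-architecture manifest list over the per-arch images
        f"docker manifest create {alias} {' '.join(per_arch)}",
        # Push (commit) the manifest list to ECR
        f"docker manifest push {alias}",
        # Point the Spring Boot deployment at the new manifest alias;
        # each node then pulls the image matching its own architecture
        f"kubectl set image deployment/spring-boot-app app={alias}",
    ]
```

The key idea is that only the alias tag ever appears in the Kubernetes spec; the per-architecture tags stay an implementation detail of the build.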
import boto3

codepipeline = boto3.client('codepipeline')

# Create a new CodePipeline. Note that the API expects the whole
# pipeline definition wrapped in a single 'pipeline' dictionary.
response = codepipeline.create_pipeline(
    pipeline={
        'name': 'MyPipeline',
        'roleArn': 'arn:aws:iam::788956789012:role/MyPipelineRole',
        'artifactStore': {
            'type': 'S3',
            'location': 'my-bucket'
        },
        'stages': [
            {
                'name': 'Build',
                'actions': [
                    {
                        'name': 'Build',
                        'actionTypeId': {
                            'category': 'Build',
                            'owner': 'AWS',
                            'provider': 'CodeBuild',
                            'version': '1'
                        },
                        'runOrder': 1,
                        'configuration': {
                            'ProjectName': 'MyProject'
                        },
                        'inputArtifacts': [],
                        'outputArtifacts': [
                            {'name': 'MyApp'}
                        ]
                    }
                ]
            },
            {
                'name': 'Deploy',
                'actions': [
                    {
                        'name': 'Deploy',
                        # The deploy to EKS is done by a second CodeBuild
                        # project that runs kubectl in its post-build phase;
                        # CodePipeline has no 'ECR' deploy provider.
                        'actionTypeId': {
                            'category': 'Build',
                            'owner': 'AWS',
                            'provider': 'CodeBuild',
                            'version': '1'
                        },
                        'runOrder': 1,
                        'configuration': {
                            'ProjectName': 'MyDeployProject'
                        },
                        'inputArtifacts': [
                            {'name': 'MyApp'}
                        ]
                    }
                ]
            }
        ]
    }
)
The Spring Boot service picks up the config change, and each pod downloads the image matching the architecture of its hosting node via the manifest alias.