Serverless Telegram Bot with Lambda & Pulumi
Omer Holzinger
Introduction
How do you write a chatbot? There are countless ways. In this series we will focus on writing one that makes sense as a real product, from early-stage PoC to a real-world product involving different teams and technologies. Welcome to the crossroads where innovation meets pragmatism in the craft of software development. This series is your navigational beacon through the landscape of modern software creation, where cutting-edge tools, adherence to best practices, and strategic technology choices converge to turn your ideas into scalable, real-world applications with finesse.
I will use this series to introduce some of my favorite tools and frameworks, best practices, and my considerations when making architecture decisions. At the heart of this series is a serverless chatbot. Among the most popular apps these days, chatbots offer an exceptional blueprint for learning the ropes of software development. In addition to being a natural use case for today's hottest technology, LLMs, they combine responsive design, near real-time processing, and an immediate user interface, letting us interact with our creations from the get-go.
We start with two essential technologies: AWS Lambda, the epitome of serverless computing, and Pulumi, the new kid on the block of infrastructure as code (IaC). The serverless approach is perfect for MVPs: cost-effective and effortlessly scalable. But there's no denying its limitations, which can sometimes raise concerns about future pains. That's why we'll structure our chatbot for an easy transition to a more traditional architecture later on, with a sneak peek into dockerizing our serverless code in upcoming posts.
Pulumi changes the IaC game by empowering developers to declare infrastructure in their programming language of choice, merging deployment with development in an unprecedented dance of efficiency and flexibility.
While this post focuses on AWS Lambda and Pulumi, through this series we'll chart a course from the whys and hows of technology choices to a tangible, interactive product. We'll expand our chatbot's functionality, infuse it with artificial intelligence, learn different AI techniques, and, all along, be introduced to new technologies and tools, including LangChain and LlamaIndex.
In the following sections we'll write some code for our chatbot and then deploy it to AWS. The full code, including the chatbot and deployment, can be found here.
Serverless Architecture & AWS Lambda: A Primer for Chatbots
At its essence, serverless computing is about abstracting server management away from the app development process. It's not that servers vanish; rather, the cloud provider dynamically manages the allocation of machine resources. For engineering leaders, it's the perfect architecture for new products, delivering quick turnaround time and low cost at low usage. For developers, it's freedom: it means launching applications without getting bogged down by the underlying infrastructure.

AWS Lambda epitomizes this serverless ethos. It runs your code in response to events, such as a message from a user to a chatbot, without the need for you to provision or manage servers. It scales automatically with the workload, and you're billed solely for the compute time you consume. For developers, Lambda represents efficiency and focus, as it handles the heavy lifting of infrastructure management.
Chatbots are naturally event-driven, springing into action upon user input. AWS Lambda excels here, processing events with low latency. It's engineered to respond at a moment's notice, providing the near real-time interaction that is critical for an engaging chatbot experience, while scaling down to zero when unused: you only pay while your code is running. This pay-per-use model is economical for the fluctuating workloads typical of chatbots and MVPs alike. It's the best of both worlds: there's no cost for idle time, which can be significant with traditional server hosting, and the service scales up seamlessly during traffic spikes, ensuring your chatbot remains responsive, no matter the number of users.
Building
We start by defining methods to handle different types of messages. The bot may seem synchronous to the user, but in fact it is asynchronous: it receives an event when a message arrives and may then send a message in response. Since this is an asynchronous workflow, we'll define our handlers as Python async functions. In this example we want to handle one command, the start command, and two types of user messages: one handler for messages sent as a DM and another for messages sent in a group.
# bot/handler.py
import asyncio, json, logging, os
from telegram import Update
from telegram.ext import Application, CommandHandler, ContextTypes, MessageHandler, filters

logger = logging.getLogger()
logger.setLevel(logging.INFO)
TOKEN = os.environ["TELEGRAM_BOT_TOKEN"]  # set by the Lambda environment (see the infra code below)

async def telegram_start_handler(update: Update, context: ContextTypes.DEFAULT_TYPE) -> None:
    logger.info("Received /start command")
    await context.bot.send_message(chat_id=update.effective_chat.id, text="Hello! I'm your bot. How can I assist you today?")

async def telegram_group_message_handler(update: Update, context: ContextTypes.DEFAULT_TYPE) -> None:
    logger.info(f"Received group message: {update.message.text}")
    await context.bot.send_message(chat_id=update.effective_chat.id, text="This is a response to a group chat message.")

async def telegram_private_message_handler(update: Update, context: ContextTypes.DEFAULT_TYPE) -> None:
    logger.info(f"Received private message: {update.message.text}")
    await context.bot.send_message(chat_id=update.effective_chat.id, text="This is a response to a private message :)")
To connect our Telegram handlers to AWS Lambda, we'll create a Lambda handler. The Lambda handler is expected to be synchronous; to bridge that gap, we'll use asyncio.run to launch our async handler. In our async Lambda handler we can register the Telegram handlers and process the body of the Lambda event argument, which in our case is the body of the HTTP POST request we received from Telegram.
# bot/handler.py (continued)
def lambda_handler(event, context):
    logger.info("Received event: %s", event)
    try:
        asyncio.run(async_lambda_handler(event))
    except Exception:
        # Log the full traceback and signal failure to API Gateway
        logger.exception("An error occurred while processing the update")
        return {'statusCode': 500}
    logger.info("Successfully processed event")
    return {'statusCode': 200}

async def async_lambda_handler(event):
    application = Application.builder().token(TOKEN).build()

    # Register the handlers we defined above
    application.add_handler(CommandHandler("start", telegram_start_handler))
    application.add_handler(MessageHandler(filters.ChatType.GROUPS & filters.TEXT & ~filters.COMMAND, telegram_group_message_handler))
    application.add_handler(MessageHandler(filters.ChatType.PRIVATE & filters.TEXT & ~filters.COMMAND, telegram_private_message_handler))

    # Decode the incoming Telegram update from the HTTP POST body
    if event.get('body'):
        update_dict = json.loads(event['body'])
        async with application:
            update = Update.de_json(update_dict, application.bot)
            await application.process_update(update)
That's it! This is all the code we need for our chatbot. Now all that's left is to deploy it.
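Before deploying, you can give the handler a quick local smoke test. The snippet below is a minimal sketch with a hand-crafted, hypothetical update payload (all field values are made up); it assumes TELEGRAM_BOT_TOKEN is set to a valid token, since the Application contacts Telegram's API when it starts up.

# bot/handler.py -- optional local smoke test (hypothetical payload)
if __name__ == "__main__":
    fake_event = {"body": json.dumps({
        "update_id": 1,
        "message": {"message_id": 1, "date": 0, "text": "hello",
                    "chat": {"id": 123, "type": "private"},
                    "from": {"id": 123, "is_bot": False, "first_name": "Test"}}})}
    print(lambda_handler(fake_event, None))  # prints the handler's status dict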
Infrastructure as Code with Pulumi: Streamlining Deployment
Infrastructure as Code has revolutionized the way we think about provisioning and managing infrastructure. Pulumi brings a fresh perspective to this concept, allowing developers to define infrastructure using general-purpose programming languages. This shift enables better abstraction, reuse, and sharing of code, aligning infrastructure management closely with application development practices. Unlike traditional IaC tools that require learning domain-specific languages, Pulumi speaks your language, whether it's Python, JavaScript, TypeScript, or Go. This means less context-switching and a more intuitive approach to declaring cloud resources. Pulumi also offers deep integration with existing software development tools and practices, making it a natural fit for modern DevOps workflows.

With Pulumi, setting up the infrastructure for an AWS Lambda-based chatbot becomes a matter of writing a few lines of code. It allows you to specify the necessary AWS resources, like Lambda functions, API Gateway, and more, within your application codebase.
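To make that concrete, here is a minimal, illustrative Pulumi program in Python. The S3 bucket is just a stand-in resource to show the shape of a program, not part of our bot's infrastructure.

# __main__.py -- an illustrative, minimal Pulumi program (not our bot's infra)
import pulumi
import pulumi_aws as aws

bucket = aws.s3.Bucket("example-bucket")  # declaring a resource is instantiating a class
pulumi.export("bucket_name", bucket.id)   # stack outputs are plain function calls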
Building
Before we can start building with Pulumi, we have to install it. To install Pulumi, follow the documentation: pulumi.com/docs/install/
Once you have Pulumi installed, create a new directory for the infrastructure. We'll call it infra. Go into the infra directory and use the pulumi new command to instantiate a new project; Pulumi will walk you through the process. You can use the aws-python template or play around with Pulumi's integrated AI feature, which lets you describe your desired infrastructure and language and get starting deployment code. We will be covering all the code you need and explaining our choices. Pulumi will create several files and a Python virtual env with the required dependencies.

We want to describe our infrastructure using Python. Our bot will run as a Lambda, so that's one piece, but we have a few other moving parts. We'll want a separate Lambda layer for the dependencies: this is a best practice that keeps our Lambda function small and lets us reuse the dependencies later. We'll also want an API Gateway to stand between our Lambda and the world. A Lambda Function URL is an alternative worth considering: while API Gateway offers better security controls, a Function URL can save cost, since API Gateway charges per API call and for data transfer, while Function URLs come at no extra charge. In this post we will use API Gateway, just to demonstrate how it's done, but if your application has a high request rate and large data transfers, Function URLs could make a real difference to your costs.
app/
├─ bot/
│  ├─ handler.py
│  └─ requirements.txt
└─ infra/
   ├─ Pulumi.dev.yaml
   ├─ Pulumi.yaml
   ├─ requirements.txt
   ├─ __main__.py
   └─ venv/
First, the setup_lambda_layer() function prepares a Lambda layer containing all the dependencies our bot needs. By packaging these into a layer, we keep our Lambda function lightweight and make the dependencies reusable, an excellent optimization practice. The dependencies are defined in the requirements.txt located in the bot directory, and a zip file is created for the layer.
# infra/bot_lambda.py
import json
import os

import pulumi
import pulumi_aws as aws
from pulumi_aws import iam

from telegram_webhook_provider import Webhook
from utils import zip_directory, install_dependencies_and_prepare_layer, PY_VER

bot_dir = '../bot'

def setup_lambda_layer():
    # Prepare the Lambda layer package with the bot's dependencies
    lambda_layer_zip_file = 'dependencies_layer.zip'
    layer_requirements_path = os.path.join(bot_dir, 'requirements.txt')
    output_file_name = install_dependencies_and_prepare_layer(layer_requirements_path, lambda_layer_zip_file)
    zipped_code = pulumi.FileArchive(output_file_name)

    # Create the Lambda layer
    lambda_layer = aws.lambda_.LayerVersion("myLambdaLayer",
        layer_name="my-dependencies-layer",
        code=zipped_code,
        compatible_runtimes=[PY_VER],
    )
    return lambda_layer
Next, we have the setup_lambda_function() function and an attached IAM role. The role manages the permissions the Lambda function needs to run; attaching the AWSLambdaBasicExecutionRole policy allows our Lambda function to write logs to CloudWatch. setup_lambda_function() zips up our bot's code, excluding the dependencies (they're already in our layer), and creates the AWS Lambda function. Note how we handle secrets: our bot's token is securely pulled from Pulumi's configuration management, ensuring sensitive information is never hard-coded. The function returns the Lambda function for use in subsequent steps.
def setup_iam_role_for_lambda():
    # Create an IAM role that the Lambda service is allowed to assume
    role = iam.Role("lambdaRole",
        assume_role_policy=json.dumps({
            "Version": "2012-10-17",
            "Statement": [{
                "Action": "sts:AssumeRole",
                "Principal": {"Service": "lambda.amazonaws.com"},
                "Effect": "Allow"
            }]
        }))

    # Allow the function to write logs to CloudWatch
    iam.RolePolicyAttachment("lambdaLogs",
        role=role.name,
        policy_arn="arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole")
    return role
def setup_lambda_function(lambda_layer, role, token_config_key="telegram_bot_token"):
    # Zip the function code only (the dependencies live in the layer)
    zip_file = 'bot_code.zip'
    zip_directory(bot_dir, zip_file)

    # Create the Lambda function
    lambda_function = aws.lambda_.Function("TelegramBot",
        handler="handler.lambda_handler",
        role=role.arn,
        runtime=PY_VER,
        code=pulumi.FileArchive(zip_file),
        layers=[lambda_layer.arn],
        environment=aws.lambda_.FunctionEnvironmentArgs(
            variables={
                # The token is pulled from Pulumi's encrypted config, never hard-coded
                "TELEGRAM_BOT_TOKEN": pulumi.Config().require_secret(token_config_key)
            }
        ))
    pulumi.export('lambda_function_name', lambda_function.name)
    return lambda_function
setup_api_gateway() sets up an HTTP API Gateway as the public-facing endpoint for our Telegram bot. The POST route is configured to trigger our Lambda function, and we grant the API Gateway permission to invoke the Lambda function with aws.lambda_.Permission. The API endpoint URL is then exported, providing us with the address we'll use to configure our Telegram bot's webhook. Finally, register_webhook() registers our newly created API Gateway with Telegram.
def setup_api_gateway(lambda_function):
    # Create the HTTP API Gateway with a single POST route targeting our Lambda
    api = aws.apigatewayv2.Api("telegramBotApi", protocol_type="HTTP",
        route_key="POST /bot",
        target=lambda_function.invoke_arn)

    # Grant API Gateway permission to invoke the Lambda function
    aws.lambda_.Permission("ApiGatewayPermission",
        action="lambda:InvokeFunction",
        function=lambda_function.name,
        principal="apigateway.amazonaws.com",
        source_arn=api.execution_arn.apply(lambda arn: f"{arn}/*/*"))

    # Export the API endpoint URL
    pulumi.export('api_url', api.api_endpoint)
    return api
def register_webhook(api, token_config_key="telegram_bot_token"):
    # Register the Telegram webhook using the full URL including the /bot route
    webhook_url = pulumi.Output.concat(api.api_endpoint, "/bot")
    Webhook("telegramWebhookRegistration",
        token=pulumi.Config().require_secret(token_config_key),
        url=webhook_url)

    # Export the full webhook URL including the /bot route
    pulumi.export('api_url_with_bot', webhook_url)
Missing Pieces
We’re almost ready to bring our serverless chatbot to life. The utils module referenced in our code takes care of zipping up both the Lambda layer and the function code—essential steps for deployment. For a peek at the mechanics behind these operations, take a look at the code in the accompanying GitHub repository linked with this post.
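If you want a feel for what that module does, here is a minimal sketch. The helper names and PY_VER match the imports above, but the bodies below are assumptions; the repository version is more robust.

# infra/utils.py -- a minimal sketch; the repository version is more robust
import os
import shutil
import subprocess

PY_VER = "python3.11"  # assumed Lambda runtime; pick the version you develop against

def zip_directory(source_dir, output_file):
    # Zip the bot's source code (shutil adds the .zip suffix itself)
    base = output_file[:-4] if output_file.endswith('.zip') else output_file
    shutil.make_archive(base, 'zip', source_dir)

def install_dependencies_and_prepare_layer(requirements_path, output_file):
    # Lambda layers expect Python dependencies under a top-level "python/" directory
    build_dir = 'layer_build'
    subprocess.run(['pip', 'install', '-r', requirements_path,
                    '-t', os.path.join(build_dir, 'python')], check=True)
    base = output_file[:-4] if output_file.endswith('.zip') else output_file
    shutil.make_archive(base, 'zip', build_dir)
    return base + '.zip'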
Also instrumental is the telegram_webhook_provider, which handles HTTPS requests to Telegram. It’s a succinct piece of code you’ll find detailed in the repository.
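As a rough idea of its shape, here is a minimal sketch of such a Pulumi dynamic provider. Telegram's setWebhook endpoint is real; the class structure below is an assumption, and the repository implementation may differ (for one thing, it should also handle webhook deletion on resource teardown).

# infra/telegram_webhook_provider.py -- a minimal sketch, not the repository version
import requests
import pulumi.dynamic as dynamic

class _WebhookProvider(dynamic.ResourceProvider):
    def create(self, props):
        # Point the bot at our API Gateway URL via Telegram's setWebhook API
        resp = requests.get(f"https://api.telegram.org/bot{props['token']}/setWebhook",
                            params={"url": props["url"]}, timeout=10)
        resp.raise_for_status()
        return dynamic.CreateResult(id_="telegram-webhook", outs=props)

class Webhook(dynamic.Resource):
    def __init__(self, name, token, url, opts=None):
        super().__init__(_WebhookProvider(), name, {"token": token, "url": url}, opts)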
Another piece omitted here is editing infra/__main__.py to call the deployment functions we've written. A minimal sketch of how it might look follows; the full version is in the repository.
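This wiring follows directly from the function signatures defined above.

# infra/__main__.py -- a minimal sketch wiring the pieces together
from bot_lambda import (setup_lambda_layer, setup_iam_role_for_lambda,
                        setup_lambda_function, setup_api_gateway, register_webhook)

layer = setup_lambda_layer()
role = setup_iam_role_for_lambda()
function = setup_lambda_function(layer, role)
api = setup_api_gateway(function)
register_webhook(api)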
Before proceeding, ensure you have access to AWS. This means having an active AWS account and the AWS CLI installed and configured on your machine. Follow the official AWS documentation for guidance on setting this up if you haven’t already.
Another key ingredient is the Telegram bot token. To obtain one, chat with @BotFather on Telegram and use the /newbot command. After answering @BotFather's questions, you'll be awarded a token. Secure this token within Pulumi by running pulumi config set --secret telegram_bot_token <YOUR_TELEGRAM_TOKEN> in your CLI. This step stores your token as an encrypted secret, a best practice for sensitive information.
Now, with the stage set, run pulumi up. This command triggers Pulumi to orchestrate the deployment of your bot to AWS. Once completed, your chatbot should be active and ready to interact with users. Why not give it a test drive?
Conclusion
As we draw this post to a close, let’s pause to appreciate our progress and anticipate the journey ahead. This series embarked on a mission to deftly navigate the nexus of cutting-edge technology and strategic thinking, aiming to construct scalable, production-level software with an eye on simplicity and finesse. Our intent is to sculpt an MVP that’s mindful of the future—minimizing technical debt while leveraging the best tools and practices available.
Today marked a pivotal stride toward that goal. We unraveled the complexities of serverless computing with AWS Lambda, showcasing how it can expedite the deployment and functionality of our chatbot. We then explored the seamless efficiency of Pulumi, an IaC tool that’s reshaping how we manage our deployment strategies.
Looking ahead, our next post promises to weave our Lambda functions with AWS Step Functions for enhanced orchestration, and we’ll introduce LangChain to add a layer of AI-driven dynamism to our chatbot. The conversation will only get richer from here.
Beyond that, we will plunge into the subtleties of serverless architectures, tackling everything from database integration to complex workflows. We’re not just stopping at chatbot mechanics—we’ll expand into the realms of artificial intelligence and large language models (LLMs). Expect deep dives into Retrieval-Augmented Generation (RAG) and data-centric use cases with LlamaIndex, not to mention strategies for optimizing the costs of running AI applications, ensuring that our high-tech assistant remains economically viable.
We’re crafting more than a chatbot; we’re architecting a sophisticated digital assistant designed to evolve with user needs and technological advances.