AWS Certified Cloud Practitioner Certification for Node.js developers
Published on May 6, 2020
Do you want to be more than just another Node.js developer? (although being one is a great position these days)
Do you want to get a detailed high-level understanding of AWS services and get a certification from the most valuable company on Earth?
Do you want to leverage your existing AWS experience?
In this article, you will get all you need to start and pass the AWS Certified Cloud Practitioner Certification.
This is a foundational certification that tests your understanding of AWS services offering and the problems they solve.
It does not go into great details about each service, later certifications do.
Let’s be clear, this exam does not ask you a lot of details on AWS services.
You mostly need to understand the main use cases for specific services. For example, what service to use for a specific scenario.
Remember that this is geared toward people that will not necessarily implement the architectures but at least they will understand them.
That said, it’s always best to practice implementation, that way questions are similar to scenarios you have encountered.
FYI, yes, I passed the exam; I am not writing about something I don’t know. The secret is that writing this article helped me pass it!
Why pass it?
Simple answer:
The Cloud is always a great (not good, great) choice to add to your portfolio of skills.
Frankly, AWS certifications are difficult which makes them valuable on the market.
Okay, but why not go for the Certified Developer Associate Certification directly? You are free to do the most difficult AWS certification (Solutions Architect Professional) first. All this to say that if you are reading this article, it probably means that you are new to AWS.
As a newcomer to AWS (less than 2 years of full time practice), even if you deploy Node.js apps on AWS already, I recommend passing the Cloud Practitioner first. I can bet that you will learn a lot of new things about the AWS ecosystem.
It is not enough to “know about” EC2, Lambda, Elastic Beanstalk or RDS. You need to consolidate your knowledge of the ecosystem of which those services are a part.
The AWS Certified Cloud Practitioner Certification is the foundational AWS certification and does not assume that you are a technical person. So basically, anyone, whatever their credentials, can pass it (after studying). That said, having a technical background is a great advantage because many concepts will be familiar.
As a Node.js developer, you may have used AWS on the job, while following a tutorial or used some other cloud provider like Heroku or Netlify.
AWS Certified Developer Associate Certification can be intimidating considering the amount of detailed knowledge you need. The developer certification is complex even for developers already building complex microservices architectures and web apps on AWS.
It is a great confidence booster to get an AWS certification, even the simplest one, on your path to more complex ones. Moreover, it can consolidate your dispersed knowledge about different aspects of the AWS Cloud and Cloud computing in general.
If you’re already an expert on AWS, just start from the associate-level certifications up. Otherwise, if you think starting from zero is the correct path, then the Cloud Practitioner certification is for you.
Let’s be clear here, nothing is easy with AWS. Just because the Cloud Practitioner certification is an entry level certification, it does not mean that you can pass it tomorrow with no prior AWS knowledge.
In fact, AWS recommends six months of practice, but you don’t need all that to pass the certification. For someone with a developer background and no prior AWS knowledge, I would say about a month of relaxed (but serious) preparation, with one hour of practice on the platform every day, will make you ready. For sure, you can pass it in a week or less with dedicated preparation (several hours per day).
It took me two weeks of everyday practice to pass. That said, I am using AWS at work for cloud development but my on-the-job knowledge would not have been sufficient.
In this article, we will look at AWS services in a more technical way because, after all, I assume that you are a Node.js developer (it’s in the title).
So let’s get started!
Exam Blueprint
You can find it at:
The AWS Certified Cloud Practitioner (CLF-C01) exam validates that you are able to:
Make sure to read it, it’s only two pages.
About the Exam
What is the Cloud (a.k.a. Cloud Computing)?
Cloud computing is the on-demand delivery of IT resources like compute, databases, applications and storage through a Cloud service provider like Amazon, Microsoft, Google or Alibaba.
These Cloud providers deliver their services through a platform via the Internet with pay-as-you-go pricing.
There is a meme saying that “there is no cloud, it’s just someone else’s computer”.
Indeed, you can think of Cloud computing as renting someone else’s computer by the hour, the minute or the second.
Cloud Computing Deployments
There are 3 types of deployments:
More and more enterprise-grade companies are going full public cloud.
Creating a Free Tier Account
Nothing special here, just go to https://aws.amazon.com/ and click on the “Create an AWS account” button to sign up for a free tier account.
Read more on the free tier here.
The free tier is valid for one year, so take advantage of it to pass a few certifications while training for free (or at a low cost).
My piece of advice here: the Cloud Practitioner is not about how to do things on AWS but about what services to use and why.
So please, if you are curious and start instances in services, make sure to stop and delete them right after you are done. You don’t want a surprise bill of tens or hundreds of dollars (happened to me — but I’ve learned my lesson). I will show you how to set billing alarms.
So, free tier only means that a certain amount of usage per service is offered by AWS. After that amount has been reached, you will pay.
One more thing, you can set alarms when you are about to exceed the free tier.
Creating a Billing Alarm
So you want to get a notification when you are about to exceed your budget?
Having a $10 / month budget to practice on AWS should be OK. I did not say that you will pay that each month. This is an “I am willing to pay up to” limit. In most cases, with the free tier, you will not pay anything, or a few cents or dollars here and there.
As I said before, this is your responsibility to read the free tier documentation to know exactly what and how much you can do at no cost and work within those limits.
So to create a billing alarm, log into your AWS console and go to CloudWatch service.
On the exam, if they ask you how to get automatic notifications when your account goes over some amount of dollars, the answer is to go into CloudWatch and create a billing alarm, just like we did above.
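If you prefer code over clicking around the console, here is a minimal sketch of the same alarm using the AWS SDK for JavaScript. It assumes billing alerts are already enabled in your account preferences and that an SNS topic exists to receive the notification; the alarm name, threshold and topic ARN are only illustrative.

// Billing metrics live in us-east-1, whatever regions you actually use.
const AWS = require('aws-sdk');
const cloudwatch = new AWS.CloudWatch({ region: 'us-east-1' });

cloudwatch.putMetricAlarm({
  AlarmName: 'monthly-bill-over-10-usd',            // illustrative name
  Namespace: 'AWS/Billing',
  MetricName: 'EstimatedCharges',
  Dimensions: [{ Name: 'Currency', Value: 'USD' }],
  Statistic: 'Maximum',
  Period: 21600,                                    // evaluate every 6 hours
  EvaluationPeriods: 1,
  Threshold: 10,                                    // alert past $10 of estimated charges
  ComparisonOperator: 'GreaterThanThreshold',
  AlarmActions: ['arn:aws:sns:us-east-1:123456789012:billing-alerts'], // hypothetical SNS topic
}).promise()
  .then(() => console.log('Billing alarm created'))
  .catch(console.error);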
AWS Global Cloud Infrastructure
AWS divides its infrastructure into:
AWS Regions
AWS has the concept of a Region, which is:
Unlike other cloud providers, who often define a region as a single data center, the multiple AZ design of every AWS Region offers advantages for customers.
The GovCloud (us-west and us-east) regions are used by the federal government as well as private companies, but the people operating them must be U.S. citizens on U.S. soil. There is a screening process.
AWS Availability Zones
Each AZ has independent power, cooling, and physical security and is connected via redundant, ultra-low-latency networks.
They are:
All AZ’s in an AWS Region are interconnected with high-bandwidth, low-latency networking, over fully redundant, dedicated metro fiber providing high-throughput, low-latency networking between AZ’s.
One Availability Zone may actually be several data centers very close to each other, conceptually considered as one entity.
Choosing a Region
Main reasons to choose a particular region are:
AWS Edge Locations
An edge location is a site that the AWS CloudFront service uses to cache copies of your content for faster delivery to users at any location.
They allow you to serve content from the location nearest to your users, and they are tied to one primary service (CloudFront).
Edge locations:
There are many more AWS edge locations than AWS regions.
Global AWS?Services
Here’s a list of AWS global services (more details later in this article):
Here are services that give a global view but are regional:
To remember them, think S.I.R.C.S.S., with each S referring to “Simple” N.E.S. (Notification, Email, Storage).
AWS services deployed On Premise
Yes, you can deploy some AWS services in your own data center (a.k.a On Premise).
Not all organizations allow Cloud services over the Internet (for national security reasons, as they say), but that does not mean Amazon ignored this market.
Moreover, not all projects are run in environments where an Internet connection is available all the time (or at all). Think of projects at the poles, in dense forests or even on other planets (yes, you read that right…).
Here’s the list:
So remember that to deploy your applications on-premise you can use:
Shared Responsibility Model
The motto of AWS regarding shared responsibility is:
Security and compliance is a shared responsibility between AWS and the customer
This shared model can help relieve the customer’s operational burden as AWS operates, manages and controls the components from the host operating system and virtualization layer down to the physical security of the facilities in which the service operates.
AWS Responsibility:
Customer / your Responsibility:
More on Shared Responsibility on AWS
So AWS manages security of the cloud but security in the cloud is your responsibility as a customer.
You retain control of the security you, the customer, choose to implement to protect your own content, platform, applications, systems and networks, no differently than you would on-premise.
For example, AWS is responsible for the security of the data center. You are responsible for the security of the EC2 instances by applying security patches, using encryption for your data in S3, etc.
AWS is responsible up to the software layer for managed services where you cannot directly access the operating system, like Amazon RDS or S3. But you are still responsible for encrypting your database, creating backups, etc.
It is your responsibility to rotate your access keys and enable Multi Factor Authentication (MFA) and to transmit data over HTTPS or other secure protocol.
Also AWS is responsible for training its employees and you are responsible for training yours.
Think of AWS protecting the building and the parking, and the customer responsible for what’s going on inside the building.
Tips:
More details at https://aws.amazon.com/compliance/shared-responsibility-model/
Economics of the Cloud
One thing to know: you will never be tested on prices because they change all the time. I give some of them only as an illustration.
The basic pricing policies of AWS are:
The three fundamentals of cost with AWS are:
To remember them, think Comp StoDOut (Compute, Storage, Data Out).
You need to understand that you start building with cost in mind before your infrastructure becomes large and complex.
Maximizing the power of flexibility
One key advantage of cloud resources is you don’t pay when they are not used. By turning off unused resources, you can reduce your costs by 70% or more compared to running 24/7.
Pricing Models on AWS
Depending on resources, AWS offers the following pricing:
Free Service
Here are the services which are free even after the free tier period is finished:
It’s about understanding how companies pay for traditional data centers and how that changes when they take advantage of cloud offerings.
The big idea is that you:
Capitalized Expenditure (CapEx)
This type of expense has to do with initial investment. It consists of large upfront cost when adding a building, adding new servers or any supporting equipment.
This type of expense to achieve a fixed asset (receive value over time) is referred to as CapEx.
Operating Expenditure (OpEx)
This has to do with the day-to-day expenses of doing business. For example, utilities or the data connection for a data center would be covered as OpEx because you cannot run the business without them.
After the initial build of a data center, ongoing connectivity, utility and maintenance costs are considered to be OpEx.
Handling Demand in Data Center
Let’s say that we decide to build a data center for the latest app of our company. This will be a global app with growing demand over time.
When we build our own data center, we do not get to scale it on-demand. We need to plan for the demand that we’re going to receive. Meaning that we must buy a lot of resources that will be unused at first and anticipate buying more resources to handle upcoming traffic.
The first issue with that model is that we have unused capacity. This means that we are paying for demand that we are not yet getting from our users. This expense goes unused until demand is there.
The next issue is demand overcapacity. This is the other way around. Now, we have the demand from users but we did not provision enough resources. Users will be left with outages because we didn’t allow for that amount of demand in our data center.
In terms of expenses, there is at first a large CapEx, followed by a steady OpEx during that initial period. The OpEx is a steady expenditure because it is not tied to the demand (at least in theory).
But anytime we want to increase the capability of our data center to meet demand, we will need to make another large CapEx and as a corollary our OpEx will also increase because we now have more resources to manage.
All this to say that if you decide to build a data center, you need a lot of upfront capital (money + people) just to build the infrastructure, plus the maintenance costs. All this for an app that you are not sure will ever be a success and generate enough money to cover expenses and make a profit.
Handling demand in the Cloud
For the same demand, when using the cloud, we are able to shift the capacity of the infrastructure that supports our application based on the demand. This enables capacity to grow as demand grows.
Now instead of large CapEx, you have OpEx cost that matches the demand.
Financial Implications
Now, let’s compare the financial implications of both models.
Managing a data center:
Leveraging cloud infrastructure:
Predicting and Managing AWS Costs
AWS provides tools that you can use to make a case for the use of AWS cloud services in your organization. These tools allow you to predict what the cost of service usage will be.
AWS Total Cost of Ownership (TCO) Calculators
This tool helps an organization to determine the savings by leveraging AWS cloud infrastructure instead of the data center model.
TCO calculators allow you to estimate the cost savings when using AWS and provide a detailed set of reports that can be used in executive presentations.
The calculators also give you the option to modify assumptions that best meet your business needs.
It is a cost comparison tool between running IT resources on-premise and in AWS cloud.
AWS Simple Monthly Calculator
It enables an organization to calculate the cost of running specific AWS infrastructure (OpEx).
Hosted on Amazon S3.
Available at https://calculator.s3.amazonaws.com/index.html
No longer supported as of June 2020, replaced by AWS Pricing Calculator (this info is not part of the exam until next update).
AWS Budgets
AWS Cost Explorer
AWS Cost Explorer is an interface that lets you visualize, understand, and manage your AWS costs and usage over time.
It provides breakdowns:
It also provides predictions for the next three months of costs based on your current usage.
It also gives recommendations for cost optimization.
Like most AWS services, it can be accessed via an API. You can therefore use that data for whatever your needs (data science, dashboards, etc.).
Used to explore costs after they have been incurred.
AWS Organizations
It allows you to organize the multiple AWS accounts of your organization under a single master (payer) account.
It provides organizations the possibility to leverage consolidated billing for all accounts. Therefore, you will receive one bill irrespective of how many AWS accounts there are in your organization.
All this permits organizations to centralize logging and security standards across accounts while still providing separated accounts for different users.
More on AWS Organizations down below.
Solution Architecture on AWS
AWS has specific certifications dedicated to this topic (AWS Certified Solutions Architect).
Let’s have a high level view about that subject.
AWS Well-Architected Framework
The Well-Architected Framework has been developed to help cloud architects build secure, high-performing, resilient, and efficient infrastructure for their applications.
It is a collection of best practices across five key pillars for how best to create robust and secure systems that create value on AWS.
Here are the five pillars:
Reliability in AWS
Reliability can be summarized in two key principles:
AWS Disaster Recovery Approaches
Even companies that have their own data centers can take advantage of AWS disaster recovery (D.R.).
Four architectures should be considered, from simplest to most complex:
Understand the needs of your organization to know which D.R. approach to choose.
Support on AWS
Once you deploy your infrastructure on AWS, it is essential to know how to support that infrastructure. AWS provides four different levels of support. It is important to understand the needs of your organization to know which level to choose.
The four levels are:
— AWS Basic support
— AWS Developer support
— AWS Business support
Designed for organizations leveraging AWS for some production infrastructure.
— AWS Enterprise support
Designed for enterprise organizations running mission critical apps on AWS.
To summarize:
Support Response Times
When you create a support ticket, the support response times will depend on the severity level assigned to that issue.
For developer-level support:
For business-level support:
For enterprise-level support:
Part 2: AWS Core Services
Interacting with AWS Services
Here are the ways to interact with AWS services:
AWS SDK Languages
AWS SDKs are available for the following programming languages:
There are also mobile versions of the SDK for Android and iOS.
Networking and Content Delivery Services:
We will see the key services you need to know about.
Amazon Route 53
It is the DNS (Domain Name System) service within AWS. A DNS allows you to connect a domain name like medium.com to a specific IP address which is connected to specific servers.
53 in the name refers to port 53 which is the port reserved for DNS traffic.
Contrary to most services on AWS, Route 53 is a global service. Most services on AWS are regional, meaning that what you do on a service in a region will only apply to that region.
It is highly available. It allows you to reroute traffic from faulty servers to healthy ones in different regions. You are able to handle failure and still provide the same levels of service.
It enables global resource routing. You can route users to different sets of servers based on their latency or their region. This allows you to create a global architecture with similar levels of performance regardless of where your users are.
You can use Route 53 to route requests for your own registered domain name (thisismydomain.com for example) to an S3 bucket containing the static assets for your website (make sure that the bucket has the same name as your domain, therefore the bucket should be called thisismydomain.com).
Amazon Virtual Private Cloud (VPC)
A Virtual Private Cloud is a logically isolated (meaning isolated programmatically, not physically) section of the AWS cloud where you can launch AWS resources in a virtual network that you define.
The Amazon VPC service enables these virtual networks in AWS.
It supports IPv4 and IPv6.
You can configure:
The VPC service supports different use cases:
Think of a VPC as your own data center inside AWS, it is your own responsibility.
AWS Direct Connect
This service makes it easy to establish a dedicated network connection from your premises (data center) to AWS.
There are several reasons you might want to do this, among them:
Amazon API Gateway
Amazon API Gateway helps developers to create and manage APIs to back-end systems running on Amazon EC2, AWS Lambda, or any publicly addressable web service.
With Amazon API Gateway, you can generate custom client SDKs for your APIs, to connect your back-end systems to mobile, web, and server applications or services.
This service provides:
Amazon CloudFront
Remember when we talked about edge locations?
CloudFront is the AWS service that uses them. CloudFront is a Content delivery network (CDN).
A CDN is a system of distributed servers that deliver webpages and other kinds of Web content to users based on their geographic location, the origin of the webpage and a content delivery server.
It enables users to get content from the server closest to them and supports static and dynamic content.
It includes advanced security features like:
More Details on CloudFront
Let’s review the terminology:
So, when CloudFront is enabled, users will first query the edge location nearest to their geographical location.
The first time the content is requested, the edge location will connect to the origin of the content to retrieve it. This will create latency.
The content is then cached / stored in the edge location and distributed to the user.
When another user in the same geographical region queries the same content, the edge location does not need to contact the origin because it has a copy of that content.
Users after the first one will get that content much faster.
The content is cached for an amount of time called the TTL (Time To Live) given in seconds. Usually, you have a TTL of 48 hours.
CloudFront is used to deliver websites including static, dynamic, streaming and interactive content using AWS global network of edge locations.
There are two types of distributions:
Practical Use of CloudFront
The origin can be a folder in an S3 bucket, not necessarily the whole bucket.
You can even restrict direct access to the bucket and force going through the CDN to access S3 objects (more details in S3 section).
Here is an example of how it looks when you create a distribution:
The created distributions will have a domain name following this model:
<RANDOM_HASH>.cloudfront.net
CloudFront Tips
CloudFront Pricing
You pay for:
Elastic Load Balancing (ELB)
Elasticity = ability for the infrastructure supporting an application to grow and contract based on how much it is used at a point in time.
In essence:
Tips:
Security on AWS
According to AWS Shared Responsibility Model:
Security and Compliance is a shared responsibility between AWS and the customer.
AWS Identity & Access Management (IAM)
IAM enables you to manage access to AWS services and resources securely.
Using IAM, you can create and manage AWS users and groups, and use permissions to allow and deny their access to AWS resources.
IAM Identities
There are three types of identities in IAM:
IAM Policies
We grant permissions by using policies.
A policy is a JSON document that defines permissions for an AWS IAM identity (principal).
Defines both the AWS services that the identity can access and what actions can be taken on that service
Can be either customer managed or managed by AWS. AWS provides a set of managed policies.
For example, if you wanted to grant a specific user full access to DynamoDB so they could do everything within the service in that account, you would attach the DynamoDB full-access managed policy to them.
But if you want a custom policy, you are free to write the JSON document yourself (or use the visual editor) to define the permissions. This would be a customer-managed policy.
AWS IAM Best Practices.
More Details on IAM
In IAM:
Everything in AWS is an API. To execute these APIs, we first need to authenticate and then authorize.
The IAM identity (user, group or role) is simply the authentication part.
Permissions happen in Policy documents in JSON format. The policy document attaches directly to a user, a group or a role. This document lists APIs (or groups of APIs) that are being whitelisted / allowed against specific AWS resources.
Let’s take the example of an API call to S3. An operator wants to upload a file into an S3 bucket; that’s an API call. They execute the call to PUT object TOTO in S3 bucket TITI and present a set of credentials, whether an access key / secret key pair or a username and password. All of this is the API execution statement.
The request then goes to the AWS API engine. The IAM engine verifies that the credentials are active and validates the identity of the operator (IAM user, group or role).
Then, the system takes the policy document associated with that validated operator and evaluates all the policy documents as a single view. It looks to see if the action you’re doing (put object in S3 bucket) is authorized by any of the policy documents attached to that identity. If it is, you are then allowed to execute the S3 API.
A policy document might also have an explicit denial. This overrides any “allow” statements. If you don’t have an “allow”, there is an implicit denial. This mechanism of explicit denial is useful to permanently deny some operations (API calls), for example, deny resource termination in production.
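To make the explicit-deny mechanism concrete, here is a minimal sketch of creating such a customer-managed policy with the AWS SDK for JavaScript. The policy name is hypothetical; the point is that the Deny statement always wins over the Allow.

const AWS = require('aws-sdk');
const iam = new AWS.IAM();

// Allow every S3 API call, but explicitly deny bucket deletion.
// An explicit Deny overrides any Allow, no matter what other
// policies are attached to the identity.
const policyDocument = {
  Version: '2012-10-17',
  Statement: [
    { Effect: 'Allow', Action: 's3:*', Resource: '*' },
    { Effect: 'Deny', Action: 's3:DeleteBucket', Resource: '*' },
  ],
};

iam.createPolicy({
  PolicyName: 'S3FullAccessNoBucketDeletion', // hypothetical name
  PolicyDocument: JSON.stringify(policyDocument),
}).promise()
  .then((res) => console.log('Created policy:', res.Policy.Arn))
  .catch(console.error);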
What to remember is that IAM identities (user, group, role) are for authentication and IAM Policy Documents are for authorization.
This model is useful in the case of compromised credentials:
Let’s say that you wrote your username and password on a sticky note, or a keylogger managed to capture your credentials. Moreover, you are not using MFA (bad).
Let’s say that some hacker decides to ransom the company for Monero cryptocurrency and deletes a few S3 buckets to prove their point. By detaching the Policy Documents attached to the compromised IAM identities, that hacker will no longer be able to do anything. The credentials will still be compromised but no permissions will be attached to the compromised identities.
Finally, never store credentials like username/password or access keys on EC2 instances. Prefer attaching a role with the minimum permissions to those instances.
You can get more practice on IAM (this is out of the scope of the Cloud Practitioner certification) here:
Security in Amazon VPC
Other Security Services on?AWS
AWS CloudTrail
AWS Inspector
Basically, Inspector is an agent that you install on your EC2 instances to inspect the environment.
AWS Trusted Advisor
AWS Shield:
AWS WAF (Web Application Firewall):
About Compliance on?AWS
AWS gets audited by third-party organizations to verify that it meets strict standards.
More details at aws.amazon.com/compliance.
You can also see the compliance reports by using the AWS Artifact service. AWS Artifact features a comprehensive list of access-controlled documents relevant to compliance and security in the AWS cloud. It is used to get documentation about compliance from worldwide authorities.
So AWS Artifact is used to retrieve compliance reports from all around the world.
One thing to remember is that just because AWS is compliant to standards does not mean that your applications running on AWS cloud are compliant too. For more details, see the shared responsibility section.
AWS Compute?Services
There are mainly 3 types of Cloud computing models (others exist):
We will talk about the following compute services (the complete list keeps increasing):
Amazon EC2 (Elastic Compute Cloud)
EC2 reduces the time needed to obtain and boot new virtual server instances to minutes (not weeks or months like in the pre-cloud era), allowing you to quickly scale capacity both up and down as compute requirements change (scalability + elasticity).
EC2 features are:
To administer servers running on EC2, you use:
AWS EC2 Instance Types
Scaling on Amazon EC2
Amazon EC2 Horizontal Scaling Services
Amazon EC2 Auto-scaling Group (ASG)
Note (mostly out of scope of Cloud Practitioner exam — but the concepts are in scope):
You can create your own custom AMI’s (Amazon Machine Image) that can be used in the launch configuration of an Auto Scaling Group, meaning that when you need to scale out, the new instances will be created based on your custom EC2 image.
This Golden Image (AWS vocabulary) is used as a template for launching EC2 instances.
Basically, you can first create the EC2 instance that meets your needs then generate a custom AMI image from that instance. Behind the scenes, it creates a snapshot of the EC2 instance and the custom AMI is based on that snapshot.
You will not be able to delete the snapshot until you first deregister the custom AMI image.
Let’s say your Node.js server is running on an EC2 instance. You create an image from that instance which will encapsulate the file system and configuration so that when you launch an instance from this image it will already be configured and running another copy of your Node app. You can also launch user data script (which are bootstrap scripts that are launched when the instance is created) to update your custom instances when they are launched by the ASG.
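As a sketch of that golden-image step, creating a custom AMI from an already-configured instance is a single API call (the instance ID below is hypothetical):

const AWS = require('aws-sdk');
const ec2 = new AWS.EC2({ region: 'us-east-1' });

// Create a custom AMI (golden image) from an instance that already
// runs your configured Node.js app. Behind the scenes, AWS snapshots
// the instance's EBS volumes.
ec2.createImage({
  InstanceId: 'i-0123456789abcdef0',        // hypothetical instance ID
  Name: 'node-app-golden-image-v1',
  Description: 'Pre-configured Node.js app server',
  NoReboot: true,                           // don't reboot the running instance
}).promise()
  .then((res) => console.log('New AMI:', res.ImageId))
  .catch(console.error);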
By selecting all subnets when creating your ASG, it will automatically spread instances to multiple Availability Zones when creating new instances during scale out.
Amazon EC2 Horizontal Scaling Illustration
Let’s create a VPC with an Internet Gateway (to give it access to the Internet) within a region (us-east-1).
Then, let’s use two Availability Zones within that region.
Within each of these Availability Zones, let’s create an EC2 instance (C4 instance type) to serve our Node.js app.
Next, let’s create an Auto Scaling Group around these two instances. This will allow us to have centralized management of our instances in the different Availability Zones. It will also do health-checks on our servers and automatically decommission and recreate new ones in case of failure. In addition, it will allow us to meet the demand by horizontally scaling (scale out = provision additional new instances).
Next, we add an Application Load Balancer to provide a centralized way to route users to the appropriate server within the Auto Scaling Group, so that users don’t need to know which server to contact.
The Application Load Balancer communicates with the Auto Scaling Group in order to know which instances are available and healthy, so that the load balancer can send users to running servers.
We are now able to receive traffic from the Internet, route it to the appropriate server from the Application Load Balancer and ensure that our group stays healthy by managing the lifecycle of the servers within the Auto Scaling Group.
Amazon EC2 Purchase Options
If you have an instance that is consistent and always needed, you should purchase a Reserved Instance. For example, if you have servers that will be running all the time for the next few years, prefer this option for greatest discount for this usage.
If you have batch processing (a fault-tolerant workload) where the process can start and stop without affecting the job, you should leverage Spot Instances. These instances are available for a period of time. You bid on them, and if your bid gets lower than the current market price, your instance will be shut down. The workload needs to be able to shut down at any moment.
If you have an inconsistent need for instances that cannot be stopped without affecting the job, leverage On-demand Instances. You don’t know exactly how long you will need these instances and your workload is not fully fault tolerant (for development and test for example).
Bonus
Here’s a mnemonic to remember instance types (not required for the Cloud Practitioner exam):
F.I.G.H.T.D.R.M.C.P.X.Z — (“FIGHT DoctoR MaC PiXiZ”)
Of course, AWS keeps adding new types of instances for specific workloads but you don’t need to know the latest for exams (until the certification updates).
AWS Systems Manager
This service allows you to manage your EC2 instances at scale.
When you have a lot of EC2 instances, we talk about EC2 fleets. Fleets are not limited to EC2.
Basically, to manage all these resources, each of them runs a daemon (process), also called an agent, that connects to AWS Systems Manager.
AWS Systems Manager allows you to run commands on all your instances rather than SSH-ing into thousands of instances…
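For a flavor of what that looks like from code, here is a minimal sketch using the SDK to run one shell command on every instance carrying a given tag (the tag is hypothetical, and the SSM agent must be running on the targeted instances):

const AWS = require('aws-sdk');
const ssm = new AWS.SSM({ region: 'us-east-1' });

// Run one command across a whole fleet instead of SSH-ing into each box.
ssm.sendCommand({
  DocumentName: 'AWS-RunShellScript',
  Targets: [{ Key: 'tag:Env', Values: ['production'] }], // hypothetical tag
  Parameters: { commands: ['uptime'] },
}).promise()
  .then((res) => console.log('Command ID:', res.Command.CommandId))
  .catch(console.error);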
To recap:
Container Management Services for?AWS
This is another approach to leverage compute on AWS.
If your application consists of Docker containers, you can use the following AWS services to run your clusters:
AWS Lambda
Another AWS compute service:
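In short, Lambda runs your code in response to events without you managing any servers. Since this article is for Node.js developers, here is a minimal sketch of what a Lambda function looks like: just an exported handler.

// handler.js: a minimal Node.js Lambda function.
// AWS invokes this handler in response to events (API Gateway requests,
// S3 uploads, SNS messages, etc.); you never manage a server.
exports.handler = async (event) => {
  console.log('Received event:', JSON.stringify(event));
  return {
    statusCode: 200,
    body: JSON.stringify({ message: 'Hello from Lambda!' }),
  };
};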
Lambda Pricing
The following determines Lambda pricing:
AWS Elastic Beanstalk (EB)
Another service that automates EC2 management.
It is the simplest entry to AWS deployments because you just upload your source code and EB takes care of all the provisioning and deployment.
You can quickly deploy and manage applications without worrying about the infrastructure that runs your Node.js application (or whatever platform).
Its features are:
File Storage with AWS
General File Storage Services
Amazon S3 (Simple Storage Service)
More Details on S3
What is an object in S3?
Think of objects as files. They consist of:
By default, when you upload an object to an S3 bucket, it is NOT public, even if you have created that S3 bucket with public access.
By the way, it’s not an accident that everything is private access by default, it is part of the Well Architected framework Security pillar of AWS.
Most likely, you will make your bucket and objects public for static website asset sharing. So basically, make things public only when you use S3 as a content delivery server, for example when hosting static websites.
You have to explicitly make objects public:
The above solution is fine if you have very few files but a bucket can contain thousands or millions of files. In order to automatically make public all files that are uploaded to the bucket, you will need to create a bucket policy.
To do this, go to the Permissions tab of your bucket then the Bucket Policy tab:
Let’s have a look at the bucket policy JSON document:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",                 ====> will allow
      "Principal": "*",                  ====> EVERYONE
      "Action": [
        "s3:GetObject"                   ====> to make GetObject API calls on S3
      ],
      "Resource": [
        "arn:aws:s3:::BUCKET_NAME/*"     ====> for all objects in the bucket
      ]
    }
  ]
}
You don’t need that level of detail at the Cloud Practitioner level but you are not just passing a certification, you are learning real life skills. Of course, there is documentation on how to write JSON bucket policies.
You can also change encryption and storage classes of objects on the fly.
What to understand is that S3 is a key/value store for hosting static files that automatically scales with demand.
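As a Node.js developer, you will most often touch that key/value store through the SDK. A minimal upload sketch (the bucket and key names are hypothetical):

const AWS = require('aws-sdk');
const s3 = new AWS.S3();

// Upload a JSON object; the Key is the "filename" in the key/value store.
// Remember: objects are private by default.
s3.putObject({
  Bucket: 'my-sample-s3-bucket',      // hypothetical bucket
  Key: 'users/42/profile.json',
  Body: JSON.stringify({ name: 'Ada' }),
  ContentType: 'application/json',
}).promise()
  .then(() => console.log('Uploaded'))
  .catch(console.error);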
Data Consistency on S3
AWS Guarantees for S3
S3 Features
Amazon S3 Non-archival Storage Classes
Here are the different non-archival storage classes of S3:
Amazon S3 Glacier
Amazon S3 Glacier Storage Classes
S3 Pricing
As usual with AWS, charges follow the pay-as-you-go model, no upfront charges. You are charged monthly.
By the way, you don’t need to know all this by heart. At the Cloud Practitioner level, AWS does not ask you to be an expert in the details of charges. I give you this information to add context when you see questions in the exam. Frankly, some of these details are asked of you in higher-level certifications like the AWS Certified Solutions Architect.
S3 provides billing reports.
You are charged for the following:
Charges for Storage Classes
To see your bill, log with your root user account and go to the billing dashboard. There, you will be able to see all fees per region for S3.
Phew, that was a lot of fees for just one service…
But don’t forget that because of the economies of scale offered by AWS, most of these fees will be a few cents, and don’t forget to take advantage of your one-year free tier to experiment, shutting things down before reaching the free usage limit.
S3 Transfer Acceleration
Cross Region Replication
When you upload files to S3 bucket 1, they are automatically replicated to S3 bucket 2, in another region.
This is useful for disaster recovery (DR).
Restricting Bucket Access
You can restrict access to a bucket using:
S3 Use Cases
You can put any kind of files in S3. Storage is almost unlimited.
S3 common use cases are:
Amazon EC2 File Storage Services
Amazon Elastic Block Store (EBS)
Think of EBS as virtual disks in the Cloud.
EBS can create storage volumes and attach them to EC2 instances.
Once attached, you can use them to:
EBS volumes are placed in specific Availability zones where they are replicated for fault tolerance and disaster recovery. The EC2 instance to which the EBS volume is attached needs to be in the same availability zone.
Amazon EBS Volume Types
SSD (Solid State Drive):
Magnetic:
EBS Pricing
You pay for:
Amazon Elastic File System (EFS) — do not confuse it with EBS!
It is very important not to confuse EFS with EBS.
Remember that EBS is IaaS (Infrastructure as a Service) because it provides virtual hard drives that you manage yourself for whatever use case, whereas EFS would be PaaS (Platform as a Service) because it is:
Databases with AWS
Do not confuse database services with storage services.
About Databases on AWS
Let’s recap the different types of databases:
Relational Databases (think Amazon Aurora or MySQL)
NoSQL Databases (think Amazon DynamoDB or MongoDB)
For large binary files (image, audio, video, etc.), consider using S3 to store them.
Data Warehouse (think of Amazon Redshift)
Search Databases (think AWS CloudSearch or Elasticsearch)
Graph Databases (think Amazon Neptune)
Data Lakes (think of Amazon S3)
AWS Databases & Related Services
We will talk about the following services:
Amazon RDS (Relational Database Service)
To increase performance, you can do all your writes to the master database and all reads to multiple read replicas that are synchronized with the master DB.
Available Amazon RDS Platforms
RDS supports the following database engines:
RDS Pricing
You are charged for:
Amazon DynamoDB
According to AWS, DynamoDB can handle more than 10 trillion requests per day and can support peaks of more than 20 million requests per second.
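From Node.js, the DocumentClient lets you work with plain JavaScript objects instead of DynamoDB’s low-level attribute types. A minimal sketch (the table and item are hypothetical):

const AWS = require('aws-sdk');
const db = new AWS.DynamoDB.DocumentClient({ region: 'us-east-1' });

// Write one item, then read it back, using plain JS objects.
async function demo() {
  await db.put({
    TableName: 'Users',                               // hypothetical table
    Item: { userId: '42', name: 'Ada', plan: 'free' },
  }).promise();

  const { Item } = await db.get({
    TableName: 'Users',
    Key: { userId: '42' },
  }).promise();
  console.log(Item);
}

demo().catch(console.error);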
DynamoDB Pricing
You pay for:
Amazon Redshift
Okay, let’s understand why you would use Redshift as opposed to RDS or DynamoDB.
You need to differentiate between OLTP (OnLine Transaction Processing) and OLAP (OnLine Analytical Processing).
What is OLTP?
Online transaction processing, commonly known as OLTP, supports transaction-oriented applications in a 3-tier architecture. OLTP administers the day-to-day transactions of an organization.
The primary objective is data processing, not data analysis.
What is OLAP?
Online Analytical Processing (OLAP) is a category of software tools that provide analysis of data for business decisions. OLAP systems allow users to analyze database information from multiple database systems at one time.
The primary objective is data analysis and not data processing.
KEY DIFFERENCE:
Basically, data warehousing was invented to be able to do data analysis away from production databases. Therefore doing complex analytics won’t impact the performance of production databases.
Redshift being a data warehouse, it is built to handle the complex queries for data analysis. It is used to pull very large and complex data sets for business intelligence and all sorts of analytics.
For more on OLAP vs OLTP, check here (out of scope of Cloud Practitioner exam).
Amazon ElastiCache
ElastiCache improves performance of Web apps by allowing them to retrieve data from faster, managed in-memory caches instead of just relying on slower disk-based databases (Amazon RDS or any hosted Database as a Service (DBaaS)).
This reduces the load on databases by reducing the number of requests sent to them.
The usual strategy is to cache the most frequent identical queries like landing on the homepage of a website.
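The classic implementation of that strategy is the cache-aside pattern: check the cache first, fall back to the database on a miss, then populate the cache. Here is a minimal sketch using a recent node-redis client against an ElastiCache for Redis endpoint (the endpoint and the database query are hypothetical):

const { createClient } = require('redis');

// ElastiCache for Redis exposes a standard Redis endpoint.
const cache = createClient({
  url: 'redis://my-cache.abc123.use1.cache.amazonaws.com:6379', // hypothetical endpoint
});
const ready = cache.connect();

// Cache-aside: try the cache, fall back to the database on a miss.
async function getHomepageData(db) {
  await ready;
  const cached = await cache.get('homepage');
  if (cached) return JSON.parse(cached);              // cache hit

  const data = await db.query('SELECT ...');          // hypothetical slow DB query
  await cache.setEx('homepage', 300, JSON.stringify(data)); // keep for 5 minutes
  return data;
}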
AWS Database Migration Service
Used to securely migrate data into AWS for both:
It supports both modes of migration:
Application Integration Services
Amazon SNS (Simple Notification Service)
Example of Amazon SNS Architecture
In this example, there is a user that signs up for an online service.
We can have the sign up service publish a message to a specific SNS topic called user_signup.
You can then have a Lambda function executed as a result of that SNS message from that topic.
You can also have an SQS queue populated with the payload/content of the message.
You can finally have an email sent as a result of the message published to that topic.
In all this, the sign-up service does not know anything about the Lambda function, the SQS queue or the email.
We basically have here a decoupled application architecture.
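The publishing side of such an architecture is a single SDK call. A minimal sketch (the topic ARN is hypothetical):

const AWS = require('aws-sdk');
const sns = new AWS.SNS({ region: 'us-east-1' });

// The sign-up service publishes and forgets; the subscribers
// (Lambda, SQS, email) are wired up on the topic itself.
sns.publish({
  TopicArn: 'arn:aws:sns:us-east-1:123456789012:user_signup', // hypothetical ARN
  Message: JSON.stringify({ userId: '42', email: 'ada@example.com' }),
}).promise()
  .then((res) => console.log('Published message:', res.MessageId))
  .catch(console.error);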
Amazon SQS (Simple Queue Service)
Example Amazon SNS and SQS Architecture
In this example, a user submits an order, and there is a Web service that gets called for that operation.
You could handle everything in this web service but you choose to decouple and create a fault-tolerant application.
So once the user submits an order, a message is sent to an SNS topic named user_order.
From the SNS topic, we fan it out to a fulfillment queue and that queue goes and leverages an order fulfillment microservice running in an ECS cluster. This order fulfillment microservice could be an API communicating with the warehouse system to take incoming orders and ship them out.
You could also fan it out to an analytics queue with a Lambda function consumer that ingests data coming from the order into an analytics service.
If the analytics ingestion service or the fulfillment service fails, the orders are not lost. Instead, when the respective teams put these services back online, they will be able to pull orders from the queues.
The queues add fault tolerance to our architecture: no data is lost if the receiving services break, because the queues store the messages.
There is the concept of dead-letter queue. If for some reason the system is unable to process some messages, they could be sent to a dead-letter queue which you would handle more manually (out of scope of Cloud Practitioner).
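On the consuming side, the fulfillment service would poll its queue along these lines (the queue URL is hypothetical):

const AWS = require('aws-sdk');
const sqs = new AWS.SQS({ region: 'us-east-1' });
const QueueUrl = 'https://sqs.us-east-1.amazonaws.com/123456789012/fulfillment'; // hypothetical

// Long-poll the queue; delete each message only after it is processed,
// so a crash leaves unprocessed messages safely in the queue.
async function pollForever() {
  while (true) {
    const { Messages = [] } = await sqs.receiveMessage({
      QueueUrl,
      MaxNumberOfMessages: 10,
      WaitTimeSeconds: 20, // long polling
    }).promise();

    for (const msg of Messages) {
      const order = JSON.parse(msg.Body);
      console.log('Fulfilling order', order);
      await sqs.deleteMessage({ QueueUrl, ReceiptHandle: msg.ReceiptHandle }).promise();
    }
  }
}

pollForever().catch(console.error);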
Management & Governance Services
AWS CloudFormation
The output of the CloudFormation template is called a stack. It’s basically all the AWS resources created based on the template specifications.
Typically, CloudFormation could be used to setup short-lived stacks in order to save money by not having resources running all the time (if your use case supports that, of course).
For example, you could automatically launch a dev and test stacks early in the morning and have them destroyed in the evening.
Elastic Beanstalk uses CloudFormation behind the scenes.
Example CloudFormation YAML
If the code below is placed into a CloudFormation template, it will create an S3 bucket named my-sample-s3-bucket.
Description: Creates an S3 bucket
Resources:
  SampleS3Bucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: my-sample-s3-bucket
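You can also launch that template programmatically. A minimal sketch (assuming the YAML above is saved as bucket.yaml; the stack name is hypothetical):

const fs = require('fs');
const AWS = require('aws-sdk');
const cloudformation = new AWS.CloudFormation({ region: 'us-east-1' });

// Create a stack from the template; CloudFormation provisions
// every resource the template declares.
cloudformation.createStack({
  StackName: 'sample-s3-stack',
  TemplateBody: fs.readFileSync('bucket.yaml', 'utf8'),
}).promise()
  .then((res) => console.log('Stack ID:', res.StackId))
  .catch(console.error);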
AWS Quick Starts
Quick Starts are built by AWS solutions architects and partners to help you deploy popular technologies on AWS, based on AWS best practices for security and high availability.
These accelerators reduce hundreds of manual procedures into just a few steps, so you can build your production environment quickly and start using it immediately.
Each Quick Start includes AWS CloudFormation templates that automate the deployment and a guide that discusses the architecture and provides step-by-step deployment instructions.
They are deployed on a per-account basis.
Available at https://aws.amazon.com/quickstart
AWS Landing Zones
AWS Landing Zone is a solution that helps customers more quickly set up a secure, multi-account (starts with 4) AWS environment based on AWS best practices.
With the large number of design choices, setting up a multi-account environment can take a significant amount of time, involve the configuration of multiple accounts and services, and require a deep understanding of AWS services.
This solution can help save time by automating the set-up of an environment for running secure and scalable workloads while implementing an initial security baseline through the creation of core accounts and resources. It also provides a baseline environment to get started with a multi-account architecture, identity and access management, governance, data security, network design, and logging.
Version 2.3.1 of the solution uses the most up-to-date Node.js runtime. Available at https://aws.amazon.com/solutions/aws-landing-zone/
Recap: AWS Quick Starts vs AWS Landing Zones
AWS Quick Starts:
For deploying environments quickly, using CloudFormation templates built by AWS Solutions Architects who are experts in that particular technology.
AWS Landing Zones:
for quickly setting up a secure, multi-account AWS environment based on AWS best practices. Starts with 4 accounts.
Amazon CloudWatch
CloudWatch monitors Amazon services as well as the applications that you launch on AWS (you can create custom metrics).
CloudWatch with EC2:
You can send any metrics to CloudWatch just by writing scripts that communicate with CloudWatch.
You can create CloudWatch alarms that trigger notifications and/or actions.
CloudWatch is about monitoring performance of resources and applications on AWS.
AWS Config
If you are asked about configuration changes, think about AWS Config (like changing a port number in a Security Group or anything similar).
Tagging and Resource groups
Tags are key/value pairs attached to AWS resources.
They are used for metadata (data about the data).
They can be inherited, for example when you tag a launch configuration of an auto scaling group, the new EC2 instances inherit the tags from that launch configuration.
Resource groups are used to group resources based on their tags.
Using resource groups, you can apply automation to tagged resources. For example, update all EC2 instances in a region with a specific tag.
Resource groups in combination with AWS Systems Manager allow you to execute automation against entire fleets of AWS resources at the push of a button.
Tag Editor, which allows you to find tagged resources, is a global service and also allows you to add tags.
Newer regions may not be visible at the start in existing services like Tag editor.
AWS Organizations
Multiple AWS accounts are used in companies with different teams.
Having just one account (with multiple IAM users) for an entire company is not recommended (think separation of concerns and security — if that account were to be compromised the entire company would be too).
So, in an AWS organization, you have a root account (base account) and organizational units (O.U).
These O.U.’s could be different departments of your company. You attach one or multiple AWS accounts to these O.U.’s.
You can apply policies to these O.U’s to restrict what the AWS accounts inside them can do (what services they can access or use) with their accounts. You can also attach the policies directly to the accounts.
With AWS Organizations turned on, you benefit from economies of scale: usage is aggregated across all your accounts, so the more you use, the cheaper the rates you get.
If you only use AWS Organizations consolidated billing feature:
Best Practices with AWS Organizations
CloudTrail is a per-account and per-region service, so you have to turn it on for all regions and all accounts in order to consolidate the logs in an S3 bucket:
Basically, you push all the CloudTrail logs from all accounts into the paying account’s S3 bucket.
This bucket will serve as the source of truth about what’s going on in the whole AWS organization.
Billing Alerts
When monitoring is enabled on the paying account, the billing data for all linked accounts is included.
You can still create billing alerts per individual accounts.
Consolidated Billing allows you to get volume discounts across all your accounts.
Unused Reserved Instances for EC2 are applied across the group.
AWS Acceptable Use Policy
The AWS Acceptable Use Policy defines prohibited uses of the services offered by AWS. All users of the platform are bound by this policy.
For example, you will get in trouble if you use AWS to send spam emails.
You are prohibited from circumventing security measures that AWS has put in place.
Don’t think you are smarter than the thousands of talented engineers working for AWS, it’s not worth the trouble.
If you enjoy hacking and finding vulnerabilities, why not become a Cloud Security Expert and make that (plentiful) money legally? AWS has specialty certifications for that path.
AWS Marketplace
AWS Large Scale Data Transfer Services
AWS Snowball:
Service to physically migrate petabyte scale data to AWS.
- uses secure appliances to transfer large amounts of data into and out of the AWS cloud;
Snowball Pricing
Analytics on AWS
AWS Athena
Amazon Athena is a fast, cost-effective, interactive query service that makes it easy to analyze petabytes of data in S3 with no data warehouses or clusters to manage.
AWS Macie
About Personally Identifiable Information (PII)
Macie is a security service that uses Machine Learning and Natural Language Processing (NLP) to discover, classify and protect sensitive data stored in S3.
Preparing to Take the Exam
Signing Up for the Exam
Go to aws.training and sign in with your AWS / Amazon account.
Once you click on Schedule New Exam, you will get to a page where there is a list of AWS certifications.
Click on the links to either schedule the exam with Pearson Vue or PSI.
Note:
since March 2020, all AWS certification exams are available to be taken online with Pearson Vue.
Certification Areas of Focus
Reviewing Cloud Concepts
Reviewing Security
Reviewing Billing & Pricing
Reviewing Technology
Taking the Exam
Testing Best Practices
More resources
If you want and need more practice, I recommend the following videos courses:
Read the whitepapers recommended on the exam preparation page (services overview, architecting on AWS, pricing on AWS). This will help you better understand the context of the exam questions. I would even go so far as saying reading them is more important than watching courses. The video courses are very interactive but the whitepapers are more detailed.
I repeat: read the damn whitepapers, you will thank me later!
(Tip: you can generate an MP3 from the PDFs and read while listening.)
I would like to cover more but this article is already long enough. To continue your preparation, check the courses above and their practical labs (learning by doing).
If you have arrived here congratulations, you now have a deep overview about what is covered in the exam. Take one of the above video courses and you will be ready to pass that certification with ease.
I hope this article was very useful and wish you success on your exam.
Now go schedule and pass that exam to start your journey toward AWS Mastery!
Want more on #AWS #JavaScript #NodeJS #MongoDB #Go #DevOps #Python?
Read on here:
My courses: