What’s new in AWS: Part 1 – Compute & Storage
Since this is the first post of this year, let me wish you a very Happy New Year 2019 even though more than half a month has already passed. Time flies, especially when you have work to do.
There have been a lot of developments on the AWS front, and given that re:Invent is held at the end of the year, the number of new services and features introduced is huge. In this post I will concentrate only on what is new in the Compute, Storage, Networking and Database services. This will be a two-part post; in the first part I will talk about Compute and Storage.
Hibernating EC2 instances: Until now we were only able to stop or terminate EC2 instances. AWS now gives us the ability to hibernate an instance. Hibernation saves the system state (the contents of RAM are written to the EBS root volume), and when you start the instance again it resumes from where you left off, much like closing your laptop lid without switching the system off. Billing for the instance stops once it is hibernated (of course you still pay for the attached EBS volumes and any associated Elastic IP), so billing-wise it is similar to a stopped instance.
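Here is a minimal boto3 sketch of how this works; the AMI ID is a placeholder and, among other prerequisites, the root EBS volume must be encrypted and large enough to hold the instance's RAM. Hibernation has to be enabled at launch time and is then triggered as a variant of the stop call.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Hibernation must be enabled when the instance is launched.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",        # placeholder AMI
    InstanceType="m5.large",
    MinCount=1,
    MaxCount=1,
    HibernationOptions={"Configured": True},
)
instance_id = response["Instances"][0]["InstanceId"]

# Later, hibernate instead of a plain stop: RAM is saved to the root volume.
ec2.stop_instances(InstanceIds=[instance_id], Hibernate=True)
```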
Increase in Max IOPS for Provisioned IOPS EBS volumes: The maximum IOPS that can be requested for a Provisioned IOPS (io1) volume has been doubled, from 32,000 IOPS to 64,000 IOPS. While this is definitely good for performance, keep in mind that this is the limit for a single volume; the maximum IOPS a single instance can drive is still 80,000. If you want to stripe multiple volumes into a RAID set, keep the instance limit in mind. The maximum throughput of these volumes is now 1,000 MB/s, with the instance maximum at 1,750 MB/s.
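As a quick illustration, this is how you would request an io1 volume at the new 64,000 IOPS ceiling (the availability zone is a placeholder). Since io1 allows at most 50 IOPS per GiB, the volume needs to be at least 1,280 GiB to support 64,000 IOPS.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",   # placeholder AZ
    VolumeType="io1",
    Size=1280,                       # GiB; 64,000 IOPS / 50 IOPS-per-GiB
    Iops=64000,
)
print(volume["VolumeId"])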
FSx: AWS already has EFS (Elastic File System), a shared file system. EFS, however, has a limitation: because it uses NFS v4.1, you can mount it only from Linux instances, not from Windows instances. This limitation is now addressed by the new FSx family. FSx lets you create Amazon FSx for Windows File Server shares as well as Amazon FSx for Lustre file systems, and FSx for Windows integrates with Microsoft Active Directory.
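A rough sketch of creating an FSx for Windows File Server share that joins a Microsoft AD directory; the subnet, security group and directory IDs below are placeholders, and the capacity and throughput values are just example figures.

```python
import boto3

fsx = boto3.client("fsx", region_name="us-east-1")

fs = fsx.create_file_system(
    FileSystemType="WINDOWS",
    StorageCapacity=300,                         # GiB
    SubnetIds=["subnet-0123456789abcdef0"],      # placeholder subnet
    SecurityGroupIds=["sg-0123456789abcdef0"],   # placeholder security group
    WindowsConfiguration={
        "ActiveDirectoryId": "d-0123456789",     # placeholder AWS Managed Microsoft AD
        "ThroughputCapacity": 8,                 # MB/s
    },
)
# Windows clients map the share using this DNS name.
print(fs["FileSystem"]["DNSName"])
```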
AWS Transfer for SFTP: In many cases you store files in order to share them with your clients via SFTP. Normally you have to set up an SFTP server, store your files on it, and maintain the server yourself. With AWS Transfer for SFTP, AWS runs the SFTP endpoint for you and your files are stored durably in S3. Additionally, you can point your existing SFTP domain name at the AWS-provided endpoint using Route 53.
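A minimal sketch of standing up a Transfer for SFTP endpoint backed by S3 and adding one user to it; the IAM role ARN, bucket path, user name and SSH key below are all placeholders, and the role is assumed to grant access to the S3 bucket.

```python
import boto3

transfer = boto3.client("transfer", region_name="us-east-1")

# With no arguments, the server uses the service-managed identity provider.
server = transfer.create_server()
server_id = server["ServerId"]

transfer.create_user(
    ServerId=server_id,
    UserName="alice",                                          # placeholder user
    Role="arn:aws:iam::123456789012:role/sftp-s3-access",      # placeholder IAM role
    HomeDirectory="/my-sftp-bucket/alice",                     # placeholder bucket/prefix
    SshPublicKeyBody="ssh-rsa AAAA... alice@example.com",      # placeholder public key
)
```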
S3 Intelligent-Tiering: One of the major issues for enterprises is managing data and keeping storage costs to a minimum. We often resort to lifecycle rules for cost optimization, and major enterprise storage arrays offer intelligent tiering, where data is moved automatically between classes of storage. This is now available in AWS. S3 Intelligent-Tiering moves your objects between a frequent access tier (priced like S3 Standard) and an infrequent access tier (priced like S3 Standard-IA): if an object is not accessed for 30 consecutive days it is automatically moved to the infrequent access tier, saving you money, and it is moved back to the frequent access tier as soon as it is accessed again.
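A small sketch of the two usual ways to land data in Intelligent-Tiering: setting the storage class at upload time, or transitioning existing objects with a lifecycle rule. The bucket name, key and prefix are placeholders, and the 30-day transition delay is just an example value.

```python
import boto3

s3 = boto3.client("s3")

# Upload directly into the Intelligent-Tiering storage class.
s3.put_object(
    Bucket="my-data-bucket",                     # placeholder bucket
    Key="reports/2019/january.csv",              # placeholder key
    Body=b"...",
    StorageClass="INTELLIGENT_TIERING",
)

# Or transition existing objects under a prefix via a lifecycle rule.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-data-bucket",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "to-intelligent-tiering",
            "Status": "Enabled",
            "Filter": {"Prefix": "reports/"},    # placeholder prefix
            "Transitions": [{"Days": 30, "StorageClass": "INTELLIGENT_TIERING"}],
        }]
    },
)
```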
Amazon S3 Object Lock: Until now we had the Vault Lock feature in Glacier, which turns a vault into WORM (Write Once Read Many) storage. AWS has now extended this to objects in S3, so we have a lock at the object level. Once an object is locked for a certain period of time, it can neither be overwritten nor deleted until that period expires.
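A hedged sketch of Object Lock in practice; the bucket name, key and retention date are placeholders. Object Lock can only be enabled when the bucket is created (versioning is turned on automatically), after which individual objects can be written with a retention mode and a retain-until date.

```python
import boto3
from datetime import datetime, timezone

s3 = boto3.client("s3")

# Object Lock must be enabled at bucket creation time.
s3.create_bucket(
    Bucket="my-worm-bucket",                     # placeholder bucket
    ObjectLockEnabledForBucket=True,
)

# This object cannot be overwritten or deleted until the retention date passes.
s3.put_object(
    Bucket="my-worm-bucket",
    Key="audit/logs-2019-01.json",               # placeholder key
    Body=b"...",
    ObjectLockMode="COMPLIANCE",
    ObjectLockRetainUntilDate=datetime(2020, 1, 1, tzinfo=timezone.utc),
)
```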
S3 Batch Operations: This is mainly aimed at developers and automation engineers. Previously we had to make changes to each object separately; now you can apply an action to a whole set of objects at once. Changes that would earlier have taken days or even months can now be completed in hours.
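A rough sketch of a Batch Operations job that applies a tag to every object listed in a CSV manifest; the account ID, role ARN, bucket ARNs and manifest ETag are all placeholders, and the IAM role is assumed to have the necessary S3 permissions.

```python
import boto3

s3control = boto3.client("s3control", region_name="us-east-1")

s3control.create_job(
    AccountId="123456789012",                                    # placeholder account
    ConfirmationRequired=True,
    Priority=10,
    RoleArn="arn:aws:iam::123456789012:role/batch-ops-role",     # placeholder role
    Operation={
        "S3PutObjectTagging": {
            "TagSet": [{"Key": "project", "Value": "archive"}],
        }
    },
    Manifest={
        "Spec": {
            "Format": "S3BatchOperations_CSV_20180820",
            "Fields": ["Bucket", "Key"],
        },
        "Location": {
            "ObjectArn": "arn:aws:s3:::my-manifest-bucket/manifest.csv",  # placeholder
            "ETag": "example-etag",                                       # placeholder
        },
    },
    Report={
        "Bucket": "arn:aws:s3:::my-report-bucket",               # placeholder
        "Format": "Report_CSV_20180820",
        "Enabled": True,
        "ReportScope": "AllTasks",
    },
)
```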
S3 Glacier Deep Archive: This is what AWS says about S3 Glacier Deep Archive, and it is fairly self-explanatory: “This new storage class for Amazon Simple Storage Service (S3) is designed for long-term data archival and is the lowest cost storage from any cloud provider. Priced from just $0.00099/GB-mo (less than one-tenth of one cent, or $1.01 per TB-mo), the cost is comparable to tape archival services. Data can be retrieved in 12 hours or less, and there will also be a bulk retrieval option that will allow you to inexpensively retrieve even petabytes of data within 48 hours.”
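Once the class is available in your region, using it should look much like any other lifecycle transition. A small sketch, with a placeholder bucket, prefix and transition age:

```python
import boto3

s3 = boto3.client("s3")

# Move objects under a prefix to Deep Archive after 180 days.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-archive-bucket",                  # placeholder bucket
    LifecycleConfiguration={
        "Rules": [{
            "ID": "to-deep-archive",
            "Status": "Enabled",
            "Filter": {"Prefix": "backups/"},    # placeholder prefix
            "Transitions": [{"Days": 180, "StorageClass": "DEEP_ARCHIVE"}],
        }]
    },
)
```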
In the next part, we will discuss new items in Networking, Database and Security.