Highlights
- The GA launch of Amazon Aurora Serverless v2 allows AWS customers to scale their database capacity up and down quickly and in fine-grained increments.
- SageMaker Serverless Inference offers businesses a pay-as-you-go option for deploying their ML models.
At the AWS Summit San Francisco, Amazon’s cloud computing arm announced a number of product launches, including two dedicated to its serverless portfolio. The first is the GA launch of Amazon Aurora Serverless v2, the serverless database service, which can scale up and down faster than the previous version and in more fine-grained increments. The second is the GA launch of SageMaker Serverless Inference. Both services were first launched into preview at AWS re:Invent last December.
Swami Sivasubramanian, the Vice President for databases, analytics, and Machine Learning (ML) at AWS, said that more than 100,000 AWS customers today run their database workloads on Aurora and that it continues to be the fastest-growing AWS service. He noted that in version 1, scaling the database capacity could take anywhere from five to 40 seconds, and customers could only adjust capacity by doubling it.
“Because it’s serverless, customers then didn’t have to worry about managing database capacity,” Sivasubramanian explained. “However, to run a wide variety of production workloads with [Aurora] Serverless V1, when we were talking to customers more and more, they said, customers need the capacity to scale in fractions of a second and then in much more fine-grained increments, not just doubling in terms of capacity.”
He also argued that the new version can save users up to 90% of their database cost compared to provisioning for peak capacity. He mentioned that there are no tradeoffs in moving to v2 and that all features in v1 are still available; changes were only introduced in the compute platform and storage engine so that it is now possible to scale in small increments and to do so quickly.
“It’s a really remarkable piece of engineering done by the team,” he added.
Customers already using the service include Venmo, Pagely, and Zendesk. AWS also argued that converting workloads that currently run on Amazon Aurora Serverless v1 to v2 is not a heavy lift, as sketched below.
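To give a sense of what the fine-grained scaling looks like in practice, here is a minimal sketch of provisioning a Serverless v2 cluster with boto3. The cluster name, credentials, and engine version are placeholders, and the capacity range is expressed in Aurora Capacity Units (ACUs), which v2 can adjust in small steps within the configured bounds.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Create an Aurora cluster with a Serverless v2 scaling range.
# Capacity is measured in Aurora Capacity Units (ACUs); v2 scales
# within this range in fine-grained increments instead of doubling.
# Identifier, credentials, and engine version are placeholders.
rds.create_db_cluster(
    DBClusterIdentifier="my-serverless-cluster",
    Engine="aurora-mysql",
    EngineVersion="8.0.mysql_aurora.3.02.0",
    MasterUsername="admin",
    MasterUserPassword="change-me",  # placeholder; prefer Secrets Manager
    ServerlessV2ScalingConfiguration={
        "MinCapacity": 0.5,   # floor the cluster can scale down to
        "MaxCapacity": 16.0,  # ceiling for scale-up
    },
)

# Serverless v2 capacity is attached to the cluster through an
# instance with the special "db.serverless" instance class.
rds.create_db_instance(
    DBInstanceIdentifier="my-serverless-instance",
    DBClusterIdentifier="my-serverless-cluster",
    DBInstanceClass="db.serverless",
    Engine="aurora-mysql",
)
```

The min/max capacity values here are arbitrary example settings; the point is that billing and scaling track the configured ACU range rather than a fixed instance size.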
As for the other product, Sivasubramanian noted that the service gives businesses a pay-as-you-go option for deploying their ML models, especially those that often sit idle, into production. With its launch, SageMaker now offers four inference options: Serverless Inference, Real-Time Inference for workloads where low latency is paramount, SageMaker Batch Transform for working with batches of data, and SageMaker Asynchronous Inference for workloads with large payload sizes that may require long processing times. AWS’s offerings also include SageMaker Inference Recommender to help users find the best way to deploy their models.
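For illustration, here is a minimal sketch of putting an already-registered SageMaker model behind a serverless endpoint with boto3. The model, endpoint, and config names are hypothetical, and the memory and concurrency values are example settings: a serverless endpoint swaps the usual instance-type choice for a ServerlessConfig, and billing accrues only while requests are being served.

```python
import boto3

sm = boto3.client("sagemaker", region_name="us-east-1")

# Instead of specifying an instance type, the production variant
# carries a ServerlessConfig; capacity scales with request traffic.
# "my-model" is assumed to be a model already registered in SageMaker.
sm.create_endpoint_config(
    EndpointConfigName="my-serverless-config",
    ProductionVariants=[{
        "VariantName": "AllTraffic",
        "ModelName": "my-model",  # hypothetical pre-existing model
        "ServerlessConfig": {
            "MemorySizeInMB": 2048,  # memory per invocation environment
            "MaxConcurrency": 5,     # cap on concurrent invocations
        },
    }],
)

# Creating the endpoint from this config yields a serverless endpoint
# that can be invoked like any other SageMaker real-time endpoint.
sm.create_endpoint(
    EndpointName="my-serverless-endpoint",
    EndpointConfigName="my-serverless-config",
)
```

Once the endpoint is in service, it is invoked the same way as a real-time endpoint, which is what makes the serverless option attractive for intermittently used models.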