In his keynote speech, Swami Sivasubramanian, vice president of Database, Analytics, and Machine Learning at Amazon Web Services (AWS), said he expects that AWS services and capabilities will democratize the use of generative artificial intelligence (generative AI)—broadening access for all types of customers, across all lines of business—from engineering to marketing to customer service to finance and sales.
“Generative AI has captured our imaginations,” Sivasubramanian said. “This technology has reached its tipping point.”
A photo of Swami Sivasubramanian speaking on stage at the AWS Summit in New York. Swami Sivasubramanian, vice president of Database, Analytics, and Machine Learning at Amazon Web Services
What is generative AI? It’s a type of machine learning (ML) powered by ultra-large models, including large language models (LLMs). These models are pre-trained on a vast amount of data and are known as “foundation models” (FMs).
Generative AI will help improve experiences for customers as they interact with virtual assistants, intelligent customer contact centers, and personalized shopping services. An employee might see their productivity boosted by generative AI–powered conversational search, text summarization, or code generation tools. Business operations will improve with intelligent document processing or quality controls built with generative AI. And customers will be able to use generative AI to turbocharge the production of all types of creative content.
Sivasubramanian underscored how all this value from generative AI will be unlocked with AWS—and how AWS customers will bring these AI-powered experiences to life.
First, model choice will be paramount. No one model will rule them all. Rather, organizations will need to be able to choose the right model for the right job. Then, customers will need to be able to securely customize these models with their own data. For example, an advertising company may want to fine-tune a model by showing it the company’s top-performing ad copy, while an online retailer may want to give the model access to its inventory details so it can pull up the right information when a customer asks.
Easy-to-use tools are also a key part of democratizing AI within organizations—along with the ability to deliver responses that are low cost and low latency, thanks to purpose-built ML infrastructure. Much of this innovation will be built with Amazon Bedrock, a service offered by AWS that helps organizations of any size and across all industries around the world easily build and scale their own generative AI applications. It does this by giving customers easy access to a wide range of FMs through a simple API and making it easy to leverage existing data stores to customize them.
Amazon has been developing AI and ML technology for more than 25 years, and recent ML innovations have made the capabilities of generative AI possible. Here are seven generative AI updates announced at the AWS Summit in New York.

Page overview

1. AWS expands Amazon Bedrock with new model provider and additional FMs
2. Customers can now create agents for Amazon Bedrock to enable automation of complex tasks and deliver customized, up-to-date answers for their applications, based on their proprietary data
3. Vector engine support for Amazon OpenSearch Serverless gives customers a simpler way to leverage vectors for search
4. Generative business intelligence (BI) in Amazon QuickSight produces business intelligence based on natural language questions, making insights more accessible
5. AWS HealthScribe will use generative AI to ease the paperwork burden for health care professionals, giving time back for patients
6. New Amazon Elastic Compute Cloud (Amazon EC2) P5 instances harness NVIDIA H100 graphics processing units (GPUs) for accelerating generative AI training and inference
7. AWS offers seven free and low-cost skills training courses to help you use generative AI
1. AWS expands Amazon Bedrock with new model provider and additional FMs

Since model choice is paramount, Amazon Bedrock is expanding to add Cohere as an FM provider, along with the latest FMs from Anthropic and Stability AI. Cohere will contribute its flagship text generation model, Command, as well as its multilingual text understanding model, Cohere Embed. Anthropic has brought Claude 2, the latest version of its language model, to Amazon Bedrock. And Stability AI announced it will release the latest version of Stable Diffusion, SDXL 1.0, which produces improved image and composition detail for more realistic creations in films, television, music, and instructional videos. These FMs join the existing offerings on Amazon Bedrock, including models from AI21 Labs and Amazon, giving builders of all levels of expertise a broad and deep set of AI and ML resources that meet customers where they are on their machine learning journey.

2. Customers can now create agents for Amazon Bedrock to enable automation of complex tasks and deliver customized, up-to-date answers for their applications, based on their proprietary data
A photo of VP Swami Sivasubramanian speaking on stage at AWS Summit New York. Behind him is a screen that states, "Agents for Amazon Bedrock".

While FMs are incredibly powerful on their own for a wide range of tasks, such as summarization, they need additional programming to execute more complex requests. For example, they don’t have access to company data, like the latest inventory information, and they can’t automatically call internal APIs. Developers spend hours writing code to overcome these challenges. With just a few clicks, agents for Amazon Bedrock will automatically break down a task and create an orchestration plan—without any manual coding—making it easier for developers to build generative AI applications. Consider a customer request to return a pair of shoes (“I want to exchange these black shoes for a brown pair instead”): the agent securely connects to company data, automatically converts it into a machine-readable format, provides the FM with the relevant information, and then calls the right set of APIs to fulfill the request.
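The orchestration pattern described above (plan the steps, fetch company data, then call the right API) can be sketched as a toy loop. Everything below is hypothetical: the tool names, the hard-coded plan, and the order data are stand-ins for the FM-generated plan and company APIs that the actual service would use, not the Amazon Bedrock API itself.

```python
# Toy sketch of the agent orchestration pattern: break a request into steps,
# then dispatch each step to a "tool" (standing in for a company API).
# All names and data here are hypothetical illustrations.

def look_up_order(item: str) -> dict:
    # Stand-in for a secure company data source.
    return {"item": item, "order_id": "A123", "eligible_for_exchange": True}

def create_exchange(order: dict, new_variant: str) -> str:
    # Stand-in for an internal fulfillment API.
    return f"Exchange created for order {order['order_id']}: {new_variant}"

TOOLS = {"look_up_order": look_up_order, "create_exchange": create_exchange}

def plan(request: str) -> list:
    # A real agent would ask the FM to produce this plan from the request;
    # here it is hard-coded for the shoe-exchange example.
    return [
        ("look_up_order", {"item": "black shoes"}),
        ("create_exchange", {"new_variant": "brown shoes"}),
    ]

def run_agent(request: str) -> str:
    # Execute the plan step by step, threading data between tools.
    context = {}
    for tool_name, args in plan(request):
        if tool_name == "look_up_order":
            context["order"] = TOOLS[tool_name](**args)
        else:
            context["result"] = TOOLS[tool_name](context["order"], **args)
    return context["result"]

print(run_agent("I want to exchange these black shoes for a brown pair instead"))
# → Exchange created for order A123: brown shoes
```

The point of the sketch is the division of labor: the planner decides which tools to call and in what order, while each tool encapsulates one company system, which is the part agents for Amazon Bedrock automate away.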

3. Vector engine support for Amazon OpenSearch Serverless gives customers a simpler way to leverage vectors for search

Vector embeddings allow machines to understand relationships across text, images, audio, and video content in a format that’s digestible for ML—making everything from online product recommendations to smarter search results work. Now, with vector engine support for Amazon OpenSearch Serverless, developers will have a simple, scalable, and high-performing solution to build ML-augmented search experiences and generative AI applications without having to manage a vector database infrastructure.
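The core idea behind a vector engine can be illustrated without any managed service: represent each document as an embedding vector and rank documents by cosine similarity to a query vector. The tiny hand-made three-dimensional vectors below are hypothetical stand-ins for real model-generated embeddings, which typically have hundreds or thousands of dimensions.

```python
import math

# Hypothetical 3-dimensional "embeddings"; a real system would get these
# from an embedding model and store them in a vector engine.
DOCS = {
    "running shoes": [0.9, 0.1, 0.0],
    "trail sneakers": [0.8, 0.2, 0.1],
    "coffee maker": [0.0, 0.1, 0.9],
}

def cosine_similarity(a, b):
    # Cosine of the angle between two vectors: 1.0 means same direction.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def search(query_vec, k=2):
    # Rank all documents by similarity to the query embedding, return top k.
    ranked = sorted(DOCS, key=lambda d: cosine_similarity(DOCS[d], query_vec),
                    reverse=True)
    return ranked[:k]

# A query vector near the "shoe" region of the space retrieves shoe documents.
print(search([0.85, 0.15, 0.05]))  # → ['running shoes', 'trail sneakers']
```

A vector engine does this same nearest-neighbor ranking, but at scale, with approximate-search indexes so that millions of embeddings can be queried in milliseconds.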

4. Generative business intelligence (BI) in Amazon QuickSight produces business intelligence based on natural language questions, making insights more accessible

Amazon QuickSight is a unified business intelligence service that helps employees across an organization easily find answers to questions about their data. Now, QuickSight is combining its existing ML innovations with new LLM capabilities available through Amazon Bedrock to provide generative AI capabilities—called generative BI. These capabilities will help break down silos, making it even easier to collaborate on data across an organization and speeding up data-driven decision-making. Using everyday natural language prompts, analysts will be able to author or fine-tune dashboards, and business users will be able to share insights with compelling visuals within seconds.

5. AWS HealthScribe will use generative AI to ease the paperwork burden for health care professionals, giving time back for patients

Updating electronic health records is one of the most cumbersome tasks for doctors and nurses. AWS HealthScribe, a HIPAA-eligible service, will bring clinicians relief by empowering health care software vendors to more easily build clinical applications that leverage generative AI. HealthScribe uses speech recognition and Amazon Bedrock–powered generative AI to create transcripts and generate easy-to-review clinical notes, with built-in security and privacy features designed to protect sensitive patient data.

6. New Amazon Elastic Compute Cloud (Amazon EC2) P5 instances harness NVIDIA H100 graphics processing units (GPUs) for accelerating generative AI training and inference

These Amazon EC2 P5 instances—now generally available—are powered by NVIDIA H100 Tensor Core GPUs, which are optimized for training LLMs and developing generative AI applications. (An “instance,” in cloud lingo, is virtual access to a compute resource—in this case, compute powered by H100 GPUs.) AWS is the first leading cloud provider to make NVIDIA’s highly sought-after H100 GPUs generally available in production. These instances are ideal for training and running inference on increasingly complex LLMs and for compute-intensive generative AI applications, including question answering, code generation, video and image generation, speech recognition, and more. With access to H100 GPUs, customers will be able to create their own LLMs and FMs on AWS faster than ever.

7. AWS offers seven free and low-cost skills training courses to help you use generative AI
A photo of attendees of the AWS Summit in New York learning about generative AI training courses

More than 75% of organizations plan to adopt big data, cloud computing, and AI in the next five years, according to the World Economic Forum. To help people train for the AI and ML jobs of the future, AWS has released on-demand training courses for those who want to understand, implement, and begin using generative AI. Amazon has designed courses specifically for developers who want to use Amazon CodeWhisperer; engineers and data scientists who want to leverage generative AI by training and deploying FMs; executives seeking to understand how generative AI can address their business challenges; and AWS Partners helping their customers harness generative AI’s potential.