As one of the most transformational innovations of our time, generative artificial intelligence (generative AI) continues to capture the world’s imagination, and we remain as committed as ever to harnessing it responsibly. With a team of dedicated responsible AI experts, complemented by our engineering and development organizations, we continually test and assess our products and services to define, measure, and mitigate concerns about accuracy, fairness, intellectual property, appropriate use, toxicity, and privacy. And while we don’t have all the answers today, we are working alongside others to develop new approaches and solutions to address these emerging challenges. We believe we can drive innovation in AI while continuing to implement the necessary safeguards to protect our customers and consumers.
At Amazon Web Services (AWS), we know that generative AI technology, and how it is used, will continue to evolve, posing new challenges that will require additional attention and mitigation. That’s why Amazon is actively engaged with organizations and standards bodies focused on the responsible development of next-generation AI systems, including NIST, ISO, the Responsible AI Institute, and the Partnership on AI. In fact, last week at the White House, Amazon signed voluntary commitments to foster the safe, responsible, and effective development of AI technology. We are eager to share knowledge with policymakers, academics, and civil society, as we recognize that the unique challenges posed by generative AI will require ongoing collaboration.
This commitment is consistent with our approach to developing our own generative AI services, including building foundation models (FMs) with responsible AI in mind at each stage of our comprehensive development process. Throughout design, development, deployment, and operations, we consider a range of factors, including 1) accuracy, e.g., how closely a summary matches the underlying document or whether a biography is factually correct; 2) fairness, e.g., whether outputs treat demographic groups similarly; 3) intellectual property and copyright considerations; 4) appropriate usage, e.g., filtering out user requests for legal advice, medical diagnoses, or illegal activities; 5) toxicity, e.g., hate speech, profanity, and insults; and 6) privacy, e.g., protecting personal information and customer prompts. We build solutions to address these issues into our training data processes, into the FMs themselves, and into the technology we use to pre-process user prompts and post-process outputs.
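To make the pre-processing and post-processing pattern above concrete, here is a minimal sketch of how a guardrail layer around a model might look. The category names, keyword lists, and function names are illustrative placeholders for explanation only, not a description of AWS's actual implementation, which is far more sophisticated than keyword matching.

```python
# Illustrative sketch only: prompts are screened before reaching the model,
# and model outputs are filtered before reaching the user. All names and
# word lists below are hypothetical placeholders.

BLOCKED_REQUEST_CATEGORIES = {
    "legal_advice": ["draft me a legal contract", "is it legal to"],
    "medical_diagnosis": ["diagnose my", "what illness do i have"],
}

TOXIC_TERMS = {"hate_term_placeholder", "profanity_placeholder"}


def preprocess_prompt(prompt: str) -> tuple[bool, str]:
    """Reject prompts that match a blocked-use category before inference."""
    lowered = prompt.lower()
    for category, patterns in BLOCKED_REQUEST_CATEGORIES.items():
        if any(p in lowered for p in patterns):
            return False, f"Request declined: {category.replace('_', ' ')}."
    return True, prompt


def postprocess_output(text: str) -> str:
    """Mask flagged terms in model output before returning it to the user."""
    cleaned = [
        "***" if token.lower().strip(".,!?") in TOXIC_TERMS else token
        for token in text.split()
    ]
    return " ".join(cleaned)
```

In practice, a request declined at the pre-processing stage never reaches the model at all, which is why layering checks on both sides of inference is useful: input screening handles inappropriate requests, while output filtering catches problematic content the model produces on its own.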
For all of our FMs, we actively invest in improving our features and in learning from customers as they experiment with new use cases. For example, Amazon Titan FMs are built to detect and remove harmful content from the data that customers provide for customization, reject inappropriate content in user input, and filter model outputs that contain inappropriate content (such as hate speech, profanity, and violence).
To help developers build applications responsibly, Amazon CodeWhisperer provides a reference tracker that displays the licensing information for a code recommendation and, where applicable, links to the corresponding open source repository. This makes it easier for developers to decide whether to use the code in their project and to make the relevant source code attributions as they see fit. In addition, Amazon CodeWhisperer filters out code recommendations that include toxic phrases or that indicate bias.
Through innovative services like these, we will continue to help our customers realize the benefits of generative AI, while collaborating across the public and private sectors to ensure we’re doing so responsibly. Together, we will build trust among customers and the broader public, as we harness this transformative new technology as a force for good.