For 25 years, Amazon and other innovative American companies have been laying the groundwork for artificial intelligence (AI) by building machine learning into the systems, logistics networks, and products and services that consumers use every day.
Yet just over a year ago, generative AI stormed into our cultural zeitgeist, capturing our collective attention and stoking fears about the new dangers AI might present. Questions also emerged about how to control and regulate AI, which was appropriate given the potential risks if AI fell into the wrong hands or if companies failed to deploy this powerful technology responsibly.
But those concerns—understandable yet still largely abstract—have been met with a public-private consensus approach that holds great promise. This process began a year ago when leading American technology companies joined the Biden administration’s voluntary commitments promoting the safe, secure, and transparent development of AI. A year later, we’ve learned some key lessons and made significant progress around deploying responsible AI, transparency, and public-private information sharing.
A series of developments have followed the White House voluntary commitments, including the recent roadmap for AI policy from U.S. senators, the G7 Hiroshima AI Process Code of Conduct, and the AI Safety Summits in the U.K. and Seoul, which laid the groundwork for international interoperability. We engaged closely with those efforts and committed to standards and frameworks that advance innovation while balancing it with safety.
These commitments build on Amazon’s approach to responsible AI development. From the outset, we have prioritized responsible AI innovation by embedding safety, fairness, robustness, security, and privacy into our development processes and educating our employees. Our practical approach, coupled with the necessary tools and expertise, enables us to offer our customers what they need to implement responsible AI practices effectively within their organizations. To date, we have launched more than 70 responsible AI capabilities and features, published more than 500 research papers, studies, and scientific blogs on responsible AI, and completed tens of thousands of hours of responsible AI training for Amazon employees.
It's now clear that we can have rules that protect against risks without hindering innovation. But we still need to secure global alignment on responsible AI measures to protect U.S. economic prosperity and security. That alignment won't be easy to accomplish, but we can do it. And if we succeed over the long term, American innovation will be enhanced, not restricted or harmed.
Here are some of the ways we can accomplish such an innovative and collaborative public-private approach to responsible AI:
First, all companies building, using, or leveraging AI must commit to its responsible deployment. At Amazon, we are operationalizing our global commitments by building guardrails into all of our AI products and services. For example, we're embedding invisible watermarks into the images produced by a tool that lets customers generate studio-quality images at scale. Watermarking matters because it can help reduce the spread of disinformation.
Second, we think it’s fair to ask companies to be transparent about how they develop and deploy AI, and to foster trust between the public and private sectors around responsible and safe AI measures. To this end, we’ve created more than 10 AI Service Cards that help AWS customers understand the limitations of our AI and machine learning services, along with responsible AI best practices, so they can build their AI applications safely and evaluate models against key safety and accuracy criteria.
Finally, collaboration and information sharing among companies and governments regarding safety and trust issues is crucial. This year, the U.S. Artificial Intelligence Safety Institute Consortium, established by the National Institute of Standards and Technology, is advancing research and measurement science for AI safety, conducting safety evaluations of models and systems, and developing guidelines for evaluations and risk mitigations, including content authentication and the detection of synthetic content.
Scaling AI to tackle some of humanity’s most challenging problems is no easy feat. But AI’s potential to improve our lives in so many ways, from drug discovery to mapping and curing diseases to optimizing agricultural yields, should capture our attention just as the generative AI craze did a year ago. And we cannot assume the U.S. will lead this AI innovation wave; many countries are pursuing strategies to win the AI race.
For the U.S. and our allies to unlock the benefits of AI while minimizing its risks, we must continue to work together to establish AI guardrails that are consistent with democratic values, secure economic prosperity and security, ensure global interoperability, promote competition, and enhance safe and responsible innovation. Frameworks like the White House commitments help ensure that companies, governments, academia, and researchers alike can work together to deliver groundbreaking generative AI innovation with trust at the forefront.
We must all commit to, and double down on, this emerging global consensus on responsible AI and the public-private collaboration that will continue to move us forward. Amazon is dedicated to innovating on behalf of our customers while implementing the necessary safeguards and committing to this balanced, collaborative progress.