Across Amazon’s services, we devote significant resources to combating CSAM, and we continue to invest in new technologies and tools to improve our ability to prevent, detect, respond to, and remove it.

Our approach

As we strive to be Earth’s Most Customer-Centric Company, Amazon and its subsidiaries provide services directly to customers as well as enable businesses to use our technology and services to sell and provide their own products and services. In all cases, our policies, teams, and tools work together to prevent and mitigate CSAM.
Our consumer services use a variety of tools and technologies such as machine learning, keyword filters, automated detection tools, and human moderators to screen images, videos, or text in public-facing content for policy compliance before it’s allowed online. These measures enforce multiple policies, including prohibitions on CSAM. As one example, Amazon Photos uses Thorn’s Safer technology to detect hash matches of images uploaded to the service and verifies positive matches using human reviewers.
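Hash matching of this kind can be illustrated with a deliberately minimal sketch. The hash set, function names, and review flow below are hypothetical and are not Safer's actual API: the service hashes each uploaded file and compares the digest against a list of known-CSAM hashes supplied by a trusted provider, routing any match to human review.

```python
import hashlib

# Hypothetical set of known-bad SHA-256 digests. In practice these come from
# a trusted hash-sharing program, never hard-coded in application code.
KNOWN_HASHES = {
    # SHA-256 of the placeholder bytes b"test", used here purely for illustration
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def sha256_digest(data: bytes) -> str:
    """Return the hex SHA-256 digest of a file's bytes."""
    return hashlib.sha256(data).hexdigest()

def screen_upload(data: bytes) -> str:
    """Flag an upload for human review if its digest matches a known entry."""
    if sha256_digest(data) in KNOWN_HASHES:
        return "flagged_for_human_review"
    return "cleared"
```

Note that a cryptographic hash only catches exact byte-for-byte copies; production systems like Safer also use perceptual hashes that remain stable across re-encoding, resizing, and similar transformations, which is why positive matches are still verified by human reviewers.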
We also enable customers that use Amazon technologies and services to mitigate abuse in their products and services. For example, Thorn’s Safer technology is available to businesses via the Amazon Web Services (AWS) Marketplace so they can proactively identify and address CSAM on their services.
Our services enable anyone to report inappropriate, harmful, or illegal content to us. When we receive reports of prohibited content, we act quickly to investigate and take appropriate action. We have relationships with U.S. and international hotlines, like the National Center for Missing & Exploited Children (NCMEC) and the Internet Watch Foundation (IWF), to allow us to receive and quickly act on reports of CSAM.

2025 CSAM mitigation

In 2025, Amazon and its subsidiaries* collectively submitted 26,500 CyberTipline reports of confirmed CSAM to NCMEC.**
In total, we filed 1,120,171 CyberTipline reports of suspected CSAM. Of these, 1,098,047 related to images and videos from the public web that Amazon scanned and removed before training its foundation models. After human review, Amazon determined that 99.60% of those reports were false positives; 4,376 were verified CSAM from the public web. We retracted the remaining 1,093,671 false positive reports so that NCMEC has an accurate accounting of verified CSAM. We also provided NCMEC with URLs for the verified CSAM so NCMEC can take appropriate action, and we will include actionable information in future reports, where available.
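The figures reported above are internally consistent, as a quick calculation using only the numbers in this section shows:

```python
# Counts as reported in this section.
total_web_reports = 1_098_047  # public-web CyberTipline reports filed
verified_csam = 4_376          # confirmed CSAM after human review

# Retracted reports and the resulting false positive rate.
false_positives = total_web_reports - verified_csam
fp_rate = false_positives / total_web_reports

print(false_positives)    # → 1093671, matching the retracted count above
print(f"{fp_rate:.2%}")   # → 99.60%, matching the stated false positive rate
```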
Using Safer, Amazon Photos reported 21,437 images (affecting 3,069 accounts).

Reports from Hotlines

Hotlines such as NCMEC, the IWF, and the Canadian CyberTipline submitted a total of 676 unique notifications of possible CSAM (related to 208 accounts) to Amazon; we promptly reviewed the reported content and actioned it as appropriate (the average time to action a hotline report was 1.22 days).
Of all content reported by hotlines, we confirmed 595 items as CSAM (relating to 133 accounts), actioned them, and reported them to NCMEC.
For all such notifications involving AWS customers, customers resolved the issue without additional intervention 91% of the time.
*Get more information about Twitch's efforts.
**Depending on the specifics of the service, we remove or disable, as applicable: URLs, images, chat interactions, resources, services, or accounts.

Mitigating CSAM in AI

Amazon joined the Generative AI Principles to Prevent Child Abuse in April 2024. In alignment with these voluntary commitments, we use industry-established methods to mitigate CSAM at all stages of the responsible AI lifecycle: Development, Deployment, and Maintenance.
As part of our commitment to safeguard our foundation models from CSAM, we scan training datasets for known CSAM and design and test our models and generative AI applications to reduce the risk that they will produce exploitative content. We scanned billions of pieces of publicly available content for possible CSAM. We detected 1,098,047 instances of possible CSAM, which we removed before training our foundation models and reported to NCMEC through our Amazon AI Services CyberTipline account. After subsequent human review, Amazon determined that 99.60% were false positives, and only 4,376 were confirmed CSAM. In 2026, we have enhanced our detection pipeline to improve our identification of CSAM and substantially reduce the false positive rate. In addition, we have identified actionable information and will include it in our CyberTipline reports, where available. We remain committed to improving the quality of our reports and CSAM detection processes.
To date, we are not aware of any instances of our models generating CSAM. Our first-party image models embed an invisible watermark in all images they generate, and we offer a detection solution that allows individuals to check for the watermark's presence. Our first-party image models also include, by default, content credentials based on a technical specification developed by the Coalition for Content Provenance and Authenticity (C2PA). As noted in the Generative AI Principles, content provenance solutions can support law enforcement efforts to distinguish between generated content and images of child victims.
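Invisible watermarking in general can be illustrated with a deliberately simplified least-significant-bit sketch. This hypothetical scheme is for illustration only; Amazon's actual watermark and the C2PA credential format are far more robust and are not described publicly in this report. The idea is that a known bit pattern is hidden in the low-order bits of pixel data at generation time, and a detector later recovers those bits to check for the mark.

```python
def embed_watermark(pixels: bytearray, mark_bits: list[int]) -> bytearray:
    """Hide one bit of the mark in the least significant bit of each pixel byte."""
    out = bytearray(pixels)
    for i, bit in enumerate(mark_bits):
        out[i] = (out[i] & 0xFE) | bit  # clear the LSB, then set it to the mark bit
    return out

def detect_watermark(pixels: bytes, length: int) -> list[int]:
    """Recover the first `length` embedded bits from the pixel bytes."""
    return [b & 1 for b in pixels[:length]]

# Example: embed the 4-bit mark 1,0,1,1 into raw pixel bytes.
raw = bytearray([200, 51, 17, 94, 120])
marked = embed_watermark(raw, [1, 0, 1, 1])
recovered = detect_watermark(bytes(marked), 4)  # → [1, 0, 1, 1]
```

A real watermark must survive compression, cropping, and re-encoding, which a raw LSB scheme does not; this sketch only conveys the embed-then-detect structure that makes generated images identifiable after the fact.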
We design our consumer-facing generative AI products, such as PartyRock and Rufus, with safeguards. We block exploitative prompts and responses and test our systems to prevent inappropriate use or model outputs. We also enable customers to report potential CSAM content directly to us, and our trust and safety teams prioritize review of those reports and make adjustments to improve our systems.
AWS’s Responsible AI policy explicitly prohibits the use of our AI/ML Services to harm or abuse a minor, which includes grooming and child sexual exploitation.
Finally, Amazon has continued to make financial and in-kind contributions to key organizations working on research and technologies to address safety risks associated with the advancement of generative AI. These include Thorn, NCMEC, and the Coalition for Content Provenance and Authenticity.

Commitments and partnerships

As part of our work to fight CSAM, we engage with a variety of organizations and support their work to protect children.
Amazon has endorsed the Voluntary Principles to Counter Child Sexual Exploitation and Abuse and is part of the WePROTECT Global Alliance. We sit on the boards of NCMEC and the Tech Coalition. Together with Thorn and All Tech is Human, we have committed to the Generative AI Principles to Prevent Child Abuse to reduce the risk that our generative AI services will be misused for child exploitation.
Amazon provides NCMEC millions of dollars in AWS promotional credits to reliably operate mission-critical infrastructure and applications to help missing and exploited children. In 2025, Amazon’s financial support enabled NCMEC to make improvements to the CyberTipline, including updating its case triage capabilities and report processing systems, allowing it to identify and process reports more efficiently.
Amazon continued its partnership with Thorn, providing AWS credits for Thorn to power its tools. Thorn leverages a variety of AWS solutions to support Safer, which helps content-hosting platforms detect CSAM and exploitation at scale. Companies have used Safer to detect over 12 million suspected CSAM files.
Amazon partnered with INHOPE, a global network of 57 internet hotlines, providing thousands of dollars to fund Project NOTICE, an initiative to automate CSAM reporting workflows through standardized formats.

More information on our partnerships

FAQs

Previous reports