Analyze the emerging threat of AI-generated child sexual abuse material (CSAM). What are the key challenges in regulating this technology, and suggest measures to protect children from such exploitation?

Introduction

The proliferation of Artificial Intelligence (AI) has brought unprecedented advancements, but it has also enabled new forms of cybercrime. One of the most alarming threats is the generation, possession, and dissemination of AI-created Child Sexual Abuse Material (CSAM). The WeProtect Global Alliance (2023) reports an 87% rise in online CSAM cases since 2019, and the UK Government's International AI Safety Report 2025 warns of the growing proliferation of AI-generated CSAM. As a rapidly digitizing nation, India faces significant challenges in regulating this menace and protecting children from exploitation.

What is AI-Generated CSAM?

CSAM refers to sexually explicit depictions of children in any medium, including images, audio, and video. AI-powered tools can now generate lifelike synthetic CSAM without involving real children, making detection difficult. The Internet Watch Foundation (IWF) Report 2024 highlights the rapid rise of AI-created CSAM on the open web. Deepfake technology further complicates regulation, since it allows realistic child abuse imagery to be fabricated without any physical abuse having taken place, leaving investigators no underlying contact offense to trace.

Key Challenges in Regulating AI-Generated CSAM

  1. Legal and Policy Gaps
  • India’s IT Act, 2000 (Section 67B) and the POCSO Act, 2012 criminalize child pornography but lack provisions specifically targeting AI-generated CSAM.
  • The NHRC Advisory (2023) recommends replacing the term ‘child pornography’ with ‘CSAM’, but the legislative amendment remains pending.
  • The UK’s upcoming legislation criminalizing AI tools designed to generate CSAM sets a global precedent; India has yet to introduce a similar law.
  2. Detection and Enforcement Challenges
  • AI-generated CSAM does not always depict a real, identifiable child, complicating its classification as an offense under existing laws; novel synthetic images also evade the hash-matching systems used to flag known material (see the sketch after this list).
  • End-to-end encryption hinders the tracking of CSAM-sharing networks.
  • The National Cyber Crime Reporting Portal (NCRP) had recorded 94 lakh child pornography incidents in India as of April 2024, but only a fraction led to convictions due to enforcement gaps.
  3. Platform and Tech Company Accountability
  • Major platforms such as Meta, X, TikTok, and Discord face criticism for failing to proactively block AI-generated CSAM.
  • U.S. Congressional hearings (2024) criticized Big Tech’s negligence in safeguarding children online.
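
The detection gap above can be made concrete. Mainstream CSAM scanning relies on perceptual hashing (e.g., Microsoft’s PhotoDNA): an uploaded image is hashed and compared against databases of previously identified material maintained by bodies such as NCMEC and the IWF. A freshly generated synthetic image has no database entry to match, so it passes through unflagged. Below is a minimal sketch of this mechanism using the open-source Python `imagehash` library; the hash values, file path, and distance threshold are illustrative assumptions, not a real deployment.

```python
# Minimal sketch of hash-list scanning, the dominant known-CSAM detection
# technique. The hash values below are made-up placeholders standing in for
# a database of previously identified material (e.g., an NCMEC/IWF list).
import imagehash
from PIL import Image

known_hashes = {
    imagehash.hex_to_hash("ffd8e0c0b0988c80"),  # hypothetical catalogued image
    imagehash.hex_to_hash("9f172786e71f1e00"),  # hypothetical catalogued image
}

MAX_DISTANCE = 5  # assumed Hamming-distance tolerance for near-duplicates

def matches_known_material(path: str) -> bool:
    """Return True if the image is a (near-)duplicate of catalogued material."""
    h = imagehash.phash(Image.open(path))  # 64-bit perceptual hash
    return any((h - known) <= MAX_DISTANCE for known in known_hashes)

# A newly AI-generated image hashes to a value absent from every list, so
# this check returns False -- which is precisely the enforcement gap.
```

Because this approach can only recognize material that has already been catalogued, regulators are pushing platforms toward classifier-based detection, taken up in the measures below.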

Measures to Protect Children from AI-Generated CSAM

  1. Strengthening Legal Frameworks
  • Amend the POCSO Act and the IT Act to explicitly criminalize AI-generated CSAM, and build such provisions into the proposed Digital India Act.
  • Adopt the UN draft convention on ‘Countering the Use of Information and Communications Technologies for Criminal Purposes’.
  • Define the term ‘sexually explicit’ under Section 67B of the IT Act to enable real-time identification and blocking of CSAM.
  2. Enhanced Monitoring and AI-Based Detection
  • Use AI-powered tools for deepfake and CSAM detection, similar to the UK’s AI Safety Institute approach.
  • Enforce tech company liability for CSAM detection and removal.
  3. Stronger Global Collaboration and Regulation
  • India must partner with global CSAM-tracking initiatives such as the U.S. National Center for Missing and Exploited Children (NCMEC).
  • Introduce a mandatory reporting system for AI-generated CSAM cases; a sketch of such a detect-and-report pipeline follows this list.
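
To make the reporting measure concrete, the sketch below shows what a detect-and-report pipeline could look like on a platform’s upload path. Everything here is an assumption for illustration: `synthetic_abuse_score` is a hypothetical stand-in for a trained classifier, and the threshold and report schema are invented, not the actual NCMEC CyberTipline or NCRP formats.

```python
# Illustrative detect-and-report pipeline for a platform upload path.
# ASSUMPTIONS: `synthetic_abuse_score` is a placeholder for a trained
# classifier; the threshold and report schema are invented for this sketch
# and do not reflect any real reporting API (NCMEC CyberTipline, NCRP).
import hashlib
from datetime import datetime, timezone

REPORT_THRESHOLD = 0.9  # assumed operating point; tuned per platform in practice

def synthetic_abuse_score(image_bytes: bytes) -> float:
    """Placeholder for model inference returning P(image is violating)."""
    return 0.0  # dummy value; a real system would run a trained classifier here

def handle_upload(image_bytes: bytes, uploader_id: str, report_queue: list) -> bool:
    """Block the upload and queue a mandatory report when the score is high."""
    score = synthetic_abuse_score(image_bytes)
    if score < REPORT_THRESHOLD:
        return True  # allow the upload
    report_queue.append({
        "sha256": hashlib.sha256(image_bytes).hexdigest(),  # evidence identifier
        "uploader_id": uploader_id,
        "score": round(score, 3),
        "detected_at": datetime.now(timezone.utc).isoformat(),
        "suspected_ai_generated": True,  # routes the case for specialist triage
    })
    return False  # block pending human review and mandatory reporting

# Usage: reports accumulated in the queue would be filed with the designated
# authority (e.g., NCMEC in the US, the NCRP in India) within a statutory window.
queue: list = []
allowed = handle_upload(b"raw image bytes", "user-123", queue)
```

Note one design choice: only the file’s hash, not the image itself, travels in the report record, which limits re-exposure of the material during triage.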

Conclusion

AI-generated CSAM poses a severe challenge to child safety. While India has taken steps through the NCRP and other cybercrime reporting mechanisms, legal loopholes, weak enforcement, and lax oversight by Big Tech continue to enable perpetrators. A combination of stringent legislation, AI-driven monitoring, corporate accountability, and international cooperation is essential to curb this emerging threat and safeguard children in the digital age.
