Introduction
According to the World Economic Forum’s Global Risks Report 2024, AI-driven misinformation ranks among the top five global threats. Labelling synthetic media represents India’s nascent yet crucial step toward ensuring digital transparency and trust.
The Rise of Synthetic Media and the Policy Context
- Explosion of Deepfakes and AI “Slop”: With generative AI tools such as Midjourney, DALL·E 3, and Sora, creating realistic deepfakes has become effortless; a 2024 Deeptrace report estimated an increase of over 85% in AI-generated fake visuals online. Political misuse: deepfake videos circulated during the 2024 Lok Sabha elections, and similar incidents in the U.S. presidential primaries, show their potential to distort public discourse.
- Governmental Response — The Labelling Mandate: The Union government’s proposed amendment to the IT Rules, 2021 mandates that AI-generated content be labelled. This aligns India with international trends, such as the EU’s AI Act (2024) and the U.S. Blueprint for an AI Bill of Rights (2022), which emphasize content authenticity.
- Global Industry Support: Leading firms such as Meta, along with standards bodies like the Coalition for Content Provenance and Authenticity (C2PA), have voluntarily adopted digital watermarking and metadata-based provenance tracking, signalling private-sector readiness for compliance (a simplified provenance sketch follows below).
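To make metadata-based provenance concrete, here is a minimal Python sketch of a signed provenance manifest. It is an illustration only, not the actual C2PA format (which embeds JUMBF manifests signed with X.509/COSE credentials); the key, field names, and JSON layout are assumptions for the example.

```python
import hashlib
import hmac
import json

# Illustrative sketch only: real C2PA manifests are embedded JUMBF structures
# signed with X.509/COSE credentials, not an HMAC over JSON.
SIGNING_KEY = b"publisher-secret-key"  # hypothetical key for this sketch

def make_manifest(content: bytes, generator: str) -> dict:
    """Record who/what produced the content, with a tamper-evident
    signature over its SHA-256 hash."""
    claim = {
        "generator": generator,
        "ai_generated": True,
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return claim

def verify_manifest(content: bytes, manifest: dict) -> bool:
    """Re-derive the hash and signature; any edit to the bytes breaks both."""
    claim = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, manifest["signature"])
            and claim["content_sha256"] == hashlib.sha256(content).hexdigest())

image = b"...synthetic image bytes..."
manifest = make_manifest(image, generator="example-diffusion-model")
print(verify_manifest(image, manifest))         # True: label verifies
print(verify_manifest(image + b"x", manifest))  # False: edited bytes fail
```

The design point is that a provenance label is only trustworthy if it is cryptographically bound to the content itself; a plain text tag can be copied or stripped, while a signed hash cannot be forged without the signing key.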
Efficacy of Labelling in Ensuring Digital Integrity
- Promoting Transparency and Authenticity: Labelling synthetic content enhances digital provenance—the ability to trace origin and alteration. It strengthens information integrity systems, helping citizens distinguish between authentic and manipulated media.
- Enhancing Electoral and National Security: During elections, deepfakes can alter voter perception. Proper labelling acts as a soft deterrent, preserving information hygiene and supporting democratic resilience. Case in point: Taiwan’s 2024 AI Transparency Initiative curbed electoral misinformation by mandating provenance labels on political content.
- Building Trust in Digital Ecosystems: As India expands its Digital Public Infrastructure (DPI), trust becomes central. Labelling fosters algorithmic accountability and complements frameworks like the Digital Personal Data Protection Act, 2023.
Limitations and Structural Challenges
- Technological Limitations: AI models evolve faster than detection tools. Sophisticated generative models, including generative adversarial networks (GANs), can be trained to evade detectors, and simple post-processing such as re-encoding or cropping can strip watermarks and labels, creating a cat-and-mouse dynamic between regulators and developers (the sketch after this list illustrates this fragility).
- Implementation and Jurisdictional Gaps: Subordinate legislation through IT Rules, rather than a Parliamentary Act, limits oversight and legitimacy. Moreover, India’s vast linguistic diversity and low digital literacy complicate effective enforcement.
- Risk of Over-regulation: Excessive or ambiguous rules could stifle innovation in creative AI sectors, particularly for startups and artists experimenting with generative design and virtual production.
- Global Interoperability Issues: Without standardized global labelling norms, cross-border content flows may dilute the mandate’s effect. UNESCO’s Recommendation on the Ethics of Artificial Intelligence (2021) calls for harmonized governance frameworks.
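To illustrate the technological limitation above, the sketch below (Python with NumPy; the scheme and parameters are illustrative assumptions, not any deployed watermarking system) hides a watermark in an image’s least-significant bits and shows how a single requantization pass, analogous to the routine re-compression social platforms apply, erases it.

```python
import numpy as np

# Toy example of a fragile invisible watermark: one hidden bit per pixel in
# the least significant bit (LSB) of an 8-bit grayscale image. Parameters
# are illustrative, not a production watermarking scheme.
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
watermark = rng.integers(0, 2, size=image.shape, dtype=np.uint8)

marked = (image & 0xFE) | watermark  # embed: overwrite each pixel's LSB

def extract(img):
    return img & 1  # read back the hidden bits

print((extract(marked) == watermark).mean())  # 1.0: watermark fully intact

# One mild requantization step (drop to 7-bit precision and back), similar
# in effect to routine lossy re-encoding, wipes the hidden bits entirely.
requantized = (marked // 2) * 2
print((extract(requantized) == watermark).mean())  # ~0.5: no better than chance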
Way Forward
- Comprehensive AI Regulation: Move from reactive labelling to a holistic AI Governance Act, incorporating ethical AI, transparency audits, and grievance redressal.
- AI Literacy and Citizen Awareness: Public awareness campaigns akin to “Fake News Buster” can empower users to identify synthetic content.
- Technological Co-regulation: Collaboration among government, tech firms, and academia on watermarking, digital forensics, and blockchain-based AI provenance systems (a minimal provenance-ledger sketch follows this list).
- Periodic Review Mechanism: Continuous adaptation of norms as AI models evolve, ensuring regulatory agility.
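As a companion to the co-regulation point, here is a minimal sketch of a blockchain-style provenance ledger: each entry commits to its predecessor’s hash, so silently rewriting history is detectable. The field names, the in-memory list, and the sample hashes are hypothetical, not a reference design for any proposed registry.

```python
import hashlib
import json
import time

def append_entry(chain, content_hash, action, actor):
    """Add a ledger entry that commits to the previous entry's hash."""
    prev = chain[-1]["entry_hash"] if chain else "GENESIS"
    entry = {"prev": prev, "content_sha256": content_hash,
             "action": action, "actor": actor, "ts": time.time()}
    entry["entry_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    chain.append(entry)

def verify_chain(chain):
    """Recompute every link; any tampered entry breaks all later hashes."""
    prev = "GENESIS"
    for entry in chain:
        body = {k: v for k, v in entry.items() if k != "entry_hash"}
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev"] != prev or entry["entry_hash"] != recomputed:
            return False
        prev = entry["entry_hash"]
    return True

ledger = []
append_entry(ledger, "ab12...", "generated", "model-x")   # hypothetical values
append_entry(ledger, "ab12...", "labelled", "platform-y")
print(verify_chain(ledger))          # True: ledger is internally consistent
ledger[0]["actor"] = "someone-else"  # tamper with the history
print(verify_chain(ledger))          # False: tampering is detected
```

A shared, append-only record of this kind is what lets multiple parties (generator, platform, regulator) attest to a content item’s lifecycle without having to trust any single one of them.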
Conclusion
As Yuval Noah Harari cautions in 21 Lessons for the 21st Century, “clarity is power.” Labelling synthetic media is the first step toward digital clarity, vital for safeguarding truth, trust, and democratic integrity online.


