[Answered] Examine the necessity for legislation explicitly defining personality rights in the Age of AI. Justify the need for AI watermarking, platform liability, and global collaboration for enforcement.

Introduction

Europol (2023) reports a roughly 900% rise in deepfake circulation since 2020. AI-generated impersonations increasingly threaten privacy, dignity, and economic autonomy, necessitating legally codified personality rights and robust global safeguards.

Need for Explicit Legislation on Personality Rights in the AI Age

Rising Identity Misappropriation Through AI

  1. Generative AI, deepfakes, and voice clones can replicate facial expressions, speech patterns, and mannerisms with near-perfect accuracy. India has witnessed lawsuits by Aishwarya Rai Bachchan, Abhishek Bachchan, Anil Kapoor, Amitabh Bachchan, and Arijit Singh over such misuse, showing how real-world identity misappropriation causes reputational, emotional, and financial harm.
  2. AI erodes the boundary between authenticity and simulation, making implicit constitutional protections under Article 21 inadequate without explicit statutory guidance.

Current Indian Framework is Fragmented and Reactive

  1. India follows a hybrid privacy–property model, but personality rights are derived only from court precedents:
    • Amitabh Bachchan v. Rajat Nagi (2022): protected likeness and voice.
    • Anil Kapoor v. Simply Life (2023): banned AI misuse of “Jhakaas”.
    • Arijit Singh v. Codible Ventures (2024): protected vocal identity.
  2. However, no dedicated legislation defines identity, likeness, voice, digital persona, or post-mortem rights. Enforcement under the IT Act 2000 and Intermediary Guidelines 2021/24 remains weak due to anonymity, cross-border hosting, and absence of training-data transparency.

Economic Stakes Necessitate Codification

  1. AI-driven technologies increasingly monetise celebrity likenesses through advertising, gaming, and virtual worlds.
  2. The U.S. “right of publicity” doctrine treats identity as an economic resource, and the global digital persona market is projected to exceed USD 30 billion by 2030.
  3. Without codified rights, celebrities, artists, and creators lose control over commercial exploitation of their digital selves.

Justification for AI Watermarking

  1. Ensures Traceability and Accountability: The EU AI Act (2024) mandates deepfake labelling and watermarking for transparency. Watermarks create audit trails to identify AI-generated content and deter impersonation, fraud, and misinformation (a minimal sketch of this traceability idea follows this list).
  2. Mitigates Democratic and Social Risks: NITI Aayog (2023) flagged deepfake-enabled political manipulation as a “critical national security threat.” Watermarking reduces virality of doctored content affecting elections, public trust, and communal harmony.
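
To make the traceability argument concrete, below is a minimal sketch, assuming a hypothetical scheme in which the AI provider attaches a keyed provenance tag (an HMAC over a content hash) that a platform can later verify; the key, function names, and sample data are illustrative, and real watermarking or provenance standards are considerably more robust.

```python
# Illustrative only: a keyed provenance tag standing in for a real watermark.
# GENERATOR_KEY, label_content and verify_label are hypothetical names.
import hashlib
import hmac

GENERATOR_KEY = b"demo-secret-held-by-the-ai-provider"  # hypothetical signing key

def label_content(content: bytes) -> str:
    """Provider side: derive a provenance tag attached to AI-generated output."""
    digest = hashlib.sha256(content).digest()
    return hmac.new(GENERATOR_KEY, digest, hashlib.sha256).hexdigest()

def verify_label(content: bytes, tag: str) -> bool:
    """Platform side: check whether the tag matches the content and the key."""
    return hmac.compare_digest(label_content(content), tag)

if __name__ == "__main__":
    clip = b"synthetic-voice-sample"
    tag = label_content(clip)                  # attached at generation time
    print(verify_label(clip, tag))             # True: traceable to its generator
    print(verify_label(b"edited clip", tag))   # False: provenance chain broken
```

The failing second check is the point of the audit trail: once content is altered or its tag is stripped, provenance can no longer be established, which is exactly the gap that labelling and watermarking mandates seek to close.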

Need for Platform Liability

  1. Platforms Enable Mass Dissemination: Google, YouTube, Meta, and Character.AI host and algorithmically amplify deepfakes. Without liability, platforms treat identity violations as third-party content. U.S. cases against Character.AI (2024) show harm caused by chatbots encouraging self-harm or impersonation.
  2. Harmonising Safe-Harbour with Duty of Care: Safe-harbour protections under Section 79 IT Act must be balanced with obligations for:
  • Rapid takedowns
  • Content provenance checks
  • Model training disclosures
  • Preventing re-upload loops (a hash-matching sketch follows at the end of this subsection)

Jurisdictions such as China, the EU, and South Korea already mandate proactive filtering.
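
A minimal sketch of such a re-upload guard, assuming a hypothetical takedown registry keyed by content fingerprints; exact SHA-256 matching stands in here for the perceptual hashing production systems use, and all names and data are illustrative.

```python
# Illustrative only: blocking re-uploads of content removed after a
# personality-rights takedown by matching fingerprints against a registry.
import hashlib

takedown_registry: set[str] = set()  # hypothetical platform-side store

def fingerprint(data: bytes) -> str:
    """Exact-content fingerprint; real systems use perceptual hashes."""
    return hashlib.sha256(data).hexdigest()

def register_takedown(data: bytes) -> None:
    """Record removed content so identical copies cannot return."""
    takedown_registry.add(fingerprint(data))

def allow_upload(data: bytes) -> bool:
    """Reject any upload whose fingerprint matches removed content."""
    return fingerprint(data) not in takedown_registry

if __name__ == "__main__":
    deepfake = b"deepfake-video-bytes"
    register_takedown(deepfake)
    print(allow_upload(deepfake))          # False: re-upload loop blocked
    print(allow_upload(b"original clip"))  # True: unrelated content passes
```

Because trivially re-encoded copies defeat exact hashing, duty-of-care proposals pair such registries with perceptual matching and provenance checks rather than relying on either alone.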

Need for Global Collaboration

  1. AI is Transnational; National Laws Alone Are Insufficient: Storage, training data, servers, and models operate across borders. UNESCO’s Recommendation on the Ethics of AI (2021) calls for globally interoperable safeguards for identity and dignity.
  2. Harmonising Definitions and Enforcement: Standardising laws on digital persona, likeness, post-mortem rights, and training-data consent is essential, as argued by Aldrich & Smith (2024) on high-risk deepfake governance and by Guido Westkamp (2025), who advocates extended personality rights.

Conclusion

As Yuval Harari warns in Homo Deus, technology must not erode human dignity. Codified personality rights, strong platform accountability, and global AI governance are imperative for safeguarding identity in the algorithmic era.
