Introduction
The Economic Survey 2025–26 flags a digital trust deficit as a rising concern, while the AI expansion highlighted in Budget 2026–27 underscores deepfakes as a governance challenge that threatens information integrity, democracy, and national security globally.
What are Deepfakes?
- Deepfakes are AI-generated synthetic media (images, videos, audio) that convincingly manipulate or fabricate a person’s likeness and voice.
- They rely on Generative Adversarial Networks (GANs): one network (the generator) creates fake content, while the other (the discriminator) tries to tell fakes from real samples; each improves by competing against the other.
Concept of Deepfakes
- Technological Foundation: Deepfakes use deep learning models in which a generator creates fake content and a discriminator evaluates its authenticity. Continuous iteration produces outputs nearly indistinguishable from real footage.
- Evolution and Context: Initially used for entertainment and satire, deepfakes have evolved into tools capable of mimicking faces, voices, and emotions with high precision. Example: fabricated videos of global leaders during crises have created confusion and distrust.
- Epistemic Shift: Traditionally, photos and videos were treated as proof of truth. Deepfakes undermine this, creating a post-truth visual culture in which even authentic evidence is doubted (the "liar's dividend" effect).
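The adversarial iteration described above can be illustrated with a deliberately tiny sketch: a one-number "generator" learns to imitate a 1-D Gaussian while a logistic "discriminator" tries to tell its samples from real ones. All names, learning rates, and the toy data are illustrative assumptions, not any production deepfake system.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda u: 1.0 / (1.0 + np.exp(-u))

# Real data: a 1-D Gaussian the generator must learn to imitate (toy stand-in for "real footage").
REAL_MEAN, REAL_STD = 4.0, 1.25

# Generator G(z) = wg*z + bg ; Discriminator D(x) = sigmoid(wd*x + bd)
wg, bg = 0.1, 0.0
wd, bd = 0.1, 0.0
lr, batch = 0.02, 64

for step in range(3000):
    # --- Discriminator update: push D(real) toward 1 and D(fake) toward 0 ---
    real = rng.normal(REAL_MEAN, REAL_STD, batch)
    z = rng.normal(0.0, 1.0, batch)
    fake = wg * z + bg
    sr = sigmoid(wd * real + bd)          # D's score on real samples
    sf = sigmoid(wd * fake + bd)          # D's score on fake samples
    gwd = np.mean((sr - 1) * real + sf * fake)   # hand-derived gradient of D's loss
    gbd = np.mean((sr - 1) + sf)
    wd -= lr * gwd
    bd -= lr * gbd

    # --- Generator update: push D(fake) toward 1 (non-saturating GAN loss) ---
    z = rng.normal(0.0, 1.0, batch)
    fake = wg * z + bg
    sf = sigmoid(wd * fake + bd)
    gwg = np.mean((sf - 1) * wd * z)
    gbg = np.mean((sf - 1) * wd)
    wg -= lr * gwg
    bg -= lr * gbg

fakes = wg * rng.normal(0.0, 1.0, 1000) + bg
print(f"fake mean ~ {fakes.mean():.2f} (real mean = {REAL_MEAN})")
```

After training, the generator's samples drift toward the real distribution; real deepfake models replace these two scalars with deep networks over pixels or audio, but the competitive loop is the same.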
Potential Risks Associated with Deepfakes
- Political Manipulation: Fabricated videos of leaders can incite unrest or sway elections; the 2026 Netanyahu deepfake controversy illustrated the “liar’s dividend,” where authentic footage is dismissed as fake.
- Non-Consensual Intimate Imagery (NCII): The most common abuse, causing severe psychological harm; India reported a surge in deepfake porn cases targeting women.
- National Security Concerns: Deepfakes can be used in psychological warfare, misinformation campaigns, and diplomatic manipulation. In geopolitical conflicts, information becomes a strategic weapon.
- Geopolitical Weaponisation: State actors use deepfakes for disinformation campaigns, amplifying hybrid warfare.
- Financial Fraud: Voice cloning powers "vishing" attacks used to authorise fraudulent transactions. Example: impersonating a CEO's voice to order fund transfers.
- Social Fragmentation: Echo chambers reinforce competing realities, polarising societies along ideological lines.
- Economic and Institutional Impact: A trust deficit and weak contract enforcement harm business ecosystems. As NITI Aayog has highlighted, digital trust is foundational to India's AI-driven economy.
Solutions to Mitigate Threats
A multi-layered approach combining technology, law, and education is essential:
- Technological Safeguards: Mandate C2PA digital provenance standards and robust watermarking that survives compression/re-upload. Expand blockchain-based verification for media authenticity.
- Regulatory Framework: Enforce 3-hour takedown for malicious deepfakes under 2026 IT Rules amendments; require clear SGI (Synthetically Generated Information) labelling for satirical content. Strengthen DPDP Act enforcement for NCII. Example: The EU AI Act requires transparency, risk classification, and compliance audits for AI systems.
- Institutional Mechanisms: Establish an independent AI Ethics Oversight Body with judicial and civil-society representation for high-risk cases.
- Public Awareness: Scale digital literacy programmes through iGOT and school curricula to foster critical consumption (verify before you share).
- Deepfake Evaluation Frameworks: Governments (e.g., the UK in February 2026) are collaborating with technology companies such as Microsoft to build standardised detection-evaluation tools that keep pace with the latest AI models.
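The provenance idea behind standards such as C2PA can be sketched in a few lines: fingerprint the media at publication and later verify that the bytes are unchanged. This is a minimal illustration only; real C2PA uses X.509 certificates and embedded signed manifests, not the shared HMAC secret assumed here, and the function names are hypothetical.

```python
import hashlib
import hmac

# Hypothetical publisher key: a stand-in for a real signing certificate.
PUBLISHER_KEY = b"demo-secret"

def issue_credential(media_bytes: bytes) -> str:
    """Record a tamper-evident fingerprint of the media at publication time."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    tag = hmac.new(PUBLISHER_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return f"{digest}:{tag}"

def verify_credential(media_bytes: bytes, credential: str) -> bool:
    """True only if the credential is genuine and the media is byte-identical."""
    digest, tag = credential.split(":")
    expected = hmac.new(PUBLISHER_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return (hmac.compare_digest(tag, expected)
            and hashlib.sha256(media_bytes).hexdigest() == digest)

original = b"...original video frame data..."
cred = issue_credential(original)
print(verify_credential(original, cred))                # True
print(verify_credential(original + b"tamper", cred))    # False
```

Note the limitation this exposes: an exact hash breaks under routine compression or re-upload, which is precisely why the bullet above also calls for robust watermarking that survives such transformations.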
Conclusion
Echoing Yuval Noah Harari, in an age of synthetic realities, deepfakes are no longer just a technology problem; they are a trust problem. Preserving that trust demands ethical technology, robust institutions, and informed citizens.


