UPSC Syllabus Topic: GS Paper 3 - Awareness in the fields of IT, Space, Computers, robotics, nano-technology, bio-technology
Introduction
Near-perfect AI videos and audio now appear alongside real content, so users struggle to trust what they see and hear. A deepfake of the Finance Minister promoting an investment scheme, which caused large financial losses, shows how synthetic media can directly harm citizens. In response, India has proposed an AI labelling framework under the IT Rules, 2021, focusing on clear labels, duties for large platforms, better detection tools and graded responsibilities for creators.

What is synthetic media?
Synthetic media is content that is artificially or algorithmically created, modified, or generated to appear authentic. It includes digital material reshaped by software in images, audio or video, even when it is not produced by generative AI. Content may be fully AI-generated, AI-assisted or AI-altered, including mixed media such as real visuals with cloned audio.
By some estimates, over half of all content on the Internet is now AI-generated. This huge volume makes it hard for platforms and users to pick out content that is dangerous or misleading.
Concerns Related to Synthetic Media
- Misinformation and Disinformation: A major concern is the potential for synthetic media to spread fake news, create false narratives, and manipulate public opinion. This can impact political campaigns, disrupt democratic processes, and erode public trust in news organizations and government institutions.
- Difficulty of detection by users: Many synthetic videos and audio clips now look and sound almost real. Some still show visible signs of editing, but others are so realistic that viewers cannot reliably distinguish them from authentic content.
- Privacy and Consent Violations: Synthetic media tools allow for the use of individuals’ likenesses, voices, and behaviors without their consent. This has led to an increase in non-consensual intimate imagery (deepfake pornography), identity theft, and online harassment, causing significant psychological and reputational harm to victims.
- Fraud and Financial Crime: Deepfake audio and video can be used in social engineering attacks to impersonate individuals (such as a CEO or bank employee) and deceive others into transferring money or divulging sensitive information.
- Erosion of Trust and Authenticity: The prevalence of convincing synthetic content blurs the line between reality and fabrication, leading to a general skepticism towards digital media. This “authenticity crisis” makes it harder to use authentic media as reliable evidence in legal or journalistic contexts.
- Intellectual Property Issues: The use of copyrighted material to train AI models and the generation of content that may infringe on existing works raise complex legal challenges regarding ownership and originality.
- National Security Risks: Malicious state or non-state actors may use synthetic media for information warfare, psychological operations, or to sow discord and destabilize trust in targeted nations.
Regulatory Mechanism (Draft Amendments to the IT Rules, 2021)
The government had earlier treated the existing framework as adequate to deal with synthetic media. It has now proposed draft amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, with the aim of creating a clear AI labelling framework.
Key provisions of the draft amendments to the IT Rules, 2021
- Enhanced obligations for Significant Social Media Intermediaries (SSMIs): The draft requires SSMIs to:
  - Obtain a user declaration on whether uploaded information is synthetically generated;
  - Deploy reasonable and proportionate technical measures to verify such declarations; and
  - Ensure that synthetically generated information is clearly labelled or accompanied by a notice indicating the same. The label or identifier must enable immediate identification of the content as synthetically generated information.
  - The rules further prohibit intermediaries from modifying, suppressing, or removing such labels or identifiers.
- Minimum label size and duration: The draft requires labels to cover at least 10% of the visual area of synthetic videos; for audio, the disclosure must cover at least the first 10% of the clip's duration. This aims to ensure the label is prominent rather than hidden like fine print (see the worked sketch after this list).
- Due diligence focused on large platforms: The primary obligations fall on Significant Social Media Intermediaries, which host large user bases and can amplify harmful synthetic media at scale. This reflects the view that bigger platforms carry higher responsibility.
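The 10% thresholds become concrete with numbers. Below is a minimal Python sketch, assuming the visual threshold applies to raw pixel area and the audio threshold to clip length in seconds; the function names are illustrative, and the draft's final text would govern the exact computation.

```python
import math

def min_label_side(frame_w: int, frame_h: int) -> int:
    """Side of the smallest square label covering >= 10% of the frame area."""
    return math.ceil(math.sqrt(0.10 * frame_w * frame_h))

def min_audio_disclosure(clip_seconds: float) -> float:
    """Disclosure must span at least the first 10% of the clip's duration."""
    return 0.10 * clip_seconds

# A 1920x1080 frame needs a label roughly 456 px on a side;
# a 60-second clip needs a disclosure covering its first 6 seconds.
print(min_label_side(1920, 1080))   # 456
print(min_audio_disclosure(60.0))   # 6.0
```

Note how large the result is: a 456 px square dominates a 1080p frame, which previews the practicality concern discussed in the next section.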
Major Concerns Related to the Draft Amendments to the IT Rules, 2021
- Broad and unclear scope of synthetic media: The definition covers any content that is artificially or algorithmically created or modified. Since not all synthetic media is problematic, this makes it hard to separate everyday edits or computer-generated imagery from content that is actually harmful or misleading.
- Rigid 10% labelling rule may not work in practice: The rule that labels must cover 10% of the visual area or 10% of the initial audio duration may not meet the reasonable person test. Short disclaimers in longer clips can be ignored like fine print, and long disclaimers may overwhelm users instead of helping them.
- Unclear treatment of mixed media formats: The framework does not clearly deal with mixed media, such as real visuals combined with cloned or synthetic audio. It is not clear how the 10% rule will apply in such cases, which creates confusion for both platforms and creators.
- Unreliable technical markers like watermarks: Watermarks added by AI companies are easy to remove. Soon after a major text-to-video tool promised watermarking of synthetic videos, other tools appeared that could wipe these markings. This makes sole reliance on watermarks a weak safeguard.
- Limited effectiveness of current detection and labelling tools: Synthetic media is multiplying faster than verification tools can keep up. Platforms face difficulty in detecting AI-generated or algorithmically created content, and third-party detection tools are only as good as their training and accuracy. An audit of 516 AI-generated posts found that only 30% (roughly 155 posts) were correctly flagged, and even the best-performing platform labelled only about 55% of such content.
- Gaps in content provenance and platform practices: Many platforms follow Coalition for Content Provenance and Authenticity (C2PA) standards to track content origin, but these standards do not always result in consistent labelling (a naive illustration follows this list).
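To see why provenance metadata alone is a weak signal, consider a deliberately naive sketch. It only checks whether a C2PA-style marker string appears in a file's raw bytes; real C2PA verification parses and cryptographically validates the embedded manifest with an official SDK, so treat this purely as an illustration of the weakness.

```python
from pathlib import Path

def has_c2pa_marker(path: str) -> bool:
    """Naive heuristic: look for a C2PA marker in the raw bytes.

    This is NOT real C2PA validation, which requires parsing and
    cryptographically verifying the embedded manifest. It only
    illustrates the core weakness: provenance metadata can be
    stripped by re-encoding, so its absence proves nothing.
    """
    return b"c2pa" in Path(path).read_bytes()

# A re-encoded or watermark-stripped file simply returns False,
# even when the content is fully AI-generated.
```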
Way forward
- Fine-tune categories and standards: Develop clear, precise standards for different types of synthetic media. Use a tiered labelling system that separates fully AI-generated, AI-assisted and AI-altered content, instead of relying on one generic label (see the sketch after this list).
- Extend duties to influential creators: Make creators above a certain follower threshold disclose their use of AI in content creation. Encourage voluntary self-labelling among smaller creators to build a basic culture of transparency.
- Adopt graded compliance: Link stricter obligations to higher reach and influence. Professional creators and big accounts should follow stronger labelling and disclosure norms to maintain public trust and adapt to changing regulation.
- Improve detection systems with external tools: Strengthen platform capacity to identify synthetic media by using specialised third-party detection tools, and regularly retrain and evaluate them to keep accuracy high.
- Use independent auditors for high-risk content: In cases of harmful, fraudulent or misleading synthetic media, rely on independent information verifiers and auditors.
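The tiered labelling and graded compliance ideas above can be pictured together as a small rule table. The sketch below is purely illustrative: the three tiers come from this article, while the follower threshold and duty strings are hypothetical placeholders, since the draft fixes no such numbers.

```python
from enum import Enum

class SyntheticTier(Enum):
    """Tiered labels instead of one generic 'synthetic' tag."""
    AI_GENERATED = "Fully AI-generated"
    AI_ASSISTED = "AI-assisted"
    AI_ALTERED = "AI-altered (real media modified)"

# Hypothetical cut-off; the draft rules fix no follower number.
INFLUENCER_THRESHOLD = 100_000

def creator_duties(tier: SyntheticTier, followers: int) -> list[str]:
    """Graded compliance: duties scale with a creator's reach."""
    duties = [f"Display label: {tier.value}"]
    if followers >= INFLUENCER_THRESHOLD:
        duties.append("Mandatory disclosure of AI use at upload")
    else:
        duties.append("Voluntary self-labelling encouraged")
    return duties

print(creator_duties(SyntheticTier.AI_ALTERED, 250_000))
```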
Conclusion
AI labelling rules are emerging because synthetic media is widespread, hard to detect and sometimes highly harmful. The draft IT rules push large platforms to label such content and verify user declarations, while graded compliance can extend duties to creators. With stronger standards, tiered labels, better detection tools and independent auditors working together as a multi-stakeholder effort, users can receive clearer signals about what is real and what is synthetic, and face fewer risks online.
Question for practice:
Examine the effectiveness of India’s proposed AI labelling framework in addressing the risks posed by synthetic media.
Source: The Hindu




