{"id":349888,"date":"2025-11-13T16:37:56","date_gmt":"2025-11-13T11:07:56","guid":{"rendered":"https:\/\/forumias.com\/blog\/?p=349888"},"modified":"2025-11-17T21:49:17","modified_gmt":"2025-11-17T16:19:17","slug":"ai-labelling-regulations-framework","status":"publish","type":"post","link":"https:\/\/forumias.com\/blog\/ai-labelling-regulations-framework\/","title":{"rendered":"AI Labelling Regulations Framework"},"content":{"rendered":"<p><strong>UPSC Syllabus Topic:<\/strong> GS Paper 3 - Awareness in the fields of IT, Space, Computers, robotics, nano-technology, bio-technology<\/p>\n<h2><strong>Introduction<\/strong><\/h2>\n<p>Near-perfect AI videos and audio now appear next to real content, so users struggle to trust what they see and hear. A deepfake of the Finance Minister promoting an investment scheme and causing a large financial loss shows how <strong>synthetic media can directly harm citizens.<\/strong> To respond, India has proposed an AI labelling framework under the IT Rules, 2021, focusing on clear labels, duties for large platforms, better detection tools and graded responsibilities for creators.<\/p>\n<p><img data-recalc-dims=\"1\" loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-350151\" src=\"https:\/\/i0.wp.com\/forumias.com\/blog\/wp-content\/uploads\/2025\/11\/AI-Labelling-Regulations-Framework.png?resize=437%2C290&#038;ssl=1\" alt=\"AI Labelling Regulations Framework\" width=\"437\" height=\"290\" srcset=\"https:\/\/i0.wp.com\/forumias.com\/blog\/wp-content\/uploads\/2025\/11\/AI-Labelling-Regulations-Framework.png?resize=300%2C199&amp;ssl=1 300w, https:\/\/i0.wp.com\/forumias.com\/blog\/wp-content\/uploads\/2025\/11\/AI-Labelling-Regulations-Framework.png?resize=1024%2C680&amp;ssl=1 1024w, https:\/\/i0.wp.com\/forumias.com\/blog\/wp-content\/uploads\/2025\/11\/AI-Labelling-Regulations-Framework.png?resize=768%2C510&amp;ssl=1 768w, 
https:\/\/i0.wp.com\/forumias.com\/blog\/wp-content\/uploads\/2025\/11\/AI-Labelling-Regulations-Framework.png?w=1280&amp;ssl=1 1280w\" sizes=\"auto, (max-width: 437px) 100vw, 437px\" \/><\/p>\n<h2><strong>What is synthetic media?<\/strong><\/h2>\n<p><strong>Synthetic media is content that is artificially or algorithmically created, modified, or generated to appear authentic.<\/strong> It includes digital material reshaped by software in images, audio or video, even when it is not produced by generative AI. Content may be <strong>fully AI-generated, AI-assisted or AI-altered<\/strong>, including mixed media such as real visuals with cloned audio.<\/p>\n<p><strong>By some estimates, over 50% of all content on the Internet is now AI-generated.<\/strong> This huge volume makes it hard for platforms and users to pick out content that is dangerous or misleading.<\/p>\n<h2><strong>Concerns Related to Synthetic Media<\/strong><\/h2>\n<ol>\n<li><strong>Misinformation and Disinformation:<\/strong> A major concern is the potential for synthetic media to spread fake news, create false narratives, and manipulate public opinion. This can impact political campaigns, disrupt democratic processes, and erode public trust in news organizations and government institutions.<\/li>\n<li><strong>Difficulty of user detection:<\/strong> Many synthetic videos and audio clips now look and sound almost real. Some still show visible signs of editing, but others are so realistic that viewers cannot clearly distinguish them from authentic content.<\/li>\n<li><strong>Privacy and Consent Violations:<\/strong> Synthetic media tools allow for the use of individuals&#8217; likenesses, voices, and behaviors without their consent. 
This has led to an increase in non-consensual intimate imagery (deepfake pornography), identity theft, and online harassment, causing significant psychological and reputational harm to victims.<\/li>\n<li><strong>Fraud and Financial Crime:<\/strong> Deepfake audio and video can be used in social engineering attacks to impersonate individuals (such as a CEO or bank employee) and deceive others into transferring money or divulging sensitive information.<\/li>\n<li><strong>Erosion of Trust and Authenticity:<\/strong> The prevalence of convincing synthetic content blurs the line between reality and fabrication, leading to a general skepticism towards digital media. This &#8220;authenticity crisis&#8221; makes it harder to use authentic media as reliable evidence in legal or journalistic contexts.<\/li>\n<li><strong>Intellectual Property Issues:<\/strong> The use of copyrighted material to train AI models and the generation of content that may infringe on existing works raise complex legal challenges regarding ownership and originality.<\/li>\n<li><strong>National Security Risks:<\/strong> Malicious state or non-state actors may use synthetic media for information warfare, psychological operations, or to sow discord and destabilize trust in targeted nations.<\/li>\n<\/ol>\n<h2><strong>Regulating Mechanism (Draft Amendments to the IT Rules, 2021)<\/strong><\/h2>\n<p>The government earlier treated the existing framework as adequate to deal with synthetic media. 
<strong>It has now proposed draft amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021.<\/strong> The aim is to create a clear <strong>AI labelling framework.<\/strong><\/p>\n<h2><strong>Key provisions of the draft amendments to the IT Rules, 2021<\/strong><\/h2>\n<ol>\n<li><strong>Enhanced obligations for significant social media intermediaries (SSMIs):<\/strong> The draft requires SSMIs to:<\/li>\n<\/ol>\n<ul>\n<li>Obtain a <strong>user declaration<\/strong> on whether uploaded information is synthetically generated;<\/li>\n<li>Deploy <strong>reasonable and proportionate technical measures to verify<\/strong> such declarations;<\/li>\n<li>Ensure that synthetically generated information is <strong>clearly labelled or accompanied<\/strong> by a notice indicating the same; and<\/li>\n<li>Use a <strong>label or identifier that enables immediate identification of the content<\/strong> as synthetically generated information.<\/li>\n<li>The rule further <strong>prohibits intermediaries from modifying, suppressing, or removing<\/strong> such labels or identifiers.<\/li>\n<\/ul>\n<ol start=\"2\">\n<li><strong>Minimum label size and duration: <\/strong>The draft requires that labels cover <strong>at least 10% of the visual area of synthetic videos<\/strong>. For audio, labels must cover <strong>at least 10% of the initial duration of synthetic clips<\/strong>. This tries to ensure that the label is prominent and not hidden like fine print.<\/li>\n<li><strong>Due diligence focused on large platforms: <\/strong>The primary obligations fall on <strong>Significant Social Media Intermediaries<\/strong>, which host large user bases and can amplify harmful synthetic media at scale. 
This reflects the view that bigger platforms carry higher responsibility.<\/li>\n<\/ol>\n<h2><strong>Major Concerns Related to the Draft Amendments to the IT Rules, 2021<\/strong><\/h2>\n<ol>\n<li><strong>Broad and unclear scope of synthetic media: <\/strong>The definition of synthetic media covers any content that is artificially or algorithmically created or modified. This makes it hard to separate everyday edits or computer-generated imagery from content that is actually harmful or misleading, even though <strong>not all synthetic media is problematic.<\/strong><\/li>\n<li><strong>Rigid 10% labelling rule may not work in practice: <\/strong>The rule that labels must cover <strong>10% of the visual area or 10% of the initial audio duration<\/strong> may not meet the <strong>reasonable person test.<\/strong> Short disclaimers in longer clips can be ignored like fine print, and long disclaimers may overwhelm users instead of helping them.<\/li>\n<li><strong>Unclear treatment of mixed media formats: <\/strong>The framework does not clearly deal with <strong>mixed media<\/strong>, such as real visuals combined with cloned or synthetic audio. It is not clear how the 10% rule will apply in such cases, which creates confusion for both platforms and creators.<\/li>\n<li><strong>Unreliable technical markers like watermarks: <\/strong>Watermarks added by AI companies are easy to remove. Soon after a major text-to-video tool promised watermarking of synthetic videos, other tools appeared that could wipe these markings. This makes sole reliance on watermarks a weak safeguard.<\/li>\n<li><strong>Limited effectiveness of current detection and labelling tools: <\/strong>Synthetic media is multiplying faster than verification tools can keep up. 
Platforms face difficulty in detecting AI-generated or algorithmically created content, and <strong>third-party detection tools are only as good as their training and accuracy.<\/strong> An audit of 516 AI-generated posts found that <strong>only 30% were correctly flagged<\/strong>, and even the best-performing platform labelled just about <strong>55%<\/strong> of such content.<\/li>\n<li><strong>Gaps in content provenance and platform practices: <\/strong>Many platforms follow <strong>Coalition for Content Provenance and Authenticity (C2PA) standards<\/strong> to track content origin, but these standards do not always result in consistent labelling.<\/li>\n<\/ol>\n<h2><strong>Way forward<\/strong><\/h2>\n<ol>\n<li><strong>Fine-tune categories and standards: <\/strong>Develop <strong>clear, precise standards for different types of synthetic media<\/strong>. Use a <strong>tiered labelling system<\/strong> that separates <strong>fully AI-generated, AI-assisted and AI-altered content<\/strong>, instead of relying on one generic label.<\/li>\n<li><strong>Extend duties to influential creators: <\/strong>Make <strong>creators above a certain follower threshold disclose their use of AI<\/strong> in content creation. Encourage <strong>voluntary self-labelling<\/strong> among smaller creators to build a basic culture of transparency.<\/li>\n<li><strong>Adopt graded compliance: <\/strong>Link <strong>stricter obligations to higher reach and influence<\/strong>. 
Professional creators and big accounts should follow stronger labelling and disclosure norms to <strong>maintain public trust<\/strong> and adapt to changing regulation.<\/li>\n<li><strong>Improve detection systems with external tools: <\/strong>Strengthen platform capacity to identify synthetic media by using <strong>specialised third-party detection tools<\/strong>, and regularly improve them based on <strong>training quality and accuracy levels<\/strong>.<\/li>\n<li><strong>Use independent auditors for high-risk content: <\/strong>In cases of <strong>harmful, fraudulent or misleading synthetic media<\/strong>, rely on <strong>independent information verifiers and auditors<\/strong>.<\/li>\n<\/ol>\n<h2><strong>Conclusion<\/strong><\/h2>\n<p>AI labelling rules are emerging because <strong>synthetic media is widespread, hard to detect and sometimes highly harmful.<\/strong> Draft IT rules push large platforms to label such content and verify user declarations, while graded compliance can involve creators. 
Through a <strong>multi-stakeholder effort<\/strong> combining stronger standards, tiered labels, better detection tools and support from independent auditors, <strong>users can receive clearer signals on what is real and what is synthetic<\/strong> and face fewer risks online.<\/p>\n<p><strong>Question for practice:<\/strong><\/p>\n<p>Examine the effectiveness of India\u2019s proposed AI labelling framework in addressing the risks posed by synthetic media.<\/p>\n<p><strong>Source<\/strong>: <a href=\"https:\/\/www.thehindu.com\/opinion\/op-ed\/fine-tune-the-ai-labelling-regulations-framework\/article70272172.ece#:~:text=The%20amendments%20mandate%20that%20large,require%20engagement%20across%20multiple%20stakeholders.\"><strong>The Hindu<\/strong><\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>UPSC Syllabus Topic: GS Paper 3 - Awareness in the fields of IT, Space, Computers, robotics, nano-technology, bio-technology Introduction Near-perfect AI videos and audio now appear next to real content, so users struggle to trust what they see and hear. 
A deepfake of the Finance Minister promoting an investment scheme and causing a large financial loss shows&hellip; <a class=\"more-link\" href=\"https:\/\/forumias.com\/blog\/ai-labelling-regulations-framework\/\">Continue reading <span class=\"screen-reader-text\">AI Labelling Regulations Framework<\/span><\/a><\/p>\n","protected":false},"author":10320,"featured_media":350151,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"jetpack_post_was_ever_published":false,"footnotes":""},"categories":[1230],"tags":[216,242,10498],"class_list":["post-349888","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-9-pm-daily-articles","tag-gs-paper-3","tag-science-and-technology","tag-the-hindu","entry"],"jetpack_featured_media_url":"https:\/\/i0.wp.com\/forumias.com\/blog\/wp-content\/uploads\/2025\/11\/AI-Labelling-Regulations-Framework.png?fit=1280%2C850&ssl=1","views":"","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/forumias.com\/blog\/wp-json\/wp\/v2\/posts\/349888","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/forumias.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/forumias.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/forumias.com\/blog\/wp-json\/wp\/v2\/users\/10320"}],"replies":[{"embeddable":true,"href":"https:\/\/forumias.com\/blog\/wp-json\/wp\/v2\/comments?post=349888"}],"version-history":[{"count":0,"href":"https:\/\/forumias.com\/blog\/wp-json\/wp\/v2\/posts\/349888\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/forumias.com\/blog\/wp-json\/wp\/v2\/media\/350151"}],"wp:attachment":[{"href":"https:\/\/forumias.com\/blog\/wp-json\/wp\/v2\/media?parent=349888"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/forumias.com\/blog\/wp-json\/wp\/v2\/categories?post=349888"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/
\/forumias.com\/blog\/wp-json\/wp\/v2\/tags?post=349888"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}