Centre Revises IT Rules to Address AI-Based Content


Source: This post, “Centre Revises IT Rules to Address AI-Based Content”, is based on the article “Take down deepfakes within two hours: India’s new IT Rules”, published in The Hindu on 12th February 2026.

UPSC Syllabus: GS Paper-2- Governance

Context: The Government of India has notified the amended Information Technology (IT) Rules, 2026 to strengthen regulation of social media intermediaries and curb the misuse of Artificial Intelligence, particularly deepfakes and synthetic content. The amendments introduce stricter compliance timelines and revised obligations regarding AI-generated content.

Key Features of the Amended IT Rules, 2026

  1. Regulation of AI-Generated Content
     1. Social media intermediaries are required to label AI-generated or modified content only when such content appears real, authentic, or true.
     2. The labelling requirement applies when the content depicts an individual or event in a manner indistinguishable from a natural person or real-world event.
     3. The earlier proposal, which would have required a visible watermark covering 10% of the display area on nearly all AI-generated content, has been dropped.
     4. Routine or good-faith editing, such as filters, formatting, colour correction, compression, transcription, or technical enhancement, is exempt from the labelling requirement.
  2. Stricter Takedown Timelines
     1. Platforms must remove non-consensual sexual imagery, including AI-generated deepfakes, within two hours of receiving a complaint.
     2. Intermediaries must remove other unlawful content within three hours of a user complaint or a government or court order.
     3. Complaints related to defamation, harassment, or other legal violations must be resolved within thirty-six hours.
     4. Grievance Redressal Officers must deliver a final decision on user complaints within seven days.
  3. Enhanced User Awareness
     1. Platforms are required to inform users about privacy policies, prohibited content, and grievance redressal mechanisms at least once every three months.
     2. This replaces the earlier annual disclosure requirement.
  4. Enforcement and Liability
     1. Non-compliance with the revised timelines may expose intermediaries to criminal liability under existing law.
     2. All entities classified as intermediaries must comply with the amended rules from 20 February 2026, a ten-day transition window.
     3. Intermediaries retain safe harbour protection only if they adhere to the prescribed compliance framework.

Significance of the Amendments

  1. The amendments seek to address the growing threat of deepfakes, misinformation, and non-consensual sexual imagery enabled by AI.
  2. The stricter timelines recognise the rapid generation and dissemination capacity of AI tools.
  3. The removal of blanket watermarking reflects responsiveness to industry concerns and supports ease of doing business.
  4. Faster grievance redressal enhances digital safety and strengthens the accountability of platforms.

Concerns and Challenges

  1. The compressed timelines may impose operational and technological burdens, especially on smaller intermediaries.
  2. The broad definition of intermediary may create ambiguity for AI-based startups and emerging digital businesses.
  3. There is a risk of over-censorship as platforms may adopt precautionary takedowns to avoid liability.
  4. The ten-day compliance window may be insufficient for restructuring internal moderation systems and legal workflows.

Way Forward

  1. The government should consider issuing detailed implementation guidelines to reduce ambiguity in the definition of intermediary and AI-generated content.
  2. A differentiated regulatory framework may be developed for various categories of digital businesses, particularly standalone AI tool providers.
  3. Capacity-building support and transitional flexibility may be provided to smaller platforms to ensure uniform compliance.
  4. Periodic stakeholder consultations with industry, civil society, and technical experts should be institutionalised to refine enforcement mechanisms.
  5. Safeguards must be strengthened to prevent arbitrary takedowns and protect freedom of speech in line with constitutional principles.
  6. Investment in AI-based automated moderation tools and digital forensic capabilities should be encouraged to improve efficiency and accuracy in content regulation.

Conclusion: The amended IT Rules, 2026, mark an important step in India’s AI governance journey by tightening accountability mechanisms while moderating earlier regulatory excesses. Their long-term effectiveness will depend on balanced enforcement, clarity in classification, and sustained dialogue between the state and digital ecosystem stakeholders.

Question: Discuss the key features of the amended Information Technology (IT) Rules, 2026, analyse their implications, and suggest a way forward.

