India’s AI Guidelines for Tech Regulation

UPSC Syllabus Topic: GS Paper 2 – Government policies and interventions for development in various sectors and issues arising out of their design and implementation.

Introduction

India has unveiled governance guidelines for Artificial Intelligence to balance innovation with accountability and growth with safety. The approach prefers agile, sector-specific regulation over an immediate new AI law. It proposes an India-specific risk framework, AI incident database, content authentication for deepfakes, and techno-legal safeguards embedded in system design, ahead of the India–AI Impact Summit 2026.

Need for India AI Governance Guidelines

  1. Harnessing AI while limiting harm: India aims to use AI for inclusive development and competitiveness while managing risks to people and society. AI use, including of large language models, is expanding fast, demanding clarity on responsibility, safety research, and risk classification.
  2. Avoiding premature over-regulation: The stance is not to tighten rules immediately, but to let the innovation economy flourish while preparing guardrails such as risk assessment, voluntary frameworks, and grievance mechanisms.
  3. Deepfake and content authenticity challenge: Synthetically generated images, videos, and audio require content authentication. Draft amendments to IT Rules propose declarations by uploaders, platform verification, and visible labels; non-compliant platforms risk losing safe-harbour.
  4. Public-sector exposure risks: There are privacy and inference risks when officials use AI systems. At scale, prompts may reveal priorities or patterns. There is debate on protecting official systems from foreign AI services and on potential uses of anonymised mass data by global firms.

Guiding Principles of the AI Governance Guidelines

  1. “Do No Harm” with flexible sandboxes: The central ethic is “Do No Harm.” Innovation should occur in sandboxes, with risk mitigation built into a flexible, adaptive governance system.
  2. People-centric and law-first approach: Policy remains human-centric. It relies on existing laws—notably the IT Act and the Digital Personal Data Protection Act—and fills gaps through targeted amendments rather than creating a standalone AI statute now.
  3. Seven guiding principles: Seven principles, or sutras, adapted from the RBI’s FREE-AI Committee report guide the overall approach. They have been tailored for application across sectors and aligned with national priorities.

  • Trust is the Foundation: Without trust, innovation and adoption will stagnate.
  • People First: Human-centric design, human oversight, and human empowerment.
  • Innovation over Restraint: All other things being equal, responsible innovation should be prioritised over cautionary restraint.
  • Fairness & Equity: Promote inclusive development and avoid discrimination.
  • Accountability: Clear allocation of responsibility and enforcement of regulations.
  • Understandable by Design: Provide disclosures and explanations that can be understood by the intended user and regulators.
  • Safety, Resilience & Sustainability: Safe, secure, and robust systems that are able to withstand systemic shocks and are environmentally sustainable.

Key Issues in AI Governance in India

The guidelines identify key issues in AI governance from India’s perspective and make recommendations across six pillars:

  1. Infrastructure: Enable innovation and adoption of AI by expanding access to foundational resources such as data and compute, attracting investments, and leveraging the power of digital public infrastructure for scale, impact, and inclusion.
  2. Capacity Building: Initiate education, skilling, and training programmes to empower people, build trust, and increase awareness about the risks and opportunities of AI.
  3. Policy & Regulation: Adopt balanced, agile, and flexible frameworks that support innovation and mitigate the risks of AI. Review current laws, identify regulatory gaps in relation to AI systems, and address them with targeted amendments.
  4. Risk Mitigation:
  • Develop an India-specific risk assessment framework that reflects real-world evidence of harm.
  • Encourage compliance through voluntary measures supported by techno-legal solutions as appropriate.
  • Additional obligations for risk mitigation may apply in specific contexts, e.g., in relation to sensitive applications or to protect vulnerable groups.
  5. Accountability:
  • Adopt a graded liability system based on the function performed, level of risk, and whether due diligence was observed.
  • Applicable laws should be enforced, while guidelines can assist organisations in meeting their obligations. Greater transparency is required about how different actors in the AI value chain operate and comply with their legal obligations.
  6. Institutions:
  • Adopt a whole-of-government approach where ministries, sectoral regulators, and other public bodies work together to develop and implement AI governance frameworks.
  • An AI Governance Group (AIGG) should be set up, supported by a Technology & Policy Expert Committee (TPEC).
  • The AI Safety Institute (AISI) should be resourced to provide technical expertise on trust and safety issues, while sector regulators continue to exercise enforcement powers.

Action Plan

The Action Plan identifies outcomes mapped to short, medium, and long-term timelines.

Short-term:
• Establish key governance institutions
• Develop India-specific risk frameworks
• Adopt voluntary commitments
• Suggest legal amendments
• Develop clear liability regimes
• Expand access to infrastructure
• Launch awareness programmes
• Increase access to AI safety tools

Medium-term:
• Publish common standards
• Amend laws and regulations
• Operationalise AI incident systems
• Pilot regulatory sandboxes
• Expand integration of DPI with AI

Long-term:
• Continue ongoing engagements (capacity building, standard setting, access and adoption, etc.)
• Review and update governance frameworks to ensure sustainability of the digital ecosystem
• Draft new laws based on emerging risks and capabilities

Conclusion

India’s AI governance path is people-first and risk-based, anchored in “Do No Harm.” It relies on agile, sector-specific updates to existing laws, prioritises content authentication against deepfakes, and operationalises an India-specific risk framework with an incident database. With graded liability, AIGG–TPEC–AISI coordination, subsidised compute/datasets (e.g., AIKosh), DPI integration, and capacity building, India can scale trustworthy, inclusive AI at speed and with accountability.

Question for practice

Examine how India’s AI Governance Guidelines aim to balance innovation with accountability while addressing key risks associated with artificial intelligence.

Source: Indian Express
