[Answered] Use of AI is not just about the routine application of digital technology in the service delivery process. It is as much about multifarious interactions for ensuring transparency and accountability. In this context, evaluate the role of the 'Interactive Service Model' of AI governance.

Introduction

Economic Survey 2025-26 and Budget 2026-27 move beyond treating AI as a prestige technology, instead framing it as a structural pillar of growth. NITI Aayog's AI for All report advocates governance that prioritises fairness, equity, and accountability to prevent what critics call algorithmic colonialism.

The Interactive Service Model

  1. The Interactive Service Model reimagines AI governance as an ongoing, multi-stakeholder service rather than a static technocratic exercise.
  2. It treats governance as a dynamic interaction among citizens, civil society, independent researchers, academia, private developers and the state.
  3. Core elements include accessible reporting platforms, open datasets (AIKosha), community-led audits, and capacity-building programmes such as iGOT Karmayogi’s 176 AI courses (over 72.99 lakh enrolments and 53.79 lakh completions by early 2026).
  4. This model moves beyond black-box opacity by enabling real-time feedback loops and upstream scrutiny of dataset selection, objective functions and harm thresholds.
  5. The Interactive Service Model thus pierces the "social black box": the upstream commercial and strategic decisions that embed biases into systems before deployment.

Role in Ensuring Transparency and Accountability

  1. Explainable AI (XAI): As per the India AI Governance Guidelines (Nov 2025), black-box models are no longer acceptable in public administration. The model requires systems to provide audit logs and interpretable explanations for high-stakes decisions (e.g., welfare eligibility).
  2. Graded Liability: Budget 2026-27 and MeitY notifications introduced a phased regulatory approach. Accountability is assigned based on the level of risk and the function performed, ensuring that developers and deployers are held responsible for systemic biases.
  3. Democratic Oversight: The India-AI Impact Summit 2026 emphasized that leadership in AI is not just about compute, but about trust. Interactive governance includes Citizen Assemblies and multi-stakeholder working groups (like the Safe & Trusted AI Working Group) to audit upstream algorithmic choices.
  4. Community Audits: Citizens and civil society stress-test models in local linguistic, cultural and regional contexts. For example, Indic-language sovereign models such as Sarvam AI have outperformed frontier models on document understanding.
  5. Deliberative Oversight: Public input via AIKosha's sandbox and iGOT literacy programmes democratises knowledge, allowing detection of harms overlooked by developers.
  6. Accountability Mechanisms: Mandatory algorithmic impact assessments with public disclosure shift liability from voluntary corporate guardrails to enforceable standards, aligning AI with constitutional values of equality (Article 14) and dignity (Article 21).

Mitigating Systemic Inequalities and Ensuring Just Outcomes

Purely technocratic governance risks deepening divides by automating exclusion in labour markets, education and finance. The Interactive Service Model counters this through participatory mechanisms:

  1. Equity Gains: Community input reduces linguistic and caste biases in welfare algorithms, preventing automated exclusion of marginalised groups.
  2. Democratic Resilience: Real-world audits strengthen transparency, countering disinformation and protecting electoral integrity.
  3. Economic Justice: By involving end-users early, the model ensures AI solutions (30 India-specific applications under IndiaAI Mission in agriculture, health, climate) address public needs rather than narrow institutional priorities. Global evidence from GPAI 2025-26 pilots shows participatory models reduce bias by 15-25% in high-stakes domains.

Way Forward

  1. Establish a National AI Regulatory Authority with mandatory citizen and civil-society representation on its board.
  2. Mandate public algorithmic impact assessments for all high-risk deployments with open consultation periods.
  3. Expand iGOT and AIKosha into nationwide "AI Citizenship" programmes reaching every district.
  4. Institutionalise community-led audits for sovereign models and public-sector AI applications.
  5. Align IndiaAI Mission guidelines with plurilateral standards (REAIM 2026) emphasising upstream democratic scrutiny and human-in-the-loop safeguards.

Conclusion

As President Murmu noted in 2026, "Technology must serve humanity, not lead it." Like Dr. Ambedkar's idea of "associated living", participatory AI governance ensures that progress remains anchored in democratic equity.
