[Answered] Critically examine the security risks of AI-integrated ‘kill chains’ in India’s border management. Evaluate the role of plurilateral governance in securing India’s strategic autonomy amidst corporate-led AI rivalries.

Introduction

As of March 2026, the Indian Army has moved from conceptualising AI to operationalising it through the 'Smartise the Kill Chain' roadmap. The Economic Survey 2025–26 highlights AI as a critical strategic technology, while the Union Budget 2026–27 prioritises defence AI and sovereign compute infrastructure, intensifying debates on AI-driven warfare and strategic autonomy.

Security Risks of AI-Integrated Kill Chains in Border Management

AI-driven compression of the sensor-to-shooter cycle to mere seconds along the Line of Actual Control (LAC) creates systemic vulnerabilities.

  1. Machine-Speed Escalation Trap: AI compresses the traditional military kill chain (find, fix, track, target, engage, assess) into seconds. Misidentified shadows or drone anomalies can trigger a kinetic response before human de-escalation is possible, especially in high-altitude, fog-prone terrain. Automation thus risks removing diplomatic buffers in crisis situations.
  2. Algorithmic Bias and Terrain Limitations: Foreign-trained models may fail in the high-altitude terrain and extreme weather typical of Himalayan battlefields, while bias in training data can lead to incorrect threat assessments, creating vulnerabilities in India's operational planning. For example, 2026 iDEX (Innovations for Defence Excellence) trials showed 12–18% false positives in snow-camouflage detection.
  3. Black-Box Accountability Gap: AI systems often function through opaque neural networks. When the Command Information and Decision Support System (CIDSS) fast-tracks targets, tracing war-crime liability becomes nearly impossible under international humanitarian law, creating challenges for compliance with the Geneva Conventions.
  4. Strategic Dependency on Foreign AI Ecosystems: Global AI development is dominated by private technology companies, and dependence on foreign platforms could expose India to algorithmic manipulation, software vulnerabilities, or geopolitical leverage. For example, reliance on U.S. platforms (such as Project Maven) or Chinese stacks exposes India to kill-switch vulnerabilities and coercion during crises.

Corporate-Led AI Rivalries and India’s Strategic Autonomy

The Anthropic-Pentagon standoff (February 2026) and the U.S. designation of Chinese AI labs as threats reveal corporate actors acting as proxies in great-power rivalry.

  1. India risks entrapment in the U.S.-China AI cold war if it remains dependent on foreign frontier models.
  2. Sovereign compute (national GPU clusters) and indigenous firms (Tata Elxsi, Bharat Forge) backed through iDEX aim to mitigate this dependence, but talent mobility and model-distillation techniques erode such controls.
  3. Corporate guardrails collapse under state pressure: the contrast between OpenAI's permissive military contracting and Anthropic's resistance illustrates the race-to-the-bottom dynamic.

Role of Plurilateral Governance in Safeguarding Strategic Autonomy

  1. International discussions increasingly emphasise human-on-the-loop or human-in-the-loop frameworks, under which AI recommends but does not autonomously execute lethal force. For example, UN Resolution 80/58 pushes a human-on-the-loop framework.
  2. Middle-power coalitions (e.g., India-Brazil-South Africa) can set 'Trustworthy Defence AI' benchmarks, bypassing P5 vetoes.
  3. Given geopolitical rivalry among major powers, universal treaties remain difficult; plurilateral coalitions of middle powers can therefore develop operational norms. For example, initiatives like the REAIM Summit seek consensus on responsible military AI use, enabling countries to shape standards outside great-power rivalries.
  4. India must pursue technological self-reliance. Institutions such as NITI Aayog have recommended sovereign AI infrastructure and domestic innovation ecosystems. Programmes like iDEX promote defence-technology startups and indigenous AI solutions.

Way Forward

  1. Operationalise the Seven Sutras of Indian AI Governance: a sovereign stack, mandatory human-in-the-loop control for lethal decisions, and auditable explainability for black-box systems.
  2. Expand iDEX to fund indigenous frontier-model training on classified LAC datasets.
  3. Lead plurilateral AI confidence-building measures on the LAC with China and Pakistan via the SCO.
  4. Integrate digital-sovereignty clauses into Quad and I2U2 technology cooperation.
  5. Mandate annual Algorithmic Impact Assessments for all defence AI deployments.

Conclusion

For India, AI in 2026 is a force multiplier but also a systemic risk. The path forward lies in the Seven Sutras of Indian AI Governance, which prioritise safety and accountability. By championing plurilateral governance, India can ensure that the kill chain does not become a chain of accidents, preserving its strategic autonomy in an era of automated geopolitics.
