Introduction
Artificial Intelligence (AI) is rapidly transforming warfare, with over 70 countries reportedly investing in military AI (SIPRI, 2024). Yet global consensus remains elusive, as reflected in declining endorsements of the REAIM Summit declarations on military AI governance.
Strategic Imperative for Global Guardrails on Military AI
- AI as a Dual-Use, Disruptive Technology: Military AI is inherently dual-use, powering logistics, surveillance, and predictive maintenance while also enabling Lethal Autonomous Weapons Systems (LAWS). This duality complicates arms-control verification: unlike nuclear weapons programs, AI development overlaps with civilian R&D ecosystems, leaving no discrete facilities or materials to inspect. Technologies perceived as game-changing, like nuclear fission in the 1950s, have historically resisted regulation. AI now holds similar transformative potential in Intelligence, Surveillance and Reconnaissance (ISR), cyber operations, drone swarms, and algorithmic command systems.
- Speed-of-War and Escalation Risks: AI compresses decision-making timelines into machine-speed warfare. Automated threat-detection systems along contested borders could escalate skirmishes before political leadership can intervene. The 2010 Flash Crash in financial markets illustrates how coupled algorithms can cascade; a stylized simulation after this list sketches the dynamic. In warfare, such cascading miscalculations could prove catastrophic, especially in nuclear-armed regions.
- Accountability and Legal Vacuum: International Humanitarian Law (IHL) rests on the principles of distinction, proportionality, and accountability. AI systems, however, often function as opaque black boxes, raising the question of who is legally responsible for unintended civilian harm: the programmer, the commander, or the manufacturer? Negotiations under the UN Convention on Certain Conventional Weapons (CCW) have deadlocked over the very definition of LAWS.
- Proliferation and Non-State Actors: Unlike nuclear technology, AI code is cheaply replicated and diffused. The risk of algorithmic proliferation to non-state actors, terrorist groups, or rogue militias heightens the urgency of guardrails.
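To make the cascade risk concrete, the toy simulation below couples two hypothetical automated alert systems, each setting its posture as a multiple of the other's. The gain factor, alert ceiling, and update rule are all invented for illustration; no real system works this crudely. The point is only that a feedback gain above 1 drives both sides to the ceiling within a handful of machine-speed cycles, with no natural pause for human review.

```python
# Toy escalation model: two automated alert systems reacting to each
# other's posture. All numbers (gain, ceiling, cycle count) are
# hypothetical, chosen purely to illustrate feedback at machine speed.

CEILING = 100.0  # hypothetical maximum alert level


def next_alert(adversary_alert: float, gain: float = 1.5) -> float:
    """Crude feedback rule: respond at `gain` times the adversary's
    current posture, capped at the ceiling."""
    return min(CEILING, gain * adversary_alert)


a = b = 1.0  # both sides start at a low-level alert
for cycle in range(20):  # each cycle is one machine-speed decision loop
    a, b = next_alert(b), next_alert(a)  # simultaneous automated updates
    print(f"cycle {cycle:2d}: side A = {a:6.2f}, side B = {b:6.2f}")
    if a >= CEILING and b >= CEILING:
        print("ceiling reached before any human could plausibly intervene")
        break
```

With the gain below 1, the same loop damps out instead of exploding: the escalation is a property of the coupling, not of either system alone. This is one reason proposals for human-in-the-loop control focus on breaking the automated feedback rather than perfecting individual systems.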
Evaluating India’s Non-Binding Framework Proposal
India declined to endorse the REAIM Blueprint for Action declaration, reflecting strategic caution. Its stance rests on three pillars:
- Technological Sovereignty and Strategic Autonomy: India operates in a volatile neighbourhood with two nuclear-armed adversaries. Binding restrictions could curtail its emerging capabilities under initiatives such as the IndiaAI Mission and defence AI integration programs. A legally binding regime risks becoming an AI Non-Proliferation Treaty, freezing existing hierarchies between AI haves and have-nots. India seeks to avoid premature constraints while building indigenous compute infrastructure and sovereign datasets.
- Accountability-Rooted Normative Leadership: India advocates a principle-based, non-binding framework emphasizing:
  - Human-in-the-loop control for lethal systems;
  - Separation of AI from nuclear command and control;
  - Voluntary transparency and confidence-building measures.
  This mirrors India's historical nuclear diplomacy: supporting peaceful uses while preserving sovereign options.
- Gradual Norm Development: Given the limited battlefield deployment of LAWS, India views a binding treaty as premature. Instead, it proposes developing:
  - A risk hierarchy of AI military applications;
  - Voluntary incident-reporting mechanisms;
  - Shared best practices for testing and validation.
  Such soft-law instruments could crystallize into customary norms over time.
Balancing Sovereignty with Ethical Governance
- India’s approach reflects Strategic Autonomy 2.0 — participating in global governance without sacrificing national security.
- It supports responsible AI discourse at global summits.
- It refrains from rigid commitments that may constrain capability development.
- It positions itself as a bridge between technologically advanced states and the Global South.
- This mirrors its role in nuclear diplomacy during the Cold War — advocating cooperation while building national capacity.
Way Forward
- Institutionalize mandatory human oversight in military AI doctrine.
- Develop national AI testing and certification standards.
- Promote a Global AI Risk Registry under UN auspices.
- Engage in Track-II diplomacy to build consensus on LAWS definitions.
- Guardrails must evolve alongside technology, not lag behind it.
Conclusion
As President Dr. A.P.J. Abdul Kalam reminded us in India 2020, strength must be coupled with wisdom. India's accountability-driven framework seeks power with restraint, ensuring that technology serves humanity rather than destabilizing it.


