AI and the National Security Calculus


UPSC Syllabus: GS Paper 3 – Science and Technology: developments and their applications and effects in everyday life.

Introduction

Recent tensions around Artificial Intelligence show how AI is becoming part of global security competition. The American AI lab Anthropic has accused three Chinese AI labs, DeepSeek, MoonshotAI, and MiniMax, of distilling its frontier models. At the same time, AI models developed by American companies have reportedly been used by the U.S. military to speed up the “kill chain” from target identification to strike. These developments raise major questions about AI diffusion, military use, and global governance.

AI Competition and the Distillation Controversy

  1. Industrial-scale model distillation: Distillation means training a weaker AI model by learning from the outputs of a stronger model. Anthropic claims Chinese actors conducted 16 million exchanges with its Claude model through about 24,000 fraudulent accounts, violating access restrictions.
  2. Use of deceptive techniques: The actors behind the distillation activity reportedly used sophisticated methods to conceal their identity and intent while extracting model outputs. This indicates an organised effort to reproduce advanced AI capabilities.
  3. National security framing: Anthropic wants DeepSeek, MoonshotAI, and MiniMax to be treated as national security threats. This reflects how AI research competition is now being framed in security terms.
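The distillation process described above can be illustrated with a toy sketch. Here a hypothetical "teacher" model stands in for a frontier model queried through its API, and a simple logistic "student" is trained only on the teacher's soft outputs; all names and parameters below are illustrative assumptions, not details from the reported incident.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical "teacher": a black box that returns a soft probability for
# class 1 given a scalar query x. In real distillation this would be a
# frontier model accessed only through its API.
def teacher(x):
    return sigmoid(3.0 * x - 1.0)

# Step 1: the distiller sends many queries and records the teacher's outputs
# (the analogue of the millions of exchanges described in the article).
rng = np.random.default_rng(0)
queries = rng.uniform(-2.0, 2.0, size=5000)
soft_labels = teacher(queries)

# Step 2: train a "student" (here a two-parameter logistic model) to imitate
# the teacher's soft outputs by minimising cross-entropy with plain gradient
# descent; for a fixed teacher this is equivalent to minimising the KL
# divergence from teacher to student up to a constant.
w, b = 0.0, 0.0
lr = 0.1
for _ in range(2000):
    p = sigmoid(w * queries + b)
    grad = p - soft_labels              # gradient of cross-entropy w.r.t. logit
    w -= lr * np.mean(grad * queries)
    b -= lr * np.mean(grad)

# The student recovers the teacher's behaviour (w ~ 3, b ~ -1) without ever
# seeing the teacher's weights or training data.
```

The point of the sketch is that nothing physical changes hands: query access alone is enough to clone behaviour, which is why access restrictions, rather than export of the model itself, are at the centre of the controversy.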

Limits of AI Containment Strategies

  1. AI as a dual-use general-purpose technology: Generative AI is often compared with nuclear technologies. However, it functions more like semiconductors, as it supports civilian uses while also having military applications.
  2. Private-sector driven innovation: Cutting-edge AI research happens mainly in private companies for civilian products. Governments do not control the full development process.
  3. AI models cannot be contained like nuclear materials: Nuclear non-proliferation works because fissile materials are rare and traceable. Mathematical AI models do not have such physical constraints.
  4. Evidence of technological workarounds: DeepSeek reportedly achieved comparable performance to frontier models at a fraction of the cost even after export controls. This shows restrictions cannot easily prevent technological progress.
  5. Limits of restriction-based control: Treating simple AI queries as equivalent to weapons proliferation reflects the weakness of containment strategies.

Military Use of AI and Limits of Corporate Guardrails

  1. Use of AI in military operations: AI models from American labs have reportedly been used by the U.S. military to accelerate the “kill chain” from target identification to legal approval and strike.
  2. Military applications of frontier AI models: Models from companies such as Anthropic, OpenAI, Google and xAI can support surveillance, cyberwarfare, and lethal autonomous weapons systems.
  3. Pressure on companies to support defence use: When Anthropic raised concerns about military uses of its technology, the Pentagon labelled it a “supply chain risk.” This designation is normally associated with foreign adversaries.
  4. Competitive pressure among companies: Rival firms may accept permissive defence contracts to secure government clients. OpenAI reportedly accepted such arrangements, indicating a race to the bottom in guardrails.
  5. Weakness of corporate safeguards: When governments demand military access, companies can be pressured, replaced, or overridden. Corporate guardrails therefore cannot ensure responsible use.

Market Power and Innovation Concerns

  1. Restrictions strengthen dominant firms: Input-based restrictions make it harder for competitors to challenge large U.S. companies even in civilian AI markets.
  2. Collateral damage to global innovation: These restrictions can weaken scientific collaboration, technological innovation, and economic development.
  3. Debate over intellectual property and fairness: Distillation is often described as industrial-scale intellectual property theft. However, frontier AI models themselves are trained on billions of web pages created by people who did not consent or receive compensation.
  4. Parallel extractive processes: Asking an AI model millions of questions and learning from its responses can be viewed as similar to training models on large public datasets.
  5. Coordinated industry response: AI firms whose models were distilled argue for coordinated action by the AI industry, cloud providers, and policymakers. Such coordination may further concentrate market power among a few companies.

Conclusion

Generative AI is likely to become part of military systems across countries. Corporate guardrails cannot ensure responsible use, because governments can pressure companies, replace them, or override their restrictions. Effective regulation therefore requires plurilateral commitments by states that guarantee meaningful human control over lethal decisions, prohibit mass civilian surveillance, and mandate auditable technical standards. Such commitments must apply universally to remain effective.

Question for practice:

Discuss how the growing use of Artificial Intelligence in military systems and global technological competition is reshaping the national security calculus.

Source: The Hindu
