US and China Opt Out of Global AI Military Pledge: What It Means

US and China Decline to Sign Responsible AI Military Pledge

On February 5, 2026, a historic joint declaration on the responsible use of artificial intelligence (AI) in military applications was signed by 60 nations at an international summit on AI governance. However, the absence of signatures from the United States and China—the world’s two leading AI superpowers—has cast a shadow over the agreement’s potential impact. This development has sparked intense debate about the future of AI in warfare and the challenges of establishing global norms for emerging technologies.

What Is the Joint Declaration on Responsible AI?

The joint declaration, spearheaded by the European Union and supported by nations across Asia, Africa, and the Americas, aims to set ethical boundaries for the development and deployment of AI in military contexts. Key principles of the pledge include:

  • Ensuring human oversight in AI-driven military decisions.
  • Preventing the proliferation of autonomous weapons systems without accountability.
  • Promoting transparency in AI military applications.
  • Committing to international dialogue on AI’s role in conflict.

The declaration is non-binding but serves as a framework for future treaties and regulations. According to Dr. Elena Martinez, a policy expert at the Global Institute for AI Ethics, “This pledge is a critical first step toward preventing an AI arms race that could destabilize global security. But without the US and China, its effectiveness is limited.”

Why Did the US and China Opt Out?

Neither the US nor China has issued an official statement explaining their decision to abstain from the declaration. However, analysts point to several likely reasons for their reluctance.

United States: Strategic Autonomy and National Security

The United States has long prioritized maintaining a technological edge in military capabilities. With significant investments in AI-driven defense systems—such as autonomous drones and predictive analytics for battlefield strategies—Washington may view international agreements as potential constraints on innovation. A 2025 report by the US Department of Defense highlighted that AI could contribute over $500 billion to military modernization by 2030, underscoring the stakes involved.

“The US likely sees this declaration as a risk to its strategic autonomy,” said James Carter, a defense policy analyst at the Center for Strategic Studies. “They’re not opposed to responsible AI in principle, but they want to define ‘responsible’ on their own terms.”

China: Sovereignty and Global Influence

China, similarly, has invested heavily in AI for military purposes, with initiatives like the “New Generation Artificial Intelligence Development Plan” aiming to make the country a global AI leader by 2030. Beijing’s hesitation to sign the pledge may stem from concerns over sovereignty and a desire to avoid external oversight of its military technologies.

Dr. Li Wei, a technology policy researcher at Peking University, noted, “China views AI as a cornerstone of its national security and economic growth. Joining a Western-led initiative could be perceived as ceding control over a critical domain.”

Implications of the Opt-Out

The decision by the US and China to opt out of the declaration has far-reaching consequences for global AI governance and military stability. Here are some key implications:

1. Risk of an AI Arms Race

Without the participation of the two largest AI powers, there is a heightened risk of an unchecked AI arms race. Both nations are already developing AI systems for military use, including autonomous weapons that can operate without human intervention. A 2025 study by the Stockholm International Peace Research Institute (SIPRI) warned that such technologies could lower the threshold for conflict if not regulated.

2. Fragmentation of Global AI Norms

The absence of the US and China could lead to a fragmented landscape of AI regulations, with different regions adopting conflicting standards. This lack of cohesion may hinder international cooperation on pressing issues like cybersecurity and the prevention of AI misuse in warfare.

3. Challenges for Smaller Nations

For the 60 nations that signed the pledge, the opt-out by major powers creates a power imbalance. Smaller countries may lack the resources to develop or regulate military AI on their own, leaving them vulnerable to the decisions of larger players. “We’re committed to ethical AI, but we can’t ignore the reality of geopolitics,” said a diplomat from a signatory nation, speaking anonymously.

What Are Experts Saying?

The global response to the US and China’s decision has been a mix of concern and cautious optimism. UN Secretary-General Maria Fernandez issued a statement urging all nations to prioritize dialogue on AI governance, saying, “AI has the potential to transform warfare in ways we cannot yet fully predict. We must work together to ensure it is a force for peace, not destruction.”

Meanwhile, tech industry leaders have called for private-sector involvement in shaping AI ethics. Sarah Thompson, CEO of AI Solutions Inc., emphasized, “Governments alone cannot solve this. Tech companies must step up to ensure AI is developed with accountability, regardless of national borders.”

The Broader Context: AI in Military Applications

The use of AI in military contexts is not a futuristic concept—it’s already here. From facial recognition systems used in surveillance to AI-powered logistics optimizing supply chains, militaries worldwide are integrating AI at an unprecedented pace. The US, for instance, has deployed AI algorithms to analyze satellite imagery for threat detection, while China has tested AI-driven swarm drones for coordinated operations.

However, the ethical dilemmas are immense. Autonomous weapons, often dubbed “killer robots,” raise questions about accountability: Who is responsible if an AI system makes a fatal error? A 2024 survey by the International Committee of the Red Cross found that 62% of respondents globally opposed the use of fully autonomous weapons in warfare.

What Happens Next?

The joint declaration, while a significant milestone, is only the beginning of a long road toward global consensus on military AI. Experts suggest several potential paths forward:

  • Bilateral Agreements: The US and China could negotiate separate agreements on AI use, focusing on mutual transparency and risk reduction.
  • UN-Led Initiatives: The United Nations could play a larger role in convening all stakeholders, including non-state actors like tech companies, to draft a binding treaty.
  • Regional Frameworks: Smaller coalitions of nations could develop regional AI norms, creating pressure for larger powers to align with ethical standards.

For now, the world watches as the US and China chart their own courses in the AI military landscape. Their decisions in the coming years will likely shape not only the future of warfare but also the broader trajectory of AI governance.

Conclusion: A Call for Global Cooperation

The decision by the US and China to opt out of the joint declaration on responsible AI use in military applications is a stark reminder of the complexities surrounding emerging technologies. While 60 nations have taken a stand for ethical AI, the absence of the world’s two AI giants raises critical questions about enforcement and accountability. As AI continues to reshape the battlefield, international cooperation remains the only viable path to ensuring that this powerful technology serves humanity rather than endangering it.

What are your thoughts on this development? Should the US and China reconsider their stance, or do they have valid reasons for opting out? Share your views in the comments below, and stay tuned to AI Source News for the latest updates on AI governance and military technology.