US and China Decline Joint Declaration on Responsible AI in Military Use



On February 5, 2026, a historic joint declaration on the responsible use of artificial intelligence (AI) in military applications was signed by 60 nations at an international summit on AI governance. However, the absence of signatures from the United States and China—the world's two leading AI powers—has sparked widespread concern among policymakers, technologists, and global security experts. This divergence underscores the growing challenges in establishing unified international norms for AI in warfare and raises critical questions about the future of military technology.

The Declaration: A Step Toward Ethical AI in Warfare

The joint declaration, initiated during a United Nations-led conference on AI ethics and security, aims to set guidelines for the ethical development and deployment of AI in military contexts. Key points of the pledge include:

  • Ensuring human oversight in AI-driven military decisions.
  • Prohibiting the use of autonomous weapons systems without accountability mechanisms.
  • Promoting transparency in AI military applications to prevent misuse.
  • Encouraging international collaboration to mitigate risks of AI escalation in conflicts.

Countries such as the United Kingdom, Germany, Japan, and Canada were among the 60 signatories, representing a diverse coalition of developed and developing nations. The declaration is seen as a critical first step toward preventing an unchecked AI arms race and ensuring that military AI adheres to humanitarian principles.

Why the US and China Declined to Sign

Neither the US nor China has issued an official statement explaining its absence from the list of signatories. However, experts speculate that both nations are wary of constraints on their military AI programs, which are central to their national security strategies.

Dr. Emily Harper, a senior fellow at the Center for AI and Global Security, noted, “The US and China are heavily invested in AI for defense purposes, from autonomous drones to predictive analytics for battlefield scenarios. Signing a declaration that imposes strict oversight could limit their strategic flexibility in a highly competitive geopolitical landscape.”

For the United States, concerns may also stem from ongoing debates within the Department of Defense about balancing innovation with ethical considerations. A 2025 Pentagon report highlighted that while the US supports responsible AI, it prioritizes maintaining a technological edge over adversaries—namely China. Similarly, China’s rapid advancements in AI, including facial recognition and autonomous systems, suggest a reluctance to agree to international oversight that could slow its military modernization efforts.

The Implications of Their Absence

The decision by the US and China to opt out of the declaration has far-reaching implications for global security and AI governance. Without the participation of the two largest AI powers, the effectiveness of the pledge is significantly undermined. According to a 2025 report by the Stockholm International Peace Research Institute (SIPRI), the US and China collectively account for over 60% of global military AI research and development spending. Their absence could create a fragmented regulatory landscape, where smaller nations adhere to ethical guidelines while superpowers operate under their own rules.

Moreover, this development risks exacerbating tensions in regions already grappling with AI-driven military technologies. For instance, in the South China Sea, where both the US and China deploy AI-enhanced surveillance systems, the lack of agreed-upon norms could lead to misunderstandings or unintended escalations.

A Missed Opportunity for Global Leadership

Many experts view the refusal to sign as a missed opportunity for the US and China to demonstrate leadership in shaping responsible AI norms. “This was a chance for both nations to set a precedent for ethical AI use in military contexts,” said Dr. Rajesh Kumar, a professor of international relations at Stanford University. “Instead, their absence sends a message that national interests trump global cooperation.”

The decision also complicates efforts to address the risks of lethal autonomous weapons systems (LAWS), often dubbed “killer robots.” Advocacy groups like the Campaign to Stop Killer Robots have long warned about the dangers of AI systems that can select and engage targets without human intervention. Without the buy-in of major powers, international bans or restrictions on such weapons remain elusive.

The Broader Context: AI in Military Applications

The use of AI in military applications is not new, but its scope and sophistication have grown exponentially in recent years. From predictive maintenance of equipment to real-time battlefield decision-making, AI is transforming modern warfare. According to a 2025 report by McKinsey, global spending on military AI is projected to reach $25 billion by 2030, with the US and China leading the charge.

In the US, projects like the Defense Advanced Research Projects Agency's (DARPA) AI Next initiative aim to integrate AI into every aspect of military operations. Meanwhile, China's 14th Five-Year Plan (2021-2025) explicitly prioritizes AI as a cornerstone of its military modernization, with state-backed companies like Huawei playing a significant role in development.

Ethical and Security Concerns

While AI offers undeniable advantages, it also poses significant ethical and security risks. Autonomous weapons could lower the threshold for conflict by removing human judgment from life-and-death decisions. Additionally, AI systems are vulnerable to biases, hacking, and errors, which could lead to catastrophic consequences in military contexts.

A 2024 incident in a simulated US military exercise highlighted these dangers when an AI-controlled drone misidentified a friendly target as hostile, resulting in a near-miss scenario. Such incidents underscore the urgent need for robust guidelines—guidelines that the recent declaration aimed to establish.

What Happens Next?

The refusal of the US and China to join the declaration does not spell the end of international efforts to regulate military AI. The 60 signatory nations plan to establish a working group to monitor compliance and encourage additional countries to join the pledge. There is also hope that sustained diplomatic pressure could eventually bring the US and China to the table.

In the meantime, smaller nations and advocacy groups are stepping up. The European Union, for instance, has proposed a parallel framework for AI ethics in defense, which could serve as a model for broader adoption. Additionally, tech companies like Google and Microsoft, which have faced scrutiny for their involvement in military AI projects, have publicly endorsed responsible AI principles and may play a role in shaping future norms.

A Call for Dialogue

Ultimately, the path forward requires dialogue and compromise. As AI continues to reshape the battlefield, the stakes for global security are higher than ever. “We cannot afford a world where military AI operates without rules,” warned UN Secretary-General António Guterres during the February 2026 summit. “The US and China must engage with the international community to ensure that technology serves humanity, not destruction.”

For now, the world watches as the two AI superpowers navigate their own paths. Will they prioritize national security over global cooperation, or will they eventually align with the growing call for responsible AI in military use? Only time will tell.

Conclusion

The joint declaration on responsible AI in military applications marks a significant, though incomplete, milestone in the quest for ethical technology governance. While 60 nations have committed to a safer future, the absence of the US and China casts a long shadow over these efforts. As military AI continues to advance, the need for international consensus becomes ever more urgent. The decisions made—or avoided—today will shape the security landscape for decades to come.