AI on the Battlefield

 

A person in tactical gear interacting with an artificial intelligence interface. Photo courtesy of The Netherlands Ministry of Foreign Affairs.

Introduction

In 2022, the AI race launched with the release of OpenAI’s ChatGPT. Now, four years later, OpenAI holds a narrow lead at best among rival corporations, and the race to build and integrate AI has grown from a battle between companies into a clash of nations. The AI race, once measured by aptitude tests or writing ability, is now gauged by far more dangerous benchmarks as AI becomes increasingly integrated into military and surveillance technology.

In the new race to weaponize AI, key players like the US and China have taken an early lead in adopting AI for defense. China’s AI benefits from unprecedented access to data but is constrained by limited computing power. Meanwhile, US AI models surge ahead thanks to superior chip technology but lag in adoption due to a scarcity of talent and issues with procurement.

Even as the US and China face these bottlenecks, new players are emerging that have rapidly begun adopting AI into their own militaries. Perhaps chief among these is Israel, whose economy has flourished with the growth of AI startups and now leads the Middle East in defensive AI applications. 

Another major competitor is India, which recently hosted the first-ever global AI summit in the Global South and has gained a reputation as a champion of equity in AI access. India’s enormous population makes the country a crucial asset for model training—a leverage point the nation has used to make strides in AI ethics.

India is joined in this endeavor by the EU, which has already created several frameworks for addressing AI, though these have mostly failed to meaningfully regulate military applications of AI. As new frameworks emerge, the EU could play a major role in shaping the future of how AI can be used in warfare, but only if it can wield the necessary leverage to bend other key players to its rules.

As the AI race rages on at a breakneck pace of exponential growth, regulation risks becoming an afterthought as the world’s greatest superpowers pour billions of dollars into developing the world’s smartest weapons. Historically, the victors of such arms races determine the rules, so the trillion-dollar question is: Who will come out on top?

Building Bridges: China’s Need for Stronger Diplomacy in the AI Race

Carolyn Zhao, BC ‘28

With leading numbers in artificial intelligence research, model release and adoption, and immense sources of capital, China and the US have emerged as the frontrunners of the evolving global AI competition. However, in contrast to the persistent myth of a single-lane race to the top, AI development is actually measured across interacting dimensions of relative strength and political goals. Though China trails the US in key AI infrastructure, including computing power and robust model algorithms, its economic strengths, deep talent pools, and distinct AI development strategy position it to become a dominant influence. As China seeks global AI leadership by 2030, it must focus on closing competitive gaps and engaging in diplomacy to smooth the political friction that could undercut its vision of helming a constructive AI ecosystem.

Unlike the subscription-based heavyweight US large language models (LLMs), which align with free-market capitalism, China is carving out a new market by propagating open-weight, user-adaptable models with paid services, and adoption rates are rising. Furthermore, while the US leans into artificial general intelligence (AGI), China strategically integrates AI into national development, including key industries and the military. With an AI development strategy that promises tangible results and has enjoyed early success, China is poised as a formidable competitor across the global supply chain.

However, while China possesses material strengths in massive resource scalability and state subsidies, computing power and algorithms remain two key bottlenecks in its long-term competition with the US. Currently, the US holds roughly 75% of global computing resources, compared to China’s roughly 15%, possesses more data centers than its leading competitors, and is home to NVIDIA’s cutting-edge chip lines. China, however, is steadily building toward computational self-sufficiency, leveraging its energy resources to prioritize Huawei’s Ascend chip line through domestic implementation and data center construction. Through targeted state subsidies and manufacturing capacity, China’s computational trajectory is stabilizing into genuine competition with the US.

Furthermore, China lags behind the US in algorithm strength. In 2025, while China led in the quantity of AI research output and patents, the US led in research influence, originating over 50% of the 100 most-cited AI papers globally from 2021 to 2023. The high research output indicates that China possesses vast talent pools, but Li Hang, Head of Research at ByteDance, argues that this talent must be nurtured into leadership to pull China ahead. China should therefore strengthen its undergraduate machine learning curriculum to improve rigor and to build leadership skills such as critical thinking. Though China currently lags in computing power and educational foundations, its strong state support, high manufacturing scalability, and promising talent suggest it can close those gaps.

Still, China’s goal of leading the AI race is futile without meaningful diplomacy with the US. Since 2023, China has vocally promoted multilateral AI cooperation and guardrail policies, but it has yet to form concrete cooperative measures with the US. By proposing resolutions like its Global AI Governance Initiative and cultivating support from the Global South through BRICS and the G77, China’s diplomacy appeals to a wide base and increases its soft power at the negotiating table. However, those promises are undercut by a lack of substantive cooperation: China has refused to support other state-initiated resolutions and has built deals designed to counteract US competition. Therefore, as China’s AI presence entrenches itself in economies worldwide, China will need to build bridges, not communication silos, to overcome friction from conflicting political paradigms and ensure that its AI development trajectory succeeds long-term. Going forward, China must pair its solution engineering with stronger commitments to cooperative AI diplomacy to establish the dominance it desires in the shifting AI landscape.

Threading the Needle: The Tradeoffs of US AI Weaponry Development and Regulation

Alexander Vincenti, CC ‘26

It is no secret that the United States has maintained military dominance since the end of the Cold War. Now, the US military faces a new challenge: artificial intelligence and its ability to reshape warfare. The US must thread a needle, allowing itself to expand AI-powered military technology while also preventing such weaponry from becoming existentially dangerous. To maintain its dominance and ensure global security, it must develop a plan to build its own AI arsenal and regulate rival AI weaponry.

Well aware of AI’s potential to make military operations swifter and more precise, the US began integrating AI into military command centers in 2025 for streamlined communications and data analysis. AI allows commanding officers to analyze complex data, such as surveillance feeds and battlefield reports, for patterns. AI can also aid strategic decision-making by analyzing potential choices and their probabilities of success, helping the government run simulations of conflicts. This has significantly improved the efficiency of US military intelligence by reducing response times.

In addition to infrastructure benefits, AI has significant potential to improve weapons through its speed and accuracy. The US is already building arms and drones guided by AI. Because of their ability to precisely process data, AI-guided munitions can identify and strike targets. These capabilities have been implemented in drone swarms—large numbers of unmanned weaponized drones that use AI to synchronize their movement. These swarms have proven effective at overrunning enemy defenses and are inexpensive to make. Simulations show that a drone swarm is the US’s best defense against a Chinese invasion of Taiwan. China has also begun developing similar technology. If the US wishes to maintain its dominance, developing military applications of AI is crucial. For example, it can invest in AI-powered drone ships that monitor underwater activity, targeting enemy submarines that would otherwise go undetected.

While AI has significant potential for military gain, the US also recognizes the need to regulate its military application for the sake of international security. The US has already expressed interest in establishing norms and guardrails for AI weapons development worldwide. In 2023, it launched the Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy, a voluntary framework that requires AI to be rigorously tested, to abide by international law, and to involve human oversight. The program was designed to address key concerns posed by AI’s military application. For example, in war simulations, the three most popular AI models recommended nuclear strikes in 95% of scenarios. The legitimate fear that AI lacks restraint in using nuclear weapons has prompted international norms requiring human input for strikes. Likewise, the US has researched AI drones that can strike targets with little to no human input. Given AI’s inability to comprehend the human cost of strikes, the US must lead regulatory efforts to limit this technology globally.

The US must invest heavily in its AI development, infrastructure, and diplomacy to adapt to the new era of warfare. Through the Pax Silica initiative, the US is securing its supply chains of semiconductors and computer chips used in AI technology. By securing access to resources, the US can effectively produce new and effective weaponry that will help it maintain its military dominance. At the same time, however, the US must make a diplomatic effort to establish norms for responsible AI weapons development, and to do this it must also lead by example. This approach to development and regulation will give the US its best chance at maintaining military superiority throughout the next era of warfare.

Ukraine: How David Became Goliath

Jay Jacobson, GS ‘28

Since 2022, the narrative of the Russo-Ukrainian War has gradually shifted from a parable about a plucky upstart defending itself from an unstoppable bully to a horror story of automated slaughter at a scale not seen since World War II. This new paradigm has reinvented Ukraine, turning it into one of the world’s leading experts in modern warfare. Ukraine needs to leverage this unique experience to its advantage when dealing with other states, and it is time the world recognized it as such. The reason for the change lies in what Ukraine’s Prime Minister recently called an industry “powered by Ukrainian housewives”—drone production.

“Drones dominate the fighting on both sides,” one Ukrainian soldier explained. The humble drone has redefined 21st-century warfare, making conflicts from the previous two decades look like quaint hangovers from the Cold War in comparison. Drones are shockingly efficient: they are already responsible for the majority of combat casualties.

AI-flown UAVs proliferate through a unique system that delivers equipment to frontline units based on combat effectiveness. There, the UAVs are proving more effective than their human-operated counterparts, even in extreme cases: Ukrainian drones recently captured Russian soldiers in the field, securing the position with no Ukrainians physically present. Investors are treating the war as the perfect testing ground for military equipment, and the Armed Forces of Ukraine (AFU) are happy to accommodate them. Through its logistics, targeting, and command structure, one expert noted, “Ukraine has mastered something NATO hasn’t.”

While the AFU are often outnumbered, they are rarely outgunned. NATO recently learned this the hard way in Exercise Hedgehog 2025, where 10 AFU soldiers mock-destroyed equipment and formations totaling 16,000 troops. Fancy (and extremely expensive) equipment, including the newest tanks, failed to stop a swarm of cheap, mass-produced UAVs. The wargame left the last skeptics in NATO high command convinced: the future is here, and they are not prepared.

The West does not have to keep learning the hard way through years of grinding warfare, as Russia has, so long as Ukraine is willing to assert itself as the military power it has become. Recently, Ukraine became an exporter, rather than an importer, of military equipment and, crucially, of the expertise in its use. In an increasingly chaotic world, that military power is already paying off: Ukraine is growing its security network. Ukrainian President Volodymyr Zelensky toured the Middle East, forming new bilateral partnerships with Turkey, Saudi Arabia, and Qatar and preparing the region for 21st-century warfare. Zelensky should try the same tactic with Europe: sidestep the official mechanisms of NATO and the EU, and adopt a more practical approach that guarantees the supplies and funds necessary to defend Ukraine. Ukraine’s successful partnerships in the Middle East can stand as a model.

There is a mutually beneficial opportunity here for all parties involved: Europe can learn how to fight the wars of the 21st century, and Ukrainians can get closer to Europe. If NATO membership is not on the table, then Ukraine will just have to bring a new table: an advanced partnership that integrates Ukrainian intelligence, logistics, and experience outside of NATO or the EU. In this new multipolar world, where the transatlantic alliance appears weaker every day, bringing in a country as committed to Europe’s defense as Ukraine could revive the continent in the face of Russia’s increasing bellicosity. Ukrainians are already on the front line of a simmering proto-world war, and the uncertainty of the West’s support is no longer acceptable. It is time for Ukraine to make a new deal.

A Voice in Need of Recognition: India’s Claim to the AI Table

Soenke Pietsch, CC ‘26

Artificial intelligence promises immense prosperity, but not without the potential for immense calamity. India, the world’s most populous country, exemplifies both: it attracts tech executives from OpenAI’s Sam Altman to Anthropic’s Dario Amodei, eager to train their algorithms on its population. During his visit to India in late 2025, Altman promised one free year of ChatGPT Go, normally priced at $8 per month, to all Indian citizens, amounting to $141 billion in forgone annual revenue for the company. This episode encapsulates the strategic sway India holds over the development of this new technology—sway it must use to shape development not just for itself, but for a Global South that could otherwise be left behind in the race for AI supremacy.

At present, the cornerstone of Indian AI regulation is the country’s “AI Governance Guidelines,” which prioritize “Responsible AI” through voluntary standards rather than heavy-handed regulation. The framework walks a fine line, emphasizing human accountability to strengthen safety without adding new compliance burdens.

Yet, India’s focus is not solely on domestic affairs. In February 2026, India’s government joined Pax Silica—the US initiative for safeguarding AI and semiconductor supply chains. That move signaled India’s Western alignment in the AI race. And yet, in the same month, India hosted the first global AI summit for the Global South to champion "AI for All.” This policy does not just shape India's AI future but establishes a foothold for the Global South in discussions they have previously not been a part of. 

India's dual signaling—joining a US-led semiconductor initiative while simultaneously hosting a Global South AI summit—is not contradictory. In fact, the strategy is exactly what India’s foreign policy tradition has always aspired to: principled non-alignment, updated for the algorithm era.

India is already a consequential voice in bodies such as the G20, where it used its 2023 presidency to push AI governance onto the agenda. Now, it must wield those same platforms more aggressively to champion frameworks that reflect the priorities of developing nations: access instead of restriction, capacity building rather than compliance burdens, and technology transfer over intellectual entrenchment. Otherwise, the global AI governance conversation will be shaped almost entirely by the EU’s regulatory instincts, Washington’s national security framing, and Beijing’s state-capacity model. None adequately represents the smallholder farmer in Telangana who might benefit from AI-powered crop advisory tools, to name just one example. India, having designated itself the voice of the Global South, has a responsibility to convert its rhetorical commitment into concrete negotiating positions.

Fortunately, India can act. Its enormous population gives it the leverage every major AI company seeks. Its diaspora community in the US runs Silicon Valley’s AI laboratories. Its democratic credentials establish a moral authority that China cannot claim, and that the United States struggles to project consistently. Its linguistic and cultural diversity is reflective of human complexity at scale. Building AI that works for India means building AI that works for the world. 

While the Global North signals dominance in artificial intelligence, the technology remains largely absent for most of the world: two-thirds of humanity has yet to benefit from any form of artificial intelligence. Only India, with its unique vantage, can make space for those unseen and unheard. If India succeeds, the algorithm era will be remembered not for the concentration of power in a few capitals, but for the democratization of intelligence across every border. That is India’s power.

Under Construction: The EU as an AI Ethics Leader

Karen Wu, CC ‘29

At the 2026 Munich Security Conference, the introductory theme “Under Destruction” questioned the resilience of transatlantic collaboration. In his foreword, Chairman Ischinger noted that while historically, the US and allies had a “shared understanding of principles” regarding the international order, this “understanding” now appears uncertain. In light of increased polarization and the US administration’s unpredictable approach to foreign policy, the European Union (EU), a middle power not aligned with either President Trump or the Xi-Putin alliance, should distinguish itself as a collaborative leader in the international system. While post-WWII multilateralism has long since shown signs of dysfunction, recent developments in the artificial intelligence defense space necessitate proactive measures to further the EU’s competitive edge as AI becomes increasingly assimilated into civilian and military operations. 

In 2024, NATO released a revised AI strategy intended to further the “use of these technologies…as soon as possible.” Notably, it highlights the development of “review processes…to operationalize responsible adoption of AI.” As of March 10, 2026, the Pentagon’s attempt to strong-arm the AI company Anthropic into permitting its models to be used for mass surveillance and fully autonomous weapons falls into a regulatory gray zone. While the NATO framework has no specific policy on lethal autonomous weapons, this coercion forces ambiguity onto the interpretation of “responsible adoption,” potentially encouraging other countries to use AI in such military capacities.

The EU should take a definitive ethical stance against the use of lethal autonomous systems by revising the 2024 AI Regulation Act, the standing comprehensive law that formalizes regulatory statutes for civilian use of AI. Since its passage, the Act has been criticized for a noticeable absence of regulations addressing military applications of AI; however, this oversight can now be used in the EU’s favor. Experts in the field have proposed one model, SYNTHComm, which guarantees human oversight of AI military decision-making via field supervisors with “real-time operation contact” who can override AI recommendations. At the same time, the inherent efficiency of AI is minimally impacted by the correction mechanisms built into the system. Introducing such a revision would make the EU the first global power to codify human intervention into AI weapons systems, establishing an authoritative exemplar of what “responsible adoption” looks like.

To solidify a united stance on AI, the EU must then pursue tangible action with member states. The rise of AI provides an opportunity for increased military cooperation, as seen in the 2025 adoption of Security Action for Europe, which will provide up to $150 billion in loans to member states seeking enhanced defense capabilities. Previous collaborative military operations have faced severe delays due to friction between partner countries, thus increasing the present necessity of cooperation. One such operation, the Future Combat Air System (FCAS), stands as a “litmus test” for Europe’s capability to cooperate on joint defense efforts. FCAS is expected to reach capability by 2040, comprising a complex AI backbone that will connect new generation fighter jets with unmanned remote carriers through a “Combat Cloud.” Successful establishment of FCAS would signal that the EU can translate regulatory ambitions into the coordinated military action needed for collective defense.

In a world skeptical of multilateralism, the EU is uniquely positioned to take a definitive stance on ethical AI usage and improve cooperation between member states. Germany’s establishment of the Technology Responsibility Working Group as part of FCAS development is a promising start; however, such an advisory group must extend to all EU members to define the ethical use of AI in the defense space. A unified, definitive stance would construct the EU’s identity as an international leader in the ethical use of AI—a rather salient title. 

Conclusion

Altogether, these cases form a complex interweaving of vastly different strategies among the countries leading the global AI race. While the EU considers its role in pursuing regulatory leadership to ensure the ethical use of AI in the defense space, China seeks to build bridges with other countries, promising mutual success in building and implementing AI. India, which stands at risk of becoming merely a source of data, instead pursues AI autonomy, cooperating only where it benefits the country and using non-alignment for leverage. Meanwhile, Ukraine should consolidate its advantages in AI and secure closer ties to regional allies like the EU and NATO by offering its newfound AI weaponry expertise as a bargaining chip. As the current, tenuous leader of the AI race, the US should continue to develop its AI capabilities in weaponry to preserve its military dominance while ensuring that AI continues to be used responsibly worldwide. Each case charts a different trajectory as these countries attempt to balance ethics, power, and global competition within the developing AI space.

What emerges is not cooperation in AI development, but rather specialized strategies that pair selective collaboration with strategic self-determination to secure a competitive advantage in the space. The future of AI will not be determined by one country alone, but by the collective strategic decisions of countries, each of which must weigh whether cooperation strengthens or damages their position in the future AI field.


This roundtable was edited by Roma Tivare, Nathan Shurts, Theodore Griffin, and Braden Mendiburu.

 