Artificial Intelligence (AI) Archives - Just Security
A Forum on Law, Rights, and U.S. National Security
https://www.justsecurity.org/category/ai-emerging-technology/artificial-intelligence/

Key Trends that Will Shape Tech Policy in 2026
https://www.justsecurity.org/128568/expert-roundup-emerging-tech-trends-2026/
Thu, 15 Jan 2026

From AI federalism and autonomous cyber operations to intensifying U.S.-China competition, we asked leading experts to identify key trends in the year ahead.

The global technology policy landscape is entering a pivotal year. In the United States, the AI governance debate has evolved from whether to preempt state-level regulation to what a substantive federal framework might actually contain. Internationally, competition between Washington and Beijing is accelerating, with semiconductors and compute capacity now central to national security strategy. The same competitive logic is shaping quantum computing, where the United States, Europe, and China are rethinking national programs while navigating fragile supply chains.

As global technology competition intensifies, new security risks are emerging. The first publicly reported AI-orchestrated hacking campaign appeared in 2025, and agentic AI systems are expected to reshape the offense-defense balance in cyberspace in the year ahead. More broadly, as AI diffuses through societies, policymakers must grapple with its implications for democracy, human rights, and the distribution of power between states and their citizens.

To make sense of these developments, we asked leading experts to identify key trends they will be watching in 2026 and beyond.

AI Federalism and the Future of U.S. Regulation

Josh Geltzer, Partner in WilmerHale’s Defense, National Security and Government Contracts Practice; Former Deputy Assistant to the President, Deputy White House Counsel, and Legal Advisor to the National Security Council: 

In 2025, the central AI policy debate in Congress revolved around whether to impose a federal “AI moratorium” that would block states from regulating AI for a set period of time. This proposal, strongly supported by the Trump administration, nearly passed as part of the summer’s “Big Beautiful Bill” and later resurfaced in the year-end National Defense Authorization Act, but ultimately failed due to insufficient congressional backing. As an alternative, U.S. President Donald Trump issued a December executive order aimed at limiting the impact of state-level AI laws without full federal statutory preemption. The order directed the Department of Justice to develop ways to challenge state-level AI laws deemed contrary to administration policy and, furthermore, instructed executive branch agencies to withhold certain funds from states maintaining AI regulations viewed as restrictive. It also committed the White House to draft a legislative framework for a uniform federal AI policy that would preempt state laws, while preserving state authority over issues like child safety, data center infrastructure, and government AI procurement.

Looking ahead to 2026, the debate is shifting from whether to “preempt something with nothing” to whether to “preempt something with something.” In other words, the key question will no longer be about eliminating state AI laws—which continue to proliferate—without a federal substitute, but will instead become about replacing them with a concrete federal regulatory framework. This change fundamentally alters the conversation: both supporters and critics of the 2025 moratorium must recognize that preemption with a substantive policy is a different proposition from preemption without one. What that federal framework will actually look like remains uncertain, but its details will be critical in determining the level of support for renewed legislation. The bottom line: expect a very different—and potentially more consequential—discussion about federalizing AI law in 2026.

David S. Rubenstein, James R. Ahrens Chair in Constitutional Law and Director of the Robert J. Dole Center for Law and Government at Washburn University School of Law:

The biggest AI federalism story of 2026 will not be about algorithms. It will be about silicon and steel. The National Conference of State Legislatures predicts that data centers will be a central legislative concern. While the dominant political narrative focuses on energy affordability and sustainability, the grassroots data center backlash runs deeper. People vote how they feel, and many Americans feel negatively about an AI-driven future. Data centers are vessels for AI anxiety and antipathy toward big tech more generally. This matters for two related reasons. First, the backlash reflects a broad coalition, spanning affordability, sustainability, job security, and corporate accountability. Second, even if energy costs are contained, the backlash probably will not be. For constituents anxious about AI, job loss, and cultural decay, blocking a local land-use permit or a corporate tax credit is how their voices will be heard.

Beyond infrastructure, states will continue to regulate AI itself. However, comprehensive AI acts are losing momentum. Colorado’s flagship law illustrates why. Originally passed in 2024, Colorado’s AI Act was designed to regulate AI discrimination across employment, housing, healthcare, and more. As the effective date approached, however, Colorado’s Governor and Attorney General backed the industry’s effort to gut the law. Instead, the Colorado legislature delayed the effective date to mid-2026, and future setbacks are likely. States are now pivoting to more targeted approaches, focusing on high-risk applications and legacy sectors. AI chatbots, for example, are in the legislative crosshairs, following headline news that linked chatbots to suicide, defamation, and deception. In 2026, states likely will respond with transparency laws, age-gating, and other guardrails. Pricing algorithms are also on the agenda. Some states may take a general approach, for example, by amending antitrust codes. But most states will seek to regulate price-setting algorithms in specific domains, like housing and insurance. Meanwhile, major legislation enacted in 2025 will take effect this year, including California’s “companion chatbot” law and Illinois’ employment-decision protections.

None of this sits well with the Trump administration. Acceleration and deregulation are twin pillars of the White House’s domestic AI agenda. Most recently, Trump issued an executive order to limit state AI laws through a multi-pronged approach: litigation, federal funding conditions, and regulatory preemption. The order’s ambition makes it legally vulnerable. The executive branch cannot unilaterally preempt state law without a delegation from Congress. Nor can the executive branch impose spending conditions that Congress itself rejected. Agencies will be hard-pressed to demonstrate otherwise in court. Legal issues aside, the order is politically tone-deaf. By large margins, Americans favor AI regulation. States are delivering. The federal government has not. Expect more of the same in 2026.

Losing Control of Autonomous Cyber Operations

Brianna Rosen, Executive Director of the Oxford Programme for Cyber and Technology Policy; Senior Fellow and Director of the AI and Emerging Technologies Initiative at Just Security: 

The emergence of highly autonomous, cyber-capable agents represents a qualitative shift in the threat landscape that existing governance frameworks are ill-equipped to address. Unlike AI systems that assist human operators, these agents can identify vulnerabilities, develop exploitation strategies, and execute intrusion campaigns with minimal human oversight. The GTG-1002 campaign disclosed last year offered an early glimpse of this future, but the systems now under development by both state and non-state actors will be far more sophisticated.

The governance challenge is twofold. First, attribution becomes significantly harder when autonomous agents can be deployed at scale, adapt their tactics in real time, and obscure their origins. Traditional deterrence models assume adversaries can be identified and held accountable; that assumption is eroding. Second, the speed of autonomous operations may compress decision cycles to the point where meaningful human control becomes impractical. An agent that can move from reconnaissance to exploitation in minutes leaves little room for deliberation.

These dynamics have implications beyond cybersecurity. Highly autonomous agents operating in other domains, from financial markets to critical infrastructure management, raise similar questions about accountability, control, and escalation risk. Policymakers in 2026 will need to move beyond sector-specific frameworks toward a more integrated approach to autonomous systems governance, one that addresses the underlying capabilities rather than their application in any single domain. The forthcoming U.S. cybersecurity strategy and ongoing discussions among allies about AI security cooperation offer opportunities to begin this work, but only if governments are willing to grapple with the harder questions about autonomy, speed, and control that these systems pose.

Teddy Nemeroff, Co-Founder of Verific AI; Former Director for International Cyber Policy on the National Security Council staff:

In 2026, agentic AI will play a decisive role in determining the balance between offense and defense in cyberspace. Last year saw the first publicly reported AI-orchestrated hacking campaign, perpetrated by Chinese state-sponsored cyber actors. Although the campaign used unsophisticated hacking tools and only compromised a handful of the approximately 30 organizations targeted, it provided a proof of concept for how AI could be used to automate 80 to 90 percent of hacking operations. This year, AI-orchestrated hacking campaigns will become the norm. Cyber criminals and states like Russia, Iran, and North Korea will adopt similar approaches, increasingly using agentic capabilities to run their own campaigns.

On the other hand, agentic AI will drive new innovations in cyber defense in 2026—from AI-enabled penetration testing to automated incident response. A key question will be how quickly these innovations can be implemented in cybersecurity products and distributed to owners and operators, especially in key critical infrastructure sectors. This time lag will almost certainly give hackers the advantage in the short term, and that advantage may be overwhelming for under-resourced entities, like schools and hospitals, as well as poorer countries with weaker cyber defenses.

In addition to being a tool that cyber attackers and defenders use, in 2026, AI will increasingly shape the landscape in which they compete. As companies integrate AI capabilities into their enterprise systems, this will create new vulnerabilities that adversaries will seek to exploit and defenders must address—from prompt injection attacks to data exfiltration from AI systems. The widespread adoption of AI coding tools will also shape the cybersecurity environment in 2026, as hackers increasingly exploit weaknesses in hastily produced “vibe coded” software. On the other hand, experts hope AI can ultimately reduce software vulnerabilities through better pre-release testing and by lowering the cost to re-write flawed code in legacy technology used to operate many critical infrastructure systems today.

The White House is expected to issue a new cybersecurity strategy in January 2026—one that is expected to feature a heavy emphasis on offensive cyber and deterrence. This will provide an important signal as to how the Trump administration plans to address these challenges in the coming year.

U.S.-China Competition Accelerates Across the Tech Stack 

Martijn Rasser, Vice President of the Technology Leadership Directorate at the Special Competitive Studies Project:

In 2026, the transition from large language models to agentic AI—systems capable of autonomous reasoning and real-world execution—will redefine the stakes of technology transfer. We have moved from information risks to functional risks. Because agentic systems require vast, low-latency compute to manage industrial supply chains, cyber operations, and financial markets, the underlying hardware is no longer merely a commodity; it is the physical infrastructure of sovereignty.

To protect this infrastructure, the United States must move past the strategic myth that selling mid-tier hardware to adversaries ensures dependency. History and current industrial policy suggest the opposite: providing frontier compute to strategic rivals only provides the scaffold they need to train indigenous models and bridge the gap toward self-sufficiency. In 2026, economic security requires building a high fence around the entire compute stack. The goal of export controls should not be to keep adversaries addicted to U.S. technology, but to ensure that the most capable agentic frameworks—those that can disrupt global markets or automate high-speed cyber warfare—remain a privileged asset of the democratic world.

True diffusion should be a strategic reward for allies, not a commercial compromise with rivals. By strictly limiting compute access for adversaries while building a secure, high-capacity compute-as-a-service architecture for trusted partners, the United States can lead a coalition that ensures the rules of the road for autonomous agents are set in Washington, London, and Tokyo—not Beijing. In this new era, U.S. security lies in maintaining America’s technological lead, which should not be compromised for the sake of profit.

Geoffrey Gertz, Senior Fellow in the Energy, Economics, and Security Program at the Center for a New American Security; Former Director for International Economics at the White House: 

Over the course of 2025, the United States and China rapidly escalated trade and tech restrictions toward each other, then negotiated to roll them back. By the end of the year, the superpowers had established a fragile détente, with the Trump administration refraining from imposing new export controls or sanctions to retaliate for China’s cyberespionage out of fears of upsetting this shaky truce. The result is a quiet sea change in U.S. economic security policy. After years of steady progress on the China derisking policy agenda—marked by new controls on dual-use technology exports, outbound investments, and the transfer of sensitive personal data—at the outset of 2026 the Trump administration has effectively paused any new competitive actions toward China.

The trend to watch this year is how long this truce will last, and what happens to the U.S. tech protection agenda in the meantime. The Commerce Department recently withdrew plans to restrict the import of Chinese drones and the White House opted not to introduce any significant new tariffs on semiconductors. Yet even as the Trump administration studiously avoids new tech restrictions that might destabilize the status quo, other parts of the government may have an incentive to act. Late last year, Congress passed a law that codified and expanded the outbound investment rules, and there are ongoing legislative efforts to strengthen U.S. chip controls. Meanwhile, the Federal Communications Commission (FCC), a regulatory agency (debatably) independent of the administration, may step in where the Commerce Department is stepping back: in late December the FCC issued its own rule restricting the import of foreign drones. Unlike Commerce’s proposed approach, the FCC rule applies to foreign drones from any country, rather than explicitly targeting China, perhaps in an effort to avoid provoking Chinese retaliation. This may become a model for additional new restrictions.

Ultimately, the current U.S.-China truce is likely to break down at some point, whether due to a miscalculation, unforeseen shock, or simply because one side or the other determines it no longer serves its interests. At that point the floodgates may open on new tech restrictions, as the various constituencies in favor of more rapid derisking seek to make up for lost time. Any companies making business decisions on the assumption the current détente truly represents a break in the longer-term trend of heightened geoeconomic competition may be disappointed.

Scott Singer, Fellow in the Technology and International Affairs Program at the Carnegie Endowment for International Peace; Co-Founder of the Oxford China Policy Lab:  

In 2026, general-purpose AI systems will begin to transform major economies and societies in more tangible, noticeable, and irreversible ways than we have seen before. Yet backlash will intensify, creating new and more potent coalitions to rein in AI. The United States and China will be ground zeroes for both these phenomena, with trends operating in parallel but taking shapes under two very different national systems.

AI systems continue to advance in both technical capacity and real-world utility. In late 2025, tools like Claude Code enabled AI-savvy users with no coding experience to fully automate a range of computer-based tasks. With Washington and Beijing both hoping to diffuse such tools throughout their economies, 2026 will showcase a race between the relatively laissez-faire U.S. approach and China’s top-down, whole-of-society AI+ initiative. At the same time, both countries are simultaneously grappling with AI risks that are no longer speculative but clearly present. China will begin implementing its recently published interim guidelines on “human-like” AI interaction services, responding to growing social concerns about emotional dependence and addiction. Meanwhile, parallel anxieties have crystallized in the United States around child safety, companion chatbots, and the ethics of rapid AI development, with bipartisan congressional calls for child safety legislation. Additionally, California has passed laws targeting companion chatbots.

As AI capabilities diffuse more broadly in 2026, expect to see new stakeholder coalitions emerge in both countries demanding governance frameworks that address growing harms. In the United States, the pro-regulatory coalition may include populists of the right and left, labor, parents, Catholics and other religious communities, and civil rights groups. As legislation moves slowly, many will turn to the courts to hold companies liable for AI-related harms. In China, the CCP will enforce its own values and monitor society for grass-roots sentiments that require a response. Beijing will not hesitate to penalize companies if deemed necessary.

Sam Winter-Levy, Fellow in the Technology and International Affairs Program at the Carnegie Endowment for International Peace:

A year ago, there was something close to bipartisan consensus on AI export controls: the United States should maintain as large a lead as possible over China in advanced chips and prevent Beijing from accessing the hardware needed to train competitive frontier AI systems. That’s still the dominant view in Congress, as well as the national security and AI policy communities. But in the White House, that consensus has collapsed. In December, U.S. President Donald Trump announced he would allow Nvidia to sell its advanced H200 chip to China, in return for the U.S. government receiving a 25 percent cut of the revenue. What was once a set of national security controls off the table for bargaining has become something closer to a pricing negotiation with both Beijing and Nvidia.

The key questions for 2026 center not on whether the Trump administration will maintain this approach, but on how far it will go and what constraints it will face. If the administration greenlights the sale of millions of H200 chips to China, it risks significantly eroding the U.S. compute advantage. But the license requirement remains in place, meaning multiple agencies will have to sign off on license conditions—along with Chinese companies and Beijing. If this process results in significantly limited sales, the effect will be much more muted. Pressure from Congress, where bipartisan concern about AI transfers to adversaries remains strong, may also slow things down.

On the other side of the ledger, how much progress will China make in indigenizing semiconductor manufacturing equipment and scaling domestic chip production—and will the relaxation of U.S. controls slow that drive or give Beijing less incentive to invest? For now, most projections suggest China will continue to struggle to make competitive chips in large quantities. If the administration succeeds in plugging gaps in its semiconductor manufacturing equipment export control regime, which for now it remains committed to, China will face even steeper obstacles. But Washington will need alignment in export restrictions from allies like the Netherlands, Japan, and South Korea, which may be reluctant to bear significant costs enforcing complementary controls now that the United States seems comfortable selling advanced chips to Beijing. If U.S. allies loosen their controls, China’s ability to manufacture high-end chips domestically could improve substantially.

Meanwhile, the competition is broadening. Chinese open-source models are seeing widespread adoption across the Global South, with significant support from the Chinese government—a trend that will accelerate as Beijing scales its domestic chip production. Toward the end of last year, the Trump administration launched its AI exports program to bolster the United States as the global supplier of AI and bring partners into an American-led ecosystem. Together with its newly announced Pax Silica initiative—an attempt to build a “coalition of capabilities” among states in the semiconductor supply chain—it represents one of the administration’s most prominent efforts to marshal public resources to compete internationally with China on AI. For now, both initiatives lack substance. But if the administration follows through with sustained funding and diplomatic engagement, they have the potential to strengthen the U.S. position in what is fast becoming a truly global contest for AI influence.

Lennart Heim, Independent AI Policy & Semiconductors Researcher: 

I will continue to closely watch China’s AI chip ecosystem in 2026. Chinese manufacturers still struggle enormously to produce advanced chips at scale—domestic production remains a small fraction of what the United States and its partners design and manufacture, yields are poor, and their technology nodes lag years behind. This gap has been widening, not closing, as TSMC and other chip manufacturers keep advancing. The key questions I am tracking: How many advanced AI chips will China actually produce domestically this year, and who will buy them? Can China manage to produce high-bandwidth memory domestically, given that the country can no longer import it? I expect this to be a major challenge—China’s production will certainly increase, but from a very low base.

But something has also shifted. In December, the Trump administration approved exports of Nvidia’s H200—the most powerful chip ever cleared for sale to China. With access to high-end foreign chips for domestic use, China might now be able to divert some of its scarce domestic production toward exports, potentially creating tech ecosystem lock-ins abroad. Will China do this, and where will these chips end up?

So far, I am not aware of data centers with advanced Chinese AI chips outside of China. But as the United States pursues major AI infrastructure deals overseas, China faces pressure to promote an alternative tech ecosystem. I will be watching whether China channels domestically produced chips to foreign deployments—or whether Chinese firms might even use their now legally purchased Nvidia chips to compete with U.S. hyperscalers in third markets. China certainly cannot compete on the largest projects with its own chips; they simply are not produced at that scale. But China does not need to match U.S. scale to be strategically relevant. More likely, it might stack modest volumes of AI chips with mobile networking, subsidized financing, and smartphones preloaded with Chinese AI as the default—the whole tech stack. The chip volumes will be limited, but these beachheads matter. As we have seen with other technologies, early deployments can create long-term dependencies if they are strategic.

Quantum Computing’s Industrial Challenge

Constanza M. Vidal Bustamante, Fellow in the Technology and National Security Program at the Center for a New American Security:

The United States, China, and Europe are preparing to refresh their national quantum programs in 2026, making this a pivotal year for quantum policy. As quantum sensors and computers move toward real-world utility and nations compete to secure their economic and security advantages, they are converging on a defining challenge: whether their industrial bases and supply chains are ready to support scale.

Despite boasting a world-leading ecosystem of universities and startups, the United States sees its quantum progress increasingly constrained by thin and globally dispersed supply chains. The United States relies heavily on foreign (including Chinese) or fragile single-supplier markets for critical inputs, from precision lasers and cryogenics to photonic materials and advanced microfabrication. Yet less than twelve percent of federal quantum funding supports domestic enabling technologies and manufacturing capacity. Congressional bills and rumored upcoming executive orders signal awareness of some of these gaps, but concrete outcomes remain uncertain, especially as quantum continues to compete for attention with higher-profile policy priorities such as AI and conventional semiconductor manufacturing.

Meanwhile, China’s Fifteenth Five-Year Plan, due this March, is expected to further strengthen its already formidable industrial base by elevating quantum as the top “industry of the future.” And Europe, for its part, is preparing a Quantum Act for release in the second quarter, emphasizing “Made in Europe” industrial capacity and supply chains as part of a broader push for technological sovereignty.

The United States and Europe must take care not to turn their drive for self-reliance into costly fragmentation. Fully indigenizing quantum supply chains on either side of the Atlantic would demand time and investment neither can afford if they hope to stay ahead of China. A more credible path may lie in pooling allied capabilities now to secure trusted sources of critical materials, components, fabrication, and systems, while building domestic capacity over time—an approach reflected in initiatives such as the U.S. Department of State’s Pax Silica framework for AI. Whether the United States and its allies act on this logic in 2026 may determine whether they reap the substantial national security and economic gains of quantum technologies they have long sought—or cede that value to strategic competitors.

An Authoritarian Turn in Tech Policy?

Justin Hendrix, Cofounder and CEO of Tech Policy Press:

In February, heads of state and leaders of industry will gather in Delhi for the AI Impact Summit, the fourth in a series of global conferences that kicked off just one year following the launch of OpenAI’s ChatGPT. The tagline for the event is “Welfare for All, Happiness for All.” While we can expect another measured announcement from the gathered elites about international cooperation towards that goal, the year ahead appears set to more fully reveal what earlier techno-optimism and billions of dollars in marketing have obscured: that under present conditions, AI is more likely, on balance, to undermine democracy and strengthen authoritarianism.

Indeed, where authoritarianism is rising—and that is nearly everywhere, according to the 2025 editions of the Economist Intelligence Unit Democracy Index, the Freedom House Freedom in the World report, and the V-Dem Institute Democracy Report—AI is increasingly a tool of authoritarian control and a threat to democratic systems. OpenAI might be selling “democratic AI,” but as legal scholars Woodrow Hartzog and Jessica Silbey contend, today’s “AI systems are built to function in ways that degrade and are likely to destroy our crucial civic institutions,” which they say includes universities, the free press, and the rule of law itself. That is even before all of the ways AI is being deployed around the world for surveillance, manipulation, and control.

A clinical look at the situation requires adopting a new frame for the new year, one that prioritizes interrogating the “tenacious connections” between AI and authoritarianism rather than building more intellectual scaffolding for “responsible AI.” Instead of prioritizing questions like, “How do we regulate tech to ensure a healthier democracy?” we should put more effort into answering “How do we preserve space for human agency and resistance in an increasingly authoritarian century?”

This may appear to be a pessimistic reorientation, but it points towards what should be sources of hope: the clear need for solidarity with existing movements for the rule of law, democracy, human rights, social and environmental justice; and the urgency of building alternative public technology infrastructures that are free from both corporate and state control. This is the reorientation that is needed in tech policy beyond simply asking corporations and states to behave more responsibly when it comes to AI.

Petra Molnar, Associate Director of York University’s Refugee Law Lab; Faculty Associate at Harvard’s Berkman Klein Center for Internet and Society: 

I expect a shift in the use of border technologies by Global North states, where surveillance and screening start well before a person reaches a port of entry through increasingly networked systems of social media surveillance, predictive analytics, and automated decision-making. This direction also complements the growth of more automated physical surveillance infrastructure, including autonomous surveillance towers at the U.S.-Mexico border, and additional in-land routine surveillance practices, such as biometric data gathering, app-based verification, and social media monitoring that expand already discretionary decision-making. All these technologies have profound ramifications for people’s human rights and civil liberties, yet governance mechanisms are lacking. While the European Union’s AI Act will likely shape regulatory conversations in 2026 on border technologies, the Act also leaves substantial room for state security rationales and procurement realities to determine what is deployed in practice instead of leading from a rights-protecting perspective at the border.

At the same time, global migration numbers will likely continue increasing in 2026. People in need of protection will continue to flee protracted and new conflicts, exercising their internationally-protected right to seek asylum. But in the current anti-migration political climate, governments will continue pushing for the normalization of surveillance and expanded data collection, with private companies like Anduril, Palantir, and Elbit Systems benefiting from lucrative public-private partnerships in a multi-billion dollar border industrial complex. As such, 2026 may be less about “new” tools—like robo-dogs, iris-scanning orbs and border vigilante apps—and more about existing tools becoming integrated into a continuous data-driven pipeline, from social media and biometrics to automated triage. Practical harms will not only include privacy loss, but also the amplification of discrimination and exclusion through opacity, algorithmic error, chilled speech and association, as well as the weakening of international legal norms.

Who Will Stand Up for Human Rights in 2026 – and How?
https://www.justsecurity.org/128753/who-will-stand-for-human-rights-2025/
Thu, 15 Jan 2026

The deterioration in human rights in 2025 heightens the risks for defenders going forward, all worsened by donors' deep funding cuts, especially those of the United States.

The year 2025 was difficult for human rights and human rights defenders.

Unceasing attacks came from governments, including the most powerful, as well as from the private sector and non-state groups, pushing agendas in opposition to human rights. Many of these assaults are amped up by technology, with the methods and means becoming ever cheaper and ever more accessible to the masses.

An annual analysis from the Dublin-based international rights group Frontline Defenders paints a devastating picture of killings, arbitrary detention, surveillance, and harassment. CIVICUS, an organization that measures civic space (defined as “the respect in policy, law and practice for freedoms of association, expression and peaceful assembly and the extent to which states protect these fundamental rights”), documented declines in 15 countries and improvements in only three. The location and nature of the drops were diverse, taking place from mature democracies such as the United States, Germany, France, and Switzerland, to authoritarian regimes such as Burundi and Oman, and including countries in crisis and conflict such as Sudan and Israel. Some types of human rights were uniquely politicized and singled out in 2025, including women’s rights and environmental rights. Freedom House recorded the 19th straight year of declines in global freedom.

All this is compounded by unprecedented slash-and-burn cuts to international aid budgets for organizations and individuals working on human rights worldwide. The Human Rights Funders Network of almost 450 institutions across 70 countries estimates that by 2026, human rights funding globally will experience a $1.9 billion reduction compared to levels in 2023.

Taken together, these developments make the world more dangerous than ever for human rights defenders, who have fewer resources at their disposal to combat the threats.

In 2026 and moving forward, two crucial questions arise for the defense of human rights globally. First, who will do the work of fighting to protect and advance human rights in the year ahead, and second, how can those in the international community still fiercely committed to human rights support them? These questions will be shadowed by another trend: impunity. Yet, at the same time, lessons and a few positive developments from 2025 can guide human rights defenders on how to seize opportunities in the coming year, beginning even this month at the United Nations.

The Earthquakes of 2025

Eviscerating Democracy, Human Rights, and Governance Assistance

In the United States, 2025 began with the newly inaugurated Trump administration dismantling the U.S. Agency for International Development (USAID) and canceling approximately 85 percent of its programming (from a budget of more than $35 billion in the fiscal year ending in September 2024). The gutting eliminated hundreds of millions of dollars of support for those working to protect human rights and expand freedom and democracy around the world. The State Department’s grantmaking efforts were similarly cut, with more than half of its awards canceled, including programs directly supporting human rights defenders such as one initiative providing emergency financial assistance to civil society organizations and a fund to promote human rights and democracy and respond to related crises.

Most other major donor countries followed suit, though not with the same sweep or to nearly the same degree. Canada said it would reduce foreign aid by $2.7 billion over the next four years, the Dutch announced structural spending cuts of €2.4 billion on development aid starting in 2027, and the European Union announced a €2 billion reduction in its main mechanism for development aid for 2025-2027. Multilateral funders were not immune to the trend: the United Nations, for one, will see major budget and staffing cuts for human rights in 2026.

The U.S. retreat from foreign assistance rapidly impacted all development sectors, from health to education to humanitarian assistance, but no sector was targeted with such enmity as that of democracy, human rights, and governance. Advocates and implementers not only saw the dire resource clawbacks discussed above, but also found themselves tarred by a steady diet of derisive commentary from the very policymakers doing the cutting.

Secretary of State Marco Rubio once championed human rights and democracy “activists” as a U.S. Senator, even serving on the board of the democracy-promoting International Republican Institute before the administration eliminated the congressional funding that supported it. He once told a crowd at the Brookings Institution that “[f]oreign aid is a very cost-effective way, not only to export our values and our example, but to advance our security and our economic interests.”

But as secretary of state, he abruptly reversed course, writing last April that the State Department unit overseeing civilian security, human rights, and democracy had “a bloated budget and unclear mandate,” and that its “Bureau of Democracy, Human Rights, and Labor had become a platform for left-wing activists to wage vendettas against ‘anti-woke’ leaders in nations such as Poland, Hungary.” Other members of the administration were similarly sharp-tongued about the sector, with now-former USAID Administrator Pete Marocco conflating the promotion of “civic society” with “regime change” in official court documents and President Donald Trump himself referring to USAID’s leadership as “radical lunatics.”

The rhetoric mirrors similar language used by authoritarians across the globe who have long been opposed to foreign assistance for democracy, human rights, and governance work, and it has real-world consequences for those advocating for human rights and freedom. Leaders of multiple countries have seized on the words of the Trump administration to launch spurious investigations of human rights defenders and other civil society activists who had received U.S. funding.

Closing Civic Space and New Technology

Closing civic space is not a new threat to human rights defenders, but it is one that has reached a fevered pitch in the last few years. This has included both an increase in traditional attacks and a greater reliance on new tactics for suppression, especially in the digital sphere.

Nearly 45 percent of all civic space violations CIVICUS recorded for its annual analysis were related to the freedom of expression. The organization documented more than 900 violations of the right to peaceful assembly and more than 800 violations of freedom of association. The most frequent examples were detentions of protesters and journalists, followed by the detention of human rights defenders outside the context of a protest or journalism, merely for doing their work.

Authoritarian regimes also have become ever more adept at utilizing the digital space for repression. Tactics such as doxing, censorship, smearing, and online harassment are important tools in an authoritarian approach. They have been supplemented in recent years by less evident tactics such as shadow-banning, which the CIVICUS analysis defined as when “a platform restricts content visibility without notifying the user,” allowing the platform to maintain an appearance that it is neutral.

Women’s rights defenders face additional risks online, including technology-facilitated gender-based violence: In a global survey by the Economist Intelligence Unit, 38 percent of women reported personal experience with violence online, from hacking and stalking to image-based sexual abuse.

Attacks in the digital space often are also connected with or fuel physical attacks, “including killings, enforced disappearances, arbitrary detention and harassment,” as Frontline Defenders reported in its analysis. Tunisia is paradigmatic. Amnesty International reported that, beginning in 2024, a “wave of arrests followed a large-scale online campaign…which saw homophobic and transphobic hate speech and discriminatory rhetoric against LGBTI activists and organizations spreading across hundreds of social media pages, including those espousing support for the Tunisian President Kais Saied. Traditional media outlets also broadcast inflammatory messages by popular TV and radio hosts attacking LGBTI organizations, calling for their dissolution and for the arrests of LGBTI activists.”

What to Expect for Human Rights in 2026 

The absence of meaningful and unified international pushback to human rights abuses by some of the world’s most powerful nations means the rights-based international system will continue to face unprecedented attacks, and the challenges that rights defenders face in the year ahead are likely to increase in number and intensity. Authoritarians worldwide have monitored the assault against human rights in the past year — from genocide in Gaza to the crackdowns on protesters in Tanzania to restrictions on freedom of association and expression in El Salvador and so many more instances — and they have learned that they are unlikely to be held accountable internationally in the near term.

Yet despite these challenges, a few developments in 2025 offer some reasons for optimism in the year ahead. Several large-scale, youth-led movements in 2025 held their governments accountable for rights violations, from the July Revolution in Bangladesh that ousted an abusive prime minister to the Gen Z protests in Kenya over economic conditions and government corruption, a protest moniker that spread to other countries as well.

Some governments passed rights-protecting laws, from Thailand’s legalization of same-sex marriage to Colombia’s laws preventing child marriage. Courts stood up for human rights and held perpetrators to account, from the International Criminal Court’s conviction of Sudan’s Ali Muhammad Ali Abd-Al-Rahman for war crimes and crimes against humanity to the U.S. conviction of The Gambia’s Michael Sang Correa for torture, to the symbolic judgment of the People’s Tribunal for Women of Afghanistan. These trends are likely to continue in 2026, despite the challenges, because courageous human rights defenders are using every avenue to fight for rights.

This year will also bring targeted opportunities to continue the fight for human rights. A preparatory committee for a proposed international crimes against humanity treaty begins work this month at the United Nations. Also at the U.N., this year’s Universal Periodic Reviews, a regular peer review of countries’ human rights records, will focus on some of the world’s worst rights offenders — including Sudan, Eswatini, and Rwanda — as well as countries with highly mixed records. These reviews provide an opportunity for the world to examine, publicly and critically, the rights records of all 193 countries and for victims and activists to share their stories and insights. While the United States has not submitted its self-evaluation, which was due late last year, the process continued with the usual submissions from the U.N. and others.

Creative activists also are likely to use prominent events, such as the 2026 Olympic Games, to push for the expansion and recognition of human rights. They can take the opportunity of the United States’ 250th anniversary celebrations to highlight and internationalize the country’s founding principles of life, liberty, and the pursuit of happiness, as well as the requirement that all governments “[derive] their just powers from the consent of the governed.”

Who Will Lead the Fight for Human Rights in 2026? 

As many governments pull back and even attack human rights, the work of human rights defenders and organizations will become more critical than ever. Some of them have been leading the fight for decades, including leading international NGOs, national organizations, networks, and prominent individual leaders. Others have done critical human rights work but haven’t labeled themselves as rights defenders, such as organizations providing access to clean water, supporting girls’ education, or working to prevent violent conflict.

Many work at the community level, alongside neighbors and friends, with human rights defenders networks around the world, from the Mozambique Human Rights Defenders Network to Somos Defensores in Colombia. Some are in exile, fighting for rights in their home countries and for refugee and diaspora communities, like the brave Afghan women who organized a landmark People’s Tribunal in 2025 to expose rights violations against women. Others are professionals whose skills directly relate to human rights — lawyers, judges, journalists, and more. They include people like the brave journalists who continue to report on the context in Gaza, despite the incredible risks, and the Burmese lawyers who continue to document rights violations. Some are individual activists, using their platforms and skills to protect rights and call attention to attacks against them, like Iranian Nobel laureate Narges Mohammadi who was recently detained alongside other rights defenders while attending a memorial service for a human rights lawyer. Some are informal coalitions, student and youth groups, or protest participants — social movements have been and will be an essential component of the fight for human rights. All of these actors play a critical role in the human rights ecosystem. All of them are human rights defenders.

Aid funding cuts have devastated civil society organizations and will continue to impact human rights advocates. A survey by the International Foundation for Electoral Systems and International IDEA of 125 civil society organizations based in 42 countries found that 84 percent of respondents had lost funding due to U.S. and other countries’ aid cuts, with the same number expecting further cuts in 2026. UN Women reported that more than one in three women’s rights and civil society organizations have suspended or shut down programs to end violence against women and girls and more than 40 percent have scaled back or closed life-saving services. The philanthropic organization Humanity United found that 44 percent of peacebuilding organizations that it surveyed would run out of funds by the end of 2025.

These cuts will only be amplified as time goes on, as fewer young people can become human rights professionals while managing to put food on the table, as legal cases that take years to process aren’t filed for lack of funding, as human rights abuses aren’t documented, as the attacks from authoritarian regimes go unchecked. Shrinking development budgets will no longer provide similar levels of support to the courts and anti-corruption bodies that human rights defenders have traditionally approached to pursue justice, or to the hotlines where ordinary people can call in anonymously to report abuses at the hands of security forces. Such foreign assistance enabled vital avenues of accountability, but it also signified solidarity, showing that at least some political decisionmakers both at home and abroad believed in human rights and supported those working to deepen and protect them.

But despite the myriad challenges, there will be human rights defenders who continue to fight the fight. For many, changes in funding or the withdrawal of political top-cover won’t stop them from finding avenues. One need only look at Iran’s protests today, where thousands of people are exercising and demanding their human rights amidst a brutal crackdown, internet blackout, and without international funding. Rights defenders have been doing a lot with a little for many years. Some — especially women, youth, Indigenous people, and disabled defenders — have often been excluded from human rights funding and support in the past. A new generation has seen the horrors of Gaza, El-Fasher, eastern Ukraine, or even around the corner from their home, in the news and online, and they have committed themselves to social justice and the prevention of atrocities.

Human rights has always been a universal endeavor which has required diverse supporters, advocates, and allies – this is true now more than ever.

How Can the International Community Support? 

Even those governments and institutions that continue to lead in supporting human rights internationally will need to do more with less, as the above-outlined cuts exemplify, to support those on the front lines. This is the chance to shift “localization”—the practice of funding local civil society organizations directly and based on their priorities, rather than via large overhead-requiring NGOs funded by donor countries—from an ideal to a necessary strategy. A grant of $20,000 may not keep a major international organization online, but it can fund a community-based service provider. Donors can integrate a rights-based approach across portfolios instead of siloing the issue, weaving human rights goals and strategies into other foreign policy initiatives. For example, companies can integrate human rights efforts and measurements into their supply chains for products from batteries to chocolate, making the products they would produce anyway but in a way that advances human rights as well. Military operations can add human rights and gender considerations with little cost but potentially huge impact. This requires training, tools, and high-level political will to succeed. And donors can continue to advocate for rights and use diplomatic pressure and support as key tools.

The elephant in the room is the United States. The Trump administration is not only backtracking on the traditional U.S. commitment to democracy and human rights, internally and internationally, but also has sought to hamper others in funding such initiatives. But there are still important steps that can be taken to protect human rights. Congress must do its job and provide oversight, holding the administration accountable to the laws that protect this important work. Members should speak out against injustices and rights violations, at home and abroad. Rep. Chris Smith (R-NJ), for example, has played a key role in the Tom Lantos Human Rights Commission, calling out rights abuses in places like Turkey, and Rep. Tim Kennedy (D-NY) led a congressional letter to the Department of Homeland Security urging the Trump administration to overturn its decision to terminate Temporary Protected Status for Burmese people. State governments have always played a key role in advancing rights, and this will become more critical than ever.

Foreign governments that have engaged on human rights issues but haven’t been the largest international donors or advocates will be particularly important. Some of them are stepping up already. Examples include Japan playing a leading role in advancing women’s issues, South Africa and Gambia taking cases to the International Court of Justice accusing Israel and Myanmar, respectively, of violating the Genocide Convention, and Ireland continuing its steadfast allyship with human rights defenders.

Now is the time for committed countries around the world to continue to demonstrate the global nature of this agenda, set out more than 75 years ago in the Universal Declaration of Human Rights and reinvigorated by 18 international human rights treaties.

Philanthropy and the international private sector will be more essential than ever in 2026. Foundations cannot offset the huge funding gaps left by governments and multilateral donors — total U.S. philanthropic giving is about $6 billion per year, whereas U.S. overseas development assistance alone in 2023 accounted for $223 billion — but they can provide strategic investments that help protect rights and those defending them, amplify their voices, fund innovative new approaches, and help the ecosystem survive. Philanthropies around the world provided nearly $5 billion in human rights support globally in 2020 alone, and their funding is critical for many organizations.

Companies have their own role to play, one that includes but goes well beyond corporate social responsibility, from responsible tech and AI to eliminating forced labor from supply chains to hiring diverse employees. The private sector has a unique opportunity to ensure that human rights remain on the global agenda, because there is a strong business case in favor of human rights protections and alliances with those who truly understand the needs and wants of local populations. A great example is the effort by numerous auto and electronics companies to move away from cobalt batteries, both a recognition of the horrible rights violations facing individuals and communities around cobalt mines in the Democratic Republic of Congo and a recognition that this move is also better for business due to supply chain volatility.

Defending against challenges to human rights, democracy, and good governance in 2026 and beyond will require creativity and broad coalition-building across sectors that too often are siloed, such as health, peacebuilding, humanitarian assistance, and the field of democracy, human rights, and governance. Everyone who does not traditionally think of themselves as a human rights defender, from government officials to the private sector, will need to step up to support those on the frontlines of the fight to defend human rights.

The Era of AI-Orchestrated Hacking Has Begun: Here’s How the United States Should Respond
https://www.justsecurity.org/127053/era-ai-orchestrated-hacking/
Tue, 06 Jan 2026

Policymakers and industry must ensure that organizations have access to fit-for-purpose cyber defenses and take steps to manage the proliferation of AI capabilities.

On Nov. 13, Anthropic announced it had disrupted the “first AI-orchestrated cyber espionage campaign,” conducted by Chinese cyber actors using its agentic Claude Code model. Discussed in depth at a congressional hearing on Dec. 17, the operation represents a major escalation from previous malicious uses of AI to generate malware or improve phishing emails, ushering in an era of high-speed and high-volume hacking.

For years, experts have warned that agentic AI would allow even unsophisticated nation-states and criminals to launch autonomous cyber operations at a speed and scale previously unseen. With that future now in reach, policymakers and industry leaders must follow a two-pronged strategy: ensuring that organizations have access to fit-for-purpose cyber defenses and managing the proliferation of AI capabilities that will allow even more powerful cyber operations in the future. Both steps are important not only to safeguard U.S. networks, but also to solidify U.S. technical leadership over competitors such as China.

How the Cyber Campaign Worked

In a detailed report, Anthropic assessed with high confidence that a Chinese state-sponsored group designated as GTG-1002 used its Claude Code model to coordinate multi-staged cyber operations against approximately 30 high-value targets, including technology companies, financial institutions and government agencies. The campaign produced “a handful of successful intrusions.” The hackers circumvented safety features in the model, breaking the workflow into discrete tasks and tricking Claude into believing it was helping fix cybersecurity vulnerabilities in targeted systems.

Humans provided supervision and built a framework that allowed Claude to use open-source hacking tools to conduct the operations. But Claude “executed approximately 80 to 90 percent of all tactical work independently” — from initial reconnaissance and vulnerability identification to gaining access to targeted systems, removing data, and assessing its value. Automation allowed GTG-1002 actors to achieve an operational tempo impossible for human operators; its “peak activity included thousands of requests, representing sustained request rates of multiple operations per second.”

Some outside researchers have questioned the effectiveness of this campaign, pointing out that Claude hallucinated about data and credentials it claimed to have taken. Some also noted the low quality of AI-generated malware. But this is only the beginning. As AI models become more powerful and ubiquitous, the techniques this campaign demonstrated will only grow more sophisticated and accessible. The question is who adopts them next and how quickly.

AI is Empowering U.S. Adversaries

Anthropic’s attribution of this campaign to Chinese state-sponsored actors grabbed headlines at a time of rising geopolitical tensions and high-profile Chinese cyber operations targeting U.S. telecommunications networks and critical infrastructure.

China has a large ecosystem of state-affiliated hacker groups that operate at scale. These groups function essentially as businesses, broadly targeting organizations in the United States and other countries and then selling stolen information to government and commercial customers. GTG-1002’s approach — targeting 30 organizations, gaining access and exfiltrating data where possible — fits this model perfectly. For a high-scale hacking enterprise, using AI automation to increase efficiency is a natural evolution. It is what every business is trying to do right now.

At the same time, the campaign relied on open-source, relatively unsophisticated hacking tools. Any resourceful adversary — Russian cybercriminals, North Korean cryptocurrency thieves, Iranian hackers — could conduct similar campaigns using advanced AI models. Many of them probably are right now. What was novel was the operational tempo — Claude Code executed reconnaissance, exploitation, and data analysis at a pace no human team could match.

The key takeaway is that adversaries everywhere now have the ability to conduct high-speed, high-volume hacks. Unfortunately, cyber defenders are not prepared to meet this challenge.

AI and the Cyber Offense-Defense Balance 

Cybersecurity has long been a competition between offense and defense, with the offense having the edge thanks to the large attack surfaces produced by modern networks. While defenders must work to patch all vulnerabilities to keep the hackers out, the offense just needs to locate one entry point to compromise the defenders’ systems. Cybersecurity experts are concerned that AI-enabled automated operations, like the one uncovered by Anthropic, will further tip the balance by increasing the speed, scale, and persistence of hacks.

At the same time, AI holds the potential to address many long-standing cybersecurity challenges. AI-enabled testing can help software developers and infrastructure owners remediate vulnerabilities before they are exploited. Managed detection and response companies have touted their use of AI to reduce incident investigation time from hours to minutes, allowing them to disrupt ongoing operations and free up human analysts for more complex tasks. When layered and done right, these solutions can give defenders a fighting chance at keeping up with the new speed and scale of offense — but only if they are widely adopted.

For years, criminals have targeted “cyber poor” small businesses, local hospitals, and schools because they are less able to purchase state-of-the-art defenses to keep hackers out and less able to resist ransom demands when criminals get in. To avoid being overwhelmed by the new pace of AI-driven hacking, these organizations will need to adopt newer, high-speed defensive tools. Increased automation will make these tools cheaper and more accessible to those with limited cyber defenses. But it is hard to imagine how this will happen domestically without more funding and targeted efforts to raise cybersecurity standards in key critical infrastructure sectors — at a time when the Trump administration is cutting back on U.S. cyber investments.

The same resource divide exists internationally, where middle- and lower-income countries are at risk of crippling cyber incidents because they lack resources for basic defenses. It will take concerted international engagement and capacity building to ensure countries can keep pace with new threats, but it is in the United States’ interest to help them do so. As the United States and China compete to promote global adoption of their technology ecosystems, developing countries in particular are looking for solutions across the full technology stack. AI-enabled cyber defenses — offered individually or baked into other services — can strengthen the United States’ appeal as a technology partner.

When AI Competition Meets Proliferation Risks

In addition to strengthening cyber defenses, it is also important for policymakers and industry leaders to reduce the risk that AI systems will be exploited to orchestrate cyber operations in the first place. GTG-1002’s activities were discovered and stopped only because the hackers used a proprietary model; Anthropic had visibility into the group’s activities and could cut them off once they were detected.

The good news is that companies like Anthropic, OpenAI, and Google can learn from malicious use of their models and build in stronger capabilities to detect and block future incidents. Anthropic’s transparency in the GTG-1002 case helps build muscle memory so that companies can work together to prevent similar incidents in the future (though some experts argue Anthropic could have gone further in explaining how the operation worked and sharing actionable details, like sample prompts). The bad news is that as open-source models like China’s DeepSeek improve, malign actors will not need to rely on proprietary models. They will turn to open-source models that operate with limited or no oversight.

This is a place where tensions between U.S.-China AI competition and cybersecurity meet. Both countries are competing across multiple dimensions to become the world’s AI leader. U.S. companies — including Google, Microsoft, OpenAI, and Anthropic — have the edge when it comes to the raw capability of their proprietary models. Chinese AI companies (and some U.S. ones, too) have pressed ahead with the development of lower cost, open-source models that are more easily accessible to users in developing countries in particular.

The economic, political, and national security stakes for this competition are enormous.  To ensure the United States maintains a competitive advantage, the Trump administration has sought to reduce AI safety requirements. But if this campaign is a sign of what is to come, both the United States and China should have an interest in preventing the models their companies create from being exploited by criminals, terrorists, and other rogue actors to cause harm within their territories.

The Trump administration’s AI Action Plan calls for more evaluation of national security risks, including cyber risks in frontier models. The question is what additional safeguards need to be put in place to reduce this risk, which incentives are needed, and how to build consensus on such standards internationally.

What Must Be Done Now

It is impossible to stop every AI-driven campaign. But policymakers and industry leaders can still strengthen cyber defenses to mitigate risk. This requires incentivizing development of AI applications that enable secure software development, improved penetration testing, faster threat detection, and more efficient incident response and recovery. Funding and concerted engagement by government and private cybersecurity experts will be needed to support adoption among cyber-poor providers of critical services, like hospitals and schools.

It also requires strengthening safeguards to make it harder for bad actors to weaponize easily accessible AI models. Ideally, the United States would do this in parallel with China imposing stronger safeguards on its own models. (Otherwise, the administration’s recent decision to sell more powerful chips to China will allow China to produce more unsafe models, and faster.)

Regardless, the United States must continue efforts within its own AI safety community to identify and mitigate misuse of U.S. models. Transparency about incidents like this one is a good place to start. But to stay ahead of the threat, companies and researchers should be further encouraged to share information about risks, improve testing standards, and develop mitigations when bad actors circumvent safeguards.

Trump’s Chip Strategy Needs Recalibration https://www.justsecurity.org/127032/trump-china-chip-strategy-recalibration/?utm_source=rss&utm_medium=rss&utm_campaign=trump-china-chip-strategy-recalibration Mon, 15 Dec 2025 13:50:59 +0000 https://www.justsecurity.org/?p=127032 Facing the challenge from China, U.S. technological leadership in the century ahead requires a focused and disciplined strategy coordinated with allies.

President Donald Trump emerged from his recent meeting with Xi Jinping in Busan, South Korea, saying that, on a scale of 1 to 10, “I would say the meeting was a 12.” But behind the hyperbole, the meeting revealed a stark reality: America is negotiating from a position of eroding strength in the technologies that will define 21st century power. Indeed, buried in the president’s comments was a troubling signal: semiconductor policy is now on the negotiating table, with Trump suggesting the American chipmaker Nvidia will talk directly to Chinese officials while Washington plays “referee.” That has now apparently resulted in White House endorsement of a “compromise” that will allow Nvidia’s advanced H200 chip to be exported to China.

To succeed in geo-economic competition with China, U.S. policy should seek to preserve asymmetric advantages by maintaining China’s reliance on U.S. products and technologies, while controlling access to the essential capabilities that secure America’s national security and economic edge. The challenge currently is that China isn’t playing Trump’s transactional game. While the Trump administration celebrates short-term deals, Beijing is executing a multi-decade strategy to dominate the semiconductor supply chain from raw materials to finished chips. Tufts University Professor Chris Miller has estimated that China has invested the equivalent of the entire U.S. CHIPS Act virtually every year since Xi made domestic semiconductor manufacturing a priority in 2014. The CHIPS Act allocated some $52.7 billion ($39 billion for manufacturing grants) for domestic semiconductor manufacturing. It was signed into law by President Joe Biden in 2022 but has been abandoned by the current administration.

When Trump decided to continue the Biden administration’s export controls on advanced chips earlier this year, Xi didn’t blink. Instead, he doubled down on the indigenous innovation programs that have allowed China to achieve breakthroughs that many in the United States thought were only possible in some distant future, such as 7-nanometer chips, considered the “entry point” for competitive AI, and carbon nanotube processors that could leapfrog silicon entirely, outperforming in scale, speed and efficiency.

Worse yet, this strategic drift is coming precisely when the U.S. semiconductor advantage faces threats from multiple directions: China’s accelerating innovation, allies’ frustration with inconsistent U.S. policies, and self-inflicted wounds from using the CHIPS Act as leverage for the government to muscle in and take positions in private corporations.

The Worst of Both Worlds

The current U.S. government approach to semiconductor export controls embodies the worst of both worlds: undermining American companies’ competitiveness while failing to meaningfully slow China’s progress. Nvidia and AMD face billions in losses from restricted China sales, revenue that funds the research and development needed to keep the United States ahead. Meanwhile, Chinese firms smuggle restricted chips through shell companies, rent computing power from cloud providers, and innovate to get around U.S. restrictions with impressive speed. In January 2025, DeepSeek, a Chinese AI startup, released R1, an open-source reasoning model roughly matching the capabilities of advanced models from OpenAI, Google, Meta, and Anthropic. The breakthrough demonstrated that Chinese companies can build efficient AI models trained on fewer chips than American labs typically use, and that hardware constraints can drive software innovation in ways U.S. tech companies have yet to match.

Meanwhile, U.S. policy has devolved into threats to withhold already-promised CHIPS Act grants to American companies, efforts to change deal terms after capital and investments were already committed, and suggestions that the government take an equity stake in companies or convert grants into government ownership. The wildly oscillating approach to export controls creates uncertainty not just for U.S. adversaries, but for American companies, which require predictability to plan and implement multi-billion-dollar, multi-year investments.

The solution requires a more sophisticated approach, exercising the tools the government has at its disposal to both work with and leverage the private sector to assure that the United States will maintain its technological lead for generations yet to come. Trump’s instinct to use semiconductor access as leverage isn’t wrong; it’s that the execution and approach thus far undermine the desired goal. Treating advanced chips as tradeable commodities that the United States can turn on and off based on soybean purchases or other short-term transactions fundamentally misunderstands what’s at stake. These are the chips that power artificial intelligence, quantum computing, and autonomous weapons systems, and it’s not an overstatement that the country that leads the world in AI will also have significant economic, military, and strategic advantages for generations to come.

A More Effective Strategy

A pragmatic, realistic approach would include:

First, control the tools, not just the products. As one analysis put it, “It’s much easier to control the fishing rod than the fish.” Semiconductor manufacturing equipment – the massive chip-making machines from companies like Netherlands-based ASML – represents the real chokepoint. These tools are produced by a handful of firms in countries that are U.S. allies; unlike the chips themselves, they are impossible to smuggle; and they’re what China desperately needs to achieve true independence. By contrast, controlling finished chips forces constant policy updates as companies design around each new restriction.

Moreover, constantly shifting rules breed regulatory uncertainty that pushes customers toward Chinese alternatives. Nvidia’s H20 chip was designed specifically to comply with export restrictions, only to be banned in April 2025, then unbanned months later with a 15 percent revenue-sharing requirement – and just last week the administration lifted restrictions on the even more advanced H200. Customers that rely on a predictable supply of chips, many of them in China, are watching this whiplash and now have every incentive to develop relationships with domestic Chinese suppliers like Huawei, whose supply won’t disappear based on Washington’s latest dealmaking.

The U.S. government needs to focus restrictions on where enforcement works and where partnership is deepest with other nations that share concerns about the malign effect of tech competition with China. Yes, some advanced chips should remain restricted for military applications, but the current approach of banning compliance-engineered chips goes beyond what is needed for security and may be actively counterproductive if the result is to drive Chinese innovation and independent manufacturing capability. U.S. policy should be torqued to restrict where it must but also to let American companies sell more chips where they don’t compromise fundamental advantages.

Second, make export controls a force multiplier, not a substitute, for American competitiveness. The CHIPS Act has spurred investments of tens of billions of dollars in domestic semiconductor capacity — investments that the “adjustments” introduced by the Trump administration threaten to undo. While there are certainly adjustments that can provide additional leverage, Congress and the administration need to refocus on accelerating and expanding investments. U.S. semiconductor companies also need to be able to generate revenues to fund next-generation research and development. That means expanding their access to markets in Japan, South Korea, Taiwan, and Europe to offset losses in China. U.S. companies need economies of scale to compete. While exports to certain markets and countries certainly require additional scrutiny, especially those with deep or longstanding economic ties with China, cutting U.S. companies off from the world’s largest chip market without providing alternatives is strategic suicide.

Third, recognize that success requires leveraging U.S. alliances fully. The U.S. semiconductor advantage depends entirely on continuing cooperation with technologically advanced and like-minded countries that help support America’s competitive advantage in chips: Dutch lithography (to print the circuit patterns on the silicon at micro/nano scale), Japanese materials, South Korean memory, Taiwanese manufacturing. But the Trump administration’s single-minded pursuit of unilateral control, viewing partnerships as dependency and weakness when in fact they are a strategic asset, has both frustrated allies and left enforcement gaps that China can exploit. The United States benefits from multilateral arrangements that share the economic pain and the enforcement burden with partners while allowing all sides to reap the strategic upside. The answer isn’t “join us or else.” U.S. economic statecraft and commercial diplomacy should be leveraged to streamline allied export controls, expand allies’ access to CHIPS Act research, and jointly develop next-generation technologies, anchored in treaty-level commitments rather than ad hoc coordination that shifts with each presidential tweet.

Fourth, accept that some technologies are too foundational for national security to treat them as just another trade variable. The most dangerous signal from the Trump-Xi meeting in Busan was the implication that advanced semiconductor access is negotiable based on agricultural purchases or fentanyl or other matters. While Xi surely suspected it before, he left Busan knowing that security-driven export controls and America’s technological edge are up for negotiation, if the price is right.

Finally, recognize that America’s technological leadership requires protecting the entire innovation ecosystem, not just products. Indeed, America’s advantage has long been in the complete innovation system — universities, capital markets, design tools, equipment expertise, and global talent. Government actions that hollow out or wreck this ecosystem, in whole or in part, create the risk of technological collapse. For the private sector, lost revenues mean cuts in R&D budgets, laying off engineers, and ceding market share to foreign competitors that face fewer restrictions. Export controls should protect core capabilities while enabling companies to maintain the scale and revenues that, in turn, fund continued innovation.

Not Too Late

The Busan meeting bought some temporary calm on rare earths and tariffs. But Xi achieved his objectives: he got semiconductor restrictions on the negotiating table, maintained his option to reimpose rare earth controls in a year, and watched the Trump administration signal that advanced chip access is negotiable rather than strategic — a bet that appears to have now paid off with the H200 decision. Thus far, Xi is thinking in five-year plans while Trump is thinking in tweets.

But it’s not too late. For all its other failings — and they are many — the Trump administration’s new 2025 National Security Strategy is right in maintaining that American leadership in advanced technology lies at the very heart of national security for the 21st century. And nearly at this Trump administration’s one-year mark, it’s likewise clear that if its ambitions for maintaining economic, commercial, and technological advantages are to be fulfilled, it needs to get much smarter about how it seeks to pursue its strategy. That means predictable investments in American research and manufacturing; targeting controls on manufacturing equipment where enforcement works and leverage with allies and partners is greatest; supporting semiconductor ecosystems in friendly nations rather than alienating them through policy chaos; and recognizing that foundational technologies should not be traded away for a hill of soybeans or self-interested revenue-sharing deals.

American technological leadership in the century ahead requires a strategy that is ruthlessly focused, multilaterally coordinated, and strategically disciplined. We know the answer. The question is whether this administration can summon the discipline to execute it.

Just Security’s Artificial Intelligence Archive https://www.justsecurity.org/99958/just-securitys-artificial-intelligence-archive/?utm_source=rss&utm_medium=rss&utm_campaign=just-securitys-artificial-intelligence-archive Mon, 15 Dec 2025 12:00:45 +0000 https://www.justsecurity.org/?p=99958 Just Security's collection of articles analyzing the implications of AI for society, democracy, human rights, and warfare.

Since 2020, Just Security has been at the forefront of analysis on rapid shifts in AI-enabled technologies, providing expert commentary on risks, opportunities, and proposed governance mechanisms. The catalog below organizes our collection of articles on artificial intelligence into general categories to facilitate access to relevant topics for policymakers, academic experts, industry leaders, and the general public. The archive will be updated as new articles are published. The archive also is available in reverse chronological order at the artificial intelligence articles page.

AI Governance

Trump’s Chip Strategy Needs Recalibration
By Michael Schiffer (December 15, 2025)

AI Model Outputs Demand the Attention of Export Control Agencies
By Joe Khawam and Tim Schnabel (December 12, 2025)

Governing AI Agents Globally: The Role of International Law, Norms and Accountability Mechanisms
By Talita Dias (October 17, 2025)

Dueling Strategies for Global AI Leadership? What the U.S. and China Action Plans Reveal
By Zena Assaad (September 4, 2025)

Selling AI Chips Won’t Keep China Hooked on U.S. Technology
By Janet Egan (September 3, 2025)

The AI Action Plans: How Similar are the U.S. and Chinese Playbooks?
By Scott Singer and Pavlo Zvenyhorodskyi (August 26, 2025)

Assessing the Trump Administration’s AI Action Plan
By Sam Winter-Levy (July 25, 2025)

Decoding Trump’s AI Playbook: The AI Action Plan and What Comes Next
Brianna Rosen interview with Joshua Geltzer, Jenny Marron and Sam Winter-Levy (July 24, 2025)

Rethinking the Global AI Race
By Lt. Gen. (ret.) John (Jack) N.T. Shanahan and Kevin Frazier (July 21, 2025)

The Trump Administration’s AI Action Plan Is Coming. Here’s What to Look For.
By Joshua Geltzer (July 18, 2025)

AI Copyright Wars Threaten U.S. Technological Primacy in the Face of Rising Chinese Competition
By Bill Drexel (July 8, 2025)

What Comes Next After Trump’s AI Deals in the Gulf
By Alasdair Phillips-Robins and Sam Winter-Levy (June 4, 2025)

AI Governance Needs Federalism, Not a Federally Imposed Moratorium
By David S. Rubenstein (May 29, 2025)

Open Questions for China’s Open-Source AI Regulation
By Nanda Min Htin (May 5, 2025)

The Just Security Podcast: Trump’s AI Strategy Takes Shape
Brianna Rosen interview with Joshua Geltzer (April 17, 2025)

Shaping the AI Action Plan: Responses to the White House’s Request for Information
By Clara Apt and Brianna Rosen (March 18, 2025)

Export Controls on Open-Source Models Will Not Win the AI Race
By Claudia Wilson and Emmie Hine (February 25, 2025)

The Just Security Podcast: Key Takeaways from the Paris AI Action Summit
Paras Shah interview with Brianna Rosen (February 12, 2025)

The Just Security Podcast: Diving Deeper into DeepSeek
Brianna Rosen interview with Lennart Heim, Keegan McBride and Lauren Wagner (February 4, 2025)

What DeepSeek Really Changes About AI Competition
By Konstantin F. Pilz and Lennart Heim (February 3, 2025)

Throwing Caution to the Wind: Unpacking the U.K. AI Opportunities Action Plan
By Elke Schwarz (January 30, 2025)

What Just Happened: Trump’s Announcement of the Stargate AI Infrastructure Project
By Justin Hendrix (January 22, 2025)

The Future of the AI Diffusion Framework
By Sam Winter-Levy (January 21, 2025)

Unpacking the Biden Administration’s Executive Order on AI Infrastructure
By Clara Apt and Brianna Rosen (January 16, 2025)

Trump’s Balancing Act with China on Frontier AI Policy
By Scott Singer (December 23, 2024)

The AI Presidency: What “America First” Means for Global AI Governance
By Brianna Rosen (December 16, 2024)

The United States Must Win The Global Open Source AI Race
By Keegan McBride and Dean W. Ball (November 7, 2024)

AI at UNGA79: Recapping Key Themes
By Clara Apt (October 1, 2024)

Rethinking Responsible Use of Military AI: From Principles to Practice
By Brianna Rosen and Tess Bridgeman (September 26, 2024)

Competition, Not Control, is Key to Winning the Global AI Race
By Matthew Mittelsteadt and Keegan McBride (September 17, 2024)

The Just Security Podcast: Strategic Risks of AI and Recapping the 2024 REAIM Summit
Paras Shah interview with Brianna Rosen (September 12, 2024)

Putting the Second REAIM Summit into Context
By Tobias Vestner and Simon Cleobury (September 5, 2024)

The Nuts and Bolts of Enforcing AI Guardrails
By Amos Toh and Ivey Dyson (May 30, 2024)

House Meeting on White House AI Overreach Highlights Congressional Inaction
By Melanie Geller and Julian Melendi (April 12, 2024)

Why We Need a National Data Protection Strategy
By Alex Joel (April 4, 2024)

Is the Biden Administration Reaching a New Consensus on What Constitutes Private Information
By Justin Hendrix (March 19, 2024)

The Just Security Podcast: How Should the World Regulate Artificial Intelligence?
Paras Shah and Brianna Rosen interview with Robert Trager (February 2, 2024)

It’s Not Just Technology: What it Means to be a Global Leader in AI
By Kayla Blomquist and Keegan McBride (January 4, 2024)

AI Governance in the Age of Uncertainty: International Law as a Starting Point
By Talita de Souza Dias and Rashmin Sagoo (January 2, 2024)

Experts React: Unpacking the Biden Administration’s New Efforts on AI
By Ian Miller (November 14, 2023)

Biden’s Executive Order on AI Gives Sweeping Mandate to DHS
By Justin Hendrix (November 1, 2023)

The Tragedy of AI Governance
By Simon Chesterman (October 18, 2023)

Introducing the Symposium on AI Governance: Power, Justice, and the Limits of the Law
By Brianna Rosen (October 18, 2023)

U.S. Senate AI Hearings Highlight Increased Need for Regulation
By Faiza Patel and Melanie Geller (September 25, 2023)

The Perils and Promise of AI Regulation
By Faiza Patel and Ivey Dyson (July 26, 2023)

Weighing the Risks: Why a New Conversation is Needed on AI Safety
By Michael Depp (June 30, 2023)

To Legislate on AI, Schumer Should Start with the Basics
By Justin Hendrix and Paul M. Barrett (June 28, 2023)

Regulating Artificial Intelligence Requires Balancing Rights, Innovation
By Bishop Garrison (January 11, 2023)

Emerging Tech Has a Front-Row Seat at India-Hosted UN Counterterrorism Meeting. What About Human Rights?
By Marlena Wisniak (October 28, 2022)

NATO Must Tackle Digital Authoritarianism
By Michèle Flournoy and Anshu Roy (June 29, 2022)

NATO’s 2022 Strategic Concept Must Enhance Digital Access and Capacities
By Chris Dolan (June 8, 2022)

Watchlisting the World: Digital Security Infrastructures, Informal Law, and the “Global War on Terror”
By Ramzi Kassem, Rebecca Mignot-Mahdavi and Gavin Sullivan (October 28, 2021)

One Thousand and One Talents: The Race for A.I. Dominance
By Lucas Irwin (April 7, 2021)

National Security & War

Embedded Human Judgment in the Age of Autonomous Weapons
By Lena Trabucco (October 16, 2025)

AI’s Hidden National Security Cost
By Caroline Baxter (October 1, 2025)

Harnessing the Transformative Potential of AI in Intelligence Analysis
By Rachel Bombach (August 12, 2025)

The Law Already Supports AI in Government — RAG Shows the Way
By Tal Feldman (May 16, 2025)

The United States Must Avoid AI’s Chernobyl Moment
By Janet Egan and Cole Salvador (March 10, 2025)

A Start for AI Transparency at DHS with Room to Grow
By Rachel Levinson-Waldman and Spencer Reynolds (January 22, 2025)

The U.S. National Security Memorandum on AI: Leading Experts Weigh In
By Just Security (October 25, 2024)

The Double Black Box: AI Inside the National Security Ecosystem
By Ashley Deeks (August 14, 2024)

As DHS Implements New AI Technologies, It Must Overcome Old Shortcomings
By Spencer Reynolds and Faiza Patel (May 21, 2024)

The Machine Got it Wrong? Uncertainties, Assumptions, and Biases in Military AI
By Arthur Holland Michel (May 13, 2024)

Bringing Transparency to National Security Uses of Artificial Intelligence
By Faiza Patel and Patrick C. Toomey (April 4, 2024)

An Oversight Model for AI in National Security: The Privacy and Civil Liberties Oversight Board
By Faiza Patel and Patrick C. Toomey (April 26, 2024)

National Security Carve-Outs Undermine AI Regulations
By Faiza Patel and Patrick C. Toomey (December 21, 2023)

Unhuman Killings: AI and Civilian Harm in Gaza
By Brianna Rosen (December 15, 2023)

DHS Must Evaluate and Overhaul its Flawed Automated Systems
By Rachel Levinson-Waldman and José Guillermo Gutiérrez (October 19, 2023)

The Path to War is Paved with Obscure Intentions: Signaling and Perception in the Era of AI
By Gavin Wilde (October 20, 2023)

AI and the Future of Drone Warfare: Risks and Recommendations
By Brianna Rosen (October 3, 2023)

Latin America and Caribbean Nations Rally Against Autonomous Weapons Systems
By Bonnie Docherty and Mary Wareham (March 6, 2023)

Investigating (Mis)conduct in War is Already Difficult
By Laura Brunn (January 5, 2023)

Gendering the Legal Review of New Means and Methods of Warfare
By Andrea Farrés Jiménez (August 23, 2022)

Artificial Intelligence in the Intelligence Community: Oversight Must Not Be an Oversight
By Corin R. Stone (November 30, 2021)

Artificial Intelligence in the Intelligence Community: Know Risk, Know Reward
By Corin R. Stone (October 19, 2021)

Artificial Intelligence in the Intelligence Community: The Tangled Web of Budget & Acquisition
By Corin R. Stone (September 28, 2021)

Embedding Gender in International Humanitarian Law: Is Artificial Intelligence Up to the Task?
By Andrea Farrés Jiménez (August 27, 2021)

Artificial Intelligence in the Intelligence Community: Culture is Critical
By Corin R. Stone (August 17, 2021)

Artificial Intelligence in the Intelligence Community: Money is Not Enough
By Corin R. Stone (July 12, 2021)

Adding AI to Autonomous Weapons Increases Risks to Civilians in Armed Conflict
By Neil Davison and Jonathan Horowitz (March 26, 2021)

Democracy

The AI Action Plan and Federalism: A Constitutional Analysis
By David S. Rubenstein (July 30, 2025)

U.S. AI-Driven “Catch and Revoke” Initiative Threatens First Amendment Rights
By Faiza Patel (March 18, 2025)

The Munich Security Conference Provides an Opportunity to Improve on the AI Elections Accord
By Alexandra Reeve Givens (February 13, 2025)

Q&A with Marietje Schaake on the Tech Coup and Trump
By Marietje Schaake (February 6, 2025)

Maintaining the Rule of Law in the Age of AI
By Katie Szilagyi (October 9, 2024)

Shattering Illusions: How Cyber Threat Intelligence Augments Legal Action against Russia’s Influence Operations
By Mason W. Krusch (October 8, 2024)

Don’t Downplay Risks of AI for Democracy
By Suzanne Nossel (August 28, 2024)

Tracking Tech Company Commitments to Combat the Misuse of AI in Elections
By Allison Mollenkamp and Clara Apt (March 28, 2024)

Multiple Threats Converge to Heighten Disinformation Risks to This Year’s US Elections
By Lawrence Norden, Mekela Panditharatne and David Harris (February 16, 2024)

Is AI the Right Sword for Democracy?
By Arthur Holland Michel (November 13, 2023)

The Just Security Podcast: The Dangers of Using AI to Ban Books
Paras Shah interview with Emile Ayoub (October 27, 2023)

Process Rights and the Automation of Public Services through AI: The Case of the Liberal State
By John Zerilli (October 26, 2023)

Using AI to Comply With Book Bans Makes Those Laws More Dangerous
By Emile Ayoub and Faiza Patel (October 3, 2023)

Regulation is Not Enough: A Blueprint for Winning the AI Race
By Keegan McBride (June 29, 2023)

The Existential Threat of AI-Enhanced Disinformation Operations
By Bradley Honigberg (July 8, 2022)

System Rivalry: How Democracies Must Compete with Digital Authoritarians
By Ambassador (ret.) Eileen Donahoe (September 27, 2021)

Surveillance
Social Media & Content Moderation
Further Reading

AI Model Outputs Demand the Attention of Export Control Agencies https://www.justsecurity.org/126643/ai-model-outputs-export-control/?utm_source=rss&utm_medium=rss&utm_campaign=ai-model-outputs-export-control Fri, 12 Dec 2025 13:45:19 +0000 https://www.justsecurity.org/?p=126643 The conversation about AI and national security must expand beyond semiconductors and model weights to encompass the outputs those technologies enable.

When policymakers discuss artificial intelligence and export controls, the conversation typically centers on advanced semiconductors or AI model weights—the mathematical parameters that govern how the AI model processes information. Both the Biden and Trump administrations have restricted AI chip exports to China and other countries of concern, and the Biden administration’s January 2025 Diffusion Rule proposed extending controls to AI model weights. But these debates obscure another consequential challenge that has gone largely unaddressed: the application of export controls to AI model outputs—the specific text, code, or other responses that users elicit from the system.

Model weights and model outputs present fundamentally different challenges. Possession of the weights allows an adversary to deploy models without restrictions, modify them for malicious purposes, or study them to develop competing systems. But a foreign adversary doesn’t need to obtain model weights to benefit from a model’s capabilities; access to a publicly deployed model’s API or web interface may suffice to elicit controlled information. For instance, a user in a restricted destination could try to exploit a U.S. model to generate code for a missile guidance system or schematics for an advanced radar component. Model outputs thus represent a distinct national security challenge that persists regardless of whether any restrictions are placed on model weights.

Frontier models today can likely generate technical information controlled under the International Traffic in Arms Regulations (ITAR), which restrict defense-related technical data, and the Export Administration Regulations (EAR), which control dual-use technology. Yet these frameworks, designed for discrete transfers of static information between known parties, are ill-suited to govern AI systems that generate unlimited, dynamic outputs on demand for potentially anonymous users worldwide.

The agencies responsible for enforcing these controls—the State Department’s Directorate of Defense Trade Controls (DDTC) for the ITAR and the Commerce Department’s Bureau of Industry and Security (BIS) for the EAR—have yet to address this challenge with authoritative guidance. The result is a policy vacuum that serves neither national security nor economic competitiveness.

Why Export Controls Apply to AI Outputs

Can AI-generated outputs be subject to export controls? The clear answer under existing law is yes. The ITAR’s definition of “technical data” focuses on functional characteristics—information necessary for the design, development, operation, or production of defense articles—without regard to whether that information was produced by a human engineer, photocopied from a blueprint, or synthesized by an AI model. The EAR’s definition of “technology” similarly encompasses information necessary for development, production, or use of controlled items regardless of whether it was created by AI or humans.

This content-focused approach makes strategic sense. A detailed schematic for a missile guidance system would pose the same proliferation risk whether it appears in a leaked document or an AI chat window. The national security harm stems from the information itself, not how it was generated.

Testing by the Law Reform Institute confirms that this isn’t a hypothetical concern. Working with an ITAR expert who previously conducted commodity jurisdiction analyses for DDTC, we assessed whether publicly available frontier models could generate information that would likely qualify as ITAR-controlled technical data. Models from four leading U.S. developers were tested across several categories of defense articles on the ITAR’s U.S. Munitions List. Every tested model produced such information in at least one category. (The examples of defense articles noted elsewhere in this article are purely hypothetical. The particular categories tested by LRI are not being publicly disclosed to avoid providing a roadmap for circumventing ITAR restrictions.)

These tests had limitations—they established capability as a proof of concept rather than comprehensively benchmarking it, and items were not manufactured to verify the accuracy of the model outputs. Nevertheless, the results demonstrate that the problem already exists in a nascent form.

Additionally, the Law Reform Institute’s testing relied exclusively on straightforward queries, forgoing “jailbreaks” or other adversarial techniques used to circumvent any safeguards that may have been designed to prevent the models from assisting with these topics. A more determined adversary would likely extract far more, as the defenses typically built into publicly available models are porous. The National Institute of Standards and Technology (NIST) has warned that AI remains vulnerable to attacks, and researchers have found that professional red-teamers can bypass safety defenses more than 70 percent of the time. As Anthropic CEO Dario Amodei observed in April, the AI industry is in a race between safety and capabilities—one in which capabilities are currently advancing faster. Thus, if current trends continue, the controlled information that can be obtained from frontier models will likely increase in scope, sensitivity, and accuracy. 

The National Security Stakes

What could adversaries gain from ready access to AI-generated controlled information? Future AI models that may be capable of generating detailed technical data and technology—from specifications for advanced radar systems and guidance algorithms for precision munitions to semiconductor fabrication techniques and quantum computing processes—could help adversaries overcome technical barriers in both defense and dual-use technologies.

Perhaps most significantly, as these capabilities mature, an adversary would gain an on-demand technical consultant that can iterate on designs, troubleshoot problems, and provide explanations tailored to specific needs—a capability that poses a unique national security threat. And unlike traditional channels through which controlled information typically travels—traceable shipments, emails, or physical meetings—an adversary prompting a publicly available model leaves minimal independently discoverable forensic evidence.

Who Bears Responsibility for the Export?

One of the most fundamental questions in applying export controls to AI outputs is deceptively simple: who is the “exporter” when a model generates controlled information? This question matters because export control liability attaches to the party responsible for the export.

Under the EAR, the “exporter” is “the person in the United States who has the authority of the principal party in interest to determine and control the sending of items out of the United States.” While the ITAR doesn’t explicitly define “exporter,” the term appears throughout the regulations in contexts assuming the exporter is the person who controls and effectuates the export and is responsible for obtaining authorization.

In traditional scenarios, identifying the exporter is straightforward. When Boeing ships aircraft components to a foreign buyer, Boeing is the exporter. When an engineer emails technical drawings to an overseas facility, the engineer (or their employer) is the exporter.

But AI model outputs scramble this clarity. When a foreign national in China prompts an American AI model to generate controlled technical data, who exported the data? The foreign user can’t be the exporter—that person is the recipient whose access triggers export control requirements. As a practical matter, the most defensible analysis is that the company that developed and deployed the AI system and gave the user access should be considered the exporter—at least for closed-weight models where the developer and deployer are the same entity. (Open-weight models—which allow users to download the full model to modify and run locally—raise distinct issues beyond this article’s scope.)

Such entities have the authority “to determine and control” the export, even if that control is imperfect. As with other software tools, developers and deployers decide whether to implement technical safeguards, screen users, or restrict access to prevent controlled outputs—and are thus uniquely positioned to take actions to mitigate national security harms before making the system accessible. Moreover, under the strict liability standard applied to civil export violations, even a user “tricking” a model via a jailbreak would not automatically absolve the developer of liability for the resulting unauthorized export.

This assessment has profound implications. Absent contrary guidance from DDTC or BIS, AI companies that deploy models capable of generating controlled information likely bear export control compliance responsibility—whether or not they intended their models to have such capabilities, and regardless of how users employ the systems. These companies may therefore already be “exporters” subject to ITAR and EAR requirements.

Why the “Public Domain” and “Published” Exclusions Don’t Always Apply

Both the ITAR and EAR contain exclusions for information that is in the “public domain” or that is “published.” These carve-outs exist because controlling widely available information would be futile and restrict legitimate research and public discourse. Because frontier AI models are generally trained on large datasets that include publicly available data from the internet, many model outputs will often reproduce public information and qualify for these exclusions. At first glance, this might seem to largely resolve the AI outputs problem. But these exclusions don’t always apply to AI-generated outputs for three reasons.

First, frontier AI models can synthesize novel information from disparate sources rather than simply reproducing existing data. They can generate combinations, insights, and emergent knowledge in response to user queries—synthesizing previously dispersed public information into structured guidance, or extrapolating beyond it, to create new controlled information absent from any single training source. As OpenAI’s CEO, Sam Altman, explained, such models function as “a reasoning engine, not a fact database”—they analyze and combine information rather than merely retrieve it. Because they can synthesize information to produce controlled data that never existed in published form, their outputs don’t necessarily constitute “public domain” or “published” data.

Second, the regulatory frameworks impose specific requirements for information to qualify as “published” or “public domain.” The ITAR’s “public domain” designation depends on dissemination through specific enumerated channels, such as sales at bookstores, availability at public libraries, or fundamental research at universities that is ordinarily published. The EAR’s “published” exclusion is broader, encompassing information that is publicly available without restrictions upon its further dissemination, including websites available to the public. Not all training data may meet both standards—and information that qualifies under the EAR’s broader exclusion may still fail to qualify under the ITAR.

Third, AI model outputs don’t automatically qualify as “published” or “public domain” simply because a publicly available model generates them. Both the ITAR standard (“generally accessible or available to the public”) and the EAR standard (public availability “without restrictions upon its further dissemination”) require broad public distribution. When an AI system generates a response to a particular prompt, it creates individualized content for a specific recipient, not publication to an unlimited audience. 

The Core Compliance Problem

These legal complexities culminate in an acute practical challenge. Determining whether information qualifies as ITAR- or EAR-controlled typically requires expert analysis—hours of work parsing technical details against regulatory criteria. The analysis also depends on knowing the recipient’s nationality and location, since export control requirements vary by destination. A transfer to a Canadian citizen in Canada may require no license; the identical transfer to a South African national in the United States may trigger “deemed export” controls; the same transfer to a Russian national in Russia may be prohibited entirely.

An AI model generating responses to prompts lacks reliable access to this critical information. Users can falsify location data and obscure their identity. Even if a model attempted real-time export control classification of its own outputs, it would need to verify information that users have every incentive and ability to misrepresent. And the model would need to determine whether its synthesized output qualifies for the “public domain” or “published” exclusions—an analysis requiring judgment about whether the specific output existed in prior publications or constitutes novel controlled information.

AI developers do of course implement safeguards to prevent harmful outputs—including refusals for dangerous queries and content filtering systems. Whether or not these measures take export control classifications into account, they face fundamental challenges. Current safety systems may block obvious requests for bomb-making instructions, but they may struggle to detect the risk of generating controlled technical data when the request is masked by adversarial prompting or embedded in benign contexts (e.g., coding assistance or creative writing). Furthermore, they lack the technical and legal frameworks to systematically identify and prevent ITAR- or EAR-controlled outputs across all technical domains. Export control determinations require analyzing the intersection of technical specifications, regulatory classifications, recipient characteristics, and public domain status—a level of contextual judgment that current automated systems cannot reliably perform.

AI developers thus face a trilemma. First, they cannot reliably conduct real-time export control determinations. Users can misrepresent critical information, safety filters are imperfect and can be circumvented, and assessing the “public domain” and “published” exclusions requires individualized assessment. Second, they cannot implement blanket restrictions without crippling their models’ utility. And third, they cannot simply deploy models without controls and risk violating export regulations for which they may be held legally responsible.

The scope of the second option—blanket restrictions—reveals why it proves unworkable. Given the breadth of ITAR and EAR controls, which collectively span aerospace, defense, advanced manufacturing, emerging technologies, and dual-use items, comprehensive restrictions would undermine the use of cutting-edge tools for legitimate research, education, and commercial development.

Consider the strategic implications. If U.S. companies deploy frontier models that are hobbled by overbroad restrictions while Chinese labs like DeepSeek and Moonshot operate without comparable export restrictions, American competitiveness suffers without corresponding national security benefit. The EAR recognizes this dynamic in its foreign availability provisions, which allow BIS to remove or modify controls when comparable items are available from foreign sources. But these provisions only apply to specific items assessed case-by-case—they were never designed to address foreign AI models capable of generating new export-controlled information across multiple regulatory classifications.

Managing Deemed Export Risks for Internal Models

While public-facing models present the most visible challenge, AI labs also face deemed export risks from internal model use by employees. As models are being developed, and as they are deployed internally within labs prior to public release, they may lack the safety guardrails eventually built into public versions. Internal models may also be more capable than their public counterparts. If foreign national employees—who represent a substantial portion of the U.S. AI workforce in key technical fields—use these internal systems and elicit ITAR- or EAR-controlled outputs, deemed export violations could occur.

This internal challenge, however, has an established compliance mechanism, even if the technical implementation requires adaptation. AI labs can implement Technology Control Plans (TCPs)—the same framework used successfully across research universities, national laboratories, and the defense industrial base. A robust TCP for AI development would include comprehensive logging of internal model interactions, personnel screening protocols, and information security measures protecting digital access. Additional components would encompass physical security controls, employee training on export controls, and regular compliance audits. These measures, standard in industries handling controlled technology, can substantially reduce deemed export risks without excluding the international talent critical to U.S. AI leadership.
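To illustrate just the logging component of such a plan, the sketch below wraps an internal model call so that the user, their export-control screening status, and the prompt and output (with hashes) are written to an append-only audit file. It is a minimal sketch under assumed names; the field names, file format, and screening flag are hypothetical and are not drawn from ITAR or EAR requirements or from any vendor’s tooling.

```python
# Illustrative sketch only: one way to log internal model interactions for a
# Technology Control Plan audit trail. Field names and the screening flag are
# hypothetical, not drawn from ITAR/EAR text or any particular lab's systems.
import datetime
import hashlib
import json

AUDIT_LOG = "internal_model_audit.jsonl"

def log_interaction(user_id: str, screened: bool, prompt: str, output: str) -> None:
    """Append a structured record of an internal model interaction."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user_id": user_id,
        "export_screening_cleared": screened,  # set by compliance staff, not the user
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
        "prompt": prompt,
        "output": output,
    }
    with open(AUDIT_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

def call_internal_model(user_id: str, screened: bool, prompt: str) -> str:
    """Wrapper around an internal model call (stubbed here) that always logs."""
    output = f"[model response to: {prompt}]"  # stand-in for the real model call
    log_interaction(user_id, screened, prompt, output)
    return output

if __name__ == "__main__":
    call_internal_model("employee-042", screened=True, prompt="Summarize test results.")
```

Paired with personnel screening records and regular audits, a log along these lines gives compliance teams a trail to reconstruct what an internal model disclosed, and to whom, if a suspected deemed export ever needs to be investigated.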

Why Government Engagement Is Essential

The challenges outlined above aren’t problems that AI developers can solve independently. Export controls exist to protect national security interests—a fundamentally governmental function requiring government leadership. Current policy effectively delegates this responsibility to private companies, asking them to navigate—without guidance—through a regulatory regime designed for an entirely different technology paradigm.

This approach carries real risks. Without authoritative guidance, labs face difficult choices between potentially violating export controls or implementing restrictions that degrade model utility. Some may adopt conservative approaches that limit innovation; others may take permissive stances that risk proliferation. This fragmentation serves neither national security nor competitiveness.

The stakes will only rise as capabilities advance. Today’s frontier models represent merely the beginning of what AI systems will be able to generate. And the scenarios explored here—closed-weight models deployed by U.S. developers—represent only one configuration. Open-weight models that allow for independent modification and deployment, U.S. cloud platforms hosting foreign-developed models, and cross-border collaborative development each raise distinct and complex export control questions. As models become more capable, the shortcomings of existing export control frameworks will be magnified absent active government engagement.

As we argue in a recent paper, the U.S. government needs to undertake a serious reassessment of how export controls apply to AI model outputs. The government should take a risk-based approach, focusing regulatory resources on the most security-sensitive domains, rather than attempting comprehensive control across all technical fields. Given that safety filters can be circumvented, a regulatory approach demanding zero-failure compliance for all controlled data is likely unachievable. Instead, compliance expectations must be calibrated—more stringent for the most sensitive technologies, more flexible for broader categories of dual-use items. Additionally, when evaluating whether U.S. models genuinely threaten national security by generating outputs that are currently export controlled, the government needs to account for the “foreign availability” of comparable capabilities in non-U.S. models. Developers should be given incentives to implement robust internal controls and work collaboratively with government to identify and address these high-priority risks.

Regulatory agencies like DDTC and BIS, drawing on the strategic assessments of the defense and intelligence communities and the technical expertise of bodies like NIST, possess the institutional knowledge to assess these tradeoffs. They can evaluate model capabilities against adversarial testing, analyze national security implications holistically, and develop compliance approaches that protect security without unnecessarily constraining innovation. But they must treat AI model outputs as an urgent policy priority—dedicating resources to understanding specific AI systems, engaging with developers and deployers, and adapting frameworks to address challenges current rules never contemplated.

The conversation about AI and national security must expand beyond semiconductors and model weights to encompass the outputs those technologies enable. DDTC and BIS have successfully adapted export controls to previous technological disruptions—from cryptography to additive manufacturing—and AI model outputs present the next adaptation challenge. The agencies possess the institutional knowledge to develop workable solutions, but doing so will require sustained attention and a willingness to rethink frameworks built for an earlier technological era. The race between safeguards and capability improvements is already underway; U.S. regulatory frameworks must move fast enough to keep pace.

Governing AI Agents Globally: The Role of International Law, Norms and Accountability Mechanisms https://www.justsecurity.org/121990/governing-ai-agents-globally/?utm_source=rss&utm_medium=rss&utm_campaign=governing-ai-agents-globally Fri, 17 Oct 2025 13:05:06 +0000 https://www.justsecurity.org/?p=121990 Stakeholders must creatively leverage existing legal and normative tools to ensure AI agents serve humanity — not destabilize it.

The Rise of AI Agents

Industry leaders have dubbed 2025 “the year of the AI agent.” Unlike chatbots, these systems can set goals and act autonomously without continuous human oversight. The most popular AI agents can book appointments and make online purchases, or write code and conduct research. Some types of AI agents—known as “action-taking AI agents”—can interact with external tools or systems via application programming interfaces (APIs), and even write and execute computer code with software development kits (SDKs). Their potential is enormous: automating work, optimizing systems, and freeing up time. But their ability to take actions in the real world also brings new risks that extend far beyond national borders. This post explores why global governance is key to managing those risks and how it should be grounded in existing, non-AI-specific international law, norms, and accountability mechanisms.
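The basic mechanics can be made concrete with a short sketch. The loop below shows how an action-taking agent typically works: the model proposes a tool call, host code executes it against an external system, and the result is fed back to the model until the task is complete. This is a minimal illustration only; the stubbed model, the book_appointment tool, and the message format are hypothetical stand-ins, not any particular vendor’s SDK.

```python
# Minimal sketch of an action-taking agent loop (hypothetical names throughout).
# A real agent would call a vendor SDK; here the "model" is stubbed so the
# control flow -- propose a tool call, execute it, feed the result back -- is visible.
import json

def book_appointment(date: str, time: str) -> dict:
    """External 'tool' the agent may invoke (stand-in for a real API call)."""
    return {"status": "confirmed", "date": date, "time": time}

TOOLS = {"book_appointment": book_appointment}

def stub_model(messages: list[dict]) -> dict:
    """Stand-in for a chat model with tool calling: first turn requests a tool,
    the next turn summarizes the tool result."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool_call": {"name": "book_appointment",
                              "arguments": {"date": "2026-02-03", "time": "10:00"}}}
    return {"content": "Your appointment is booked for Feb. 3 at 10:00."}

def run_agent(user_request: str) -> str:
    messages = [{"role": "user", "content": user_request}]
    while True:
        reply = stub_model(messages)
        call = reply.get("tool_call")
        if call is None:  # the model has finished acting
            return reply["content"]
        result = TOOLS[call["name"]](**call["arguments"])  # act on an external system
        messages.append({"role": "tool", "content": json.dumps(result)})

if __name__ == "__main__":
    print(run_agent("Book me a doctor's appointment next Tuesday morning."))
```

The governance-relevant point is that each pass through this loop can change state in an external system, which is what separates action-taking agents from chatbots and gives their errors and misuse cross-border reach.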

Action-taking AI agents can directly affect the digital and physical infrastructure around them in complex and unpredictable ways, posing new challenges for human oversight. This could exacerbate well-documented AI risks, including privacy breaches, mis- and disinformation, misalignment, adversarial attacks, adverse uses (including to carry out cyberattacks), job displacement, corporate power concentration, and anthropomorphism (resulting in overreliance, manipulation, and emotional dependence). AI agents may also give rise to new risks, such as function calling hallucination, cascading errors across interconnected systems, self-preservation and loss of control. Because many of these systems operate online, their actions—and harms—can easily cross borders.

Managing cross-border risks or harms is a task that can hardly be accomplished solely at the national level. This is why it is crucial for policymakers, companies, and other stakeholders to examine how to best govern AI agents globally, and why we at the Partnership on AI (PAI) have confronted this issue head-on in our latest policy brief on the topic.

AI Agents and Global Governance 

There is no shortage of principles and best practices that were crafted at the international level specifically for AI technologies and also apply to AI agents. Most notably, UNESCO’s Ethics of AI Recommendation, the OECD AI Principles, and the G7’s Hiroshima Code of Conduct all emphasize transparency, safety, security, and respect for human rights. There is also much discussion about developing new international agreements and institutions to govern AI. For example, over 300 experts and 90 organizations recently issued a Global Call urging governments to reach an international agreement on red lines for AI by the end of 2026. The Global Call expressed particular concern about action-taking AI agents, noting that “[s]ome advanced AI systems have already exhibited deceptive and harmful behavior, and yet these systems are being given more autonomy to take actions and make decisions in the world.”

But stakeholders should not overlook the foundational, technology-neutral tools that they already have: existing international law, non-binding norms, and accountability mechanisms. These are the result of decades of global negotiations and have helped the international community navigate complex global challenges—from war and famine to climate change and cybersecurity. Understanding how they apply to AI agents is key to governing this new technology inclusively in a challenging geopolitical environment.

The importance of governing AI—and AI agents specifically—through these foundational global governance tools is underscored by the United Nations (U.N.)’s recent announcement of two new dedicated AI mechanisms: the Independent International Scientific Panel and the Global Dialogue on AI Governance. The Panel will be an independent body of 40 multidisciplinary experts, tasked with issuing “evidence-based scientific assessments synthesizing and analysing existing research related to the opportunities, risks and impacts of artificial intelligence.” And the Dialogue is intended to function as a multistakeholder forum for discussions of AI governance questions, including, in particular, “[r]espect for and protection and promotion of human rights in the field of artificial intelligence” and “transparency, accountability and robust human oversight of artificial intelligence systems in a manner that complies with international law.”

In our work on AI agents and global governance, we have focused on potential cross-border harms and human rights impacts because of the scale and frequency with which these are anticipated to occur if action-taking AI agents are deployed at scale. Yet, we are conscious that there are many other risks that need to be managed globally, including inequitable technology adoption, environmental impacts, and specific risks arising in the military context.

Addressing Cross-Border Risks

Because many AI agents take actions online, their impacts can easily cross borders and affect governments, companies, and individuals worldwide. Consider an AI agent able to generate content and post it on social media. Such a system can hallucinate or be exploited by malicious actors to spread disinformation online, undermining public trust. Or take an AI agent that can write and run code. Not only can an error affect a computer program's source code and alter how it works (e.g., by creating a software vulnerability); the technology could also be exploited for malicious purposes, such as mounting adversarial attacks or building sophisticated forms of agentic malware. These risks are even more acute given the prospect that AI agents may eventually be deployed in critical sectors, such as energy, finance, education, healthcare, transportation, and telecommunications.

International law prohibits states from using AI agents in ways that undermine the sovereignty of other states or interfere in their internal or external affairs in a coercive manner. Examples include using AI agents to cause physical harm or to interfere in democratic processes abroad. International law also protects AI agents deployed by public or private entities for inherently governmental functions, including healthcare, education, agriculture, social services, transport, and financial services. International law also arguably imposes a duty on states to exercise care when allowing AI agents to be developed and deployed in their territory. This due diligence obligation requires states to seek to prevent or mitigate the harms that AI agents might cause not only to other states, but also to private companies and individuals in other jurisdictions, whether the harm is caused by an agent malfunction or misuse by a state or non-state actor.

Non-binding norms complement these rules by recommending best practices for states and companies in the context of information and communications technologies (ICTs). When AI agents take actions online, they are part of the ICT environment and therefore subject to these norms. Examples include the U.N. voluntary norms of responsible state behaviour in the use of ICTs and the Paris Call for Trust and Security in Cyberspace, which promote cooperation, critical infrastructure protection, and the prevention of harmful tools.

Protecting Human Rights

Even when the risks or impacts of AI agents are restricted to a single jurisdiction, they can affect internationally recognized human rights. Privacy is a key concern: in order to perform often highly-personalized tasks, AI agents must access different types of personal data, such as personal files, emails, or calendars. Not only might this data be inappropriately accessed, it could also be leaked to other applications that the AI agent interacts with, whether due to an agent malfunction or an adversarial attack. There is also evidence that AI agents can resort to manipulation and other coercive techniques to achieve certain goals. For example, Anthropic reported that Claude Opus 4—an agentic AI assistant—blackmailed a supervisor to prevent being shut down, and that several models it tested resorted to blackmail and information leaks to avoid replacement or achieve their goals. These kinds of behavior might affect individuals’ right to freely form and express their opinions. Given AI agents’ high levels of autonomy and complexity, there are also concerns that they will more significantly impact the job market than other AI technologies, such as chatbots.

Human rights treaties such as the International Covenants on Civil and Political Rights (ICCPR) and Economic, Social, and Cultural Rights (ICESCR) impose both negative obligations (to refrain from violations) and positive obligations (to protect rights from third-party interference). States must therefore refrain from and prevent human rights harms that might arise from designing, developing, or deploying AI agents within and arguably beyond their borders.

Companies, though not bound by international law, are guided by the U.N. Guiding Principles on Business and Human Rights. These call for corporate due diligence to prevent and mitigate human rights impacts—a responsibility that extends to AI agents’ design, development, and deployment.

Accountability and Potential Gaps

When states breach international law, they are required to stop the violation and remedy any harm caused. They can be compelled to do so through international courts, countermeasures (e.g., sanctions) by the injured state, or a decision of the U.N. Security Council. Yet there is no centralized enforcement mechanism, and the U.N. Security Council is often paralyzed due to geopolitical divides. Companies’ human rights commitments remain voluntary and therefore are not internationally enforceable. AI agents also complicate accountability. Their actions are not directly attributable to a state, and state responsibility usually arises for foreseeable harms, not AI agents’ perhaps unpredictable actions. This means that international accountability for the global harms of AI agents is not a given; when responsibility does arise, it hinges on states and domestic legal systems for enforcement.

Moving from Principles to Action

Existing global governance tools provide a foundation for governing AI agents, but they must be implemented appropriately and tailored to specific use cases. In particular, governments and companies should ensure:

  • Rigorous pre- and post-deployment testing and evaluations;
  • Failure and vulnerability detection systems;
  • Limited affordances (i.e., what an AI agent’s architecture enables it to do) and sufficient human oversight for high-stakes decisions, particularly those affecting people’s rights in sectors such as healthcare, social care, employment, immigration, and national security (see the illustrative sketch after this list);
  • Transparency in the use of the technology;
  • Effective remedies for those affected;
  • Critical infrastructure resilience, including through robust safety, security and redundancy mechanisms; and
  • Societal resilience through AI agent literacy and awareness-raising.
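To illustrate the affordance and oversight recommendation above, here is a minimal sketch in Python with entirely hypothetical names (ALLOWED_TOOLS, HIGH_STAKES, request_human_approval): the agent is structurally unable to call tools outside an allow-list, and high-stakes actions are routed to a human reviewer before execution.

```python
ALLOWED_TOOLS = {"search_records", "draft_letter", "update_benefits_record"}
HIGH_STAKES = {"update_benefits_record"}  # actions that affect people's rights

def request_human_approval(tool: str, args: dict) -> bool:
    # Stand-in for routing the proposed action to a human reviewer.
    reply = input(f"Approve {tool} with {args}? [y/N] ")
    return reply.strip().lower() == "y"

def execute(tool: str, args: dict, registry: dict):
    # Limited affordances: tools outside the allow-list simply cannot run.
    if tool not in ALLOWED_TOOLS:
        raise PermissionError(f"'{tool}' is outside this agent's affordances")
    # Human oversight: high-stakes actions require explicit sign-off.
    if tool in HIGH_STAKES and not request_human_approval(tool, args):
        return "Action declined by human reviewer"
    return registry[tool](**args)
```

The design choice worth noting is that the limits live in the scaffolding around the model rather than in the model's instructions, so they hold even if the model misbehaves or is manipulated.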

The United Nations should appoint a Special Rapporteur on AI and Human Rights to clarify how existing frameworks apply to AI agents, and leverage its new AI mechanisms—the Scientific Panel and the Global Dialogue—to foster inclusive dialogue on the topic.

All stakeholders should invest in more research into the risks of AI agents to help close accountability gaps, including through the International AI Safety Reports and by expanding the International Network of AI Safety Institutes.

AI agents are just beginning to emerge, but their potential global impact is significant. The choices stakeholders make now—about governance, accountability, and enforcement—will shape whether this technology strengthens or undermines the international order. The world already has many of the legal, normative, and institutional tools to address the challenges that the era of agentic AI will bring. The task ahead is to leverage these tools decisively and creatively to ensure AI agents serve humanity, not destabilize it.

Embedded Human Judgment in the Age of Autonomous Weapons https://www.justsecurity.org/121345/embedded-human-judgment-autonomous-weapons/?utm_source=rss&utm_medium=rss&utm_campaign=embedded-human-judgment-autonomous-weapons Thu, 16 Oct 2025 12:46:08 +0000 https://www.justsecurity.org/?p=121345 A new framework for autonomous weapons shows that real control depends on embedded human judgment across design, command, and operation.

Few phrases dominate debates about autonomous weapons more than “meaningful human control.” It has become a central topic in diplomatic forums, academic discussions, and civil society campaigns. States negotiating at the United Nations Group of Governmental Experts on lethal autonomous weapons (GGE LAWS) have invoked the phrase regularly, and advocates see it as the minimum requirement for regulating AI-enabled autonomous weapons systems (AWS). Despite its prominence and controversy, the phrase is often considered vague, impractical, and misunderstood. What exactly does meaningful control mean? What is the threshold for meaningful? What does control look like and entail? And, importantly, who exercises it, when, and under what limits?

In a recent three-part series in International Law Studies, I provide some answers (and more questions) about human control. The analysis views human control not as a single act at a moment in time by one person—such as pressing a trigger or turning a system left or right—but as a series of distributed, embedded human judgments throughout a system’s lifecycle. Each article examines the three types of actors with distinct roles in the human control framework—software designers/developers, commanders, and operators—all of whom make decisions that influence whether an AWS can be used legally and responsibly. Each group faces unique technical, cognitive, legal, and operational challenges across three stages:

  1. Design and development decisions: where engineers, data scientists, and software designers set the parameters through, inter alia, data training, software architecture, and interface design.
  2. Command decisions: where commanders authorize deployment, establish mission parameters, and impose operational constraints.
  3. Operator decisions: where individuals in the field guide, observe, and terminate systems to avoid unintended outcomes.

Each stage involves distinct forms of human control. Treating them as part of a continuum reveals both the opportunities for embedding judgment and the risks of assuming that human control exists when, in practice, it may not.

Control Starts with the Code

The design and development phase has received more attention in the AI context than in the development of traditional weapon systems, for obvious reasons. Decisions made during this phase have a significant impact on the ultimate performance of an AI system. In the case of AI-enabled weapons, international humanitarian law (IHL), or the law of armed conflict, begins to apply well before a weapon is deployed. Article 36 of Additional Protocol I (AP I) to the Geneva Conventions requires States to determine if the employment of a new means or method of warfare would, in some or all circumstances, be prohibited by AP I. In this context, fulfilling that obligation is impossible without scrutinizing design choices.

Software developers and engineers are responsible for key decisions, including the selection of machine learning techniques, the compilation and cleaning of training data, and the design and presentation of data at the operator interface. Each of these choices carries legal implications. For example, a system trained on unrepresentative or biased data may systematically misidentify targets, undermining the principle of distinction. In another example, a poorly designed interface may result in cognitive overload for operators or inadvertently bias their decisions toward a particular outcome. Clearly, at this stage, recognizing how developer decisions are opportunities to embed human judgment is a critical piece of a larger puzzle for harnessing human control, identifying foreseeable risks or shortcomings, and implementing responsible AI.

Human control, therefore, starts with the code. If other stakeholders—notably legal advisors—are not involved from the beginning, important decisions about compliance with IHL are essentially delegated to technical design choices. Bringing in multidisciplinary expertise early on helps ensure that data curation, algorithm development, and testing procedures align with both technical instruments and States’ legal obligations under IHL.

Commanders and the Weight of Deployment

Commanders serve as a crucial link between design and deployment. Commanders determine whether to deploy an autonomous system, the conditions under which it will be deployed, and other associated constraints. Despite holding such an important decision-making role, the opportunities commanders have to incorporate human judgment are often overlooked within the broader human control debate.

Commanders can exercise human control through procedures such as testing and training, setting mission parameters, and developing rules of engagement. For example, an AWS might be authorized to operate only within a geographically-defined area, against specific categories of military objectives, or for a limited period of time. These constraints are intended to mitigate the risk of civilian harm and define the conditions for optimal system performance.
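Read this way, such constraints can also be expressed as machine-checkable parameters that the system enforces on itself. The sketch below is purely illustrative and hypothetical (the field and function names are invented, not drawn from any fielded system); it shows, in Python, how a commander-defined envelope of area, objective categories, and time window might be encoded so that engagements outside it are never even considered.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class MissionConstraints:
    area: tuple            # (min_lat, min_lon, max_lat, max_lon) of the authorized area
    objectives: set        # commander-approved categories of military objectives
    not_before: datetime   # start of the authorized window
    not_after: datetime    # end of the authorized window

def within_envelope(c: MissionConstraints, lat: float, lon: float,
                    target_class: str, now: datetime) -> bool:
    # A candidate engagement is only passed on for further review if it
    # falls inside the commander-defined envelope.
    min_lat, min_lon, max_lat, max_lon = c.area
    in_area = min_lat <= lat <= max_lat and min_lon <= lon <= max_lon
    in_window = c.not_before <= now <= c.not_after
    return in_area and in_window and target_class in c.objectives
```

Encoding the envelope this way does not substitute for the commander's judgment; it simply records that judgment in a form the system and its auditors can check.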

However, commanders rarely have complete knowledge of how systems will perform or how an environment may change over time. No testing regime can predict every possible outcome, especially against adaptable or near-peer adversaries. Therefore, commanders need access to essential information about the risks of system failures and malfunctions under specific conditions to understand these limitations during planning. Uncertainties will always be present, and commanders must use their judgment to balance these uncertainties with operational demands.

Additionally, at a tactical level, commanders will be responsible for implementing maintenance and monitoring regimes. This includes unit training with new systems, overseeing system updates and testing and evaluation of deployed systems, and, if necessary, legal reviews of substantially altered systems.

Commanders thus carry the burden of embedding human judgment through procedures such as testing systems at the edge, training units responsible for new weapon systems, operational and mission planning, and maintenance and monitoring. By defining mission constraints and ensuring that system capabilities align with legal objectives, they exercise a form of control that can be more impactful than last-minute interventions.

Operators at the Edge

Finally, operators are often portrayed as the ultimate safeguard against failures in AWS. Public debate tends to assume that their ability to authorize or abort engagements is the essence of human control. But in practice, their position may be far less impactful.

Cognitive considerations and limitations are key issues in harnessing human control with operators. Operators face challenges such as automation bias, vigilance fatigue, and cognitive overload. Furthermore, systems may operate at machine speeds that humans struggle to keep pace with, or may require monitoring over long stretches, leading to cognitive drift. Interfaces that hide uncertainty or limit override functions exacerbate the issue, making intervention more symbolic than substantive.

Despite these challenges, operators remain essential. And importantly, operators have unique functions separate from developers and commanders—operators guide, observe, and terminate.

Operators guide AWS by authorizing updates or adjustments to ensure alignment with command intent and legal constraints. They observe system behavior in real-time, providing oversight that machines alone cannot achieve. Additionally, they terminate engagements when the system’s actions risk violating IHL or diverging from mission parameters. These functions are unique to operators, making them the final—but not the only—safeguard of human judgment.

Across design, command, and operation, human control is distributed rather than centralized. No single actor bears all responsibility. Instead, a chain of embedded judgments is incorporated into AWS.

Policy Implications

The lifecycle approach demonstrates that States and militaries cannot rely on vague assurances of human control in AWS; they must specify where, when, and by whom judgment is exercised. Four policy recommendations result from this:

1. Harness embedded judgment throughout the lifecycle of AWS. 

A key idea connecting these stages is embedded human judgment. Instead of viewing human control as a final override, this framework highlights that human decisions are woven throughout an AWS’s lifecycle, and a comprehensive human control policy should harness that timeline.

During design, judgment is embedded through decisions like data selection, machine learning techniques, and the construction of testing regimes, as well as interface design. These decisions determine parameters that will guide a system’s performance in a combat environment. At the command level, judgment is embedded through processes like mission planning and rules of engagement, which shape the system’s operating environment and permissible targets. In operations, judgment is embedded through operator functions of guiding, observing, and terminating, which are critical to mitigating risk and unintended outcomes.

2. Integrate legal and technical design choices.

Recognizing embedded judgment highlights the significance of technical considerations in shaping legal decisions. Data selection, system architecture, and verification methods directly affect whether IHL principles can be upheld. States must actively harness that technical-legal relationship through greater discourse, cross recruitment, and training.

3. Enhance command accessibility and responsibility. 

Commanders at all levels must understand their role within the larger framework of this emerging capability. However, policymakers also need to be realistic about their expectations of commanders. There are many calls for commanders to be more tech-savvy in order to understand or accept responsibility for their use of AI. Commanders do not need to code to understand the risks of a specific weapon system, just as we expect for non-AI weapon systems. For example, commanders are not expected to have an in-depth understanding of the physics behind bullets or bombs. Subject matter experts will be able to provide that necessary information. Nonetheless, commanders must have a thorough understanding of the risks associated with particular systems to establish and maintain acceptable environments for their use, monitoring, and maintenance.

4. Maintain perspective on the limited role of operators. 

Operators are part of a far larger network of human control. And in some ways, they have the least amount of ‘control’ relative to other actors within the lifecycle. Future policies surrounding human control must acknowledge their limited functions and utilize earlier stages to mitigate biases or other limitations that affect operators in their guiding, observing, and terminating functions.

Looking Ahead

The framework outlined above provides a foundation for further inquiry, but several critical questions must still be addressed. What levels of autonomy are appropriate for specific use cases? How can we truly operationalize “meaningful” control? Can States converge on common standards for testing, constraint-setting, and oversight? Are there legal or operational implications of States with different or contrasting approaches to human control?

This lifecycle perspective shows that human control must be more than a policy slogan; it should drive practical actions. What matters is who makes judgments, when, and how. Maintaining and utilizing embedded human judgment throughout the lifecycle is vital for ensuring that autonomous weapons are developed responsibly and used lawfully. Identifying where that judgment exists is crucial for the lawful and responsible use of emerging military technologies.

Trading Sovereignty for Scale? The Costs of the U.S.–U.K. Tech Prosperity Deal https://www.justsecurity.org/121723/us-uk-deal-tech-sovereignty/?utm_source=rss&utm_medium=rss&utm_campaign=us-uk-deal-tech-sovereignty Wed, 15 Oct 2025 12:59:32 +0000 https://www.justsecurity.org/?p=121723 In its Tech Prosperity Deal with the US, the United Kingdom may be trading its sovereignty for dependence on American tech firms.

On Sept. 16, the United States and United Kingdom announced the Tech Prosperity Deal. Launched at a major bilateral summit attended by heads of state as well as the heads of frontier AI labs and technology companies (including Nvidia’s Jensen Huang and OpenAI’s Sam Altman), it promises an unprecedented $200 billion investment from U.S. firms into the United Kingdom’s AI ecosystem.

The agreement has been celebrated as a game-changer for U.K. prosperity, security and sovereignty. British Prime Minister Keir Starmer declared it a “generational” advancement in the special relationship, expanding U.K. jobs, innovation, healthcare, and science.

Starmer’s assessment is spot-on: this deal does change the game. But it is a game the United Kingdom might already be losing. The U.K. will become more beholden to U.S. technology firms and less able to shape the important technologies it hopes to build on top of. This is a premature – but not irreversible – sacrifice of its AI sovereignty.

The Art of the Deal?

The Tech Prosperity Deal is wide-ranging, covering nuclear power, quantum technology, AI infrastructure, and AI applications in research, science, engineering and business. It prioritizes building out data centers in designated AI Growth Zones, as well as U.S.-U.K. collaboration on drug discovery and fusion energy.

But the feasibility of this deal is as yet unclear, as is its ability to supercharge the United Kingdom’s AI ambitions.

While the deal has been framed as a boost for Britain’s AI industry, most companies involved – including Nvidia, OpenAI, Blackstone, Google, and Microsoft – are based across the Atlantic. Although framed as “building on British [AI] success stories” (like Deepmind, Wayve and Arm), the design of the deal might very well kneecap homegrown alternatives that could struggle to compete with U.S. heavyweights or to stay in the U.K. (Google’s 2014 acquisition of U.K.-grown Deepmind is front-of-mind.)

Critics of the deal like former Deputy Prime Minister Nick Clegg have decried the U.K.’s decision to tie itself to the U.S. and lose its ability to retain “good people and good ideas.” Supporters of bolstering UK and European technology sovereignty have also disparaged London’s choice to opt for “off-the-shelf sovereignty,” spotlighting the risks to national security triggered by vendor lock-in. Converging with Washington on technology risks entrenching unsustainable or even politically dangerous and unreliable dependencies on U.S. technology providers.

Industry and security aside, the deal is a byproduct of global AI hype. This is dangerous. It is promising to see British jobs and wellbeing front-and-center of a new international partnership, particularly as the country faces an unemployment and healthcare crisis. Broad-scale transformation is evidently and urgently needed.

But this government – and the last – have rightly attracted criticism for confusing promises and potential for capability: both of the technology itself and the United Kingdom’s ability to deploy it.

Experts tracking AI’s implications for people and society warn that U.K. leaders have championed broad-scale investment and delayed regulation to reach the “promised land” of AI transformation at any cost: that is, before determining what it is that AI can do for the British people. Whether London’s AI ambitions are environmentally feasible is another matter entirely.

A Sovereignty Puzzle

The Tech Prosperity Deal was brokered to supercharge U.K. AI sovereignty – that is to say, building out and innovating emerging technology and infrastructure in the national interest, ranging from AI data centers to quantum computers and civil nuclear energy projects.

In some lights, it will do exactly that. An important pathway will be ensuring sustained access to U.S. frontier AI labs crowded at the front of the global AI race.

But in other respects, it may very well jeopardize the country’s autonomy and further entrench its dependence on a handful of U.S. providers. The deal is surprising in that it is both an unnecessary and premature capitulation. Rather than being caught between a rock and a hard place, the United Kingdom appears to have negotiated the deal from a position of relative strength.

Why this is the case is a story not only about the United Kingdom’s ambitions and limitations, but also about middle powers more generally in pursuit of technology sovereignty. Middle powers lack the capacity of AI superpowers like the United States and China, but are still eager to influence, develop and deploy AI in line with national interests.

Strategies for building so-called “sovereign AI” have surged in recent years. Although different in scope and ambition, most share common themes, like ensuring AI aligns with and advances national values (initiatives like Eurostack or Singapore’s SEALION come to mind), or deploying AI in pursuit of economic competitiveness and governance credibility. The U.K.’s AI plans pursue economic security, enabled by “sufficient, secure and sustainable AI infrastructure.”

Where countries stand on global AI capacity depends where they sit. Do they have the right combination of “enabling” capabilities (like a robust energy grid, talent – an area the U.K. excels in – and an industrial base)? Do these enablers support “primary” capabilities: access to advanced AI models trained and operated with sufficient computing power?

Political rhetoric on regional or even global AI leadership aside, most middle power plans for sovereign AI reflect the sobering realities of a two-horse race. Competing with the U.S. and China is futile. They hold the world’s most powerful labs (notwithstanding unpredictable upsets, like DeepSeek’s open-source explosion earlier this year). Together, they possess over 90 percent of global AI supercomputers (with the United States leading by a large margin).

This reality means states like the United Kingdom face a strategic dilemma (hinged on the assumption that building ever-more powerful AI capabilities is necessary and inevitable). How should they position themselves to maximize their sovereignty in a race they can’t win? What are the risks of getting this positioning wrong?

There are different pathways. Small countries like Taiwan and the Netherlands have curated specialized offerings in niche parts of the global AI supply chain. Blocs like the European Union have opted for developing shared capabilities and building “AI factories.” Others have picked a side in the U.S.-China AI race: the United Arab Emirates has opted for U.S. over Chinese offerings, as the first international Stargate partner and soon-to-be host of the world’s largest data center.

The Winning Bet

It is unsurprising that the United Kingdom has picked a side. A comprehensive U.S.-U.K. partnership on AI does not come out of the blue. It comes off the heels of years of strategic investment and cross-pollination, not to mention an enduring transatlantic alliance.

The Starmer government has pursued closer political alignment with the Trump administration in other domains as well, reportedly putting digital tax concessions on the negotiating table (although Technology Secretary Liz Kendall confirms the new deal excludes digital tax).

On the international stage, in February, the United Kingdom joined the United States as one of the only two participating non-signatories of the Paris Statement on AI, a symbolically important declaration on global AI cooperation. While not mimicking the U.S. deregulatory spiral on AI, London has delayed a comprehensive AI bill, distancing itself from the E.U.’s more comprehensive (but also watered-down) approach.

What is surprising is that the United Kingdom was not forced to entrench its dependency on U.S. technology. Nor did London have to politically distance itself from Brussels on technology. Neither of these outcomes appeared to be a foregone conclusion before the summit. Viewed from above, the U.K. side of the deal triggers concern: a country falsely convinced it must negotiate from a position of inevitable weakness.

There are reasons for this approach. The United Kingdom does not dominate global AI research and development (R&D). However, it does consistently punch above its weight in the global impact of its research ecosystem and innovation pipeline. After all, many U.K. spinouts and startups are bought up by U.S. firms. Geopolitics experts have called for a strategic U.K. response to U.S. H1B visa reversals: the country might outcompete other middle powers by distinguishing itself as a hub for global talent. There are valid fears that the deal will do little to stem an asymmetrical talent flow to U.S. shores and companies.

The United Kingdom’s data and data-sharing ecosystem is certainly not perfect, but it is relatively robust. Proposed public initiatives on data (namely, the National Data Library) and access to compute (via the AI Research Resource) could supercharge its innovation ecosystem. But the new deal’s reference to U.S. firms’ access to and use of U.K. datasets – like Biobank – calls to mind a 2023 contract to build out the NHS data platform, which was awarded to U.S.-based Palantir rather than U.K. competitors. That contract prompted an outcry, with the British Medical Association calling it “deeply worrying.”

Anxieties about the security implications of dependency on politically powerful but not infallible U.S. technology providers are widespread. They have reached a fever pitch in Brussels, with key influencers proposing different solutions to mitigate this dependency.

While the U.S.-U.K. deal will by no means prevent U.K. technology deal-making with the European Union, it sends a powerful political message to Brussels. Policymakers hoping for closer E.U.-U.K. cooperation on AI – for autonomy and collective security – might well be disappointed.

The partnership also raises tricky questions about U.K. entanglements with this powerful coalition of U.S. technology companies and their leaders. Oracle – close to the Trump administration and tied to U.K. policy influencers – has promised to expand the AI infrastructure provided to the U.K. government to the tune of $5 billion. How might companies like Oracle – acting autonomously, or in line with U.S. AI ambitions – set and disrupt technology agendas in the United Kingdom? How will U.S. technology firms compete and cooperate in the build-out of infrastructure on U.K. soil?

In years to come, the Tech Prosperity Deal may be remembered as less of a triumph of British sovereignty than its proponents hope.

Export Controls and U.S. Trade Policy: Making Sense of the New Terrain https://www.justsecurity.org/121725/export-controls-trade-policy-new-terrain/?utm_source=rss&utm_medium=rss&utm_campaign=export-controls-trade-policy-new-terrain Tue, 14 Oct 2025 13:04:00 +0000 https://www.justsecurity.org/?p=121725 The Trump administration's use of export controls as leverage in trade diplomacy creates risks for key U.S. national security interests.

U.S. export controls are evolving from a narrow national security tool to a broader trade policy instrument, reflecting U.S. President Donald Trump’s willingness to blend economic and national security negotiations, as well as such controls’ growing impact on economic growth, technological leadership, and geopolitical influence. Trade policy experts are now scrambling to learn the world of export controls, which were a key sticking point in a series of high-stakes trade negotiations this summer between U.S. and Chinese officials.

For the national security community, however, this shift may portend even more dramatic changes: deploying export controls as a multi-purpose instrument creates risks for core U.S. security interests, including the perception that national security can be traded away for commercial gain. To alleviate such risks, U.S. policymakers should clarify which national security export controls are not up for negotiation and reassure allies and the private sector that export control policy will be applied consistently and predictably.

The Traditional Role—and Expanding Reach—of Export Controls

To understand the shift currently underway, it is helpful to first review the traditional rationale and use cases of export controls. Export controls are statutory and regulatory measures designed to restrict the transfer of certain goods, technologies, and services across national borders. The United States has primarily used export controls to prevent strategic competitors from acquiring dual-use technologies, curtail the proliferation of weapons of mass destruction (WMD), sanction foreign states or entities, and support human rights by blocking the transfer of tools and technologies that could facilitate surveillance or repression.

Traditional export controls advanced national security in significant ways. For example, the Missile Technology Control Regime ensured that participating states did not transfer advanced technologies such as ballistic missiles and long-range unmanned aerial vehicles abroad unless the transfers were first reviewed and approved by their governments. The Nuclear Suppliers Group and the Australia Group track emerging products that have potential nuclear and chemical weapons applications, respectively. When warranted, they place these items on control lists to guide government export regulations. Similarly, the Wassenaar Arrangement seeks to control emerging dual-use technologies, such as advanced semiconductors and quantum technologies. However, the effectiveness of these regimes has been undermined in recent years, as Russia has begun blocking proposals for typically agreed-upon controls.

At the same time, as successive U.S. administrations have recognized the potentially vast power and leverage associated with restricting access to certain U.S. goods and technologies, they have used these tools in more expansive ways, stretching the traditional export control regimes beyond core national security interests. The first Trump administration added Huawei to the Bureau of Industry and Security’s (BIS) Entity List, on the grounds that the company was diverting U.S. technology to Iran. Over time, however, the policy narrative behind export controls on Huawei evolved, focusing on the potential that the PRC’s intelligence services could put digital backdoors in Huawei’s global telecom system. This claim provided the rationale to impose stringent export controls restrictions on Huawei globally through the novel use of U.S. unilateral extraterritorial controls. Though the rationale for these export control measures was still rooted in national security, the controls had a much greater commercial impact in restricting Huawei’s global telecom business.

Export Controls as Tools of Competition and Coercion

The first Trump administration began controlling the export of semiconductor manufacturing equipment to China, working with the Netherlands and Japan. The Biden administration significantly expanded this policy, restricting both the equipment to make chips and the most advanced chips themselves. Again, these controls were founded in a national security rationale, preventing China from using advanced AI for military modernization purposes. Yet over time the rationale for the semiconductor controls also expanded into a broader contest in winning the AI race. While export controls traditionally had narrow economic effects, U.S. controls on China’s compute supply have had sweeping commercial impacts. This was, in part, a predictable outcome of China’s military-civil fusion strategy, which sought to break down barriers between the country’s civilian commercial and research technology sectors and the defense industrial sector: for the United States to constrain China’s military modernization, it would also need to constrain civilian technological development.

Over time, U.S. allies became more dubious of Biden’s national security rationale regarding America’s unilateral export controls targeting the PRC’s semiconductor industry. Allies questioned whether these actions—and U.S. pressure to align their own controls with those of the United States—were rooted in commercial competition with China, rather than narrower national security concerns.

Nor was the more expansive approach to export controls limited to the Chinese tech sector. Following Russia’s full-scale invasion of Ukraine in 2022, the Biden administration and U.S. partners and allies imposed a wide-ranging export control program on Russia and Belarus. The controls targeted not only factories that produce weapons, but also luxury items such as Gucci and Versace bags and clothing. The rationale for these controls was to impose economic pain on Russian and Belarusian elites–a form of coercive pressure more often pursued via sanctions than export controls.

In its waning days, the Biden administration released perhaps the most expansive and ambitious U.S. export control policy ever attempted—the AI diffusion rule. The rule placed strict limits on how many advanced AI chips could be exported to specific countries around the world. It had an explicit industrial policy objective of keeping leading technological capabilities in the United States and ensuring U.S. AI compute dominance. Early on, the second Trump administration announced that it would rescind the rule. It criticized the rule as overly bureaucratic, burdensome for industry, and an unnecessary constraint on U.S. competitiveness. It remains to be seen what rule – if any – the current administration will put in its place.

At the same time, China has been developing its own export control capabilities, using its dominance over key strategic sectors as a leverage point for broader geopolitical objectives. China has long had an export controls system, developed in part to implement its commitments as a member of the Nuclear Suppliers Group, a group of nuclear supplier countries that seeks to limit the proliferation of nuclear weapons. In 2010, China demonstrated its willingness to use export restrictions for political aims, targeting Japan with ad-hoc unilateral controls on rare earths in response to bilateral tensions. In the face of increased U.S.-led export controls targeting China in 2020, the Chinese government enacted a new export control law, which formalized its dual-use list and created an unreliable entities list, mirroring the United States. In late 2024, China revamped its dual-use regulations and imposed controls on critical minerals. In early January 2025, China added 28 U.S. companies to its unreliable entities list.

This expansion of export controls has set the stage for increasing conflation–and potential conflict–between the worlds of trade and national security. So long as export controls were understood as a narrow tool of national security with limited commercial or economic impact, they could be carved out from normal international trade rules and obligations and generally ignored by most trade and economic policy officials. But the new world of export controls is coming into more direct contact with the trade regime–most immediately in the context of the current U.S.-China trade wars.

The Trump Administration Puts Export Controls Up For Negotiation

When Trump imposed steep tariffs in April, the Chinese government retaliated by exploiting its dominant position in the global market for rare earths. China imposed sweeping new controls on the export of certain rare earth minerals and magnets critical for the production of semiconductors, autos, and various defense industrial base goods. While it was not unprecedented for China to leverage global economic chokepoints to pressure foreign governments, the United States appeared unprepared to weather such disruption.

After several weeks of mounting trade tensions, U.S. and Chinese officials agreed in Geneva to deescalate their tariff fight. This included easing China’s rare earths controls. At the same time as these negotiations were being finalized, the Commerce Department issued a statement warning industry that using Huawei’s Ascend AI chips “anywhere in the world” violated U.S. export controls. The Chinese government was furious, arguing that the United States’ export control move had undermined the trade truce agreed days earlier. It is not clear whether the U.S. officials negotiating on trade issues with China were involved in the export control decision, or were aware of when the statement would be released. But from China’s perspective, even a small shift in U.S. export control policy violated the spirit of the trade deal the two sides had just struck, and set back bilateral cooperation.

Chinese and U.S. officials agreed to meet again in June. Shortly before the next round of negotiations, media reported that the United States was imposing new additional export controls on China, including on software used for designing semiconductors, as well as ethane, a gas used in petrochemical production. Ethane did not meet the traditional strategic rationale for U.S. export controls: it is a commodity that is widely available from alternative sources and is not closely linked to military production. Rather than directly advancing U.S. national security, these controls were clearly an effort by U.S. trade negotiators to gain leverage over China, signaling that other restrictions that the U.S. had imposed on China’s advanced semiconductor industry might also be up for negotiation. Coming out of this second round of U.S.-China trade talks, China pledged to resume more regular export of critical minerals to the United States, and the United States agreed to unwind the export controls imposed just days earlier. Negotiating over export restrictions was now normalized as part of U.S.-China trade diplomacy.

The trade and export control truce, however, was fragile. On September 29, the United States introduced the “Affiliates Rule” change to the Entity List, which substantially expanded the number of firms subject to U.S. export controls. On October 9, 2025, likely partly in response to this U.S. policy change, China imposed extraterritorial controls on rare earths and added 14 additional companies to its Unreliable List, escalating trade tensions between the U.S. and China.  In response to the latest Chinese export control restrictions, Trump announced on October 10 that he would impose 100 percent tariffs on all Chinese goods beginning November 1, though he has also signaled a willingness to negotiate with China on these new restrictions.

Are Export Control Licenses for Sale?

Several weeks after the June 2025 London talks concluded, the United States announced it would soon begin allowing Nvidia to sell its H20 chips to China, leading to speculation that this was a concession under the framework agreement in order to maintain access to Chinese critical materials. However, Chinese officials then contradicted the U.S. account, stating the United States made a unilateral decision to allow H20 sales, and this policy shift was not taken at China’s request. Additionally, the Trump administration then announced the loosening of these export controls would only come with a twist: Nvidia and AMD would be able to receive export control licenses to sell certain of their AI chips to China, but only if they agreed to pay the U.S. government 15 percent of their revenues from the sales.

This introduced yet another foundational—and deeply controversial—shift in U.S. export control policy. Export controls are now not only a point of leverage in foreign trade negotiations; the Trump administration also views them as a potential source of revenue. The potential legal challenges to implementing this at scale may be significant. The U.S. Constitution bans export tariffs. Furthermore, the primary export control legislation—the 2018 Export Control Reform Act (ECRA)—explicitly states that the government will not charge fees for the “submission, processing, or consideration” of export control licenses. Treasury Secretary Scott Bessent has noted he is interested in pursuing this model with other companies, but it is not clear how such an effort might progress.

Strategic Implications of Negotiating Export Controls

The key takeaway from these developments is that the U.S. government now treats export controls as a flexible tool to serve a wide range of policy objectives. The semiconductor controls on China that have been building since the first Trump administration demonstrated the use of such controls in broader geopolitical and tech competition. The AI diffusion rule represented an experiment in using export controls to tilt global markets in favor of U.S. market development. In imposing controls as a bargaining chip in trade negotiations with China, the second Trump administration sought to use such controls as a transactional means to gain negotiating leverage. And the most recent suggestion that companies should pay a share of their profits as a fee back to the U.S. government raises the spectre of the use of export controls as a revenue source.

But stretching this tool so far beyond its traditional, narrow national security purposes will come with costs and trade-offs—and could undermine the original intent of export controls.

To begin with, in the specific context of managing bilateral trade, technology, and security relationships with China, the United States now faces growing pressure from China to further roll back U.S. technology controls. Traditionally, the United States refused to entertain the question of negotiating away export controls, contending that such national security actions would not be a concession to be traded off against commercial or other objectives. But now that this window has been opened, Chinese negotiators are likely to press for further relaxation of U.S. controls. Indeed, in the run-up to a third round of U.S.-China talks in Stockholm this summer, reporting suggests the Trump administration decided to freeze any new export controls to avoid upsetting trade talks. The challenge for U.S. negotiators will be to distinguish between foundational export controls, which are central to long-term U.S. security objectives and not up for negotiation, and transactional controls implemented to gain leverage, which are bargaining chips that can be traded away.

Second, this shift will also complicate U.S. diplomatic efforts to encourage partners in Europe and Asia to align their export controls with those of the United States. These countries have long been more skeptical of aggressive export controls than the United States is—and also worried that the United States may at times be pursuing an economic competitiveness agenda under the guise of national security concerns. They will be even more reluctant to follow U.S. appeals to implement strong controls if they worry the United States may subsequently decide to negotiate away those same controls. Plurilateral cooperation on export controls, already strained, will become ever more difficult.

Third, an expansion of the scope of export controls could also threaten the Trump administration’s stated objective of increasing U.S. technology exports. As the government advances a wide-ranging strategy to export the U.S. AI stack, would-be foreign buyers of U.S. technology products may worry that their purchases will get caught up in future export controls, which the administration might implement at some future date to gain leverage in a trade negotiation. Similarly, U.S. tech companies may be more reluctant to invest in securing new export markets if they worry the administration may seek to extract a financial cut in return for granting a U.S. export control license.

The Path Ahead

The Trump administration has clearly signaled that it intends to incorporate export controls as part of trade negotiations. But there is less clarity on how it views export controls as part of a broader trade and national security strategy, rather than simply as a transactional point of leverage the administration can use to extract any range of concessions from foreign governments or U.S. companies.

There are steps the administration could take to better communicate and calibrate its new, broader approach to export controls while ensuring it does not inadvertently sacrifice national security priorities. This begins with articulating a strategy on the appropriate role of export controls, including clarity on what export controls are not up for negotiation. Such messaging is important for many audiences, but particularly for reengaging allies to encourage them to align their controls with those of the United States while assuring them that the only controls that will be relaxed are those on items where there is wide foreign availability. The Trump administration should also better articulate the threats posed by China’s economic, technological, and military model to reach shared risk assessments with allies and partners, to facilitate collective responses to common threats.

Additionally, if the administration wants to liberalize export controls to lower trade imbalances and use them as a carrot in trade negotiations, there are ways to do so without endangering U.S. national security. There are many legacy items controlled on the Commerce Control List that had national security implications at the time of their listing, but which today, given advances in technologies and market shifts, may no longer warrant controls. Such items include, for example, legacy civil nuclear-related items and certain machine tools. The United States should review the Commerce Control List, identify items that have broad foreign availability not only outside of China but also within China, and decontrol those items. This would help some U.S. companies that produce these legacy items export them to China and other markets, and allow the administration to highlight efforts to relax export controls without sacrificing national security.

Finally, the administration should review and reconsider the proposal of charging exporters to receive export licenses from the Commerce Department. If the administration does want to impose fees for the processing of export control licenses, it should work with Congress to amend ECRA, which would allow Commerce to publish a new fee structure in the regulations. Such a fee program need not be novel or controversial; indeed, the Department of State already requires exporters to pay a registration fee to sell defense articles abroad. The revenue from fees could be used to improve the speed and scale of license issuance. But it is vital that any such fee program be implemented in a transparent and proportionate manner, to avoid any perception that the United States is willing to sell export licenses that pose risks to U.S. national security if the price is right.
