
Key Trends that Will Shape Tech Policy in 2026

The global technology policy landscape is entering a pivotal year. In the United States, the AI governance debate has evolved from whether to preempt state-level regulation to what a substantive federal framework might actually contain. Internationally, competition between Washington and Beijing is accelerating, with semiconductors and compute capacity now central to national security strategy. The same competitive logic is shaping quantum computing, where the United States, Europe, and China are rethinking national programs while navigating fragile supply chains.

As global technology competition intensifies, new security risks are emerging. The first publicly reported AI-orchestrated hacking campaign appeared in 2025, and agentic AI systems are expected to reshape the offense-defense balance in cyberspace in the year ahead. More broadly, as AI diffuses through societies, policymakers must grapple with its implications for democracy, human rights, and the distribution of power between states and their citizens.

To make sense of these developments, we asked leading experts to identify key trends they will be watching in 2026 and beyond.

AI Federalism and the Future of U.S. Regulation

Josh Geltzer, Partner in WilmerHale’s Defense, National Security and Government Contracts Practice; Former Deputy Assistant to the President, Deputy White House Counsel, and Legal Advisor to the National Security Council: 

In 2025, the central AI policy debate in Congress revolved around whether to impose a federal “AI moratorium” that would block states from regulating AI for a set period of time. This proposal, strongly supported by the Trump administration, nearly passed as part of the summer’s “Big Beautiful Bill” and later resurfaced in the year-end National Defense Authorization Act, but ultimately failed due to insufficient congressional backing. As an alternative, U.S. President Donald Trump issued a December executive order aimed at limiting the impact of state-level AI laws without full federal statutory preemption. The order directed the Department of Justice to develop ways to challenge state-level AI laws deemed contrary to administration policy and instructed executive branch agencies to withhold certain funds from states maintaining AI regulations viewed as restrictive. It also committed the White House to draft a legislative framework for a uniform federal AI policy that would preempt state laws, while preserving state authority over issues like child safety, data center infrastructure, and government AI procurement.

Looking ahead to 2026, the debate is shifting from whether to “preempt something with nothing” to whether to “preempt something with something.” In other words, the key question will no longer be about eliminating state AI laws—which continue to proliferate—without a federal substitute, but will instead become about replacing them with a concrete federal regulatory framework. This change fundamentally alters the conversation: both supporters and critics of the 2025 moratorium must recognize that preemption with a substantive policy is a different proposition from preemption without one. What that federal framework will actually look like remains uncertain, but its details will be critical in determining the level of support for renewed legislation. The bottom line: expect a very different—and potentially more consequential—discussion about federalizing AI law in 2026.

David S. Rubenstein, James R. Ahrens Chair in Constitutional Law and Director of the Robert J. Dole Center for Law and Government at Washburn University School of Law:

The biggest AI federalism story of 2026 will not be about algorithms. It will be about silicon and steel. The National Conference of State Legislatures predicts that data centers will be a central legislative concern. While the dominant political narrative focuses on energy affordability and sustainability, the grassroots data center backlash runs deeper. People vote how they feel, and many Americans feel negatively about an AI-driven future. Data centers are vessels for AI anxiety and antipathy toward big tech more generally. This matters for two related reasons. First, the backlash reflects a broad coalition, spanning affordability, sustainability, job security, and corporate accountability. Second, even if energy costs are contained, the backlash probably will not be. For constituents anxious about AI, job loss, and cultural decay, blocking a local land-use permit or a corporate tax credit is how their voices will be heard.

Beyond infrastructure, states will continue to regulate AI itself. However, comprehensive AI acts are losing momentum. Colorado’s flagship law illustrates why. Originally passed in 2024, Colorado’s AI Act was designed to regulate AI discrimination across employment, housing, healthcare, and more. As the effective date approached, however, Colorado’s Governor and Attorney General backed the industry’s effort to gut the law. Instead, the Colorado legislature delayed the effective date to mid-2026, and future setbacks are likely. States are now pivoting to more targeted approaches, focusing on high-risk applications and legacy sectors. AI chatbots, for example, are in the legislative crosshairs, following headline news that linked chatbots to suicide, defamation, and deception. In 2026, states likely will respond with transparency laws, age-gating, and other guardrails. Pricing algorithms are also on the agenda. Some states may take a general approach, for example, by amending antitrust codes. But most states will seek to regulate price-setting algorithms in specific domains, like housing and insurance. Meanwhile, major legislation enacted in 2025 will take effect this year, including California’s “companion chatbot” law and Illinois’ employment-decision protections.

None of this sits well with the Trump administration. Acceleration and deregulation are twin pillars of the White House’s domestic AI agenda. Most recently, Trump issued an executive order to limit state AI laws through a multi-pronged approach: litigation, federal funding conditions, and regulatory preemption. The order’s ambition makes it legally vulnerable. The executive branch cannot unilaterally preempt state law without a delegation from Congress. Nor can the executive branch impose spending conditions that Congress itself rejected. Agencies will be hard-pressed to demonstrate otherwise in court. Legal issues aside, the order is politically tone-deaf. By large margins, Americans favor AI regulation. States are delivering. The federal government has not. Expect more of the same in 2026.

Losing Control of Autonomous Cyber Operations

Brianna Rosen, Executive Director of the Oxford Programme for Cyber and Technology Policy; Senior Fellow and Director of the AI and Emerging Technologies Initiative at Just Security: 

The emergence of highly autonomous, cyber-capable agents represents a qualitative shift in the threat landscape that existing governance frameworks are ill-equipped to address. Unlike AI systems that assist human operators, these agents can identify vulnerabilities, develop exploitation strategies, and execute intrusion campaigns with minimal human oversight. The GTG-1002 campaign disclosed last year offered an early glimpse of this future, but the systems now under development by both state and non-state actors will be far more sophisticated.

The governance challenge is twofold. First, attribution becomes significantly harder when autonomous agents can be deployed at scale, adapt their tactics in real time, and obscure their origins. Traditional deterrence models assume adversaries can be identified and held accountable; that assumption is eroding. Second, the speed of autonomous operations may compress decision cycles to the point where meaningful human control becomes impractical. An agent that can move from reconnaissance to exploitation in minutes leaves little room for deliberation.

These dynamics have implications beyond cybersecurity. Highly autonomous agents operating in other domains, from financial markets to critical infrastructure management, raise similar questions about accountability, control, and escalation risk. Policymakers in 2026 will need to move beyond sector-specific frameworks toward a more integrated approach to autonomous systems governance, one that addresses the underlying capabilities rather than their application in any single domain. The forthcoming U.S. cybersecurity strategy and ongoing discussions among allies about AI security cooperation offer opportunities to begin this work, but only if governments are willing to grapple with the harder questions about autonomy, speed, and control that these systems pose.

Teddy Nemeroff, Co-Founder of Verific AI; Former Director for International Cyber Policy on the National Security Council staff:

In 2026, agentic AI will play a decisive role in determining the balance between offense and defense in cyberspace. Last year saw the first publicly reported AI-orchestrated hacking campaign, perpetrated by Chinese state-sponsored cyber actors. Although the campaign used unsophisticated hacking tools and only compromised a handful of the approximately 30 organizations targeted, it provided a proof of concept for how AI could be used to automate 80 to 90 percent of hacking operations. This year, AI-orchestrated hacking campaigns will become the norm. Cyber criminals and states like Russia, Iran, and North Korea will adopt similar approaches, increasingly using agentic capabilities to run their own campaigns.

On the other hand, agentic AI will drive new innovations in cyber defense in 2026—from AI-enabled penetration testing to automated incident response. A key question will be how quickly these innovations can be implemented in cybersecurity products and distributed to owners and operators, especially in key critical infrastructure sectors. The resulting time lag will almost certainly give hackers the advantage in the short term, and that advantage may be overwhelming for under-resourced entities, like schools and hospitals, as well as poorer countries with weaker cyber defenses.

In addition to being a tool that cyber attackers and defenders use, in 2026, AI will increasingly shape the landscape in which they compete. As companies integrate AI capabilities into their enterprise systems, this will create new vulnerabilities that adversaries will seek to exploit and defenders must address—from prompt injection attacks to data exfiltration from AI systems. The widespread adoption of AI coding tools will also shape the cybersecurity environment in 2026, as hackers increasingly exploit weaknesses in hastily produced “vibe coded” software. On the other hand, experts hope AI can ultimately reduce software vulnerabilities through better pre-release testing and by lowering the cost to rewrite flawed code in legacy technology used to operate many critical infrastructure systems today.

The White House is expected to issue a new cybersecurity strategy in January 2026—one anticipated to place a heavy emphasis on offensive cyber operations and deterrence. This will provide an important signal as to how the Trump administration plans to address these challenges in the coming year.

U.S.-China Competition Accelerates Across the Tech Stack 

Martijn Rasser, Vice President of the Technology Leadership Directorate at the Special Competitive Studies Project:

In 2026, the transition from large language models to agentic AI—systems capable of autonomous reasoning and real-world execution—will redefine the stakes of technology transfer. We have moved from information risks to functional risks. Because agentic systems require vast, low-latency compute to manage industrial supply chains, cyber operations, and financial markets, the underlying hardware is no longer merely a commodity; it is the physical infrastructure of sovereignty.

To protect this infrastructure, the United States must move past the strategic myth that selling mid-tier hardware to adversaries ensures dependency. History and current industrial policy suggest the opposite: supplying frontier compute to strategic rivals only gives them the scaffold they need to train indigenous models and bridge the gap toward self-sufficiency. In 2026, economic security requires building a high fence around the entire compute stack. The goal of export controls should not be to keep adversaries addicted to U.S. technology, but to ensure that the most capable agentic frameworks—those that can disrupt global markets or automate high-speed cyber warfare—remain a privileged asset of the democratic world.

True diffusion should be a strategic reward for allies, not a commercial compromise with rivals. By strictly limiting compute access for adversaries while building a secure, high-capacity compute-as-a-service architecture for trusted partners, the United States can lead a coalition that ensures the rules of the road for autonomous agents are set in Washington, London, and Tokyo—not Beijing. In this new era, U.S. security lies in maintaining America’s technological lead, which should not be compromised for the sake of profit.

Geoffrey Gertz, Senior Fellow in the Energy, Economics, and Security Program at the Center for a New American Security; Former Director for International Economics at the White House: 

Over the course of 2025, the United States and China rapidly escalated trade and tech restrictions against each other, then negotiated to roll them back. By the end of the year, the superpowers had established a fragile détente, with the Trump administration refraining from imposing new export controls or sanctions to retaliate for China’s cyberespionage out of fear of upsetting this shaky truce. The result is a quiet sea change in U.S. economic security policy. After years of steady progress on the China derisking policy agenda—marked by new controls on dual-use technology exports, outbound investments, and the transfer of sensitive personal data—at the outset of 2026 the Trump administration has effectively paused any new competitive actions toward China.

The trend to watch this year is how long this truce will last, and what happens to the U.S. tech protection agenda in the meantime. The Commerce Department recently withdrew plans to restrict the import of Chinese drones and the White House opted not to introduce any significant new tariffs on semiconductors. Yet even as the Trump administration studiously avoids new tech restrictions that might destabilize the status quo, other parts of the government may have an incentive to act. Late last year, Congress passed a law that codified and expanded the outbound investment rules, and there are ongoing legislative efforts to strengthen U.S. chip controls. Meanwhile the Federal Communications Commission (FCC), a regulatory agency (debatably) independent of the administration, may step in where the Commerce Department is stepping back: in late December the FCC issued its own rule restricting the import of foreign drones. Unlike Commerce’s proposed approach, the FCC rule applies to foreign drones from any country, rather than explicitly targeting China, perhaps in an effort to avoid provoking Chinese retaliation. This may become a model for additional new restrictions.

Ultimately, the current U.S.-China truce is likely to break down at some point, whether due to a miscalculation, unforeseen shock, or simply because one side or the other determines it no longer serves its interests. At that point the floodgates may open on new tech restrictions, as the various constituencies in favor of more rapid derisking seek to make up for lost time. Any companies making business decisions on the assumption the current détente truly represents a break in the longer-term trend of heightened geoeconomic competition may be disappointed.

Scott Singer, Fellow in the Technology and International Affairs Program at the Carnegie Endowment for International Peace; Co-Founder of the Oxford China Policy Lab:  

In 2026, general-purpose AI systems will begin to transform major economies and societies in more tangible, noticeable, and irreversible ways than we have seen before. Yet backlash will intensify, creating new and more potent coalitions to rein in AI. The United States and China will be ground zero for both phenomena, with trends operating in parallel but taking shape under two very different national systems.

AI systems continue to advance in both technical capacity and real-world utility. In late 2025, tools like Claude Code enabled AI-savvy users with no coding experience to fully automate a range of computer-based tasks. With Washington and Beijing both hoping to diffuse such tools throughout their economies, 2026 will showcase a race between the relatively laissez-faire U.S. approach and China’s top-down, whole-of-society AI+ initiative. At the same time, both countries are grappling with AI risks that are no longer speculative but clearly present. China will begin implementing its recently published interim guidelines on “human-like” AI interaction services, responding to growing social concerns about emotional dependence and addiction. Meanwhile, parallel anxieties have crystallized in the United States around child safety, companion chatbots, and the ethics of rapid AI development, prompting bipartisan congressional calls for child safety legislation and new California laws targeting companion chatbots.

As AI capabilities diffuse more broadly in 2026, expect to see new stakeholder coalitions emerge in both countries demanding governance frameworks that address growing harms. In the United States, the pro-regulatory coalition may include populists of the right and left, labor, parents, Catholics and other religious communities, and civil rights groups. As legislation moves slowly, many will turn to the courts to hold companies liable for AI-related harms. In China, the CCP will enforce its own values and monitor society for grassroots sentiments that require a response. Beijing will not hesitate to penalize companies if deemed necessary.

Sam Winter-Levy, Fellow in the Technology and International Affairs Program at the Carnegie Endowment for International Peace:

A year ago, there was something close to bipartisan consensus on AI export controls: the United States should maintain as large a lead as possible over China in advanced chips and prevent Beijing from accessing the hardware needed to train competitive frontier AI systems. That’s still the dominant view in Congress, as well as in the national security and AI policy communities. But in the White House, that consensus has collapsed. In December, U.S. President Donald Trump announced he would allow Nvidia to sell its advanced H200 chip to China, in return for the U.S. government receiving a 25 percent cut of the revenue. What was once a set of national security controls off the table for bargaining has become something closer to a pricing negotiation with both Beijing and Nvidia.

The key questions for 2026 center not on whether the Trump administration will maintain this approach, but on how far it will go and what constraints it will face. If the administration greenlights the sale of millions of H200 chips to China, it risks significantly eroding the U.S. compute advantage. But the license requirement remains in place, meaning multiple agencies will have to sign off on license conditions—along with Chinese companies and Beijing. If this process results in significantly limited sales, the effect will be much more muted. Pressure from Congress, where bipartisan concern about AI transfers to adversaries remains strong, may also slow things down.

On the other side of the ledger, how much progress will China make in indigenizing semiconductor manufacturing equipment and scaling domestic chip production—and will the relaxation of U.S. controls slow that drive or give Beijing less incentive to invest? For now, most projections suggest China will continue to struggle to make competitive chips in large quantities. If the administration succeeds in plugging gaps in its semiconductor manufacturing equipment export control regime, to which it remains committed for now, China will face even steeper obstacles. But Washington will need alignment in export restrictions from allies like the Netherlands, Japan, and South Korea, which may be reluctant to bear significant costs enforcing complementary controls now that the United States seems comfortable selling advanced chips to Beijing. If U.S. allies loosen their controls, China’s ability to manufacture high-end chips domestically could improve substantially.

Meanwhile, the competition is broadening. Chinese open-source models are seeing widespread adoption across the Global South, with significant support from the Chinese government—a trend that will accelerate as Beijing scales its domestic chip production. Toward the end of last year, the Trump administration launched its AI exports program to bolster the United States as the global supplier of AI and bring partners into an American-led ecosystem. Together with its newly announced Pax Silica initiative—an attempt to build a “coalition of capabilities” among states in the semiconductor supply chain—it represents one of the administration’s most prominent efforts to marshal public resources to compete internationally with China on AI. For now, both initiatives lack substance. But if the administration follows through with sustained funding and diplomatic engagement, they have the potential to strengthen the U.S. position in what is fast becoming a truly global contest for AI influence.

Lennart Heim, Independent AI Policy & Semiconductors Researcher: 

I will continue to closely watch China’s AI chip ecosystem in 2026. Chinese manufacturers still struggle enormously to produce advanced chips at scale—domestic production remains a small fraction of what the United States and its partners design and manufacture, yields are poor, and their technology nodes lag years behind. This gap has been widening, not closing, as TSMC and other chip manufacturers keep advancing. The key questions I am tracking: How many advanced AI chips will China actually produce domestically this year, and who will buy them? Can China manage to produce high-bandwidth memory domestically, given that the country can no longer import it? I expect this to be a major challenge—China’s production will certainly increase, but from a very low base.

But something has also shifted. In December, the Trump administration approved exports of Nvidia’s H200—the most powerful chip ever cleared for sale to China. With access to high-end foreign chips for domestic use, China might now be able to divert some of its scarce domestic production toward exports, potentially creating tech ecosystem lock-ins abroad. Will China do this, and where will these chips end up?

So far, I am not aware of data centers with advanced Chinese AI chips outside of China. But as the United States pursues major AI infrastructure deals overseas, China faces pressure to promote an alternative tech ecosystem. I will be watching whether China channels domestically produced chips to foreign deployments—or whether Chinese firms might even use their now legally purchased Nvidia chips to compete with U.S. hyperscalers in third markets. China certainly cannot compete on the largest projects with its own chips; they simply are not produced at that scale. But China does not need to match U.S. scale to be strategically relevant. More likely, it might stack modest volumes of AI chips with mobile networking, subsidized financing, and smartphones preloaded with Chinese AI as the default—the whole tech stack. The chip volumes will be limited, but these beachheads matter. As we have seen with other technologies, early deployments can create long-term dependencies if they are strategic.

Quantum Computing’s Industrial Challenge

Constanza M. Vidal Bustamante, Fellow in the Technology and National Security Program at the Center for a New American Security:

The United States, China, and Europe are preparing to refresh their national quantum programs in 2026, making this a pivotal year for quantum policy. As quantum sensors and computers move toward real-world utility and nations compete to secure their economic and security advantages, they are converging on a defining challenge: whether their industrial bases and supply chains are ready to support scale.

Although the United States boasts a world-leading ecosystem of universities and startups, thin and globally dispersed supply chains increasingly constrain its quantum progress. The country relies heavily on foreign (including Chinese) or fragile single-supplier markets for critical inputs, from precision lasers and cryogenics to photonic materials and advanced microfabrication. Yet less than twelve percent of federal quantum funding supports domestic enabling technologies and manufacturing capacity. Congressional bills and rumored upcoming executive orders signal awareness of some of these gaps, but concrete outcomes remain uncertain, especially as quantum continues to compete for attention with higher-profile policy priorities such as AI and conventional semiconductor manufacturing.

Meanwhile, China’s Fifteenth Five-Year Plan, due this March, is expected to further strengthen its already formidable industrial base by elevating quantum to the top “industry of the future.” And Europe, for its part, is preparing a Quantum Act for release in the second quarter, emphasizing “Made in Europe” industrial capacity and supply chains as part of a broader push for technological sovereignty.

The United States and Europe must take care not to turn their drive for self-reliance into costly fragmentation. Fully indigenizing quantum supply chains on either side of the Atlantic would demand time and investment neither can afford if they hope to stay ahead of China. A more credible path may lie in pooling allied capabilities now to secure trusted sources of critical materials, components, fabrication, and systems, while building domestic capacity over time—an approach reflected in initiatives such as the U.S. Department of State’s Pax Silica framework for AI. Whether the United States and its allies act on this logic in 2026 may determine whether they reap the substantial national security and economic gains of quantum technologies they have long sought—or cede that value to strategic competitors.

An Authoritarian Turn in Tech Policy?

Justin Hendrix, Cofounder and CEO of Tech Policy Press:

In February, heads of state and leaders of industry will gather in Delhi for the AI Impact Summit, the fourth in a series of global conferences that kicked off just one year after the launch of OpenAI’s ChatGPT. The tagline for the event is “Welfare for All, Happiness for All.” While we can expect another measured announcement from the gathered elites about international cooperation towards that goal, the year ahead appears set to more fully reveal what earlier techno-optimism and billions of dollars in marketing have obscured: that under present conditions, AI is more likely, on balance, to undermine democracy and strengthen authoritarianism.

Indeed, where authoritarianism is rising—and that is nearly everywhere, according to the 2025 editions of the Economist Intelligence Unit Democracy Index, the Freedom House Freedom in the World report, and the V-Dem Institute Democracy Report—AI is increasingly a tool of authoritarian control and a threat to democratic systems. OpenAI might be selling “democratic AI,” but as legal scholars Woodrow Hartzog and Jessica Silbey contend, today’s “AI systems are built to function in ways that degrade and are likely to destroy our crucial civic institutions,” which they say include universities, the free press, and the rule of law itself. That is even before accounting for all of the ways AI is being deployed around the world for surveillance, manipulation, and control.

A clinical look at the situation requires adopting a new frame for the new year and prioritizing interrogation of the “tenacious connections” between AI and authoritarianism rather than building more intellectual scaffolding for “responsible AI.” Instead of prioritizing questions like, “How do we regulate tech to ensure a healthier democracy?” we should put more effort into answering “How do we preserve space for human agency and resistance in an increasingly authoritarian century?”

This may appear to be a pessimistic reorientation, but it points towards what should be sources of hope: the clear need for solidarity with existing movements for the rule of law, democracy, human rights, and social and environmental justice; and the urgency of building alternative public technology infrastructures that are free from both corporate and state control. This is the reorientation that is needed in tech policy, beyond simply asking corporations and states to behave more responsibly when it comes to AI.

Petra Molnar, Associate Director of York University’s Refugee Law Lab; Faculty Associate at Harvard’s Berkman Klein Center for Internet and Society: 

I expect a shift in the use of border technologies by Global North states, where surveillance and screening start well before a person reaches a port of entry through increasingly networked systems of social media surveillance, predictive analytics, and automated decision-making. This direction also complements the growth of more automated physical surveillance infrastructure, including autonomous surveillance towers at the U.S.-Mexico border, and additional routine inland surveillance practices, such as biometric data gathering, app-based verification, and social media monitoring, that expand already discretionary decision-making. All these technologies have profound ramifications for people’s human rights and civil liberties, yet governance mechanisms are lacking. While the European Union’s AI Act will likely shape regulatory conversations in 2026 on border technologies, the Act also leaves substantial room for state security rationales and procurement realities to determine what is deployed in practice, instead of leading from a rights-protecting perspective at the border.

At the same time, global migration numbers will likely continue increasing in 2026. People in need of protection will continue to flee protracted and new conflicts, exercising their internationally protected right to seek asylum. But in the current anti-migration political climate, governments will continue pushing for the normalization of surveillance and expanded data collection, with private companies like Anduril, Palantir, and Elbit Systems benefiting from lucrative public-private partnerships in a multi-billion dollar border industrial complex. As such, 2026 may be less about “new” tools—like robo-dogs, iris-scanning orbs, and border vigilante apps—and more about existing tools becoming integrated into a continuous data-driven pipeline, from social media and biometrics to automated triage. Practical harms will not only include privacy loss, but also the amplification of discrimination and exclusion through opacity, algorithmic error, chilled speech and association, as well as the weakening of international legal norms.
