Just Security’s Artificial Intelligence Archive
https://www.justsecurity.org/99958/just-securitys-artificial-intelligence-archive/
(December 15, 2025)

Just Security’s collection of articles analyzing the implications of AI for society, democracy, human rights, and warfare.

Since 2020, Just Security has been at the forefront of analysis on rapid shifts in AI-enabled technologies, providing expert commentary on risks, opportunities, and proposed governance mechanisms. The catalog below organizes our collection of articles on artificial intelligence into general categories to facilitate access to relevant topics for policymakers, academic experts, industry leaders, and the general public. The archive will be updated as new articles are published. The archive is also available in reverse chronological order on the artificial intelligence articles page.

AI Governance

Trump’s Chip Strategy Needs Recalibration
By Michael Schiffer (December 15, 2025)

AI Model Outputs Demand the Attention of Export Control Agencies
By Joe Khawam and Tim Schnabel (December 12, 2025)

Governing AI Agents Globally: The Role of International Law, Norms and Accountability Mechanisms
By Talita Dias (October 17, 2025)

Dueling Strategies for Global AI Leadership? What the U.S. and China Action Plans Reveal
By Zena Assaad (September 4, 2025)

Selling AI Chips Won’t Keep China Hooked on U.S. Technology
By Janet Egan (September 3, 2025)

The AI Action Plans: How Similar are the U.S. and Chinese Playbooks?
By Scott Singer and Pavlo Zvenyhorodskyi (August 26, 2025)

Assessing the Trump Administration’s AI Action Plan
By Sam Winter-Levy (July 25, 2025)

Decoding Trump’s AI Playbook: The AI Action Plan and What Comes Next
Brianna Rosen interview with Joshua Geltzer, Jenny Marron and Sam Winter-Levy (July 24, 2025)

Rethinking the Global AI Race
By Lt. Gen. (ret.) John (Jack) N.T. Shanahan and Kevin Frazier (July 21, 2025)

The Trump Administration’s AI Action Plan Is Coming. Here’s What to Look For.
By Joshua Geltzer (July 18, 2025)

AI Copyright Wars Threaten U.S. Technological Primacy in the Face of Rising Chinese Competition
By Bill Drexel (July 8, 2025)

What Comes Next After Trump’s AI Deals in the Gulf
By Alasdair Phillips-Robins and Sam Winter-Levy (June 4, 2025)

AI Governance Needs Federalism, Not a Federally Imposed Moratorium
By David S. Rubenstein (May 29, 2025)

Open Questions for China’s Open-Source AI Regulation
By Nanda Min Htin (May 5, 2025)

The Just Security Podcast: Trump’s AI Strategy Takes Shape
Brianna Rosen interview with Joshua Geltzer (April 17, 2025)

Shaping the AI Action Plan: Responses to the White House’s Request for Information
By Clara Apt and Brianna Rosen (March 18, 2025)

Export Controls on Open-Source Models Will Not Win the AI Race
By Claudia Wilson and Emmie Hine (February 25, 2025)

The Just Security Podcast: Key Takeaways from the Paris AI Action Summit
Paras Shah interview with Brianna Rosen (February 12, 2025)

The Just Security Podcast: Diving Deeper into DeepSeek
Brianna Rosen interview with Lennart Heim, Keegan McBride and Lauren Wagner (February 4, 2025)

What DeepSeek Really Changes About AI Competition
By Konstantin F. Pilz and Lennart Heim (February 3, 2025)

Throwing Caution to the Wind: Unpacking the U.K. AI Opportunities Action Plan
By Elke Schwarz (January 30, 2025)

What Just Happened: Trump’s Announcement of the Stargate AI Infrastructure Project
By Justin Hendrix (January 22, 2025)

The Future of the AI Diffusion Framework
By Sam Winter-Levy (January 21, 2025)

Unpacking the Biden Administration’s Executive Order on AI Infrastructure
By Clara Apt and Brianna Rosen (January 16, 2025)

Trump’s Balancing Act with China on Frontier AI Policy
By Scott Singer (December 23, 2024)

The AI Presidency: What “America First” Means for Global AI Governance
By Brianna Rosen (December 16, 2024)

The United States Must Win The Global Open Source AI Race
By Keegan McBride and Dean W. Ball (November 7, 2024)

AI at UNGA79: Recapping Key Themes
By Clara Apt (October 1, 2024)

Rethinking Responsible Use of Military AI: From Principles to Practice
By Brianna Rosen and Tess Bridgeman (September 26, 2024)

Competition, Not Control, is Key to Winning the Global AI Race
By Matthew Mittelsteadt and Keegan McBride (September 17, 2024)

The Just Security Podcast: Strategic Risks of AI and Recapping the 2024 REAIM Summit
Paras Shah interview with Brianna Rosen (September 12, 2024)

Putting the Second REAIM Summit into Context
By Tobias Vestner and Simon Cleobury (September 5, 2024)

The Nuts and Bolts of Enforcing AI Guardrails
By Amos Toh and Ivey Dyson (May 30, 2024)

House Meeting on White House AI Overreach Highlights Congressional Inaction
By Melanie Geller and Julian Melendi (April 12, 2024)

Why We Need a National Data Protection Strategy
By Alex Joel (April 4, 2024)

Is the Biden Administration Reaching a New Consensus on What Constitutes Private Information?
By Justin Hendrix (March 19, 2024)

The Just Security Podcast: How Should the World Regulate Artificial Intelligence?
Paras Shah and Brianna Rosen interview with Robert Trager (February 2, 2024)

It’s Not Just Technology: What it Means to be a Global Leader in AI
By Kayla Blomquist and Keegan McBride (January 4, 2024)

AI Governance in the Age of Uncertainty: International Law as a Starting Point
By Talita de Souza Dias and Rashmin Sagoo (January 2, 2024)

Experts React: Unpacking the Biden Administration’s New Efforts on AI
By Ian Miller (November 14, 2023)

Biden’s Executive Order on AI Gives Sweeping Mandate to DHS
By Justin Hendrix (November 1, 2023)

The Tragedy of AI Governance
By Simon Chesterman (October 18, 2023)

Introducing the Symposium on AI Governance: Power, Justice, and the Limits of the Law
By Brianna Rosen (October 18, 2023)

U.S. Senate AI Hearings Highlight Increased Need for Regulation
By Faiza Patel and Melanie Geller (September 25, 2023)

The Perils and Promise of AI Regulation
By Faiza Patel and Ivey Dyson (July 26, 2023)

Weighing the Risks: Why a New Conversation is Needed on AI Safety
By Michael Depp (June 30, 2023)

To Legislate on AI, Schumer Should Start with the Basics
By Justin Hendrix and Paul M. Barrett (June 28, 2023)

Regulating Artificial Intelligence Requires Balancing Rights, Innovation
By Bishop Garrison (January 11, 2023)

Emerging Tech Has a Front-Row Seat at India-Hosted UN Counterterrorism Meeting. What About Human Rights?
By Marlena Wisniak (October 28, 2022)

NATO Must Tackle Digital Authoritarianism
By Michèle Flournoy and Anshu Roy (June 29, 2022)

NATO’s 2022 Strategic Concept Must Enhance Digital Access and Capacities
By Chris Dolan (June 8, 2022)

Watchlisting the World: Digital Security Infrastructures, Informal Law, and the “Global War on Terror”
By Ramzi Kassem, Rebecca Mignot-Mahdavi and Gavin Sullivan (October 28, 2021)

One Thousand and One Talents: The Race for A.I. Dominance
By Lucas Irwin (April 7, 2021)

National Security & War

Embedded Human Judgment in the Age of Autonomous Weapons
By Lena Trabucco (October 16, 2025)

AI’s Hidden National Security Cost
By Caroline Baxter (October 1, 2025)

Harnessing the Transformative Potential of AI in Intelligence Analysis
By Rachel Bombach (August 12, 2025)

The Law Already Supports AI in Government — RAG Shows the Way
By Tal Feldman (May 16, 2025)

The United States Must Avoid AI’s Chernobyl Moment
By Janet Egan and Cole Salvador (March 10, 2025)

A Start for AI Transparency at DHS with Room to Grow
By Rachel Levinson-Waldman and Spencer Reynolds (January 22, 2025)

The U.S. National Security Memorandum on AI: Leading Experts Weigh In
By Just Security (October 25, 2024)

The Double Black Box: AI Inside the National Security Ecosystem
By Ashley Deeks (August 14, 2024)

As DHS Implements New AI Technologies, It Must Overcome Old Shortcomings
By Spencer Reynolds and Faiza Patel (May 21, 2024)

The Machine Got it Wrong? Uncertainties, Assumptions, and Biases in Military AI
By Arthur Holland Michel (May 13, 2024)

An Oversight Model for AI in National Security: The Privacy and Civil Liberties Oversight Board
By Faiza Patel and Patrick C. Toomey (April 26, 2024)

Bringing Transparency to National Security Uses of Artificial Intelligence
By Faiza Patel and Patrick C. Toomey (April 4, 2024)

National Security Carve-Outs Undermine AI Regulations
By Faiza Patel and Patrick C. Toomey (December 21, 2023)

Unhuman Killings: AI and Civilian Harm in Gaza
By Brianna Rosen (December 15, 2023)

The Path to War is Paved with Obscure Intentions: Signaling and Perception in the Era of AI
By Gavin Wilde (October 20, 2023)

DHS Must Evaluate and Overhaul its Flawed Automated Systems
By Rachel Levinson-Waldman and José Guillermo Gutiérrez (October 19, 2023)

AI and the Future of Drone Warfare: Risks and Recommendations
By Brianna Rosen (October 3, 2023)

Latin America and Caribbean Nations Rally Against Autonomous Weapons Systems
By Bonnie Docherty and Mary Wareham (March 6, 2023)

Investigating (Mis)conduct in War is Already Difficult
By Laura Brunn (January 5, 2023)

Gendering the Legal Review of New Means and Methods of Warfare
By Andrea Farrés Jiménez (August 23, 2022)

Artificial Intelligence in the Intelligence Community: Oversight Must Not Be an Oversight
By Corin R. Stone (November 30, 2021)

Artificial Intelligence in the Intelligence Community: Know Risk, Know Reward
By Corin R. Stone (October 19, 2021)

Artificial Intelligence in the Intelligence Community: The Tangled Web of Budget & Acquisition
By Corin R. Stone (September 28, 2021)

Embedding Gender in International Humanitarian Law: Is Artificial Intelligence Up to the Task?
By Andrea Farrés Jiménez (August 27, 2021)

Artificial Intelligence in the Intelligence Community: Culture is Critical
By Corin R. Stone (August 17, 2021)

Artificial Intelligence in the Intelligence Community: Money is Not Enough
By Corin R. Stone (July 12, 2021)

Adding AI to Autonomous Weapons Increases Risks to Civilians in Armed Conflict
By Neil Davison and Jonathan Horowitz (March 26, 2021)

Democracy

The AI Action Plan and Federalism: A Constitutional Analysis
By David S. Rubenstein (July 30, 2025)

U.S. AI-Driven “Catch and Revoke” Initiative Threatens First Amendment Rights
By Faiza Patel (March 18, 2025)

The Munich Security Conference Provides an Opportunity to Improve on the AI Elections Accord
By Alexandra Reeve Givens (February 13, 2025)

Q&A with Marietje Schaake on the Tech Coup and Trump
By Marietje Schaake (February 6, 2025)

Maintaining the Rule of Law in the Age of AI
By Katie Szilagyi (October 9, 2024)

Shattering Illusions: How Cyber Threat Intelligence Augments Legal Action against Russia’s Influence Operations
By Mason W. Krusch (October 8, 2024)

Don’t Downplay Risks of AI for Democracy
By Suzanne Nossel (August 28, 2024)

Tracking Tech Company Commitments to Combat the Misuse of AI in Elections
By Allison Mollenkamp and Clara Apt (March 28, 2024)

Multiple Threats Converge to Heighten Disinformation Risks to This Year’s US Elections
By Lawrence Norden, Mekela Panditharatne and David Harris (February 16, 2024)

Is AI the Right Sword for Democracy?
By Arthur Holland Michel (November 13, 2023)

The Just Security Podcast: The Dangers of Using AI to Ban Books
Paras Shah interview with Emile Ayoub (October 27, 2023)

Process Rights and the Automation of Public Services through AI: The Case of the Liberal State
By John Zerilli (October 26, 2023)

Using AI to Comply With Book Bans Makes Those Laws More Dangerous
By Emile Ayoub and Faiza Patel (October 3, 2023)

Regulation is Not Enough: A Blueprint for Winning the AI Race
By Keegan McBride (June 29, 2023)

The Existential Threat of AI-Enhanced Disinformation Operations
By Bradley Honigberg (July 8, 2022)

System Rivalry: How Democracies Must Compete with Digital Authoritarians
By Ambassador (ret.) Eileen Donahoe (September 27, 2021)

Surveillance
Social Media & Content Moderation
Further Reading

Decoding Trump’s AI Playbook: The AI Action Plan and What Comes Next
https://www.justsecurity.org/117752/just-security-podcast-trump-ai-action-plan/
(July 24, 2025)

Joshua Geltzer, Jenny Marron, and Sam Winter-Levy join Brianna Rosen on the Just Security podcast to discuss the Trump administration’s AI Action Plan.

Yesterday, the White House released its long-awaited AI Action Plan and signed three executive orders on AI, laying out the Trump administration’s strategy to secure what it calls “unquestioned and unchallenged” U.S. dominance across the entire AI tech stack. Framing AI as a global race for technological supremacy, the Plan envisions nothing short of an industrial revolution, an information revolution—and even a renaissance—all driven by AI.

To achieve that vision, the Plan centers on three pillars: innovation, infrastructure, and international diplomacy and security. It calls for upskilling the workforce, revising federal rules, building high-security data centers, and tightening export controls, all while removing what the administration views as regulatory obstacles to faster AI adoption.

The plan also raises major questions. What’s the role of government in steering this technology responsibly? Are we building the right guardrails as we scale up? And what message is the U.S. sending to allies and adversaries as it charts a new course in AI policy?

Intelligence Implications of the Shifting Iran Strike Narrative
https://www.justsecurity.org/115642/intelligence-implications-iran-midnight-hammer/
(June 26, 2025)

How the growing politicization of the U.S. intelligence community undermines the integrity of decision-making on Iran and national security more broadly.

At a press conference on Thursday, U.S. Defense Secretary Pete Hegseth and Chairman of the Joint Chiefs of Staff Gen. Dan Caine presented the most detailed explanation yet of Operation Midnight Hammer, the June 22 strikes on three Iranian nuclear sites. Defending President Donald Trump’s claim that the operation “obliterated” Iran’s nuclear program, Hegseth cited CIA, Israeli, and international reports as proof the strikes “severely damaged” the program, setting it back by years. Caine struck a more apolitical tone, deferring to the intelligence community on the battle damage assessment. 

Even as Trump officials project confidence, the intelligence picture remains fluid and contested. A leaked preliminary report by the Pentagon’s Defense Intelligence Agency (DIA) assessed the operation set Iran’s nuclear program back by mere months, according to several media reports, while CIA Director John Ratcliffe and Director of National Intelligence Tulsi Gabbard have circulated statements suggesting more severe damage, citing “new intelligence” reporting. Evolving intelligence assessments are not unusual in the immediate aftermath of a military operation, as full battle damage assessments typically take weeks to produce, and the preliminary DIA report—made with “low confidence” within 24 hours of the strike—was not coordinated with the broader U.S. intelligence community. 

What is unusual, and deeply troubling, is the overt politicization of the intelligence process on a critical national security issue such as Iran. The Trump administration’s rapid efforts to discredit and sideline internal dissent, delay or restrict intelligence sharing with congressional oversight committees, and elevate favorable Israeli intelligence assessments signal a dangerous shift toward politicization. In a moment of heightened regional instability, the erosion of objectivity within the intelligence community undermines the integrity of U.S. decision-making on Iran and puts broader national security interests at risk.

What We Know So Far  

According to the New York Times, the leaked DIA assessment concluded the strikes delayed Iran’s nuclear program by fewer than six months, citing evidence that the strikes sealed off the entrance to two of the nuclear sites but did not collapse their underground buildings. The assessment also reportedly claimed Iran moved much of its uranium enrichment stockpile prior to the strike, suggesting it retains access to highly enriched nuclear material. Notably, the text of the leaked assessment has not yet been made public, and Trump’s personal attorney has disputed the report and threatened to sue the New York Times for libel over its characterization of the DIA assessment.

The CIA and Office of the Director of National Intelligence (ODNI) have presented a starker picture, although one that is potentially not inconsistent with the initial DIA report. In a statement, Ratcliffe confirmed the CIA had new intelligence from a “historically reliable” source that “several key Iranian nuclear facilities were destroyed and would have to be rebuilt over the course of years.” It is not clear yet what he meant by “several facilities” and whether these are within the three nuclear sites that were struck. Gabbard similarly asserted new intelligence reporting indicates Fordow, Natanz, and Esfahan were “severely damaged” and “would likely take years to rebuild.” Gabbard previously testified before Congress in March that Iran was not building a nuclear weapon, but seemed to walk back that line after Trump publicly criticized her assessment as “wrong.”  

Israeli and Iranian officials, for their part, have offered mixed damage assessments. On June 25, Israel Defense Forces (IDF) Chief of Staff Gen. Eyal Zamir asserted Iran’s nuclear program had suffered “severe, extensive, and deep damage and has been set back by years.” In his first pre-recorded speech since the ceasefire between Israel and Iran, Iranian Supreme Leader Ayatollah Ali Khamenei claimed victory over the United States and Israel, echoing his earlier statement that Trump’s assessment of the damage to Iran’s nuclear program was “exaggerated.” The Supreme Leader’s remarks appeared in tension with previous statements from the Iranian Foreign Ministry spokesperson, who said Iran’s nuclear installations were “badly damaged” in the U.S. and Israeli strikes.

Even if the United States did severely damage the three nuclear sites, it does not necessarily follow that the strikes set Iran’s overall nuclear program back by years. It may indeed take Iran years to rebuild those facilities but, crucially, Iran does not need to rebuild all three to weaponize. Some Israeli officials assess Iran maintains small covert enrichment facilities that would allow it to continue its nuclear program in the event of an attack on its larger facilities, raising the prospect of Iran being able to construct a crude nuclear device. Striking nuclear facilities, or even nuclear scientists themselves, does not eliminate the knowledge Iran has already acquired of the weaponization process.

Irrespective of what the damage assessment ultimately says, there is no firm evidence yet that Iran’s nuclear program was destroyed. Gathering such evidence will take time, requiring monitoring from the International Atomic Energy Agency (IAEA), should Iran allow inspectors back in, or human intelligence sources on the ground. On Thursday, IAEA Director General Rafael Grossi suggested it would be difficult to evaluate damage from the strikes based on satellite imagery alone, while confirming that centrifuges at Fordow were “no longer operational.” Preliminary intelligence assessments reportedly provided to European governments indicate Iran’s enriched uranium stockpile remains largely intact.

In the coming weeks, additional technical assessments will follow, with Ratcliffe pledging to “provide updates and information to the American people.” Even then, the ultimate effectiveness of the strikes will remain a matter of debate, as Iran could maintain covert enrichment facilities or uranium stockpiles that were either unknown to U.S. intelligence or moved before or after the strikes. 

The U.S. intelligence community now faces the difficult task of parsing through satellite imagery, signals intelligence, human intelligence, allied intelligence reports, and open source information to try to present policymakers with the fullest picture of the facts. This is never a straightforward process, as even Iranian decisionmakers may not yet understand the full extent of the damage, or might be misled by their own military officials seeking to downplay it. The risk of mis- and disinformation, as well as covert deception campaigns, is high. In the days ahead, the intelligence community must remain laser focused on presenting the facts with analytic integrity, a task made immeasurably more difficult by the increasing politicization of intelligence under Trump. 

Growing Signs of Politicization 

On Thursday, the Senate received a classified briefing on the strikes, with a briefing for the House planned for tomorrow. Ratcliffe represented the intelligence community, alongside Hegseth, Caine, and Secretary of State Marco Rubio, with Gabbard notably absent. Several Democrats leaving the briefing expressed skepticism about the Trump administration’s claims, with Senator Chris Murphy (D-CT) asserting it “still appears we have only set back the Iranian nuclear program by a handful of months.” The White House previously postponed classified briefings on the operation required under the 1973 War Powers Resolution, prompting alarm among lawmakers. “I’m very concerned about [Trump] distorting, manipulating, and even lying about intelligence,” Sen. Chris Van Hollen (D-MD) said. “We’ve been here before. We went to war in Iraq under false pretenses.”

The Trump administration has said it plans to limit intelligence sharing with Congress following the unauthorized disclosure of the DIA report, which White House Press Secretary Karoline Leavitt characterized as “flat-out wrong” and leaked by a “low-level loser in the intelligence community.” The FBI has since opened an investigation into the source of the leaks. 

In an unprecedented move, the Trump administration has promoted Israeli intelligence assessments and even selective Iranian statements to bolster its case on the effectiveness of the strikes. A White House statement titled “Iran’s Nuclear Facilities Have Been Obliterated — And Suggestions Otherwise are Fake News” cites the IDF Chief of Staff and Iranian Foreign Ministry spokesperson, alongside U.S. officials.

The Iran strikes are not the first indication of growing politicization within the intelligence community. In May, Gabbard fired two senior officials at the National Intelligence Council (NIC) after the group produced a partially declassified intelligence assessment contradicting Trump’s rationale for invoking the Alien Enemies Act to deport alleged Venezuelan gang members. In emails obtained by the Times from DNI Chief of Staff Joe Kent to senior ODNI officials, Kent pushed for “some rewriting” so the NIC assessment “would not be used against the DNI or POTUS [President Trump].” That came after he had earlier pressed analysts to redo a Feb. 26 assessment of the Venezuelan matter. “Some intelligence officials took Mr. Kent’s intervention as an attempt to politicize the findings and push them in line with the Justice Department arguments and the Trump administration policy,” the Times reported.

More broadly, Gabbard has sought to consolidate control over the intelligence process by moving the NIC to ODNI headquarters and attempting to do the same with the office responsible for compiling the President’s Daily Brief (PDB). Since Trump returned to power, senior intelligence personnel have been removed or reassigned as part of a wider restructuring of the intelligence community. 

What Congress Should Do Now 

The increasing politicization of the intelligence community on critical national security matters puts everyone at risk. But Congress is not powerless to act. Lawmakers can take a number of steps in the immediate and longer term to ensure that intelligence remains politically neutral. 

First, Congress must press for the fullest possible picture of what happened during Operation Midnight Hammer. That begins with robust, closed-door briefings that feature career analysts alongside senior appointees, so lawmakers can hear the range of professional judgments and interrogate assumptions. 

In the longer term, a careful, bipartisan review—akin to the Iraq or Afghanistan War Commission but on a smaller scale—would clarify the facts surrounding this strike and send a signal that intelligence must not be politicized as a matter of national security. As part of such a review, Congress should scrutinize the unusual prominence accorded to Israeli intelligence assessments to ensure that U.S. intelligence is not being sidelined. Allies provide invaluable insight, yet each has its own strategic calculus in doing so. Elevating any external narrative above the collective judgment of the U.S. intelligence community risks outsourcing American decision-making to Israel. The very creation of such a body can help insulate the intelligence community from political interference going forward. It would also send an important signal that, on this and other issues, there is a prospect of external review that can detect and deal with any politicization of intelligence work.

Alongside these actions, broader measures are urgently needed within the intelligence community to protect whistleblowers, safeguard independent analysis, and ensure that leak investigations are not used as instruments of intimidation against dissenting analysis or inconvenient assessments. If career analysts believe they will face retribution for assessments that do not align with White House policy, the entire intelligence and decision-making process collapses. One step, as former senior CIA executive Brian O’Neill has proposed in these pages, would be for Congress to establish a standing analytic review board—independent of the ODNI—to adjudicate internal complaints about intelligence manipulation, coercion, or suppression. Other potential steps include establishing an analytic integrity review panel, modeled on the State Department’s Historical Advisory Committee, which would serve as a neutral, public-facing check on institutional drift. Congress could require quarterly testimony from this review panel, as well as from the ODNI’s Analytic Ombuds Office and senior officials responsible for producing the PDB, to ensure political pressure or bias are not shaping intelligence analysis. 

Finally, Congress should push the Trump administration to be transparent with the American people about the effectiveness of the strikes without delay. The administration can do this by declassifying intelligence assessments to the fullest extent possible, while protecting sources and methods. 

By asserting rigorous oversight now, Congress can begin to repair the damage being done to the intelligence community. The goal of such oversight should not be to score shortsighted partisan victories, but rather to ensure that any future course of action on Iran and other matters is anchored in accurate, robust intelligence. Without that foundation, even the most decisive tactical successes ultimately risk becoming strategic liabilities. 

The Day After U.S. Strikes on Iran’s Nuclear Program: A Policy and Legal Assessment
https://www.justsecurity.org/115234/policy-legal-iran-nuclear-strikes/
(June 22, 2025)

An expert policy and legal assessment of the U.S. strikes on Iran’s nuclear facilities and what comes next.

On June 21, the United States conducted airstrikes on three nuclear sites in Iran—Fordow, Natanz, and Esfahan—marking the first direct U.S. military attack on Iran’s nuclear program. The strikes bring the United States directly into the Israel-Iran War that began on June 13 with Israeli attacks against Iranian nuclear and military infrastructure, as well as top officials and scientists. The strikes also risk a dramatic escalation, potentially expanding into a much broader conflict. In a White House address, President Donald Trump described the U.S. operation as “highly successful,” claiming the targeted sites were “totally and completely obliterated.” At the time of this writing, no independent battle damage assessment (BDA) has been released. Chairman of the Joint Chiefs of Staff Gen. Dan Caine stated, “I think BDA is still pending, and it would be way too early for me to comment on what may or may not still be there.” Iran has condemned the strikes as a violation of international law and requested an emergency U.N. Security Council session, threatening “everlasting consequences.”

These developments raise urgent questions about the risks of regional escalation, the legality of unilateral military action, and the long-term implications for the global nonproliferation regime. The following analysis aims to help policymakers, lawmakers, journalists, and the public assess the implications of this moment. For further expert commentary, see other installments in Just Security’s collection on the Israel-Iran conflict.

Iran’s Retaliatory Options

Iran now faces a range of constrained but potentially destabilizing choices. Iranian leaders likely will try to reassert deterrence through a show of continued strength, while avoiding actions that could trigger full-scale war with the United States. Setting aside legal considerations, the options below are not mutually exclusive and carry varying levels of risk in terms of conflict escalation and potential blowback to Iran:

Missile and Drone Strikes. Iran maintains a diverse and advanced missile stockpile, with some capable of targeting the United States and its allies, although the Israeli government claims to have destroyed half of Iran’s missile launchers since June 13. Tehran may opt for a calibrated strike—similar to its 2020 missile attack on Al-Asad Airbase in Iraq, which houses U.S. and allied forces, following the U.S. drone strike that killed Iranian general Qassem Soleimani in Iraq. (That Iranian attack resulted in over 100 U.S. troops suffering traumatic brain injuries.) Such a strike could demonstrate resolve while aiming to stay below the threshold of major escalation, but it may not be viewed within Iranian leadership as a sufficient response to the U.S. use of force against Iran.

Proxy and Terrorist Attacks. Iran may activate regional proxies such as Hezbollah in Lebanon, Shia militias in Iraq and Syria, or the Houthis in Yemen. These groups could target U.S. personnel, military installations, or allied interests in the region. The Houthis have already threatened to target U.S. naval vessels in the Red Sea in response to the U.S. strikes. A Hezbollah spokesperson, by contrast, indicated the group has no immediate plans to retaliate following the U.S. strikes. 

Even if these groups do join the fighting, Iran’s most capable regional proxies—Hezbollah and Hamas—have suffered major setbacks since the Israeli response to the Hamas-led attacks of Oct. 7, 2023. Israeli officials estimate that Hezbollah maintains only some 20 percent of the missiles and rockets it had before the war. Dozens of senior Hezbollah commanders, including the longtime leader Hassan Nasrallah, have been killed in targeted Israeli strikes. Hamas, meanwhile, has seen much of its leadership killed or displaced, and its military infrastructure in Gaza severely degraded by sustained Israeli operations.

Tehran may also turn to transnational terrorism. The direct use of force against Iran could prompt attacks beyond the region. Hezbollah-linked operatives and Iranian sleeper cells may pose a potential threat to Israeli, European, or U.S. interests. Iran has demonstrated the intent and capacity to orchestrate extraterritorial attacks, including a foiled 2011 attempt to assassinate the Saudi ambassador in Washington, DC; a 2022 murder-for-hire plot targeting Iranian-American journalist Masih Alinejad in New York City; alleged plots against former National Security Advisor John Bolton and former Secretary of State Mike Pompeo; and reported threats against Trump in 2024. 

Maritime Disruption. Iran could attempt to disrupt maritime traffic in the Strait of Hormuz through mining, drone activity, or harassing commercial and military vessels. Such action would threaten global energy markets and signal Tehran’s willingness to retaliate, but would also pose a high risk of direct confrontation with U.S. and allied forces that could threaten regime survival. Such a move also risks angering China, the largest importer of Iranian oil, whose economy depends on energy transported through the Strait and which has so far remained relatively neutral in the conflict.

Cyber Operations. Cyber retaliation remains a likely and strategically flexible option. Iran has previously targeted critical infrastructure through cyberattacks and has intensified such operations amid ongoing conflict with Israel. Iranian malicious cyber activity reportedly surged by more than 700 percent since June 13. 

A cyber response could allow Iran to impose costs while retaining plausible deniability. Potential targets include energy infrastructure, telecommunications systems, financial networks, and other critical infrastructure within the United States and abroad. U.S. companies are on high alert, but defensive gaps remain—particularly in light of the Trump administration’s recent staff reductions and proposed additional budget cuts at the U.S. Cybersecurity and Infrastructure Security Agency. 

Cyber escalation is inherently unpredictable. Even without deliberate targeting, malicious code can propagate widely, as seen in the unintended global spread of Stuxnet—an alleged U.S.-Israel cyberweapon designed to disrupt Iran’s centrifuges that ultimately infected companies based in the United States.

Withdrawal from the Nuclear Non-Proliferation Treaty (NPT). Iran may choose to withdraw from the NPT and reconstitute its nuclear program outside international oversight. (The treaty obligates non-nuclear-weapon States to forgo the development or acquisition of nuclear weapons, while allowing access to peaceful nuclear technology under IAEA safeguards, among other constraints.) While this would carry significant diplomatic and economic consequences, it could serve as a form of strategic signaling and raise the specter of a nuclear breakout (see further discussion on potential NPT withdrawal below.)

Legal Status of the Strikes

From a domestic legal perspective, the U.S. Constitution vests the power to declare war in Congress. The June 21 strikes were carried out without congressional authorization, and the Trump administration has not invoked any existing statutory basis—such as the 2001 or 2002 Authorizations for the Use of Military Force (AUMFs)—to justify the operation. As Brian Egan and one of us (Tess Bridgeman) have explained, neither of those two AUMFs authorizes war against Iran; moreover, direct strikes against Iran on its territory with the obvious threat of regional escalation almost certainly rise to the level of “war in the constitutional sense,” such that the President could not rely on Article II authority alone but would need congressional authorization.

The War Powers Resolution requires the president to notify Congress within 48 hours of introducing U.S. forces into hostilities and requires withdrawal from otherwise unauthorized military operations after 60 days unless Congress provides authorization. Congress could also try to pass a bill under the War Powers Resolution to require immediate withdrawal (a few such bills have already been introduced), but it would have to overcome a potential presidential veto to be enacted. Regardless, the absence of any legislative authorization for such a consequential use of force raises serious concerns about democratic accountability and the erosion of Congress’ constitutional role.

From an international legal perspective, the U.N. Charter prohibits the use of force against the territorial integrity of another State under Article 2(4), except in narrow cases of self-defense under Article 51 or with U.N. Security Council authorization. The United States has not yet publicly presented evidence of an imminent armed attack that would justify self-defense under Article 51. 

The United States also has not cited a formal request for collective self-defense from Israel, although Israel has publicly called on the United States to do so. At a press conference following the U.S. strikes, Secretary of Defense Pete Hegseth said, “the president authorized a precision operation to neutralize the threats to our national interests posed by the Iranian nuclear program and [in support of] the collective self-defense of our troops and our ally, Israel.” Any formal invocation of collective self-defense, however, would merit close scrutiny, considering the legality of Israel’s military intervention is subject to debate (see here and here). In the absence of either a self-defense or collective self-defense justification, the strikes may be viewed as preventive, placing them on uncertain legal footing at best. 

Even if the United States does present such a case, any use of force under international law must meet the requirements of necessity and proportionality. The United States has not yet explained how targeting multiple nuclear facilities within a sovereign country—absent an imminent threat originating from those sites—comports with these legal requirements. Importantly, the imminent threat would need to encompass not only Iran’s potential to acquire a nuclear weapon soon but also an intention to use it. As Adil Haque recently wrote, “Any notion that Iran intends to trigger mutually assured destruction is delusional and cannot provide a rational basis for the use of force under international law.”

One key question concerns President Trump’s claim that “Iran’s key nuclear enrichment facilities have been completely and totally obliterated.” If that is correct, the U.S. and Israeli operations have achieved their objective, and any continued use of force would no longer be necessary or proportionate.

(Note: we do not address here the rules of armed conflict that apply to attacks on nuclear facilities.)

Implications for the Nonproliferation Regime

On June 12, the International Atomic Energy Agency’s Board of Governors found Iran in “non-compliance” with its safeguards obligations, marking the second such finding in two decades. Notably, Iran was in compliance with its NPT obligations and additional monitoring measures under the 2015 Joint Comprehensive Plan of Action (JCPOA) until the United States withdrew from the deal in 2018. However, Iran still remains a party to the NPT, which legally requires it to permit IAEA inspections of its declared nuclear facilities.

The U.S. strikes place IAEA verification and monitoring at serious risk and may have significant consequences for the credibility of the NPT, which is regarded as the cornerstone of the global nuclear nonproliferation regime: 

  • Undermining the Inspection Process. While the IAEA has formally declared Iran to be in breach of its nonproliferation obligations and has decreased international access since the U.S. JCPOA withdrawal, attacking a State under IAEA monitoring, particularly before exhausting diplomatic options, may still be perceived as punishing transparency and cooperation. This risks discouraging other States from voluntarily submitting to IAEA safeguards out of concern that openness could increase their vulnerability to military action. 
  • Risk of NPT Withdrawal. Iran could choose to invoke Article X of the NPT and withdraw from the treaty, following North Korea’s path in 2003. Taking this step could allow Iran to reconstitute and expand its nuclear program outside most legal or technical constraints.
  • Precedent for Preventive Force. The strikes may set a corrosive precedent for the use of force against nuclear infrastructure in the absence of clear evidence of an imminent threat. This challenges the core logic of the NPT—that proliferation concerns should be addressed through verification and diplomacy, rather than unilateral military action.

More broadly, these developments could reinforce a perception that non-nuclear-weapon States remain vulnerable to coercion or attack, while nuclear-weapon States may act with impunity. Over time, such perceptions may erode trust in the nonproliferation regime and incentivize other States to pursue nuclear capabilities as a means of deterrence.  

On the other hand, an argument sometimes raised is that the use of force can reinforce the nonproliferation regime by deterring States from developing nuclear weapons or violating their legal obligations under the NPT. Yet this line of argument would be widely understood by foreign capitals as a rejection of the diplomatic choices in constructing the NPT itself. It is difficult to see how such an approach and the NPT could both remain viable.  

The Just Security Podcast: A Conversation with Jen Easterly — Cybersecurity at a Crossroads
https://www.justsecurity.org/114354/podcast-cybersecurity-crossroads-jen-easterly/
(June 10, 2025)

How do leaders steer through cyber crises and chart a path forward? Jen Easterly unpacks challenges, breakthroughs, and lessons from the front lines of U.S. cybersecurity.

In recent years, the United States has sustained some of the most severe cyber threats in its history, from the Russian government-directed SolarWinds hack to China’s prepositioning in U.S. critical infrastructure for future sabotage attacks through groups like Volt Typhoon. The Cybersecurity and Infrastructure Security Agency (CISA) is responsible for responding to and protecting against these attacks.

How do leaders steer through cyber crises, build trust, and chart a path forward?

In conversation with Dr. Brianna Rosen, Just Security Senior Fellow and Director of the AI and Emerging Technologies Initiative, Jen Easterly, who just completed a transformative tenure as Director of CISA under the Biden Administration, unpacks the challenges, breakthroughs, and lessons from the front lines of America’s cybersecurity efforts.

The Just Security Podcast: Peace Diplomacy and the Russo-Ukraine War
https://www.justsecurity.org/113355/podcast-peace-diplomacy-russia-ukraine/
(May 14, 2025)

Now in its third year, the Russo-Ukraine War has upended the post-Cold War security landscape, exposing deep fractures in the global balance of power. 

As Western unity frays and U.S. diplomacy shifts under President Trump, the war has become a flashpoint for competing visions of the international order.

This week, the European Union gave Russia an ultimatum: accept a proposed ceasefire or face expanded sanctions—just days ahead of a potential round of direct peace talks in Istanbul on Thursday. The stakes are high, and the choices made this week could reshape not only the trajectory of the war but the future of global security.

How should we understand the prospects for a sustainable peace in Ukraine amidst evolving geopolitical dynamics and continued battlefield uncertainty? 

To help make sense of these developments, Just Security Senior Fellow and Director of the Oxford Programme for Cyber and Tech Policy, Brianna Rosen, sat down with Sir Lawrence Freedman, Emeritus Professor of War Studies at King’s College London, and Professor Janina Dill, Dame Louise Richardson Chair in Global Security at Oxford University’s Blavatnik School of Government.

This conversation was part of the Calleva-Airey Neave Global Security Seminar Series at the University of Oxford.

 

The Just Security Podcast: Trump’s AI Strategy Takes Shape
https://www.justsecurity.org/110517/the-just-security-podcast-trumps-ai-strategy-takes-shape/
(April 17, 2025)

In early April 2025, the White House Office of Management and Budget (OMB) released two major policies on Federal Agency Use of AI and Federal Procurement of AI – OMB memos M-25-21 and M-25-22, respectively. These memos were revised at the direction of President Trump’s January 2025 executive order, “Removing Barriers to American Leadership in Artificial Intelligence” and replaced the Biden-era guidance. Under the direction of the same executive order, the Department of Energy (DOE) also put out a request for information on AI infrastructure on DOE lands, following the announcement of the $500 billion Stargate project that aims to rapidly build new data centers and AI infrastructure throughout the United States. 

As the Trump administration is poised to unveil its AI Action Plan in the near future, the broader contours of its strategy for AI adoption and acceleration already seem to be falling into place.

Is a distinct Trump strategy for AI beginning to emerge—and what will that mean for the United States and the rest of the world?  

Introduction to Series: Data Preservation Under the Trump Administration
https://www.justsecurity.org/110255/series-data-preservation-trump/
(April 16, 2025)

A new series on what is at stake — and what can be done — to ensure government information remains publicly accessible and properly stored.

Preserving federal data has long been essential to transparency, accountability, and evidence-based policymaking in the United States. While every administration faces its share of data management challenges, the sheer volume of lost or altered information under the Trump administration suggests these challenges have now reached an unprecedented level.

In the months since the presidential transition, more than 8,000 webpages and thousands of datasets have been removed or altered across agencies, including the Centers for Disease Control and Prevention, Census Bureau, and Food and Drug Administration. Removal of this data from government agency websites jeopardizes access to vital public information on health, safety, environmental, and demographic issues. At the same time, sweeping executive orders on interagency data-sharing and the consolidation of sensitive information under the Department of Government Efficiency (DOGE) have drawn criticism from privacy advocates, transparency experts, and civil society groups. These concerns are magnified by weakened oversight mechanisms following the dismissal of key members of the Privacy and Civil Liberties Oversight Board. 

In response to these developments, Just Security is launching a new series, “Data Preservation Under the Trump Administration,” to explore the legal, ethical, and policy implications of current federal data practices. As political pressures intensify and the risk of data loss grows, this series aims to provide comprehensive analysis of how federal data is stored, shared, and accessed. 

The series contains the following articles and will be updated regularly: 

Shaping the AI Action Plan: Responses to the White House’s Request for Information
https://www.justsecurity.org/109203/us-ai-action-plan/
(March 18, 2025)

A thematic roundup of proposals aimed at shaping the Trump administration’s new AI Action Plan.

In February, the U.S. Office of Science and Technology Policy (OSTP) issued a Request for Information (RFI) aimed at shaping the Trump administration’s new AI Action Plan. Stakeholders from across industry, academia, civil society, and the media submitted comments before the March 15 deadline, laying out their visions for AI policy under a second Trump term. Respondents from OpenAI, Anthropic, Google, the Center for Data Innovation, the Center for Democracy & Technology (CDT), the Center for a New American Security (CNAS), Georgetown’s Center for Security and Emerging Technology (CSET), Business Roundtable, News/Media Alliance, MITRE, and other organizations offered perspectives on how the Trump administration can advance U.S. technological leadership without stifling innovation.

While diverse in approach, the submissions converge around several core themes: infrastructure and energy development, federal preemption of state AI laws, export controls to maintain U.S. competitiveness against rivals like China, promoting domestic AI adoption, safeguarding national security, and defining clear copyright and licensing frameworks for AI data. What follows is a thematic roundup of these proposals, culminating in a reference table at the end.

Innovation, Workforce Adoption & Economic Impacts

A dominant concern among commenters is how the federal government can accelerate AI growth and create a robust domestic workforce. Many commenters worry that a patchwork of state laws might create a “fragmented regulatory environment.” Google, OpenAI, and the Business Roundtable — an association representing U.S. CEOs — explicitly support federal preemption of such state-level AI regulations, warning that fragmentation could “undermine America’s innovation lead.”

The News/Media Alliance, CDT, and CSET, meanwhile, call for measures that support “Little Tech” to avoid a market dominated by a few large AI providers. The latter two, in particular, highlight the importance of supporting open-source models, which can enable “greater participation” in the AI domain by lowering barriers to entry for smaller firms with fewer resources.

Additionally, some organizations (Anthropic, Business Roundtable, Google, Center for Data Innovation, MITRE) spotlight the labor market implications of AI, urging the administration to invest in technical education and workforce training. To address labor shortages in high-demand, AI-related jobs, CNAS and Google both propose leveraging immigration authorities, including expediting visa applications. Anthropic also recommends that the White House monitor and report on how AI reshapes the national economy, including its effect on the tax base and labor composition.

Export Controls and Global AI Leadership

Concern over China’s rapid AI progress permeates nearly every submission. OpenAI cites the rise of DeepSeek as an example of the country’s swiftly narrowing AI gap, warning that the Chinese Communist Party’s (CCP) subsidized efforts could undermine U.S. advantages. Anthropic and CNAS similarly stress the urgency of preventing smuggling of advanced chips to China, recommending new government-to-government agreements to close supply chain loopholes. Most of the submissions include calls to strengthen the U.S. Commerce Department’s Bureau of Industry and Security, such as by increasing its funding, instituting scenario planning assessments prior to implementing export controls, and consulting with the private sector.

On the other hand, the Center for Data Innovation warns that current, often “reactive” U.S. export controls hamper U.S. firms’ global competitiveness “without meaningfully slowing China’s progress.” The Center suggests a pivot toward enhancing domestic AI capabilities, streamlining export licensing, and collaborating with allies to promote a democratic vision for AI standards.

Google’s position is that any new rules — particularly the AI Diffusion Rule, set forth by the Biden administration — should avoid imposing “disproportionate burdens on U.S. cloud service providers” and factor in potential adverse impacts on American market share. Google, OpenAI, and Anthropic underscore that effective export controls should avoid inadvertently accelerating foreign AI development.

CDT, CNAS, and CSET underscore the strategic importance for the United States to “remain at the frontier” of open-source models in the AI race against China. They argue that without compelling U.S. alternatives, China could embed “authoritarian values” in AI models adopted by developing countries. To counter this, CNAS proposes that the United States rapidly release modified versions of open-source Chinese models to “strip away hidden censorship mechanisms” and promote democratic values abroad.

Infrastructure and Energy

Submissions from OpenAI, Anthropic, Google, CNAS, and Business Roundtable each press for robust infrastructure and energy reforms to meet AI’s skyrocketing computational demands. OpenAI and CNAS propose establishing special zones to attract massive private investment in new data centers and transmission lines, as well as to “minimize barriers” and “eliminate redundancies” through tax incentives, streamlined permitting, and partial exemptions from the National Environmental Policy Act.

Anthropic proposes a national target of “50 additional gigawatts of power dedicated to the AI industry by 2027,” cautioning that if the United States fails to supply reliable, low-cost energy, domestic AI developers might relocate model training to authoritarian countries, exposing U.S. intellectual property to theft or coercion. Google underscores the need for consistent federal and state incentives to promote grid enhancements, data center resilience, and advanced energy-generation projects.

Government Adoption of AI

The majority of submissions highlight lagging AI adoption by federal agencies. OpenAI characterizes government usage as “unacceptably low,” proposing to waive or streamline certain compliance requirements to accelerate pilot programs. Anthropic goes further, calling for a government-wide audit to “systematically identify” every text, image, audio, and video workflow that could be AI-augmented.

Proposals also emphasize removing procurement barriers in both civilian and national security contexts. Anthropic wants to mobilize the U.S. Department of Defense (DoD) and Intelligence Community (IC) to expedite AI adoption, while Google and CSET urge the government to avoid duplicative or siloed AI compliance rules across agencies. The Center for Data Innovation warns against the government’s “risk-only” mindset, imploring the administration to pivot to an “action” framework that proactively integrates AI where it can transform mission delivery. CNAS also advises that the U.S. military take “full advantage” of AI and autonomous systems, provided that the DoD develops “rigorous and streamlined” testing procedures that “permit warfighters an early and ongoing role.”

On the other hand, CDT’s proposal cautions against the government rushing forward on AI adoption, claiming that it could lead to “wasted” tax dollars on ineffective, “snake oil” AI tools. CDT instead advocates for stronger guardrails on government AI usage, including the establishment of an independent external oversight mechanism to monitor AI deployment in national security and intelligence contexts. It further recommends that agencies expand existing use case inventories to transparently catalogue how AI systems are being utilized. Notably, CDT urges the Trump administration to “clarify and proactively communicate” how the Department of Government Efficiency (DOGE), an unofficial federal body led by Elon Musk, is reportedly using AI to make high-risk decisions.

AI Security and Safety

Anthropic, Google, and the Center for Data Innovation each underscore the national security implications of frontier AI models. Anthropic’s submission notes that new AI systems are trending toward capabilities that could facilitate the development of biological or cyber weapons, emphasizing the need for “rapidly assessing” advanced models for potential misuse. To mitigate such risks, the Center for AI Policy (CAIP) recommends that the U.S. government establish a clear definition of frontier AI so that national security regulations effectively address the most high-risk models.

Anthropic and others also advocate for keeping the U.S. AI Safety Institute (AISI) intact while bolstering it with statutory authorities and interagency coordination to test AI models for national security risks. The Center for Data Innovation, CNAS, and CSET, meanwhile, propose creating a national AI incident database and a vulnerability database — akin to the National Vulnerability Database for cybersecurity — to track AI failures, identify systemic weaknesses, and coordinate risk mitigation. CAIP takes this a step further, urging the Trump administration to create an “AI Emergency Response Program” involving “realistic simulations” of AI-driven threats — including AI-enabled drone and cyber attacks — and requiring AI developers to respond to these scenarios.

Google, CNAS, and CSET call for collaboration with private labs and the IC to evaluate and mitigate potential security threats, including espionage and chemical, biological, radiological, and nuclear (CBRN) vulnerabilities. Google also opposes mandated disclosures that could reveal trade secrets or model architecture details, warning that such transparency “could provide a roadmap” for malicious actors to circumvent AI guardrails.

CDT suggests the U.S. AI Action Plan should incorporate measures to address other AI safety risks, including privacy violations and discrimination concerns. It recommends that the National Institute of Standards and Technology “holistically and accurately” assess the efficacy and fairness of AI systems, as well as issue guidance on evaluating the validity of the measurements used. Lastly, CSET proposes that the Trump administration create “standard pathways” to challenge “adverse” AI-enabled decisions and implement whistleblower protections at frontier AI firms to discourage “dangerous” practices.

Obligations for AI Developers, Deployers, and Users

A recurring theme, particularly in Google’s submission, is the need to clearly delineate liability throughout the AI lifecycle. Google argues that developers cannot be held responsible for every downstream deployment — particularly when they lack control over or visibility into final uses. Instead, it advocates “role-based” accountability: developers should provide transparency around model training, but the ultimate deployers should bear liability for misuse in their applications.

At the same time, Google concedes that certain minimal disclosures (for instance, about synthetic media) may be warranted, but it resists broad, mandatory “AI usage” labels that could inadvertently help adversaries “jailbreak” or circumvent AI security features.

Copyright Issues and Development of High-Quality Datasets

OpenAI, Google, the Center for Data Innovation, and MITRE each argue for policies that expand access to robust, high-quality datasets while preserving fair use protections. OpenAI maintains that applying fair use to AI is a “matter of national security” in the face of Chinese competitors who “enjoy unfettered access” to copyrighted data. The company warns that narrowing data access could give Beijing an irreparable advantage in the race to develop state-of-the-art AI models.

The News/Media Alliance, representing over 2,000 media organizations, focuses on publisher rights. It raises concerns that generative AI models are trained on vast quantities of copyrighted material without permission, threatening traditional news revenue streams. The Alliance proposes collaborative licensing agreements and clearer guidelines about disclosing when and how AI-generated content uses news materials.

Finally, the Center for Data Innovation recommends a National Data Foundation, analogous to the National Science Foundation, that would fund the creation, structuring, and curation of large-scale datasets across both public and private sectors. Business Roundtable also highlights the importance of unlocking access to government datasets as a way to produce more representative, less biased AI models.

* * *

The summary below, organized by theme and organization, highlights key aspects of public submissions responding to the White House’s RFI.

Innovation and Regulation

Business Roundtable
• “The Administration should assess regulatory gaps to ensure that any new regulations, if necessary, are appropriately narrowly scoped to address identified gaps without harming U.S. companies’ ability to innovate. Many AI applications are covered under topic- and sector-specific federal statutes. Where regulatory guardrails are deemed necessary, whether in new or existing rules covering AI systems, policymakers should provide clear guidance to businesses, foster U.S. innovation, and adopt a risk-based approach that carefully considers and recognizes the nuances of different use cases, including those that are low-risk and routine. Reporting requirements should be carefully crafted to avoid unnecessary information collection and onerous compliance burdens that slow innovation.”
• “Companies have experienced the challenges of dealing with a fragmented and increasingly complex regulatory landscape due to the patchwork of state data privacy laws, which hinders innovation and the ability to provide consumer services. Federal AI legislation with strong preemption should provide protection for consumers and certainty for businesses developing and deploying AI.”

Center for Data Innovation
• “Scientific breakthroughs powered by AI—whether in medicine, climate science, or materials—are critical to progress, however, without AI-driven improvements to the systems that apply these discoveries, even the most advanced innovations risk being trapped in inefficient, outdated structures that fail to serve people effectively.”
• “The administration should preserve but refocus the AI Safety Institute (AISI) to ensure the federal government provides the foundational standards that inform AI governance. While AISI, housed at NIST, does not set laws, it plays a critical role in developing safety standards and working with international partners—functions that are essential for maintaining a coherent federal approach. Without this, AI governance will continue to lack a structured federal foundation, leaving states to introduce their own regulations in response to AI risks without clear federal guidance. This risks creating a fragmented regulatory landscape where businesses must comply with conflicting requirements, and policymakers struggle to craft effective, evidence-based laws.”

Center for Democracy & Technology
• “The AI Action Plan should set a course that ensures America remains a home for open model development… Restricting open model development now would not improve public safety or further national security — rather, it would sacrifice the considerable benefits associated with open models and cede leadership in the open model ecosystem to foreign adversaries. Rather than restricting open model development, the AI Action Plan should ensure that open models retain their central position in the American AI ecosystem, while promoting the development of voluntary standards to enable their safe and responsible development and use.”

Center for Security and Emerging Technology
• “To promote U.S. AI R&D leadership, the government should incentivize and award projects that take interdisciplinary approaches, encourage research findings to be disseminated openly and widely, and support public sector research in coordination with private sector innovation. Since AI is a general-purpose technology, basic R&D supports downstream model development for commercial use, application, and, eventually, profits.”
• “The U.S. government should support the release of open-source AI models, datasets, and tools that can be used to fuel U.S. AI development, innovation, and economic growth. Open-source models and tools enable greater participation in the AI domain, allowing lower-resource organizations that cannot develop base models themselves to access, experiment, and build upon them.”

Google
• “Long-term, sustained investments in foundational domestic R&D and AI-driven scientific discovery have given the U.S. a crucial advantage in the race for global AI leadership. Policymakers should significantly bolster these efforts—with a focus on speeding funding allocations to early-market R&D and ensuring essential compute, high-quality datasets, and advanced AI models are widely available to scientists and institutions.”
• “The Administration should ensure that the U.S. avoids a fragmented regulatory environment that would slow the development of AI, including by supporting federal preemption of state-level laws that affect frontier AI models. Such action is properly a federal prerogative and would ensure a unified national framework for frontier AI models focused on protecting national security while fostering an environment where American AI innovation can thrive. Similarly, the Administration should support a national approach to privacy, as state-level fragmentation is creating compliance uncertainties for companies and can slow innovation in AI and other sectors.”

News/Media Alliance
• “The AI Action Plan should support measures to promote competition amongst actors, reduce abusive dominance by Big Tech, and prevent unfair competition in the marketplace. Without transparency and other guardrails to protect the marketplace, AI risks being captured by Big Tech, discouraging competition, reducing investments, undermining innovation and ultimately hurting American consumers.”

OpenAI
• “We propose creating a tightly-scoped framework for voluntary partnership between the federal government and the private sector to protect and strengthen American national security… Overseen by the US Department of Commerce and in coordination with the AI Czar, perhaps by reimagining the US AI Safety Institute, this effort would provide domestic AI companies with a single, efficient “front door” to the federal government that would coordinate expertise across the entire national security and economic competitiveness communities.”

Workforce Adoption and Economic Impacts of AI

Anthropic
• “We anticipate that 2025 will likely mark the beginning of more visible, large-scale economic effects from AI technologies”
• “We believe that computing power will become an increasingly significant driver of economic growth. Accordingly, the White House should track the relationship between investments in AI computational resources and economic performance to inform strategic investments in domestic infrastructure and related supply chains.”
• “The White House should engage with Congress on and task relevant agencies with examining how AI adoption might reshape the composition of the national tax base, and ensure the government maintains visibility into potential structural economic shifts.”

Business Roundtable
• “America needs a workforce with the skills and training required for the in-demand jobs of today and tomorrow, including developing AI models, using AI applications and tools, and building and supporting AI infrastructure… Policymakers should complement these private-sector initiatives with reforms to the workforce development system that support employers’ ever-evolving workforce needs and worker advancement in an increasingly technology-based economy.”

Center for Data Innovation
• “The U.S. AI Action Plan should make rapid AI adoption across all sectors of the U.S. economy the cornerstone of its policy. It can take a leaf out of the UK’s AI Opportunities Action Plan and, as the UK rightly puts it, ‘push hard on cross-economy AI adoption.’”

Center for a New American Security
• To “leverage America’s talent advantage once more,” the U.S. government should add “high-demand AI jobs with demonstrated shortages to the Schedule A list… Employers in Schedule A categories can hire foreign talent while bypassing cumbersome recruitment and labor certifications requirements, filling critical roles more expeditiously.”
• The Trump administration should also “expedite appointments, vetting, and processing for visa applicants with job offers in cutting-edge AI research, development, and innovation.”

Center for Security and Emerging Technology
• The U.S. government should “increase funding for the federal National Apprenticeship system, with an emphasis on technical occupations and industry intermediaries,” “fund and reauthorize career and technical education programs,” and “support the creation of an AI scholarship-for-service program.”
• The Trump administration should also “work with Congress to support AI literacy efforts for the American people” and provide them with the “necessary education and information to make informed decisions about their AI use and consumption.”

Google
• “This moment offers an opportunity to ensure that AI can be integrated as a core component of U.S. education and professional development systems. The Administration and agency stakeholders have an opportunity to ensure that access to technical skilling and career support programs (including investments in K-12 STEM education and retraining for workers) are broadly accessible to U.S. communities to ensure a resilient labor force.”
• “Where practicable, U.S. agencies should use existing immigration authorities to facilitate recruiting and retention of experts in occupations requiring AI-related skills, such as AI development, robotics and automation, and quantum computing.”

Global AI Leadership

Business Roundtable
• “The domestic AI ecosystem can be further strengthened by U.S. efforts to shape international AI policies, ensuring they promote security and prosperity while avoiding conflicting legal obligations. U.S. leadership helps set global AI standards that align with democratic values, including transparency, fairness and privacy. Without American influence, authoritarian regimes could shape AI development and regulatory structures in ways that undermine human rights and increase surveillance.”

Center for Data Innovation
• “AISI should take a leading role in collaborating on open-source AI safety with international partners, industry leaders, and academic experts. While nations may compete aggressively to drive innovation and diffusion of open-source models, they need not compete on developing the foundational safety standards that underpin open-source AI… By aligning on shared protocols for incident reporting, safety benchmarks, and post-deployment evaluations, the United States can support the robust diffusion of open-source AI while mitigating its inherent risks.”
• “The United States is losing ground to China in the race to become Africa’s preferred AI partner. Over the past few years, the U.S. government has only offered vague commitments and diplomatic statements to the continent, while China has taken concrete action… The United States should get proactive about strengthening strategic ties and better positioning itself as the preferred partner for AI innovation in emerging markets… DeepSeek’s open-source approach has already made it a preferred choice for many developers in Africa. If the United States wants to remain competitive, it should ensure its own AI companies stay at the forefront of open-source innovation. That means continuing to resist undue restrictions on open-source AI and open model weights, ensuring American-developed models remain accessible and widely adopted.”

Center for Democracy & Technology
• “If America remains at the frontier of open model development, its models will likely become the basis for AI-based technologies in much of the world. But if the U.S. stifles domestic open model development, the basis for those technologies would likely be models developed by authoritarian governments.”

Center for a New American Security
• “DeepSeek-R1 demonstrates China's success in projecting cost-effective, open source AI leadership to the world despite embedding authoritarian values in its AI. The United States can counter this strategy by rapidly releasing modified versions of leading open source Chinese models that strip away hidden censorship mechanisms and the ‘core socialist values’ required by Chinese AI regulation. In doing so, the United States can expose the contradiction in China's approach, erode the appeal of Chinese AI, and position America as the legitimate champion of authentic open source AI.”
• “The Biden administration created the U.S.-China AI Working Group but it only convened twice, with few tangible outcomes. The Trump administration’s new AI Action Plan should reframe this group as a technical expert body to tackle shared AI risks and reduce tensions without undermining America's AI lead. This reformulated group would serve as a body to discuss shared AI risks, instead of acting as a forum for comprehensive political changes in the U.S.-China relationships. This means avoiding politically contentious or overly broad areas of discussion, such [as] AI disinformation or its effect on human rights, and focusing instead on narrow, less politically contentious technical problems ripe for scientific collaboration, such as identifying and responding to dangerous behaviors in AI models, including deception, attempted self-replication, or circumventing human control.”
• “Establishing frameworks for international cooperation and discussion channels for emerging AI-accelerated biotech issues remains crucial, despite anticipated Chinese resistance to joining such an initiative. Following the model of the U.S. ‘Political Declaration on Responsible Military Use of AI and Autonomy,’ articulating guiding principles during early development stages can positively influence technological trajectories for both participating and non-participating nations.”

Center for Security and Emerging Technology
• The U.S. government should “prioritize, alongside AI capability advancements, the diffusion of American AI models in the U.S. and global AI ecosystem. Adoption of U.S. open models abroad builds reliance on U.S. technology, thereby endowing the U.S. government with soft power, serving as a foundation for stronger relationships and alliances with partners, and encouraging further paid use of related U.S. AI technologies like enterprise subscription services and cloud platforms. Promotion of U.S. AI technology abroad can also combat the growing influence of Chinese models especially in developing and emerging economies, and prevent China from providing the foundation for large parts of the global digital infrastructure, with implications for the diffusion of Chinese ideologies on the world.”

Google
• “We encourage the Department of Commerce, and the National Institute of Standards and Technology (NIST) in particular, to continue its engagement on standards and critical frontier security work. Aligning policy with existing, globally recognized standards, such as ISO 42001, will help ensure consistency and predictability across industry.”
• “The U.S. government should work with aligned countries to develop the international standards needed for advanced model capabilities and to drive global alignment around risk thresholds and appropriate security protocols for frontier models. This includes promulgating an international norm of “home government” testing—wherein providers of AI with national security-critical capabilities are able to demonstrate collaboration with their home government on narrowly targeted, scientifically rigorous assessments that provide ‘test once, run everywhere’ assurance.”
• “The U.S. government should oppose mandated disclosures that require divulging trade secrets, allow competitors to duplicate products, or compromise national security by providing a roadmap to adversaries on how to circumvent protections or jailbreak models. Overly broad disclosure requirements (as contemplated in the EU and other jurisdictions) harm both security and innovation while providing little public benefit.”

OpenAI
• “As with Huawei, there is significant risk in building on top of DeepSeek models in critical infrastructure and other high-risk use cases given the potential that DeepSeek could be compelled by the CCP to manipulate its models to cause harm. And because DeepSeek is simultaneously state-subsidized, state-controlled, and freely available, the cost to its users is their privacy and security, as DeepSeek faces requirements under Chinese law to comply with demands for user data and uses it to train more capable systems for the CCP’s use.”
• “While America maintains a lead on AI today, DeepSeek shows that our lead is not wide and is narrowing. The AI Action Plan should ensure that American-led AI prevails over CCP-led AI, securing both American leadership on AI and a brighter future for all Americans.”

Export Controls

Anthropic
• “We strongly recommend the administration strengthen export controls on computational resources and implement appropriate export restrictions on certain model weights.”
• The U.S. government should require countries “to sign government-to-government agreements outlining measures to prevent smuggling. As a prerequisite for hosting data centers with more than 50,000 chips from U.S. companies, the U.S. should mandate that countries at high-risk for chip smuggling comply with a government-to-government agreement that 1) requires them to align their export control systems with the U.S., 2) takes security measures to address chip smuggling to China, and 3) stops their companies from working with the Chinese military.”
• The U.S. government should also “consider reducing the number of H100s that Tier 2 countries can purchase without review to further mitigate smuggling risks.”

Business Roundtable
• “The Administration should collaborate closely with the business community to ensure that all new controls on emerging and foundational technologies effectively advance U.S. national and economic security objectives. Business Roundtable recommends that the White House National Security and Economic Councils create a standing, private-sector Export Control Advisory Board (ECAB) with security clearance to ensure that private sector members understand the national security reasons for contemplated controls and policymakers are appraised of their potential commercial and economic implications.”
• The U.S. Department of Commerce’s Bureau of Industry and Security should analyze the “potential commercial, economic and competitiveness effects” of export controls and consult with potentially affected industries, as well as “advocate that key allies embrace comparable controls to ensure that U.S. companies are not uniquely disadvantaged.”

Center for AI Policy
• “Even the best-designed export controls will be porous without adequate staff to enforce them. Smuggling of advanced AI chips is rampant, largely because the BIS is severely under-resourced… To solve this problem, the Trump Administration should work with Congress to ensure that BIS receives the $75 million in additional annual funding it requested to hire an adequate staff, along with a one-time additional payment of $100 million to immediately address information technology issues.”

Center for Data Innovation
• “The current reactive, whack-a-mole approach to AI export controls doesn’t meaningfully slow China’s progress, but it does erode the global position of U.S. AI companies. The U.S. government should maintain targeted export restrictions of advanced AI technologies to countries of concern, even if these restrictions act more as hurdles than roadblocks. However, the government’s priority should be to expand the global market share of American AI firms. Export controls are misaligned with the realities of market competition. While intended to weaken China’s AI sector, they are increasingly disadvantaging U.S. firms instead. Chinese companies are adept at circumventing these controls by leveraging stockpiles, utilizing inference-optimized chips, and ramping up domestic semiconductor production.”
• “Rather than focusing narrowly on restricting access, U.S. policy should pivot towards bolstering domestic AI capabilities, enhancing global export competitiveness, and advocating for reciprocal market access. If China continues gaining ground despite restrictions while U.S. firms lose opportunities abroad, the current approach will have done more harm than good.”
• “The Bureau of Industry and Security (BIS) should take a more proactive approach by tightening and enforcing export controls. Current export controls focus on restricting finished AI chips, but gaps in the supply chain undermine their effectiveness… To close these gaps, BIS should expand restrictions to cover upstream components and advanced packaging materials, apply U.S. controls to any technology using American IP regardless of where it is manufactured, and strengthen enforcement on suppliers facilitating these workarounds. Without these measures, China will continue stockpiling essential AI hardware while U.S. firms lose market access without achieving meaningful strategic gains.”

Center for a New American Security
• “The current approach of annual export control updates fails to keep pace with rapid technological change in AI and emerging new evidence. The Bureau of Industry and Security (BIS) should instead adopt a quarterly review process with the authority to make targeted adjustments to controls as new capabilities emerge.”
• To address chip smuggling into China, Congress should “significantly increase BIS's budget to enhance its monitoring and enforcement capabilities, including hiring additional technical specialists and field investigators.”

Center for Security and Emerging Technology
• “The Bureau of Industry and Security (BIS) in the Department of Commerce should institute scenario planning assessments before implementing new export controls and rigorously monitor the effectiveness of current export control policies… BIS should also conduct regular post-implementation assessments that track progress toward stated control objectives, second-order effects, impact on China's semiconductor manufacturing equipment industry, developments in China's semiconductor fabrication capabilities, and advancements in China's AI sector.”
• For the “broader U.S. export strategy to work,” BIS should “clearly articulate and justify the objectives of the export controls to allies.”

Google
• “AI export rules imposed under the previous Administration (including the recent Interim Final Rule on AI Diffusion) may undermine economic competitiveness goals the current Administration has set by imposing disproportionate burdens on U.S. cloud service providers. While we support the national security goals at stake, we are concerned that the impacts may be counterproductive.”
• “The U.S. government should adequately resource and modernize the Bureau of Industry and Security (BIS), including through BIS’s own adoption of cutting-edge AI tools for supply chain monitoring and counter-smuggling efforts, alongside efforts to streamline export licensing processes and consideration of wider ecosystem issues beyond limits on hardware exports.”

OpenAI
• “We propose that the US government consider the Total Addressable Market (TAM), i.e., the entire world less the PRC and its few allies, against the Serviceable Addressable Market (SAM), i.e., those countries who prefer to build AI on democratic rails, and help as many of the latter as possible commit, including by actually committing to deploy AI in line with democratic principles set out by the US government.”
• OpenAI proposes maintaining the three-tiered framework of the AI diffusion rule but expanding Tier I to include countries that commit to democratic AI principles by deploying AI systems in ways that promote more freedoms for their citizens.
• “This strategy would encourage global adoption of democratic AI principles, promoting the use of democratic AI systems while protecting US advantage. Making sure that open-sourced models are readily available to developers in these countries also will strengthen our advantage. We believe the question of whether AI should be open or closed source is a false choice—we need both, and they can work in a complementary way that encourages the building of AI on American rails.”

Infrastructure and Energy

Anthropic
• “The federal government should consider establishing an ambitious national target: build 50 additional gigawatts of power dedicated to the AI industry by 2027.”
• The U.S. government should “task federal agencies with streamlining permitting processes by accelerating reviews, enforcing timelines, and promoting inter-agency coordination to eliminate bureaucratic bottlenecks.”
• “Some authoritarian regimes who do not share our country’s democratic values and may pose security threats are already actively courting American AI companies with promises of abundant, low-cost energy. If U.S. developers migrate model development or storing of model weights to these countries in order to access these energy sources, this could expose sensitive intellectual property to transfer or theft, enable the creation of AI systems without proper security protocols, and potentially subject valuable AI assets to disruption or coercion by foreign powers.”

Business Roundtable
• “Business Roundtable supports Administration actions to facilitate investment in data centers, including streamlining permitting processes to expedite project approvals for both new data centers and related infrastructure.”
• “The Administration should work to shorten decision timelines on environmental reviews, provide preliminary feedback on application completion and accuracy, and digitize operations to streamline processes, including application submissions, necessary document uploads, feedback for revisions and status updates.”

Center for a New American Security
• “While U.S. energy infrastructure languishes in a quagmire of red tape, China can expeditiously direct large-scale build outs, underscored by its unprecedented speed in nuclear power plant construction. Other nations, such as the United Arab Emirates and Saudi Arabia, also have the capital, energy, and government cut-through to expedite AI and energy infrastructure to meet anticipated demand. Paired with sufficient access to chips, this creates a risk that they could leapfrog U.S. AI leadership with world-leading AI computing infrastructure.”
• The U.S. government should “partner with state and local regulators to create designated special compute zones that aim to—as much as possible—align permitting and regulatory frameworks across jurisdictions and minimize barriers to AI infrastructure development.”

Google
• “The U.S. government should adopt policies that ensure the availability of energy for data centers and other growing business applications that are powering the growth of the American economy. This includes transmission and permitting reform to ensure adequate electricity for data centers coupled with federal and state tools for de-risking investments in advanced energy-generation and grid-enhancing technologies.”

OpenAI
• “Today, hundreds of billions of dollars in global funds are waiting to be invested in AI infrastructure. If the US doesn't move fast to channel these resources into projects that support democratic AI ecosystems around the world, the funds will flow to projects backed and shaped by the CCP.”
• The U.S. government should adopt a “National Transmission Highway Act” to “expand transmission, fiber connectivity and natural gas pipeline construction” and streamline the processes of planning, permitting and paying to “eliminate redundancies.”
• The U.S. government should also develop a “Compact for AI” among U.S. allies and partners that streamlines access to capital and supply chains to compete with Chinese AI infrastructure alliances, as well as institute “AI Economic Zones” that “speed up permitting for building AI infrastructure like new solar arrays, wind farms, and nuclear reactors.”

Government Adoption of AI

Anthropic
• “We propose an ambitious initiative: across the whole of government, the Administration should systematically identify every instance where federal employees process text, images, audio, or video data, and augment these workflows with appropriate AI systems.”
• The U.S. government should also “eliminate regulatory and procedural barriers to rapid AI deployment at the federal agencies, for both civilian and national security applications” and “direct the Department of Defense and the Intelligence Community to use the full extent of their existing authorities to accelerate AI research, development, and procurement.”
• “We also encourage the White House to leverage existing frameworks to enhance federal procurement for national security purposes, particularly the directives in the October 2024 National Security Memorandum (NSM) on Artificial Intelligence and the accompanying Framework to Advance AI Governance and Risk Management in National Security.”
• “Additionally, we strongly advocate for the creation of a joint working group between the Department of Defense and the Office of the Director of National Intelligence to develop recommendations for the Federal Acquisition Regulatory Council (FARC) on accelerating procurement processes for AI systems while maintaining rigorous security and reliability standards.”

Center for Data Innovation
• “Former President Biden’s 2023 executive order instructed federal agencies to integrate AI, but it was overwhelmingly focused on risk mitigation—requiring oversight boards, governance guidelines, and guardrails against potential pitfalls. The government needs to do more than just play defense. Many U.S. government officials recognize AI’s transformative potential in fields like education, energy, and disaster response, as highlighted at the recent AI Aspirations conference. What’s missing isn’t vision, it’s action.”
• “Agencies should establish clear visions for how AI will be used in sectors and AI adoption “grand challenges” (i.e., highly ambitious and impactful goals for how AI can transform an industry) to accelerate deployment in critical sectors.”

Center for Democracy & Technology
• The Trump administration should “ensure compliance with these principles in its existing use of AI, most significantly in DOGE efforts which appear to be leveraging AI without transparency or these other necessary guardrails in place.”
• The AI Action Plan can develop public trust in federal government’s use of AI by “building on agencies’ existing use case inventories – a key channel for the public to learn information about how agencies are using and governing AI systems and for industry to understand AI needs within the public sector – and by requiring agencies to provide public notice and appeal when individuals are affected by AI systems in high-risk settings.”
• “The AI Action Plan should recognize that independent external oversight is also critically important to promote safe, trustworthy, and efficient use of AI in the national security/intelligence arena. Many such uses will be classified and exposure of them could put national security at risk. At the same time, because the risk of abuse and misuse is high when such functions are kept secret, an oversight mechanism with expertise, independence and power to access relevant information (even if classified) should be established in the Executive Branch. CDT has recommended that Congress establish such a body, and the AI Action Plan should support such an approach.”

Center for a New American Security
• “The U.S. military can take full advantage of AI and autonomy, but only if DoD develops rigorous and streamlined processes that allow systems to be tested thoroughly and permit warfighters an early and ongoing role. Developing warfighter trust is a complex process and requires their active participation from conception to fielding of an AI-enabled and/or autonomous system.”
• The U.S. military can address the concerns of potential coordination conflicts “by working across services to clarify concepts of employment and identify potential points of conflict between friendly heterogeneous AI and autonomous systems.”

Center for Security and Emerging Technology
• “If all federal agencies agree to abide by a unified set of minimum AI standards for purposes of acquisition and deployment, this would greatly reduce the burden on companies offering AI solutions, accelerate the adoption of standard tools and metrics, and reduce inefficiencies caused by the need to repeatedly draft and respond to similar but different requirements in government contracts.”
• The Office of the Secretary of Defense (OSD) has “not empowered a DOD-wide entity to set AI policies for the services. This results in duplication of efforts across the military services, with multiple memos guiding efforts across the DOD in different ways. For example, within each service, different commands have different network ATO standards, which require substantial rework by the government and AI vendors to satisfy before deployment. Continuous ATOs and ATO reciprocity must be enforced across OSD and an entity should be empowered to synchronize policies, rapidly certify reliable AI solutions, and act to stop emerging security issues.”

Google
• “The U.S. government, including the defense and intelligence communities, should pursue improved interoperability and data portability between cloud solutions; streamline outdated accreditation, authorization, and procurement practices to enable quicker adoption of AI and cloud solutions; and accelerate digital transformation via greater adoption of machine-readable documents and data.”
• “Federal agencies should avoid implementing unique compliance or procurement requirements just because a system includes AI components. To the extent they are needed, any agency-specific guidelines should focus on unique risks or concerns related to the deployment of the AI for the procured purpose.”

OpenAI
• “AI adoption in federal departments and agencies remains unacceptably low, with federal employees, and especially national security sector employees, largely unable to harness the benefits of the technology.”
• The U.S. government should establish a “faster, criteria-based path for approval of AI tools” and “allow federal agencies to test and experiment with real data using commercial-standard practices—such as SOC 2 or International Organization for Standardization (ISO) audit reports—and potentially grant a temporary waiver for FedRAMP. AI vendors would still be required to meet FedRAMP continuous monitoring requirements while awaiting full accreditation.”

AI Security and Safety

Anthropic
• “Our most recent system, Claude 3.7 Sonnet, demonstrates concerning improvements in its capacity to support aspects of biological weapons development—insights we uncovered through our internal testing protocols and validated through voluntary security exercises conducted in partnership with the U.S. and U.K. AI Safety and Security Institutes. This trajectory suggests, consistent with scaling laws research, that numerous AI systems will increasingly embody significant national security implications in the coming years.”
• The U.S. government should “preserve the AI Safety Institute in the Department of Commerce and build on the MOUs it has signed with U.S. AI companies—including Anthropic—to advance the state of the art in third-party testing of AI systems for national security risks.”
• The White House should also “direct the National Institutes of Standards and Technology (NIST), in consultation with the Intelligence Community, Department of Defense, Department of Homeland Security, and other relevant agencies, to develop comprehensive national security evaluations for powerful AI models, in partnership with frontier AI developers, and develop a protocol for systematically testing powerful AI models for these vulnerabilities.”
• “To mitigate these risks, the federal government should partner with industry leaders to substantially enhance security protocols at frontier AI laboratories to prevent adversarial misuse and abuse of powerful AI technologies.”

Center for AI Policy
• The U.S. government should “develop and apply a practical definition of frontier AI so that national security regulations target only the largest and most dangerous AI models. Most AI systems – especially the smaller systems that are more likely to be developed by startups, academics, and small businesses – are relatively benign and do not pose major national security risks.”
• “AI systems are advancing at an unprecedented pace, and it’s only a matter of time before intentional or inadvertent harm from AI threatens U.S. national security, economic stability, or public safety. The U.S. government must act now to ensure it has insights into the capabilities of frontier AI models before they are deployed and that it has response plans in place for when failures inevitably occur. To fill this critical preparedness gap, President Trump should immediately direct the Department of Homeland Security (DHS) to establish an AI Emergency Response Program as a public-private partnership. Under this program, frontier AI developers like OpenAI, Anthropic, DeepMind, Meta, and xAI would participate in emergency preparedness exercises.”
• “These preparedness exercises would involve realistic simulations of AI-driven threats, explicitly requiring participants to actively demonstrate their responses to unfolding scenarios. Similar to the DHS-led ‘Cyber Storm’ exercises, which rigorously simulate cyberattacks and test real-time interagency and private-sector coordination, these AI-focused simulations should clearly define roles and responsibilities, ensure swift and effective communication between federal agencies and frontier AI companies, and systematically identify critical gaps in existing response protocols… Most frontier AI developers have already made voluntary commitments to share the information needed to create these exercises. To encourage additional companies to participate, this type of cooperation should be treated as a prerequisite for federal contracts, grants, or other agreements involving advanced AI.”
• “In the near future, small autonomous drones will pose a threat to U.S. civilians on par with large strategic missiles. To meet this threat, the Administration should procure and distribute equipment for disabling unauthorized drones, and ensure that there are clear lines of legal authority for civilian law enforcement to deploy this equipment.”

Center for Data Innovation
• “The administration should direct AISI to establish a national AI incident database and an AI vulnerability database, creating essential infrastructure for structured reporting and proactive risk management. AI failures and vulnerabilities are currently tracked inconsistently across different sectors, making it difficult to identify trends, address systemic weaknesses, or prevent recurring issues… Additionally, an AI vulnerability database—similar to the National Vulnerability Database used for cybersecurity—would catalog weaknesses in AI models, helping organizations mitigate risks before they escalate.”

Center for Democracy & Technology
• “The AI Action Plan should direct NIST to continue building on the foundation it set with the AI RMF and subsequent work… The standards-development process should center not only the prospective security risks arising from capabilities related to chemical, biological, and radiological weapons and dual-use foundation models, but also the current, ongoing risks of AI such as privacy harms, ineffectiveness of the system, lack of fitness for purpose, and discrimination. NIST’s standards should also include a multifaceted approach for holistically and accurately measuring different qualities of an AI system, such as safety, efficacy, or fairness, and provide guidance on determining the validity and reliability of the measurements used.”
• “Federal agencies should take steps to align all AI uses with existing privacy and cybersecurity requirements – such as requirements for agencies to conduct privacy impact assessments – and to proactively guard against novel privacy and security risks introduced by AI.”

Center for a New American Security
• “AI datacenters and companies will become increasingly attractive targets for adversarial nations seeking to steal advanced models or sabotage critical systems. The private sector alone is neither equipped nor incentivized to effectively counter sophisticated state actors. The federal government must deploy its security expertise to protect this critical technology and infrastructure. As an immediate priority, the National Security Agency and broader national security community should partner with leading labs and AI datacenters to build resilience against espionage and attacks. The National Institute of Standards and Technology should also play an active role in co-developing best practice security standards for model weights—the sensitive intellectual property that encapsulates the capability of an AI model.”
• “The administration should empower the AISI as a hub of AI expertise for the broader federal government to ensure AI strengthens rather than undermines U.S. national security. The administration could further support this AI hub of expertise with continued implementation of the AI National Security Memorandum, which strengthens engagement with national security agencies to better integrate expertise across classified and non-classified domains.”
• “The federal government needs a systematic way to track and learn from real-world incidents. A central reporting system for AI-related incidents would allow the government to investigate and update its approach to evaluations where appropriate.”

Center for Security and Emerging Technology
• The U.S. government should “significantly expand open-source intelligence (OSINT) gathering and analysis on AI. This work is particularly neglected in the intelligence community, which remains focused on classified sources.”
• “The federal government should significantly ramp up efforts to monitor China's AI ecosystem, including the Chinese government itself (at all relevant levels and organizations), related actors such as state-owned enterprises, state research labs, and state-sponsored technology investment funds, and other actors, such as universities and tech companies.”
• “The U.S. government should partner with AI companies to share suspicious patterns of user behavior and other types of threat intelligence. In particular, the Intelligence Community and the Department of Homeland Security should partner with AI companies to share cyber threat intelligence, and the Department of Homeland Security should partner with AI companies to prepare for potential emergencies caused by malicious use or loss of control over AI systems. In addition, the Department of Commerce should receive, triage, and distribute reports on CBRN and cyber capabilities of frontier AI models to support classified evaluations of novel AI-enabled threats, building on a 2024 Memorandum of Understanding between the Departments of Energy and Commerce.”
• The Trump administration should “implement a mandatory AI incident reporting regime for sensitive applications across federal agencies. Federal agencies deploy AI systems for a wide range of safety- and rights-impacting use cases, such as using AI to deliver government services or predict criminal recidivism. AI failures, malfunctions, and other incidents in these contexts should be tracked and investigated to determine their root cause, inform risk management practices, and reduce the risk of recurrence.”
• “The Trump administration should establish a secure line for employees to report problematic company practices, such as failure to report system capabilities that threaten national security.”
• The U.S. government should “Define capabilities of concern and support the creation of threat profiles for different types of AI models… A coalition of government agencies should develop frameworks that clearly define risky capabilities, including chem-bio capabilities of concern, so evaluators know what risks to test for. These frameworks could draw upon Appendix D of the National Institute of Standards and Technology’s (NIST) draft Managing Misuse Risk for Dual-Use Foundation Models. In addition, government agencies should build threat profiles that consider different combinations of users, AI tools, and intended outcomes, and design targeted policy solutions for these highly variable scenarios.”
• “The Trump administration should empower AISI to develop quantitative benchmarks for AI, including benchmarks that test a model’s resistance to jailbreaks, usefulness for making CBRN weapons, and capacity for deception… AISI should develop standards that cover topics including model training, pre-release internal and external security testing, cybersecurity practices, if-then commitments, AI risk assessments, and processes for testing and re-testing systems as they change over time.”

Google
• “Policymakers should also consider measures to safeguard critical infrastructure and cybersecurity, including by partnering with the private sector. For example, pilots that build on the Defense Advanced Research Projects Agency’s AI Cyber Challenge and joint R&D activities can help develop breakthroughs in areas such as data center security, chip security, confidential computing, and more. Expanded threat sharing with industry will similarly help identify and disrupt both security threats to AI and threat actor use of AI.”
• “It is particularly valuable for the U.S. government to develop and maintain an ability to evaluate the capabilities of frontier models in areas where it has unique expertise, such as national security, CBRN issues, and cybersecurity threats. The Department of Commerce and NIST can lead on: (1) creating voluntary technical evaluations for major AI risks; (2) developing guidelines for responsible scaling and security protocols; (3) researching and developing safety benchmarks and mitigations (like tamper-proofing); and (4) assisting in building a private-sector AI evaluation ecosystem.”

Obligations for AI Developers, Deployers, and Users

Google
• “To the extent a government imposes specific legal obligations around high-risk AI systems, it should clearly delineate the roles and responsibilities of AI developers, deployers, and end users. The actor with the most control over a specific step in the AI lifecycle should bear responsibility (and any associated liability) for that step. In many instances, the original developer of an AI model has little to no visibility or control over how it is being used by a deployer and may not interact with end users. Even in cases where a developer provides a model directly to deployers, deployers will often be best placed to understand the risks of downstream uses, implement effective risk management, and conduct post-market monitoring and logging. Nor should developers bear responsibility for misuse by customers or end users. Rather, developers should provide information and documentation to the deployers, such as documentation of how the models were trained or mechanisms for human oversight, as needed to allow deployers to comply with regulatory requirements.”
• “The U.S. government should support the further development and broad uptake of evolving multistakeholder standards and best practices around disclosure of synthetic media—such as the use of C2PA protocols, Google’s industry-leading SynthID watermarking, and other watermarking/provenance technologies, including best practices around when to apply watermarks and when to notify users that they are interacting with AI-generated content.”

Copyright Issues and Development of High-Quality Datasets

Business Roundtable
• “An important technical resource for AI innovation is government datasets, which are typically much larger in size and scope and more representative of diverse populations than non-governmental datasets. This makes them uniquely valuable for conducting research, testing, reducing bias and producing better AI models. But while open data is encouraged and often required in government, federal agencies do not prioritize publishing high-impact unclassified datasets. Increasing access to advanced computing resources and tools empowers more organizations to engage in AI research and development by reducing barriers to entry.”

Center for Data Innovation
• “Unlike other foundational inputs to AI, such as physical infrastructure or scientific research, the United States treats data more as a regulatory challenge than a national asset. The result is an AI ecosystem constrained by gaps, inconsistencies, and bottlenecks, leaving businesses and researchers struggling to find and use the data they need. The AI Action Plan should correct this by establishing a National Data Foundation (NDF), an institution dedicated to funding and facilitating the production, structuring, and responsible sharing of high-quality datasets. An NDF would do for data what the National Science Foundation (NSF) does for research—ensuring the United States isn’t just competing on AI models but on the quality and availability of the data that powers them. It could fund data generation, creating large-scale, machine-readable datasets.”
• “In contrast, an NDF recognizes that in many critical areas, the U.S. lacks the necessary high-quality, AI-ready data not just in the public sector, but also in key private-sector domains. Rather than just improving discoverability, the NDF would fund the creation, structuring, and strategic enhancement of both public and private-sector datasets.”

Google
• “Policymakers should move quickly to further incentivize partnerships with national labs to advance research in science, cybersecurity, and chemical, biological, radiological, and nuclear (CBRN) risks. The U.S. government should make it easier for national security agencies and their partners to use commercial, unclassified storage and compute capabilities, and should take steps to release government datasets, which can be helpful for commercial training.”
• “Balanced copyright rules, such as fair use and text-and-data mining exceptions, have been critical to enabling AI systems to learn from prior knowledge and publicly available data, unlocking scientific and social advances. These exceptions allow for the use of copyrighted, publicly available material for AI training without significantly impacting rights holders and avoid often highly unpredictable, imbalanced, and lengthy negotiations with data holders during model development or scientific experimentation.”
OpenAI

• “Applying the fair use doctrine to AI is not only a matter of American competitiveness—it’s a matter of national security. The rapid advances seen with the PRC’s DeepSeek, among other recent developments, show that America’s lead on frontier AI is far from guaranteed. Given concerted state support for critical industries and infrastructure projects, there’s little doubt that the PRC’s AI developers will enjoy unfettered access to data—including copyrighted data—that will improve their models. If the PRC’s developers have unfettered access to data and American companies are left without fair use access, the race for AI is effectively over.”
• To ensure the copyright system “continues to support American AI leadership,” the U.S. government should work to “prevent less innovative countries from imposing their legal regimes on American AI firms and slowing our rate of progress” and encourage “more access to government-held or government-supported data. This would boost AI development in any case, but would be particularly important if shifting copyright rules restrict American companies’ access to training data.”
• The U.S. government should also partner with industry to “develop custom models for national security. The government needs models trained on classified datasets that are fine-tuned to be exceptional at national security tasks for which there is no commercial market—such as geospatial intelligence or classified nuclear tasks. This will likely require on-premises deployment of model weights and access to significant compute, given the security requirements of many national security agencies.”
News/Media Alliance

• “Publishers should not be forced to subsidize the development of AI models and commercial products without a fair return for their own investments, no more than cloud providers would be expected to bear the costs of compute without payment for their input. The future of generative AI requires sustaining the incentives for the continued production of news and other quality content that, in turn, builds and powers generative AI models and products. Without high-quality, reliable materials, these tools will become less useful to consumers, and may jeopardize our country’s leadership in the sector. IP laws also protect AI companies, including when their original creations are misappropriated by foreign companies. We are committed to establishing a symbiotic, mutually beneficial framework between content production and AI development that respects intellectual property, facilitates technological development, and takes a balanced, market-based approach to AI innovation and regulation.”
• “The sufficiency of existing copyright law notwithstanding, we remain concerned that many AI stakeholders have used copyright-protected material to build and operationalize their models without consent, in ways damaging to publishers. While the legality of such activities is the subject of litigation, there is a danger that it will not be possible to undo the damage before a judicial resolution can occur. The AI Action Plan should therefore encourage AI developers to engage more collaboratively with content industries in a manner that serves the broader national interest and achieves a win-win result for our global aspirations.”
• “The Administration should push back on the flawed text and data mining (TDM) opt-out frameworks being considered or recently adopted in various countries. These opt-out policies do not work, have the potential to harm American creators and businesses through the uncompensated taking of their property, overregulate content licensing, and turn copyright law and free market licensing upside down.”
IMAGE: Visualization of an AI chip (via Getty Images)

The post Shaping the AI Action Plan: Responses to the White House’s Request for Information appeared first on Just Security.

The Just Security Podcast: Key Takeaways from the Paris AI Action Summit
https://www.justsecurity.org/107813/podcast-artificial-intelligence-action-summit/
Wed, 12 Feb 2025 18:25:30 +0000

Brianna Rosen joins the podcast to recap key takeaways from the recently concluded Artificial Intelligence Action Summit.

The post The Just Security Podcast: Key Takeaways from the Paris AI Action Summit appeared first on Just Security.

The Artificial Intelligence Action Summit recently concluded in Paris, France, drawing world leaders including U.S. Vice President JD Vance. The Summit produced a declaration on “inclusive and sustainable” artificial intelligence, which the United States and the United Kingdom declined to join, though some 60 other nations, including China and India, support it.

What are the key takeaways from the Summit? How might it shape other global efforts to regulate artificial intelligence?

Joining the show to discuss the Summit is Dr. Brianna Rosen, Director of Just Security’s AI and Emerging Technologies Initiative and Senior Research Associate at the University of Oxford.

Show Notes:

  • Just Security’s Artificial Intelligence coverage
  • Just Security’s Tech Policy under Trump 2.0 Series
  • Music: “Parisian Dream” by Albert Behar from Uppbeat: https://uppbeat.io/t/albert-behar/parisian-dream (License code: RXLDKOXCM02WX2LL) 

Listen to the episode, with a transcript available, by clicking below.

