Social Media Platforms Archives - Just Security
A Forum on Law, Rights, and U.S. National Security

Who Will Stand Up for Human Rights in 2026 – and How?

The deterioration in human rights in 2025 heightens the risks for defenders going forward, all worsened by donors’ deep funding cuts, especially those of the United States.

The year 2025 was difficult for human rights and human rights defenders.

Unceasing attacks came from governments, including the most powerful, as well as from the private sector and non-state groups, pushing agendas in opposition to human rights. Many of these assaults are amped up by technology, with the methods and means becoming ever cheaper and ever more accessible to the masses.

An annual analysis from the Dublin-based international rights group Front Line Defenders paints a devastating picture of killings, arbitrary detention, surveillance, and harassment. CIVICUS, an organization that measures civic space (defined as “the respect in policy, law and practice for freedoms of association, expression and peaceful assembly and the extent to which states protect these fundamental rights”), documented declines in 15 countries and improvements in only three. The declines were diverse in location and nature, ranging from mature democracies such as the United States, Germany, France, and Switzerland, to authoritarian regimes such as Burundi and Oman, to countries in crisis and conflict such as Sudan and Israel. Some types of human rights were uniquely politicized and singled out in 2025, including women’s rights and environmental rights. Freedom House recorded the 19th straight year of declines in global freedom.

All this is compounded by an unprecedented slash-and-burn to international aid budgets for organizations and individuals working on human rights worldwide. The Human Rights Funders Network of almost 450 institutions across 70 countries estimates that by 2026, human rights funding globally will experience a $1.9 billion reduction compared to levels in 2023.

Taken together, these developments make the world more dangerous than ever for human rights defenders, even as they have fewer resources at their disposal to combat the threats.

In 2026 and moving forward, two crucial questions arise for the defense of human rights globally. First, who will do the work of fighting to protect and advance human rights in the year ahead, and second, how can those in the international community still fiercely committed to human rights support them? These questions will be shadowed by another trend: impunity. Yet, at the same time, lessons and a few positive developments from 2025 can guide human rights defenders on how to seize opportunities in the coming year, beginning even this month at the United Nations.

The Earthquakes of 2025

Eviscerating Democracy, Human Rights, and Governance Assistance

In the United States, 2025 began with the newly inaugurated Trump administration dismantling the U.S. Agency for International Development (USAID) and canceling approximately 85 percent of its programming (from a budget of more than $35 billion in the fiscal year ending in September 2024). The gutting eliminated hundreds of millions of dollars of support for those working to protect human rights and expand freedom and democracy around the world. The State Department’s grantmaking efforts were similarly cut, with more than half of its awards canceled, including programs directly supporting human rights defenders such as one initiative providing emergency financial assistance to civil society organizations and a fund to promote human rights and democracy and respond to related crises.

Most other major donor countries followed suit, though not with the same sweep or to nearly the same degree. Canada said it would reduce foreign aid by $2.7 billion over the next four years, the Dutch announced structural spending cuts of €2.4 billion on development aid starting in 2027, and the European Union announced a €2 billion reduction in its main mechanism for development aid for 2025-2027. Multilateral funders were not immune to the trend: the United Nations, for one, will see major budget and staffing cuts for human rights in 2026.

The U.S. retreat from foreign assistance rapidly impacted all development sectors, from health to education to humanitarian assistance, but no sector was targeted with such enmity as that of democracy, human rights, and governance. Advocates and implementers not only saw the dire resource clawbacks discussed above, but also found themselves tarred by a steady diet of derisive commentary from the very policymakers doing the cutting.

Secretary of State Marco Rubio once championed human rights and democracy “activists” as a U.S. Senator, even serving on the board of the democracy-promoting International Republican Institute before the administration eliminated the congressional funding that supported it. He once told a crowd at the Brookings Institution that “[f]oreign aid is a very cost-effective way, not only to export our values and our example, but to advance our security and our economic interests.”

But as secretary of state, he abruptly reversed course, writing last April that the State Department unit overseeing civilian security, human rights, and democracy had “a bloated budget and unclear mandate,” and that its “Bureau of Democracy, Human Rights, and Labor had become a platform for left-wing activists to wage vendettas against ‘anti-woke’ leaders in nations such as Poland, Hungary.” Other members of the administration were similarly sharp-tongued about the sector, with now-former USAID Administrator Pete Marocco conflating the promotion of “civic society” with “regime change” in official court documents and President Donald Trump himself referring to USAID’s leadership as “radical lunatics.”

The rhetoric mirrors similar language used by authoritarians across the globe who have long been opposed to foreign assistance for democracy, human rights, and governance work, and it has real-world consequences for those advocating for human rights and freedom. Leaders of multiple countries have seized on the words of the Trump administration to launch spurious investigations of human rights defenders and other civil society activists who had received U.S. funding.

Closing Civic Space and New Technology

Closing civic space is not a new threat to human rights defenders, but it is one that has reached a fever pitch in the last few years. This has included both an increase in traditional attacks and a greater reliance on new tactics for suppression, especially in the digital sphere.

Nearly 45 percent of all civic space violations CIVICUS recorded for its annual analysis were related to the freedom of expression. The organization documented more than 900 violations of the right to peaceful assembly and more than 800 violations of freedom of association. The most frequent examples were detentions of protesters and journalists, followed by the detention of human rights defenders outside the context of a protest or journalism, merely for doing their work.

Authoritarian regimes also have become ever more adept at utilizing the digital space for repression. Tactics such as doxing, censorship, smearing, and online harassment are important tools in an authoritarian approach. They have been supplemented in recent years by less evident tactics such as shadow-banning, which the CIVICUS analysis defined as occurring when “a platform restricts content visibility without notifying the user,” allowing the platform to maintain an appearance of neutrality.

Women human rights defenders face additional risks online, including technology-facilitated gender-based violence: in a global survey by the Economist Intelligence Unit, 38 percent of women reported personal experience with violence online, from hacking and stalking to image-based sexual abuse.

Attacks in the digital space often are also connected with or fuel physical attacks, “including killings, enforced disappearances, arbitrary detention and harassment,” as Front Line Defenders reported in its analysis. Tunisia is paradigmatic. Amnesty International reported that, beginning in 2024, a “wave of arrests followed a large-scale online campaign…which saw homophobic and transphobic hate speech and discriminatory rhetoric against LGBTI activists and organizations spreading across hundreds of social media pages, including those espousing support for the Tunisian President Kais Saied. Traditional media outlets also broadcast inflammatory messages by popular TV and radio hosts attacking LGBTI organizations, calling for their dissolution and for the arrests of LGBTI activists.”

What to Expect for Human Rights in 2026 

The absence of meaningful and unified international pushback to human rights abuses by some of the world’s most powerful nations means the rights-based international system will continue to face unprecedented attacks, and the challenges that rights defenders face in the year ahead are likely to increase in number and intensity. Authoritarians worldwide have monitored the assault against human rights in the past year — from genocide in Gaza to the crackdowns on protesters in Tanzania to restrictions on freedom of association and expression in El Salvador and so many more instances — and they have learned that they are unlikely to be held accountable internationally in the near term.

Yet despite these challenges, a few developments in 2025 offer some reasons for optimism in the year ahead. Several large-scale, youth-led movements held their governments accountable for rights violations, from Bangladesh’s July Revolution, which ousted an abusive prime minister in 2024, to the Gen Z protests in Kenya over economic conditions and government corruption, a protest moniker that spread to other countries as well.

Some governments passed rights-protecting laws, from Thailand’s legalization of same-sex marriage to Colombia’s laws preventing child marriage. Courts stood up for human rights and held perpetrators to account, from the International Criminal Court’s conviction of Sudan’s Ali Muhammad Ali Abd-Al-Rahman for war crimes and crimes against humanity, to the U.S. conviction of The Gambia’s Michael Sang Correa for torture, to the symbolic judgment of the People’s Tribunal for Women of Afghanistan. These trends are likely to continue in 2026, despite the challenges, because courageous human rights defenders are using every avenue to fight for rights.

This year will also bring targeted opportunities to continue the fight for human rights. A preparatory committee for a proposed international crimes against humanity treaty begins work this month at the United Nations. Also at the U.N., this year’s Universal Periodic Reviews, a regular peer review of countries’ human rights records, will focus on some of the world’s worst rights offenders — including Sudan, Eswatini, and Rwanda — as well as countries with highly mixed records. These reviews provide an opportunity for the world to examine, publicly and critically, the rights records of all 193 U.N. member states and for victims and activists to share their stories and insights. While the United States has not submitted its self-evaluation, which was due late last year, the process continued with the usual submissions from the U.N. and others.

Creative activists also are likely to use prominent events, such as the 2026 Olympic Games, to push for the expansion and recognition of human rights. They can take the opportunity of the United States’ 250th anniversary celebrations to highlight and internationalize the country’s founding principles of life, liberty, and the pursuit of happiness, as well as the requirement that all governments “[derive] their just powers from the consent of the governed.”

Who Will Lead the Fight for Human Rights in 2026? 

As many governments pull back and even attack human rights, the work of human rights defenders and organizations will become more critical than ever. Some of them have been leading the fight for decades, including leading international NGOs, national organizations, networks, and prominent individual leaders. Others have done critical human rights work but haven’t labeled themselves as rights defenders, such as organizations providing access to clean water, supporting girls’ education, or working to prevent violent conflict.

Many work at the community level, alongside neighbors and friends, with human rights defender networks around the world, from the Mozambique Human Rights Defenders Network to Somos Defensores in Colombia. Some are in exile, fighting for rights in their home countries and for refugee and diaspora communities, like the brave Afghan women who organized a landmark People’s Tribunal in 2025 to expose rights violations against women. Others are professionals whose skills directly relate to human rights — lawyers, judges, journalists, and more. They include people like the brave journalists who continue to report on conditions in Gaza, despite the incredible risks, and the Burmese lawyers who continue to document rights violations. Some are individual activists, using their platforms and skills to protect rights and call attention to attacks against them, like Iranian Nobel laureate Narges Mohammadi, who was recently detained alongside other rights defenders while attending a memorial service for a human rights lawyer. Some are informal coalitions, student and youth groups, or protest participants — social movements have been and will be an essential component of the fight for human rights. All of these actors play a critical role in the human rights ecosystem. All of them are human rights defenders.

Aid funding cuts have devastated civil society organizations and will continue to impact human rights advocates. A survey by the International Foundation for Electoral Systems and International IDEA of 125 civil society organizations based in 42 countries found that 84 percent of respondents had lost funding due to U.S. and other countries’ aid cuts, with the same number expecting further cuts in 2026. UN Women reported that more than one in three women’s rights and civil society organizations have suspended or shut down programs to end violence against women and girls and more than 40 percent have scaled back or closed life-saving services. The philanthropic organization Humanity United found that 44 percent of peacebuilding organizations that it surveyed would run out of funds by the end of 2025.

These cuts will only be amplified as time goes on: fewer young people will be able to become human rights professionals while still putting food on the table, legal cases that take years to process won’t be filed for lack of funding, human rights abuses won’t be documented, and attacks from authoritarian regimes will go unchecked. Shrinking development budgets will no longer provide the same levels of support to the courts and anti-corruption bodies that human rights defenders have traditionally approached to pursue justice, or to support hotlines where ordinary people can call in anonymously to report abuses at the hands of security forces. Such foreign assistance enabled vital avenues of accountability, but it also signified solidarity: a signal that at least some political decisionmakers, both at home and abroad, believed in human rights and supported those working to deepen and protect them.

But despite the myriad challenges, there will be human rights defenders who continue the fight. For many, changes in funding or the withdrawal of political top-cover won’t stop them from finding ways to carry on their work. One need only look at Iran’s protests today, where thousands of people are exercising and demanding their human rights amidst a brutal crackdown and internet blackout, and without international funding. Rights defenders have been doing a lot with a little for many years. Some — especially women, youth, Indigenous people, and disabled defenders — have often been excluded from human rights funding and support in the past. A new generation has seen the horrors of Gaza, El-Fasher, eastern Ukraine, or even around the corner from their homes, in the news and online, and they have committed themselves to social justice and the prevention of atrocities.

Human rights has always been a universal endeavor, one that requires diverse supporters, advocates, and allies; this is true now more than ever.

How Can the International Community Support Them?

Even those governments and institutions that continue to lead in supporting human rights internationally will need to do more with less, as the cuts outlined above exemplify, to support those on the front lines. This is the chance to shift “localization” — the practice of funding local civil society organizations directly, based on their priorities, rather than via large, overhead-heavy NGOs funded by donor countries — from an ideal to a necessary strategy. A grant of $20,000 may not keep a major international organization online, but it can fund a community-based service provider. Donors can integrate a rights-based approach across portfolios instead of siloing the issue, building human rights goals and strategies into other foreign policy initiatives. For example, companies can integrate human rights efforts and measurements into their supply chains for products from batteries to chocolate, producing what they would already produce but in a way that advances human rights as well. Military operations can add human rights and gender considerations with little cost but potentially huge impact; this requires training, tools, and high-level political will to succeed. And donors can continue to advocate for rights and use diplomatic pressure and support as key tools.

The elephant in the room is the United States. The Trump administration is not only backtracking on the traditional U.S. commitment to the values of democracy and human rights, both at home and internationally, but also has sought to hamper others in funding such initiatives. But there are still important steps that can be taken to protect human rights. Congress must do its job and provide oversight, holding the administration accountable to the laws that protect this important work. Members should speak out against injustices and rights violations, at home and abroad. Rep. Chris Smith (R-NJ), for example, has played a key role in the Tom Lantos Human Rights Commission, calling out rights abuses in places like Turkey, and Rep. Tim Kennedy (D-NY) led a congressional letter to the Department of Homeland Security urging the Trump administration to overturn its decision to terminate Temporary Protected Status for Burmese people. State governments have always played a key role in advancing rights, and this will become more critical than ever.

Foreign governments that have engaged on human rights issues but haven’t been the largest international donors or advocates will be particularly important. Some of them are stepping up already. Examples include Japan playing a leading role in advancing women’s issues, South Africa and The Gambia taking cases to the International Court of Justice accusing Israel and Myanmar, respectively, of violating the Genocide Convention, and Ireland continuing its steadfast allyship with human rights defenders.

Now is the time for committed countries around the world to continue to demonstrate the global nature of this agenda, set out more than 75 years ago in the Universal Declaration of Human Rights and reinvigorated by 18 international human rights treaties.

Philanthropy and the international private sector will be more essential than ever in 2026. Foundations cannot offset the huge funding gaps left by governments and multilateral donors — total U.S. philanthropic giving is about $6 billion per year, whereas overseas development assistance worldwide in 2023 alone accounted for $223 billion — but they can provide strategic investments that help protect rights and those defending them, amplify their voices, fund innovative new approaches, and help the ecosystem survive. Philanthropies around the world provided nearly $5 billion in human rights support globally in 2020 alone, and their funding is critical for many organizations.

Companies have their own role to play, one that includes but goes well beyond corporate social responsibility, from responsible tech and AI to eliminating forced labor from supply chains to hiring diverse employees. The private sector has a unique opportunity to ensure that human rights remain on the global agenda, because there is a strong business case for human rights protections and for alliances with those who truly understand the needs and wants of local populations. A great example is the effort by numerous auto and electronics companies to move away from cobalt batteries, a shift that reflects both the horrible rights violations facing individuals and communities around cobalt mines in the Democratic Republic of Congo and the recognition that the move is better for business, given supply chain volatility.

Defending against challenges to human rights, democracy, and good governance in 2026 and beyond will require creativity and broad coalition-building across sectors that too often are siloed, such as health, peacebuilding, humanitarian assistance, and the field of democracy, human rights, and governance. Everyone who does not traditionally think of themselves as a human rights defender, from government officials to the private sector, will need to step up to support those on the frontlines of the fight to defend human rights.

Just Security’s Artificial Intelligence Archive

Just Security’s collection of articles analyzing the implications of AI for society, democracy, human rights, and warfare.

Since 2020, Just Security has been at the forefront of analysis on rapid shifts in AI-enabled technologies, providing expert commentary on risks, opportunities, and proposed governance mechanisms. The catalog below organizes our collection of articles on artificial intelligence into general categories to facilitate access to relevant topics for policymakers, academic experts, industry leaders, and the general public. The archive will be updated as new articles are published. The archive also is available in reverse chronological order at the artificial intelligence articles page.

AI Governance

Trump’s Chip Strategy Needs Recalibration
By Michael Schiffer (December 15, 2025)

AI Model Outputs Demand the Attention of Export Control Agencies
By Joe Khawam and Tim Schnabel (December 12, 2025)

Governing AI Agents Globally: The Role of International Law, Norms and Accountability Mechanisms
By Talita Dias (October 17, 2025)

Dueling Strategies for Global AI Leadership? What the U.S. and China Action Plans Reveal
By Zena Assaad (September 4, 2025)

Selling AI Chips Won’t Keep China Hooked on U.S. Technology
By Janet Egan (September 3, 2025)

The AI Action Plans: How Similar are the U.S. and Chinese Playbooks?
By Scott Singer and Pavlo Zvenyhorodskyi (August 26, 2025)

Assessing the Trump Administration’s AI Action Plan
By Sam Winter-Levy (July 25, 2025)

Decoding Trump’s AI Playbook: The AI Action Plan and What Comes Next
Brianna Rosen interview with Joshua Geltzer, Jenny Marron and Sam Winter-Levy (July 24, 2025)

Rethinking the Global AI Race
By Lt. Gen. (ret.) John (Jack) N.T. Shanahan and Kevin Frazier (July 21, 2025)

The Trump Administration’s AI Action Plan Is Coming. Here’s What to Look For.
By Joshua Geltzer (July 18, 2025)

AI Copyright Wars Threaten U.S. Technological Primacy in the Face of Rising Chinese Competition
By Bill Drexel (July 8, 2025)

What Comes Next After Trump’s AI Deals in the Gulf
By Alasdair Phillips-Robins and Sam Winter-Levy (June 4, 2025)

AI Governance Needs Federalism, Not a Federally Imposed Moratorium
By David S. Rubenstein (May 29, 2025)

Open Questions for China’s Open-Source AI Regulation
By Nanda Min Htin (May 5, 2025)

The Just Security Podcast: Trump’s AI Strategy Takes Shape
Brianna Rosen interview with Joshua Geltzer (April 17, 2025)

Shaping the AI Action Plan: Responses to the White House’s Request for Information
By Clara Apt and Brianna Rosen (March 18, 2025)

Export Controls on Open-Source Models Will Not Win the AI Race
By Claudia Wilson and Emmie Hine (February 25, 2025)

The Just Security Podcast: Key Takeaways from the Paris AI Action Summit
Paras Shah interview with Brianna Rosen (February 12, 2025)

The Just Security Podcast: Diving Deeper into DeepSeek
Brianna Rosen interview with Lennart Heim, Keegan McBride and Lauren Wagner (February 4, 2025)

What DeepSeek Really Changes About AI Competition
By Konstantin F. Pilz and Lennart Heim (February 3, 2025)

Throwing Caution to the Wind: Unpacking the U.K. AI Opportunities Action Plan
By Elke Schwarz (January 30, 2025)

What Just Happened: Trump’s Announcement of the Stargate AI Infrastructure Project
By Justin Hendrix (January 22, 2025)

The Future of the AI Diffusion Framework
By Sam Winter-Levy (January 21, 2025)

Unpacking the Biden Administration’s Executive Order on AI Infrastructure
By Clara Apt and Brianna Rosen (January 16, 2025)

Trump’s Balancing Act with China on Frontier AI Policy
By Scott Singer (December 23, 2024)

The AI Presidency: What “America First” Means for Global AI Governance
By Brianna Rosen (December 16, 2024)

The United States Must Win The Global Open Source AI Race
By Keegan McBride and Dean W. Ball (November 7, 2024)

AI at UNGA79: Recapping Key Themes
By Clara Apt (October 1, 2024)

Rethinking Responsible Use of Military AI: From Principles to Practice
By Brianna Rosen and Tess Bridgeman (September 26, 2024)

Competition, Not Control, is Key to Winning the Global AI Race
By Matthew Mittelsteadt and Keegan McBride (September 17, 2024)

The Just Security Podcast: Strategic Risks of AI and Recapping the 2024 REAIM Summit
Paras Shah interview with Brianna Rosen (September 12, 2024)

Putting the Second REAIM Summit into Context
By Tobias Vestner and Simon Cleobury (September 5, 2024)

The Nuts and Bolts of Enforcing AI Guardrails
By Amos Toh and Ivey Dyson (May 30, 2024)

House Meeting on White House AI Overreach Highlights Congressional Inaction
By Melanie Geller and Julian Melendi (April 12, 2024)

Why We Need a National Data Protection Strategy
By Alex Joel (April 4, 2024)

Is the Biden Administration Reaching a New Consensus on What Constitutes Private Information?
By Justin Hendrix (March 19, 2024)

The Just Security Podcast: How Should the World Regulate Artificial Intelligence?
Paras Shah and Brianna Rosen interview with Robert Trager (February 2, 2024)

It’s Not Just Technology: What it Means to be a Global Leader in AI
By Kayla Blomquist and Keegan McBride (January 4, 2024)

AI Governance in the Age of Uncertainty: International Law as a Starting Point
By Talita de Souza Dias and Rashmin Sagoo (January 2, 2024)

Experts React: Unpacking the Biden Administration’s New Efforts on AI
By Ian Miller (November 14, 2023)

Biden’s Executive Order on AI Gives Sweeping Mandate to DHS
By Justin Hendrix (November 1, 2023)

The Tragedy of AI Governance
By Simon Chesterman (October 18, 2023)

Introducing the Symposium on AI Governance: Power, Justice, and the Limits of the Law
By Brianna Rosen (October 18, 2023)

U.S. Senate AI Hearings Highlight Increased Need for Regulation
By Faiza Patel and Melanie Geller (September 25, 2023)

The Perils and Promise of AI Regulation
By Faiza Patel and Ivey Dyson (July 26, 2023)

Weighing the Risks: Why a New Conversation is Needed on AI Safety
By Michael Depp (June 30, 2023)

To Legislate on AI, Schumer Should Start with the Basics
By Justin Hendrix and Paul M. Barrett (June 28, 2023)

Regulating Artificial Intelligence Requires Balancing Rights, Innovation
By Bishop Garrison (January 11, 2023)

Emerging Tech Has a Front-Row Seat at India-Hosted UN Counterterrorism Meeting. What About Human Rights?
By Marlena Wisniak (October 28, 2022)

NATO Must Tackle Digital Authoritarianism
By Michèle Flournoy and Anshu Roy (June 29, 2022)

NATO’s 2022 Strategic Concept Must Enhance Digital Access and Capacities
By Chris Dolan (June 8, 2022)

Watchlisting the World: Digital Security Infrastructures, Informal Law, and the “Global War on Terror”
By Ramzi Kassem, Rebecca Mignot-Mahdavi and Gavin Sullivan (October 28, 2021)

One Thousand and One Talents: The Race for A.I. Dominance
By Lucas Irwin (April 7, 2021)

National Security & War

Embedded Human Judgment in the Age of Autonomous Weapons
By Lena Trabucco (October 16, 2025)

AI’s Hidden National Security Cost
By Caroline Baxter (October 1, 2025)

Harnessing the Transformative Potential of AI in Intelligence Analysis
By Rachel Bombach (August 12, 2025)

The Law Already Supports AI in Government — RAG Shows the Way
By Tal Feldman (May 16, 2025)

The United States Must Avoid AI’s Chernobyl Moment
By Janet Egan and Cole Salvador (March 10, 2025)

A Start for AI Transparency at DHS with Room to Grow
By Rachel Levinson-Waldman and Spencer Reynolds (January 22, 2025)

The U.S. National Security Memorandum on AI: Leading Experts Weigh In
By Just Security (October 25, 2024)

The Double Black Box: AI Inside the National Security Ecosystem
By Ashley Deeks (August 14, 2024)

As DHS Implements New AI Technologies, It Must Overcome Old Shortcomings
By Spencer Reynolds and Faiza Patel (May 21, 2024)

The Machine Got it Wrong? Uncertainties, Assumptions, and Biases in Military AI
By Arthur Holland Michel (May 13, 2024)

An Oversight Model for AI in National Security: The Privacy and Civil Liberties Oversight Board
By Faiza Patel and Patrick C. Toomey (April 26, 2024)

Bringing Transparency to National Security Uses of Artificial Intelligence
By Faiza Patel and Patrick C. Toomey (April 4, 2024)

National Security Carve-Outs Undermine AI Regulations
By Faiza Patel and Patrick C. Toomey (December 21, 2023)

Unhuman Killings: AI and Civilian Harm in Gaza
By Brianna Rosen (December 15, 2023)

The Path to War is Paved with Obscure Intentions: Signaling and Perception in the Era of AI
By Gavin Wilde (October 20, 2023)

DHS Must Evaluate and Overhaul its Flawed Automated Systems
By Rachel Levinson-Waldman and José Guillermo Gutiérrez (October 19, 2023)

AI and the Future of Drone Warfare: Risks and Recommendations
By Brianna Rosen (October 3, 2023)

Latin America and Caribbean Nations Rally Against Autonomous Weapons Systems
By Bonnie Docherty and Mary Wareham (March 6, 2023)

Investigating (Mis)conduct in War is Already Difficult
By Laura Brunn (January 5, 2023)

Gendering the Legal Review of New Means and Methods of Warfare
By Andrea Farrés Jiménez (August 23, 2022)

Artificial Intelligence in the Intelligence Community: Oversight Must Not Be an Oversight
By Corin R. Stone (November 30, 2021)

Artificial Intelligence in the Intelligence Community: Know Risk, Know Reward
By Corin R. Stone (October 19, 2021)

Artificial Intelligence in the Intelligence Community: The Tangled Web of Budget & Acquisition
By Corin R. Stone (September 28, 2021)

Embedding Gender in International Humanitarian Law: Is Artificial Intelligence Up to the Task?
By Andrea Farrés Jiménez (August 27, 2021)

Artificial Intelligence in the Intelligence Community: Culture is Critical
By Corin R. Stone (August 17, 2021)

Artificial Intelligence in the Intelligence Community: Money is Not Enough
By Corin R. Stone (July 12, 2021)

Adding AI to Autonomous Weapons Increases Risks to Civilians in Armed Conflict
By Neil Davison and Jonathan Horowitz (March 26, 2021)

Democracy

The AI Action Plan and Federalism: A Constitutional Analysis
By David S. Rubenstein (July 30, 2025)

U.S. AI-Driven “Catch and Revoke” Initiative Threatens First Amendment Rights
By Faiza Patel (March 18, 2025)

The Munich Security Conference Provides an Opportunity to Improve on the AI Elections Accord
By Alexandra Reeve Givens (February 13, 2025)

Q&A with Marietje Schaake on the Tech Coup and Trump
By Marietje Schaake (February 6, 2025)

Maintaining the Rule of Law in the Age of AI
By Katie Szilagyi (October 9, 2024)

Shattering Illusions: How Cyber Threat Intelligence Augments Legal Action against Russia’s Influence Operations
By Mason W. Krusch (October 8, 2024)

Don’t Downplay Risks of AI for Democracy
By Suzanne Nossel (August 28, 2024)

Tracking Tech Company Commitments to Combat the Misuse of AI in Elections
By Allison Mollenkamp and Clara Apt (March 28, 2024)

Multiple Threats Converge to Heighten Disinformation Risks to This Year’s US Elections
By Lawrence Norden, Mekela Panditharatne and David Harris (February 16, 2024)

Is AI the Right Sword for Democracy?
By Arthur Holland Michel (November 13, 2023)

The Just Security Podcast: The Dangers of Using AI to Ban Books
Paras Shah interview with Emile Ayoub (October 27, 2023)

Process Rights and the Automation of Public Services through AI: The Case of the Liberal State
By John Zerilli (October 26, 2023)

Using AI to Comply With Book Bans Makes Those Laws More Dangerous
By Emile Ayoub and Faiza Patel (October 3, 2023)

Regulation is Not Enough: A Blueprint for Winning the AI Race
By Keegan McBride (June 29, 2023)

The Existential Threat of AI-Enhanced Disinformation Operations
By Bradley Honigberg (July 8, 2022)

System Rivalry: How Democracies Must Compete with Digital Authoritarians
By Ambassador (ret.) Eileen Donahoe (September 27, 2021)

Surveillance
Social Media & Content Moderation
Further Reading

The Global Retreat from Content Moderation Is Endangering Free Expression: Kenya Shows Why

By abandoning proactive content moderation, platforms are accelerating a global slide toward censorship — the very outcome they claim to oppose.

Across the world, major social media platforms are undergoing a profound and troubling shift: a structured retreat from proactive content moderation. Platforms are framing this move as a principled defense of “free speech,” but in practice, it is a deliberate choice to expose users to unprecedented levels of harm, making genuine freedom of expression more fragile.

This post-moderation philosophy creates the perfect conditions for State repression and digital authoritarian drift. Kenya, where Internet Sans Frontieres (the organization where I serve as Executive Director) conducted a seven-month investigation with the KenSafeSpace coalition — created to safeguard a democratic and safe digital space in Kenya — offers a revealing and deeply concerning case study of this global trend.

A Global Context of Harmful Deregulation by Platforms

For years, online experiences differed sharply depending on geography. In wealthier regions such as North America and Europe, content moderation infrastructure, although imperfect, remained more robust than in under-resourced regions like Africa, Latin America, or parts of Asia. This structural inequality, documented notably by whistleblower Frances Haugen, shaped global content governance.

But today, platforms are abandoning proactive moderation altogether. They are replacing it with a narrow, reactive model that intervenes online only when imminent harm can be demonstrated. Platforms are often justifying this shift as a correction to “censorship.” In reality, it strips away the minimal safeguards designed to prevent dangerous content from spiraling into real-world violence.

The consequences are unfolding now, not only in Kenya, but worldwide.

Why Kenya Matters

Kenya is entering a tense period ahead of the 2027 general elections. The country’s recent history, including post-election violence, shows how quickly inflammatory speech can translate into real-world harm. In this context, the disappearance of proactive content moderation is a direct risk amplifier.

From January to July 2025, Internet Sans Frontieres and the KenSafeSpace coalition observed content circulating on the most widely used platforms in the country: X (formerly known as Twitter), Facebook, and TikTok. We also collected user reports through a dedicated submission form.

Here is what we found:

  • 43 percent of analyzed content showed strong indicators of hate speech, particularly along ethnic and religious lines. In one example, commenting on a video of a religious figure denouncing “infidels,” a user explicitly called for violence against Muslim communities “worse than in 2007” (when Kenya experienced widespread violence after presidential elections). The post was still available online in October 2025 and had already received close to 400,000 views.
  • 26 percent of the content analyzed involved normalized and unmoderated gender-based violence. In one widely viewed publication (over one million views), a user asked X’s AI chatbot Grok to “nudify” a picture of a woman, without the consent of the woman. While Grok did not provide the requested answer, other users responded with “nudified” pictures of the woman. The post was still available on X in October 2025.
  • Close to 30 percent of the posts posed serious risks of electoral disinformation. One recurring narrative falsely alleges that the incumbent government is mobilizing Somali-born citizens, largely Muslim, to manipulate the next election and secure another term for Kenyan President William Ruto.

Our findings reveal a system where harmful content is allowed to spread at scale, precisely because proactive intervention has been abandoned.

From Platform Neglect to State Overreach

Crucially, this erosion of safeguards is unfolding amidst conditions in which governments have intensified their own efforts to restrict freedom of expression.

In Kenya, authorities increasingly invoke the Computer Misuse and Cybercrimes Act to arrest bloggers, activists, or influencers on vague accusations of spreading false information. Yet a recent report by Amnesty International documents how the most potent and unmoderated disinformation often originates from State actors themselves.

Freedom House’s Freedom on the Net 2025 report rates Kenya as only “partly free,” citing among other issues the government-ordered internet disruptions during the 2024 anti-government protests, which were condemned by several civil society organizations, including Internet Sans Frontieres.

This pattern is visible elsewhere. As platforms withdraw from basic moderation duties, States feel licensed to step in with heavy-handed, often illiberal measures, including social media bans, arrests, surveillance, or criminal liability for intermediaries.

Brazil’s Block of X and the New Era of State-Platform Confrontation

Brazil’s nationwide block of X for two months in 2024 illustrates the conflicts to come. The ban had been imposed by Brazil’s Supreme Court after X “had refused to ban several profiles deemed by the government to be spreading misinformation about the 2022 Brazilian Presidential election.” Beyond the cost to Elon Musk’s X and the disrupted access for millions of Brazilians, many digital rights organizations — fierce defenders of open internet principles — struggled to publicly condemn the block. Why? Because X’s own withdrawal from responsible moderation — and even defiance of court orders — had created such a toxic environment that defending the platform became nearly untenable.

Another warning sign of this new confrontation between Big Tech and sovereign nations was the August 2024 arrest of Telegram founder Pavel Durov in France. Internet Sans Frontieres publicly condemned the arrest as a dangerous precedent for intermediary liability (without defending the problematic circulation of extremely harmful content on the app). At the same time, we tried to warn platforms that their disinvestment in safety was making such State actions more likely in the future.

There is still time to reverse course: Tech companies can make the responsible decision to enforce proactive moderation and to establish moderation safeguards sensitive to local context. Not all countries can afford the luxury of relying on the marketplace of ideas to curb the negative societal effects of harmful speech online. Additionally, authorities in Kenya should refrain from using the fight against hate speech and disinformation as a pretext for invoking the law to silence dissent online. Civil society organizations should double down on efforts to research and explain the impact of harmful speech in Kenya, before and during the general elections of 2027. Citizens in Kenya should continue their vigilance and demand accountability from authorities and social media companies.

By abandoning proactive content moderation, platforms are accelerating a global slide toward censorship — the very outcome they claim to oppose. In the short term, some companies may benefit from reduced operational costs or increased engagement metrics. But in the long term, history will not look kindly on those who turned away from the responsibility to protect freedom of expression when it mattered most.

Before Enforcing the New Foreign Data Law (PADFAA), Congress Must Fix These Five Things

PADFAA was enacted with the right intent but the wrong architecture. Congress must adopt five targeted amendments before enforcement begins.

When Congress enacted the Protecting Americans’ Data from Foreign Adversaries Act (PADFAA) in 2024, it did so with the right goal: preventing Americans’ sensitive personal data from reaching foreign adversaries. But the law Congress actually passed suffers from five critical drafting flaws that make responsible enforcement nearly impossible. Before the Federal Trade Commission (FTC) or any agency brings its first enforcement action, Congress must fix these problems: (1) add a knowledge requirement, (2) narrow the definition of “controlled by a foreign adversary,” (3) align the “data broker” definition with established law, (4) require DOJ consultation on enforcement, and (5) narrow the overly broad treatment of web browsing data as categorically sensitive.

These are not cosmetic edits. PADFAA as written could penalize legitimate U.S. companies for routine global operations while failing to deliver the targeted national security tool Congress intended. The law was included in a high-stakes supplemental appropriations package that also funded military aid to Israel, Ukraine, and Taiwan, and carried the TikTok divestiture measure. As a result, PADFAA never received the careful legislative scrutiny that major statutes typically require. That speed created two structural problems: substantive overbreadth that sweeps in normal commerce, and a mismatched enforcement structure that assigns national security determinations to the FTC, a consumer protection agency never designed for that role.

Kevin Moriarty has argued that the FTC’s failure to enforce PADFAA undermines the case for federal privacy legislation. But this analysis misreads the problem. The issue isn’t FTC reluctance. It’s that Congress gave enforcement authority to the wrong agency for the wrong reasons and drafted a statute with critical flaws that make responsible enforcement nearly impossible without congressional fixes first.

Congressional Turf Wars, Not National Security, Put the FTC in Charge

At first glance, it looks odd that a statute framed as a national security measure was handed to the FTC rather than the Department of Justice (DOJ) or one of the national security components of the executive branch. That was not a thoughtful, security-driven design decision. It was a jurisdictional workaround.

The House Energy and Commerce Committee (E&C) played the lead role in moving PADFAA. E&C has jurisdiction over the FTC and over consumer protection. It does not have jurisdiction over DOJ, over DOJ’s National Security Division, or over the broader national security and counterintelligence apparatus of the executive branch. If the bill had been drafted to vest primary enforcement power in DOJ, or to require DOJ to make core determinations, it would likely have been referred to the House Judiciary Committee. That would have slowed or jeopardized the package.

To keep the bill within E&C’s jurisdiction, the drafters gave enforcement authority to the FTC. That move protected the bill’s path to passage, especially because the entire supplemental package was on a tight timeline and politically sensitive. But that same move produced a predictable capability gap. The FTC is an expert consumer protection and competition regulator. It is not, and has never claimed to be, a national security agency.

The legislative record underscores just how unusual that process was. The official House Energy and Commerce Committee Report (H. Rept. 118-418) on PADFAA spans only a few pages and contains almost no substantive policy discussion. The report’s “Purpose and Summary” section merely restates that the bill would prohibit data brokers from transferring sensitive data of U.S. individuals to foreign adversaries. The “Background and Need for Legislation” cites only a few news stories and a decade-old FTC study, while offering no analysis of definitions, scope, or the rationale for giving enforcement to the FTC instead of the Department of Justice. There is no record of a classified briefing, no legislative findings about national security risks, and no committee report language explaining how key terms like “controlled by a foreign adversary” or “data broker” should be interpreted. For a statute carrying significant civil penalties and broad national security implications, the absence of such analysis is striking. PADFAA moved through the committee and the House floor with extraordinary speed, reflecting political urgency, not policy deliberation.

That mismatch matters. PADFAA prohibits certain transfers of “personally identifiable sensitive data” about U.S. individuals to “foreign adversary countries,” or to private entities that are “controlled by a foreign adversary.” Violations can trigger penalties of more than $50,000 per incident. But applying those standards in the real world requires answers to national security questions: Which foreign entities are acting as cutouts for adversary governments? Which corporate structures are effectively state influenced? Who exercises indirect control? What is the adversary’s access model?

Those are determinations that usually rely on classified reporting, interagency intelligence analysis, and visibility into mergers, investments, and ownership structures across borders. The FTC simply does not have routine access to that type of material, nor the systems needed to handle it, such as sensitive compartmented information facilities (SCIFs). It also has few staff with active security clearances. It has no independent authority to compel intelligence agencies to share classified threat information for civil enforcement purposes. And it has no organic capacity to do adversarial ownership tracing in the same way Treasury’s Office of Foreign Assets Control or DOJ’s National Security Division can.

The Senate never had the opportunity to consider PADFAA on its own terms. The measure was attached to the supplemental appropriations package. Senators were presented with a binary choice: accept the entire package or risk derailing urgently needed national security funding. It was understandable that the Senate chose not to hold up the supplemental over a data provision. But the consequence was that PADFAA became law without the Senate holding committee hearings, without Senators receiving expert testimony, and without the kind of inter-chamber reconciliation that would normally refine definitions and enforcement structure. The Senate’s procedural constraint, combined with the House’s jurisdictional shortcut, produced a statute that now asks a consumer protection agency to manage a national security regime.

Two Frameworks, Zero Coordination

This jurisdictional workaround didn’t just create an institutional mismatch, it produced direct conflicts with existing national security frameworks already in place. The Department of Justice and the Department of Commerce have already built a structured, interagency process to manage the very risks PADFAA was intended to address. That framework, developed under Executive Order 14117 and formalized in the Department of Justice’s Bulk Data Regulations, offers a useful contrast.

That framework defines covered “bulk data” transactions through clear thresholds and categories, focusing on data sets large enough to enable strategic exploitation, such as geolocation, health, genomic, and biometric data. It also requires interagency consultation, intelligence-informed risk assessments, and case-by-case review before designating a transaction or sector as restricted. Importantly, DOJ must coordinate with Commerce, Treasury, State, and the intelligence community, allowing for a comprehensive threat evaluation before enforcement.

By contrast, PADFAA hands the Federal Trade Commission, a civil consumer protection regulator, sole authority to interpret and enforce prohibitions on transfers of “personally identifiable sensitive data” to “foreign adversaries.” There is no requirement for interagency consultation, no classified input, and no formal mechanism to resolve conflicts with the DOJ and Commerce framework. The FTC may issue civil investigative demands and seek penalties, but it lacks the analytic or procedural infrastructure to make national security determinations.

The differences are not academic. They create direct conflicts in law and practice. For instance, DOJ’s Bulk Data Rule defines foreign “control” using a 50 percent ownership threshold, while PADFAA applies a 20 percent standard, meaning the same entity could be legal under DOJ’s rule but unlawful under FTC enforcement. DOJ’s framework includes licensing exceptions, mitigation measures, and safe harbors for legitimate transactions; PADFAA has none. DOJ’s process ties enforcement to risk-based categories, while PADFAA applies a flat prohibition to any “personally identifiable sensitive data,” regardless of scale or context.

The result is a two-track system that can produce contradictory outcomes. A U.S. company compliant under DOJ and Commerce rules could still be exposed to massive FTC penalties. A retailer that shares loyalty card data with a European analytics partner partially owned by a Chinese investor, for example, could be deemed “controlled by a foreign adversary” under PADFAA but not under DOJ’s test. A health app that uses a third-party software development kit (a code package that adds features like analytics or advertising) cleared under Commerce’s risk review could nonetheless face FTC scrutiny. Even routine logistics tracking data, protected under DOJ’s licensing process, could trigger liability under PADFAA.

These inconsistencies create real compliance paralysis. Companies cannot reconcile two systems that define risk differently and delegate enforcement to agencies with opposing mandates. DOJ and Commerce rely on intelligence-driven review and coordinated oversight; the FTC operates through notice, subpoena, and civil penalty. Congress’s decision to split these roles was a procedural workaround, not a policy plan, and it has yielded a fragmented and contradictory enforcement regime.

Why the FTC Can’t Just Enforce

These contradictions aren’t just bureaucratic inconveniences. They reveal why the FTC cannot simply begin enforcing PADFAA without congressional fixes. Others have argued that the FTC already has the tools to enforce PADFAA and that its failure to announce early cases is troubling, highlighting several prior FTC matters involving data transfers or alleged transfers to companies in China.

It is true that the FTC has an extensive privacy and data security record. The Commission has, for example, brought cases against firms that allegedly allowed sensitive data to flow to China-based analytics providers, including in the digital health space. It has brought cases against mobile device makers that embedded undeclared data collection software. It has brought cases against location data brokers that sold precise geolocation data revealing visits to places such as health clinics, places of worship, and shelters.

But PADFAA is not simply a privacy-oriented unfairness or deception statute. Those traditional FTC cases typically hinge on consumer notice, consent, transparency, retention, or misuse. PADFAA is different. It is explicitly framed as a national security control on data flows to certain countries and certain entities. That distinction matters for at least three reasons.

First, the factual predicates are different. To bring a PADFAA case, the FTC would need to determine not only that data was transferred, but that the recipient was “controlled by a foreign adversary” or located in a “foreign adversary country.” That requires national security determinations, and in some cases intelligence reporting, that the FTC does not generate and is not guaranteed to receive.

Second, the penalty posture is different. PADFAA authorizes significant civil penalties, and it links those penalties to findings that look, in substance, like sensitive national security designations. If the FTC were to bring an aggressive early case against a global company that later turned out not to fall within the statutory definition of “controlled by a foreign adversary,” that would not just be a litigation loss. It could be an international diplomatic event.

Third, the resource posture is different. The FTC is already carrying an expanded merger enforcement program, a broader AI and commercial surveillance program, and multiple rulemakings. The Commission’s total headcount is well under 1,200 personnel, a fraction of the more than 115,000 employees at the Department of Justice, which has dedicated national security divisions, cleared personnel, and intelligence infrastructure. It has been asked to “do more with less” for years. Congress did not stand up a new cleared unit inside the FTC or provide a surge of national security personnel when it assigned PADFAA to the Commission. The expectation that the FTC can absorb a sensitive, high-stakes national security enforcement program on top of its existing load is detached from budget reality.

Seen in that light, the FTC’s public caution is not dereliction. It is prudence. The contrast with the Department of Justice and Commerce Department’s joint rulemaking under Executive Order 14117 is telling. That process, which produced the Bulk Data Regulations, involved months of notice-and-comment rulemaking, interagency consultation, and technical briefings with cleared experts from the intelligence community. The FTC, by contrast, was given enforcement power without any corresponding procedural infrastructure. Its staff were not granted new security clearances or liaison authorities, and there was no interagency working group to align PADFAA with the Bulk Data Regulations. The result is two overlapping national-security data regimes that define key terms differently and risk placing companies in impossible compliance conflicts.

Five Amendments Congress Must Make Before Enforcement

Even if the FTC had infinite capacity, PADFAA as written would still need repair before responsible enforcement. The statute suffers from at least five core drafting problems. Fixing them would preserve national security protections while reducing collateral damage to legitimate commerce. Most troubling, PADFAA directly conflicts with the Department of Justice’s Bulk Data Regulations in several places, creating the possibility that a company could comply with one framework and still violate the other.

I. Knowledge Requirement

The current text does not clearly require that a company know it is transferring data to a prohibited recipient. A data broker could be penalized for transferring data to an entity that is later determined, through some opaque national security analysis the company never saw, to be “controlled by a foreign adversary.” This exposes companies to strict liability for ownership structures they cannot possibly map with certainty.

The proposed fix is straightforward. Congress should amend the statute so that it is unlawful for a data broker to knowingly sell, license, rent, trade, transfer, release, disclose, provide access to, or otherwise make available personally identifiable sensitive data of a United States individual to either a foreign adversary country or any entity that the data broker knows is controlled by a foreign adversary.

A knowledge standard would focus enforcement on willful or reckless conduct. It would also align PADFAA with other FTC-enforced privacy laws, such as the Children’s Online Privacy Protection Act, which ties liability to knowing collection or disclosure.

II. “Controlled by a Foreign Adversary”

PADFAA currently defines “controlled by a foreign adversary” so broadly that it could reach ordinary global business structures. For example, an entity can be deemed controlled if a person from a “foreign adversary country” directly or indirectly owns as little as 20 percent. In practice, many U.S. companies with operations in China, or with Chinese national employees who have access to certain internal systems, could be characterized as “controlled” under a literal reading.

This definition is not administrable. It asks private actors to make national security status determinations about their vendors, advertising partners, analytics providers, logistics networks, franchisees, or affiliates, and to do that across borders. It also creates serious overbreadth problems. Normal coordination between a U.S. parent company and its affiliated business in China could be reinterpreted, after the fact, as a PADFAA violation carrying massive per-violation penalties. Consider a U.S. cloud provider that licenses its software to a European analytics firm partly owned by a sovereign wealth fund in an adversary country. Under DOJ’s Bulk Data Regulations, that transaction would be permitted because it does not involve a bulk transfer of sensitive data. Under PADFAA’s current text, however, the same transaction could trigger penalties simply because of minority foreign ownership, even when no national-security risk exists.

Congress should adopt one of two alternative approaches to restore clarity. One approach would require a presidential determination, accompanied by notice to Congress, that a particular entity presents a significant national security threat before that entity is treated as “controlled by a foreign adversary.” Another approach would align the definition with existing federal lists and designations, such as the Treasury Department’s Specially Designated Nationals list, the Commerce Department’s Entity List, the Federal Communications Commission’s covered list, or the Defense Department’s lists of Chinese military companies.

Either approach would give companies an objective way to understand who counts as a prohibited counterparty. It would also ensure that PADFAA remains tied to clearly articulated national security concerns rather than general suspicion of any company with a China footprint.
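To see why the list-based approach is more administrable, consider what a data broker's compliance check would reduce to under the proposed amendments. The sketch below is illustrative only: every entity name is invented, the helper functions are hypothetical, and a real program would consume the officially published SDN, Entity List, FCC covered list, and DoD lists rather than the placeholder sets shown here. It combines the proposed knowledge standard with the list lookup:

```python
# Illustrative sketch only: PADFAA screening if Congress adopted both the
# knowledge standard and the list-based definition proposed above. All entity
# names are invented placeholders, not real designations.

# Hypothetical snapshots of published federal designation lists.
SDN_LIST = {"Example Sanctioned Trading Co."}
ENTITY_LIST = {"Example Restricted Semiconductor Ltd."}
FCC_COVERED_LIST = {"Example Telecom Equipment Corp."}
DOD_CMC_LIST = {"Example Military-Linked Holdings"}

DESIGNATED = SDN_LIST | ENTITY_LIST | FCC_COVERED_LIST | DOD_CMC_LIST


def is_prohibited_counterparty(entity_name: str) -> bool:
    """Objective test: is the recipient on any federal designation list?

    This lookup replaces the open-ended 'controlled by a foreign adversary'
    judgment that a private company cannot reliably make on its own.
    """
    return entity_name in DESIGNATED


def transfer_is_unlawful(entity_name: str, broker_knows_recipient: bool) -> bool:
    """Proposed knowledge standard: liability attaches only to transfers the
    broker knowingly makes to a designated counterparty."""
    return broker_knows_recipient and is_prohibited_counterparty(entity_name)


if __name__ == "__main__":
    print(transfer_is_unlawful("Example Sanctioned Trading Co.", True))  # True: blocked
    print(transfer_is_unlawful("Ordinary EU Analytics GmbH", True))      # False: permitted
```

The point of the sketch is administrability: a lookup against published lists is auditable, while the current 20 percent indirect-ownership test asks brokers to map beneficial-ownership chains they cannot see.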

III. Data Broker

PADFAA defines “data broker” in a way that departs from settled practice. The statute currently sweeps in any entity that “provides access” to data, even if that entity is a service provider or has a direct relationship with the consumer. By contrast, state laws in California, Vermont, and Oregon define a data broker as a business that sells or trades personal information about a consumer with whom the business does not have a direct relationship.

Why does this matter? Because without a narrower definition, PADFAA could unintentionally cover retailers, restaurants, franchisors, logistics providers, and advertisers whose ordinary operations involve sharing consumer data with service providers or affiliates. Those are not the “shady data brokers” that lawmakers routinely describe when they talk about Americans’ data being sold on open markets. Those are mainstream businesses that are trying to comply with U.S. law while operating globally. The divergence between PADFAA’s and DOJ’s definitions compounds the confusion. DOJ’s rule applies to large-scale bulk data sets that could be exploited for intelligence purposes, while PADFAA extends to individual records and routine consumer transactions. A company could therefore follow DOJ’s framework and still be exposed to liability under PADFAA for the same conduct.

The fix is to bring PADFAA’s “data broker” definition into line with state law and common understanding of the term. Congress should amend the statute to clarify that a data broker is an entity that knowingly collects and sells to third parties the personal information of a consumer with whom the business does not have a direct relationship.

IV. Department of Justice Consultation

Congress should correct the procedural design flaw that resulted from the House jurisdiction issue. The ideal solution would be to transfer primary PADFAA enforcement authority to the Department of Justice, which already administers the Bulk Data Regulations and has the national security infrastructure, classified intelligence access, and interagency coordination mechanisms necessary to make “controlled by a foreign adversary” determinations.

Short of that transfer, Congress should at minimum require the FTC to consult with the Department of Justice on allegations, evidence, and proposed enforcement related to PADFAA, and give DOJ the opportunity to review proposed regulations or guidance to ensure consistency with national security goals.

This is not cosmetic. It is how you inject national security expertise back into a statute that was structurally diverted from DOJ for jurisdictional reasons. Requiring DOJ consultation would not sideline the FTC. Instead, it would create an accountability and validation loop that protects both agencies. The FTC would no longer be forced to make sensitive designations alone. DOJ would have insight into, and some responsibility for, the national security posture of PADFAA enforcement.

V. Web Browsing Data as “Sensitive” Information

PADFAA also goes further than any previous federal law by classifying ordinary web browsing history as “personally identifiable sensitive data.” That is not a carefully considered policy choice; it is an overreach born of haste. For the first time, a federal statute places routine online activity such as visiting news sites, shopping platforms, or social media pages on the same legal footing as medical, biometric, or financial records.

No existing federal privacy or security framework treats web browsing information as categorically sensitive. The Federal Trade Commission's own data security precedents, the Gramm-Leach-Bliley Act, and the Health Insurance Portability and Accountability Act all distinguish between data that is inherently sensitive and data that becomes sensitive only through context. Even the most far-reaching state laws, like the California Consumer Privacy Act, treat browsing data as sensitive only when it reveals particular characteristics such as health, religion, or sexual orientation, and the European Union's General Data Protection Regulation (GDPR) likewise requires heightened protection only when browsing data reveals such special categories of information. PADFAA, by contrast, treats all browsing data as sensitive without qualification or threshold.

That overbreadth would make compliance nearly impossible for any company with a digital footprint. Every advertising exchange, analytics provider, and e-commerce platform necessarily processes some form of browsing information. Under PADFAA’s current language, even basic functions like displaying an ad impression or logging website traffic could be construed as handling “personally identifiable sensitive data.” If any downstream data recipient is later found to have a foreign minority investor, that normal commercial activity could trigger severe penalties.

This is not a rational or risk-based approach to national security. By sweeping ordinary web traffic into a national security statute, Congress has blurred the line between legitimate privacy protection and unwarranted regulation of the internet economy. If left unchanged, this definition will generate confusion, deter investment, and invite selective enforcement untethered from any real security concern. Congress should narrow the definition to cover only web browsing data that reveals genuinely sensitive categories of information or is aggregated at a scale that poses a clear intelligence risk.
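For illustration, the context-based standard the analysis favors can be expressed as a simple classification rule rather than a blanket one. This is a minimal sketch, assuming an invented site-category map and an arbitrary bulk threshold; it is not any statute's or regulator's actual test:

```python
# Toy illustration of a context-based sensitivity test for browsing data, in
# the spirit of the CCPA/GDPR approach described above: a record is sensitive
# only when the visited site reveals a protected characteristic, and a dataset
# gets heightened treatment only then or at intelligence-relevant scale.
# The category map and the bulk threshold are invented for this example.

SENSITIVE_CATEGORIES = {"health", "religion", "sexual_orientation"}
BULK_THRESHOLD = 100_000  # hypothetical aggregation cutoff, not a legal figure

SITE_CATEGORIES = {  # invented hostnames and labels
    "example-clinic.com": "health",
    "example-news.com": "news",
    "example-shop.com": "retail",
}


def record_is_sensitive(url_host: str) -> bool:
    """Contextual test: sensitive only if the site reveals a protected trait."""
    return SITE_CATEGORIES.get(url_host) in SENSITIVE_CATEGORIES


def dataset_is_sensitive(hosts: list[str]) -> bool:
    """Heightened treatment if any record is contextually sensitive, or if
    sheer volume crosses the bulk-aggregation threshold."""
    return len(hosts) >= BULK_THRESHOLD or any(record_is_sensitive(h) for h in hosts)


if __name__ == "__main__":
    print(dataset_is_sensitive(["example-news.com", "example-shop.com"]))  # False
    print(dataset_is_sensitive(["example-clinic.com"]))                    # True
```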

Conclusion

PADFAA was enacted with the right intent but the wrong architecture. Congress must adopt five targeted amendments before enforcement begins: adding a knowledge requirement, narrowing “controlled by a foreign adversary” to align with federal designation lists or requiring presidential determinations, conforming the “data broker” definition to state law standards, requiring FTC-DOJ consultation on all enforcement decisions and regulatory guidance, and limiting web browsing data protections to genuinely sensitive categories or intelligence-scale aggregation. Ideally, Congress should transfer primary enforcement to DOJ, which already administers the Bulk Data Regulations with proper national security infrastructure.

In parallel, the FTC and DOJ should issue joint interim guidance before bringing any enforcement actions. That guidance should clarify how “controlled by a foreign adversary” will be interpreted, what evidence will be required before a case is filed, and what safe harbors will apply for companies making good-faith compliance efforts. Joint guidance would reduce uncertainty for legitimate U.S. businesses while signaling to actual bad actors that the government intends to act decisively and coherently. Given that neither agency has issued PADFAA enforcement guidance despite the statute taking effect in June 2024, this coordinated approach is both urgent and overdue.

Protecting Americans’ data from foreign adversaries is too important to leave to a flawed statute. Congress does not need to revisit the core question that animated PADFAA; there is bipartisan agreement that Americans’ sensitive data should not be for sale to hostile governments or their proxies. What Congress does need to do is adopt these targeted statutory corrections before enforcement creates precedent that becomes difficult to reverse. The necessary amendments are straightforward and could be enacted quickly.

Amend first, then enforce. That is the responsible path for both national security and the legitimacy of federal privacy enforcement going forward.

The post Before Enforcing the New Foreign Data Law (PADFAA), Congress Must Fix These Five Things appeared first on Just Security.

The Feedback Loop Between Online Extremism and Acts of Violence https://www.justsecurity.org/124249/feedback-loop-online-extremism-violence/?utm_source=rss&utm_medium=rss&utm_campaign=feedback-loop-online-extremism-violence Mon, 10 Nov 2025 13:52:00 +0000 https://www.justsecurity.org/?p=124249 Each new incident of political violence is followed by a wave of digital celebration, intimidation, and imitation. Responses remain polarized and superficial.

The post The Feedback Loop Between Online Extremism and Acts of Violence appeared first on Just Security.

In the wake of the Charlie Kirk assassination and other recent attacks, the United States faces a resurgence of politically motivated violence that is deeply intertwined with the digital sphere. Digital Aftershocks, our new report at the NYU Stern Center for Business and Human Rights, examines how extremists across the ideological spectrum — far-right, far-left, violent Islamist, and nihilistic violent extremists (NVEs) — exploit acts of violence to recruit followers, justify their ideologies, and sustain propaganda networks.

Our findings are grounded in open-source intelligence collected from March to September 2025, a period marked by deadly attacks in Utah, Minneapolis, and Washington, D.C. While scholars and policymakers have long debated whether online rhetoric “causes” real-world violence, this report looks primarily at the middle of that cycle: how violent incidents are transformed into digital fuel that normalizes aggression and prepares the ground for future attacks.

Cross-Ideological Threat Landscape

We found that extremist networks are increasingly converging around similar tactics and, sometimes, targets. Far-right channels used the stabbing of 17-year-old Austin Metcalf, the killing of 23-year-old Iryna Zarutska, and the assassination of Charlie Kirk to advance narratives of white victimhood and calls for revenge. Far-left networks, dominated during the monitoring period by militant pro-Palestine activism, used similar methods: doxxing, dehumanizing rhetoric, and glorification of attacks such as the shooting outside the Capital Jewish Museum in Washington, D.C.

The analysis also captures the growing threat from NVEs, individuals motivated not by ideology but by misanthropy and the pursuit of viral infamy. These actors blur the line between political violence and performance, celebrating mass shootings regardless of motive and borrowing aesthetic cues from previous attackers. Their actions highlight a new, troubling frontier: violence as content.

Violent Islamist groups, by contrast, maintained a lower but persistent online presence, forced into smaller, decentralized networks on applications such as Rocket.Chat after waves of moderation crackdowns. This disparity reveals an enforcement asymmetry. Foreign Islamist groups face aggressive monitoring, while domestic extremists using similar, if not more explicit, rhetoric often operate with relative impunity.

Cross-Platform Strategy and Adaptation

Across the ideological spectrum, one consistent finding stands out: violent actors use multi-platform strategies to balance reach and security. Telegram has become a central coordination hub, while X serves as the principal amplifier for mainstream visibility. Encrypted or decentralized platforms like Rocket.Chat and SimpleX provide operational cover, while video-sharing platforms such as YouTube or TikTok are exploited for viral reach.

The report documents the practice of “out-linking” — posts that embed links directing users to posts on another platform — to evade moderation and preserve content. This cross-platform strategy ensures that when one account or channel is taken down, the network’s connective tissue remains intact. As long as platforms offer complementary features — some maximizing virality, others privacy or monetization — extremist networks will adapt.
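For readers who want a concrete picture, out-linking is straightforward to express as a detection heuristic. The sketch below is a simplified illustration, not any platform's actual pipeline; the domain list, length cutoff, and function names are assumptions made for the example:

```python
# Simplified sketch of flagging "out-linking" posts: posts whose payload is
# mostly a link to another platform, the evasion pattern the report documents.
# Domain list and heuristics are illustrative only.
import re
from urllib.parse import urlparse

CROSS_PLATFORM_DOMAINS = {
    "t.me", "telegram.me",      # Telegram
    "x.com", "twitter.com",     # X
    "youtube.com", "youtu.be",  # YouTube
    "tiktok.com",               # TikTok
}

URL_RE = re.compile(r"https?://\S+")


def is_outlink_post(text: str, home_domain: str) -> bool:
    """Heuristic: the post links to a *different* platform and carries
    little text beyond the link itself."""
    urls = URL_RE.findall(text)
    stripped = URL_RE.sub("", text).strip()
    for url in urls:
        host = urlparse(url).netloc.lower().removeprefix("www.")
        if host in CROSS_PLATFORM_DOMAINS and host != home_domain:
            return len(stripped) < 80  # nearly link-only post
    return False


if __name__ == "__main__":
    print(is_outlink_post("full video here https://t.me/examplechannel/123", "x.com"))  # True
    print(is_outlink_post("Long commentary with no links at all.", "x.com"))            # False
```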

Threats, Incitement, and the Legal Line 

A central aim of Digital Aftershocks is to bring precision to an often-muddled debate about the legality of online speech. U.S. constitutional doctrine distinguishes between true threats, which are statements expressing a genuine intent to commit violence against a specific target, and incitement, which is speech likely to produce imminent lawless action. Both categories fall outside First Amendment protection. But much of the rhetoric circulating online, while dangerous, remains lawful.

To navigate this complexity, the report draws on international human rights frameworks such as the Rabat Plan of Action, as well as the Dangerous Speech Project, which offers analytical tools for assessing when speech meaningfully increases the risk of violence. The goal is not to criminalize offensive expression but to help policymakers and platforms act consistently and proportionately without crossing into censorship.

Key Recommendations

Digital Aftershocks outlines practical, rights-respecting measures for platforms and policymakers. Among them:

  • Adopt precise definitions of threats and incitement to ensure consistent platform enforcement.
  • Implement privacy-preserving reporting tools to let users flag illegal content and ensure timely review of those reports.
  • Use metadata responsibly, collecting only what is necessary for safety purposes and deleting it after defined retention periods.
  • Mandate transparency and procedural standards requiring platforms to publish detailed moderation and abuse-detection reports.
  • Evaluate extremist and terrorist designation frameworks so enforcement applies consistently across ideologies.
  • Recognize the limits of legal remedies, distinguishing harmful but protected speech from illegal threats or incitement, and clarify protocols for platform-law enforcement cooperation.
  • Support counter-speech and civic resilience, investing in partnerships that promote credible voices and reduce polarization.

A Bipartisan Imperative

The surge of political violence in the United States shows no sign of abating. Each new incident is followed by a wave of digital celebration, intimidation, and imitation. Yet responses remain polarized and often superficial.

The patterns we document cut across ideology and party lines. Violent intimidation online threatens everyone’s safety, regardless of political identity. Our hope is that Digital Aftershocks helps policymakers, platforms, and civil society move beyond reflexive partisanship and toward sensible, bipartisan solutions that safeguard both public safety and freedom of expression.

The post The Feedback Loop Between Online Extremism and Acts of Violence appeared first on Just Security.

How Tech Platforms Allowed Russia Into Moldova: Lessons for the EU and Others https://www.justsecurity.org/123410/tech-platforms-russia-moldova-eu/?utm_source=rss&utm_medium=rss&utm_campaign=tech-platforms-russia-moldova-eu Wed, 29 Oct 2025 12:51:32 +0000 https://www.justsecurity.org/?p=123410 What played out across social media throughout Moldova's recent election exposed how easily disinformation fills the gaps between state regulation and platform indifference. 

The post How Tech Platforms Allowed Russia Into Moldova: Lessons for the EU and Others appeared first on Just Security.

The ballots have been counted, the speeches made, and yet Moldova’s latest parliamentary election left one fight unresolved: the battle for truth online. The Sept. 28 balloting reaffirmed a pro-European majority, with the ruling Party of Action and Solidarity securing the largest share of seats amid record diaspora participation. But what played out across social media before, during, and after the vote exposed something Europe still struggles to confront: how easily disinformation fills the gaps between state regulation and platform indifference.

At its heart, though, Moldova's story – like those elsewhere – is not about censorship or propaganda alone. It's about what happens when no one quite owns the rules of the digital arena.

When parliament revised the Audiovisual Media Services Code in July 2025, the goals sounded simple enough: align national law with European norms, curb disinformation, and promote responsible media. The text expanded the Audiovisual Council’s powers to tackle “false information” and “manipulation.” But it stopped short of explaining what those words actually mean in law. Without clear definitions or consistent standards, enforcement became a guessing game. And even where the law is clear, its reach stops at the country’s borders.

Moldova’s framework only applies to online platforms with a legal presence inside the country, which Meta, Google, and TikTok do not have. Their content moderation runs entirely on internal corporate policy, not Moldovan statute. So while any Moldovan-licensed broadcaster, large or small, can be fined for imbalance under national law, a viral post reaching hundreds of thousands may remain untouched because it falls under “community guidelines” written thousands of kilometers away.

The imbalance leaves regulators frustrated and citizens exposed. When a government cannot act fast enough to stem falsehoods, the temptation grows to regulate more aggressively in search of expedient relief from an identified threat. That's where freedom of speech begins to erode. Moldova's framework was built to protect expression, yet the ambiguity now risks silencing legitimate media out of fear.

Taking Advantage of Ambiguity

That same ambiguity is a gift to those who know how to exploit it. Across Moldova’s digital space, murky financing and foreign interests have found fertile ground.

Investigators have shown that Ilan Shor, an exiled businessman under U.S. sanctions, continued to fund social media advertising for his banned party from abroad. In 2024, researchers traced more than 100 Facebook pages tied to his network. Collectively, they drew hundreds of millions of views and generated real revenue for the platform – apparently more than $200,000. The ads framed protests as spontaneous public uprisings, attacked European integration, and seeded doubt about state institutions. When Meta removed some of them, mirror campaigns quickly reappeared under new names.

By 2025, the same tactics, adopted by others, had evolved into something more professional. An outlet called REST Media flooded TikTok, Telegram, and YouTube with anti-EU narratives, a campaign that cybersecurity researchers later linked not to Shor's network but to Rybar, a Russian influence operation known for re-packaging Kremlin messaging through AI-generated voices and translated scripts.

Promo-LEX, Moldova’s leading election-observer group, identified approximately 500 coordinated accounts promoting nearly identical content during just three days of the campaign. Among the content were videos that accumulated more than 1 million views, often boosted by inauthentic engagement. Each click and share fed an invisible economy where dark money buys reach, and the platforms profit from the traffic.

The ‘Commercialization of Deception’

It’s not hard to see why this matters. When false stories spread faster than fact, and when sanctioned figures can still purchase digital megaphones through intermediaries, the result is not pluralism. It’s the commercialization of deception.

Here lies the real paradox. The debate about online regulation is often framed by governments and tech companies as a fight between freedom and control. In reality, the greater threat to free speech is inauthentic speech, content generated or amplified by fake, automated, or paid accounts that simulate public consensus and distort genuine debate.

If every genuine journalist or voter competes with hundreds of coordinated or even automated accounts pretending to be citizens, the marketplace of ideas stops functioning. Protecting expression now means safeguarding authenticity: ensuring that those speaking are who they say they are, and that influence cannot be quietly purchased by hostile interests.

That will require more from the platforms than after-the-fact press releases. They need regional moderation hubs with local-language staff empowered to respond in hours, not weeks. Political advertising must meet strict transparency standards. Who paid? How much? And through which intermediaries? If the funding trail disappears into shell companies or opaque agencies, the ad should not run.
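In engineering terms, that standard amounts to a verifiable chain-of-payment gate evaluated before an ad is served. A toy sketch of the idea, with an invented data model (real ad systems and registries differ):

```python
# Illustrative sketch of a transparency gate for political ads: the ad runs
# only if every payer in the funding chain resolves to a verified, disclosed
# entity. The data model and fields are invented for this example.
from dataclasses import dataclass


@dataclass
class Payer:
    name: str
    verified_identity: bool          # KYC-style verification completed
    disclosed_beneficial_owner: bool


def funding_chain_transparent(chain: list[Payer]) -> bool:
    """Approve only if no link in the chain is opaque. One unverified
    intermediary or undisclosed beneficial owner fails the whole chain."""
    return bool(chain) and all(
        p.verified_identity and p.disclosed_beneficial_owner for p in chain
    )


if __name__ == "__main__":
    clean = [Payer("Registered Party X", True, True)]
    shell = [Payer("Registered Party X", True, True),
             Payer("Opaque Agency SRL", False, False)]
    print(funding_chain_transparent(clean))  # True: ad may run
    print(funding_chain_transparent(shell))  # False: ad should not run
```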

Moldova’s experience also highlights the limits of the EU’s own reach. The Digital Services Act may demand accountability from major tech firms within the Union, but beyond its borders, countries like Moldova remain vulnerable. Democracies on Europe’s edge are effectively beta-testing the future of digital interference. If they fail, that failure will not stop at their borders.

The Moldovan election ultimately held: the system bent but did not break. Voters navigated manipulation, media bias, and fatigue to make a choice, despite weeks of disinformation aimed at eroding public trust and depressing turnout. But resilience should not be the benchmark for democratic success.

As long as algorithms amplify deceit faster than institutions can counter it, Europe’s smaller democracies will continue to fight uphill. The question is whether platforms will keep treating the region as a low-priority market or whether they will concede that their business models now shape its political destiny and take the accompanying responsibility. In the long run, doing so is not just an ethical choice but a strategic one: sustained instability and state backlash threaten the very access and credibility on which their markets depend.

Freedom of speech is not the freedom to deceive. It is the ability for real citizens to speak and be heard without being drowned out by machinery — and manipulation — designed elsewhere. If platforms keep monetizing manipulation, they are not protecting democracy — they are selling it off, click by click.

The post How Tech Platforms Allowed Russia Into Moldova: Lessons for the EU and Others appeared first on Just Security.

Brazil’s Digital Sovereignty Is Under Attack: How Courts, Platforms, and Constitutional Law Are Redefining Democracy Online https://www.justsecurity.org/118915/brazil-digital-sovereignty-courts-democracy/?utm_source=rss&utm_medium=rss&utm_campaign=brazil-digital-sovereignty-courts-democracy Wed, 13 Aug 2025 12:55:15 +0000 https://www.justsecurity.org/?p=118915 At the heart of Brazil’s approach to digital constitutionalism is a legal framework that treats platform governance as essential to democracy.

The post Brazil’s Digital Sovereignty Is Under Attack: How Courts, Platforms, and Constitutional Law Are Redefining Democracy Online appeared first on Just Security.

On July 30, 2025, U.S. President Donald Trump signed an executive order imposing additional tariffs on Brazilian goods, raising the total rate to 50 percent, with the measure taking effect on August 6. While officially framed as a response to currency manipulation and unfair trade, the deeper message was clear. The tariff signals not just economic retaliation, but a transnational backlash against Brazil’s emerging digital constitutionalism.

At the heart of Brazil’s approach is a legal framework that treats platform governance as essential to democracy. The Supreme Federal Court (STF) has taken a leading role in countering disinformation, political extremism, and digital abuse. In doing so, it is redefining the boundaries of platform responsibility and free expression — not as private matters, but as constitutional questions vital to the health of democracy itself.

From the Marco Civil to Judicial Assertiveness

The groundwork for Brazil’s constitutional approach to digital governance was laid with the passage of the Marco Civil da Internet in 2014, which codified principles like net neutrality, user rights, and transparency in content moderation. In 2018, the Lei Geral de Proteção de Dados (LGPD) established a comprehensive data protection regime, and in 2021, data protection was recognized as a fundamental right through a constitutional amendment.

But legal architecture alone was not sufficient to contend with the turbulent political reality of the late 2010s and early 2020s. The rise of Bolsonarismo (a form of far-right populism marked by anti-establishment rhetoric, institutional confrontation, and nationalist identity politics), the institutional destabilization of public life, and the weaponization of social media platforms forced Brazil’s judiciary into an active role. In 2019, the STF initiated Inquérito 4781 — the “Fake News Inquiry” — to investigate the spread of online attacks against justices and efforts to undermine democratic institutions. This inquiry marked the STF’s first sustained intervention in platform governance and signaled that the judiciary saw the digital sphere as squarely within its constitutional jurisdiction.

Subsequent rulings escalated the Court’s involvement. In 2022, the STF ordered the temporary suspension of Telegram after it failed to share Court-ordered information pertaining to neo-Nazi groups believed responsible for inciting violence. In other cases, the Court adopted similar coercive measures against platforms over electoral misinformation. Taken together, these episodes show a consistent pattern in which the STF uses fines, service blocks, and other enforcement tools to compel compliance with judicial orders in the digital realm, regardless of whether the underlying content concerns violent extremism or disinformation.

The STF as Digital Constitutional Actor

By 2025, Brazil’s Supreme Federal Court (STF) had positioned itself as a central force in platform regulation. In June, it issued a landmark ruling that partially invalidated Article 19 of the Marco Civil da Internet, which had shielded tech platforms from liability unless they failed to act after court orders. The Court redefined this framework: judicial authorization remains necessary for content removal in individual harm cases—such as defamation or privacy violations—but tech platforms must now act proactively against clearly illegal content that threatens democratic order, which the Court defined to include incitement to violence or electoral manipulation.

This shift reframes tech platforms not merely as private service providers, but as essential components of the public sphere, subject to constitutional obligations. It introduces a doctrine of “constitutional immediacy” as a method to counter tech platform inaction in the face of systemic threats. In short, constitutional immediacy refers to the obligation of platforms to act proactively against content that poses systemic threats to democratic order, without waiting for judicial orders. Brazil is now one of the few democracies that legally mandates proactive platform responsibility in matters that are deemed to be of public interest, as defined by the Court.

Justice Alexandre de Moraes of the STF has defended this model as essential to preserving democratic institutions. His rulings cite constitutional protections against hate speech and authoritarian disruption — understood in this context as coordinated efforts to undermine democratic institutions, including incitement to coup attempts, electoral disinformation, and digital campaigns to delegitimize the judiciary. His actions have been controversial – for example, critics argue that his decision to suspend the messaging app Telegram in 2022 for its failure to remove disinformation and appoint a legal representative in Brazil amounted to judicial overreach and censorship. Nevertheless, his actions have received broad judicial backing and withstood constitutional scrutiny.

Many STF interventions were shaped by Jair Bolsonaro’s presidency, during which disinformation networks attacked electoral institutions by spreading false claims about the integrity of the voting system and by circulating conspiracy theories and incitement to unrest — culminating in his disqualification. Bolsonaro was declared ineligible until 2030 by Brazil’s Superior Electoral Court (TSE), which ruled that he had abused political power and misused media channels in ways that endangered the democratic process. His ally, Donald Trump, vocally condemned the decision as a “witch hunt,” signaling political solidarity.

Legislative Backing and Expanding the Framework

The judiciary has not acted alone. Legislative proposals have increasingly moved to reinforce and systematize Brazil’s digital governance model. Brazil’s “Fake News Bill” (PL 2630) remains under debate in the legislature, but its core ambition is clear: to codify platform obligations for transparency, moderation, and accountability. It would require user identity verification, algorithmic transparency, limits on anonymous accounts, and independent oversight mechanisms potentially involving external regulatory bodies or multi-stakeholder entities, rather than being operated solely by the platforms themselves.

In parallel, Brazil’s Congress is considering proposals like PL 4530/2023, which would expand penalties for the misuse of personal data by both public and private entities — including government agencies and corporations — and PL 526/2025, which regulates artificial intelligence by banning mass biometric surveillance and requiring human oversight for high-risk automated systems, such as AI used to generate disinformation, enforce discriminatory profiling, or issue judicial or administrative decisions without human review. These efforts reflect a shared commitment across branches of government to operationalize rights-based governance in the digital sphere.

Underlying these proposals is a constitutional shift: whereas digital platforms were previously treated as private companies governed primarily by market logic and contractual terms, they are now increasingly viewed as actors embedded within public law structures — subject to constitutional norms, democratic oversight, and obligations tied to fundamental rights. Brazil’s National Data Protection Authority (ANPD) has taken on an increasingly prominent role in coordinating this framework, issuing technical guidance, enforcing compliance, and serving as an intermediary between State power and private infrastructure.

Sovereignty, Resistance, and the Global Stakes

Brazil’s legal trajectory has faced resistance — not only from platforms, but from transnational political actors. Elon Musk’s X has publicly refused to comply with Brazilian court orders, claiming censorship. U.S.-based think tanks and political figures aligned with Peter Thiel and JD Vance have portrayed Brazil’s judiciary as authoritarian. Yet these critiques ignore the clear and public legal processes through which Brazil’s digital governance framework has developed.

They also obscure the geopolitical asymmetries at play. Brazil’s legal assertion is a form of sovereignty — an effort to maintain domestic jurisdiction over digital infrastructure that is increasingly dominated by foreign firms. When the United States imposes tariffs citing tech censorship as justification, it leverages economic coercion against a sovereign State’s attempt to enforce its own domestic constitutional order. Also, while decrying Brazil’s attacks on “online free speech,” American authorities now deny visas based on political social media posts, indicating that they, too, have an interest in regulating (at least certain kinds of) speech.

Brazil’s legal response to these dynamics is not without controversy. Unlike the opaque policy decisions made by corporate content moderators, Brazil’s system is legible, justiciable, and subject to judicial review. It places questions of content, data, and algorithmic power into a democratic process — even if that process is slow, flawed, and intensely contested. Critics, however, have raised concerns about risks of over-compliance, particularly among foreign tech companies with limited fluency in Brazilian constitutional law and high sensitivity to liability exposure.

There is apprehension that such actors may adopt overly restrictive moderation to avoid penalties under broadly defined categories. Others warn of potential bias or censorship stemming not from the law itself, but from how platforms interpret and apply legal expectations without adequate local grounding. These risks are real, but Brazil’s framework incorporates institutional safeguards that mitigate them: legal challenges to takedown requests can be adjudicated by the judiciary, including the Supreme Federal Court, which has emphasized proportionality, due process, and fundamental rights in its digital jurisprudence. Indeed, judicial review has already shaped the evolving boundaries of content regulation — for example, by reaffirming that prior judicial authorization remains required in individual harm cases.

There are, nonetheless, ongoing debates about jurisdictional reach, especially regarding foreign companies with no legal representation in Brazil, and about whether certain emergency judicial orders have bypassed procedural guarantees. These tensions are part of Brazil’s constitutional experiment: rather than insulating digital power from public institutions, it seeks to embed platform governance within the same structures of rights, accountability, and contestation that define democratic legitimacy.

Constitutionalizing the Digital Public Sphere — and Shaping the Global South’s Digital Future

Despite critique, what Brazil is attempting is neither censorship nor deregulation, but a constitutional approach to digital governance — anchoring platform accountability in human rights, democratic oversight, and legal legitimacy.

For many in the Global South, Brazil offers a rare example of digital self-determination — a break from imported governance norms and an assertion of sovereignty over online infrastructure. By grounding internet policy in constitutional law and public process, Brazil is helping reshape the normative debate.

Though contested by platforms and critics abroad — and actively debated through democratic and judicial processes at home — Brazil’s model offers a transparent, accountable response to digital harms. As AI manipulation and platform impunity escalate, its experience may chart a path for other democracies navigating the digital age under geopolitical and institutional pressure.

The post Brazil’s Digital Sovereignty Is Under Attack: How Courts, Platforms, and Constitutional Law Are Redefining Democracy Online appeared first on Just Security.

It’s Time to Designate The Base as an FTO https://www.justsecurity.org/118427/designate-base-fto/?utm_source=rss&utm_medium=rss&utm_campaign=designate-base-fto Tue, 05 Aug 2025 13:05:11 +0000 https://www.justsecurity.org/?p=118427 With increasing violent extremism and waning DOJ interest in curbing far-right extremism, a failure to address the threats posed by The Base could prove fatal.

The post It’s Time to Designate The Base as an FTO appeared first on Just Security.

The Base – a neo-Nazi accelerationist network – recently claimed responsibility for the assassination of Colonel Ivan Voronych, a Ukrainian intelligence officer, in Kyiv. The attack should serve as a wake-up call for U.S. policymakers. For years, The Base has operated in a legal gray zone, recruiting American extremists, disseminating propaganda, and promoting racial war. While the group has been designated as a terrorist entity by four member nations of the Five Eyes alliance and the European Union, the U.S. government has yet to follow suit. That inaction is no longer tenable.

The Kyiv assassination marks a dangerous escalation, with an extremist group engaging in international political violence. This is not an isolated act but a continuation of The Base’s ideology of “accelerationism,” which calls for collapsing governments through terror and chaos.

By striking a foreign intelligence target during wartime, the group has demonstrated both the intent and operational capability to influence a global conflict by acting on behalf of a foreign adversary – characteristics that are similar to those of foreign terrorist organizations (FTOs) already designated by the State Department, including Hamas and Hezbollah. Failure to designate The Base as an FTO has the potential to undermine national security on multiple fronts, including by stifling law enforcement’s ability to investigate its members and recruiting practices, restricting intelligence-sharing with allies, and signaling to extremists that the U.S. government is unwilling to take meaningful action against transnational white supremacist terrorism.

More importantly, in an era when domestic and international violent extremism increasingly overlap, and the Department of Justice’s interest in curbing far-right extremism continues to wane, a failure to address the threats posed by The Base could prove fatal.

American Roots

The Base was founded by a U.S.-born former Pentagon intelligence contractor and now Russian citizen, Rinaldo Nazzaro, who claimed to have served alongside special operations forces (SOF) in overseas combat zones. Drawing on his experience in private security and the intelligence gold rush of the post-9/11 years, Nazzaro sought to build a network of like-minded militants that could outlast law enforcement crackdowns by embracing a decentralized structure.

Nazzaro cultivated a significant online following of fellow white supremacists under the aliases “Roman Wolf” and “Norman Spear.” In June 2018, he officially launched The Base on the podcast of longtime white supremacist Billy Roper – an intentional move to appeal to hardcore white supremacists already steeped in accelerationist thinking. At the time, though, Nazzaro’s true identity remained unknown.

From its inception, The Base positioned itself as more than just an online movement confined to encrypted chat rooms. It promoted the strategy of leaderless resistance, an organizational model pioneered by far-right extremist Louis Beam, who encouraged autonomous cells and lone actors to wage violence without direct orders from central leadership. This structure makes attribution difficult and allows the organization to persist even if key figures are arrested.

Importantly, though, The Base was never purely domestic. From the outset, it sought global reach, recruiting in North America, Europe, and Australia. Nazzaro’s relocation to Russia in 2018 further underscores the group’s foreign nexus and potential facilitation by hostile state actors. Today, its transnational orientation is undeniable, manifested in operations that extend beyond digital spaces into kinetic action abroad.

A Pattern of Escalation 

Before its most recent attack in Kyiv, The Base was already on a troubling trajectory. In 2018, Nazzaro bought several acres of land in rural Washington State to secure “off-grid” training sites for his followers to prepare for an impending race war. After local antifascists exposed Nazzaro’s plans, other members organized paramilitary camps across the country in Georgia and Michigan, training with firearms and explosives. A slew of arrests followed, the first of which came in November 2019, when then-18-year-old Richard Tobin was charged with federal hate crimes for vandalizing multiple synagogues.

In August 2019, Ryan Thorpe, a junior reporter at the Winnipeg Free Press, made headlines when he successfully infiltrated The Base on one of his first assignments, eventually exposing Canadian Armed Forces reservist Patrik Jordan Mathews as a member. Less than six months later, in January 2020, retired FBI agent Scott Payne’s seven-month undercover stint in The Base led to the arrest of three of its members for plotting to murder an antifascist couple in Georgia. Simultaneously, the crackdown in Georgia helped authorities disrupt a plot by several members of a Maryland-based cell just before they could carry out a mass shooting at a Virginia gun rights rally in hopes of sparking an armed civil conflict – a plot that included Mathews, who fled Canada just days after his exposure. Barely a week later, Nazzaro’s true identity was revealed by Jason Wilson in an article for The Guardian. Notably, rumors of Nazzaro’s alleged relationship with the Kremlin began to swirl among members of The Base around this time, citing his fluent Russian language skills and frequent travel between the United States, Russia, and other nations.

Despite the turmoil inflicted by the arrests and investigative journalism, The Base deepened its ties with the global white supremacist ecosystem, forging relationships with other violent accelerationist groups like the Atomwaffen Division (AWD) and the Nordic Resistance Movement (NRM). Members also sought combat experience abroad, with some even venturing to Ukraine to gain training from the Azov Battalion – a pattern all too reminiscent of jihadists flocking to war zones across the Middle East.

Despite Nazzaro’s public denial of any involvement, the Kyiv incident is just the latest development in The Base’s arc. By assassinating a foreign intelligence officer in wartime, The Base has adopted the tactics of other internationally recognized terrorist organizations – namely, carrying out targeted killings to influence geopolitical outcomes. This escalatory act proves intent and capability on a scale that reaches far beyond the borders of the United States, significantly weakening any argument against an FTO designation.

The Base Meets the FTO Criteria 

Under Section 219 of the Immigration and Nationality Act, the U.S. Secretary of State may designate an entity as an FTO if it meets three conditions:

  1. “It must be a foreign organization”;
  2. The organization “must engage in terrorist activity,” or “retain the capability and intent to engage in terrorist activity or terrorism”; and
  3. The “organization’s terrorist activity or terrorism must threaten the security of U.S. nationals or the national security (national defense, foreign relations, or the economic interests) of the United States.”

The Base meets all three of these criteria.

First, although The Base emerged in the United States, its de facto leader, Rinaldo Nazzaro, lives in Russia and is a Russian citizen. Moreover, Nazzaro operates the organization from there, directing recruitment and propaganda distribution through Russian digital platforms such as VK and a Mail.ru email address. Precedent also exists for this standard, as the U.S. government previously designated the Russian Imperial Movement (RIM) and its leadership – who have trained foreign militants – as Specially Designated Global Terrorists (SDGTs) in 2020.

Second, the assassination in Kyiv qualifies as terrorist activity – premeditated violence intended to influence political events. Even if one were to argue that Voronych was not a civilian, and therefore his assassination was not strictly an act of terrorism, The Base has already conspired to attack critical civilian infrastructure in Ukraine and carry out mass shootings inside the United States. Therefore, The Base has clearly demonstrated both the willingness and capacity to carry out acts of terror.

Finally, The Base is a serious threat to U.S. nationals and security, recently attempting to resurrect its previous activities within the United States. Its ideology explicitly advocates for the destruction of the U.S. government. Furthermore, it operates in foreign conflict zones in which U.S. personnel are also involved, creating a risk that it will strengthen hostile actors, although the scope of that assistance is not yet clear.

By these metrics, The Base meets the threshold for designation. Failure to apply the same standards to white supremacist groups as are regularly applied to jihadist organizations or other ideologically motivated threat actors creates a dangerous double standard that extremists will continue to exploit.

Designating The Base is not just a symbolic matter, but rather a force multiplier that could enhance the range of other punitive measures available to the U.S. government. With an FTO designation, the U.S. Treasury Department has the power to freeze and seize financial assets belonging to the group, as well as enforce existing legal statutes against any individuals providing it with material support, such as fundraising and logistical assistance. Lastly, a designation would help foster greater intelligence cooperation and sharing among allies – an invaluable tool for disrupting the group’s operations at a time when militant far-right networks are increasingly transnational in nature.

White Supremacist Terrorism is Global and Evolving

The assassination in Kyiv is more than a shocking headline or an outlier – it is a harbinger of attacks to come. Global white supremacist terrorism is evolving, and U.S. policy must adapt with it. Designating The Base would signal to the rest of the world that the U.S. government is prepared to treat racially and ethnically motivated violent extremist (REMVE) groups with the same urgency as jihadist threats. A designation would affirm that white supremacist terrorism receives the same legal treatment as jihadist organizations such as ISIS or al-Qaeda. That consistency is vital at a time when REMVE is a quickly growing category of terrorism worldwide.

Moreover, a designation could undermine adversary narratives. Russia has repeatedly leveraged extremist movements as instruments of destabilization. Allowing a U.S.-founded group with Russian ties to operate unchecked erodes American credibility and emboldens future proxy activity.

The statutory criteria are clear and the precedent is established. What remains to be seen is if there is the political will necessary to acknowledge that white supremacist terrorism is not just an American problem – it is a growing transnational threat. If Washington cannot muster the resolve to designate The Base after an overseas assassination, then when? The choice is stark: lead the fight against this threat now, or watch it metastasize beyond our ability to contain in the future.

The post It’s Time to Designate The Base as an FTO appeared first on Just Security.

Vague by Design? The Oversight Board, Meta’s DOI Policy, and the Kolovrat Symbol Decision https://www.justsecurity.org/117886/oversight-board-meta-doi-kolovrat/?utm_source=rss&utm_medium=rss&utm_campaign=oversight-board-meta-doi-kolovrat Tue, 29 Jul 2025 12:51:39 +0000 https://www.justsecurity.org/?p=117886 The Oversight Board's Kolovrat decision reveals more than a dispute over a symbol—it lays bare the deep fault lines in Meta’s content moderation architecture.

The post Vague by Design? The Oversight Board, Meta’s DOI Policy, and the Kolovrat Symbol Decision appeared first on Just Security.

The Oversight Board – an independent body established by Meta to review and advise on difficult content moderation decisions – recently issued a decision in the Kolovrat symbol case that illuminates critical tensions in content moderation: balancing hate symbolism, cultural identity, and interpretive ambiguity. Meta removed two Instagram posts and retained a third post under its Dangerous Organizations and Individuals (DOI) policy. The two posts removed for hate glorification were one featuring the Kolovrat symbol with a “Slavic Army” caption urging Slavic people to “wake up” and another using #DefendEurope and M8 rifle symbols. The third post was retained as non-violating. While the board ultimately upheld the removals – a defensible position given the specific context – several critical gaps in its reasoning and scope remain unaddressed.

First, I highlight the board’s finding (with which I agree) of a fundamental failure of transparency and foreseeability in Meta’s enforcement of its DOI policy. Second, while the board rightly focused on this failure, its analysis stopped short of addressing the structural incoherence and internal contradictions within Meta’s publicly available DOI policy – issues that generate user uncertainty regardless of internal guidelines. Third, despite acknowledging in its call for comments that extremists routinely evade moderation through coded tactics, the board failed to meaningfully engage with these strategies or press Meta to adopt stronger, forward-looking due diligence measures.

This piece focuses less on the merits of the case and more on the underlying structural flaws highlighted above.

What the Board Got Right

The Oversight Board scrutinized Meta’s application of its DOI policy in the Kolovrat symbol case, specifically challenging the company’s reliance on the term “reference” to justify removal. Surprisingly, Meta did not assert that the posts explicitly glorified white nationalism (a hateful ideology). Instead, it removed the content by classifying it as a “reference” to that ideology under its internal enforcement rules.

To aid clarity, I have reproduced below all relevant excerpts from Meta’s DOI policy that address how content related to the glorification of hateful ideologies is to be treated. For ease of reference, I have classified these excerpts as Rule 1 through Rule 4.

Rule 1 (p. 2): “We also remove content that Glorifies, Supports or Represents ideologies that promote hate, such as Nazism and white supremacy. We remove unclear references to these designated events or ideologies.”

Rule 2 (p. 5): “For Tier 1 and designated events, we may also remove unclear or contextless references if the user's intent was not clearly indicated. This includes unclear humor, captionless or positive references that do not glorify the designated entity's violence or hate.”

Rule 3 (p. 7): “We also do not allow Glorification, Support or Representation of designated hateful ideologies, as well as unclear references to them.”

Rule 4 (p. 8): “In these cases, we designate the ideology itself and remove content that supports this ideology from our platform. These ideologies include:
  • Nazism
  • White supremacy
  • White nationalism
  • White separatism
We remove explicit Glorification, Support and Representation of these ideologies, and remove individuals and organisations that ascribe to one or more of these hateful ideologies.”

[Note: The page numbers referenced correspond to the pagination of the PDF print version of Meta’s DOI policy (as of 07-28-2025), given the online version is not paginated.]

Meta cited its public DOI policy (Rule 2) – which permits removal of “unclear or contextless references if the user’s intent was not clearly indicated” and explicitly includes “unclear humor, captionless or positive references that do not glorify” – as justification. However, in response to this, the board revealed a critical flaw: Meta’s internal definition of “reference” (disclosed to the board) was broader than the examples given in the DOI policy. Internally, “reference” encompassed five types of standalone violations: 1) Positive references (even without ambiguity); 2) Incidental depictions (e.g., accidental symbol appearances); 3) Captionless images; 4) Unclear satire/humor; and 5) Symbols.

This hidden framework created two major problems. First, it effectively rewrote the scope of enforcement. While the public-facing policy (Rule 2) frames removal as discretionary and limited to ambiguous content, Meta’s internal definitions allowed removal even when meaning was clear and non-violative. Second, it left users without fair notice. Subcategories such as “incidental depictions” were never disclosed, meaning that users posting cultural or historical content had no way to anticipate enforcement. A user sharing a historical photo with a background Kolovrat symbol, for instance, could find their post removed without any indication of violation in the publicly accessible policy.
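The gap the board identified can be stated almost mechanically. The toy model below paraphrases the five internal subcategories from the decision and contrasts them with what the public text disclosed; the encoding and set names are mine, not Meta's:

```python
# Toy model of the scope gap: Meta's internal "reference" taxonomy versus the
# examples a user could infer from the public DOI text (Rule 2). Category
# names paraphrase the board's decision; the sets and code are illustrative.

INTERNAL_REFERENCE_TYPES = {
    "positive_reference",    # removable internally even when meaning is clear
    "incidental_depiction",  # e.g., a symbol appearing in a photo's background
    "captionless_image",
    "unclear_satire",
    "symbol",                # a standalone symbol as its own violation
}

# Rule 2's public examples: unclear humor, captionless or non-glorifying
# positive references (all framed as *unclear* references).
PUBLIC_NOTICE_TYPES = {"unclear_satire", "captionless_image", "positive_reference"}


def undisclosed_enforcement_categories() -> set[str]:
    """Categories removable internally that the public text never mentions."""
    return INTERNAL_REFERENCE_TYPES - PUBLIC_NOTICE_TYPES


if __name__ == "__main__":
    print(sorted(undisclosed_enforcement_categories()))
    # ['incidental_depiction', 'symbol']
    # Note the set model understates the gap: internally, "positive_reference"
    # also reaches clear, unambiguous content, which Rule 2's text does not.
```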

The board upheld the removal and rightly concluded that this undisclosed expansion violated Article 19 of the International Covenant on Civil and Political Rights (ICCPR), which protects the right to freedom of expression. Under the “legality” requirement of Article 19(3), any restriction on expression must be provided by law and be sufficiently clear and foreseeable to those subject to it. The board found that Meta’s undisclosed and overly broad internal definition of “reference” failed this standard. It ordered Meta to publicly define its internal use of the term and to clarify subcategories such as “positive references” and “incidental depictions.” Although these transparency measures were necessary, they were not sufficient for the reasons detailed below.

A Policy at War with Itself

A close reading of the DOI policy reveals a structural incoherence in how it addresses content linked to hateful ideologies. The four primary enforcement rules – Rules 1 through 4 – each speak to the types of content that may or must be removed. However, when considered together, they exhibit a fragmented logic that compromises the policy’s internal consistency and the predictability of enforcement.

Rule 1 asserts that Meta “removes unclear references” to designated events or ideologies. This is a categorical formulation that imposes a mandatory obligation to take down ambiguous content, regardless of whether it glorifies or supports hateful views.

Rule 2, by contrast, introduces a discretionary standard: Meta “may remove unclear or contextless references” where the user’s intent is not clearly indicated, including references that do not glorify violence or hate. This discretionary formulation fundamentally contradicts Rule 1’s mandatory tone.

If all unclear references must be removed (Rule 1), it is logically incoherent for Meta to also retain the discretion to remove only some (Rule 2), particularly where the content is explicitly non-glorifying. The rules pull in opposite directions: Rule 1 mandates the removal of ambiguous content, while Rule 2 merely permits it, and only selectively. Moreover, the inclusion in Rule 2 of examples like “unclear humor” or “positive references that do not glorify” widens the enforcement net even further, capturing content that is legally and morally distinguishable from the promotion of hate.

Rule 3 reiterates the language of prohibition: Meta “does not allow glorification, support or representation” of designated hateful ideologies, as well as “unclear references to them.” It mirrors Rule 1’s broad sweep by including both explicit and ambiguous content, again in categorical terms.

But Rule 4 marks a shift. It limits removal to only “explicit glorification, support and representation,” making no mention of unclear or ambiguous references. In doing so, it appears to elevate the threshold for enforcement, suggesting that unless a user unambiguously promotes or endorses a hateful ideology, their content is not removable under Rule 4.

Taken together, these rules generate a field of contradiction. Some provisions treat unclear references as requiring removal (Rules 1 and 3), another treats them as optionally removable (Rule 2), and the final rule does not include them at all (Rule 4).

Theoretically, one might infer a baseline principle: that any unclear or clear reference glorifying a designated hateful ideology is prohibited. However, in practice, the policy fragments this principle across four separate excerpts, each articulating different thresholds and enforcement triggers. This patchwork of obligations and permissions creates a regime where the same content may be interpreted as either mandatory for removal, permissible for retention, or outside of scope entirely – depending solely on which rule is applied.
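To make the contradiction concrete, the sketch below models the four rules as simple predicates applied to the same post. This is a hypothetical illustration written for this analysis: the predicates paraphrase the quoted policy text, and none of this is Meta’s actual enforcement code.

```python
# Hypothetical model of the four DOI rules discussed above. The
# predicates paraphrase the quoted policy text; this is not Meta's
# actual enforcement logic.

from dataclasses import dataclass

@dataclass
class Post:
    references_ideology: bool  # mentions a designated ideology at all
    is_unclear: bool           # intent or context is ambiguous
    glorifies: bool            # explicit glorification, support, representation

def rule_1(p: Post) -> str:
    # "We remove unclear references": mandatory removal of ambiguity.
    return "remove" if p.references_ideology and p.is_unclear else "keep"

def rule_2(p: Post) -> str:
    # "We may also remove unclear or contextless references": discretionary.
    if p.references_ideology and p.is_unclear:
        return "remove or keep, at moderator discretion"
    return "keep"

def rule_3(p: Post) -> str:
    # Mirrors Rule 1: disallows glorification *and* unclear references.
    if p.glorifies or (p.references_ideology and p.is_unclear):
        return "remove"
    return "keep"

def rule_4(p: Post) -> str:
    # Covers only *explicit* glorification; unclear references are not
    # mentioned, so they fall outside its scope.
    return "remove" if p.glorifies else "out of scope"

# The same ambiguous, non-glorifying post gets three different answers,
# depending solely on which rule a moderator applies:
post = Post(references_ideology=True, is_unclear=True, glorifies=False)
for rule in (rule_1, rule_2, rule_3, rule_4):
    print(rule.__name__, "->", rule(post))
# rule_1 -> remove
# rule_2 -> remove or keep, at moderator discretion
# rule_3 -> remove
# rule_4 -> out of scope
```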

For users, this multiplicity of standards could generate uncertainty: they are left unable to determine with any reliability what the actual enforcement criteria are. The net effect is a policy framework that fails the foreseeability requirement under the principle of legality, and that undermines user trust by obscuring where the lines of acceptable expression truly lie.

A Failure to Address Coded Extremism

The Oversight Board asked a key question in its “call for comments”: how do neo-Nazi and other extremist actors disguise their content to slip past moderation on social media? The board acknowledged that understanding how extremists disguise content was central to tackling online hate. In its final decision, it found that the Kolovrat post contained elements of white nationalism, citing references like “Slavic Army” and calls for people to “wake up” as glorifying this ideology. Yet despite recognizing the presence of coded hate, the ruling did not seriously examine the broader evasion techniques that enable white nationalist hate content to persist. Its focus remained limited to the specific posts and Meta’s enforcement framework, while the wider tactics of concealment were addressed only in passing.

The board’s ruling noted how extremists combine seemingly neutral symbols with subtle signals – like using the Odal rune (an ancient symbol appropriated by neo-Nazis), hashtags such as #DefendEurope (commonly used by anti-immigration and far-right groups), or styling posts in Fraktur font (the Gothic script associated with Nazi-era propaganda). These nods showed the board understood that extremist content rarely announces itself outright. Instead, it hides in plain sight, coded in ways that resonate with in-groups but pass under the radar of moderation tools. As the India-based Centre for Advanced Studies in Cyber Law and Artificial Intelligence emphasized in its submission to the board, unless platforms actively unpack the mechanics of such disguise – how extremists shape ambiguity to avoid detection – moderation will inevitably fall behind.

One of the most pervasive tactics in this arsenal is coded language, deliberately crafted to slip past moderation systems. An Al Jazeera investigation in 2020 revealed that as many as 120 Facebook pages espousing white supremacist ideology had collectively amassed over 800,000 likes, with some operating openly for more than a decade. One such page belonged to M8l8th, a Ukrainian black metal act. The use of “88” in its name – commonly understood in neo-Nazi circles as code for “Heil Hitler,” with H being the eighth letter of the alphabet – is emblematic of how extremist content conceals itself through numerical symbolism.

Another commonly highlighted method is the use of homoglyphs and similar character substitutions – visually similar characters swapped in to bypass keyword filters. For example, if a platform were to block the term “far-right,” bad actors may re-spell it as “f4r-r!ght” to evade detection. These tactics are neither trivial nor accidental. They represent a calculated effort to craft content that appears innocuous to algorithms and casual users but carries unmistakable ideological signals for those in the know.
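As a rough illustration of why such substitutions defeat naive keyword filters – and why moderation pipelines typically normalize text before matching – consider the minimal Python sketch below. The substitution map and the blocked term are invented for illustration; real systems rely on far larger confusables tables (such as Unicode’s confusables data) and fuzzier matching.

```python
# Minimal sketch of character-substitution normalization before keyword
# matching. The substitution map and blocklist are illustrative
# assumptions, not any platform's actual filter.

# Fold common visually similar stand-ins back to canonical letters.
SUBSTITUTIONS = str.maketrans({
    "4": "a", "@": "a",
    "3": "e",
    "1": "i", "!": "i",
    "0": "o",
    "5": "s", "$": "s",
    "7": "t",
})

BLOCKED_TERMS = {"far-right"}  # hypothetical blocklist for illustration

def normalize(text: str) -> str:
    """Lowercase the text and fold common character substitutions."""
    return text.lower().translate(SUBSTITUTIONS)

def contains_blocked_term(text: str) -> bool:
    """Match against the blocklist after normalization, so that
    'f4r-r!ght' and 'far-right' are treated identically."""
    normalized = normalize(text)
    return any(term in normalized for term in BLOCKED_TERMS)

print(contains_blocked_term("f4r-r!ght"))  # True: folds to 'far-right'
print(contains_blocked_term("far right"))  # False: a space instead of a
                                           # hyphen already evades this filter
```

Even this toy example exposes the arms-race dynamic: every normalization rule invites a new respelling, which is why enforcement grounded purely in surface strings inevitably lags behind determined evasion.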

This is precisely where the Oversight Board’s analytical shortfall collides with Meta’s human rights responsibilities under the U.N. Guiding Principles on Business and Human Rights. Principle 17 requires companies like Meta to carry out due diligence that not only responds to harm but anticipates it, especially when that harm arises from the systemic misuse of platforms to spread hate through evasive tactics.

By failing to interrogate these structural vulnerabilities, the board missed a vital chance to demand stronger, future-proof enforcement standards. If the board continues to overlook how extremists exploit policy gaps, it risks becoming not a safeguard against harm, but a pillar of the very system that allows such harm to endure.

Conclusion

The Kolovrat decision reveals more than a dispute over a symbol; it lays bare the deep fault lines in Meta’s content moderation architecture. While the Oversight Board exposed Meta’s opaque enforcement, it left too much untouched: a contradictory policy and a platform still vulnerable to hate disguised in plain sight. In the end, the board held up a mirror to Meta’s practices but failed to turn on the light. Until the policy’s structural incoherence is resolved and evasive hate is addressed with foresight, platform governance will remain a house built on shifting sand – clear only to those who know how to read between the rules.

The post Vague by Design? The Oversight Board, Meta’s DOI Policy, and the Kolovrat Symbol Decision appeared first on Just Security.

The Taliban’s Slow Dismantling of Afghan Media https://www.justsecurity.org/116034/talibans-slow-dismantling-afghan-media/?utm_source=rss&utm_medium=rss&utm_campaign=talibans-slow-dismantling-afghan-media Thu, 10 Jul 2025 13:05:42 +0000 https://www.justsecurity.org/?p=116034 The slow death of Afghan media is a tragedy not just for the many brave Afghan journalists, but for the country as a whole.

In November 2024, people in Afghanistan’s northern Takhar province were met with a surprise when they tuned in to the nightly television news: a blank screen. Instead of an anchor sitting at the usual desk, all they saw was Mah-e-Now channel’s logo superimposed on a blue background. A voice read out the headlines, but there was no actual footage of the events.

This was no technical issue. Rather, Mah-e-Now had been forced to comply with a new “morality law” imposed by the Taliban, barring broadcasters from showing images of living beings under the group’s harsh interpretation of sharia law. Soon, similar reports emerged from other parts of the country – from Badghis, Wardak and Kandahar. In Helmand, private TV stations were reportedly forced off the air altogether.

It was the latest blow against Afghanistan’s embattled media landscape under the Taliban. Once one of the country’s few unqualified success stories, the Afghan media sector is now struggling for survival, its journalists and outlets battered by repressive policies, a wave of arrests, and dried-up foreign funding.

How the Taliban Control Afghan Media

After seizing power in August 2021, the Taliban moved swiftly to impose stifling control over Afghan society. The former constitution and legal framework were both suspended pending a “review” of their compatibility with sharia law. In their place, the Taliban have gradually installed a complex web of new laws and policies, many dictated directly by the group’s elusive supreme leader, Hibatullah Akhundzadah.

The media sector is no exception. Just a month after the fall of Kabul, the new regime issued an 11-point “guidance note” to the media, banning coverage that goes against Islam and Afghan culture, or that is “insulting” to public figures. Since then, the Taliban have issued more than 20 other regulations on the media, ranging from bans on non-religious music to extensive pre-broadcast censorship.

It is no surprise that female journalists have borne the brunt of these restrictions. Taliban-imposed regulations mandate that women cover their faces when appearing on camera, work separately from men in newsrooms, and refrain from sharing a screen with male presenters. In some provinces, women’s voices are even banned from radio broadcasts, while female reporters are often shut out from official Taliban press conferences. Other Taliban policies – such as a ban on traveling without a mahram (male chaperone) – have made field work for female journalists essentially impossible.

Beyond restrictive policies on the media, the Taliban have also threatened, detained and tortured scores of individual journalists for critical coverage. During the first three years of Taliban rule, the United Nations recorded more than 250 arbitrary arrests of media workers and more than 130 cases of torture and ill-treatment. These numbers are likely just the tip of the iceberg.

The crackdown has created a widespread climate of fear and self-censorship among journalists, essentially eliminating independent reporting. On Reporters Without Borders’ World Press Freedom Index, Afghanistan has plunged from 118th place (out of 180) in 2018 to 175th today.

A Dramatic Reversal of Progress 

The Taliban’s decimation of the Afghan media scene is all the more tragic given the very real achievements of Afghan journalists over the past decades. After the 2001 U.S.-led toppling of the Taliban, Afghan media grew rapidly. By 2021, there were close to 12,000 journalists in the country, working for more than 600 outlets. News agencies like Pajhwok provided quality journalism and reached parts of the country inaccessible to international outlets. The private Tolo TV network was a commercial success watched by millions, widely known for its mix of hard-hitting investigative work, vibrant talk shows, and entertainment like the singing competition Afghan Star. Afghan journalists often did heroic work under extremely difficult circumstances, facing threats and attacks not just from armed groups like the Taliban and the Islamic State, but also from government officials.

Over nearly four years of Taliban rule, this progress has been dramatically reversed. Media watchdogs estimate that at least half of all outlets in Afghanistan have closed. Many struggled financially or could not carry out their work under the new restrictions, while others were simply shut down by the Taliban. The media workforce has also been decimated, as many journalists have fled the country rather than risk their lives under the new rules. Women reporters have been hit particularly hard: a reported 84 percent of female journalists lost their jobs within two months of the Taliban takeover.

It is not just draconian policies that have forced media houses to close, but also a lack of funding. Many outlets were dependent on foreign aid money that simply doesn’t exist anymore, to a large extent because donors have been loath to channel funds to Taliban-controlled Afghanistan. Humanitarian funding nosedived from some $3.8 billion in 2022 to $1.9 billion in 2023, while the Trump administration’s drastic aid cuts announced this year are likely to have a further devastating impact. Although the U.N. reported a small uptick in the number of outlets and journalists in 2023, this is still a much-decimated media scene.

Inside the country, the outlets that are still active face ever-tightening pre-publication censorship. The Taliban require content to be approved before it is aired. In September 2024, the Taliban even told TV stations that they could only invite talk show guests from a list of 68 pre-approved “experts.” The group’s policies are also enforced informally, with journalists reporting widespread threats and harassment by Taliban officials in response to even mildly critical reporting. International media still maintain some presence in the country, but it has been dramatically reduced since the height of the Afghan conflict.

Instead, much of the coverage of Afghanistan now comes from “media-in-exile,” including outlets like Afghanistan International, Amu TV and Etilaatroz, or the more recently launched Rights Monitor. Some are newly established, while others were previously based in Afghanistan and have been forced to move operations abroad since the Taliban takeover. These outlets valiantly try to fill the information gap in Afghanistan while keeping international focus on the Taliban’s excesses. The extreme risks of news gathering in the country, however, often make detailed reporting a challenge. There are also recent signs that the Trump administration’s aid cuts have already forced some of them – including Afghanistan International – to scale back their reporting, while Voice of America’s Dari and Pashto services have been gutted.

The Taliban have targeted these outlets in-country, labeling them “illegal” while blocking websites and jamming broadcast signals. Several journalists working for exiled media have also been targeted through harassment, arrest and torture. In August 2023, for example, Taliban intelligence carried out apparently coordinated arrests of at least seven journalists accused of working for such “diaspora” media.

In fact, “cooperating with foreigners” has become one of the most common charges against media outlets and individual reporters in Afghanistan. In early December 2024, officers from the Taliban’s intelligence and so-called morality police stormed into the offices of Arezo TV, a private Kabul-based broadcaster. They seized mobile phones, computers and other equipment, hauling the station manager and one of its anchors to the notorious Pul-i-Charkhi prison, where they still remain. Their “crimes” were to collaborate with exiled media and to air programs – apparently Indian soap operas – that “go against Islamic values.”

The Taliban’s Propaganda Has Evolved

Despite the crackdown, the Taliban portray themselves as media friendly, at least to international audiences. Unlike during their last stint in power, “Taliban 2.0” have refrained from destroying TV sets and video tapes, and instead embraced visual media in their own propaganda efforts. Authorities have even announced tax breaks for media outlets, and last year held events to – seemingly without irony – mark World Press Freedom Day. The Taliban often deflect criticism of the group’s harsh treatment of independent media, saying outlets are free to operate as long as they are “in line with related regulations and principles.”

In 2023, I experienced some of this myself when I was based in Afghanistan with a humanitarian organization. I asked to meet an official with the Taliban’s communications ministry to discuss a recent wave of arrests targeting journalists in the east of the country. When I arrived, however, the official was much more interested in discussing international funding for a new press club he wanted to create – a space for journalists to meet and mingle. Taken aback, I tried to explain that there were probably other fundamentals of press freedom we had to discuss first.

The Slow Death of Afghan Media is a Tragedy

There are few signs that the situation for journalists in Afghanistan will improve under the Taliban. Despite intense international pressure and rumors of internal splits, the group has only doubled down on its authoritarian approach to governing – whether on women’s rights or media freedoms. Just in the last year, the Taliban banned women from attending medical training institutes and renewed threats against civil society organizations that still employ female staff.

In August 2024, the Taliban’s supreme leader also announced a sweepingly repressive “morality law,” known as the “Law on the Prevention of Vice and Promotion of Virtue” (or “the PVPV Law”). The law widens already existing restrictions and introduces a plethora of new ones – including restrictions on women praying in public, strict dress and grooming codes for both men and women, and – as the people in Takhar continue to experience – bans on images of living beings. There are widespread reports of “morality police” officials beating and arresting those who fail to comply.

As always in the Taliban’s Afghanistan, however, policies are implemented gradually and inconsistently. For now, presenters, including women, are still allowed on camera on national outlets like Tolo TV, but it is anyone’s guess how long that will last. There are persistent rumors that the Taliban are planning to shut down all state TV broadcasts and replace them with a single radio station. This would echo the Taliban’s approach to media during their first stint in power, when the group allowed only a single media outlet – its own Radio Sharia.

The slow death of Afghan media is a tragedy not just for the many brave Afghan journalists, but also for the country as a whole. With international interest waning – and few foreign outlets with a presence in the country – homegrown Afghan media is needed more than ever to shine a light on Taliban abuses.

The post The Taliban’s Slow Dismantling of Afghan Media appeared first on Just Security.
