Justin Hendrix | Just Security: A Forum on Law, Rights, and U.S. National Security

Questions Lawmakers Should Ask About Inspector General Report on Signalgate
December 10, 2025 | https://www.justsecurity.org/126591/inspector-general-report-hegseth-signal/

The OIG report on the “Signalgate” incident is far from the “total exoneration” claimed by Hegseth and his aides.

On Dec. 2, the Department of Defense (DoD) Office of the Inspector General (OIG) published an unclassified version of a report on the incident that has come to be known as “Signalgate.” The report concerns Secretary of Defense Pete Hegseth’s use of a personal device and the encrypted messaging app Signal to share sensitive information with other officials—and the editor-in-chief of The Atlantic, who was added to the group chat—about an impending military strike in Yemen.

On March 24, 2025, The Atlantic published the first in a series of articles containing material from the group chat, including screenshots of Signal messages between cabinet-level officials discussing the authorization and operational details of the strikes, which took place on March 15. Hegseth, then National Security Advisor Michael Waltz, and other cabinet members, including the administration’s two most senior intelligence officials, discussed matters including the number of aircraft involved in the attack, the kinds of munitions dropped, specific times for the attack, and targets on the ground, according to The Atlantic.

OIG published its report on the incident alongside a companion report offering recommendations for the handling of sensitive information on “non-DoD controlled electronic messaging systems.” The OIG conducted its evaluation of the incident from April through October 2025, collecting information and documents and conducting interviews with current and former DoD personnel to identify “the factual circumstances and adherence to policies and procedures surrounding the Secretary’s reported use of Signal to conduct official government business from approximately March 14 through March 16, 2025.”

In a previous article on Just Security, Ryan Goodman analyzed the criminal laws that could apply to Signalgate. That question fell outside the scope of the OIG report, which did “not try to identify whether any person violated criminal laws.” Instead, the report assessed whether Hegseth and other DoD officials “complied with DoD policies and procedures for the use of the Signal commercial messaging application for official business” in “compliance with classification and records retention requirements.” The report’s findings and recommendations raise a number of questions that lawmakers should address.

OIG Report Findings and Recommendations

The OIG found that Hegseth, who declined to be interviewed, shared sensitive, non-public information from a USCENTCOM briefing in the Signal group just hours before the United States conducted strikes in Yemen. In doing so, the report says, “the Secretary’s actions did not comply with DoD Instruction 8170.01, which prohibits using a personal device for official business and using a nonapproved commercially available messaging application to send nonpublic DoD information.”

The OIG explicitly stated that the information shared in the group could have created a risk to U.S. forces, contradicting a written statement by Hegseth.

Although the Secretary wrote in his July 25 statement to the DoD OIG that “there were no details that would endanger our troops or the mission,” if this information had fallen into the hands of U.S. adversaries, Houthi forces might have been able to counter U.S. forces or reposition personnel and assets to avoid planned U.S. strikes. Even though these events did not ultimately occur, the Secretary’s actions created a risk to operational security that could have resulted in failed U.S. mission objectives and potential harm to U.S. pilots.

The report also found that Hegseth and his office failed to retain the messages as required by federal law, since some of the messages were “auto-deleted before preservation.” DoD was only able to provide “a partial transcript of the Signal messages based on screenshots taken from the Secretary’s personal cell phone on March 27, but this record did not include a significant portion of the Secretary’s conversations disclosed by The Atlantic,” according to the report. The OIG therefore “relied on The Atlantic’s version of the Signal group chat.”

The report also detailed procedural issues with the classification of operational information in USCENTCOM communications, including a lack of appropriate markings on certain communications.

With regard to Hegseth’s failure to comply with DoD instructions on the use of a personal device and non-approved commercial app for the conduct of official business, the OIG did not make a recommendation, asserting that “the use of Signal to send sensitive, nonpublic, operational information is only one instance of a larger, DoD-wide issue.” The single actionable recommendation from the OIG evaluation is that USCENTCOM should review its classification procedures and “ensure that clear requirements are communicated” for marking classified information.

The companion report included a number of other recommendations. It suggests that the DoD should work to remove the incentive for personnel to use apps such as Signal by providing better official alternatives, that it should conduct department-wide cybersecurity training, and that senior leadership should receive training and “a knowledge assessment” on the use of mobile devices and applications. The DoD Chief Information Officer agreed with most of the recommendations, but quibbled with creating new department-wide training, arguing it would be expensive and “redundant” to existing efforts.

Questions Lawmakers Should Ask Now 

The OIG report raises questions, including about its drafting and scope. For instance, while Appendix A of the report stipulates that it “does not try to identify whether any person violated criminal laws,” two pages later it says OIG “obtained support from the Administrative Investigations and Defense Criminal Investigative Service Components in the DoD OIG,” which “advised and assisted the project team with analysis of potential criminal conduct and taking recorded and sworn testimony from DoD officials.” Was the inquiry truly limited in its scope, or did OIG implicitly conduct a criminal-adjacent investigation without stating so? Was any material left out of the report that would have been important for Congress or the public to know?

Regardless, the OIG report is far from the “total exoneration” claimed by Hegseth and his aides. Rep. Don Bacon (R-NE) told CNN’s Brianna Keilar that claims the report exonerated Hegseth are “total baloney,” while Rep. Jim Himes (D-CT) told CBS News’ Face the Nation that his Republican colleagues are expressing concern over the findings. But when asked if he would use Signal again, the Secretary told a Fox News correspondent on Saturday that he does not “live with any regrets.”

Given bipartisan concern over the issue, Congress should pursue a more substantial inquiry into the incident, and look into the “DoD-wide issue” that the OIG report says stems from the use of Signal. Perhaps there are legislative solutions. Congress could write a law to require DoD to deploy a secure messaging application to reduce the incentive to use consumer apps, or more clearly codify consequences for senior officials—including Cabinet members—who violate electronic communications or records laws.

That might create accountability for a future Secretary of Defense. It would at least put the same degree of accountability in place for the civilian leader of the military as for his subordinates. As The Atlantic’s editor-in-chief, Jeffrey Goldberg, put it on Friday:

I try not to express my personal views from this chair, but since Signalgate happened on my phone, let me say that the most disturbing aspect of this whole episode is that if any other official at the Department of Defense, and certainly any uniformed military officer, shared information one one-hundredth as sensitive as Hegseth and others shared on an insecure messaging app, without even knowing that the editor-in-chief of The Atlantic was on the chat, they would be fired or court-martialed for their incompetence.

Perhaps such common sense is insufficient to serve in place of a rule. Congress has the opportunity now to use the OIG report as the starting point to consider what should happen next. If it fails to do so, then the report will be filed away as the endpoint Hegseth claims it is.

Regulating Social Media Platforms: Government, Speech, and the Law
April 2, 2025 | https://www.justsecurity.org/109603/regulating-social-media-platforms-series/

Launching a new series with leading experts on regulating the information environment, co-organized by NYU Stern Center for Business and Human Rights and Tech Policy Press.

Just Security, Tech Policy Press, and the NYU Stern Center for Business and Human Rights are pleased to present a new symposium, Regulating Social Media Platforms: Government, Speech, and the Law.

This will be a pivotal year for technology regulation in the United States and around the world. With respect to social media companies in particular, the EU is already regulating platforms based on perceived harms they cause, most notably through its Digital Services Act, with other countries pursuing a range of regulatory approaches. In the United States, regulatory proposals at the federal level, such as renewed efforts to repeal or reform Section 230 of the Communications Decency Act, have been stalled in Congress while federal regulators at the Federal Trade Commission and Federal Communications Commission appear set to take aim at the content moderation practices of major platforms. Meanwhile, individual states such as Florida and Texas have tried to restrict content moderation and run into serious constitutional challenges. Other states, such as California and New York, have primarily aimed to force greater transparency on the part of social media companies; they, too, have encountered constitutional challenges.

Leading expert authors in this symposium evaluate current and prospective regulatory approaches to answer the following questions: is it lawful, feasible, and desirable for government actors to regulate social media platforms to reduce harmful effects on democracy and society? If so, how? What are the prospects for meaningful federal regulation, given the divisions in Congress and increasingly partisan leanings of regulators? And will the states be viable laboratories of experimentation? How will regulation in other parts of the globe impact content moderation practices for U.S.-based companies? What is the short-term future for government regulation of social media platforms, and how might the regulatory landscape evolve over the longer term? These and related questions will be considered in this ongoing symposium. 

We encourage you to visit this page regularly, as it will be updated with new articles as they are published.

 

IMAGE: (L) Abstract chat icons over a digital surface (via Getty Images); (M) Visualization of an online network (via Getty Images); (R) Popular social media apps on an Apple iPhone (via Getty Images).

“Fired” Member of U.S. Privacy Oversight Board Discusses What He Considers at Stake
February 28, 2025 | https://www.justsecurity.org/108525/fired-pclob-privacy-board/

“The risks to U.S. persons, as well as non-U.S. persons, from the misuse, abuse, and exfiltration of data are quite substantial.”

On Jan. 22, 2025, three Democratic members of the Privacy and Civil Liberties Oversight Board (PCLOB)—an intelligence watchdog established in 2004 and charged with monitoring the government’s compliance with procedural safeguards on surveillance activities—were terminated by President Donald Trump. I recently interviewed one of them, and in the transcribed interview below, we discuss the significance for privacy protections, EU-US data transfers, and more.

The removal of the Democratic board members leaves the PCLOB with only one active member—the sole remaining Republican appointee, Beth Williams. As Andrew Weissmann indicated on Just Security shortly after the announced terminations, the board cannot take on any new projects without a quorum. That’s also because, by statute, the Board requires bipartisan membership. That means this “indispensable and important body for conducting valuable oversight”—as Weissmann put it—of the federal government’s vast intelligence and surveillance apparatus is effectively neutered.

On Monday, two of the Democratic members that were terminated–Travis LeBlanc and Edward Felten—filed suit against the government, asking the court to declare their removal was illegal and to restore them to their positions. The stakes are high, they argue:

“The President’s actions strike at the heart of the separation of powers. Not only do Plaintiffs’ removals eradicate a vital check on the infringement of ordinary Americans’ civil liberties, they also hobble an agency that Congress created to assist it with oversight of the executive branch.” 

The suit points out that three years after the PCLOB was first established by law as an office in the White House, the 9/11 Commission Act of 2007 “deleted language from the prior statute stating that members served at the President’s ‘pleasure,’” as well as another clause that said, “[t]he Board shall perform its functions within the executive branch and under the general supervision of the President.”

The courts will now have to decide if the terminations were legal. 

There are broader implications beyond domestic concerns about surveillance and privacy abuses. For instance, the EU-US Data Privacy Framework relies on the work of the PCLOB, including reporting that aids the European Commission in assessing whether the U.S. government’s surveillance practices are aligned with EU data protection standards.

Just days before the lawsuit was filed against the government by LeBlanc and Felten, I spoke to LeBlanc about what is at stake. In addition to making the case for the importance of the board’s independence, LeBlanc highlighted a number of issues:

  • He noted that the board will not be able to start new investigations or issue reports, including on its imminent work on the use of biometrics in aviation security, the FBI’s use of open source information, the reauthorization of Section 702 of the Foreign Intelligence Surveillance Act (FISA) in 2026, and the role of tactical terrorism response teams (TTRTs) at the U.S. border.
  • He referenced European concern about whether the United States is meeting the standard for adequate protections for the personal data of Europeans required under the EU-U.S. Data Privacy Framework, the assurance of which relied substantially on the PCLOB and its independent reporting. 
  • He suggested that a lack of proper oversight could leave U.S. citizens vulnerable to surveillance overreach.

What follows is a lightly edited transcript of the discussion.

Justin Hendrix: Travis, what does it mean for the Privacy and Civil Liberties Oversight Board not to have a quorum?

Travis LeBlanc: Without a quorum, the board is unable to conduct business at the board level. Under the PCLOB’s statute, there have to be at least three board members to conduct business and have a quorum. Currently, there is only one board member, Beth Williams. She is a part-time board member who is at the agency and, therefore, is unable to issue board guidance or board reports. She can issue a statement in her own name, but it would just be in her individual official capacity … it would not be at the board level and would not reflect a bipartisan, independent review of any of the issues that the PCLOB normally has to confront. 

Justin Hendrix: And what business are you most concerned about going essentially undone at the moment?

Travis LeBlanc: The PCLOB has several oversight projects that are currently ongoing. The one that was most advanced was an investigation into the use of biometrics in aviation security, which generally would include agencies such as Customs and Border Protection or the Transportation Security Administration [TSA].

You could think of this investigation as one that looks at issues like the use of facial recognition technology by the TSA. I’m sure just about all of us who’ve traveled through airports have probably experienced the ability to use facial recognition at TSA checkpoints to verify your identity. It is opt-in currently, although the former TSA administrator now has indicated that he’d like to make it mandatory. We’ve been looking at facial recognition in aviation for several years now, and that is a report, for example, that the board could not put out right now without a quorum. 

Other issues that the board is working on involve open source information—in particular, the FBI’s use of open source information. There are also two new projects that have been opened. One is that the board is looking at Section 702 of the Foreign Intelligence Surveillance Act [FISA] to prepare for the reauthorization of that provision next year, about a year from now.

Historically, the PCLOB has put out the seminal report looking at Section 702, and unfortunately, the board will not be able to do that in 2026 without a quorum. One final project that I’ll mention (there are others out there) involves the use of tactical terrorism response teams (TTRTs) by the Department of Homeland Security.

These TTRTs are deployed at the border and oftentimes are collecting a lot of sensitive information, for example, from a phone that might be on a person crossing the border. Many U.S. persons, Americans, may not know that at the border they don’t have the same Fourth Amendment protections as they have in other places in the country.

And so when you’re at the border, even if you’re a U.S. citizen, the government doesn’t necessarily need a subpoena or a search warrant to get access to your personal phone and/or laptop that you may have with you. There are concerns about the sharing of information gathered there and the targeting of people.

And so that’s another investigation for which the board could not put out a report without a quorum. 

Justin Hendrix: I want to switch gears just a moment and ask you a little bit about EU-U.S. data privacy, and the framework, and the role that PCLOB played in that. What might be the immediate impacts on data transfers between Europe and the U.S.? I suppose if you were at the European Commission or you put yourself in the shoes of a counterpart in Europe, what would you be thinking about knowing that PCLOB is essentially in the circumstances it is in at the moment? 

Travis LeBlanc: PCLOB is very involved in the negotiations and resolution over data transfers between Europe and the United States.

I served on the board for nearly six years, and during that time, we participated in several of the European reviews of the United States’ adequacy status. And a couple of years ago, President Biden issued Executive Order 14086, which is one of the most significant reforms of signals intelligence activities in the history of the country.

In that executive order, several commitments are made about the legitimate bases for the exercise of signals intelligence, as well as the direction for a framework that would allow Europeans in particular to seek redress in the United States for concerns about the misuse of their personal data by the U.S. government.

That order encourages the PCLOB—it doesn’t instruct, because we are an independent agency—to support the redress mechanism by making recommendations on judges for a new Data Protection Review Court that would consider redress claims brought by Europeans. It also encourages the PCLOB to conduct a review of the redress process and, in particular, the Data Protection Review Court, to ensure that it is complying with governing procedures and policies, the executive order, and the broader data privacy framework. The PCLOB formally accepted those roles to do exactly what the president encouraged us to do. This has been critical to the negotiations.

It has also been critical to the data privacy framework with Europe, because the PCLOB is an independent agency and has built a reputation for being fair and transparent. When the U.S. government talks to the Europeans about the broad oversight mechanisms we have in the United States, it often highlights the PCLOB as one of the central features of our oversight of intelligence activities. The gutting of the board, in particular removing the Democrats and the U.S. taking the position that all board members must serve at the pleasure of the president, demonstrates to the Europeans that the PCLOB actually isn’t independent. And while in this instance the decision appears to have been made solely because we were Democrats, it’s not a far leap that, in the future, a Democratic board member drafting an opinion or a statement that the administration disagreed with would be treated as a terminal offense.

And so the idea that the PCLOB of the future could produce independent or nonpartisan expert reports is immediately called into question, because presumably those reports would be reviewed by the current administration. They would essentially be the product of the views of the administration, which would not have the indicia of independence that the Europeans have been looking for with the PCLOB or, frankly, even with the Data Protection Review Court, because presumably all of those judges would serve at the pleasure of the president as well and wouldn’t have any protections for independence on a going-forward basis.

From what we’ve seen, there’s concern being raised in Europe about whether the United States continues to offer adequate protections for the personal data of Europeans, given that the oversight and redress mechanisms that were put in place may not have the same independence as they did in the past.

Justin Hendrix: We will see if the European Commission decides to take such concerns seriously and respond in some procedural way. I want to ask you—knowing that you only have a moment left—I read one report that suggested you might be considering legal action regarding the termination of the board positions. Is that right? If so, what would be the argument? 

Travis LeBlanc: Ed Felten and I have retained Arnold & Porter in connection with our unlawful terminations from the Privacy and Civil Liberties Oversight Board. 

Justin Hendrix: Speaking as a citizen, what would you most want your neighbor to understand about the PCLOB and what’s missing at this moment? What’s different about this particular lack of a quorum versus in the past? 

Travis LeBlanc: As technology has developed, it has become easier for the U.S. government to obtain a massive trove of information on U.S. persons as well as non-U.S. persons. It has also become much easier for the government to share that information between agencies, between the federal and state and local governments, as well as between the U.S. government and international governments. It has also become easier for the U.S. government to lose that information, whether through an insider who releases it or through a vulnerability or cyber attack of some sort.

The risks to U.S. persons, as well as non-U.S. persons, from the misuse, abuse, and exfiltration of data are quite substantial. It is a national security issue, and it’s an issue where there is only one agency that has been focused on it full-time. In fact, it’s the only agency that has privacy in its name in an age where privacy is one of the key civil rights issues of the century. At this time, when there is so much risk, it is more important than ever to have an expert body that is there to look at matters, to advise on issues around any illegality or compliance infractions, and to be able to offer expert advice to Congress and the president on how to approach and address those issues on a going-forward basis.

Justin Hendrix: Travis LeBlanc, thank you very much. 

Travis LeBlanc: Thank you, Justin. 

The podcast audio of the interview is available at Tech Policy Press.

 

IMAGE: Travis LeBlanc, nominee to be a member of the Privacy and Civil Liberties Oversight Board, testifies during a Senate Judiciary confirmation hearing on Capitol Hill on February 5, 2019 in Washington, DC. (Photo by Zach Gibson/Getty Images)

Nine Experts on the Impact of President Trump’s Pardons and Commutations for January 6 Offenders
February 3, 2025 | https://www.justsecurity.org/107288/nine-experts-pardons-january-6/

We asked nine experts about what clemencies might herald for the future of the rule of law, political violence, and extremism.

On the first day of his new administration, President Donald Trump granted clemency to the nearly 1,600 people convicted in connection with the January 6, 2021 attack on the United States Capitol. They included individuals convicted of violence against Capitol police officers and extremist leaders such as Oath Keepers founder Stewart Rhodes and Proud Boys leader Enrique Tarrio. Rhodes and Tarrio were sentenced to a combined forty years in prison after their convictions for seditious conspiracy and other crimes. 

Trump has long cast those convicted and imprisoned for crimes related to the January 6 attack as victims. As a presidential candidate, he made clemency and securing their freedom a campaign promise. But Trump’s decision to release even the most violent figures, including those associated with organized militias, is also consistent with his history of providing support and legitimacy to armed right-wing extremists. Subsequent events, including the firing of Department of Justice prosecutors and a possible purge of Federal Bureau of Investigation agents who handled January 6 investigations, raise questions about how far Trump is willing to go to punish his perceived opponents. 

Chillingly, The Washington Post has reported that FBI agents in the Washington field office were told “to prepare for the White House to publicly release the names of the agents who worked on the two Trump criminal cases,” which could set the stage for harassment or even violent reprisals against rank-and-file agents. 

Beyond the immediate implications of these actions, there is reason for substantial concern about the longer-term effects on the rule of law, political violence, and extremism in the U.S. We asked nine experts to respond to a set of questions on what the clemencies might herald for the future:

  • How might these pardons and commutations influence future acts of political violence? 
  • What are the risks that these pardons and commutations will embolden violent right-wing extremist militias or other organized groups to expand their activities? 
  • How do these pardons and commutations affect the likelihood of future disruptions to democratic processes or other forms of political violence? 
  • What individuals, communities, events, or other potential targets of violence are of most immediate concern due to Trump’s act of presidential clemency? 

We received responses from the following experts:

  • Susan Benesch is the executive director of the Dangerous Speech Project, a faculty associate of the Berkman Klein Center for Internet and Society at Harvard University, and an adjunct associate professor at American University’s School of International Service.
  • Cody Buntain is an assistant professor in the College of Information at the University of Maryland with an affiliate appointment at the University of Maryland Institute for Advanced Computer Studies.
  • Ryan Greer is president of Bedrock and a Security Fellow at the Truman National Security Project.
  • Shannon Hiller is the executive director of the Bridging Divides Initiative at Princeton University and a Security Fellow at the Truman National Security Project.
  • Jared Holt is a senior research analyst at the Institute for Strategic Dialogue (ISD) and a resident fellow at the Atlantic Council’s Digital Forensic Research Lab (DFRLab).
  • Tom Joscelyn is a senior fellow at Just Security and a senior fellow at the Reiss Center on Law and Security. He was most recently a senior professional staff member on the House Select Committee to Investigate the January 6th Attack on the U.S. Capitol.
  • Nathan Kalmoe is the executive director of the Center for Communication and Civic Renewal (CCCR) at the University of Wisconsin-Madison.
  • Rachel Kleinfeld is a senior fellow in the Democracy, Conflict, and Governance Program at the Carnegie Endowment for International Peace. 
  • Cynthia Miller-Idriss is a professor in the School of Public Affairs and the School of Education at American University, where she runs the Polarization and Extremism Research and Innovation Lab (PERIL).

Susan Benesch

Because of the pardons and commutations, we can expect the far right to push further and harder against the law. By excusing and even celebrating past illegal attacks, President Trump has given a tacit but clear endorsement of future violence. And in pardoning everyone, he declined to draw any line between acceptable and unacceptable or even unlawful conduct, so extremists will see no reason to abide by norms. On the contrary, they’ll see lawbreaking as virtuous or even heroic. They are grateful to Trump for rescuing them and want to return the favor, for example, by taking revenge on his real or perceived enemies. More than ever, some will do anything to keep Trump in power. 

Therefore, I won’t be surprised by vigilante attacks on Trump’s opponents or anyone perceived by his loyalists as foreign, non-MAGA, or traitorous, from journalists and federal government employees to Jackson Reffitt, who turned in his father Guy for attacking the U.S. Capitol on January 6 and went on to receive “death threats by the minute.” 

Finally, a new tranche of people will embrace and even commit political violence in the United States – those who feared the law and its enforcement but are now disinhibited by the pardons and commutations. 

Cody Buntain

President Trump’s recent large-scale pardons for those involved in the January 6 insurrection and its planning will, I think, increase the potential for future acts of political violence, both by lone wolf actors and by organized groups. Primarily, this clemency signals to future actors that if their chosen candidate wins, violence or law-breaking in service of that candidate will be forgiven. 

This clemency is particularly concerning as it coincides with moves by tech companies and social media platforms to lower barriers against hate speech and discussion of the kinds of plans and actions we saw around January 6. Given Meta’s recent moves to weaken its internal fact-checking processes, coupled with X’s increasingly overt support of conservative voices, it seems likely to me that we will see more people, particularly vulnerable young men, exposed to the kinds of violent rhetoric we saw on fringe platforms in the lead up to January 6. This increased exposure is likely to lead to more radicalization among these audiences, leading to more isolated incidents of political violence and hate crimes. 

As online social spaces become safer places to openly discuss violence and sedition, however, not only are we likely to see more isolated violence, but I see a potential for more organized political violence as well, as semi-radicalized individuals will have more opportunities to come together in mainstream online spaces and have their grievances weaponized and directed at supposed political foes. More concerningly, these spaces are global in nature. As we saw with the international spread of QAnon messaging, the violent and hateful rhetoric we will increasingly see online seems likely to have more opportunity to spill over across borders as the platforms governing these spaces move into ideological alignment with President Trump and his tolerance for political violence.

Ryan Greer

The pardons of the January 6th insurrectionists send a dangerous signal, normalizing violence and reducing the fear of consequences. This action deepens the normalization of ideological violence. As it feels more normal, strong responses to counter violence—and the movements that inspire it—are less likely. 

Individuals with an unfortunate cocktail of psychological, social, and ideological maladies may be more likely to take violent actions. That future violence will not only be tragic in itself but will also inspire vicarious trauma—the very real trauma of seeing traumatic acts perpetrated against someone with whom you identify—and increase Americans’ fear of “the other”—deepening polarization—thereby continuing this cycle of the normalization of hate and violence. And, of course, it makes it less likely that those in a position to stand up to hate and violence will do so for fear of being targeted themselves. 

As such, lone actors and violent movements will likely take cues from the perceived social permission and feel emboldened. Marginalized communities already targeted by hate are likely to be targeted even more than they are now.  All Americans will suffer an absence of safety and belonging and increased suspicion and fear.

My organization issued a statement noting that the pardons were a threat to democracy. A colleague at a partner organization called it “brave.” That condemning violence used to be normal but now is brave is a signal that democratic norms are fraying and more violence is likely. Returning to an era of accountability for ideological violence is not just a laudable idea, it is a necessity – before it is too late.

Shannon Hiller

Taking this blanket approach to pardons for individuals who attempted to forcefully overturn the results of a fair election sets a dangerous precedent that raises the risk of future political violence. 

Legal accountability played an important role in the decrease in offline mobilization by organized groups with a proven track record of using threats, intimidation, and physical force against political opponents over the last few years. These pardons have the potential to re-energize groups like the Proud Boys or Oath Keepers, who may again feel emboldened to use violence to close off civic space. 

Pardoning or commuting the sentences of the most violent offenders also missed an opportunity, however slim it may have been, for the incoming President to move on from January 6 in the context of his re-election. Prior to these pardons, Vice President JD Vance himself pointed to a potential area of consensus — that protestors who use violence against others or against police should be held accountable. Pardons for non-violent offenders — especially those who expressed remorse or those who were already back in their communities — may have had a chance of moving the country slowly toward reconciliation in the long term. Instead, these blanket pardons continue to push divisive and factually inaccurate narratives of the attack, setting us even further back in any meaningful attempts to build a more unifying narrative.

Jared Holt

The decision to award pardons and commuted sentences to people jailed for the Capitol riot could be internalized by many extremist movements as permission to attempt openly organizing on a larger national scale again. Many movement groups had all but abandoned these aspirations after the Capitol riot, often citing a variety of reasons that included increased scrutiny from federal law enforcement. In the week since the pardons were issued, we’ve already seen extremist movement groups like the Proud Boys try to capitalize on the moment to recruit new members. The effects of these shifts largely remain to be seen, but I worry it will embolden groups with violent track records and work to help sanitize their image in the eyes of the public.

One big thing I’m watching for in the coming months is whether extremist movement groups will latch on to official efforts taken by the Trump administration—particularly those surrounding immigration. For example, I’m curious to see whether militia movement groups try to organize on larger scales at the US-Mexico border or rally to assist law enforcement in carrying out Trump’s promised “mass deportations.” Anti-Trump protests seem to be in short supply this time around, so I imagine we’ll likely see fewer instances where groups show up to antagonize those sorts of events. In the absence of that, some groups might seek other ways to negatively agitate the political environment. 

Tom Joscelyn

President Trump’s pardons and commutations were based on his claim that the January 6th defendants were “hostages.” Of course, they were not “hostages” at all. Instead, they were defendants and convicts whose crimes were and remain, in many cases, well-documented. But Trump and his movement have turned this reality on its head, portraying the January 6th aggressors as victims of a supposedly corrupt and partisan criminal justice system. This is a very extreme idea — a fictional grievance that has now been normalized with the presidential seal of approval. And it is likely to embolden right-wing extremists, including both so-called lone wolves and groups. 

Lone wolves are often motivated by a mix of personal and political grievances. For lone right-wing extremists, the pardons could be viewed as a vindication of the belief that government officials are bad actors deserving of retribution. Similar grievances have already served as motivation for a series of threats against law enforcement and others. 

Right-wing extremist groups such as the Proud Boys, Oath Keepers, and Three Percenters planned for violence on January 6th. In particular, the Proud Boys instigated the attack on the U.S. Capitol. In the months that followed, these groups lost many of their key leaders as they were convicted of, or pleaded guilty to, serious crimes. Trump’s decision has now returned these same leaders, including some with charisma and organizational skills, back into the fold. They are not only free to rejoin their comrades but can also do so with a presidential endorsement. Dozens of lower-level figures, if not more, could also rejoin their former groups as well. Trump has effectively said they were in the right and those who jailed them were in the wrong.

Not all these extremist leaders will seamlessly reintegrate with their former groups. But those who do will be perceived as heroes – martyrs for their cause. That can only help them grow their ranks and, ultimately, expand their footprint and activities.

Nathan Kalmoe

The pardons for violent insurrectionists make January 6th a dangerous template for future disruptive violence at the federal, state, and local levels. We’ve already seen extremist groups attempt to influence political outcomes through armed takeovers of state capitol buildings, armed demonstrations outside the offices of public officials, and violent threats. Following these recent pardons, political violence is more likely to manifest against wavering Republicans at the federal level rather than Democrats, who hold little power now. But Democratic officials at the state and local levels who resist federal overreach and participants in movements organizing against Trump’s policies might be targeted. Down the road, I’m especially concerned about actions against Democratic governors, Democratic Congressional candidates in competitive elections, and convenings of Electoral College voters in the state capitals in 2028. 

But since violence is instrumental, we’ll mostly just see it deployed when government action (whether legal or not) is insufficient to accomplish Trump’s goals. It need not always involve physical violence. Violent threats against officials and other public-facing people are orders of magnitude higher than they were before Trump took office in 2017—many tens of thousands of threats per year. And many more people are made fearful by hearing about those threats and the few acts of violence. We saw this happen during Trump’s second impeachment trial, where Republican Senators who were considering voting to convict him (thus barring him from the presidency) were convinced not to with the reminder that their families would be threatened by enraged Trump supporters. In the future, a weaponized Department of Justice, Internal Revenue Service, or pro-Trump judiciary could persecute political enemies without the threat of physical violence, and that could be a more appealing and effective route to achieve the same goals. 

Elected and appointed leaders changing their political and administrative decisions from fear of harm is antithetical to democracy—it’s rule by violence, not by the people. It was through political violence that white supremacists effectively ended fair elections in the South and condemned Black Southerners to segregation, a loss of civic rights, vulnerability to white violence, and many other harms during the Jim Crow era. That is an important lens through which to view these pardons.

Rachel Kleinfeld

On January 6, 2021, violent individuals harmed 140 police officers and caused millions in property damage. The legal consequences they and others in the mob faced were hugely effective in reducing political violence. After their convictions, rallies and violent behavior from the Proud Boys decreased sharply, and the Oath Keepers largely ceased to exist. Between 2015 and 2021, threats had been skyrocketing against Members of Congress, federal judges, and state and local officials. After the convictions, this growth leveled off, while violent chatter on far-right websites slowed down due to fear of FBI infiltration and feelings of betrayal over Trump’s refusal to financially help the January 6 rioters.

So, the pardons and commutations of these sentences will likely lead to an expansion of political violence. Stewart Rhodes and Enrique Tarrio may have trouble reconstituting the Oath Keepers and Proud Boys—at least under their leadership—but there are hundreds of other extremist groups to join, and new ones will form. Moreover, most political violence in America is not committed by people who formally belong to a group. While they act individually, they are usually inspired by online communities and are highly influenced by whether those communities and political leaders view violence as laudable or appalling.

Research in Israel, Germany, and the U.S. shows that when political extremists think the government is on their side and they will not be held accountable for violence, they commit more violence. We are likely to see more vigilante action in communities facing immigrant round-ups, more militias along the border trying to “assist” law enforcement, and greater threats inspired by the words of politicians and influencers. Hate crimes, which started to rise in 2015 and did not decline, are likely to continue their catastrophic rise as violence against minorities is normalized.

But political violence that becomes this pervasive rarely stays political. When hundreds of violent individuals are allowed to go free because of political connections, it sends a broader message to law enforcement, which does not like to waste scant resources on cases they are less likely to win. As the rule of law is applied more politically, the country can also expect to see greater violence from disturbed individuals mixing personal and political motives – such as the recent school shooter in Tennessee—and more criminal violence.

Cynthia Miller-Idriss

The pardons will have long-term effects on U.S. democratic stability and an immediate impact on Americans’ (already record-low) trust in the judiciary and courts. They reinforce a false narrative that the January 6 attackers did nothing wrong and are an insult to the scores of Capitol police who were injured or died as a result of that day. And they ensure there will never be a commonly accepted historical account of what happened on January 6 (or in the 2020 election more broadly).

The pardons are also a dangerous move for our national security, especially because among those pardoned or granted clemency were dozens of members of far-right militant groups, some of which our allies overseas (such as Canada and New Zealand) consider terrorists. A dozen members of the Proud Boys and Oath Keepers were convicted on the rare and serious charge of seditious conspiracy, which is one step shy of treason. These clemencies send a clear message that some kinds of political violence are acceptable and won’t be prosecuted—undoubtedly emboldening violent actors in ways that put us all at risk.

Extremist actors have celebrated the pardons and pledged revenge. The Proud Boys marched through Washington on Inauguration Day in celebratory anticipation of the pardons, cheered along by Trump supporters as they chanted, “Whose streets? Our streets!” Stewart Rhodes, former leader of the Oath Keepers, and Enrique Tarrio, former leader of the Proud Boys, who were released from decades-long sentences, have called for revenge or retribution against those responsible for their imprisonment—a not-so-subtle threat against prosecutors, judges, jury members, and investigators, among others.

It’s hard to imagine a more combustible moment than the one we are in. The pardons are another layer of kindling that sends a message that violence is not only an acceptable solution to political problems but a preferable one.

Editor’s note: Readers may also be interested in Tom Joscelyn’s What Just Happened: Trump’s January 6 Pardons and Assaults on Law Enforcement Officers By The Numbers, Just Security (January 22, 2025).

 

IMAGE: Members of the Proud Boys make a hand gesture while walking near the US Capitol in Washington, DC on Wednesday, January 6, 2021. (Amanda Andrade-Rhoades/For The Washington Post via Getty Images)

What Just Happened: Trump’s Announcement of the Stargate AI Infrastructure Project
January 22, 2025 | https://www.justsecurity.org/106688/what-happened-trumps-announcement-stargate-ai-project/

An expert explainer on President Donald Trump's announcement of the Stargate AI infrastructure investment project.

On Tuesday, Jan. 21, OpenAI CEO Sam Altman, SoftBank CEO Masayoshi Son, and Oracle Chairman Larry Ellison appeared at the White House with President Donald Trump to announce the creation of a joint AI infrastructure project called Stargate. Introducing the CEOs, President Trump hailed Stargate as “a new American company that will invest $500 billion at least in AI infrastructure in the United States and move and very, very quickly, moving very rapidly, creating over 100,000 American jobs almost immediately.” The goal of the venture, Trump said, is to “build the physical and virtual infrastructure to power the next generation of advancements in AI. And this will include the construction of colossal data centers, very, very massive structures. I was in the real estate business. These buildings, these are big, beautiful buildings.”

Oracle’s Larry Ellison spoke of data centers already under construction in Texas. OpenAI’s Altman promised advances in AI will mean “diseases get cured at an unprecedented rate.” And SoftBank’s Masayoshi Son made more grandiose promises. “[Artificial general intelligence] is coming very, very soon,” he said. “And then after that, that’s not the goal. After that, artificial superintelligence. We’ll come to solve the issues that mankind would never ever have thought that we could solve. Well, this is the beginning of our golden age.” Son pledged an initial investment of $100 billion, with a commitment to reach $500 billion within five years.

A statement on OpenAI’s website laid out more details, promising the venture would “not only support the re-industrialization of the United States but also provide a strategic capability to protect the national security of America and its allies.” It says the first equity funders in the Stargate venture are SoftBank, OpenAI, Oracle, and MGX. “SoftBank and OpenAI are the lead partners for Stargate, with SoftBank having financial responsibility and OpenAI having operational responsibility. Masayoshi Son will be the chairman.” The statement adds that “Arm, Microsoft, NVIDIA, Oracle, and OpenAI are the key initial technology partners,” and that the partners are “evaluating potential sites across the country for more campuses” even as the agreements are being drawn up.

The announcement followed President Trump’s order, issued hours after his inauguration, rescinding President Joe Biden’s Oct. 30, 2023, Executive Order on the “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” That move fulfilled a promise of the GOP platform adopted at the Republican National Convention in July. The GOP claimed that the Biden order “hinders AI innovation” and “imposes radical leftwing ideas” on its development. But large-scale AI infrastructure development was a focus of the Biden administration, as well. A week before he left office, President Biden signed an Executive Order “leasing federal sites owned by Defense and Energy departments to host gigawatt-scale AI data centers and new clean power facilities,” according to Reuters.

Across the United States, some communities have resisted the development of data centers, often over environmental and natural resource concerns – data centers use an enormous amount of electricity and water. Whether the Trump administration will succeed in clearing the path for development, and whether the demand exists to meet this scale of investment, remains to be seen.

Editor’s note: This piece is part of the Collection: Just Security’s Coverage of the Trump Administration’s Executive Actions

IMAGE: US President Trump speaks in the Roosevelt Room flanked by Masayoshi Son (2L), Chairman and CEO of SoftBank Group Corp, Larry Ellison (2R), Executive Chairman of Oracle, and Sam Altman (R), CEO of OpenAI, at the White House on January 21, 2025, in Washington, DC. (Photo by Jim Watson/AFP via Getty Images)

Musk, X, and Trump 2024: Where are the Legal and Ethical Boundaries?
September 20, 2024 | https://www.justsecurity.org/100265/musk-trump-legal-boundaries/

Analysis of the ethical and legal lines for social media companies in influencing the outcome of a presidential election, in light of Elon Musk's support for former President Trump.

Editor’s note: This article is part of a new series from leading experts with practical solutions to democratic backsliding, polarization, and political violence.

Elon Musk, the world’s richest person, is actively using X (formerly Twitter), the social media platform he bought in 2022, to promote his far-right political interests. Vittoria Elliott at WIRED writes that Musk has made the platform “his own personal political bullhorn.” Tech journalist Casey Newton says that X is now “just a political project.” 

Even before his endorsement of former President Donald Trump in July, Musk had already changed X in substantial ways that favor MAGA politics and personalities. He restored accounts of far-right figures such as white nationalist Nick Fuentes and conspiracy theorist Alex Jones, relaxed content moderation policies, and disposed of much of the company’s trust and safety and election integrity teams.

But after the endorsement, Musk’s efforts on behalf of the Trump campaign appeared to surge. He launched a political action committee dedicated to Trump’s reelection. He hosted a livestream with the former president, whom Musk coaxed back onto the platform after he had been banned for incitement to violence following the January 6, 2021 attack on the U.S. Capitol. According to The New York Times, Musk “hired a Republican operative with expertise in field organizing” to help direct his political activities. And according to Trump, Musk has agreed to head a “government efficiency” task force that aims to make “recommendations for drastic reforms” across the entire federal government should Trump win a second term. All the while, Musk continues to promote his preferred brand of politics through posts on his own account, which has close to 200 million followers (and frequently appears in users’ feeds as recommended content whether they follow him or not).

While it may be jarring for users and others who hoped the platform might not succumb so completely to the interests of its singular owner, it is Musk’s prerogative to shape X to his tastes. No social media platform is truly politically neutral. And it is not a novel circumstance that the singular owner of a platform has such extraordinary influence over its operations, policies, and political decisions. But Musk’s use of X to try to influence the 2024 U.S. election is the most extreme example to date of a person in his position using a major social media platform for a specific political outcome. It is so blatant and so far outside the norm that it can be considered in a class of its own. 

This unique scenario offers an opportunity to consider the various ways that social media platforms could be used to provide advantages for one political campaign over another, and where the ethical and legal lines might currently be drawn. The exercise may also be useful for X staff and board members, who should think carefully in advance about the potential legal and public policy questions arising from Musk’s politicking, including what actions could trip into possible violations of campaign finance and elections law. 

Where are the red lines? 

The operators of social media platforms regularly make design, technical, product, and policy decisions that may favor certain parties and their politics. In the past, though, they have typically done so incidentally or to protect themselves from accusations, good faith or otherwise, of political bias. For example, there are many past cases of platforms shielding conservative users from content moderation. However, we are not aware of a clear example in which the owners of another major social media platform acted to intentionally boost a specific candidate or party. 

Musk is different. So his employees would do well to be on the lookout for evidence that they are being directed to engage in political manipulation. But not all acts of political manipulation are equal. Some may violate the law, while some are likely legal even if unethical. And some may not be clearly unethical either.

Below, we present possible ways that various aspects of the platform, its policies, and its advertising products could be used to specifically advance the interests of a candidate for office. In order to assess where legal and ethical lines might be drawn, we asked experts on privacy, campaign finance, and election law about different forms of potential manipulation of such a social media platform. Based on their insights, we sorted these possibilities into three categories: those that are potentially illegal, those that fall into a gray zone, and those that raise serious policy questions but pose few legal risks for X and Musk.

Potentially illegal use of social media tools

Platform operators have broad First Amendment rights to determine which speech they allow, how they moderate content, and with whom they do business. Many of their decisions are also shielded from liability by Section 230 of the Communications Decency Act.

But platform operators are also restricted by Federal Election Commission (FEC) regulations, which stipulate that certain actions might be considered in-kind campaign contributions. And, of course, actions taken by corporations or individuals that encourage violence or interfere with electoral processes are also subject to laws governing voting rights and incitement.

Even so, the bar for proving a violation of these laws is often high. For FEC violations, in particular, it is typically necessary to show, with concrete evidence, that an actor coordinated directly with a campaign. Though this high bar might benefit X in its current incarnation, it also protected Twitter from conservative critics in the past. For instance, an Aug. 2021 decision by the FEC found the company did not make a material contribution to the Biden campaign by briefly blocking links to a New York Post article about Hunter Biden’s laptop. 

There are additional considerations as well. For example, political action committees (PACs) and other non-campaign entities are not bound by the same rules as campaigns. FEC rules also carry a “press exemption,” which might apply to X. The FEC is politically deadlocked, making enforcement through it even more unlikely. And the Justice Department appears focused on less controversial conduct. Violations can be difficult to detect given the opacity of social media platforms and their internal operations. Legal proceedings – FEC or otherwise – can take months or years. By the time a case is resolved, the election could be long past. 

But even if laws and regulations go unenforced, potential violations are still a matter of public concern. Were alleged violations to come to light, they could inspire future congressional hearings, investigations, or legislation. They might also lead to pressure from advertisers or others on Musk to change. And while enforcement may take time, a well-founded criminal indictment or the opening of a formal investigation of a company could also help propel actors toward legal compliance. 

What follows are three areas that may raise legal concerns:

1. Advertising and monetization. Consider if Musk decided to offer discounted advertising rates to favored campaigns or PACs, or if he implemented stricter approval processes for disfavored political actors. Indeed, Musk's live-streamed interview with former President Trump on X is already under scrutiny, as some observers have wondered whether the Trump campaign paid to promote the event and, if so, how much it paid. X also heavily promoted the interview with a designated hashtag and in its "trending" section for users. Neither X nor the Trump campaign has provided the public with the details of their arrangement. 

According to Adav Noti, executive director of the nonpartisan Campaign Legal Center (CLC), charging differential rates for campaign advertising would likely be illegal. 

2. Incitement to violence. Another scenario to contemplate is Musk, through his account or platform support for other accounts, encouraging political violence during or after the election. He could, for example, urge militia members to stake out polling places, call for violent demonstrations, or invoke ideas of a "civil war" (as he recently did in the United Kingdom). In terms of technical tools, one should consider scenarios in which Musk could manipulate the content recommendation algorithm or interfere in internal enforcement against similar content. Of these possibilities, posts by Musk directly inciting violence or issuing what Noti called "legally cognizable threats" are the most likely to violate the law; but the bar for proving incitement to violence in court can be quite high. 

[Editor’s note: Readers may be interested in Berke Gursoy, “True Threats” and the Difficulties of Prosecuting Threats Against Election Workers]

3. Interference with election processes or voting rights. Another scenario to consider is Musk making statements or using X to amplify statements violating state and federal election laws about voting rights and election interference. He has already publicly embraced conspiracy theories about election fraud by, for example, claiming that non-citizens are preparing to vote in the upcoming election. Building on these conspiracy theories, Musk could encourage protests and election “observers” (including armed militia members) to monitor polling places. Alternatively, Musk could propagate conspiracies that encourage rogue poll workers and other potential saboteurs to take actions that threaten the election’s integrity. These scenarios might violate both recent and longstanding civil rights laws, such as section 11 of the Voting Rights Act or the KKK Act, but precedent for their enforcement is evolving. For example, the pending case of United States v. Mackey concerns a man charged under the Enforcement Act of 1870 with using his social media presence to spread false information about voting procedures in a way intended to suppress voter turnout. 

That case may be an exception in terms of social media serving as the mechanism, and it underscores that existing laws need to be updated, not just for the artificial intelligence ("AI") age but for the internet age in general. As a case in point, prosecutors charged the man allegedly responsible for generating a deepfake call of President Joe Biden in New Hampshire under a law that specifically applies to robocalls. A social media post containing the same type of deepfake might not have triggered prosecution under the same law because social media firms are not regulated in the same way as telecoms (see, for example, the Telephone Consumer Protection Act of 1991, codified at 47 U.S.C. § 227, and its definition of "automatic telephone dialing systems;" but see also the federal law prohibiting fraudulent misrepresentation of campaign authority, 52 U.S.C. § 30124, and analysis of its application to AI/deepfakes). 

Legally questionable and ethically dubious

Other actions Musk might take fall into a legal gray zone, in which an argument could be made for illegality but existing precedent is unclear. Though ethically dubious, these acts might not violate existing law (or might violate it in ways that are not enforceable). Some might even argue (perhaps cynically) that these kinds of manipulations are common on both sides of the aisle in contemporary politics.

4. Recommendation algorithms and visibility of content. One potential scenario to consider is if Musk chose to alter how X delivers content to users in order to promote certain accounts or narratives over others. For example, consider if he were to use recommendation algorithms to show content promoting one candidate to more users, or conversely to show supportive messages about the candidate’s opponent to fewer users. He could also manipulate features such as trending topics or hashtags to bury content unfavorable to a candidate, boost content friendly to their campaign messaging, or do the opposite for the candidate’s opponent. The CLC’s Noti said it is possible a judge could regard manipulation of recommendation algorithms as a material contribution to a campaign, but that it is “at minimum a gray area.” It is also possible courts would find such activity to be protected speech.
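To illustrate how little code such a manipulation would require, and why it would be hard for outsiders to detect, the toy Python sketch below shows a hypothetical feed-ranking function in which a single topic weight boosts one candidate's content and buries the other's. It is our own illustration, with invented names and numbers, and is not based on any knowledge of X's actual ranking systems.

# Hypothetical illustration only: a toy feed-ranking function, not X's actual code.
# A single, hard-to-detect multiplier is enough to tilt visibility toward one campaign.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    engagement_score: float  # likes, reposts, replies, normalized to 0-1
    topic: str               # e.g., "candidate_a", "candidate_b", "other"

# Hypothetical knob a platform insider could set; values > 1 boost, < 1 bury.
TOPIC_WEIGHTS = {
    "candidate_a": 1.5,  # boosted
    "candidate_b": 0.6,  # buried
}

def rank_score(post: Post) -> float:
    """Return a ranking score; higher scores surface earlier in the feed."""
    return post.engagement_score * TOPIC_WEIGHTS.get(post.topic, 1.0)

posts = [
    Post("user1", 0.80, "candidate_a"),
    Post("user2", 0.80, "candidate_b"),
    Post("user3", 0.80, "other"),
]

# Identical engagement, very different visibility once the weights are applied.
for post in sorted(posts, key=rank_score, reverse=True):
    print(post.topic, round(rank_score(post), 2))

Because a change like this lives deep inside proprietary ranking code and produces no visible policy announcement, it is exactly the kind of manipulation that only insiders, or researchers with mandated data access, would be positioned to catch.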

5. Disinformation and synthetic content. Another scenario is if Musk chose to allow certain types of behavior to go unchecked, such as coordinated disinformation, inauthentic behavior benefitting political allies, or allowing the spread of misleading content that would otherwise violate content policies. He has already shared a video featuring a clone of Vice President and Democratic presidential nominee Kamala Harris’s voice. There is currently no federal law prohibiting misrepresentations of this sort (the federal law prohibiting fraudulent misrepresentation of campaign authority is limited in scope and applies only to conduct perpetrated by a candidate or their agents).

Senators Amy Klobuchar (D-MN) and Lisa Murkowski (R-AK) have introduced a bill requiring transparency disclosures on ads with "AI-generated content." Even if such a law were in place, though, perpetrators could argue that such deepfakes are protected as satire. The Harris video contained such obviously sarcastic messages that it would be hard for a relatively informed voter to believe it was authentic. A number of state laws do exist, but they are new and largely untested. 

6. Other forms of support. Other scenarios to contemplate include if Musk offered other valuable support to a favored political candidate, such as providing more human resources to assist one campaign in utilizing the platform than to its competitor, providing early access to new features or tools, using company resources to conduct voter outreach, or even providing direct monetary support. There are many years of precedent governing corporate contributions such as get-out-the-vote campaigns, which must be "nonpartisan." If it could be proven that X intended to benefit one campaign over another, such activity might be illegal. Other forms of support have proven controversial, but do not appear to be illegal unless they are offered on preferential terms. This was a subject of debate following revelations about the embedding of Facebook and Google staffers in the 2016 Trump campaign. 

Probably legal, whether ethically right or wrong

Finally, under a number of federal laws (or lack thereof), there are ways Musk could use X to tip the electoral scale that are almost certainly legal, even if they raise serious ethical questions. These include: 

7. Interfering in content moderation. Musk could influence the visibility of certain viewpoints through selective enforcement of content moderation policies. This might also include overruling moderation decisions on high-profile posts, including those of leading candidates, or handling certain accounts differently from others, such as political influencers. Asymmetries in how moderation resources are allocated, as well as policy adjustments that favor one viewpoint over another, could result in favoring certain candidates and parties. Imagine if, for example, the United States were to see another round of protests against police violence, and X were to take a light touch approach toward hate speech and violent incitement by MAGA supporters against protesters. Recent rulings by the Supreme Court make it more likely this would be considered protected First Amendment expression, and such conduct presumably would also be protected under Section 230 of the Communications Decency Act. 

8. Targeting and user data. Manipulative behavior in this domain might include providing privileged access to user data for a preferred candidate or campaign’s targeting, unique lead generation, data sharing (including through custom APIs), data collection mechanisms, or even the selective enforcement of privacy policies to advantage or disadvantage parties. As with preferential advertising rates, one could argue this is a material contribution to a campaign. However, legal experts we spoke to said it would be easy for X to argue in court that this was a business decision to help deliver content of interest to its user base. It would take exceptionally compelling evidence to prove X had coordinated with the Trump campaign to provide a material contribution in violation of FEC rules.

9. Marketing and messaging channels. Musk could choose to utilize X’s own communications mechanisms, such as emails or in-app notifications, to issue announcements or selectively communicate to certain targeted parties. Or, Musk might privilege certain accounts, as he appears to have done for himself after firing an engineer because he felt engagement with his account was too low. Craig Holman, an expert on campaign finance issues at the watchdog group Public Citizen, told us this would probably not run afoul of federal election law unless coordination with a campaign could be proven.

10. Search and AI-generated content and material. Musk could manipulate other platform functions to favor one partisan side over another. These functions range from standard search tools to newer generative AI tools, and include automatically generated snippets as well as information in knowledge panels, the boxes that appear when a user searches for people, places, organizations, or other things in Google's Knowledge Graph. Musk's AI chatbot, Grok, has already been criticized for falsely claiming Harris is ineligible for the ballot in many states — leading five secretaries of state to send Musk a letter asking him to implement better guardrails. But this is an emerging area with little federal regulation at present. 

After the election

The above scenarios consider potential actions Musk might take before the election. Efforts to manipulate communications after an election present unique threats. In the event of Trump’s loss, all of the above techniques could be employed to boost claims of electoral improprieties—damaging public confidence, pressuring key decision makers (such as boards of election or judges), or stoking offline unrest. 

Imagine the online chaos that would follow if enough election boards refused to certify an election, leading to court challenges and recounts disrupted by violent protest. (It has happened before.) Now imagine not only a total failure to put out the fire, but a decision at the top of X to fan the flames. Given Musk’s brazen willingness to spread false claims (and even manipulated media), many worry about what he will be willing to say or do if Trump loses the election and tries extra-legal means to overturn the results.

There is a very real risk that Musk would aid Trump's efforts to subvert the election outcome. Potential whistleblowers inside Musk's X may be the best defense – especially if they act in real time – against such anti-democratic behavior. Taking action as a whistleblower carries considerable risks. But even if there are few legal consequences for Musk or X, public controversy might alter their behavior in important ways – or encourage policymakers to pursue reforms. 

Congress should act—but barring that, whistleblowers may be the best hope

If legislators worry about the political influence of billionaires — as both sides of the aisle now seem to — the above scenarios offer a guide to curtailing it. In the era of Citizens United and a gridlocked, anemic FEC, campaign finance reform is an obvious place to start. That reform should include penalties capable of meaningfully deterring deep-pocketed actors like Musk from unlawful behavior. Clearer laws governing incitement to violence, voter intimidation, and interference in election proceedings would be a meaningful step. Congress also has yet to weigh in on the ongoing legal fights about Section 230, content moderation, and the editorial discretion of social media companies, preferring to let the courts sort it out. 

Another area where Congress could act, but has not, is transparency legislation, which would require platforms to share data with researchers about their content recommendation and advertising systems. Such legislation would empower watchdogs in both civil society and government. State legislatures are to be commended for acting on synthetic media in political campaign contexts, but that is not a substitute for federal regulation. State bills also pose a variety of First Amendment questions about satire and other protected speech that Congress should consider more carefully. 

In the meantime, concerned parties within X should not hold their breath waiting for legislative action. If X engages in clandestine actions to the benefit of a political candidate, whistleblowers may be the best hope for accountability.

IMAGE: Elon Musk, left center, and Wendell P. Weeks, right center, listen to President Donald Trump, right, as he meets with business leaders at the White House on Monday January 23, 2017 in Washington, DC. (Photo by Matt McClain/The Washington Post via Getty Images)

The post Musk, X, and Trump 2024: Where are the Legal and Ethical Boundaries? appeared first on Just Security.

Durov, Musk, and Zuckerberg: Tech Oligarchs Cry Censorship and What It All Means https://www.justsecurity.org/99626/durov-musk-zuckerberg-censorship/?utm_source=rss&utm_medium=rss&utm_campaign=durov-musk-zuckerberg-censorship Fri, 30 Aug 2024 12:50:02 +0000 https://www.justsecurity.org/?p=99626 "It is important for the heads of social media companies to demand fair and legitimate boundaries are set. The current strategy by some of them seems more like the tale of the boy who cried wolf."

The post Durov, Musk, and Zuckerberg: Tech Oligarchs Cry Censorship and What It All Means appeared first on Just Security.

Co-published with Tech Policy Press.

If you happen to be someone who studies the relationship between tech oligarchs and state power, the past few days of news events have provided you with much to consider:

  • On Saturday, French authorities detained Pavel Durov, the Russian-born billionaire who is cofounder and CEO of Telegram, as part of an investigation opened in July into the moderation of alleged criminal activity on the platform.
  • On Monday, Meta founder and CEO Mark Zuckerberg sent a letter to House Judiciary Committee Chairman Jim Jordan (R-OH), in which Zuckerberg expressed regret about how his company responded to “repeated[] pressure” from the US government on COVID-19 content moderation.
  • And on Wednesday, Brazil’s Supreme Court gave the billionaire Elon Musk 24 hours to name a legal representative for X in Brazil or face a ban of the platform in the country.

Beyond the simple fact that these stories represent three significant developments in billionaire operators of social media and messaging applications encountering elements of the state, these three touch points also signal how their commitment to the so-called "marketplace of ideas" may increasingly be tested, especially when it comes to hosting harmful content. These developments also show the extent to which these billionaires all employ the concept of "censorship" in different ways to defend themselves against claims regarding their business practices and content moderation decisions. In fashioning themselves as proponents of free speech, these oligarchs argue they must not be held to account by agents of democratically elected governments. Of course, there is a line at which government action may well become censorship that improperly intrudes upon the companies and, by extension, upon the rights of their users to communicate and receive information freely. So, what then might the week's head-turning moments tell us about the future of the relationship between social media and democracy?

Cries of Censorship as Get-Out-Of-Jail Free Card

Of course, the circumstances are quite different in each case. One way to think of these men is on a spectrum when it comes to their rhetoric and the conduct of their companies regarding “free speech.”

  • Telegram’s Durov is perhaps on the most extreme end of the spectrum, given the platform is widely understood to engage in next to no content moderation in its public channels. The French indictment says his behavior crossed various lines, including by permitting Telegram to trade in child pornography and doing business with organized criminals. His lawyers say the allegations against him are “absurd.” But even staunch defenders of laws that protect platform owners from liability are not convinced. Daphne Keller, director of the Program on Platform Regulation at the Stanford Cyber Policy Center, said in a post on LinkedIn that this may simply not be about speech. “I am usually one of the people making noise about free expression consequences when lawmakers go overboard regulating platforms. Possibly this will turn out to be one of those cases. But so far, I don’t think so.” She explained, if “Telegram fails to remove things like unencrypted CSAM [child sexual abuse material] or accounts of legally designated terrorist organizations even when notified[, t]hat could make a platform liable in most legal systems, including ours.”
  • Musk believes himself to be a free speech champion – a duplicitous assertion given, among other things, his prolific use of litigation to chill the speech of his critics. He has greatly reduced the barriers to a variety of forms of speech on X that were once considered violations of the platform's policies, and has reinstated the accounts of various right-wing and white supremacist figures, moves that can't be understood apart from his personal efforts to promote far-right politics around the world. His conduct in Brazil includes refusing to cooperate with an investigation of individuals who staged the January 2023 riot in the seat of government in Brasilia. While the situation in Brazil is not clear cut, Musk is responsible for the trouble, at least according to UC Irvine law professor and former UN Special Rapporteur David Kaye. "Musk has only himself to blame for this mess, which will harm Brazilian[s] who still use X/Twitter for public discourse," he wrote on the social media platform Bluesky.
  • Zuckerberg, who oversees arguably the largest content moderation operation in the world outside of China, occupies a less radical position, even if he appears to be moving closer to the other two than he has been in recent years. His letter to Rep. Jordan this week indicates he regrets taking certain moderation decisions that were consistent with admonitions of the US government, even as he claims those decisions were his own. Zuckerberg’s missive amounts to a series of concessions to a Congressman who himself orchestrated a disinformation campaign intended to undermine the result of the 2020 US presidential election and has used his office in the intervening years to build the case that Republicans are the victims of state censorship. As TechDirt editor Mike Masnick pointed out, what Zuckerberg says in the letter “misleadingly plays into Jordan’s mendaciously misleading campaign.” Notably, Musk was also quick to embrace Zuckerberg’s letter under the banner of First Amendment rights.
    Zuckerberg may believe that his letter will reset relations with Republicans as he sets a new standard for where he intends to draw the line when it comes to engaging with governments on issues of “public discourse and public safety.” And he may believe it will placate former President and 2024 Republican nominee Donald Trump, who according to Politico writes in a new book that Zuckerberg plotted against him in the 2020 election, in part through his philanthropic support for election infrastructure. Politico says Trump writes that Zuckerberg “would ‘spend the rest of his life in prison’ if he did it again.”

In all three of these instances, the oligarchs appear to believe they should have been the ones to set the rules, and that governments should back off. And in all three cases, free speech absolutists and far-right activists have cheered or defended the oligarchs, even if the underlying facts are not precisely in their favor. But these incidents, occurring in such close proximity to one another, suggest the fault lines between states and tech platforms are trembling. Who are the legitimate champions of democracy – the politicians and officials installed through elections, or the tech oligarchs who claim to defend free speech? What measures may even democratic governments not implement out of legitimate concerns for freedom of speech and the right to association?

The oligarchs have a vested interest in promoting the illusion of a marketplace of ideas. Politically, they can position themselves as champions of free speech, and use that position to defend against claims a government or a prosecutor might bring against them. But this power struggle is fraught with risks. Governments around the world are introducing a raft of new regulations meant to curb various harms associated with social media and messaging platforms. As these policies take shape, they may lead to even more confrontations with the oligarchs and social media platforms. This will have significant implications for free expression, particularly in regions where democracy is weak or nonexistent. Whether states prove effective at policing the platforms and the oligarchs who own and operate them will determine the balance of power between tech and democracies going forward. There are important lines to draw that respect free speech but guard against social harms. It is important for the heads of social media companies to demand that fair and legitimate boundaries be set. The current strategy by some of them seems more like the tale of the boy who cried wolf.

Photo credit: Pavel Durov (Steve Jennings/Getty Images for Tech Crunch); Elon Musk (EU News); Mark Zuckerberg (Justin Sullivan/Getty Images)

Dept of Justice Promises to Declassify Standard Operating Procedure for Coordinating with Social Media Platforms https://www.justsecurity.org/98164/fbi-social-media-foreign-influence/?utm_source=rss&utm_medium=rss&utm_campaign=fbi-social-media-foreign-influence Fri, 26 Jul 2024 14:00:02 +0000 https://www.justsecurity.org/?p=98164 Department of Justice set to release declassified Standard Operating Procedure for coordinating with social media platforms on foreign malign influence and First Amendment.

The post Dept of Justice Promises to Declassify Standard Operating Procedure for Coordinating with Social Media Platforms appeared first on Just Security.

Co-published with Tech Policy Press.

After the US Supreme Court punted, just last month, on First Amendment questions concerning how U.S. law enforcement agencies should interact with social media companies, the Department of Justice is now days away from declassifying its operating procedures for handling such matters. That outcome is the result of a multi-year investigation by the Department’s Office of Inspector General (OIG), which released a report earlier this week.

Background

In late June, the Supreme Court ruled in favor of the Biden administration in Murthy v Missouri, a case that concerned whether the federal government violated the First Amendment rights of citizens by allegedly coordinating with social media platforms to remove disfavored speech. In a 6-3 ruling, the Supreme Court reversed the decision by the Fifth Circuit, finding that the plaintiffs did not have standing to bring the case, since the evidence failed to connect any specific moderation decision by the platforms to inappropriate government influence.

The case’s path to the Supreme Court started on May 5, 2022, when then-Missouri Attorney General Eric Schmitt filed a lawsuit accusing Biden administration officials of either coercing or colluding with tech companies in a “coordinated campaign” to remove disfavored content. Among the defendants named in the suit were the Section Chief of the Federal Bureau of Investigation’s Foreign Influence Task Force (FITF), and a special agent in the San Francisco division of the FBI who regularly interacted with social media platforms. The suit alleged that “communications” that influenced Meta’s decision to temporarily limit the propagation of a story about Hunter Biden’s laptop ahead of the 2020 election emerged from the FITF.

The allegations in the suit sparked a wave of congressional investigations, and while its most lurid claims turned out to be flimsy or bogus on inspection, fair-minded experts generally agree that there should be clearer rules about how social media platforms interact with the federal government, particularly when it comes to law enforcement, security, and intelligence agencies. Missouri’s Attorney General was likely unaware at the time he filed his suit that the Justice Department’s Inspector General had already initiated a probe to look into the subject. This week, the OIG published the result of its effort in a report titled, “Evaluation of the U.S. Department of Justice’s Efforts to Coordinate Information Sharing About Foreign Malign Influence Threats to U.S. Elections.”

The Next Phase

The report says that OIG’s goal was to assess the effectiveness of the Justice Department’s system for sharing information related to “foreign malign influence directed at U.S. elections,” to evaluate “the Department’s oversight and management of its response,” and to help streamline or otherwise improve the department’s processes. The evaluation, which included fieldwork that commenced in October 2021 and concluded in April 2023, focused on “information sharing with social media companies to evaluate the aspect of the Department’s information-sharing system that the FITF developed following foreign malign influence directed at the 2016 U.S. presidential election.” During that election cycle, a substantial campaign by the Russian government gave rise to concerns over social media platforms as a vector for foreign influence.

The OIG report found that while the FBI had developed an “intelligence sharing model” (depicted in a graphic reproduced below) involving various actors in US law enforcement and intelligence communities and social media companies, there was in fact a significant gap in policies and guidance governing interactions between the government and the platforms. Until February 2024, when the OIG report says the Department of Justice issued a new standard operating procedure (SOP), there was no formal policy, which the OIG says created potential risks such as the perception of coercion or undue influence over private entities and, in turn, the speech of American citizens. The SOP, which is contained in a classified document, sets criteria for determining what constitutes foreign malign influence, lays out supervisory approval requirements for investigating it, and provides standard language for disclosures and guidance on engagement with social media companies, according to the OIG report.

Source: Department of Justice Office of the Inspector General

While the OIG says that the operating procedure is an improvement, it notes the information’s classified nature is a drawback. “Because DOJ’s credibility and reputation are potentially impaired when its activities are not well understood by the public, we recommend that the Department identify a way that it can inform the public about the procedures it has put into place to transmit foreign malign influence threat information to social media companies in a manner that is protective of First Amendment rights,” the report says.

The report also notes that DOJ lacks “a comprehensive strategy guiding its approach to engagement with social media companies on foreign malign influence directed at U.S. elections,” a circumstance that resulted “in varied approaches to its information-sharing relationships with social media companies depending on where those companies were based,” a particular problem when firms are foreign-owned or located outside of the US.

While the OIG found that the FBI’s model for tracking foreign influence and sharing that information with social media companies primarily relied on identifying foreign actors rather than monitoring for specific types of content, the report does note that there were instances where the FBI did share “content” information, such as specific posts, with social media companies in order to alert them to specific themes or narratives that foreign actors intended to promote. This is a particularly fraught area, as it opens the door to the types of risks that were at the core of the original complaint in Murthy v Missouri.

The OIG report makes two recommendations “to address risks in DOJ’s mission to combat foreign malign influence directed at U.S. elections,” including that the department:

1. Develop an approach for informing the public about the procedures the Department has put into place to transmit foreign malign influence threat information to social media companies that is protective of First Amendment rights.

2. Develop and implement a comprehensive strategy to ensure that the Department of Justice’s approach to information sharing with social media companies to combat foreign malign influence directed at U.S. elections can adapt to address the evolving threat landscape.

The DOJ’s response to the OIG report, located in an appendix, notes that the department concurs with the first recommendation, and plans to address it “by making publicly available a detailed summary version of the SOP and posting that summary on DOJ’s website by July 31, 2024.” The response notes that the SOP emerged from the development of “a standardized approach” to sharing information on foreign influence operations that kicked off after the Supreme Court issued a stay of the Fifth Circuit injunction that temporarily prohibited contact between agencies such as the FBI and social media firms.

DOJ also concurred with the second OIG recommendation, and plans to address it with “additional actions by August 31, 2024,” including:

Development and Release of Strategic Principles. DOJ will set forth in a public manner the principles reflecting DOJ’s strategy for sharing FMI information with social media companies to combat the evolving threat landscape.

Resumption of FBI’s Regular Meetings with Social Media Companies. As part of that strategy, FBI will resume regular meetings in the coming weeks with social media companies to brief and discuss potential FMI threats involving the companies’ platforms. FBI will conduct these meetings – as FBI did before pausing the meetings in summer 2023 due to the now-vacated Missouri injunction (see infra at 5) – in a manner that is entirely consistent with applicable First Amendment principles. As has been FBI’s practice, FBI will conduct these engagements with social media companies located across the country, depending on the circumstances and nature of the potential threats.

Outreach by FBI Field Offices to Social Media Companies. FBI will instruct FBI Field Offices in the coming weeks to conduct outreach – in coordination with the FBI’s Foreign Influence Task Force (FITF) – to any identified social media companies located in their areas of responsibility, to develop contacts at those companies and ensure they are aware of the SOP and DOJ’s overall approach for engaging with social media companies regarding FMI threat information.

Engagements by Senior Officials. In the coming months, senior DOJ officials will highlight and explain, during public engagements with relevant stakeholders and the public, DOJ’s strategy for information sharing with social media companies to combat FMI directed at our elections.

Launch of DOJ Website Page. DOJ plans to launch a new section on its website dedicated to ensuring public awareness of DOJ’s strategy for engaging with social media companies regarding FMI. The website page will collect and highlight in a single location relevant resources, guidance, and other materials, including the summary of the SOP discussed above.

The OIG report notes that the relationship between the FBI and social media companies has been successful in disrupting multiple campaigns by foreign actors to interfere in US elections. Whether the DOJ, FBI, and other security and intelligence agencies can effectively quell public fears not about foreign interference but rather about their own government’s interference in elections is an open question; certainly, these recommendations and the DOJ’s responsiveness to them should help.

Tech Platforms Must Do More to Avoid Contributing to Potential Political Violence https://www.justsecurity.org/95998/tech-platforms-political-violence/?utm_source=rss&utm_medium=rss&utm_campaign=tech-platforms-political-violence Wed, 22 May 2024 14:00:00 +0000 https://www.justsecurity.org/?p=95998 A new report recommends tech platforms use content moderation, transparency, and consistency to avoid the threat of political violence.

The post Tech Platforms Must Do More to Avoid Contributing to Potential Political Violence appeared first on Just Security.

This essay is co-published with Tech Policy Press.

At the end of March, we convened a working group of experts on social media, election integrity, extremism, and political violence to discuss the relationship between online platforms and election-related political violence. The goal was to provide realistic and effective recommendations to platforms on steps they can take to ensure their products do not contribute to the potential for political violence, particularly in the lead-up to and aftermath of the U.S. general election in November, but with implications for states around the world. 

Today, we released a paper that represents the consensus of the working group titled “Preventing Tech-Fueled Political Violence: What online platforms can do to ensure they do not contribute to election-related violence.” Given the current threat landscape in the United States, we believe this issue is urgent. While relying on online platforms to “do the right thing” without the proper regulatory and business incentives in place may seem increasingly futile, we believe there remains a critical role for independent experts to play in both shaping the public conversation and shining a light on where these companies can act more responsibly. 

Indications of potential political violence mount

The January 6th, 2021, attack on the U.S. Capitol looms large over the 2024 election cycle. Former President Donald Trump and many Republican political elites continue to advance false claims about the outcome of the 2020 election, a potential predicate to efforts to delegitimize the outcome of the vote this November. 

Yet such rhetoric is but one potential catalyst for political violence in the United States this political season. In a feature on the subject this month, The New York Times noted that across the country, “a steady undercurrent of violence and physical risk has become a new normal,” particularly targeting public officials and democratic institutions. A survey from the Brennan Center conducted this spring found that 38% of election officials have experienced violent threats. And to this already menacing environment, add conflict over Israel-Gaza protests on college campuses and in major cities, potentially controversial developments in the various trials of the former president, and warnings from the FBI and the Department of Homeland Security about potential threats to LGBTQ+ Pride events this summer. It would appear that the likelihood of political violence in the United States is, unfortunately, elevated.

The neglect of tech platforms may exacerbate the situation

What role do online platforms play in this threat environment? It is unclear if the major platforms are prepared to meet the moment. A number of platforms have rolled back moderation policies on false claims of electoral fraud, gutted trust and safety teams, and appear to be sleepwalking into a rising tide of threats to judges and election officials. These developments suggest the platforms have ignored the lessons of the last few years, both in the United States and abroad. For instance, in January 2023, supporters of Brazil’s outgoing president Jair Bolsonaro used social media to organize and mobilize attacks on government buildings. And an American Progress study of the 2022 U.S. midterm elections concluded that “social media companies have again refused to grapple with their complicity in fueling hate and informational disorder…with key exceptions, companies have again offered cosmetic changes and empty promises not backed up by appropriate staffing or resources.”

Platforms’ failure to prepare for election violence suggests that in many ways, 2024 mirrors 2020. In advance of that election, two of the authors (Eisenstat and Kreiss) convened a working group of experts to lay out what platforms needed to do to protect elections. Sadly, platforms largely ignored these and many other recommendations from independent researchers and civil society groups, including enforcing voting misinformation restrictions against all users (including political leaders), clearly refuting election disinformation, and amplifying reliable electoral information. The failure of platforms to adequately follow such recommendations helped create the context for January 6th, as documented by the draft report on the role of social media in the assault on the Capitol prepared by an investigative team of the House Select Committee on the January 6 Attacks. 

Recommendations

To avoid a similar outcome, we propose a number of steps the platforms can, and should, take if they want to ensure they do not fuel political violence. None of the recommendations are entirely novel. In fact, a number of them are congruent with any number of papers that academics and civil society leaders have published over the years. And yet, they bear repeating, even though time is short to implement them. 

The full set of seven recommendations and details can be found in our report, but in general they center on a number of themes where online platforms are currently falling short, including:

  • Platforms must develop robust standards for threat assessment and engage in scenario planning, crisis training, and engagement with external stakeholders, with as much transparency as possible.
  • Platforms should enforce clear and actionable content moderation policies that address election integrity year-round, proactively addressing election denialism and potential threats against election workers.
  • Politicians and other political influencers should not receive exemptions from content policies or special treatment from the platforms. Platforms should enforce their rules uniformly.
  • Platforms must clearly explain important content moderation decisions during election periods, ensuring transparency especially when it comes to the moderation of high profile accounts.

This election cycle, so much of the conversation about tech accountability has moved on to what to do about deceptive uses of AI. But the distribution channels for AI-generated content still run largely through the online platforms where users spread the “Stop the Steal” narrative in 2020 and galvanized the people who ultimately engaged in political violence at the U.S. Capitol. We will continue to draw attention to these unresolved issues, in the hope that rising demands for accountability will prompt platforms to act more responsibly and prioritize the risk of political violence both in the United States and abroad.

Is Generative AI the Answer for the Failures of Content Moderation? https://www.justsecurity.org/94118/is-generative-ai-the-answer-for-the-failures-of-content-moderation/?utm_source=rss&utm_medium=rss&utm_campaign=is-generative-ai-the-answer-for-the-failures-of-content-moderation Wed, 03 Apr 2024 13:00:44 +0000 https://www.justsecurity.org/?p=94118 Companies ought to proceed cautiously and with transparency if they use generative AI for content moderation.

The post Is Generative AI the Answer for the Failures of Content Moderation? appeared first on Just Security.

[Editor’s Note: This article is based partly on the event, “Moderating AI and Moderating with AI,” that Harvard University’s Berkman Klein Center for Internet & Society held on March 20, 2024 as part of its Rebooting Social Media Speaker Series]. 

As head of content policy at Facebook more than a decade ago, Dave Willner helped invent the moderation of content for social media platforms. He has served in senior trust and safety roles at AirBnB, Otter, and, most recently, OpenAI. On March 20, he visited Harvard University to talk about a bold new development in the field of content moderation.

From its inception, content moderation has been fraught with policy and operational challenges. In 2010, Willner and about a dozen Facebook colleagues were following a one-page checklist of forbidden material as they manually deleted posts celebrating Hitler and displaying naked people. But the platform had 100 million users at the time and was growing exponentially. Willner’s tiny team wasn’t keeping up with the rising tide of spam, pornography, and hate speech. So he began drafting “community standards” that eventually would comprise hundreds of pages of guidelines enforced by thousands of moderators, most of whom were outsourced employees of third-party vendors. Over time, Facebook also introduced machine learning classifiers to automatically filter out certain disfavored content. Other major platforms followed suit.

But nearly 15 years later, Willner told his audience at Harvard’s Berkman Klein Center for Internet & Society, it is clear that content moderation just “doesn’t work very well”—a conclusion confirmed by the persistent, bipartisan, society-wide complaints about the high volume of dreck that appears on platforms such as Facebook, Instagram, TikTok, YouTube, and X. Outsourced human moderators are ineffective, Willner said, because they typically are poorly paid, inadequately trained, and traumatized by exposure to the worst that the internet has to offer. The current generation of automated systems, he added, mimic human failure to appreciate nuance, irony, and context.

A New Hall Monitor for the Internet

In his remarks, Willner proposed a potential technological solution: using generative artificial intelligence to identify and remove unwanted material from social media platforms and other kinds of websites. In short, ChatGPT as a hall monitor for the internet.

Willner is not merely spitballing. For the past two-and-a-half years, he has worked full-time or as a contractor on trust and safety policy for ChatGPT’s creator, OpenAI. Fueled by billions of dollars in investment capital from Microsoft, OpenAI is at the center of the commercial and popular furor over technology that can generate uncannily human-like text, audio, and imagery based on simple natural-language prompts.

OpenAI itself has made no secret of its ambition to sell its wares for labor- and cost-saving content moderation. In a corporate blog post last August, the company boasted that GPT-4 can effectively handle “content policy development and content moderation decisions, enabling more consistent labeling, a faster feedback loop for policy refinement, and less involvement from human moderators.” OpenAI suggested that its technology represents a big leap beyond what existing machine learning classifiers can accomplish. The promise is that moderation systems powered by large language models (LLMs) can be more versatile and effective than ones utilizing prior generations of machine learning technology, potentially doing more of the work that has to date fallen to human moderators.

Just a few weeks after OpenAI’s blog post, in September 2023, Amazon Web Services posted its own notice on how to “build a generative AI-based content moderation solution” using its SageMaker JumpStart machine learning hub. Microsoft, meanwhile, says that it is investigating the use of LLMs “to build more robust hate speech detection tools.” A startup called SafetyKit promises potential customers that they will be able to “define what you do and don’t want on your platform and have AI models execute those policies with better-than-human precision and speed.” And a relatively new social media platform called Spill says that it employs a content moderation tool powered by an LLM trained with Black, LGBTQ+, and other marginalized people in mind.

For his part, Willner is working with Samidh Chakrabarti, who formerly led Facebook/Meta’s Civic Integrity product team, on how to build smaller, less expensive AI models for content moderation. In a jointly authored article for Tech Policy Press published in January 2024, Willner and Chakrabarti, now nonresident fellows at the Stanford Cyber Policy Center, wrote that “large language models have the potential to revolutionize the economics of content moderation.”

The Promise and Peril of Using AI for Content Moderation

It is not difficult to understand why social media companies might welcome the “revolutionary” impact—read: cost savings—of using generative AI for content moderation. Once fine-tuned for the task, LLMs would be far less expensive to deploy and oversee than armies of human content reviewers. Users’ posts could be fed into an LLM-based system trained on the content policies of the platform in question. The AI-powered system almost instantaneously would determine whether a post passed muster or violated a policy and therefore needed to be blocked. In theory, this new automated approach would apply policies more accurately and consistently than human employees, making content moderation more effective.
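To make that workflow concrete, here is a minimal sketch of what such a check could look like, written in Python against OpenAI's publicly documented chat completions API. The policy text, labels, and model name are placeholders of our own, not any platform's actual configuration, and a production system would add sampling for human review, appeals, and rate limiting.

# Minimal sketch of an LLM-based moderation check (illustrative only).
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment;
# the policy text, labels, and model choice are placeholders, not any platform's real setup.
from openai import OpenAI

client = OpenAI()

POLICY = """Remove posts that contain:
1. Threats or incitement of violence against any person or group.
2. Targeted harassment or hate speech.
3. Demonstrably false claims about how, when, or where to vote."""

def moderate(post_text: str, policy: str = POLICY, model: str = "gpt-4o-mini") -> str:
    """Return 'ALLOW' or 'REMOVE' for a single post, per the supplied policy."""
    response = client.chat.completions.create(
        model=model,
        temperature=0,  # keep outputs as consistent as possible across identical posts
        messages=[
            {"role": "system",
             "content": f"You are a content moderator. Apply this policy:\n{policy}\n"
                        "Reply with exactly one word: ALLOW or REMOVE."},
            {"role": "user", "content": post_text},
        ],
    )
    verdict = (response.choices[0].message.content or "").strip().upper()
    return "REMOVE" if verdict.startswith("REMOVE") else "ALLOW"

print(moderate("Everyone knows you can vote by text message this year."))

Because the policy travels with each request as plain text, the same function could in principle enforce different rules for different products or jurisdictions without retraining anything.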

In the long run, replacing some or most outsourced human moderators might also benefit the moderators themselves. As journalists and civil society researchers have observed, content moderation is an inherently stressful occupation, one that in many cases leaves workers psychologically damaged. Social media companies have exacerbated this problem by outsourcing moderation to third-party vendors that are incentivized to keep a tight lid on pay and benefits, leading to haphazard supervision, burnout, and high turnover rates.

Social media companies could have addressed this situation by directly employing human moderators, paying them more generously, providing high-quality counseling and healthcare, and offering them potential long-term career paths. But the companies have shirked this responsibility for 15 years and are unlikely to change course. From this perspective, simply eliminating the job category of outsourced content reviewer in favor of LLM-based moderation has a certain appeal.

Unfortunately, it’s not that simple. LLMs achieve their eerie conversational abilities after being trained on vast repositories of content scraped from the internet. But training data contain not only the supposed wisdom of the crowd, but also the falsehoods, biases, and hatred for which the internet is notorious. To cull the unsavory stuff, LLM developers have to build separate AI-powered toxicity detectors, which are integrated into generative AI systems and are supposed to eliminate the most malignant material. The toxicity detectors, of course, have to learn what to filter out, and that requires human beings to label countless examples of deleterious content as worthy of exclusion.

In January 2023, TIME published an expose of how OpenAI used an outsourced-labor firm in Kenya to label tens of thousands of pieces of content, some of which described graphic child abuse, bestiality, murder, suicide, and torture. So, LLMs will not necessarily sanitize content moderation to the degree that their most avid enthusiasts imply. OpenAI did not respond to requests for an interview.

Then there are questions about just how brilliant LLMs are at picking out noxious content. To its credit, OpenAI included a bar chart with its August 2023 blog post that compared GPT-4’s performance to humans on identifying content from categories such as “sexual/minors,” “hate/threatening,” “self-harm,” and “violence/graphic.” The upshot, according to OpenAI, was that GPT-4 performed similarly to humans with “light training,” but models “are still overperformed by experienced, well-trained human moderators.” This concession implies that if social media companies are seeking the best possible content moderation, they ought to acknowledge the full cost of doing business responsibly and cultivate loyal, knowledgeable employees who stick around.

Explanations and Audit Trails

There are some tasks at which existing moderation systems usually fail but at which conversational LLMs likely would excel. Most social media platforms do a bad job of explaining why they have removed a user’s post—or don’t even bother to offer an explanation. LLM-based systems, by contrast, should have the capacity to show their homework, meaning they can be queried as to how and why they reached a certain conclusion. This ability, if activated by platforms, would allow outsiders to audit content moderation activity far more readily.
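As a rough illustration of what "showing the homework" could look like, the sketch below (again our own construction, using the same OpenAI API and placeholder policy as above) asks the model to return a structured verdict with the rule it relied on and a one-sentence rationale, and appends each decision to an audit log that a platform could choose to expose to users, regulators, or researchers. The JSON schema and field names are invented for this example.

# Illustrative sketch: ask the model for a decision plus its reasoning, then log it.
# The schema, field names, and prompt wording are hypothetical, not any platform's real format.
import json
from datetime import datetime, timezone
from openai import OpenAI

client = OpenAI()

def moderate_with_explanation(post_text: str, policy: str, model: str = "gpt-4o-mini") -> dict:
    """Return a dict like {'decision': ..., 'violated_rule': ..., 'rationale': ...}."""
    response = client.chat.completions.create(
        model=model,
        temperature=0,
        messages=[
            {"role": "system",
             "content": f"You are a content moderator. Apply this policy:\n{policy}\n"
                        'Respond only with JSON: {"decision": "ALLOW" or "REMOVE", '
                        '"violated_rule": <rule number or null>, "rationale": <one sentence>}'},
            {"role": "user", "content": post_text},
        ],
    )
    raw = response.choices[0].message.content or ""
    try:
        verdict = json.loads(raw)
    except json.JSONDecodeError:
        # Models do not always return valid JSON; route these cases to a human reviewer.
        verdict = {"decision": "NEEDS_HUMAN_REVIEW", "violated_rule": None, "rationale": raw}

    # Append to an audit trail that outside auditors could later inspect.
    verdict["timestamp"] = datetime.now(timezone.utc).isoformat()
    with open("moderation_audit_log.jsonl", "a") as log:
        log.write(json.dumps({"post": post_text, **verdict}) + "\n")
    return verdict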

(As an aside, such a facility for explanation might undercut a legal argument that social media companies are making in opposition to state legislative requirements that they provide individualized explanations of all moderation decisions. The companies’ claim that these requirements are “unduly burdensome,” and therefore unconstitutional, would seem far less credible if LLMs could readily supply such explanations.)

It’s possible that using LLMs would even allow platforms to select from a wider range of responses to problematic content, short of spiking a post altogether. One new option could be for an LLM-based moderation system to prompt a user to reformulate a post before it is made public, so that the item complies with the platform’s content policies.

Currently, when a platform updates its policies, the process can take months as human moderators are retrained to enforce amended or brand-new guidelines. Realigning existing machine learning classifiers is also an arduous process, Willner told his audience at Harvard. In contrast, LLMs that are properly instructed can learn and apply new rules rapidly—a valuable attribute, especially in an emergency situation such as a public health crisis or the aftermath of an election in which the loser refuses to concede defeat.
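The sketch below, again purely illustrative, shows why: in an LLM-based setup the policy is just text supplied at request time, so an emergency rule can be appended and take effect on the very next post evaluated, with no new training data, no classifier retraining, and no mass re-briefing of human reviewers. The rule text here is our own hypothetical example.

# Illustrative only: with an LLM-based system, a policy change is a text edit, not a retraining run.
BASE_POLICY = """Remove posts that contain:
1. Threats or incitement of violence against any person or group.
2. Targeted harassment or hate speech.
3. Demonstrably false claims about how, when, or where to vote."""

# Hypothetical emergency amendment added during a fast-moving crisis,
# for example, organized calls to disrupt vote counting after a disputed election.
EMERGENCY_RULE = "4. Calls to obstruct, storm, or 'shut down' vote-counting facilities."

def current_policy(emergency: bool) -> str:
    """Compose the policy prompt; flipping `emergency` changes enforcement immediately."""
    return BASE_POLICY + ("\n" + EMERGENCY_RULE if emergency else "")

# Every subsequent call to a moderation function like moderate() in the earlier sketch
# would apply the amended policy, with no model update required.
print(current_policy(emergency=True))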

“Hallucination” and Suppression

There are other risks to be weighed, including that LLMs used for moderation may turn out to be factually unreliable or perhaps too reliable. As widely reported, LLMs sometimes make things up, or “hallucinate.” One of the technology’s unnerving features is that even its designers don’t understand exactly why this happens—a problem referred to as an “interpretability” deficit. Whether AI researchers can iron out the hallucination wrinkle will help determine whether LLMs can become a trusted tool for moderation.

On the other hand, LLMs could in a sense become too reliable, providing a mechanism for suppression masquerading as moderation. Imagine what governments prone to shutting down unwanted online voices, such as the ruling Hindu nationalist Bharatiya Janata Party in India, might do with a faster, more efficient method for muffling Muslim dissenters. Censors in China, Russia, and Iran would also presumably have a field day.

The implications for use of generative AI in content moderation cut in conflicting directions. Because of the possibility of cost savings, most social media companies will almost certainly explore the new approach, as will many other companies that host user-generated content and dialogue. But having employed highly flawed methods for content moderation to date, these companies ought to proceed cautiously and with transparency if they hope to avoid stirring even more resentment and suspicion about the troubled practice of filtering what social media users see in their feeds.

IMAGE: Visualization of content moderation (via Getty Images)
