Michael Karanicolas (Just Security)

The Digital Divide Meets the Quantum Divide
May 1, 2025

Governments should pursue policies that aim to make the benefits of quantum technologies accessible to all, not just the Global North.

For decades, critics have argued that intellectual property rules, particularly in patent law, have entrenched global inequality by ensuring the fruits of technological development remain concentrated in wealthy countries. This has slowed, and in some cases stalled, growth in the Global South. Similar concerns are now emerging with the rise of the AI economy, where first movers are setting governance standards to suit their own interests, often at the expense of poorer nations.

As countries in the Global North race toward quantum supremacy, this familiar pattern of technological gatekeeping is poised to repeat itself. Disparities in both access to and control over transformative technologies are likely to widen, driven by the national security implications of quantum advancements. Already, we are seeing restrictive export controls and competitive research initiatives designed to hoard these technologies among the countries developing them. The Global South risks exclusion—not only from the technological and economic benefits of quantum innovation, but also from the enhanced security protections it promises. These emerging silos of privileged access to quantum technology echo problematic trends from previous technological regimes but also introduce new challenges, particularly given quantum’s potential to undermine existing digital infrastructure.

Entrenching Global Divisions 

While the practical applications of quantum technologies are mostly still on the horizon, the growing global quantum divide is already becoming apparent. Export controls for quantum-relevant technologies have expanded significantly over the last five years. The United States has taken an increasingly aggressive approach to protecting its quantum research, beginning with targeted bans on eight Chinese companies in 2021, then issuing comprehensive export license requirements for quantum hardware, software, and complete systems in late 2024. In 2021, the European Union (EU) established a unified framework limiting exports of dual-use technologies, with France, Spain, and the Netherlands implementing specific controls on functional qubits (basic units of quantum computing that can maintain quantum properties long enough to perform reliable calculations) and banning exports of systems exceeding 34 qubits. China has implemented its own protective measures, adding quantum encryption technology and ultra-low temperature technology (essential for superconducting quantum computers) to its restricted exports list in 2020.
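
One way to build intuition for why these controls key on qubit counts in the mid-30s: the memory required to simulate a quantum state on classical hardware doubles with every added qubit, and at roughly 34 qubits full simulation outruns commodity machines. The sketch below is our own back-of-envelope illustration of that scaling, not an official rationale for any of the thresholds above:

```python
# Simulating an n-qubit state classically means storing 2**n complex
# amplitudes. At 16 bytes per amplitude (two 64-bit floats), memory
# demand roughly doubles with each additional qubit.

def sim_memory_gib(n_qubits: int, bytes_per_amplitude: int = 16) -> float:
    """Memory (in GiB) needed to hold the full state vector of n qubits."""
    return (2 ** n_qubits) * bytes_per_amplitude / 2 ** 30

for n in (20, 30, 34, 40):
    print(f"{n} qubits -> {sim_memory_gib(n):,.2f} GiB")

# 20 qubits -> 0.02 GiB        (trivial on a laptop)
# 30 qubits -> 16.00 GiB       (a well-equipped workstation)
# 34 qubits -> 256.00 GiB      (beyond commodity hardware)
# 40 qubits -> 16,384.00 GiB   (large HPC-cluster territory)
```

Whatever the actual reasoning behind any specific threshold, this exponential scaling is why a machine even a few qubits past the classical-simulation frontier carries strategic weight.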

These technology controls create concentric circles of access: allies enjoy privileged exchange while competitors face increasing challenges in establishing domestic quantum research programs. Though these controls are often justified as a means to constrain rivals—particularly China—they also have the unintended effect of locking out researchers and institutions from the Global South.

Silos of Innovation

While technology transfer restrictions have hardened, approaches to research collaboration have grown more nuanced. Strategic knowledge-sharing frameworks now allow individual researchers greater freedom to collaborate internationally. This shift reflects practical considerations: the global quantum workforce is small, and many of its top minds are internationally mobile. Nearly half of quantum professionals in the United States are foreign nationals. The United States continues to allow information sharing with these individuals, though it mandates strict recordkeeping. It also depends heavily on European and Japanese partners for key components and materials, and around half of American-authored quantum research papers include foreign co-authors.

The European Union has followed suit, walking back earlier efforts to confine quantum research initiatives to Member States and finalizing cooperation agreements with the United Kingdom while exploring similar arrangements with Switzerland. China, by contrast, has doubled down on self-reliance, prioritizing domestic talent development and limiting international collaborations. All three approaches to quantum technology control and development raise critical questions about equitable technological access and the implications for countries left outside these privileged silos of innovation.

Restrictions on quantum-relevant technology transfers could also serve to maintain the military and intelligence dominance of advanced economies, while crippling economic development in the Global South. For example, deploying a large-scale quantum communications network—which the European Union and Canada have both expressed interest in—could prove essential to the next generation of secure infrastructure, leaving countries without access to it, or to similarly effective quantum security, at risk of attack by their geopolitical rivals, or even mercenary hackers. Likewise, quantum sensing could herald a new generation of precision design and offer a range of civil applications, from disaster monitoring to medical imaging. However, this cutting-edge innovation will inevitably produce tension between pressure to allow poor countries to access the economic and development benefits associated with quantum technologies and the desire to maintain strategic dominance, particularly over advances with civil and military applications. The clustering of quantum supply chains in the Global North compounds this problem, reinforcing the gatekeeping role of first movers.

A New Nonproliferation Framework?

This dilemma is not without historical precedent. At the dawn of the nuclear age, U.S. President Dwight Eisenhower proposed the “Atoms for Peace” initiative in 1953—offering support for peaceful nuclear development in exchange for commitments to forgo military applications. This led to the evolution of the nuclear nonproliferation regime, complete with safeguards on transferred materials and technologies.

Could a similar “Qubits for Peace” framework be devised for quantum technologies? Perhaps—but the geopolitical context is far less favorable. Today’s multipolar world makes it unlikely that a single power—or even a coalition—could dictate terms. Moreover, Global South countries may be unwilling to accept limits on their own pursuit of strategic technologies.

Even if advanced countries were willing to transfer quantum technology through such a framework, the difficulty of distinguishing civilian from military quantum applications presents a major obstacle. Cryptography and communication are particularly thorny areas: states have a collective interest in secure systems but also seek to penetrate their rivals’ networks. This mirrors longstanding debates around strong encryption, now updated for a quantum era.

In the current fractured geopolitical environment—marked by nationalism and deteriorating alliances—it is hard to imagine a broad-based commitment to equitable quantum development. The risk is that a handful of governments will monopolize secure communications, prioritizing espionage advantages over global security. If so, the Global South could be left even more vulnerable.

Missing the Quantum Revolution

Without meaningful access to quantum resources, the Global South may face a triple disadvantage. First, their security infrastructure might become increasingly vulnerable as quantum computing threatens to break existing encryption protocols. Second, their economic competitiveness might diminish as they miss opportunities to develop quantum-enhanced technologies. Third, their technological sovereignty might erode as dependence on external providers for quantum-resistant security becomes unavoidable. This could deeply exacerbate the existing digital divide and lay to rest any hope that the Global South might someday catch up to the standard of living the Global North enjoys.

Quantum advancements present significant challenges for those left behind. Unlike digital technologies, which benefited from gradual diffusion, quantum technologies threaten to create immediate security risks and expose the global digital infrastructure to unprecedented vulnerabilities while simultaneously denying Global South countries access to quantum resources. The changed approach towards research collaboration by major quantum powers could signal a potential pathway forward, with strategic knowledge-sharing frameworks broadened to include Global South interests and bridge the quantum divide. As governments pour resources into advancing these technologies, they need to consider strategies for ensuring that the benefits do not remain siloed within a handful of wealthy countries, but are accessible to all.

Governing the Quantum Revolution
May 1, 2025

Introducing a new series bringing together a range of perspectives on the policy challenges emerging from the quantum revolution.

Quantum technologies have the potential to upend the economic, social, and national security landscape. While many of their most transformative applications remain on the horizon, researchers envision breakthroughs ranging from predictive modeling systems capable of forecasting weather, traffic, and market fluctuations, to ultra-sensitive sensors revolutionizing medical diagnostics. As with all emerging technologies, breakthroughs in quantum computing also bring potential risks to manage, including the amplification of government surveillance and a range of advanced military applications. Persistent questions remain surrounding the equitable distribution of quantum’s economic and social benefits, alongside the need to foster innovation without reinforcing existing concentrations of power and wealth.

The implications of quantum technologies are deeply political. The choices made today about how to fund research, structure international and public-private cooperation, and design guardrails will shape who benefits from quantum advances and who bears the burdens. As such, quantum policy demands a broad, inclusive dialogue that engages lawmakers, civil society, and global stakeholders, alongside leading quantum scientists and engineers. Only by acting early can we ensure that the quantum revolution advances human flourishing and democratic governance, rather than exacerbating inequality or fueling authoritarian control.

This series brings together a wide range of perspectives on the policy challenges emerging from the quantum revolution, offering insights into how advances in quantum computing, networking, and sensing could reshape our world. While the contributors approach these issues from different angles, they share a common conviction: that we must confront these questions now—before the technology becomes ubiquitous—so that scholars, policymakers, civil society, and other stakeholders are better equipped to shape a more equitable and secure quantum future.

The articles have been developed for Just Security as part of a research collaboration between the Center for Quantum Networks, the Narang Lab, and Dalhousie University’s Law and Technology Institute (LATI), with support from the National Science Foundation’s Responsible Design, Development, and Deployment of Technologies program.

Together, we hope this series will advance policy discussions on quantum technologies, help prepare decision-makers for the quantum revolution, and cultivate legal and policy frameworks that harness quantum’s full potential while mitigating its risks.

Government Use of AI is Expanding: We Should Hope for the Best but Safeguard Against the Worst
June 26, 2024

Robust governance over the use of AI in the public sector requires centralized, specialized oversight of decision-making.

Editor’s Note: This article is the next installment of our Symposium on AI and Human Rights, co-hosted with the Promise Institute for Human Rights at UCLA School of Law.

Across the U.S. public sector, the AI revolution has already begun. At the Patent and Trademark Office, AI-based programs have become a critical research tool for assessing applications. The Securities and Exchange Commission and the Internal Revenue Service use AI to search for suspicious or illegal behavior. AI is routinely employed as part of post-market surveillance efforts by the Food and Drug Administration. And if you submitted a comment to the Federal Communications Commission lately, there’s a good possibility it was processed using AI.

These use cases are the tip of the iceberg. Administrative agencies at all levels of government are racing to develop and deploy new AI-based tools for everything from interfacing with the public to regulatory enforcement. While lawmakers are slowly waking up to the dangers that AI can pose, there is still little formal oversight, or even centralized tracking, for how these technologies are proliferating within the government itself. This gap is particularly concerning since, for all the talk of AI’s existential risk to humanity, its potential to chip away at our fundamental rights presents a far more realistic threat than some Terminator-style apocalypse. While there are plenty of legitimate areas of concern related to the role of Big Tech in the AI ecosystem, governments are the primary duty-bearers for guaranteeing fundamental rights, and have a responsibility to be accountable to the people they serve. There are particularly serious concerns where public agencies misuse these tools, for example, in ways that discriminate against certain individuals or restrict access to healthcare. As more and more government functions are outsourced to machines, there are troubling implications for the future of democracy.

None of this is intended to discount the potential benefits that AI can bring to the complicated challenges of governing, especially in the context of ongoing demands for administrative agencies to do more with less. Innovation in the provision of public services is something we should welcome. In some cases, government uses of AI may be necessary to keep pace with the growing technical complexity of their oversight functions. But at a time when public trust in government, and in our broader public institutions, has declined to critical levels, the potential for these systems to further erode the relationship between the people and the administrative state should be an extremely serious consideration. Developing appropriate safeguards to guide the piloting, development, and deployment of AI across the public sector, especially when these systems have the potential to impact fundamental rights, is vital.

U.S. Responses

To their credit, successive U.S. administrations have taken this challenge seriously. In January 2023, the National Institute of Standards and Technology (NIST) published its Artificial Intelligence Risk Management Framework (AI RMF), which provides a model assessment process for agencies to map potential risks, develop tracking mechanisms, and respond appropriately. In March 2024, the Office of Management and Budget (OMB) published an AI policy memo including a number of requirements and recommendations for executive branch agencies. Importantly, this includes a requirement to track and publicly report all AI use cases. The lack of any central monitoring or coordination around how these technologies are being deployed has been a serious deficiency: it is difficult to come up with a coherent public policy response to the use of AI in the public sector if nobody has a comprehensive understanding of what exactly is going on. The guidance also exempts national security systems, a large carve-out that could seriously impact individual rights. The AI policy memo further requires agencies to designate Chief AI Officers, and requires agencies covered by the Chief Financial Officers (CFO) Act to convene AI Governance Committees, in order to guide and coordinate issues related to AI implementation, including managing risks.
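
To picture what the use-case inventory requirement implies in practice, it helps to think of each reported deployment as a structured public record. The sketch below is purely illustrative: the field names are our invention, and the tiers only loosely echo the memo’s “safety-impacting” and “rights-impacting” categories; none of this is the official OMB reporting schema.

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskTier(Enum):
    # Illustrative tiers loosely echoing the OMB memo's "safety-impacting"
    # and "rights-impacting" categories; not the official taxonomy.
    MINIMAL = "minimal"
    SAFETY_IMPACTING = "safety-impacting"
    RIGHTS_IMPACTING = "rights-impacting"

@dataclass
class AIUseCase:
    """One entry in a hypothetical agency AI use-case inventory."""
    agency: str
    system_name: str
    purpose: str            # what function the AI performs
    risk_tier: RiskTier
    affects_public: bool    # does it touch individual rights or benefits?
    human_review: bool      # is there a human fallback or appeal path?
    last_assessed: str      # ISO date of the most recent risk assessment
    waivers: list[str] = field(default_factory=list)  # documented exemptions

# A hypothetical entry, for illustration only:
fcc_comment_triage = AIUseCase(
    agency="FCC",
    system_name="Public comment triage",
    purpose="Cluster and deduplicate rulemaking comments",
    risk_tier=RiskTier.MINIMAL,
    affects_public=True,
    human_review=True,
    last_assessed="2024-05-01",
)
```

A registry of records like this is what would let an overseer ask the questions that matter: whether assessments are current, whether rights-impacting systems have a human fallback, and where exemptions are accumulating.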

Strengthening AI Governance

While these are largely positive developments, their efficacy depends, in large part, on good faith engagement by administrative officials in ensuring that AI deployment is safe, accountable, and reflective of the democratic principles that are meant to guide the exercise of state power. In the absence of meaningful engagement, it is easy for risk assessment processes to devolve into a box-checking exercise, where any concerns about the broader impacts of AI on agency operations are outweighed by pressure to slash budgets, downsize workforces, and present a façade that the agency is on the leading edge of innovation. While we all share a collective interest in maintaining public trust in government, individual managers within agencies may face competing short-term incentives that outweigh these structural values. In the absence of meaningful oversight, decision-making around whether a system is performing appropriately may also be unduly influenced by concerns about sunk costs, and the negative implications of abandoning a tool that has been developed at considerable time and expense.

Robust governance over the use of AI in the public sector requires centralized, specialized oversight of decision-making at administrative agencies, including where these systems are succeeding or failing, and whether the use of AI is appropriate in the first place. However, this presents a challenge since AI risk assessment is heavily contextual. Part of the reason why the AI RMF delegates so much flexibility to the individual agencies is that an accurate risk assessment requires an intimate understanding of the agency’s workflow, procedures, and external stakeholders, as well as the specific way that a tool fits into these complex dynamics. One cannot properly assess risks without a comprehensive understanding of the tool’s operational context, which is difficult for those outside of the agency to fully grasp.

These tensions are not irreconcilable, however, and may be overcome by formalizing a system which combines robust first-instance risk assessments within the relevant agency with an appropriate mechanism for centralized oversight over these decisions, and over broader agency policies related to the use of AI. One model could be to delegate oversight to administrative law judges, who already play a parallel role in adjudicating certain aspects of agency conduct. However, judicial enforcement tends to be expensive and time-consuming, and this model is generally not conducive to cultivating the sort of collaborative relationships which are necessary to develop robust administrative infrastructure. Oversight could also be concentrated within an existing agency, such as the OMB, or the Government Accountability Office, the latter of which already plays an important role in technology assessments, and which also enjoys the advantage of being located within the legislative branch, making it better able to act as an accountability mechanism over executive agencies. However, it is unclear whether either of these agencies possesses the requisite authority to effect meaningful change across the administrative state.

As an alternative model, it may be worth considering the establishment of a specialized administrative oversight body. Such a body would not be entirely unprecedented under U.S. law: the Privacy and Civil Liberties Oversight Board (PCLOB) serves a roughly analogous function in assessing risks from counterterrorism programs. However, it too has a mixed record of effecting positive change across its areas of responsibility, due in part to its limited budget and powers, as well as persistent challenges in maintaining a quorum.

Around the world, a number of the most successful models of administrative oversight may be found within freedom of information (or right to information) systems, which are often overseen by an independent information commission or commissioner. While there is no existing body which carries out this function for the federal Freedom of Information Act, they are not unusual within state transparency systems. Where these systems are well-designed, they anticipate significant bureaucratic resistance, since public transparency programs function as a critical public accountability and oversight mechanism. Processing public information requests also requires a sophisticated understanding of agency operations, not only to know where responsive information may be found for a given request, but also to accurately apply any exceptions to disclosure.

Information commissions or commissioners develop appropriate standards for local information officers to apply, oversee agency compliance with those standards, and support public understanding of the right to information. They are likewise often empowered to exercise a more general oversight function over agency operations and to interface with the public in promoting the right to information, including by hearing appeals against administrative non-compliance and ordering remedial measures. Information commissions can even be multistakeholder agencies, incorporating perspectives from law enforcement and national security agencies, civil society, and a range of other subject matter experts. Their importance to sustaining core democratic values is demonstrated by the fact that, among backsliding democracies, information commissions are often one of the first targets for attack. One byproduct of this tendency is a robust set of international best practices for ensuring these agencies’ independence and resilience.

Applying this model to AI governance, it is possible to conceive of a specialized, independent, multistakeholder body which may be tasked with ensuring that agency efforts to incorporate AI into their operations comply with centrally set standards, and with reviewing agency-generated risk assessments to ensure that they are carried out in an effective and meaningful way. These reviews could either be carried out periodically, with each agency reporting back to the “AI Commission” on an annual or semi-annual basis, or be prompted in response to public complaints or appeals against the use of AI for a particular function. Importantly, the Commission would need to incorporate appropriate safeguards to guarantee its independence, and its ability to push back against bad decision-making at other agencies, as opposed to acting as a mere inter-agency hub.

While a specialized agency dealing only with government uses of AI may seem like overkill in the present moment, it is appropriate and necessary in light of the likely transformative impact of these technologies across the administrative state. It is vital to empower a voice within government that can push back, in an adversarial way if need be, against the rising tide of automation of a growing number of government functions. However, in addition to challenges related to the cost and political will associated with standing up such an agency, it would require significant conceptual work to determine how this body might fit into the existing U.S. federal apparatus, particularly given growing regulatory fragmentation on AI, and broader judicial attacks on the administrative state. Nonetheless, as AI continues to be integrated into an increasing number of core government functions, it is important to think creatively about how to harness the advantages of these new technologies without losing the core qualities that make our government responsive, accountable, and fundamentally human.

IMAGE: Artistic depiction of a brain symbolizing artificial intelligence. (Photo by Geralt via Creazilla.com, CC0 1.0)

Why An Encryption Backdoor for Just the “Good Guys” Won’t Work
March 2, 2018

Seeking to undermine encryption looks backward instead of focusing on where technology is going; the conversation should be about new investigative techniques, not preserving pre-encryption access.

Recently, U.S. law enforcement officials have re-energized their push for a technical means to bypass encryption, pointing to Symphony, a chat application adopted by banks in 2015 that includes backdoor access functionality for use in investigations. This system, officials argue, could be implemented more broadly to provide secure backdoor access to all communications for law enforcement. They further claim to have spoken with technical experts (whom they decline to name) who say this approach could be implemented at scale. Publicly, security experts have expressed skepticism that a Symphony-like system could be safe, particularly if scaled out the way officials seem to want. Policy experts have further explained that these conversations increasingly provide political cover for bad laws and practices in other countries. For instance, in the last few years China has passed a range of laws with serious implications for human rights. As a result of these laws, Apple has announced that it will shift to contracting with a Chinese company to provide its iCloud service to Chinese users, which means all iCloud data, including encryption keys, will have to be stored locally.

The Apple example is a timely reminder that the ongoing debate over encryption backdoors has important global implications. Additionally, it provides a concrete example of how even the staunchest advocates for encryption can buckle in the face of powerful government actors. Everyone on the Internet has a shared interest in safe encryption, and so any system that would undermine security should be a non-starter. However, even putting aside these technical debates, and granting the dubious assertion that a backdoor could be implemented securely, there are a number of practical reasons why backdooring encryption would still be a bad idea.
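
It is worth being concrete about what a “secure backdoor” typically means in engineering terms: some form of key escrow, in which each message key is additionally wrapped under a master key held by the provider or a government. The sketch below is a minimal illustration using the Python cryptography package’s AES-GCM primitive; the names and structure are ours for illustration, not a description of Symphony’s actual design.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

ESCROW_KEY = AESGCM.generate_key(bit_length=256)  # master key someone must guard forever

def encrypt_with_escrow(plaintext: bytes, recipient_key: bytes):
    """Encrypt for the recipient, but also wrap the message key for escrow."""
    msg_key = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)
    ciphertext = AESGCM(msg_key).encrypt(nonce, plaintext, None)
    # Normal path: the message key is wrapped for the intended recipient.
    r_nonce = os.urandom(12)
    for_recipient = AESGCM(recipient_key).encrypt(r_nonce, msg_key, None)
    # Backdoor path: the SAME message key, wrapped under the escrow master key.
    e_nonce = os.urandom(12)
    for_escrow = AESGCM(ESCROW_KEY).encrypt(e_nonce, msg_key, None)
    return nonce, ciphertext, (r_nonce, for_recipient), (e_nonce, for_escrow)

def escrow_decrypt(nonce, ciphertext, e_nonce, for_escrow):
    """Whoever holds ESCROW_KEY can read ANY message ever sent this way."""
    msg_key = AESGCM(ESCROW_KEY).decrypt(e_nonce, for_escrow, None)
    return AESGCM(msg_key).decrypt(nonce, ciphertext, None)

# Demo: the escrow holder recovers the plaintext without the recipient's key.
nonce, ct, _, (e_nonce, wrapped) = encrypt_with_escrow(
    b"meet at noon", AESGCM.generate_key(bit_length=256))
print(escrow_decrypt(nonce, ct, e_nonce, wrapped))  # b'meet at noon'
```

Every policy question that follows (who holds the escrow key, which governments can demand it, how it is secured against theft) collapses onto that single ESCROW_KEY value.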

Just who are the “good guys”?

Any system mandating some hypothetical secure backdoor would need to decide who can obtain access and under what circumstances. Even if it were possible to build a backdoor which only “good guys” could use, this would still leave tech companies in the position of having to determine who counts as a “good guy.”

To American lawmakers, it may seem logical to require that access be provided to U.S. law enforcement in line with U.S. law, including, at a minimum, the receipt of a valid warrant. However, digital communications are international, as are the tech firms at the center of this debate. If Apple, for example, had the technical capability to circumvent encryption on its devices, and it had a policy of facilitating access for U.S. law enforcement when presented with a warrant, it would face tremendous pressure to provide equivalent services to other governments, and, in some cases, such as China’s, even a legal obligation to do so.

While it may seem acceptable enough for a company to cooperate with warrants in friendly democracies with independent judiciaries, like Canada or Germany, a request from the Chinese police to help track dissidents, or Saudi police to catch anyone using Grindr, would place the companies in a difficult position. Many repressive states, including China, already have laws requiring cooperation from the private sector. Apple’s acquiescence to relocate its Chinese users’ data into China was made in response to new legislation requiring local storage. While Apple has indicated that this will not undermine security for users, experts have pointed out that that is not a guarantee they can make. The fact that tech firms are unable to break their own security features limits states’ ability to make demands for access to user information, though this would certainly change if a backdoor were introduced for American law enforcement.

If a tech firm introduced a backdoor into its systems, it would therefore have two options: it could facilitate access to all governments equally, which would mean complicity in a wide range of human rights abuses, or it could commit to evaluating all requests for access on their merits and potential human rights impact. In the latter case, besides being manifestly unqualified to perform this role, such a stand would be very difficult for tech firms to maintain. The mere capability to facilitate backdoor access would subject companies to tremendous pressure, and a failure to comply would have high stakes: Chinese law has no upper limit on the fines that can be imposed for non-compliance with government demands for access. A non-compliant company could also risk losing access to that market entirely, or even seeing employees jailed or harmed. Given these alternatives, it is not surprising that tech firms have thus far sought to avoid these results by maintaining a technical inability to provide access to their users’ secure communications. The real costs of backdoors would be borne by ordinary people, and by global tech firms, which would have to shoulder the financial and moral cost of supporting repressive governments.

Just how secure is your “secure” backdoor?

While government officials may seem comfortable with the idea that exceptional access can be provided securely in line with today’s technology, they fail to examine the future implications. Major data breaches are already happening with alarming frequency, with compromised credit card numbers or social security numbers for sale online for only pennies. While encryption cannot prevent all breaches, it may well be our best defense against the activities of malicious actors, something that will only become more important as the Internet of Things continues to expand. Intel has forecast that there will be more than 200 billion IoT devices in use by 2020, including cars, smoke detectors, and home security systems, most of which will generate sensitive user data. In a connected world, digital security will likely become a matter of life and death. The next generation of breaches may threaten not just your data, but your life and the lives of your loved ones.

And these actors’ techniques are growing ever more sophisticated. Offensive strategies for gaining access to lucrative data sets are constantly evolving, forcing those who control the data to race to stay half a step ahead of anyone probing for a chink in their systems. Any limitations on the strength or type of encryption that can be used to protect data would be the equivalent of tying one hand behind the back of every engineer working to protect user information.

Complicating matters further, governments from the U.S. to China are already making strides toward quantum computing, which could potentially break any encryption used today. And while these breakthroughs often remain in governments’ exclusive hands for a time, they inevitably trickle down to non-state actors. To protect everyone, companies must be incentivized to constantly pursue better and stronger forms of protecting data if they have any hope of being prepared to face evolving generations of would-be criminals.
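
To see why quantum computing threatens today’s public-key encryption, recall that RSA’s security rests entirely on the difficulty of factoring the public modulus, which is exactly what Shor’s algorithm would erase. The toy sketch below uses deliberately tiny numbers so the dependency is visible; a real modulus is 2,048 bits or more, and the quantum attack is Shor’s algorithm rather than the brute-force trial division shown here.

```python
# Toy RSA: the private key falls out of n's factors, and nothing else.
p, q = 61, 53
n, e = p * q, 17              # public key: n = 3233, e = 17
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)           # private exponent (modular inverse; Python 3.8+)

m = 42                        # the secret message
c = pow(m, e, n)              # ciphertext anyone can produce from the public key

# Attacker's view: only (n, e, c). Factoring n is trivial here by trial
# division; for a 2048-bit n it is infeasible classically, but tractable
# on a large fault-tolerant quantum computer running Shor's algorithm.
p2 = next(k for k in range(2, n) if n % k == 0)
q2 = n // p2
d2 = pow(e, -1, (p2 - 1) * (q2 - 1))
assert pow(c, d2, n) == m     # plaintext recovered without the private key
```

Run as written, the assertion passes: an attacker who can factor the modulus recovers the plaintext without ever being given the private key.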

Government can legislate, but the market will respond

It is nearly impossible to keep people from accessing certain tools or technologies online. In 2017, Russia passed a law banning the use of Tor, a program for facilitating anonymous, encrypted communication, but as of early 2018 there were still over 250,000 daily users of the service in Russia. Despite repeated lawsuits and the jailing of its founders, The Pirate Bay remains stubbornly online. As long as there are countries on earth which do not mandate backdoors for encryption, or do not impose these rules universally, strong encryption is going to remain available.

The most likely result of a move to introduce backdoors into tech products would be a migration by criminals and terrorists to smaller, less regulated products which could offer strong encryption without consequence. The use of encryption tools is quickly becoming a standard criminal skill, much like hot-wiring a car or buying a stolen credit card. Eventually the only people still using tools subject to the government mandate would be ordinary people without the knowledge or incentives to adopt alternatives, and for those people the repercussions of weaker security could be serious and long-felt. This is one of the main reasons why experts agree that it is unlikely that any access mechanism, even if immediately effective, would maintain its efficacy over time.

Building and securing the architecture to ensure government access would also impose a significant cost on companies. The level of sophistication involved would be a particular challenge for smaller firms, which would inevitably lead to their products being less safe. This insecurity would be compounded if different governments mandated different access mechanisms, with users bearing the weight of any breach of their sensitive information and communications. Accordingly, mandatory encryption backdoors can be seen as a market-distorting force, making security so expensive that only the biggest companies could offer it. While this impact could be mitigated by limiting a backdoor mandate to companies over a particular size, that would in turn further limit the mechanism’s efficacy, as anyone who needed security for illegitimate ends would presumably migrate to services outside the scope of the rule.

Why is this necessary again?

Despite government officials’ decades-long push for proposals to build a backdoor into encrypted systems, we still have not answered the basic threshold question: would a backdoor access regime even give law enforcement the information they are seeking? Even more fundamentally, we don’t know what it is that law enforcement is seeking. While figures have been published on the number of encrypted devices in the custody of U.S. officials, these data are not useful on their own. Was the encrypted data critical? Could it have been accessed in other ways? What is the likelihood that any encrypted information would have actually contributed to the case? What cases are we looking at, and what is the technical sophistication of the criminals involved?

Law enforcement claims that modern criminals are increasingly sophisticated, and warns that the spread of encryption means they are “going dark,” harming officials’ ability to do their jobs. The reality is that the digital world has provided investigators with a vastly more sophisticated toolkit for solving crimes than ever before. A generation ago, tracking down who a suspect was communicating with and what they were saying involved having law enforcement agents physically follow them around, or break into their establishments to plant remote listening devices. Today, not only is that information available in a consolidated format and capable of being conveniently delivered to any field office; in many cases it can be traced back for months or even years, depending on the policy of the company handling the communications.

The challenges posed by security conscious criminals are hardly unprecedented. Indeed, people have always had ways of rendering information inaccessible to investigators – including simply burning or burying incriminating material. The fact that information is now encrypted does not represent an unprecedented challenge for law enforcement; it merely represents a slight retreat from the “golden age of surveillance” that we currently live in.

Moreover, even sophisticated encryption systems are not a black hole for criminal data. After the 2015 San Bernardino attack, the FBI tried to compel Apple to build a new operating system that could be installed on a phone belonging to one of the shooters, bypassing its security features. At the time, the Bureau claimed that Apple’s cooperation was the only possible way its investigators could get access to the contents of the phone. After Apple refused, however, the FBI found a researcher to hack into the phone for them. Today, at least one company likely has the ability to compromise any iPhone model on the market. Additionally, American law enforcement agencies recently sought, and received, an update to criminal procedure rules to more easily facilitate their hacking operations (despite having no law that authorizes such activity or provides adequate protections), and are using malware to gain access to information on encrypted devices. In other cases, police are reverting to more traditional investigative methods which, while time-consuming, are still effective. Russian spy Anna Chapman encrypted her information, but relied on passwords which she wrote down on paper, and which were later found in a police search. In order to catch Ross William Ulbricht, the alleged head of the illicit Silk Road market, investigators waited until he had logged into his computer before snatching the device away. Such “legal muggings” have become a standard part of the investigative toolkit in the UK. If these techniques are good enough to catch professional spies and international drug traffickers, surely law enforcement’s fears of “going dark” cannot be as dire as claimed.

Focus on the future, instead of trying to recapture the past

While the technical debates about whether encryption can be securely backdoored are interesting, they represent only one part of the argument for why these proposals are a terrible idea. The use of encryption technologies has been widely recognized as a core component of freedom of speech, as well as the right to privacy. While these rights are not absolute, a broad mandate requiring backdoors that would impact anyone other than specific targets in investigations would likely be considered a violation of human rights standards. Moreover, in the United States we legally recognize code as speech, and therefore compelling companies to create and implement new access mechanisms would violate their First Amendment rights. It seems unlikely that any mechanism can, or should, survive scrutiny under these standards.

Fundamentally, we all have an interest in a safe and secure Internet. Even U.S. intelligence agencies have fallen victim to bad security, most notably with the Shadow Brokers data breach. This may be why, despite warning of the dangers of widespread encryption, the U.S. government remains a major financial backer of Tor, largely because of a belief in the national security value of enabling secure communication. Modern technology is fluid and fast moving. To keep up, law enforcement needs to adapt. But seeking to undermine encryption only looks backward instead of focusing on where technology is going. We should be having conversations about new investigative techniques, not trying to preserve the access enjoyed in the days before encryption was so widespread, particularly when so much is at stake.

(Alex Wong/Getty Images)
