There is an ongoing debate as to which central banks will launch digital currencies, by which date, and how currencies will be interoperable with one another. Simon Sharwood, writing for The Register, is reporting that China’s Digital Yuan is taking big steps toward answering many of those questions:
According to an account of the meeting in state-controlled media, Fan said standardization across payment systems will be needed to ensure the success of the Digital Yuan.
The kind of standardization he envisioned is interoperability between existing payment systems – whether they use QR codes, NFC or Bluetooth.
That’s an offer AliPay and WeChat Pay can’t refuse, unless they want Beijing to flex its regulatory muscles and compel them to do it.
With millions of payment terminals outside China already set up for AliPay and WeChat Pay, and the prospect of the Digital Yuan being accepted in the very same devices, Beijing has the beginnings of a global presence for its digital currency.
When I walk around my community I very regularly see options to use AliPay or WeChat Pay, and see many people using these options. The prospect that the Chinese government might be able to take advantage of existing payment structures to also use a government-associated digital fiat currency would be a remarkable manoeuvre that could theoretically occur quite quickly. I suspect that when/if some Western politicians catch wind of this they will respond quickly and bombastically.
Other governments’ central banks should, ideally, be well underway in developing the standards for their own digital fiat currencies. These standards should be put into practice in a meaningful way to assess their strengths and correct their deficiencies. Governments that are not well underway in launching such digital currencies are running the risk of seeing some of their population move away from domestically-controlled currencies, or basket currencies where the state determines what composes the basket, to currencies managed by foreign governments. This would represent a significant loss of policy capacity and, arguably, economic sovereignty for at least some states.
Why might some members of their population shift over to, say, the Digital Yuan? In the West this might occur when individuals are travelling abroad, where WeChat Pay and AliPay infrastructure is often more usable and more secure than credit card infrastructures. After using these for a while, the same individuals may continue to use those payment methods for ease and low cost when they return home. In less developed parts of the world, where AliPay and WeChat Pay are already becoming dominant, it could occur as members of the population continue their shift to digital transactions and away from currencies controlled or influenced by their governments. The effect would be, potentially, to provide a level of influence to the Chinese government while exposing sensitive macro-economic consumer habits that could be helpful in developing Chinese economic, industrial, or foreign policy.
Western government responses might be to bar the use of the Digital Yuan in their countries but this could be challenging should it rely on common standards with AliPay and WeChat Pay. Could a ban surgically target the Digital Yuan or, instead, would it need to target all payment terminals using the same standard and, thus, catch AliPay and WeChat Pay as collateral damage? What if a broader set of states all adopt common standards, which happen to align with the Digital Yuan, and share infrastructure: just how many foreign and corporate currencies could be disabled without causing a major economic or diplomatic incident? To what extent would such a ban create a globally bifurcated (trifurcated? quadfurcated?) digital payment environment?
Though some governments might regard this kind of ‘burn them all’ approach as desirable there would be an underlying question of whether such an effect would be reasonable and proportionate. We don’t ban WeChat in the West, as an example, in part due to such an action being manifestly disproportionate to risks associated with the communications platform. It is hard to imagine how banning the Digital Yuan, along with WeChat Pay or AliPay or other currencies using the same standards, might not be similarly disproportionate where such a decision would detrimentally affect hundreds of thousands, or millions, of people and businesses that already use these payment systems or standards. It will be fascinating to see how Western central banks move forward to address the rise of digital fiat currencies and, also, how their efforts intersect with the demands and efforts of Western politicians that regularly advocate for anti-China policies and laws.
Charley Johnson has a good line of questions and critique for any organization or group which is promoting a ‘technology for good’ program. The crux is that any and all techno-utopian proposals suggest a means of technology to solve a problem as defined by the party making the proposal. Put another way, these kinds of solutions do not tend to solve real underlying problems but, instead, solve the ‘problems’ for which hucksters have already built a pre-designed ‘solution’.
This line of analysis isn’t new, per se, and follows in a long line of equity, social justice, feminist, and critical theory writing. Still, Johnson does a good job of extracting key issues with techno-utopianism. Key is that these solutions tend to present a ‘tech for good’ mindset that:
… frames the problem in such a way that launders the interests, expertise, and beliefs of technologists…‘For good’ is problematic because it’s self-justifying. How can I question or critique the technology if it’s ‘for good’? But more importantly, nine times out of ten ‘for good’ leads to the definition of a problem that requires a technology solution.
One of the things that we are seeing more commonly is the use of data, in and of itself, as something that can be used for good: data for good initiatives are cast as being critical to solving climate change, making driving safer, or automating away the messier parts of our lives. Some of these arguments are almost certainly even right! However, the proposed solutions tend to rely on collecting, using, or disclosing data—derived from individuals’ and communities’ activities—without obtaining their informed, meaningful, and ongoing consent. ‘Data for good’ depends, first and often foremost, on removing the agency to say ‘yes’ or ‘no’ to a given ‘solution’.
In the Canadian context, efforts to enable ‘good’ uses of data have emerged through successively introduced pieces of commercial privacy legislation. The legislation would permit the disclosure of de-identified personal information for “socially beneficial purposes.” Information could be disclosed to government, universities, public libraries, health care institutions, organizations mandated by the government to carry out a socially beneficial purpose, and other prescribed entities. Those organizations could use the data for a purpose related to health, the provision or improvement of public amenities or infrastructure, the protection of the environment, or any other prescribed purpose.
Put slightly differently, whereas Johnson’s analysis is directed at a broad concept of ‘data for good’ in tandem with elucidating examples, the Canadian context threatens to see broad-based techno-utopian uses of data enabled at the legislative level. The legislation includes the ability to expand who can receive de-identified data and the range of socially beneficial uses, with new parties and uses being defined by regulation. While there are a number of problems with these kinds of approaches—which include the explicit removal of consent of individuals and communities to having their data used in ways they may actively disapprove of—at their core the problems are associated with power: the power of some actors to unilaterally make non-democratic decisions that will affect other persons or communities.
This capacity to invisibly express power over others is the crux of most utopian fantasies. In such fantasies, power relationships are resolved without ever being made explicit and, in the process, an imaginary is created wherein social ills are fixed as a result of power having been hidden away. Decision making in a utopia is smooth and efficient, and the power asymmetries which enable such situations are either hidden away or simply not substantively discussed.
Johnson’s article concludes with a series of questions that act to re-surface issues of power vis-a-vis explicitly raising questions of agency and the origin and nature of the envisioned problem(s) and solution(s):
Does the tool increase the self-determination and agency of the poor?
Would the tool be tolerated if it was targeted at non-poor people?
What problem does the tool purport to solve and who defined that problem?
How does the way they frame the problem shape our understanding of it?
What might the one framing the problem gain from solving it?
We can look to these questions as, at their core, raising issues of power—who is involved in determining how agency is expressed, who has decision-making capabilities in defining problems and solutions—and, through them, issues of inclusion and equity. Implicit throughout his writing, at least to my eye, is that these decisions cannot be assigned to individuals alone but to individuals and their communities together.
One of the great challenges for modern democratic rule making is that we must transition from imagining political actors as rational, atomic, subjects to ones that are seen as embedded in their community. Individuals are formed by their communities, and vice versa, simultaneously. This means that we need to move away from traditional liberal or communitarian tropes to recognize the phenomenology of living in society, alone and together simultaneously, while also recognizing and valuing the tilting power and influence of ‘non-rational’ aspects of life that give life much of its meaning and substance. These elements of life are most commonly those demonized or denigrated by techno-utopians on the basis that technology is ‘rational’ and is juxtaposed against the ‘irrationality’ of how humans actually live and operate in the world.
Broadly, then, and in conclusion, techno-utopianism is functionally an issue of power and domination. We see ‘tech bros’ and traditional power brokers alike advancing solutions to the problems they perceive, and this approach may be further reified should legislation be passed that embeds this conceptual framework more deeply into democratic nation-states. What is under-appreciated is that while such legislative efforts may make certain techno-utopian activities lawful, the subsequent actions will not necessarily be regarded as legitimate by those affected by the lawful ‘socially beneficial’ uses of de-identified personal data.
The result? At best, ambivalence that reflects the population’s existing alienation from democratic structures of government. More likely, however, is that lawful but illegitimate expressions of ‘socially beneficial’ uses of data will further delegitimize the actions and capabilities of the states, with the effect of further weakening the perceived inclusivity of our democratic traditions.
Cameron F. Kerry has a helpful piece in Brookings that unpacks the recently published ‘Declaration on the Future of the Internet.’ As he explains, the Declaration was signed by 60 States and is meant, in part, to rebut a China-Russia joint statement. Those countries’ statement would support their positions on ‘securing’ domestic Internet spaces and removing Internet governance from multi-stakeholder forums to State-centric ones.
So far, so good. However, baked into Kerry’s article is language suggesting that he either misunderstands, or understates, some of the security-related elements of the Declaration. He writes:
There are additional steps the U.S. government can take that are more within its control than the actions and policies of foreign states or international organizations. The future of the Internet declaration contains a series of supporting principles and measures on freedom and human rights, Internet governance and access, and trust in use of digital network technology. The latter—trust in the use of network technology—is included to “ensure that government and relevant authorities’ access to personal data is based in law and conducted in accordance with international human rights law” and to “protect individuals’ privacy, their personal data, the confidentiality of electronic communications and information on end-users’ electronic devices, consistent with the protection of public safety and applicable domestic and international law.” These lay down a pair of markers for the U.S. to redeem.
I read this, against the 2019 Ministerial and recent Council of Europe Cybercrime Convention updates, and see that a vast swathe of new law enforcement and security agency powers would be entirely permissible based on Kerry’s assessment of the Declaration and States involved in signing it. While these new powers have either been agreed to, or advanced by, signatory States they have simultaneously been directly opposed by civil and human rights campaigners, as well as some national courts. Specifically, there are live discussions around the following powers:
the availability of strong encryption;
the guarantee that the content of communications sent using end-to-end encrypted devices cannot be accessed or analyzed by third-parties (including by on-device surveillance);
the requirement of prior judicial authorization to obtain subscriber information; and
the oversight of preservation and production powers by relevant national judicial bodies.
Laws can be passed that see law enforcement interests supersede individuals’ or communities’ rights in safeguarding their devices, data, and communications from the State. When or if such a situation occurs, the signatories of the Declaration can hold fast in their flowery language around protecting rights while, at the same time, individuals and communities experience heightened surveillance of, and intrusions into, their daily lives.
In effect, a lot of international policy and legal infrastructure has been built to facilitate sweeping new investigatory powers and reforms to how data is, and can be, secured. It has taken years to build this infrastructure and as we leave the current stage of the global pandemic it is apparent that governments have continued to press ahead with their efforts to expand the powers which could be provided to law enforcement and security agencies, notwithstanding the efforts of civil and human rights campaigners around the world.
The next stage of things will be to assess how, and in what ways, international agreements and legal infrastructure will be brought into national legal systems, and to determine where to strategically oppose the worst of the overreaches. While it’s possible that some successes will be achieved in resisting the expansion of state powers, not everything will be resisted. The consequence will be both to enhance state intrusions into private lives and to weaken the security provided to devices and data, with the resultant effect of better enabling criminals to illicitly access or manipulate our personal information.
The new world of enhanced surveillance and intrusions is wholly consistent with the ‘Declaration on the Future of the Internet.’ And that’s a big, glaring, and serious problem with the Declaration.
The history of Canada is linked to settler colonialism and white supremacy. Only recently have elements of Canada come to truly think through what this means: Canada, and settler Canadians, owe their existence to the forceful removal of indigenous populations from their territories.
Toronto is currently hosting an art exhibit, “Built on Genocide.” It’s created by the indigenous artist Jay Soule | CHIPPEWAR,1 and provides a visual record of the link between the deliberate decimation of the buffalo and the genocide of indigenous populations. From the description of the exhibit:
Built on Genocide is a powerful visual record of the 19th-century buffalo genocide that accompanied John A. MacDonald’s colonial expansion west with the railroad. In the mid-19th century, an estimated 30 to 60 million buffalo roamed the prairies, by the late 1880s, fewer than 300 remained. As the buffalo were slaughtered and the prairie ecosystem decimated, Indigenous peoples were robbed of their foods, lands, and cultures. The buffalo genocide became a genocide of the people.
Working from archival records, Soule combines installation and paintings to connect the past with the present, demanding the uncomfortable acknowledgement that Canada is a nation built on genocide.
What follows are a series of photographs that I made while visiting the exhibit on October 13, 2021. All images were made using an iPhone 12 Pro using the ‘Noir’ filter in Apple Photos, and subsequently edited using a Darkroom App filter.
Canada is, and needs to be, going through a reckoning concerning its past. This process is challenging for settlers, both to appreciate their actual histories and to be made to account for how they arrived at their current life situations. There are, obviously, settlers who are in challenging life situations—some experience poverty and are otherwise disadvantaged in society—but their challenges routinely pale in comparison to what is sadly normal and typical in Canada’s indigenous societies. As just one example, while poverty is a real issue for some white and immigrant Canadians, few lack routine access to safe and clean drinking water, and none have lacked such access for over 26 years. That, however, is the lived reality for some indigenous populations in Canada.
Jay creates art under the name CHIPPEWAR, which represents the hostile relationship that Canada’s Indigenous peoples have with the government of the land they have resided in since their creation. CHIPPEWAR is also a reminder of the importance of the traditional warrior role that exists in Indigenous cultures across North America that survives into the present day. ↩︎
Steven Chaplin has a really great explanation of whether the Canadian government can rely on national security and evidentiary laws to lawfully justify refusing to provide documents to the House of Commons, and to House committees. His analysis and explanation arose as a result of the Canadian government doing everything it could, first, to refuse to provide documents to the Parliamentary Committee which was studying Canadian-Chinese relations and, subsequently, to refuse to provide the documents when compelled to do so by the House of Commons itself.
Rather than releasing the requested documents, the government turned to the courts to adjudicate whether the documents in question–which were asserted to contain sensitive national security information–must, in fact, be released to the House, or whether they could instead be sent to an executive committee, filled with Members of Parliament and Senators, to assess the contents. As Chaplin notes,
Having the courts intervene, as proposed by the government’s application in the Federal Court, is not an option. The application is clearly precluded by Article 9 of the Bill of Rights, 1689, which provides that a proceeding in Parliament ought not to be impeached or questioned in court. Article 9 not only allows for free speech; it is also a constitutional limit on the jurisdiction of the courts to preclude judicial interference in the business of the House.
The House ordered that the documents be tabled without redaction. Any decision of the court that found to the contrary would impeach or question the proceeding that led to the Order. And any attempt by the courts to balance the interests involved would constitute the courts becoming involved in ascertaining, and thereby questioning, the needs of the House and why the House wants the documents.
Beyond the Court’s involvement intruding into the territory of Parliament, there could be serious and long-term implications of letting the courts become a space wherein the government and the House fight over information that has been demanded. Specifically,
It may be that at the end of the day the government will continue to refuse to produce documents. In the same way that the government cannot use the courts to withhold documents, the House cannot go to court to compel the government to produce them, or to order witnesses to attend proceedings. It could also invite disobedience of witnesses, requiring the House to either drop inquiries or involve the courts to compel attendance or evidence. Allowing, or requiring, the government and the House to resolve their differences in the courts would not only be contrary to the constitutional principles of Article 9, but “would inevitably create delays, disruption, uncertainties and costs which would hold up the nation’s business and on that account would be unacceptable even if, in the end, the Speaker’s rulings were vindicated as entirely proper” (Canada (House of Commons) v. Vaid ). In short, the courts have no business intervening one way or the other.
Throughout the discussions that have taken place about this issue in Canada, what has been most striking is that national security commentators and elites have envisioned that the National Security and Intelligence Committee of Parliamentarians (NSICOP) could (and should) be tasked to resolve any and all particularly sensitive national security issues that might be of interest to Parliament. None, however, seems to have contemplated that Parliament, itself, might take issue with the government trying to exclude Parliament from assessing the government’s national security decisions, nor that Parliamentarians would take issue when topics of interest to them were punted into an executive body, wherein their fellow Members of Parliament were sworn to the strictest secrecy. Instead, elites have hand-waved at the importance of preserving secrecy so that Canada can continue to receive intelligence from allies, and asserted that the government would never mislead Parliament on national security matters (matters which, these same experts explain, Members of Parliament are not prepared to receive, process, or understand, given the sophistication of the intelligence and the apparent simplicity of most Parliamentarians themselves).
This was the topic of a recent episode of the Intrepid Podcast, where Philippe Lagassé noted that the exclusion of parliamentary experts when creating NSICOP meant that these entirely predictable showdown situations were functionally baked into how the executive body was composed. As someone who raised the issue of adopting an executive, versus a standing House, committee and was rebuffed as being ignorant of the reality of national security it’s with more than a little satisfaction that the very concerns which were raised when NSICOP was being created are, in fact, arising on the political agenda.
With regard to the documents that the House Committee was seeking, I don’t know or particularly care what their contents include. From my own experience I’m all too well aware that ‘national security’ is often stamped on things either because they could be politically damaging to the government, because of a general culture of non-transparency and refusal of accountability, or (less often) because there are bona fide national security interests at stake. I do, however, care that the Government of Canada has (again) acted counter to Parliament’s wishes and has deliberately worked to impede the House from doing its work.
Successive governments seem to genuinely believe that they get to ‘rule’ Canada absolutely and with little accountability. While this is largely true in practice, given how cowed Members of Parliament are by their party leaders, it’s incredibly serious and depressing to see the government further erode Parliament’s powers and abilities to fulfil its duties. A healthy democracy is filled with bumps for the government as it is held to account but, sadly, the Government of Canada–regardless of the party in power–is incredibly active in keeping itself, and its behaviours, from the public eye and thus from being held to account.
If only a committee might be struck to solve this problem…
Matt Tait, as normal, has good insights into just why the Kaseya ransomware attack1 was such a big deal:
In short, software supply chain security breaches don’t look like other categories of breaches. A lot of this comes down to the central conundrum of system security: it’s not possible to defend the edges of a system without centralization so that we can pool defensive resources. But this same centralization concentrates offensive action against a few single points of failure that, if breached, cause all of the edges to fall at once. And the more edges that central failure point controls, the more likely the collateral real-world consequences of any breach, but especially a ransomware breach, will be catastrophic and overwhelm the defensive cybersecurity industry’s ability to respond.
Managed Service Providers (MSPs) are becoming increasingly common targets. It’s worth noting that the Canadian Centre for Cybersecurity’s National Cyber Threat Assessment 2020 listed ransomware as well as the exploitation of MSPs as two of the seven key threats to Canadian financial and economic health. The Centre went so far as to state that it expected,
… that over the next two years ransomware campaigns will very likely increasingly target MSPs for the purpose of targeting their clients as a means of scaling targeted ransomware campaigns.
Sadly, if not surprisingly, this assessment has been entirely correct. It remains to be seen what impact the 2020 threats assessment has, or will have, on Canadian organizations and their security postures. Based on conversations I’ve had over the past few months the results are not inspiring and the threat assessment has generally been less effective than hoped in driving change in Canada.
As discussed by Steven Bellovin, part of the broader challenge for the security community in preparing for MSP operations has been that defenders are routinely behind the times; operators modify what and who their campaigns will target and defenders are forced to scramble to catch up. He specifically, and depressingly, recognizes that, “…when it comes to target selection, the attackers have outmaneuvered defenders for almost 30 years.”
These failures are that much more noteworthy given that the United States has trumpeted for years that the NSA will ‘defend forward’ to identify and hunt threats, and respond to them before they reach ‘American cybershores’.2 The seemingly now-routine targeting of both system update mechanisms and vendors which provide security or operational controls for wide swathes of organizations demonstrates that things are going to get a lot worse before they’re likely to improve.
A course correction could follow from Western nations developing effective and meaningful cyber-deterrence processes that encourage nations such as Russia, China, Iran, and North Korea to punish computer operators who are behind some of the worst kinds of operations that have emerged in public view. However, this would in part require the American government (and its allies) to actually figure out how they can deter adversaries. It’s been 12 years or so, and counting, and it’s not apparent that any American administration has figured out how to implement a deterrence regime that exceeds issuing toothless threats. The same goes for most of their allies.
Absent an actual deterrence response, such as one which takes action in the sovereign states that host malicious operators, Western nations have slowly joined together to issue group attributions of foreign operations. They’ve also come together to recognize certain classes of cyber operations, including ransomware, as particularly problematic. Must nations build this shared capacity first, before they can actually undertake deterrence activities? Should that be the case, it would strongly underscore the need to develop shared norms in advance of sovereign states exercising their latent capacities in cyber and other domains, and lend credence to the importance of the Tallinn Manual process. If, however, this capacity is built and nothing is still undertaken to deter, then what will the capacity actually be worth? While this is a fascinating scholarly exercise–it’s basically an opportunity to test competing scholarly hypotheses–it’s one with significant real-world consequences, and the danger is that by the time we recognize which hypothesis is correct, years of time and effort could have been wasted for little apparent gain.
What’s worse is that this remains a scholarly exercise at all. Given that more than a decade has passed, and that ‘cyber’ is no longer truly new, why must hypotheses be spun instead of states having developed sufficient capacity to deter? Where are Western states’ muscles after so much time working this problem?
As a point of order, when is an act of ransomware an attack versus an operation? ↩︎
I just made that one up. No, I’m not proud of it. ↩︎
Roland Paris and Jennifer Walsh have an excellent, and thought-provoking, column in the Globe and Mail where they argue that Western democracies need to adopt a ‘democratic support’ agenda. Such an agenda has multiple points comprising:
States getting their own democratic houses in order;
States defending themselves and other democracies against authoritarian states’ attempts to disrupt democracies or coerce residents of democracies;
States assisting other democracies which are at risk of slipping toward authoritarianism.
In principle, each of these points makes sense, and they can interoperate with one another. The vision is not to inject democracy into states but, instead, to protect existing systems and demonstrate their utility as a way of guiding nations toward adopting and establishing democratic institutions. The authors also assert that countries like Canada should learn from non-Western democracies, such as Korea or Taiwan, to appreciate how they have maintained their institutions in the face of the pandemic, as a way to showcase how ‘peer nations’ also implement democratic norms and principles.
While I agree with the positions the authors suggest, far towards the end of the article they delicately slip in what is the biggest challenge to any such agenda. Namely, they write:
Time is short for Canada to articulate its vision for democracy support. The countdown to the 2024 U.S. presidential election is already under way, and no one can predict its outcome. Meanwhile, two of Canada’s closest democratic partners in Europe, Germany and France, may soon turn inward, preoccupied by pivotal national elections that will feature their own brands of populist politics.1
In warning that the United States may be an unreliable promoter of democracy (and, by extension, of the human rights and international rules and order which have backstopped Western-dominated world governance for the past 50 years) the authors reveal the real threat. What does it mean when the United States is regarded as likely to become more deeply mired in internecine ideological conflicts that absorb its own attention, limit its productive global engagements, and are used by competitor and authoritarian nations to warn of the consequences of “American-style” democracy?
I raise these questions because if the authors’ concerns are fair (and I think they are) then any democracy support agenda may need to proceed with the presumption that the USA may be a wavering or episodic partner in associated activities. To some extent, assuming this position would speak more broadly to a recognition that the great power has significantly fallen. To even take this as possible–to the extent that contingency planning is needed to address potential episodic American commitment to the agenda of buttressing democracies–should make clear that the American wavering is the key issue: in a world where the USA is regarded as unreliable, what does this mean for other democracies and how they support fellow democratic states? Do countries, such as Canada and others with high rule-of-law democratic governments, focus first and foremost on ‘supporting’ US democracy? And, if so, what does this entail? How do you support a flailing and (arguably) failing global hegemon?
I don’t pretend to have the answers. But it seems that when we talk about supporting democracies, and can’t rely on the USA to show up in five years, then the metaphorical fire isn’t approaching our house but a chunk of the house is on fire. And that has to absolutely be our first concern: can we put out the fire and save the house, or do we need to retreat with our children and most precious objects and relocate? And, if we must retreat…to where do we retreat?
Elizabeth Dubois has a great episode of Wonks and War Rooms where she interviews Etienne Rainville of The Boys in Short Pants podcast, former Hill staffer, and government relations expert. They unpack how government staffers collect information, process it, and identify experts.
Broadly, the episode focuses on how the absence of significant policy expertise in government and political parties means that social media—and Twitter in particular—can play an outsized role in influencing government, and why that’s the case.
While the discussion isn’t necessarily revelatory to anyone who has dealt with some elements of the government of Canada, and especially MPs and their younger staffers, it’s a good and tight conversation that could be useful for students of Canadian politics, and it also helpfully distinguishes some of the differences between Canadian and American political cultures. I found that the forthrightness of the conversation, and its honesty about how government operates, was particularly useful in clarifying why Twitter is, indeed, a place for experts in Canada to spend time if they want to be policy relevant.
Jason Healey and Robert Jervis have a thought-provoking piece over at the Modern War Institute at West Point. The crux of the argument is that, as a result of overclassification, it’s challenging if not impossible for policymakers or members of the public (to say nothing of individual analysts in the intelligence community or legislators) to truly understand the nature of contemporary cyber conflict. While there’s a great deal written about how Western organizations have been targeted by foreign operators, and how Western governments have been detrimentally affected by foreign operations, there is considerably less written about the effects of Western governments’ own operations against foreign states because those operations are classified.
To put it another way, there’s no real way of understanding the cause and effect of operations, insofar as it’s not apparent when foreign operators are behaving as they are in what may be a reaction to Western cyber operations, or to perceptions of Western cyber operations. The kinds of communiques provided by American intelligence officials, while somewhat helpful, also tend to obscure as much as they reveal (on good days). Healey and Jervis write:
General Nakasone and others are on solid ground when highlighting the many activities the United States does not conduct, like “stealing intellectual property” for commercial profit or disrupting the Olympic opening ceremonies. There is no moral equivalent between the most aggressive US cyber operations like Stuxnet and shutting down civilian electrical power in wintertime Ukraine or hacking a French television station and trying to pin the blame on Islamic State terrorists. But it clouds any case that the United States is the victim here to include such valid complaints alongside actions the United States does engage in, like geopolitical espionage. The concern of course is a growing positive feedback loop, with each side pursuing a more aggressive posture to impose costs after each fresh new insult by others, a posture that tempts adversaries to respond with their own, even more aggressive posture.
Making things worse, the researchers and academics who are ostensibly charged with better understanding and unpacking what Western intelligence agencies are up to sometimes decline to fulfill their mandate. The reasons are not surprising: engaging in such revelations threatens possible career prospects, endangers the very publication of the research in question, or risks cutting off access to interview subjects in the future. Healey and Jervis focus on the bizarre logics of working in, and researching, the intelligence community in the United States, saying (with emphasis added):
Think-tank staff and academic researchers in the United States often shy away from such material (with exceptions like Ben Buchanan) so as not to hamper their chances of a future security clearance. Even as senior researchers, we were careful not to directly quote NSA’s classified assessment of Iran, but rather paraphrased a derivative article.
A student, working in the Department of Defense, was not so lucky, telling us that to get through the department’s pre-publication review, their thesis would skip US offensive operations and instead focus on defense.
Such examples highlight the distorting effects of censorship or overclassification: authors are incentivized to avoid what patrons want ignored and emphasize what patrons want highlighted or what already exists in the public domain. In paper after paper over the decades, new historical truths are cumulatively established in line with patrons’ preferences because they control the flow and release of information.
What are the implications, as Healey and Jervis see them? In intelligence communities the size of the United States’, information gets lost or is not passed to whomever it should ideally reach. Overclassification also means that policymakers and legislators who aren’t deeply ‘in the know’ will likely make decisions based on half-founded facts, at best. In countries such as Canada, where parliamentary committees cannot access classified information, they will almost certainly be confined to working off of rumour, academic reports, unclassified government reports, media accounts that divulge secrets or gossip, and the words spoken by the heads of security and intelligence agencies. None of this is ideal for controlling these powerful organizations, and the selective presentation of what Western agencies are up to actually risks compounding broader social ills.
Legislative Ignorance and Law
One of the results of overclassification is that legislators, in particular, become ill-suited to actually understanding the national security legislation that is presented before them. It means that members of the intelligence and national security communities can call for powers while members of parliament are largely prevented from asking particularly insightful questions, or from truly appreciating the implications of the powers that are being asked for.
Indeed, in the Canadian context it’s not uncommon for parliamentarians to have debated a national security bill in committee for months and, when asked later about elements of the bill, they admit that they never really understood it in the first place. The same is true for Ministers who have, subsequently, signed off on broad classes of operations that have been authorized by said legislation.
Part of that lack of understanding is the absence of examples of how powers have been used in the past, and how they might be used in the future; when engaging with this material entirely in the abstract, it can be tough to grasp the likely or possible implications of any legislation or authorization that is at hand. This is doubly true in situations where new legislation or Ministerial authorization will permit secretive behaviour, often using secretive technologies, to accomplish equally secretive objectives.
Beyond potentially bad legislative debates leading to poorly understood legislation being passed into law and Ministers consenting to operations they don’t understand, what else may follow from overclassification?
Nationalism, Miscalculated Responses, and Racism
To begin with, it creates a situation where ‘we’ in the West are being attacked by ‘them’ in Russia, Iran, China, North Korea, or other distant lands. I think this is problematic because it casts Western nations, and especially those in the Five Eyes, as innocent victims in the broader world of cyber conflict. Of course, individuals with expertise in this space will scoff at the idea–we all know that ‘our side’ is up to tricks and operations as well!–but for the general public or legislators, that doesn’t get communicated using similarly robust or illustrative examples. The result is that the operations of competitor nations can be cast as acts of ‘cyberwar’ without any appreciation that those actions may, in fact, be commensurate with the operations that Five Eyes nations have themselves launched. In creating an Us versus Them, and casting the Five Eyes and West more broadly as victims, a kind of nationalism can be incited where ‘They’ are threats whereas ‘We’ are innocents. In a highly complex and integrated world, these kinds of sharp and inaccurate concepts can fuel hate and socially divisive attitudes, activities, and policies.
At the same time, nations may perceive themselves to be targeted by Five Eyes nations, and attribute effects to Five Eyes operations even when that isn’t the case. When a set of perimeter logs shows something strange, or when computers are affected by ransomware or wiperware, or another kind of security event takes place, these less-resourced nations may simply assume that they’re being targeted by a Five Eyes operation. The result is that foreign governments may drum up nationalist concerns about ‘the West’ or ‘the Five Eyes’ while simultaneously queuing up their own operations to respond to what may, in fact, have been an activity totally divorced from the Five Eyes.
I also worry that the overclassification problem can lead to statements in Western media that demonize broad swathes of the world as dangerous, bad, or threatening for reasons that are entirely unapparent because Western activities are suppressed from public commentary. Such statements arise with regular frequency, where this or that activity is attributed to China, or when Russia or Middle Eastern countries are blamed for the most recent ill on the Internet.
The effect of such statements can be to incite differential degrees of racism. When mainstream newspapers, as an example, constantly beat the drum that the Chinese government (and, by extension, Chinese people) are threats to the stability and development of national economies or to world stability, over time this has the effect of teaching people that China’s government and citizens alike are dangerous. Moreover, without information about Western activities, the operations conducted by foreign agencies can be read out of context, with the effect that people of certain ethnicities are regarded as inherently suspicious or sneaky as compared to those (principally white) persons who occupy the West. While I would never claim that the overclassification of Western intelligence operations is the root cause of racism in societies, I do believe that overclassification can fuel misinformation about the scope of geopolitics and Western intelligence gathering operations, with the consequence of facilitating certain subsequent racist attitudes.
A colleague of mine has, in the past, given presentations and taught small courses in some of Canada’s intelligence community. This colleague lacks any access to classified materials, and his classes focus on how much high quality information is publicly available when you know how and where to look for it, and how to analyze it. Students are apparently regularly shocked: they have access to the classified materials, but their understandings of the given issues are routinely more myopic and less robust than those built from open sources. However, because they have access to classified material they tend to focus as much, or more, on it because the secretive nature of the material makes it ‘special’.
This is not a unique issue and, in fact, has been raised in the academic literature. When someone has access to special or secret knowledge they are often inclined to focus on that material, on the assumption that it will provide insights in excess of what is available in open source. Sometimes that’s true, but oftentimes less so. And this ‘less so’ becomes especially problematic in an era where governments tend to classify a great deal of material simply because the default is to assume that anything could potentially be revelatory of an agency’s operations. In this kind of era, overvaluing classified materials can lead to less insightful understandings of the issues of the day, while failing to appreciate that much of what is classified, and thus cast as ‘special’, really doesn’t provide much of an edge when engaging in analysis.
The solution is not to declassify all materials but, instead, to adopt far more aggressive declassification processes. This could, as just an example, entail tying declassification in some way to organizations’ budgets, such that if they fail to declassify materials their budgets are realigned in subsequent quarters or years until they make up the prior year(s)’ shortfalls. Extending the powers of Information Commissioners, who are tasked with forcing government institutions to publish documents when they are requested by members of the public or parliamentarians (preferably subject to a more limited set of exemptions than exists today), might also help. And having review agencies which can unpack the higher-level workings of intelligence community organizations can help as well.
Ultimately, we need to appreciate that national security and intelligence organizations do not exist in a bubble, but that their mandates mean that the externalized problems linked with overclassification are typically not seen as issues that these organizations, themselves, need to solve. Nor, in many cases, will they want to solve them: it can be very handy to keep legislators in the dark and then ask for more powers, all while raising the spectre of the Other and concealing the organizations’ own activities.
We do need security and intelligence organizations, but as they stand today their tendency towards overclassification runs the risk of compounding a range of deleterious conditions. At least one way of ameliorating those conditions almost certainly includes reducing the amount of material that these agencies currently classify as secret and thus kept from public eye. On this point, I firmly agree with Healey and Jervis.
Mark Stenberg has a good assessment of the challenges facing Clubhouse, the newest ‘hot’ social media app, in which individuals hold real-time audio discussions with one another in rooms created on the platform. He suspects that Clubhouse may work best in quarantine:
A glimpse of Instagram brings a fleeting burst of serotonin, but a second’s worth of Clubhouse is meaningless. Will you then, at night, leave your family in the other room so you can pop your headphones in and listen to strangers swapping their valuable thoughts on the news of the day?
When commutes and daily life return, people will once again have a few parceled-off periods of the day in which they can listen to audio entertainment. If there are no good Clubhouse conversations at those exact times, the app is far less valuable than a podcast platform or music-streaming service. The very characteristic that makes it so appealing — its real-time nature — will make it challenging for listeners to fold it into their lives when reality returns.
Whether a real-time app that depends on relative quiet and available time, and which is unsuitable for multitasking, survives in its current form as people emerge from their relative isolation will be interesting to measure in real-time once vaccines are widely spread throughout society. But, equally interesting (to my mind) are the assumptions baked into that very question: why not just ask people (e.g., essential workers) who continue to commute en masse whether they are, or will be, using Clubhouse? Why not ask those who do not have particularly fungible or quiet lives at the moment (e.g., parents who are homeschooling younger children while working their day jobs) whether the app is compelling during quarantine periods?
To put it another way, the very framing of Clubhouse presupposes a number of affordances that really mostly pertain to a subset of relatively privileged members of society. It’s lovely that some tech workers, who work from home, and journalists who have similar lifestyles are interested in the app. But that doesn’t mean that it’ll broadly interest people, just as most people are dismissive of text-based social media applications (e.g., Twitter) and even visual-based apps (e.g., Instagram).
But, at the same time, this may not matter. If the founders are aiming to grow and sustain the existing platform, rather than to achieve typical Silicon Valley viral growth, then their presently suggested modes of deriving profits might work. Specifically, current proposals include “tipping, subscriptions, and ticketing” which, if adopted, could mean this is a social networking platform that doesn’t rely on the advertising or data brokerage models which have been adopted by most social media platforms and companies.
Will any of this work? Who knows. Most social media companies are here today, gone tomorrow, and I suspect that Clubhouse is in that category. But, at the same time, it’s worth thinking through who these kinds of apps are designed for so that we can appreciate the politics, privilege, and power which are imbued into the technologies which surround us and the ways that we talk about those technologies.