The Answer to Why Twitter Influences Canadian Politics

Elizabeth Dubois has a great episode of Wonks and War Rooms where she interviews Etienne Rainville, a former Hill staffer, government relations expert, and host of The Boys in Short Pants podcast. They unpack how government staffers collect information, process it, and identify experts.

Broadly, the episode focuses on how the absence of significant policy expertise in government and political parties means that social media—and Twitter in particular—can play an outsized role in influencing government, and why that’s the case.

While the discussion isn’t necessarily revelatory to anyone who has dealt with some elements of the Government of Canada, and especially with MPs and their younger staffers, it’s a good and tight conversation that could be useful for students of Canadian politics, and it also helpfully distinguishes some of the differences between Canadian and American political cultures. I found the forthrightness of the conversation, and its honesty about how government operates, particularly useful in clarifying why Twitter is, indeed, a place for experts in Canada to spend time if they want to be policy relevant.


Safe Streets and Systemic Racism

Sabat Ismail, writing at Spacing Toronto, interrogates who safe streets are meant to be safe for. North American calls for adopting Nordic models of urban cityscapes are often focused on redesigning streets for cycling whilst ignoring that Nordic safety models are borne out of broader conceptions of social equity. Given the broader (white) recognition of the violent threat that police can represent to Black Canadians, cycling organizations that principally advocate for safe streets must carefully think through how to make those streets safe, and appreciate why calls for greater law enforcement to protect non-automobile users may run counter to an equitable sense of safety. To this point, Ismail writes:

I recognize the ways that the safety of marginalized communities and particularly Black and Indigenous people is disregarded at every turn and that, in turn, we are often provided policing and enforcement as the only option to keep us safe. The options for “safety” presented provide a false choice – because we do not have the power to determine safety or to be imagined within its folds.

Redesigning streets without considering how the design of urban environments is rife with broader sets of values runs the very real risk of further systematizing racism while espousing values of freedom and equality. The values undergirding the concept of safe streets must be assessed by a diverse set of residents to understand what might equitably provide safety for all people; doing anything less will likely re-embed existing systems of power in urban design and law, to the ongoing detriment and harm of non-white inhabitants of North American cities.

Another Bad Proposal to Globally Weaken Security


Steven Levy has an article out in Wired this week in which he, via the people he interviewed, proclaims that the ‘going dark’ problem has been solved to the satisfaction of (American) government agencies and (unnamed and not quoted) ‘privacy purists’.1 Per the advocates of the so-called solution, if the proposed technical standard were advanced and developed then (American) government agencies could access encrypted materials while (American) users would enjoy the same degree of strong encryption as they do today. This would ‘solve’ the problem of (American) agencies’ investigations being stymied by suspects’ adoption of encrypted communications systems and personal devices.

Unfortunately, Levy got played: the proposal he dedicates his article to is just another attempt to advance a ‘solution’ that doesn’t address the real technical or policy problems associated with developing a global backdoor system for our most personal electronic devices. Specifically, the architect of the solution overestimates the existing security characteristics of contemporary devices,2 overestimates the ability of companies to successfully manage a sophisticated and globe-spanning key management system,3 fails to address international policy issues concerning why other governments couldn’t or wouldn’t demand similar kinds of access (think Russia, China, Iran, etc.),4 fails to contemplate an adequate key revocation system, and fails to adequately explain why the exceptional access system he envisions is genuinely needed. With regard to that last point, government agencies have access to more data than ever before in history and, yet, because they don’t have access to all of the data in existence, the agencies claim they are somehow being ‘blinded’.
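The key-management worry can be made concrete with a deliberately toy sketch. The code below is plain Python with a stand-in XOR ‘cipher’ rather than real cryptography, and it is not the specific architecture Levy’s sources propose; it only illustrates the structural point that once a vendor holds a single escrow master key that wraps every device’s storage key, compromising that one key unwraps them all.

```python
import hashlib
import os

def xor_cipher(key: bytes, data: bytes) -> bytes:
    """Toy stream cipher: XOR with a SHA-256-derived keystream.
    Illustration only; this is NOT secure cryptography."""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    # XOR is its own inverse, so the same function encrypts and decrypts.
    return bytes(a ^ b for a, b in zip(data, stream))

# One vendor-held escrow master key covers every device shipped.
escrow_master_key = os.urandom(32)

# Each device wraps its own storage key under the escrow key, so the
# vendor could later hand the wrapped key over in response to a warrant.
devices = {}
for device_id in ("phone-a", "phone-b", "phone-c"):
    device_key = os.urandom(32)
    devices[device_id] = {
        "wrapped_key": xor_cipher(escrow_master_key, device_key),
        "ciphertext": xor_cipher(device_key, b"secrets of " + device_id.encode()),
    }

# The structural problem: whoever obtains that single escrow key -- by
# theft, insider leak, or another state's legal compulsion -- can unwrap
# every device's key and decrypt every device's data.
for device_id, record in devices.items():
    stolen_key = xor_cipher(escrow_master_key, record["wrapped_key"])
    assert xor_cipher(stolen_key, record["ciphertext"]) == b"secrets of " + device_id.encode()
```

This single point of failure is also why the footnoted examples of Microsoft and Apple losing keys matter: a real deployment would concentrate exactly this kind of value in one place, at global scale.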

As I’ve written in a draft book chapter, for inclusion in a book to be published later this year or early next, the idea that government agencies are somehow worse off than in the past is pure nonsense. Consider that,

[a]s we have embraced the digital era in our personal and professional lives, [Law Enforcement and Security Agencies] LESAs have also developed new techniques and gained additional powers in order to keep pace as our memories have shifted from personal journals and filing cabinets to blogs, social media, and cloud hosting providers. LESAs now subscribe to services designed to monitor social media services for intelligence purposes, they collect bulk data from telecommunications providers in so-called ‘tower dumps’ of all the information stored by cellular towers, establish their own fake cellular towers to collect data from all parties proximate to such devices, use malware to intrude into either personal endpoint devices (e.g. mobile phones or laptops) or networking equipment (e.g. routers), and can even retroactively re-create our daily online activities with assistance from Canada’s signals intelligence agency. In the past, each of these kinds of activities would have required dozens or hundreds or thousands of government officials to painstakingly follow persons — many of whom might not be specifically suspected of engaging in a criminal activity or activity detrimental to the national security of Canada — and gain lawful entry to their personal safes, install cameras in their homes and offices, access and copy the contents of filing cabinets, and listen in on conversations that would otherwise have been private. So much of our lives have become digital that entirely new investigative opportunities have arisen which were previously restricted to the imaginations of science fiction authors both insofar as it is easier to access information but, also, because we generate and leave behind more information about our activities vis-a-vis our digital exhaust than was even possible in a world dominated by analog technologies.

In effect: the ‘solution’ covered by Levy doesn’t clearly articulate what problem must be solved, and it would end up generating more problems than it solves by significantly diminishing the security properties of devices while, simultaneously, raising international policy issues of which countries’ authorities, and under what conditions, could lawfully obtain decryption keys. Furthermore, companies and their decryption keys would suddenly become even more heavily targeted by advanced adversaries than they are today. Instead of even attempting to realistically account for these realities of developing and implementing secure systems, the proposed ‘solution’ depends on a magical pixie dust assumption that you can undermine the security of globally distributed products and have no bad things happen.5

The article as written by Levy (and the proposed solution at the root of the article) is exactly the kind of writing and proposal that gives law enforcement agencies the energy to drive a narrative that backdooring all secure systems is possible and that the academic, policy, and technical communities are merely ideologically opposed to doing so. As has become somewhat common to say, while we can land a person on the moon, that doesn’t mean we can also land a person on the sun; while we can build (somewhat) secure systems we cannot build (somewhat) secure systems that include deliberately inserted backdoors. Ultimately, it’s not the case that ‘privacy purists’ oppose such solutions to undermine the security of all devices on ideological grounds: they’re opposed based on decades of experience, training, and expertise that lets them recognize such solutions as the charades that they are.


  1. I am unaware of a single person in the American or international privacy advocacy space who was interviewed for the article, let alone espouses positions that would be pacified by the proposed solution.
  2. Consider that there is currently a way of bypassing the existing tamper-resistant chip in Apple’s iPhone, which is specifically designed to ‘short out’ the iPhone if someone attempts to enter an incorrect password too many times. A similar mechanism would ‘protect’ the master key that would be accessible to law enforcement and security agencies.
  3. Consider that Microsoft has, in the past, lost its master key that is used to validate copies of Windows as legitimate Microsoft-assured products and, also, that Apple managed to lose key parts of its iOS codebase and reportedly its signing key.
  4. Consider that foreign governments look at the laws promulgated by Western nations as justification for their own abusive and human rights-violating legislation and activities.
  5. Some of the more unhelpful security researchers just argue that if Apple et al. don’t want to help foreign governments open up locked devices they should just suspend all service into those jurisdictions. I’m not of the opinion that protectionism and nationalism are ways of advancing international human rights or of raising the qualities of life of all persons around the world; it’s not morally right to just cast the citizens of Russia, Ethiopia, China, India, Pakistan, or Mexico (and others!) to the wolves of their own oftentimes overzealous or rights abusing government agencies.

From Salon:

But here’s the thing. Mnuchin’s shameless posturing about the administration’s tax plans—at one point he even promised there would be “no absolute tax cut for the upper class,” which was a laugher given every proposal Trump had ever backed—points to a deeper problem. The man regularly says things that just aren’t true. He’s been claiming that there was an analysis underway. There wasn’t. And while a lot of people may roll their eyes about that in the context of a wonky tax debate, his complete lack of credibility is going to be a problem if we ever run into a serious economic or financial crisis. Just ask yourself: If the markets were crashing and Steve Mnuchin held a press conference assuring everybody that the administration had an action plan in the works, would you believe him? His complete detachment from reality has mostly been an infuriating sideshow during this tax push. If stuff ever really hits the fan, though, his reputation for fibbing is going to make things even worse. Just like someone else we know.


While policies may vary, the sensitive nature of the data produced does not. Traffic data analysis generates more sensitive profiles of an individual’s actions and intentions, arguably more so than communications content. In a communication with another individual, we say what we choose to share; in a transaction with another device, for example, search engines and cell stations, we are disclosing our actions, movements, and intentions. Technology-neutral policies continue to regard this transactional data as POTS traffic data, and accordingly apply inadequate protections.

This is not faithful to the spirit of updating laws for new technology. We need to acknowledge that changing technological environments transform the policy itself. New policies need to reflect the totality of the new environment.

* Alberto Escudero-Pascual and Ian Hosein, “Questioning Lawful Access to Traffic Data”

Prism threatens ‘sovereignty’ of all EU data

Caspar Bowden has been aggressively lobbying the EU Parliament over the implications of the FISA Amendments Act for some time. In short, the Act authorizes capturing data from ‘Electronic Communications Service Providers’ when the data possesses foreign intelligence value. The result is that business and personal information, in addition to information directly concerning ‘national security’, can be legitimately collected by the Agency. (For more, see pages 33-35 of this report.)

Caspar’s most recent article outlines the unwillingness of key members of the EU Parliament to take seriously the implications of American surveillance … until it ceases to be an issue for policy wonks and becomes one of politics. Still, the Parliament has yet to retract recent amendments that would detrimentally affect the privacy rights of European citizens: it will be interesting to see whether the politics of the issue reverse the parliamentarians’ decisions or whether lobbying by corporate interests wins the day.

Notes EM: Fiction vs reality


Tim Wu on my book:

Too much assault and battery creates a more serious problem: wrongful appropriation, as Morozov tends to borrow heavily, without attribution, from those he attacks. His critique of Google and other firms engaged in “algorithmic gatekeeping” is basically taken from Lessig’s first book, “Code and Other Laws of Cyberspace,” in which Lessig argued that technology is necessarily ideological and that choices embodied in code, unlike law, are dangerously insulated from political debate. Morozov presents these ideas as his own and, instead of crediting Lessig, bludgeons him repeatedly. Similarly, Morozov warns readers of the dangers of excessively perfect technologies as if Jonathan Zittrain hadn’t been saying the same thing for the past 10 years. His failure to credit his targets gives the misimpression that Morozov figured it all out himself and that everyone else is an idiot.

What my book actually says:

Alas, Internet-centrism prevents us from grasping many of these issues as clearly as we must. To their credit, Larry Lessig and Jonathan Zittrain have written extensively about digital preemption (and Lessig even touched on the future of civil disobedience). However, both of them, enthralled with the epochalist proclamations of Internet-centrism, seem to operate under the false assumption that digital preemption is mostly a new phenomenon that owes its existence to “the Internet,” e-books, and MP3 files. Code is law—but so are turnstiles. Lessig does note that buildings and architecture can and do regulate, but he makes little effort to explain whether the possible shift to code-based regulation is the product of unique contemporary circumstances or merely the continuation of various long-term trends in criminological thinking.

As Daniel Rosenthal notes in discussing the work of both Lessig and Zittrain, “Academics have sometimes portrayed digital preemption as an unfamiliar and novel prospect… In truth, digital preemption is less of a revolution than an extension of existing regulatory techniques.” In Zittrain’s case, his fascination with “the Internet” and its values of “openness” and “generativity,” as well as his belief that “the Internet” has important lessons to teach us, generates the kind of totalizing discourse that refuses to see that some attempts to work in the technological register might indeed be legitimate and do not necessarily lead to moral depravity.

One of the theoretical frames that I use in my dissertation is path dependency. Specifically, I consider whether early decisions with regard to Internet standards (small, early decisions) actually lead to systems that are challenging to significantly change after systems relying on those protocols are widely adopted (i.e. big, late decisions aren’t that influential). Once systems enjoy a network effect and see high levels of sunk capital, do they tend to be maintained even if something new comes along that is theoretically ‘superior’?

I mention this background in path dependency because a lot of the really interesting work in this field was written well before Lessig’s and Zittrain’s popular books (yes: there’s still excellent stuff being written today, but the core literature predates Lessig and Zittrain). There’s also an extensive literature in public policy, with one of the more popular works being Hood’s Tools of Government (1983), which outlines how detectors and effectors work for institutions. Hood’s work, in part, attends to how built infrastructure is used to facilitate governance; by transforming the world itself into a regulatory field (e.g. turnstiles, bridges and roads that possess particular driving characteristics, and so forth) the world becomes embedded with an aesthetic of regulation. This aesthetic can significantly ‘nudge’ the actions we choose to take. This thematic of ‘regulation by architecture’ is core to Lessig’s and Zittrain’s arguments, though there are no references to the core books or sources that really launched some of this work in the academy.

This said, while there are predecessors that Lessig and Zittrain probably ought to have spent more time writing about, such complaints are true of practically any book or work that is designed to be read by the public and policy makers and academics. The real ‘magic’ of Zittrain and Lessig (and Morozov!) is that their works speak to a wide audience: their books are not, I would argue, written just for academics. As a result, some of the nuance or specificity you’d expect in a $150 book that’s purchased by the other 10 specialists in your field is missing. And that’s ok.

Morozov’s key complaint, as I understand it, is that really important problems arise from how these authors’ books are perceived as what they are not. In other words, many people will not understand that many of the more populist books on ‘the Internet’ are being written by people with specific political intentions, who want their books to affect very particular public policy issues, and that, as a consequence, these books and other writings have to be read as political works instead of ‘dispassionate academic works’.* Their writings act as a kind of trojan horse through which particular ways of thinking of the world become ‘naturalized’, and the authors are ‘first’ to write on topics largely because of their skill in writing about the present while avoiding elongated literature reviews on the past.

I can appreciate Morozov’s concerns about how issues are framed through language, and about the (sometimes) sloppy thinking of these authors. And I can appreciate Morozov’s critics, who see him as being blunt and as often similarly failing to ‘show all of his work’. For the public, however, I hope that they don’t see the very public conflicts between Morozov and his colleagues as an academic dispute aired in public so much as an unmasking and contestation of divergent political conceptions of the Internet, and of literature more generally.


* I write this on the basis of having attended conferences with American legal scholars working in this area. Papers and reports are often written with specific members of federal sub-committees, Congressional and Senate assistants, or federal/state justices in mind. In effect, these authors are writing for people in power to change specific laws and policies. As such you should always hunt for what is ‘really going on’ when reading most popular American legal scholarship.



Lawyers are trained in reading, understanding, interpreting and advising on laws and legal compliance programs, and defending their clients from litigants and regulators. Privacy laws, everywhere in the world, are vague, so they leave much room for legal interpretations. The lawyers’ skill set is becoming more and more central to the role of privacy leadership. Moreover, lawyers benefit from attorney-client privileged communications internally, which is becoming an absolutely essential mechanism for privacy lawyers to have deep, unfettered, unfiltered exchanges of information and advice with their clients.

Of course, non-legal disciplines will always play an essential role in safeguarding privacy at companies, e.g., the vital role played by security engineers. Privacy will always be a cross-disciplinary project. I’m not saying that the rise of the lawyer-privacy-leader is necessarily the best thing for “privacy”. Yet in the face of rampant litigation, discovery orders, vague laws, political debates, regulatory actions, threats of billion dollar fines, companies will be looking to their privacy lawyers for a lot more than drafting a privacy policy. It’s a great profession, if you like stretch goals.

* Peter Fleischer, “Stretch Goals for Privacy Lawyers”

Privacy Policies Don’t Need to Be Obtuse

Peter Fleischer has a good summary piece on the (miserable) state of online privacy policies today. As he writes:

Today, privacy policies are being written to try to do two contradictory things.  Like most things in life, if you try to do two contradictory things at the same time, you end up doing neither well.  Here’s the contradiction:  should a privacy policy be a short, simple, readable notice that the average end-user could understand? Or should it be a long, detailed, legalistic disclosure document written for regulators?  Since average users and expert regulators have different expectations about what should be disclosed, the privacy policies in use today largely disappoint both groups.


The time has come for a global reflection on what, exactly, a privacy policy should look like.  Today, there is no consensus.  I don’t just mean consensus amongst regulators and lawyers.  My suggestion would be to start by doing some serious user-research, and actually ask Johnny and Jean and Johann.

I entirely, fully, wholeheartedly agree: most policies today are absolute garbage. I actually read a lot of them – and research on social media policies will be online and available soon! – and more often than not they are an elaborate act of obfuscation rather than something that explains, specifically and precisely, what a service does or is doing with the data it collects.

The thing is, these policies don’t need to be as bad as they are. It really is possible to bridge ‘accessible’ and ‘legalese’ but doing so takes time, care, and effort.

And fewer lawyers.

As a good example of how this can be done, check out how Tunnelbear has written its privacy policy: it’s reasonably accessible and lacks a lot of the ‘weasel phrases’ you’ll find in most privacy policies. Even better, read the company’s Terms of Service document; I cannot express how much ‘win’ is captured in its simultaneously legal and layperson disclosure of how and why the service functions as it does.