
Can University Faculty Hold Platforms To Account?

Heidi Tworek has a good piece with the Centre for International Governance Innovation, where she questions whether there will be a sufficient number of faculty in Canada (and elsewhere) to make use of information that digital-first companies might be compelled to make available to researchers. The general argument goes that if companies must make information available to academics then these academics can study the information and, subsequently, hold companies to account and guide evidence-based policymaking.

Tworek’s argument focuses on two key things.

  1. First, there has been a decline in the tenured professoriate in Canada, with the effect that the adjunct faculty who are ‘filling in’ are busy teaching and really don’t have a chance to lead research.
  2. While a vanishingly small number of PhD holders obtain a tenure track role, a reasonable number may be going into the very digital-first companies that researchers need data from to hold them accountable.

On this latter point, she writes:

If the companies have far more researchers than universities have, transparency regulations may not do as much to address the imbalance of knowledge as many expect.

I don’t think that hiring people with PhDs necessarily means that companies are addressing knowledge imbalances. Whatever is learned by these researchers tends to be sheltered within corporate walls and protected by NDAs. So those researchers going into companies may learn what’s going on but be unable (or unmotivated) to leverage what they know in order to inform policy discussions meant to hold companies to account.

To be clear, I really do agree with a lot in this article. However, I think it does have a few areas for further consideration.

First, more needs to be said about what, specifically, ‘transparency’ encompasses and how it relates to data types, availability, and so on. Transparency is a deeply contested concept and there are a lot of ways that the revelation of data creates a funhouse-of-mirrors effect, insofar as what researchers ‘see’ can be badly distorted from what is actually the case.

Second, making data available isn’t just about whether universities have the professors to do the work but, really, about whether the government and its regulators have the staff time as well. Professors are doing a lot of things, whereas regulators can assign staff to just work the data, day in and day out. Focus matters.

Third, and related, I have to admit that I have pretty severe doubts about the ability of professors to seriously take up and make use of information from platforms, at scale and with policy impact, because it’s never going to be their full-time job to do so. Professors are also going to be required to publish in books or journals, which means their outputs will be delayed and inaccessible to companies, government bureaucrats and regulators, and NGO staff. I’m sure academics will have lovely and insightful discussions… but they won’t happen fast enough, or in accessible places or in plain language, to generally affect policy debates.

So, what might need to be added to start fleshing out how universities could be organised to make use of data released by companies and to produce research outputs with policy impact?

First, universities in Canada would need to get truly serious about creating a ‘researcher class’ to analyse corporate reporting. This would involve prioritising the hiring of research associates and senior research associates who have few or no teaching responsibilities.1

Second, universities would need to work to create centres such as the Citizen Lab, or related groups.2 These don’t need to be organisations which try and cover the waterfront of all digital issues. They could, instead, be more focused to reduce the number of staff or fellows that are needed to fulfil the organisation’s mandate. Any and all centres of this type would see a small handful of people with PhDs (who largely lack teaching responsibilities) guide multidisciplinary teams of staff. Those same staff members would not typically need a PhD. They would need to be nimble enough to move quickly while using a peer-review-lite process to validate findings, but not see journal or book outputs as their primary currency for promotion or hiring.

Third, the centres would need a core group of long-term staffers. This core body of long-term researchers is needed to develop policy expertise that graduate students just don’t possess or develop in their short tenure in the university. Moreover, these same long-term researchers can then train graduate student fellows of the centres in question, with the effect of slowly building a cadre of researchers who are equipped to critically assess digital-first companies.

Fourth, the staff at research centres need to be paid well and properly. They cannot be regarded as ‘graduate student plus’ employees but as specialists who will be of interest to government and corporations. This means that the university will need to pay competitive wages in order to secure the staff needed to fulfil centre mandates.

Basically, if universities are to be successful in holding big data companies to account they’ll need to incubate quasi-NGOs and let them loose under the university’s auspices. It is, however, worth asking whether this should be the goal of the university in the first place: should society be outsourcing a large amount of the ‘transparency research’ that is designed to have policy impact or guide evidence-based policy making to academics, or should we instead bolster the capacities of government departments and regulatory agencies to undertake these activities?

Put differently, and in context with Tworek’s argument: I think that the assumption that PhD holders working as faculty in universities are the solution to analysing data released by corporations can only hold if you happen to (a) hold or aspire to hold a PhD; and (b) possess or aspire to possess a research-focused tenure-track job.

I don’t think that either (a) or (b) should guide the majority of the way forward in developing policy proposals as they pertain to holding corporations to account.

Do faculty have a role in holding companies such as Google, Facebook, Amazon, Apple, or Netflix to account? You bet. But if the university, and university researchers, are going to seriously get involved in using data released by companies to hold them to account and have policy impact, then I think we need dedicated and focused researchers. Faculty who are torn between teaching, writing and publishing in inaccessible venues using baroque theoretical lenses, pursuing funding opportunities, undertaking large amounts of department service, and supervising graduate students are just not going to be sufficient to address the task at hand.


  1. In the interests of disclosure, I currently hold one of these roles. ↩︎
  2. Again in the interests of disclosure, this is the kind of place I currently work at. ↩︎

Vulnerability Exploitability eXchange (VEX)

CISA recently published a neat bit of work entitled “Vulnerability Exploitability eXchange (VEX) – Status Justifications” (warning: opens a .pdf).1 Product security teams that adopt VEX could assert the status of specific vulnerabilities in their products. As a result, clients’ security staff could allocate time to remediate actionable vulnerabilities instead of burning time on potential vulnerabilities that product security teams have already closed off or mitigated.

There are a number of different machine-readable status justifications that are envisioned, including the following (a rough sketch of how one of these might appear in a VEX statement follows the list):

  • Component_not_present
  • Vulnerable_code_not_present
  • Vulnerable_code_cannot_be_controlled_by_adversary
  • Vulnerable_code_not_in_execute_path
  • Inline_mitigations_already_exist
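
To make this more concrete, below is a minimal sketch of what a machine-readable VEX-style assertion using one of these justifications might look like. The field names are illustrative (loosely modelled on emerging VEX formats such as OpenVEX) rather than taken from CISA’s document, and the product identifier is hypothetical; only the CVE is real.

```python
import json

# Illustrative sketch only: field names loosely follow emerging VEX formats
# (e.g., OpenVEX) and the product identifier is hypothetical. The justification
# value is one of the machine-readable labels listed above.
vex_document = {
    "author": "Example Vendor Product Security Team",
    "timestamp": "2022-06-27T00:00:00Z",
    "statements": [
        {
            "vulnerability": "CVE-2021-44228",  # Log4Shell, as a well-known example
            "products": ["pkg:example/widget-server@4.2.0"],
            "status": "not_affected",
            "justification": "Vulnerable_code_not_in_execute_path",
            "impact_statement": "log4j ships with the product but the JNDI lookup path is never invoked.",
        }
    ],
}

# A customer's security tooling could parse assertions like this and drop
# already-addressed findings from its remediation queue.
print(json.dumps(vex_document, indent=2))
```

The point is simply that a client’s tooling can filter on assertions like this and remove already-mitigated findings from its remediation queue.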

CISA’s publication spells out what each status entails in more depth and includes diagrams to help readers understand what is envisioned. However, those same readers need to pay attention to a key caveat, namely, “[t]his document will not address chained attacks involving future or unknown risks as it will be considered out of scope.” Put another way, VEX is used to assess known vulnerabilities and attacks. It should not be relied upon to predict potential threats based on not-yet-public attacks or new ways of chaining known vulnerabilities. Thus, while VEX would be useful for ascertaining whether a product is vulnerable to EternalBlue today, it would not have been useful for predicting or assessing the underlying vulnerabilities before EternalBlue was made public, nor would it be useful for assessing new or novel ways of exploiting those same vulnerabilities. In effect, then, VEX is meant to address the known risks associated with N-Days as opposed to risks linked with 0-Days or novel ways of exploiting N-Days.2

For VEX to work best there should be some kind of surrounding policy requirement, such that when or if a supplier falsely (as opposed to incorrectly) asserts the security properties of its product there is some disciplinary response. This can take many forms and perhaps the easiest relies on economics and not criminal sanction: federal governments or major companies will decline to do business with a vendor found to have issued a deceptive VEX, and may have financial recourse based on contractual terms with the product’s vendor. When or if this economic solution fails then it might be time to turn to legal venues and, if existing approaches prove insufficient, potentially even introduce new legislation designed to further discipline bad actors. However, as should be apparent, there isn’t a demonstrable requirement to introduce legislation to make VEX actionable.

I think that VEX continues the current American administration’s work of advancing a number of good policies that are meant to better secure products and systems. VEX works hand-in-hand with SBOMs and may also be supported by US Executive Orders around cybersecurity.

While Canada may be ‘behind’ the United States we can see that things are potentially shifting. There is currently a consultation underway to regenerate Canada’s cybersecurity strategy and infrastructure security legislation was introduced just prior to Parliament rising for its summer break. Perhaps, in a year’s time, we’ll see stronger and bolder efforts by the Canadian government to enhance infrastructure security with some small element of that recommending the adoption of VEXes. At the very least the government won’t be able to say they lack the legislative tools or strategic direction to do so.


  1. You can access a locally hosted version if the CISA link fails. ↩︎
  2. For a nice discussion of why N-Days are regularly more dangerous than 0-Days, see: “N-Days: The Overlooked Cyber Threat for Utilities.” ↩︎

Thoughts on Developing My Street Photography

(Dead Ends by Christopher Parsons)

For the past several years I’ve created a ‘best of’ album that collects the best photos I made over the year. I use the yearly album to assess how my photography has changed and what, if any, commonalities run across those images. The process of making these albums and then printing them forces me to look at my images, consider how they work against one another, and better understand what I learned over the course of a year of taking photos.

I have lots of favourite photographs but what I’ve learned most, at least over the past few years, is to ignore a lot of the information and ‘tips’ that are often shared about street photography. Note that the reason I ignore them is not that they are wrong per se, or that photographers shouldn’t adopt them, but that they don’t work for how I prefer to engage in street photography.

I Don’t Do ‘Stealth’ Photography

Probably the key tip that I generally set to the side is that you should be stealthy, sneaky, or otherwise hidden from the subjects in the photos that you capture. It’s pretty common for me to see a scene and wait with my camera to my eye until the right subjects enter the scene and are positioned where I want them in my frame. Sometimes that means that people will avoid me and the scene, and other times they’ll clearly indicate that they don’t want to have their photo taken. In these cases the subject is communicating their preferences quite clearly and I won’t take their photograph. It’s just an ethical line I don’t want to cross.

(Winter Troop by Christopher Parsons)

In yet other instances, my subjects will be looking right at me as they pass through the scene. They’re often somewhat curious. And in many situations they stop and ask me what I’m taking photos of, and then a short conversation follows. In an odd handful of situations they’ve asked me to send along an image I captured of them or a link to my photos; to date, I’ve had pretty few ‘bad’ encounters while shooting on the streets.

I Don’t Imitate Others

I’ve spent a lot of time learning about classic photographers over the past couple years. I’ve been particularly drawn to black and white street photography, in part because I think it often has a timeless character and because it forces me to more carefully think about positioning a subject so they stand out.

(Working Man by Christopher Parsons)

This being said, I don’t think that I’m directly imitating anyone else. I shoot with a set of focal ranges and periodically mix up the device I’m capturing images on; last year, the bulk of my favourite photos came from an intensive two-week photography vacation where I forced myself to walk extensively and just use an iPhone 12 Pro. Photos that I’m taking this year have largely been with a Fuji X100F and some custom jpg recipes that generally produce results that I appreciate.

Don’t get me wrong: in seeing some of the photos of the greats (and the less great and less well-known) I draw inspiration from the kinds of images they make, but I don’t think I’ve ever gone out to try and make images like theirs. This differs from when I started taking shots in my city, and when I wanted to make images that looked similar to the ‘popular’ shots I was seeing. I still appreciate those images but they’re not what I want to make these days.

I Create For Myself

While I don’t think that I’m alone in this, the images that I make are principally for myself. I share some of those images but, really, I just want to get out and walk through my environment. I find the process of slowing down to look for instances of interest and beauty helps ground me.

Because I tend to walk within the same 10-15km radius of my home, I have a pretty good sense of how neighbourhoods are changing. I can see my city changing on a week to week basis, and feel more in tune with what’s really happening based on my observations. My photography makes me very present in my surroundings.

(Dark Sides by Christopher Parsons)

I also tend to use my walks both to cover new ground and to go into back alleys, behind sheds, and generally into the corners of the city that are less apparent unless you’re looking for them. Much of the time there’s nothing particularly interesting to photograph in those spaces. But, sometimes, something novel or unique emerges.

Change Is Normal

For the past year or so, a large volume (95% or more) of my images have been black and white. That hasn’t always been the case! But I decided I wanted to lean into this mode of capturing images to develop a particular set of skills and get used to seeing—and visualizing—scenes and subjects monochromatically.

But my focus on black and white images, as well as images that predominantly include human subjects, is relatively new: if I look at my images from just a few years ago there was a lot of colour and a lot of stark, or empty, cityscapes. I don’t dislike those images and, in fact, several remain amongst my favourite images I’ve made to date. But I also don’t want to be constrained by one way of looking at the world. The world is too multifaceted, and there are too many ways of imagining it, to be stuck permanently in one way of capturing it.

(Alley Figures by Christopher Parsons)

This said, over time, I’d like to imagine I might develop a way of seeing the world and capturing images that provides a common visual language across my images. Though if that never happens I’m ok with that, so long as the very practice of photography continues to provide the dividends of better understanding my surroundings and feeling in tune with wherever I’m living at the time.

Hopes for WWDC 2022

(Judgement by Christopher Parsons)

Apple’s Worldwide Developers Conference starts tomorrow and we can all expect a bunch of updates to Apple’s operating systems and, if we’re lucky, some new hardware. In no particular order, here are some things I want updated in iOS applications and, ideally, that developers could hook into as well.

Photos

  • The ability to search photos by different cameras and/or focal lengths
  • The ability to select a point on a photo to set the white point for exposure balancing when editing photos
  • Better/faster sync across devices
  • Enable ability to edit geolocation
  • Enable tags in photos

Camera

  • Working (virtual) spirit level!
  • Set burst mode to activate by holding the shutter button; this was how things used to be and I want the option to go back to the way things were!
  • Advanced metering modes, such as the ability to set center, multi-zone, spot, and expose for highlights!
  • Set and forget auto-focus points in the frame; not focus lock, but focus zones
  • Zone focusing

Maps

  • Ability to collaborate on a guide
  • Option to select whose restaurant data is running underneath the app (I will never install Yelp, which is the current app linked in Maps)

Music

  • Ability to collaborate on a playlist
  • Have multiple libraries: I want one ‘primary’ or ‘all albums’ and others with selected albums. I do not want to just make playlists

Reminders

  • Speed up sync across shared reminders; this matters for things like shared grocery shopping!1
  • Integrate reminders’ date/time in calendar, as well as with whom reminders are shared

Messages

  • Emoji reactions
  • Integration with Giphy!

News

  • When I block a publication actually block it instead of giving me the option to see stories from publications I’ve blocked
  • It’d be great to see News updated so I can add my own RSS feeds

Fitness

  • Need ability to have off days; when sick or travelling or something it can be impossible to maintain streaks which is incredibly frustrating if you regularly live a semi-active life

Health

  • Show long-term data (e.g. year vs year vs year) in a user-friendly way; currently this requires third-party apps and should be default and native

Of course, I’d also love to see Apple announce a new MacBook Air. I need a new laptop but don’t want to get one that’s about to be deprecated and just don’t need the power of the MacBook Pro line. Here’s hoping Apple makes this announcement next week!


  1. In general I want iCloud to sync things a hella lot faster! ↩︎

Mitigating AI-Based Harms in National Security

Photo by Pixabay on Pexels.com

Government agencies throughout Canada are investigating how they might adopt and deploy ‘artificial intelligence’ programs to enhance how they provide services. In the case of national security and law enforcement agencies these programs might be used to analyze and exploit datasets, surface threats, identify risky travellers, or automatically respond to criminal or threat activities.

However, the predictive software systems that are being deployed–‘artificial intelligence’–are routinely shown to be biased. These biases are serious in the commercial sphere but there, at least, it is somewhat possible for researchers to detect and surface biases. In the secretive domain of national security, however, the likelihood of bias in agencies’ software being detected or surfaced by non-government parties is considerably lower.

I know that organizations such as the Canadian Security Intelligence Service (CSIS) have an interest in understanding how to use big data in ways that mitigate bias. The Canadian government does have a policy on the “Responsible use of artificial intelligence (AI)” and, at the municipal policing level, the Toronto Police Service has also published a policy on its use of artificial intelligence. Furthermore, the Office of the Privacy Commissioner of Canada has published a proposed regulatory framework for AI as part of potential reforms to federal privacy law.

Timnit Gebru, in conversation with Julia Angwin, suggests that there should be ‘datasheets for algorithms’ that would outline how predictive software systems have been tested for bias in different use cases prior to being deployed. Linking this to traditional circuit-based datasheets, she says (emphasis added):

As a circuit designer, you design certain components into your system, and these components are really idealized tools that you learn about in school that are always supposed to work perfectly. Of course, that’s not how they work in real life.

To account for this, there are standards that say, “You can use this component for railroads, because of x, y, and z,” and “You cannot use this component for life support systems, because it has all these qualities we’ve tested.” Before you design something into your system, you look at what’s called a datasheet for the component to inform your decision. In the world of AI, there is no information on what testing or auditing you did. You build the model and you just send it out into the world. This paper proposed that datasheets be published alongside datasets. The sheets are intended to help people make an informed decision about whether that dataset would work for a specific use case. There was also a follow-up paper called Model Cards for Model Reporting that I wrote with Meg Mitchell, my former co-lead at Google, which proposed that when you design a model, you need to specify the different tests you’ve conducted and the characteristics it has.

What I’ve realized is that when you’re in an institution, and you’re recommending that instead of hiring one person, you need five people to create the model card and the datasheet, and instead of putting out a product in a month, you should actually do it in three years, it’s not going to happen. I can write all the papers I want, but it’s just not going to happen. I’m constantly grappling with the incentive structure of this industry. We can write all the papers we want, but if we don’t change the incentives of the tech industry, nothing is going to change. That is why we need regulation.

Government is one of those areas where regulation or law can work well to discipline its behaviours, and where the relatively large volume of resources combined with a law-abiding bureaucracy might mean that formally required assessments would actually be conducted. While such assessments matter, generally, they are of particular importance where state agencies might be involved in making decisions that significantly or permanently alter the life chances of residents of Canada, visitors who are passing through our borders, or foreign nationals who are interacting with our government agencies.

As it stands, today, many Canadian government efforts at the federal, provincial, or municipal level seem to be significantly focused on how predictive software might be used or the effects it may have. These are important things to attend to! But it is just as, if not more, important for agencies to undertake baseline assessments of how and when different predictive software engines are permissible or not, as based on robust testing and evaluation of their features and flaws.
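
As a rough illustration of what that kind of baseline documentation could capture, here is a minimal sketch of a model card-style record. The field names and values are hypothetical, loosely inspired by the ‘Model Cards for Model Reporting’ proposal quoted above, rather than any standardized government schema.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class ModelCard:
    """Hypothetical model card-style record, loosely inspired by
    'Model Cards for Model Reporting'; not a standardized schema."""
    model_name: str
    intended_uses: List[str]
    out_of_scope_uses: List[str]   # uses the system was NOT evaluated for
    evaluation_data: str           # description of the test data and how it was audited
    subgroup_metrics: Dict[str, float] = field(default_factory=dict)  # error rates by subgroup
    caveats: List[str] = field(default_factory=list)

# All values below are invented for illustration.
card = ModelCard(
    model_name="traveller-screening-triage-v0",
    intended_uses=["flag files for secondary human review"],
    out_of_scope_uses=["automated decisions without human review"],
    evaluation_data="held-out 2019-2021 records, audited for label quality",
    subgroup_metrics={
        "false_positive_rate/group_a": 0.04,
        "false_positive_rate/group_b": 0.11,
    },
    caveats=["false positive rates differ markedly across subgroups"],
)
print(card)
```

A record like this makes it harder for an agency to adopt a system without first confronting how it performs for different groups of people.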

Having spoken with people at different levels of government, the recurring complaint around assessing training data, and predictive software systems more generally, is that it’s hard to hire the right people for these assessment jobs because such specialists are relatively rare and often exceedingly expensive. Thus, mid-level and senior members of government have a tendency to focus on things that government is perceived as actually able to do: figure out and track how predictive systems would be used and to what effect.

However, the regular focus on the resource-related challenges of predictive software assessment raises the very real question of whether these constraints should just compel agencies to forgo technologies on the basis of failing to determine, and assess, their prospective harms. In the firearms space, as an example, government agencies are extremely rigorous in assessing how a weapon operates to ensure that it functions precisely as intended, given that the weapon might be used in life-changing scenarios. Such assessments require significant sums of money from agency budgets.

If we can make significant budgetary allocations for firearms, on the grounds they can have life-altering consequences for all involved in their use, then why can’t we do the same for predictive software systems? If anything, such allocations would compel agencies to make a strong(er) business case for testing the predictive systems in question and spur further accountability: Does the system work? At a reasonable cost? With acceptable outcomes?

Imposing cost discipline on organizations is an important way of ensuring that technologies, and other business processes, aren’t randomly adopted on the basis of externalizing their full costs. By internalizing those costs, up front, organizations may need to be much more careful in what they choose to adopt, when, and for what purpose. The outcome of this introspection and assessment would, hopefully, be that the harmful effects of predictive software systems in the national security space were mitigated and the systems which were adopted actually fulfilled the purposes they were acquired to address.


A Brief Unpacking of a Declaration on the Future of the Internet

Cameron F. Kerry has a helpful piece in Brookings that unpacks the recently published ‘Declaration on the Future of the Internet.’ As he explains, the Declaration was signed by 60 States and is meant, in part, to rebut a China-Russia joint statement. Those countries’ statement would support their positions on ‘securing’ domestic Internet spaces and removing Internet governance from multi-stakeholder forums to State-centric ones.

So far, so good. However, baked into Kerry’s article is language suggesting that either he misunderstands, or understates, some of the security-related elements of the Declaration. He writes:

There are additional steps the U.S. government can take that are more within its control than the actions and policies of foreign states or international organizations. The future of the Internet declaration contains a series of supporting principles and measures on freedom and human rights, Internet governance and access, and trust in use of digital network technology. The latter—trust in the use of network technology— is included to “ensure that government and relevant authorities’ access to personal data is based in law and conducted in accordance with international human rights law” and to “protect individuals’ privacy, their personal data, the confidentiality of electronic communications and information on end-users’ electronic devices, consistent with the protection of public safety and applicable domestic and international law.” These lay down a pair of markers for the U.S. to redeem.

I read this, against the 2019 Ministerial and recent Council of Europe Cybercrime Convention updates, and see that a vast swathe of new law enforcement and security agency powers would be entirely permissible based on Kerry’s assessment of the Declaration and States involved in signing it. While these new powers have either been agreed to, or advanced by, signatory States they have simultaneously been directly opposed by civil and human rights campaigners, as well as some national courts. Specifically, there are live discussions around the following powers:

  • the availability of strong encryption;
  • the guarantee that the content of communications sent using end-to-end encrypted devices cannot be accessed or analyzed by third-parties (including by on-device surveillance);
  • the requirement of prior judicial authorization to obtain subscriber information; and
  • the oversight of preservation and production powers by relevant national judicial bodies.

Laws can be passed that see law enforcement interests supersede individuals’ or communities’ rights in safeguarding their devices, data, and communications from the State. When or if such a situation occurs, the signatories of the Declaration can hold fast to their flowery language around protecting rights while, at the same time, individuals and communities experience heightened surveillance of, and intrusions into, their daily lives.

In effect, a lot of international policy and legal infrastructure has been built to facilitate sweeping new investigatory powers and reforms to how data is, and can be, secured. It has taken years to build this infrastructure and as we leave the current stage of the global pandemic it is apparent that governments have continued to press ahead with their efforts to expand the powers which could be provided to law enforcement and security agencies, notwithstanding the efforts of civil and human rights campaigners around the world.

The next stage of things will be to assess how, and in what ways, international agreements and legal infrastructure will be brought into national legal systems and to determine where to strategically oppose the worst of the overreaches. While it’s possible that some successes are achieved in resisting the expansions of state powers, not everything will be resisted. The consequence will be both to enhance state intrusions into private lives as well as to weaken the security provided to devices and data, with the resultant effect of better enabling criminals to illicitly access or manipulate our personal information.

The new world of enhanced surveillance and intrusions is wholly consistent with the ‘Declaration on the Future of the Internet.’ And that’s a big, glaring, and serious problem with the Declaration.


The Broader Implications of Data Breaches

Ikea Canada notified approximately 95,000 Canadian customers in recent weeks about a data breach the company has suffered. An Ikea employee conducted a series of searches between March 1 and March 3 which surfaced the account records of the aforementioned customers.1

While Ikea promised that financial information–credit card and banking information–hadn’t been revealed, a raft of other personal information had been. That information included:

  • full first and last name;
  • postal code or home address;
  • phone number and other contact information;
  • IKEA loyalty number.

Ikea did not disclose who specifically accessed the information nor their motivations for doing so.

The notice provided by Ikea was better than most data breach alerts insofar as it informed customers what exactly had been accessed. For some individuals, however, this information is highly revelatory and could cause significant concern.

For example, imagine a case where someone has previously been the victim of either physical or digital stalking. Should their former stalker be an Ikea employee, the data breach victim may ask whether their stalker now has confidential information that can be used to renew, or further amplify, harmful activities. With the customer information in hand, as an example, it would be relatively easy for a stalker to obtain more information such as where precisely someone lived. If they are aggrieved then they could also use the information to engage in digital harassment or threatening behaviour.

Without more information about the motivations behind the Ikea employee’s searches, those who have been stalked by, or had an abusive relationship with, an Ikea employee might be driven to think about changing how they live their lives. They might feel the need to change their safety habits, get new phone numbers, or cycle to a new email. In a worst case scenario they might contemplate vacating their residence for a time. Even if they do not take any of these actions they might experience a heightened sense of unease or anxiety.

Of course, Ikea is far from alone in suffering these kinds of breaches. They happen on an almost daily basis for most of us, whether we’re alerted to the breach or not. Many news reports about such breaches focus on whether there is an existent or impending financial harm and stop the story there. The result is that journalistic reporting can conceal some of the broader harms linked with data breaches.

Imagine a world where our personal information–how to call us or find our homes–was protected in the same way that our credit card numbers are currently protected. In such a world stalkers and other abusive actors might be less able to exploit stolen or inappropriately accessed information. Yes, there will always be ways by which bad actors can operate badly, but it would be possible to mitigate some of the ways this badness can take place.

Companies could still create meaningful consent frameworks whereby some (perhaps most!) individuals could agree to have their information stored by the company. But those who have a different risk threshold could make a meaningful choice, so they could still make purchases and receive deliveries without, at the same time, permanently increasing the risk that their information might fall into the wrong hands. However, getting to this point requires expanded threat modelling: we can’t just worry about a bad credit card purchase but, instead, need to take seriously the gendered and intersectional nature of violence and its intersection with cybersecurity practices.


  1. In the interests of disclosure, I was contacted as an affected party by Ikea Canada. ↩︎

Messaging Interoperability and Client Security

Eric Rescorla has a thoughtful and nuanced assessment of recent EU proposals which would compel messaging companies to make their communications services interoperable. To his immense credit he spends time walking the reader through historical and contemporary messaging systems in order to assess the security issues prospectively associated with requiring interoperability. It’s a very good, and compact, read on a dense and challenging subject.

I must admit, however, that I’m unconvinced that demanding interoperability will have only minimal security implications. While much of the expert commentary has focused on whether end-to-end encryption would be compromised I think that too little time has been spent considering the client-end side of interoperable communications. So if we assume it’s possible to facilitate end-to-end communications across messaging companies and focus just on clients receiving/sending communications, what are some risks?1

As it stands, today, the dominant messaging companies have large and professional security teams. While none of these teams are perfect, as shown by the success of cyber mercenary companies such as NSO Group et al, they are robust and constantly working to improve the security of their products. The attacks used by groups such as NSO, Hacking Team, Candiru, and FinFisher have not tended to rely on breaking encryption. Rather, they have sought vulnerabilities in client devices. Due to sandboxing and contemporary OS security practices this has regularly meant successfully targeting a messaging application and, subsequently, expanding a foothold on the device more generally.

In order for interoperability to ‘work’ properly there will need to be a number of preconditions. As noted in Rescorla’s post, this may include checking what functions an interoperable client possesses to determine whether ‘standard’ or ‘enriched’ client services are available. Moreover, APIs will need to be (relatively) stable or rely on a standardized protocol to facilitate interoperability. Finally, while spam messages are annoying on messaging applications today, they may become even more commonplace where interoperability is required and service providers cannot use their current processes to filter/quality check messages transiting their infrastructure.

What do all the aforementioned elements mean for client security?

  1. Checking for client functionality may reveal whether a targeted client possesses known vulnerabilities, either generally (following a patch update) or just to the exploit vendor (where they know of a vulnerability and are actively exploiting it). Where spam filtering is not great, exploit vendors can use spam messages for this kind of reconnaissance, with the service provider, client vendor, or client applications not necessarily being aware of the threat activity (a toy sketch of capability-based reconnaissance follows this list).
  2. When or if there is a significant need to rework how keying operates, or how identity properties more broadly are linked to an API, then there is a risk that the implementation of updates may be delayed until the revisions have had time to be adopted by clients. While this might be great for competition vis-a-vis interoperability, it will also have the effect of signalling an oncoming change to threat actors, who may accelerate activities to get footholds on devices, or it may warn these actors that they, too, need to update their tactics, techniques, and procedures (TTPs).
  3. As a more general point, threat actors might work to develop and propagate interoperable clients that they have already compromised–we’ve previously seen nation-state actors do so and there’s no reason to expect this behaviour to stop in a world of interoperable clients. Alternately, threat actors might try and convince targets to move to ‘better’ clients that contain known vulnerabilities but which are developed and made available by legitimate vendors. Whereas, today, an exploit developer must target the specific client that delivers a given messaging system’s messages, a future world of interoperable messaging will likely expand the clients that threat actors can seek to exploit.
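
To illustrate the first risk above, here is a toy sketch of how capability discovery could double as reconnaissance. Everything in it is invented for the example: the feature names, the version inference, and the table of weaknesses do not correspond to any real client or vulnerability.

```python
# Toy sketch: feature names, inferred versions, and the weakness table
# below are all invented for illustration.
KNOWN_WEAKNESSES = {
    "chatclient/1.4": ["hypothetical media-parser bug"],
    "chatclient/1.5": [],  # patched build
}

def infer_client_build(advertised_features: set) -> str:
    """Guess which client build is on the other end from the optional
    features it advertises during an interoperability handshake."""
    if "reactions" in advertised_features and "stickers" not in advertised_features:
        return "chatclient/1.4"
    return "chatclient/1.5"

def recon(advertised_features: set) -> list:
    """Weaknesses an attacker might infer before sending a single exploit."""
    return KNOWN_WEAKNESSES.get(infer_client_build(advertised_features), [])

# Spam or 'probe' messages that trigger capability discovery could feed
# exactly this kind of lookup for an exploit vendor.
print(recon({"reactions"}))  # -> ['hypothetical media-parser bug']
```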

One of the severe dangers and challenges facing the current internet regulation landscape has been that a large volume of new actors have entered the various overlapping policy fields. For a long time there haven’t been that many of us, and anyone who’s been around for 10-15 years tends to be sufficiently multidisciplinary that they think about how activities in policy domain X might/will have consequences for domains Y and Z. The new raft of politicians and their policy advisors, in contrast, often lack this broad awareness. The result is that proposals are being advanced around the world by ostensibly well-meaning individuals and groups to address issues associated with online harms, speech, CSAM, competition, and security. However, these same parties often lack awareness of how the solutions meant to solve their favoured policy problems will have effects on neighbouring policy issues. And, where they are aware, they often don’t care because that’s someone else’s policy domain.

It’s good to see more people participating and more inclusive policy making processes. And seeing actual political action on many issue areas after 10 years of people debating how to move forward is exciting. But too much of that action runs counter to the thoughtful warnings and need for caution that longer-term policy experts have been raising for over a decade.

We are almost certainly moving towards a ‘new Internet’. It remains in question, however, whether this ‘new Internet’ will see resolutions to longstanding challenges or if, instead, the rush to regulate will change the landscape by finally bringing to life the threats that long-term policy wonks have been working to forestall or prevent for much of their working lives. To date, I remain increasingly concerned that we will experience the latter rather than witness the former.


  1. For the record, I currently remain unconvinced it is possible to implement end-to-end encryption across platforms generally. ↩︎

The Risks Linked With Canadian Cyber Operations in Ukraine

Photo by Sora Shimazaki on Pexels.com

Late last month, Global News published a story on how the Canadian government is involved in providing cyber support to the Ukrainian government in the face of Russia’s illegal invasion. While the Canadian military declined to confirm or deny any activities they might be involved in, the same was not true of the Communications Security Establishment (CSE). The CSE is Canada’s foreign signals intelligence agency. In addition to collecting intelligence, it is also mandated to defend Canadian federal systems and those designated as of importance to the government of Canada, provide assistance to other federal agencies, and conduct active and defensive cyber operations.1

From the Global News article it is apparent that the CSE is involved in foreign intelligence operations as well as in defensive cyber activities. Frankly, these kinds of activities are generally, and persistently, undertaken with regard to the Russian government, and so it’s not a surprise that they continue apace.

The CSE spokesperson also noted that the government agency is involved in ‘cyber operations’ though declined to explain whether these are defensive cyber operations or active cyber operations. In the case of the former, the Minister of National Defence must consult with the Minister of Foreign Affairs before authorizing an operation, whereas in the latter both Ministers must consent to an operation prior to it taking place. Defensive and active operations can assume the same form–roughly the same activities or operations might be undertaken–but the rationale for the activity being taken may vary based on whether it is cast as defensive or active (i.e., offensive).2

These kinds of cyber operations are the ones that most worry scholars and practitioners, on the basis that foreign operators or adversaries may misread a signal from a cyber operation or because the operation might have unintended consequences. Thus, the operations that the CSE is undertaking run the risk of accidentally (or intentionally, I guess) escalating affairs between Canada and the Russian Federation in the midst of the shooting war between Russian and Ukrainian forces.

While there is, of course, a need for some operational discretion on the part of the Canadian government it is also imperative that the Canadian public be sufficiently aware of the government’s activities to understand the risks (or lack thereof) which are linked to the activities that Canadian agencies are undertaking. To date, the Canadian government has not released its cyber foreign policy doctrine nor has the Canadian Armed Forces released its cyber doctrine.3 The result is that neither Canadians nor Canada’s allies or adversaries know precisely what Canada will do in the cyber domain, how Canada will react when confronted, or the precise nature of Canada’s escalatory ladder. The government’s secrecy runs the risk of putting Canadians in greater jeopardy of a response from the Russian Federation (or other adversaries) without the Canadian public really understanding what strategic or tactical activities might be undertaken on their behalf.

Canadians have a right to know at least enough about what their government is doing to be able to begin assessing the risks linked with conducting operations during an active military conflict against an adversary with nuclear weapons. Thus far such information has not been provided. The result is that Canadians are ill-prepared to assess the risk that they may be quietly and quickly drawn into the conflict between the Russian Federation and Ukraine. Such secrecy bodes poorly for being able to hold government to account, to say nothing of how it prevents Canadians from appreciating the risk that they could become deeply drawn into a very hot conflict scenario.


  1. For more on the CSE and the laws governing its activities, see “A Deep Dive into Canada’s Overhaul of Its Foreign Intelligence and Cybersecurity Laws.” ↩︎
  2. For more on this, see “Analysis of the Communications Security Establishment Act and Related Provisions in Bill C-59 (An Act respecting national security matters), First Reading (December 18, 2017)”, pp 27-32. ↩︎
  3. Not for lack of trying to access them, however, as in both cases I filed access to information requests with the government for these documents a year ago, with delays expected to mean I won’t get the documents before the end of 2022 at best. ↩︎

Policing the Location Industry

Photo by Ingo Joseph on Pexels.com

The Markup has a comprehensive and disturbing article on how location information is acquired by third-parties despite efforts by Apple and Google to restrict the availability of this information. In the past, it was common for third-parties to provide SDKs to application developers. The SDKs would inconspicuously transfer location information to those third-parties while also enabling functionality for application developers. With restrictions being put in place by platforms such as Apple and Google, however, it’s now becoming common for application developers to initiate requests for location information themselves and then share it directly with third-party data collectors.
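
As a rough sketch of the data flow The Markup describes, the snippet below shows the developer-initiated pattern: the application itself obtains a location fix from the operating system and forwards it to a data broker. The endpoint, identifiers, and function names are all hypothetical.

```python
import json
import urllib.request

# Hypothetical endpoint standing in for a third-party location data collector.
COLLECTOR_URL = "https://collector.example.com/v1/locations"

def on_location_update(device_id: str, lat: float, lon: float) -> None:
    """Called by the app whenever the OS delivers a location fix.

    Because the app itself (rather than an embedded SDK) makes the request,
    platform-level SDK scanning has nothing obvious to detect.
    """
    payload = json.dumps({"device": device_id, "lat": lat, "lon": lon}).encode()
    request = urllib.request.Request(
        COLLECTOR_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(request)  # location leaves the device with ordinary app traffic

# Example (not executed here): forwarding a single, invented fix.
# on_location_update("advertising-id-1234", 43.6532, -79.3832)
```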

While such activities often violate the terms of service and policy agreements between platforms and application developers, it can be challenging for the platforms to actually detect these violations and subsequently enforce their rules.

Broadly, the issues at play represent significant governmental regulatory failures. The fact that government agencies often benefit from the secretive collection of individuals’ location information makes it that much harder for governments to muster the will to discipline the secretive collection of personal data by third-parties: if the government cuts off the flow of location information, it will impede the ability of governments themselves to obtain this information.

In some cases intelligence and security services obtain location information from third-parties. This sometimes occurs in situations where the services themselves are legally barred from directly collecting this information. Companies selling mobility information can let government agencies do an end-run around the law.

One of the results is that efforts to limit data collectors’ ability to capture personal information often see parts of government push for carve-outs to collecting, selling, and using location information. In Canada, as an example, the government has adopted a legal position that it can collect locational information so long as it is de-identified or anonymized,1 and for the security and intelligence services there are laws on the books that permit the collection of commercially available open source information. This open source information does not need to be anonymized prior to acquisition.2 Lest you think that it sounds paranoid that intelligence services might be interested in location information, consider that American agencies collected bulk location information pertaining to Muslims from third-party location information data brokers and that the Five Eyes historically targeted popular applications such as Google Maps and Angry Birds to obtain location information as well as other metadata and content. As the former head of the NSA announced several years ago, “We kill people based on metadata.”

Any argument made by either private or public organizations that anonymization or de-identification of location information makes it acceptable to collect, use, or disclose generally relies on tricking customers and citizens. Why is this? Because even when location information is aggregated and ‘anonymized’ it might subsequently be re-identified. And in situations where that reversal doesn’t occur, policy decisions can still be made based on the aggregated information. The process of deriving these insights and applying them showcases that while privacy is an important right to protect, it is not the only right that is implicated in the collection and use of locational information. Indeed, it is important to assess the proportionality and necessity of the collection and use, as well as how the associated activities affect individuals’ and communities’ equity and autonomy in society. Doing anything less is merely privacy-washing.
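
A toy example helps show why ‘anonymized’ traces are so readily reversed: a trace that repeatedly pings one location overnight and another during working hours can be joined against any identified dataset of home and work addresses. All names and coordinates below are invented.

```python
# Toy re-identification sketch: every name and coordinate is invented.
# An "anonymous" trace reduced to its most frequent night and day locations.
anonymous_trace = {"night": (45.421, -75.697), "day": (45.424, -75.699)}

# A separate, identified dataset (e.g., marketing records) rounded to the
# same precision as the trace.
identified_people = {
    "person_a": {"home": (45.421, -75.697), "work": (45.424, -75.699)},
    "person_b": {"home": (45.501, -73.567), "work": (45.508, -73.554)},
}

def reidentify(trace, people):
    """Return everyone whose home/work pair matches the trace's night/day pair."""
    return [
        name for name, locations in people.items()
        if locations["home"] == trace["night"] and locations["work"] == trace["day"]
    ]

print(reidentify(anonymous_trace, identified_people))  # -> ['person_a']
```

With only two frequently visited places, a single match is often enough to undo the ‘anonymization’.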

Throughout discussions about data collection, including as it pertains to location information, public agencies and companies alike tend to provide a pair of arguments against changing the status quo. First, they assert that consent isn’t really possible anymore given the volumes of data which are collected on a daily basis from individuals; individuals would be overwhelmed with consent requests! Thus we can’t make the requests in the first place! Second, that we can’t regulate the collection of this data because doing so risks impeding innovation in the data economy.

If those arguments sound familiar, they should. They’re very similar to the plays made by industry groups whose activities have historically had negative environmental consequences. These groups regularly assert that, after decades of poor or middling environmental regulation, any new, stronger regulations would unduly impede the existing dirty economy for power, services, goods, and so forth. Moreover, the dirty way of creating power, services, and goods is just how things are and thus should remain the same.

In both the privacy and environmental worlds, corporate actors (and those whom they sell data/goods to) have benefitted from not having to pay the full cost of acquiring data without meaningful consent or accounting for the environmental cost of their activities. But, just as we demand enhanced environmental regulations to regulate and address the harms industry causes to the environment, we should demand and expect the same when it comes to the personal data economy.

If a business is predicated on sneaking personal information away from individuals then it is clearly not particularly interested or invested in being ethical towards consumers. It’s imperative to continue pushing legislators to not just recognize that such practices are unethical, but to make them illegal as well. Doing so will require being heard over the cries of government agencies that have vested interests in obtaining location information in ways that skirt the laws that might normally discipline such collection, as well as companies that have grown as a result of their unethical data collection practices. While this will not be an easy task, it’s increasingly important given the limits of platforms’ ability to regulate the sneaky collection of this information and the increasingly problematic ways our personal data can be weaponized against us.


  1. “PHAC advised that since the information had been de-identified and aggregated, it believed the activity did not engage the Privacy Act as it was not collecting or using ‘personal information’.” ↩︎
  2. See, as an example, Section 23 of the CSE Act. ↩︎