
The Broader Implications of Data Breaches

Ikea Canada notified approximately 95,000 Canadian customers in recent weeks about a data breach that the company suffered. An Ikea employee conducted a series of searches between March 1 and March 3 which surfaced the account records of the aforementioned customers.1

While Ikea promised that financial information (credit card and banking details) hadn't been revealed, a raft of other personal information had been. That information included:

  • full first and last name;
  • postal code or home address;
  • phone number and other contact information;
  • IKEA loyalty number.

Ikea did not disclose who specifically accessed the information nor their motivations for doing so.

The notice provided by Ikea was better than most data breach alerts insofar as it informed customers what exactly had been accessed. For some individuals, however, this information is highly revelatory and could cause significant concern.

For example, imagine a case where someone has previously been the victim of either physical or digital stalking. Should their former stalker be an Ikea employee, the data breach victim may ask whether their stalker now has confidential information that can be used to renew, or further amplify, harmful activities. With the customer information in hand, for example, it would be relatively easy for a stalker to obtain further details, such as precisely where someone lives. If aggrieved, they could also use the information to engage in digital harassment or threatening behaviour.

Without more information about why the Ikea employee searched the database, those who have been stalked by, or had an abusive relationship with, an Ikea employee might be driven to rethink how they live their lives. They might feel the need to change their safety habits, get new phone numbers, or cycle to a new email address. In a worst-case scenario they might contemplate vacating their residence for a time. Even if they do not take any of these actions they might experience a heightened sense of unease or anxiety.

Of course, Ikea is far from alone in suffering these kinds of breaches. They happen on an almost daily basis for most of us, whether we're alerted to the breach or not. Many news reports about such breaches focus on whether there is an existing or impending financial harm and stop the story there. The result is that journalistic reporting can conceal some of the broader harms linked with data breaches.

Imagine a world where our personal information (how to call us or find our homes) was protected as rigorously as our credit card numbers currently are. In such a world stalkers and other abusive actors might be less able to exploit stolen or inappropriately accessed information. Yes, there will always be ways for bad actors to behave badly, but it would be possible to mitigate some of the ways this badness can take place.

Companies could still create meaningful consent frameworks whereby some (perhaps most!) individuals could agree to have their information stored by the company. But those with a different risk threshold could make a meaningful choice of their own: they could still make purchases and receive deliveries without permanently increasing the risk that their information might fall into the wrong hands. Getting to this point, however, requires expanded threat modelling: we can't just worry about a bad credit card purchase but, instead, would need to take seriously the gendered and intersectional nature of violence and its intersection with cybersecurity practices.


  1. In the interests of disclosure, I was contacted as an affected party by Ikea Canada. ↩︎

Messaging Interoperability and Client Security

Eric Rescorla has a thoughtful and nuanced assessment of recent EU proposals which would compel messaging companies to make their communications services interoperable. To his immense credit he spends time walking the reader through historical and contemporary messaging systems in order to assess the security issues prospectively associated with requiring interoperability. It’s a very good, and compact, read on a dense and challenging subject.

I must admit, however, that I'm unconvinced that demanding interoperability will have only minimal security implications. While much of the expert commentary has focused on whether end-to-end encryption would be compromised, I think too little time has been spent considering the client side of interoperable communications. So if we assume it's possible to facilitate end-to-end communications across messaging companies and focus just on clients receiving/sending communications, what are some risks?1

As it stands today, the dominant messaging companies have large and professional security teams. While none of these teams are perfect, as shown by the successes of cyber mercenary companies such as NSO Group, they are robust and constantly working to improve the security of their products. The attacks used by groups such as NSO Group, Hacking Team, Candiru, and FinFisher have not tended to rely on breaking encryption. Rather, they have sought vulnerabilities in client devices. Due to sandboxing and contemporary OS security practices this has regularly meant successfully targeting a messaging application and, subsequently, expanding a foothold on the device more generally.

In order for interoperability to 'work' properly there will need to be a number of preconditions. As noted in Rescorla's post, these may include checking what functions an interoperable client possesses to determine whether 'standard' or 'enriched' client services are available. Moreover, APIs will need to be (relatively) stable, or rely on a standardized protocol, to facilitate interoperability. Finally, while spam messages are annoying on messaging applications today, they may become even more commonplace where interoperability is required and service providers cannot use their current processes to filter or quality-check messages transiting their infrastructure.

What do all the aforementioned elements mean for client security?

  1. Checking for client functionality may reveal whether a targeted client possesses known vulnerabilities, either generally (following a patch update) or known only to an exploit vendor (where the vendor knows of a vulnerability and is actively exploiting it). Where spam filtering is weak, exploit vendors can also use spam messages as reconnaissance, without the service provider, client vendor, or client applications necessarily being aware of the threat activity.
  2. When or if there is a significant need to rework how keying operates, or to revise identity properties linked to an API more broadly, there is a risk that the implementation of updates may be delayed until the revisions have had time to be adopted by clients. While this might be great for competition vis-à-vis interoperability, it will also have the effect of signalling an oncoming change to threat actors, who may accelerate activities to get footholds on devices, or be warned that they, too, need to update their tactics, techniques, and procedures (TTPs).
  3. As a more general point, threat actors might work to develop and propagate interoperable clients that they have already compromised (we've previously seen nation-state actors do so, and there's no reason to expect this behaviour to stop in a world of interoperable clients). Alternately, threat actors might try to convince targets to move to 'better' clients that contain known vulnerabilities but which are developed and made available by legitimate vendors. Whereas, today, an exploit developer must target the specific messaging systems that deliver a given system's messages, a future world of interoperable messaging will likely expand the range of clients that threat actors can seek to exploit.
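The capability-probing concern in the first point can be made concrete with a toy sketch. Everything below is invented for illustration (the feature names, version labels, and vulnerability table are all hypothetical; no real messaging client advertises capabilities this way), but it shows how a capability advertisement, required for interoperability negotiation, can double as a version fingerprint that an exploit vendor consults before launching an attack:

```python
# Toy sketch: an advertised feature set doubles as a client-version
# fingerprint. All names and mappings below are fabricated.

# Hypothetical mapping from observed feature sets to client versions.
FEATURE_FINGERPRINTS = {
    frozenset({"reactions", "threads"}): "client v1.2",
    frozenset({"reactions", "threads", "edits"}): "client v1.3",
}

# Hypothetical table of versions an attacker knows to be exploitable.
KNOWN_VULNERABLE = {"client v1.2"}

def probe(advertised_features):
    """Infer a client version from its capability advertisement and
    report whether that version is on the attacker's vulnerable list."""
    version = FEATURE_FINGERPRINTS.get(frozenset(advertised_features))
    return version, (version in KNOWN_VULNERABLE)

# A single 'reconnaissance' message exchange is enough to learn both.
version, vulnerable = probe(["reactions", "threads"])
print(version, vulnerable)
```

The point of the sketch is simply that the negotiation step itself leaks targeting information, even before any exploit is delivered.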

One of the severe dangers and challenges facing the current internet regulation landscape is that a large volume of new actors have entered the various overlapping policy fields. For a long time there weren't many of us, and anyone who's been around for 10-15 years tends to be sufficiently multidisciplinary to think about how activities in policy domain X might, or will, have consequences for domains Y and Z. The new raft of politicians and their policy advisors, in contrast, often lack this broad awareness. The result is that proposals are being advanced around the world by ostensibly well-meaning individuals and groups to address issues associated with online harms, speech, CSAM, competition, and security. However, these same parties often lack awareness of how the solutions meant to solve their favoured policy problems will affect neighbouring policy issues. And, where they are aware, they often don't care because that's someone else's policy domain.

It’s good to see more people participating and more inclusive policy making processes. And seeing actual political action on many issue areas after 10 years of people debating how to move forward is exciting. But too much of that action runs counter to the thoughtful warnings and need for caution that longer-term policy experts have been raising for over a decade.

We are almost certainly moving towards a 'new Internet'. It remains in question, however, whether this 'new Internet' will see resolutions to longstanding challenges or whether, instead, the rush to regulate will change the landscape by finally bringing to life the threats that long-term policy wonks have been working to forestall or prevent for much of their working lives. To date, I am increasingly concerned that we will experience the latter rather than witness the former.


  1. For the record, I currently remain unconvinced it is possible to implement end-to-end encryption across platforms generally. ↩︎

The Risks Linked With Canadian Cyber Operations in Ukraine

Photo by Sora Shimazaki on Pexels.com

Late last month, Global News published a story on how the Canadian government is involved in providing cyber support to the Ukrainian government in the face of Russia’s illegal invasion. While the Canadian military declined to confirm or deny any activities they might be involved in, the same was not true of the Communications Security Establishment (CSE). The CSE is Canada’s foreign signals intelligence agency. In addition to collecting intelligence, it is also mandated to defend Canadian federal systems and those designated as of importance to the government of Canada, provide assistance to other federal agencies, and conduct active and defensive cyber operations.1

From the Global News article it is apparent that the CSE is involved in both foreign intelligence operations as well as undertaking cyber defensive activities. Frankly, these kinds of activities are generally, and persistently, undertaken with regard to the Russian government, and so it's not a surprise that they continue apace.

The CSE spokesperson also noted that the government agency is involved in ‘cyber operations’ though declined to explain whether these are defensive cyber operations or active cyber operations. In the case of the former, the Minister of National Defense must consult with the Minister of Foreign Affairs before authorizing an operation, whereas in the latter both Ministers must consent to an operation prior to it taking place. Defensive and active operations can assume the same form–roughly the same activities or operations might be undertaken–but the rationale for the activity being taken may vary based on whether it is cast as defensive or active (i.e., offensive).2

These kinds of cyber operations are the ones that most worry scholars and practitioners, on the basis that foreign operators or adversaries may misread a signal from a cyber operation, or because the operation might have unintended consequences. The risk, then, is that the operations the CSE is undertaking could accidentally (or intentionally, I suppose) escalate affairs between Canada and the Russian Federation in the midst of the shooting war between Russian and Ukrainian forces.

While there is, of course, a need for some operational discretion on the part of the Canadian government it is also imperative that the Canadian public be sufficiently aware of the government’s activities to understand the risks (or lack thereof) which are linked to the activities that Canadian agencies are undertaking. To date, the Canadian government has not released its cyber foreign policy doctrine nor has the Canadian Armed Forces released its cyber doctrine.3 The result is that neither Canadians nor Canada’s allies or adversaries know precisely what Canada will do in the cyber domain, how Canada will react when confronted, or the precise nature of Canada’s escalatory ladder. The government’s secrecy runs the risk of putting Canadians in greater jeopardy of a response from the Russian Federation (or other adversaries) without the Canadian public really understanding what strategic or tactical activities might be undertaken on their behalf.

Canadians have a right to know at least enough about what their government is doing to begin assessing the risks linked with conducting operations during an active military conflict against an adversary with nuclear weapons. Thus far such information has not been provided. The result is that Canadians are ill-prepared to assess the risk that they may be quietly and quickly drawn into the conflict between the Russian Federation and Ukraine. Such secrecy bodes poorly for holding government to account, to say nothing of preventing Canadians from appreciating the risk that they could become deeply drawn into a very hot conflict scenario.


  1. For more on the CSE and the laws governing its activities, see “A Deep Dive into Canada’s Overhaul of Its Foreign Intelligence and Cybersecurity Laws.” ↩︎
  2. For more on this, see “Analysis of the Communications Security Establishment Act and Related Provisions in Bill C-59 (An Act respecting national security matters), First Reading (December 18, 2017)“, pp 27-32. ↩︎
  3. Not for lack of trying to access them, however: in both cases I filed access to information requests with the government for these documents a year ago, with delays expected to mean I won’t get the documents before the end of 2022 at best. ↩︎

Policing the Location Industry

Photo by Ingo Joseph on Pexels.com

The Markup has a comprehensive and disturbing article on how location information is acquired by third-parties despite efforts by Apple and Google to restrict the availability of this information. In the past, it was common for third-parties to provide SDKs to application developers. The SDKs would inconspicuously transfer location information to those third-parties while also enabling functionality for application developers. With restrictions being put in place by platforms such as Apple and Google, however, it’s now becoming common for application developers to initiate requests for location information themselves and then share it directly with third-party data collectors.
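The shift described above can be sketched in a few lines of toy code. This is purely illustrative (the class, function, and endpoint names are all hypothetical, and real apps would transmit over the network rather than to an in-memory object), but it captures the pattern that makes platform enforcement hard: the location request is made by the app itself for a legitimate-looking feature, and the onward transfer to a data collector is an invisible side effect rather than the work of an identifiable third-party SDK:

```python
# Toy sketch of first-party collection with third-party forwarding.
# All names are hypothetical; the 'collector' stands in for a broker's
# network endpoint.

import json

class ThirdPartyCollector:
    """Stand-in for a data broker's ingestion endpoint."""
    def __init__(self):
        self.received = []

    def ingest(self, payload):
        self.received.append(json.loads(payload))

def handle_store_finder(lat, lon, collector):
    """App feature: find the nearest store. As a side effect, the same
    location fix is quietly shipped to the collector -- behaviour that
    is hard for a platform to distinguish from legitimate use."""
    nearest = f"store near ({lat:.2f}, {lon:.2f})"  # the user-facing feature
    collector.ingest(json.dumps({"lat": lat, "lon": lon}))  # the side channel
    return nearest

collector = ThirdPartyCollector()
print(handle_store_finder(43.65, -79.38, collector))
print(len(collector.received))
```

From the platform's vantage point, only the permitted location API call is visible; the forwarding step looks like ordinary app network traffic.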

While such activities often violate the terms of service and policy agreements between platforms and application developers, it can be challenging for the platforms to actually detect these violations and subsequently enforce their rules.

Broadly, the issues at play represent significant governmental regulatory failures. The fact that government agencies often benefit from the secretive collection of individuals’ location information makes it that much harder for governments to muster the will to discipline the secretive collection of personal data by third parties: if the government cuts off the flow of location information, it will impede the ability of governments themselves to obtain this information.

In some cases intelligence and security services obtain location information from third-parties. This sometimes occurs in situations where the services themselves are legally barred from directly collecting this information. Companies selling mobility information can let government agencies do an end-run around the law.

One of the results is that efforts to limit data collectors’ ability to capture personal information often see parts of government push for carve-outs for collecting, selling, and using location information. In Canada, as an example, the government has adopted a legal position that it can collect locational information so long as it is de-identified or anonymized,1 and for the security and intelligence services there are laws on the books that permit the collection of commercially available open source information. This open source information does not need to be anonymized prior to acquisition.2 Lest you think that it sounds paranoid that intelligence services might be interested in location information, consider that American agencies collected bulk location information pertaining to Muslims from third-party location information data brokers and that the Five Eyes historically targeted popular applications such as Google Maps and Angry Birds to obtain location information as well as other metadata and content. As the former head of the NSA announced several years ago, “We kill people based on metadata.”

Any argument made by either private or public organizations that the anonymization or de-identification of location information makes it acceptable to collect, use, or disclose generally relies on tricking customers and citizens. Why is this? Because even when location information is aggregated and ‘anonymized’ it might subsequently be re-identified. And in situations where that reversal doesn’t occur, policy decisions can still be made on the basis of the aggregated information. The process of deriving these insights and applying them showcases that while privacy is an important right to protect, it is not the only right implicated in the collection and use of locational information. Indeed, it is important to assess the proportionality and necessity of the collection and use, as well as how the associated activities affect individuals’ and communities’ equity and autonomy in society. Doing anything less is merely privacy-washing.
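The fragility of 'anonymization' is easy to demonstrate with a toy example. The data below is entirely fabricated, but the mechanism is the well-documented one: a pseudonymous location trace plus a handful of externally known spatio-temporal points (roughly, where someone sleeps at night and works by day) is often enough to collapse the pseudonym back to a named person:

```python
# Toy illustration of location-trace re-identification. All coordinates
# and identifiers are fabricated.

# 'Anonymized' traces: pseudonymous ID -> list of (hour, lat, lon) fixes.
anonymized_traces = {
    "user-007": [(2, 43.70, -79.40), (14, 43.65, -79.38)],
    "user-042": [(3, 43.80, -79.20), (13, 43.77, -79.23)],
}

# Auxiliary knowledge about one target: approximate home-at-night and
# office-by-day locations, learned from entirely public sources.
target_points = [(2, 43.70, -79.40), (14, 43.65, -79.38)]

def matches(trace, points, tol=0.01):
    """A trace re-identifies the target if every known point is close to
    some fix in the trace (same hour, nearby coordinates)."""
    return all(
        any(h == th and abs(lat - tlat) < tol and abs(lon - tlon) < tol
            for (h, lat, lon) in trace)
        for (th, tlat, tlon) in points
    )

# The pseudonym collapses back to a single identifiable person.
hits = [pid for pid, trace in anonymized_traces.items()
        if matches(trace, target_points)]
print(hits)
```

Real traces contain hundreds of fixes rather than two, which makes them more, not less, distinctive; removing the name while keeping the movement pattern removes very little.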

Throughout discussions about data collection, including as it pertains to location information, public agencies and companies alike tend to offer a pair of arguments against changing the status quo. First, they assert that consent isn’t really possible anymore given the volumes of data collected on a daily basis from individuals; individuals would be overwhelmed with consent requests, so we can’t make the requests in the first place! Second, they argue that we can’t regulate the collection of this data because doing so risks impeding innovation in the data economy.

If those arguments sound familiar, they should. They’re very similar to the plays made by industry groups whose activities have historically had negative environmental consequences. These groups regularly assert that, after decades of poor or middling environmental regulation, any new and stronger regulations would unduly impede the existing dirty economy for power, services, goods, and so forth. Moreover, the dirty way of creating power, services, and goods is just how things are and thus should remain the same.

In both the privacy and environmental worlds, corporate actors (and those whom they sell data/goods to) have benefitted from not having to pay the full cost of acquiring data without meaningful consent or accounting for the environmental cost of their activities. But, just as we demand enhanced environmental regulations to regulate and address the harms industry causes to the environment, we should demand and expect the same when it comes to the personal data economy.

If a business is predicated on sneaking away personal information from individuals then it is clearly not particularly interested or invested in being ethical towards consumers. It’s imperative to continue pushing legislators to not just recognize that such practices are unethical, but to make them illegal as well. Doing so will require being heard over the cries of government agencies that have vested interests in obtaining location information in ways that skirt the laws that might normally discipline such collection, as well as companies that have grown as a result of their unethical data collection practices. While this will not be an easy task, it’s increasingly important given the limits of platforms’ ability to regulate the sneaky collection of this information and the increasingly problematic ways our personal data can be weaponized against us.


  1. “PHAC advised that since the information had been de-identified and aggregated, it believed the activity did not engage the Privacy Act as it was not collecting or using ‘personal information’.” ↩︎
  2. See, as example, Section 23 of the CSE Act ↩︎

Glass and Community

(New Heights by Christopher Parsons)

The founders of the photography application, Glass, were recently on Protocol’s Source Code. Part of what they emphasized, time and time again, was the importance of developing a positive community where photographers interacted with one another.

Glass continues to be the place where I’m most comfortable sharing my images. I really don’t care about how many people ‘appreciate’ a photo and I’m never going to be a photographic influencer. But I do like being in a community where I’m surrounded by helpful photographers, and where I’m regularly inspired by the work of other photographers.

Indeed, just today one of the photographers I most respect posted an image that I found really spectacular and we had a brief back and forth about what I saw/emotions it evoked, and his reaction to my experience of it. I routinely have these kinds of positive and meaningful back-and-forths on Glass. That’s not to say that similar experiences don’t, and can’t, occur on other companies’ platforms! But, from my own point of view, Glass is definitely creating the experiences that the developers are aiming for.

I also think that the developers of Glass are serious in their commitment to taking ideas from their community. I’d proposed via their ticketing system that they find a way of showcasing the excellent blog content that they’re producing, and that’s now on their roadmap for the application.

It’s also apparent that the developers, themselves, are involved in the application and watching what people are posting to showcase great work. They’ve routinely had excellent and interesting interviews with photographers on the platform, as well as highlighted photos that they found interesting each month in the categories that they have focused on (in interests of disclosure, one of my photos was included in their Cityscapes collection).

These are, admittedly, the kinds of features and activities that you’d hope developers would roll out and emphasize as they build a photography application and grow its associated community. Even the developers of Instagram, when it was still a sub-10-person shop, were pretty involved in their community! I can only hope that Glass never turns into its Meta ‘competitor’!


Adding Some Positivity to the Internet

Beneath Old Grandfather
(Beneath Old Grandfather by Christopher Parsons)

Over the past two years or so the parts of the Internet that I inhabit have tended to become less pleasant. Messages that I see on a regular basis are just short, rude, and often mean. And the messages that are directed to people who have an online professional presence, such as those who write and speak professionally, are increasingly abusive.

I’m one of those writers and speakers, and this year I decided to do something that isn’t particularly normal: when I come across a good piece of writing, or analysis of an issue, or just generally appreciate one of my colleagues’ work, I’ve been letting them know. The messages don’t tend to be long and usually focus on specific things I appreciated (to show that I’m familiar with the work in question) and thanking them for their contributions.

This might sound like a small thing. However, from experience I know that it’s surprisingly uncommon to receive much positive praise for the work that writers or speakers engage in. The times that I’ve received such positive feedback are pretty rare, but each time it’s made my day.

There are any number of policy proposals for ‘correcting’ online behaviour, many of which I have deep and severe concerns about. Simply saying ‘thanks’ in specific ways isn’t going to cure the ills of an increasingly cantankerous and abusive (and dangerous) Internet culture. But communicating our appreciation for one another can at least remind us that the Internet is filled with denizens who do appreciate the work that creators are undertaking day after day to inform, educate, delight, and entertain us. That’s not nothing and can help to fuel the work that we all want to see produced for our benefit.

Improving My Photography In 2021

(Climbing Gear by Christopher Parsons)

I’ve spent a lot of personal time behind my cameras throughout 2021 and have taken a bunch of shots that I really like. At the same time, I’ve invested a lot of personal time learning more about the history of photography and how to accomplish things with my cameras. Below, in no particular order, is a list of the ways I worked to improve my photography in 2021.

Fuji Recipes

I started looking at different ‘recipes’ that I could use for my Fuji x100f, starting with those at Fuji X Weekly and some YouTube channels. I’ve since started playing around with my own black and white recipes to get a better sense of what works for making my own images. The goal in all of this is to create jpgs that are ‘done’ in body and require an absolute minimum amount of adjustment. It’s very much a work in progress, but I’ve gotten to the point that most of my photos only receive minor crops, as opposed to extensive edits in Darkroom.

Comfort in Street Photography

The first real memory I have of ‘doing’ street photography was being confronted by a bus driver after I took his photo. I was scared off of taking pictures of other people for years as a result.

Over the past year, however, I’ve gotten more comfortable by watching a lot of POV-style YouTube videos of how other street photographers go about making their images. I don’t have anyone else to go and shoot with, and learn from, so these videos have been essential to my learning process. In particular, I’ve learned a lot from watching and listening to Faizal Westcott, the folks over at Framelines, Joe Allan, Mattias Burling, and Samuel Lintaro Hopf.

Moreover, just seeing the photos that other photographers are making and how they move in the street has helped to validate that what I’m doing, when I go out, definitely fits within the broader genre of street photography.

Histories of Photography

In the latter three months of 2021 I spent an enormous amount of time watching videos from the Art of Photography, Tatiana Hopper, and a bit from Sean Tucker. The result is that I’m developing a better sense of what you can do with a camera as well as why certain images are iconic or meaningful.

Pocket Camera Investment

I really love my Fuji x100f and always have my iPhone 12 Pro in my pocket. Both are terrific cameras. However, I wanted something that was smaller than the Fuji and more tactile than the iPhone, and which I could always have in a jacket pocket.

To that end, in late 2021 I purchased a very lightly used Ricoh GR. While I haven’t used it enough to offer a full review, I have taken a lot of photos with it that I really, really like. More than anything else I’m taking more photos since buying it because I always have a good, very tactile, camera with me wherever I go.

Getting Off Instagram

I’m not a particularly big fan of Instagram these days given Facebook’s unwillingness or inability to moderate its platform, as well as Instagram’s constant addition of advertisements and short video clips. So since October 2021 I’ve been posting my photos almost exclusively to Glass and (admittedly to a lesser extent) to this website.

Not only is the interface for posting to Glass a lot better than the one for Instagram (and Flickr, as well), the comments I get on my photos on Glass are better than anywhere else I’ve ever posted my images. Admittedly Glass still has some growing pains but I’m excited to see how it develops in the coming year.

Photography and Social Media

(Passer By by Christopher Parsons)

Why do we want to share our photos online? What platforms are better or worse to use in sharing images? These are some of the questions I’ve been pondering for the past few weeks.

Backstory

About a month ago a colleague stated that she would be leaving Instagram given the nature of Facebook’s activities and the company’s seeming lack of remorse. Her decision has stuck with me and left me wondering whether I want to follow her lead.

I deleted my Facebook accounts some time ago, and have almost entirely migrated my community away from WhatsApp. But as an amateur photographer I’ve hesitated to leave an app that was, at least initially, designed with photographers in mind. I’ve used the application over the years to develop and improve my photographic abilities and so there’s an element of ‘sunk cost’ that has historically factored into my decision to stay or leave.

But Instagram isn’t really for photographers anymore. The app is increasingly stuffed with either videos or ads, and is meant to create a soft landing point for when/if Facebook truly pivots away from its main Facebook app.1 The company’s pivot makes it a lot easier to justify leaving the application though, at the same time, leaves me wondering what application or platform, if any, I want to move my photos over to.

The Competition(?)

Over the past week or two I’ve tried Flickr.2 While it’s the OG of photo-sharing sites, its mobile apps are just broken. I can’t create albums unless I use the web app. Sharing straight from the Apple Photos app is janky. I worry (for no good reason, really) about the cost of the professional version (do I even need that as an amateur?) as well as the annoyance of tagging photos in order to ‘find my tribe.’

It’s also not apparent to me how much community truly exists on Flickr: the whole platform seems a bit like a graveyard with only a small handful of active photographers still inhabiting the space.

I’m also trying Glass at the moment. It’s not perfect: search is non-existent, you can’t share your gallery of photos with non-Glass users at the moment, discovery is a bit rough, there’s no Web version, and it’s currently iPhone only. However, I do like that the app and its creators are focused on sharing images, and that it has a clear monetization scheme in the form of a yearly subscription. The company’s formal roadmap also indicates that some of these rough edges may be filed away in the coming months.

I also like that Glass doesn’t require me to develop a tagging system (that’s all done in the app using presets), lets me share quickly and easily from the Photos app, looks modern, and has a relatively low yearly subscription cost. And, at least so far, most of the comments are better than on the other platforms, which I think is important to developing my own photography.

Finally, there’s my blog here! And while I like to host photo series here this site isn’t really designed as a photo blog first and foremost. Part of the problem is that WordPress continues to suck for posting media in my experience but, more substantively, this blog hosts a lot more text than images. I don’t foresee changing this focus anytime in the near or even distant future.

The Necessity of Photo Sharing?

It’s an entirely fair question to ask why even bother sharing photos with strangers. Why not just keep my images on my devices and engage in my own self-critique?

I do engage in such critique but I’ve personally learned more from putting my images into the public eye than I would just by keeping them on my own devices.3 Some of that is from comments but, also, it’s been based on what people have ‘liked’ or left emoji comments on. These kinds of signals have helped me better understand what makes a photograph better or worse.

However, at this point I don’t think that likes and emojis are the source of my future photography development: I want actual feedback, even if it’s limited to just a sentence or so. I’m hoping that Glass might provide that kind of feedback though I guess only time will tell.


  1. For a good take on Facebook and why it’s functionally ‘over’ as a positive brand, check out M.G. Siegler’s article, “Facebook is Too Big, Fail.” ↩︎
  2. This is my second time with Flickr, as I closed a very old account several years ago given that I just wasn’t using it. ↩︎
  3. If I’m entirely honest, I bet I’ve learned as much or more from reading photography teaching/course books, but that’s a different kind of learning entirely. ↩︎
Link

The Lawfare Dimension of Asymmetrical Conflict

The past week has seen a logjam begin to clear in Canadian-Chinese-American international relations. After agreeing to the underlying facts associated with her (and Huawei’s) violation of American sanctions that have been placed on Iran, Meng Wanzhou was permitted to return to China after having been detained in Canada for several years. Simultaneously, two Canadian nationals who had been charged with national security crimes were themselves permitted to return to Canada on health-related grounds. The backstory is that these Canadians were seized shortly following the detainment of Huawei’s CFO, with the Chinese government repeatedly making clear that the Canadians were being held hostage and would only be released when the CFO was repatriated to China.

A huge amount of writing has taken place following the swap. But what I’ve found particularly interesting, in terms of offering a novel contribution to the discussions, was an article by Julian Ku in Lawfare. In his article, “China’s Successful Foray Into Asymmetric Lawfare,” Ku argues that:

Although Canadians are relieved that their countrymen have returned home, the Chinese government’s use of its own weak legal system to carry out “hostage diplomacy,” combined with Meng’s exploitation of the procedural protections of the strong and independent Canadian and U.S. legal systems, may herald a new “asymmetric lawfare” strategy to counter the U.S. This strategy may prove an effective counter to the U.S. government’s efforts to use its own legal system to enforce economic sanctions, root out Chinese espionage, indict Chinese hackers, or otherwise counter the more assertive and threatening Chinese government.

I remain uncertain that this baseline premise, which undergirds the rest of his argument, holds true. In particular, his angle of analysis seems to set aside, or not fully engage with, the following:

  1. China’s hostage taking has further weakened the trust that foreign companies will have in the Chinese government. They must now acknowledge, and build into their risk models, the possibility that their executives or employees could be seized should the Chinese government get into a diplomatic, political, or economic dispute with the country from which they operate.
  2. China’s blatant hostage taking impairs its world standing and has led to significant parts of the world shifting their attitudes towards the Chinese government. The results of these shifts are yet to be fully seen, but to date there have been doubts about entering into trade agreements with China, an increased solidarity amongst middle powers to resist what is seen as bad behaviour by China, and a push away from China and into the embrace of liberal democratic governments. This last point, in particular, runs counter to China’s long-term efforts to showcase its own style of governance as a genuine alternative to American and European models of democracy.
  3. Despite what has been written, I think that relying on hostage diplomacy associated with its weak rule of law showcases China’s comparatively weak hand. Relying on low rule of law to undertake lawfare endangers its international strategic interests, which rely on building international markets and being treated as a respectable and reputable partner on the world stage. Resorting to kidnapping impairs the government’s ability to demonstrate compliance with international agreements and fora so as to build out its international policies.

Of course, none of the above discounts the fact that the Chinese government did, in fact, exploit this ‘law asymmetry’ between its laws and those of high rule of law countries. And the Canadian government did act under duress as a result of its nationals having been taken hostage, including becoming a quiet advocate for Chinese interests insofar as Canadian diplomats sought a way for the US government to reach a compromise with Huawei/Meng so that Canada’s nationals could be returned home. And certainly the focus on relying on high rule of law systems can delay investigations into espionage or other illicit foreign activities and operations that are launched by the Chinese government. Nevertheless, neither the Canadian nor the American legal system actually buckled under the foreign and domestic pressure to set aside the rule of law in favour of quick political ‘fixes.’

While there will almost certainly be many years of critique in Canada and the United States about how this whole affair was managed, the fact will remain that both countries demonstrated that their justice systems would remain independent from the political matters of the day. And they did so despite tremendous pressure: from Trump during his time as president, and from considerable pressure campaigns against the Canadian government by numerous former government officials who were supportive, for one reason or another, of the Chinese government’s position that Huawei’s CFO be returned.

While it remains to be seen what the actual, ultimate, effect of this swap of Huawei’s CFO for two inappropriately detained Canadians will be, some lasting legacies may include diminished political capital for the Chinese government and, at the same time, reinforced trust in the American and Canadian (and, by extension, Western democratic) systems of justice. Should these legacies hold then China’s gambit will almost certainly prove to have backfired.

Link

The So-Called Privacy Problems with WhatsApp

ProPublica, which is typically known for its excellent journalism, published a particularly terrible piece earlier this week that fundamentally miscast how encryption works and how Facebook, via WhatsApp, works to keep communications secure. The article, “How Facebook Undermines Privacy Protections for Its 2 Billion WhatsApp Users,” focuses on two so-called problems.

The So-Called Privacy Problems with WhatsApp

First, the authors explain that WhatsApp has a system whereby recipients of messages can report content they have received to WhatsApp on the basis that it is abusive or otherwise violates WhatsApp’s Terms of Service. The article frames this reporting process as a way of undermining privacy on the basis that secured messages are not kept solely between the sender(s) and recipient(s) of the communications but can be sent to other parties, such as WhatsApp. In effect, the ability to voluntarily forward received messages to WhatsApp is cast as breaking the privacy promises that WhatsApp has made.

Second, the authors note that WhatsApp collects a large volume of metadata in the course of users operating the application. Using lawful processes, government agencies have compelled WhatsApp to disclose metadata on some of their users in order to pursue investigations and secure convictions against individuals. The case that is focused on involves a government employee who leaked confidential banking information to Buzzfeed, which subsequently reported on it.

Assessing the Problems

In the case of forwarding messages for abuse reporting purposes, encryption is not broken and the feature is not new. These kinds of processes offer a mechanism that lets individuals self-identify and report problematic content. Such content can include child grooming, illicit or inappropriate messages or audio-visual material, or other abusive information.
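The logic here can be sketched in a few lines. The toy XOR ‘cipher’, key handling, and report format below are illustrative stand-ins of my own invention (WhatsApp actually uses the Signal protocol), but the sketch shows why a recipient forwarding decrypted content does not break end-to-end encryption:

```python
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Toy stand-in for real end-to-end encryption; XOR with a random
    # key is for illustration only, not a real cryptographic design.
    return bytes(b ^ k for b, k in zip(data, key))

# Sender and recipient share a session key; the platform never sees it.
session_key = secrets.token_bytes(64)

plaintext = b"abusive message"
ciphertext = xor_cipher(plaintext, session_key)  # what the platform relays

# The recipient decrypts as normal...
received = xor_cipher(ciphertext, session_key)
assert received == plaintext

# ...and may then voluntarily include the decrypted plaintext in an
# abuse report. No key is disclosed and no encryption is broken: the
# report contains only content the recipient could already read.
abuse_report = {"reporter": "recipient", "reported_content": received}
```

The platform only ever relays ciphertext; it is the recipient, who already holds the plaintext, who chooses to share it.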

What we do learn, however, is that the ‘reactive’ and ‘proactive’ methods of detecting abuse need to be fixed. In the case of the former, only about 1,000 people are responsible for receiving and reviewing the reported content after it has first been filtered by an AI:

Seated at computers in pods organized by work assignments, these hourly workers use special Facebook software to sift through streams of private messages, images and videos that have been reported by WhatsApp users as improper and then screened by the company’s artificial intelligence systems. These contractors pass judgment on whatever flashes on their screen — claims of everything from fraud or spam to child porn and potential terrorist plotting — typically in less than a minute.


Further, the employees are often reliant on machine learning-based translations of content which makes it challenging to assess what is, in fact, being communicated in abusive messages. As reported,

… using Facebook’s language-translation tool, which reviewers said could be so inaccurate that it sometimes labeled messages in Arabic as being in Spanish. The tool also offered little guidance on local slang, political context or sexual innuendo. “In the three years I’ve been there,” one moderator said, “it’s always been horrible.”

There are also proactive modes of watching for abusive content using AI-based systems. As noted in the article,

Artificial intelligence initiates a second set of queues — so-called proactive ones — by scanning unencrypted data that WhatsApp collects about its users and comparing it against suspicious account information and messaging patterns (a new account rapidly sending out a high volume of chats is evidence of spam), as well as terms and images that have previously been deemed abusive. The unencrypted data available for scrutiny is extensive. It includes the names and profile images of a user’s WhatsApp groups as well as their phone number, profile photo, status message, phone battery level, language and time zone, unique mobile phone ID and IP address, wireless signal strength and phone operating system, as well as a list of their electronic devices, any related Facebook and Instagram accounts, the last time they used the app and any previous history of violations.
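To make the quoted description concrete, here is a rough sketch of what a proactive, metadata-only check might look like. The thresholds and signals are invented for illustration and are not WhatsApp’s actual rules; the point is that such a heuristic consults account metadata, never message content:

```python
from datetime import datetime, timedelta

def looks_like_spam(account_created: datetime,
                    messages_last_hour: int,
                    distinct_recipients: int,
                    now: datetime) -> bool:
    # Hypothetical heuristic: flag accounts that are both newly created
    # and sending at unusually high volume or fan-out. All inputs are
    # unencrypted metadata; the message bodies are never read.
    is_new = now - account_created < timedelta(days=1)
    high_volume = messages_last_hour > 100
    broad_fanout = distinct_recipients > 50
    return is_new and (high_volume or broad_fanout)

now = datetime(2021, 9, 10, 12, 0)
# A two-hour-old account blasting 500 chats to 200 recipients is flagged;
# a year-old account chatting normally is not.
print(looks_like_spam(now - timedelta(hours=2), 500, 200, now))  # True
print(looks_like_spam(now - timedelta(days=365), 5, 3, now))     # False
```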

Unfortunately, the AI often makes mistakes. This led one interviewed content reviewer to state that, “[t]here were a lot of innocent photos on there that were not allowed to be on there … It might have been a photo of a child taking a bath, and there was nothing wrong with it.” Often, “the artificial intelligence is not that intelligent.”

The vast collection of metadata has been a long-reported concern associated with WhatsApp and, in fact, is one of the reasons many individuals advocate for the use of Signal instead. The reporting in the ProPublica article helpfully summarizes the vast amount of metadata that is collected, but that collection, in and of itself, does not present any evidence that Facebook or WhatsApp have transformed the application into one which inappropriately intrudes into persons’ privacy.

ProPublica Sets Back Reasonable Encryption Policy Debates

The ProPublica article harmfully sets back broader policy discussion around what is, and is not, a reasonable approach for platforms to take in moderating abuse when they have integrated strong end-to-end encryption. Such encryption prevents unauthorized third parties–inclusive of the platform providers–from reading or analyzing the content of communications. Enabling a reporting feature means that individuals who receive a communication are empowered to report it to the company, which can subsequently analyze what has been sent and take action if the content violates a terms of service or privacy policy clause.

In suggesting that what WhatsApp has implemented is somehow wrong, it becomes more challenging for other companies to deploy similar reporting features without fearing that their decision will be reported on as ‘undermining privacy’. While there may be a valid policy discussion to be had–is a reporting process the correct way of dealing with abusive content and messages?–the authors didn’t go there. Nor did they seriously investigate whether additional resources should be devoted to analyzing reported content, or talk with artificial intelligence experts or machine-translation experts about whether Facebook’s efforts to automate the reporting process are adequate, appropriate, or flawed from the start. All of those would be very interesting, valid, and important contributions to the broader discussion about integrating trust and safety features into encrypted messaging applications. But those are not things the authors chose to delve into.

The authors could have, also, discussed the broader importance (and challenges) in building out messaging systems that can deliberately conceal metadata, and the benefits and drawbacks of such systems. While the authors do discuss how metadata can be used to crack down on individuals in government who leak data, as well as assist in criminal investigations and prosecutions, there is little said about what kinds of metadata are most important to conceal and the tradeoffs in doing so. Again, there are some who think that all or most metadata should be concealed, and others who hold opposite views: there is room for a reasonable policy debate to be had and reported on.

Unfortunately, instead of taking up and reporting on the very valid policy discussions that sit at the edges of their article, the authors chose to be bombastic, asserting that WhatsApp was undermining the privacy protections that individuals thought they had when using the application. It’s bad reporting, insofar as it distorts the facts, and it is particularly disappointing given that ProPublica has shown it has the chops to do investigative work that is well sourced and nuanced in its outputs. This article, however, absolutely failed to make the cut.