
The Broader Implications of Data Breaches

Ikea Canada notified approximately 95,000 Canadian customers in recent weeks about a data breach the company suffered. An Ikea employee conducted a series of searches between March 1 and March 3 that surfaced the account records of those customers.1

While Ikea promised that financial information (credit card and banking details) hadn’t been revealed, a raft of other personal information had been. That information included:

  • full first and last name;
  • postal code or home address;
  • phone number and other contact information;
  • IKEA loyalty number.

Ikea did not disclose who specifically accessed the information nor their motivations for doing so.

The notice provided by Ikea was better than most data breach alerts insofar as it informed customers what exactly had been accessed. For some individuals, however, this information is highly revelatory and could cause significant concern.

For example, imagine a case where someone has previously been the victim of either physical or digital stalking. Should their former stalker be an Ikea employee, the data breach victim may ask whether their stalker now has confidential information that can be used to renew, or further amplify, harmful activities. With the customer information in hand, for example, it would be relatively easy for a stalker to obtain further details, such as precisely where someone lived. If aggrieved, they could also use the information to engage in digital harassment or threatening behaviour.

Without more information about why the Ikea employee searched the database, those who have been stalked by, or had abusive relationships with, an Ikea employee might be driven to rethink how they live their lives. They might feel the need to change their safety habits, get new phone numbers, or cycle to a new email address. In a worst-case scenario they might contemplate vacating their residence for a time. Even if they take none of these actions, they might experience a heightened sense of unease or anxiety.

Of course, Ikea is far from alone in suffering these kinds of breaches. They happen on an almost daily basis for most of us, whether we’re alerted to the breach or not. Many news reports about such breaches focus on whether there is an existent or impending financial harm and stop the story there. The result is that journalistic reporting can conceal some of the broader harms linked with data breaches.

Imagine a world where our personal information (how to call us or find our homes) was protected as rigorously as our credit card numbers currently are. In such a world, stalkers and other abusive actors might be less able to exploit stolen or inappropriately accessed information. Yes, there will always be ways for bad actors to behave badly, but it would be possible to mitigate some of the ways this badness can take place.

Companies could still create meaningful consent frameworks whereby some (perhaps most!) individuals could agree to have their information stored by the company. But those with a different risk threshold could make a meaningful choice: they could still make purchases and receive deliveries without permanently increasing the risk that their information falls into the wrong hands. Getting to this point, however, requires expanded threat modelling: we can’t just worry about a bad credit card purchase but, instead, would need to take seriously the gendered and intersectional nature of violence and its intersection with cybersecurity practices.


  1. In the interests of disclosure, I was contacted as an affected party by Ikea Canada. ↩︎

Messaging Interoperability and Client Security

Eric Rescorla has a thoughtful and nuanced assessment of recent EU proposals which would compel messaging companies to make their communications services interoperable. To his immense credit he spends time walking the reader through historical and contemporary messaging systems in order to assess the security issues prospectively associated with requiring interoperability. It’s a very good, and compact, read on a dense and challenging subject.

I must admit, however, that I’m unconvinced that demanding interoperability will have only minimal security implications. While much of the expert commentary has focused on whether end-to-end encryption would be compromised, I think too little time has been spent considering the client side of interoperable communications. So if we assume it’s possible to facilitate end-to-end communications across messaging companies and focus just on clients receiving/sending communications, what are some risks?1

As it stands today, the dominant messaging companies have large and professional security teams. While none of these teams are perfect, as shown by the success of cyber mercenary companies such as NSO Group et al., they are robust and constantly working to improve the security of their products. The attacks used by groups such as NSO, Hacking Team, Candiru, FinFisher, and others have not tended to rely on breaking encryption. Rather, they have sought vulnerabilities in client devices. Due to sandboxing and contemporary OS security practices, this has regularly meant successfully targeting a messaging application and, subsequently, expanding a foothold on the device more generally.

In order for interoperability to ‘work’ properly there will need to be a number of preconditions. As noted in Rescorla’s post, this may include checking what functions an interoperable client possesses to determine whether ‘standard’ or ‘enriched’ client services are available. Moreover, APIs will need to be (relatively) stable or rely on a standardized protocol to facilitate interoperability. Finally, while spam messages are annoying on messaging applications today, they may become even more commonplace where interoperability is required and service providers cannot use their current processes to filter/quality check messages transiting their infrastructure.

What do all the aforementioned elements mean for client security?

  1. Checking for client functionality may reveal whether a targeted client possesses known vulnerabilities, either generally (following a patch update) or just to the exploit vendor (where they know of a vulnerability and are actively exploiting it). Where spam filtering is weak, exploit vendors can use spam messages for reconnaissance, without the service provider, client vendor, or client applications necessarily being aware of the threat activity.
  2. When or if there is a significant need to rework how keying operates, or how identity properties linked to an API are handled more broadly, there is a risk that the implementation of updates may be delayed until the revisions have had time to be adopted by clients. While this might be great for competition vis-à-vis interoperability, it will also signal an oncoming change to threat actors, who may accelerate activities to get footholds on devices, or may take it as a warning that they, too, need to update their tactics, techniques, and procedures (TTPs).
  3. As a more general point, threat actors might work to develop and propagate interoperable clients that they have already compromised; we’ve previously seen nation-state actors do so and there’s no reason to expect this behaviour to stop in a world of interoperable clients. Alternately, threat actors might try to convince targets to move to ‘better’ clients that contain known vulnerabilities but which are developed and made available by legitimate vendors. Whereas, today, an exploit developer must target the specific messaging systems that deliver a given system’s messages, a future world of interoperable messaging will likely expand the range of clients that threat actors can seek to exploit.
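The first risk above can be illustrated with a toy sketch. Everything in it is hypothetical (the protocol, the client name, the version numbers, the vulnerability list): the point is simply that a capability handshake which discloses exact build details hands an exploit vendor a cheap way to fingerprint vulnerable clients.

```python
# Hypothetical capability handshake for an interoperable messaging protocol.
# All names, versions, and the 'known vulnerable' list are invented for illustration.

def capability_response(client):
    """What a client might disclose when a peer probes for 'enriched' features."""
    return {
        "features": sorted(client["features"]),
        "client": client["name"],
        "version": client["version"],  # precise versions make exploit targeting easy
    }

def fingerprint_vulnerable(response, known_vulnerable):
    """The attacker's side: does this peer match a known-exploitable build?"""
    return (response["client"], response["version"]) in known_vulnerable

probe = capability_response(
    {"name": "ExampleChat", "version": "2.4.1", "features": {"reactions", "e2ee"}}
)
exploitable = fingerprint_vulnerable(probe, {("ExampleChat", "2.4.1")})
```

A real protocol could mitigate this by advertising coarse feature sets rather than exact build versions, though that trades off against debugging and compatibility needs.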

One of the severe dangers and challenges facing the current internet regulation landscape is that a large volume of new actors have entered the various overlapping policy fields. For a long time there weren’t that many of us, and anyone who has been around for 10-15 years tends to be suitably multidisciplinary that they think about how activities in policy domain X might/will have consequences for domains Y and Z. The new raft of politicians and their policy advisors, in contrast, often lack this broad awareness. The result is that proposals are being advanced around the world by ostensibly well-meaning individuals and groups to address issues associated with online harms, speech, CSAM, competition, and security. However, these same parties often lack awareness of how the solutions meant to solve their favoured policy problems will affect neighbouring policy issues. And, where they are aware, they often don’t care because that’s someone else’s policy domain.

It’s good to see more people participating and more inclusive policy making processes. And seeing actual political action on many issue areas after 10 years of people debating how to move forward is exciting. But too much of that action runs counter to the thoughtful warnings and need for caution that longer-term policy experts have been raising for over a decade.

We are almost certainly moving towards a ‘new Internet’. It remains in question, however, whether this ‘new Internet’ will see resolutions to longstanding challenges or whether, instead, the rush to regulate will change the landscape by finally bringing to life the threats that long-term policy wonks have been working to forestall or prevent for much of their working lives. To date, I remain increasingly concerned that we will experience the latter rather than witness the former.


  1. For the record, I currently remain unconvinced it is possible to implement end-to-end encryption across platforms generally. ↩︎

2022.4.9

I’ve been doing my own IT for a long while, as well as handling small tasks for others. But it has been years since I had to do an email migration while ensuring pretty well no downtime.

Fortunately the shift from Google Mail (due to the deprecation of grandfathered accounts that offered free custom domain integration) to Apple’s iCloud+ was remarkably smooth and easy. Apple’s instructions were helpful as were those of the host I was dealing with. Downtime was a couple seconds, at most, though there was definitely a brief moment of holding my breath in fear that the transition hadn’t quite taken.
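The nervous moment in any mail migration is the MX cutover: you want to confirm that DNS has actually switched before trusting the new mailbox. A small sketch of how one might check, by parsing `dig +short MX` output (the iCloud hostnames below are an assumption; verify against the values your provider actually specifies):

```python
def parse_mx(dig_output):
    """Parse `dig +short MX example.com` output into sorted (priority, host) pairs."""
    records = []
    for line in dig_output.strip().splitlines():
        priority, host = line.split()
        records.append((int(priority), host.rstrip(".")))  # drop trailing root dot
    return sorted(records)

def cutover_complete(dig_output, expected_hosts):
    """True once every advertised MX host belongs to the new provider."""
    hosts = {host for _, host in parse_mx(dig_output)}
    return hosts == set(expected_hosts)

# Example: has DNS switched to iCloud+? (hostnames assumed, not authoritative)
sample = "10 mx01.mail.icloud.com.\n10 mx02.mail.icloud.com.\n"
done = cutover_complete(sample, ["mx01.mail.icloud.com", "mx02.mail.icloud.com"])
```

Because DNS caching means different resolvers flip over at different times, polling a few public resolvers this way gives a better picture than a single lookup.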

Solved: Mendeley-Related Error in Microsoft Word for macOS

In the past I used Mendeley as a citation management system. I stopped using it, and uninstalled it from macOS, when they deprecated the mobile application I relied upon. I had installed the Mendeley extension for Microsoft Word to facilitate easy citation insertion and updates. Ever since deleting Mendeley from macOS I have received a popup window when opening Microsoft Word, as well as a prompt to save changes to “Mendeley-word2016-1.19.4.dotm” when closing Word.

The Problem

I was receiving prompts when opening and closing Microsoft Word for macOS after having uninstalled Mendeley. These were annoying and I wanted them to go away.

The Solution

In macOS:

  1. Open Finder
  2. Search for “Mendeley”
  3. Delete “Mendeley Desktop.plist” and “Mendeley-word2016-1.19.4.dotm”

You should now be able to open Microsoft Word without being asked to point to where Mendeley is installed, and exit Word without being asked to save changes to Mendeley-word2016-1.19.4.dotm.
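The steps above can also be scripted. A minimal sketch follows; the exact locations of the leftover files are an assumption (Word templates and preference files can live in slightly different places depending on versions), which is why it searches under a directory rather than hard-coding paths:

```python
from pathlib import Path

# Files left behind by the Mendeley Word plugin; names taken from the prompts above.
LEFTOVERS = {"Mendeley Desktop.plist", "Mendeley-word2016-1.19.4.dotm"}

def remove_leftovers(root):
    """Search under `root` (e.g. ~/Library) and delete any Mendeley leftovers found."""
    removed = []
    for path in Path(root).rglob("*"):
        if path.name in LEFTOVERS and path.is_file():
            path.unlink()
            removed.append(path)
    return removed
```

Running `remove_leftovers(Path.home() / "Library")` would cover the usual preference and template directories; as with any script that deletes files, it is worth printing the returned list before trusting it.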

Cyber Attacks Versus Operations in Ukraine

Photo by cottonbro on Pexels.com

For the past decade there has been a steady drumbeat that ‘cyberwar is coming’. Sometimes the parties holding these positions are in militaries and, in other cases, from think tanks or university departments that are trying to link kinetic-adjacent computer operations with ‘war’.

Perhaps the most famous rebuttal to the cyberwar proponents has been Thomas Rid’s Cyber War Will Not Take Place. The title was meant to be provocative and almost has the effect of concealing a core insight of Rid’s argument: cyber operations will continue to be associated with conflicts but cyber operations are unlikely to constitute (or lead to) out-and-out war on their own. Why? Because it is very challenging to prepare and launch cyber operations that have significant kinetic results at the scale we associate with full-on war.

Since the start of the Russian Federation’s war of aggression against Ukraine there have regularly been shocked assertions that cyberwar isn’t taking place. A series of pieces by The Economist, as an example, sought to prepare readers for a cyberwar that just hasn’t happened. Why not? Because The Economist, much like other outlets, often presumed that the cyber dimensions of the conflict in Ukraine would bear at least some resemblance to the long-maligned concept of a ‘cyber Pearl Harbour’: a critical cyber-enabled strike of some sort would have a serious, and potentially devastating, effect on how Ukraine could defend against Russian aggression and thus tilt the balance towards Russian military victory.

As a result of these early mistaken understandings of cyber operations, scholars and experts have once more come out to explain why cyber operations, while still taking place in the Ukrainian conflict, are not equivalent to an imagined cyber Pearl Harbour. Simultaneously, security and malware researchers have taken the opportunity to belittle the International Relations theorists who have written about cyberwar, arguing that these theorists fundamentally misunderstand how cyber operations take place.

Part of the challenge is that ‘cyberwar’ has often been popularly seen as the equivalent of hundreds of thousands of soldiers and their associated military hardware being deployed into a foreign country. As noted by Rid in a recent op-ed, while some cyber operations are meant to be apparent, others are much more subtle. The former might be meant to reduce the will to fight or diminish command and control capabilities. The latter, in contrast, will look a lot like other reconnaissance operations: knowing who is commanding which battle group, the logistical challenges facing the opponent, or the state of infrastructure in-country. All these latter dimensions provide strategic and tactical advantages to the party that launched the surveillance operation. Operations meant to degrade capabilities may occur but will often be more subtle. This subtlety can be a particularly severe risk in a conflict, such as when an ammunition convoy is sent to the wrong place or train timetables are thrown off with the effect of stymying civilian evacuation or resupply operations.1

What’s often lost in the ‘cyberwar’ debates (which tend to take place among people who don’t understand cyber operations, those who stand to profit from misrepresenting them, and those whose approaches are so theoretical as to be ignorant of reality) is that contemporary wars entail blended forces. Different elements of those blends have unique and specific tactical and strategic purposes. Cyber isn’t going to have the same effect as a Grad missile launcher or a T-90 battle tank, but that missile launcher or tank isn’t going to know that the target it’s pointed towards is strategically valuable without reconnaissance, nor is it able to impair logistics flows the same way as a cyber operation targeting train schedules. To expect otherwise is to grossly misunderstand how cyber operations function in a conflict environment.

I’d like to imagine that one result of the Russian war of aggression will be to improve the general population’s understanding of cyber operations: what they entail, and what they do not. It’s possible this might happen given that major news outlets, such as the AP and Reuters, are changing how they refer to such activities: they will no longer be called ‘cyberattacks’ outside very narrow situations. In simply changing what we call cyber activities (operations as opposed to attacks) we’ll hopefully see a deflating of the language and, with it, more careful understandings of how cyber operations take place in and out of conflict situations. As such, there’s a chance (hope?) we might see a better appreciation of the significance of cyber operations in the population writ large in the coming years. This will be increasingly important given the sheer volume of successful (non-conflict) operations that take place each day.


  1. It’s worth recognizing that part of why we aren’t reading about successful Russian operations is, first, due to Ukrainian and allies’ efforts to suppress such successes for fear of reducing Ukrainian/allied morale. Second, however, is that Western signals intelligence agencies such as the NSA, CSE, and GCHQ, are all very active in providing remote defensive and other operational services to Ukrainian forces. There was also a significant effort ahead of the conflict to shore up Ukrainian defences and continues to be a strong effort by Western companies to enhance the security of systems used by Ukrainians. Combined, this means that Ukraine is enjoying additional ‘forces’ while, simultaneously, generally keeping quiet about its own failures to protect its systems or infrastructure. ↩︎

Russia, Nokia, and SORM

Photo by Mati Mango on Pexels.com

The New York Times recently wrote about Nokia providing telecommunications equipment to Russian ISPs, all while Nokia was intimately aware of how its equipment would be interconnected with System for Operative Investigative Activities (SORM) lawful interception equipment. SORM equipment has existed in numerous versions since the 1990s. Per James Lewis:

SORM-1 collects mobile and landline telephone calls. SORM-2 collects internet traffic. SORM-3 collects from all media (including Wi-Fi and social networks) and stores data for three years. Russian law requires all internet service providers to install an FSB monitoring device (called “Punkt Upravlenia”) on their networks that allows the direct collection of traffic without the knowledge or cooperation of the service provider. The providers must pay for the device and the cost of installation.

SORM is part of a broader Internet and telecommunications surveillance and censorship regime that has been established by the Russian government. Moreover, other countries in the region use iterations or variations of the SORM system (e.g., Kazakhstan) as well as countries which were previously invaded by the Soviet Union (e.g., Afghanistan).

The Times’ article somewhat breathlessly states that the documents it obtained, which span 2008-2017,

show in previously unreported detail that Nokia knew it was enabling a Russian surveillance system. The work was essential for Nokia to do business in Russia, where it had become a top supplier of equipment and services to various telecommunications customers to help their networks function. The business yielded hundreds of millions of dollars in annual revenue, even as Mr. Putin became more belligerent abroad and more controlling at home.

It is not surprising that Nokia, as part of doing business in Russia, was complying with lawful interception laws insofar as its products were compatible with SORM equipment. Frankly it would have been surprising if Nokia had flouted the law given that Nokia’s own policy concerning human rights asserts that (.pdf):

Nokia will provide passive lawful interception capabilities to customers who have a legal obligation to provide such capabilities. This means we will provide products that meet agreed standards for lawful intercept capabilities as defined by recognized standards bodies such as the 3rd Generation Partner Project (3GPP) and the European Telecoms Standards Institute (ETSI). We will not, however, engage in any activity relating to active lawful interception technologies, such as storing, post-processing or analyzing of intercepted data gathered by the network operator.

It was somewhat curious that the Times’ article declined to recognize that Nokia-Siemens has a long history of doing business in repressive countries: it allegedly sold mobile lawful interception equipment to Iran circa 2009 and in 2010-11 its lawful interception equipment was implicated in political repression and torture in Bahrain. Put differently, Nokia’s involvement in low rule-of-law countries is not new and, if anything, their actions in Russia appear to be a mild improvement on their historical approaches to enabling repressive governments to exercise lawful interception functionalities.

The broad question is whether Western companies should be authorized or permitted to do business in repressive countries. To some extent, we might hope that businesses themselves would express restraint. But, in excess of this, companies such as Nokia often require some kind of export license or approval before they can sell certain telecommunications equipment to various repressive governments. This is particularly true when it comes to supplying lawful interception functionality (which was not the case when Nokia sold equipment to Russia).

While the New York Times casts a light on Nokia the article does not:

  1. Assess the robustness of Nokia’s alleged human rights commitments–have they changed since 2013 when they were first examined by civil society? How do Nokia’s sales comport with their 2019 human rights policy? Just how flimsy is the human rights policy in its own right?
  2. Assess the export controls that Nokia was(n’t) under–is it the case that the Finnish government has some liability or responsibility for the sales of Nokia’s telecommunications equipment? Should there be?
  3. Assess the activities of the telecommunications provider Nokia was supplying in Russia, MTS, and whether there is a broader issue of Nokia supplying equipment to MTS since it operates in various repressive countries.

None of this is meant to set aside the fact that Western companies ought to behave better on the international stage. But…this has not been a priority in Russia, at least, until the country’s recent war of aggression. Warning signs were prominently on display before this war and didn’t result in prominent and public recriminations towards Nokia or other Western companies doing business in Russia.

All lawful interception systems, regardless of whether they conform with North American, European, or Russian standards, are surveillance systems. Put another way, they are all about empowering one group to exercise influence or power over others who are unaware they are being watched. In low rule-of-law countries, such as Russia, there is a real question as to whether they should even be called ‘lawful interception systems’ as opposed to explicitly calling them ‘interception systems’.

There was a real opportunity for the New York Times to both better contextualize Nokia’s involvement in Russia and, then, to explain and problematize the nature of lawful interception capability and standards. The authors could also have spent time discussing the nature of export controls on telecommunications equipment, where the equipment is being sold into repressive states. Sadly this did not occur with the result that the authors and paper declined to more broadly consider and report on the working, and ethics and politics, of enabling telecommunications and lawful interception systems in repressive and non-repressive states alike. While other kicks at this can will arise, it’s evident that there wasn’t even an attempt to do so in this report on Nokia.


‘Glass Time’ Shortcut

Photo by Ron Lach on Pexels.com

Like most photographers I edit my images with the brightness on my screen set to its maximum. Outside of specialized activities, however, I and others don’t tend to set the brightness this high so as to conserve battery power.

The result is that when we, as photographers, as well as members of the viewing public, look at images on photography platforms, we often aren’t seeing them as their creators envisioned. The images are, quite starkly, darker on our screens than on those of the photographers who made them.1

For the past few months, whenever I’ve opened Glass or looked at photos on other platforms, I’ve made an effort to maximize the brightness on my devices as I open the app. This said, I still forget sometimes and only realize halfway through a viewing session. So I went about ensuring this ‘mistake’ didn’t happen anymore by creating a Shortcut called ‘Glass Time’!

The Shortcut is pretty simple: when I run it, it maximizes the brightness of my iOS device and opens the Glass app. If you download the Shortcut it’s pretty easy to modify it to instead open a different application (e.g., Instagram, 500px, Flickr, etc). It’s definitely improved my experiences using the app and helped me to better appreciate the images that are shared by individuals on the platform.

Download ‘Glass Time’ Shortcut


  1. Of course there are also issues associated with different devices having variable maximum brightness and colour profiles. These kinds of differences are largely intractable in the current technical milieu. ↩︎

The Risks Linked With Canadian Cyber Operations in Ukraine

Photo by Sora Shimazaki on Pexels.com

Late last month, Global News published a story on how the Canadian government is involved in providing cyber support to the Ukrainian government in the face of Russia’s illegal invasion. While the Canadian military declined to confirm or deny any activities they might be involved in, the same was not true of the Communications Security Establishment (CSE). The CSE is Canada’s foreign signals intelligence agency. In addition to collecting intelligence, it is also mandated to defend Canadian federal systems and those designated as of importance to the government of Canada, provide assistance to other federal agencies, and conduct active and defensive cyber operations.1

From the Global News article it is apparent that the CSE is involved in foreign intelligence operations as well as cyber defensive activities. Frankly, these kinds of activities are generally, and persistently, undertaken with regard to the Russian government, and so it’s not a surprise that they continue apace.

The CSE spokesperson also noted that the government agency is involved in ‘cyber operations’ though declined to explain whether these are defensive cyber operations or active cyber operations. In the case of the former, the Minister of National Defense must consult with the Minister of Foreign Affairs before authorizing an operation, whereas in the latter both Ministers must consent to an operation prior to it taking place. Defensive and active operations can assume the same form–roughly the same activities or operations might be undertaken–but the rationale for the activity being taken may vary based on whether it is cast as defensive or active (i.e., offensive).2

These kinds of cyber operations are the ones that most worry scholars and practitioners, on the basis that foreign operators or adversaries may misread a signal from a cyber operation, or that the operation might have unintended consequences. The danger is that the operations the CSE is undertaking could accidentally (or intentionally, I suppose) escalate affairs between Canada and the Russian Federation in the midst of the shooting war between Russian and Ukrainian forces.

While there is, of course, a need for some operational discretion on the part of the Canadian government it is also imperative that the Canadian public be sufficiently aware of the government’s activities to understand the risks (or lack thereof) which are linked to the activities that Canadian agencies are undertaking. To date, the Canadian government has not released its cyber foreign policy doctrine nor has the Canadian Armed Forces released its cyber doctrine.3 The result is that neither Canadians nor Canada’s allies or adversaries know precisely what Canada will do in the cyber domain, how Canada will react when confronted, or the precise nature of Canada’s escalatory ladder. The government’s secrecy runs the risk of putting Canadians in greater jeopardy of a response from the Russian Federation (or other adversaries) without the Canadian public really understanding what strategic or tactical activities might be undertaken on their behalf.

Canadians have a right to know at least enough about what their government is doing to begin assessing the risks linked with conducting operations during an active military conflict against an adversary with nuclear weapons. Thus far such information has not been provided. The result is that Canadians are ill-prepared to assess the risk that they may be quietly and quickly drawn into the conflict between the Russian Federation and Ukraine. Such secrecy bodes poorly for holding government to account, to say nothing of how it prevents Canadians from appreciating the risk that they could become deeply drawn into a very hot conflict scenario.


  1. For more on the CSE and the laws governing its activities, see “A Deep Dive into Canada’s Overhaul of Its Foreign Intelligence and Cybersecurity Laws.” ↩︎
  2. For more on this, see “Analysis of the Communications Security Establishment Act and Related Provisions in Bill C-59 (An Act respecting national security matters), First Reading (December 18, 2017)“, pp 27-32. ↩︎
  3. Not for lack of trying to access them, however: in both cases I filed access to information requests with the government for these documents a year ago, with delays expected to mean I won’t get the documents before the end of 2022 at best. ↩︎

Policing the Location Industry

Photo by Ingo Joseph on Pexels.com

The Markup has a comprehensive and disturbing article on how location information is acquired by third-parties despite efforts by Apple and Google to restrict the availability of this information. In the past, it was common for third-parties to provide SDKs to application developers. The SDKs would inconspicuously transfer location information to those third-parties while also enabling functionality for application developers. With restrictions being put in place by platforms such as Apple and Google, however, it’s now becoming common for application developers to initiate requests for location information themselves and then share it directly with third-party data collectors.

While such activities often violate the terms of service and policy agreements between platforms and application developers, it can be challenging for the platforms to actually detect these violations and subsequently enforce their rules.

Broadly, the issues at play represent significant governmental regulatory failures. The fact that government agencies often benefit from the secretive collection of individuals’ location information makes it that much harder for governments to muster the will to discipline the secretive collection of personal data by third-parties: if the government cuts off the flow of location information, it will impede the ability of governments themselves to obtain this information.

In some cases intelligence and security services obtain location information from third-parties. This sometimes occurs in situations where the services themselves are legally barred from directly collecting this information. Companies selling mobility information can let government agencies do an end-run around the law.

One of the results is that efforts to limit data collectors’ ability to capture personal information often sees parts of government push for carve outs to collecting, selling, and using location information. In Canada, as an example, the government has adopted a legal position that it can collect locational information so long as it is de-identified or anonymized,1 and for the security and intelligence services there are laws on the books that permit the collection of commercially available open source information. This open source information does not need to be anonymized prior to acquisition.2 Lest you think that it sounds paranoid that intelligence services might be interested in location information, consider that American agencies collected bulk location information pertaining to Muslims from third-party location information data brokers and that the Five Eyes historically targeted popular applications such as Google Maps and Angry Birds to obtain location information as well as other metadata and content. As the former head of the NSA announced several years ago, “We kill people based on metadata.”

Any argument made by private or public organizations that anonymization or de-identification makes location information acceptable to collect, use, or disclose generally relies on tricking customers and citizens. Why? Because even when location information is aggregated and ‘anonymized’ it can subsequently be re-identified. And even where that reversal never occurs, policy decisions can still be made on the basis of the aggregated information. The process of deriving and applying these insights showcases that, while privacy is an important right to protect, it is not the only right implicated in the collection and use of location information. It is also important to assess the proportionality and necessity of the collection and use, as well as how the associated activities affect individuals’ and communities’ equity and autonomy in society. Doing anything less is merely privacy-washing.
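To make the re-identification risk concrete, consider a toy sketch (all cell names and records below are hypothetical, not drawn from any real dataset): even with user identifiers stripped, the coarse home/work location pair recoverable from someone’s pings is often unique, so anyone who already knows where a person lives and works can single out that person’s record.

```python
from collections import Counter

# Hypothetical 'anonymized' mobility records: user IDs have been removed,
# but each record still carries the coarse home and work grid cells
# derived from that user's location pings.
anonymized = [
    ("cell_14", "cell_88"),
    ("cell_14", "cell_88"),
    ("cell_07", "cell_88"),
    ("cell_14", "cell_03"),
    ("cell_22", "cell_51"),
]

# Count how many records share each home/work pair. A pair that appears
# only once is effectively a fingerprint: auxiliary knowledge of a
# person's home and workplace matches exactly one record, despite the
# missing identifier.
counts = Counter(anonymized)
unique_pairs = [pair for pair, n in counts.items() if n == 1]
print(unique_pairs)
```

In this tiny sample, three of the four distinct home/work pairs are unique, and real mobility datasets exhibit the same property at scale: a handful of coarse spatio-temporal points is usually enough to pick one person out of millions.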

Throughout discussions about data collection, including as it pertains to location information, public agencies and companies alike tend to offer a pair of arguments against changing the status quo. First, they assert that consent isn’t really possible anymore given the volume of data collected from individuals on a daily basis; individuals would be overwhelmed with consent requests, so we can’t make the requests in the first place! Second, they claim that we can’t regulate the collection of this data because doing so risks impeding innovation in the data economy.

If those arguments sound familiar, they should. They’re very similar to the plays made by industry groups whose activities have historically had negative environmental consequences. These groups regularly assert that, after decades of poor or middling environmental regulation, any new and stronger regulations would unduly impede the existing dirty economy for power, services, goods, and so forth. Moreover, they insist, the dirty way of producing power, services, and goods is just how things are, and thus should remain the same.

In both the privacy and environmental worlds, corporate actors (and those to whom they sell data or goods) have benefitted from never paying the full cost of their activities: they acquire data without meaningful consent, just as polluters avoid accounting for the environmental costs of their operations. But, just as we demand stronger environmental regulations to address the harms industry causes to the environment, we should demand and expect the same when it comes to the personal data economy.

If a business is predicated on sneaking personal information away from individuals, then it is clearly not particularly interested or invested in behaving ethically towards consumers. It’s imperative to keep pushing legislators not just to recognize that such practices are unethical, but to make them illegal as well. Doing so will require being heard over the cries of government agencies that have vested interests in obtaining location information in ways that skirt the laws that might normally discipline such collection, as well as companies that have grown on the back of unethical data collection practices. While this will not be an easy task, it’s increasingly important given the limited ability of platforms to police the sneaky collection of this information, and the increasingly problematic ways our personal data can be weaponized against us.


  1. “PHAC advised that since the information had been de-identified and aggregated, it believed the activity did not engage the Privacy Act as it was not collecting or using “personal information”. ↩︎
  2. See, as example, Section 23 of the CSE Act ↩︎

Glass and Community

(New Heights by Christopher Parsons)

The founders of the photography application, Glass, were recently on Protocol’s Source Code. Part of what they emphasized, time and time again, was the importance of developing a positive community where photographers interacted with one another.

Glass continues to be the place where I’m most comfortable sharing my images. I really don’t care about how many people ‘appreciate’ a photo and I’m never going to be a photographic influencer. But I do like being in a community where I’m surrounded by helpful photographers, and where I’m regularly inspired by the work of other photographers.

Indeed, just today one of the photographers I most respect posted an image that I found really spectacular and we had a brief back and forth about what I saw/emotions it evoked, and his reaction to my experience of it. I routinely have these kinds of positive and meaningful back-and-forths on Glass. That’s not to say that similar experiences don’t, and can’t, occur on other companies’ platforms! But, from my own point of view, Glass is definitely creating the experiences that the developers are aiming for.

I also think that the developers of Glass are serious in their commitment to taking ideas from their community. I’d proposed via their ticketing system that they find a way of showcasing the excellent blog content that they’re producing, and that’s now on their roadmap for the application.

It’s also apparent that the developers, themselves, are involved in the application and watching what people are posting to showcase great work. They’ve routinely had excellent and interesting interviews with photographers on the platform, as well as highlighted photos that they found interesting each month in the categories that they have focused on (in interests of disclosure, one of my photos was included in their Cityscapes collection).

These are, admittedly, the kinds of features and activities that you’d hope developers would roll out and emphasize as they build a photography application and grow its associated community. Even the developers of Instagram, when it was still a sub-10 person shop, were pretty involved in their community! I can only hope that Glass never turns into their Meta ‘competitor’!