Link

Housing in Ottawa Now a National Security Issue

David Pugliese is reporting in the Ottawa Citizen that the Canadian Forces Intelligence Command (CFINTCOM) is “trying to avoid posting junior staff to Ottawa because it has become too expensive to live in the region.” The risk is that financial hardship associated with living in Ottawa could make junior members susceptible to subversion. Housing costs in Ottawa have risen much faster than either wages or inflation. Moreover, the special allowance provided to staff, which is meant to assuage the high cost of living in Canadian cities, has been frozen for 13 years.

At this point energy, telecommunications, healthcare, and housing all raise their own national security concerns. To some extent, such concerns have tracked with these industry categories: governments have always worried about the security of telecommunications networks as well as the availability of sufficient energy supplies. But in other cases, such as housing affordability, the national security concerns we are seeing are the result of long-term governance failures. These failures have created new national security threats that would not exist in the face of good (or even just better) governance.1

There is a profound danger in trying to address all the new national security challenges and issues using national security tools or governance processes. National security incidents are often regarded as creating moments of exception and, in such moments, actions can be undertaken that otherwise could not. The danger is that states of exception become the norm and, in the process, the regular modes of governance and law are significantly set aside to resolve the crises of the day. What is needed is a regeneration and deployment of traditional governance capacity instead of a routine reliance on national security-type responses to these issues.

Of course, governments don’t just need to respond to these metastasized governance problems in order to alleviate national security issues and threats. They need to do so in equitable and inclusive ways, so as to preserve or (re)generate trust between the residents of Canada and their government.

The public may justifiably doubt that their system of government is working when successive governments under the major political parties are seen as having failed to provide for basic needs. The threat, then, is that ongoing governance failures risk placing Canada’s democracy under pressure. While this might seem overstated, I don’t think that’s the case: we are seeing the rise of politicians who capitalize on the frustrations and challenges faced by Canadians across the country but who do not have solutions of their own. Capitalizing on rage and frustration, and then failing to deliver fixes, will only further alienate Canadians from their government.

Governments across Canada flexed their muscles during the earlier phases of the COVID-19 pandemic. Having used those muscles, it’s imperative they keep flexing them to address the serious issues that Canadians are experiencing. Doing so will assuage existing national security issues. It will also, simultaneously, prevent other normal governance challenges from metastasizing into national security threats.


  1. As an aside, these housing challenges are not necessarily new. Naval staff posted to Esquimalt have long complained about the high costs of off-base housing in Victoria and the surrounding towns and cities. ↩︎

Thoughts on Developing My Street Photography

(Dead Ends by Christopher Parsons)

For the past several years I’ve created a ‘best of’ album that collects the strongest photos I made that year. I use the yearly album to assess how my photography has changed and what, if any, qualities are common across those images. The process of making these albums and then printing them forces me to look at my images, consider how they work against one another, and better understand what I learned over the course of a year of taking photos.

I have lots of favourite photographs, but what I’ve learned most over the past few years is to ignore a lot of the information and ‘tips’ that are often shared about street photography. Note that the reason to ignore them is not because they are wrong per se, or that photographers shouldn’t adopt them, but because they don’t work for how I prefer to engage in street photography.

I Don’t Do ‘Stealth’ Photography

Probably the key tip that I set to the side is that you should be stealthy, sneaky, or otherwise hidden from the subjects of the photos you capture. It’s pretty common for me to see a scene and wait with my camera to my eye until the right subjects enter and are positioned where I want them in my frame. Sometimes that means people will avoid me and the scene, and other times they’ll clearly indicate that they don’t want their photo taken. In those cases the subject is communicating their preferences quite clearly and I won’t take their photograph. It’s an ethical line I don’t want to cross.

(Winter Troop by Christopher Parsons)

In other instances, my subjects will be looking right at me as they pass through the scene. They’re often somewhat curious, and in many situations they stop and ask what I’m taking photos of, and a short conversation follows. In a handful of situations they’ve asked me to send along an image I captured of them, or a link to my photos; to date, I’ve had very few ‘bad’ encounters while shooting on the streets.

I Don’t Imitate Others

I’ve spent a lot of time learning about classic photographers over the past couple of years. I’ve been particularly drawn to black and white street photography, in part because I think it often has a timeless character and because it forces me to think more carefully about positioning a subject so they stand out.

(Working Man by Christopher Parsons)

This being said, I don’t think that I’m directly imitating anyone else. I shoot with a set of focal ranges and periodically mix up the device I’m capturing images on; last year, the bulk of my favourite photos came from an intensive two-week photography vacation where I forced myself to walk extensively and use only an iPhone 12 Pro. The photos I’m taking this year have largely been with a Fuji X100F and some custom JPEG recipes that generally produce results I appreciate.

Don’t get me wrong: in seeing some of the photos of the greats (and the less great and less well-known) I draw inspiration from the kinds of images they make, but I don’t think I’ve ever gone out to try and make images like theirs. This differs from when I started taking shots in my city and wanted to make images that looked similar to the ‘popular’ shots I was seeing. I still appreciate those images, but they’re not what I want to make these days.

I Create For Myself

While I don’t think I’m alone in this, the images I make are principally for myself. I share some of them but, really, I just want to get out and walk through my environment. I find that the process of slowing down to look for instances of interest and beauty helps ground me.

Because I tend to walk within the same 10–15 km radius of my home, I have a pretty good sense of how neighbourhoods are changing. I can see my city change on a week-to-week basis, and I feel more in tune with what’s really happening based on my own observations. My photography makes me very present in my surroundings.

(Dark Sides by Christopher Parsons)

I also tend to use my walks both to cover new ground and to go into back alleys, behind sheds, and generally into the corners of the city that are less apparent unless you’re looking for them. Much of the time there’s nothing particularly interesting to photograph in those spaces. But, sometimes, something novel or unique emerges.

Change Is Normal

For the past year or so, the vast majority (95% or more) of my images have been black and white. That hasn’t always been the case! But I decided I wanted to lean into this mode of capturing images to develop a particular set of skills and get used to seeing—and visualizing—scenes and subjects monochromatically.

But my focus on black and white images, as well as on images that predominantly include human subjects, is relatively new: if I look at my images from just a few years ago, there was a lot of colour and stark, or empty, cityscapes. I don’t dislike those images and, in fact, several remain amongst my favourite images I’ve made to date. But I don’t want to be constrained by one way of looking at the world. The world is too multifaceted, and there are too many ways of imagining it, to be stuck permanently in one way of capturing it.

(Alley Figures by Christopher Parsons)

This said, over time, I’d like to imagine I might develop a way of seeing the world and capturing images that provides a common visual language across my work. If that never happens I’m okay with it, so long as the practice of photography continues to pay the dividends of better understanding my surroundings and feeling in tune with wherever I’m living at the time.

Hopes for WWDC 2022

(Judgement by Christopher Parsons)

Apple’s Worldwide Developers Conference starts tomorrow, and we can all expect a bunch of updates to Apple’s operating systems and, if we’re lucky, some new hardware. In no particular order, here are some things I want updated in iOS applications and, ideally, exposed for developers to hook into as well.

Photos

  • The ability to search photos by different cameras and/or focal lengths
  • The ability to select a point on a photo to set the white point for exposure balancing when editing photos
  • Better/faster sync across devices
  • Ability to edit geolocation data
  • Support for tags in photos

Camera

  • Working (virtual) spirit level!
  • Set burst mode to activate by holding the shutter button; this was how things used to be and I want the option to go back to the way things were!
  • Advanced metering modes, such as center-weighted, multi-zone, spot, and expose for highlights!
  • Set and forget auto-focus points in the frame; not focus lock, but focus zones
  • Zone focusing

Maps

  • Ability to collaborate on a guide
  • Option to select whose restaurant data runs underneath the app (I will never install Yelp, which is the current app linked in Maps)

Music

  • Ability to collaborate on a playlist
  • Have multiple libraries: I want one ‘primary’ (or ‘all albums’) library and others with selected albums. I do not want to just make playlists

Reminders

  • Speed up sync across shared reminders; this matters for things like shared grocery shopping! 1
  • Integrate reminders’ date/time in calendar, as well as with whom reminders are shared

Messages

  • Emoji reactions
  • Integration with Giphy!

News

  • When I block a publication, actually block it instead of giving me the option to see stories from publications I’ve blocked
  • It’d be great to see News updated so I can add my own RSS feeds

Fitness

  • Ability to take off days; when sick or travelling it can be impossible to maintain streaks, which is incredibly frustrating if you regularly live a semi-active life

Health

  • Show long-term data (e.g., year over year) in a user-friendly way; currently this requires third-party apps and should be native by default

Of course, I’d also love to see Apple announce a new MacBook Air. I need a new laptop but don’t want to buy one that’s about to be superseded, and I just don’t need the power of the MacBook Pro line. Here’s hoping Apple makes this announcement next week!


  1. In general I want iCloud to sync things a hella lot faster! ↩︎

Mitigating AI-Based Harms in National Security

Photo by Pixabay on Pexels.com

Government agencies throughout Canada are investigating how they might adopt and deploy ‘artificial intelligence’ programs to enhance how they provide services. In the case of national security and law enforcement agencies these programs might be used to analyze and exploit datasets, surface threats, identify risky travellers, or automatically respond to criminal or threat activities.

However, the predictive software systems that are being deployed–‘artificial intelligence’–are routinely shown to be biased. These biases are serious in the commercial sphere but there, at least, it is somewhat possible for researchers to detect and surface biases. In the secretive domain of national security, however, the likelihood of bias in agencies’ software being detected or surfaced by non-government parties is considerably lower.

I know that organizations such as the Canadian Security Intelligence Service (CSIS) have an interest in understanding how to use big data in ways that mitigate bias. The Canadian government does have a policy on the “Responsible use of artificial intelligence (AI)” and, at the municipal policing level, the Toronto Police Service has also published a policy on its use of artificial intelligence. Furthermore, the Office of the Privacy Commissioner of Canada has published a proposed regulatory framework for AI as part of potential reforms to federal privacy law.

Timnit Gebru, in conversation with Julia Angwin, suggests that there should be ‘datasheets for algorithms’ that would outline how predictive software systems have been tested for bias in different use cases prior to being deployed. Linking this to traditional circuit-based datasheets, she says (emphasis added):

As a circuit designer, you design certain components into your system, and these components are really idealized tools that you learn about in school that are always supposed to work perfectly. Of course, that’s not how they work in real life.

To account for this, there are standards that say, “You can use this component for railroads, because of x, y, and z,” and “You cannot use this component for life support systems, because it has all these qualities we’ve tested.” Before you design something into your system, you look at what’s called a datasheet for the component to inform your decision. In the world of AI, there is no information on what testing or auditing you did. You build the model and you just send it out into the world. This paper proposed that datasheets be published alongside datasets. The sheets are intended to help people make an informed decision about whether that dataset would work for a specific use case. There was also a follow-up paper called Model Cards for Model Reporting that I wrote with Meg Mitchell, my former co-lead at Google, which proposed that when you design a model, you need to specify the different tests you’ve conducted and the characteristics it has.

What I’ve realized is that when you’re in an institution, and you’re recommending that instead of hiring one person, you need five people to create the model card and the datasheet, and instead of putting out a product in a month, you should actually do it in three years, it’s not going to happen. I can write all the papers I want, but it’s just not going to happen. I’m constantly grappling with the incentive structure of this industry. We can write all the papers we want, but if we don’t change the incentives of the tech industry, nothing is going to change. That is why we need regulation.

Government is one of those areas where regulation or law can work well to discipline behaviour, and where relatively large resources combined with a law-abiding bureaucracy might mean that formally required assessments would actually be conducted. While such assessments matter generally, they are of particular importance where state agencies might be involved in making decisions that significantly or permanently alter the life chances of residents of Canada, visitors passing through our borders, or foreign nationals interacting with our government agencies.

As it stands today, many Canadian government efforts at the federal, provincial, and municipal levels seem to be significantly focused on how predictive software might be used or the effects it may have. These are important things to attend to! But it is just as, if not more, important for agencies to undertake baseline assessments of how and when different predictive software engines are permissible, based on robust testing and evaluation of their features and flaws.

Having spoken with people at different levels of government, the recurring complaint around assessing training data, and predictive software systems more generally, is that it’s hard to hire the right people for these assessment jobs because such people are relatively rare and often exceedingly expensive. Thus, mid-level and senior members of government tend to focus on things government is perceived as actually able to do: figure out and track how predictive systems would be used and to what effect.

However, the regular focus on the resource-related challenges of predictive software assessment raises the very real question of whether these constraints should simply compel agencies to forgo technologies whose prospective harms they cannot determine and assess. In the firearms space, as an example, government agencies are extremely rigorous in assessing how a weapon operates to ensure that it functions precisely as intended, given that the weapon might be used in life-changing scenarios. Such assessments require significant sums from agency budgets.

If we can make significant budgetary allocations for firearms, on the grounds they can have life-altering consequences for all involved in their use, then why can’t we do the same for predictive software systems? If anything, such allocations would compel agencies to make a strong(er) business case for testing the predictive systems in question and spur further accountability: Does the system work? At a reasonable cost? With acceptable outcomes?

Imposing cost discipline on organizations is an important way of ensuring that technologies, and other business processes, aren’t randomly adopted on the basis of externalizing their full costs. By internalizing those costs, up front, organizations may need to be much more careful in what they choose to adopt, when, and for what purpose. The outcome of this introspection and assessment would, hopefully, be that the harmful effects of predictive software systems in the national security space were mitigated and the systems which were adopted actually fulfilled the purposes they were acquired to address.

Link

A Brief Unpacking of a Declaration on the Future of the Internet

Cameron F. Kerry has a helpful piece in Brookings that unpacks the recently published ‘Declaration on the Future of the Internet.’ As he explains, the Declaration was signed by 60 States and is meant, in part, to rebut a China-Russia joint statement. Those countries’ statement would support their positions on ‘securing’ domestic Internet spaces and removing Internet governance from multi-stakeholder forums to State-centric ones.

So far, so good. However, baked into Kerry’s article is language suggesting that he either misunderstands, or understates, some of the security-related elements of the Declaration. He writes:

There are additional steps the U.S. government can take that are more within its control than the actions and policies of foreign states or international organizations. The future of the Internet declaration contains a series of supporting principles and measures on freedom and human rights, Internet governance and access, and trust in use of digital network technology. The latter—trust in the use of network technology— is included to “ensure that government and relevant authorities’ access to personal data is based in law and conducted in accordance with international human rights law” and to “protect individuals’ privacy, their personal data, the confidentiality of electronic communications and information on end-users’ electronic devices, consistent with the protection of public safety and applicable domestic and international law.” These lay down a pair of markers for the U.S. to redeem.

I read this against the 2019 Ministerial and the recent updates to the Council of Europe Cybercrime Convention, and see that a vast swathe of new law enforcement and security agency powers would be entirely permissible based on Kerry’s assessment of the Declaration and the States involved in signing it. While these new powers have either been agreed to, or advanced by, signatory States, they have simultaneously been directly opposed by civil and human rights campaigners, as well as by some national courts. Specifically, there are live discussions around the following powers:

  • the availability of strong encryption;
  • the guarantee that the content of communications sent using end-to-end encryption cannot be accessed or analyzed by third parties (including by on-device surveillance);
  • the requirement of prior judicial authorization to obtain subscriber information; and
  • the oversight of preservation and production powers by relevant national judicial bodies.

Laws can be passed that see law enforcement interests supersede individuals’ or communities’ rights in safeguarding their devices, data, and communications from the State. When or if such a situation occurs, the signatories of the Declaration can hold fast in their flowery language around protecting rights while, at the same time, individuals and communities experience heightened surveillance of, and intrusions into, their daily lives.

In effect, a lot of international policy and legal infrastructure has been built to facilitate sweeping new investigatory powers and reforms to how data is, and can be, secured. It has taken years to build this infrastructure and as we leave the current stage of the global pandemic it is apparent that governments have continued to press ahead with their efforts to expand the powers which could be provided to law enforcement and security agencies, notwithstanding the efforts of civil and human rights campaigners around the world.

The next stage will be to assess how, and in what ways, international agreements and legal infrastructure will be brought into national legal systems, and to determine where to strategically oppose the worst of the overreaches. While it’s possible that some successes will be achieved in resisting the expansion of state powers, not everything will be resisted. The consequence will be both to enhance state intrusions into private lives and to weaken the security provided to devices and data, with the resultant effect of better enabling criminals to illicitly access or manipulate our personal information.

The new world of enhanced surveillance and intrusions is wholly consistent with the ‘Declaration on the Future of the Internet.’ And that’s a big, glaring, and serious problem with the Declaration.

Link

The Broader Implications of Data Breaches

Ikea Canada notified approximately 95,000 Canadian customers in recent weeks about a data breach the company suffered. An Ikea employee conducted a series of searches between March 1 and March 3 which surfaced the account records of those customers.1

While Ikea promised that financial information–credit card and banking information–hadn’t been revealed, a raft of other personal information had been. That information included:

  • full first and last name;
  • postal code or home address;
  • phone number and other contact information;
  • IKEA loyalty number.

Ikea did not disclose who specifically accessed the information nor their motivations for doing so.

The notice provided by Ikea was better than most data breach alerts insofar as it informed customers what exactly had been accessed. For some individuals, however, this information is highly revelatory and could cause significant concern.

For example, imagine a case where someone has previously been the victim of either physical or digital stalking. Should their former stalker be an Ikea employee the data breach victim may ask whether their stalker now has confidential information that can be used to renew, or further amplify, harmful activities. With the customer information in hand, as an example, it would be relatively easy for a stalker to obtain more information such as where precisely someone lived. If they are aggrieved then they could also use the information to engage in digital harassment or threatening behaviour.

Without more information about the motivations behind why the Ikea employee searched the database those who have been stalked or had abusive relations with an Ikea employee might be driven to think about changing how they live their lives. They might feel the need to change their safety habits, get new phone numbers, or cycle to a new email. In a worst case scenario they might contemplate vacating their residence for a time. Even if they do not take any of these actions they might experience a heightened sense of unease or anxiety.

Of course, Ikea is far from alone in suffering these kinds of breaches. They happen on an almost daily basis for most of us, whether we’re alerted to the breach or not. Many news reports about such breaches focus on whether there is an existent or impending financial harm and stop the story there. The result is that journalistic reporting can conceal some of the broader harms linked with data breaches.

Imagine a world where our personal information–how you can call us or find our homes–was protected as strongly as our credit card numbers currently are. In such a world stalkers and other abusive actors might be less able to exploit stolen or inappropriately accessed information. Yes, there will always be ways for bad actors to behave badly, but it would be possible to mitigate some of the ways this badness can take place.

Companies could still create meaningful consent frameworks whereby some (perhaps most!) individuals could agree to have their information stored by the company. But those who have a different risk threshold could make a meaningful choice, so that they could still make purchases and receive deliveries without, at the same time, permanently increasing the risk that their information falls into the wrong hands. However, getting to this point requires expanded threat modelling: we can’t just worry about a bad credit card purchase but, instead, would need to take seriously the gendered and intersectional nature of violence and its intersection with cybersecurity practices.


  1. In the interests of disclosure, I was contacted as an affected party by Ikea Canada. ↩︎
Link

Messaging Interoperability and Client Security

Eric Rescorla has a thoughtful and nuanced assessment of recent EU proposals which would compel messaging companies to make their communications services interoperable. To his immense credit he spends time walking the reader through historical and contemporary messaging systems in order to assess the security issues prospectively associated with requiring interoperability. It’s a very good, and compact, read on a dense and challenging subject.

I must admit, however, that I’m unconvinced that demanding interoperability will have only minimal security implications. While much of the expert commentary has focused on whether end-to-end encryption would be compromised, I think too little time has been spent considering the client side of interoperable communications. So if we assume it’s possible to facilitate end-to-end communications across messaging companies, and focus just on clients receiving and sending communications, what are some risks?1

As it stands today, the dominant messaging companies have large and professional security teams. While none of these teams are perfect, as shown by the success of cyber mercenary companies such as NSO Group et al., they are robust and constantly working to improve the security of their products. The attacks used by groups such as NSO, Hacking Team, Candiru, FinFisher, and others have not tended to rely on breaking encryption. Rather, they have sought vulnerabilities in client devices. Due to sandboxing and contemporary OS security practices this has regularly meant successfully targeting a messaging application and, subsequently, expanding a foothold on the device more generally.

In order for interoperability to ‘work’ properly, a number of preconditions will need to be met. As noted in Rescorla’s post, this may include checking what functions an interoperable client possesses to determine whether ‘standard’ or ‘enriched’ client services are available. Moreover, APIs will need to be (relatively) stable or rely on a standardized protocol to facilitate interoperability. Finally, while spam messages are annoying on messaging applications today, they may become even more commonplace where interoperability is required and service providers cannot use their current processes to filter and quality-check messages transiting their infrastructure.

What do all the aforementioned elements mean for client security?

  1. Checking for client functionality may reveal whether a targeted client possesses known vulnerabilities, either generally (following a patch update) or just to the exploit vendor (where they know of a vulnerability and are actively exploiting it). Where spam filtering is not great exploit vendors can use spam messaging as reconnaissance messaging with the service provider, client vendor, or client applications not necessarily being aware of the threat activity.
  2. When or if there is a significant need to rework how keying operates, or how identity properties linked to an API are handled more broadly, then there is a risk that implementation of updates may be delayed until the revisions have had time to be adopted by clients. While this might be great for competition vis-à-vis interoperability, it will also have the effect of signalling an oncoming change to threat actors, who may accelerate activities to get footholds on devices, or may be warned that they, too, need to update their tactics, techniques, and procedures (TTPs).
  3. As a more general point, threat actors might work to develop and propagate interoperable clients that they have, already, compromised–we’ve previously seen nation-state actors do so and there’s no reason to expect this behaviour to stop in a world of interoperable clients. Alternately, threat actors might try and convince targets to move to ‘better’ clients that contain known vulnerabilities but which are developed and made available by legitimate vendors. Whereas, today, an exploit developer must target specific messaging systems that deliver that systems’ messages, a future world of interoperable messaging will likely expand the clients that threat actors can seek to exploit.

One of the severe dangers and challenges facing the current internet regulation landscape is that a large volume of new actors have entered the various overlapping policy fields. For a long time there weren’t that many of us, and anyone who’s been around for 10–15 years tends to be sufficiently multidisciplinary that they think about how activities in policy domain X might, or will, have consequences for domains Y and Z. The new raft of politicians and their policy advisors, in contrast, often lack this broad awareness. The result is that proposals are being advanced around the world by ostensibly well-meaning individuals and groups to address issues associated with online harms, speech, CSAM, competition, and security. However, these same parties often lack awareness of how the solutions meant to solve their favoured policy problems will affect neighbouring policy issues. And, where they are aware, they often don’t care because that’s someone else’s policy domain.

It’s good to see more people participating and more inclusive policy making processes. And seeing actual political action on many issue areas after 10 years of people debating how to move forward is exciting. But too much of that action runs counter to the thoughtful warnings and need for caution that longer-term policy experts have been raising for over a decade.

We are almost certainly moving towards a ‘new Internet’. It remains in question, however, whether this ‘new Internet’ will see resolutions to longstanding challenges or if, instead, the rush to regulate will change the landscape by finally bringing to life the threats that long-term policy wonks have been working to forestall or prevent for much of their working lives. To date, I remain increasingly concerned that we will experience the latter rather than witness the former.


  1. For the record, I currently remain unconvinced it is possible to implement end-to-end encryption across platforms generally. ↩︎
Aside

2022.4.9

I’ve been doing my own IT for a long while, as well as small tasks for others. But I haven’t had to do an email migration—while ensuring pretty well no downtime—in years.

Fortunately the shift from Google Mail (due to the deprecation of grandfathered accounts that offered free custom domain integration) to Apple’s iCloud+ was remarkably smooth and easy. Apple’s instructions were helpful, as were those of the host I was dealing with. Downtime was a couple of seconds, at most, though there was definitely a brief moment of holding my breath in fear that the transition hadn’t quite taken.

Solved: Mendeley-Related Error in Microsoft Word for MacOS

In the past I used Mendeley as a citation management system. I stopped using it, and uninstalled it from MacOS, when they deprecated the mobile application I relied upon. I had installed the Mendeley extension for Microsoft Word to facilitate easy citation insertion and updates. Ever since deleting Mendeley from MacOS I have received a popup window when opening Microsoft Word as well as a prompt to save changes to “Mendeley-word2016-1.19.4.dotm” when closing Word.

The Problem

I was receiving prompts when opening and closing Microsoft Word for MacOS after having uninstalled Mendeley. These were annoying and I wanted them to go away.

The Solution

In MacOS:

  1. Open Finder
  2. Search for “Mendeley”
  3. Delete “Mendeley Desktop.plist” and “Mendeley-word2016-1.19.4.dotm”

You should now be able to open Microsoft Word without being asked to point to where Mendeley is installed, and exit Word without being asked to save changes to Mendeley-word2016-1.19.4.dotm.
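For those more comfortable in the Terminal, the same cleanup can be sketched as a short shell session. This is only a sketch: the exact directories holding the two leftover files vary by Office and Mendeley version, so it searches for them rather than assuming their locations, and it deliberately stops short of deleting anything automatically.

```shell
# List any leftover Mendeley-related files under a given directory.
# (A sketch of the Finder steps above; review output before deleting.)
list_mendeley_leftovers() {
    find "$1" -name 'Mendeley*' -print 2>/dev/null
}

# Dry run against your user Library, where Word templates and
# preference files typically live.
list_mendeley_leftovers "$HOME/Library"

# Once you've confirmed the hits are the stale ".plist" and ".dotm"
# files, remove each one with rm, quoting paths that contain spaces.
```

Searching first, rather than hard-coding paths, avoids deleting the wrong thing if a newer Office release has moved its startup-template folder.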

Cyber Attacks Versus Operations in Ukraine


For the past decade there has been a steady drumbeat that ‘cyberwar is coming’. Sometimes the parties holding these positions are in militaries; in other cases they are from think tanks or university departments that are trying to link kinetic-adjacent computer operations with ‘war’.

Perhaps the most famous rebuttal to the cyberwar proponents has been Thomas Rid’s Cyber War Will Not Take Place. The title was meant to be provocative and almost has the effect of concealing a core insight of Rid’s argument: cyber operations will continue to be associated with conflicts but cyber operations are unlikely to constitute (or lead to) out-and-out war on their own. Why? Because it is very challenging to prepare and launch cyber operations that have significant kinetic results at the scale we associate with full-on war.

Since the Russian Federation launched its war of aggression against Ukraine, there have regularly been shocked assertions that cyberwar isn’t taking place. A series of pieces by The Economist, as an example, sought to prepare readers for a cyberwar that just hasn’t happened. Why not? Because The Economist–much like other outlets!–often presumed that the cyber dimensions of the conflict in Ukraine would bear at least some resemblance to the long-maligned concept of a ‘cyber Pearl Harbour’: a critical cyber-enabled strike of some sort would have a serious, and potentially devastating, effect on how Ukraine could defend against Russian aggression and thus tilt the balance towards Russian military victory.

As a result of these early mistaken understandings, scholars and experts have once more come out and explained why the cyber operations taking place in the Ukrainian conflict are not the same as an imagined cyber Pearl Harbour. Simultaneously, security and malware researchers have taken the opportunity to belittle International Relations theorists who have written about cyberwar, arguing that these theorists have fundamentally misunderstood how cyber operations take place.

Part of the challenge is that ‘cyberwar’ has often been popularly seen as the equivalent of hundreds of thousands of soldiers and their associated military hardware being deployed into a foreign country. As noted by Rid in a recent op-ed, while some cyber operations are meant to be apparent, others are much more subtle. The former might be meant to reduce the will to fight or diminish command and control capabilities. The latter, in contrast, will look a lot like other reconnaissance operations: knowing who is commanding which battle group, the logistical challenges facing the opponent, or the state of infrastructure in-country. All these latter dimensions provide strategic and tactical advantages to the party that launched the surveillance operation. Operations meant to degrade capabilities may occur but will often be more subtle. This subtlety can be a particularly severe risk in a conflict, such as if your ammunition convoy is sent to the wrong place or train timetables are thrown off with the effect of stymying civilian evacuation or resupply operations.1

What’s often lost in the ‘cyberwar’ debates–which tend to take place among people who don’t understand cyber operations, those who stand to profit from misrepresentations of them, and those who are so theoretical in their approaches as to be ignorant of reality–is that contemporary wars entail blended forces. Different elements of those blends have unique and specific tactical and strategic purposes. Cyber isn’t going to have the same effect as a Grad missile launcher or a T-90 battle tank, but that missile launcher or tank isn’t going to know that the target it’s pointed towards is strategically valuable without reconnaissance, nor can it impair logistics flows the way a cyber operation targeting train schedules can. To expect otherwise is to grossly misunderstand how cyber operations function in a conflict environment.

I’d like to imagine that one result of the Russian war of aggression will be to improve the general population’s understanding of cyber operations and what they do, and do not, entail. It’s possible that this might happen given that major news outlets, such as the AP and Reuters, are changing how they refer to such activities: they will no longer be called ‘cyberattacks’ outside very narrow situations. In simply changing what we call cyber activities–operations as opposed to attacks–we’ll hopefully see a deflating of the language and, with it, more careful understandings of how cyber operations take place in and out of conflict situations. As such, there’s a chance (hope?) we might see a better appreciation of the significance of cyber operations in the population writ large in the coming years. This will be increasingly important given the sheer volume of successful (non-conflict) operations that take place each day.


  1. It’s worth recognizing that part of why we aren’t reading about successful Russian operations is, first, due to Ukrainian and allied efforts to suppress news of such successes for fear of reducing Ukrainian/allied morale. Second, however, Western signals intelligence agencies such as the NSA, CSE, and GCHQ are all very active in providing remote defensive and other operational services to Ukrainian forces. There was also a significant effort ahead of the conflict to shore up Ukrainian defences, and there continues to be a strong effort by Western companies to enhance the security of systems used by Ukrainians. Combined, this means that Ukraine is enjoying additional ‘forces’ while, simultaneously, generally keeping quiet about its own failures to protect its systems or infrastructure. ↩︎