Can University Faculty Hold Platforms To Account?

Heidi Tworek has a good piece with the Centre for International Governance Innovation, where she questions whether there will be a sufficient number of faculty in Canada (and elsewhere) to make use of information that digital-first companies might be compelled to make available to researchers. The general argument goes that if companies must make information available to academics then these academics can study the information and, subsequently, hold companies to account and guide evidence-based policymaking.

Tworek’s argument focuses on two key things.

  1. First, there has been a decline in the tenured professoriate in Canada, with the effect that the adjunct faculty who are ‘filling in’ are busy teaching and really don’t have a chance to lead research.
  2. Second, while a vanishingly small number of PhD holders obtain a tenure track role, a reasonable number may be going into the very digital-first companies that researchers need data from in order to hold them accountable.

On this latter point, she writes:

If the companies have far more researchers than universities have, transparency regulations may not do as much to address the imbalance of knowledge as many expect.

I don’t think that hiring people with PhDs necessarily means that companies are addressing knowledge imbalances. Whatever is learned by these researchers tends to be sheltered within corporate walls and protected by NDAs. So those researchers going into companies may learn what’s going on but be unable (or unmotivated) to leverage what they know in order to inform policy discussions meant to hold companies to account.

To be clear, I really do agree with a lot in this article. However, I think it does have a few areas for further consideration.

First, more needs to be said about what, specifically, ‘transparency’ encompasses and its relationship to data types, availability, and the like. Transparency is a deeply contested concept and there are a lot of ways that the revelation of data basically creates a funhouse-of-mirrors effect, insofar as what researchers ‘see’ can be a deeply distorted version of what is actually happening.

Second, making data available isn’t just about whether universities have the professors to do the work but, really, whether the government and its regulators have the staff time as well. Professors are doing a lot of things whereas regulators can assign staff to just work the data, day in and day out. Focus matters.

Third, and related, I have to admit that I have pretty severe doubts about the ability of professors to seriously take up and make use of information from platforms, at scale and with policy impact, because it’s never going to be their full-time job to do so. Professors are also going to be required to publish in books or journals, which means their outputs will be delayed and inaccessible to companies, government bureaucrats and regulators, and NGO staff. I’m sure academics will have lovely and insightful discussions…but those discussions won’t happen fast enough, in accessible places, or in plain language, and so won’t generally affect policy debates.

So, what might need to be added to start fleshing out how universities could be organised to make use of data released by companies and have policy impact through their research outputs?

First, universities in Canada would need to get truly serious about creating a ‘researcher class’ to analyse corporate reporting. This would involve prioritising the hiring of research associates and senior research associates who have few or no teaching responsibilities.1

Second, universities would need to work to create centres such as the Citizen Lab, or related groups.2 These don’t need to be organisations which try and cover the waterfront of all digital issues. They could, instead, be more focused to reduce the number of staff or fellows that are needed to fulfil the organisation’s mandate. Any and all centres of this type would see a small handful of people with PhDs (who largely lack teaching responsibilities) guide multidisciplinary teams of staff. Those same staff members would not typically need a PhD. They would need to be nimble enough to move quickly while using a peer-review-lite process to validate findings, but not see journal or book outputs as their primary currency for promotion or hiring.

Third, the centres would need a core group of long-term staffers. This core body of long-term researchers is needed to develop policy expertise that graduate students just don’t possess or develop in their short tenure in the university. Moreover, these same long-term researchers can then train graduate student fellows of the centres in question, with the effect of slowly building a cadre of researchers who are equipped to critically assess digital-first companies.

Fourth, the staff at research centres need to be paid well and properly. They cannot be regarded as ‘graduate student plus’ employees but as specialists who will be of interest to government and corporations. This means that the university will need to pay competitive wages in order to secure the staff needed to fulfil centre mandates.

Basically, if universities are to be successful in holding big data companies to account they’ll need to incubate quasi-NGOs and let them loose under the university’s auspices. It is, however, worth asking whether this should be the goal of the university in the first place: should society be outsourcing a large amount of the ‘transparency research’ that is designed to have policy impact or guide evidence-based policy making to academics, or should we instead bolster the capacities of government departments and regulatory agencies to undertake these activities?

Put differently, and in context with Tworek’s argument: I think that assuming that PhD holders working as faculty in universities are the solution to analysing data released by corporations can only hold if you happen to (a) hold or aspire to hold a PhD; or (b) possess or aspire to possess a research-focused tenure track job.

I don’t think that either (a) or (b) should guide the majority of the way forward in developing policy proposals as they pertain to holding corporations to account.

Do faculty have a role in holding companies such as Google, Facebook, Amazon, Apple, or Netflix to account? You bet. But if the university, and university researchers, are going to seriously get involved in using data released by companies to hold them to account and have policy impact, then I think we need dedicated and focused researchers. Faculty who are torn between teaching; writing and publishing in inaccessible venues using baroque theoretical lenses; pursuing funding opportunities; undertaking large amounts of department service; and supervising graduate students are just not going to be sufficient to address the task at hand.


  1. In the interests of disclosure, I currently hold one of these roles. ↩︎
  2. Again in the interests of disclosure, this is the kind of place I currently work at. ↩︎

Tech for Whom?

Charley Johnson has a good line of questions and critique for any organization or group which is promoting a ‘technology for good’ program. The crux is that any and all techno-utopian proposals offer a technological means of solving a problem as defined by the party making the proposal. Put another way, these kinds of solutions do not tend to solve real underlying problems but, instead, solve the ‘problems’ for which hucksters have already built a pre-designed ‘solution’.

This line of analysis isn’t new, per se, and follows in a long line of writers on equity, social justice, feminism, and critical theory. Still, Johnson does a good job in extracting key issues with techno-utopianism. Key is that these solutions tend to present a ‘tech for good’ mindset that:

… frames the problem in such a way that launders the interests, expertise, and beliefs of technologists…‘For good’ is problematic because it’s self-justifying. How can I question or critique the technology if it’s ‘for good’? But more importantly, nine times out of ten ‘for good’ leads to the definition of a problem that requires a technology solution.

One of the things that we are seeing more commonly is the use of data, in and of itself, as something that can be used for good: data for good initiatives are cast as being critical to solving climate change, making driving safer, or automating away the messier parts of our lives. Some of these arguments are almost certainly even right! However, the proposed solutions tend to rely on collecting, using, or disclosing data—derived from individuals’ and communities’ activities—without obtaining their informed, meaningful, and ongoing consent. ‘Data for good’ depends, first and often foremost, on removing the agency to say ‘yes’ or ‘no’ to a given ‘solution’.

In the Canadian context efforts to enable ‘good’ uses of data have emerged through successively introduced pieces of commercial privacy legislation. The legislation would permit the disclosure of de-identified personal information for “socially beneficial purposes.” Information could be disclosed to government, universities, public libraries, health care institutions, organizations mandated by the government to carry out a socially beneficial purpose, and other prescribed entities. Those organizations could use the data for a purpose related to health, the provision or improvement of public amenities or infrastructure, the protection of the environment or any other prescribed purpose.

Put slightly differently, whereas Johnson’s analysis addresses a broad concept of ‘data for good’ in tandem with elucidating examples, the Canadian context threatens to see broad-based techno-utopian uses of data enabled at the legislative level. The legislation includes the ability to expand who can receive de-identified data and the range of socially beneficial uses, with new parties and uses being defined by regulation. While there are a number of problems with these kinds of approaches—which include the explicit removal of consent of individuals and communities to having their data used in ways they may actively disapprove of—at their core the problems are associated with power: the power of some actors to unilaterally make non-democratic decisions that will affect other persons or communities.

This capacity to invisibly express power over others is the crux of most utopian fantasies. In such fantasies, power relationships are resolved without ever being made explicit and, in the process, an imaginary is created wherein social ills are fixed as a result of power having been hidden away. Decision making in a utopia is smooth and efficient, and the power asymmetries which enable such situations are either hidden away or just not substantively discussed.

Johnson’s article concludes with a series of questions that act to re-surface issues of power by explicitly raising questions of agency and the origin and nature of the envisioned problem(s) and solution(s):

Does the tool increase the self-determination and agency of the poor?

Would the tool be tolerated if it was targeted at non-poor people?

What problem does the tool purport to solve and who defined that problem?

How does the way they frame the problem shape our understanding of it?

What might the one framing the problem gain from solving it?

We can look to these questions as, at their core, raising issues of power—who is involved in determining how agency is expressed, who has decision-making capabilities in defining problems and solutions—and, through them, issues of inclusion and equity. Implicit through his writing, at least to my eye, is that these decisions cannot be assigned to individuals alone but, rather, to individuals and their communities together.

One of the great challenges for modern democratic rule making is that we must transition from imagining political actors as rational, atomic subjects to seeing them as embedded in their communities. Individuals are formed by their communities, and vice versa, simultaneously. This means that we need to move away from traditional liberal or communitarian tropes to recognize the phenomenology of living in society, alone and together simultaneously, while also recognizing and valuing the tilting power and influence of ‘non-rational’ aspects of life that give life much of its meaning and substance. These elements of life are most commonly those demonized or denigrated by techno-utopians on the basis that technology is ‘rational’ and is juxtaposed against the ‘irrationality’ of how humans actually live and operate in the world.

Broadly, and in conclusion, techno-utopianism is functionally an issue of power and domination. We see ‘tech bros’ and traditional power brokers alike advancing solutions to their perceived problems, and this approach may be further reified should legislation be passed to embed this conceptual framework more deeply into democratic nation-states. What is under-appreciated is that while such legislative efforts may make certain techno-utopian activities lawful, the subsequent actions will not, as a result, necessarily be regarded as legitimate by those affected by the lawful ‘socially beneficial’ uses of de-identified personal data.

The result? At best, ambivalence that reflects the population’s existing alienation from democratic structures of government. More likely, however, is that lawful but illegitimate expressions of ‘socially beneficial’ uses of data will further delegitimize the actions and capabilities of the state, with the effect of further weakening the perceived inclusivity of our democratic traditions.

So You Can’t Verify Your Apple iCloud Custom Domain

Photo by Tim Gouw on Pexels.com

When you set up a custom iCloud email domain you have to modify the DNS records held by your domain’s registrar. On the whole, the information provided by Apple is simple and makes it easy to set up the custom domain.

However, if you change where your domain’s name servers point, such as when you modify the hosting for a website associated with the domain, you must update the DNS records with whomever you are pointing the name servers to. Put differently: if you have configured your Apple iCloud custom email by modifying the DNS information at host X, as soon as you shift to host Y by pointing your name servers at them you will also have to update DNS records with host Y.

Now, what if you don’t do this? As the DNS changes propagate over the subsequent 6-72 hours, you’ll end up in a situation where your custom iCloud domain email address stops sending or receiving messages because the routing information is no longer valid. This, in turn, will cause Apple’s iCloud custom domain system to try and re-verify the domain, since the DNS information you initially supplied no longer matches what is actually published.
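
Before (or instead of) calling support, it’s worth checking what public DNS currently says about your domain and comparing it against the records iCloud gave you. Below is a minimal diagnostic sketch using the third-party dnspython package; the “expected” Apple values in the comments are assumptions based on Apple’s published custom-domain guidance, so defer to whatever the iCloud interface shows for your own domain.

```python
# A diagnostic sketch using the third-party dnspython package
# (pip install dnspython). The "expected" Apple values noted in the
# comments are assumptions based on Apple's published guidance; defer
# to the records shown in your own iCloud custom-domain settings.
import dns.resolver

DOMAIN = "example.com"  # replace with your custom domain

def lookup(name: str, rdtype: str) -> set[str]:
    """Return the current public DNS answers for a name/type pair."""
    try:
        return {r.to_text() for r in dns.resolver.resolve(name, rdtype)}
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return set()

# Mail routing: both of Apple's mail exchangers should be present,
# e.g. '10 mx01.mail.icloud.com.' and '10 mx02.mail.icloud.com.'
print("MX:  ", lookup(DOMAIN, "MX"))

# Verification and SPF: expect the 'apple-domain=...' token iCloud gave
# you, plus an SPF record like 'v=spf1 include:icloud.com ~all'.
print("TXT: ", lookup(DOMAIN, "TXT"))

# DKIM: expect a CNAME along the lines of
# 'sig1.dkim.<your-domain>.at.icloudmailadmin.com.'
print("DKIM:", lookup(f"sig1._domainkey.{DOMAIN}", "CNAME"))
```

If any of these answers disagree with the records your new DNS host is supposed to be serving, that mismatch (rather than anything on Apple’s side) is usually what’s blocking re-verification.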

Should you run into this issue you might, naturally, first reach out to Apple support. You are, after all, running your email through their servers.

Positively: you will very quickly get a real-live human on the phone to help you. That’s great! Unfortunately, however, there is very little that Apple’s support staff can do to help you. There are very, very few internal help documents pertaining to custom domains. As was explained to me, the sensitivity and complexity of DNS (and the fact that information is non-standardized across registrars) means that the support staff really can’t help much: you’re mostly on your own. This is not communicated when setting up Apple custom email domains.

In a truly worst case scenario you might get a well-meaning but ignorant support member who leads you deeply astray in attempting to help troubleshoot and fix the problem. This, unfortunately, was my experience: no matter what is suggested, this problem is not solved by deleting your custom email accounts hosted by Apple on iCloud. Don’t be convinced this is ever a solution.

Worse, after deleting the email accounts associated with your custom iCloud domain email you can get into a situation where you cannot click the re-verify button on the front end of iCloud’s custom email domain interface. The result is that while you see one thing on the graphical interface—a greyed out option to ‘re-verify’—folks at Apple/server-side do not see the same status. Level 1 and 2 support staff cannot help you at this stage.

As a result, you can (at this point) be in limbo insofar as email cannot be sent or received from your custom domain. Individuals who send you messages will get errors stating that the email identity no longer exists. The only group at Apple who can help you, in this situation, is Apple’s engineering team.

That team apparently does not work weekends.

What does this mean for using custom email domains for iCloud? For many people not a lot: they aren’t moving their hosting around and so it’s very much a ‘set and forget’ situation. However, for anyone who does have an issue, the Apple support staff lacks good documentation to determine where the problem lies and, as a result, can (frankly) waste an inordinate amount of time in trying to figure out what is wrong. I would hasten to note that the final Apple support member I worked with, Derek, was amazing in identifying what the issue was, communicating the challenges facing Apple internally, and taking ownership of the problem: Derek rocks. Apple support needs more people like him.

But, in the absence of being able to hire more Dereks, Apple needs better scripts to help their support staff assist users. Moreover, the fact that Apple lacks a large enough engineering team to also have some people working weekends to solve issues is stunning: yes, hiring is challenging and expensive, but Apple is one of the most profitable companies in the world. Their lack of a true 24/7 support staff is absurd.

What’s the solution if you ever find yourself in this situation, then? Make sure that you’ve done what you can with your new domain settings and then just sit back and wait while Apple tries to figure stuff out. I don’t know how, exactly, Apple fixed this problem on their end, though when it is fixed you’ll get an immediate prompt on your iOS devices that you need to update your custom domain information. It’s quick to take the information provided (which will include a new DKIM record that is unique to your new domain) and then get Apple custom iCloud email working with whoever is managing your DNS records.

Ultimately, I’m glad this was fixed for me but, simultaneously, the ability of most of Apple’s support team to provide assistance was minimal. And it meant that for 3-4 days I was entirely without my primary email address, during a busy work period. I’m very, very disappointed in how this was handled irrespective of things ultimately working once again. At a minimum, Apple needs to update its internal scripts so that their frontline staff know the right questions to ask (e.g., did you recently change your domain’s name servers or other DNS records?) to get stuff moving in the right direction.

Thoughts on Developing My Street Photography

(Dead Ends by Christopher Parsons)

For the past several years I’ve created a ‘best of’ album that collects the best photos I made that year. I use the yearly album to assess how my photography has changed and what, if any, changes are common across those images. The process of making these albums and then printing them forces me to look at my images, see how they work against one another, and better understand what I learned over the course of taking photos for a year.

I have lots of favourite photographs but what I’ve learned the most, at least over the past few years, is to ignore a lot of the information and ‘tips’ that are often shared about street photography. Note that the reason to ignore them is not because they are wrong per se, or that photographers shouldn’t adopt them, but because they don’t work for how I prefer to engage in street photography.

I Don’t Do ‘Stealth’ Photography

Probably the key tip that I generally set to the side is that you should be stealthy, sneaky, or otherwise hidden from the subjects of the photos you capture. It’s pretty common for me to see a scene and wait with my camera to my eye until the right subjects enter the scene and are positioned where I want them in my frame. Sometimes that means that people will avoid me and the scene, and other times they’ll clearly indicate that they don’t want to have their photo taken. In these cases the subject is communicating their preferences quite clearly and I won’t take their photograph. It’s just an ethical line I don’t want to cross.

(Winter Troop by Christopher Parsons)

In yet other instances, my subjects will be looking right at me as they pass through the scene. They’re often somewhat curious. And in many situations they stop and ask me what I’m taking photos of, and then a short conversation follows. In an odd handful of situations they’ve asked me to send along an image I captured of them or a link to my photos; to date, I’ve had pretty few ‘bad’ encounters while shooting on the streets.

I Don’t Imitate Others

I’ve spent a lot of time learning about classic photographers over the past couple years. I’ve been particularly drawn to black and white street photography, in part because I think it often has a timeless character and because it forces me to more carefully think about positioning a subject so they stand out.

(Working Man by Christopher Parsons)

This being said, I don’t think that I’m directly imitating anyone else. I shoot with a set of focal ranges and periodically mix up the device I’m capturing images on; last year, the bulk of my favourite photos came from an intensive two week photography vacation where I forced myself to walk extensively and just use an iPhone 12 Pro. Photos that I’m taking, this year, have largely been with a Fuji X100F and some custom jpg recipes that generally produce results that I appreciate.

Don’t get me wrong: in seeing some of the photos of the greats (and the less great, and less well-known) I draw inspiration from the kinds of images they make, but I don’t think I’ve ever gone out to try and make images like theirs. This differs from when I started taking shots in my city, and when I wanted to make images that looked similar to the ‘popular’ shots I was seeing. I still appreciate those images but they’re not what I want to make these days.

I Create For Myself

While I don’t think that I’m alone in this, the images that I make are principally for myself. I share some of those images but, really, I just want to get out and walk through my environment. I find the process of slowing down to look for instances of interest and beauty helps ground me.

Because I tend to walk within the same 10-15km radius of my home, I have a pretty good sense of how neighbourhoods are changing. I can see my city changing on a week-to-week basis, and feel more in tune with what’s really happening based on my observations. My photography makes me very present in my surroundings.

(Dark Sides by Christopher Parsons)

I also tend to use my walks both to cover new ground and to go into back alleys, behind sheds, and generally into the corners of the city that are less apparent unless you’re looking for them. Much of the time there’s nothing particularly interesting to photograph in those spaces. But, sometimes, something novel or unique emerges.

Change Is Normal

For the past year or so, a large proportion (95% or more) of my images have been black and white. That hasn’t always been the case! But I decided I wanted to lean into this mode of capturing images to develop a particular set of skills and get used to seeing—and visualizing—scenes and subjects monochromatically.

But my focus on black and white images, as well as images that predominantly include human subjects, is relatively new: if I look at my images from just a few years ago there was a lot of colour and stark, or empty, cityscapes. I don’t dislike those images and, in fact, several remain amongst my favourite images I’ve made to date. But I also don’t want to be constrained by one way of looking at the world. The world is too multifaceted, and there are too many ways of imagining it, to be stuck permanently in one way of capturing it.

(Alley Figures by Christopher Parsons)

This said, over time, I’d like to imagine I might develop a way of seeing the world and capturing images that provides a common visual language across my images. Though if that never happens I’m ok with that, so long as the very practice of photography continues to provide the dividends of better understanding my surroundings and feeling in tune with wherever I’m living at the time.

Mitigating AI-Based Harms in National Security

Photo by Pixabay on Pexels.com

Government agencies throughout Canada are investigating how they might adopt and deploy ‘artificial intelligence’ programs to enhance how they provide services. In the case of national security and law enforcement agencies, these programs might be used to analyze and exploit datasets, surface threats, identify risky travellers, or automatically respond to criminal or threat activities.

However, the predictive software systems that are being deployed–‘artificial intelligence’–are routinely shown to be biased. These biases are serious in the commercial sphere but there, at least, it is somewhat possible for researchers to detect and surface them. In the secretive domain of national security, however, the likelihood of bias in agencies’ software being detected or surfaced by non-government parties is considerably lower.

I know that organizations such as the Canadian Security Intelligence Service (CSIS) have an interest in understanding how to use big data in ways that mitigate bias. The Canadian government does have a policy on the “Responsible use of artificial intelligence (AI)” and, at the municipal policing level, the Toronto Police Service has also published a policy on its use of artificial intelligence. Furthermore, the Office of the Privacy Commissioner of Canada has published a proposed regulatory framework for AI as part of potential reforms to federal privacy law.

Timnit Gebru, in conversation with Julia Angwin, suggests that there should be ‘datasheets for algorithms’ that would outline how predictive software systems have been tested for bias in different use cases prior to being deployed. Linking this to traditional circuit-based datasheets, she says (emphasis added):

As a circuit designer, you design certain components into your system, and these components are really idealized tools that you learn about in school that are always supposed to work perfectly. Of course, that’s not how they work in real life.

To account for this, there are standards that say, “You can use this component for railroads, because of x, y, and z,” and “You cannot use this component for life support systems, because it has all these qualities we’ve tested.” Before you design something into your system, you look at what’s called a datasheet for the component to inform your decision. In the world of AI, there is no information on what testing or auditing you did. You build the model and you just send it out into the world. This paper proposed that datasheets be published alongside datasets. The sheets are intended to help people make an informed decision about whether that dataset would work for a specific use case. There was also a follow-up paper called Model Cards for Model Reporting that I wrote with Meg Mitchell, my former co-lead at Google, which proposed that when you design a model, you need to specify the different tests you’ve conducted and the characteristics it has.

What I’ve realized is that when you’re in an institution, and you’re recommending that instead of hiring one person, you need five people to create the model card and the datasheet, and instead of putting out a product in a month, you should actually do it in three years, it’s not going to happen. I can write all the papers I want, but it’s just not going to happen. I’m constantly grappling with the incentive structure of this industry. We can write all the papers we want, but if we don’t change the incentives of the tech industry, nothing is going to change. That is why we need regulation.

Government is one of those areas where regulation or law can work well to discipline its behaviours, and where the relatively large volume of resources combined with a law-abiding bureaucracy might mean that formally required assessments would actually be conducted. While such assessments matter, generally, they are of particular importance where state agencies might be involved in making decisions that significantly or permanently alter the life chances of residents of Canada, visitors who are passing through our borders, or foreign nationals who are interacting with our government agencies.

As it stands, today, many Canadian government efforts at the federal, provincial, or municipal level seem to be significantly focused on how predictive software might be used or the effects it may have. These are important things to attend to! But it is just as, if not more, important for agencies to undertake baseline assessments of how and when different predictive software engines are permissible or not, based on robust testing and evaluation of their features and flaws.
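
To make the datasheet/model card idea concrete, here is a minimal sketch of what a machine-readable model card might look like for a hypothetical screening system. Every field name, value, and threshold is my own illustrative assumption; Gebru and Mitchell’s papers define reporting categories, not a code schema.

```python
# An illustrative sketch, in the spirit of 'Model Cards for Model
# Reporting'. All field names, values, and thresholds here are
# hypothetical assumptions for illustration, not a real schema.
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    model_name: str
    intended_uses: list[str]        # what the model was built and tested for
    out_of_scope_uses: list[str]    # uses the developers advise against
    evaluation_data: str            # what the model was evaluated on
    error_rates_by_group: dict[str, float]  # disaggregated performance
    known_limitations: list[str] = field(default_factory=list)

card = ModelCard(
    model_name="traveller-screening-v1",
    intended_uses=["flagging files for secondary human review"],
    out_of_scope_uses=["automated denial decisions without human review"],
    evaluation_data="held-out historical records, 2019-2021",
    error_rates_by_group={"group_a": 0.04, "group_b": 0.11},
    known_limitations=["error rates differ markedly across groups"],
)

# A procurement or deployment rule can then gate on the card's contents,
# e.g. refusing systems whose disaggregated error rates diverge too widely.
rates = card.error_rates_by_group.values()
gap = max(rates) - min(rates)
print("deployable" if gap < 0.05 else "rejected: bias gap exceeds policy threshold")
```

The point is less the specific fields than that an agency could not lawfully adopt a system whose card was missing or whose tests failed: the assessment becomes a precondition rather than an afterthought.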

Having spoken with people at different levels of government, the recurring complaint around assessing training data, and predictive software systems more generally, is that it’s hard to hire the right people for these assessment jobs because such specialists are relatively rare and often exceedingly expensive. Thus, mid-level and senior members of government have a tendency to focus on things that government is perceived as actually able to do: figure out and track how predictive systems would be used and to what effect.

However, the regular focus on the resource-related challenges of predictive software assessment raises the very real question of whether these constraints should simply compel agencies to forgo technologies whose prospective harms they cannot determine and assess. In the firearms space, as an example, government agencies are extremely rigorous in assessing how a weapon operates to ensure that it functions precisely as intended, given that the weapon might be used in life-changing scenarios. Such assessments require significant sums of money from agency budgets.

If we can make significant budgetary allocations for firearms, on the grounds they can have life-altering consequences for all involved in their use, then why can’t we do the same for predictive software systems? If anything, such allocations would compel agencies to make a strong(er) business case for testing the predictive systems in question and spur further accountability: Does the system work? At a reasonable cost? With acceptable outcomes?

Imposing cost discipline on organizations is an important way of ensuring that technologies, and other business processes, aren’t randomly adopted on the basis of externalizing their full costs. By internalizing those costs, up front, organizations may need to be much more careful in what they choose to adopt, when, and for what purpose. The outcome of this introspection and assessment would, hopefully, be that the harmful effects of predictive software systems in the national security space were mitigated and the systems which were adopted actually fulfilled the purposes they were acquired to address.

Cyber Attacks Versus Operations in Ukraine

Photo by cottonbro on Pexels.com

For the past decade there has been a steady drumbeat that ‘cyberwar is coming’. Sometimes the parties holding these positions are in militaries; in other cases, they come from think tanks or university departments that are trying to link kinetic-adjacent computer operations with ‘war’.

Perhaps the most famous rebuttal to the cyberwar proponents has been Thomas Rid’s Cyber War Will Not Take Place. The title was meant to be provocative and almost has the effect of concealing a core insight of Rid’s argument: cyber operations will continue to be associated with conflicts, but they are unlikely to constitute (or lead to) out-and-out war on their own. Why? Because it is very challenging to prepare and launch cyber operations that have significant kinetic results at the scale we associate with full-on war.

Since the start of the Russian Federation’s war of aggression towards Ukraine there have regularly been shocked assertions that cyberwar isn’t taking place. A series of pieces by The Economist, as an example, sought to prepare readers for a cyberwar that just hasn’t happened. Why not? Because The Economist–much like other outlets!–often presumed that the cyber dimensions of the conflict in Ukraine would bear at least some resemblance to the long-maligned concept of a ‘cyber Pearl Harbour’: a critical cyber-enabled strike of some sort would have a serious, and potentially devastating, effect on how Ukraine could defend against Russian aggression and thus tilt the balance towards Russian military victory.

As a result of these early mistaken understandings, scholars and experts have once more come out and explained why cyber operations are not the same as an imagined cyber Pearl Harbour situation, even as such operations continue to take place in the Ukrainian conflict. Simultaneously, security and malware researchers have taken the opportunity to belittle International Relations theorists who have written about cyberwar, arguing that these theorists have fundamentally misunderstood how cyber operations take place.

Part of the challenge is that ‘cyberwar’ has often been popularly seen as the equivalent of hundreds of thousands of soldiers and their associated military hardware being deployed into a foreign country. As noted by Rid in a recent op-ed, while some cyber operations are meant to be apparent others are much more subtle. The former might be meant to reduce the will to fight or diminish command and control capabilities. The latter, in contrast, will look a lot like other reconnaissance operations: knowing who is commanding which battle group, the logistical challenges facing the opponent, or the state of infrastructure in-country. All these latter dimensions provide strategic and tactical advantages to the party that has launched the surveillance operation. Operations meant to degrade capabilities may occur but will often be more subtle. This subtlety can be a particularly severe risk in a conflict, such as if your ammunition convoy is sent to the wrong place or train timetables are thrown off with the effect of stymying civilian evacuation or resupply operations.1

What’s often seemingly lost in the ‘cyberwar’ debates–which tend to take place among people who don’t understand cyber operations, those who stand to profit from misrepresentations of them, or those who are so theoretical in their approaches as to be ignorant of reality–is that contemporary wars entail blended forces. Different elements of those blends have unique and specific tactical and strategic purposes. Cyber isn’t going to have the same effect as a Grad missile launcher or a T-90 battle tank, but that missile launcher or tank isn’t going to know that the target it’s pointed towards is strategically valuable without reconnaissance, nor is it able to impair logistics flows the same way as a cyber operation targeting train schedules. To expect otherwise is to grossly misunderstand how cyber operations function in a conflict environment.

I’d like to imagine that one result of the Russian war of aggression will be to improve the general population’s understanding of cyber operations and what they entail, and do not entail. It’s possible that this might happen given that major news outlets, such as the AP and Reuters, are changing how they refer to such activities: they will not be called ‘cyberattacks’ outside very nuanced situations now. In simply changing what we call cyber activities–as operations as opposed to attacks–we’ll hopefully see a deflating of the language and, with it, more careful understandings of how cyber operations take place in and out of conflict situations. As such, there’s a chance (hope?) we might see a better appreciation of the significance of cyber operations in the population writ large in the coming years. This will be increasingly important given the sheer volume of successful (non-conflict) operations that take place each day.


  1. It’s worth recognizing that part of why we aren’t reading about successful Russian operations is, first, due to Ukrainian and allied efforts to suppress such successes for fear of reducing Ukrainian/allied morale. Second, however, is that Western signals intelligence agencies such as the NSA, CSE, and GCHQ are all very active in providing remote defensive and other operational services to Ukrainian forces. There was also a significant effort ahead of the conflict to shore up Ukrainian defences, and there continues to be a strong effort by Western companies to enhance the security of systems used by Ukrainians. Combined, this means that Ukraine is enjoying additional ‘forces’ while, simultaneously, generally keeping quiet about its own failures to protect its systems or infrastructure. ↩︎

Russia, Nokia, and SORM

Photo by Mati Mango on Pexels.com

The New York Times recently wrote about Nokia providing telecommunications equipment to Russian ISPs, all while Nokia was intimately aware of how its equipment would be interconnected with System for Operative Investigative Activities (SORM) lawful interception equipment. SORM equipment has existed in numerous versions since the 1990s. Per James Lewis:

SORM-1 collects mobile and landline telephone calls. SORM-2 collects internet traffic. SORM-3 collects from all media (including Wi-Fi and social networks) and stores data for three years. Russian law requires all internet service providers to install an FSB monitoring device (called “Punkt Upravlenia”) on their networks that allows the direct collection of traffic without the knowledge or cooperation of the service provider. The providers must pay for the device and the cost of installation.

SORM is part of a broader Internet and telecommunications surveillance and censorship regime that has been established by the Russian government. Moreover, other countries in the region use iterations or variations of the SORM system (e.g., Kazakhstan) as well as countries which were previously invaded by the Soviet Union (e.g., Afghanistan).

The Times’ article somewhat breathlessly states that the documents they obtained, and which span 2008-2017,

show in previously unreported detail that Nokia knew it was enabling a Russian surveillance system. The work was essential for Nokia to do business in Russia, where it had become a top supplier of equipment and services to various telecommunications customers to help their networks function. The business yielded hundreds of millions of dollars in annual revenue, even as Mr. Putin became more belligerent abroad and more controlling at home.

It is not surprising that Nokia, as part of doing business in Russia, was complying with lawful interception laws insofar as its products were compatible with SORM equipment. Frankly it would have been surprising if Nokia had flouted the law given that Nokia’s own policy concerning human rights asserts that (.pdf):

Nokia will provide passive lawful interception capabilities to customers who have a legal obligation to provide such capabilities. This means we will provide products that meet agreed standards for lawful intercept capabilities as defined by recognized standards bodies such as the 3rd Generation Partner Project (3GPP) and the European Telecoms Standards Institute (ETSI). We will not, however, engage in any activity relating to active lawful interception technologies, such as storing, post-processing or analyzing of intercepted data gathered by the network operator.

It was somewhat curious that the Times’ article declined to recognize that Nokia-Siemens has a long history of doing business in repressive countries: it allegedly sold mobile lawful interception equipment to Iran circa 2009 and in 2010-11 its lawful interception equipment was implicated in political repression and torture in Bahrain. Put differently, Nokia’s involvement in low rule-of-law countries is not new and, if anything, their actions in Russia appear to be a mild improvement on their historical approaches to enabling repressive governments to exercise lawful interception functionalities.

The broad question is whether Western companies should be authorized or permitted to do business in repressive countries. To some extent, we might hope that businesses themselves would express restraint. But, beyond this, companies such as Nokia often require some kind of export license or approval before they can sell certain telecommunications equipment to various repressive governments. This is particularly true when it comes to supplying lawful interception functionality (which was not the case when Nokia sold equipment to Russia).

While the New York Times casts a light on Nokia, the article does not:

  1. Assess the robustness of Nokia’s alleged human rights commitments–have they changed since 2013 when they were first examined by civil society? How do Nokia’s sales comport with their 2019 human rights policy? Just how flimsy is the human rights policy in its own right?
  2. Assess the export controls that Nokia was(n’t) under–is it the case that the Finnish government has some liability or responsibility for the sales of Nokia’s telecommunications equipment? Should there be?
  3. Assess the activities of the telecommunications provider Nokia was supplying in Russia, MTS, and whether there is a broader issue of Nokia supplying equipment to MTS since it operates in various repressive countries.

None of this is meant to set aside the fact that Western companies ought to behave better on the international stage. But…this has not been a priority in Russia, at least, until the country’s recent war of aggression. Warning signs were clearly on display before this war and didn’t result in prominent and public recriminations towards Nokia or other Western companies doing business in Russia.

All lawful interception systems, regardless of whether they conform with North American, European, or Russian standards, are surveillance systems. Put another way, they are all about empowering one group to exercise influence or power over others who are unaware they are being watched. In low rule-of-law countries, such as Russia, there is a real question as to whether they should even be called ‘lawful interception systems’ as opposed to explicitly calling them ‘interception systems’.

There was a real opportunity for the New York Times to both better contextualize Nokia’s involvement in Russia and, then, to explain and problematize the nature of lawful interception capabilities and standards. The authors could also have spent time discussing the nature of export controls on telecommunications equipment where the equipment is being sold into repressive states. Sadly this did not occur, with the result that the authors and paper declined to more broadly consider and report on the workings, and the ethics and politics, of enabling telecommunications and lawful interception systems in repressive and non-repressive states alike. While other kicks at this can will arise, it’s evident that there wasn’t even an attempt to do so in this report on Nokia.

Policing the Location Industry

Photo by Ingo Joseph on Pexels.com

The Markup has a comprehensive and disturbing article on how location information is acquired by third parties despite efforts by Apple and Google to restrict the availability of this information. In the past, it was common for third parties to provide SDKs to application developers. The SDKs would inconspicuously transfer location information to those third parties while also enabling functionality for application developers. With restrictions being put in place by platforms such as Apple and Google, however, it’s now becoming common for application developers to initiate requests for location information themselves and then share it directly with third-party data collectors.

While such activities often violate the terms of service and policy agreements between platforms and application developers, it can be challenging for the platforms to actually detect these violations and subsequently enforce their rules.

Broadly, the issues at play represent significant governmental regulatory failures. The fact that government agencies often benefit from the secretive collection of individuals’ location information makes it that much harder for governments to muster the will to discipline the covert collection of personal data by third parties: if the government cuts off the flow of location information, it will impede the ability of governments themselves to obtain this information.

In some cases intelligence and security services obtain location information from third parties. This sometimes occurs in situations where the services themselves are legally barred from directly collecting this information. Companies selling mobility information can let government agencies do an end-run around the law.

One of the results is that efforts to limit data collectors’ ability to capture personal information often see parts of government push for carve-outs to collecting, selling, and using location information. In Canada, as an example, the government has adopted a legal position that it can collect locational information so long as it is de-identified or anonymized,1 and for the security and intelligence services there are laws on the books that permit the collection of commercially available open source information. This open source information does not need to be anonymized prior to acquisition.2 Lest you think that it sounds paranoid that intelligence services might be interested in location information, consider that American agencies collected bulk location information pertaining to Muslims from third-party location information data brokers and that the Five Eyes historically targeted popular applications such as Google Maps and Angry Birds to obtain location information as well as other metadata and content. As the former head of the NSA announced several years ago, “We kill people based on metadata.”

Any argument made by either private or public organizations that anonymization or de-identification of location information makes it acceptable to collect, use, or disclose generally relies on tricking customers and citizens. Why is this? Because even when location information is aggregated and ‘anonymized’ it might subsequently be re-identified. And in situations where that reversal doesn’t occur, policy decisions can still be made based on the aggregated information. The process of deriving these insights and applying them showcases that while privacy is an important right to protect, it is not the only right that is implicated in the collection and use of locational information. Indeed, it is important to assess the proportionality and necessity of the collection and use, as well as how the associated activities affect individuals’ and communities’ equity and autonomy in society. Doing anything less is merely privacy-washing.
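
A toy sketch makes the re-identification point concrete. The traces below are fabricated for illustration, but the mechanism is the well-documented one: strip the names from location data and a couple of externally known points (say, a home and a workplace) will often still single out exactly one record.

```python
# Fabricated, illustrative 'anonymized' traces: pseudonymous IDs mapped
# to sets of (coarse location cell, time bucket) observations.
traces = {
    "pseudonym_001": {("home_cell_17", "08:00"), ("office_cell_42", "09:00")},
    "pseudonym_002": {("home_cell_17", "08:00"), ("gym_cell_05", "09:00")},
    "pseudonym_003": {("home_cell_88", "08:00"), ("office_cell_42", "09:00")},
}

# An observer who already knows two facts about their target -- leaves
# home_cell_17 around 08:00, arrives at office_cell_42 around 09:00 --
# needs no name to find them in the 'anonymized' dataset:
known_points = {("home_cell_17", "08:00"), ("office_cell_42", "09:00")}
matches = [pid for pid, points in traces.items() if known_points <= points]

print(matches)  # ['pseudonym_001'] -- a unique match, i.e. re-identification
```

Real mobility datasets are vastly larger, but the uniqueness of human movement patterns means that surprisingly few such points are typically needed to isolate an individual.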

Throughout discussions about data collection, including as it pertains to location information, public agencies and companies alike tend to provide a pair of arguments against changing the status quo. First, they assert that consent isn’t really possible anymore given the volumes of data which are collected on a daily basis from individuals; individuals would be overwhelmed with consent requests! Thus we can’t make the requests in the first place! Second, that we can’t regulate the collection of this data because doing so risks impeding innovation in the data economy.

If those arguments sound familiar, they should. They’re very similar to the plays made by industry groups whose activities have historically had negative environmental consequences. These groups regularly assert that, after decades of poor or middling environmental regulation, any new, stronger regulations would unduly impede the existing dirty economy for power, services, goods, and so forth. Moreover, the dirty way of creating power, services, and goods is just how things are and thus should remain the same.

In both the privacy and environmental worlds, corporate actors (and those whom they sell data/goods to) have benefitted from not having to pay the full cost of acquiring data without meaningful consent or accounting for the environmental cost of their activities. But, just as we demand stronger environmental regulations to address the harms industry causes to the environment, we should demand and expect the same when it comes to the personal data economy.

If a business is predicated on sneaking away personal information from individuals then it is clearly not particularly interested or invested in being ethical towards consumers. It’s imperative to continue pushing legislators to not just recognize that such practices are unethical, but to make them illegal as well. Doing so will require being heard over the cries of government agencies that have vested interests in obtaining location information in ways that skirt the laws that might normally discipline such collection, as well as companies that have grown as a result of their unethical data collection practices. While this will not be an easy task, it’s increasingly important given the limits of platforms to regulate the sneaky collection of this information and the increasingly problematic ways our personal data can be weaponized against us.


  1. “PHAC advised that since the information had been de-identified and aggregated, it believed the activity did not engage the Privacy Act as it was not collecting or using ‘personal information’.” ↩︎
  2. See, as example, Section 23 of the CSE Act ↩︎

Improving My Photography In 2021

(Climbing Gear by Christopher Parsons)

I’ve spent a lot of personal time behind my cameras throughout 2021 and have taken a bunch of shots that I really like. At the same time, I’ve invested a lot of personal time learning more about the history of photography and how to accomplish things with my cameras. Below, in no particular order, is a list of the ways I worked to improve my photography in 2021.

Fuji Recipes

I started looking at different ‘recipes’ that I could use for my Fuji x100f, starting with those at Fuji X Weekly and some YouTube channels. I’ve since started playing around with my own black and white recipes to get a better sense of what works for making my own images. The goal in all of this is to create jpgs that are ‘done’ in body and require an absolute minimum amount of adjustment. It’s very much a work in progress, but I’ve gotten to the point that most of my photos only receive minor crops, as opposed to extensive edits in Darkroom.

Comfort in Street Photography

The first real memory I have of ‘doing’ street photography was being confronted by a bus driver after I took his photo. I was scared off of taking pictures of other people for years as a result.

Over the past year, however, I’ve gotten more comfortable by watching a lot of POV-style YouTube videos of how other street photographers go about making their images. I don’t have anyone else to go and shoot with, and learn from, so these videos have been essential to my learning process. In particular, I’ve learned a lot from watching and listening to Faizal Westcott, the folks over at Framelines, Joe Allan, Mattias Burling, and Samuel Lintaro Hopf.

Moreover, just seeing the photos that other photographers are making and how they move in the street has helped to validate that what I’m doing, when I go out, definitely fits within the broader genre of street photography.

Histories of Photography

In the latter three months of 2021 I spent an enormous amount of time watching videos from the Art of Photography, Tatiana Hopper, and a bit from Sean Tucker. The result is that I’m developing a better sense of what you can do with a camera as well as why certain images are iconic or meaningful.

Pocket Camera Investment

I really love my Fuji x100f and always have my iPhone 12 Pro in my pocket. Both are terrific cameras. However, I wanted something that was smaller than the Fuji and more tactile than the iPhone, and which I could always have in a jacket pocket.

To that end, in late 2021 I purchased a very lightly used Ricoh GR. While I haven’t used it enough to offer a full review of it, I have taken a lot of photos with it that I really, really like. More than anything else I’m taking more photos since buying it because I always have a good, very tactile, camera with me wherever I go.

Getting Off Instagram

I’m not a particularly big fan of Instagram these days given Facebook’s unwillingness or inability to moderate its platform, as well as Instagram’s constant addition of advertisements and short video clips. So since October 2021 I’ve been posting my photos almost exclusively to Glass and (admittedly to a lesser extent) to this website.

Not only is the interface for posting to Glass a lot better than the one for Instagram (and Flickr, as well), but the comments I get on my photos on Glass are better than anywhere else I’ve ever posted my images. Admittedly Glass still has some growing pains but I’m excited to see how it develops in the coming year.

Book Review: Blockchain Chicken Farm And Other Stories of Tech in China’s Countryside (2020) ⭐️⭐️⭐️

Xiaowei Wang’s book, Blockchain Chicken Farm And Other Stories of Tech in China’s Countryside, presents a nuanced and detailed account of the lived reality of many people in China through the lenses of history, culture, and emerging technologies. She makes clear through her writing that China is undergoing a massive shift through efforts to digitize the economy and society (and especially rural economies and societies) while also effectively communicating why so many of these initiatives are being undertaken.

From exploring the relationship between a fraught cold chain and organic chicken, to attempts to revitalize rural villages by turning them into platform manufacturing towns, to thinking through and reflecting on the state of contemporary capitalistic performativity in rural China and the USA alike, we see how technologies are being used to try and ‘solve’ challenges while often simultaneously undermining and endangering the societies within which they are embedded. Wang is careful to ensure that a reader leaves with an understanding of the positive attributes of how technologies are applied while, at the same time, making clear how they do not remedy—and, in fact, often reify or exacerbate—unequal power relationships. Indeed, many of the positive elements of technologies, from the perspective of empowering rural citizens or improving their earning powers, are either being negatively impacted by larger capitalistic actors or by the technology companies whose platforms many of these so-called improvements operate upon.

Wang’s book, in its conclusion, recognizes that we need to enhance and improve upon the cultural spaces we operate and live within if we are to create a new or reformed politics that is more responsive to the specific needs of individuals and their communities. Put differently, we must tend to the dynamism of the Lifeworld if we are to modify the conditions of the System that surrounds, and unrelentingly colonizes, the Lifeworld. 

Her wistful ending—that such efforts of (re)generation are all that we can do—speaks both to a hope and to an almost-resignation that (re)forming the systems we operate in can only take place if we manage to avoid being distracted by the bauble or technology that is dangled in front of us to distract us from the existential crises facing our societies and humanity writ large. As such, it concludes very much in the spirit of our times: with hope for the future but a fearful resignation that, despite our best efforts, we may be too late to succeed. But, what else can we do?