The Roundup for May 12-18, 2018 Edition

Soar by Christopher Parsons

It’s become incredibly popular to attribute the activities undertaken by the Facebooks and Googles of the world to ‘surveillance capitalism’. This concept generally asserts that the current dominant mode of economics has become reliant on surveillance to drive economic growth. Surveillance, specifically, is defined as the act of watching or monitoring activity with the intent of using captured information to influence behaviour. In the world of the Internet, this information tends to be used to influence purchasing behaviours.

The issue that I have with the term surveillance capitalism is that I’m uncertain whether it comprehensively captures the activities associated with the data-driven economy. Surveillance Studies scholars tend to apply the same theories used to understand CCTV to practices such as machine learning; in both cases, the technologies are understood as establishing feedback loops to influence an individual or an entire population. But, just as often, neither CCTV nor machine learning actually has a person- or community-related feedback loop. CCTV cameras are often not attended to, not functional, or don’t provide sufficient information to take action against those being recorded. Nor do individuals necessarily modify their own behaviours in the presence of such cameras. Similarly, machine learning algorithms may not be used to influence all persons: in some cases, individuals may be sufficiently outside the scope of whatever the algorithm is intended to do that they are not affected. And, as with CCTV, individuals may not modify their own behaviours when machine learning algorithms operate on the data they generate, simply because they are unaware that any such processing is taking place.

So, whereas surveillance capitalism depends on a feedback loop that is directly applied to individuals within a particular economic framework, there may be instances where data is collected and monetized without any clear or necessary effort to influence individuals. Such situations could include those where a machine learning algorithm is designed to improve a facial recognition system, to improve battery life based on the activities undertaken by a user, or to otherwise very quietly make tools more effective without a clear attempt to modify user behaviour. I think that such activities may be very clearly linked to monetization and, more broadly, to an ideology backed by capitalism. But I’m not sure it’s surveillance as it’s rigorously defined by scholars.

So one of the things that I keep thinking about is whether we should shift away from the increasingly broad use of ‘surveillance capitalism’ and instead talk, more broadly, about ‘data capitalism’. I’m not suggesting doing away with the term surveillance capitalism but, rather, that surveillance capitalism is a sub-genus of data capitalism. Data capitalism would, I believe, better capture the ways in which information is collected, analyzed, and used to effect socio-technical changes. Further, I think such a term might also capture instances where those changes are arguably linked to capitalist aims (i.e. enhancing profitability) but are less obviously linked to the feedback loops directed at individuals that are associated with surveillance itself.


After approximately twenty months of work, my colleagues and I have published an extensive report on encryption policies in Canada. It’s a major accomplishment for all of us to have finally concluded the work, and we’re excited by the positive feedback we’ve received about it.


Inspiring Quotation of the Week

“Ambition is a noble passion which may legitimately take many forms… but the noblest ambition is that of leaving behind something of permanent value.”

– G.H. Hardy

Great Photography Shots

Some of these storm chaser photos are practically otherworldly.

Music I’m Digging

Neat Podcast Episodes

Good Reads for the Week

Cool Things

Link

MPs consider contempt charges for Canadian company linked to Cambridge Analytica after raucous committee meeting

Aggregate IQ executives came to answer questions before a Canadian parliamentary committee. Then they had the misfortune of dealing with a well-connected British Information Commissioner, Elizabeth Denham:

At Tuesday’s committee meeting, MPs pressed Silvester and Massingham on their company’s work during the Brexit referendum, for which they are currently under investigation in the UK over possible violations of campaign spending limits. Under questioning from Liberal MP Nathaniel Erskine-Smith, Silvester and Massingham insisted they had fully cooperated with the UK information commissioner Elizabeth Denham. But as another committee member, Liberal MP Frank Baylis, took over the questioning, Erskine-Smith received a text message on his phone from Denham which contradicted the pair’s testimony.

Erskine-Smith handed his phone to Baylis, who read the text aloud.  “AIQ refused to answer her specific questions relating to data usage during the referendum campaign, to the point that the UK is considering taking further legal action to secure the information she needs,” Denham’s message said.

Silvester replied that he had been truthful in all his answers and said he would be keen to follow up with Denham if she had more questions.

It’s definitely a bold move to inform parliamentarians, operating in a friendly but foreign jurisdiction, that they’re being misled by one of their witnesses. So long as such communications don’t overstep boundaries — such as enabling a government official to engage in a public witch hunt of a given person or group — these sorts of communications seem essential when dealing with groups which have spread themselves across multiple jurisdictions and are demonstrably behaving untruthfully.

Link

In western China, thought police instill fear

From the Associated Press:

Southern Xinjiang, where Korla is located, is one of the most heavily policed places on earth.

In Hotan, police depots with flashing lights and foot patrols are set up every 500 meters. Motorcades of more than 40 armored vehicles rumble down city boulevards. Police checkpoints on every other block stop cars to check identification and smartphones for religious content.

Xinjiang’s published budget data shows public security spending this year is on track to increase 50 percent from 2016 to roughly 45 billion yuan ($6.8 billion) after rising 40 percent a year ago. It’s quadrupled since 2009, when a Uighur riot broke out in Urumqi, killing nearly 200 people.

But much of the policing goes unseen.

Shoppers entering the Hotan bazaar must pass through metal detectors and place their national identification cards on a reader while having their faces scanned. AP reporters were stopped outside a hotel by a police officer who said the public security bureau had been remotely tracking the reporters’ movements by watching surveillance camera footage.

The government’s tracking efforts have extended to vehicles, genes and even voices. A biometric data collection program appears to have been formalized last year under “Document No. 44,” a regional public security directive to “comprehensively collect three-dimensional portraits, voiceprints, DNA and fingerprints.” The document’s full text remains secret, but the AP found at least three contracts referring to the 2016 directive in recent purchase orders for equipment such as microphones and voice analyzers.

The extent of the technical and human surveillance, and the punishments that are meted out for failing to adequately monitor family members and friends, is horrifying.1 And while the surveillance undertaken in this area of China is particularly severe, the kinds of monitoring that occur in China are more extensive and ever-present throughout the country than many people who haven’t travelled there can appreciate. The Chinese surveillance infrastructure is the kind of apparatus that exists to sustain itself, first and foremost, by ensuring that contrary ideologies and philosophies are threatened and, where possible, rendered impotent by way of threats and fear.

  1. While much of the contemporary surveillance is now provided by Chinese-based companies, it’s worth remembering that, historically, this equipment was sold by Western companies.

Link

How to protect yourself (and your phone) from surveillance

I understand what the person interviewed for this article is suggesting: smartphones are incredibly good at conducting surveillance of where a person is, whom they speak with, etc. But proposing that people do the following (in order) can be problematic:

  1. Leave their phones at home when meeting certain people (such as when journalists are going somewhere to speak with sensitive sources);
  2. Turn off geolocation, Bluetooth, and Wi-Fi;
  3. Disable the ability to receive phone calls by setting the phone to Airplane mode;
  4. Use strong and unique passwords;
  5. Carefully evaluate whether or not to use fingerprint unlocks.

Number 1 is something that investigative journalists already do today when they believe that a high level of source confidentiality is required. I know this from working with, and speaking to, journalists over many years. The problem arises when those journalists are doing ‘routine’ things that they do not regard as particularly sensitive: how, exactly, is a journalist (or any other member of society) to know what a government agency has come to regard as sensitive or suspicious? And how can a reporter – who is often running several stories simultaneously, and perhaps needs to be near their phone for other kinds of stories they’re working on – just choose to abandon their phone elsewhere on a regular basis?

Number 2 makes some sense, especially if you: a) aren’t going to be using any connected services (e.g. maps to get to where you’re going); b) aren’t using attached devices (e.g. Bluetooth headphones, fitness trackers); and c) don’t need quick geolocation services. But a lot of the population does need those different kinds of services, and for them leaving those connectivity modes ‘on’ makes a lot of sense.

Number 3 makes sense as long as you don’t want to receive any phone calls. So, if you’re a journalist, so long as you never, ever, expect someone to just contact you with a tip (or you’re comfortable with that tip going to another journalist when your phone isn’t available), then that’s great. While a lot of calls are scheduled, that certainly isn’t always the case.

Number 4 is generally a good idea. I can’t think of any issues with it, though a password manager is a great idea if you’re going to maintain a lot of strong and unique passwords. Preferably, choose a manager that isn’t tied to any particular operating system, so that you can move between different phone and computer manufacturers.
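To make the “strong and unique” part of Number 4 concrete, here is a minimal sketch of how such passwords could be generated, using Python’s standard-library secrets module. The length, character set, and site names are my own illustrative choices, not something drawn from the article:

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Return a random password drawn from letters, digits, and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# Generate a distinct password per service, so that a breach of one
# account never exposes another (the point of "unique" passwords).
passwords = {site: generate_password() for site in ("mail", "bank", "news")}
```

In practice a password manager does exactly this kind of generation and storage for you; the sketch just shows why uniqueness per service matters.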

Number 5 is…complicated. Fingerprint readers facilitate the use of strong passwords, but they can also be used to unlock a device simply by pressing your finger to it. And if you add multiple people to the list of those who can decrypt the device, then you’re dealing with additional (in)security vectors. For most people, though, the concern is that their phone is stolen, or accessed by someone with physical access to the device. Against those threat models, a fingerprint reader paired with a longer password is a good idea.

Link

How a Facial Recognition Mismatch Can Ruin Your Life

Via The Intercept:

“As an analytical scientist, whenever someone gives me absolute certainty, my red flag goes up,” said Jason Latham, who worked as a biochemist prior to becoming a forensic scientist and certified video examiner. “When I came from analytical sciences to forensic sciences, I was like some of these guys are not scientists. They are voodoo witchcraft.”

Forensic reports generally provide few details about the methods they use to arrive at points of similarity. But in Talley’s case, the FBI examiner’s report displayed a high degree of certainty. George Reis, a facial examiner who has testified more than 50 times for state, federal, and military courts throughout the country on forensic visual comparisons, pointed out that the report on Talley’s case was vague. “It is generally considered best practice to be specific in reports and to point out features of similarity, as well as differences, in any comparison illustration or chart,” Reis noted. “In the Talley case no such markings exist. The video frames that were used in the FBI illustration were of poor quality and limited value.”

Facial recognition: sorta fun if you’re using it for commercial stuff like tagging your friends, but really dangerous if it’s part of what is used to convict persons for crimes they’re alleged to have committed.

Link

RCMP is overstating Canada’s ‘surveillance lag’ | Toronto Star

From a piece that I wrote with Tamir Israel for the Toronto Star:

The RCMP has been lobbying the government behind the scenes for increased surveillance powers on the faulty premise that their investigative powers are lagging behind those of foreign police services.

The centrepiece of the RCMP’s pitch is captured in an infographic that purports to show foreign governments are legislating powers that are more responsive to investigative challenges posed by the digital world. On the basis of this comparison, the RCMP appears to have convinced the federal government to transform a process intended to curb the excesses of Bill C-51 into one dominated by proposals for additional surveillance powers.

The RCMP’s lobbying effort misleadingly leaves an impression that Canadian law enforcement efforts are being confounded by digital activities.

This op-ed, which I published with my colleague Tamir Israel earlier this week, calls out the RCMP for deliberately misleading the public with regards to government agencies’ existing surveillance powers and capabilities.