The Globe and Mail has a terrific photographic series entitled "A century caught on camera." As a Toronto resident I was struck by just how many traditions, rituals, and grievances have stuck with the city–or in the city–for over a century.
Further, the way in which images have been captured has changed substantially over time, both because of the technical capacity of camera equipment and because of the interests and preferences of photographers in different eras. Images from the past decade or two, as an example, clearly draw more commonly on celebrity or artistic portraiture than those from 50 years ago. Moreover, it’s pretty impressive just how much photographers have done with their equipment over the past century, and this generally speaks to how easy street and documentary photographers have it today compared to their predecessors, who were working with slow lenses and film.
It may take you quite a while to get through all the images but I found the process to be exceedingly worthwhile. Though I admit that the first decade during which the Globe used colour images probably ranks as my least favourite period in the galleries that the paper has published.
ProPublica, which is typically known for its excellent journalism, published a particularly terrible piece earlier this week that fundamentally miscast how encryption works and how Facebook, by way of WhatsApp, works to keep communications secure. The article, “How Facebook Undermines Privacy Protections for Its 2 Billion WhatsApp Users,” focuses on two so-called problems.
The So-Called Privacy Problems with WhatsApp
First, the authors explain that WhatsApp has a system whereby recipients of messages can report content they have received to WhatsApp on the basis that it is abusive or otherwise violates WhatsApp’s Terms of Service. The article frames this reporting process as undermining privacy because secured messages are not kept solely between the sender(s) and recipient(s) of the communications but can be sent to other parties, such as WhatsApp. In effect, the ability of recipients to voluntarily forward messages they have received to WhatsApp is cast as breaking the privacy promises that WhatsApp has made.
Second, the authors note that WhatsApp collects a large volume of metadata about people in the course of their using the application. Using lawful processes, government agencies have compelled WhatsApp to disclose metadata on some of its users in order to pursue investigations and secure convictions against individuals. The case the article focuses on involves a government employee who leaked confidential banking information to BuzzFeed, which subsequently reported on it.
Assessing the Problems
In the case of forwarding messages for abuse reporting purposes, encryption is not broken and the feature is not new. These kinds of processes offer a mechanism that lets individuals self-identify and report problematic content. Such content can include child grooming, illicit or inappropriate messages or audio-visual content, or other abusive information.
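To make the mechanics concrete, here is a minimal, hypothetical sketch of recipient-side reporting. The function names and the toy XOR ‘cipher’ are purely illustrative (WhatsApp uses the Signal protocol, and its actual reporting API is not public); the point is that the report is assembled by the recipient from messages they have already decrypted, so the end-to-end channel itself is never weakened.

```python
# Illustrative sketch only: toy stand-ins for a real E2E messaging stack.
# A real client would use the Signal protocol; nothing here reflects
# WhatsApp's actual implementation or APIs.

from dataclasses import dataclass

@dataclass
class EncryptedMessage:
    sender: str
    ciphertext: bytes

def toy_decrypt(msg: EncryptedMessage, shared_key: bytes) -> str:
    # Placeholder for real end-to-end decryption on the recipient's device.
    plaintext = bytes(b ^ shared_key[i % len(shared_key)]
                      for i, b in enumerate(msg.ciphertext))
    return plaintext.decode()

def report_to_provider(reported_plaintexts: list[str], reporter: str) -> dict:
    # The *recipient* chooses to send already-decrypted content to the
    # provider's abuse queue. The provider never obtained a decryption key,
    # and messages that are not reported remain unreadable to it.
    return {"reporter": reporter, "messages": reported_plaintexts}

# Usage: the recipient decrypts as usual, then voluntarily forwards
# the offending plaintexts as part of an abuse report.
key = b"shared-session-key"
inbox = [EncryptedMessage("sender", bytes(b ^ key[i % len(key)]
         for i, b in enumerate(b"abusive message")))]
decrypted = [toy_decrypt(m, key) for m in inbox]
abuse_report = report_to_provider(decrypted, reporter="recipient")
print(abuse_report)
```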
What we do learn, however, is that the ‘reactive’ and ‘proactive’ methods of detecting abuse need to be fixed. In the case of the former, only about 1,000 people are responsible for intaking and reviewing the reported content after it has first been filtered by an AI:
Seated at computers in pods organized by work assignments, these hourly workers use special Facebook software to sift through streams of private messages, images and videos that have been reported by WhatsApp users as improper and then screened by the company’s artificial intelligence systems. These contractors pass judgment on whatever flashes on their screen — claims of everything from fraud or spam to child porn and potential terrorist plotting — typically in less than a minute.
Further, the employees are often reliant on machine learning-based translations of content, which makes it challenging to assess what is, in fact, being communicated in abusive messages. As reported,
… using Facebook’s language-translation tool, which reviewers said could be so inaccurate that it sometimes labeled messages in Arabic as being in Spanish. The tool also offered little guidance on local slang, political context or sexual innuendo. “In the three years I’ve been there,” one moderator said, “it’s always been horrible.”
There are also proactive modes of watching for abusive content using AI-based systems. As noted in the article,
Artificial intelligence initiates a second set of queues — so-called proactive ones — by scanning unencrypted data that WhatsApp collects about its users and comparing it against suspicious account information and messaging patterns (a new account rapidly sending out a high volume of chats is evidence of spam), as well as terms and images that have previously been deemed abusive. The unencrypted data available for scrutiny is extensive. It includes the names and profile images of a user’s WhatsApp groups as well as their phone number, profile photo, status message, phone battery level, language and time zone, unique mobile phone ID and IP address, wireless signal strength and phone operating system, as a list of their electronic devices, any related Facebook and Instagram accounts, the last time they used the app and any previous history of violations.
Unfortunately, the AI often makes mistakes. This led one interviewed content reviewer to state that, “[t]here were a lot of innocent photos on there that were not allowed to be on there … It might have been a photo of a child taking a bath, and there was nothing wrong with it.” Often, “the artificial intelligence is not that intelligent.”
The vast collection of metadata has been a long-reported concern and issue associated with WhatsApp and, in fact, is one of the reasons why many individuals advocate for the use of Signal instead. The reporting in the ProPublica article helpfully summarizes the vast amount of metadata that is collected, but that collection, in and of itself, does not present any evidence that Facebook or WhatsApp have transformed the application into one which inappropriately intrudes into persons’ privacy.
ProPublica Sets Back Reasonable Encryption Policy Debates
The ProPublica article harmfully sets back broader policy discussions around what is, and is not, a reasonable approach for platforms to take in moderating abuse when they have integrated strong end-to-end encryption. Such encryption prevents unauthorized third parties, including the platform providers themselves, from reading or analyzing the content of the communications. Enabling a reporting feature means that individuals who receive a communication are empowered to report it to the company, and the company can subsequently analyze what has been sent and take action if the content violates a terms of service or privacy policy clause.
In suggesting that what WhatsApp has implemented is somehow wrong, the article makes it more challenging for other companies to deploy similar reporting features without fearing that their decision will be reported on as ‘undermining privacy’. While there may be a valid policy discussion to be had–is a reporting process the correct way of dealing with abusive content and messages?–the authors didn’t go there. Nor did they seriously investigate whether additional resources should be devoted to analyzing reported content, or talk with artificial intelligence experts or machine-based translation experts about whether Facebook’s efforts to automate the reporting process are adequate, appropriate, or flawed from the start. All of those would be very interesting, valid, and important contributions to the broader discussion about integrating trust and safety features into encrypted messaging applications. But…those are not things that the authors chose to delve into.
The authors could also have discussed the broader importance of, and challenges in, building out messaging systems that deliberately conceal metadata, along with the benefits and drawbacks of such systems. While the authors do discuss how metadata can be used to crack down on individuals in government who leak data, as well as to assist in criminal investigations and prosecutions, there is little said about what kinds of metadata are most important to conceal and the tradeoffs in doing so. Again, there are some who think that all or most metadata should be concealed, and others who hold opposite views: there is room for a reasonable policy debate to be had and reported on.
Unfortunately, instead of actually taking up and reporting on the very valid policy discussions that sit at the edges of their article, the authors chose to be bombastic, asserting that WhatsApp was undermining the privacy protections that individuals thought they had when using the application. It’s bad reporting, insofar as it distorts the facts, and it is particularly disappointing given that ProPublica has shown it has the chops to do investigative work that is well sourced and nuanced in its outputs. This article, however, absolutely failed to make the cut.
For the past year, the Toronto Star has repeatedly run articles that take mobility data from mobile device advertisers and use it to assess whether Torontonians are moving too much. Reporting has routinely shown how people are moving more or less frequently, with articles often suggesting that people are moving too much when they’re supposed to be staying put.
The problem? The ways in which ‘too much’ is assessed run contrary to public health advice and lack sufficient nuance to inform the public. In the most recent reporting, we find that:
Between Jan. 18 and Feb. 28, average mobility across Ontario increased from 58 per cent to 65 per cent, according to the marketing firm Environics Analytics. Environics defines mobility as a percentage of residents 15 or older who travelled 500 metres or more beyond their home postal code.
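As a toy illustration of the arithmetic behind such a figure, the sketch below computes an Environics-style mobility percentage from a made-up table of device records. The data, the column layout, and the threshold handling are all assumptions for illustration; Environics’ actual methodology and data are not public.

```python
# Toy illustration of an Environics-style mobility metric: the share of
# people (15+) whose device travelled 500 m or more beyond their home
# postal code on a given day. All data below is invented.

records = [
    # (person_id, age, max_distance_from_home_postal_code_in_metres)
    ("a", 34, 1200),
    ("b", 29, 80),
    ("c", 61, 650),
    ("d", 17, 2400),
    ("e", 45, 0),
]

eligible = [r for r in records if r[1] >= 15]   # residents 15 or older
mobile = [r for r in eligible if r[2] >= 500]   # moved 500 m or more

mobility_pct = 100 * len(mobile) / len(eligible)
print(f"Mobility: {mobility_pct:.0f}%")  # 60% of eligible residents were 'mobile'
```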
To be clear: in Ontario, provincial and local public health leaders have strongly stated that people should get outside and exercise. That can involve walking or other outdoor activities. Those activities are not supposed to be restricted to 500 metres from your home; that advice was largely associated with the more restrictive lockdowns in some European countries. And we know that mobility is often higher in areas with higher percentages of BIPOC residents because they tend to have lower-paying jobs and must travel further to reach their places of employment.
As has become the norm, the fact that people have moved around more frequently as (admittedly ineffective) restrictions have been eased, and that people are ‘region hopping’ by going from more restricted zones to less restricted ones, is being tightly associated with personal or individual failures. From a quoted expert, we find that:
“It shows that once things start to open, people just seem to do whatever, and that’s a recipe for disaster.”
I would suggest that what we are seeing is a pent-up, pretty normal, human response: the provincial government has behaved erratically, and you have some people racing around to get stuff done before returning to another (ineffective) set of restrictions, along with a related set of people who believe that if the government is letting them move around then things must be comparatively safer. To put it another way, in the former case you have people behaving rationally (if, in some eyes, selfishly), whereas in the latter you have a failure by government to solve a collective action problem by downloading responsibility onto individuals. In both cases you are seeing an uptick in behaviour which suggests that people believe it’s safer to do things now than it was before, when the government assumed some responsibility, signalled that moving about was less safe, and actively discouraged it by keeping businesses and other ‘fun’ things shut down.
Throughout the pandemic response in Ontario, what has been evident is that the provincial government simply cannot develop and implement effective policies to mitigate the spread of the pandemic. The result of muddling through things has been that the public, and especially small business, has suffered extraordinarily whilst the gains have been meagre. The lack of paid sick leave, as an example, has seriously stymied the ability of lower-income workers to actually keep themselves apart from others while they wait for diagnoses and, if positive, recover from their infections.
To be fair, the Toronto Star and other outlets have covered paid sick leave issues, along with lots of other failures by the provincial government in its handling of the pandemic. And there is certainly some obligation on individuals to best adhere to public health advice. But we’ve long known these are collective action problems: there is a need to move beyond downloading responsibility to individuals and for governments to behave effectively, coherently, and accountably throughout major crises. The provincial government has failed, and continues to fail, on every one of these measures to the effect that individuals are responding to the past, present, and expected future actions of the government: more unpredictability and more restrictions on their daily lives as a result of government ineptitude.
Whereas the journalists could have cast what Ontarians are doing as a semi-natural response to the aforementioned government failings, instead those individuals are being castigated. We shouldn’t be blaming the victims of the pandemic, but I guess that’s what happens when assessing mobility data.
Per Politico, Trump staffers are worrying about their next job. I cannot believe that people working in the current administration continue to be given anonymity by the press: employees of the White House have knowingly supported a morally and ethically bankrupt president and administration, and what they’re most concerned about following the horror show of yesterday is their job prospects?
Expose them. Make them accountable for their culpability in what they have helped to nurture into existence. These people do not deserve anonymity.
The materials at issue relate to three stories Makuch wrote in 2014 on a Calgary man, Farah Shirdon, 22, charged in absentia with various terrorism-related offences. The articles were largely based on conversations Makuch had with Shirdon, who was said to be in Iraq, via the online instant messaging app Kik Messenger.
With court permission, RCMP sought access to Makuch’s screen captures and logs of those chats. Makuch refused to hand them over.
RCMP and the Crown argued successfully at two levels of court that access to the chat logs were essential to the ongoing investigation into Shirdon, who may or may not be dead. They maintained that journalists have no special rights to withhold crucial information.
Backed by alarmed media and free-expression groups, Makuch and Vice Media argued unsuccessfully that the RCMP demand would put a damper on the willingness of sources to speak to journalists.
The conflicting views will now be tested before the Supreme Court.
This case matters for numerous reasons.
First, there has been a real drying up of certain sources, which has prevented journalists in Canada from bringing material to public light. Such material doesn’t just pertain to terrorism and foreign combatants but, also, white collar crime, political scandals, cybercrime issues, and more. The Canadian public is being badly served by the Crown’s continued pursuit of this case.
Second, this case threatens to further diminish relations between the state and non-state actors who may, as a result, become (further) biased against state authorities. It’s important to be critical of the government, and especially of aspects of the government that can dramatically reshape citizens’ life opportunities. But should the press gallery adopt an unwarrantedly critical and combative tone towards the government, there could be a deleterious impact on the trust Canadians have in their government. By extension, this could lead to a further decline in the willingness to see the government as something that tries to represent the citizenry writ large. That kind of democratic malaise is dangerous to ongoing governance and a threat to the legitimization of all kinds of state activities.
I understand what the person interviewed for this article is suggesting: smartphones are incredibly good at conducting surveillance of where a person is, whom they speak with, etc. But proposing that people do the following (in order) can be problematic:
Leave their phones at home when meeting certain people (such as when journalists are going somewhere to speak with sensitive sources);
Turn off geolocation, Bluetooth, and Wi-fi;
Disable the ability to receive phone calls by setting the phone to Airplane mode;
Use strong and unique passwords;
And carefully evaluate whether or not to use fingerprint unlocks.
Number 1 is something that investigative journalists already do today when they believe that a high level of source confidentiality is required. I know this from working with, and speaking to, journalists over the past many years. The problem arises when those journalists are doing ‘routine’ things that they do not regard as particularly sensitive: how, exactly, is a journalist (or any other member of society) to know what a government agency has come to regard as sensitive or suspicious? And how can a reporter – who is often running several stories simultaneously, and perhaps needs to be near their phone for other kinds of stories they’re working on – just choose to abandon their phone elsewhere on a regular basis?
Number 2 makes some sense, especially if you: a) aren’t going to be using any services (e.g. maps to get to where you’re going); b) aren’t using attached devices (e.g. Bluetooth headphones, fitness trackers); and c) don’t need quick geolocation services. But much of the population does need those different kinds of services, and thus leaving those connectivity modes ‘on’ makes a lot of sense.
Number 3 makes sense as long as you don’t want to receive any phone calls. So, if you’re a journalist, and you never, ever expect someone to just contact you with a tip (or you’re comfortable with that tip going to another journalist when your phone isn’t available), then that’s great. While a lot of calls are scheduled, that certainly isn’t always the case.
Number 4 is a generally good idea. I can’t think of any issues with it, though I think that a password manager is a great idea if you’re going to have a lot of strong and unique passwords. And preferably a manager that isn’t tied to any particular operating system so you can move between different phone and computer manufacturers.
Number 5 is…complicated. Fingerprint readers facilitate the use of strong passwords, but they can also be used to unlock a device if your finger is pressed to the sensor. And if you add multiple people to the list of those who can decrypt the device, then you’re dealing with additional (in)security vectors. But for most people the concern is that their phone is stolen, or accessed by someone with physical access to the device, and against those threat models a fingerprint reader combined with a longer password is a good idea.
Journalists targeted by security services can write about relatively banal subjects. They might report on the amount and quality of food available in markets. They might write about the slow construction of roads. They might write about dismal housing conditions. They might even just include comments about a politician that are seen as unfavourable, such as noting that the politician wiped sweat from their brow before answering a question. Risky reporting from extremely hostile environments needn’t involve writing about government surveillance, policing, or corruption: far, far less ‘sensitive’ reporting can be enough for a government to cast a reporter as an enemy of the state.
The rationale for such hyper-vigilance on the part of dictatorships and authoritarian countries is that such governments regularly depend on international relief funds or the international community’s decision to not harshly impede the country’s access to global markets. Negative press coverage could cut off relief funds or monies from international organizations following a realization that the country lacks the ‘freedoms’ and ‘progress’ the government and most media publicly report on. If the international community realizes that the country in question is grossly violating human rights it might also limit the country’s access to capital markets. In either situation, limiting funds available to the government can endanger the reigning government or hinder leaders from stockpiling stolen wealth.
Calling for Help
Reaching out to international journalism protection organizations, or to foreign governments that might offer asylum, can raise serious negative publicity concerns for dictatorial or authoritarian governments. If a country’s journalists are fleeing because they believe they are in danger, and that fact rises to public attention, it could negatively affect a leader’s public image and the government’s access to funds. On this basis governments may place particular journalists under surveillance and punish them should they do anything to threaten the public image of the leader or country. Such surveillance is also utilized when reporters who are in a country are covering, and writing about, facts that stand in contravention to government propaganda.
The potential for electronic surveillance is particularly high, and serious, when the major telecommunications providers in a country tend to fully comply with, or willingly provide assistance to, state security and intelligence services. This degree of surveillance makes contacting international organizations that assist journalists risky; when a foreign organization does not encrypt communications sent to it, the organization’s security practices may further endanger a journalist calling for help. One of the many journalists covered in Bad News: Last Journalists in a Dictatorship, who feared his life was in danger from the Rwandan government, stated,
[h]e had written to the Committee to Protect Journalists, in New York, but someone in the president’s office had then shown him the application that he had filled out online. He didn’t trust people living abroad any longer.” (Bad News: Last Journalists in a Dictatorship, 83-4)
Such surveillance could have taken place in a few different ways: the local network or computer the journalist used to prepare and send the application might have been compromised. Alternately, the national network might have been subject to surveillance for ‘sensitive’ materials. Though the former case is a prevalent problem (e.g., Internet cafes being compromised by state actors) it’s not one that international journalist organizations are well suited to fix. The latter situation, however, where the national network itself is hostile, is something that media organizations can address.
Network inspection technologies can be configured to look for particular pieces of metadata and content that are of interest to government monitors. By sorting on certain kinds of metadata, such as websites visited, content of interest can be selected relatively efficiently and then analyzed automatically. That content analysis, however, depends on the government in question having access to plaintext communications.
Many journalism organizations historically have had ‘contact us’ pages on their websites, and many continue to have and use these pages. Some organizations secure their contact forms by using SSL encryption. But many organizations do not, including organizations that actively assert they will provide assistance to international journalists in need. These latter organizations make it trivial for states that are hostile to journalists to monitor in-country journalists who are making requests or issuing claims using these insecure contact forms.
Mitigating Threats
One way that journalism protection organizations can somewhat mitigate the risk of government surveillance is to implement SSL on their websites, which encrypts communications sent to the organization’s web server. It is still apparent to network monitors what website was visited but not which pages. And if the journalist sends a message using a ‘contact us’ form the data communicated will be encrypted, thus preventing network snoops from figuring out what is being said.
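The sketch below illustrates the difference using an invented contact form submission. It simply prints what a plaintext HTTP POST looks like on the wire versus the little that remains visible once the same request travels over TLS (roughly the hostname, via DNS or the SNI field, plus sizes and timing); the domain and form fields are made up for illustration.

```python
# Illustration of what a passive network observer can read, using an
# invented contact-form submission to a hypothetical help organization.

from urllib.parse import urlencode

host = "helpforjournalists.example"
form = {"name": "A. Reporter", "message": "I believe I am being followed."}
body = urlencode(form)

plaintext_http_request = (
    f"POST /contact HTTP/1.1\r\n"
    f"Host: {host}\r\n"
    f"Content-Type: application/x-www-form-urlencoded\r\n"
    f"Content-Length: {len(body)}\r\n"
    f"\r\n"
    f"{body}"
)

# Over plain HTTP, everything below is readable by anyone on the network path:
print("--- visible over HTTP ---")
print(plaintext_http_request)

# Over HTTPS/TLS, roughly only this much is observable to a passive monitor:
print("--- visible over HTTPS ---")
print(f"destination host: {host} (via DNS lookup and/or the TLS SNI field)")
print(f"approximate request size: {len(plaintext_http_request)} bytes, plus timing")
```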
SSL isn’t a bulletproof solution to stopping governments from monitoring messages sent using contact forms. But it raises the difficulty of intercepting, decrypting, and analyzing the calls for help sent by at-risk journalists. And such security is relatively trivial to add with the advent of free SSL certificate projects like ‘Let’s Encrypt’.
Ideally, journalism organizations would either add SSL to their websites, to inhibit adversarial states from reading messages sent to them, or provide only alternate means of communicating. That might mean mandating email and listing hosts that provide server-to-server encryption (i.e. those that have implemented STARTTLS), messaging applications that provide sufficient security to evade most state actors (everything from WhatsApp or Signal, to even Hangouts if the US government and NSA aren’t the actors you’re hiding from), or any other communications channel that should be secure against surveillance by states outside the Five Eyes.
No organization wants to be responsible for putting people at risk, especially when those people are just trying to find help in dangerous situations. Organizations that exist to, in part, protect journalists thus need to do the bare minimum and ensure their baseline contact forms are secured. Doing anything else is just enabling state surveillance of at-risk journalists, and stands as antithetical to the organizations’ missions.
NOTE: This post was previously published on Medium.
According to Citizen Lab researcher Christopher Parsons, these same powers that target journalists can be used against non-journalists under C-13. And the only reason we know about the aforementioned cases is that the press has a platform to speak out.
“This is an area where transparency and accountability are essential,” Parsons said in an interview. “We’ve given piles and piles of new powers to law enforcement and security agencies alike. What’s happened to this journalist shows we desperately need to know how the government uses its powers to ensure they’re not abused in any way.”
…
“I expect that the use of these particular powers will become more common as the police get more used to using it and more savvy in using them,” Parsons said.
These were powers that were ultimately sold to the public (and passed into law) as needed to combat ‘child pornography’. And now they’re being used to snoop on journalists to figure out who their sources are, without any mandate to report on how regularly the powers are used or the efficacy of such uses. For some reason, this process doesn’t inspire a lot of confidence in me.
Major newspapers do their best to verify the authenticity of leaked documents they receive from sources. They only publish the ones they know are authentic. The newspapers consult experts, and pay attention to forensics. They have tense conversations with governments, trying to get them to verify secret documents they’re not actually allowed to admit even exist. This is only possible because the news outlets have ongoing relationships with the governments, and they care that they get it right. There are lots of instances where neither of these two things are true, and lots of ways to leak documents without any independent verification at all.
No one is talking about this, but everyone needs to be alert to the possibility. Sooner or later, the hackers who steal an organization’s data are going to make changes in them before they release them. If these forgeries aren’t questioned, the situations of those being hacked could be made worse, or erroneous conclusions could be drawn from the documents. When someone says that a document they have been accused of writing is forged, their arguments at least should be heard.
As someone who routinely receives, and consults on, leaked documents I can emphatically say this is a serious issue. And that journalists are generally very cautious these days about publishing based on mysteriously sourced documents.
“Travel “naked” as one encryption expert told me. If any government wants your information, they will get it no matter what,” she adds.
Something has gone terribly awry if this is the advice that journalists working for international news outlets are giving to those entering or exiting the United States.