Categories
Links Writing

Another Bad Proposal to Globally Weaken Security

Photo by Federica Galli on Unsplash

Steven Levy has an article out in Wired this week in which he, via the persons he interviewed, proclaims that the ‘going dark’ problem has been solved to the satisfaction of (American) government agencies and (unnamed and not quoted) ‘privacy purists’.1 According to the advocates of the so-called solution, should the proposed technical standard be advanced and developed then (American) government agencies could access encrypted materials while (American) users would enjoy the same degree of strong encryption as they do today. This would ‘solve’ the problem of (American) agencies’ investigations being stymied by suspects’ adoption of encrypted communications systems and personal devices.

Unfortunately Levy got played: the proposal he dedicates his article to is just another attempt to advance a ‘solution’ that doesn’t address the real technical or policy problems associated with developing a global backdoor system to our most personal electronic devices. Specifically, the architect of the solution overestimates the existent security characteristics of contemporary devices,2 overestimates the ability of companies to successfully manage a sophisticated and globe-spanning key management system,3 fails to address international policy issues about why other governments couldn’t or wouldn’t demand similar kinds of access (think Russia, China, Iran, etc),4 fails to contemplate an adequate key revocation system, and fails to adequately explain why the exceptional access system he envisions is genuinely needed. With regard to that last point, government agencies have access to more data than ever before in history and, yet, because they don’t have access to all of the data in existence the agencies are claiming they are somehow being ‘blinded’.

As I’ve written in a draft book chapter, for inclusion in a book to be published later this year or early next, the idea that government agencies are somehow worse off than in the past is pure nonsense. Consider that,

[a]s we have embraced the digital era in our personal and professional lives, [Law Enforcement and Security Agencies] LESAs have also developed new techniques and gained additional powers in order to keep pace as our memories have shifted from personal journals and filing cabinets to blogs, social media, and cloud hosting providers. LESAs now subscribe to services designed to monitor social media services for intelligence purposes, they collect bulk data from telecommunications providers in so-called ‘tower dumps’ of all the information stored by cellular towers, establish their own fake cellular towers to collect data from all parties proximate to such devices, use malware to intrude into either personal endpoint devices (e.g. mobile phones or laptops) or networking equipment (e.g. routers), and can even retroactively re-create our daily online activities with assistance from Canada’s signals intelligence agency. In the past, each of these kinds of activities would have required dozens or hundreds or thousands of government officials to painstakingly follow persons — many of whom might not be specifically suspected of engaging in a criminal activity or activity detrimental to the national security of Canada — and gain lawful entry to their personal safes, install cameras in their homes and offices, access and copy the contents of filing cabinets, and listen in on conversations that would otherwise have been private. So much of our lives have become digital that entirely new investigative opportunities have arisen which were previously restricted to the imaginations of science fiction authors both insofar as it is easier to access information but, also, because we generate and leave behind more information about our activities vis-a-vis our digital exhaust than was even possible in a world dominated by analog technologies.

In effect: the ‘solution’ covered by Levy doesn’t clearly articulate what problem must be solved, and it would end up generating more problems than it solves by significantly diminishing the security properties of devices while, simultaneously, raising international policy issues about which countries’ authorities could lawfully obtain decryption keys, and under what conditions. Furthermore, companies and their decryption keys would suddenly become even more heavily targeted by advanced adversaries than they are today. Instead of even attempting to realistically account for these realities of developing and implementing secure systems, the proposed ‘solution’ depends on a magical pixie dust assumption that you can undermine the security of globally distributed products and have no bad things happen.5

The article as written by Levy (and the proposed solution at the root of the article) is exactly the kind of writing and proposal that gives law enforcement agencies the energy to drive a narrative that backdooring all secure systems is possible and that the academic, policy, and technical communities are merely ideologically opposed to doing so. As has become somewhat common to say, while we can land a person on the moon, that doesn’t mean we can also land a person on the sun; while we can build (somewhat) secure systems we cannot build (somewhat) secure systems that include deliberately inserted backdoors. Ultimately, it’s not the case that ‘privacy purists’ oppose such solutions to undermine the security of all devices on ideological grounds: they’re opposed based on decades of experience, training, and expertise that lets them recognize such solutions as the charades that they are.

Footnotes

  1. I am unaware of a single person in the American or international privacy advocacy space who was interviewed for the article, let alone espouses positions that would be pacified by the proposed solution.
  2. Consider that there is currently a way of bypassing the existing tamper-resistant chip in Apple’s iPhone, which is specifically designed to ‘short out’ the iPhone if someone attempts to enter an incorrect password too many times. A similar mechanism would ‘protect’ the master key that would be accessible to law enforcement and security agencies.
  3. Consider that Microsoft has, in the past, lost its master key that is used to validate copies of Windows as legitimate Microsoft-assured products and, also, that Apple managed to lose key parts of its iOS codebase and reportedly its signing key.
  4. Consider that foreign governments look at the laws promulgated by Western nations as justification for their own abusive and human rights-violating legislation and activities.
  5. Some of the more unhelpful security researchers just argue that if Apple et al. don’t want to help foreign governments open up locked devices they should just suspend all service into those jurisdictions. I’m not of the opinion that protectionism and nationalism are ways of advancing international human rights or of raising the quality of life of all persons around the world; it’s not morally right to just cast the citizens of Russia, Ethiopia, China, India, Pakistan, or Mexico (and others!) to the wolves of their own oftentimes overzealous or rights abusing government agencies.
Categories
Aside Links

(In)Security and Scruff

From The Verge:

Ashley: And then, you mentioned it in transit, do you store these on Scruff’s personal servers? When it’s on the server, is it encrypted? What kind of protections do you have on the server?

We take a number of steps to secure our network. Encryption is a multifaceted and multilayered question and process. Yeah, I can say that the technical architecture of Scruff is one that we have had very smart people look into. We’ve worked with security researchers and security experts to ensure that the data that’s on Scruff stays safe and that our members can use Scruff with confidence and know that their information isn’t going to be disclosed to unauthorized parties.

This is exactly the kind of answer that should set off alarm bells: the developer of Scruff doesn’t actually answer the specific and direct question about the company’s encryption policies in an equivalently direct and specific way. Maybe Scruff really does have strong security protocols in place but you certainly wouldn’t know that was the case based on the answer provided.

It’d be a great idea if someone were to develop the equivalent of the EFF’s or IX Maps’ scorecards, which evaluate the policies of digital and Internet companies, and apply it to online dating services. I wonder how well these services would actually fare when evaluated on their privacy, security, and anti-harassment policies…

Categories
Writing

When ‘Contact Us’ Forms Become Life Threatening

Journalists targeted by security services can write about relatively banal subjects. They might report on the amount and quality of food available in markets. They might write about the slow construction of roads. They might write about dismal housing conditions. They might even just include comments about a politician that are seen as unfavourable, such as noting that the politician wiped sweat from their brow before answering a question. Risky reporting from extremely hostile environments needn’t involve writing about government surveillance, policing, or corruption: far, far less ‘sensitive’ reporting can be enough for a government to cast a reporter as an enemy of the state.

The rationale for such hyper-vigilance on the part of dictatorships and authoritarian countries is that such governments regularly depend on international relief funds or the international community’s decision to not harshly impede the country’s access to global markets. Negative press coverage could cut off relief funds or monies from international organizations following a realization that the country lacks the ‘freedoms’ and ‘progress’ the government and most media publicly report on. If the international community realizes that the country in question is grossly violating human rights it might also limit the country’s access to capital markets. In either situation, limiting funds available to the government can endanger the reigning government or hinder leaders from stockpiling stolen wealth.

Calling for Help

Reaching out to international journalism protection organizations, or to foreign governments that might offer asylum, can raise serious negative publicity concerns for dictatorial or authoritarian governments. If a country’s journalists are fleeing because they believe they are in danger, and that fact rises to public attention, it could negatively affect a leader’s public image and the government’s access to funds. On this basis governments may place particular journalists under surveillance and punish them should they do anything to threaten the public image of the leader or country. Such surveillance is also used against reporters inside the country who are covering, and writing about, facts that contradict government propaganda.

The potential for electronic surveillance is particularly high, and serious, when the major telecommunications providers in a country tend to fully comply with, or willingly provide assistance to, state security and intelligence services. This degree of surveillance makes contacting international organizations that assist journalists risky; when a foreign organization does not encrypt communications sent to it, the organization’s security practices may further endanger a journalist calling for help. One of the many journalists covered in Bad News: Last Journalists in a Dictatorship, who feared that his life was in danger from the Rwandan government, stated,

[h]e had written to the Committee to Protect Journalists, in New York, but someone in the president’s office had then shown him the application that he had filled out online. He didn’t trust people living abroad any longer. (Bad News: Last Journalists in a Dictatorship, 83-4)

Such surveillance could have taken place in a few different ways: the local network or computer the journalist used to prepare and send the application might have been compromised. Alternately, the national network might have been subject to surveillance for ‘sensitive’ materials. Though the former case is a prevalent problem (e.g., Internet cafes being compromised by state actors) it’s not one that international journalist organizations are well suited to fix. The latter situation, however, where the national network itself is hostile, is something that media organizations can address.

Network inspection technologies can be configured to look for particular pieces of metadata and content that are of interest to government monitors. By sorting on certain kinds of metadata, such as which websites were visited, monitors can efficiently select content of interest and then subject it to automated analysis. That content analysis, however, depends on the government in question having access to plaintext communications.
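To make the mechanics concrete, here is a rough sketch (with invented host names and synthetic flow records) of how metadata-first triage works: the monitor first selects traffic by which site was visited, and can only read the message itself when the payload travelled in plaintext.

```python
# Hypothetical sketch of metadata-first traffic triage; the host names and
# flow records are invented for illustration.

WATCHED_HOSTS = {"cpj.org", "rsf.org"}  # sites a hostile monitor might flag

flows = [
    {"host": "cpj.org", "scheme": "http", "body": "name=...&message=please help"},
    {"host": "cpj.org", "scheme": "https", "body": None},  # TLS: body not visible
    {"host": "example.com", "scheme": "http", "body": "q=road+construction"},
]

def triage(flows):
    """Select flows by metadata (which site was visited), then inspect content
    only where it was sent in plaintext."""
    for flow in flows:
        if flow["host"] not in WATCHED_HOSTS:
            continue  # metadata filter: ignore traffic to uninteresting sites
        if flow["scheme"] == "https":
            print(f"{flow['host']}: visit observed, but content is encrypted")
        else:
            print(f"{flow['host']}: plaintext content captured: {flow['body']!r}")

triage(flows)
```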

Many journalism organizations historically have had ‘contact us’ pages on their websites, and many continue to have and use these pages. Some organizations secure their contact forms by using SSL encryption. But many organizations do not, including organizations that actively assert they will provide assistance to international journalists in need. These latter organizations make it trivial for states that are hostile to journalists to monitor in-country journalists who are making requests or issuing claims using these insecure contact forms.

Mitigating Threats

One way that journalism protection organizations can somewhat mitigate the risk of government surveillance is to implement SSL on their websites, which encrypts communications sent to the organization’s web server. It is still apparent to network monitors what website was visited but not which pages. And if the journalist sends a message using a ‘contact us’ form the data communicated will be encrypted, thus preventing network snoops from figuring out what is being said.

SSL isn’t a bulletproof solution to stopping governments from monitoring messages sent using contact forms. But it raises the difficulty of intercepting, decrypting, and analyzing the calls for help sent by at-risk journalists. And such security is relatively trivial to implement with the advent of free certificate authorities like ‘Let’s Encrypt’.
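For a sense of how little is involved, the sketch below wraps a bare-bones web server in TLS using certificates issued by a free certificate authority. The file paths are hypothetical and a real deployment would use a production web server rather than Python’s standard library, but the principle is the same.

```python
# Minimal sketch: serving a site (and its contact form) over TLS.
# The certificate paths are hypothetical examples of what a free CA issues.
import http.server
import ssl

CERT = "/etc/letsencrypt/live/example.org/fullchain.pem"
KEY = "/etc/letsencrypt/live/example.org/privkey.pem"

server = http.server.HTTPServer(("0.0.0.0", 443), http.server.SimpleHTTPRequestHandler)

# Wrap the listening socket so that form submissions travel encrypted:
# a network monitor still sees that the site was visited, but not what was sent.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain(certfile=CERT, keyfile=KEY)
server.socket = context.wrap_socket(server.socket, server_side=True)

server.serve_forever()
```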

Ideally, journalism organizations would either add SSL to their websites (to inhibit adversarial states from reading messages sent to these organizations) or provide only alternate, secure means of communicating with them. That might mean listing email hosts that provide server-to-server encryption (i.e. those that have implemented STARTTLS), messaging applications that provide sufficient security to evade most state actors (everything from WhatsApp or Signal, to even Hangouts if the US Government and NSA aren’t the actors you’re hiding from), or any other kind of communications channel that should be secure from surveillance by countries outside the Five Eyes.

No organization wants to be responsible for putting people at risk, especially when those people are just trying to find help in dangerous situations. Organizations that exist to, in part, protect journalists thus need to do the bare minimum and ensure their baseline contact forms are secured. Doing anything else is just enabling state surveillance of at-risk journalists, and stands as antithetical to the organizations’ missions.

NOTE: This post was previously published on Medium.

Categories
Links Writing

WhatsApp’s new vulnerability is a concession, not a backdoor

The underlying weakness has to do with alerts rather than cryptography. Although they share the same underlying encryption, the Signal app isn’t vulnerable to the same attack. If the Signal client detects a new key, it will block the message rather than risk sending it insecurely. WhatsApp will send that message anyway. Since the key alert isn’t on by default, most users would have no idea.

It’s a controversial choice, but WhatsApp has good reasons for wanting a looser policy. Hard security is hard, as anyone who’s forgotten their PGP password can attest. Key irregularities happen, and each app has different policies on how to respond. Reached by The Guardian, WhatsApp pointed to users who change devices or SIM cards, the most common source of key irregularities. If WhatsApp followed the same rules as Signal, any message sent with an unverified key would simply be dropped. Signal users are happy to accept that as the price of stronger security, but with over a billion users across the world, WhatsApp is playing to a much larger crowd. Most of those users aren’t aware of WhatsApp’s encryption at all. Smoothing over those irregularities made the app itself simpler and more reliable, at the cost of one specific security measure. It’s easy to criticize that decision, and many have — but you don’t need to invoke a government conspiracy to explain it.

A multitude of secure messaging applications are vulnerable to keys being changed at the server level without the end-user being notified. This theoretically opens a way for state security agencies to ‘break into’ secured communications channels but, to date, we don’t have any evidence of a company in the Western or Western-affiliated world engaging in such behaviours.
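The difference described in the quoted passage comes down to a delivery policy when a recipient’s key changes. Here is a hypothetical sketch of the two behaviours; it is not either app’s actual code, just the logic of blocking on a changed key versus silently re-encrypting and sending.

```python
# Hypothetical sketch of two policies for handling a changed recipient key.
# Neither function is Signal's or WhatsApp's real implementation.

def deliver(message, key):
    print(f"delivering {len(message)} bytes encrypted to {key}")

def send_strict(message, session, current_key):
    """Signal-style policy: refuse to send until the new key is re-verified."""
    if current_key != session["verified_key"]:
        raise RuntimeError("key changed: message blocked pending verification")
    deliver(message, current_key)

def send_permissive(message, session, current_key, notify=False):
    """WhatsApp-style policy: re-encrypt to the new key and send anyway,
    optionally warning the sender (the warning is off by default)."""
    if current_key != session["verified_key"] and notify:
        print("warning: recipient's security code changed")
    session["verified_key"] = current_key
    deliver(message, current_key)

session = {"verified_key": "KEY-A"}
send_permissive(b"hello", session, "KEY-B")  # delivered silently
# send_strict(b"hello", {"verified_key": "KEY-A"}, "KEY-B")  # would raise
```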

There are laws that require some types of communications to be interceptable. Mobile communications carried by telecommunications carriers in Canada must be interceptable, and VoIP along with most other kinds of voice communications that are transmitted by equivalent carriers are subject to interception in the United States. There are not, however, similar demands currently placed on companies that provide chat or other next-generation communications systems.

While there are not currently laws mandating either interception or decryption of chat or next-generation communications, it remains plausible that laws will be introduced to compel this kind of functionality. It’s that possibility that makes how encryption keys are managed so important: once politicians sense that demanding decrypted communications is even technically possible, the likelihood of such interception laws increases dramatically. Such laws would formalize and calcify vulnerabilities into the communications systems that we use every day, ensuring not just that domestic authorities could always potentially be listening, but that foreign and unauthorized parties could be as well.

Categories
Links Writing

Demand for secret messaging apps is rising as Trump takes office

From The Verge:

Marlinspike’s goal isn’t unicorn riches, but unicorn ubiquity. For that, he wants to make encrypted messaging as easy — as beautiful, as fun, as expressive, as emoji-laden — as your default messaging app. His reason: if encryption is difficult, it self-selects for people willing to jump through those hoops. And bad guys are always willing to jump through the hoops. “ISIS or high-risk criminal activity will be willing to click two extra times,” he told me. “You and I are not.”

Marlinspike’s protocol for secure communication is incredibly effective at protecting message content from third party observation. Few protocols are nearly as effective, however, and most chat companies now claim that they offer ‘secure’ communications. Almost no consumers are situated to evaluate those claims: there are known deficient applications that are widely used, despite the security community having identified and discussed their problems. Encryption isn’t actually going to provide the security that most users think it does unless best-of-class protocols are widely adopted.1

The problem of imperfect consumer knowledge is a hard one to solve, in part because the security community cannot evaluate all claims of encryption. In work that I’ve been involved in we’ve seen simplistic ciphers, hard coded passwords, and similar deficiencies. In some cases companies have asserted that they secure data but then failed to encrypt data travelling between smartphone apps and company servers. It’s laborious work to find these deficiencies and it’s cheap for companies to claim that they offer a ‘secure’ product. And it ultimately means that consumers (who aren’t experts in cryptography, nor should they be expected to be such experts) are left scratching their heads and, sometimes, just throwing their hands up in frustration as a result of the limited information that is available.
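To give a flavour of what those deficiencies look like, here is a hypothetical composite (not code from any specific product) of the sort of thing that sometimes gets marketed as ‘encryption’: a key hard-coded into every copy of the app and a repeating XOR cipher that any observer can reverse.

```python
# Hypothetical composite of a weak 'encryption' scheme of the kind sometimes
# found in consumer apps: a hard-coded key plus a repeating XOR cipher.
from itertools import cycle

HARDCODED_KEY = b"s3cret!!"  # shipped identically inside every copy of the app

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR against a repeating key is trivially reversible: applying the same
    # function again recovers the plaintext, and any known plaintext leaks the key.
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

ciphertext = xor_cipher(b"password=hunter2", HARDCODED_KEY)
print(xor_cipher(ciphertext, HARDCODED_KEY))  # b'password=hunter2'
```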


  1. Admittedly, Marlinspike’s goal is to spread his protocol widely and the result has been that the largest chat service in the world, WhatsApp, now provides a robust level of communications security. To activate the protocol in other chat services, such as Google’s Allo or Facebook’s Messenger, you need to first set up a private conversation.


Categories
Links

150 Filmmakers Want Nikon and Canon to Sell Encrypted Cameras. Here’s Why

From Wired:

Implementing that feature wouldn’t be simple—particularly in high-definition cameras that have to write large files to an SD card at a high frequency, says Jonathan Zdziarski, an encryption and forensics expert who also works as a semi-professional photographer. Integrating encryption without slowing down a camera would likely require not just new software, but new microprocessors dedicated to encrypting files with maximum efficiency, as well as security engineering talent that camera companies likely don’t yet have. He describes the process as “feasible,” but potentially expensive. “I don’t expect Nikon or Canon to know how to do this the way computer companies do. It’s a significant undertaking,” says Zdziarski. “Their first question is going to be, ‘how do we pay for that?‘”

Adding in encryption is a non-trivial undertaking. It’s one that is often done badly. And strong encryption – such that no party can access the content absent a passphrase – also has drawbacks because if you forget that phrase then you’re permanently locked out of the data. As someone who has suffered data loss for exactly that reason, I’m incredibly sympathetic to the view that the level of security proposed – opt-in strong security – is not necessarily something that most users want, nor something that most companies want to field support calls over.
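For a rough picture of what opt-in, passphrase-based file encryption involves, here is an illustrative sketch using the third-party ‘cryptography’ package; the filename and parameters are examples, not anything a camera vendor has shipped. The key is derived from the passphrase, so losing the passphrase means losing the data, which is exactly the drawback described above.

```python
# Illustrative sketch: passphrase-based file encryption with the third-party
# 'cryptography' package. Filename and parameters are examples only.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.scrypt import Scrypt

def encrypt_file(path: str, passphrase: bytes) -> None:
    salt, nonce = os.urandom(16), os.urandom(12)
    # Derive the key from the passphrase: forget the passphrase, lose the data.
    key = Scrypt(salt=salt, length=32, n=2**14, r=8, p=1).derive(passphrase)
    with open(path, "rb") as f:
        ciphertext = AESGCM(key).encrypt(nonce, f.read(), None)
    with open(path + ".enc", "wb") as f:
        f.write(salt + nonce + ciphertext)

encrypt_file("DSC_0001.NEF", b"correct horse battery staple")
```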

Categories
Links

Privacy and Policing in a Digital World

As the federal government holds public consultations on what changes should be made to Bill C-51, the controversial anti-terrorism legislation passed by the Conservative government, various police agencies such as the RCMP and the Canadian Association of Chiefs of Police have petitioned to gain new powers to access telephone and internet data. Meanwhile nearly half of Canadians believe they should have the right to complete digital privacy. The Agenda examines the question of how to balance privacy rights with effective policing in the digital realm.

I was part of a panel that discussed some of the powers that the Government of Canada is opening for discussion as part of its National Security consultation, which ends on December 15, 2016. If you want to provide comments to the government, see: https://www.canada.ca/en/services/defence/nationalsecurity/consultation-national-security.html

Categories
Links

I’m giving up on PGP

This is one of the clearest (and bluntest) critiques of PGP/GPG I’ve read in a long time. It very, very clearly establishes PGP’s inability to successfully protect people facing diverse threat models, the failure of the Web of Trust to secure identities and communities of trust, and challenges of key security and rotation. I’d consider it assigned reading in a university class if the students were ever forced to learn about PGP itself.

Categories
Links Writing

Feds Walk Into A Building. Demand Everyone’s Fingerprints To Open Phones

Forbes:

Legal experts were shocked at the government’s request. “They want the ability to get a warrant on the assumption that they will learn more after they have a warrant,” said Marina Medvin of Medvin Law. “Essentially, they are seeking to have the ability to convince people to comply by providing their fingerprints to law enforcement under the color of law – because of the fact that they already have a warrant. They want to leverage this warrant to induce compliance by people they decide are suspects later on. This would be an unbelievably audacious abuse of power if it were permitted.”

Jennifer Lynch, senior staff attorney at the Electronic Frontier Foundation (EFF), added: “It’s not enough for a government to just say we have a warrant to search this house and therefore this person should unlock their phone. The government needs to say specifically what information they expect to find on the phone, how that relates to criminal activity and I would argue they need to set up a way to access only the information that is relevant to the investigation.”

It’s insane that the US government is getting chained warrants that authorize expansive searches without clarifying what is being sought or the specific rationales for such searches. Such actions represent an absolute violation of due process.

But, at the same time, the government’s actions (again) indicate the relative weaknesses of the ‘going dark’ arguments. While iPhones and other devices are secured to prevent all actors from illegitimately accessing them, fingerprint-enabled devices can let government agencies bypass security protections with relative ease. This doesn’t mean that fingerprint scanners are bad – most people’s threat models aren’t police, but criminals, snoopy friends and family, etc – but instead that authorities can routinely bypass, rather than need to break, cryptographically-secured communications.

Categories
Links

Turkey coup plotters’ use of ‘amateur’ app helped unveil their network

The Guardian:

A senior Turkish official said Turkish intelligence cracked the app earlier this year and was able to use it to trace tens of thousands of members of a religious movement the government blames for last month’s failed coup.

Members of the group stopped using the app several months ago after realising it had been compromised, but it still made it easier to swiftly purge tens of thousands of teachers, police, soldiers and justice officials in the wake of the coup.

Starting in May 2015, Turkey’s intelligence agency was able to identify close to 40,000 undercover Gülenist operatives, including 600 ranking military personnel, by mapping connections between ByLock users, the Turkish official said.

However, the Turkish official said that while ByLock helped the intelligence agency identify Gülen’s wider network, it was not used for planning the coup itself. Once Gülen network members realised ByLock had been compromised they stopped using it, the official said.

But intelligence services and policing agencies are still ‘Going Dark’…