Pixie Dust and Data Encryption

CNET recently revealed that Google is encrypting some of its subscribers’ Google Drive data. Data has always been secured in transit, but Google is now testing encryption of data at rest. This means that, without the private key, someone who gained access to your data on Google’s Drive servers would get nothing but reams of ciphertext. At issue, however, is that ‘encryption’ is only a significant barrier if the third party storing your data cannot decrypt it when a government-backed actor comes knocking.
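
To make ‘encryption at rest’ concrete, here is a minimal sketch of the general idea, assuming Python’s third-party cryptography package and its Fernet recipe; the sample data and key handling are illustrative assumptions on my part, not a description of how Google actually does it:

```python
# A toy illustration of encryption at rest, using Fernet (AES-128-CBC plus
# an HMAC) from the third-party 'cryptography' package: pip install cryptography
from cryptography.fernet import Fernet, InvalidToken

key = Fernet.generate_key()      # the private key; kept away from the stored data
ciphertext = Fernet(key).encrypt(b"my tax returns and love letters")

# This is all an intruder on the storage server sees: opaque bytes.
print(ciphertext)

# Without the key, decryption fails outright rather than leaking anything.
try:
    Fernet(Fernet.generate_key()).decrypt(ciphertext)
except InvalidToken:
    print("wrong key: reams of ciphertext, nothing more")

# With the key, the original data comes back intact.
print(Fernet(key).decrypt(ciphertext))
```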

Encryption has become something like pixie dust, insofar as companies far and wide assure their end-users and subscribers that data is armoured in cryptographic shells. Don’t worry! You’re safe with us! Unfortunately, detailed audits of commercial encryption products often reveal firms offering more snake oil than genuine protection; the studies and reports in this area are, generally, damning.[1]

As noted in Bruce Schneier’s (still) excellent analysis of cryptographic snake oil, there are at least nine warning signs that the company you’re dealing with isn’t providing a working cryptographic solution:

  1. You come across a lot of “pseudo-mathematical gobbledygook” that isn’t backed by referenced, reviewed third-party analyses of the cryptographic underpinnings.
  2. The company states that ‘new mathematics’ is used to secure your information.
  3. The cryptographic process is proprietary and neither you nor anyone else can examine how data is secured.
  4. Weird claims are made about the nature of the product, using terms that would fit comfortably into the latest episode of a sci-fi show.
  5. Excessive key lengths are trumpeted as proof of cryptographic security.
  6. The company claims your data is secure because one-time pads are used. (See the sketch after this list.)
  7. Claims are made that cannot be backed up in fact.
  8. Security proofs rely on twists of linguistic logic and lack demonstrations of mathematical logic.
  9. The product is somehow secure because it hasn’t been ‘cracked’. (Yet.)
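
On the one-time pad claim in point 6: a genuine one-time pad demands truly random key material as long as the message, used exactly once, which is impractical for most products. What vendors ship instead often reuses pad material, and that failure is easy to demonstrate. A minimal sketch, with messages and padding chosen purely for illustration:

```python
# Why "we use one-time pads" is usually a red flag: if the pad is ever
# reused, XORing two ciphertexts cancels the pad and leaks plaintext structure.
import os

def xor(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

pad = os.urandom(32)                        # fine, if used exactly once
m1 = b"attack the fort at dawn".ljust(32)   # pad messages to a fixed length
m2 = b"retreat to the harbour".ljust(32)

c1 = xor(m1, pad)
c2 = xor(m2, pad)                           # the fatal mistake: reusing the pad

# An eavesdropper holding both ciphertexts recovers m1 XOR m2 with no key at all.
leak = xor(c1, c2)
assert leak == xor(m1, m2)
# XORing a space (0x20) with a letter merely flips its case, so plaintext
# structure bleeds through -- the basis of classic crib-dragging attacks.
```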

Unfortunately, people have been conditioned by Hollywood and other media to believe that as soon as something is ‘encrypted’ only super-duper hackers can subsequently ‘penetrate the codes and extract the meta-details to derive a data-intuition of the content’ (or some such similar garbage). When you’re dealing with crappy ‘encryption’ – like storing private keys in plain text, or transmitting passphrases across the Internet in the clear – the product is just providing consumers a false sense of security. You don’t need to be a hacker to ‘defeat’ particularly poor implementations of data encryption; often you just need to know how to read a file system.
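
As a concrete (and entirely hypothetical) sketch of that failure mode, imagine a product that encrypts its data properly but writes the key to a plaintext file beside the ciphertext; the file names and the reuse of Fernet here are my own assumptions, not a description of any specific product:

```python
# The failure mode in practice: ciphertext and key sitting side by side on
# disk, so 'defeating' the encryption is just two file reads.
from cryptography.fernet import Fernet

# --- what the (hypothetical) product does ---
key = Fernet.generate_key()
with open("vault.key", "wb") as f:          # private key stored in plain text
    f.write(key)
with open("vault.bin", "wb") as f:
    f.write(Fernet(key).encrypt(b"customer records"))

# --- what an intruder does: no cryptanalysis, just reading the file system ---
with open("vault.key", "rb") as f:
    stolen_key = f.read()
with open("vault.bin", "rb") as f:
    stolen_blob = f.read()
print(Fernet(stolen_key).decrypt(stolen_blob))   # b'customer records'
```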

Presently, however, there aren’t clear ways for consumers to know whether a product is genuinely capable of securing their data in transit or at rest. Nor is there a clear way to get bad products off the market or to generally improve product security, save for media shaming and/or the development of better cryptographic libraries that non-cryptographers (read: developers) can easily use when building products. There will always be flaws and errors, and most consumers will never know that something has gone terribly awry until it’s far, far too late. So, despite there being a well-known problem, there isn’t a productive solution. And that has to change.


  1. These studies were chosen simply because they’re sitting on my computer now, or because I’ve referenced or written about them previously. If you spend a few minutes trawling Google Scholar using the search term ‘encryption broken’ you’re going to come across even more analyses of encryption ‘solutions’ that have been defeated.  ↩