Pursuant to my last post on cryptography and pixie dust, it’s helpful to read through Matt Green’s highly accessible article “How to ‘backdoor’ an encryption app.” You’ll find that companies have a host of ways of enabling third-party surveillance, ranging from overt deception, to retaining access to communications metadata, to compromising their product’s security when required by authorities. In effect, there are lots of ways that data custodians can undermine their promises to consumers, and it’s pretty rare that the public ever learns that the method(s) used to secure their communications have either been broken or are generally ineffective.
Pixie Dust and Data Encryption
CNET recently revealed that Google is encrypting some of their subscribers’ Google Drive data. Data has always been secured in transit, but Google is testing encrypting data at rest. This means that, without the private key, someone who got access to your data on Google’s Drive servers would just get reams of ciphertext. At issue, however, is that ‘encryption’ is only a significant barrier if the third party storing your data cannot decrypt that data when a government-backed actor comes knocking.
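A minimal sketch of that distinction, in Python with the `cryptography` package (the library and the sample data are my own choices for illustration; this is not Google’s actual scheme):

```python
# Illustrative sketch only -- not Google's actual scheme. Requires the
# 'cryptography' package (pip install cryptography).
from cryptography.fernet import Fernet, InvalidToken

key = Fernet.generate_key()              # whoever holds this can read the data
ciphertext = Fernet(key).encrypt(b"my private document")

# At rest, the stored blob is just ciphertext: opaque without the key.
print(ciphertext[:20], b"...")

# The custodian (or anyone handed the key) trivially recovers the plaintext.
print(Fernet(key).decrypt(ciphertext))   # b'my private document'

# Without the right key, decryption simply fails.
try:
    Fernet(Fernet.generate_key()).decrypt(ciphertext)
except InvalidToken:
    print("wrong key: the ciphertext stays opaque")
```

The question that matters, then, isn’t whether ciphertext exists but who holds the key: if the storage provider generates and retains it, the provider can decrypt on demand, ‘encryption at rest’ notwithstanding.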
Encryption has become something like pixie dust, insofar as companies far and wide assure their end-users and subscribers that data is armoured in cryptographic shells. Don’t worry! You’re safe with us! Unfortunately, detailed audits of commercial encrypted products often reveal firms offering more snake oil than genuine protection. Just consider some of the following studies and reports that are, generally, damning[1]:
- N. Vratonjic, J. Freudiger, V. Bindschaedler, J-P. Hubaux. (2011). “The Inconvenient Truth about Web Certificates,” The Workshop on Economics of Information Security (WEIS), Fairfax, Virginia, USA. Available at: http://infoscience.epfl.ch/record/165676
- A. Arnbak and N. Van Eijk. (2012). “Certificate Authority Collapse: Regulating Systemic Vulnerabilities in the HTTPS Value Chain,” SSRN. Available at: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2031409
- A. Belenko and D. Sklyarov. (2012). ““Secure Password Managers” and “Military-Grade Encryption” on Smartphones: Oh, Really?” Elcomsoft Co. Ltd. Available at: http://www.elcomsoft.com/WP/BH-EU–2012-WP.pdf
- A. Kingsley-Hughes. (2010). “Encryption busted on NIST-certified Kingston, SanDisk and Verbatim USB flash drives,” ZDNet. Available at: http://www.zdnet.com/blog/hardware/encryption-busted-on-nist-certified-kingston-sandisk-and-verbatim-usb-flash-drives/6655
- Steve Thomas. (2013). “DecryptoCat,” TobTu. Available at: http://tobtu.com/decryptocat-old.php.
- For a general overview of Skype insecurity, see: Christopher Parsons. (2012). “Some Literature on Skype Security,” Quirks in Tech. Available at: http://quirksintech.ca/post/28281569850/some-literature-on-skype-security
As noted in Bruce Schneier’s (still) excellent analysis of cryptographic snake oil, there are at least nine warning signs that the company you’re dealing with isn’t providing a working cryptographic solution:
- You come across a lot of “pseudo-mathematical gobbledygook” that isn’t backed by referenced, third-party reviews of the cryptographic underpinnings.
- The company states that ‘new mathematics’ are used to secure your information.
- The cryptographic process is proprietary and neither you nor anyone else can examine how data is secured.
- Weird claims are made about the nature of the product, such that the claims or terms used could easily fit within the latest episode of a sci-fi show you’re watching.
- Excessive key lengths are trumpeted as demonstrated proof of cryptographic security (a toy illustration of why this proves nothing appears after this list).
- The company claims your data is secure because one-time pads are used.
- Claims are made that cannot be backed up in fact.
- Security proofs involve twists of linguistic logic, and lack demonstrations of mathematical logic.
- The product is somehow secure because it hasn’t been ‘cracked’. (Yet.)
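To make the key-length point concrete, here is a toy sketch (entirely my own invention, not modelled on any specific product) of a ‘proprietary’ cipher that boasts an enormous key yet collapses as soon as an attacker knows any fragment of the plaintext:

```python
# Toy 'proprietary' cipher: repeating-key XOR with a huge key.
# Entirely invented for illustration -- no real product is reproduced here.
import os

KEY = os.urandom(4096)  # marketed as "32,768-bit military-grade encryption!"

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """'Encrypt' (or decrypt) by XORing data against a repeating key."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

message = b"Meeting at the usual place on Friday at 9pm." * 200
ciphertext = xor_cipher(message, KEY)

# Known-plaintext attack: guessing (or observing) the bytes at one position --
# a standard header, a boilerplate greeting -- leaks the key bytes used there,
# and because the key repeats, later data at the same offsets falls with it.
crib = message[:44]
recovered_key_bytes = bytes(c ^ p for c, p in zip(ciphertext, crib))
leaked = xor_cipher(ciphertext[4096:4096 + 44], recovered_key_bytes)
print(leaked)  # secret text recovered without ever seeing KEY itself
```

Key size, in other words, tells you nothing on its own; what matters is the structure of the cipher, and only open, third-party review can speak to that.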
Unfortunately, people have been conditioned by Hollywood and other media that as soon as something is ‘encrypted’ only super-duper hackers can subsequently ‘penetrate the codes and extract the meta-details to derive a data-intuition of the content’ (or some such similar garbage). When you’re dealing with crappy ‘encryption’ – like storing private keys in plain text, or transmitting passphrases across the Internet in the clear – then the product is just providing consumers with a false sense of security. You don’t need to be a hacker to ‘defeat’ particularly poor implementations of data encryption; you often just need to know how to read a file system.
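As a hypothetical example of how little that ‘defeat’ can involve (the app name, file layout, and library below are invented for illustration), the attacker never touches the cryptography at all; they just read the key off the disk:

```python
# Hypothetical layout for a badly built 'encrypted notes' app that leaves its
# key on disk in plain text beside the ciphertext. Invented example only.
from pathlib import Path
from cryptography.fernet import Fernet

app_dir = Path.home() / ".insecure-notes"        # hypothetical app directory
app_dir.mkdir(exist_ok=True)

# What the vendor's app does on first run:
key = Fernet.generate_key()
(app_dir / "key.txt").write_bytes(key)           # the fatal step
(app_dir / "notes.enc").write_bytes(Fernet(key).encrypt(b"my diary"))

# What the 'attacker' does: no cryptanalysis, just two file reads.
stolen_key = (app_dir / "key.txt").read_bytes()
print(Fernet(stolen_key).decrypt((app_dir / "notes.enc").read_bytes()))
```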
Presently, however, there aren’t clear ways for consumers to know if a product is genuinely capable of securing their data in transit or at rest. There isn’t a clear solution to getting bad products off the market or generally improving product security, save for media shaming and/or the development of better cryptographic libraries that non-cryptographers (read: developers) can easily use when developing products. However, there are always going to be flaws and errors, and most consumers are never going to know that something has gone terribly awry until it’s far, far too late. So, despite there being a well-known problem, there isn’t a productive solution. And that has to change.
- These studies were chosen simply because they’re sitting on my computer now and/or I’ve referenced or written about them previously. If you spend a few minutes trawling Google Scholar using the search term ‘encryption broken’ you’re going to come across even more analyses of encryption ‘solutions’ that have been defeated. ↩
Worries about spectrum scarcity have prompted telecommunications providers to offer their subscribers femtocells, which are small, low-powered cellular base stations. Often, these stations are linked into subscribers’ existing 802.11 wireless or wired networks, and are used to relieve stress placed upon commercial cellular towers whilst simultaneously expanding cellular coverage. Questions have recently been raised about the security of those low-powered stations:
Ritter and his colleague, Doug DePerry, demonstrated for Reuters how they can eavesdrop on text messages, photos and phone calls made with an Android phone and an iPhone by using a Verizon femtocell that they had previously hacked.
…
They said that with a little more work, they could have weaponized it for stealth attacks by packaging all equipment needed for a surveillance operation into a backpack that could be dropped near a target they wanted to monitor.
While Verizon has issued a patch for its femtocells, there isn’t any reason why additional vulnerabilities won’t be found. By placing the stations in the hands of end-users, as opposed to retaining control over commercially deployed cellular towers, third-party security researchers and attackers can persistently test the cells until flaws are found. The consequence of this deployment strategy is that attackers will continue to find vulnerabilities to (further) weaken the security associated with cellular communications. Unfortunately, countering attackers will significantly depend on security researchers finding the same exploit(s) and reporting it/them to the affected companies. The likelihood of security researchers and attackers finding and exploiting the same flaws diminishes as more and more vulnerabilities are found in these devices.
In countries such as Canada, for researchers to conduct their research they must often first receive permission from the companies selling the femtocells: if there are any ‘digital locks’ around the technology, then researchers cannot legally investigate the code without prior corporate approval. Such restrictions don’t mean that researchers won’t conduct research, but do mean that researchers’ discoveries will go unreported and thus unpatched. As a result, consumers will largely remain reliant on the companies responsible for the security deficits in the first place to identify and correct those deficits, but absent public pressure that results from researchers disclosing vulnerabilities.
In light of the high economic costs of such identification and patching processes, I’m less than confident that femtocell providers are going to be investing oodles of cash just to potentially, as opposed to necessarily, identify and fix vulnerabilities. The net effect is that, at least in Canada, telecommunications providers can be assured that the public will remain relatively unconcerned about the security of providers’ products: security perceptions will be managed by preventing consumers from learning about prospective harms associated with telecommunications equipment. I guess this is just another area of research where Canadians will have to point to the US and say, “The same thing is likely happening here. But we’ll never know for sure.”
2013.7.9
Canadian carriers detect over 125 million attacks per hour on Canadians, comprising 80,000 new zero-day exploits identified every day. The vast majority of attacks are undetectable by traditional security software/hardware.
From “The Canadian Cyber Security Situation in 2011”
Snapchat: not for state secrets
Just in case you thought that Snapchat’s privacy settings were awesome, researchers have found that the security model is pretty piss poor.
Hackers who breached Google database appeared to seek identities of Chinese spies in U.S. who might be under watch.
This story is incredibly significant: it clarifies an additional target of the Aurora attacks in 2009 (the database in which Google stored FISA warrant information) and, by extension, provides a notion of why the NSA was involved in the investigation (i.e. any revelation of FISA information constitutes a national security issue).
I suspect we’ll never get the full story of what all occurred, but this article very nicely supplements some of the stuff we learned in Levy’s book In the Plex, as well as popular reporting around the series of attacks on major Western companies that happened in late 2009. It also reveals the significance of meta-data/information: it wasn’t necessarily required for attackers to know what specifically was being monitored to take action to protect agents; all that was needed was information that the surveillance was occurring for countermeasures to be deployed.
Via the New Yorker:
This morning, The New Yorker launched Strongbox, an online place where people can send documents and messages to the magazine, and we, in turn, can offer them a reasonable amount of anonymity. It was put together by Aaron Swartz, who died in January, and Kevin Poulsen.
This has lots of interesting promise, though it’ll be *more* interesting when a non-US group of journalists use the system (the code will be open sourced). Frankly, given the history of American courts, I don’t think that leaking to a US publication is a terribly good idea at the moment if you want to remain anonymous.
Via Techdirt:
Good news, everyone. The terrorists will win and New York City Mayor Michael Bloomberg wants to help. Of course, his speech is all about not letting the terrorists win. But he’s giving them exactly what they want.
Bloomberg is an incredibly worrying political figure. He’s gone from stating earlier this year that privacy is important, but cannot be maintained in the face of expanding police surveillance, to this:
“The people who are worried about privacy have a legitimate worry,” Mr. Bloomberg said during a press conference in Midtown. “But we live in a complex world where you’re going to have to have a level of security greater than you did back in the olden days, if you will. And our laws and our interpretation of the Constitution, I think, have to change.”
This is the second time in very recent memory that he, on the one hand, supports a notion of privacy while, on the other, asserts that privacy has to be increasingly limited to enjoy ‘security’. This is an absolutely false dichotomy, and is often linked to blasé efforts to ‘secure’ a population in ineffective, inefficient, or incorrect ways. Strong security protections can and should be accompanied by equally strong privacy protections; we need to escape the dichotomy and recognize that privacy and security tend to be mutually supportive of one another, at least when security solutions are appropriately designed and implemented.