Categories
Quotations

2013.7.19

Mark Zuckerberg runs a giant spy machine in Palo Alto, California. He wasn’t the first to build one, but his was the best, and every day hundreds of thousands of people upload the most intimate details of their lives to the Internet. The real coup wasn’t hoodwinking the public into revealing their thoughts, closest associates, and exact geographic coordinates at any given time. Rather, it was getting the public to volunteer that information. Then he turned off the privacy settings.

If the state had organized such an information drive, protestors would have burned down the White House. But the state is the natural beneficiary of this new “social norm.” Today, that information is regularly used in court proceedings and law enforcement. There is no need for warrants or subpoenas. Judges need not be consulted. The Fourth Amendment does not come into play. Intelligence agencies don’t have to worry about violating laws protecting the citizenry from wiretapping and information gathering. Sharing information “more openly” and with “more people” is a step backward in civil liberties. And spies, whether foreign or domestic, are “more people,” too.

Marc Ambinder and D.B. Grady. (2013). Deep State: Inside the Government Secrecy Industry. New Jersey: Wiley. P. 27.
Categories
Aside Links

Backdooring an ‘Encrypted’ Application

Pursuant to my last post on cryptography and pixie dust, it’s helpful to read through Matt Green’s highly accessible article “How to ‘backdoor’ an encryption app.” You’ll find that companies have a host of ways of enabling third-party surveillance, ranging from overt deception, to having access to communications metadata, to compromising their product’s security if required by authorities. In effect, there are lots of ways that data custodians can undermine their promises to consumers, and it’s pretty rare that the public ever learns that the method(s) used to secure their communications have either been broken or are generally ineffective.

Categories
Writing

Pixie Dust and Data Encryption

CNet recently revealed that Google is encrypting some of their subscribers’ Google Drive data. Data has always been secured in transit, but Google is testing encrypting data at rest. This means that, without the private key, someone who got access to your data on Google’s Drive servers would just get reams of ciphertext. At issue, however, is that ‘encryption’ is only a significant barrier if the third party storing your data cannot decrypt the data when a government-backed actor comes knocking.
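The distinction matters because encryption at rest only protects you from whoever lacks the key. A toy sketch can illustrate this (this is deliberately NOT real cryptography – a hypothetical XOR construction for illustration only; real systems use vetted ciphers like AES – but it shows why the question “who holds the key?” is the whole ballgame):

```python
# Toy illustration of 'encryption at rest'. NOT real cryptography:
# a hypothetical keystream construction, for illustration only.
import hashlib
import secrets

def keystream(key: bytes, length: int) -> bytes:
    """Stretch a key into a pseudo-random keystream (toy construction)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_encrypt(key: bytes, data: bytes) -> bytes:
    """XOR data against the keystream; applying it twice decrypts."""
    return bytes(d ^ k for d, k in zip(data, keystream(key, len(data))))

key = secrets.token_bytes(32)          # held by... whom? That's the question.
document = b"meet me at the usual place"
stored_on_server = xor_encrypt(key, document)   # what the server holds at rest

# Without the key: reams of ciphertext. With it: everything.
assert stored_on_server != document
assert xor_encrypt(key, stored_on_server) == document
```

If the storage provider, rather than the user, holds `key`, then the “encryption” is no barrier at all to a compelled disclosure: the provider simply runs the decryption themselves.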

Encryption has become something like pixie dust, insofar as companies far and wide assure their end-users and subscribers that data is armoured in cryptographic shells. Don’t worry! You’re safe with us! Unfortunately, detailed audits of commercial encrypted products often reveal firms offering more snake oil than genuine protection. Just consider some of the following studies and reports that are, generally, damning[1]:

As noted in Bruce Schneier’s (still) excellent analysis of cryptographic snake oil, there are at least nine warning signs that the company you’re dealing with isn’t providing a working cryptographic solution:

  1. You come across a lot of “pseudo-mathematical gobbledygook” that isn’t backed by referenced, peer-reviewed third-party analyses of the cryptographic underpinnings.
  2. The company states that ‘new mathematics’ are used to secure your information.
  3. The cryptographic process is proprietary and neither you nor anyone else can examine how data is secured.
  4. Weird claims are made about the nature of the product, such that the claims or terms used could easily fit within the latest episode of a sci-fi show you’re watching.
  5. Excessive key lengths are trumpeted as a demonstrated proof of cryptographic security.
  6. The company claims your data is secure because one-time pads are used.
  7. Claims are made that cannot be backed up in fact.
  8. Security proofs involve twists of linguistic logic, and lack demonstrations of mathematical logic.
  9. The product is somehow secure because it hasn’t been ‘cracked’. (Yet.)
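Point 6 deserves a short demonstration of why “we use one-time pads” is almost always snake oil. A true one-time pad must be truly random, as long as the message, and never reused – conditions commercial products essentially never meet. The sketch below (invented sample messages) shows the classic failure: reuse the pad once, and an eavesdropper can cancel it out entirely:

```python
# Why 'we use one-time pads' is usually snake oil: a real pad must be
# as long as the message and NEVER reused. Reuse leaks plaintext structure.
import secrets

def xor(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

pad = secrets.token_bytes(32)
m1 = b"attack the east gate at dawn  "
m2 = b"retreat to the harbour at nine"
c1 = xor(m1, pad)   # 'encrypted' with the pad
c2 = xor(m2, pad)   # same pad reused -- the fatal mistake

# XORing the two ciphertexts cancels the pad, exposing m1 XOR m2,
# from which classical cryptanalysis recovers both messages.
assert xor(c1, c2) == xor(m1, m2)
```

Products claiming one-time-pad security in practice either reuse key material or generate it with a pseudo-random generator, which reduces the scheme to an ordinary (and usually unvetted) stream cipher.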

Unfortunately, people have been conditioned by Hollywood and other media that as soon as something is ‘encrypted’ only super-duper hackers can subsequently ‘penetrate the codes and extract the meta-details to derive a data-intuition of the content’ (or some such similar garbage). When you’re dealing with crappy ‘encryption’ – like storing private keys in plain text, or transmitting passphrases across the Internet in the clear – then the product is just providing consumers a false sense of security. You don’t need to be a hacker to ‘defeat’ particularly poor implementations of data encryption; you often just need to know how to read a file system.

Presently, however, there aren’t clear ways for consumers to know if a product is genuinely capable of securing their data in transit or at rest. There isn’t a clear solution to getting bad products off the market or generally improving product security, save for media shaming and/or the development of better cryptographic libraries that non-cryptographers (read: developers) can easily use when developing product. However, there are always going to be flaws and errors, and most consumers are never going to know that something has gone terribly awry until it’s far, far too late. So, despite there being a well-known problem, there isn’t a productive solution. And that has to change.


  1. The studies were selected simply because they’re sitting on my computer now, or because I’ve referenced or written about them previously. If you spend a few minutes trawling Google Scholar using the search term ‘encryption broken’ you’re going to come across even more analyses of encryption ‘solutions’ that have been defeated.  ↩
Categories
Links

Constraints

Matt has written one of the most succinct and clear pieces on product constraints. It’s well worth the time to read and subsequently mull over.

Categories
Writing

A Brief Comment on ‘Metadata’

We live in environments that are pervasively penetrated by digital systems. We carry personalized tracking devices with us everywhere (i.e. mobile phones) that have increasingly sophisticated sensors embedded in them. We rely on Internet-based systems for travel, work, and play. Even our ‘landline’ communications are pervasively turned into digital code when we call a friend or family member.

Every one of the previously mentioned transactions generates ‘non-content’ data: when and who we call, and for how long; which cellular towers we pass by; what (semi-)unique IP addresses are provided to websites we visit, and so forth. These identifiers can be used to trace our movements, practices, and who we communicate with: they are often far more revealing about ourselves than the pure content of our communications.
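Even a trivial analysis of ‘non-content’ records demonstrates how revealing they are. The sketch below uses invented sample call records (all names and numbers are hypothetical) to show that, with no access to what was said, a few lines of code surface a person’s strongest relationships and routines:

```python
# A minimal sketch of metadata analysis. The call records are invented
# sample data; no communications content is touched at any point.
from collections import Counter

# (caller, callee, hour_of_day, duration_seconds)
calls = [
    ("alice", "bob",     23, 1800),
    ("alice", "bob",     23, 2100),
    ("alice", "doctor",  10,  300),
    ("alice", "hotline",  2,  600),
    ("alice", "bob",     22, 1500),
]

# Strongest social tie: who talks to whom, how often.
contacts = Counter((caller, callee) for caller, callee, _, _ in calls)

# Behavioural pattern: calls placed late at night.
late_night = [c for c in calls if c[2] >= 22 or c[2] < 5]

print(contacts.most_common(1))  # Alice's most frequent contact
print(len(late_night))          # how many calls happen after hours
```

From five records alone we learn who Alice is closest to, that she called a doctor, and that she placed a 2 a.m. call to a hotline: inferences about health, relationships, and emotional state, all ‘just metadata.’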

It’s with the reality of the surveillance potentials of metadata that we need to reorient how to talk about such ‘non-content’ data. It has become depressingly common to see elected officials and other authorities state that “it’s just metadata” as well as “we only use it for appropriate purposes.”

To the first statement, metadata can reveal incredibly sensitive information about individuals and about their community/communities. The collection and processing of such information therefore warrants a similar degree of care and concern as the processing of clearly personal information.

To the second statement, clarity around the collection and use of metadata is needed. Moreover, data cannot be massively collected with ‘appropriate purposes’ applied only after the fact, to how the data is subsequently parsed. The very collection of data itself needs to be targeted, justified, and subject to significant oversight – arguably more oversight than ‘just’ the content of communications.

In a recent paper on metadata, Ontario Information and Privacy Commissioner Ann Cavoukian wrote:

we urge governments to adopt a proactive approach to securing the rights affected by intrusive surveillance programs. To protect privacy and liberty, any power to seize communications metadata must come with strong safeguards directly embedded into programs and technologies, that are clearly expressed in the governing legal framework. The purpose, scope, and duration of data collection must be strictly controlled. More robust judicial oversight, parliamentary or congressional controls, and systems capable of providing for effective public accountability should be brought to bear. The need for operational secrecy must not stand in the way of public accountability. Our essential need for privacy and the preservation of our freedoms are at stake.[1]

Commissioner Cavoukian is decidedly correct that data collection, use, and intent must be carefully controlled. However, I would go a step further than the Commissioner has in her call for additional parliamentary oversight and control. In Canada, unlike in the United States and United Kingdom, there is no committee of parliamentarians with security clearances to oversee how our intelligence and security authorities operate. Presently, the Canadian system predominantly enjoys only Cabinet-level political oversight: we need a broader set of eyes, and eyes that are not mindful of the ruling government’s optics, to evaluate the appropriateness of what our intelligence and security services are up to. So, in excess of Commissioner Cavoukian’s comments, we actually need to modify parliament such that oversight is even possible.

Reasonable people can disagree on the value and desire for national security and foreign intelligence services. Such disagreements should happen more prominently amongst parliamentarians and the public. However, there should be no disagreement that, in order to represent the public, at least some members of our legislative assemblies must know the extent of the government’s security and intelligence powers, capabilities, and practices.

Canada is a democracy and, as such, it is imperative that we establish a committee of parliamentarians to oversee how our security and spy agencies are collecting, using, and retaining the metadata and content associated with our communications. The actions that these agencies engage in are too significant to leave to Cabinet oversight alone.


  1. Ann Cavoukian. (2013). “A Primer on Metadata: Separating Fact from Fiction.” Office of the Information and Privacy Commissioner of Ontario. Available at: http://www.privacybydesign.ca/content/uploads/2013/07/Metadata.pdf. Pp. 10. Emphasis added.  ↩
Categories
Humour Links Writing

Definitions for the American Surveillance State

David Sirota of Salon has developed an excellent set of terms to speed along discussions about the contemporary American surveillance state. My own favorites include:

Least untruthful: A new legal doctrine that allows an executive branch official to issue a deliberate, calculated lie to Congress yet avoid prosecution for perjury, as long as the official is protecting the executive branch’s political interests. Usage example: Director of National Intelligence James Clapper avoided prosecution for perjury because he insisted that the blatant lie he told to Congress was merely the “least untruthful” statement he could have made.

And:

Modest encroachment: A massive, indiscriminate intrusion. Usage example: President Obama has deemed the NSA’s “collect it all” surveillance operation, which has captured 20 trillion information transactions and touches virtually all aspects of American life, a “modest encroachment” on citizens’ right to privacy.

The full listing of terms is depressingly cynical. However, the persistent – if often humorous – turn to cynicism may ultimately limit how politicians address and respond to Snowden’s surveillance revelations. What Snowden confirmed raises existential challenges to the potential to imagine, let alone actualize, a deliberative democratic state. The accompanying risk is that instead of addressing such challenges head on, citizens may retreat to cynicism rather than engaging in the hard work of recuperating their increasingly-authoritarian democratic institutions. We’re at a point where we need a more active, not more withdrawn and bemused, citizen response to government excesses.

Categories
Aside Links Quotations

How to Publish A Story That Explains How to Use Social Media to Juice Your Story’s Popularity

emptyage:

I paid to have my latest Wired story promoted on social networks, like Twitter and Facebook, to try to show that a lot of the metrics* we use to measure a story’s success are bullshit. It worked. When the story went live today, the page appeared with more than 15,500 links on Twitter, and 6,500 likes on Facebook. The story is a part of Wired’s Cheats package for the latest issue of the magazine. It needed to go live online at the same time readers encountered it in print, and it needed to have all those social shares set up in advance. 

The entire package was going live at once. I could publish my story a little bit early, but the timing needed to be very close. I wanted all the public-facing stats (like the 15 thousand links and Twitter and 6,000 Facebook shares) to be live by the time the text appeared. Certainly, if someone found it in print or on the tablet, it needed those metrics to already be there. To make that happen, we cheated. 

This morning (or last night) at a little after 1 am, I added the story text, set it to the current time, and hit update. Now it showed up in RSS readers and I could openly tweet it from my main account. (I had originally used a secondary Twitter account I have for testing 3rd party stuff to link to it and score retweets.)

So now, the story goes “live” and as if by magic it has tens of thousands of social shares listed on it the instant real people start to encounter it. It worked. 

*As is site traffic, to a very large extent. My original idea was to use a botnet to throw traffic at it, but Wired’s lawyers said “no, no. Don’t do that.“ 

And, of course, people tend to associate lots of shares with an article’s significance or influence. Consequently, by ‘cheating’ ahead of time a content owner can add a false gravitas to the content in question. I’m curious to know how search companies that, in part, use social signals to surface content deal with this kind of ‘hacking the social.’

Categories
Links Writing

Cellular Security Called Into Question. Again.

Worries about spectrum scarcity have prompted telecommunications providers to offer their subscribers femtocells, which are small, low-powered cellular base stations. Often, these stations are linked into subscribers’ existing 802.11 wireless or wired networks, and are used to relieve stress placed upon commercial cellular towers whilst simultaneously expanding cellular coverage. Questions have recently been raised about the security of those low-powered stations:

Ritter and his colleague, Doug DePerry, demonstrated for Reuters how they can eavesdrop on text messages, photos and phone calls made with an Android phone and an iPhone by using a Verizon femtocell that they had previously hacked.

They said that with a little more work, they could have weaponized it for stealth attacks by packaging all equipment needed for a surveillance operation into a backpack that could be dropped near a target they wanted to monitor.

While Verizon has issued a patch for its femtocells, there isn’t any reason why additional vulnerabilities won’t be found. By placing the stations in the hands of end-users, as opposed to retaining control over commercially deployed cellular towers, third-party security researchers and attackers can persistently test the cells until flaws are found. The consequence of this deployment strategy is that attackers will continue to find vulnerabilities to (further) weaken the security associated with cellular communications. Unfortunately, countering attackers will significantly depend on security researchers finding the same exploit(s) and reporting it/them to the affected companies, and the likelihood of researchers and attackers independently finding the same flaws diminishes as more and more vulnerabilities accumulate in these devices.

In countries such as Canada, for researchers to conduct their research they must often first receive permission from the companies selling the femtocells: if there are any ‘digital locks’ around the technology, then researchers cannot legally investigate the code without prior corporate approval. Such restrictions don’t mean that researchers won’t conduct research, but do mean that researchers’ discoveries will go unreported and thus unpatched. As a result, consumers will largely remain reliant on the companies responsible for the security deficits in the first place to identify and correct those deficits, but absent public pressure that results from researchers disclosing vulnerabilities.

In light of the high economic costs of such identification and patching processes, I’m less than confident that femtocell providers are going to invest oodles of cash just to potentially, as opposed to necessarily, identify and fix vulnerabilities. The net effect is that, at least in Canada, telecommunications providers can be assured that the public will remain relatively unconcerned about the security of providers’ products: security perceptions will be managed by preventing consumers from learning about prospective harms associated with telecommunications equipment. I guess this is just another area of research where Canadians will have to point to the US and say, “The same thing is likely happening here. But we’ll never know for sure.”

Categories
Aside

Housekeeping Note

This website began as a space to do ‘little blogging’, giving me some leeway to think and write about issues without breaking up the tenor or character of the more lengthy analyses of contemporary privacy, security, and technology issues that I undertake at Technology, Thoughts, and Trinkets. Since I began writing at Quirks and Tech, a little over a year ago, I’ve posted 600 items. In effect, what began as a distraction space has become a little more serious and, as a result, a visual update has been in order.

I purchased a domain for the site several months back when I decided that I liked writing here. Today, I’m happy to unveil the new theme for the site.[1] It emphasizes readability and a general lack of visual clutter. The fonts are far easier to parse than those associated with the previous theme that I was using, and the shading of various areas is subtle enough to distinguish between different blocks of content without becoming overbearing.

As part of the update I’ve formalized links to my other presences online, and begun thinking about systemic ‘top categories’ to help people wade through the morass of posts that I’ve generated. I’m going to continue making some minor tweaks over the next while, but the general structure and aesthetic are going to remain for some time going forward.


  1. I’ve modified the Tewday theme in the course of these updates. I cannot express how disappointed I was with much of the theme upon delivery: sloppy CSS coding was pervasive, and many styling elements (like lists! like blockquotes!) weren’t done properly. I’ll be reviewing the theme – and how to fix parts of it – at a later date.  ↩
Categories
Links Writing

Drawing Comparative Inferences from Canadian and American Network Investment

Peter Nowak recently had a good post concerning the nature of mobile pricing in Canada. You really should go read it all. However, there was one key piece that he noted, towards the end, that deserves to be highlighted. Specifically:

It was only a few short years ago when Bell and Telus were getting pummeled by Rogers, thanks to that company’s chosen technology. Rogers, like most of the carriers in the world, went with GSM network technology while Bell and Telus opted for CDMA instead. Without getting technical, GSM won, and Apple put the exclamation point on the battle in 2007 in the form of the iPhone. Unable to offer the latest and greatest devices, including that quintessential and hotly desired device, Bell and Telus moved quickly to upgrade to the next greatest and latest 4G technology. Rogers followed suit. The same is happening in the United States, with Sprint and Verizon – both former CDMA users – both spending heavily on LTE.

Network investment in both Canada and the United States does not reflect the competitiveness of either market, but rather phone makers’ decisions on technologies. Carriers are simply being pulled along for the ride.

One thing I may indeed have been wrong about in the past is how high prices were mainly the result of the lack of foreign competition in Canada, which wasn’t legally allowed until last year. The poor technological choices made by a number of carriers can’t be discounted as a factor. The industry is now waving the billions they’re having to spend to correct those mistakes in the faces of consumers and government, with prices – be they as they are – the necessary rationalization.

A key aspect of Nowak’s argument towards the end is that network investment was driven not so much by carrier-driven decisions but by the decision of a device manufacturer: Apple. I’d not really considered how Apple’s decision to ‘cut out’ a group of telecom companies from offering the iPhone was significantly responsible for massive re-engineering of, and investment in, compatible networking technologies (i.e. GSM). Obviously such changes to the network infrastructure came at a significant fiscal cost.

It would be interesting to take Nowak’s point and then build on it to better understand how Canadian three year contracts might have alleviated the ‘hurt’ experienced by Canadian mobile providers. Specifically, we could ask the following:

  • what was the churn that Bell and TELUS experienced as a result of not being able to provide the iPhone?
  • was churn in Canada comparable to the CDMA providers in the United States?

Based on these questions we could establish a working hypothesis that churn was lower in Canada than in the US. If this hypothesis bore out when tested, we could try to ascertain why:

  • were Canadians happier with Bell and TELUS than their American counterparts?
  • were Canadians unable to choose their preferred economic options at a rate comparable to American customers because of the longer contracts associated with the Canadian carriers?
  • Other?

In effect, the bad bets of American and Canadian carriers on CDMA offer an interesting comparative case from which we can draw inferences about the effects of the much-loathed three year cellular phone contracts in Canada. It would be awesome to see the numbers crunched to evaluate the effects of those contracts, especially before and after Bell/TELUS launched their HSPA+ network(s). From there, I’m sure some interesting thoughts on the CRTC’s wireless code of conduct (which includes effectively mandating two year contracts) could follow: if a device as disruptive as the iPhone appears on the market, what would it do to the Canadian telecommunications market?