Categories
Quotations

2013.7.29

If code doesn’t receive constant love it turns to shit.

Brad Fitzpatrick, Gopher at Google
Categories
Writing

New Zealand Reveals the ‘Five Eyes’ Spying on Each Other

In an interesting bit of news, it seems we can certifiably state that the NSA spied on a New Zealand journalist at the behest of the New Zealand government. The government has apparently classified journalists alongside foreign intelligence services and ‘organizations with extreme ideologies’ (read: terrorists). The government’s defence security staff “viewed investigative journalists as ‘hostile’ threats requiring ‘counteraction’. The classified security manual lists security threats, including ‘certain investigative journalists’ who may attempt to obtain ‘politically sensitive information’.”[1]

So, while the information about the surveillance is shocking in its own right, there is also an important tidbit of information that can be derived from the US intelligence services’ actions: despite the supposedly sacrosanct prohibition against Five Eyes partners spying on one another, this prohibition was broken in this instance. Though Canadian experts have previously stated that such surveillance of Five Eyes partners would be an extreme exception, it’s striking that surveillance mechanisms designed to counter the FSB are being brought to bear on investigative journalists. That the NSA and other American intelligence services turned their ‘ears’ towards a journalist at the New Zealand government’s behest suggests that, despite protestations to the contrary, ‘friendly’ intelligence services do ‘help’ one another spy on people and groups that domestic intelligence services are prohibited from monitoring for either legal or technical reasons.

Reasonable people can disagree on how and why intelligence services operate. However, the routine (mis)information that has been put forward by Western agencies concerning government spying has significantly undermined any foundation for a genuine democratic debate to arise around such spying. When the United States’ Director of National Intelligence asserts that he was providing the “least untruthful” answers to elected officials questioning dragnet surveillance, and supposed ‘red lines’ are being crossed in secret to target journalists tasked with providing truthful reporting to citizens, then the ability to support or even reform intelligence practices is undermined: why shouldn’t we, the people, radically and unilaterally curtail surveillance practices if the same services and their administrative officers won’t truthfully disclose even their most basic operational guidelines?


  1. I should note that, following the revelations that the NZ government was monitoring journalists and had classed them alongside foreign intelligence services and extremist organizations, the government has publicly denied these allegations.  ↩
Categories
Links Writing

Another ‘Victory’ for the Internet of Things

Researchers have found, once again, that sensitive systems have been placed on the Internet without even the most basic of security precautions. The result?

Analyzing a database of a year’s worth of Internet scan results that [H.D. Moore] assembled, known as Critical.io, as well as other data from the 2012 Internet Census, Moore discovered that thousands of devices had no authentication, weak or no encryption, default passwords, or no automatic “log-off” functionality, leaving them pre-authenticated and ready to access. Although he was careful not to actually tamper with any of the systems he connected to, Moore says he could have in some cases switched off the ability to monitor traffic lights, disabled trucking companies’ gas pumps or faked credentials to get free fuel, sent fake alerts over public safety system alert systems, and changed environmental settings in buildings to burn out equipment or turn off refrigeration, leaving food stores to rot.

Needless to say, Moore’s findings are telling insofar as they reveal that engineers responsible for maintaining our infrastructures are often unable to secure those infrastructures from third-parties. Fortunately, it doesn’t appear that a hostile third-party has significantly taken advantage of poorly-secured and Internet-connected equipment, but it’s really only a matter of time until someone attacks this infrastructure to advance their own interests, or simply to reap the lulz.

Findings like Moore’s are only going to be more commonly produced as more and more systems are integrated with the Internet as part of the ‘Internet of Things’. It remains to be seen whether vulnerabilities will routinely be promptly resolved, especially with legacy equipment that enjoys significant sunk costs and limited capital for ongoing maintenance. Given the cascading nature of failures in an interconnected and digitized world, failing to secure our infrastructure means that along with natural disasters we may get to ‘enjoy’ cyber disasters that are both harder to positively identify or subsequently remedy when/if appropriately identified.

Categories
Links Writing

The Significance of a ‘Three Hop’ Analysis

Washington’s Blog has an excellent, if somewhat long, post that outlines the significance of the NSA’s ‘three hop’ analysis. It collects and provides some numbers behind basic communications network analyses, and concludes that upwards of 2.5 million Americans could be caught up in the dragnet for each suspected terrorist, meaning that “a mere 140 potential terrorists could lead to spying on all Americans. There are tens of thousands of Americans listed as suspected terrorists … including just about anyone who protests anything that the government or big banks do.”

Go read the full post. Some of the numbers are a bit speculative, but on the whole it does a good job showing why ‘three hop’ analyses are so problematic: such analyses disproportionately collect data on American citizens on the basis of the most limited forms of suspicion. Such surveillance should be set aside because it constitutes an inappropriate infringement on individuals’ and communities’ reasonable expectations of privacy; it runs counter to how a well ordered and properly functioning democracy should operate in theory and in practice.
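The arithmetic behind the ‘three hop’ concern is easy to sketch. Assuming, purely for illustration, that each person has around 134 contacts on average (a hypothetical figure, not one drawn from the post), three hops from a single target sweeps in millions of people:

```python
def hop_reach(avg_contacts: int, hops: int) -> int:
    """Rough upper bound on the number of people swept in within
    `hops` hops of one target, ignoring overlap between contact lists."""
    return sum(avg_contacts ** h for h in range(1, hops + 1))

# One target, an illustrative 134 contacts per person, three hops out:
reach = hop_reach(134, 3)  # 134 + 134**2 + 134**3 = 2,424,194
```

Real social networks overlap heavily, so the true figure is lower, but the point stands: the pool grows geometrically with each hop, which is why ‘three hops’ from a handful of suspects can blanket much of a national population.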

Categories
Links Writing

Facebook’s ‘Other’ Folder

David Pogue’s recent post on Facebook’s ‘Other’ folder notes how the company is effectively hiding a significant number of legitimate messages from its users in an attempt to prevent spam and ‘unimportant’ messages from disturbing subscribers. What follows are a few examples of legitimate messages that subscribers missed because they were placed in this folder:

  • “Notification of the death of a friend was hidden in my Other box. I had been very hurt at not being told, and actually missed her funeral.”
  • “I just checked my ‘Other’ folder and found out that I won a free high-end kitchen faucet for a contest I entered last year. Rats.”
  • “Just looked at my ‘Other’ messages and found one about a job opening — in 2011. Think it’s been filled?”
  • “Whoa! There’s tons of important messages in here. Former students of mine were trying to reach out to me. I can’t believe Facebook doesn’t notify you in any way about these.”
  • “Unbelievable! My husband’s wallet was lost and presumed stolen — someone had found it a year ago and sent us a Facebook message, which was hidden until now! Thanks so much.”
  • “Just checked and found a message from someone telling me that they found my lost wallet…a year ago. They really need to redo some thinking on that ‘other’ folder.”

The intent of Facebook’s filtering is noble, insofar as it’s meant to cut down on the cruft and spam that people inevitably get in their email inboxes on a daily basis. I’m sure that the logic is as follows: if we can get people to like using Facebook messages more than email, then we can convince people to rely on our corporate system and wean people off of their traditional email services. Unfortunately, it looks like Facebook’s filtering system suffers from flaws, just as their competitors’ systems do. Worse, and unlike most of their competitors, Facebook subscribers can’t access this folder from their tablets or smartphones without visiting Facebook via the web interface. So, for people that predominantly engage with Facebook using the company’s mobile applications, this folder is effectively invisible. Messages simply vanish into a black hole. This is a very bad thing.

While Facebook’s system makes sense, I suspect that a great many people are as ignorant of the ‘Other’ folder’s existence as the people who wrote to Pogue. This information asymmetry between the developers and users suggests a problem in the UX or UI, insofar as it shouldn’t be a shock that this folder exists. Good UI and UX will prevent subscribers from getting ‘shocked’ about the existence of hidden messages, and will help ensure that the service remains ‘sticky’ for its user base.

Network effects can stymie subscriber churn but they can’t stop it entirely. If Facebook undermines professional or personal networks because of how it handles suspected ‘unimportant’ messages, then the network effect that Facebook currently enjoys could be weakened and expose a part of Facebook’s flank to companies that are more attuned to people’s communicative interests and desires. It will be curious to see how/whether Facebook incorporates the information that arose from Pogue’s columns, and if they actually modify users’ interfaces such that the ‘Other’ folder is more prominently displayed. At the very least, something should change in the mobile applications so users can at least theoretically access all of those ‘unimportant’ messages.

Categories
Aside

WiFi “Security”

This really isn’t the warning you want to get when signing into a Wi-Fi portal.

Categories
Aside Links

AT&T’s Anti-Infringement Patent

AT&T’s recent patent to detect and act on network-based copyright infringement raises significant red flags for network neutrality advocates. However, we need to look beyond the most obvious (and nefarious!) red flags: when examining corporate surveillance prospects we need to reflect on the full range of reasons behind the practice. Only in taking this broader, and often more nuanced, view are we likely to come closer to the truth of what is actually going on, and why. And, if we don’t get closer to the specific truth of the situation, at least we can better understand the battleground and likely terms of the conflict.

Categories
Quotations

2013.7.19

Mark Zuckerberg runs a giant spy machine in Palo Alto, California. He wasn’t the first to build one, but his was the best, and every day hundreds of thousands of people upload the most intimate details of their lives to the Internet. The real coup wasn’t hoodwinking the public into revealing their thoughts, closest associates, and exact geographic coordinates at any given time. Rather, it was getting the public to volunteer that information. Then he turned off the privacy settings.

If the state had organized such an information drive, protestors would have burned down the White House. But the state is the natural beneficiary of this new “social norm.” Today, that information is regularly used in court proceedings and law enforcement. There is no need for warrants or subpoenas. Judges need not be consulted. The Fourth Amendment does not come into play. Intelligence agencies don’t have to worry about violating laws protecting the citizenry from wiretapping and information gathering. Sharing information “more openly” and with “more people” is a step backward in civil liberties. And spies, whether foreign or domestic, are “more people,” too.

Marc Ambinder and D.B. Grady. (2013). Deep State: Inside the Government Secrecy Industry. New Jersey: Wiley. P. 27.
Categories
Aside Links

Backdooring an ‘Encrypted’ Application

Pursuant to my last post on cryptography and pixie dust, it’s helpful to read through Matt Green’s highly accessible article “How to ‘backdoor’ an encryption app.” You’ll find that companies have a host of ways of enabling third-party surveillance, ranging from overt deception to having access to communications metadata to compromising their product’s security if required by authorities. In effect, there are lots of ways that data custodians can undermine their promises to consumers, and it’s pretty rare that the public ever learns that the method(s) used to secure their communications have either been broken or are generally ineffective.

Categories
Writing

Pixie Dust and Data Encryption

CNet recently revealed that Google is encrypting some of their subscribers’ Google Drive data. Data has always been secured in transit, but Google is testing encrypting data at rest. This means that, without the private key, someone who got access to your data on Google’s Drive servers would just get reams of ciphertext. At issue, however, is that ‘encryption’ is only a significant barrier if the third-party storing your data cannot decrypt the data when a government-backed actor comes knocking.

Encryption has become something like pixie dust, insofar as companies far and wide assure their end-users and subscribers that data is armoured in cryptographic shells. Don’t worry! You’re safe with us! Unfortunately, detailed audits of commercial encrypted products often reveal firms offering more snake oil than genuine protection; the studies and reports in this area are, generally, damning.[1]

As noted in Bruce Schneier’s (still) excellent analysis of cryptographic snake oil, there are at least nine warning signs that the company you’re dealing with isn’t providing a working cryptographic solution:

  1. You come across a lot of “pseudo-mathematical gobbledygook” that isn’t backed by referenced, peer-reviewed third-party analyses of the cryptographic underpinnings.
  2. The company states that ‘new mathematics’ are used to secure your information.
  3. The cryptographic process is proprietary and neither you nor anyone else can examine how data is secured.
  4. Weird claims are made about the nature of the product, such that the claims or terms used could easily fit within the latest episode of a sci-fi show you’re watching.
  5. Excessive key lengths are trumpeted as demonstrated proof of cryptographic security.
  6. The company claims your data is secure because one-time pads are used.
  7. Claims are made that cannot be backed up in fact.
  8. Security proofs involve twists of linguistic logic, and lack demonstrations of mathematical logic.
  9. The product is somehow secure because it hasn’t been ‘cracked’. (Yet.)

Unfortunately, people have been conditioned by Hollywood and other media that as soon as something is ‘encrypted’ only super-duper hackers can subsequently ‘penetrate the codes and extract the meta-details to derive a data-intuition of the content’ (or some such similar garbage). When you’re dealing with crappy ‘encryption’ – like showing private keys in plain text, or transmitting passphrases across the Internet in the clear – the product is just providing consumers a false sense of security. You don’t need to be a hacker to ‘defeat’ particularly poor implementations of data encryption; you often just need to know how to read a file system.
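As a toy illustration (not any particular product’s scheme), consider ‘encrypting’ data by XORing it against a short repeating key: a classic snake-oil design. A single guessed chunk of plaintext hands an attacker the entire key, no super-duper hacking required:

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Snake-oil 'encryption': XOR the data against a repeating key.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

secret = b"attack at dawn; attack at dawn"
key = b"k3y!"
ciphertext = xor_cipher(secret, key)

# Known-plaintext attack: guessing any aligned len(key) bytes of the
# plaintext and XORing them against the ciphertext yields the key.
guessed_key = bytes(c ^ p for c, p in zip(ciphertext, b"atta"))
recovered = xor_cipher(ciphertext, guessed_key)
```

Because XOR is its own inverse, `ciphertext XOR plaintext` is just the key stream, which is why vendors touting ‘proprietary’ schemes like this, however long the key, fail several of Schneier’s warning signs at once.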

Presently, however, there aren’t clear ways for consumers to know if a product is genuinely capable of securing their data in transit or at rest. There isn’t a clear solution to getting bad products off the market or generally improving product security, save for media shaming and/or the development of better cryptographic libraries that non-cryptographers (read: developers) can easily use when developing products. However, there are always going to be flaws and errors, and most consumers are never going to know that something has gone terribly awry until it’s far, far too late. So, despite there being a well-known problem, there isn’t a productive solution. And that has to change.


  1. These studies were chosen simply because they’re sitting on my computer now, or because I’ve referenced or written about them previously. If you spend a few minutes trawling Google Scholar using the search term ‘encryption broken’ you’ll come across even more analyses of encryption ‘solutions’ that have been defeated.  ↩