Categories
Writing

The Painful Process of Updating Android

Android fragmentation is a very real problem; not only does it hinder software developers’ ability to build and sell apps, it also raises security issues. In a recent report from Open Signal, we learn that 34.1% of Android users are running versions 2.3.3–2.3.7 of Android, whereas just 37.9% of users are on a 4.x version of the operating system, and most of those are themselves running a years-old release. In effect, an incredibly large number of Android users are running very outdated versions of their phones’ operating systems.

It’s easy to blame this versioning problem on the carriers. It’s even easier to blame the issue on the manufacturers. And both parties deserve blame. But perhaps not just for the reasons that they’re (rightly!) often crucified for: I want to suggest that the prevalence of 2.3.x devices in consumers’ hands might have as much to do with consumers not knowing how to update their devices, as it does with updates simply not being provided by carriers and manufacturers in the first place.

Earlier this month I spent some time with ‘normal’ gadget users: my family. One family member had a Samsung Galaxy S2…which was still running version 2.x of the Android operating system. Since February 2013, an operating system update has been available for the phone that would bring it up to Android version 4.1.2, but my family member neither knew nor cared that it was available.

They didn’t know about the update because they had received no explicit notice that an update was available, or at least didn’t recall being notified. To be clear, they hadn’t updated the phone even once since purchasing the device about two years ago, and there have been a series of updates to the operating system since purchase time.

The family member also didn’t care that there was an update, because they only used the phone for basic functions (e.g. texting, voice calls, the odd game, social networking). They’re not a gadget monkey and so didn’t know about any of the new functions incorporated into the updated Android operating system. And, while they appreciate some of the new functionality (e.g. Google Now), they wouldn’t have updated the device unless I had been there.

A key reason for having not updated their phone was the sheer lack of clarity about how they were supposed to engage in this task: special software had to be downloaded from Samsung and installed on their computer,[1] which then wouldn’t run because the phone’s battery had to possess at least a 50% charge,[2] and the whole process took about 3 hours because the phone couldn’t be updated to the most recent version of Android in one fell swoop. Oh, and there were a series of times when it wasn’t clear that the phone was even updating, because the update notices were so challenging to understand that they could have been written in cipher-text.

Regardless of whether it was Rogers’, Samsung’s, Google’s, or the tooth fairy’s fault, it was incredibly painful to update the Android device. Painful to the point that there’s no reason why most people would know about the update process, and little reason for non-devoted Android users to bother with the hassle of updating if they knew what a pain in the ass it was going to be.

The current state of the Android OS ecosystem is depressing from a security perspective. But in addition to manufacturers and carriers often simply not providing updates, there is a further problem: Android’s OS update mechanisms are incredibly painful to use. Only after the significant security SNAFUs of Windows XP did Microsoft really begin to care about desktop OS security, and Google presently has a decent update mechanism for its own line of Nexus devices. What, exactly, is it going to take for mobile phone manufacturers (e.g. Samsung, HTC) and mobile phone carriers (e.g. Rogers, TELUS) to get their acts together and aggressively start pushing out updates to their subscribers? When are these parties going to ‘get’ that they have long-term duties and commitments to protect their subscribers and consumers?[3]


  1. In theory there is an over the air update system that should have facilitated a system update in a relatively painless way. Unfortunately, that system didn’t work at all and so Samsung’s software had to be used to receive the updates.  ↩
  2. Really, this made no sense. To update the device it had to be plugged into a computer; why, then, did the phone (which was charging because it was plugged into the computer) need to have a 50%+ charge?  ↩
  3. I actually have a few ideas on this that will, hopefully, start coming to fruition in the coming months, but I’m open to suggestions from the community.  ↩

New Zealand Reveals the ‘Five Eyes’ Spying on Each Other

In an interesting bit of news, it seems we can certifiably state that the NSA spied on a New Zealand journalist at the behest of the New Zealand government. The government has apparently classified journalists alongside foreign intelligence services and ‘organizations with extreme ideologies’ (read: terrorists). The government’s defence security staff “viewed investigative journalists as ‘hostile’ threats requiring ‘counteraction’. The classified security manual lists security threats, including ‘certain investigative journalists’ who may attempt to obtain ‘politically sensitive information’.”[1]

So, while the information about the surveillance is shocking in its own right, there is also an important tidbit of information that can be derived from the US intelligence services’ actions: despite the supposedly sacrosanct prohibition that the Five Eyes partners not spy on one another, this prohibition was broken in this instance. Though Canadian experts have previously stated that such surveillance of Five Eyes partners would be an extreme exception, it’s striking that surveillance mechanisms designed to counter the FSB are being brought to bear on investigative journalists. That the NSA and other American intelligence services turned their ‘ears’ towards a journalist at the New Zealand government’s behest suggests that, despite protestations to the contrary, ‘friendly’ intelligence services do ‘help’ one another spy on people and groups that domestic intelligence services are prohibited from monitoring for either legal or technical reasons.

Reasonable people can disagree on how and why intelligence services operate. However, the routine (mis)information that has been put forward by Western agencies concerning government spying has significantly undermined any foundation for a genuine democratic debate to arise around such spying. When the United States’ Director of National Intelligence asserts that he was providing the “least untruthful” answers to elected officials questioning dragnet surveillance, and supposed ‘red lines’ are being crossed in secret to target journalists tasked with providing truthful reporting to citizens, then the ability to support or even reform intelligence practices is undermined: why shouldn’t we, the people, radically and unilaterally curtail surveillance practices if the same services and their administrative officers won’t truthfully disclose even their most basic operational guidelines?


  1. I should note that, following the revelations that the NZ government is monitoring journalists and has classed them alongside foreign intelligence services and extremist organizations, the government has publicly denied these allegations.  ↩

Another ‘Victory’ for the Internet of Things

Researchers have found, once again, that sensitive systems have been placed on the Internet without even the most basic of security precautions. The result?

Analyzing a database of a year’s worth of Internet scan results [that H.D. Moore has] assembled, known as Critical.io, as well as other data from the 2012 Internet Census, Moore discovered that thousands of devices had no authentication, weak or no encryption, default passwords, or had no automatic “log-off” functionality, leaving them pre-authenticated and ready to access. Although he was careful not to actually tamper with any of the systems he connected to, Moore says he could have in some cases switched off the ability to monitor traffic lights, disabled trucking companies’ gas pumps or faked credentials to get free fuel, sent fake alerts over public safety alert systems, and changed environmental settings in buildings to burn out equipment or turn off refrigeration, leaving food stores to rot.

Needless to say, Moore’s findings are telling insofar as they reveal that the engineers responsible for maintaining our infrastructures are often unable to secure those infrastructures from third parties. Fortunately, it doesn’t appear that a hostile third party has significantly taken advantage of poorly-secured, Internet-connected equipment, but it’s really only a matter of time until someone does attack this infrastructure to advance their own interests, or simply to reap the lulz.

Findings like Moore’s are only going to become more common as more and more systems are connected to the Internet as part of the ‘Internet of Things’. It remains to be seen whether vulnerabilities will routinely be promptly resolved, especially in legacy equipment that carries significant sunk costs and limited capital for ongoing maintenance. Given the cascading nature of failures in an interconnected and digitized world, failing to secure our infrastructure means that, along with natural disasters, we may get to ‘enjoy’ cyber disasters that are harder both to positively identify and to subsequently remedy when/if they are identified.


The Significance of a ‘Three Hop’ Analysis

Washington’s Blog has an excellent, if somewhat long, post that outlines the significance of the NSA’s ‘three hop’ analysis. It collects and provides some numbers behind basic communications network analyses, and comes to the conclusion that, with upwards of 2.5 million Americans potentially caught up in the dragnet for each suspected terrorist, “a mere 140 potential terrorists could lead to spying on all Americans. There are tens of thousands of Americans listed as suspected terrorists … including just about anyone who protests anything that the government or big banks do.”

Go read the full post. Some of the numbers are a bit speculative, but on the whole it does a good job showing why ‘three hop’ analyses are so problematic: such analyses disproportionately collect data on American citizens on the basis of the most limited forms of suspicion. Such surveillance should be set aside because it constitutes an inappropriate infringement on individuals’ and communities’ reasonable expectations of privacy; it runs counter to how a well ordered and properly functioning democracy should operate in theory and in practice.
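The post’s headline numbers are easy to sanity-check with back-of-the-envelope arithmetic. The contacts-per-person figure below is my own illustrative assumption (not a number from the post), and real contact graphs overlap heavily, so treat this as an upper bound rather than a measurement:

```python
# Back-of-the-envelope 'three hop' reach. CONTACTS_PER_PERSON is an assumed,
# illustrative figure; overlapping contact lists are ignored, so the result
# overstates the true reach.
CONTACTS_PER_PERSON = 135

def hop_reach(contacts: int, hops: int) -> int:
    # People reachable within `hops` hops of a single suspect,
    # ignoring overlap: k + k^2 + ... + k^hops.
    return sum(contacts ** h for h in range(1, hops + 1))

per_suspect = hop_reach(CONTACTS_PER_PERSON, 3)
print(per_suspect)        # 2478735 -- roughly the post's 2.5 million figure
print(140 * per_suspect)  # ~347 million: more than the entire US population
```

Even with a modest contact count, three hops from a mere 140 suspects nominally covers more people than live in the United States, which is exactly the post’s point.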


Facebook’s ‘Other’ Folder

David Pogue’s recent post on Facebook’s ‘Other’ folder notes how the company is effectively hiding a significant number of legitimate messages from its users in an attempt to prevent spam and ‘unimportant’ messages from disturbing subscribers. What follows are a few examples of legitimate messages that subscribers missed because they were placed in this folder:

  • “Notification of the death of a friend was hidden in my Other box. I had been very hurt at not being told, and actually missed her funeral.”
  • “I just checked my ‘Other’ folder and found out that I won a free high-end kitchen faucet for a contest I entered last year. Rats.”
  • “Just looked at my ‘Other’ messages and found one about a job opening — in 2011. Think it’s been filled?”
  • “Whoa! There’s tons of important messages in here. Former students of mine were trying to reach out to me. I can’t believe Facebook doesn’t notify you in any way about these.”
  • “Unbelievable! My husband’s wallet was lost and presumed stolen — someone had found it a year ago and sent us a Facebook message, which was hidden until now! Thanks so much.”
  • “Just checked and found a message from someone telling me that they found my lost wallet…a year ago. They really need to redo some thinking on that ‘other’ folder.”

The intent of Facebook’s filtering is noble, insofar as it’s meant to cut down on the cruft and spam that people inevitably get in their email inboxes on a daily basis. I’m sure that the logic is as follows: if we can get people to like using Facebook messages more than email, then we can convince people to rely on our corporate system and wean them off of their traditional email services. Unfortunately, it looks like Facebook’s filtering system suffers from flaws, just as their competitors’ systems do. Worse, and unlike with most competing services, Facebook subscribers can’t access this folder from their tablets or smartphones without visiting Facebook’s web interface. So, for people who predominantly engage with Facebook using the company’s mobile applications, this folder is effectively invisible. Messages simply vanish into a black hole. This is a very bad thing.

While Facebook’s system makes sense, I suspect that a great many people are as ignorant of the ‘Other’ folder’s existence as the people who wrote to Pogue. This information asymmetry between the developers and users suggests a problem in the UX or UI, insofar as it shouldn’t be a shock that this folder exists. Good UI and UX will prevent subscribers from getting ‘shocked’ about the existence of hidden messages, and will help ensure that the service remains ‘sticky’ for its user base.

Network effects can stymie subscriber churn but they can’t stop it entirely. If Facebook undermines professional or personal networks because of how it handles suspected ‘unimportant’ messages, then the network effect that Facebook currently enjoys could be weakened and expose a part of Facebook’s flank to companies that are more attuned to people’s communicative interests and desires. It will be interesting to see how/whether Facebook incorporates the information that arose from Pogue’s columns, and if they actually modify users’ interfaces such that the ‘Other’ folder is more prominently displayed. At the very least, something should change in the mobile applications so users can at least theoretically access all of those ‘unimportant’ messages.


Pixie Dust and Data Encryption

CNet recently revealed that Google is encrypting some of their subscribers’ Google Drive data. Data has always been secured in transit, but Google is testing encrypting data at rest. This means that, without the private key, someone who got access to your data on Google’s Drive servers would just get reams of ciphertext. At issue, however, is that ‘encryption’ is only a significant barrier if the third-party storing your data cannot decrypt the data when a government-backed actor comes knocking.

Encryption has become something like pixie dust, insofar as companies far and wide assure their end-users and subscribers that data is armoured in cryptographic shells. Don’t worry! You’re safe with us! Unfortunately, detailed audits of commercial encrypted products often reveal firms offering more snake oil than genuine protection. Just consider some of the following studies and reports that are, generally, damning[1]:

As noted in Bruce Schneier’s (still) excellent analysis of cryptographic snake oil, there are at least nine warning signs that the company you’re dealing with isn’t providing a working cryptographic solution:

  1. You come across a lot of “pseudo-mathematical gobbledygook” that isn’t linked to referenced and reviewed third-party analyses of the cryptographic underpinnings.
  2. The company states that ‘new mathematics’ are used to secure your information.
  3. The cryptographic process is proprietary and neither you nor anyone else can examine how data is secured.
  4. Weird claims are made about the nature of the product, such that the claims or terms used could easily fit within the latest episode of a sci-fi show you’re watching.
  5. Excessive key lengths are trumpeted as a demonstrated proof of cryptographic security.
  6. The company claims your data is secure because one-time pads are used.
  7. Claims are made that cannot be backed up in fact.
  8. Security proofs involve twists of linguistic logic, and lack demonstrations of mathematical logic.
  9. The product is somehow secure because it hasn’t been ‘cracked’. (Yet.)

Unfortunately, people have been conditioned by Hollywood and other media that as soon as something is ‘encrypted’ only super-duper hackers can subsequently ‘penetrate the codes and extract the meta-details to derive a data-intuition of the content’ (or some such similar garbage). When you’re dealing with crappy ‘encryption’ – like showing private keys in plain text, or transmitting passphrases across the Internet in the clear – then the product is just providing consumers a false sense of security. You don’t need to be a hacker to ‘defeat’ particularly poor implementations of data encryption; you often just need to know how to read a file system.
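To make that last point concrete, here is a toy ‘proprietary’ scheme of exactly the sort Schneier warns about – a single-byte XOR dressed up with base64 – along with a few lines that recover the plaintext without ever knowing the key. The scheme and all names are invented for illustration and reproduce no real product:

```python
import base64

# A toy snake-oil cipher: XOR every byte with a one-byte 'key', then base64.
# Invented for illustration; it only *looks* encrypted.
def snake_oil_encrypt(plaintext: bytes, key: int) -> str:
    return base64.b64encode(bytes(b ^ key for b in plaintext)).decode()

def crack(ciphertext: str) -> bytes:
    # Brute-force all 256 possible keys and keep the candidate that looks
    # most like English text (spaces and common letters). No key required.
    raw = base64.b64decode(ciphertext)
    candidates = (bytes(b ^ k for b in raw) for k in range(256))
    return max(candidates, key=lambda c: sum(ch in b" etaoins" for ch in c))

secret = snake_oil_encrypt(b"the passphrase is hunter2", 0x5A)
print(crack(secret))  # b'the passphrase is hunter2'
```

The ‘ciphertext’ falls in a fraction of a second, which is the difference between marketing that says “encrypted” and a design that actually withstands an adversary.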

Presently, however, there aren’t clear ways for consumers to know if a product is genuinely capable of securing their data in transit or at rest. There isn’t a clear solution to getting bad products off the market or generally improving product security, save for media shaming and/or the development of better cryptographic libraries that non-cryptographers (read: developers) can easily use when developing products. However, there are always going to be flaws and errors, and most consumers are never going to know that something has gone terribly awry until it’s far, far too late. So, despite there being a well-known problem, there isn’t a productive solution. And that has to change.


  1. The studies were selected simply because they’re sitting on my computer now and/or I’ve referenced or written about them previously. If you spend a few minutes trawling Google Scholar using the search term ‘encryption broken’ you’re going to come across even more analyses of encryption ‘solutions’ that have been defeated.  ↩

A Brief Comment on ‘Metadata’

We live in environments that are pervasively penetrated by digital systems. We carry personalized tracking devices with us everywhere (i.e. mobile phones) that have increasingly sophisticated sensors embedded in them. We rely on Internet-based systems for travel, work, and play. Even our ‘landline’ communications are pervasively turned into digital code when we call a friend or family member.

Every one of the previously mentioned transactions generates ‘non-content’ data: when and whom we call, and for how long; which cellular towers we pass by; what (semi-)unique IP addresses are provided to websites we visit; and so forth. These identifiers can be used to trace our movements, practices, and who we communicate with: they are often far more revealing about ourselves than the pure content of our communications.
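As an entirely invented illustration of that point, a handful of call records containing nothing but caller, callee, and hour of day already surfaces sensitive patterns:

```python
from collections import Counter

# Toy call-detail records: (caller, callee, hour_of_day). All invented.
cdrs = [
    ("alice", "clinic",   9), ("alice", "clinic",   9), ("alice", "clinic", 10),
    ("alice", "bob",     22), ("alice", "bob",     23), ("alice", "bob",    22),
    ("alice", "pizzeria", 18),
]

# No call *content* is examined: frequency and timing alone suggest a
# recurring morning medical appointment and a late-night personal
# relationship.
calls_per_pair = Counter((caller, callee) for caller, callee, _ in cdrs)
late_night = {callee for _, callee, hour in cdrs if hour >= 22}

print(calls_per_pair[("alice", "clinic")])  # 3
print(late_night)                           # {'bob'}
```

Scale that trivial analysis to months of records across a whole population and the ‘non-content’ label starts to look like a euphemism.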

It’s with this surveillance potential of metadata in mind that we need to reorient how we talk about such ‘non-content’ data. It has become depressingly common to see elected officials and other authorities state that “it’s just metadata” as well as “we only use it for appropriate purposes.”

To the first statement: metadata can reveal incredibly sensitive information about individuals and about their community/communities. The collection and processing of such information therefore warrants a similar degree of care and concern as the processing of clearly personal information.

To the second statement: clarity around the collection and use of metadata is needed. Moreover, data cannot simply be collected en masse with ‘appropriate purposes’ applied only to how the data is subsequently parsed. The very collection of data itself needs to be targeted, justified, and subject to significant oversight – arguably more oversight than ‘just’ the content of communications receives.

In a recent paper on metadata, Ontario Information and Privacy Commissioner Ann Cavoukian wrote:

we urge governments to adopt a proactive approach to securing the rights affected by intrusive surveillance programs. To protect privacy and liberty, any power to seize communications metadata must come with strong safeguards directly embedded into programs and technologies, that are clearly expressed in the governing legal framework. The purpose, scope, and duration of data collection must be strictly controlled. More robust judicial oversight, parliamentary or congressional controls, and systems capable of providing for effective public accountability should be brought to bear. The need for operational secrecy must not stand in the way of public accountability. Our essential need for privacy and the preservation of our freedoms are at stake.[1]

Commissioner Cavoukian is decidedly correct that data collection, use, and intent must be carefully controlled. However, I would go a step further than the Commissioner has in her call for additional parliamentary oversight and control. In Canada, unlike in the United States and United Kingdom, there is no committee of parliamentarians with security clearances to oversee how our intelligence and security authorities operate. Presently, the Canadian system predominantly enjoys only Cabinet-level political oversight: we need a broader set of eyes, and eyes that are not mindful of the ruling government’s optics, to evaluate the appropriateness of what our intelligence and security services are up to. So, going beyond Commissioner Cavoukian’s comments, we actually need to modify parliament such that oversight is even possible.

Reasonable people can disagree on the value and desire for national security and foreign intelligence services. Such disagreements should happen more prominently amongst parliamentarians and the public. However, there should be no disagreement that, in order to represent the public, at least some members of our legislative assemblies must know the extent of the government’s security and intelligence powers, capabilities, and practices.

Canada is a democracy and, as such, it is imperative that we establish a committee of parliamentarians to oversee how our security and spy agencies are collecting, using, and retaining the metadata and content associated with our communications. The actions that these agencies engage in are too significant to leave to Cabinet oversight alone.


  1. Ann Cavoukian. (2013). “A Primer on Metadata: Separating Fact from Fiction.” Office of the Information and Privacy Commissioner of Ontario. Available at: http://www.privacybydesign.ca/content/uploads/2013/07/Metadata.pdf. Pp. 10. Emphasis added.  ↩

Definitions for the American Surveillance State

David Sirota of Salon has developed an excellent set of terms to speed along discussions about the contemporary American surveillance state. My own favorites include:

Least untruthful: A new legal doctrine that allows an executive branch official to issue a deliberate, calculated lie to Congress yet avoid prosecution for perjury, as long as the official is protecting the executive branch’s political interests. Usage example: Director of National Intelligence James Clapper avoided prosecution for perjury because he insisted that the blatant lie he told to Congress was merely the “least untruthful” statement he could have made.

And:

Modest encroachment: A massive, indiscriminate intrusion. Usage example: President Obama has deemed the NSA’s “collect it all” surveillance operation, which has captured 20 trillion information transactions and touches virtually all aspects of American life, a “modest encroachment” on citizens’ right to privacy.

The full listing of terms is depressingly cynical. However, the persistent – if often humorous – turn to cynicism may ultimately limit how politicians address and respond to Snowden’s surveillance revelations. What Snowden confirmed raises existential challenges to the potential to imagine, let alone actualize, a deliberative democratic state. The accompanying risk is that instead of addressing such challenges head on, citizens may retreat to cynicism rather than engaging in the hard work of recuperating their increasingly-authoritarian democratic institutions. We’re at a point where we need a more active, not more withdrawn and bemused, citizen response to government excesses.


Cellular Security Called Into Question. Again.

Worries about spectrum scarcity have prompted telecommunications providers to supply their subscribers with femtocells, which are small, low-powered cellular base stations. Often, these stations are linked into subscribers’ existing 802.11 wireless or wired networks, and are used to relieve stress placed upon commercial cellular towers whilst simultaneously expanding cellular coverage. Questions have recently been raised about the security of those low-powered stations:

Ritter and his colleague, Doug DePerry, demonstrated for Reuters how they can eavesdrop on text messages, photos and phone calls made with an Android phone and an iPhone by using a Verizon femtocell that they had previously hacked.

They said that with a little more work, they could have weaponized it for stealth attacks by packaging all equipment needed for a surveillance operation into a backpack that could be dropped near a target they wanted to monitor.

While Verizon has issued a patch for its femtocells, there isn’t any reason why additional vulnerabilities won’t be found. By placing the stations in the hands of end-users, as opposed to retaining control over commercially deployed cellular towers, third-party security researchers and attackers alike can persistently probe the cells until flaws are found. The consequence of this deployment strategy is that attackers will continue to find vulnerabilities that (further) weaken the security associated with cellular communications. Unfortunately, countering attackers will significantly depend on security researchers finding the same exploit(s) and reporting it/them to the affected companies, and the likelihood of researchers and attackers finding and exploiting the same flaws diminishes as more and more vulnerabilities are found in these devices.

In countries such as Canada, for researchers to conduct their research they must often first receive permission from the companies selling the femtocells: if there are any ‘digital locks’ around the technology, then researchers cannot legally investigate the code without prior corporate approval. Such restrictions don’t mean that researchers won’t conduct research, but do mean that researchers’ discoveries will go unreported and thus unpatched. As a result, consumers will largely remain reliant on the companies responsible for the security deficits in the first place to identify and correct those deficits, but absent public pressure that results from researchers disclosing vulnerabilities.

In light of the high economic costs of such identification and patching processes, I’m less than confident that femtocell providers are going to invest oodles of cash just to potentially, as opposed to necessarily, identify and fix vulnerabilities. The net effect is that, at least in Canada, telecommunications providers can be assured that the public will remain relatively unconcerned about the security of providers’ products: security perceptions will be managed by preventing consumers from learning about prospective harms associated with telecommunications equipment. I guess this is just another area of research where Canadians will have to point to the US and say, “The same thing is likely happening here. But we’ll never know for sure.”


Drawing Comparative Inferences from Canadian and American Network Investment

Peter Nowak recently had a good post concerning the nature of mobile pricing in Canada. You really should go read it all. However, there was one key piece that he noted, towards the end, that deserves to be highlighted. Specifically:

It was only a few short years ago when Bell and Telus were getting pummeled by Rogers, thanks to that company’s chosen technology. Rogers, like most of the carriers in the world, went with GSM network technology while Bell and Telus opted for CDMA instead. Without getting technical, GSM won, and Apple put the exclamation point on the battle in 2007 in the form of the iPhone. Unable to offer the latest and greatest devices, including that quintessential and hotly desired device, Bell and Telus moved quickly to upgrade to the next greatest and latest 4G technology. Rogers followed suit. The same is happening in the United States, with Sprint and Verizon – both former CDMA users – both spending heavily on LTE.

Network investment in both Canada and the United States does not reflect the competitiveness of either market, but rather phone makers’ decisions on technologies. Carriers are simply being pulled along for the ride.

One thing I may indeed have been wrong about in the past is how high prices were mainly the result of the lack of foreign competition in Canada, which wasn’t legally allowed until last year. The poor technological choices made by a number of carriers can’t be discounted as a factor. The industry is now waving the billions they’re having to spend to correct those mistakes in the faces of consumers and government, with prices – be they as they are – the necessary rationalization.

A key aspect of Nowak’s argument towards the end is that network investment was driven not so much by carrier-driven decisions but by the decision of a device manufacturer: Apple. I’d not really considered how Apple’s decision to ‘cut out’ a group of telecom companies from offering the iPhone could have been/was significantly responsible for massive re-engineering and investment in compatible networking technologies (i.e. GSM). Obviously such changes to the network infrastructure came at a significant fiscal cost.

It would be interesting to take Nowak’s point and then build on it to better understand how Canadian three year contracts might have alleviated the ‘hurt’ experienced by Canadian mobile providers. Specifically, we could ask the following:

  • what was the churn that Bell and TELUS experienced as a result of not being able to provide the iPhone?
  • was churn in Canada comparable to that experienced by the CDMA providers in the United States?

Based around these questions we could establish a working hypothesis that churn was lower in Canada than in the US. If this hypothesis bore out when tested, we could try to ascertain why:

  • were Canadians happier with Bell and TELUS than their American counterparts?
  • were Canadians unable to choose their preferred economic options at a rate comparable to American customers because of the longer contracts associated with the Canadian carriers?
  • Other?
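A first pass at the churn comparison proposed above could be sketched in a few lines; every figure below is invented purely to show the shape of the calculation, not to assert anything about the actual carriers:

```python
# Quarterly churn rate: subscribers lost during the quarter divided by
# subscribers at the start of it. All figures below are hypothetical.
def churn_rate(subs_lost: int, subs_at_start: int) -> float:
    return subs_lost / subs_at_start

# Hypothetical CDMA-era quarters, while the iPhone was unavailable:
canada_cdma = churn_rate(15_000, 1_000_000)   # stand-in for Bell or TELUS
us_cdma     = churn_rate(28_000, 1_000_000)   # stand-in for Sprint or Verizon

# The working hypothesis: three-year contracts suppressed Canadian churn.
print(canada_cdma)              # 0.015, i.e. 1.5% per quarter
print(canada_cdma < us_cdma)    # True -- by construction of the toy numbers
```

The real work, of course, would be replacing the invented numbers with the carriers’ reported quarterly churn figures and controlling for contract length.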

In effect, the bad bets of American and Canadian carriers on CDMA offer an interesting comparative case from which we can draw inferences about the effects of the much-loathed three year cellular phone contracts in Canada. It would be awesome to see the numbers crunched to evaluate the effects of those contracts, especially before and after Bell/TELUS launched their HSPA+ network(s). From there, I’m sure some interesting thoughts on the CRTC’s wireless code of conduct (which effectively mandates two year contracts) could follow: if a device as disruptive as the iPhone appears on the market, what would it do to the Canadian telecommunications market?