Bit9 has released a report that outlines a host of fairly serious concerns around Android devices and app permissions. To be upfront: Android isn’t special in this regard. If you have a Blackberry, iPhone, or Windows Phone device you’ll also find a pile of apps that make very, very strange permission requests (e.g. why would a wallpaper application need access to your GPS and contact book?). The video (above) is a quick overview of some findings; the executive summary can be found here and the full report here (.pdf).
Kashmir Hill wrote an article last week about how Google Now is informing some Nexus owners of how active they have been over the past week. She rightfully notes that this is really just making transparent the tracking that smartphones do all the time, though putting it to (arguably) good and helpful use. This said, Google’s actions raise a series of interesting issues and questions.
To begin, Google’s actions are putting a ‘friendly face’ on locational tracking. Their presentation of this data also reveals some of the ways that Google can – and apparently is – using locational data: for calculating not just distance but, based on the rate of movement between locations, the means by which users are getting from point A to B. This isn’t surprising, given that Google has had to develop algorithms to determine if subscribers’ phones are moving in cars (in fast or slow traffic) for some of their traffic alerts systems. Determining whether you’re walking/biking instead of driving is presumably just a happy outcome of that algorithmic determination. That said: is this mode of analyzing movement and location necessarily something that users want Google to be processing? Can they genuinely be expected to have consented to this surveillance – beyond consent buried in jargon-ridden Terms of Service and Privacy Policies – and, moreover, can Now users get both raw data and the categories into which their locational data has been ‘sorted’ by Google? Can they have both sets of data fully, and permanently, expunged from Google databases?
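The inference described above is, at its core, speed-based classification: compute the distance between timestamped GPS fixes, derive an average speed, and bucket it. A minimal sketch of the idea – the thresholds and function names here are illustrative assumptions, not Google’s actual algorithm or values:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in kilometres."""
    r = 6371.0  # mean Earth radius, km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def classify_movement(points):
    """Guess a mode of transport from timestamped GPS fixes.

    points: list of (timestamp_seconds, lat, lon) tuples, in time order.
    Speed thresholds are illustrative only.
    """
    if len(points) < 2:
        return "stationary"
    total_km = 0.0
    for (_, lat0, lon0), (_, lat1, lon1) in zip(points, points[1:]):
        total_km += haversine_km(lat0, lon0, lat1, lon1)
    elapsed_h = (points[-1][0] - points[0][0]) / 3600.0
    if elapsed_h <= 0:
        return "stationary"
    speed = total_km / elapsed_h  # average km/h over the window
    if speed < 0.5:
        return "stationary"
    if speed < 7:
        return "walking"
    if speed < 25:
        return "cycling"
    return "driving"
```

Two fixes roughly a kilometre apart and ten minutes apart classify as walking; the same interval over nine kilometres classifies as driving. The point is how little data is needed to draw a behavioural inference.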
Friendliness – or not, if you see this mode of tracking and notification as problematic – aside, I think that Google’s alerts speak to the important role that ambient technology can play in encouraging public fitness. In the interests of disclosure, I’ve used a non-GPS-based system to track the relative levels of my activity for the past six or seven months. It’s been the single best $100 that I’ve spent in the past five years and led to very important, and positive, changes in my personal health. I specifically chose a non-GPS system because I worry about the implications of linking health/fitness information with where individuals physically move: I see such data as a potential gold mine for health insurers and employers. This is where I see the primary (from my perspective) concerns: how can individuals be assured that GPS-related fitness information won’t be made available to health insurers who are setting Android users’ health premiums? How can they prevent the information from leaking to employers, or anyone else that might have an interest in this data?
Past this issue of data-flow control, I actually think that making basic fitness information very, very clear to people is a good idea. A comfortable one? No, not necessarily. No one really wants to see how little they may have been active. But I’m not certain that this mode of fitness analysis is necessarily creepy; it can definitely be unpleasant, however.
Of course individuals need to be able to opt out of this kind of tracking if they’d like. Really, it should be opt-in (from a privacy perspective), though from a public health perspective I can’t help but wonder if it shouldn’t be opt-out. This is an area where there are competing public goods, and unlike a debate around security and privacy (which tends to feature pretty drawn out, well entrenched, battle lines) I’m not sure we’ve had a good discussion about the nature of locational tracking as it relates to basic facets of public fitness and, by extension, public health.
In the end, this is actually a tracking technology that I’m largely on the fence about, and my core reasons for having problems with it are: (a) I don’t think people had any real idea that they had opted in to the fitness analysis; (b) I don’t trust third parties not to get access to this data for purposes at odds with the data subject’s own interests. If both (a) and (b) could be resolved, however, I think I’d have a much harder time disagreeing with such ‘fitness alerts’ being integrated with smartphones, given the significant problems of obesity amongst Western citizens.
What are your thoughts on this topic?
This is a terrific graphic that breaks down how Google collected data from wi-fi networks with its Street View vehicles.
A great deal of speculation exists around mobile companies of all stripes: are they secure? Do they secretly insert backdoors for governments? What kinds of assurances do customers and citizens have around the devices?
Recently these concerns exploded (again) following a Reuters article that notes serious problems in ZTE mobile phones. There are a series of reasons that security agencies can, and do, raise concerns about foreign-built equipment (some related more to economics than good security practice). While it’s possible that ZTE’s vulnerabilities were part of a Chinese national-security initiative, it’s far more probable that ZTE’s backdoor access into their mobiles is a genuine, gigantic, mistake. Let’s not forget that even ‘our’ companies are known for gross security incompetence.
In the ZTE case it doesn’t matter if the backdoor was deliberate or not. It doesn’t matter if the company patches the devices, either, because a large number of customers will never apply updates to their phones. This means that, for all intents and purposes, these devices will have well publicized security holes for the duration of their existence. It’s that kind of ongoing vulnerability – one that persists regardless of vendor ‘patches’ – that is increasingly dangerous in the mobile world, and a threat that is arguably more significant (at the moment) than whether we can trust company X or Y.
Security Bugs In Google Chrome Extensions
A piece that was authored last September, enumerating some of the security issues with Google Chrome Extensions. The authors:
reviewed 100 Chrome extensions and found that 27 of the 100 extensions leak all of their privileges to a web or WiFi attacker. Bugs in extensions put users at risk by leaking private information (like passwords and history) to web and WiFi attackers. Web sites may be evil or contain malicious content from users or advertisers. Attackers on public WiFi networks (like in coffee shops and airports) can change all HTTP content. We’ll show you how you can prevent attacks on your extension using Content Security Policy.
In a followup, the authors have published a full report (here) that outlines their methodology and identifies the extensions that, as of February 2012, remain unpatched.
Check out the article, and some of the other great pieces that they’ve published on security.
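The Content Security Policy defense the authors point to is declared in an extension’s manifest. As an illustrative sketch (the extension name is hypothetical; the `content_security_policy` key shown is Chrome’s manifest version 2 format):

```json
{
  "name": "Example Extension",
  "version": "1.0",
  "manifest_version": 2,
  "content_security_policy": "script-src 'self'; object-src 'self'"
}
```

A policy like this refuses inline scripts and scripts loaded from remote origins, which closes off the classes of injection bugs that let a malicious page or WiFi attacker run code with the extension’s privileges.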
Google’s new privacy policy is going to be sheer gold for 1984 enthusiasts. While I’m not a fan of such simplistic references, it will provide a new round of comics for speakers at privacy, security, and surveillance conferences to rip off. Hopefully those same speakers aren’t themselves too tied to the notions of 1984 or the panopticon being the defining means of framing Google’s behaviours.
From the APPA’s letter to Google concerning Google’s new privacy policy:
Initially, I would like to say that the TWG recognises Google’s efforts in making its privacy policies simpler and more understandable. Similarly, it notes Google’s education campaign announcing the changes. However, the TWG would suggest that combining personal information from across different services has the potential to significantly impact on the privacy of individuals. The group is also concerned that, in condensing and simplifying the privacy policies, important details may have been lost.
It’s a short, but valuable, letter for clarifying the principles that have privacy professionals concerned about Google’s policy changes. Go read it (.pdf link).
2012.2.28
This notion that apps should pay for bandwidth is insane. Telcos should pay developers a commission for helping them sell bandwidth.
Tim Bray, Developer Advocate at Google
Dan Goodin has a good piece on one of Bruce Schneier’s recent talks. From the top of the article:
Unlike the security risks posed by criminals, the threat from government regulation and data hoarders such as Apple and Google are more insidious because they threaten to alter the fabric of the Internet itself. They’re also different from traditional Internet threats because the perpetrators are shielded in a cloak of legitimacy. As a result, many people don’t recognize that their personal information or fortunes are more susceptible to these new forces than they ever were to the Russian Business Network or other Internet gangsters.
The notion that government – largely composed of security novices – large corporations, and a feudal security environment (where we trust Apple, Google, etc. instead of maintaining generalizable, good security practices of our own) are key threats to security is not terribly new. This said, Bruce (as always) does a terrific job in explaining the issues in technically accurate ways that are simultaneously accessible to the layperson. Read the article; it’s well worth your time and will quickly demonstrate some of the ‘big’ threats to online security, privacy, and liberty.
Chrome Kills CA Revocation Checks
From Ars:
“While the benefits of online revocation checking are hard to find, the costs are clear: online revocation checks are slow and compromise privacy,” Langley added. That’s because the checks add a median time of 300 milliseconds and a mean of almost 1 second to page loads, making many websites reluctant to use SSL. Marlinspike and others have also complained that the services allow certificate authorities to compile logs of user IP addresses and the sites they visit over time.
Chrome will instead rely on its automatic update mechanism to maintain a list of certificates that have been revoked for security reasons. Langley called on certificate authorities to provide a list of revoked certificates that Google bots can automatically fetch. The time frame for the Chrome changes to go into effect are “on the order of months,” a Google spokesman said.
The problems with CA revocation checks have been particularly prominent over the past 12 months, given the large number of serious CA breaches. While even the Google fetch mechanism isn’t ideal – really, we need to move to an agile trust framework combined (ideally) with browser pinning that can’t be compromised by corporate admins – it’s better. Still, there’s a long way to go until SSL and the CA system are reformed to the point of being actual ‘trusted’ facets of the Internet.