Policing the Location Industry


The Markup has a comprehensive and disturbing article on how location information is acquired by third parties despite efforts by Apple and Google to restrict its availability. In the past, it was common for third parties to provide SDKs to application developers: the SDKs enabled functionality for the developers while inconspicuously transferring location information to those third parties. With restrictions being put in place by platforms such as Apple and Google, however, it is now becoming common for application developers to request location information themselves and then share it directly with third-party data collectors.

While such activities often violate the terms of service and policy agreements between platforms and application developers, it can be challenging for the platforms to actually detect these violations and subsequently enforce their rules.

Broadly, the issues at play represent significant governmental regulatory failures. Because government agencies often benefit from the secretive collection of individuals’ location information, it is that much harder for governments to muster the will to discipline such collection of personal data by third parties: if the government cuts off the flow of location information, it will impede governments’ own ability to obtain it.

In some cases, intelligence and security services obtain location information from third parties. This sometimes occurs in situations where the services themselves are legally barred from directly collecting the information. Companies selling mobility information can thus let government agencies do an end-run around the law.

One of the results is that efforts to limit data collectors’ ability to capture personal information often sees parts of government push for carve outs to collecting, selling, and using location information. In Canada, as an example, the government has adopted a legal position that it can collect locational information so long as it is de-identified or anonymized,1 and for the security and intelligence services there are laws on the books that permit the collection of commercially available open source information. This open source information does not need to be anonymized prior to acquisition.2 Lest you think that it sounds paranoid that intelligence services might be interested in location information, consider that American agencies collected bulk location information pertaining to Muslims from third-party location information data brokers and that the Five Eyes historically targeted popular applications such as Google Maps and Angry Birds to obtain location information as well as other metadata and content. As the former head of the NSA announced several years ago, “We kill people based on metadata.”

Any argument made by private or public organizations that anonymization or de-identification makes location information acceptable to collect, use, or disclose generally relies on tricking customers and citizens. Why is this? Because even when location information is aggregated and ‘anonymized’ it may subsequently be re-identified. And in situations where that reversal doesn’t occur, policy decisions can still be made based on the aggregated information. The process of deriving and applying these insights shows that while privacy is an important right to protect, it is not the only right implicated in the collection and use of locational information. Indeed, it is important to assess the proportionality and necessity of the collection and use, as well as how the associated activities affect individuals’ and communities’ equity and autonomy in society. Doing anything less is merely privacy-washing.
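The re-identification point is worth making concrete. Research on mobility traces has shown that a mere handful of spatio-temporal points is often enough to single one person out of a large ‘anonymized’ dataset. The sketch below is a toy illustration of that intuition, not any real attack: every identifier and value is invented, and real mobility data is even more distinctive per person than this random model.

```python
import random

# Toy "anonymized" dataset: pseudonyms mapped to the set of
# (cell_tower_id, hour_of_day) points where each person was observed.
# All names and values are invented for illustration.
random.seed(0)
records = {
    f"user_{i}": frozenset(
        (random.randrange(50), random.randrange(24)) for _ in range(30)
    )
    for i in range(10_000)
}

def candidates(known_points, records):
    """Return every pseudonym consistent with a few known sightings."""
    return [u for u, pts in records.items() if known_points <= pts]

# An observer who saw the target at just four (place, hour) points...
target = records["user_42"]
known = frozenset(list(target)[:4])

matches = candidates(known, records)
print(len(matches))  # with sparse-enough data, often exactly 1: re-identified
```

Because each person’s set of visited (place, hour) points is so distinctive, stripping names does little: anyone who observed the target a few times can usually pick them back out of the “anonymized” records.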

Throughout discussions about data collection, including as it pertains to location information, public agencies and companies alike tend to offer a pair of arguments against changing the status quo. First, they assert that consent isn’t really possible anymore given the volumes of data collected daily from individuals; individuals would be overwhelmed with consent requests, so we can’t make the requests in the first place! Second, they claim we can’t regulate the collection of this data because doing so risks impeding innovation in the data economy.

If those arguments sound familiar, they should. They’re very similar to the plays made by industry groups whose activities have historically had negative environmental consequences. These groups regularly assert that, after decades of poor or middling environmental regulation, any new, stronger regulations would unduly impede the existing dirty economy for power, services, goods, and so forth. Moreover, they argue, the dirty way of producing power, services, and goods is simply how things are, and so things should remain the same.

In both the privacy and environmental worlds, corporate actors (and those to whom they sell data or goods) have benefitted from not paying the full cost of their activities: acquiring data without meaningful consent, or ignoring the environmental costs of what they produce. But just as we demand stronger environmental regulations to address the harms industry causes to the environment, we should demand and expect the same when it comes to the personal data economy.

If a business is predicated on sneaking personal information away from individuals, then it is clearly not particularly interested or invested in treating consumers ethically. It’s imperative to keep pushing legislators not just to recognize that such practices are unethical, but to make them illegal as well. Doing so will require being heard over the cries of government agencies that have vested interests in obtaining location information in ways that skirt the laws that might normally discipline such collection, as well as of companies that have grown as a result of their unethical data collection practices. While this will not be an easy task, it is increasingly important given the limits on platforms’ ability to police the sneaky collection of this information and the increasingly problematic ways our personal data can be weaponized against us.


  1. “PHAC advised that since the information had been de-identified and aggregated, it believed the activity did not engage the Privacy Act as it was not collecting or using “personal information”. ↩︎
  2. See, as example, Section 23 of the CSE Act ↩︎

Solved: Apple Home Automation Not Firing After Buying New iPhone


One of the best pandemic purchases I’ve made has been a HomePod Mini. One of the many reasons I like it is that I can use a Home automation to set a playlist or album to wake up to. This corrects an annoyance with the iPhone’s Clock app, where you need to download a song to your device to reliably use it as an alarm.

However, I recently got a new iPhone, which broke my alarm automation. I couldn’t figure out what was going on: I deleted and re-created the automation a few times and fully restarted the HomePod Mini. Neither action helped. The automation not only failed to fire at the designated times but also wouldn’t run when using the test feature.

The settings for the automation were:

  • Enable This Automation (Only when I am home): On
  • When: Weekdays at a given time (Only when I am home)
  • Scenes: Weekday morning
  • Accessories: HomePod Mini
  • Media: Play Audio (Designated playlist, Shuffle, Set Custom Volume)

No matter what I did, the automation never fired. However, I figured out that as soon as I disabled the location-specific triggers the automation worked. This helped me to start narrowing down the problem and how to correct it.

You see, when I moved all of my data to my new iPhone, the transfer missed a setting that tells the Home app which device’s location to use when triggering events. As a result, an automation set to fire only when I was home couldn’t work, because the device that had been triggering the Home automation (i.e., my old iPhone) was never geolocated to my network.

You can fix this by opening: Settings >> Privacy >> Location Services (On) >> Share My Location >> My Location (Set to “This Device”)

Now that the Home app knows to use my iPhone’s location as the way of determining whether I’m at home, the trigger fires reliably.

Facebook: Yes, it can get more invasive

Grace Nasri has a good – if worrying – story that walks through how Facebook could soon use geolocational information to advance its digital platform. One item she focuses on is Facebook’s existing terms of service, which are vague enough to permit the harvesting of such information already. Unscientific as the reaction may be, I find the company’s focus on knowing where its users are really, really creepy.

I left Facebook after seeing it had added phone numbers to my Facebook contacts for people who had never been on Facebook, who didn’t own computers, and for whom I didn’t even have phone numbers. Seeing that Facebook had the landline numbers for my 80-plus-year-old grandparents was the straw that broke my back several years ago; I wonder whether this degree of tracking will encourage other Facebook users to flee.

Aside

Bit9 has released a report that outlines a host of fairly serious concerns around Android devices and app permissions. To be upfront: Android isn’t special in this regard; on a BlackBerry, iPhone, or Windows Phone device you’ll also find a pile of apps with very, very strange permission requests (e.g., why would a wallpaper application need access to your GPS and contact book?). The video (above) is a quick overview of some findings; the executive summary can be found here and the full report here (.pdf).

Google’s ‘Friendly Tracking’: Fitfully Creepy?

Kashmir Hill wrote an article last week about how Google Now is informing some Nexus owners of how active they have been over the past week. She rightfully notes that this is really just making transparent the tracking that smartphones do all the time, though putting it to (arguably) good and helpful use. This said, Google’s actions raise a series of interesting issues and questions.

To begin, Google’s actions are putting a ‘friendly face’ on locational tracking. Their presentation of this data also reveals some of the ways that Google can – and apparently does – use locational data: to calculate not just distance but, based on the rate of movement between locations, the means by which users are getting from point A to B. This isn’t surprising, given that Google has had to develop algorithms to determine whether subscribers’ phones are moving in cars (in fast or slow traffic) for some of its traffic alert systems. Determining whether you’re walking or biking instead of driving is presumably just a happy outcome of that algorithmic determination. That said: is this mode of analyzing movement and location necessarily something that users want Google to be performing? Can they genuinely be said to have consented to this surveillance – beyond what is buried in jargon-ridden Terms of Service and Privacy Policies – and, moreover, can Now users get both the raw data and the categories into which their locational data has been ‘sorted’ by Google? Can they have both sets of data fully, and permanently, expunged from Google’s databases?
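The basic idea behind inferring transport mode from rate of movement can be sketched in a few lines. To be clear, this is my own simplified illustration, not Google’s pipeline: the speed thresholds and the straight-line (haversine) distance shortcut are invented assumptions, and a real system would draw on many more signals (accelerometer data, Wi-Fi, road maps).

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(p, q):
    """Great-circle distance between two (lat, lon) points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (*p, *q))
    a = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def infer_mode(fixes):
    """Guess transport mode from (lat, lon, unix_seconds) location fixes.

    Thresholds are illustrative only: average speed under 7 km/h is
    treated as walking, under 25 km/h as cycling, anything faster as driving.
    """
    (lat1, lon1, t1), (lat2, lon2, t2) = fixes[0], fixes[-1]
    hours = (t2 - t1) / 3600
    kmh = haversine_km((lat1, lon1), (lat2, lon2)) / hours
    if kmh < 7:
        return "walking"
    if kmh < 25:
        return "cycling"
    return "driving"

# Two fixes ~1.3 km apart, 15 minutes apart -> ~5 km/h
print(infer_mode([(45.500, -73.570, 0), (45.512, -73.570, 900)]))  # walking
```

Cover the same distance in one minute instead of fifteen and the average speed jumps to roughly 80 km/h, so the same function returns “driving” – which is why a phone reporting periodic location fixes leaks mode of transport almost for free.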

Friendliness – or not, if you see this mode of tracking and notification as problematic – aside, I think that Google’s alerts speak to the important role that ambient technology can play in encouraging public fitness. In the interests of disclosure, I’ve used a non-GPS-based system to track my relative levels of activity for the past six or seven months. It’s been the single best $100 I’ve spent in the past five years and has led to very important, and positive, changes in my personal health. I specifically chose a non-GPS system because I worry about the implications of linking health and fitness information with where individuals physically move: I see such data as a potential gold mine for health insurers and employers. This is where I see the primary concerns: how can individuals be assured that GPS-related fitness information won’t be made available to health insurers who are setting Android users’ health premiums? How can they prevent the information from leaking to employers, or anyone else with an interest in this data?

Past this issue of data flow control I actually think that making basic fitness information very, very clear to people is a good idea. A comfortable one? No, not necessarily. No one really wants to see how little they may have been active. But I’m not certain that this mode of fitness analysis is necessarily creepy; it can definitely be unpleasant, however.

Of course, individuals need to be able to opt out of this kind of tracking if they’d like. Really, from a privacy perspective it should be opt-in, though from a public health perspective I can’t help but wonder whether it shouldn’t be opt-out. This is an area where there are competing public goods, and unlike a debate around security and privacy (which tends to feature pretty drawn out, well entrenched battle lines), I’m not sure we’ve had a good discussion about the nature of locational tracking as it relates to basic facets of public fitness and, by extension, public health.

In the end, this is actually a tracking technology that I’m largely on the fence about, and my core reasons for having problems with it are that (a) I don’t think people had any real idea that they had opted in to the fitness analysis; and (b) I don’t trust third parties not to get access to this data for purposes at odds with the data subject’s own interests. If both (a) and (b) could be resolved, however, I think I’d have a much harder time disagreeing with such ‘fitness alerts’ being integrated into smartphones, given the significant problems of obesity amongst Western citizens.

What are your thoughts on this topic?