Percy Campbell et al.’s article, “User Perception of Smart Home Surveillance Among Adults Aged 50 Years and Older: Scoping Review,” is a really interesting piece of work on older adults’ perceptions of Smart Home Technologies (SHTs). The authors reviewed other studies on this topic to derive a series of aggregated insights that clarify the state of the literature and make clear how policy makers could start to think about the issues older adults associate with SHTs.
Some key themes/issues that arose from the studies included:
Privacy: different SHTs were perceived differently. Crucially, privacy concerns were often highly contextual and varied by region, with one possible effect being that it can be challenging to generalize from one study’s findings about specific privacy interests to a global population.
Collection of Data — Why and How: People were generally unclear about what was being collected or for what purpose. This lack of data literacy may raise questions about whether consent to collection is ongoing and meaningful.
Benefits and Risks: Data breaches/hacks, malfunction, affordability, and user trust were all possible challenges or risks. However, participants in the studies also generally found that these technologies offered considerable benefits, most significantly a perceived enhancement of their physical safety.
Safety Perceptions: All types of SHTs were seen as useful for safety purposes, especially in the event of an accident or emergency. Safety-enhancing features may be preferred in SHTs by those 50+ years of age.
Given the privacy and safety themes, among others, and how regulatory systems are sometimes outpaced by advances in technology, the authors propose a data justice framework to regulate or govern SHTs. This entails:
Visibility: there are benefits to being ‘seen’ by SHTs, but privacy protections are also needed so that individuals can selectively remove themselves from the view of commercial and other parties.
Digital engagement/disengagement: individuals should be supported in making autonomous decisions about how engaged with, or in control of, these systems they are. They should also be able to disengage, or to permit only certain SHTs to monitor or affect them.
Right to challenge: individuals should be able to challenge decisions made about them by SHTs. This is particularly important in the face of AI systems that may have ageist biases built into them.
While I still think that regulatory systems can be meaningfully involved in this space — if only regulators are both appropriately resourced and empowered! — I take the broader point that regulatory approaches should also include ‘data justice’ components. At the same time, I think that most contemporary or recently updated Western privacy and human rights legislation already includes these precepts, and that there is a real danger in asserting a need to build a new (more liberal/individualistic) approach to collective action problems that regulators, generally, are better equipped to address than individuals are.
It can be remarkably easy to target communications to individuals based on their personal location. Location information is often surreptitiously obtained by way of smartphone apps that sell or otherwise provide this data to data brokers, or through agreements with telecommunications vendors that enable targeting based on mobile devices’ geolocation.
Senator Wyden’s efforts to investigate this brokerage economy recently revealed how this sensitive geolocation information was used to enable and drive anti-abortion activism in the United States:
Wyden’s letter asks the Federal Trade Commission and the Securities and Exchange Commission to investigate Near Intelligence, a location data provider that gathered and sold the information. The company claims to have information on 1.6 billion people across 44 countries, according to its website.
The company’s data can be used to target ads to people who have been to specific locations — including reproductive health clinic locations, according to Recrue Media co-founder Steven Bogue, who told Wyden’s staff his firm used the company’s data for a national anti-abortion ad blitz between 2019 and 2022.
…
In a February 2023 filing, the company said it ensures that the data it obtains was collected with the users’ permission, but Near’s former chief privacy officer Jay Angelo told Wyden’s staff that the company collected and sold data about people without consent, according to the letter.
While the company stopped selling location data belonging to Europeans, it continued for Americans because of a lack of federal privacy regulations.
While the company in question, Near Intelligence, declared bankruptcy in December 2023, there is a real potential for the data it collected to be sold to other parties as part of the bankruptcy proceedings. There is a clear and present need to legislate how geolocation information is collected, used, and disclosed in order to address this often surreptitious aspect of the data brokerage economy.
Curious about what “cyber mercenaries” do? How they operate and facilitate targeting?
This excellent long-form piece from Reuters exquisitely details the history of Appin, an Indian cyber mercenary outfit, and confirms and publicly reveals many of the operations that it has undertaken.
As an aside, the sourcing in this article is particularly impressive, which is to be expected from Satter et al. They keep showing they’re amongst the best in the business!
Moreover, the sidenote concerning the NSA’s awareness of the company, and why it was watching, is notable in its own right. The authors write:
The National Security Agency (NSA), which spies on foreigners for the U.S. government, began surveilling the company after watching it hack “high value” Pakistani officials around 2009, one of the sources said. An NSA spokesperson declined to comment.
This showcases that Appin may either have been seen as a source of fourth-party collection (i.e., where an intelligence service takes collected material from another service as that service is itself collecting it from a target) or have endangered the NSA’s own collection or targeting activities, on the basis that Appin could provoke targets to adopt heightened cybersecurity practices or otherwise cause them to behave in ways that interfered with the NSA’s operations.
While some emerging generative technologies may positively affect various domains (e.g., certain aspects of drug discovery and biological research, efficient translation between certain languages, speeding up certain administrative tasks, etc.), they are also enabling new forms of harmful activity. Case in point: some individuals and groups are using generative technologies to produce child sexual abuse or exploitation materials:
Sexton says criminals are using older versions of AI models and fine-tuning them to create illegal material of children. This involves feeding a model existing abuse images or photos of people’s faces, allowing the AI to create images of specific individuals. “We’re seeing fine-tuned models which create new imagery of existing victims,” Sexton says. Perpetrators are “exchanging hundreds of new images of existing victims” and making requests about individuals, he says. Some threads on dark web forums share sets of faces of victims, the research says, and one thread was called: “Photo Resources for AI and Deepfaking Specific Girls.”
…
… realism also presents potential problems for investigators who spend hours trawling through abuse images to classify them and help identify victims. Analysts at the IWF, according to the organization’s new report, say the quality has improved quickly—although there are still some simple signs that images may not be real, such as extra fingers or incorrect lighting. “I am also concerned that future images may be of such good quality that we won’t even notice,” says one unnamed analyst quoted in the report.
The ability to produce generative child abuse content is becoming a wicked problem with few (if any) “good” solutions. It will be imperative for policy professionals to learn from past situations where technologies were found to sometimes facilitate child abuse related harms. In doing so, these professionals will need to draw lessons concerning what kinds of responses demonstrate necessity and proportionality with respect to the emergent harms of the day.
As just one example, we will have to carefully consider how generative AI-created child sexual abuse content is similar to, and distinctive from, past policy debates on the policing of online child sexual abuse content. Such care in developing policy responses will be needed to address these harms and to avoid undertaking performative actions that do little to address the underlying issues that drive this kind of behaviour.
Relatedly, we must also beware the promise that past (ineffective) solutions will somehow address the newest wicked problem. Novel solutions that are custom built to generative systems may be needed, and these solutions must simultaneously protect our privacy, Charter, and human rights while mitigating harms. Doing anything less will, at best, “merely” exchange one class of emergent harms for others.
For the past several months Neale James has talked about how new laws that prevent taking pictures of people on the street will inhibit the documenting of history in certain jurisdictions. I’ve been mulling this over while trying to determine what I really think about this line of assessment and photographic concern. As a street photographer, I’ve got some skin in the game!
In short, while I’m sympathetic to this line of argumentation, I’m not certain that I agree. So I wrote a longish email to Neale—which was included in this week’s Photowalk podcast—and I’ve largely reproduced that email below as a blog post.
I should probably start by stating my priors:
As a street photographer I pretty well always try to include people in my images, and typically aim to get at least some nose and chin. No shade to people who take images of people’s backs (and I selectively do this too), but I think that capturing some of the face’s profile can really bring many street photos to life.1
I’m also usually pretty obvious when I’m taking photos. I find a scene and will often ‘set up’ and wait for folks to move through it. And when people tell me they aren’t pleased or want a photo deleted (not common, but it happens sometimes) I’m usually happy to do so. I shoot between 28-50mm (equiv.) focal lengths, so it’s always pretty obvious when I’m taking photos, which isn’t the case with some street photographers who shoot at 100mm. To each their own, but I think that if I’m taking a photo the subjects should be able to identify that’s happening and take issue with it, directly, if they so choose.
Anyhow, with that out of the way:
If you think of street photography in the broader history of photography, it started with a lot of images featuring hazy or ghostly individuals (e.g. ‘Panorama of Saint Lucia, Naples’ by Jones, ’Physic Street, Canton’ by Thomson, or ‘Rue de Hautefeuille’ by Marville). Even some of the great work—such as that by Cartier-Bresson, Levitt, Bucquet, van Schaick, Atget, Friedlander, Robert French, etc.—includes photographs where the subjects are not clearly identifiable. Now, of course, some of their photographs include obvious subjects, but it’s worth recognizing that many of the historical ‘greats’ made images where you can’t really identify the subject. And… that was just fine. Then, it was mostly a limitation of the kit, whereas now, in some places, we’re dealing with the limitations of the law.
Indeed, I wonder if we can’t consider the legal requirement that individuals’ identifiable images not be captured as a real forcing point for creativity, one that might inspire additional geographically distinctive street photography traditions: imagine jurisdictions where, instead of aperture priority being the preferred setting, shutter priority becomes the default, with 5-15 second exposures used to render people as ghostly blurs.2
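For a rough sense of what such exposures demand in daylight, here is a minimal back-of-the-envelope sketch; the metered exposure and target shutter speed are illustrative assumptions, not prescriptions.

```python
from math import log2

# Illustrative, assumed numbers only: a typical metered daylight exposure and
# a target shutter speed long enough to blur pedestrians into ghosts.
base_shutter_s = 1 / 125    # metered exposure at a fixed aperture and ISO
target_shutter_s = 10       # desired long exposure

# Each stop of neutral density doubles the usable shutter time, so the number
# of ND stops needed is the base-2 log of the ratio between the two speeds.
nd_stops = log2(target_shutter_s / base_shutter_s)
print(f"ND filter strength needed: ~{nd_stops:.1f} stops")  # ~10.3 stops (an 'ND1000'-class filter)
```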
Now, if such a geographical tradition arises, will that mean we get all the details of the clothing and such that people are wearing today? Well…no. Unless, of course, street photographers embrace creativity and develop photo essays that incorporate this in interesting or novel ways. But street photography can include a lot more than just the people, and the history of street photography and the photos we often praise as masterpieces showcase that blurred subjects can generate interesting, exciting, and historically significant images.
One thing that might be worth thinking about is what this will mean for how geographical spaces are created by generative AI in the future. Specifically:
These AI systems will often default to norms based on the weighting of what has been collected in training data. Will they ‘learn’ that some parts of the world are more or less devoid of people based on street photos and so, when generating images of certain jurisdictions, create imagery that is similarly devoid of people? Or, instead, will we see generative imagery that includes people whereas real photos will have to blur or obfuscate them?
Will we see some photographers, at least, take up a blending of the real and the generative, where they capture streets but then use programs to add people into those streetscapes based on other information they collect (e.g., local fashions etc)? Basically, will we see some street photographers adopt a hybrid real/generative image-making process in an effort to comply with law while still adhering to some of the Western norms around street photography?
As a final point, while I identify as a street photographer and avoid taking images of people in distress, the current state of AI, regulation, and law means that there are indeed some good reasons for people to be concerned about having street photos taken of them. The laws frustrating some street photographers are born of arguably real concerns and issues.
For example, companies such as Clearview AI (in Canada) scraped publicly available images and subsequently generated biometric profiles of the people in them.
Most people don’t really know how to prevent such companies from being developed or selling their products but do know that if they stop the creation of training data—photographs—then they’re at least less likely to be captured in a compromising or unfortunate situation.
It’s not the photographers, then, that are necessarily ‘bad’ but the companies who illegally exploit our work to our detriment, as well as to the detriment of the public writ large.
All to say: as street photographers, and photographers more generally, we should think beyond our own interests to appreciate why individuals may not want their images taken in light of the technical developments all around us. And, importantly, there is a difference: as photographers we often share our work, whereas CCTV operators and the like generally do not. The effect is that the images we take can end up in generative and non-generative AI training datasets, whereas the cameras that are monitoring all of us, always, are (currently…) less likely to be feeding the biometric surveillance training data beast.
While, at the same time, recognizing that sometimes a photo is preferred because people are walking away from the camera/towards something else in the scene. ↩︎
The Canadian Senate is debating Bill S-256, An Act to amend the Canada Post Corporation Act (seizure) and to make related amendments to other Acts. The relevant elements of the speech include:
Under the amendment to the Customs Act, a shipment entering Canada may be subject to inspection by border services officers if they have reason to suspect that its contents are prohibited from being imported into Canada. If this is the case, the shipment, whether a package or an envelope, may be seized. However, an envelope mailed in Canada to someone who resides at a Canadian address cannot be opened by the police or even by a postal inspector.
…
To summarize, nothing in the course of the post in Canada is liable to demand, seizure, detention or retention, except if a specific legal exception exists in the Canada Post Corporation Act or in one of the three laws I referenced. However, items in the mail can be inspected by a postal inspector, but if it is a letter, the inspector cannot open it to complete the inspection.
Thus, a police officer who has reasonable grounds to suspect that an item in the mail contains an illegal drug or a handgun cannot be authorized, pursuant to a warrant issued by a judge, to intercept and seize an item until it is delivered to the addressee or returned to the sender. I am told that letters containing drugs have no return address.
The Canadian Association of Chiefs of Police raised this very issue in 2015 (.pdf). It recognised “that search and seizure authorities granted to law enforcement personnel under the Criminal Code of Canada or other criminal law authorities are overridden by the [Canada Post Corporation Act], giving law enforcement no authority to seize, detain or retain parcels or letters while they are in the course of mail and under Canada Post’s control.” As a result, the Association resolved:
that the Canadian Association of Chiefs of Police requests the Government of Canada to amend the Canada Post Corporation Act to provide police, for the purpose of intercepting contraband, with the ability to obtain judicial authorization to seize, detain or retain parcels or letters while they are in the course of mail and under Canada Post’s control.
It would seem, then, that should Bill S-256 pass into law, police will gain some fairly impressive new powers seven or eight years after that resolution, and decades of mail privacy precedent may come undone.
Apple has announced it will begin rolling out new data security protections to Americans by the end of 2022, and to the rest of the world in 2023. This is a big deal.
One of the biggest, and most serious, gaping holes in the protections that Apple has provided to its users is linked to iCloud. Specifically, while a subset of information has been end-to-end encrypted such that Apple couldn’t access or disclose the plaintext of communications or content (e.g., Health information, encrypted Apple Notes, etc.), the company did not apply this protection to device backups, message backups, notes generally, iCloud contents, Photos, and more. The result is that third parties could either compel Apple to disclose information (e.g., by way of warrant) or otherwise subvert Apple’s protections to access stored data (e.g., targeted attacks). Apple’s new security protections will expand the categories of end-to-end protected data from 14 to 23.1
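To make the underlying idea concrete, here is a minimal sketch (not Apple’s actual design, just the general end-to-end principle) using Python’s cryptography library: when the encryption key stays on the device, the provider only ever holds ciphertext, so there is nothing useful for it to hand over.

```python
# Minimal sketch of the end-to-end idea (not Apple's actual protocol): the key
# is generated and kept on the user's device, so the provider stores only
# ciphertext it cannot read or meaningfully disclose.
from cryptography.fernet import Fernet  # pip install cryptography

device_key = Fernet.generate_key()   # never leaves the device in this model
cipher = Fernet(device_key)

backup_blob = b"contacts, notes, photo library metadata..."  # stand-in for backup data

# Only this ciphertext is uploaded; a demand served on the provider can yield
# the ciphertext, but not the plaintext.
ciphertext = cipher.encrypt(backup_blob)

# Recovery is only possible on a device that holds the key.
assert cipher.decrypt(ciphertext) == backup_blob
```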
I am very supportive of Apple’s decision and frankly congratulate them on the very real courage that it takes to implement something like this. It is:
courageous technically, insofar as this is a challenging thing to pull off at the scale at which Apple operates
courageous from a business perspective, insofar as it raises the prospect of unhappy customers should they lose access to their data and Apple be unable to assist them
courageous legally, insofar as it’s going to inspire a lot of frustration and upset by law enforcement and government agencies around the world
It’ll be absolutely critical to observe how quickly, and how broadly, Apple extends its new security capacities and whether countries are able to pressure Apple to either not deploy them for their residents or roll them back in certain situations. Either way, Apple routinely sets the standard on consumer privacy protections; others in the industry will now be inevitably compared to Apple as either meeting the new standard or failing their own customers in one way or another.
From a Canadian, Australian, or British government point of view, I suspect that Apple’s decision will infuriate law enforcement and security agencies who had placed their hopes on CLOUD Act bilateral agreements to get access to corporate data, such as that held by Apple. Under a CLOUD Act bilateral agreement, British authorities could, as an example, directly serve a judicially authorised order on Apple concerning a British resident, requiring Apple to disclose information back to the British authorities without having to deal with American authorities. This promised to substantially improve the speed at which countries with bilateral agreements could obtain electronic evidence. Now, it would seem, Apple will largely be unable to assist law enforcement and security agencies when it comes to Apple users who have voluntarily enabled the heightened data protections. Apple’s decision will, almost certainly, further inspire governments around the world to double down on their efforts to advance anti-encryption legislation and pass such legislation into law.
Notwithstanding the inevitable governmental gnashing of teeth, Apple’s approach will represent one of the biggest (voluntary) increases in privacy protection for global users since WhatsApp adopted Signal’s underlying encryption protocols. Tens if not hundreds of millions of people who enable the new data protection will be much safer and more secure in how their data is stored, while simultaneously restricting who can access that data without their knowledge.
In a world where ‘high-profile’ targets are often just people with influence on social media, there are a lot of people who stand to benefit from Apple’s courageous move. I only hope that other companies, such as Google, are courageous enough to follow Apple at some point in the near future.
really, 13, given the issue of iMessage backups being accessible to Apple ↩︎
Cameron F. Kerry has a helpful piece in Brookings that unpacks the recently published ‘Declaration on the Future of the Internet.’ As he explains, the Declaration was signed by 60 States and is meant, in part, to rebut a China-Russia joint statement. That statement supports those countries’ positions on ‘securing’ domestic Internet spaces and on moving Internet governance out of multi-stakeholder forums and into State-centric ones.
So far, so good. However, baked into Kerry’s article is language suggesting that he either misunderstands, or understates, some of the security-related elements of the Declaration. He writes:
There are additional steps the U.S. government can take that are more within its control than the actions and policies of foreign states or international organizations. The future of the Internet declaration contains a series of supporting principles and measures on freedom and human rights, Internet governance and access, and trust in use of digital network technology. The latter—trust in the use of network technology— is included to “ensure that government and relevant authorities’ access to personal data is based in law and conducted in accordance with international human rights law” and to “protect individuals’ privacy, their personal data, the confidentiality of electronic communications and information on end-users’ electronic devices, consistent with the protection of public safety and applicable domestic and international law.” These lay down a pair of markers for the U.S. to redeem.
I read this against the 2019 Ministerial and the recent Council of Europe Cybercrime Convention updates, and see that a vast swathe of new law enforcement and security agency powers would be entirely permissible based on Kerry’s assessment of the Declaration and the States that signed it. While these new powers have either been agreed to, or advanced by, signatory States, they have simultaneously been directly opposed by civil and human rights campaigners, as well as by some national courts. Specifically, there are live discussions around the following powers:
the availability of strong encryption;
the guarantee that the content of communications sent using end-to-end encrypted devices cannot be accessed or analyzed by third parties (including by on-device surveillance);
the requirement of prior judicial authorization to obtain subscriber information; and
the oversight of preservation and production powers by relevant national judicial bodies.
Laws can be passed that see law enforcement interests supersede individuals’ or communities’ rights to safeguard their devices, data, and communications from the State. When or if such a situation occurs, the signatories of the Declaration can hold fast to their flowery language around protecting rights while, at the same time, individuals and communities experience heightened surveillance of, and intrusions into, their daily lives.
In effect, a lot of international policy and legal infrastructure has been built to facilitate sweeping new investigatory powers and reforms to how data is, and can be, secured. It has taken years to build this infrastructure and as we leave the current stage of the global pandemic it is apparent that governments have continued to press ahead with their efforts to expand the powers which could be provided to law enforcement and security agencies, notwithstanding the efforts of civil and human rights campaigners around the world.
The next stage of things will be to assess how, and in what ways, international agreements and legal infrastructure will be brought into national legal systems, and to determine where to strategically oppose the worst of the overreaches. While it’s possible that some successes will be achieved in resisting the expansion of state powers, not everything will be resisted. The consequence will be both to enhance state intrusions into private lives and to weaken the security provided to devices and data, with the resultant effect of better enabling criminals to illicitly access or manipulate our personal information.
The new world of enhanced surveillance and intrusions is wholly consistent with the ‘Declaration on the Future of the Internet.’ And that’s a big, glaring, and serious problem with the Declaration.
Ikea Canada notified approximately 95,000 Canadian customers in recent weeks about a data breach the company has suffered. An Ikea employee conducted a series of searches between March 1 and March 3 that surfaced the account records of the aforementioned customers.1
While Ikea promised that financial information–credit card and banking information–hadn’t been revealed, a raft of other personal information had been. That information included:
full first and last name;
postal code or home address;
phone number and other contact information;
IKEA loyalty number.
Ikea did not disclose who specifically accessed the information nor their motivations for doing so.
The notice provided by Ikea was better than most data breach alerts insofar as it informed customers what exactly had been accessed. For some individuals, however, this information is highly revelatory and could cause significant concern.
For example, imagine a case where someone has previously been the victim of either physical or digital stalking. Should their former stalker be an Ikea employee, the data breach victim may ask whether their stalker now has confidential information that can be used to renew, or further amplify, harmful activities. With the customer information in hand, as an example, it would be relatively easy for a stalker to obtain more information, such as where precisely someone lives. If they are aggrieved, they could also use the information to engage in digital harassment or threatening behaviour.
Without more information about why the Ikea employee searched the database, those who have been stalked by, or had abusive relations with, an Ikea employee might be driven to think about changing how they live their lives. They might feel the need to change their safety habits, get new phone numbers, or cycle to a new email address. In a worst-case scenario they might contemplate vacating their residence for a time. Even if they do not take any of these actions, they might experience a heightened sense of unease or anxiety.
Of course, Ikea is far from alone in suffering these kinds of breaches. They happen on an almost daily basis for most of us, whether we’re alerted to the breach or not. Many news reports about such breaches focus on whether there is an existent or impending financial harm and stop the story there. The result is that journalistic reporting can conceal some of the broader harms linked with data breaches.
Imagine a world where our personal information–how to call us or find our homes–was protected as strongly as our credit card numbers are currently protected. In such a world stalkers and other abusive actors might be less able to exploit stolen or inappropriately accessed information. Yes, there will always be ways for bad actors to operate badly, but it would be possible to mitigate some of the ways this badness can take place.
Companies could still create meaningful consent frameworks whereby some (perhaps most!) individuals could agree to have their information stored by the company. But those who have a different risk threshold could make a meaningful choice of their own, so that they could still make purchases and receive deliveries without permanently increasing the risk that their information might fall into the wrong hands. However, getting to this point requires expanded threat modelling: we can’t just worry about a bad credit card purchase but, instead, would need to take seriously the gendered and intersectional nature of violence and its intersection with cybersecurity practices.
In the interests of disclosure, I was contacted as an affected party by Ikea Canada. ↩︎
The Markup has a comprehensive and disturbing article on how location information is acquired by third-parties despite efforts by Apple and Google to restrict the availability of this information. In the past, it was common for third-parties to provide SDKs to application developers. The SDKs would inconspicuously transfer location information to those third-parties while also enabling functionality for application developers. With restrictions being put in place by platforms such as Apple and Google, however, it’s now becoming common for application developers to initiate requests for location information themselves and then share it directly with third-party data collectors.
While such activities often violate the terms of service and policy agreements between platforms and application developers, it can be challenging for the platforms to actually detect these violations and subsequently enforce their rules.
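For illustration only, the pattern described above might look something like the following sketch; the endpoint, payload fields, and function are hypothetical, invented to show why first-party collection is harder for platforms to spot than a recognizable bundled SDK.

```python
# Hypothetical sketch of app-initiated location sharing; the endpoint, fields,
# and function name are invented for illustration.
import requests

def report_location(lat: float, lon: float, advertising_id: str) -> None:
    payload = {
        "lat": lat,
        "lon": lon,
        "ad_id": advertising_id,       # quasi-identifier that links records over time
        "source": "first_party_app",   # traffic originates from the app's own code
    }
    # Because no recognizable data-broker SDK is bundled, platform scans for
    # known SDK signatures won't flag this call; it looks like ordinary app traffic.
    requests.post("https://collector.example.com/v1/locations", json=payload, timeout=5)
```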
Broadly, the issues at play represent significant governmental regulatory failures. The fact that government agencies often benefit from the secretive collection of individuals’ location information makes it that much harder for governments to muster the will to discipline the secretive collection of personal data by third parties: if the government cuts off the flow of location information, it will impede governments’ own ability to obtain this information.
In some cases intelligence and security services obtain location information from third-parties. This sometimes occurs in situations where the services themselves are legally barred from directly collecting this information. Companies selling mobility information can let government agencies do an end-run around the law.
One of the results is that efforts to limit data collectors’ ability to capture personal information often see parts of government push for carve-outs for collecting, selling, and using location information. In Canada, as an example, the government has adopted a legal position that it can collect locational information so long as it is de-identified or anonymized,1 and for the security and intelligence services there are laws on the books that permit the collection of commercially available open source information. This open source information does not need to be anonymized prior to acquisition.2 Lest you think it sounds paranoid that intelligence services might be interested in location information, consider that American agencies collected bulk location information pertaining to Muslims from third-party location data brokers and that the Five Eyes historically targeted popular applications such as Google Maps and Angry Birds to obtain location information as well as other metadata and content. As the former head of the NSA announced several years ago, “We kill people based on metadata.”
Any argument made by private or public organizations that anonymization or de-identification of location information makes it acceptable to collect, use, or disclose generally relies on tricking customers and citizens. Why is this? Because even when location information is aggregated and ‘anonymized’ it might subsequently be re-identified. And in situations where that reversal doesn’t occur, policy decisions can still be made on the basis of the aggregated information. The process of deriving these insights and applying them showcases that while privacy is an important right to protect, it is not the only right implicated in the collection and use of locational information. Indeed, it is important to assess the proportionality and necessity of the collection and use, as well as how the associated activities affect individuals’ and communities’ equity and autonomy in society. Doing anything less is merely privacy-washing.
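As a toy illustration of the re-identification point (entirely synthetic data, with invented coordinates and an invented directory), the most frequently visited nighttime and daytime locations in a pseudonymous trace often behave like a home/work fingerprint that can be joined against other datasets:

```python
# Toy example with synthetic data and an invented directory: the most common
# nighttime and daytime locations in a pseudonymous trace act as a home/work
# fingerprint that can be joined against other datasets.
from collections import Counter

# Pseudonymous trace: (hour_of_day, rounded_lat, rounded_lon) tuples.
trace = [
    (2, 43.651, -79.383), (3, 43.651, -79.383), (23, 43.651, -79.383),   # overnight
    (10, 43.662, -79.395), (14, 43.662, -79.395), (15, 43.662, -79.395), # working hours
]

def top_location(points):
    """Return the most frequently visited (lat, lon) among the given points."""
    return Counter((lat, lon) for _, lat, lon in points).most_common(1)[0][0]

home = top_location([p for p in trace if p[0] >= 21 or p[0] <= 6])
work = top_location([p for p in trace if 9 <= p[0] <= 17])

# An attacker joins the (home, work) pair against any dataset linking addresses
# to identities (property records, marketing lists, etc.).
directory = {((43.651, -79.383), (43.662, -79.395)): "Resident #4471 (hypothetical)"}
print(directory.get((home, work), "no match"))  # -> "Resident #4471 (hypothetical)"
```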
Throughout discussions about data collection, including as it pertains to location information, public agencies and companies alike tend to offer a pair of arguments against changing the status quo. First, they assert that consent isn’t really possible anymore given the volumes of data which are collected from individuals on a daily basis; individuals would be overwhelmed with consent requests, so we can’t make the requests in the first place! Second, they assert that we can’t regulate the collection of this data because doing so risks impeding innovation in the data economy.
If those arguments sound familiar, they should. They’re very similar to the plays made by industry groups whose activities have historically had negative environmental consequences. These groups regularly assert that, after decades of poor or middling environmental regulation, any new, stronger regulations would unduly impede the existing dirty economy for power, services, goods, and so forth. Moreover, the dirty way of creating power, services, and goods is just how things are and thus should remain the same.
In both the privacy and environmental worlds, corporate actors (and those to whom they sell data or goods) have benefitted from not having to pay the full cost of acquiring data without meaningful consent, or of the environmental consequences of their activities. But, just as we demand enhanced environmental regulations to address the harms industry causes to the environment, we should demand and expect the same when it comes to the personal data economy.
If a business is predicated on sneaking personal information away from individuals, then it is clearly not particularly interested or invested in behaving ethically towards consumers. It’s imperative to continue pushing legislators not just to recognize that such practices are unethical, but to make them illegal as well. Doing so will require being heard over the cries of government agencies that have vested interests in obtaining location information in ways that skirt the laws that might normally discipline such collection, as well as of companies that have grown as a result of their unethical data collection practices. While this will not be an easy task, it’s increasingly important given the limited ability of platforms to police the sneaky collection of this information and the increasingly problematic ways our personal data can be weaponized against us.
“PHAC advised that since the information had been de-identified and aggregated, it believed the activity did not engage the Privacy Act as it was not collecting or using ‘personal information’.” ↩︎