Categories
Aside Links

Liberal Fictions, AI technologies, and Human Rights

Although we talk the talk of individual consent and control, such liberal fictions are no longer sufficient to provide the protection needed to ensure that individuals and the communities to which they belong are not exploited through the data harvested from them. This is why acknowledging the role that data protection law plays in protecting human rights, autonomy and dignity is so important. This is why the human rights dimension of privacy should not just be a ‘factor’ to take into account alongside stimulating innovation and lowering the regulatory burden on industry. It is the starting point and the baseline. Innovation is good, but it cannot be at the expense of human rights.

— Prof. Teresa Scassa, “Bill C-27 and a human rights-based approach to data protection”

It’s notable that Prof. Scassa speaks about the way in which Bill C-27’s preamble was supplemented with language about human rights as a way to assuage some public critique of the legislation. Preambles, however, lack the force of law and do not compel judges to interpret legislation in a particular way. They are often better read as a way to explain legislation to the public, or to strike up discussions with the judiciary when legislation repudiates a court decision.

For a long-form analysis of the utility of preambles, see Prof. Kent Roach’s “The Uses and Audiences of Preambles in Legislation.”

Categories
Links

Instagram’s Ongoing Trust and Safety Problem

A New York Times investigation reveals how Instagram promotes posts that include young girls to male users, including sexual predators.

Aside from reaching a surprisingly large proportion of men, the ads got direct responses from dozens of Instagram users, including phone calls from two accused sex offenders, offers to pay the child for sexual acts and professions of love.

The results suggest that the platform’s algorithms play an important role in directing men to photos of children. And they echo concerns about the prevalence of men who use Instagram to follow and contact minors, including those who have been arrested for using social media to solicit children for sex.



… though The Times chose topics that the company estimated were dominated by women, the ads were shown, on average, to men about 80 percent of the time, according to a Times analysis of Instagram’s audience data. In one group of tests, photos showing the child went to men 95 percent of the time, on average, while photos of the items alone went to men 64 percent of the time.

These findings are deeply disturbing, to say the least.

Categories
Links

New York City’s Chatbot: A Warning to Other Government Agencies?

A good article by The Markup assessed the accuracy of New York City’s municipal chatbot, which is intended to provide New Yorkers with information about starting or operating a business in the city. The journalists found the chatbot regularly provided false or misleading information that could expose businesses to legal repercussions and significantly discriminate against city residents. Problematic outputs included incorrect housing-related information and wrong answers about whether businesses must accept cash for services rendered, whether employers can take cuts of employees’ tips, and more.

While New York does include a warning to those using the chatbot, it remains unclear (and perhaps doubtful) that residents who use it will know when to dispute its outputs. Moreover, the city’s statements about how the tool can be helpful, and about the sources it is trained on, may lead individuals to place undue trust in the chatbot.

In aggregate, this speaks to how important it is to communicate effectively with users, beyond policies that simply mandate some kind of disclosure of the risks associated with these tools. It also demonstrates the importance of government institutions more carefully assessing (and appreciating) the risks of these systems prior to deploying them.

Categories
Links Writing

RCMP Found to Unlawfully Collect Publicly Available Information

The recent report from the Office of the Privacy Commissioner of Canada, entitled “Investigation of the RCMP’s collection of open-source information under Project Wide Awake,” is an important read for those interested in the restrictions that apply to federal government agencies’ collection of this information.

The OPC found that the RCMP:

  • had sought to outsource its own legal accountabilities to a third-party vendor that aggregated information,
  • was unable to demonstrate that its vendor was lawfully collecting Canadian residents’ personal information,
  • operated in contravention of prior guarantees or agreements between the OPC and the RCMP,
  • was relying on a deficient privacy impact assessment, and
  • failed to adequately disclose to Canadian residents how information was being collected, with the effect of preventing them from understanding the activities that the RCMP was undertaking.

It is a breathtaking condemnation of the method by which the RCMP collected open source intelligence, and includes assertions that the agency is involved in activities that stand in contravention of PIPEDA and the Privacy Act, as well as its own internal processes and procedures. The findings in this investigation build from past investigations into how Clearview AI collected facial images to build biometric templates, guidance on publicly available information, and joint cross-national guidance concerning data scraping and the protection of privacy.

Categories
Links Writing

Near-Term Threats Posed by Emergent AI Technologies

In January, the UK’s National Cyber Security Centre (NCSC) published its assessment of the near-term impact of AI on cyber threats. The whole assessment is worth reading for its clarity and brevity in identifying different ways that AI technologies will be used by high-capacity state actors, by other state and well-resourced criminal and mercenary actors, and by comparatively low-skill actors.

A few items which caught my eye:

  • More sophisticated uses of AI in cyber operations are highly likely to be restricted to threat actors with access to quality training data, significant expertise (in both AI and cyber), and resources. More advanced uses are unlikely to be realised before 2025.
  • AI will almost certainly make cyber operations more impactful because threat actors will be able to analyse exfiltrated data faster and more effectively, and use it to train AI models.
  • AI lowers the barrier for novice cyber criminals, hackers-for-hire and hacktivists to carry out effective access and information gathering operations. This enhanced access will likely contribute to the global ransomware threat over the next two years.
  • Cyber resilience challenges will become more acute as the technology develops. To 2025, GenAI and large language models will make it difficult for everyone, regardless of their level of cyber security understanding, to assess whether an email or password reset request is genuine, or to identify phishing, spoofing or social engineering attempts.

There are more insights, such as the value of training data held by high-capacity actors and the likelihood that low-skill actors will see significant upskilling over the next 18 months due to the availability of AI technologies.

The potential to assess information more quickly may have particularly notable impacts in the national security space, enable more effective corporate espionage operations, as well as enhance cyber criminal activities. In all cases, the ability to assess and query volumes of information at speed and scale will let threat actors extract value from information more efficiently than today.

The fact that the same technologies may enable lower-skilled actors to undertake wider ransomware operations, in a world where it will be challenging to distinguish legitimate from illegitimate security-related emails, also speaks to the desperate need for organizations to transition to higher-security solutions, including multi-factor authentication or passkeys.

Categories
Links Writing

Older Adults’ Perception of Smart Home Technologies

Percy Campbell et al.’s article, “User Perception of Smart Home Surveillance Among Adults Aged 50 Years and Older: Scoping Review,” is a really interesting bit of work on older adults’ perceptions of smart home technologies (SHTs). The authors conducted a review of other studies on this topic to, ultimately, derive a series of aggregated insights that clarify the state of the literature and, also, make clear how policy makers could start to think about the issues older adults associate with SHTs.

Some key themes/issues that arose from the studies included:

  • Privacy: different SHTs were perceived differently. Key, though, was that privacy concerns were sometimes highly contextual and region-specific, with one possible effect being that it can be challenging to generalize from one study about specific privacy interests to a global population.
  • Collection of Data — Why and How: people were generally unclear about what was being collected or for what purpose. A lack of literacy may raise issues of ongoing meaningful consent to collection.
  • Benefits and Risks: data breaches/hacks, malfunction, affordability, and user trust were all possible challenges/risks. However, participants in the studies also generally found considerable benefits to these technologies and, most significantly, perceived that their physical safety was enhanced.
  • Safety Perceptions: all types of SHTs were seen as useful for safety purposes, especially in the event of an accident or emergency. Safety-enhancing features may be preferred in SHTs by those 50+ years of age.

Given these privacy, safety, and related themes, and how regulatory systems are sometimes outpaced by advances in technology, the authors propose a data justice framework to regulate or govern SHTs. This entails:

  • Visibility: there are benefits to being ‘seen’ by SHTs but, also, privacy protections need to apply so individuals can selectively remove themselves from being visible to commercial and other parties.
  • Digital engagement/disengagement: individuals should be supported in making autonomous decisions about how engaged with, or in control of, systems they are. They should, also, be able to disengage, or have only certain SHTs used to monitor or affect them.
  • Right to challenge: individuals should be able to challenge decisions made about them by SHTs. This is particularly important in the face of AI systems, which may have ageist biases built into them.

While I still think that regulatory systems can be involved in this space — if only regulators are both appropriately resourced and empowered! — I take the broader point that regulatory approaches should, also, include ‘data justice’ components. At the same time, I think that most contemporary or recently updated Western privacy and human rights legislation already includes these precepts and, also, that there is a real danger in asserting a need to build a new (more liberal/individualistic) approach to collective action problems that regulators, generally, are better equipped to address than individuals are.

Categories
Links Writing

Location Data Used to Drive Anti-Abortion Campaigns

It can be remarkably easy to target communications to individuals based on their personal location. Location information is often surreptitiously obtained by way of smartphone apps that sell off or otherwise provide this data to data brokers, or through agreements with telecommunications vendors that enable targeting based on mobile devices’ geolocation.

Senator Wyden’s efforts to investigate this brokerage economy recently revealed how this sensitive geolocation information was used to enable and drive anti-abortion activism in the United States:

Wyden’s letter asks the Federal Trade Commission and the Securities and Exchange Commission to investigate Near Intelligence, a location data provider that gathered and sold the information. The company claims to have information on 1.6 billion people across 44 countries, according to its website.

The company’s data can be used to target ads to people who have been to specific locations — including reproductive health clinic locations, according to Recrue Media co-founder Steven Bogue, who told Wyden’s staff his firm used the company’s data for a national anti-abortion ad blitz between 2019 and 2022.



In a February 2023 filing, the company said it ensures that the data it obtains was collected with the users’ permission, but Near’s former chief privacy officer Jay Angelo told Wyden’s staff that the company collected and sold data about people without consent, according to the letter.

While the company stopped selling location data belonging to Europeans, it continued for Americans because of a lack of federal privacy regulations.

While the company in question, Near Intelligence, declared bankruptcy in December 2023, there is a real potential for the data it collected to be sold to other parties as part of bankruptcy proceedings. There is a clear and present need to legislate how geolocation information is collected, used, and disclosed in order to address this often surreptitious aspect of the data brokerage economy.

Categories
Links

Pulling Back the Curtain on the Appin Cyber Mercenary Organization

Curious about what “cyber mercenaries” do? How they operate and facilitate targeting?

This excellent long-form piece from Reuters exquisitely details the history of Appin, an Indian cyber mercenary outfit, and confirms and publicly reveals many of the operations that it has undertaken.

As an aside, the sourcing in this article is particularly impressive, which is to be expected from Satter et al. They keep showing they’re amongst the best in the business!

Moreover, the sidenote concerning the NSA’s awareness of the company, and why, is notable in its own right. The authors write,

The National Security Agency (NSA), which spies on foreigners for the U.S. government, began surveilling the company after watching it hack “high value” Pakistani officials around 2009, one of the sources said. An NSA spokesperson declined to comment.

This suggests that Appin may either have been a source of fourth-party collection (i.e., where an intelligence service obtains material that another service has itself collected from a target) or have endangered the NSA’s own collection or targeting activities, on the basis that Appin could provoke targets to adopt heightened cybersecurity practices or otherwise cause them to behave in ways that interfered with the NSA’s operations.

Categories
Links Photography

A Century Caught on Camera

The Globe and Mail has a terrific photographic series entitled "A century caught on camera." As a Toronto resident I was struck by just how many traditions, rituals, and grievances have stuck with the city–or in the city–for over a century.

Further, the way in which the images have been captured has changed substantially over time as a result of the technical capacity of camera equipment, along with the interests or preferences of the photographers at different times. Images in the past decade or two, as an example, clearly draw more commonly from celebrity or artistic portraiture than 50 years ago. Moreover, it’s pretty impressive just how much photographers have done with their equipment over the past century and this, generally, speaks to how easy street and documentary photographers have it today as compared to when our compatriots were using slow lenses and film.

It may take you quite a while to get through all the images but I found the process to be exceedingly worthwhile. Though I admit that the first decade during which the Globe used colour images probably ranks as my least favourite period in the galleries that the paper has published.