
The Ongoing Problems of Placing Backdoors in Telecommunications Networks

In a cyber incident reminiscent of Operation Aurora, threat actors successfully penetrated American telecommunications companies (and a small number of other countries’ service providers) to gain access to lawful interception systems or associated data. The result was that:

For months or longer, the hackers might have held access to network infrastructure used to cooperate with lawful U.S. requests for communications data, according to people familiar with the matter, which amounts to a major national security risk. The attackers also had access to other tranches of more generic internet traffic, they said.

The surveillance systems believed to be at issue are used to cooperate with requests for domestic information related to criminal and national security investigations. Under federal law, telecommunications and broadband companies must allow authorities to intercept electronic information pursuant to a court order. It couldn’t be determined if systems that support foreign intelligence surveillance were also vulnerable in the breach.

Not only is this a major intelligence coup for the adversary in question, but it once more reveals the fundamental difficulty of deliberately building lawful access/interception systems into communications infrastructures to support law enforcement and national security investigations while, simultaneously, preventing adversaries from taking advantage of the same deliberately designed vulnerabilities.


Measuring the Effects of Active Disinformation Operations

This is a good long-form piece by Thomas Rid on disinformation activities, with a particular focus on Russian operations. A key takeaway for me is that there is a real potential for the exposure of disinformation campaigns to beget subsequent campaigns, as the discovery (and journalistic coverage) of the initial campaign can bestow a kind of legitimacy upon adversaries in the eyes of their paymasters.

One way to overcome this is to adopt tactics that not only expose disinformation campaigns but also actively work to disable campaigners’ operational capacities at both the technical and staff levels. Merely revealing disinformation campaigns, by contrast, can serve as fuel for additional funding of disinformation operators and their ability to launch subsequent campaigns or operations.


TikTok and the “Problem” of Foreign Influence

This is one of the clearer assessments of the efficacy (and lack thereof) of influencing social groups and populations using propaganda communicated over social media. While a short article can’t address every dimension of propaganda and influence operations, and their potential effects, this does a good job discussing some of the weaknesses of these operations and some of the less robust arguments about why we should be concerned about them.1

Key points in the article include:

  1. Individuals are actually pretty resistant to changing their minds when exposed to new or contradictory information, which can impede the utility of propaganda/influence operations.
  2. While policy options tend to focus on the supply side (how do we stop propaganda/influence?), it is the demand side (I want to read about an issue) that is a core source of the challenge.
  3. Large-scale, one-time pushes to shift existing attitudes are likely to be detected and, subsequently, to de-legitimize any social media source that exhibits obvious propaganda/influence operations.

This said, the article operates with a presumption that people’s pre-existing views are being challenged by propaganda/influence operations and that they will naturally resist such challenges. By way of contrast, where there are new or emerging issues, where past positions have been upset, or where information is sought in response to a significant social or political change, there remains an opportunity to effect change in individuals’ perceptions of issues.2 Nevertheless, those most likely to be affected will be those who are seeking out particular kinds of information because they believe something has epistemically or ontologically changed in their belief structures and, thus, have shifted from a closed to an open position in which they are ready to receive new positions and update their beliefs.


  1. In the past I have raised questions about the appropriateness of focusing so heavily on TikTok as a national security threat. ↩︎
  2. This phenomenon is well documented in the agenda-setting literatures. ↩︎

Russian State Media Disinformation Campaign Exposed

Today, a series of Western allies — including Canada, the United States, and the Netherlands — disclosed the existence of a sophisticated Russian social media influence operation that was run by RT. The details of the campaign are exquisite, and include some of the code used to drive the operation.

Of note, the campaign used a covert artificial intelligence (AI) enhanced software package to create fictitious online personas, representing a number of nationalities, to post content on X (formerly Twitter). Using this tool, RT affiliates disseminated disinformation to and about a number of countries, including the United States, Poland, Germany, the Netherlands, Spain, Ukraine, and Israel.

Although the tool was only identified on X, the authoring organizations’ analysis of the software used for the campaign indicated the developers intended to expand its functionality to other social media platforms. Their analysis also indicated the tool is capable of the following:

  1. Creating authentic appearing social media personas en masse;
  2. Deploying content similar to typical social media users;
  3. Mirroring disinformation of other bot personas;
  4. Perpetuating the use of pre-existing false narratives to amplify malign foreign influence; and
  5. Formulating messages, to include the topic and framing, based on the specific archetype of the bot.

Mitigations to address this influence campaign include:

  1. Consider implementing processes to validate that accounts are created and operated by a human person who abides by the platform’s respective terms of use. Such processes could be similar to well-established Know Your Customer guidelines;
  2. Consider reviewing and making upgrades to authentication and verification processes based on the information provided in this advisory;
  3. Consider protocols for identifying and subsequently reviewing users with known-suspicious user agent strings (a rough sketch of this kind of screening follows this list); and
  4. Consider making user accounts Secure by Default through measures such as MFA, default settings that support privacy, removal of personally identifiable information shared without consent, and clear documentation of acceptable behavior.
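To make the third mitigation a little more concrete, below is a minimal sketch of user agent screening. Everything in it (the suspicious substrings, the account record shape, and the function names) is a hypothetical illustration rather than anything drawn from the advisory; a real platform would rely on its own telemetry and threat intelligence feeds.

```python
# Hypothetical sketch of the third mitigation: flag accounts presenting
# known-suspicious user agent strings for manual review. The substrings
# below are illustrative placeholders, not indicators from the advisory.

SUSPICIOUS_UA_SUBSTRINGS = [
    "headlesschrome",    # headless browsers are common in automation
    "phantomjs",         # legacy headless engine, rarely used by humans
    "python-requests",   # scripted HTTP clients posing as users
]

def is_suspicious_user_agent(user_agent: str) -> bool:
    """Return True if the user agent matches any known-suspicious marker."""
    ua = user_agent.lower()
    return any(marker in ua for marker in SUSPICIOUS_UA_SUBSTRINGS)

def build_review_queue(accounts: list[dict]) -> list[dict]:
    """Collect accounts whose last-seen user agent warrants human review."""
    return [
        account for account in accounts
        if is_suspicious_user_agent(account.get("last_user_agent", ""))
    ]

# Example usage with made-up account records:
accounts = [
    {"id": "u1", "last_user_agent": "Mozilla/5.0 (Windows NT 10.0) Chrome/120.0"},
    {"id": "u2", "last_user_agent": "Mozilla/5.0 HeadlessChrome/119.0"},
]
print([a["id"] for a in build_review_queue(accounts)])  # -> ['u2']
```

In practice, user agent screening is only one weak signal among many, which is presumably why the advisory pairs it with account validation and authentication upgrades.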

This is a continuation of how AI tools are being (and will be) used to expand actors’ ability to undertake next-generation digital influence campaigns. And while it is adversaries who are being caught using these techniques today, we should anticipate that private companies (and others) in democratic and non-democratic countries alike will offer similar capabilities in the near future.


Liberal Fictions, AI technologies, and Human Rights

Although we talk the talk of individual consent and control, such liberal fictions are no longer sufficient to provide the protection needed to ensure that individuals and the communities to which they belong are not exploited through the data harvested from them. This is why acknowledging the role that data protection law plays in protecting human rights, autonomy and dignity is so important. This is why the human rights dimension of privacy should not just be a ‘factor’ to take into account alongside stimulating innovation and lowering the regulatory burden on industry. It is the starting point and the baseline. Innovation is good, but it cannot be at the expense of human rights.

— Prof. Teresa Scassa, “Bill C-27 and a human rights-based approach to data protection”

It’s notable that Prof. Scassa speaks about the way in which Bill C-27’s preamble was supplemented with language about human rights as a way to assuage some public critique of the legislation. Preambles, however, lack the force of law and do not compel judges to interpret legislation in a particular way. They are often better read as a way to explain legislation to the public or to strike up discussions with the judiciary when legislation repudiates a court decision.

For a long-form analysis of the utility of preambles, see Prof. Kent Roach’s “The Uses and Audiences of Preambles in Legislation.”


Instagram’s Ongoing Trust and Safety Problem

A New York Times investigation reveals how Instagram promotes posts that include young girls to male users, including sexual predators.

Aside from reaching a surprisingly large proportion of men, the ads got direct responses from dozens of Instagram users, including phone calls from two accused sex offenders, offers to pay the child for sexual acts and professions of love.

The results suggest that the platform’s algorithms play an important role in directing men to photos of children. And they echo concerns about the prevalence of men who use Instagram to follow and contact minors, including those who have been arrested for using social media to solicit children for sex.



… though The Times chose topics that the company estimated were dominated by women, the ads were shown, on average, to men about 80 percent of the time, according to a Times analysis of Instagram’s audience data. In one group of tests, photos showing the child went to men 95 percent of the time, on average, while photos of the items alone went to men 64 percent of the time.

These findings are deeply disturbing, to say the absolute least.


New York City’s Chatbot: A Warning to Other Government Agencies?

A good article by The Markup assesses the accuracy of New York City’s municipal chatbot, which is intended to provide New Yorkers with information about starting or operating a business in the city. The journalists found that the chatbot regularly provided false or incorrect information, which could create legal repercussions for businesses and significantly discriminate against city residents. Problematic outputs included incorrect housing-related information, as well as wrong answers about whether businesses must accept cash for services rendered, whether employers can take cuts of employees’ tips, and more.
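The Markup’s approach boils down to asking the chatbot questions with known correct answers and scoring the replies. Below is a minimal sketch of that kind of audit harness; the endpoint URL, the response format, the ground-truth pairs, and the substring-matching scoring rule are all assumptions for illustration, not The Markup’s actual methodology or the city’s API.

```python
# Minimal sketch of an accuracy audit for a municipal chatbot.
# The endpoint, response format, and ground-truth pairs are invented
# for illustration; they are not the city's actual API or The Markup's data.
import requests

CHATBOT_URL = "https://example.gov/chatbot/api"  # hypothetical endpoint

GROUND_TRUTH = [
    # (question, substring a correct answer should contain)
    ("Must businesses accept cash as payment?", "must accept cash"),
    ("Can employers take a share of workers' tips?", "cannot"),
]

def ask_chatbot(question: str) -> str:
    """Send a question to the chatbot and return its text reply."""
    resp = requests.post(CHATBOT_URL, json={"message": question}, timeout=30)
    resp.raise_for_status()
    return resp.json()["reply"]  # assumed response shape

def audit() -> float:
    """Return the fraction of answers that contain the expected claim."""
    correct = 0
    for question, expected in GROUND_TRUTH:
        answer = ask_chatbot(question).lower()
        if expected in answer:
            correct += 1
        else:
            print(f"Possible error on {question!r}: {answer[:80]!r}")
    return correct / len(GROUND_TRUTH)

if __name__ == "__main__":
    print(f"Accuracy: {audit():.0%}")
```

Even a crude harness like this, run before deployment, would have surfaced the kinds of errors the journalists found after the fact.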

While New York does include a warning to those using the chatbot, it remains unclear (and perhaps doubtful) whether residents who use it will know when to dispute its outputs. Moreover, statements about how the tool can be helpful, and about the sources it is trained on, may cause individuals to trust the chatbot.

In aggregate, this speaks to how important it is to effectively communicate with users, beyond policies that simply mandate some kind of disclosure of the risks associated with these tools. It also demonstrates the importance of government institutions more carefully assessing (and appreciating) the risks of these systems prior to deploying them.


RCMP Found to Unlawfully Collect Publicly Available Information

The recent report from the Office of the Privacy Commissioner of Canada (OPC), entitled “Investigation of the RCMP’s collection of open-source information under Project Wide Awake,” is an important read for those interested in the restrictions that apply to federal government agencies’ collection of this kind of information.

The OPC found that the RCMP:

  • had sought to outsource its own legal accountabilities to a third-party vendor that aggregated information,
  • was unable to demonstrate that its vendor was lawfully collecting Canadian residents’ personal information,
  • operated in contravention of prior guarantees or agreements between the OPC and the RCMP,
  • was relying on a deficient privacy impact assessment, and
  • failed to adequately disclose to Canadian residents how information was being collected, with the effect of preventing them from understanding the activities that the RCMP was undertaking.

It is a breathtaking condemnation of the method by which the RCMP collected open-source intelligence, and it includes assertions that the agency is involved in activities that stand in contravention of PIPEDA and the Privacy Act, as well as of its own internal processes and procedures. The findings in this investigation build on past investigations into how Clearview AI collected facial images to build biometric templates, on guidance on publicly available information, and on joint cross-national guidance concerning data scraping and the protection of privacy.


Near-Term Threats Posed by Emergent AI Technologies

In January, the UK’s National Cyber Security Centre (NCSC) published its assessment of the near-term impact of AI on cyber threats. The whole assessment is worth reading for its clarity and brevity in identifying different ways that AI technologies will be used by high-capacity state actors, by other state and well-resourced criminal and mercenary actors, and by comparatively low-skill actors.

A few items which caught my eye:

  • More sophisticated uses of AI in cyber operations are highly likely to be restricted to threat actors with access to quality training data, significant expertise (in both AI and cyber), and resources. More advanced uses are unlikely to be realised before 2025.
  • AI will almost certainly make cyber operations more impactful because threat actors will be able to analyse exfiltrated data faster and more effectively, and use it to train AI models.
  • AI lowers the barrier for novice cyber criminals, hackers-for-hire and hacktivists to carry out effective access and information gathering operations. This enhanced access will likely contribute to the global ransomware threat over the next two years.
  • Cyber resilience challenges will become more acute as the technology develops. To 2025, GenAI and large language models will make it difficult for everyone, regardless of their level of cyber security understanding, to assess whether an email or password reset request is genuine, or to identify phishing, spoofing or social engineering attempts.

There are more insights, such as the value of the training data held by high-capacity actors and the likelihood that low-skill actors will see significant upskilling over the next 18 months due to the availability of AI technologies.

The potential to assess information more quickly may have particularly notable impacts in the national security space, enable more effective corporate espionage operations, and enhance cybercriminal activities. In all cases, the ability to assess and query volumes of information at speed and scale will let threat actors extract value from information more efficiently than they can today.
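To make the ‘speed and scale’ point concrete: even commodity retrieval tooling can rank a large document set against a query in seconds, and the AI-assisted triage the NCSC describes only widens that gap. The sketch below uses scikit-learn’s TF-IDF vectorizer as a simple stand-in; the documents and query are invented examples.

```python
# Sketch of ranking a document set against a query, as a simple stand-in
# for the AI-assisted data triage the NCSC assessment anticipates.
# The documents and query are invented examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Quarterly financials and acquisition plans for the satellite division.",
    "Cafeteria menu for the week of March 4th.",
    "Credential rotation policy and VPN configuration notes.",
]
query = "merger and acquisition strategy"

# Build a TF-IDF index over the corpus, then score each document against
# the query using cosine similarity.
vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(documents)
query_vec = vectorizer.transform([query])
scores = cosine_similarity(query_vec, doc_matrix).ravel()

# Rank documents by relevance; an analyst (or attacker) reads the top hits first.
for score, doc in sorted(zip(scores, documents), reverse=True):
    print(f"{score:.2f}  {doc}")
```

Swap the TF-IDF index for an LLM with a retrieval layer and the same workflow scales to millions of exfiltrated documents, which is precisely the efficiency gain at issue.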

The fact that the same technologies may enable lower-skilled actors to undertake wider ransomware operations, in which it will be challenging to distinguish legitimate from illegitimate security-related emails, also speaks to the desperate need for organizations to transition to higher-security solutions, including multi-factor authentication (MFA) or passkeys.
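For organizations beginning that transition, time-based one-time passwords (TOTP) are among the simpler forms of MFA to stand up. Below is a minimal sketch using the pyotp library; the user name, issuer, and inline secret handling are illustrative only, as a production deployment would provision and store per-user secrets server-side.

```python
# Minimal TOTP-based second factor, sketched with the pyotp library.
# In production, the secret would be provisioned per user and stored
# server-side; it is generated inline here purely for illustration.
import pyotp

# Enrolment: generate a per-user secret and share it with the user's
# authenticator app (usually rendered as a QR code of this URI).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)
print("Provisioning URI:",
      totp.provisioning_uri(name="alice@example.com", issuer_name="ExampleCorp"))

# Login: the user submits the six-digit code from their authenticator
# alongside their password; verify() checks it against the current window.
submitted_code = totp.now()  # stands in for user input in this sketch
if totp.verify(submitted_code):
    print("Second factor accepted.")
else:
    print("Second factor rejected.")
```

Passkeys go further still, since there is no shared code for a phishing page to harvest, but even TOTP-style MFA raises the bar considerably against the email-driven attacks the NCSC anticipates.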


Older Adults’ Perception of Smart Home Technologies

Percy Campbell et al.’s article, “User Perception of Smart Home Surveillance Among Adults Aged 50 Years and Older: Scoping Review,” is a really interesting bit of work on older adults’ perceptions of smart home technologies (SHTs). The authors conducted a review of other studies on this topic to, ultimately, derive a series of aggregated insights that clarify the state of the literature and, also, make clear how policy makers could start to think about the issues older adults associate with SHTs.

Some key themes/issues that arose from the studies included:

  • Privacy: different SHTs were perceived differently. Key, however, was that privacy concerns were sometimes highly contextual by region, with one possible effect being that it can be challenging to generalize from one study about specific privacy interests to a global population.
  • Collection of Data — Why and How: people were generally unclear about what was being collected or for what purpose. A lack of literacy may raise issues of ongoing, meaningful consent to collection.
  • Benefits and Risks: data breaches/hacks, malfunction, affordability, and user trust were all possible challenges/risks. However, participants in the studies also generally found considerable benefits in these technologies; most significantly, they perceived that their physical safety was enhanced.
  • Safety Perceptions: all types of SHTs were seen as useful for safety purposes, especially in accidents or emergencies. Safety-enhancing features may be preferred in SHTs by those 50+ years of age.

Given the privacy, safety, and related themes, and given how regulatory systems are sometimes being outpaced by advances in technology, the authors propose a data justice framework to regulate or govern SHTs. This entails:

  • Visibility: there are benefits to being ‘seen’ by SHTs but, also, privacy protections need to be applied so individuals can selectively remove themselves from the view of commercial and other parties.
  • Digital engagement/disengagement: individuals should be supported in making autonomous decisions about how engaged with, or in control of, systems they are. They should, also, be able to disengage, or have only certain SHTs used to monitor or affect them.
  • Right to challenge: individuals should be able to challenge decisions made about them by SHTs. This is particularly important in the face of AI, which may have ageist biases built into it.

While I still think that regulatory systems have a role to play in this space — if only regulators are both appropriately resourced and empowered! — I take the broader point that regulatory approaches should, also, include ‘data justice’ components. At the same time, I think that most contemporary or recently updated Western privacy and human rights legislation already includes these precepts and, also, that there is a real danger in asserting a need to build a new (more liberal/individualistic) approach to collective action problems that regulators, generally, are better equipped to address than individuals are.