
Measuring the Effects of Active Disinformation Operations

This is a good long-form piece by Thomas Rid on disinformation activities, with a particular focus on Russian operations. A key takeaway for me is that exposing disinformation campaigns can readily beget subsequent campaigns, as the discovery (and journalistic coverage) of the initial campaign can bestow a kind of legitimacy upon the adversaries in the eyes of their paymasters.

One way to overcome this is to adopt tactics that not only expose disinformation campaigns but also actively work to disable campaigners’ operational capacities at both the technical and staffing levels. Merely revealing a campaign, by contrast, can serve as fuel for additional funding of disinformation operators and their ability to launch subsequent campaigns or operations.


TikTok and the “Problem” of Foreign Influence

This is one of the clearer assessments of the efficacy (or lack thereof) of influencing social groups and populations using propaganda communicated over social media. While a short article can’t address every dimension of propaganda and influence operations, or their potential effects, it does a good job of discussing some of the weaknesses of these operations and some of the less robust arguments for why we should be concerned about them.1

Key points in the article include:

  1. Individuals are actually quite resistant to changing their minds when exposed to new or contradictory information, which can impede the utility of propaganda/influence operations.
  2. While policy options tend to focus on the supply side (how do we stop propaganda/influence?), it is the demand side (I want to read about an issue) that is a core source of the challenge.
  3. Large-scale, one-time pushes to shift existing attitudes are likely to be detected and, subsequently, to de-legitimize any social media source that exhibits obvious propaganda/influence operations.

This said, the article operates on the presumption that people’s pre-existing views are being challenged by propaganda/influence operations and that they will naturally resist such challenges. By way of contrast, where issues are new or emerging, where past positions have been upset, or where information is sought in response to a significant social or political change, there remains an opportunity to effect change in individuals’ perceptions of issues.2 Nevertheless, those most likely to be affected will be people who are seeking out particular kinds of information because they believe something has epistemically or ontologically changed in their belief structures, and who have thus shifted from a closed to an open position in which they are prepared to receive new positions and update their beliefs.


  1. In the past I have raised questions about the appropriateness of focusing so heavily on TikTok as a national security threat.
  2. This phenomenon is well documented in the agenda-setting literatures.

Russian State Media Disinformation Campaign Exposed

Today, a series of Western allies — including Canada, the United States, and the Netherlands — disclosed the existence of a sophisticated Russian social media influence operation run by RT. The details of the campaign are exquisite, and include some of the code used to drive the operation.

Of note, the campaign used a covert, artificial intelligence (AI)-enhanced software package to create fictitious online personas, representing a number of nationalities, that posted content on X (formerly Twitter). Using this tool, RT affiliates disseminated disinformation to and about a number of countries, including the United States, Poland, Germany, the Netherlands, Spain, Ukraine, and Israel.

Although the tool was only identified on X, the authoring organizations’ analysis of the software used in the campaign indicated that the developers intended to expand its functionality to other social media platforms. That analysis also indicated the tool is capable of the following:

  1. Creating authentic-appearing social media personas en masse;
  2. Deploying content similar to typical social media users;
  3. Mirroring disinformation of other bot personas;
  4. Perpetuating the use of pre-existing false narratives to amplify malign foreign influence; and
  5. Formulating messages, to include the topic and framing, based on the specific archetype of the bot.

Mitigations to address this influence campaign include:

  1. Consider implementing processes to validate that accounts are created and operated by a human person who abides by the platform’s respective terms of use. Such processes could be similar to well-established Know Your Customer guidelines.
  2. Consider reviewing and upgrading authentication and verification processes based on the information provided in the advisory.
  3. Consider protocols for identifying and subsequently reviewing users with known-suspicious user agent strings; a rough sketch of what such a check might look like follows this list.
  4. Consider making user accounts Secure by Default, for example by enabling MFA by default, using default settings that support privacy, removing personally identifiable information shared without consent, and clearly documenting acceptable behavior.
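
To make the third mitigation slightly more concrete, here is a minimal sketch, in Python, of how a platform might flag accounts whose sessions present known-suspicious user agent strings for human review. The field names, the sample data, and the list of suspicious substrings are hypothetical illustrations rather than anything drawn from the advisory.

```python
# Minimal sketch of mitigation 3: flag accounts whose sessions present
# known-suspicious user agent strings so a human can review them.
# Field names, sample data, and the substring list are hypothetical; a real
# platform would source these from its own telemetry and threat intelligence.

from dataclasses import dataclass

# Hypothetical substrings associated with automation frameworks rather than
# ordinary browsers or official mobile apps (all lowercase for matching).
SUSPICIOUS_UA_SUBSTRINGS = (
    "python-requests",
    "headlesschrome",
    "phantomjs",
    "selenium",
    "okhttp",  # often legitimate, so best treated as a weak signal
)


@dataclass
class Session:
    account_id: str
    user_agent: str


def is_suspicious(user_agent: str) -> bool:
    """Return True if the user agent contains a known-suspicious substring."""
    ua = user_agent.lower()
    return any(marker in ua for marker in SUSPICIOUS_UA_SUBSTRINGS)


def accounts_for_review(sessions: list[Session]) -> set[str]:
    """Collect account IDs with at least one suspicious session for human review."""
    return {s.account_id for s in sessions if is_suspicious(s.user_agent)}


if __name__ == "__main__":
    sample = [
        Session("acct-001", "Mozilla/5.0 (Windows NT 10.0; Win64; x64) ..."),
        Session("acct-002", "python-requests/2.31.0"),
        Session("acct-003", "Mozilla/5.0 ... HeadlessChrome/124.0"),
    ]
    # Expect acct-002 and acct-003 to be queued for review.
    print(accounts_for_review(sample))
```

In practice, a match like this would presumably be treated as one weak signal to be weighed alongside others (account age, posting cadence, network overlap) before any enforcement action, rather than as grounds for automatic suspension.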

This is a continuation of how AI tools are being (and will be) used to expand actors’ ability to undertake next-generation digital influence campaigns. And while it is adversaries who are being caught using these techniques today, we should anticipate that private companies (and others) will offer similar capabilities in the near future, in democratic and non-democratic countries alike.