
Russian State Media Disinformation Campaign Exposed

Today, a series of Western allies — including Canada, the United States, and the Netherlands — disclosed the existence of a sophisticated Russian social media influence operation run by RT. The details of the campaign are exquisite, and include some of the code used to drive the operation.

Of note, the campaign used a covert artificial intelligence (AI)-enhanced software package to create fictitious online personas, representing a number of nationalities, to post content on X (formerly Twitter). Using this tool, RT affiliates disseminated disinformation to and about a number of countries, including the United States, Poland, Germany, the Netherlands, Spain, Ukraine, and Israel.

Although the tool was only identified on X, the authoring organizations’ analysis of the software used for the campaign indicated the developers intended to expand its functionality to other social media platforms. The authoring organizations’ analysis also indicated the tool is capable of the following:

  1. Creating authentic-appearing social media personas en masse;
  2. Deploying content similar to typical social media users;
  3. Mirroring disinformation of other bot personas;
  4. Perpetuating the use of pre-existing false narratives to amplify malign foreign influence; and
  5. Formulating messages, to include the topic and framing, based on the specific archetype of the bot.
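Capability (3) — bot personas mirroring one another's disinformation — suggests a defensive counterpart: near-duplicate content detection across accounts. The advisory does not publish detection logic, so the following is only an illustrative sketch, using word-level shingles and Jaccard similarity; the account IDs and the 0.7 threshold are invented for the example.

```python
from itertools import combinations

def shingles(text: str, k: int = 3) -> set:
    """Word-level k-shingles of a post, lowercased."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity of two shingle sets (0.0 when either is empty)."""
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

def mirrored_pairs(posts: dict, threshold: float = 0.7) -> list:
    """Flag account pairs whose posts are near-duplicates of each other.

    `posts` maps a (hypothetical) account ID to that account's post text.
    """
    sigs = {acct: shingles(text) for acct, text in posts.items()}
    return [
        (a, b)
        for a, b in combinations(sigs, 2)
        if jaccard(sigs[a], sigs[b]) >= threshold
    ]

# Two bots posting lightly reworded copies of the same message score high;
# an unrelated post does not.
posts = {
    "bot_a": "breaking news the election was stolen share now",
    "bot_b": "breaking news the election was stolen share today",
    "user_c": "lovely weather for a bike ride this afternoon",
}
```

Here `mirrored_pairs(posts)` flags only `("bot_a", "bot_b")`. Real platforms would use scalable variants of this idea (e.g. MinHash/LSH) rather than pairwise comparison, but the underlying similarity signal is the same.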

Mitigations to address this influence campaign include:

  1. Consider implementing processes to validate that accounts are created and operated by a human who abides by the platform’s respective terms of use. Such processes could be similar to well-established Know Your Customer guidelines.
  2. Consider reviewing and upgrading authentication and verification processes based on the information provided in this advisory.
  3. Consider protocols for identifying, and subsequently reviewing, users with known-suspicious user agent strings.
  4. Consider making user accounts Secure by Default, using default settings such as MFA, defaults that support privacy, removal of personally identifiable information shared without consent, and clear documentation of acceptable behavior.
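Mitigation (3) — screening for known-suspicious user agent strings — is straightforward to sketch. The patterns and log shape below are purely illustrative assumptions (automation frameworks often announce themselves in the user agent); an actual deny-list would come from threat intelligence, not from this example.

```python
import re

# Illustrative patterns only; real deny-lists would come from threat intel
# and would be maintained alongside the advisory's indicators.
SUSPICIOUS_UA_PATTERNS = [
    re.compile(r"headlesschrome", re.IGNORECASE),   # headless browser automation
    re.compile(r"python-requests", re.IGNORECASE),  # scripted HTTP client
    re.compile(r"phantomjs", re.IGNORECASE),        # legacy headless browser
]

def is_suspicious_user_agent(ua: str) -> bool:
    """True if the user agent string matches a known-suspicious pattern."""
    return any(p.search(ua) for p in SUSPICIOUS_UA_PATTERNS)

def accounts_for_review(sessions) -> list:
    """Collect account IDs whose sessions used a suspicious user agent.

    `sessions` is an iterable of (account_id, user_agent) pairs — a
    hypothetical log shape used here only for illustration.
    """
    return sorted({acct for acct, ua in sessions if is_suspicious_user_agent(ua)})

sessions = [
    ("acct_1", "Mozilla/5.0 (Windows NT 10.0) HeadlessChrome/115.0"),
    ("acct_2", "Mozilla/5.0 (iPhone; CPU iPhone OS 16_0) Safari/605.1"),
    ("acct_3", "python-requests/2.31.0"),
]
```

With this sample log, `accounts_for_review(sessions)` returns `["acct_1", "acct_3"]` — the two sessions driven by automation tooling. User agent strings are trivially spoofable, so this is a triage signal for review, not a ban criterion on its own.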

This is further evidence of how AI tools are being used — and will continue to be used — to expand actors' ability to undertake next-generation digital influence campaigns. And while it is state adversaries being caught using these techniques today, we should anticipate that private companies (and others) will offer similar capabilities in the near future, in democratic and non-democratic countries alike.