
Significant New Cybersecurity Protections Added in iOS 18.1

Apple has quietly introduced an enhanced security feature in iOS 18.1. If you haven’t authenticated to your device recently (within the past few days), the device will automatically revert from the After First Unlock (AFU) state to the Before First Unlock (BFU) state, with the effect of better protecting user information.1

Users may experience this new functionality as sometimes needing to enter their credentials to unlock a device they haven’t used recently. The effect is that stolen or lost devices will be returned to a higher state of security, impeding unauthorized parties from gaining access to the data that users have stored on them.

There is a secondary effect, however: by automatically returning seized devices to a higher state of security (i.e., BFU) after a few days, these protections in iOS 18.1 may impede some mobile device forensics practices. This can reduce the volume of user information that is available to state agencies, or to other parties with the resources to forensically analyze devices.
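The practical difference between the two states comes down to which file encryption keys are available. On iOS, developers assign files a Data Protection class that governs this; the following is a minimal Swift sketch (file names and contents are illustrative) showing the two classes that map onto the AFU/BFU distinction.

```swift
import Foundation

// A minimal sketch of iOS Data Protection classes; file names and
// contents are illustrative. The protection class assigned at write
// time determines when the file's decryption key is available.
func saveNotes() {
    let payload = Data("example note".utf8)

    do {
        // Readable any time after the first unlock following a boot (AFU).
        // When a device reverts to BFU, this class's key is evicted again,
        // making the file unreadable until the passcode is re-entered.
        try payload.write(
            to: URL(fileURLWithPath: "notes-afu.txt"),
            options: .completeFileProtectionUntilFirstUserAuthentication
        )

        // Readable only while the device is actively unlocked; unavailable
        // in both the locked-AFU and BFU states.
        try payload.write(
            to: URL(fileURLWithPath: "notes-locked.txt"),
            options: .completeFileProtection
        )
    } catch {
        print("Write failed: \(error)")
    }
}
```

Because the keys for these classes are derived in part from the user’s passcode, a device that has reverted to BFU simply cannot decrypt most protected files, which is what limits forensic extraction.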

While this functionality may raise concerns that lawful government investigations will be impaired, it is worth recalling that Apple is responsible for protecting devices around the world. Numerous governments, commercial organizations, and criminal groups are amongst those using mobile device forensics practices, and the iOS devices in the hands of a Canadian university student are functionally the same as those used by Fortune 50 executives. The result is that all users receive an equivalently high level of security, and all data is strongly safeguarded regardless of a user’s economic, political, or socio-cultural situation.


  1. For more details on the differences between the Before First Unlock (BFU) and After First Unlock (AFU) states, see: https://blogs.dsu.edu/digforce/2023/08/23/bfu-and-afu-lock-states/ ↩︎

Encryption Use Hits a New Height in Canada

In a continuing demonstration of the importance of strong and privacy-protective communications, the federal Foreign Interference Commission has created a Signal account to receive confidential information.

Encrypted Messaging
For those who may feel more comfortable providing information to the Commission using encrypted means, they may do so through the Signal – Private Messenger app. Those who already have a Signal account can contact the Commission using our username below. Others will have to first download the app and set up an account before they can communicate with the Commission.

The Commission’s Signal Username is signal_pifi_epie20.24

Signal users can also scan the QR code below for the Commission’s username:

The Commission has put strict measures in place to protect the confidentiality of any information provided through this Signal account.

Not so long ago, the Government of Canada was arguing for an irresponsible encryption policy that included the ability to backdoor end-to-end encryption. It’s hard to overstate the significance of a government body now explicitly adopting Signal.


The Ongoing Problems of Placing Backdoors in Telecommunications Networks

In a cyber incident reminiscent of Operation Aurora,1 threat actors successfully penetrated American telecommunications companies (and a small number of other countries’ service providers) to gain access to lawful interception systems or associated data. The result was that:

For months or longer, the hackers might have held access to network infrastructure used to cooperate with lawful U.S. requests for communications data, according to people familiar with the matter, which amounts to a major national security risk. The attackers also had access to other tranches of more generic internet traffic, they said.

The surveillance systems believed to be at issue are used to cooperate with requests for domestic information related to criminal and national security investigations. Under federal law, telecommunications and broadband companies must allow authorities to intercept electronic information pursuant to a court order. It couldn’t be determined if systems that support foreign intelligence surveillance were also vulnerable in the breach.

Not only is this a major intelligence coup for the adversary in question, but it once more reveals the fundamental difficulty of deliberately establishing lawful access/interception systems in communications infrastructures to support law enforcement and national security investigations while, simultaneously, preventing adversaries from taking advantage of the same deliberately designed vulnerabilities.


Measuring the Effects of Active Disinformation Operations

This is a good long-form piece by Thomas Rid on disinformation activities, with a particular focus on Russian operations. A key takeaway for me is that the exposure of a disinformation campaign has real potential to beget subsequent campaigns, as the discovery (and journalistic coverage) of the initial campaign can bestow a kind of legitimacy upon adversaries in the eyes of their paymasters.

One way to overcome this is to adopt tactics that not only expose disinformation campaigns but also actively work to disable campaigners’ operational capacities at both the technical and the staff level. Merely revealing disinformation campaigns, by contrast, can serve as fuel for additional funding of disinformation operators, and of their ability to launch subsequent campaigns or operations.


TikTok and the “Problem” of Foreign Influence

This is one of the clearer assessments of the efficacy (and lack thereof) of influencing social groups and populations using propaganda communicated over social media. While a short article can’t address every dimension of propaganda and influence operations, and their potential effects, this does a good job discussing some of the weaknesses of these operations and some of the less robust arguments about why we should be concerned about them.1

Key points in the article include:

  1. Individuals are actually pretty resistant to changing their minds when exposed to new or contradictory information, which can impede the utility of propaganda/influence operations.
  2. While policy options tend to focus on the supply side (how do we stop propaganda/influence?), it is the demand side (I want to read about an issue) that is a core source of the challenge.
  3. Large-scale, one-time pushes to shift existing attitudes are likely to be detected and, subsequently, to de-legitimize any social media source that exhibits obvious propaganda/influence operations.

This said, the article presumes that people’s pre-existing views are being challenged by propaganda/influence operations and that they will naturally resist such challenges. By way of contrast, where there are new or emerging issues, where past positions have been upset, or where information is sought in response to a significant social or political change, there remains an opportunity to effect change in individuals’ perceptions of issues.2 Nevertheless, those most likely to be affected will be those who are seeking out particular kinds of information because they believe something has epistemically or ontologically changed in their belief structures, and who have thus shifted from a closed to an open position, ready to receive new positions and update their beliefs.


  1. In the past I have raised questions about the appropriateness of focusing so heavily on TikTok as a national security threat. ↩︎
  2. This phenomenon is well documented in the agenda-setting literatures. ↩︎

Russian State Media Disinformation Campaign Exposed

Today, a series of Western allies, including Canada, the United States, and the Netherlands, disclosed the existence of a sophisticated Russian social media influence operation run by RT. The details of the campaign are exquisite, and include some of the code used to drive the operation.

Of note, the campaign used a covert artificial intelligence (AI) enhanced software package to create fictitious online personas, representing a number of nationalities, to post content on X (formerly Twitter). Using this tool, RT affiliates disseminated disinformation to and about a number of countries, including the United States, Poland, Germany, the Netherlands, Spain, Ukraine, and Israel.

Although the tool was only identified on X, the authoring organizations’ analysis of the software used for the campaign indicated the developers intended to expand its functionality to other social media platforms. The authoring organizations’ analysis also indicated the tool is capable of the following:

  1. Creating authentic appearing social media personas en masse;
  2. Deploying content similar to typical social media users;
  3. Mirroring disinformation of other bot personas;
  4. Perpetuating the use of pre-existing false narratives to amplify malign foreign influence; and
  5. Formulating messages, to include the topic and framing, based on the specific archetype of the bot.

Mitigations to address this influence campaign include:

  1. Consider implementing processes to validate that accounts are created and operated by a human person who abides by the platform’s respective terms of use. Such processes could be similar to well-established Know Your Customer guidelines;
  2. Consider reviewing and making upgrades to authentication and verification processes based on the information provided in this advisory;
  3. Consider protocols for identifying and subsequently reviewing users with known-suspicious user agent strings (see the sketch after this list);
  4. Consider making user accounts Secure by Default, with default settings such as MFA, default settings that support privacy, the removal of personally identifiable information shared without consent, and clear documentation of acceptable behavior.
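On the third mitigation, the advisory doesn’t specify an implementation, but the core of such a protocol is a screening pass over sign-in telemetry. The following is a minimal, hedged sketch in Swift; the isSuspicious helper and the pattern list are illustrative assumptions, not anything drawn from the advisory.

```swift
import Foundation

// Illustrative fragments associated with automation tooling; a real
// deployment would maintain and update this list from threat intelligence.
let suspiciousFragments = ["headlesschrome", "python-requests", "curl/", "phantomjs"]

// Flags a sign-in whose User-Agent contains a known-suspicious fragment.
// (Hypothetical helper; not taken from the advisory.)
func isSuspicious(userAgent: String) -> Bool {
    let normalized = userAgent.lowercased()
    return suspiciousFragments.contains { normalized.contains($0) }
}

let sample = "Mozilla/5.0 (X11; Linux x86_64) HeadlessChrome/119.0.0.0"
print(isSuspicious(userAgent: sample)) // true
```

Matching user agent strings is a weak signal on its own, since they are trivially spoofed, which is presumably why the advisory frames this as identifying accounts for subsequent review rather than for automatic suspension.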

This is a continuation of how AI tools are being (and will be) used to expand actors’ abilities to undertake next-generation digital influence campaigns. And while it is adversaries who are being found using these techniques today, we should anticipate that private companies (and others) will offer similar capabilities in the near future, in democratic and non-democratic countries alike.


Liberal Fictions, AI technologies, and Human Rights

Although we talk the talk of individual consent and control, such liberal fictions are no longer sufficient to provide the protection needed to ensure that individuals and the communities to which they belong are not exploited through the data harvested from them. This is why acknowledging the role that data protection law plays in protecting human rights, autonomy and dignity is so important. This is why the human rights dimension of privacy should not just be a ‘factor’ to take into account alongside stimulating innovation and lowering the regulatory burden on industry. It is the starting point and the baseline. Innovation is good, but it cannot be at the expense of human rights.

— Prof. Teresa Scassa, “Bill C-27 and a human rights-based approach to data protection”

It’s notable that Prof. Scassa speaks about the way in which Bill C-27’s preamble was supplemented with language about human rights as a way to assuage some public critique of the legislation. Preambles, however, lack the force of law and do not compel judges to interpret legislation in a particular way. They are often better read as a way of explaining legislation to the public, or of striking up a discussion with the judiciary when legislation repudiates a court decision.

For a long-form analysis of the utility of preambles, see Prof. Kent Roach’s “The Uses and Audiences of Preambles in Legislation.”


Instagram’s Ongoing Trust and Safety Problem

A New York Times investigation reveals how Instagram promotes posts that include young girls to male users, including sexual predators.

Aside from reaching a surprisingly large proportion of men, the ads got direct responses from dozens of Instagram users, including phone calls from two accused sex offenders, offers to pay the child for sexual acts and professions of love.

The results suggest that the platform’s algorithms play an important role in directing men to photos of children. And they echo concerns about the prevalence of men who use Instagram to follow and contact minors, including those who have been arrested for using social media to solicit children for sex.



… though The Times chose topics that the company estimated were dominated by women, the ads were shown, on average, to men about 80 percent of the time, according to a Times analysis of Instagram’s audience data. In one group of tests, photos showing the child went to men 95 percent of the time, on average, while photos of the items alone went to men 64 percent of the time.

These findings are deeply disturbing to say the absolute least.


New York City’s Chatbot: A Warning to Other Government Agencies?

A good article by The Markup assessed the accuracy of New York City’s municipal chatbot, which is intended to provide New Yorkers with information about starting or operating a business in the city. The journalists found that the chatbot regularly provided false or incorrect information, which could result in legal repercussions for businesses and significantly discriminate against city residents. Problematic outputs included incorrect housing-related information, wrong answers about whether businesses must accept cash for services rendered and whether employers can take cuts of employees’ tips, and more.

While New York does include a warning for those using the chatbot, it remains unclear (and perhaps doubtful) that residents who use it will know when to dispute its outputs. Moreover, statements about how the tool can be helpful, and about the sources on which it was trained, may lead individuals to place undue trust in the chatbot.

In aggregate, this speaks to how important it is to effectively communicate with users, above and beyond policies that simply mandate some kind of disclosure of the risks associated with these tools, and it demonstrates the importance of government institutions more carefully assessing (and appreciating) the risks of these systems prior to deploying them.


RCMP Found to Unlawfully Collect Publicly Available Information

The recent report from the Office of the Privacy Commissioner of Canada (OPC), entitled “Investigation of the RCMP’s collection of open-source information under Project Wide Awake,” is an important read for those interested in the restrictions that apply to federal government agencies’ collection of this kind of information.

The OPC found that the RCMP:

  • had sought to outsource its own legal accountabilities to a third-party vendor that aggregated information,
  • was unable to demonstrate that its vendor was lawfully collecting Canadian residents’ personal information,
  • operated in contravention of prior guarantees or agreements between the OPC and the RCMP,
  • was relying on a deficient privacy impact assessment, and
  • failed to adequately disclose to Canadian residents how information was being collected, with the effect of preventing them from understanding the activities that the RCMP was undertaking.

It is a breathtaking condemnation of the methods by which the RCMP collected open-source intelligence, and it includes assertions that the agency was involved in activities that contravene PIPEDA and the Privacy Act, as well as its own internal processes and procedures. The findings in this investigation build on past investigations into how Clearview AI collected facial images to build biometric templates, on guidance on publicly available information, and on joint cross-national guidance concerning data scraping and the protection of privacy.