Link

Who Benefits from 5G?

The Financial Times (FT) ran a somewhat mixed piece on the future of 5G. Its thesis is that telecom operators are anxious to realise the financial benefits of their 5G deployments but that these benefits were always expected to arrive in the coming years; there was little, if any, expectation that financial benefits would materialise as soon as the next-generation infrastructures were deployed.

The article correctly notes that consumers are skeptical of the benefits of 5G, and correctly concludes that 5G was always really about the benefits that 5G Standalone will offer businesses. It is, frankly, a poorly edited piece insofar as it combines two relatively distinct things without doing so in a particularly clear way.

5G Extended relies on existing 4G infrastructures. While theoretically faster speeds are available to consumers, along with a tripartite segmentation of spectrum bands that can be used,1 most consumers won’t directly realise the benefits. One group that may benefit, however (and that was not addressed at all in this piece), is rural customers. Opening up the lower-frequency spectrum blocks will allow 5G signals to travel farther, with the benefit significantly accruing to those who cannot receive new copper, coax, or fibre lines. This said, I tend to agree with the article that most of the benefits of 5G haven’t been, and won’t be, directly realised by individual mobile subscribers in the near future.2

5G Standalone is really where 5G will theoretically come alive. It is also going to require a whole new way of designing and securing networks. At least as of a year or so ago, China was a global leader here, largely because it had comparatively poor 4G penetration and so had sought to leapfrog to 5G SA.3 This said, American bans on semiconductors to Chinese telecoms vendors, such as Huawei and ZTE, have definitely had a negative effect on China’s ability to more fully deploy 5G SA.

In the Canadian case we can see investments by our major telecoms into 5G SA applications. Telus, Rogers, and Bell are all pouring money into technology clusters and universities. The goal isn’t to learn how much faster consumers’ phones or tablets can download data (though new algorithms to better manage/route/compress data are always under research) but, instead, to learn how to take advantage of the more advanced business-to-business features of 5G. That’s where the money is, though the question remains how well telecom carriers will be able to rent-seek on those features when they already make money providing bandwidth and services to businesses paying for telecom products.


  1. Not all countries, however, are allocating the third, high-frequency band, on the basis that its utility remains in doubt. ↩︎
  2. Incidentally: it generally just takes a long, long time to deploy networks. 4G still isn’t reliably available across all of Canada, such as in populated rural parts of the country. This delay meaningfully impedes the ability of farmers, as an example, to adopt smart technologies that would reduce the costs associated with farm and crop management and that could, simultaneously, enable more efficient crop yields. ↩︎
  3. Western telecoms, by comparison, want to extend the life of the capital assets they purchased/deployed around their 4G infrastructures and so prefer to go the 5G Extended route to start their 5G upgrade path. ↩︎
Link

Generalist Policing Models Remain Problematic

From the New York Times’ opinion section, this piece on “Why the F.B.I. Is so far behind on cybercrime?” reinforces the position that American law enforcement is stymied in investigating cybercrimes because:

…it lacks enough agents with advanced computer skills. It has not recruited as many of these people as it needs, and those it has hired often don’t stay long. Its deeply ingrained cultural standards, some dating to the bureau’s first director, J. Edgar Hoover, have prevented it from getting the right talent.

Emblematic of an organization stuck in the past is the F.B.I.’s longstanding expectation that agents should be able to do “any job, anywhere.” While other global law enforcement agencies have snatched up computer scientists, the F.B.I. tried to turn existing agents with no computer backgrounds into digital specialists, clinging to the “any job” mantra. It may be possible to turn an agent whose background is in accounting into a first-rate gang investigator, but it’s a lot harder to turn that same agent into a top-flight computer scientist.

The “any job” mantra also hinders recruitment. People who have spent years becoming computer experts may have little interest in pivoting to another assignment. Many may lack the aptitude for — or feel uneasy with — traditional law enforcement expectations, such as being in top physical fitness, handling a deadly force scenario or even interacting with the public.

This very same issue plagues the RCMP, which also has a generalist model that discourages or hinders specialization. While we do see better business practices in, say, France, with an increasing LEA capacity to pursue cybercrime, we’re not yet seeing North American federal governments overhaul their own policing services.1

Similarly, the FBI is suffering from an ‘arrest’ culture:

The F.B.I.’s emphasis on arrests, which are especially hard to come by in ransomware cases, similarly reflects its outdated approach to cybercrime. In the bureau, prestige often springs from being a successful trial agent, working on cases that result in indictments and convictions that make the news. But ransomware cases, by their nature, are long and complex, with a low likelihood of arrest. Even when suspects are identified, arresting them is nearly impossible if they’re located in countries that don’t have extradition agreements with the United States.

In the Canadian context, not only is pursuing arrests a problem due to jurisdiction, but the complexity of cases can mean an officer spends huge amounts of time on a computer rather than out in the field ‘doing the work’ of their non-cyber-focused colleagues. This perception of just ‘playing games’ or ‘surfing social media’ can sometimes lead to friction between cyber investigators and older-school leaders.2 Making things even more challenging, the resources to train officers to detect and pursue Child Sexual Abuse Material (CSAM) are relatively plentiful, whereas economic and other non-CSAM investigations tend to be severely under-resourced.

There is some hope coming for Canadian investigators, by way of CLOUD agreements between the Canadian and American governments and updates to the Cybercrime Convention, though both will require changes to criminal law, and potentially to provincial privacy laws, to empower LEAs with expanded powers. And even with access to more American data to enable investigations, this will not solve the arrest challenges when criminals are operating out of non-extradition countries.

It remains to be seen whether an expanded capacity to issue warrants to American providers will reduce some of the Canadian need for specialized training to investigate more rudimentary cyber-related crimes or if, instead, it will have a minimal effect overall.


  1. This is also generally true of provincial and municipal services as well. ↩︎
  2. Fortunately this is a less common issue, today, than a decade ago. ↩︎
Link

National Security Means What, Again?

There have been any number of concerns about Elon Musk’s behaviour, especially in recent weeks and months. This has led some commentators to warn that his purchase of Twitter may raise national security risks. Gill and Lehrich try to make this argument in their article, “Elon Musk Owning Twitter is A National Security Threat.” They give three reasons:

First, Musk is allegedly in communication with foreign actors – including senior officials in the Kremlin and Chinese Communist Party – who could use his acquisition of Twitter to undermine American national security.

Will Musk’s foreign investors have influence over Twitter’s content moderation policies? Will the Chinese exploit their significant leverage over Musk to demand he censor criticism of the CCP, or turn the dials up for posts that sow distrust in democracy?

Finally, it’s not just America’s information ecosystem that’s at stake, it’s also the private data of American citizens.

It’s worth noting that at no point do the authors define ‘national security’, which leaves the reader to guess at what they likely mean. More broadly, in journalistic and opinion-writing communities there is a curious, and increasingly common, conjoining of national security and information security. The authors themselves make this link in the kicker paragraph of their article, when they write:

It is imperative that American leaders fully understand Musk’s motives, financing, and loyalties amidst his bid to acquire Twitter – especially given the high-stakes geopolitical reality we are living in now. The fate of American national security and our information ecosystem hang in the balance.1

Information security, generally, is focused on the dangers associated with true or false information being disseminated across a population. It is distinct from cyber security, which is typically focused on the digital security protocols and practices designed to reduce technical computer vulnerabilities. Whereas the former focuses on a public’s mind, the latter attends to how digital and physical systems are hardened against technical exploitation.

Western governments have historically resisted authoritarian governments’ attempts to link the concepts of information security and cyber security. The reason is that authoritarian governments want to establish international principles and norms whereby it becomes appropriate for governments to control the information made available to their publics under the guise of promoting ‘cyber security’. Democratic countries that emphasise the importance of intellectual freedom, freedom of religion, freedom of assembly, and other core rights have historically been opposed to promoting information security norms.

At the same time, misinformation and disinformation have become increasingly popular areas of study and commentary, especially following Donald Trump’s election as POTUS. And, in countries like the United States, Trump’s adoption of lies and misinformation was often cast as a national security issue: correct information should be communicated, and efforts to intentionally communicate false information should be blocked, prohibited, or prevented from massively circulating.

Obviously Trump’s language, actions, and behaviours were incredibly destabilising and abominable for an American president. And his presence on the world stage arguably emboldened many authoritarians around the world. But there is a real risk in using terms like ‘national security’ without definition, especially when the application of ‘national security’ starts to stray into the domain of what could be considered information security. Specifically, as everything becomes ‘national security’ it is possible for authoritarian governments to adopt the language of Western governments and intellectuals, and assert that they too are focused on ‘national security’ whereas, in fact, these authoritarian governments are using the term to justify their own censorious activities.

Now, does this mean that if we in the West are more careful about our use of language, authoritarian governments will become less censorious? No. But by being more careful and thoughtful in our language, public argumentation, and positioning of our policy statements, we may at least prevent those authoritarian governments from using our discourse as a justification for their own activities. We should, then, be careful and precise in what we say to avoid giving a fig leaf of cover to authoritarian activities.

And that will start with parties who use terms like ‘national security’ clearly defining what they mean, such that it is clear how national security differs from information security. Unless, of course, authors and thinkers are in fact leaning into the conceptual apparatus of repressive governments in an effort to save democratic governance. For any author who thinks such a move is wise, however, I must admit that I harbour strong doubts about the efficacy or utility of such attempts.


  1. Emphasis not in original. ↩︎
Link

Can University Faculty Hold Platforms To Account?

Heidi Tworek has a good piece with the Centre for International Governance Innovation, where she questions whether there will be a sufficient number of faculty in Canada (and elsewhere) to make use of information that digital-first companies might be compelled to make available to researchers. The general argument goes that if companies must make information available to academics then these academics can study the information and, subsequently, hold companies to account and guide evidence-based policymaking.

Tworek’s argument focuses on two key things.

  1. First, there has been a decline in the tenured professoriate in Canada, with the effect that the adjunct faculty who are ‘filling in’ are busy teaching and really don’t have a chance to lead research.
  2. Second, while a vanishingly small number of PhD holders obtain a tenure-track role, a reasonable number may be going into the very digital-first companies that researchers need data from to hold them accountable.

On this latter point, she writes:

If the companies have far more researchers than universities have, transparency regulations may not do as much to address the imbalance of knowledge as many expect.

I don’t think that hiring people with PhDs necessarily means that companies are addressing knowledge imbalances. Whatever is learned by these researchers tends to be sheltered within corporate walls and protected by NDAs. So those researchers going into companies may learn what’s going on but be unable (or unmotivated) to leverage what they know in order to inform policy discussions meant to hold companies to account.

To be clear, I really do agree with a lot in this article. However, I think it does have a few areas for further consideration.

First, more needs to be said about what, specifically, ‘transparency’ encompasses and its relationship with data type, availability, etc. Transparency is a deeply contested concept, and there are a lot of ways that the revelation of data creates a funhouse-mirror effect, insofar as what researchers ‘see’ can be very distorted from the reality of what truly is.

Second, making data available isn’t just about whether universities have the professors to do the work but, really, whether the government and its regulators have the staff time as well. Professors are doing a lot of things whereas regulators can assign staff to just work the data, day in and day out. Focus matters.

Third, and related, I have to admit that I have pretty severe doubts about the ability of professors to seriously take up and make use of information from platforms, at scale and with policy impact, because it’s never going to be their full time jobs to do so. Professors are also going to be required to publish in books or journals, which means their outputs will be delayed and inaccessible to companies, government bureaucrats and regulators, and NGO staff. I’m sure academics will have lovely and insightful discussions…but they won’t happen fast enough, or in accessible places or in plain language, to generally affect policy debates.

So, what might need to be added to start fleshing out how universities are organised to make use of data released by companies and have policy impacts in research outputs?

First, universities in Canada would need to get truly serious about creating a ‘researcher class’ to analyse corporate reporting. This would involve prioritising the hiring of research associates and senior research associates who have few or no teaching responsibilities.1

Second, universities would need to work to create centres such as the Citizen Lab, or related groups.2 These don’t need to be organisations that try to cover the waterfront of all digital issues. They could, instead, be more focused, to reduce the number of staff or fellows needed to fulfil the organisation’s mandate. Any and all centres of this type would see a small handful of people with PhDs (who largely lack teaching responsibilities) guide multidisciplinary teams of staff. Those same staff members would not typically need a PhD. They would need to be nimble enough to move quickly while using a peer-review-lite process to validate findings, but not see journal or book outputs as their primary currency for promotion or hiring.

Third, the centres would need a core group of long-term staffers. This core body of long-term researchers is needed to develop policy expertise that graduate students just don’t possess or develop in their short tenure in the university. Moreover, these same long-term researchers can then train graduate student fellows of the centres in question, with the effect of slowly building a cadre of researchers who are equipped to critically assess digital-first companies.

Fourth, the staff at research centres need to be paid well and properly. They cannot be regarded as ‘graduate student plus’ employees but as specialists who will be of interest to government and corporations. This means that the university will need to pay competitive wages in order to secure the staff needed to fulfil centre mandates.

Basically, if universities are to be successful in holding big data companies to account they’ll need to incubate quasi-NGOs and let them loose under the university’s auspices. It is, however, worth asking whether this should be the goal of the university in the first place: should society be outsourcing a large amount of the ‘transparency research’ that is designed to have policy impact or guide evidence-based policy making to academics, or should we instead bolster the capacities of government departments and regulatory agencies to undertake these activities?

Put differently, and in context with Tworek’s argument: I think that assuming that PhD holders working as faculty in universities are the solution to analysing data released by corporations can only hold if you happen to (a) hold or aspire to hold a PhD; or (b) possess or aspire to possess a research-focused tenure-track job.

I don’t think that either (a) or (b) should guide the majority of the way forward in developing policy proposals as they pertain to holding corporations to account.

Do faculty have a role in holding companies such as Google, Facebook, Amazon, Apple, or Netflix to account? You bet. But if the university, and university researchers, are going to seriously get involved in using data released by companies to hold them to account and have policy impact, then I think we need dedicated and focused researchers. Faculty who are torn between teaching, writing and publishing in inaccessible locations using baroque theoretical lenses, pursuing funding opportunities and undertaking large amounts of department service and performing graduate student supervision are just not going to be sufficient to address the task at hand.


  1. In the interests of disclosure, I currently hold one of these roles. ↩︎
  2. Again in the interests of disclosure, this is the kind of place I currently work at. ↩︎
Link

Digital Currency Standards Heat Up

There is an ongoing debate as to which central banks will launch digital currencies, by which date, and how those currencies will be interoperable with one another. Simon Sharwood, writing for The Register, is reporting that China’s Digital Yuan is taking big steps toward answering many of those questions:

According to an account of the meeting in state-controlled media, Fan said standardization across payment systems will be needed to ensure the success of the Digital Yuan.

The kind of standardization he envisioned is interoperability between existing payment systems – whether they use QR codes, NFC or Bluetooth.

That’s an offer AliPay and WeChat Pay can’t refuse, unless they want Beijing to flex its regulatory muscles and compel them to do it.

With millions of payment terminals outside China already set up for AliPay and WeChat Pay, and the prospect of the Digital Yuan being accepted in the very same devices, Beijing has the beginnings of a global presence for its digital currency.

When I walk around my community I very regularly see options to use AliPay or WeChat Pay, and see many people using these options. The prospect that the Chinese government might be able to take advantage of existing payment structures to also use a government-associated digital fiat currency would be a remarkable manoeuvre that could theoretically occur quite quickly. I suspect that when/if some Western politicians catch wind of this they will respond quickly and bombastically.

Other governments’ central banks should, ideally, be well underway in developing the standards for their own digital fiat currencies. These standards should be put into practice in a meaningful way to assess their strengths and correct their deficiencies. Governments that are not well underway in launching such digital currencies are running the risk of seeing some of their population move away from domestically-controlled currencies, or basket currencies where the state determines what composes the basket, to currencies managed by foreign governments. This would represent a significant loss of policy capacity and, arguably, economic sovereignty for at least some states.

Why might some members of their population shift over to, say, the Digital Yuan? In the West this might occur when individuals are travelling abroad, where WeChat Pay and AliPay infrastructure is often more usable and more secure than credit card infrastructures. After using these for a while, the same individuals may continue to use those payment methods for ease and low cost when they return home. In less developed parts of the world, where AliPay and WeChat Pay are already becoming dominant, it could occur as members of the population continue their shift to digital transactions and away from currencies controlled or influenced by their governments. The effect would be, potentially, to provide a level of influence to the Chinese government while potentially exposing sensitive macro-economic consumer habits that could be helpful in developing Chinese economic, industrial, or foreign policy.

Western government responses might be to bar the use of the Digital Yuan in their countries but this could be challenging should it rely on common standards with AliPay and WeChat Pay. Could a ban surgically target the Digital Yuan or, instead, would it need to target all payment terminals using the same standard and, thus, catch AliPay and WeChat Pay as collateral damage? What if a broader set of states all adopt common standards, which happen to align with the Digital Yuan, and share infrastructure: just how many foreign and corporate currencies could be disabled without causing a major economic or diplomatic incident? To what extent would such a ban create a globally bifurcated (trifurcated? quadfurcated?) digital payment environment?

Though some governments might regard this kind of ‘burn them all’ approach as desirable there would be an underlying question of whether such an effect would be reasonable and proportionate. We don’t ban WeChat in the West, as an example, in part due to such an action being manifestly disproportionate to risks associated with the communications platform. It is hard to imagine how banning the Digital Yuan, along with WeChat Pay or AliPay or other currencies using the same standards, might not be similarly disproportionate where such a decision would detrimentally affect hundreds of thousands, or millions, of people and businesses that already use these payment systems or standards. It will be fascinating to see how Western central banks move forward to address the rise of digital fiat currencies and, also, how their efforts intersect with the demands and efforts of Western politicians that regularly advocate for anti-China policies and laws.

Link

Vulnerability Exploitability eXchange (VEX)

CISA has a neat bit of work they recently published, entitled “Vulnerability Exploitability eXchange (VEX) – Status Justifications” (warning: opens a .pdf).1 Product security teams that adopt VEX could assert the status of specific vulnerabilities in their products. As a result, clients’ security staff could allocate time to remediate actionable vulnerabilities instead of burning time on potential vulnerabilities that product security teams have already closed off or mitigated.

There are a number of different machine-readable status types that are envisioned, including:

  • Component_not_present
  • Vulnerable_code_not_present
  • Vulnerable_code_cannot_be_controlled_by_adversary
  • Vulnerable_code_not_in_execute_path
  • Inline_mitigations_already_exist

CISA’s publication spells out what each status entails in more depth and includes diagrams to help readers understand what is envisioned. However, those same readers need to pay attention to a key caveat, namely, “[t]his document will not address chained attacks involving future or unknown risks as it will be considered out of scope.” Put another way, VEX is used to assess known vulnerabilities and attacks. It should not be relied upon to predict potential threats based on not-yet-public attacks nor new ways of chaining known vulnerabilities. Thus, while it would be useful to ascertain if a product is vulnerable to EternalBlue, today, it would not be useful to predict or assess the exploited vulnerabilities prior to EternalBlue having been made public nor new or novel ways of exploiting the vulnerabilities underlying EternalBlue. In effect, then, VEX is meant to address the known risks associated with N-Days as opposed to risks linked with 0-Days or novel ways of exploiting N-Days.2
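To make the triage idea concrete, here is a minimal Python sketch of how a client’s security team might consume VEX-style assertions. The field names and document shape here are illustrative assumptions, not the official VEX/CSAF schema; only the five justification labels come from CISA’s publication.

```python
# Hypothetical sketch: filter supplier VEX-style statements down to the
# vulnerabilities a security team still needs to act on.
# Field names ("vulnerability", "status", "justification") are assumptions
# for illustration, not the official VEX schema.

NOT_AFFECTED_JUSTIFICATIONS = {
    "component_not_present",
    "vulnerable_code_not_present",
    "vulnerable_code_cannot_be_controlled_by_adversary",
    "vulnerable_code_not_in_execute_path",
    "inline_mitigations_already_exist",
}

def actionable(statements):
    """Return the vulnerability IDs that remain on the remediation queue.

    A vulnerability is treated as closed off when the supplier marks it
    'not_affected' with one of the recognised justifications; everything
    else stays actionable.
    """
    todo = []
    for s in statements:
        if (s.get("status") == "not_affected"
                and s.get("justification") in NOT_AFFECTED_JUSTIFICATIONS):
            continue
        todo.append(s["vulnerability"])
    return todo

statements = [
    {"vulnerability": "CVE-2017-0144", "status": "affected"},
    {"vulnerability": "CVE-2021-44228", "status": "not_affected",
     "justification": "vulnerable_code_not_present"},
]
print(actionable(statements))  # ['CVE-2017-0144']
```

Note that, per the caveat above, a filter like this only reflects known, asserted risk: a ‘not_affected’ entry says nothing about future chained attacks or novel exploitation of the same underlying code.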

For VEX to best work there should be some surrounding policy requirements, such that when/if a supplier falsely (as opposed to incorrectly) asserts the security properties of its product there is some disciplinary response. This can take many forms, and perhaps the easiest relies on economics rather than criminal sanction: federal governments or major companies will decline to do business with a vendor found to have issued a deceptive VEX, and may have financial recourse based on contractual terms with the product’s vendor. When or if this economic solution fails, it might be time to turn to legal venues and, if existing approaches prove insufficient, potentially even introduce new legislation designed to further discipline bad actors. However, as should be apparent, there isn’t a demonstrable requirement to introduce legislation to make VEX actionable.

I think that VEX continues the current American administration’s work to advance a number of good policies meant to better secure products and systems. VEX works hand-in-hand with SBOMs and may also be supported by US Executive Orders around cybersecurity.

While Canada may be ‘behind’ the United States we can see that things are potentially shifting. There is currently a consultation underway to regenerate Canada’s cybersecurity strategy and infrastructure security legislation was introduced just prior to Parliament rising for its summer break. Perhaps, in a year’s time, we’ll see stronger and bolder efforts by the Canadian government to enhance infrastructure security with some small element of that recommending the adoption of VEXes. At the very least the government won’t be able to say they lack the legislative tools or strategic direction to do so.


  1. You can access a locally hosted version if the CISA link fails. ↩︎
  2. For a nice discussion of why N-Days are regularly more dangerous than 0-Days, see: “N-Days: The Overlooked Cyber Threat for Utilities.” ↩︎
Link

Housing in Ottawa Now a National Security Issue

David Pugliese is reporting in the Ottawa Citizen that the Canadian Forces Intelligence Command (CFINTCOM) is “trying to avoid posting junior staff to Ottawa because it has become too expensive to live in the region.” The risk is that financial hardship associated with living in Ottawa could make junior members susceptible to subversion. Housing costs in Ottawa have risen much faster than either wage increases or inflation. Moreover, the special allowance provided to staff, meant to assuage the high costs of living in Canadian cities, has been frozen for 13 years.

At this point energy, telecommunications, healthcare, and housing all raise their own national security concerns. To some extent, such concerns have tracked with these industry categories: governments have always worried about the security of telecommunications networks as well as the availability of sufficient energy supplies. But in other cases, such as housing affordability, the national security concerns we are seeing are the result of long-term governance failures. These failures have created new national security threats that would not exist in the face of good (or even just better) governance.1

There is a profound danger in trying to address all the new national security challenges and issues using national security tools or governance processes. National security incidents are often regarded as creating moments of exception and, in such moments, actions can be undertaken that otherwise could not. The danger is that states of exception become the norm and, in the process, the regular modes of governance and law are significantly set aside to resolve the crises of the day. What is needed is a regeneration and deployment of traditional governance capacity instead of a routine reliance on national security-type responses to these issues.

Of course, governments don’t just need to respond to these metastasized governance problems in order to alleviate national security issues and threats. They need to do so in equitable and inclusive ways, so as to preserve or (re)generate the trust between the residents of Canada and their government.

The public may justifiably doubt that their system of government is working where successive governments under the major political parties are seen as having failed to provide for basic needs. The threat, then, is that ongoing governance failures run the risk of placing Canada’s democracy under pressure. While this might seem overstated I don’t think that’s the case: we are seeing a rise of politicians who are capitalizing on the frustrations and challenges faced by Canadians across the country, but who do not have their own solutions. Capitalizing on rage and frustration, and then failing to deliver on fixes, will only further alienate Canadians from their government.

Governments across Canada flexed their muscles during the earlier phases of the COVID-19 pandemic. Having used them, then, it’s imperative they keep flexing these muscles to address the serious issues that Canadians are experiencing. Doing so will assuage existent national security issues. It will also, simultaneously, serve to prevent other normal governance challenges from metastasizing into national security threats.


  1. As an aside, these housing challenges are not necessarily new. Naval staff posted to Esquimalt have long complained about the high costs of off-base housing in Victoria and the surrounding towns and cities. ↩︎
Link

A Brief Unpacking of a Declaration on the Future of the Internet

Cameron F. Kerry has a helpful piece in Brookings that unpacks the recently published ‘Declaration on the Future of the Internet.’ As he explains, the Declaration was signed by 60 States and is meant, in part, to rebut a China-Russia joint statement. Those countries’ statement would support their positions on ‘securing’ domestic Internet spaces and removing Internet governance from multi-stakeholder forums to State-centric ones.

So far, so good. However, baked into Kerry’s article is language suggesting that he either misunderstands, or understates, some of the security-related elements of the Declaration. He writes:

There are additional steps the U.S. government can take that are more within its control than the actions and policies of foreign states or international organizations. The future of the Internet declaration contains a series of supporting principles and measures on freedom and human rights, Internet governance and access, and trust in use of digital network technology. The latter—trust in the use of network technology— is included to “ensure that government and relevant authorities’ access to personal data is based in law and conducted in accordance with international human rights law” and to “protect individuals’ privacy, their personal data, the confidentiality of electronic communications and information on end-users’ electronic devices, consistent with the protection of public safety and applicable domestic and international law.” These lay down a pair of markers for the U.S. to redeem.

Reading this against the 2019 Ministerial and the recent updates to the Council of Europe’s Cybercrime Convention, I see that a vast swathe of new law enforcement and security agency powers would be entirely permissible based on Kerry’s assessment of the Declaration and the States involved in signing it. While these new powers have either been agreed to, or advanced by, signatory States, they have simultaneously been directly opposed by civil and human rights campaigners, as well as by some national courts. Specifically, there are live discussions around the following powers:

  • the availability of strong encryption;
  • the guarantee that the content of communications sent using end-to-end encrypted devices cannot be accessed or analyzed by third parties (including by on-device surveillance);
  • the requirement of prior judicial authorization to obtain subscriber information; and
  • the oversight of preservation and production powers by relevant national judicial bodies.

Laws can be passed that see law enforcement interests supersede individuals’ or communities’ rights in safeguarding their devices, data, and communications from the State. When or if such a situation occurs, the signatories of the Declaration can hold fast in their flowery language around protecting rights while, at the same time, individuals and communities experience heightened surveillance of, and intrusions into, their daily lives.

In effect, a lot of international policy and legal infrastructure has been built to facilitate sweeping new investigatory powers and reforms to how data is, and can be, secured. It has taken years to build this infrastructure and as we leave the current stage of the global pandemic it is apparent that governments have continued to press ahead with their efforts to expand the powers which could be provided to law enforcement and security agencies, notwithstanding the efforts of civil and human rights campaigners around the world.

The next stage will be to assess how, and in what ways, international agreements and legal infrastructure will be brought into national legal systems, and to determine where to strategically oppose the worst of the overreaches. While it’s possible that some successes will be achieved in resisting the expansion of state powers, not everything will be resisted. The consequence will be both to enhance state intrusions into private lives and to weaken the security provided to devices and data, with the resultant effect of better enabling criminals to illicitly access or manipulate our personal information.

The new world of enhanced surveillance and intrusions is wholly consistent with the ‘Declaration on the Future of the Internet.’ And that’s a big, glaring, and serious problem with the Declaration.

Link

The Broader Implications of Data Breaches

Ikea Canada notified approximately 95,000 Canadian customers in recent weeks about a data breach the company has suffered. An Ikea employee conducted a series of searches between March 1 and March 3 which surfaced the account records of the aforementioned customers.1

While Ikea promised that financial information–credit card and banking information–hadn’t been revealed, a raft of other personal information had been. That information included:

  • full first and last name;
  • postal code or home address;
  • phone number and other contact information;
  • IKEA loyalty number.

Ikea did not disclose who specifically accessed the information nor their motivations for doing so.

The notice provided by Ikea was better than most data breach alerts insofar as it informed customers what exactly had been accessed. For some individuals, however, this information is highly revelatory and could cause significant concern.

For example, imagine a case where someone has previously been the victim of either physical or digital stalking. Should their former stalker be an Ikea employee, the data breach victim may ask whether their stalker now has confidential information that can be used to renew, or further amplify, harmful activities. With the customer information in hand, as an example, it would be relatively easy for a stalker to obtain more information, such as where precisely someone lived. If they are aggrieved, they could also use the information to engage in digital harassment or threatening behaviour.

Without more information about why the Ikea employee searched the database, those who have been stalked by, or had abusive relationships with, an Ikea employee might be driven to think about changing how they live their lives. They might feel the need to change their safety habits, get new phone numbers, or cycle to a new email address. In a worst-case scenario they might contemplate vacating their residence for a time. Even if they do not take any of these actions, they might experience a heightened sense of unease or anxiety.

Of course, Ikea is far from alone in suffering these kinds of breaches. They happen on an almost daily basis for most of us, whether we’re alerted to the breach or not. Many news reports about such breaches focus on whether there is an existent or impending financial harm and stop the story there. The result is that journalistic reporting can conceal some of the broader harms linked with data breaches.

Imagine a world where our personal information–how to call us or find our homes–was protected as strongly as our credit card numbers are currently protected. In such a world, stalkers and other abusive actors might be less able to exploit stolen or inappropriately accessed information. Yes, there will always be ways by which bad actors can operate badly, but it would be possible to mitigate some of the ways this badness can take place.

Companies could still create meaningful consent frameworks whereby some (perhaps most!) individuals could agree to have their information stored by the company. But those who have a different risk threshold could make a meaningful choice of their own, still making purchases and receiving deliveries without permanently increasing the risk that their information falls into the wrong hands. However, getting to this point requires expanded threat modelling: we can’t just worry about a bad credit card purchase but, instead, would need to take seriously the gendered and intersectional nature of violence and how it intersects with cybersecurity practices.


  1. In the interests of disclosure, I was contacted as an affected party by Ikea Canada. ↩︎
Link

Messaging Interoperability and Client Security

Eric Rescorla has a thoughtful and nuanced assessment of recent EU proposals which would compel messaging companies to make their communications services interoperable. To his immense credit he spends time walking the reader through historical and contemporary messaging systems in order to assess the security issues prospectively associated with requiring interoperability. It’s a very good, and compact, read on a dense and challenging subject.

I must admit, however, that I’m unconvinced that demanding interoperability will have only minimal security implications. While much of the expert commentary has focused on whether end-to-end encryption would be compromised I think that too little time has been spent considering the client-end side of interoperable communications. So if we assume it’s possible to facilitate end-to-end communications across messaging companies and focus just on clients receiving/sending communications, what are some risks?1

As it stands today, the dominant messaging companies have large and professional security teams. While none of these teams are perfect, as shown by the success of cyber mercenary companies such as NSO Group et al., they are robust and constantly working to improve the security of their products. The attacks used by groups such as NSO Group, Hacking Team, Candiru, and FinFisher have not tended to rely on breaking encryption. Rather, they have sought vulnerabilities in client devices. Due to sandboxing and contemporary OS security practices, this has regularly meant successfully targeting a messaging application and, subsequently, expanding a foothold on the device more generally.

In order for interoperability to ‘work’ properly there will need to be a number of preconditions. As noted in Rescorla’s post, this may include checking what functions an interoperable client possesses to determine whether ‘standard’ or ‘enriched’ client services are available. Moreover, APIs will need to be (relatively) stable or rely on a standardized protocol to facilitate interoperability. Finally, while spam messages are annoying on messaging applications today, they may become even more commonplace where interoperability is required and service providers cannot use their current processes to filter/quality check messages transiting their infrastructure.

What do all the aforementioned elements mean for client security?

  1. Checking for client functionality may reveal whether a targeted client possesses known vulnerabilities, either generally (following a patch update) or just to the exploit vendor (where they know of a vulnerability and are actively exploiting it). Where spam filtering is not great exploit vendors can use spam messaging as reconnaissance messaging with the service provider, client vendor, or client applications not necessarily being aware of the threat activity.
  2. When or if there is a significant need to rework how keying operates, or how identity properties linked to an API are handled more broadly, then there is a risk that implementation of updates may be delayed until the revisions have had time to be adopted by clients. While this might be great for competition vis-a-vis interoperability it will, also, have the effect of signalling an oncoming change to threat actors, who may accelerate activities to get footholds on devices, or may warn these actors that they, too, need to update their tactics, techniques, and procedures (TTPs).
  3. As a more general point, threat actors might work to develop and propagate interoperable clients that they have already compromised–we’ve previously seen nation-state actors do so, and there’s no reason to expect this behaviour to stop in a world of interoperable clients. Alternately, threat actors might try to convince targets to move to ‘better’ clients that contain known vulnerabilities but which are developed and made available by legitimate vendors. Whereas, today, an exploit developer must target the specific messaging system that delivers a given system’s messages, a future world of interoperable messaging will likely expand the range of clients that threat actors can seek to exploit.
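The capability-check risk in point 1 can be illustrated with a minimal sketch. All of the names, version strings, and the vulnerability table below are hypothetical and not drawn from any real protocol or vendor; the point is simply that a client which advertises its name and version during an interoperability handshake hands an observer exactly the fingerprint needed to check it against a list of known-vulnerable builds:

```python
# Hypothetical sketch of an interoperability capability-discovery step.
# Client names, versions, and the vulnerability table are illustrative only.

def build_capability_response(client_name, client_version, features):
    """What an interoperable client might advertise during a handshake."""
    return {
        "client": client_name,
        "version": client_version,
        "features": sorted(features),  # e.g. 'standard' vs 'enriched' features
    }

# A toy view of an exploit vendor's knowledge: vulnerable builds by
# (client, version). A real actor's table would be private to them.
KNOWN_VULNERABLE = {
    ("acme-messenger", "1.2.0"): "placeholder-vulnerability-id",
}

def reconnaissance(capability_response):
    """What an observer learns 'for free' from the advertised fingerprint.

    Returns a vulnerability identifier if the advertised build is on the
    attacker's list, otherwise None.
    """
    key = (capability_response["client"], capability_response["version"])
    return KNOWN_VULNERABLE.get(key)
```

The mitigation trade-off is also visible in this sketch: advertising a coarser capability set (features only, without a precise version string) leaks less to an observer, but makes it harder for the remote service to decide whether ‘standard’ or ‘enriched’ client services are available.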

One of the severe dangers and challenges facing the current internet regulation landscape is that a large volume of new actors have entered the various overlapping policy fields. For a long time there haven’t been that many of us, and anyone who’s been around for 10-15 years tends to be suitably multidisciplinary that they think about how activities in policy domain X might/will have consequences for domains Y and Z. The new raft of politicians and their policy advisors, in contrast, often lack this broad awareness. The result is that proposals are being advanced around the world by ostensibly well-meaning individuals and groups to address issues associated with online harms, speech, CSAM, competition, and security. However, these same parties often lack awareness of how the solutions meant to solve their favoured policy problems will affect neighbouring policy issues. And, where they are aware, they often don’t care because that’s someone else’s policy domain.

It’s good to see more people participating and more inclusive policy making processes. And seeing actual political action on many issue areas after 10 years of people debating how to move forward is exciting. But too much of that action runs counter to the thoughtful warnings and need for caution that longer-term policy experts have been raising for over a decade.

We are almost certainly moving towards a ‘new Internet’. It remains in question, however, whether this ‘new Internet’ will see resolutions to longstanding challenges or if, instead, the rush to regulate will change the landscape by finally bringing to life the threats that long-term policy wonks have been working to forestall or prevent for much of their working lives. To date, I am increasingly concerned that we will experience the latter rather than witness the former.


  1. For the record, I currently remain unconvinced it is possible to implement end-to-end encryption across platforms generally. ↩︎