Link

Who Benefits from 5G?

The Financial Times (FT) ran a somewhat mixed piece on the future of 5G. The thesis is that telecom operators are anxious to realise the financial benefits of their 5G deployments even though those benefits were always expected to arrive over the coming years; there was little, if any, expectation that financial benefits would materialise immediately as the next-generation infrastructure was deployed.

The article correctly notes that consumers are skeptical of the benefits of 5G, and it correctly concludes that 5G was always really about the benefits that 5G Standalone will offer businesses. Frankly, though, the piece is not well edited insofar as it combines two relatively distinct things without distinguishing them clearly.

5G Extended relies on existing 4G infrastructures. While there are theoretically faster speeds available to consumers, along with a tripartite spectrum band segmentation that can be used,1 most consumers won’t directly realise the benefits. One group that may, however, benefit (and that was not addressed at all in this piece) is rural customers. Opening up the lower-frequency spectrum blocks will allow 5G signals to travel farther, with the benefit significantly accruing to those who cannot receive new copper, coax, or fibre lines. This said, I tend to agree with the article that most of the benefits of 5G haven’t, and won’t, be directly realised by individual mobile subscribers in the near future.2

5G Standalone is really where 5G will theoretically come alive. It’s also going to require a whole new way of designing and securing networks. At least as of a year or so ago, China was a global leader here, largely because it had comparatively poor 4G penetration and so had sought to leapfrog to 5G SA.3 This said, American bans on semiconductor sales to Chinese telecoms vendors, such as Huawei and ZTE, have definitely had a negative effect on China’s ability to more fully deploy 5G SA.

In the Canadian case we can see investments by our major telecoms into 5G SA applications. Telus, Rogers, and Bell are all pouring money into technology clusters and universities. The goal isn’t to learn how much faster consumers’ phones or tablets can download data (though new algorithms to better manage/route/compress data are always under research) but, instead, to learn how to take advantage of the more advanced business-to-business features of 5G. That’s where the money is, though the question remains how well telecom carriers will be able to rent-seek on those features when they already make money providing bandwidth and services to businesses paying for telecom products.


  1. Not all countries, however, are allocating the third, high-frequency, band on the basis that its utility remains in doubt. ↩︎
  2. Incidentally: it generally just takes a long, long time to deploy networks. 4G still isn’t reliably available across all of Canada, including in populated rural areas. This delay meaningfully impedes the ability of farmers, as an example, to adopt smart technologies that would reduce the costs associated with farm and crop management and which could, simultaneously, enable more efficient crop yields. ↩︎
  3. Western telecoms, by comparison, want to extend the life of the capital assets they purchased/deployed around their 4G infrastructures and so prefer to go the 5G Extended route to start their 5G upgrade path. ↩︎
Link

Generalist Policing Models Remain Problematic

From the New York Times’ opinion section, this piece on “Why the F.B.I. Is so far behind on cybercrime” reinforces the position that American law enforcement is stymied in investigating cybercrimes because:

…it lacks enough agents with advanced computer skills. It has not recruited as many of these people as it needs, and those it has hired often don’t stay long. Its deeply ingrained cultural standards, some dating to the bureau’s first director, J. Edgar Hoover, have prevented it from getting the right talent.

Emblematic of an organization stuck in the past is the F.B.I.’s longstanding expectation that agents should be able to do “any job, anywhere.” While other global law enforcement agencies have snatched up computer scientists, the F.B.I. tried to turn existing agents with no computer backgrounds into digital specialists, clinging to the “any job” mantra. It may be possible to turn an agent whose background is in accounting into a first-rate gang investigator, but it’s a lot harder to turn that same agent into a top-flight computer scientist.

The “any job” mantra also hinders recruitment. People who have spent years becoming computer experts may have little interest in pivoting to another assignment. Many may lack the aptitude for — or feel uneasy with — traditional law enforcement expectations, such as being in top physical fitness, handling a deadly force scenario or even interacting with the public.

This very same issue plagues the RCMP, which also has a generalist model that discourages or hinders specialization. While we do see better business practices in, say, France, with an increasing LEA capacity to pursue cybercrime, we’re not yet seeing North American federal governments overhaul their own policing services.1

Similarly, the FBI is suffering from an ‘arrest’ culture:

The F.B.I.’s emphasis on arrests, which are especially hard to come by in ransomware cases, similarly reflects its outdated approach to cybercrime. In the bureau, prestige often springs from being a successful trial agent, working on cases that result in indictments and convictions that make the news. But ransomware cases, by their nature, are long and complex, with a low likelihood of arrest. Even when suspects are identified, arresting them is nearly impossible if they’re located in countries that don’t have extradition agreements with the United States.

In the Canadian context, not only is pursuing arrests a problem due to jurisdiction, but the complexity of cases can mean an officer spends huge amounts of time on a computer and not out in the field ‘doing the work’ of their colleagues who are not cyber-focused. This perception of just ‘playing games’ or ‘surfing social media’ can sometimes lead to friction between cyber investigators and older-school leaders.2 Making things even more challenging, the resources to train officers to detect and pursue Child Sexual Abuse Material (CSAM) are relatively plentiful, whereas economic and other non-CSAM investigations tend to be severely under-resourced.

There is some hope coming for Canadian investigators by way of CLOUD agreements between the Canadian and American governments and updates to the Cybercrime Convention, though both will require updates to criminal law, and potentially to provincial privacy laws, to empower LEAs with expanded powers. And even with access to more American data to enable investigations, this will not solve the arrest challenges when criminals are operating out of non-extradition countries.

It remains to be seen whether an expanded capacity to issue warrants to American providers will reduce some of the Canadian need for specialized training to investigate more rudimentary cyber-related crimes or whether, instead, it will have minimal effect overall.


  1. This is also generally true of provincial and municipal services. ↩︎
  2. Fortunately this is a less common issue, today, than a decade ago. ↩︎
Link

Can University Faculty Hold Platforms To Account?

Heidi Tworek has a good piece with the Centre for International Governance Innovation, where she questions whether there will be a sufficient number of faculty in Canada (and elsewhere) to make use of information that digital-first companies might be compelled to make available to researchers. The general argument goes that if companies must make information available to academics then these academics can study the information and, subsequently, hold companies to account and guide evidence-based policymaking.

Tworek’s argument focuses on two key things.

  1. First, there has been a decline in the tenured professoriate in Canada, with the effect that the adjunct faculty who are ‘filling in’ are busy teaching and really don’t have a chance to lead research.
  2. Second, while a vanishingly small number of PhD holders obtain a tenure-track role, a reasonable number may be going into the very digital-first companies that researchers need data from in order to hold them accountable.

On this latter point, she writes:

If the companies have far more researchers than universities have, transparency regulations may not do as much to address the imbalance of knowledge as many expect.

I don’t think that hiring people with PhDs necessarily means that companies are addressing knowledge imbalances. Whatever is learned by these researchers tends to be sheltered within corporate walls and protected by NDAs. So those researchers going into companies may learn what’s going on but be unable (or unmotivated) to leverage what they know in order to inform policy discussions meant to hold companies to account.

To be clear, I really do agree with a lot in this article. However, I think it does have a few areas for further consideration.

First, more needs to be said about what, specifically, ‘transparency’ encompasses and how it relates to data types, availability, and so forth. Transparency is a deeply contested concept and there are a lot of ways that the revelation of data creates a funhouse-mirror effect, insofar as what researchers ‘see’ can be quite distorted from what is actually happening.

Second, making data available isn’t just about whether universities have the professors to do the work but, really, whether the government and its regulators have the staff time as well. Professors are doing a lot of things whereas regulators can assign staff to just work the data, day in and day out. Focus matters.

Third, and related, I have to admit that I have pretty severe doubts about the ability of professors to seriously take up and make use of information from platforms, at scale and with policy impact, because it’s never going to be their full time jobs to do so. Professors are also going to be required to publish in books or journals, which means their outputs will be delayed and inaccessible to companies, government bureaucrats and regulators, and NGO staff. I’m sure academics will have lovely and insightful discussions…but they won’t happen fast enough, or in accessible places or in plain language, to generally affect policy debates.

So, what might need to be added to start fleshing out how universities are organised to make use of data released by companies and have policy impacts in research outputs?

First, universities in Canada would need to get truly serious about creating a ‘researcher class’ to analyse corporate reporting. This would involve prioritising the hiring of research associates and senior research associates who have few or no teaching responsibilities.1

Second, universities would need to work to create centres such as the Citizen Lab, or related groups.2 These don’t need to be organisations that try to cover the waterfront of all digital issues. They could, instead, be more focused so as to reduce the number of staff or fellows needed to fulfil the organisation’s mandate. Any and all centres of this type would see a small handful of people with PhDs (who largely lack teaching responsibilities) guide multidisciplinary teams of staff. Those same staff members would not typically need a PhD. They would need to be nimble enough to move quickly while using a peer-review-lite process to validate findings, but not see journal or book outputs as their primary currency for promotion or hiring.

Third, the centres would need a core group of long-term staffers. This core body of long-term researchers is needed to develop policy expertise that graduate students just don’t possess or develop in their short tenure in the university. Moreover, these same long-term researchers can then train graduate student fellows of the centres in question, with the effect of slowly building a cadre of researchers who are equipped to critically assess digital-first companies.

Fourth, the staff at research centres need to be paid well and properly. They cannot be regarded as ‘graduate student plus’ employees but as specialists who will be of interest to government and corporations. This means that universities will need to pay competitive wages in order to secure the staff needed to fulfil centre mandates.

Basically, if universities are to be successful in holding big data companies to account they’ll need to incubate quasi-NGOs and let them loose under the university’s auspices. It is, however, worth asking whether this should be the goal of the university in the first place: should society be outsourcing a large amount of the ‘transparency research’ that is designed to have policy impact or guide evidence-based policy making to academics, or should we instead bolster the capacities of government departments and regulatory agencies to undertake these activities?

Put differently, and in context with Tworek’s argument: I think that assuming that PhD holders working as faculty in universities are the solution to analysing data released by corporations can only hold if you happen to (a) hold or aspire to hold a PhD; and (b) possess or aspire to possess a research-focused tenure-track job.

I don’t think that either (a) or (b) should guide the majority of the way forward in developing policy proposals as they pertain to holding corporations to account.

Do faculty have a role in holding companies such as Google, Facebook, Amazon, Apple, or Netflix to account? You bet. But if the university, and university researchers, are going to get seriously involved in using data released by companies to hold them to account and have policy impact, then I think we need dedicated and focused researchers. Faculty who are torn between teaching, writing and publishing in inaccessible locations using baroque theoretical lenses, pursuing funding opportunities, undertaking large amounts of department service, and supervising graduate students are just not going to be sufficient to address the task at hand.


  1. In the interests of disclosure, I currently hold one of these roles. ↩︎
  2. Again in the interests of disclosure, this is the kind of place I currently work at. ↩︎
Link

Housing in Ottawa Now a National Security Issue

David Pugliese is reporting in the Ottawa Citizen that the Canadian Forces Intelligence Command (CFINTCOM) is “trying to avoid posting junior staff to Ottawa because it has become too expensive to live in the region.” The risk is that financial hardship associated with living in Ottawa could make junior members susceptible to subversion. Housing costs in Ottawa have risen much faster than either wages or inflation. Moreover, the special allowance provided to staff, which is meant to assuage the high costs of living in Canadian cities, has been frozen for 13 years.

At this point energy, telecommunications, healthcare, and housing all raise their own national security concerns. To some extent, such concerns have tracked with these industry categories: governments have always worried about the security of telecommunications networks as well as the availability of sufficient energy supplies. But in other cases, such as housing affordability, the national security concerns we are seeing are the result of long-term governance failures. These failures have created new national security threats that would not exist in the face of good (or even just better) governance.1

There is a profound danger in trying to address all the new national security challenges and issues using national security tools or governance processes. National security incidents are often regarded as creating moments of exception and, in such moments, actions can be undertaken that otherwise could not. The danger is that states of exception become the norm and, in the process, the regular modes of governance and law are significantly set aside to resolve the crises of the day. What is needed is a regeneration and deployment of traditional governance capacity instead of a routine reliance on national security-type responses to these issues.

Of course, governments don’t just need to respond to these metastasized governance problems in order to alleviate national security issues and threats. They need to do so in equitable and inclusive ways, so as to preserve or (re)generate the trust between the residents of Canada and their government.

The public may justifiably doubt that their system of government is working when successive governments under the major political parties are seen as having failed to provide for basic needs. The threat, then, is that ongoing governance failures run the risk of placing Canada’s democracy under pressure. While this might seem overstated, I don’t think that’s the case: we are seeing a rise of politicians who are capitalizing on the frustrations and challenges faced by Canadians across the country, but who do not have their own solutions. Capitalizing on rage and frustration, and then failing to deliver fixes, will only further alienate Canadians from their government.

Governments across Canada flexed their muscles during the earlier phases of the COVID-19 pandemic. Having used those muscles, it’s imperative that governments keep flexing them to address the serious issues that Canadians are experiencing. Doing so will assuage existing national security issues. It will also, simultaneously, help prevent other normal governance challenges from metastasizing into national security threats.


  1. As an aside, these housing challenges are not necessarily new. Naval staff posted to Esquimalt have long complained about the high costs of off-base housing in Victoria and the surrounding towns and cities. ↩︎

Mitigating AI-Based Harms in National Security


Government agencies throughout Canada are investigating how they might adopt and deploy ‘artificial intelligence’ programs to enhance how they provide services. In the case of national security and law enforcement agencies these programs might be used to analyze and exploit datasets, surface threats, identify risky travellers, or automatically respond to criminal or threat activities.

However, the predictive software systems that are being deployed–‘artificial intelligence’–are routinely shown to be biased. These biases are serious in the commercial sphere but there, at least, it is somewhat possible for researchers to detect and surface biases. In the secretive domain of national security, however, the likelihood of bias in agencies’ software being detected or surfaced by non-government parties is considerably lower.

I know that organizations such as the Canadian Security Intelligence Service (CSIS) have an interest in understanding how to use big data in ways that mitigate bias. The Canadian government does have a policy on the “Responsible use of artificial intelligence (AI)” and, at the municipal policing level, the Toronto Police Service has also published a policy on its use of artificial intelligence. Furthermore, the Office of the Privacy Commissioner of Canada has published a proposed regulatory framework for AI as part of potential reforms to federal privacy law.

Timnit Gebru, in conversation with Julia Angwin, suggests that there should be ‘datasheets for algorithms’ that would outline how predictive software systems have been tested for bias in different use cases prior to being deployed. Linking this to traditional circuit-based datasheets, she says (emphasis added):

As a circuit designer, you design certain components into your system, and these components are really idealized tools that you learn about in school that are always supposed to work perfectly. Of course, that’s not how they work in real life.

To account for this, there are standards that say, “You can use this component for railroads, because of x, y, and z,” and “You cannot use this component for life support systems, because it has all these qualities we’ve tested.” Before you design something into your system, you look at what’s called a datasheet for the component to inform your decision. In the world of AI, there is no information on what testing or auditing you did. You build the model and you just send it out into the world. This paper proposed that datasheets be published alongside datasets. The sheets are intended to help people make an informed decision about whether that dataset would work for a specific use case. There was also a follow-up paper called Model Cards for Model Reporting that I wrote with Meg Mitchell, my former co-lead at Google, which proposed that when you design a model, you need to specify the different tests you’ve conducted and the characteristics it has.

What I’ve realized is that when you’re in an institution, and you’re recommending that instead of hiring one person, you need five people to create the model card and the datasheet, and instead of putting out a product in a month, you should actually do it in three years, it’s not going to happen. I can write all the papers I want, but it’s just not going to happen. I’m constantly grappling with the incentive structure of this industry. We can write all the papers we want, but if we don’t change the incentives of the tech industry, nothing is going to change. That is why we need regulation.
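To make the documentation idea a bit more concrete, here is a minimal sketch of what a machine-readable ‘model card’ record might look like before an agency adopts a predictive system. It is purely illustrative: the field names and values are hypothetical and are not drawn verbatim from the datasheets or Model Cards papers that Gebru references.

```python
from dataclasses import dataclass


@dataclass
class ModelCard:
    """Minimal, machine-readable record of how a model was tested.

    All field names and values are hypothetical illustrations of the
    documentation idea described above, not a standardised schema.
    """
    model_name: str
    intended_uses: list[str]
    out_of_scope_uses: list[str]            # uses the model was never evaluated for
    training_data_summary: str
    evaluated_groups: list[str]             # demographic or contextual slices tested
    false_positive_rate_by_group: dict[str, float]
    known_limitations: list[str]


# A hypothetical card for an imaginary document-triage model.
card = ModelCard(
    model_name="risk-triage-prototype",
    intended_uses=["flag documents for human review"],
    out_of_scope_uses=["automated decisions about individual people"],
    training_data_summary="historical case files, 2015-2020; skews urban and English-language",
    evaluated_groups=["document language", "region of origin"],
    false_positive_rate_by_group={"English documents": 0.08, "non-English documents": 0.21},
    known_limitations=["markedly higher false-positive rate on non-English documents"],
)

# A reviewer can check the card before approving a proposed deployment.
if "automated decisions about individual people" in card.out_of_scope_uses:
    print("Never evaluated for automated decision-making about individuals.")
```

The point of such a record is simply that the testing which was (or was not) done travels with the model, so that an agency considering a given use case can see whether the system was ever evaluated for it.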

Government is one of those areas where regulation or law can work well to discipline its behaviours, and where the relatively large volume of resources, combined with a law-abiding bureaucracy, might mean that formally required assessments would actually be conducted. While such assessments matter generally, they are of particular importance where state agencies might be involved in making decisions that significantly or permanently alter the life chances of residents of Canada, visitors who are passing through our borders, or foreign nationals who are interacting with our government agencies.

As it stands today, many Canadian government efforts at the federal, provincial, or municipal level seem to be significantly focused on how predictive software might be used or the effects it may have. These are important things to attend to! But it is just as, if not more, important for agencies to undertake baseline assessments of how and when different predictive software engines are permissible or not, based on robust testing and evaluation of their features and flaws.

Having spoken with people at different levels of government, the recurring complaint around assessing training data, and predictive software systems more generally, is that it’s hard to hire the right people for these assessment jobs: such specialists are relatively rare and often exceedingly expensive. Thus, mid-level and senior members of government have a tendency to focus on things that government is perceived as actually able to do: figure out and track how predictive systems would be used and to what effect.

However, the regular focus on the resource-related challenges of assessing predictive software raises the very real question of whether these constraints should simply compel agencies to forgo technologies whose prospective harms they cannot determine and assess. In the firearms space, as an example, government agencies are extremely rigorous in assessing how a weapon operates to ensure that it functions precisely as intended, given that the weapon might be used in life-changing scenarios. Such assessments require significant sums of money from agency budgets.

If we can make significant budgetary allocations for firearms, on the grounds they can have life-altering consequences for all involved in their use, then why can’t we do the same for predictive software systems? If anything, such allocations would compel agencies to make a strong(er) business case for testing the predictive systems in question and spur further accountability: Does the system work? At a reasonable cost? With acceptable outcomes?

Imposing cost discipline on organizations is an important way of ensuring that technologies, and other business processes, aren’t randomly adopted on the basis of externalizing their full costs. By internalizing those costs, up front, organizations may need to be much more careful in what they choose to adopt, when, and for what purpose. The outcome of this introspection and assessment would, hopefully, be that the harmful effects of predictive software systems in the national security space were mitigated and the systems which were adopted actually fulfilled the purposes they were acquired to address.

Link

The Risks Linked With Canadian Cyber Operations in Ukraine


Late last month, Global News published a story on how the Canadian government is involved in providing cyber support to the Ukrainian government in the face of Russia’s illegal invasion. While the Canadian military declined to confirm or deny any activities they might be involved in, the same was not true of the Communications Security Establishment (CSE). The CSE is Canada’s foreign signals intelligence agency. In addition to collecting intelligence, it is also mandated to defend Canadian federal systems and those designated as of importance to the government of Canada, provide assistance to other federal agencies, and conduct active and defensive cyber operations.1

From the Global News article it is apparent that the CSE is involved in foreign intelligence operations as well as in undertaking cyber defensive activities. Frankly, these kinds of activities are generally, and persistently, undertaken with regard to the Russian government, and so it’s not a surprise that they continue apace.

The CSE spokesperson also noted that the government agency is involved in ‘cyber operations’ though declined to explain whether these are defensive cyber operations or active cyber operations. In the case of the former, the Minister of National Defense must consult with the Minister of Foreign Affairs before authorizing an operation, whereas in the latter both Ministers must consent to an operation prior to it taking place. Defensive and active operations can assume the same form–roughly the same activities or operations might be undertaken–but the rationale for the activity being taken may vary based on whether it is cast as defensive or active (i.e., offensive).2

These kinds of cyber operations are the ones that most worry scholars and practitioners, on the basis that foreign operators or adversaries may misread a signal from a cyber operation or because the operation might have unintended consequences. Thus, the operations that the CSE is undertaking run the risk of accidentally (or intentionally, I guess) escalating affairs between Canada and the Russian Federation in the midst of the shooting war between Russian and Ukrainian forces.

While there is, of course, a need for some operational discretion on the part of the Canadian government it is also imperative that the Canadian public be sufficiently aware of the government’s activities to understand the risks (or lack thereof) which are linked to the activities that Canadian agencies are undertaking. To date, the Canadian government has not released its cyber foreign policy doctrine nor has the Canadian Armed Forces released its cyber doctrine.3 The result is that neither Canadians nor Canada’s allies or adversaries know precisely what Canada will do in the cyber domain, how Canada will react when confronted, or the precise nature of Canada’s escalatory ladder. The government’s secrecy runs the risk of putting Canadians in greater jeopardy of a response from the Russian Federation (or other adversaries) without the Canadian public really understanding what strategic or tactical activities might be undertaken on their behalf.

Canadians have a right to know at least enough about what their government is doing to be able to begin assessing the risks linked with conducting operations during an active military conflict against an adversary with nuclear weapons. Thus far such information has not been provided. The result is that Canadians are ill-prepared to assess the risk that they may be quietly and quickly drawn into the conflict between the Russian Federation and Ukraine. Such secrecy bodes poorly for holding the government to account, to say nothing of how it prevents Canadians from appreciating the risk that they could become deeply drawn into a very hot conflict scenario.


  1. For more on the CSE and the laws governing its activities, see “A Deep Dive into Canada’s Overhaul of Its Foreign Intelligence and Cybersecurity Laws.” ↩︎
  2. For more on this, see “Analysis of the Communications Security Establishment Act and Related Provisions in Bill C-59 (An Act respecting national security matters), First Reading (December 18, 2017)“, pp 27-32. ↩︎
  3. Not for lack of trying to access them, however: in both cases I filed access to information requests with the government for these documents a year ago, with delays expected to mean I won’t get the documents before the end of 2022 at best. ↩︎

Policing the Location Industry


The Markup has a comprehensive and disturbing article on how location information is acquired by third-parties despite efforts by Apple and Google to restrict the availability of this information. In the past, it was common for third-parties to provide SDKs to application developers. The SDKs would inconspicuously transfer location information to those third-parties while also enabling functionality for application developers. With restrictions being put in place by platforms such as Apple and Google, however, it’s now becoming common for application developers to initiate requests for location information themselves and then share it directly with third-party data collectors.

While such activities often violate the terms of service and policy agreements between platforms and application developers, it can be challenging for the platforms to actually detect these violations and subsequently enforce their rules.

Broadly, the issues at play represent significant governmental regulatory failures. The fact that government agencies often benefit from the secretive collection of individuals’ location information makes it that much harder for governments to muster the will to discipline the secretive collection of personal data by third parties: if the government cuts off the flow of location information, it will impede the ability of governments themselves to obtain this information.

In some cases intelligence and security services obtain location information from third-parties. This sometimes occurs in situations where the services themselves are legally barred from directly collecting this information. Companies selling mobility information can let government agencies do an end-run around the law.

One of the results is that efforts to limit data collectors’ ability to capture personal information often see parts of government push for carve-outs to collecting, selling, and using location information. In Canada, as an example, the government has adopted a legal position that it can collect locational information so long as it is de-identified or anonymized,1 and for the security and intelligence services there are laws on the books that permit the collection of commercially available open source information. This open source information does not need to be anonymized prior to acquisition.2 Lest you think it sounds paranoid that intelligence services might be interested in location information, consider that American agencies collected bulk location information pertaining to Muslims from third-party location data brokers and that the Five Eyes historically targeted popular applications such as Google Maps and Angry Birds to obtain location information as well as other metadata and content. As the former head of the NSA announced several years ago, “We kill people based on metadata.”

Any argument made by either private or public organizations that anonymization or de-identification of location information makes it acceptable to collect, use, or disclose generally relies on tricking customers and citizens. Why is this? Because even when location information is aggregated and ‘anonymized’ it might subsequently be re-identified. And in situations where that reversal doesn’t occur, policy decisions can still be made based on the aggregated information. The process of deriving these insights and applying them showcases that while privacy is an important right to protect, it is not the only right that is implicated in the collection and use of locational information. Indeed, it is important to assess the proportionality and necessity of the collection and use, as well as how the associated activities affect individuals’ and communities’ equity and autonomy in society. Doing anything less is merely privacy-washing.
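A toy example helps show why ‘aggregated and anonymized’ mobility data offers weaker protection than the label implies. The records and area identifiers below are invented and do not describe any particular company’s data pipeline; the point is only that a pair of coarse attributes, such as approximate home and work areas, is often unique to a single person.

```python
from collections import Counter

# Invented records from a hypothetical "anonymized" mobility dataset:
# identities replaced with pseudonyms, locations coarsened to grid cells.
records = [
    ("u1", "cell_12", "cell_88"),  # (pseudonym, home area, work area)
    ("u2", "cell_12", "cell_88"),
    ("u3", "cell_47", "cell_05"),
    ("u4", "cell_31", "cell_88"),
]

# Count how many pseudonyms share each (home, work) pair.
pair_counts = Counter((home, work) for _, home, work in records)

for pseudonym, home, work in records:
    if pair_counts[(home, work)] == 1:
        # Anyone who knows this person's rough home and work locations (an
        # employer, a landlord, an investigator) can single out their record,
        # and with it their movement history, despite the "anonymization".
        print(f"{pseudonym} is unique on (home={home}, work={work})")
```

The same intuition underpins re-identification research on mobility data: a handful of coarse facts about a person is frequently enough to single out their full trace.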

Throughout discussions about data collection, including as it pertains to location information, public agencies and companies alike tend to offer a pair of arguments against changing the status quo. First, they assert that consent isn’t really possible anymore given the volumes of data collected from individuals on a daily basis; individuals would be overwhelmed with consent requests, so we can’t make the requests in the first place! Second, they assert that we can’t regulate the collection of this data because doing so risks impeding innovation in the data economy.

If those arguments sound familiar, they should. They’re very similar to the plays made by industry groups whose activities have historically had negative environmental consequences. These groups regularly assert that, after decades of poor or middling environmental regulation, any new, stronger regulations would unduly impede the existing dirty economy for power, services, goods, and so forth. Moreover, they assert that the dirty way of creating power, services, and goods is just how things are and thus should remain the same.

In both the privacy and environmental worlds, corporate actors (and those whom they sell data/goods to) have benefitted from not having to pay the full cost of acquiring data without meaningful consent or accounting for the environmental cost of their activities. But, just as we demand enhanced environmental regulations to regulate and address the harms industry causes to the environment, we should demand and expect the same when it comes to the personal data economy.

If a business is predicated on sneaking personal information away from individuals then it is clearly not particularly interested or invested in being ethical towards consumers. It’s imperative to continue pushing legislators not just to recognize that such practices are unethical, but to make them illegal as well. Doing so will require being heard over the cries of government agencies that have vested interests in obtaining location information in ways that skirt the laws that might normally discipline such collection, as well as companies that have grown as a result of their unethical data collection practices. While this will not be an easy task, it’s increasingly important given the limits on platforms’ ability to police the sneaky collection of this information and the increasingly problematic ways our personal data can be weaponized against us.


  1. “PHAC advised that since the information had been de-identified and aggregated, it believed the activity did not engage the Privacy Act as it was not collecting or using “personal information”. ↩︎
  2. See, as an example, Section 23 of the CSE Act. ↩︎
Link

Ontario’s Path Towards Legitimizing Employee Surveillance

Earlier this week, the Ontario government declared that it would be introducing a series of labour reforms. As part of these reforms, employers will be required to inform their employees of how they are being electronically monitored. These requirements will be applied to all employers with 25 or more employees.

Employers already undertake workplace surveillance, though it has become more common and extensive as a result of the pandemic. Where surveillance is undertaken, however, businesses must seek out specialized counsel or services to craft appropriate labour policies or contracting language. This imposes costs and, also, means that different firms may provide slightly different information. The effect is that employers may be more cautious in what surveillance they adopt and be required to expend funds to obtain semi-boutique legal opinions.

While introducing legislation would seem to extend privacy protections to employees, as currently understood the reforms will only require that employees be notified of the relevant surveillance; they will not bar the surveillance itself. Further, with a law on the books it will likely be easier for Ontario consulting firms to provide fairly rote advice based on the legislative language. The result, I expect, will be to drive down the transaction costs of developing workplace surveillance policies at the same time that workplace surveillance technologies become more affordable and more extensively deployed.

While I suspect that many will herald this law reform as positive for employees, on the basis that at least now they will know how they are being monitored, I am far less optimistic. The specificity of notice will matter, a lot, and unless great care is taken in drafting the legislation employers will obtain a significant degree of latitude in the actual kinds of intrusive surveillance that can be used. Moreover, unless required in legislative language, we can expect employers to conceal the specific modes of surveillance on grounds of needing to protect the methods for operational business reasons. This latter element is of particular concern given that major companies, including office productivity companies like Microsoft, are baking extensive workplace surveillance functionality into their core offerings. Ontario’s reforms are not, in fact, good for employees but are almost certain to be a major boon for their employers.

Mandatory Patching of Serious Vulnerabilities in Government Systems


The Cybersecurity and Infrastructure Security Agency (CISA) is responsible for building national capacity to defend American infrastructure and cybersecurity assets. In the past year it has been tasked with receiving information about American government agencies’ progress (or lack thereof) in implementing elements of Executive Order 14028: Improving the Nation’s Cybersecurity, and it has been involved in responses to a number of events, including SolarWinds, the Colonial Pipeline ransomware attack, and others. The Executive Order required that CISA first collect a large volume of information from government agencies and vendors alike to assess the threats towards government infrastructure and, subsequently, provide guidance concerning cloud services, track the adoption of multi-factor authentication and seek ways of facilitating its implementation, establish a framework to respond to security incidents, enhance CISA’s threat hunting abilities in government networks, and more.1

Today, CISA promulgated a binding operational directive that will require American government agencies to adopt more aggressive patch tempos for vulnerabilities. In addition to requiring agencies to develop formal policies for remediating vulnerabilities, it establishes a requirement that catalogued vulnerabilities with a Common Vulnerabilities and Exposures (CVE) ID assigned prior to 2021 be remediated within six months, and all others within two weeks. The vulnerabilities to be patched/remediated are found in CISA’s “Known Exploited Vulnerabilities Catalogue.”
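As a rough illustration of what operationalizing the directive could involve, here is a sketch of how an agency might flag its known exposures against the catalogue’s public JSON feed. The feed URL and field names (vulnerabilities, cveID, dueDate, requiredAction) reflect my understanding of the published catalogue rather than a verified integration, and the CVE IDs in the inventory are placeholders.

```python
import json
import urllib.request
from datetime import date

# Assumed public JSON feed for the Known Exploited Vulnerabilities catalogue;
# the URL and field names below are my best understanding, not verified here.
KEV_FEED = "https://www.cisa.gov/sites/default/files/feeds/known_exploited_vulnerabilities.json"

# Hypothetical inventory: CVE IDs an agency already knows affect its systems.
our_exposures = {"CVE-2021-44228", "CVE-2019-19781"}

with urllib.request.urlopen(KEV_FEED) as response:
    catalogue = json.load(response)

today = date.today()
for vuln in catalogue.get("vulnerabilities", []):
    cve = vuln.get("cveID")
    due = vuln.get("dueDate")  # e.g. "2022-05-03"
    if cve in our_exposures and due:
        overdue = date.fromisoformat(due) < today
        status = "OVERDUE" if overdue else f"due {due}"
        print(f"{cve}: {vuln.get('requiredAction', 'remediate')} [{status}]")
```

The interesting part is less the lookup itself than the fact that the catalogue attaches a hard due date to each entry, which is what turns the directive into something agencies can be measured against.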

It’s notable that while patching is obviously preferred, the CISA directive doesn’t mandate patching per se but rather that ‘remediation’ take place.2 As such, organizations may be authorized to deploy defensive measures that prevent a vulnerability from being exploited without actually patching the underlying vulnerability, so as to avoid a patch having unintended consequences for either the application in question or for other applications/services that currently rely on outdated or bespoke programming interfaces.

In the Canadian context, there aren’t equivalent requirements that can be placed on Canadian federal departments. While Shared Services Canada can strongly encourage departments to patch, the Treasury Board Secretariat has published a “Patch Management Guidance” document, and the Canadian Centre for Cyber Security has a suggested patch deployment schedule,3 final decisions are still made by individual departments under their respective deputy ministers pursuant to the Financial Administration Act.

The Biden administration is moving quickly to accelerate its ability to identify and remediate vulnerabilities while simultaneously letting its threat intelligence staff track adversaries in American networks. That last element is less of an issue in the Canadian context, but the first two remain pressing and serious challenges.

While it’s positive to see the Americans moving quickly to improve their security posture, I can only hope that the Canadian federal, and provincial, governments similarly clear long-standing logjams that delegate security decisions to parties who may be ill-suited to make optimal decisions, either out of ignorance or because patching systems is seen as secondary to fulfilling a given department’s primary service mandate.


  1. For a discussion of the Executive Order, see: “Initial Thoughts on Biden’s Executive Order on Improving the Nation’s Cybersecurity” or “Everything You Need to Know About the New Executive Order on Cybersecurity.” ↩︎
  2. For more, see CISA’s “Vulnerability Remediation Requirements“. ↩︎
  3. “CCCS’s deployment schedule only suggests timelines for deployment. In actuality, an organization should take into consideration risk tolerance and exposure to a given vulnerability and associated attack vector(s) as part of a risk‑based approach to patching, while also fully considering their individual threat profile. Patch management tools continue to improve the efficiency of the process and enable organizations to hasten the deployment schedule.” Source: “Patch Management Guidance” ↩︎

Detecting Academic National Security Threats


The Canadian government is following in the footsteps of its American counterpart and has introduced national security assessments for recipients of government natural science (NSERC) funding. Such assessments will occur when proposed research projects are deemed sensitive and where private funding is also used to facilitate the research in question. Social science (SSHRC) and health (CIHR) funding will be subject to these assessments in the near future.

I’ve written, elsewhere, about why such assessments are likely fatally flawed. In short, they will inhibit student training, will cast suspicion upon researchers of non-Canadian nationalities (and especially upon researchers who hold citizenship with ‘competitor nations’ such as China, Russia, and Iran), and may encourage researchers to hide their sources of funding to be able to perform their required academic duties while also avoiding national security scrutiny.

To be clear, such scrutiny often carries explicit racist overtones, has led to many charges but few convictions in the United States, and presupposes that academic units or government agencies can detect a human-based espionage agent. Further, it presupposes that HUMINT-based espionage is a more serious, or equivalent, threat to research productivity as compared to cyber-espionage. As of today, there is no evidence in the public record in Canada that indicates that the threat facing Canadian academics is equivalent to the invasiveness of the assessments, nor that human-based espionage is a greater risk than cyber-based means.

To the best of my knowledge, while HUMINT-based espionage does generate some concerns, they pale in comparison to the risk of espionage linked to cyber operations.

However, these points are not the principal focus of this post. I recently re-read some older work by Bruce Schneier that I think nicely captures why asking scholars to engage in national security assessments of their own, and their colleagues’, research is bound to fail. Schneier wrote the following in 2007, when discussing the US government’s “see something, say something” campaign:

[t]he problem is that ordinary citizens don’t know what a real terrorist threat looks like. They can’t tell the difference between a bomb and a tape dispenser, electronic name badge, CD player, bat detector, or trash sculpture; or the difference between terrorist plotters and imams, musicians, or architects. All they know is that something makes them uneasy, usually based on fear, media hype, or just something being different.

Replace “terrorist” with “national security threat” and we get to approximately the same conclusion. Individuals—even those trained to detect and investigate human intelligence driven espionage—can find it incredibly difficult to detect human agent-enabled espionage. Expecting academics to do this detection work, when they are motivated to develop international and collegial relationships, may be unable to assess the national security implications of their research, and are being told to abandon funding while the government fails to replace what is abandoned, guarantees that this measure will fail.

What will that failure mean, specifically? It will involve incorrect assessments and suspicion being aimed at scholars from ‘competitor’ and adversary nations. Scholars will question whether they should work with a Chinese, Russian, or Iranian scholar even when that scholar is employed at a Western university, let alone at a non-Western institution. I doubt these same scholars will similarly question whether they should work with Finnish, French, or British scholars. Nationality and ethnicity lenses will be used to assess who are the ‘right’ people with whom to collaborate.

Failure will not just affect professors. It will also extend to undergraduate and graduate students, as well as post-doctoral fellows and university staff. Already, students are questioning what they must do in order to prove that they are not national security threats. Lab staff and other employees who have access to university research environments will similarly be placed under an aura of suspicion. We should not, we must not, create an academy where these are the kinds of questions with which our students, colleagues, and staff must grapple.

Espionage is, it must be recognized, a serious issue that faces universities and Canadian businesses more broadly. The solution cannot be to ignore it and hope that the activity goes away. However, the response to such threats must demonstrate necessity and proportionality and demonstrably involve evidence-based and inclusive policy making. The current program that is being rolled out by the Government of Canada does not meet this set of conditions and, as such, needs to be repealed.