Link

Economics and Software Bills of Materials (SBOM)

In an article for The Hill, Shannon Lantzy and Kelly Rozumalski discuss how Software Bills of Materials (SBOMs) are good for business as well as security. SBOMs emerged more forcefully in American policy debates after the Biden White House promulgated an Executive Order on cybersecurity on May 12, 2021. The Order requires that developers and private companies providing services to the United States government produce a Software Bill of Materials (SBOM).1 SBOMs are meant to help incident responders assess which APIs, libraries, or other digital elements might be vulnerable to an identified operation, and to help government procurement agencies better ensure that the digital assets in a product or service meet a specified security standard.
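To make the incident-response use case concrete, here is a minimal sketch (not a real tool) of how an SBOM supports triage: given a CycloneDX-style component list and a set of known-vulnerable packages, report which components in a product are affected. The SBOM fragment, product name, and advisory data are invented for illustration.

```python
# Hypothetical SBOM fragment, loosely modelled on CycloneDX JSON.
sbom = {
    "product": "ExampleRouter 2.1",
    "components": [
        {"name": "openssl", "version": "1.0.2k"},
        {"name": "busybox", "version": "1.31.0"},
        {"name": "liblogging", "version": "4.2.1"},
    ],
}

# Hypothetical advisory feed: (package, affected version) pairs.
advisories = {("openssl", "1.0.2k"), ("liblogging", "4.2.1")}

def affected_components(sbom: dict, advisories: set) -> list:
    """Return the SBOM components that match a known advisory."""
    return [
        c for c in sbom["components"]
        if (c["name"], c["version"]) in advisories
    ]

hits = affected_components(sbom, advisories)
print(f'{sbom["product"]}: {len(hits)} vulnerable component(s)')
for c in hits:
    print(f'  {c["name"]} {c["version"]}')
```

The point of the exercise is the lookup itself: without a machine-readable inventory, a responder cannot run even this trivial intersection and must instead reverse-engineer or guess at what a product contains.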

Specifically, Lantzy and Rozumalski write:

Product offerings that are already secure-by-design will be able to command a premium price because consumers will be able to compare SBOMs.

Products with inherently less patchable components will also benefit. A universal SBOM mandate will make it easy to spot vulnerabilities, creating market risk for lagging products; firms will be forced to reengineer the products before getting hacked. While this seems like a new cost to the laggards, it’s really just a transfer of future risk to a current cost of reengineering. The key to a universal mandate is that all laggards will incur this cost at roughly the same time, thereby not losing a competitive edge.

The promise of increased security and reduced risk will not be realized by SBOM mandates alone. Tooling and putting this mandate in practice will be required to realize the full power of the SBOM.

The idea of internalizing security costs to developers, and potentially increasing the cost of goods, has been discussed publicly and with Western governments for at least two decades. We’ve seen the overall risk profiles presented to organizations increase year over year as companies race to market with little regard for security, a business development strategy that made sense when they faced few economic liabilities for selling products with severe cybersecurity limitations or vulnerabilities. In theory, enabling comparison shopping via SBOMs will disincentivize companies from selling low-grade equipment and services if they want to win high-profit enterprise or high-reliability government contracts, with the effect that security improvements will also trickle down to the products purchased by consumers (‘trickle down cybersecurity’).

While I think that SBOMs are definitely part of developing cybersecurity resilience, it remains to be seen just how much consumers will pay for ‘more secure’ products given that, first, they are economically incentivized to pay the lowest possible amounts for goods and services and, second, they are unlikely to know for certain what constitutes good or bad security practice. Advocates of SBOMs often liken them to nutrition labels, but we know that at most about a third of consumers read those labels (and those who do often face societal pressures to regulate caloric intake, which is why they read them), and that the labels are often inaccurate.

It will be very interesting to see whether enterprise and consumers alike will be able or willing to pay higher up-front costs, to say nothing of being able to actually trust what is on the SBOM labels. Will companies that adopt SBOM products suffer a lower rate of cybersecurity incidents, or ones that are of reduced seriousness, or be able to respond more quickly when a cybersecurity incident has been realized? We’re going to actually be able to test the promises of SBOMs, soon, and it’s going to be fascinating to see things play out.


  1. I have published a summary and brief analysis of this Executive Order elsewhere in case you want to read it. ↩︎
Link

Operation Fox Hunt

(Photo by Erik Mclean on Pexels.com)

ProPublica’s Sebastian Rotella and Kirsten Berg have an outstanding piece on the Chinese government’s efforts to compel individuals to return to China to face often trumped-up charges. These efforts include secretly sending Chinese officials into the United States to surveil, harass, intimidate, and stalk US residents, as well as imprisoning or otherwise threatening residents’ family members who have remained in China.

Many of the details in the article come from court records, interviews, and assessments of Chinese media. It remains to be seen whether Chinese agents’ ability to conduct ‘fox hunts’ will be impeded now that the US government is more aware of these operations. Given the attention and suspicion now cast towards citizens of China, however, there is also a risk that FBI agents may become overzealous in their investigations, to the detriment of law-abiding Chinese-Americans or visitors from China.

In an ideal world there would be equivalent analyses or publications on the extent to which these operations are also undertaken in Canada. To date, however, there is no equivalent to ProPublica’s piece in the Canadian media landscape and, given the Canadian media’s contraction, we can’t realistically expect one anytime soon. Still, even a short piece assessing whether individuals from China who have run operations in the United States, and who are now barred from entering the US or would face charges upon crossing the US border, are similarly barred or under an extradition order in Canada would be a positive addition to what we know of how the Canadian government is responding to these kinds of Chinese operations.

Link

Alarmist Takes On Chinese Influence Operations Must Be Set Aside

Lotus Ruan and Gabrielle Lim have a terrific piece in Just Security which strongly makes the case that, “fears of Chinese disinformation are often exaggerated by overblown assessments of the effects of China’s propaganda campaigns and casually drawn attributions.”

The two make clear that there are serious issues with how some Western policy analysts and politicians are suggesting their governments respond to foreign influence operations associated with Chinese public and private parties. To begin, the very efficacy of influence operations remains an open question. While this is an area seeing more research of late, academics and policy analysts alike cannot assert with significant accuracy whether foreign influence operations have any real impact on domestic opinions or feelings. This should call for conservatism in the policies that are advanced but, instead, we often see calls for Western nations to adopt the internet ‘sovereignty’ positions championed by Russia and China themselves. These analysts and politicians are, in other words, asserting that the only way to be safe from China (and Russia) is to adopt those countries’ own policies.

Even were such (bad) policies adopted, it’s unclear that they would resolve the worst challenges facing countries such as the United States today. Anti-vaxxers, pro-coup supporters, and Big Lie advocates have all been affected by domestic influence operations that were (and are) championed by legitimately elected politicians, celebrities, and major media personalities. Building a sovereign internet ecosystem will do nothing to protect from the threats that are inside the continental United States and which are clearly having a deleterious effect on American society.

What I most appreciated in the piece by Ruan and Lim is that they frankly and directly called out many of the so-called solutions to disinformation and influence operations as racist. As just one example, there are those who call for ‘clean’ technologies that juxtapose Western against non-Western technologies. These kinds of arguments often directly perpetuate racist policies: they will do nothing to mitigate the spread of misinformation while simultaneously casting suspicion, and directing violence, towards non-Caucasian members of society. Such proposals must be resisted, and the authors are to be congratulated for forcefully calling out these policies for what they are instead of carefully critiquing the proposals without ever naming them as racist.

Link

Standards as the Contemporary Highway System

Jonathan Zittrain, in remarks prepared a few weeks ago, framed Internet protocol standards in a novel way. Specifically, he stated:

Second, it’s entirely fitting for a government to actively subsidize public goods like a common defense, a highway system, and, throughout the Internet’s evolution, the public interest development of standards and protocols to interlink otherwise-disparate systems. These subsidies for the development of Internet protocols, often expressed as grants to individual networking researchers at universities by such organizations as the National Science Foundation, were absolutely instrumental in the coalescence of Internet standards and the leasing of wholesale commercial networks on which to test them. (They also inspired some legislators to advertise their own foresight in having facilitated such strategic funding.) Alongside other basic science research support, this was perhaps some of the best bang for the buck that the American taxpayer has received in the history of the country. Government support in the tens of millions over a course of decades resulted in a flourishing of a networked economy measured in trillions.

Zittrain’s framing of this issue builds on some writing I’ve published around standards. In the executive summary of a report I wrote a few months ago, I stated that,

… the Government of Canada could more prominently engage with standards bodies to, at least in part, guarantee that such standards have security principles baked in and enabled by default; such efforts could include allocating tax relief to corporations, as well as funding to non-governmental organizations or charities, so that Canadians and Canadian interests are more deeply embedded in standards development processes.

To date I haven’t heard of this position being adopted by the Government of Canada, or even debated in public. However, presenting standards development as a new kind of highway system could be the rhetorical framing that helps the idea gain traction.

Link

Does Canada, Really, Need A Foreign Intelligence Service?

A group of former senior Canadian government officials who have been heavily involved in the intelligence community recently penned an op-ed raising the question of “does Canada need a foreign intelligence service?” It’s a curious piece, insofar as it argues that Canada does need such a service while simultaneously discounting past debates about whether this kind of service should be established, and giving short shrift to Canada’s existing, little-discussed collection capacities. The authors also fundamentally fail to take up what is probably the most serious issue currently plaguing Canada’s intelligence community: the inability to identify, hire, and retain qualified staff in the existing agencies that hold intelligence collection and analysis responsibilities.

The Argument

The authors’ argument proceeds in a few parts. First, they argue that Canadian decision makers don’t really possess an intelligence mindset, insofar as they’re not primed to want, or feel the need to use, foreign intelligence collected from human sources. Second, they argue that the Canadian Security Intelligence Service (CSIS) already possesses a limited foreign intelligence mandate (and, thus, that the Government of Canada would only be enhancing pre-existing powers instead of creating new powers from nothing). Third, and the meat of the article, they suggest that Canada probably does want an agency that collects foreign intelligence using human sources to support other members of the intelligence community (e.g., the Communications Security Establishment), and that such powers could likely just be injected into CSIS itself. The article concludes with the position that Canada’s allies “have quietly grumbled from time to time that Canada is not pulling its weight” and that we can’t prioritize our own collection needs when we depend on intelligence provided by our close allies per the agreements we’ve established with them. This last part of the argument has a nationalistic bent: implicitly, the authors are asking whether we can really trust even our allies and closest friends. Don’t we need to create a capacity, and determine where such an agency and its tasking should focus, perhaps starting small but with the intent of growing larger?

Past Debates and Existing Authorities

The argument as positioned fails to clearly make the case for why these expanded authorities are required, and it does not account for the existing powers of the CSE, the Canadian military, and Global Affairs Canada (GAC).

With regards to the former, the authors state, “the arguments for and against the establishment of a new agency have never really been examined; they have only been cursorily debated from time to time within the government by different agencies, usually arguing on the basis of their own interests.” In making this argument they depend on people not remembering their history. The creation of CSIS saw a significant debate about whether to include foreign human intelligence elements, and the decision by Parliamentarians–not just the executive–was to exclude them. The question of whether to enable CSIS or another agency to collect foreign human intelligence cropped up again in the late 1990s and early 2000s, and again around 2006-2008, when the Harper government proposed setting up this kind of agency and then declined to do so. To some extent, the authors’ op-ed keeps with the tradition of this question arising every decade or so before being quietly set to the side.

In terms of agencies’ existing authorities and capacities, the CSE is responsible for conducting signals intelligence for the Canadian government and is tasked to focus on particular kinds of information per priorities established by the government. Per its authorizing legislation, the CSE can also undertake certain kinds of covert operations, the details of which have been kept firmly under wraps. The Canadian military has been aggressively building up its intelligence capacities with few details leaking out, and its ability to undertake foreign intelligence using human sources is as unclear as the breadth of its mandate more generally.1 Finally, GAC has long collected information abroad. While its activities diverge from those of the CIA or MI6–officials at GAC aren’t planning assassinations, as an example–they do collect foreign intelligence and share it back with the rest of the Government of Canada. Further, in the increasingly distant past GAC stepped in for the CIA in environments the Agency was prevented from operating within, such as Cuba.

All of this is to say that Canada periodically goes through these debates about whether it should stand up a foreign intelligence service akin to the CIA or MI6. But the benefits of such a service are often unclear, the costs prohibitive, and the actual debates about what Canada already does left by the wayside. Before anyone seriously thinks about establishing a new service, they’d be well advised to read through Carvin, Juneau, and Forcese’s book Top Secret Canada. After doing so, readers will appreciate that staffing is already a core problem facing the Canadian intelligence community and recognize that creating yet another agency will only worsen this problem. Indeed, before focusing on creating new agencies, the authors of the Globe and Mail op-ed might turn their minds to how to overcome the existing staffing problems. Solving that problem might enable agencies to best use their existing authorizing legislation and mandates to collect much of the human foreign intelligence that the authors are so concerned about. Maybe that op-ed could be titled, “Does Canada’s Intelligence Community Really Have a Staffing Problem?”


  1. As an example of the questionable breadth of the Canadian military’s intelligence function: when the military was tasked with assisting long-term care homes during the height of the Covid-19 pandemic in Canada, it undertook surveillance of domestic activism organizations for unclear reasons and subsequently shared the end products with the Ontario government. ↩︎
Link

Which States Most Require ‘Democratic Support’?

Roland Paris and Jennifer Walsh have an excellent and thought-provoking column in the Globe and Mail where they argue that Western democracies need to adopt a ‘democratic support’ agenda. Such an agenda comprises multiple points:

  1. States getting their own democratic houses in order;
  2. States defending themselves and other democracies against authoritarian states’ attempts to disrupt democracies or coerce residents of democracies;
  3. States assisting other democracies which are at risk of slipping toward authoritarianism.

In principle, each of these points makes sense, and they can interoperate with one another. The vision is not to inject democracy into states but, instead, to protect existing systems and demonstrate their utility as a way of easing nations towards adopting and establishing democratic institutions. The authors also assert that countries like Canada should learn from non-Western democracies, such as Korea or Taiwan, to appreciate how they have maintained their institutions in the face of the pandemic, as a way to showcase how ‘peer nations’ also implement democratic norms and principles.

While I agree with the positions the authors suggest, far towards the end of the article they delicately slip in what is the biggest challenge to any such agenda. Namely, they write:

Time is short for Canada to articulate its vision for democracy support. The countdown to the 2024 U.S. presidential election is already under way, and no one can predict its outcome. Meanwhile, two of Canada’s closest democratic partners in Europe, Germany and France, may soon turn inward, preoccupied by pivotal national elections that will feature their own brands of populist politics.1

In warning that the United States may be an unreliable promoter of democracy (and, by extension, of the human rights and international rules and order that have backstopped Western-dominated world governance for the past 50 years), the authors reveal the real threat. What does it mean when the United States is regarded as likely to become more deeply mired in internecine ideological conflicts that absorb its attention, limit its productive global engagements, and are used by competitor and authoritarian nations to warn of the consequences of “American-style” democracy?

I raise these questions because, if the authors’ concerns are fair (and I think they are), then any democracy support agenda may need to proceed on the presumption that the USA will be a wavering or episodic partner in associated activities. To some extent, assuming this position would speak more broadly to a recognition that the great power has significantly declined. That such contingency planning is even needed to address potentially episodic American commitment to buttressing democracies should make clear that American wavering is the key issue: in a world where the USA is regarded as unreliable, what does this mean for other democracies and how they support fellow democratic states? Should countries such as Canada, and others with high rule-of-law democratic governments, focus first and foremost on ‘supporting’ US democracy? And, if so, what does this entail? How do you support a flailing and (arguably) failing global hegemon?

I don’t pretend to have the answers. But it seems that when we talk about supporting democracies, and can’t rely on the USA to show up in five years, then the metaphorical fire isn’t approaching our house but a chunk of the house is on fire. And that has to absolutely be our first concern: can we put out the fire and save the house, or do we need to retreat with our children and most precious objects and relocate? And, if we must retreat…to where do we retreat?


  1. Emphasis not in original. ↩︎
Link

The Answer to Why Twitter Influences Canadian Politics

Elizabeth Dubois has a great episode of Wonks and War Rooms where she interviews Etienne Rainville of The Boys in Short Pants podcast, former Hill staffer, and government relations expert. They unpack how government staffers collect information, process it, and identify experts.

Broadly, the episode focuses on how the absence of significant policy expertise in government and political parties means that social media—and Twitter in particular—can play an outsized role in influencing government, and why that’s the case.

While the discussion isn’t necessarily revelatory to anyone who has dealt with elements of the Government of Canada, and especially MPs and their younger staffers, it’s a good and tight conversation that could be useful for students of Canadian politics, and it helpfully distinguishes some of the differences between Canadian and American political cultures. I found the forthrightness of the conversation, and its honesty about how government operates, particularly useful in clarifying why Twitter is, indeed, a place for experts in Canada to spend time if they want to be policy relevant.

Link

Facebook Prioritizes Growth Over Social Responsibility

Karen Hao writing at MIT Technology Review:

But testing algorithms for fairness is still largely optional at Facebook. None of the teams that work directly on Facebook’s news feed, ad service, or other products are required to do it. Pay incentives are still tied to engagement and growth metrics. And while there are guidelines about which fairness definition to use in any given situation, they aren’t enforced.

The Fairness Flow documentation, which the Responsible AI team wrote later, includes a case study on how to use the tool in such a situation. When deciding whether a misinformation model is fair with respect to political ideology, the team wrote, “fairness” does not mean the model should affect conservative and liberal users equally. If conservatives are posting a greater fraction of misinformation, as judged by public consensus, then the model should flag a greater fraction of conservative content. If liberals are posting more misinformation, it should flag their content more often too.

But members of Kaplan’s team followed exactly the opposite approach: they took “fairness” to mean that these models should not affect conservatives more than liberals. When a model did so, they would stop its deployment and demand a change. Once, they blocked a medical-misinformation detector that had noticeably reduced the reach of anti-vaccine campaigns, the former researcher told me. They told the researchers that the model could not be deployed until the team fixed this discrepancy. But that effectively made the model meaningless. “There’s no point, then,” the researcher says. A model modified in that way “would have literally no impact on the actual problem” of misinformation.

[Kaplan’s] claims about political bias also weakened a proposal to edit the ranking models for the news feed that Facebook’s data scientists believed would strengthen the platform against the manipulation tactics Russia had used during the 2016 US election.
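The distinction between the two fairness definitions described above can be made concrete with a toy calculation (all numbers invented for illustration): under a calibrated definition, each group is flagged in proportion to how much misinformation it actually posts; under an “equal impact” rule like the one Kaplan’s team reportedly applied, both groups must be flagged at the same rate, which is only achievable by under-flagging the group posting more.

```python
# Two hypothetical user groups posting 1,000 items each, with invented
# misinformation rates.
posts = {"group_a": 1000, "group_b": 1000}
misinfo_rate = {"group_a": 0.10, "group_b": 0.02}

# Calibrated flagging: flag in proportion to actual misinformation.
calibrated_flags = {g: int(posts[g] * misinfo_rate[g]) for g in posts}

# "Equal impact" flagging: cap both groups at the lower group's rate.
capped_rate = min(misinfo_rate.values())
equal_impact_flags = {g: int(posts[g] * capped_rate) for g in posts}

# How much misinformation the equal-impact rule leaves untouched.
missed = calibrated_flags["group_a"] - equal_impact_flags["group_a"]

print(calibrated_flags)    # {'group_a': 100, 'group_b': 20}
print(equal_impact_flags)  # {'group_a': 20, 'group_b': 20}
print(f"misinformation left unflagged in group_a: {missed}")  # 80
```

With these invented numbers, forcing equal flag rates leaves 80 of the 100 misinformation posts in the heavier-posting group untouched, which is the researcher’s point that such a model “would have literally no impact on the actual problem.”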

The whole thing with ethics is that they have to be integrated such that they underlie everything an organization does; they cannot function as public relations add-ons. Sadly, at Facebook the only ethic is growth at all costs, the social implications be damned.

When someone or some organization is responsible for causing significant civil unrest, deaths, or genocide, we expect those who are even partly responsible to be called to account, not just in the public domain but in courts of law and international justice. And when those someones happen to be leading executives of one of the biggest companies in the world, the solution isn’t to berate them in Congressional hearings and hear their weak apologies, but to take real action against them and their companies.

Link

Pandemic Burnout in Academia

Virginia Gewin, writing for Nature:

Even before the pandemic, many researchers in academia were struggling with poor mental health. Desiree Dickerson, an academic mental-health consultant in Valencia, Spain, says that burnout is a problem inherent in the academic system: because of how narrowly it defines excellence, and how it categorizes and rewards success. “We need to reward and value the right things,” she says.

Yet evidence of empathetic leadership at the institutional level is in short supply, says Richard Watermeyer, a higher-education researcher at the University of Bristol, UK, who has been conducting surveys to monitor impacts of the pandemic on academia. Performative advice from employers to look after oneself or to leave one day a week free of meetings to catch up on work is pretty superficial, he says. Such counsel does not reduce work allocation, he points out.

Academia has a rampant problem in how it is professionally configured. Getting even a short-term contract now requires a CV that would have been worthy of tenure twenty or thirty years ago. This means that, when someone is hired as an assistant professor (with a 3-6 year probation period), they are usually already more qualified than their peers of the past, must be prolific in the work they contribute to and output, and must do so with minimal or no complaint so as to avoid any problems in their transition from assistant to associate professor (i.e., a full-time and sometimes protected employee).

Once someone has run this gauntlet, they come to expect that others should go through it as well: if the current generation could cut it, then surely the next generation of hires should be able to as well, provided they’re as ‘good’ as the current generation. This means that those who were forced into an unsustainable work environment, one that routinely eats into personal time, vacation time (i.e., vacation days used to catch up on work that is otherwise hard to get done), child-rearing time, and so forth, come to expect that those following them do the same.

Add to this the fact that most academic units are semi-self-governing, and that those in governance positions (e.g., department chairs, deans) tend to lack any actual qualifications for managing a largely autonomous workforce; they cannot rebalance workloads in a systemically positive way so as to create more sustainable working environments. Lacking formal management skills, these same folks tend to be unable to identify the issues that might arise in a workforce or network of colleagues, and they are not resourced to know how to actually treat a given problem. And all of this presumes they are motivated to find and resolve problems in the first place, a premise that is often faulty given that those who govern are routinely most concerned with the smooth running of their units and, of course, may keep in mind any junior colleagues who happen to cause ‘problems’ by expecting assistance or consideration in light of the systemic overwork that is the normal work-life imbalance.

What’s required is a full-scale revolt against the very structure of university departments if work-life balance is to be truly valued, and if academics are to be able to satisfy their teaching, service, and research requirements in the designated number of working hours. While the job is often perceived as very generous (and it is, in a whole lot of ways!) because you ideally have parts of it that you love, expecting people to regularly work 50-75 hour weeks, with little real downtime and little time for family and friends, while on a constant treadmill of outputs, is a recipe for creating jaded, cynical, and burned-out professionals. Sadly, that’s how an awful lot of contemporary departments are configured.

Link

The Value of Brief Synthetic Literature Reviews

The security group at the University of Cambridge’s Computer Laboratory has a really lovely blog series called ‘Three Paper Thursday’ that I wish other organizations would adopt.

Each post has a guest (usually a graduate student) provide concise summaries of three papers, followed by a short 2-3 paragraph ‘Lessons Learned’ section to conclude. Not only do readers get annotated bibliographies for each entry but, perhaps more importantly, the lessons learned mean that non-experts can appreciate the literature in a broader or more general context. The post about subverting neural networks, as an example, concludes with:

On the balance of the findings from these papers, adversarial reprogramming can be characterised as a relatively simple and cost-effective method for attackers seeking to subvert machine learning models across multiple domains. The potential for adversarial programs to successfully avoid detection and be deployed in black-box settings further highlights the risk implications for stakeholders.

Elsayed et al. identify theft of computational resources and violation of the ethical principles of service providers as future challenges presented by adversarial reprogramming, using the hypothetical example of repurposing a virtual assistant as spyware or a spambot. Identified directions for future research include establishing the formal properties and limitations of adversarial reprogramming, and studying potential methods to defend against it.

If more labs and research groups did this, I’d imagine it would help spread awareness of research and its actual utility or importance in advancing the state of knowledge, to the benefit of other academics. It would also showcase to policymakers what the key issues actually are and where research lines are trending, and thus empower them (and, perhaps, even journalists) to better take up the issues they happen to be focused on. That would certainly be a win for everybody: it would be easier for researchers to identify articles of interest, for practitioners to see the relevance of research, and for graduate students to showcase their knowledge and communication skills.