
What is the Role of Cyber Operators in Assessing Effectiveness or Shaping Cyber Policy?

An anonymous European intelligence official wrote an op-ed in July entitled, “Can lawyers lose wars by stifling cyber capabilities?” The article does a good job of laying out why a cyber operator — that is, someone who is presumably relatively close to either planning or undertaking cyber operations — is deeply frustrated by the way in which decision-making is undertaken.

While I admit to having some sympathy for the author’s plight, I fundamentally disagree with much of their argument, and think that the positions they hold should be taken up and scrutinised. In this post, I’m really just pulling out quotations from the article and then providing some rebuttal or analysis — you’re best off reading it first if you want to more fully follow along and assess whether I’m being fair to the author and the points they are making.

With that out of the way, here we go….

Law is no longer seen as a system of checks and balances but as a way to shape state behaviour in cyberspace

Yes, this is one of the things that laws are actually supposed to do. You may (reasonably in some cases) disagree with the nature of the laws and their effects, but law isn’t a mere “check and balance.” And, especially where there is no real ability to contest interpretations of law (because they are administered by government agencies largely behind closed doors), it is particularly important for law to have a stronger guiding function in order to maintain democratic legitimacy and social trust in government operations.

Idealistic legalism causes legal debates on cyber capabilities to miss a crucial discussion point: what operational constraints are we willing to accept and what consequences does that have for our national security?

Sure, but some of this is because the US government is so closed-mouthed about its capacities. What if there were a more robust effort to explain practice, as is the case with some European agencies? I would note that the Dutch, as an example, are sometimes pretty explicit about their operations, which is then helpful for considering their activities with respect to authorising laws and associated national and international norms.

Laws attempt to capture as many activities in cyberspace as possible. To do so, legal frameworks must oversimplify. This is ill-suited to such a complex domain

This seems not to appreciate how law tends, at least in some jurisdictions, to be broader in scope and then supplemented by regulations or policies. However, where regulations or policies have regularly proven insufficient, there may be a decision that more detailed laws are now necessary. To an extent this is the case post-Snowden, and with very good reason, as demonstrated by the various non-compliance reports concerning certain NSA (and other American intelligence community) operations over time.

The influence of practitioners slowly diminishes as lawyers increasingly take the lead in shaping senior leadership opinions on proposed cyber operations rather than merely advising.

I can appreciate the frustration of seeing leadership move from operations practitioners to policy/legal practitioners.1 But the shift between organisations being led by operations practitioners and those focused on law/policy can be a normal back and forth.

And, to be entirely honest, the key question — and the implicit critique throughout this whole piece — is whether the decision makers understand what the ops folks are saying.2 Those in decision-making roles have a lot of responsibilities and, often, a bigger or different picture of the implications of operations.

I’m in no way saying that lawyers should always be the folks to call the shots3 but just because you’re in operations doesn’t mean you are necessarily making the right calls broadly; instead, you may be seeing the right calls through your particular lens and mission. That lens and mission may not always be sufficient for coming to a conclusion that aligns more broadly with agency, national, or international policy intents/goals.

… a law might stipulate that a (foreign) intelligence agency cannot collect information from systems owned by the citizens of its country. But what if, as Chinese and Russian cyber threat actors do, a system belonging to a citizen is being abused to route attack traffic through? Such an operational development is not foreseen, and thus not prescribed, by law. To collect information would then be illegal and require judicial overhaul – a process that can take years in a domain that can see modus operandi shift in a matter of days.

There may be cases where you have particularly risk-averse decision makers or, alternately, particularly strong legal limitations that preclude certain kinds of operations.

I would note that it is against the law to simply target civilians in conflict scenarios, on the grounds that doing so runs counter to the agreed-upon laws of war (recognising they are often not adhered to). Does this have the effect of impeding certain kinds of military activities? Yes. And that may still be the right decision, notwithstanding the consequences it may have for the ability to conduct some operations and/or their efficacy.

In the cyber context, the complaint is that certain activities are precluded on the basis that the law doesn’t explicitly recognise and authorise them. Law routinely leaves wiggle room, and part of the popular (and sometimes private…) problem has been how intelligence lawyers are perceived as abusing that wiggle room — again, see the NSA and other agencies as they were denuded in some of the Snowden revelations, and the openly opposite interpretations of legislation that were adopted to authorise actions that legislators had deliberately sought to preclude.4 For further reasons why mistrust may exist between operators and legislators, in Canada you can turn to the ongoing historical issues between CSIS and the Federal Court, which suggest that the “secret law and practices” adopted by Canada’s IC may run counter to the actual law and legal processes, and then combine that with some NSIRA findings that CSE activities may have taken place in contravention of Canadian privacy law.

In the above context, I would say that lots of legislators (and publics) have good grounds to doubt the good will or decision-making capacity of the various parties within national ICs. You don’t get to undertake the kinds of activities that happened previously and then just pretend that “it was all in the recent past, everything’s changed, trust us guys.”

I would also note: the quoted material makes an assumption that policy makers have not, in fact, considered the scenario the author is proposing and then rejected it as a legitimate way of operating. The fact that a decision may not have gone your way is not the same as your concerns not being evaluated in the process of reaching a conclusion.

When effectiveness is seen as secondary, cyber activities may be compliant, but they are not winning the fight.

As I have been writing in various (frustrating) peer reviews I’ve been doing: evidence of this, please, as opposed to opinion and supposition. Also, “the fight” will be understood and perceived by different people in different positions in different agencies: a universal definition should not be presumed.

…constraints also incur costs due to increased bureaucratic complexity. This hampers operational flexibility and innovation – a trade-off often not adequately weighed by, or even visible to, law- and decision-makers. When appointing ex-ante oversight boards or judicial approval, preparation time for conducting cyber operations inevitably increases, even for those perfectly legal from the beginning.

So, in this case the stated problem is that legislators and decision makers aren’t getting the discrete kinds of operational detail that this particular writer thinks are needed to make the “right” trade-off decisions.

In some cases… yeah. That’ll be the case. Welcome to the hell of people not briefing up properly, or people not understanding because briefing materials weren’t scoped or prepared right, and so forth. That is: welcome to the government (or any sufficiently large bureaucracy)!

But more broadly, the complaint is that the operator in question knows better than the other parties, but without, again, specific and clear evidence that the trade-offs are incorrect. I get that spooky things can’t be spoken aloud without becoming de-spookified, but picture a similar kind of argument in any other sector of government and you’ll get the same kind of complaint. Ops people will regularly complain about legislators or decision makers when they don’t get their way, their sandcastles get crushed, or they have to do things in less efficient ways on their busy days. Sometimes they’re right to complain; in other cases, there is a lot more at stake than what they see going on operationally.

This is a losing game because, as Calder Walton noted, ‘Chinese and Russian services are limited only by operational effectiveness’.

I don’t want to suggest I disagree! But, at the same time, this is along the lines of “autocracies are great because they move faster than democracies and we have to recognise their efficiency” arguments that float around periodically.5

All of which is to say: autocracies and dictatorships have different internal logics to their bureaucracies that can have corresponding effects on their operations.

While it may be “the law” that impedes some Five Eyes/Western agencies’ activities, you can picture the need to advance the interests of kleptocrats or dictators’ kids, gin up enough ransomware dollars to put food on the team’s table, and so forth, as establishing some limits on the operational effectiveness of autocratic governments’ intelligence agencies.

It’s also worth noting that “effectiveness” can be a contested concept. If you’re OK blundering around, burning your tools, and being identified pretty often, then you may have a different approach to cyber operations generally, as opposed to situations where being invisible is a key part of operational development. I’m not trying to suggest that the Russians, Chinese, and other adversaries just blunder about, nor that the FVEY are magical ghosts that no one ever sees on boxes or undertaking operations. However, how you perceive or define “effective” will have corresponding consequences for the nature and types of operations you undertake and which are perceived as achieving the mission’s goals.

Are agencies going to publicly admit they were unable to collect intelligence on certain adversary cyber actors because of legal boundaries?

This speaks to the “everything is secret and thus trust us” posture that is generally antithetical to democratic governance. To reverse things on the author: should there be more revelation of operations that don’t work so that they can be more broadly learned from? The complaint seems to be that the lawyers et al. don’t know what they’re doing because they aren’t necessarily exposed to the important spooky stuff, or don’t understand its significance and importance. To what extent, then, do the curtains need to open somewhat to communicate this in effective ways, along with the ways in which successes have previously happened?

I know: if anything is shown then it blows the whole premise of secret operations. But it’s hard to complain that people don’t get the issues if no facts are brought to the table, whereas the lawyers and such can point to the laws and at least talk to them. If you can’t talk about ops, then don’t be surprised that people will talk about what is publicly discussable…and your ops arguments won’t have weight because they don’t even really exist in the room where the substantive discussions about guardrails may be taking place.


In summary: while I tend not to agree with the author — and disagree as someone who has always been more on the policy and/or law side of the analytic space — their article was at least thought-provoking. And for that alone I think it’s worth taking the time to read their article and consider the arguments within it.


  1. I would, however, hasten to note that the head of NSA/Cyber Command tends to be a hella lot closer to “ops” by merit of their military leadership. ↩︎
  2. And, also, what the legal and policy teams are saying… ↩︎
  3. Believe me on this point… ↩︎
  4. See, as example: “In 2006, after Congress added the requirement that Section 215 orders be “relevant to” an investigation, the DOJ acknowledged that language was intended to impose new protections. A fact sheet about the new law published by the DOJ stated: “The reauthorizing legislation’s amendments provide significant additional safeguards of Americans’ civil liberties and privacy,” in part by clarifying, “that a section 215 order cannot be issued unless the information sought is relevant to an authorized national security investigation.” Yet just months later, the DOJ convinced the FISC that “relevant to” meant “all” in the first Section 215 bulk dragnet order. In other words, the language inserted by Congress to limit the scope of what information could be gathered was used by the government to say that there were no limits.” From: Section 215: A Brief History of Violations. ↩︎
  5. See, as an example, the period 2-4 years ago when there was a perception that the Chinese response to Covid-19 and the economy was superior to that of everyone else grappling with the global pandemic. ↩︎

Quick Thoughts on Academics and Policy Impact

I regularly speak with scholars who complain that policy makers don’t read their work. 95% of the time that work is either published in books costing hundreds of dollars (in excess of department budgets) or behind a journal paywall that departments lack access to.1

Bluntly, it’s hard to have impact if your work is behind paywalls.

Moreover, in an era of ‘evidence-based policymaking’, dedicated public servants will regularly want to assess some of the references or underlying data in the work in question. They perform due diligence when they read facts, arguments, or policy recommendations.

However, the very work that a scholar uses to develop their arguments or recommendations may also lie behind paywalls. Purchasing access to the underlying books and papers that go into writing a paper could run a public servant, or their department, hundreds or thousands of dollars more. Frankly, they’re not likely to spend that amount of money, and it’d often be irresponsible for them to do so.

So what is the effect of all these paywalls? Even if a government policymaker can get access to the scholar’s paper, they cannot fact-check or assess how it was built. It is thus hard for them to validate conclusions and policy recommendations. This, in turn, means that committed public servants may put important scholarly research into an ‘interesting but not sufficiently evidence-based’ bucket.

Does this mean that academics shouldn’t publish in paywalled journals or books? No, because they have lots of audiences, and publications are the coin of the academic realm. But it does mean that academics who want to have near- or middle-term impacts need to do the work and make their findings, conclusions, and recommendations publicly available.

What to do, then?

Broadly, it is helpful to prepare and publish summaries of research in open-source, publicly available outlets. The targets for this are often think tanks or venues that let academics write long-form pieces (think a maximum of 1,200-1,500 words). Alternately, scholars can just start and maintain a blog and host summaries of their ideas there, along with an offer to share papers that folks in government might be interested in but to which they lack access.

I can say with some degree of authority from my time in academia that publishing publicly available reports, or summarising paywalled work, can do a great deal to move the needle in how government policies are developed. But, at the same time, moving that needle requires spending the time and effort. You should not just expect busy government employees to randomly come across your paywalled article, buy it, read it, and take your policy recommendations seriously.


  1. Few government departments have extensive access to academic journals. Indeed, even working at one of the top universities in the world and having access to a wealth of journals, I regularly came across articles that I couldn’t access! ↩︎

Grand Visions Fizzle in Brazil

The NYT has an incredibly depressing view of the way that Brasil is moving forward; while much of that view is shared by the citizens of the country, the article is overly one-sided and generally lacks a comprehensive understanding of why some of the cost overruns and setbacks have happened. We read that environmental protections and efforts to work with aboriginal peoples have led to railroads being delayed: why were there such expectations of a smooth and quick development of those railroads in the first place? Perhaps because the ‘frictions’ of such development (i.e. the environment and people living on the land) had been cast aside?

What is largely missing throughout the piece is context: why were certain projects put forward and then abandoned? In the absence of such context, we’re left with the impression that the setbacks are the result of poor management and bureaucracy. But is this the case, or simply the projection of American values onto specific South American infrastructure decisions?


Notes EM: Disorder as resistance

evgenymorozov:

I found this in the Letters section of the latest issue of The Times Literary Supplement (dated March 15, 2013). It doesn’t seem to be online:

Binder families

Sir, – In David Winters’s review of The Demon of Writing by Ben Kafka he mentions a clerk who saved the actors of the Comédie-Française during the Terror, by soaking their death warrants in a tub and throwing the balls of pulp out of the window (February 15). In the 1960s I worked as a welfare case worker, along with several hundred others, in a vast office in downtown Chicago. Each of the families of my 300 clients existed, bureaucratically speaking, as a large binder filled with forms and written notes. When the families had been on welfare for several generations, the binders were equivalent to two or three large telephone books.

Overwhelmed with an avalanche of forms, telephone calls, clients waiting for hours downstairs to see me, home visits to the high-rise housing projects in which they lived, I was taught by the veteran case workers to simply go into the huge library where the binders were stored, alphabetically on endless shelves, and “accidentally” file binders out of place. Then I could innocently plead that I was unable to take any action on the case because I could not find the binder. Without the binder nothing in the status of the clients could change, their cheques would continue to arrive, and I could “miraculously” locate their binder if I needed to. Sadly, we were on the verge of the computer age, the information was beginning to appear on IBM punch cards, and the binders were soon to become obsolete, signalling the beginning of a far more ruthless era in which no clerk could make inconvenient facts disappear.

MICHAEL LIPSEY 75 San Marino Drive, San Rafael, California 94901.

This speaks volumes to the humanity that “inefficient” bureaucratic organization can enable. Further, it foregrounds how contemporary drives towards efficiency and order can obviate some historical means of bureaucratic resistance, resistance that was significant for maintaining and improving people’s daily lives.