We have come a long way in routing the taboos that stand in the way of justice for victims of sexual assault. But there is still a distance to go. The problems are complex and rooted in centuries of culture and myth. The law, imperfect as it may be, is a powerful tool in achieving lasting change. But real justice will come only when we change attitudes—when respect for the autonomy of every person replaces old myths grounded in ownership, control, and power.
– Beverley McLachlin, Truth Be Told: My Journey Through Life and the Law
Two features most jumped out at me.
First, that the proposed legislation will compel Chinese companies “to police the personal data practices across their platforms” as part of Article 57. As noted by the team at Stanford,
“the three responsibilities identified for big platform companies here resonate with the “gatekeeper” concept for online intermediaries in Europe, and a requirement for public social responsibility reports echoes the DMA/DSA mandate to provide access to platform data by academic researchers and others. The new groups could also be compared with Facebook’s nominally independent Oversight Board, which the company established to review content moderation decisions.”
I’ll be particularly curious to see the kinds of transparency reporting that emerge from these companies. I doubt the reports will parallel those in the West, which tend to focus on the processes and number of disclosures from private companies to government; instead, Chinese companies’ reports will likely focus on how they are being ‘socially responsible’ in how they collect, process, and disclose data to other Chinese businesses. Still, if we do see this more consumer-focused approach, it will demonstrate yet another transparency reporting tradition that will be useful to assess in academic and public policy writing.
Second, the Stanford team notes that,
“new drafts of both the PIPL and the DSL added language toughening requirements for Chinese government approval before data holders in China cooperate with foreign judicial or law enforcement requests for data, making failure to gain permission a clear violation punishable by financial penalties up to 1 million RMB.”
While not surprising, this kind of restriction will continue to raise data sovereignty borders around personal information held in China. The effect? Western states will still need to push for Mutual Legal Assistance Treaty (MLAT) reform to successfully extract information from Chinese companies (and, in all likelihood, fail to conclude these reforms).1
Nevertheless, as competing legal frameworks are established that place the West on one side, and China and Russia on the other, the effect will be to further entrench divergent legal cultures of the Internet across different economic, political, and security regimes. At the same time, criminal actors who routinely operate with technical and legal savvy will be able to store data anywhere in the world, including beyond the reach of relevant law enforcement agencies.
Ultimately, the raising of regional and national digital borders is a topic to watch, both to keep an eye on what the forthcoming legal regimes will look like and to assess the extent to which the language of ‘strong sovereignty’ or nationalism creeps functionally into legislation around the world.
It’s stupefying how inaccurate macOS’s software update estimates are in actual use. I’m two hours into a ‘15 minutes remaining’ countdown and still have 5 more minutes on the clock. But at least you can actually install the operating system, unlike older and still-supported Apple Watches that require a full system reset in order to install watchOS updates!
How we measure changes not only what is being measured but also the moral scaffolding that compels us to live toward those standards. Innovations like assembly-line factories would further extend this demand that human beings work at the same relentlessly monotonous rate as a machine, as immortalized in Charlie Chaplin’s film Modern Times. Today, the control creep of self-tracking technologies into workplaces and institutions follows a similar path. In a “smart” or “AI-driven” workplace, the productive worker is someone who emits the desired kind of data — and does so in an inhumanly consistent way.
All I want for Apple to release today is a new Apple TV or, failing that, an absolutely massive cut in price to their very, very, very, very, very old Apple TV 4K. But really I want them to announce a new one so that I can take advantage of the full raft of Apple One services on the biggest screen I have in my house!
Elizabeth Dubois has a great episode of Wonks and War Rooms where she interviews Etienne Rainville of The Boys in Short Pants podcast, a former Hill staffer and government relations expert. They unpack how government staffers collect information, process it, and identify experts.
Broadly, the episode focuses on how the absence of significant policy expertise in government and political parties means that social media—and Twitter in particular—can play an outsized role in influencing government, and why that’s the case.
While the discussion isn’t necessarily revelatory to anyone who has dealt with some elements of the Government of Canada, and especially MPs and their younger staffers, it’s a good, tight conversation that could be useful for students of Canadian politics, and it helpfully distinguishes some of the differences between Canadian and American political cultures. I found the forthrightness of the conversation, and its honesty about how government operates, particularly useful in clarifying why Twitter is, indeed, a place for experts in Canada to spend time if they want to be policy relevant.
Jason Healey and Robert Jervis have a thought-provoking piece over at the Modern War Institute at West Point. The crux of the argument is that, as a result of overclassification, it’s challenging if not impossible for policymakers or members of the public (to say nothing of individual analysts in the intelligence community or legislators) to truly understand the nature of contemporary cyber conflict. While a great deal has been written about how Western organizations have been targeted by foreign operators, and how Western governments have been detrimentally affected by foreign operations, considerably less has been written about the effects of Western governments’ own operations against foreign states, because those operations are classified.
To put it another way, there’s no real way of understanding the cause and effect of operations, insofar as it’s not apparent whether foreign operators are behaving as they are in reaction to Western cyber operations or to perceptions of Western cyber operations. The kinds of communiqués provided by American intelligence officials, while somewhat helpful, also tend to obscure as much as they reveal (on good days). Healey and Jervis write:
General Nakasone and others are on solid ground when highlighting the many activities the United States does not conduct, like “stealing intellectual property” for commercial profit or disrupting the Olympic opening ceremonies. There is no moral equivalent between the most aggressive US cyber operations like Stuxnet and shutting down civilian electrical power in wintertime Ukraine or hacking a French television station and trying to pin the blame on Islamic State terrorists. But it clouds any case that the United States is the victim here to include such valid complaints alongside actions the United States does engage in, like geopolitical espionage. The concern of course is a growing positive feedback loop, with each side pursuing a more aggressive posture to impose costs after each fresh new insult by others, a posture that tempts adversaries to respond with their own, even more aggressive posture.
Making things worse, the researchers and academics who are ostensibly charged with better understanding and unpacking what Western intelligence agencies are up to sometimes decline to fulfill their mandate. The reasons are not surprising: engaging in such revelations threatens possible career prospects, endangers the publication of the research in question, or risks cutting off access to interview subjects in the future. Healey and Jervis focus on the bizarre logic of working in and researching the intelligence community in the United States, saying (with emphasis added):
Think-tank staff and academic researchers in the United States often shy away from such material (with exceptions like Ben Buchanan) so as not to hamper their chances of a future security clearance. Even as senior researchers, we were careful not to directly quote NSA’s classified assessment of Iran, but rather paraphrased a derivative article.
A student, working in the Department of Defense, was not so lucky, telling us that to get through the department’s pre-publication review, their thesis would skip US offensive operations and instead focus on defense.
Such examples highlight the distorting effects of censorship or overclassification: authors are incentivized to avoid what patrons want ignored and emphasize what patrons want highlighted or what already exists in the public domain. In paper after paper over the decades, new historical truths are cumulatively established in line with patrons’ preferences because they control the flow and release of information.
What are the implications, as written by Healey and Jervis? In intelligence communities the size of the United States’, information gets lost or is not passed to whoever ideally should receive it. Overclassification also means that policymakers and legislators who aren’t deeply ‘in the know’ will likely make decisions based on half-founded facts, at best. In countries such as Canada, where parliamentary committees cannot access classified information, they will almost certainly be confined to working off of rumour, academic reports, unclassified government reports, media accounts that divulge secrets or gossip, and the words spoken by the heads of security and intelligence agencies. None of this is ideal for controlling these powerful organizations, and the selective presentation of what Western agencies are up to risks compounding broader social ills.
Legislative Ignorance and Law
One of the results of overclassification is that legislators, in particular, become ill-suited to actually understanding the national security legislation presented before them. It means that members of the intelligence and national security communities can call for powers while members of parliament are largely prevented from asking particularly insightful questions, or from truly appreciating the implications of the powers being asked for.
Indeed, in the Canadian context it’s not uncommon for parliamentarians to have debated a national security bill in committee for months and then, when asked later about elements of the bill, to admit that they never really understood it in the first place. The same is true of Ministers who have subsequently signed off on broad classes of operations authorized by said legislation.
Part of that lack of understanding stems from the absence of examples of how powers have been used in the past, and how they might be used in the future; when engaging with this material entirely in the abstract, it can be tough to grasp the likely or possible implications of any legislation or authorization at hand. This is doubly true in situations where new legislation or Ministerial authorization will permit secretive behaviour, often using secretive technologies, to accomplish equally secretive objectives.
Beyond potentially bad legislative debates leading to poorly understood legislation being passed into law and Ministers consenting to operations they don’t understand, what else may follow from overclassification?
Nationalism, Miscalculated Responses, and Racism
To begin with, it creates a situation where ‘we’ in the West are being attacked by ‘them’ in Russia, Iran, China, North Korea, or other distant lands. I think this is problematic because it casts Western nations, and especially those in the Five Eyes, as innocent victims in the broader world of cyber conflict. Of course, individuals with expertise in this space will scoff at the idea–we all know that ‘our side’ is up to tricks and operations as well!–but for the general public or legislators, that doesn’t get communicated using similarly robust or illustrative examples. The result is that the operations of competitor nations can be cast as acts of ‘cyberwar’ without any appreciation that those actions may, in fact, be commensurate with the operations that Five Eyes nations have themselves launched. In creating an Us versus Them, and casting the Five Eyes and West more broadly as victims, a kind of nationalism can be incited where ‘They’ are threats whereas ‘We’ are innocents. In a highly complex and integrated world, these kinds of sharp and inaccurate concepts can fuel hate and socially divisive attitudes, activities, and policies.
At the same time, nations may perceive themselves to be targeted by Five Eyes nations, and attribute effects to Five Eyes operations even when that isn’t the case. When a set of perimeter logs shows something strange, when computers are affected by ransomware or wiperware, or when another kind of security event takes place, these less-resourced nations may simply assume that they’re being targeted by a Five Eyes operation. The result is that a foreign government may drum up nationalist concerns about ‘the West’ or ‘the Five Eyes’ while simultaneously queuing up its own operations to respond to what may, in fact, have been an activity totally divorced from the Five Eyes.
I also worry that the overclassification problem can lead to statements in Western media that demonize broad swathes of the world as dangerous, bad, or threatening for reasons that are entirely unapparent because Western activities are suppressed from public commentary. Such statements arise with regular frequency, where this or that is attributed to China, or when Russia or Middle Eastern countries are blamed for the most recent ill on the Internet.
The effect of such statements can be to incite differential degrees of racism. When mainstream newspapers, as an example, constantly beat the drum that the Chinese government (and, by extension, Chinese people) are threats to the stability and development of national economies or to world stability, over time this teaches people that China’s government and citizens alike are dangerous. Moreover, without information about Western activities, the operations conducted by foreign agencies can be read out of context, with the effect that people of certain ethnicities are regarded as inherently suspicious or sneaky compared to the (principally white) persons who occupy the West. While I would never claim that the overclassification of Western intelligence operations is the root cause of racism in societies, I do believe that overclassification can fuel misinformation about the scope of geopolitics and Western intelligence gathering operations, with the consequence of facilitating certain subsequent racist attitudes.
A colleague of mine has, in the past, given presentations and taught small courses in some of Canada’s intelligence community. This colleague lacks any access to classified materials, and his classes focus on how much high-quality information is publicly available when you know how and where to look for it, and how to analyze it. Students are apparently regularly shocked: they have access to classified materials, yet their understanding of the given issues is routinely more myopic and less robust. However, because they have access to classified material, they tend to focus as much, or more, on it, because the secretive nature of the material makes it ‘special’.
This is not a unique issue and, in fact, has been raised in the academic literature. When someone has access to special or secret knowledge, they are often inclined to focus on that material, on the assumption that it will provide insights in excess of what is available in open source. Sometimes that’s true, but often less so. And this ‘less so’ becomes especially problematic in an era where governments classify a great deal of material simply because the default assumption is that anything could potentially be revelatory of an agency’s operations. In this kind of era, overvaluing classified materials can lead to less insightful understandings of the issues of the day, while failing to appreciate that much of what is classified, and thus cast as ‘special’, really doesn’t provide much of an edge when engaging in analysis.
The solution is not to declassify all materials but, instead, to adopt far more aggressive declassification processes. This could, as just an example, entail tying declassification in some way to organizations’ budgets, such that if they fail to declassify materials their budgets are realigned in subsequent quarters or years until they make up the prior years’ shortfalls. Extending the powers of Information Commissioners, who are tasked with forcing government institutions to publish documents when they are requested by members of the public or parliamentarians (preferably subject to a more limited set of exemptions than exist today), might also help. And having review agencies that can unpack the higher-level workings of intelligence community organizations can help as well.
Ultimately, we need to appreciate that national security and intelligence organizations do not exist in a bubble, but that their mandates mean that the externalized problems linked with overclassification are typically not seen as issues that these organizations, themselves, need to solve. Nor, in many cases, will they want to solve them: it can be very handy to keep legislators in the dark and then ask for more powers, all while raising the spectre of the Other and concealing the organizations’ own activities.
We do need security and intelligence organizations, but as they stand today their tendency towards overclassification risks compounding a range of deleterious conditions. At least one way of ameliorating those conditions almost certainly includes reducing the amount of material that these agencies currently classify as secret and thus keep from the public eye. On this point, I firmly agree with Healey and Jervis.
But testing algorithms for fairness is still largely optional at Facebook. None of the teams that work directly on Facebook’s news feed, ad service, or other products are required to do it. Pay incentives are still tied to engagement and growth metrics. And while there are guidelines about which fairness definition to use in any given situation, they aren’t enforced.
The Fairness Flow documentation, which the Responsible AI team wrote later, includes a case study on how to use the tool in such a situation. When deciding whether a misinformation model is fair with respect to political ideology, the team wrote, “fairness” does not mean the model should affect conservative and liberal users equally. If conservatives are posting a greater fraction of misinformation, as judged by public consensus, then the model should flag a greater fraction of conservative content. If liberals are posting more misinformation, it should flag their content more often too.
But members of Kaplan’s team followed exactly the opposite approach: they took “fairness” to mean that these models should not affect conservatives more than liberals. When a model did so, they would stop its deployment and demand a change. Once, they blocked a medical-misinformation detector that had noticeably reduced the reach of anti-vaccine campaigns, the former researcher told me. They told the researchers that the model could not be deployed until the team fixed this discrepancy. But that effectively made the model meaningless. “There’s no point, then,” the researcher says. A model modified in that way “would have literally no impact on the actual problem” of misinformation.
[Kaplan’s] claims about political bias also weakened a proposal to edit the ranking models for the news feed that Facebook’s data scientists believed would strengthen the platform against the manipulation tactics Russia had used during the 2016 US election.
The whole thing with ethics is that they have to be integrated such that they underlie everything an organization does; they cannot function as public relations add-ons. Sadly, at Facebook the only ethic is growth at all costs, the social implications be damned.
When someone or some organization is responsible for causing significant civil unrest, deaths, or genocide, we expect those who are even partly responsible to be called to account, not just in the public domain but in courts of law and international justice. And when those someones happen to be leading executives of one of the biggest companies in the world, the solution isn’t to berate them in Congressional hearings and hear their weak apologies, but to take real action against them and their companies.
When science research interferes with politics, economics, or culture, science is most often the loser. Thus, governments and businesses control healthcare for their personal gains or concepts and disregard or avoid factual knowledge and events.
Michael B. A. Oldstone, Viruses, Plagues, & History: Past, Present, and Future
Even before the pandemic, many researchers in academia were struggling with poor mental health. Desiree Dickerson, an academic mental-health consultant in Valencia, Spain, says that burnout is a problem inherent in the academic system: because of how narrowly it defines excellence, and how it categorizes and rewards success. “We need to reward and value the right things,” she says.
Yet evidence of empathetic leadership at the institutional level is in short supply, says Richard Watermeyer, a higher-education researcher at the University of Bristol, UK, who has been conducting surveys to monitor impacts of the pandemic on academia. Performative advice from employers to look after oneself or to leave one day a week free of meetings to catch up on work is pretty superficial, he says. Such counsel does not reduce work allocation, he points out.
Academia has a rampant problem in how it is professionally configured. Getting even a short-term contract now requires a CV that would have been worthy of tenure twenty or thirty years ago. This means that when someone is hired as an assistant professor (with a 3–6 year probation period), they are usually already more qualified than their peers of the past, and they must be prolific in the work they contribute and output, with minimal or no complaints, so as to avoid any problems in their transition from assistant to associate professor (i.e., a full-time and sometimes protected employee).
Once someone has run the gauntlet, they come to expect that others should run it as well: if the current generation can cut it, then surely the next generation of hires should be able to do so too, if they’re as ‘good’ as the current generation. This means that those who were forced into an unsustainable work environment, one that routinely eats into personal time, vacation time (i.e., time when you use vacation days to catch up on other work that is otherwise hard to get done), child-rearing time, and so forth, expect those following them to do the same.
Add to this the fact that most academic units are semi-self-governing, and those in governance positions (e.g., department chairs, deans) tend to lack any actual qualifications in managing a largely autonomous workforce, and so cannot rebalance workloads in a systemically positive way to create more sustainable working environments. Lacking formal management skills, these same folks tend to be unable to identify the issues that might come up in a workforce or network of colleagues, and they are not resourced to know how to actually treat a given problem. And all of this presumes they are motivated to find and resolve problems in the first place, a premise that is often faulty, given that those who govern are routinely most concerned with the smooth running of their units and, of course, may keep in mind any junior colleagues who cause ‘problems’ by expecting assistance or consideration in light of the systemic overwork that is the normal work-life imbalance.
What’s required is a full-scale revolt against the very structure of university departments if work-life balance is to be truly valued, and if academics are to be able to satisfy their teaching, service, and research requirements in the designated number of working hours. While the job is often perceived as very generous (and it is, in a whole lot of ways!) because you ideally have parts of it that you love, expecting people to regularly work 50–75 hour weeks, with little real downtime and little time with family and friends, while being placed on a constant treadmill of outputs, is a recipe for creating jaded, cynical, and burned-out professionals. Sadly, that’s how an awful lot of contemporary departments are configured.