Catalin Cimpanu reports for The Record that the European Union wants to build a recursive DNS service that will be available to EU institutions and the European public. The reasons for building the service are manifold, including concerns that American DNS providers are not GDPR compliant and worries that much of Europe depends on largely American-based or American-owned infrastructure.
Plans call for the European system to:
… come with built-in filtering capabilities that will be able to block DNS name resolutions for bad domains, such as those hosting malware, phishing sites, or other cybersecurity threats.
This filtering capability would be built using threat intelligence feeds provided by trusted partners, such as national CERT teams, and could be used to defend organizations across Europe from common malicious threats.
It is unclear if DNS4EU usage would be mandatory for all EU or national government organizations, but if so, it would grant organizations like CERT-EU more power and the agility it needs to block cyber-attacks as soon as they are detected.
In addition, EU officials want to use DNS4EU’s filtering system to block access to other types of prohibited content, which they say could be done based on court orders. While officials didn’t go into details, this most likely refers to domains showing child sexual abuse materials and copyright-infringing (pirated) content.1
By integrating censorship/blocking provisions at the policy level of the European DNS, there is a real risk that over time the same system might be used for untoward ends. Consider the rise of anti-LGBTQ laws in Hungary and Poland, and how those governments might be motivated to block access to ‘prohibited content’ that is identified as such by anti-LGBTQ politicians.
While a reader might hope that the European courts could knock down these kinds of laws, their recurrence alone raises the spectre that content that is deemed socially undesirable by parties in power could be censored, even where there are legitimate human rights grounds that justify accessing the material in question.
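The filtering capability the article describes amounts to checking each queried name against a threat-intelligence blocklist before resolving it. A minimal sketch of that logic follows; the blocklist contents and domain names are invented for illustration, and a real resolver would of course speak the DNS protocol rather than return strings.

```python
# Sketch of blocklist-based DNS filtering as a resolver like DNS4EU
# might apply it. The blocklist entries below are hypothetical
# placeholders, not real threat-intelligence data.

# Domains flagged by (hypothetical) threat-intelligence feeds.
BLOCKLIST = {"malware.example", "phishing.example"}

def is_blocked(qname: str) -> bool:
    """Return True if the queried name, or any parent zone, is blocklisted."""
    labels = qname.rstrip(".").lower().split(".")
    # Check "www.malware.example", then "malware.example", then "example".
    for i in range(len(labels)):
        if ".".join(labels[i:]) in BLOCKLIST:
            return True
    return False

def resolve(qname: str) -> str:
    """Return a sentinel instead of resolving names on the blocklist."""
    if is_blocked(qname):
        return "NXDOMAIN (blocked by policy)"
    return "forward to upstream resolver"  # normal recursive resolution

print(resolve("www.malware.example"))   # blocked via parent-zone match
print(resolve("eu-institution.example"))
```

The parent-zone walk matters: blocking `malware.example` should also catch `www.malware.example`, which is exactly why the same machinery generalizes so easily to court-ordered (or politically motivated) blocking of whole domains.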
Rest of the World has published a terrific piece on the state of surveillance in Singapore, where governmental efficiency drives technologies that are increasingly placing citizens and residents under excessive and untoward kinds of surveillance. The whole piece is worth reading, but I was particularly caught by a comment made by the deputy chief executive of the Cyber Security Agency of Singapore:
“In the U.S., there’s a very strong sense of building technology to hold the government accountable,” he said. “Maybe I’m naive … but I just didn’t think that was necessary in Singapore.”
Better.sg, which has around 1,000 members, works in areas where the government can’t or won’t, Keerthi said. “We don’t talk about who’s responsible for the problem. We don’t talk about who is responsible for solving the problem. We just talk about: Can we pivot this whole situation? Can we flip it around? Can we fundamentally shift human behaviour to be better?” he said.
… one app that had been under development was a ‘catch-a-predator’ chatbot, which parents would install on their childrens’ [sic] phones to monitor conversations. The concept of the software was to goad potential groomers into incriminating themselves, and report their activity to the police.
“The government’s not going to build this. … It is hostile, it is almost borderline entrapment,” Keerthi said, matter-of-factly. “Are we solving a real social problem? Yeah. Are parents really thrilled about it? Yeah.”
It’s almost breathtaking to see a government official admit he wants to develop tools that the government, itself, couldn’t create for legal reasons but that he hopes will be attractive to citizens and residents. While I’m clearly not condoning the social problem he is seeking to solve, the solution to such problems should lie within the four corners of the law, not outside them. When government officials deliberately move outside the legal strictures binding them, they demonstrate a dismissal of basic rights and of due process in criminal matters.
While such efforts might be ‘efficient’ and normal within Singapore, they cannot be said to conform with basic rights nor, ultimately, with a political structure that is inclusive and responsive to the needs of its population. Western politicians and policy wonks routinely, and wistfully, talk about how they wish they were as free to undertake policy experiments and deployments as their colleagues in Asia. Hopefully more of them will read pieces like this one and understand that the efficiencies they are so fond of would almost certainly herald the end of the very democratic systems they operate within and are meant to protect.
Most clinical photos are taken by well-intentioned doctors who haven’t been trained in the nuances of photographing patients of different races. There are fundamental differences in the physics of how light interacts with different skin tones that can make documenting conditions on skin of color more difficult, says Chrystye Sisson, associate professor and chair of the photographic science program at Rochester Institute of Technology, the only such program in the nation.
Interactions between light, objects, and our eyes allow us to perceive color. For instance, a red object absorbs every wavelength of light except red, which it reflects back into our eyes. The more melanin there is in the skin, the more light it absorbs, and the less light it reflects back.
But standard photographic setups don’t account for those differences.
One of the things I routinely experience shooting street photography in a multicultural city is just how poorly camera defaults handle individuals of different racial backgrounds. Despite shooting for many years, I’ve yet to find a single default that captures darker skin accurately.
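The physics Sisson describes also explains why defaults fail: reflective light meters are calibrated to render whatever they measure as a middle grey of roughly 18% reflectance, so subjects that reflect more or less light than that are pushed toward the midpoint. A rough back-of-envelope calculation makes the effect concrete; the reflectance values below are illustrative assumptions, not measured data.

```python
import math

# A reflective meter is calibrated to ~18% "middle grey": it picks an
# exposure that would render the metered subject at that reflectance.
MIDDLE_GREY = 0.18

def metering_error_stops(subject_reflectance: float) -> float:
    """Stops separating a subject from middle grey.

    Negative: the subject is darker than middle grey, so a meter reading
    off it brightens (overexposes) the shot; positive: it darkens it.
    """
    return math.log2(subject_reflectance / MIDDLE_GREY)

# Illustrative reflectances only -- real skin reflectance varies widely.
for label, reflectance in [("lighter skin", 0.35), ("darker skin", 0.07)]:
    print(f"{label}: meter error {metering_error_stops(reflectance):+.2f} stops")
```

Under these assumed numbers a meter reading off darker skin would shift exposure by well over a stop to force it toward middle grey, which is one concrete way default setups misrender tone rather than document it.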
The past week has seen a logjam begin to clear in Canadian-Chinese-American international relations. After agreeing to the underlying facts associated with her (and Huawei’s) violation of American sanctions that have been placed on Iran, Meng Wanzhou was permitted to return to China after having been detained in Canada for several years. Simultaneously, two Canadian nationals who had been charged with national security crimes were themselves permitted to return to Canada on health-related grounds. The backstory is that these Canadians were seized shortly following the detainment of Huawei’s CFO, with the Chinese government repeatedly making clear that the Canadians were being held hostage and would only be released when the CFO was repatriated to China.
A huge amount of writing has taken place following the swap. But what I’ve found particularly interesting, in terms of offering a novel contribution to the discussions, was an article by Julian Ku in Lawfare. In his article, “China’s Successful Foray Into Asymmetric Lawfare,” Ku argues that:
Although Canadians are relieved that their countrymen have returned home, the Chinese government’s use of its own weak legal system to carry out “hostage diplomacy,” combined with Meng’s exploitation of the procedural protections of the strong and independent Canadian and U.S. legal systems, may herald a new “asymmetric lawfare” strategy to counter the U.S. This strategy may prove an effective counter to the U.S. government’s efforts to use its own legal system to enforce economic sanctions, root out Chinese espionage, indict Chinese hackers, or otherwise counter the more assertive and threatening Chinese government.
I remain uncertain that this baseline premise, which undergirds the rest of his argument, holds true. In particular, his analysis seems to set aside, or not fully engage with, the following:
China’s hostage taking has further weakened the trust that foreign companies will have in the Chinese government. They must now acknowledge, and build into their risk models, the possibility that their executives or employees could be seized should the Chinese government get into a diplomatic, political, or economic dispute with the country from which they operate.
China’s blatant hostage taking impairs its world standing and has led to significant parts of the world shifting their attitudes towards the Chinese government. The results of these shifts are yet to be fully seen, but to date there have been doubts about entering into trade agreements with China, an increased solidarity amongst middle powers to resist what is seen as bad behaviour by China, and a push away from China and into the embrace of liberal democratic governments. This last point, in particular, runs counter to China’s long-term efforts to showcase its own style of governance as a genuine alternative to American and European models of democracy.
Despite what has been written, I think that China’s reliance on hostage diplomacy, enabled by its weak rule of law, showcases its comparatively weak hand. Relying on a low rule of law to undertake lawfare endangers China’s international strategic interests, which depend on building international markets and being treated as a respectable and reputable partner on the world stage. Resorting to kidnapping impairs the government’s ability to demonstrate compliance with international agreements and fora so as to build out its international policies.
Of course, none of the above discounts the fact that the Chinese government did, in fact, exploit this ‘law asymmetry’ between its laws and those of high rule of law countries. And the Canadian government did act under duress as a result of their nationals having been taken hostage, including becoming a quiet advocate for Chinese interests insofar as Canadian diplomats sought a way for the US government to reach a compromise with Huawei/Meng so that Canada’s nationals could be returned home. And certainly the focus on relying on high rule of law systems can delay investigations into espionage or other illicit foreign activities and operations that are launched by the Chinese government. Nevertheless, neither the Canadian nor the American legal system actually buckled under the foreign and domestic pressure to set aside the rule of law in favour of quick political ‘fixes.’
While there will almost certainly be many years of critique in Canada and the United States about how this whole affair was managed, the fact will remain that both countries demonstrated that their justice systems would remain independent from the political matters of the day. And they did so despite tremendous pressure: from Trump, during his time as president, and from considerable pressure campaigns directed at the Canadian government by numerous former government officials who were supportive, for one reason or another, of the Chinese government’s position that Huawei’s CFO should be returned.
While it remains to be seen what the actual, ultimate effect of this swap of Huawei’s CFO for two inappropriately detained Canadians will be, some lasting legacies may include diminished political capital for the Chinese government and, at the same time, reinforced trust in the American and Canadian (and, by extension, Western democratic) systems of justice. Should these legacies hold, then China’s gambit will almost certainly prove to have backfired.
First, the authors explain that WhatsApp has a system whereby recipients of messages can report content they have received to WhatsApp on the basis that it is abusive or otherwise violates WhatsApp’s Terms of Service. The article frames this reporting process as a way of undermining privacy on the basis that secured messages are not kept solely between the sender(s) and recipient(s) of the communications but can be sent to other parties, such as WhatsApp. In effect, the ability to voluntarily forward messages to WhatsApp that someone has received is cast as breaking the privacy promises that have been made by WhatsApp.
Second, the authors note that WhatsApp collects a large volume of metadata in the course of using the application. Using lawful processes, government agencies have compelled WhatsApp to disclose metadata on some of their users in order to pursue investigations and secure convictions against individuals. The case that is focused on involves a government employee who leaked confidential banking records to Buzzfeed, which subsequently reported on them.
Assessing the Problems
In the case of forwarding messages for abuse reporting purposes, encryption is not broken and the feature is not new. These kinds of processes offer a mechanism that lets individuals self-identify and report on problematic content. Such content can include child grooming, the communication of illicit or inappropriate messages or audio-visual content, or other abusive information.
What we do learn, however, is that the ‘reactive’ and ‘proactive’ methods of detecting abuse need to be fixed. In the case of the former, only about 1,000 people are responsible for intaking and reviewing the reported content after it has first been filtered by an AI:
Seated at computers in pods organized by work assignments, these hourly workers use special Facebook software to sift through streams of private messages, images and videos that have been reported by WhatsApp users as improper and then screened by the company’s artificial intelligence systems. These contractors pass judgment on whatever flashes on their screen — claims of everything from fraud or spam to child porn and potential terrorist plotting — typically in less than a minute.
Further, the employees are often reliant on machine learning-based translations of content which makes it challenging to assess what is, in fact, being communicated in abusive messages. As reported,
… using Facebook’s language-translation tool, which reviewers said could be so inaccurate that it sometimes labeled messages in Arabic as being in Spanish. The tool also offered little guidance on local slang, political context or sexual innuendo. “In the three years I’ve been there,” one moderator said, “it’s always been horrible.”
There are also proactive modes of watching for abusive content using AI-based systems. As noted in the article,
Artificial intelligence initiates a second set of queues — so-called proactive ones — by scanning unencrypted data that WhatsApp collects about its users and comparing it against suspicious account information and messaging patterns (a new account rapidly sending out a high volume of chats is evidence of spam), as well as terms and images that have previously been deemed abusive. The unencrypted data available for scrutiny is extensive. It includes the names and profile images of a user’s WhatsApp groups as well as their phone number, profile photo, status message, phone battery level, language and time zone, unique mobile phone ID and IP address, wireless signal strength and phone operating system, as well as a list of their electronic devices, any related Facebook and Instagram accounts, the last time they used the app and any previous history of violations.
Unfortunately, the AI often makes mistakes. This led one interviewed content reviewer to state that, “[t]here were a lot of innocent photos on there that were not allowed to be on there … It might have been a photo of a child taking a bath, and there was nothing wrong with it.” Often, “the artificial intelligence is not that intelligent.”
The vast collection of metadata has been a long-reported concern and issue associated with WhatsApp and, in fact, is one of the many reasons why individuals advocate for the use of Signal instead. The reporting in the ProPublica article helpfully summarizes the vast amount of metadata that is collected, but that collection, in and of itself, does not present any evidence that Facebook or WhatsApp have transformed the application into one which inappropriately intrudes into persons’ privacy.
ProPublica Sets Back Reasonable Encryption Policy Debates
In suggesting that what WhatsApp has implemented is somehow wrong, it becomes more challenging for other companies to deploy similar reporting features without fearing that their decision will be reported on as ‘undermining privacy’. While there may be a valid policy discussion to be had–is a reporting process the correct way of dealing with abusive content and messages?–the authors didn’t go there. Nor did they seriously investigate whether additional resources should be adopted to analyze reported content, or talk with artificial intelligence experts or machine-based translation experts on whether Facebook’s efforts to automate the reporting process are adequate, appropriate, or flawed from the start. All those would be very interesting, valid, and important contributions to the broader discussion about integrating trust and safety features into encrypted messaging applications. But…those are not things that the authors chose to delve into.
The authors could also have discussed the broader importance of (and challenges in) building out messaging systems that deliberately conceal metadata, and the benefits and drawbacks of such systems. While the authors do discuss how metadata can be used to crack down on individuals in government who leak data, as well as assist in criminal investigations and prosecutions, there is little said about what kinds of metadata are most important to conceal and the tradeoffs in doing so. Again, there are some who think that all or most metadata should be concealed, and others who hold opposite views: there is room for a reasonable policy debate to be had and reported on.
Unfortunately, instead of actually taking up and reporting on the very valid policy discussions that sit at the edges of their article, the authors chose to be bombastic, asserting that WhatsApp was undermining the privacy protections that individuals thought they had when using the application. It’s bad reporting, insofar as it distorts the facts, and is particularly disappointing given that ProPublica has shown it has the chops to do investigative work that is well sourced and nuanced in its outputs. This article, however, absolutely failed to make the cut.
This story of how the National Security Agency (NSA) was involved in analyzing typewriter bugs that were implanted by agents of the USSR in the 1980s is pretty amazing (.pdf) in terms of the technical and operational details that have been written about. It’s also revealing in terms of how the parties who are permitted to write about these materials breathlessly describe the agencies’ past exploits. In critically reading these kinds of accounts it’s possible to learn how the agencies regard themselves and their activities. In effect, how history is ‘created’—or propaganda written, depending on how you read the article in question—functions to reveal the nature of the actors involved in that creation and the way that myths and truths are created and replicated.
As a slight aside, whenever I come across material like this I’m reminded of just how poor the Canadian government is in disclosing its own intelligence agencies’ histories. As senior members of the Canadian intelligence community retire or pass away, and as recorded materials waste away or are disposed of, key information that is needed to understand how and why Canada has acted in the world is being lost. This has the effect of impoverishing Canadians’ own understandings of how their governments have operated, with the result that Canadian histories often risk missing essential information that could reveal hidden depths to what Canadians know about their country and its past.
Steven Chaplin has a really great explanation of whether the Canadian government can rely on national security and evidentiary laws to lawfully justify refusing to provide documents to the House of Commons, and to House committees. His analysis and explanation arose as a result of the Canadian government doing everything it could to, first, refuse to provide documents to the Parliamentary Committee which was studying Canadian-Chinese relations and, subsequently, refusing to provide the documents when compelled to do so by the House of Commons itself.
Rather than releasing the requested documents the government turned to the courts to adjudicate whether the documents in question–which were asserted to contain sensitive national security information–must, in fact, be released to the House or whether they could instead be sent to an executive committee, filled with Members of Parliament and Senators, to assess the contents instead. As Chaplin notes,
Having the courts intervene, as proposed by the government’s application in the Federal Court, is not an option. The application is clearly precluded by Article 9 of the Bill of Rights, 1689, which provides that a proceeding in Parliament ought not to be impeached or questioned in court. Article 9 not only allows for free speech; it is also a constitutional limit on the jurisdiction of the courts to preclude judicial interference in the business of the House.
The House ordered that the documents be tabled without redaction. Any decision of the court that found to the contrary would impeach or question the proceeding that led to the Order. And any attempt by the courts to balance the interests involved would constitute the courts becoming involved in ascertaining, and thereby questioning, the needs of the House and why the House wants the documents.
Beyond the Court’s involvement intruding into the territory of Parliament, there could be serious and long-term implications of letting the court become a space wherein the government and the House fight to obtain information that has been demanded. Specifically,
It may be that at the end of the day the government will continue to refuse to produce documents. In the same way that the government cannot use the courts to withhold documents, the House cannot go to court to compel the government to produce them, or to order witnesses to attend proceedings. It could also invite disobedience of witnesses, requiring the House to either drop inquiries or involve the courts to compel attendance or evidence. Allowing, or requiring, the government and the House to resolve their differences in the courts would not only be contrary to the constitutional principles of Article 9, but “would inevitably create delays, disruption, uncertainties and costs which would hold up the nation’s business and on that account would be unacceptable even if, in the end, the Speaker’s rulings were vindicated as entirely proper” (Canada (House of Commons) v. Vaid ). In short, the courts have no business intervening one way or the other.
Throughout the discussions that have taken place about this issue in Canada, what has been most striking is that national security commentators and elites have envisioned that the National Security and Intelligence Committee of Parliamentarians (NSICOP) could (and should) be tasked to resolve any and all particularly sensitive national security issues that might be of interest to Parliament. None, however, seems to have contemplated that Parliament, itself, might take issue with the government trying to exclude it from assessing the government’s national security decisions, nor that objections would arise when topics of interest to Parliamentarians were punted into an executive body whose Members of Parliament were sworn to the strictest secrecy. Instead, elites have hand-waved at the importance of preserving secrecy in order for Canada to receive intelligence from allies, and asserted that the government would never mislead Parliament on national security matters (matters which, these same experts explain, Members of Parliament are not prepared to receive, process, or understand, given the sophistication of the intelligence and the apparent simplicity of most Parliamentarians themselves).
This was the topic of a recent episode of the Intrepid Podcast, where Philippe Lagassé noted that the exclusion of parliamentary experts when creating NSICOP meant that these entirely predictable showdowns were functionally baked into how the executive body was composed. As someone who raised the issue of adopting an executive, rather than a standing House, committee and was rebuffed as being ignorant of the realities of national security, it is with more than a little satisfaction that I watch the very concerns raised when NSICOP was being created now arise on the political agenda.
With regard to the documents that the House Committee was seeking, I don’t know or particularly care what their contents include. From my own experience I’m all too well aware that ‘national security’ is often stamped on things because governments want to keep them from the public when they could be politically damaging, because of a general culture of non-transparency and refusal of accountability, or (less often) because there are bona fide national security interests at stake. I do, however, care that the Government of Canada has (again) acted counter to Parliament’s wishes and has deliberately worked to impede the House from doing its work.
Successive governments seem to genuinely believe that they get to ‘rule’ Canada absolutely and with little accountability. While this is, in function, largely true given how cowed Members of Parliament are by their party leaders, it is incredibly serious and depressing to see the government further erode Parliament’s powers and abilities to fulfil its duties. A healthy democracy is filled with bumps for the government as it is held to account but, sadly, the Government of Canada–regardless of the party in power–is incredibly active in keeping itself, and its behaviours, from the public eye and thus from being held to account.
If only a committee might be struck to solve this problem…
Signal announced last week that their users could set a default whereby messages would auto-delete after a period of time ranging from 30 seconds to four weeks. The default would apply to all conversations, though it could be modified on a per-conversation basis. The company wrote,
As the norms for how people connect have changed, much of the communication that once took place through the medium of coffee shops, bars, and parks now takes place through the medium of digital devices. One side effect of this shift from analog to digital is the conjoined shift from the ephemeral to the eternal: words once transiently spoken are now – more often than not – data stored forever.
… comprehensive digital remembering collapses history and thus impairs our judgement to act in time, while denying humans the chance to evolve, develop, and learn. This leaves us to helplessly oscillate between two equally troubling options: a permanent past and an ignorant present.
Signal’s approach, while appreciated, is also only a first step as they don’t provide an easy way to also extract and permanently retain some communications outside of their environment. Why does this matter? Because there are, in fact, some conversations that need to be retained for some time, be they personal (e.g., last communications with a loved one) or professional (e.g., government employees required to retain substantive decisions and conversations in archives). The company might introduce a flag where–with the consent of both parties–specific parts of conversations could be retained indefinitely outside of the default deletion times. Adding in the friction of retention would serve to replicate how ‘remembering’ often works in non-digital contexts: it takes extra effort to create facsimiles. We should strive to replicate that into more of our digital environments.
Still, Signal’s approach–enabling deletion by default–is arguably an effort to bend communications closer to their historical norms and, as such, likely for the better. They’re obviously not the first company to think this way–Snapchat famously led the way, and numerous social companies’ ‘stories’ posts are designed to delete after 24 hours for ‘privacy’ and also (really) engagement reasons–but I think that it’s meaningful that a text-messaging company is introducing this as a way of easily setting defaults for forgetting.
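Mechanically, a default disappearing-message setting is just a per-conversation time-to-live that each new message inherits at send time. The sketch below models that behaviour; it is an illustrative toy, not Signal’s actual implementation, and the class and method names are my own.

```python
from dataclasses import dataclass, field

@dataclass
class Conversation:
    """Toy model of a conversation with an inherited disappearing-message timer.

    ttl_seconds mirrors the range Signal describes (30 seconds to four
    weeks); this is an illustrative sketch, not Signal's code.
    """
    ttl_seconds: int = 4 * 7 * 24 * 3600   # account-wide default: four weeks
    messages: list = field(default_factory=list)

    def send(self, text: str, now: float) -> None:
        # Each message inherits the conversation's TTL at send time.
        self.messages.append((text, now + self.ttl_seconds))

    def visible(self, now: float) -> list:
        # Expired messages are pruned: deletion is the default, not an action.
        self.messages = [(t, exp) for t, exp in self.messages if exp > now]
        return [t for t, _ in self.messages]

chat = Conversation(ttl_seconds=30)  # per-conversation override of the default
chat.send("hello", now=0.0)
print(chat.visible(now=10.0))   # ['hello']
print(chat.visible(now=60.0))   # []
```

A retention flag of the kind suggested above would simply be a consensual exception to this pruning step, which is exactly where the added friction would live.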
In an article for The Hill, Shannon Lantzy and Kelly Rozumalski discuss how Software Bills of Materials (SBOMs) are good for business as well as security. SBOMs emerged more forcefully in the American policy space after the Biden White House promulgated an Executive Order on cybersecurity on May 12, 2021. The Order included a requirement that developers and private companies providing services to the United States government produce SBOMs.1 SBOMs are meant to help responders to cybersecurity incidents assess what APIs, libraries, or other digital elements might be vulnerable to an identified operation, and also to help government procurement agencies better ensure the digital assets in a product or service meet a specified security standard.
Specifically, Lantzy and Rozumalski write:
Product offerings that are already secure-by-design will be able to command a premium price because consumers will be able to compare SBOMs.
Products with inherently less patchable components will also benefit. A universal SBOM mandate will make it easy to spot vulnerabilities, creating market risk for lagging products; firms will be forced to reengineer the products before getting hacked. While this seems like a new cost to the laggards, it’s really just a transfer of future risk to a current cost of reengineering. The key to a universal mandate is that all laggards will incur this cost at roughly the same time, thereby not losing a competitive edge.
The promise of increased security and reduced risk will not be realized by SBOM mandates alone. Tooling and putting this mandate in practice will be required to realize the full power of the SBOM.
The idea of internalizing security costs to developers, and potentially increasing the cost of goods, has been discussed publicly and with Western governments for at least two decades. We’ve seen the overall risk profiles presented to organizations increase year over year as companies race to market with little regard for security, a business development strategy that made sense when they experienced few economic liabilities for selling products with severe cybersecurity limitations or vulnerabilities. In theory, enabling comparison shopping via SBOMs will disincentivize companies from selling low-grade equipment and services if they want to get into high-profit enterprise or high-reliability government contracts, with the effect that security improvements will also trickle down to the products purchased by consumers (‘trickle down cybersecurity’).
While I think that SBOMs are definitely part of developing cybersecurity resilience, it remains to be seen just how much consumers will pay for ‘more secure’ products given that, first, they are economically incentivized to pay the lowest possible amounts for goods and services and, second, they are unlikely to know for certain what is a good or bad security practice. Advocates of SBOMs often compare them to nutrition labels, but we know that at most about a third of consumers read those labels (and those who do often face societal pressures to regulate caloric intake, which is why they read them), and also that the labels are often inaccurate.
It will be very interesting to see whether enterprise and consumers alike will be able or willing to pay higher up-front costs, to say nothing of being able to actually trust what is on the SBOM labels. Will companies that adopt SBOM products suffer a lower rate of cybersecurity incidents, or ones that are of reduced seriousness, or be able to respond more quickly when a cybersecurity incident has been realized? We’re going to actually be able to test the promises of SBOMs, soon, and it’s going to be fascinating to see things play out.
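Concretely, the comparison shopping and vulnerability-spotting described above depend on SBOMs being machine-readable. A minimal sketch of matching an SBOM’s component list against known-vulnerable versions follows; the component and advisory data are invented for illustration, and real SBOMs use richer formats such as SPDX or CycloneDX, with advisories that cover version ranges rather than single versions.

```python
# Sketch: flag SBOM components that appear on a known-vulnerable list.
# The data below is invented for illustration; real SBOMs follow formats
# such as SPDX or CycloneDX.

sbom = {
    "product": "example-router-firmware",
    "components": [
        {"name": "openssl", "version": "1.0.2"},
        {"name": "busybox", "version": "1.36.0"},
    ],
}

# (name, version) pairs flagged by a hypothetical advisory feed.
known_vulnerable = {("openssl", "1.0.2")}

def flag_vulnerable(sbom: dict) -> list:
    """Return the SBOM components that appear on the vulnerable list."""
    return [
        c for c in sbom["components"]
        if (c["name"], c["version"]) in known_vulnerable
    ]

for component in flag_vulnerable(sbom):
    print(f"vulnerable: {component['name']} {component['version']}")
```

Even this trivial check illustrates the tooling point quoted earlier: the mandate only pays off once buyers and responders can run comparisons like this automatically across their whole inventory.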
Many of the details in the article are the result of court records, interviews, and assessments of Chinese media. It remains to be seen whether Chinese agents’ abilities to conduct ‘fox hunts’ will be impeded now that the US government is more aware of these operations. Given the attention and suspicion now cast towards citizens of China, however, there is also a risk that FBI agents may become overzealous in their investigations to the detriment of law-abiding Chinese-Americans or visitors from China.
In an ideal world there would be equivalent analyses or publications on the extent to which these operations are also undertaken in Canada. To date, however, there is no equivalent to ProPublica’s piece in the Canadian media landscape and given the Canadian media’s contraction we can’t realistically expect anything, anytime soon. However, even a short piece which assessed whether individuals from China who’ve run operations in the United States, and who are now barred from entering the US or would face charges upon crossing the US border, are similarly barred or under an extradition order in Canada would be a positive addition to what we know of how the Canadian government is responding to these kinds of Chinese operations.