Link

Vulnerability Exploitability eXchange (VEX)

CISA recently published a neat bit of work entitled “Vulnerability Exploitability eXchange (VEX) – Status Justifications” (warning: opens as a PDF).1 Product security teams that adopt VEX can assert the status of specific vulnerabilities in their products. As a result, clients’ security staff can allocate time to remediating actionable vulnerabilities instead of burning time on potential vulnerabilities that product security teams have already closed off or mitigated.

A number of machine-readable status types are envisioned, including:

  • Component_not_present
  • Vulnerable_code_not_present
  • Vulnerable_code_cannot_be_controlled_by_adversary
  • Vulnerable_code_not_in_execute_path
  • Inline_mitigations_already_exist
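To make the idea concrete, a VEX assertion is just a small machine-readable record pairing a vulnerability with a product and one of the statuses above. The sketch below builds such a record as JSON; the field names are loosely modelled on machine-readable VEX formats (e.g., CSAF or CycloneDX VEX) but are illustrative rather than schema-exact, and the product identifier is hypothetical.

```python
import json

# An illustrative VEX-style assertion: "this product is not affected by
# this CVE, and here is the machine-readable justification." Field names
# approximate the general shape of VEX documents; they are not a spec.
vex_statement = {
    "vulnerability": "CVE-2021-44228",  # e.g., Log4Shell
    "product": "example-vendor:widget-server:2.3.1",  # hypothetical product ID
    "status": "not_affected",
    "justification": "vulnerable_code_not_in_execute_path",
    "impact_statement": (
        "log4j-core is bundled but the JndiLookup class is removed "
        "at build time, so the vulnerable code cannot execute."
    ),
}

print(json.dumps(vex_statement, indent=2))
```

A client-side scanner ingesting records like this could automatically suppress alerts for vulnerabilities a supplier has already closed off, which is precisely the triage savings VEX promises.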

CISA’s publication spells out what each status entails in more depth and includes diagrams to help readers understand what is envisioned. However, those same readers need to pay attention to a key caveat, namely, “[t]his document will not address chained attacks involving future or unknown risks as it will be considered out of scope.” Put another way, VEX is used to assess known vulnerabilities and attacks. It should not be relied upon to predict potential threats based on not-yet-public attacks, nor on new ways of chaining known vulnerabilities. Thus, while it would be useful today to ascertain whether a product is vulnerable to EternalBlue, VEX would not have been useful for predicting or assessing the exploited vulnerabilities before EternalBlue was made public, nor for anticipating new or novel ways of exploiting the vulnerabilities underlying EternalBlue. In effect, then, VEX is meant to address the known risks associated with N-Days as opposed to risks linked with 0-Days or novel ways of exploiting N-Days.2

For VEX to work best there should be some kind of surrounding policy requirement, such that when or if a supplier falsely (as opposed to incorrectly) asserts the security properties of its product there is some disciplinary response. This can take many forms, and perhaps the easiest relies on economics rather than criminal sanction: federal governments or major companies decline to do business with a vendor found to have issued a deceptive VEX, and may have financial recourse based on contractual terms with the product’s vendor. When or if this economic solution fails, it might be time to turn to legal venues and, if existing approaches prove insufficient, potentially even introduce new legislation designed to further discipline bad actors. However, as should be apparent, there isn’t a demonstrable requirement to introduce legislation to make VEX actionable.

I think that VEX continues the current American administration’s work to advance a number of good policies meant to better secure products and systems. VEX works hand-in-hand with SBOMs and may also be supported by US Executive Orders around cybersecurity.

While Canada may be ‘behind’ the United States, we can see that things are potentially shifting. There is currently a consultation underway to regenerate Canada’s cybersecurity strategy, and infrastructure security legislation was introduced just prior to Parliament rising for its summer break. Perhaps, in a year’s time, we’ll see stronger and bolder efforts by the Canadian government to enhance infrastructure security, with some small element of that recommending the adoption of VEXes. At the very least the government won’t be able to say it lacks the legislative tools or strategic direction to do so.


  1. You can access a locally hosted version if the CISA link fails. ↩︎
  2. For a nice discussion of why N-Days are regularly more dangerous than 0-Days, see: “N-Days: The Overlooked Cyber Threat for Utilities.” ↩︎
Link

Housing in Ottawa Now a National Security Issue

David Pugliese is reporting in the Ottawa Citizen that the Canadian Forces Intelligence Command (CFINTCOM) is “trying to avoid posting junior staff to Ottawa because it has become too expensive to live in the region.” The risk is that financial hardship associated with living in Ottawa could make junior members susceptible to subversion. Housing costs in Ottawa have risen much faster than either wages or inflation. Moreover, the special allowance provided to staff, which is meant to assuage the high costs of living in Canadian cities, has been frozen for 13 years.

At this point energy, telecommunications, healthcare, and housing all raise their own national security concerns. To some extent, such concerns have tracked with these industry categories: governments have always worried about the security of telecommunications networks as well as the availability of sufficient energy supplies. But in other cases, such as housing affordability, the national security concerns we are seeing are the result of long-term governance failures. These failures have created new national security threats that would not exist in the face of good (or even just better) governance.1

There is a profound danger in trying to address all the new national security challenges and issues using national security tools or governance processes. National security incidents are often regarded as creating moments of exception and, in such moments, actions can be undertaken that otherwise could not. The danger is that states of exception become the norm and, in the process, the regular modes of governance and law are significantly set aside to resolve the crises of the day. What is needed is a regeneration and deployment of traditional governance capacity instead of a routine reliance on national security-type responses to these issues.

Of course, governments don’t just need to respond to these metastasized governance problems in order to alleviate national security issues and threats. They need to do so, in equitable and inclusive ways, so as to preserve or (re)generate the trust between the residents of Canada and their government.

The public may justifiably doubt that their system of government is working where successive governments under the major political parties are seen as having failed to provide for basic needs. The threat, then, is that ongoing governance failures run the risk of placing Canada’s democracy under pressure. While this might seem overstated I don’t think that’s the case: we are seeing a rise of politicians who are capitalizing on the frustrations and challenges faced by Canadians across the country, but who do not have their own solutions. Capitalizing on rage and frustration, and then failing to deliver on fixes, will only further alienate Canadians from their government.

Governments across Canada flexed their muscles during the earlier phases of the COVID-19 pandemic. Having used them, it’s imperative they keep flexing these muscles to address the serious issues that Canadians are experiencing. Doing so will assuage existing national security issues. It will also, simultaneously, serve to prevent other normal governance challenges from metastasizing into national security threats.


  1. As an aside, these housing challenges are not necessarily new. Naval staff posted to Esquimalt have long complained about the high costs of off-base housing in Victoria and the surrounding towns and cities. ↩︎
Link

A Brief Unpacking of a Declaration on the Future of the Internet

Cameron F. Kerry has a helpful piece in Brookings that unpacks the recently published ‘Declaration on the Future of the Internet.’ As he explains, the Declaration was signed by 60 States and is meant, in part, to rebut a China-Russia joint statement. Those countries’ statement would support their positions on ‘securing’ domestic Internet spaces and removing Internet governance from multi-stakeholder forums to State-centric ones.

So far, so good. However, baked into Kerry’s article is language suggesting that he either misunderstands, or understates, some of the security-related elements of the Declaration. He writes:

There are additional steps the U.S. government can take that are more within its control than the actions and policies of foreign states or international organizations. The future of the Internet declaration contains a series of supporting principles and measures on freedom and human rights, Internet governance and access, and trust in use of digital network technology. The latter—trust in the use of network technology— is included to “ensure that government and relevant authorities’ access to personal data is based in law and conducted in accordance with international human rights law” and to “protect individuals’ privacy, their personal data, the confidentiality of electronic communications and information on end-users’ electronic devices, consistent with the protection of public safety and applicable domestic and international law.” These lay down a pair of markers for the U.S. to redeem.

I read this against the 2019 Ministerial and recent Council of Europe Cybercrime Convention updates, and see that a vast swathe of new law enforcement and security agency powers would be entirely permissible based on Kerry’s assessment of the Declaration and the States involved in signing it. While these new powers have either been agreed to, or advanced by, signatory States, they have simultaneously been directly opposed by civil and human rights campaigners, as well as some national courts. Specifically, there are live discussions around the following powers:

  • the availability of strong encryption;
  • the guarantee that the content of communications sent using end-to-end encrypted devices cannot be accessed or analyzed by third-parties (including by on-device surveillance);
  • the requirement of prior judicial authorization to obtain subscriber information; and
  • the oversight of preservation and production powers by relevant national judicial bodies.

Laws can be passed that see law enforcement interests supersede individuals’ or communities’ rights in safeguarding their devices, data, and communications from the State. When or if such a situation occurs, the signatories of the Declaration can hold fast in their flowery language around protecting rights while, at the same time, individuals and communities experience heightened surveillance of, and intrusions into, their daily lives.

In effect, a lot of international policy and legal infrastructure has been built to facilitate sweeping new investigatory powers and reforms to how data is, and can be, secured. It has taken years to build this infrastructure and as we leave the current stage of the global pandemic it is apparent that governments have continued to press ahead with their efforts to expand the powers which could be provided to law enforcement and security agencies, notwithstanding the efforts of civil and human rights campaigners around the world.

The next stage will be to assess how, and in what ways, international agreements and legal infrastructure will be brought into national legal systems, and to determine where to strategically oppose the worst of the overreaches. While it’s possible that some successes will be achieved in resisting the expansion of state powers, not everything will be resisted. The consequence will be both to enhance state intrusions into private lives and to weaken the security provided to devices and data, with the resultant effect of better enabling criminals to illicitly access or manipulate our personal information.

The new world of enhanced surveillance and intrusions is wholly consistent with the ‘Declaration on the Future of the Internet.’ And that’s a big, glaring, and serious problem with the Declaration.

Link

The Broader Implications of Data Breaches

Ikea Canada notified approximately 95,000 Canadian customers in recent weeks about a data breach the company has suffered. An Ikea employee conducted a series of searches between March 1 and March 3 which surfaced the account records of the aforementioned customers.1

While Ikea promised that financial information–credit card and banking information–hadn’t been revealed, a raft of other personal information had been. That information included:

  • full first and last name;
  • postal code or home address;
  • phone number and other contact information;
  • IKEA loyalty number.

Ikea did not disclose who specifically accessed the information nor their motivations for doing so.

The notice provided by Ikea was better than most data breach alerts insofar as it informed customers what exactly had been accessed. For some individuals, however, this information is highly revelatory and could cause significant concern.

For example, imagine a case where someone has previously been the victim of either physical or digital stalking. Should their former stalker be an Ikea employee the data breach victim may ask whether their stalker now has confidential information that can be used to renew, or further amplify, harmful activities. With the customer information in hand, as an example, it would be relatively easy for a stalker to obtain more information such as where precisely someone lived. If they are aggrieved then they could also use the information to engage in digital harassment or threatening behaviour.

Without more information about the motivations behind why the Ikea employee searched the database those who have been stalked or had abusive relations with an Ikea employee might be driven to think about changing how they live their lives. They might feel the need to change their safety habits, get new phone numbers, or cycle to a new email. In a worst case scenario they might contemplate vacating their residence for a time. Even if they do not take any of these actions they might experience a heightened sense of unease or anxiety.

Of course, Ikea is far from alone in suffering these kinds of breaches. They happen on an almost daily basis for most of us, whether we’re alerted of the breach or not. Many news reports about such breaches focus on whether there is an existing or impending financial harm and stop the story there. The result is that journalistic reporting can conceal some of the broader harms linked with data breaches.

Imagine a world where our personal information–how you can call us or find our homes–was protected equivalently to how our credit card numbers are currently protected. In such a world stalkers and other abusive actors might be less able to exploit stolen or inappropriately accessed information. Yes, there will always be ways by which bad actors can operate badly, but it would be possible to mitigate some of the ways this badness can take place.

Companies could still create meaningful consent frameworks whereby some (perhaps most!) individuals could agree to have their information stored by the company. But those who have a different risk threshold could make a meaningful choice, too, so that they could still make purchases and receive deliveries without, at the same time, permanently increasing the risk that their information might fall into the wrong hands. However, getting to this point requires expanded threat modelling: we can’t just worry about a bad credit card purchase but, instead, would need to take seriously the gendered and intersectional nature of violence and its intersection with cybersecurity practices.


  1. In the interests of disclosure, I was contacted as an affected party by Ikea Canada. ↩︎
Link

Messaging Interoperability and Client Security

Eric Rescorla has a thoughtful and nuanced assessment of recent EU proposals which would compel messaging companies to make their communications services interoperable. To his immense credit he spends time walking the reader through historical and contemporary messaging systems in order to assess the security issues prospectively associated with requiring interoperability. It’s a very good, and compact, read on a dense and challenging subject.

I must admit, however, that I’m unconvinced that demanding interoperability will have only minimal security implications. While much of the expert commentary has focused on whether end-to-end encryption would be compromised I think that too little time has been spent considering the client-end side of interoperable communications. So if we assume it’s possible to facilitate end-to-end communications across messaging companies and focus just on clients receiving/sending communications, what are some risks?1

As it stands today, the dominant messaging companies have large and professional security teams. While none of these teams are perfect, as shown by the successes of cyber mercenary companies such as NSO Group et al., they are robust and constantly working to improve the security of their products. The attacks used by groups such as NSO Group, Hacking Team, Candiru, FinFisher, and the like have not tended to rely on breaking encryption. Rather, they have sought vulnerabilities in client devices. Due to sandboxing and contemporary OS security practices this has regularly meant successfully targeting a messaging application and, subsequently, expanding a foothold on the device more generally.

In order for interoperability to ‘work’ properly there will need to be a number of preconditions. As noted in Rescorla’s post, this may include checking what functions an interoperable client possesses to determine whether ‘standard’ or ‘enriched’ client services are available. Moreover, APIs will need to be (relatively) stable or rely on a standardized protocol to facilitate interoperability. Finally, while spam messages are annoying on messaging applications today, they may become even more commonplace where interoperability is required and service providers cannot use their current processes to filter/quality check messages transiting their infrastructure.

What do all the aforementioned elements mean for client security?

  1. Checking for client functionality may reveal whether a targeted client possesses known vulnerabilities, either generally (following a patch update) or just to the exploit vendor (where they know of a vulnerability and are actively exploiting it). Where spam filtering is not great, exploit vendors can use spam messages for reconnaissance, with the service provider, client vendor, or client applications not necessarily being aware of the threat activity.
  2. When or if there is a significant need to rework how keying operates, or how identity properties linked to an API are handled more broadly, there is a risk that implementation of updates may be delayed until the revisions have had time to be adopted by clients. While this might be great for competition vis-a-vis interoperability it will, also, have the effect of signalling an oncoming change to threat actors, who may accelerate activities to get footholds on devices, or it may warn these actors that they, too, need to update their tactics, techniques, and procedures (TTPs).
  3. As a more general point, threat actors might work to develop and propagate interoperable clients that they have, already, compromised–we’ve previously seen nation-state actors do so and there’s no reason to expect this behaviour to stop in a world of interoperable clients. Alternately, threat actors might try and convince targets to move to ‘better’ clients that contain known vulnerabilities but which are developed and made available by legitimate vendors. Whereas, today, an exploit developer must target specific messaging systems that deliver that systems’ messages, a future world of interoperable messaging will likely expand the clients that threat actors can seek to exploit.
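The first concern above, capability discovery doubling as reconnaissance, can be sketched simply: the feature set a client advertises during interoperability negotiation can narrow down which implementation, and even which patch level, a target is running. Every client name, feature, and version mapping below is hypothetical; the point is the shape of the inference, not any real product.

```python
# Sketch: capability probes as attacker reconnaissance. An exploit vendor
# that knows which builds ship which feature sets can fingerprint a peer
# from an ordinary-looking capability exchange. All names are invented.

# Hypothetical mapping from advertised feature sets to client builds.
KNOWN_FINGERPRINTS = {
    frozenset({"reactions", "typing-indicators"}): "ChatApp 1.x (unpatched)",
    frozenset({"reactions", "typing-indicators", "edits"}): "ChatApp 2.x (patched)",
}

def fingerprint(advertised_features):
    """Guess the client build from the features it advertises."""
    return KNOWN_FINGERPRINTS.get(frozenset(advertised_features), "unknown")

# A standard capability probe, sent as ordinary interoperability traffic,
# suggests this peer is likely running the unpatched 1.x build.
print(fingerprint(["typing-indicators", "reactions"]))
```

Because the probe is indistinguishable from legitimate negotiation traffic, neither the service provider nor the client has an obvious signal that reconnaissance is underway, which is what makes weak spam filtering in an interoperable world so useful to attackers.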

One of the severe dangers and challenges facing the current internet regulation landscape is that a large volume of new actors have entered the various overlapping policy fields. For a long time there haven’t been that many of us, and anyone who’s been around for 10-15 years tends to be sufficiently multidisciplinary to think about how activities in policy domain X might/will have consequences for domains Y and Z. The new raft of politicians and their policy advisors, in contrast, often lack this broad awareness. The result is that proposals are being advanced around the world by ostensibly well-meaning individuals and groups to address issues associated with online harms, speech, CSAM, competition, and security. However, these same parties often lack awareness of how the solutions meant to solve their favoured policy problems will have effects on neighbouring policy issues. And, where they are aware, they often don’t care because that’s someone else’s policy domain.

It’s good to see more people participating and more inclusive policy making processes. And seeing actual political action on many issue areas after 10 years of people debating how to move forward is exciting. But too much of that action runs counter to the thoughtful warnings and need for caution that longer-term policy experts have been raising for over a decade.

We are almost certainly moving towards a ‘new Internet’. It remains in question, however, whether this ‘new Internet’ will see resolutions to longstanding challenges or if, instead, the rush to regulate will change the landscape by finally bringing to life the threats that long-term policy wonks have been working to forestall or prevent for much of their working lives. To date, I remain increasingly concerned that we will experience the latter rather than witness the former.


  1. For the record, I currently remain unconvinced it is possible to implement end-to-end encryption across platforms generally. ↩︎
Link

‘Glass Time’ Shortcut

Photo by Ron Lach on Pexels.com

Like most photographers I edit my images with the brightness on my screen set to its maximum. Outside of specialized activities, however, I and others tend not to set the brightness this high, so as to conserve battery power.

The result is that when we, as photographers, as well as members of the viewing public, look at images on photography platforms we often aren’t seeing them as their creator(s) envisioned. The images are, quite starkly, darker on our screens than on those of the photographers who made them.1

For the past few months whenever I’ve opened Glass or looked at photos on other platforms I’ve made an effort to ensure that I’ve maximized the brightness on my devices as I’ve opened the app. This said, I still forget sometimes and only realize halfway through a viewing session. So I went about ensuring this ‘mistake’ didn’t happen anymore by creating a Shortcut called ‘Glass Time’!

The Shortcut is pretty simple: when I run it, it maximizes the brightness of my iOS device and opens the Glass app. If you download the Shortcut it’s pretty easy to modify it to instead open a different application (e.g., Instagram, 500px, Flickr, etc). It’s definitely improved my experiences using the app and helped me to better appreciate the images that are shared by individuals on the platform.

Download ‘Glass Time’ Shortcut


  1. Of course there are also issues associated with different devices having variable maximum brightness and colour profiles. These kinds of differences are largely intractable in the current technical milieu. ↩︎
Link

The Risks Linked With Canadian Cyber Operations in Ukraine

Photo by Sora Shimazaki on Pexels.com

Late last month, Global News published a story on how the Canadian government is involved in providing cyber support to the Ukrainian government in the face of Russia’s illegal invasion. While the Canadian military declined to confirm or deny any activities they might be involved in, the same was not true of the Communications Security Establishment (CSE). The CSE is Canada’s foreign signals intelligence agency. In addition to collecting intelligence, it is also mandated to defend Canadian federal systems and those designated as of importance to the government of Canada, provide assistance to other federal agencies, and conduct active and defensive cyber operations.1

From the Global News article it is apparent that the CSE is involved in both foreign intelligence operations as well as cyber defensive activities. Frankly, these kinds of activities are generally, and persistently, undertaken with regard to the Russian government, and so it’s not a surprise that they continue apace.

The CSE spokesperson also noted that the government agency is involved in ‘cyber operations’ though declined to explain whether these are defensive cyber operations or active cyber operations. In the case of the former, the Minister of National Defense must consult with the Minister of Foreign Affairs before authorizing an operation, whereas in the latter both Ministers must consent to an operation prior to it taking place. Defensive and active operations can assume the same form–roughly the same activities or operations might be undertaken–but the rationale for the activity being taken may vary based on whether it is cast as defensive or active (i.e., offensive).2

These kinds of cyber operations are the ones that most worry scholars and practitioners, on the basis that foreign operators or adversaries may misread a signal from a cyber operation, or because the operation might have unintended consequences. Thus, the operations that the CSE is undertaking run the risk of accidentally (or intentionally, I guess) escalating affairs between Canada and the Russian Federation in the midst of the shooting war between Russian and Ukrainian forces.

While there is, of course, a need for some operational discretion on the part of the Canadian government it is also imperative that the Canadian public be sufficiently aware of the government’s activities to understand the risks (or lack thereof) which are linked to the activities that Canadian agencies are undertaking. To date, the Canadian government has not released its cyber foreign policy doctrine nor has the Canadian Armed Forces released its cyber doctrine.3 The result is that neither Canadians nor Canada’s allies or adversaries know precisely what Canada will do in the cyber domain, how Canada will react when confronted, or the precise nature of Canada’s escalatory ladder. The government’s secrecy runs the risk of putting Canadians in greater jeopardy of a response from the Russian Federation (or other adversaries) without the Canadian public really understanding what strategic or tactical activities might be undertaken on their behalf.

Canadians have a right to know at least enough about what their government is doing to be able to begin assessing the risks linked with conducting operations during an active military conflict against an adversary with nuclear weapons. Thus far such information has not been provided. The result is that Canadians are ill-prepared to assess the risk that they may be quietly and quickly drawn into the conflict between the Russian Federation and Ukraine. Such secrecy bodes poorly for being able to hold government to account, to say nothing of preventing Canadians from appreciating the risk that they could become deeply drawn into a very hot conflict scenario.


  1. For more on the CSE and the laws governing its activities, see “A Deep Dive into Canada’s Overhaul of Its Foreign Intelligence and Cybersecurity Laws.” ↩︎
  2. For more on this, see “Analysis of the Communications Security Establishment Act and Related Provisions in Bill C-59 (An Act respecting national security matters), First Reading (December 18, 2017)“, pp 27-32. ↩︎
  3. Not for lack of trying to access them, however: in both cases I filed access to information requests with the government for these documents a year ago, with delays expected to mean I won’t get the documents before the end of 2022 at best. ↩︎
Link

Ontario’s Path Towards Legitimizing Employee Surveillance

Earlier this week, the Ontario government declared that it would be introducing a series of labour reforms. As part of these reforms, employers will be required to inform their employees of how they are being electronically monitored. These requirements will be applied to all employers with 25 or more employees.

Employers already undertake workplace surveillance, though it has become more common and extensive as a result of the pandemic. Where surveillance is undertaken, however, businesses must seek out specialized counsel or services to craft appropriate labour policies or contracting language. This imposes costs and, also, means that different firms may provide slightly different information. The effect is that employers may be more cautious in what surveillance they adopt and be required to expend funds to obtain semi-boutique legal opinions.

While introducing legislation would seem to extend privacy protections for employees, as understood at the moment the reforms will only require a notification to employees of the relevant surveillance. It will not bar the surveillance itself. Further, with a law on the books it will likely be easier for Ontario consulting firms to provide pretty rote advice based on the legislative language. The result, I expect, will be to drive down the transaction costs in developing workplace surveillance policies at the same time that workplace surveillance technologies become more affordable and extensively deployed.

While I suspect that many will herald this law reform as positive for employees, on the basis that at least now they will know how they are being monitored, I am far less optimistic. The specificity of notice will matter, a lot, and unless great care is taken in drafting the legislation employers will obtain a significant degree of latitude in the actual kinds of intrusive surveillance that can be used. Moreover, unless required in legislative language, we can expect employers to conceal the specific modes of surveillance on grounds of needing to protect the methods for operational business reasons. This latter element is of particular concern given that major companies, including office productivity companies like Microsoft, are baking extensive workplace surveillance functionality into their core offerings. Ontario’s reforms are not, in fact, good for employees but are almost certain to be a major boon for their employers.

Link

Europe Planning A DNS Infrastructure With Built-In Filtering

Catalin Cimpanu, reporting for The Record, has found that the European Union wants to build a recursive DNS service that will be available to EU institutions and the European public. The reasons for building the service are manifold, including concerns that American DNS providers are not GDPR compliant and worries that much of Europe is dependent on (largely) American-based or -owned infrastructure.

As part of the European system, plans are for it to:

… come with built-in filtering capabilities that will be able to block DNS name resolutions for bad domains, such as those hosting malware, phishing sites, or other cybersecurity threats.

This filtering capability would be built using threat intelligence feeds provided by trusted partners, such as national CERT teams, and could be used to defend organizations across Europe from common malicious threats.

It is unclear if DNS4EU usage would be mandatory for all EU or national government organizations, but if so, it would grant organizations like CERT-EU more power and the agility it needs to block cyber-attacks as soon as they are detected.

In addition, EU officials also want to use DNS4EU’s filtering system to also block access to other types of prohibited content, which they say could be done based on court orders. While officials didn’t go into details, this most likely refers to domains showing child sexual abuse materials and copyright-infringing (pirated) content.1
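Mechanically, the filtering model the article describes is simple: before doing any resolution work, the resolver checks the queried name against a blocklist assembled from threat-intelligence feeds. The sketch below illustrates that logic; the domains, the in-memory record table, and the resolver stub are all hypothetical stand-ins for a real recursive resolver.

```python
# Minimal sketch of policy-level DNS filtering: the blocklist is consulted
# before resolution, and a blocked name simply fails to resolve. Whether
# that blocklist holds malware domains or 'prohibited content' is invisible
# to the mechanism itself -- which is exactly the governance concern.

BLOCKLIST = {"malware.example", "phishing.example"}  # hypothetical feed data

def resolve(domain, records=None):
    """Return an address for `domain`, or None if policy blocks it."""
    records = records or {"good.example": "192.0.2.10"}  # stand-in for recursion
    # Policy check happens first, covering the domain and its subdomains.
    if domain in BLOCKLIST or any(
        domain.endswith("." + blocked) for blocked in BLOCKLIST
    ):
        return None  # surfaced as NXDOMAIN or REFUSED, a deployment choice
    return records.get(domain)

print(resolve("good.example"))         # resolves normally
print(resolve("malware.example"))      # blocked by policy
print(resolve("sub.phishing.example")) # subdomains inherit the block
```

Note that nothing in the mechanism distinguishes a malware domain from any other entry a government adds to the feed; the filter is content-agnostic by design.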

By integrating censorship/blocking provisions at the policy level of the European DNS, there is a real risk that over time that same system might be used for untoward ends. Consider the rise of anti-LGBTQ laws in Hungary and Poland, and how those governments might be motivated to block access to ‘prohibited content’ that is identified as such by anti-LGBTQ politicians.

While a reader might hope that the European courts could knock down these kinds of laws, their recurrence alone raises the spectre that content that is deemed socially undesirable by parties in power could be censored, even where there are legitimate human rights grounds that justify accessing the material in question.


  1. Boldface not in original. ↩︎
Link

‘Efficiency’ and Basic Rights

Rest of the World has published a terrific piece on the state of surveillance in Singapore, where governmental efficiency drives technologies that are increasingly placing citizens and residents under excessive and untoward kinds of surveillance. The whole piece is worth reading, but I was particularly caught by a comment made by the deputy chief executive of the Cyber Security Agency of Singapore:

“In the U.S., there’s a very strong sense of building technology to hold the government accountable,” he said. “Maybe I’m naive … but I just didn’t think that was necessary in Singapore.”

Better.sg, which has around 1,000 members, works in areas where the government can’t or won’t, Keerthi said. “We don’t talk about who’s responsible for the problem. We don’t talk about who is responsible for solving the problem. We just talk about: Can we pivot this whole situation? Can we flip it around? Can we fundamentally shift human behaviour to be better?” he said. 

… one app that had been under development was a ‘catch-a-predator’ chatbot, which parents would install on their childrens’ [sic] phones to monitor conversations. The concept of the software was to goad potential groomers into incriminating themselves, and report their activity to the police. 

“The government’s not going to build this. … It is hostile, it is almost borderline entrapment,” Keerthi said, matter-of-factly. “Are we solving a real social problem? Yeah. Are parents really thrilled about it? Yeah.”

It’s almost breathtaking to see a government official admit they want to develop tools that the government, itself, couldn’t create for legal reasons but that he hopes will be attractive to citizens and residents. While I’m clearly not condoning the social problem that he is seeking to solve, the solution to such problems should be within the four corners of law as opposed to outside of them. When government officials deliberately move outside of the legal strictures binding them they demonstrate a dismissal of basic rights and due process with regards to criminal matters.

While such efforts might be ‘efficient’ and normal within Singapore they cannot be said to conform with basic rights nor, ultimately, with a political structure that is inclusive and responsive to the needs of its population. Western politicians and policy wonks routinely, and wistfully, talk about how they wish they were as free to undertake policy experiments and deployments as their colleagues in Asia. Hopefully more of them will read pieces like this one to understand that the efficiencies they are so fond of would almost certainly herald the end of the very democratic systems they operate within and are meant to protect.