Categories
Links Writing

Learning from Service Innovation in the Global South

Western policy makers, understandably, often focus on how emerging technologies can benefit their own administrative and governance processes. Looking beyond the Global North to understand how other countries are experimenting with administrative technologies, such as those with embedded AI capacities, can productively reveal the benefits and challenges of applying new technologies at scale.

Rest of World continues to be a superb resource for getting out of prototypical discussions and news cycles, with its vision of capturing people’s experiences of technology outside of the Western world.

Their recent article, “Brazil’s AI-powered social security app is wrongly rejecting claims,” on the use of AI technologies in South American and Latin American countries reveals the profound potential that automation has for processing social benefits claims, as well as how automated systems can struggle with complex claims and further disadvantage the least privileged in society. In focusing on Brazil, we learn how the government is turning to automated systems to expedite access to services; while in aggregate these automated systems may be helpful, there are still complex cases where automation is impairing access to (now largely automated) government services and benefits.

The article also mentions how Argentina is using generative AI technologies to help draft court opinions and Costa Rica is using AI systems to optimize tax filing and detect fraudulent behaviours. It is valuable for Western policymakers to see how smaller, more nimble, or more resource-constrained jurisdictions are integrating automation into service delivery, to learn from their positive experiences, and to improve upon (or avoid) innovations that lead to inadequate service delivery.

Governments are very different from companies. They provide service and assistance to highly diverse populations and, as such, the ‘edge cases’ that government administrators must handle require a degree of attention and care that is often beyond the obligations that corporations have or adopt towards their customer base. We can’t ask, or expect, government administrators to behave like companies because they have fundamentally different obligations and expectations.

It behooves all who are considering the automation of public service delivery to consider how this goal can be accomplished in a trustworthy and responsible manner, where automated services work properly and are fit for purpose, and are safe, privacy protective, transparent and accountable, and human rights affirming. Doing anything less risks entrenching or further systematizing existing inequities that already harm or punish the least privileged in our societies.

Categories
Links Writing

Significant New Cybersecurity Protections Added in iOS 18.1

Apple has quietly introduced an enhanced security feature in iOS 18.1. If you haven’t authenticated to your device recently (within the past few days), the device will automatically revert from the After First Unlock (AFU) state to the Before First Unlock (BFU) state, with the effect of better protecting user information.1

Users may notice this new functionality when they are required to enter their credentials to unlock a device they haven’t used recently. The effect is that stolen or lost devices will be returned to a higher state of security, impeding unauthorized parties from gaining access to the data that users have stored on them.

There is a secondary effect, however: by automatically returning seized devices to a higher state of security (i.e., BFU) after a few days, these iOS 18.1 protections may impede some mobile device forensics practices. This can reduce the volume of user information that is available to state agencies or other parties with the resources to forensically analyze devices.
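Apple has not publicly documented the mechanism, but the behaviour described above can be pictured as a simple inactivity-driven state machine. Everything below is a hypothetical sketch: the 72-hour threshold is an assumption drawn from early reporting, and none of the names correspond to Apple’s actual implementation.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Assumed inactivity window (~3 days); Apple has not published the real value.
INACTIVITY_LIMIT = timedelta(hours=72)

@dataclass
class Device:
    last_unlock: datetime
    state: str = "AFU"  # After First Unlock: cached keys make more data readable

    def tick(self, now: datetime) -> str:
        # If the user hasn't authenticated within the limit, revert to
        # Before First Unlock (BFU), conceptually discarding cached keys.
        if self.state == "AFU" and now - self.last_unlock > INACTIVITY_LIMIT:
            self.state = "BFU"
        return self.state

d = Device(last_unlock=datetime(2024, 11, 1))
assert d.tick(datetime(2024, 11, 2)) == "AFU"  # within the window
assert d.tick(datetime(2024, 11, 5)) == "BFU"  # reverted after ~3 days
```

The point of the model is only to show why a seized device left idle in an evidence locker ends up harder to extract data from than one examined immediately.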

While this behaviour may raise concerns that lawful government investigations will be impaired, it is worth recalling that Apple is responsible for protecting devices around the world. Numerous governments, commercial organizations, and criminal groups are amongst those using mobile device forensics practices, and the iOS device in the hands of a Canadian university student is functionally the same as the iOS device used by a Fortune 50 executive. The result is that all users receive an equivalently high level of security, and all data is strongly safeguarded regardless of a user’s economic, political, or socio-cultural situation.


  1. For more details on the differences between the Before First Unlock (BFU) and After First Unlock (AFU) states, see: https://blogs.dsu.edu/digforce/2023/08/23/bfu-and-afu-lock-states/ ↩︎
Categories
Links Writing

Encryption Use Hits a New Height in Canada

In a continuing demonstration of the importance of strong and privacy-protective communications, the federal Foreign Interference Commission has created a Signal account to receive confidential information.

Encrypted Messaging
For those who may feel more comfortable providing information to the Commission using encrypted means, they may do so through the Signal – Private Messenger app. Those who already have a Signal account can contact the Commission using our username below. Others will have to first download the app and set up an account before they can communicate with the Commission.

The Commission’s Signal Username is signal_pifi_epie20.24

Signal users can also scan QR Code below for the Commission’s username:

The Commission has put strict measures in place to protect the confidentiality of any information provided through this Signal account.

Not so long ago, the Government of Canada was arguing for an irresponsible encryption policy that included the ability to backdoor end-to-end encryption. It’s hard to overstate the significance of a government body now explicitly adopting Signal.

Categories
Writing

Establishing Confidence Ratings in Policy Assessments

Few government policy analysts are trained to assess evidence in a structured way and then assign confidence ratings when providing formal advice to decision makers. This lack of training can be particularly problematic when certain language (“likely,” “probable,” “believe,” etc.) is used in an unstructured way, because these words can lead decision makers to conclude that recommended actions carry a heft of evidence and analysis that may not, in fact, be present.

Put simply: we don’t train people to clinically and rigorously assess the evidence upon which they base their analyses. This can drive certain decisions that otherwise might not be made.

The government analysts who do have this training tend to come from the intelligence community, which has spent decades (if not centuries) attempting to divine how reliable or confident assessments are, because the sources of their data are often partial or questionable.

I have to wonder just what can be done to address this kind of training gap. It doesn’t make sense to send all policy analysts to an intelligence training camp because the needs are not the same. But there should be some kind of training that’s widely and commonly available.

Robert Lee, who works in private practice these days but was formerly in intelligence, set out some high-level framings for how private threat intelligence companies might delineate between different confidence ratings in a blog post a few years ago. His categories (and descriptions) were:

Low Confidence: A hypothesis that is supported with available information. The information is likely single sourced and there are known collection/information gaps. However, this is a good assessment that is supported. It may not be finished intelligence though and may not be appropriate to be the only factor in making a decision.

Moderate Confidence: A hypothesis that is supported with multiple pieces of available information and collection gaps are significantly reduced. The information may still be single sourced but there’s multiple pieces of data or information supporting this hypothesis. We have accounted for the collection/information gaps even if we haven’t been able to address all of them.

High Confidence: A hypothesis is supported by a predominant amount of the available data and information, it is supported through multiple sources, and the risk of collection gaps are all but eliminated. High confidence assessments are almost never single sourced. There will likely always be a collection gap even if we do not know what it is but we have accounted for everything possible and reduced the risk of that collection gap; i.e. even if we cannot get collection/information in a certain area it’s all but certain to not change the outcome of the assessment.
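Lee’s tiers are qualitative, but they hinge on a few recurring variables: how many sources support a hypothesis, how much corroborating information exists, and whether collection gaps have been accounted for. As a rough sketch (the explicit thresholds are my own illustrative assumptions, not Lee’s):

```python
def rate_confidence(num_sources: int, corroborating_items: int,
                    gaps_accounted_for: bool) -> str:
    """Toy translation of Lee's three tiers into explicit criteria.

    The numeric thresholds are illustrative assumptions; Lee's original
    framing is qualitative.
    """
    # High: multiple sources, predominant corroboration, gaps accounted for.
    if num_sources > 1 and corroborating_items >= 3 and gaps_accounted_for:
        return "High"
    # Moderate: multiple supporting items and gaps accounted for, possibly
    # still single sourced.
    if corroborating_items >= 2 and gaps_accounted_for:
        return "Moderate"
    # Low: supported, but likely single sourced with known gaps.
    return "Low"

assert rate_confidence(1, 1, False) == "Low"
assert rate_confidence(1, 2, True) == "Moderate"
assert rate_confidence(3, 4, True) == "High"
```

Even a toy function like this makes the point that a confidence rating should be traceable back to stated criteria, rather than to an analyst’s gut feel.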

While this kind of categorization helps to clarify intelligence products, I’m less certain how effective it is when it comes to more general policy advice. In these situations, assessments of likely behaviours may be predicated on ‘softer’ sources of data, such as a policy actor’s past behaviours. The result is that predictions may sometimes be based less on specific and novel data points and, instead, on a broader psychographic or historical understanding of how an actor is likely to behave in certain situations and conditions.

Example from Kent’s Words of Estimative Probability

Lee also provided the words of estimative probability that Sherman Kent developed in the 1960s for CIA assessments. I think I prefer the Kent approach, if only because it provides a broader kind of language around “why” a given assessment is more or less accurate.
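For reference, Kent’s table tied each estimative phrase to an approximate numeric band. The ranges below are an approximate reconstruction from commonly reproduced versions of the table (reproductions vary slightly), not an authoritative transcription:

```python
# Approximate reconstruction of Sherman Kent's words of estimative
# probability; bands are (low %, high %) and vary between reproductions.
KENT_SCALE = {
    "certain": (100, 100),
    "almost certain": (87, 99),
    "probable": (63, 87),
    "chances about even": (40, 60),
    "probably not": (20, 40),
    "almost certainly not": (2, 12),
    "impossible": (0, 0),
}

def phrase_for(probability: int) -> str:
    """Map a numeric probability (0-100) to Kent's estimative language."""
    for phrase, (low, high) in KENT_SCALE.items():
        if low <= probability <= high:
            return phrase
    # Some values fall between bands; a real product would force analysts
    # to pick the nearest phrase rather than leave it ambiguous.
    return "between defined bands"

assert phrase_for(75) == "probable"
```

The appeal of this approach is exactly the point made above: the phrase and the number travel together, so a reader can see why an assessment is framed as more or less likely.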

While I understand and appreciate that threat intelligence companies often work with specific data points that lead to analytic determinations, most policy work is much softer and consequently doesn’t (to me) clearly align with the more robust efforts to establish confidence ratings that we see today. Nevertheless, some kind of more rigorous approach to providing recommendations to decision makers is needed so that executives have a strong sense of an analyst’s confidence in any recommendation, especially when there are competing policy options at play. While intuition drives a considerable amount of policy work, even a little more formalized structure and analysis would almost certainly benefit public policy decision-making processes.

Categories
Links

New York City’s Chatbot: A Warning to Other Government Agencies?

A good article by The Markup assessed the accuracy of New York City’s municipal chatbot, which is intended to provide New Yorkers with information about starting or operating a business in the city. The journalists found that the chatbot regularly provided false or incorrect information that could expose businesses to legal repercussions or significantly discriminate against city residents. Problematic outputs included incorrect housing-related information, whether businesses must accept cash for services rendered, whether employers can take cuts of employees’ tips, and more.

While New York does include a warning to those using the chatbot, it remains unclear (and perhaps doubtful) whether residents who use it will know when to dispute its outputs. Moreover, statements about how the tool can be helpful, and about the sources it is trained on, may lead individuals to trust the chatbot.

In aggregate, this speaks to how important it is to effectively communicate risks to users, beyond policies that simply mandate some kind of disclosure, and demonstrates the importance of government institutions more carefully assessing (and appreciating) the risks of these systems prior to deploying them.

Categories
Aside Links

Highlights from TBS’ Guidance on Publicly Available Information

The Treasury Board Secretariat has released, “Privacy Implementation Notice 2023-03: Guidance pertaining to the collection, use, retention and disclosure of personal information that is publicly available online.”

This is an important document, insofar as it clarifies a legal grey space in Canadian federal government policies. Some of the Notice’s highlights include:

  1. Clarifies (some may assert, expands) how government agencies can collect, use, retain, or disclose publicly available online information (PAOI), including from commercial data brokers or online social networking services
  2. PAOI can be collected for administrative or non-administrative purposes, including for communications and outreach, research purposes, or facilitating law enforcement or intelligence operations
  3. Overcollection is an acknowledged problem that organizations should address. Notably, “[a]s a general rule, [PAOI] disclosed online by inadvertence, leak, hack or theft should not be considered [PAOI] as the disclosure, by its very nature, would have occurred without the knowledge or consent of the individual to whom the personal information pertains; thereby intruding upon a reasonable expectation of privacy.”
  4. Notice of collection should be provided, though this may not occur in the case of some investigations or uses of PAOI
  5. Third parties collecting PAOI on behalf of organizations should be assessed. Organizations should ensure PAOI is being legitimately and legally obtained
  6. “[I]nstitutions can no longer, without the consent of the individual to whom the information relates, use the [PAOI] except for the purpose for which the information was originally obtained or for a use consistent with that purpose”
  7. Organizations are encouraged to assess their confidence in PAOI’s accuracy and potentially evaluate collected information against several data sources to increase confidence
  8. Combinations of PAOI can be used to create an expanded profile that may amplify the privacy equities associated with the PAOI or profile
  9. Retained PAOI should be labelled “publicly available information” to assist individuals in determining whether it is useful for an initial, or continuing, use or disclosure
  10. Government legal officers should be consulted before organizations collect PAOI from websites or services that explicitly bar either data scraping or governments obtaining information from them
  11. There are a number of pieces of advice concerning the privacy protections that should be applied to PAOI. These include: ensuring there is authorization to collect PAOI, assessing the privacy implications of the collection, adopting privacy-preserving techniques (e.g., de-identification or data minimization), adopting internal policies, as well as advice around using attributable versus non-attributable accounts to obtain publicly available information
  12. Organizations should not use profile information from real persons. Doing otherwise runs the risk of an organization violating s. 366 (Forgery) or s. 403 (fraudulently impersonating another person) of the Criminal Code
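The Notice is prose guidance and prescribes no data schema, but highlights 7, 9, and 11 suggest what a PAOI record-keeping structure might look like in practice. The sketch below is entirely my own illustration; every field name is an assumption, not something the Notice specifies:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class PAOIRecord:
    subject: str
    content: str
    source: str
    collected_on: date
    # Highlight 9: retained PAOI should be labelled as such, so later users
    # can judge whether continued use or disclosure is appropriate.
    label: str = "publicly available information"
    # Highlight 8: track combinations, since merged PAOI can create an
    # expanded profile with amplified privacy implications.
    combined_with: list = field(default_factory=list)

def minimize(record: PAOIRecord, purpose_fields: set) -> dict:
    """Highlight 11 (data minimization): keep only the fields needed for
    the stated purpose."""
    return {k: v for k, v in vars(record).items() if k in purpose_fields}

r = PAOIRecord("example subject", "post text", "social network",
               date(2023, 11, 1))
assert minimize(r, {"subject", "label"}) == {
    "subject": "example subject",
    "label": "publicly available information"}
```

The design choice worth noting is that the provenance label travels with the record itself, so minimization and consistent-use checks can be applied long after collection.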
Categories
Writing

Overclassification and Its Impacts


Jason Healey and Robert Jervis have a thought-provoking piece over at the Modern War Institute at West Point. The crux of the argument is that, as a result of overclassification, it’s challenging if not impossible for policymakers or members of the public (to say nothing of individual analysts in the intelligence community or legislators) to truly understand the nature of contemporary cyberconflict. While there’s a great deal written about how Western organizations have been targeted by foreign operators, and how Western governments have been detrimentally affected by foreign operations, there is considerably less written about the effects of Western governments’ own operations against foreign states because those operations are classified.

To put it another way, there’s no real way of understanding the cause and effect of operations, insofar as it’s not apparent whether foreign operators are behaving as they are in reaction to Western cyber operations, or to perceptions of Western cyber operations. The kinds of communiqués provided by American intelligence officials, while somewhat helpful, also tend to obscure as much as they reveal (on good days). Healey and Jervis write:

General Nakasone and others are on solid ground when highlighting the many activities the United States does not conduct, like “stealing intellectual property” for commercial profit or disrupting the Olympic opening ceremonies. There is no moral equivalent between the most aggressive US cyber operations like Stuxnet and shutting down civilian electrical power in wintertime Ukraine or hacking a French television station and trying to pin the blame on Islamic State terrorists. But it clouds any case that the United States is the victim here to include such valid complaints alongside actions the United States does engage in, like geopolitical espionage. The concern of course is a growing positive feedback loop, with each side pursuing a more aggressive posture to impose costs after each fresh new insult by others, a posture that tempts adversaries to respond with their own, even more aggressive posture.

Making things worse, the researchers and academics who are ostensibly charged with better understanding and unpacking what Western intelligence agencies are up to sometimes decline to fulfill their mandate. The reasons are not surprising: engaging in such revelations threatens possible career prospects, endangers the very publication of the research in question, or risks cutting off access to interview subjects in the future. Healey and Jervis focus on the bizarre logics of working in and researching the intelligence community in the United States, saying (with emphasis added):

Think-tank staff and academic researchers in the United States often shy away from such material (with exceptions like Ben Buchanan) so as not to hamper their chances of a future security clearance. Even as senior researchers, we were careful not to directly quote NSA’s classified assessment of Iran, but rather paraphrased a derivative article.

A student, working in the Department of Defense, was not so lucky, telling us that to get through the department’s pre-publication review, their thesis would skip US offensive operations and instead focus on defense.

Such examples highlight the distorting effects of censorship or overclassification: authors are incentivized to avoid what patrons want ignored and emphasize what patrons want highlighted or what already exists in the public domain. In paper after paper over the decades, new historical truths are cumulatively established in line with patrons’ preferences because they control the flow and release of information.

What are the implications as written by Healey and Jervis? In intelligence communities the size of the United States’, information gets lost or is not passed to whomever it ideally should reach. Overclassification also means that policymakers and legislators who aren’t deeply ‘in the know’ will likely make decisions based on half-founded facts, at best. In countries such as Canada, where parliamentary committees cannot access classified information, they will almost certainly be confined to working off of rumour, academic reports, unclassified government reports, media accounts that divulge secrets or gossip, and the words spoken by the heads of security and intelligence agencies. None of this is ideal for controlling these powerful organizations, and the selective presentation of what Western agencies are up to actually risks compounding broader social ills.

Legislative Ignorance and Law

One of the results of overclassification is that legislators, in particular, become ill-suited to actually understanding national security legislation that is presented before them. It means that members of the intelligence and national security communities can call for powers while members of parliament are largely prevented from asking particularly insightful questions or truly appreciating the implications of the powers being asked for.

Indeed, in the Canadian context it’s not uncommon for parliamentarians to have debated a national security bill in committee for months and, when asked later about elements of the bill, to admit that they never really understood it in the first place. The same is true of Ministers who have subsequently signed off on broad classes of operations authorized by said legislation.

Part of that lack of understanding is the absence of examples of how powers have been used in the past, and how they might be used in the future; when engaging with this material entirely in the abstract, it can be tough to grasp the likely or possible implications of any legislation or authorization that is at hand. This is doubly true in situations where new legislation or Ministerial authorization will permit secretive behaviour, often using secretive technologies, to accomplish equally secretive objectives.

Beyond potentially bad legislative debates leading to poorly understood legislation being passed into law and Ministers consenting to operations they don’t understand, what else may follow from overclassification?

Nationalism, Miscalculated Responses, and Racism

To begin with, it creates a situation where ‘we’ in the West are being attacked by ‘them’ in Russia, Iran, China, North Korea, or other distant lands. I think this is problematic because it casts Western nations, and especially those in the Five Eyes, as innocent victims in the broader world of cyber conflict. Of course, individuals with expertise in this space will scoff at the idea (we all know that ‘our side’ is up to tricks and operations as well!), but for the general public or legislators, that knowledge doesn’t get communicated using similarly robust or illustrative examples. The result is that the operations of competitor nations can be cast as acts of ‘cyberwar’ without any appreciation that those actions may, in fact, be commensurate with operations that Five Eyes nations have themselves launched. In creating an Us versus Them, and casting the Five Eyes and the West more broadly as victims, a kind of nationalism can be incited where ‘They’ are threats whereas ‘We’ are innocents. In a highly complex and integrated world, these kinds of sharp and inaccurate concepts can fuel hate and socially divisive attitudes, activities, and policies.

At the same time, nations may perceive themselves to be targeted by Five Eyes nations, and attribute effects to Five Eyes operations even when that isn’t the case. When a set of perimeter logs shows something strange, or when computers are affected by ransomware or wiperware, or another kind of security event takes place, these less resourced nations may simply assume that they’re being targeted by a Five Eyes operation. The result is that foreign governments may drum up nationalist concerns about ‘the West’ or ‘the Five Eyes’ while simultaneously queuing up their own operations to respond to what may, in fact, have been activity that was totally divorced from the Five Eyes.

I also worry that the overclassification problem can lead to statements in Western media that demonize broad swathes of the world as dangerous, bad, or threatening for reasons that remain unapparent because Western activities are suppressed from public commentary. Such statements arise with regular frequency, where this or that is attributed to China, or where Russia or Middle Eastern countries are blamed for the most recent ill on the Internet.

The effect of such statements can be to incite differential degrees of racism. When mainstream newspapers, as an example, constantly beat the drum that the Chinese government (and, by extension, Chinese people) are threats to the stability and development of national economies or to world stability, over time this has the effect of teaching people that China’s government and citizens alike are dangerous. Moreover, without information about Western activities, the operations conducted by foreign agencies can be read out of context, with the effect that people of certain ethnicities are regarded as inherently suspicious or sneaky as compared to those (principally white) persons who occupy the West. While I would never claim that the overclassification of Western intelligence operations is the root cause of racism in societies, I do believe that overclassification can fuel misinformation about the scope of geopolitics and Western intelligence gathering operations, with the consequence of facilitating certain subsequent racist attitudes.

Solutions

A colleague of mine has, in the past, given presentations and taught small courses in some of Canada’s intelligence community. This colleague lacks any access to classified materials, and his classes focus on how much high-quality information is publicly available when you know how and where to look for it, and how to analyze it. Students are apparently regularly shocked: they have access to the classified materials, yet their understandings of the given issues are routinely more myopic and less robust. Because they have access to classified material, they tend to focus as much, or more, on it, since the secretive nature of the material makes it ‘special’.

This is not a unique issue and, in fact, has been raised in the academic literature. When someone has access to special or secret knowledge they are often inclined to focus in on that material, on the assumption that it will provide insights in excess of what is available in open source. Sometimes that’s true, but oftentimes less so. And this ‘less so’ becomes especially problematic in an era where governments tend to classify a great deal of material simply because the default is to assume that anything could potentially be revelatory of an agency’s operations. In this kind of era, overvaluing classified materials can lead to less insightful understandings of the issues of the day while failing to appreciate that much of what is classified, and thus cast as ‘special’, really doesn’t provide much of an edge when engaging in analysis.

The solution is not to declassify all materials but, instead, to adopt far more aggressive declassification processes. This could, as just an example, entail tying declassification to organizations’ budgets, such that if they fail to declassify materials their budgets are realigned in subsequent quarters or years until they make up for the prior years’ shortfalls. Extending the powers of Information Commissioners, who are tasked with compelling government institutions to publish documents when they are requested by members of the public or parliamentarians (preferably subject to a more limited set of exemptions than exist today), might also help. And having review agencies which can unpack the higher-level workings of intelligence community organizations can help as well.

Ultimately, we need to appreciate that national security and intelligence organizations do not exist in a bubble, but that their mandates mean that the externalized problems linked with overclassification are typically not seen as issues that these organizations, themselves, need to solve. Nor, in many cases, will they want to solve them: it can be very handy to keep legislators in the dark and then ask for more powers, all while raising the spectre of the Other and concealing the organizations’ own activities.

We do need security and intelligence organizations, but as they stand today their tendency towards overclassification runs the risk of compounding a range of deleterious conditions. At least one way of ameliorating those conditions almost certainly includes reducing the amount of material that these agencies currently classify as secret and thus keep from the public eye. On this point, I firmly agree with Healey and Jervis.

Categories
Links Writing

Election Nightmare Scenarios

The New York Times has a selection of experts’ ‘nightmare scenarios’ for the forthcoming US election. You can pick and choose which gives you colder sweats (I tend to worry about domestic disinformation, a Bush v. Gore situation, or uncounted votes) but, really, few of these nightmares strike to the heart of the worst of the worst.

American institutions have suffered significantly under Trump. Moreover, public polarization and the movement of parts of the US electorate (and, to different extents, global electorates) into alternate-reality bubbles have badly wounded the supports that are meant to facilitate peaceful transitions of power and let the loser believe in the outcomes of elections. Democracies don’t die in darkness, per se, but through neglect and an electorate’s unwillingness to engage, because change tends to be hard, slow, and incremental. There are solutions to democratic decline, and focusing on the next electoral cycles matters, but we can’t focus on elections to the detriment of understanding how to rejuvenate democratic systems of governance more generally.

Categories
Aside Links

With Remote Hacking, the Government’s Particularity Problem Isn’t Going Away

Crocker’s article is a defining summary of the legal problems associated with the U.S. government’s attempts to use malware to conduct lawful surveillance of persons suspected of breaking the law. He explores how, even after the law is shifted to authorize magistrates to issue warrants pertaining to persons outside of their jurisdictions, broader precedent concerning wiretaps may prevent the FBI or other actors from using currently-drafted warrants to deploy malware en masse. Specifically, the current framework might violate basic constitutional guarantees that have been defined in caselaw over the past century, rendering the mass deployment of malware an unlawful means of surveillance.

Categories
Aside Links

Meet Moxie Marlinspike, the Anarchist Bringing Encryption to All of Us

Meet Moxie Marlinspike, the Anarchist Bringing Encryption to All of Us:

In March, Brazilian police briefly jailed a Facebook exec after WhatsApp failed to comply with a surveillance order in a drug investigation. The same month, The New York Times revealed that WhatsApp had received a wiretap order from the US Justice Department. The company couldn’t have complied in either case, even if it wanted to. Marlinspike’s crypto is designed to scramble communications in such a way that no one but the people on either end of the conversation can decrypt them (see sidebar). “Moxie has brought us a world-class, state-of-the-art, end-to-end encryption system,” WhatsApp cofounder Brian Acton says. “I want to emphasize: world-class.”

For Marlinspike, a failed wiretap can mean a small victory. A few days after Snowden’s first leaks, Marlinspike posted an essay to his blog titled “We Should All Have Something to Hide,” emphasizing that privacy allows people to experiment with lawbreaking as a precursor for social progress. “Imagine if there were an alternate dystopian reality where law enforcement was 100 percent effective, such that any potential offenders knew they would be immediately identified, apprehended, and jailed,” he wrote. “How could people have decided that marijuana should be legal, if nobody had ever used it? How could states decide that same-sex marriage should be permitted?”

We live in a world where mass surveillance is a point of fact, not a fear linked to dystopic science fiction novels. Moxie’s work doesn’t blind the watchers, but it has let massive portions of the world shield the content of their communications (if not the fact that they are communicating in the first place) from third parties seeking to access those communications. Now unauthorized parties, such as government agencies, are increasingly being forced to target specific devices instead of communications networks writ large, which may have the effect of shifting state surveillance from that which is mass to that which is targeted. Such a consequence would be a major victory for all persons, regardless of whether they live in a democratic state or not.