
Cybersecurity and White Labelled Android Devices

Trend Micro has a nice short piece on the challenges of assessing the security properties of the various components of Android devices. In short, white labelling incentivizes device manufacturers to invest as little as possible in what they build for the brands that will sell the devices to consumers. Trend Micro included this pointed note on the shenanigans that firmware developers can get up to:

Firmware developers supplying the OEM might agree to provide the software at a lower cost because they can compensate the lost profit through questionable means, for example by discreetly pre-installing apps from other app developers for a fee. There is a whole market built around this bundling service with prices ranging from 1 to 10 Chinese yuan (approximately US$0.14 to US$1.37 as of this writing) per application per device. This is where the risk is: As long as the firmware, packaged apps, and update mechanisms of the device are not owned, controlled, or audited by the smartphone brand itself, a rogue supplier can hide unauthorized code therein.1
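To make the quoted bundling economics concrete, here is a back-of-envelope sketch. Only the per-app, per-device price range comes from Trend Micro’s figures; the production-run size and number of bundled apps are hypothetical values of my own choosing.

```python
# Back-of-envelope for firmware bundling revenue. Only the 1-10 CNY
# per-app, per-device range comes from Trend Micro; the batch size and
# app count below are hypothetical.
price_range_cny = (1, 10)   # Trend Micro's quoted range
devices = 100_000           # hypothetical production run
bundled_apps = 15           # hypothetical number of paid pre-installs

low = price_range_cny[0] * bundled_apps * devices
high = price_range_cny[1] * bundled_apps * devices
print(f"Bundling revenue per batch: {low:,} to {high:,} CNY")
# Bundling revenue per batch: 1,500,000 to 15,000,000 CNY
```

Even at the low end, that margin helps explain how a firmware supplier can undercut competitors on price and still profit.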

While the authors suggest a range of policy options, from SBOMs to placing transparency requirements on devices before administrators ‘trust’ them, I’m not confident in these suggestions’ efficacy when taking a broader look at who principally uses white labelled devices. There are economics at play: if all devices carry the increased input costs associated with greater traceability and accountability, then financial pressure will fall on the individuals in society who are most likely to purchase these devices. I doubt that upper-middle-class individuals would be particularly affected by restricting the availability of many white labelled Android devices, but such restrictions would almost certainly have disproportionate impacts on less affluent members of society, or those who are by necessity price conscious. Should these individuals have to pay more for the computing power that they may depend on for a wide range of tasks—and in excess of how more affluent members of society use their devices?

Security has long been a property that individuals with more money can more easily ‘acquire’, while those who are less affluent have been less able to possess similar quantities or qualities of security in the services and products that they own. I understand and appreciate (and want to agree with) the Trend Micro analysts on how to alleviate some of the worst security properties associated with white labelled devices, but it seems as though any such calculation needs to undertake a broader intersectional analysis. It’s possible that at the conclusion of such an analysis you would still arrive at similar security-related concerns, but you would also include a number of structural social-change policy prescriptions as preconditions that must be met before heightened security can be made more equitably available to more members of society.


  1. Emphasis added. ↩︎

Postal Interception Coming to Canada?

The Canadian Senate is debating Bill S-256, An Act to amend the Canada Post Corporation Act (seizure) and to make related amendments to other Acts. The relevant elements of the speech include:

Under the amendment to the Customs Act, a shipment entering Canada may be subject to inspection by border services officers if they have reason to suspect that its contents are prohibited from being imported into Canada. If this is the case, the shipment, whether a package or an envelope, may be seized. However, an envelope mailed in Canada to someone who resides at a Canadian address cannot be opened by the police or even by a postal inspector.

To summarize, nothing in the course of the post in Canada is liable to demand, seizure, detention or retention, except if a specific legal exception exists in the Canada Post Corporation Act or in one of the three laws I referenced. However, items in the mail can be inspected by a postal inspector, but if it is a letter, the inspector cannot open it to complete the inspection.

Thus, a police officer who has reasonable grounds to suspect that an item in the mail contains an illegal drug or a handgun cannot be authorized, pursuant to a warrant issued by a judge, to intercept and seize an item until it is delivered to the addressee or returned to the sender. I am told that letters containing drugs have no return address.

The Canadian Association of Chiefs of Police, in 2015, raised this very issue (.pdf). They recognised “that search and seizure authorities granted to law enforcement personnel under the Criminal Code of Canada or other criminal law authorities are overridden by the [Canada Post Corporation Act], giving law enforcement no authority to seize, detain or retain parcels or letters while they are in the course of mail and under Canada Post’s control.” The result was that the Association resolved:

that the Canadian Association of Chiefs of Police requests the Government of Canada to amend the Canada Post Corporation Act to provide police, for the purpose of intercepting contraband, with the ability to obtain judicial authorization to seize, detain or retain parcels or letters while they are in the course of mail and under Canada Post’s control.

It would seem that, should Bill S-256 pass into law some seven or eight years after that resolution, police will gain fairly impressive new powers, and decades of mail privacy precedent may come undone.


Who Benefits from 5G?

The Financial Times (FT) ran a somewhat mixed piece on the future of 5G. The thesis is that telecom operators are anxious to realise the financial benefits of 5G deployments but, at the same time, these benefits were always expected to come in the forthcoming years; there was little, if any, expectation that financial benefits would happen immediately as the next-generation infrastructures were deployed.

The article correctly notes that consumers are skeptical of the benefits of 5G while also correctly concluding that 5G was really always about the benefits that 5G Standalone will have for businesses. Frankly, the piece is not well edited, insofar as it combines these two relatively distinct things without distinguishing them in a particularly clear way.

5G Extended relies on existing 4G infrastructures. While there are theoretically faster speeds available to consumers, along with a tripartite spectrum band segmentation that can be used,1 most consumers won’t directly realise the benefits. One group that may, however, benefit (and that was not addressed at all in this piece) are rural customers. Opening up the lower-frequency spectrum blocks will allow 5G signals to travel farther with the benefit significantly accruing to those who cannot receive new copper, coax, or fibre lines. This said, I tend to agree with the article that most of the benefits of 5G haven’t, and won’t, be directly realised by individual mobile subscribers in the near future.2
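To make the propagation point concrete, here is a minimal sketch using the free-space path loss formula, FSPL(dB) = 20·log10(d_km) + 20·log10(f_MHz) + 32.44. It ignores terrain, foliage, and antenna gains, and the centre frequencies below are illustrative rather than any particular country’s allocations.

```python
import math

def fspl_db(distance_km: float, freq_mhz: float) -> float:
    """Free-space path loss in dB (ignores terrain, foliage, antenna gains)."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

# Illustrative centre frequencies for the three 5G spectrum tiers.
bands = {
    "low-band (600 MHz)": 600,
    "mid-band (3.5 GHz)": 3_500,
    "high-band/mmWave (28 GHz)": 28_000,
}

for name, freq_mhz in bands.items():
    print(f"{name}: {fspl_db(10, freq_mhz):.1f} dB loss at 10 km")

# low-band (600 MHz): 108.0 dB loss at 10 km
# mid-band (3.5 GHz): 123.3 dB loss at 10 km
# high-band/mmWave (28 GHz): 141.4 dB loss at 10 km
```

All else being equal, a low-band signal arrives roughly 15 dB stronger than mid-band and 33 dB stronger than mmWave over the same distance, which is why opening lower-frequency blocks disproportionately benefits rural users beyond the reach of new wireline service.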

5G Standalone is really where 5G will theoretically come alive. It’s, also, going to require a whole new way of designing and securing networks. At least as of a year or so ago, China was a global leader here, largely because it had comparatively poor 4G penetration and so had sought to leapfrog to 5G SA.3 This said, American bans on semiconductor sales to Chinese telecoms vendors, such as Huawei and ZTE, have definitely had a negative effect on China’s ability to more fully deploy 5G SA.

In the Canadian case we can see investments by our major telecoms into 5G SA applications. Telus, Rogers, and Bell are all pouring money into technology clusters and universities. The goal isn’t to learn how much faster consumers’ phones or tablets can download data (though new algorithms to better manage/route/compress data are always under research) but, instead, to learn how to take advantage of the more advanced business-to-business features of 5G. That’s where the money is, though the question will remain as to how well telecom carriers will be able to rent-seek on those features when they already make money providing bandwidth and services to businesses paying for telecom products.


  1. Not all countries, however, are allocating the third, high-frequency, band on the basis that its utility remains in doubt. ↩︎
  2. Incidentally: it generally just takes a long, long time to deploy networks. 4G still isn’t reliably available across all of Canada, including in populated rural areas. This delay meaningfully impedes the ability of farmers, as an example, to adopt smart technologies that would reduce the costs associated with farm and crop management and that could, simultaneously, enable more efficient crop yields. ↩︎
  3. Western telecoms, by comparison, want to extend the life of the capital assets they purchased/deployed around their 4G infrastructures and so prefer to go the 5G Extended route to start their 5G upgrade path. ↩︎

Generalist Policing Models Remain Problematic

From the New York Times’ opinion section, this piece on “Why the F.B.I. Is So Far Behind on Cybercrime” reinforces the position that American law enforcement is stymied in investigating cybercrimes because:

…it lacks enough agents with advanced computer skills. It has not recruited as many of these people as it needs, and those it has hired often don’t stay long. Its deeply ingrained cultural standards, some dating to the bureau’s first director, J. Edgar Hoover, have prevented it from getting the right talent.

Emblematic of an organization stuck in the past is the F.B.I.’s longstanding expectation that agents should be able to do “any job, anywhere.” While other global law enforcement agencies have snatched up computer scientists, the F.B.I. tried to turn existing agents with no computer backgrounds into digital specialists, clinging to the “any job” mantra. It may be possible to turn an agent whose background is in accounting into a first-rate gang investigator, but it’s a lot harder to turn that same agent into a top-flight computer scientist.

The “any job” mantra also hinders recruitment. People who have spent years becoming computer experts may have little interest in pivoting to another assignment. Many may lack the aptitude for — or feel uneasy with — traditional law enforcement expectations, such as being in top physical fitness, handling a deadly force scenario or even interacting with the public.

This very same issue plagues the RCMP, which also has a generalist model that discourages or hinders specialization. While we do see better business practices in, say, France, with an increasing LEA capacity to pursue cybercrime, we’re not yet seeing North American federal governments overhaul their own policing services.1

Similarly, the FBI is suffering from an ‘arrest’ culture:

The F.B.I.’s emphasis on arrests, which are especially hard to come by in ransomware cases, similarly reflects its outdated approach to cybercrime. In the bureau, prestige often springs from being a successful trial agent, working on cases that result in indictments and convictions that make the news. But ransomware cases, by their nature, are long and complex, with a low likelihood of arrest. Even when suspects are identified, arresting them is nearly impossible if they’re located in countries that don’t have extradition agreements with the United States.

In the Canadian context, not only is pursuing arrests a problem due to jurisdiction, but the complexity of cases can mean an officer spends huge amounts of time on a computer and not out in the field ‘doing the work’ of their colleagues who are not cyber-focused. This perception of just ‘playing games’ or ‘surfing social media’ can sometimes lead to friction between cyber investigators and older-school leaders.2 And, making things even more challenging, the resources to train officers to detect and pursue Child Sexual Abuse Material (CSAM) are relatively plentiful, whereas economic and other non-CSAM investigations tend to be severely under-resourced.

There is some hope coming for Canadian investigators, by way of CLOUD agreements between the Canadian and American governments and updates to the Cybercrime Convention, though both will require changes to criminal law, and potentially to provincial privacy laws, to empower LEAs with expanded powers. And even with access to more American data to enable investigations, this will not solve the arrest challenges when criminals are operating out of non-extradition countries.

It remains to be seen whether an expanded capacity to issue warrants to American providers will reduce some of the Canadian need for specialized training to investigate more rudimentary cyber-related crimes or whether, instead, it will have a minimal effect overall.


  1. This is also generally true of provincial and municipal services. ↩︎
  2. Fortunately this is a less common issue, today, than a decade ago. ↩︎

National Security Means What, Again?

There have been any number of concerns about Elon Musk’s behaviour, especially in recent weeks and months. This has led some commentators to warn that his purchase of Twitter may raise national security risks. Gill and Lehrich try to make this argument in their article, “Elon Musk Owning Twitter Is a National Security Threat.” They give three reasons:

First, Musk is allegedly in communication with foreign actors – including senior officials in the Kremlin and Chinese Communist Party – who could use his acquisition of Twitter to undermine American national security.

Will Musk’s foreign investors have influence over Twitter’s content moderation policies? Will the Chinese exploit their significant leverage over Musk to demand he censor criticism of the CCP, or turn the dials up for posts that sow distrust in democracy?

Finally, it’s not just America’s information ecosystem that’s at stake, it’s also the private data of American citizens.

It’s worth noting that at no point do the authors provide a definition of ‘national security’, which forces the reader to guess at what they likely mean. More broadly, in journalistic and opinion-writing circles there is a curious, and increasingly common, conjoining of national security and information security. The authors themselves make this link in the kicker paragraph of their article, when they write:

It is imperative that American leaders fully understand Musk’s motives, financing, and loyalties amidst his bid to acquire Twitter – especially given the high-stakes geopolitical reality we are living in now. The fate of American national security and our information ecosystem hang in the balance.1

Information security, generally, is focused on dangers associated with true or false information being disseminated across a population. It is distinguished from cyber security, which is typically focused on the digital security protocols and practices designed to reduce technical computer vulnerabilities. Whereas the former focuses on a public’s mind, the latter attends to how digital and physical systems are hardened against technical exploitation.

Western governments have historically resisted authoritarian governments’ attempts to link the concepts of information security and cyber security. The reason is that authoritarian governments want to establish international principles and norms whereby it becomes appropriate for governments to control the information made available to their publics under the guise of promoting ‘cyber security’. Democratic countries that emphasise the importance of intellectual freedom, freedom of religion, freedom of assembly, and other core rights have historically opposed the promotion of information security norms.

At the same time, misinformation and disinformation have become increasingly popular areas of study and commentary, especially following Donald Trump’s election as POTUS. And, in countries like the United States, Trump’s adoption of lies and misinformation was often cast as a national security issue: correct information should be communicated, and efforts to intentionally communicate false information should be blocked, prohibited, or prevented from massively circulating.

Obviously Trump’s language, actions, and behaviours were incredibly destabilising and abominable for an American president. And his presence on the world stage arguably emboldened many authoritarians around the world. But there is a real risk in using terms like ‘national security’ without definition, especially when the application of ‘national security’ starts to stray into the domain of what could be considered information security. Specifically, as everything becomes ‘national security’ it is possible for authoritarian governments to adopt the language of Western governments and intellectuals, and assert that they too are focused on ‘national security’ whereas, in fact, these authoritarian governments are using the term to justify their own censorious activities.

Now, does this mean that if we in the West are more careful about our use of language, authoritarian governments will become less censorious? No. But by being more careful and thoughtful in our language, public argumentation, and positioning of our policy statements, we may at least prevent those authoritarian governments from using our discourse as a justification for their own activities. We should, then, be careful and precise in what we say to avoid giving a fig leaf of cover to authoritarian activities.

And that will start with parties who use terms like ‘national security’ clearly defining what they mean, such that it is clear how national security differs from information security. Unless, of course, authors and thinkers are in fact leaning into the conceptual apparatus of repressive governments in an effort to save democratic governance. For any author who thinks such a move is wise, however, I must admit that I harbour strong doubts about the efficacy or utility of such attempts.


  1. Emphasis not in original. ↩︎

Can University Faculty Hold Platforms To Account?

Heidi Tworek has a good piece with the Centre for International Governance Innovation, where she questions whether there will be a sufficient number of faculty in Canada (and elsewhere) to make use of information that digital-first companies might be compelled to make available to researchers. The general argument goes that if companies must make information available to academics then these academics can study the information and, subsequently, hold companies to account and guide evidence-based policymaking.

Tworek’s argument focuses on two key things.

  1. First, there has been a decline in the tenured professoriate in Canada, with the effect that the adjunct faculty who are ‘filling in’ are busy teaching and really don’t have a chance to lead research.
  2. Second, while a vanishingly small number of PhD holders obtain a tenure-track role, a reasonable number may be going into the very digital-first companies that researchers need data from in order to hold them accountable.

On this latter point, she writes:

If the companies have far more researchers than universities have, transparency regulations may not do as much to address the imbalance of knowledge as many expect.

I don’t think that hiring people with PhDs necessarily means that companies are addressing knowledge imbalances. Whatever is learned by these researchers tends to be sheltered within corporate walls and protected by NDAs. So those researchers going into companies may learn what’s going on but be unable (or unmotivated) to leverage what they know in order to inform policy discussions meant to hold companies to account.

To be clear, I really do agree with a lot in this article. However, I think it does have a few areas for further consideration.

First, more needs to be said about what, specifically, ‘transparency’ encompasses and how it relates to data types, availability, and so forth. Transparency is a deeply contested concept and there are a lot of ways that the revelation of data creates a funhouse-of-mirrors effect, insofar as what researchers ‘see’ can be badly distorted from what is actually the case.

Second, making data available isn’t just about whether universities have the professors to do the work but, really, whether the government and its regulators have the staff time as well. Professors are doing a lot of things whereas regulators can assign staff to just work the data, day in and day out. Focus matters.

Third, and relatedly, I have to admit that I have pretty severe doubts about the ability of professors to seriously take up and make use of information from platforms, at scale and with policy impact, because it is never going to be their full-time job to do so. Professors are also required to publish in books or journals, which means their outputs will be delayed and inaccessible to companies, government bureaucrats and regulators, and NGO staff. I’m sure academics will have lovely and insightful discussions…but they won’t happen fast enough, or in accessible places or in plain language, to generally affect policy debates.

So, what might need to be added to start fleshing out how universities could be organised to make use of data released by companies and to have policy impact through their research outputs?

First, universities in Canada would need to get truly serious about creating a ‘researcher class’ to analyse corporate reporting. This would involve prioritising the hiring of research associates and senior research associates who have few or no teaching responsibilities.1

Second, universities would need to work to create centres such as the Citizen Lab, or related groups.2 These don’t need to be organisations which try to cover the waterfront of all digital issues. They could, instead, be more focused, so as to reduce the number of staff or fellows needed to fulfil the organisation’s mandate. Any and all centres of this type would see a small handful of people with PhDs (who largely lack teaching responsibilities) guide multidisciplinary teams of staff. Those same staff members would not typically need a PhD. They would need to be nimble enough to move quickly while using a peer-review-lite process to validate findings, but not see journal or book outputs as their primary currency for promotion or hiring.

Third, the centres would need a core group of long-term staffers. This core body of long-term researchers is needed to develop policy expertise that graduate students just don’t possess or develop in their short tenure in the university. Moreover, these same long-term researchers can then train graduate student fellows of the centres in question, with the effect of slowly building a cadre of researchers who are equipped to critically assess digital-first companies.

Fourth, the staff at research centres need to be paid well and properly. They cannot be regarded as ‘graduate student plus’ employees but as specialists who will be of interest to government and corporations. This means that universities will need to pay competitive wages in order to secure the staff needed to fulfil centre mandates.

Basically, if universities are to be successful in holding big data companies to account they’ll need to incubate quasi-NGOs and let them loose under the university’s auspices. It is, however, worth asking whether this should be the goal of the university in the first place: should society be outsourcing a large amount of the ‘transparency research’ that is designed to have policy impact or guide evidence-based policy making to academics, or should we instead bolster the capacities of government departments and regulatory agencies to undertake these activities?

Put differently, and in context with Tworek’s argument: I think the assumption that PhD holders working as faculty in universities are the solution to analysing data released by corporations can only hold if you happen to (a) hold or aspire to hold a PhD, or (b) possess or aspire to possess a research-focused tenure-track job.

I don’t think that either (a) or (b) should guide the majority of the way forward in developing policy proposals as they pertain to holding corporations to account.

Do faculty have a role in holding companies such as Google, Facebook, Amazon, Apple, or Netflix to account? You bet. But if the university, and university researchers, are going to seriously get involved in using data released by companies to hold them to account and have policy impact, then I think we need dedicated and focused researchers. Faculty who are torn between teaching; writing and publishing in inaccessible locations using baroque theoretical lenses; pursuing funding opportunities; and undertaking large amounts of department service and graduate student supervision are just not going to be sufficient to address the task at hand.


  1. In the interests of disclosure, I currently hold one of these roles. ↩︎
  2. Again in the interests of disclosure, this is the kind of place I currently work at. ↩︎

Digital Currency Standards Heat Up

There is an ongoing debate as to which central banks will launch digital currencies, by which date, and how those currencies will be interoperable with one another. Simon Sharwood, writing for The Register, is reporting that China’s Digital Yuan is taking big steps toward answering many of those questions:

According to an account of the meeting in state-controlled media, Fan said standardization across payment systems will be needed to ensure the success of the Digital Yuan.

The kind of standardization he envisioned is interoperability between existing payment systems – whether they use QR codes, NFC or Bluetooth.

That’s an offer AliPay and WeChat Pay can’t refuse, unless they want Beijing to flex its regulatory muscles and compel them to do it.

With millions of payment terminals outside China already set up for AliPay and WeChat Pay, and the prospect of the Digital Yuan being accepted in the very same devices, Beijing has the beginnings of a global presence for its digital currency.

When I walk around my community I very regularly see options to use AliPay or WeChat Pay, and see many people using these options. Were the Chinese government able to take advantage of those existing payment structures to carry a government-associated digital fiat currency as well, it would be a remarkable manoeuvre, and one that could theoretically occur quite quickly. I suspect that when/if some Western politicians catch wind of this they will respond quickly and bombastically.
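The ‘standardization across payment systems’ described above is, at bottom, a payload-format question: if every wallet app can parse the same merchant code, then any wallet can settle at the same terminal. Below is a toy sketch of that idea using a tag-length-value (TLV) layout of the sort used by merchant-presented QR payloads; the tags and sample payload are invented for illustration and are not the actual e-CNY, AliPay, or EMVCo format.

```python
# Toy parser for a tag-length-value (TLV) merchant QR payload. The layout
# and tags are hypothetical, loosely inspired by merchant-presented QR
# formats; they are not any real payment network's specification.

def parse_tlv(payload: str) -> dict[str, str]:
    """Parse 2-digit tag + 2-digit length + value records into a dict."""
    fields, i = {}, 0
    while i < len(payload):
        tag = payload[i:i + 2]
        length = int(payload[i + 2:i + 4])
        fields[tag] = payload[i + 4:i + 4 + length]
        i += 4 + length
    return fields

# Hypothetical tags: 00 = format version, 26 = wallet routing info,
# 59 = merchant name, 54 = amount due.
sample = "000201" + "2604ECNY" + "5907NOODLES" + "540520.00"
print(parse_tlv(sample))
# {'00': '01', '26': 'ECNY', '59': 'NOODLES', '54': '20.00'}
```

The point of the sketch is that both the interoperability and the risk live in the shared format: a terminal or printed code deployed for AliPay would need no hardware change to accept a Digital Yuan wallet that speaks the same tags.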

Other governments’ central banks should, ideally, be well underway in developing the standards for their own digital fiat currencies. These standards should be put into practice in a meaningful way to assess their strengths and correct their deficiencies. Governments that are not well underway in launching such digital currencies are running the risk of seeing some of their population move away from domestically-controlled currencies, or basket currencies where the state determines what composes the basket, to currencies managed by foreign governments. This would represent a significant loss of policy capacity and, arguably, economic sovereignty for at least some states.

Why might some members of their population shift over to, say, the Digital Yuan? In the West this might occur when individuals are travelling abroad, where WeChat Pay and AliPay infrastructure is often more usable and more secure than credit card infrastructures. After using these for a while, the same individuals may continue to use those payment methods for their ease and low cost when they return home. In less developed parts of the world, where AliPay and WeChat Pay are already becoming dominant, it could occur as members of the population continue their shift to digital transactions and away from currencies controlled or influenced by their governments. The effect would be, potentially, to provide a level of influence to the Chinese government while exposing sensitive macro-economic consumer habits that could be helpful in developing Chinese economic, industrial, or foreign policy.

Western government responses might be to bar the use of the Digital Yuan in their countries but this could be challenging should it rely on common standards with AliPay and WeChat Pay. Could a ban surgically target the Digital Yuan or, instead, would it need to target all payment terminals using the same standard and, thus, catch AliPay and WeChat Pay as collateral damage? What if a broader set of states all adopt common standards, which happen to align with the Digital Yuan, and share infrastructure: just how many foreign and corporate currencies could be disabled without causing a major economic or diplomatic incident? To what extent would such a ban create a globally bifurcated (trifurcated? quadfurcated?) digital payment environment?

Though some governments might regard this kind of ‘burn them all’ approach as desirable, there would be an underlying question of whether such an effect would be reasonable and proportionate. We don’t ban WeChat in the West, as an example, in part because such an action would be manifestly disproportionate to the risks associated with the communications platform. It is hard to imagine how banning the Digital Yuan, along with WeChat Pay or AliPay or other currencies using the same standards, would not be similarly disproportionate, given that such a decision would detrimentally affect hundreds of thousands, or millions, of people and businesses that already use these payment systems or standards. It will be fascinating to see how Western central banks move forward to address the rise of digital fiat currencies and, also, how their efforts intersect with the demands and efforts of Western politicians who regularly advocate for anti-China policies and laws.


2022.8.10

I’ve been making some small changes to Excited Pixels. I’ve updated my list of good podcasts (it now includes several of my preferred photography podcasts) and I’ve also created a portfolio page that currently showcases some of my recent favourite monochrome street photography. For my daily photography, check out my Glass profile.


Tech for Whom?

Charley Johnson has a good line of questions and critique for any organization or group that is promoting a ‘technology for good’ program. The crux is that techno-utopian proposals pitch technology as the means of solving a problem as defined by the party making the proposal. Put another way, these kinds of solutions do not tend to solve real underlying problems but, instead, solve the ‘problems’ for which hucksters have already built a pre-designed ‘solution’.

This line of analysis isn’t new, per se, and follows in a long line of equity, social justice, feminist, and critical theory writing. Still, Johnson does a good job of extracting the key issues with techno-utopianism. Key among them is that these solutions tend to present a ‘tech for good’ mindset that:

… frames the problem in such a way that launders the interests, expertise, and beliefs of technologists…‘For good’ is problematic because it’s self-justifying. How can I question or critique the technology if it’s ‘for good’? But more importantly, nine times out of ten ‘for good’ leads to the definition of a problem that requires a technology solution.

One of the things that we are seeing more commonly is the use of data, in and of itself, as something that can be used for good: data-for-good initiatives are cast as being critical to solving climate change, making driving safer, or automating away the messier parts of our lives. Some of these arguments are almost certainly even right! However, the proposed solutions tend to rely on collecting, using, or disclosing data—derived from individuals’ and communities’ activities—without obtaining their informed, meaningful, and ongoing consent. ‘Data for good’ depends, first and often foremost, on removing the agency to say ‘yes’ or ‘no’ to a given ‘solution’.

In the Canadian context efforts to enable ‘good’ uses of data have emerged through successively introduced pieces of commercial privacy legislation. The legislation would permit the disclosure of de-identified personal information for “socially beneficial purposes.” Information could be disclosed to government, universities, public libraries, health care institutions, organizations mandated by the government to carry out a socially beneficial purpose, and other prescribed entities. Those organizations could use the data for a purpose related to health, the provision or improvement of public amenities or infrastructure, the protection of the environment or any other prescribed purpose.

Put slightly differently, whereas Johnson’s analysis addresses a broad concept of ‘data for good’ in tandem with elucidating examples, the Canadian context threatens to see broad-based techno-utopian uses of data enabled at the legislative level. The legislation includes the ability to expand who can receive de-identified data and the range of socially beneficial uses, with new parties and uses being defined by regulation. While there are a number of problems with these kinds of approaches—which include the explicit removal of individuals’ and communities’ consent to having their data used in ways they may actively disapprove of—at their core the problems are associated with power: the power of some actors to unilaterally make non-democratic decisions that will affect other persons or communities.

This capacity to invisibly express power over others is the crux of most utopian fantasies. In such fantasies, power relationships are resolved without ever being made explicit and, in the process, an imaginary is created wherein social ills are fixed as a result of power having been hidden away. Decision making in a utopia is smooth and efficient, and the power asymmetries which enable such situations are either hidden away or just not substantively discussed.

Johnson’s article concludes with a series of questions that act to re-surface issues of power vis-a-vis explicitly raising questions of agency and the origin and nature of the envisioned problem(s) and solution(s):

Does the tool increase the self-determination and agency of the poor?

Would the tool be tolerated if it was targeted at non-poor people?

What problem does the tool purport to solve and who defined that problem?

How does the way they frame the problem shape our understanding of it?

What might the one framing the problem gain from solving it?

We can look to these questions as, at their core, raising issues of power—who is involved in determining how agency is expressed, who has decision-making capabilities in defining problems and solutions—and, through them, issues of inclusion and equity. Implicit in his writing, at least to my eye, is that these decisions cannot be assigned to individuals alone but, rather, to individuals and their communities.

One of the great challenges for modern democratic rule making is that we must transition from imagining political actors as rational, atomic, subjects to ones that are seen as embedded in their community. Individuals are formed by their communities, and vice versa, simultaneously. This means that we need to move away from traditional liberal or communitarian tropes to recognize the phenomenology of living in society, alone and together simultaneously, while also recognizing and valuing the tilting power and influence of ‘non-rational’ aspects of life that give life much of its meaning and substance. These elements of life are most commonly those demonized or denigrated by techno-utopians on the basis that technology is ‘rational’ and is juxtaposed against the ‘irrationality’ of how humans actually live and operate in the world.

Broadly, then, and in conclusion: techno-utopianism is functionally an issue of power and domination. We see ‘tech bros’ and traditional power brokers alike advancing solutions to the problems they perceive, and this approach may be further reified should legislation be passed that embeds this conceptual framework more deeply into democratic nation-states. What is under-appreciated is that while such legislative efforts may make certain techno-utopian activities lawful, the subsequent actions will not necessarily be regarded as legitimate by those affected by the lawful ‘socially beneficial’ uses of de-identified personal data.

The result? At best, ambivalence that reflects the population’s existing alienation from democratic structures of government. More likely, however, is that lawful but illegitimate expressions of ‘socially beneficial’ uses of data will further delegitimize the actions and capabilities of the states, with the effect of further weakening the perceived inclusivity of our democratic traditions.


Adding Context to Facebook’s CSAM Reporting

In early 2021, John Buckley, Malia Andrus, and Chris Williams published an article entitled, “Understanding the intentions of Child Sexual Abuse Material (CSAM) sharers” on Meta’s research website. They relied on information that Facebook/Meta had submitted to NCMEC to better understand why individuals they reported had likely shared illegal content.

The issue of CSAM on Facebook’s networks rose in prominence following a 2019 report in the New York Times. That piece indicated that Facebook was responsible for reporting the vast majority of the 45 million online photos and videos of children being sexually abused. Ever since, Facebook has sought to contextualize the information it discloses to NCMEC and explain the efforts it has put in place to prevent CSAM from appearing on its services.

So what was the key finding from the research?

We evaluated 150 accounts that we reported to NCMEC for uploading CSAM in July and August of 2020 and January 2021, and we estimate that more than 75% of these did not exhibit malicious intent (i.e. did not intend to harm a child), but appeared to share for other reasons, such as outrage or poor humor. While this study represents our best understanding, these findings should not be considered a precise measure of the child safety ecosystem.

This finding is significant, as it suggests that the vast majority of the content reported by Facebook—while illegal!—is not deliberately being shared for malicious purposes. Even if we assume the estimate should be adjusted (say, to only 50% of individuals being non-malicious rather than 75%), we are still left with a significant finding.
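Meta’s own caveat that these findings ‘should not be considered a precise measure’ can be given some shape: with only 150 sampled accounts, ordinary sampling error alone leaves a fairly wide band around the 75% estimate, even before the selection questions raised below. Here is a minimal sketch of a normal-approximation confidence interval; this is my illustration, not Meta’s methodology.

```python
import math

def proportion_ci(p_hat: float, n: int, z: float = 1.96) -> tuple[float, float]:
    """Normal-approximation 95% confidence interval for a sample proportion."""
    se = math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - z * se, p_hat + z * se

low, high = proportion_ci(0.75, 150)
print(f"75% of n=150 -> 95% CI roughly [{low:.1%}, {high:.1%}]")
# 75% of n=150 -> 95% CI roughly [68.1%, 81.9%]
```

Sampling error alone, in other words, does not get you anywhere near a 50/50 split; any adjustment that large would have to come from how the 150 accounts were selected, which is exactly the limitation flagged below.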

There are, of course, limitations to the research. First, it excludes all end-to-end encrypted messages. So there is some volume of content that cannot be detected using these methods. Second, it remains unclear how scientifically robust it was to choose the selected 150 accounts for analysis. Third, and related, there is a subsequent question of whether the selected accounts are necessarily representative of the broader pool of accounts that are associated with distributing CSAM.

Nevertheless, this seeming sleeper hit of a research piece has significant implications, insofar as it suggests that the number of genuinely problematic accounts/individuals disclosing CSAM to other parties is far smaller than raw reporting figures imply. Clearly more work along this line is required, ideally across Internet platforms, in order to add further context and detail to the extent of the CSAM problem and subsequently define what policy solutions are necessary and proportionate.