
Addressing Disinformation and Other Harms Using Generative DRM

The ideas behind this initiative—that a metadata-powered glyph will appear on or around content produced by generative AI technologies to inform individuals of the provenance of the content they come across—depend on a number of somewhat improbable things.

  1. That an entire computing infrastructure will reliably track metadata and present it to users in ways they understand and care about, and that this infrastructure will be widely adopted.
  2. That generative outputs will remain the exception rather than the norm: once generative image manipulation (as opposed to full image creation) is commonplace, how much will this glyph help to notify people of ‘fake’ imagery or other content?
  3. That the benefits of offering metadata-stripping, content-modification, or content-creation systems will be sufficiently low that no widespread or easy-to-adopt ways of removing the identifying metadata from generative content emerge.
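
The third point is worth dwelling on, because stripping identifying metadata from an image file is often trivial. As a minimal sketch (standard-library Python only; the `HypotheticalGenAI` tag is invented for illustration, and real provenance schemes such as C2PA embed more elaborate signed manifests), the following removes the textual/EXIF metadata chunks from a PNG while leaving the image data untouched:

```python
import io
import struct
import zlib

PNG_SIG = b"\x89PNG\r\n\x1a\n"
# Ancillary chunk types that commonly carry metadata in a PNG file.
METADATA_CHUNKS = {b"tEXt", b"zTXt", b"iTXt", b"eXIf", b"tIME"}

def chunk(ctype: bytes, data: bytes) -> bytes:
    """Serialize one PNG chunk: length, type, data, CRC-32 over type+data."""
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data)))

def strip_metadata(png: bytes) -> bytes:
    """Copy a PNG, dropping any metadata chunks along the way."""
    assert png.startswith(PNG_SIG), "not a PNG"
    out = io.BytesIO()
    out.write(PNG_SIG)
    pos = len(PNG_SIG)
    while pos < len(png):
        length, ctype = struct.unpack(">I4s", png[pos:pos + 8])
        if ctype not in METADATA_CHUNKS:
            out.write(png[pos:pos + 12 + length])  # len + type + data + CRC
        pos += 12 + length
    return out.getvalue()

# Build a tiny 1x1 grayscale PNG carrying an invented provenance tag...
ihdr = chunk(b"IHDR", struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0))
label = chunk(b"tEXt", b"Software\x00HypotheticalGenAI")
idat = chunk(b"IDAT", zlib.compress(b"\x00\x00"))  # filter byte + one pixel
png = PNG_SIG + ihdr + label + idat + chunk(b"IEND", b"")

# ...and strip it: the image data survives, the provenance label does not.
cleaned = strip_metadata(png)
assert b"HypotheticalGenAI" in png
assert b"HypotheticalGenAI" not in cleaned
```

Anything that re-encodes an image (a screenshot, a resize, a format conversion) has much the same effect without any deliberate effort at all.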

Finally, where the intent behind fraudulent media is to intimidate, embarrass, or harass (e.g., non-consensual deepfake pornography, violent content), what will the glyph in question do to allay these harms? I suspect very little, unless it is also used to identify the individuals who create such content for the purposes of pursuing criminal or civil offences. And, if that’s the case, then the outputs would constitute a form of data designed to deliberately enable state intervention in private life, which could raise a series of separate, unique, and difficult-to-address problems.


Highlights from TBS’ Guidance on Publicly Available Information

The Treasury Board Secretariat has released, “Privacy Implementation Notice 2023-03: Guidance pertaining to the collection, use, retention and disclosure of personal information that is publicly available online.”

This is an important document, insofar as it clarifies a legal grey space in Canadian federal government policies. Some of the Notice’s highlights include:

  1. Clarifies (some may assert, expands) how government agencies can collect, use, retain, or disclose publicly available online information (PAOI), including from commercial data brokers or online social networking services
  2. PAOI can be collected for administrative or non-administrative purposes, including for communications and outreach, research purposes, or facilitating law enforcement or intelligence operations
  3. Overcollection is an acknowledged problem that organizations should address. Notably, “[a]s a general rule, [PAOI] disclosed online by inadvertence, leak, hack or theft should not be considered [PAOI] as the disclosure, by its very nature, would have occurred without the knowledge or consent of the individual to whom the personal information pertains; thereby intruding upon a reasonable expectation of privacy.”
  4. Notice of collection should be provided, though this may not occur for some investigations or uses of PAOI
  5. Third-parties collecting PAOI on the behalf of organizations should be assessed. Organizations should ensure PAOI is being legitimately and legally obtained
  6. “[I]nstitutions can no longer, without the consent of the individual to whom the information relates, use the [PAOI] except for the purpose for which the information was originally obtained or for a use consistent with that purpose”
  7. Organizations are encouraged to assess their confidence in PAOI’s accuracy and, potentially, to evaluate collected information against several data sources to build confidence
  8. Combinations of PAOI can be used to create an expanded profile that may amplify the privacy equities associated with the PAOI or profile
  9. Retained PAOI should be labelled “publicly available information” to assist individuals in determining whether it is useful for an initial, or continuing, use or disclosure
  10. Government legal officers should be consulted prior to organizations collecting PAOI from websites or services that explicitly bar either data scraping or governments obtaining information from them
  11. There are a number of pieces of advice concerning the privacy protections that should be applied to PAOI. These include: ensuring there is authorization to collect PAOI, assessing the privacy implications of the collection, adopting privacy-preserving techniques (e.g., de-identification or data minimization), adopting internal policies, as well as advice around using attributable versus non-attributable accounts to obtain publicly available information
  12. Organizations should not use profile information from real persons. Doing otherwise runs the risk of an organization violating s. 366 (forgery) or s. 403 (fraudulently personating another person) of the Criminal Code

The Women Behind AI Ethics

Rolling Stone has an excellent article that profiles the women who have been at the forefront of warning how contemporary AI systems can be, and are being, used to (re)inscribe bias, discrimination, sexism, and racism into contemporary and emerging digital tools and systems. An important read that is well worth your time.


New Details About Russia’s Surveillance Infrastructure

Writing for the New York Times, Krolik, Mozur, and Satariano have published new details about the state of Russia’s telecommunications surveillance capacity. In some cases they include documentary evidence of what these technologies can do, including the ability to:

  • identify if mobile phones are proximate to one another to detect meetups
  • identify whether a person’s phone is proximate to a burner phone, to de-anonymize the latter
  • use deep packet inspection systems to target particular kinds of communications metadata associated with secure communications applications
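
The first two capabilities are essentially co-traveler analysis, and the core join is simple to sketch: bucket phone sightings by (tower, time window) and count how often two identifiers land in the same bucket. The sightings below are invented, and real systems run over billions of carrier records, but a minimal version looks like this:

```python
from collections import defaultdict

def co_location_counts(sightings):
    """Count how often each pair of phones appears on the same tower
    in the same time window -- the core of co-traveler analysis."""
    by_slot = defaultdict(set)
    for phone, tower, hour in sightings:
        by_slot[(tower, hour)].add(phone)
    pairs = defaultdict(int)
    for phones in by_slot.values():
        for a in phones:
            for b in phones:
                if a < b:  # count each unordered pair once
                    pairs[(a, b)] += 1
    return pairs

# Hypothetical (phone, tower, hour) sightings: a known phone and a burner
# repeatedly seen together de-anonymizes the burner.
sightings = [
    ("known", "T1", 9), ("burner", "T1", 9),
    ("known", "T2", 13), ("burner", "T2", 13),
    ("other", "T3", 9),
]
print(co_location_counts(sightings)[("burner", "known")])  # 2
```

Repeated co-occurrence across different towers and times is what distinguishes a genuine association from two phones that merely passed through the same busy cell.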

These types of systems are appearing in various repressive states and are being used by their governments.

Similar systems have long been developed in advanced Western democracies, which leads me to wonder whether what we’re seeing from authoritarian countries will ultimately usher in the use of similar technologies in higher rule-of-law states or whether, instead, Western companies will merely export the tools without their being adopted in the countries that develop them.

In effect, will the long-term result of revealing authoritarian capabilities lead to the gradual legitimization of their use in democratic countries so long as using them is tied to judicial oversight?


Critically Assessing AI Technologies’ Economic Potentials

This article by Ramani and Wang, entitled “Why transformative AI is really, really hard to achieve,” is probably the best critical economic analysis of the current AI debates I’ve come across. It assesses what would be required for AI technologies to live up to the current hype about how these technologies will massively benefit economic productivity. Based on the nature of the AI technologies being developed, combined with the history of economic productivity enhancements over time, the authors conclude that the present-day hype is unlikely to be met.

Key to the arguments is that AI technologies do not, as of yet, sufficiently automate a vast set of tasks which are comparatively easy for humans to accomplish, nor are they able to benefit from the latent knowledge and intelligence that guides humans in their daily lives. The authors argue that AI technologies must broadly automate tasks, instead of discretely automating them, in order to achieve cross-industry improvements to productivity. Doing otherwise will merely accelerate aspects of processes which will remain gridlocked in the aggregate by more traditional or less automated processes.

The authors are not dismissing the potential utility of AI technologies; they are instead arguing that these technologies are unlikely to achieve the transformative economic miracles that many suggest are just around the corner. Even if AI systems prove ‘only’ as significant for productivity as the combustion engine (which enhanced productivity discretely rather than comprehensively), that would be a significant accomplishment.


Deskilling and Human-in-the-Loop

I found boyd’s “Deskilling on the Job” to be a useful framing for how to be broadly concerned, or at least thoughtful, about using emerging A.I. technologies in professional as well as training environments.

Most technologies serve to augment human activity. In sensitive situations we often already require a human-in-the-loop to respond to dangerous errors (see: dam operators, nuclear power staff, etc.). However, if emerging A.I. systems’ risks are to be mitigated by also placing humans in the loop, then it behooves policymakers to ask: how well does this actually work when we thrust humans into correcting often highly complicated issues moments before a disaster?

Not to spoil things, but it often goes poorly, and we then blame the humans in the loop instead of the technical design of the system.1

AI technologies offer an amazing bevy of possibilities. But thinking more carefully about how to integrate them into society while also digging into the history of, and scholarly writing on, automation will almost certainly help us avoid obvious, if recurring, errors in how policymakers think about adding guardrails around AI systems.


  1. If this idea of humans-in-the-loop and the regularity of errors in automated systems interests you, I’d highly encourage you to get a copy of ‘Normal Accidents’ by Perrow. ↩︎

Russian Cyber Doctrine and Its Implementation

While the following might be a bit bellicose, it also has a ring of truth to it.

Using a foreign country’s military doctrine to reframe fuck-ups as successes — here, that the Russians’ real operations have had the intended effects — boils down to doing a GRU colonel’s work for him; placating Gerasimov about whether or not the O6’s department has contributed to winning the war, among other things.

The Russian government and its various agencies have been incredibly active in attempting to influence or degrade the Ukrainian government’s ability to resist the illegal Russian invasion of its territory. At the same time, there has been a back-and-forth, largely in academic and public policy circles, about Russia’s successes and failures. In at least some cases, these arguments assert the success of Russian doctrine without sufficient evidence to maintain the position.

Notwithstanding the value of some of those debates, it’s nice to see a line of critique that is more attentive to the structure of institutions and what often drives them, with the effect of broadening the rationales and explanations for the (un)successful efforts of Russian forces in the cyber domain.


Felony Contempt of Business Model

Cory Doctorow has a great analysis of Netflix and its efforts to define (and delimit) what constitutes a family. The real kicker, though, is the final paragraph:

When [Netflix] used adversarial interoperability to build a multi-billion-dollar global company using the movie studios’ products in ways the studios hated, that was progress. When you define “family” in ways that makes Netflix less money, that’s felony contempt of business model.

Netflix: a company the whole family can appreciate. Just perhaps not together.


Censorship, ChatGPT, and Baidu

The Wall Street Journal is reporting that Baidu will soon integrate ChatGPT into the company’s chat/search offerings. The company plans, however, to:

limit its chatbot’s outputs in accordance with the state’s censorship rules, one of the people said. OpenAI also applies restrictions to ChatGPT’s outputs in an effort to avoid toxic hate speech and politically sensitive topics.

While I have no doubt that Baidu will impose censorship, I wonder whether researchers will be able to leverage the learning properties of ChatGPT to gain insight into what is censored by Baidu. Side-channel research has been used to reveal how censorship is undertaken by companies operating in China; I’d expect using these AI models will offer yet another way of interrogating their censorship engines.
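
The side-channel approach can be sketched abstractly: treat the chatbot as a black-box oracle and bisect inputs to isolate what triggers filtering. In the sketch below, `is_filtered` and the blocklist are invented stand-ins for an unknown server-side censor; measurement work against a deployed system would replace them with actual queries.

```python
# Invented stand-in for an unknown server-side blocklist.
BLOCKLIST = {"forbidden-topic"}

def is_filtered(words):
    """Simulated oracle: does the censor block a message with these words?"""
    return any(w in BLOCKLIST for w in words)

def find_trigger(words):
    """Bisect a filtered word list down to a single triggering keyword,
    using O(log n) oracle queries instead of testing every word."""
    if len(words) == 1:
        return words[0]
    mid = len(words) // 2
    left, right = words[:mid], words[mid:]
    return find_trigger(left) if is_filtered(left) else find_trigger(right)

msg = ["weather", "forbidden-topic", "sports"]
assert is_filtered(msg)
print(find_trigger(msg))  # forbidden-topic
```

Repeating this over many candidate messages lets researchers reconstruct portions of a blocklist without any access to the system’s internals, which is what makes chat interfaces such an attractive measurement surface.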


Doing A Policy-Oriented PhD

Steve Saideman has a good, short, thought on why doing a PhD is rarely a good idea for Canadians who want to get into policy work. Specifically, he writes:

In Canada, alas, there is not that much of a market for policy-oriented PhDs. We don’t have much in the way of think tanks, there are only a few govt jobs that either require PhDs or where the PhD gives one an advantage over an MA, and, the govt does not pay someone more if they have a PhD.

I concur that there are few places, including think tanks or civil society organizations, where you’re likely to find a job if you have a policy-related PhD. Moreover, when you do find one it can be challenging, if not impossible, to find promotion opportunities because the organizations tend to be so small.

That said, I do in fact think that doing a policy-related PhD can sometimes be helpful if you stay pretty applied in your outputs while pursuing your degree. In my case, I spent a lot of time during my PhD on many of the same topics that I still focus on, today, and can command a premium in consulting rates and seniority for other positions because I’ve been doing applied policy work for about 15 years now, inclusive of my time in my PhD. I, also, developed a lot of skills in my PhD—and in particular the ability to ask and assess good questions, know how questions or policy issues had been previously answered and to what effect, and a reflexive or historical thinking capacity I lacked previously—that are all helpful soft skills in actually doing policy work. Moreover, being able to study policy and politics, and basically act as an independent agent for the time of my PhD, meant I had a much better sense of what I thought about issues, why, and how to see them put into practice than I would have gained with just a master’s degree.

Does that mean I’d recommend doing a PhD? Well…no. There are huge opportunity costs you incur in doing one and, also, you can narrow your job market searches by appearing both over-educated and under-qualified. The benefits of holding a PhD tend to become more apparent after a few years in a job, as opposed to being helpful in netting that first one out of school.

I don’t regret doing a PhD. But if someone is particularly committed to doing one, I think they should hurl themselves into it with absolute abandon, treat it as a super-intensive 40–65 hour/week job, and be damn sure they have a lot of non-academic outputs to prove to a future employer that they understand the world and not just academic journals. It’s hard work, which is sometimes rewarding, and there are arguably different (and less unpleasant) ways of getting to a relatively similar end point. But if someone is so motivated by a hard question that they’d be doing the research and thinking about it regardless of whether they were in a PhD program? Then they might as well go and get the piece of paper while figuring out the answer.