Categories: Writing

The Changing Utility of Social Media

Several years ago I was speaking with a special advisor to President George W. Bush. He was also an academic, and in the summer he returned to his university to teach some of his international relations courses. This was during the time when the US had forces stationed in Iraq, and his students regularly had more up-to-date information on what was happening on the ground than he did, notwithstanding his broad security clearance and access to top US intelligence. How was this possible?

His students were on Twitter.

Another story: when I was doing my PhD there was an instance where it became clear that the Iranian government had managed to access information that should have been encrypted in transit while people in Iran were using Google products. After figuring this out I shared the information on Twitter, and the infosec community subsequently went to work to rectify the situation.

There are lots of similar stories of how social media has been good for individuals in their personal and professional lives. But there are equally (or more) stories where social media services have fed serious and life-threatening problems. The Myanmar genocide. Undermining young women’s sense of self-confidence and feeding thoughts of self-harm. Enabling a former President to accelerate an irregular political and policy environment, often with harmful effects on members of government, residents of the United States, and the world more broadly.

The Future of Social Media

But the social media services that enabled the positive and negative network effects of the past are significantly different today than they were just five years ago. What does this mean for the future of social media services?

First, we need to assess the extent to which the services remain well situated for their purposes. With the sharing of popular news, as an example, some companies are moving away from carrying it, partially or entirely, in response to economics or emerging laws and regulations. What does it mean when a core group of heavy users — journalists, academics, some in government — no longer sees the same utility in engaging online? What does this mean for the affordances of new services?

Second, to what extent are the emerging services really able to address the harms and problems of the old services? How can these services be made ‘safe to use’ and promote equity and avoid generating harms to some individuals and communities? I think there is a valid open question around whether you can ever create a real-time communications platform that enables mass broadcast, and which does not amplify historical harms and dangerous social effects.

Third, to what extent have these services outlived some of their utility? While individuals used to share information broadly on social media networks, they can now retreat to large chat groups or online chat services (i.e., the next generation of AOL chat is here!). These more private experiences still enable the formation of community without the exposure to some of the harmful or disquieting content or messages that existed on the more public social media sites.1

There has also been an explosion of new Twitter competitors (along with services competing with other networks, including Instagram and popular corporate chat services). While this has the benefit of reducing some of the aggregated harms that can arise, if only because individuals are spread out between services and cannot mass against one another as they could previously, it also means that published content may lack the reach it had in the past. Whereas once you may have had thousands of Twitter or Instagram or Facebook followers whom you could alert to pressing issues of social injustice, that same population is now scattered across a bevy of different services and platforms. This dispersion effect makes it hard to attain the kind of thought leader status that was possible even in the relatively recent past.

One solution to these problems, writ large, is to facilitate a ‘Publish (on your) Own Site, Syndicate Elsewhere’ (POSSE) arrangement, where you post on a site you control and then syndicate the content to all the other services. Promoters of POSSE maintain that you can then have a single ‘identity’ or location, put all your content there, and then share it around the world.

Obviously this approach has some initial appeal, and many individuals or groups may prefer it. But a POSSE ‘solution’ to the fragmentation of social media fails to take into account the value of having discrete online identities.

As just one example, I have a website for professional materials, use a service to share and circulate my photographs, blog less formally here, circulate interesting news articles using an RSS feed, share short thoughts about professional topics on LinkedIn, and then have a sequence of chat applications for yet other conversations. Bringing all these together into a single space would be problematic because it would dilute the deliberateness with which each space is imbued. Put differently, I don’t want the materials that might get me a job linked to my street photography or ruminations, on the basis that such links could impede my ability to find the right kind(s) of gainful employment.

As I contemplate the state of social media and identity today, I’m left with the ongoing recognition that classic media organizations played a key role in identifying what was more or less important to pay attention to, especially now that the information sources I cultivated over the past decade have quickly and suddenly changed. The social media that was so useful in aggregating information even intelligence services lacked, and that was used to respond to information security issues, is now long past.

Social media as it was is dead. Long live socialized media.


  1. With the caveat that some groups retreat to these more private spaces to share harmful or disturbing content without worrying that their actions will be detected and stopped. ↩︎
Categories: Links, Writing

Generative AI Technologies and Emerging Wicked Policy Problems

While some emerging generative technologies may positively affect various domains (e.g., certain aspects of drug discovery and biological research, efficient translation between certain languages, speeding up certain administrative tasks, etc.), they are also enabling new forms of harmful activity. Case in point: some individuals and groups are using generative technologies to create child sexual abuse or exploitation materials:

Sexton says criminals are using older versions of AI models and fine-tuning them to create illegal material of children. This involves feeding a model existing abuse images or photos of people’s faces, allowing the AI to create images of specific individuals. “We’re seeing fine-tuned models which create new imagery of existing victims,” Sexton says. Perpetrators are “exchanging hundreds of new images of existing victims” and making requests about individuals, he says. Some threads on dark web forums share sets of faces of victims, the research says, and one thread was called: “Photo Resources for AI and Deepfaking Specific Girls.”

… realism also presents potential problems for investigators who spend hours trawling through abuse images to classify them and help identify victims. Analysts at the IWF, according to the organization’s new report, say the quality has improved quickly—although there are still some simple signs that images may not be real, such as extra fingers or incorrect lighting. “I am also concerned that future images may be of such good quality that we won’t even notice,” says one unnamed analyst quoted in the report.

The ability to produce generative child abuse content is becoming a wicked problem with few (if any) “good” solutions. It will be imperative for policy professionals to learn from past situations where technologies were sometimes found to facilitate child abuse-related harms. In doing so, these professionals will need to draw lessons concerning what kinds of responses demonstrate necessity and proportionality with respect to the emergent harms of the day.

As just one example, we will have to carefully consider how generative AI-created child sexual abuse content is similar to, and distinctive from, past policy debates on the policing of online child sexual abuse content. Such care in developing policy responses will be needed to address these harms and to avoid undertaking performative actions that do little to address the underlying issues that drive this kind of behaviour.

Relatedly, we must also beware the promise that past (ineffective) solutions will somehow address the newest wicked problem. Novel solutions that are custom built to generative systems may be needed, and these solutions must simultaneously protect our privacy, Charter, and human rights while mitigating harms. Doing anything less will, at best, “merely” exchange one class of emergent harms for others.

Categories: Links

Addressing Disinformation and Other Harms Using Generative DRM

The ideas behind this initiative—that a metadata-powered glyph will appear above or around content produced by generative AI technologies to inform individuals of the provenance of content they come across—depend on a number of somewhat improbable things.

  1. A whole computing infrastructure based on tracking metadata reliably and then presenting it to users in ways they understand and care about, and which is adopted by the masses.
  2. That generative outputs will need to remain the exception as opposed to the norm: when generative image manipulation (not full image creation) is normal then how much will this glyph help to notify people of ‘fake’ imagery or other content?
  3. That the benefits of offering metadata-stripping, content-modification, or content-creation systems are sufficiently low that no widespread or easy-to-adopt ways of removing the identifying metadata from generative content will emerge.
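The third assumption is the load-bearing one. Container metadata is trivially separable from pixel data, so any provenance scheme that lives in metadata is only as durable as the tools that respect it. As a rough illustration, the sketch below (plain Python, assuming a hypothetical scheme that stores provenance in a PNG’s ancillary text or EXIF chunks) drops that metadata without touching the image itself:

```python
import struct
import zlib

PNG_SIG = b"\x89PNG\r\n\x1a\n"

# Ancillary chunk types that commonly carry textual/EXIF metadata.
METADATA_CHUNKS = {b"tEXt", b"zTXt", b"iTXt", b"eXIf"}

def chunk(ctype: bytes, data: bytes) -> bytes:
    """Serialize a PNG chunk: length, type, data, CRC over type+data."""
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data)))

def strip_metadata(png: bytes) -> bytes:
    """Return a copy of `png` with metadata chunks dropped; pixels untouched."""
    if png[:8] != PNG_SIG:
        raise ValueError("not a PNG file")
    out = bytearray(PNG_SIG)
    pos = 8
    while pos < len(png):
        (length,) = struct.unpack(">I", png[pos:pos + 4])
        ctype = png[pos + 4:pos + 8]
        end = pos + 12 + length  # 4 length + 4 type + data + 4 CRC
        if ctype not in METADATA_CHUNKS:
            out += png[pos:end]
        pos = end
    return bytes(out)
```

Re-encoding an image, screenshotting it, or passing it through a few lines like these discards the glyph’s source of truth, which is why mass adoption by every intermediary matters so much.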

Finally, where the intent behind fraudulent media is to intimidate, embarrass, or harass (e.g., non-consensual deepfake pornographic content, violent content), then what will the glyph in question do to allay these harms? I suspect very little, unless it is also used to identify the individuals who create such content for the purposes of pursuing criminal or civil offences. And, if that’s the case, then the outputs would constitute a form of data designed to deliberately enable state intervention in private life, which could raise a series of separate, unique, and difficult-to-address problems.

Categories: Photography, Reviews

The Problem with Glass’ AI Explore Feature

Graffiti Alley, Toronto, 2023

I’m a street photographer and have taken tens of thousands of images over the past decade. For the past couple years I’ve moved my photo sharing over to Glass, a member-paid social network that beautifully represents photographers’ images and provides a robust community to share and discuss the images that are posted.

I’m a big fan of Glass and have paid for it repeatedly. I currently expect to continue doing so. But while I’ve been happy with all their new features and updates previously, the newly announced computer vision-enabled search is a failure at launch and should be pulled from public release.

To be clear: I think that this failure can (and should) be rectified and this post documents some of the present issues with Glass’ AI-enabled search so their development team can subsequently work to further improve search and discoverability on the platform. The post is not intended to tarnish or otherwise belittle Glass’s developers or their hard work to build a safe and friendly photo sharing platform and community.

Trust and Safety and AI Technologies

It’s helpful to start with a baseline recognition that computer vision technologies tend to be, at their core, anti-human. A recent study of academic papers and patents revealed how computer vision research fundamentally strips individuals of their humanity by way of referring to them as objects. This means that any technology which adopts computer vision needs to do so in a thoughtful and careful way if it is to avoid objectifying humans in harmful ways.

But beyond that, there are key trust and safety issues linked to the AI models that are relied upon to make sense of otherwise messy data. In the case of photographs, a model can be used to enable queries against the photos, such as by classifying men or women in images, classifying different kinds of scenes or places, or surfacing people who hold different kinds of jobs. At issue, however, is that many popular AI models have deep or latent biases — queries for ‘doctors’ surface men, ‘nurses’ surface women, ‘kitchens’ are associated with images of women, ‘worker’ surfaces men — or they fundamentally fail to correctly categorize what is in an image, with the result that the images surfaced are not correlated with the search query. This latter situation becomes problematic when the errors are not self-evident, such as when a search for one location (e.g., ‘Toronto’) returns images of different places (e.g., Chicago, Singapore, or Melbourne) that a viewer may not be able to detect as erroneous.

Bias is a well-known issue amongst anyone developing or implementing AI systems. There are numerous ways to try to technically address bias, as well as policy levers that ought to be relied upon when building out an AI system. As just one example, when training a model it is best practice to include a dataset card, which explains the biases or other characteristics of the dataset in question. These dataset cards can also explain how the AI system was developed, so future users or administrators can better understand the history behind past development efforts. To some extent, you can think of dataset cards as a policy appendix to a machine learning model, or as the ‘methods’ and ‘data’ sections of a scientific paper.
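In practice, a dataset card is just a structured document committed alongside the data. A minimal hypothetical sketch (every field name below is illustrative, loosely modelled on common dataset-card templates rather than drawn from any single standard), expressed here as structured data:

```python
# A minimal, hypothetical dataset card. All fields are illustrative;
# real cards (e.g., in the "Datasheets for Datasets" tradition) are richer.
DATASET_CARD = {
    "name": "street-photo-tags-v2",
    "description": "Photos labelled with scene and occupation tags, "
                   "used to train a photo-search model.",
    "collection": "2019-2022, licensed from public photo-sharing sites",
    "known_biases": [
        "occupation labels skew male for 'doctor' and female for 'nurse'",
        "location labels under-represent cities outside North America and Europe",
    ],
    "intended_use": "tag suggestion and search ranking; not identity inference",
    "maintainer": "ml-platform-team",
}

def bias_notes(card: dict) -> list:
    """Surface the documented biases a future administrator should read first."""
    return card.get("known_biases", [])
```

The point of the card is exactly this kind of retrieval: anyone inheriting the system can pull up the documented caveats before relying on the model's output.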

Glass, Computer Vision, and Ethics

One of Glass’ key challenges since its inception has been onboarding, and enabling users to find other, relevant, photographers or images. While the company has improved things significantly over the past year, there was still a lot of manual work involved in finding relevant work and active photographers on the platform. It was frustrating for everyone, especially new users, and when people who posted photos didn’t categorize their images the effect was to make them basically undiscoverable.

One way to ‘solve’ this has been to apply a computer vision model that is designed to identify common aspects of photos — functionally labelling them with descriptions — and then to let Glass users search against these aspects or labels. The intent is positive and, if done well, could overcome a major issue in searching imagery, both because the developers can build out a common tagging system and because most people won’t take the time to provide detailed tags for their images were the option provided to them.

Sometimes the system seems to work pretty well. Searching for ‘street food vendors’ pulls up pretty accurate results.

However, when I search for ‘Israeli’ I’m served with images of women. When I open them up there is no information suggesting that the women are, in fact, Israeli, and in some cases images are shot outside of Israel. Perhaps the photographers are Israeli? Or there is location-based metadata that geolocates the images to Israel? Regardless, it seems suspicious that this term almost exclusively surfaces women.

Searching ‘Arab’ also brings up images of women, including some in headscarves. It is not clear that each of the women is Arab. Moreover, it is only after eight images of women are presented that a man with a beard is shown. This subject, however, does not have any public metadata indicating that he is, or identifies as being, Arab.

Similar gender-biased results appear when you search for ‘Brazilian’, ‘Russian’, ‘Mexican’, or ‘African’. When you search for ‘European’, ‘Canadian’, ‘American’, or ‘Japanese’, however, you surface landscapes and streetscapes in addition to women.

Other searches produce false results. This likely occurs because the AI model has been trained to correlate certain items in scenes with concepts. As an example, when you search for ‘nurse’ the results are often erroneous (e.g., this photo by L E Z) or link a woman in a face mask to being a nurse. There are, of course, also just sexualized images of women.

When searching for ‘doctor’ we can see that the model likely has some correlation between a mask and being a doctor but, aside from that, the results tend to be male subjects. Unlike ‘nurse’, there are no sexualized images of men or women immediately surfaced.

Also, if you do a search for ‘hot’ you are served — again — with images of sexualized women. While the images tend to be ‘warm’ colours they do not include streetscapes or landscapes.

Do a search for ‘cold’, however, and you get cold colours (i.e., blues) along with images of winter scenes. Sexualized female images are not presented.

Consider also some of the search queries which are authorized and how they return results:

  • ‘slut’ which purely surfaces women
  • ‘tasty’ which surfaces food images along with images of women
  • ‘lover’ which surfaces images of men and women, or women alone. It is rare that men are shown on their own
  • ‘juicy’ which tends to return images of fruit or of sexualized women
  • ‘ugly’ which predominantly surfaces images of men
  • ‘asian’ which predominantly returns images of sexualized Asian women
  • ‘criminal’ which often surfaces subjects with darker skin or people wearing masks
  • ‘jew’ which (unlike Israeli) exclusively surfaces men for the first several pages of returned images
  • ‘black’ primarily surfaces women in leather or rubber clothing
  • ‘white’ principally surfaces white women or women in white clothing

Note that I refrained from any particularly offensive queries on the basis that I wanted to avoid taking any actions that could step over an ethical or legal line. I also did not attempt to issue any search queries using a language other than English. All queries were run on October 15, 2023 using my personal account with the platform.
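The surface-level audit described above is easy to make systematic. A minimal sketch of the skew metric (the labels here would come from a human reviewer scoring each returned image, not from the model under audit, and the search call in the comment is hypothetical):

```python
from collections import Counter

def label_skew(results, key="perceived_gender"):
    """Fraction of each label among the top results for one query.

    `results` is a list of dicts, one per returned image, labelled by a
    human reviewer. A heavily lopsided distribution flags the query for
    closer review; it does not by itself prove model bias.
    """
    counts = Counter(r[key] for r in results)
    total = sum(counts.values())
    return {label: n / total for label, n in counts.items()}

# Hypothetical audit loop over the first page of results per query:
# for query in ["doctor", "nurse", "hot", "cold"]:
#     print(query, label_skew(human_review(search(query))))
```

Running a fixed query list like this before each release would catch the worst of these associations long before users do.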

Steps Forward

There are certainly images of women that have been published on Glass, and this blogpost should not be taken as suggesting that those images should be removed. However, even running somewhat basic queries reveals that (at a minimum) there is an apparent gender bias in how some tags are associated with men or women. I have only undertaken the most surface-level of queries, and have not automated searches or loaded known ‘problem words’ to query against Glass. I also didn’t have to.

Glass’ development team should commit to pulling its computer vision/AI-based search back into beta, or to pulling the system entirely. Either way, what the developers have pushed into production is far from ready for prime time if the company—and the platform and its developers—are to be seen as promoting an inclusive and equitable platform that avoids reaffirming the historical biases that are regularly engrained in poorly managed computer vision technologies.

Glass’ developers have previously shown that they deeply care about getting product developments right and about fostering a safe and equitable platform. It’s one of the reasons that they are building a strong and healthy community on the platform. As it stands today, however, their AI-powered search function violates these admirable company values.

I hope that the team corrects this error and brings the platform, and its functions, back into comportment with the company’s values rather than continuing to have a clearly deficient product feature deployed for all users. Maintaining the search feature as it exists today would undermine the team’s efforts to otherwise foster the best photographic community available on the Internet today.

Glass’ developers have shown attentiveness to the community in developing new features and fixing bugs, and I hope that they read this post as one from a dedicated and committed user who just wants the platform to be better. I like Glass and the developers’ values, and hope these values are used to undergird future explore and search functions as opposed to the gender-biased values that are currently embedded in Glass’ AI-empowered search functions.

Categories: Photo Essay, Photography

Favourite Photos of Summer 2023

I’ve had the good fortune to get out and take photos pretty well every week of the summer. On the whole I’ve enjoyed decent light, good and interesting weather, and lots of events that opened up opportunities to capture the city in interesting ways.

Paulie B. has a recent video where he asked street photographers about their top photo or two of the summer. Inspired by his video, I thought that I’d post a few of my photos and explain why I liked them. All of these photos were first shared on Glass, and Fuji images relied on my “Classic Monochrome” recipe.

Brock & Dundas, Toronto, 2023

This was taken at one of the first festivals of the summer. I just walked back and forth through it over a couple of days and left with a number of images I liked, with this probably my favourite. Why?

First, I love the woman’s expression in relation to the officer, as well as to the pineapples: what exactly is the problem? Why is she so shocked? What has the officer said, if anything?

Second, I liked the background — it showcases this part of Toronto. It’s not filled with the new shiny glass buildings and condos, and still has some of the older shops and signs. This location gives a sense of ‘where’ this image was taken.

Third, I just like having images with pineapples in them. I don’t know why, but studying the images I’ve taken over the years, I can tell it’s a motif.

Queens Quay & Spadina, Toronto, 2023

This image was taken on Toronto’s waterfront. It just captures all the things that summer can be in Toronto: ferries coming from the Toronto islands, some people relaxing along the water, seagulls (which are everywhere along the waterfront in the summer), travellers landing at the island airport, and just a sense of activity and calm.

York & Wellington, Toronto, 2023

Taken from the financial district of downtown Toronto, I really liked how the light was falling on the scene and the way that the male subject is relaxing against the bulls. It almost feels pastoral to me, which isn’t the typical experience I get when walking around (or living in) the downtown core.

Queen & Bay, Toronto, 2023

I’m a sucker for taking photos of ice cream trucks and I really liked how this guy was looking out of the truck while a pigeon was just wandering by in the lower left of the frame. Is this the most complex image I took in the summer? Nope. But I still liked the environmental portrait that was captured.

Spadina & St Andrew, Toronto, 2023

Taken along one of my regular patrol routes, there’s a lot that I like throughout this frame.

It has a lot of construction elements — something I’ve been deliberately including in my street photos as part of a long-term project — and there’s some sub-framing that comes out because of how the shadows lie against the wall. The subjects at the far right of the frame are somewhat interesting — what are they pointing at? And does it intersect with the ‘caution’ warning? — but their shadows are where they shine. The shadows seem like they’re up to…something…while at the same time a subject reminiscent of the Invisible Man wanders along the left side of the frame. In aggregate, this scene has a degree of dimensionality that I really liked, some subjects of interest, and fits within an ongoing project.

Queens Quay & Bay, Toronto, 2023

I’m always a sucker for isolated subjects in the city who are in interesting situations, or have interesting expressions or body language. This photograph captures this for me.

I like that the main subject seems somewhat desolate, and yet is sitting alongside a series of summer treats and toys. And the fact that this is a vendor who only takes cash? I wonder when such signs are going to be real indicators of a distant past. The other piece that I like is how the top, right, and left of the frame are all food-related: the subject is selling popcorn and candies, hotdogs are being sold along the left of the frame, and the top of the frame refers to top-end gourmet restaurants. So there are multiple ‘frames’ around the subject which, again, adds a degree of structure or complexity to the composition.

Canadian National Exhibition, Toronto, 2023

This was taken during the waning days of the CNE, which is a massive festival that takes place annually in Toronto. People are typically excited and happy, but our older subject, here, seems sad, quiet, or in deep contemplation.

Having her placed against the games and the Kool-Aid Man on one side, and the child and mother on the other, really underscores her emotional state in what is typically a festive situation. I also like the depth of the photo, which indicates where the woman is in Toronto. This leaves the viewer with a deeper sense of context, which helps to amplify the woman’s facial expression and body language.

Canadian National Exhibition, Toronto, 2023

The final photo of the summer is another from the CNE. The subjects in this one exemplify what is ‘normal’ in the summer — happiness, togetherness, and fun. The subjects’ expressions are open and apparent, and I love how large the stuffed pig is in comparison to the woman — what will she do with it once she gets it home?

While it’s not the most complicated photo I took over the summer, it expresses a sense of unadulterated happiness or joy that regularly brings a smile to my face.

Categories: Aside, Links

Highlights from TBS’ Guidance on Publicly Available Information

The Treasury Board Secretariat has released, “Privacy Implementation Notice 2023-03: Guidance pertaining to the collection, use, retention and disclosure of personal information that is publicly available online.”

This is an important document, insofar as it clarifies a legal grey space in Canadian federal government policies. Some of the Notice’s highlights include:

  1. Clarifies (some may assert, expands) how government agencies can collect, use, retain, or disclose publicly available online information (PAOI), including from commercial data brokers or online social networking services
  2. PAOI can be collected for administrative or non-administrative purposes, including for communications and outreach, research purposes, or facilitating law enforcement or intelligence operations
  3. Overcollection is an acknowledged problem that organizations should address. Notably, “[a]s a general rule, [PAOI] disclosed online by inadvertence, leak, hack or theft should not be considered [PAOI] as the disclosure, by its very nature, would have occurred without the knowledge or consent of the individual to whom the personal information pertains; thereby intruding upon a reasonable expectation of privacy.”
  4. Notice of collection should be provided, though this may not occur for some investigations or uses of PAOI
  5. Third parties collecting PAOI on behalf of organizations should be assessed. Organizations should ensure PAOI is being legitimately and legally obtained
  6. “[I]nstitutions can no longer, without the consent of the individual to whom the information relates, use the [PAOI] except for the purpose for which the information was originally obtained or for a use consistent with that purpose”
  7. Organizations are encouraged to assess their confidence in PAOI’s accuracy, and potentially to evaluate collected information against several data sources to establish confidence
  8. Combinations of PAOI can be used to create an expanded profile that may amplify the privacy equities associated with the PAOI or profile
  9. Retained PAOI should be denoted with “publicly available information” to assist individuals in determining whether it is useful for an initial, or continuing, use or disclosure
  10. Government legal officers should be consulted prior to organizations collecting PAOI from websites or services that explicitly bar either data scraping or governments obtaining information from them
  11. There are a number of pieces of advice concerning the privacy protections that should be applied to PAOI. These include: ensuring there is authorization to collect PAOI, assessing the privacy implications of the collection, adopting privacy-preserving techniques (e.g., de-identification or data minimization), and adopting internal policies, as well as advice around using attributable versus non-attributable accounts to obtain publicly available information
  12. Organizations should not use profile information from real persons. Doing otherwise runs the risk of an organization violating s. 366 (Forgery) or s. 403 (Fraudulently impersonating another person) of the Criminal Code
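Of the techniques listed in point 11, data minimization translates most directly into practice: drop everything an organization is not explicitly authorized to retain before a record is stored. A toy sketch (the field names are hypothetical, not drawn from the Notice):

```python
def minimize(record: dict, authorized_fields: set) -> dict:
    """Keep only the fields an organization is authorized to retain.

    Anything not explicitly authorized is dropped before storage,
    which limits overcollection by default rather than by audit.
    """
    return {k: v for k, v in record.items() if k in authorized_fields}
```

An allow-list like this inverts the usual failure mode: a new field scraped from a profile is discarded unless someone deliberately authorizes keeping it.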
Categories: Aside, Links

The Women Behind AI Ethics

Rolling Stone has an excellent article that profiles the women who have been at the forefront of warning how contemporary AI systems can be, and are being, used to (re)inscribe bias, discrimination, sexism, and racism into contemporary and emerging digital tools and systems. An important read that is well worth your time.

Categories: Writing

Publicly Normalizing Significant Espionage Operations is a Good Thing

The US government recently took a bad beat when it came to light that alleged Chinese threat actors had undertaken a fairly sophisticated espionage operation that got them access to sensitive email communications of members of the US government. As the details come out, it seems as though the Secretary of State and his inner circle weren’t breached, but other senior officials managing the US-China relationship were.

Still, the actual language the US government is using to describe the espionage operation is really good to read. As an example, the cybersecurity director of the NSA, Rob Joyce, has stated that:

“It is China doing espionage […] That is what nation-states do. We need to defend against it, we need to push back on it, but that is something that happens.”

Why is this good? Because the USA was successfully targeted by an advanced espionage operation that likely has serious effects, but this is normal, and Joyce is saying so publicly. Adopting the right language in this space is all too rare: espionage and other activities are often cast as serious ‘attacks’ or described using other inappropriate or bombastic language.

The US government’s language helps to clarify what are, and are not, norms-violating actions. Major and successful espionage operations don’t violate acceptable international norms. Moreover, not only does this make clear what is a fair operation to run against the USA; it also makes clear what the USA/FVEY think are appropriate actions to take towards other international actors. The language must be read as also justifying the allies’ own actions, and it effectively preempts any arguments from China or other nations that successful USA or FVEY espionage operations are anything other than another day on the international stage.

Clearly this is not new language. Former DNI Clapper, when describing the Office of Personnel Management hack in 2015, said,

You have to kind of salute the Chinese for what they did. If we had the opportunity to do that, I don’t think we’d hesitate for a minute.

But it bears regularly repeating to establish what remains ‘appropriate’ in terms of signalling ongoing international norms. This signalling is not just to adversary nations or friendly allies, however, but also to laypersons, national security practitioners, and other operators who might someday work on the national or international stage. Signalling has a broader educational value for them (and for new reporters who end up picking up the national security beat someday in the future).

At an operational level, it’s also worth noting that this is intelligence gathering that can potentially lower temperatures. Knowing what the other side is thinking, or how they’re interpreting things, is super handy if you want to defrost some of your diplomatic relations. Though it can obviously hurt, too, if it costs you advantages in your diplomatic positions, and especially if it lets the other side outflank you.

Still, I have faith in the Equation Group’s ongoing collection against even hard targets in China and elsewhere to help balance the information asymmetry equation. While the US suffered a now-publicly reported loss of information security, the NSA is actively working to achieve similar (if less public) successes of its own on a daily basis.

Categories
Photography Writing

Street Photography in a More Private World

Jack Layton Ferry Terminal, Toronto, 2023

For the past several months Neale James has talked about how new laws which prevent taking pictures of people on the street will inhibit the documenting of history in certain jurisdictions. I’ve been mulling this over while trying to determine what I really think about this line of assessment and photographic concern. As a street photographer it seems like an issue where I’ve got some skin in the game!

In short, while I’m sympathetic with this line of argumentation I’m not certain that I agree. So I wrote a longish email to Neale—which was included in this week’s Photowalk podcast—and I’ve largely reproduced the email below as a blog post.

I should probably start by stating my priors:

  1. As a street photographer I pretty well always try to include people in my images, and typically aim to get at least some nose and chin. No shade to people who take images of peoples’ backs (and I selectively do this too) but I think that capturing some of the face’s profile can really bring many street photos to life.1
  2. I, also, am usually pretty obvious when I’m taking photos. I find a scene and often will ‘set up’ and wait for folks to move through it. And when people tell me they aren’t pleased or want a photo deleted (not common but it happens sometimes) I’m usually happy to do so. I shoot between 28-50mm (equiv.) focal lengths and so it’s always pretty obvious when I’m taking photos, which isn’t the case with some street photographers who are shooting at 100mm. To each their own, but I think if I’m taking a photo the subjects should be able to identify that’s happening and take issue with it, directly, if they so choose.

Anyhow, with that out of the way:

If you think of street photography in the broader history of photography, it started with a lot of images with hazy or ghostly individuals (e.g. ‘Panorama of Saint Lucia, Naples’ by Jones or ‘Physic Street, Canton’ by Thomson or ‘Rue de Hautefeuille’ by Marville). Even some of the great work—such as by Cartier-Bresson, Levitt, Bucquet, van Schaick, Atget, Friedlander, Robert French, etc—includes photographs where the subjects are not clearly identified. Now, of course, some of their photographs include obvious subjects, but I think that it’s worth recognizing that many of the historical ‘greats’ include images where you can’t really identify the subject. And… that was just fine. Then, it was mostly a limitation of the kit, whereas now, in some places, we’re dealing with the limitations of the law.

Indeed, I wonder if we can’t consider the legal requirement that individuals’ identifiable images not be captured as a real forcing point for creativity, one that might inspire additional geographically distinctive street photography traditions: imagine jurisdictions where, instead of aperture priority being the preferred setting, shutter priority becomes the default, with 5-15 second exposures producing ghostly images.2

Now, if such a geographical tradition arises, will that mean we get all the details of the clothing and such that people are wearing, today? Well…no. Unless, of course, street photographers embrace creativity and develop photo essays that incorporate this in interesting or novel ways. But street photography can include a lot more than just the people, and the history of street photography and the photos we often praise as masterpieces showcase that blurred subjects can generate interesting and exciting and historically-significant images.

One thing that might be worth thinking about is what this will mean for how geographical spaces are created by generative AI in the future. Specifically:

  1. These AI systems will often default to norms based on the weighting of what has been collected in training data. Will they ‘learn’ that some parts of the world are more or less devoid of people based on street photos and so, when generating images of certain jurisdictions, create imagery that is similarly devoid of people? Or, instead, will we see generative imagery that includes people whereas real photos will have to blur or obfuscate them?
  2. Will we see some photographers, at least, take up a blending of the real and the generative, where they capture streets but then use programs to add people into those streetscapes based on other information they collect (e.g., local fashions etc)? Basically, will we see some street photographers adopt a hybrid real/generative image-making process in an effort to comply with law while still adhering to some of the Western norms around street photography?

As a final point, while I identify as a street photographer and avoid taking images of people in distress, the nature of AI regulation and law means that there are indeed some good reasons for people to be concerned about the taking of street photos. The laws frustrating some street photographers are born from arguably real concerns or issues.

For example, companies such as Clearview AI (in Canada) engaged in the collection of images and, subsequently, generated biometric profiles of people based on scraping publicly available images.

Most people don’t really know how to prevent such companies from being developed or selling their products but do know that if they stop the creation of training data—photographs—then they’re at least less likely to be captured in a compromising or unfortunate situation.

It’s not the photographers, then, that are necessarily ‘bad’ but the companies who illegally exploit our work to our detriment, as well as to the detriment of the public writ large.

All to say: as street photographers, and photographers more generally, we should think more broadly than our own interests to appreciate why individuals may not want their images taken in light of the technical developments all around us. And importantly, as photographers we often share our work, whereas CCTV cameras and the like do not; the effect is that the images we take can end up in generative AI (and non-generative AI) training datasets, while the cameras that are always monitoring all of us are (currently…) less likely to be feeding the biometric surveillance training data beast.


  1. While, at the same time, recognizing that sometimes a photo is preferred because people are walking away from the camera/towards something else in the scene. ↩︎
  2. The ND filter manufacturers will go wild! ↩︎
Categories
Links

New Details About Russia’s Surveillance Infrastructure

Writing for the New York Times, Krolik, Mozur, and Satariano have published new details about the state of Russia’s telecommunications surveillance capacity. They include documentary evidence in some cases of what these technologies can do, including the ability to:

  • identify if mobile phones are proximate to one another to detect meetups
  • identify whether a person’s phone is proximate to a burner phone, to de-anonymize the latter
  • use deep packet inspection systems to target particular kinds of communications metadata associated with secure communications applications
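To make the first two capabilities more concrete, here is a deliberately simplified sketch of how co-location analysis can work in principle. This is purely illustrative and not based on the reported Russian systems: the `colocation_score` function, the time-bucket approach, and the toy data are all my own assumptions about what a minimal version of this technique might look like.

```python
def colocation_score(pings_a, pings_b, window=300):
    """Count time buckets in which two phones reported the same cell tower.

    pings_*: lists of (timestamp_seconds, tower_id) tuples.
    window: bucket size in seconds; a coarse stand-in for 'seen together'.
    """
    def bucketize(pings):
        # Map each ping to a (time-bucket, tower) pair.
        return {(ts // window, tower) for ts, tower in pings}

    # Overlapping buckets suggest the two devices move together.
    return len(bucketize(pings_a) & bucketize(pings_b))


# Toy data: a known phone and an anonymous 'burner' that travel together.
known = [(0, "T1"), (600, "T2"), (1200, "T3")]
burner = [(30, "T1"), (620, "T2"), (1250, "T3")]
stranger = [(0, "T9"), (600, "T9"), (1200, "T9")]

print(colocation_score(known, burner))    # 3: strong overlap, candidate link
print(colocation_score(known, stranger))  # 0: no overlap
```

A real system would operate over carrier-scale location records and need statistical controls (dense urban towers put many unrelated phones in the same bucket), but the core idea, scoring pairs of devices by how often they appear in the same place at the same time, is this simple.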

These types of systems are appearing in various repressive states and are being used by their governments.

Similar systems have long been developed in advanced Western democratic countries, which leads me to wonder whether what we’re seeing from authoritarian countries will ultimately usher in the use of similar technologies in higher rule-of-law states or if, instead, Western companies will merely export the tools without their being adopted in the countries developing them.

In effect, will the long-term result of revealing authoritarian capabilities lead to the gradual legitimization of their use in democratic countries so long as using them is tied to judicial oversight?