The Globe and Mail has a terrific photographic series entitled "A century caught on camera." As a Toronto resident I was struck by just how many traditions, rituals, and grievances have stuck with the city–or in the city–for over a century.
Further, the way in which the images have been captured has changed substantially over time, as a result of both the technical capacity of camera equipment and the interests or preferences of photographers at different times. Images from the past decade or two, as an example, clearly draw more on celebrity or artistic portraiture than those from 50 years ago. Moreover, it’s pretty impressive just how much photographers have done with their equipment over the past century; this, generally, speaks to how easy street and documentary photographers have it today compared to our predecessors, who were working with slow lenses and film.
It may take you quite a while to get through all the images, but I found the process exceedingly worthwhile. I admit, though, that the first decade during which the Globe used colour images probably ranks as my least favourite period in the galleries the paper has published.
I was summoned for jury duty this week but we were all let out after just a few hours. While I admit I didn’t want to be stuck in a trial for up to a few weeks, I had hoped for a little more time to get through a massive backlog of reading that’s built up over the past 3-4 months. Still, I managed to read, annotate, and close over 100 Safari tabs so I’ll call it a win!
For the past several years I’ve kept looking at the Leica Q2 as the next camera I want to use. To be clear, I think I’m pretty proficient with the Fuji X100F, but I’ve also been in situations where it hasn’t been able to perform, whether due to weather or extreme low light. And as much as I like the Fuji, there are things I find less than ideal about it, including the zone focusing system.
When I was in Quebec recently I held the Q2 for the first time and got to play with it a bit, and it convinced me that this was the next device I wanted to use to make photos. I don’t know that I’ll actually use it to make 28mm images, and suspect I’ll crop to 35mm (equiv.), but regardless I’m looking forward to using it when it arrives in the next week or so!
Several years ago I was speaking with a special advisor to President Bush Jr. He was also an academic, and in the summer he had returned to his university to teach some of his international relations courses. This was during the time when the US had a force stationed in Iraq, and his students regularly had more up-to-date information on what was happening on the ground than he did, notwithstanding his broad security clearance and access to top US intelligence. How was this possible?
His students were on Twitter.
Another story: when I was doing my PhD there was an instance where it was clear that the Iranian government had managed to access information that should have been encrypted in transit between users in Iran and Google’s services. After figuring this out I shared the information on Twitter, and the infosec community subsequently went to work to rectify the situation.
There are lots of similar stories of how social media has been good for individuals in their personal and professional lives. But there are equally (or more) stories where social media services have fed serious and life-threatening problems. The Myanmar genocide. Undermining young women’s sense of self-confidence and leading to thoughts of self-harm. Enabling a former President to accelerate an irregular political and policy environment, often with harmful effects on members of government, residents of the United States, and the world more broadly.
The Future of Social Media
But the social media services that enabled the positive and negative network effects of the past are significantly different today than they were just five years ago. What does this mean for the future of social media services?
First, we need to assess the extent to which these services remain well situated for their purposes. Take the sharing of news as an example: some companies are moving away from it, partially or entirely, in response to economics or emerging laws and regulations. What does it mean when a core cohort of heavy users — journalists, academics, some in government — no longer sees the same utility in engaging online? What does this mean for the affordances of new services?
Second, to what extent are the emerging services really able to address the harms and problems of the old services? How can these services be made ‘safe to use’ and promote equity and avoid generating harms to some individuals and communities? I think there is a valid open question around whether you can ever create a real-time communications platform that enables mass broadcast, and which does not amplify historical harms and dangerous social effects.
Third, to what extent have these services outlived some of their utility? While individuals used to share information broadly on social media networks, they can now retreat to large chat groups or online chat services (i.e., the next generation of AOL chat is here!). These more private experiences still enable the formation of community, without the exposure to some of the harmful or disquieting content or messages that existed on the more public social media sites.1
There has also been an explosion of new Twitter competitors (along with those competing with other networks, including Instagram and popular/corporate chat services). While this has the benefit of reducing some of the aggregated harms that can arise, if only in the sense that individuals are spread out between services and cannot mass against one another as they could previously, it also means that published content may lack the reach it had in the past. Whereas once you may have had thousands of Twitter or Instagram or Facebook followers whom you could alert to pressing issues of social injustice, now this same population is scattered across a bevy of different services and platforms. This dispersion effect makes it hard to have the same kind of thought-leader status as may have been possible, even in the relatively recent past.
One of the solutions to these problems, writ large, is to facilitate a ‘Publish (on your) Own Site, Syndicate Elsewhere’ (POSSE) approach, where you post on a site you control and then syndicate the content to all the other services. Promoters of this approach maintain that you can then have a single ‘identity’ or location, put all your content there, and then share it around the world.
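To make the mechanics concrete, here is a minimal sketch of what syndication can look like in code. Every endpoint, token, and payload below is a hypothetical placeholder, with the exception of Mastodon’s status-posting route, which is real:

```python
# A minimal POSSE sketch: publish the canonical copy on your own site,
# then syndicate copies elsewhere. The endpoints and tokens below are
# hypothetical placeholders, not a real service's API (except Mastodon's
# /api/v1/statuses route, which does exist).
import requests

post = {"title": "Social media as it was is dead", "body": "Long live socialized media."}

# 1. Publish on your own site first (hypothetical endpoint).
resp = requests.post(
    "https://example.com/api/posts",
    json=post,
    headers={"Authorization": "Bearer YOUR_SITE_TOKEN"},
    timeout=10,
)
canonical_url = resp.json()["url"]  # assumed response shape

# 2. Syndicate elsewhere, linking back to the canonical copy.
requests.post(
    "https://mastodon.example/api/v1/statuses",
    data={"status": f"{post['title']} {canonical_url}"},
    headers={"Authorization": "Bearer YOUR_MASTODON_TOKEN"},
    timeout=10,
)
```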
Obviously this approach has some initial appeal, and many individuals or groups may prefer it. But a POSSE ‘solution’ to the disintermediation of social media fails to take into account the value of having discrete online identities.
As I contemplate the state of social media and identity today, I’m left with the ongoing recognition that classic media organizations played a key role in identifying what was more or less important to pay attention to, especially now that the information sources I cultivated over the past decade have quickly and suddenly changed. The social media that was so useful in aggregating information that even intelligence services lacked, and in responding to information security issues, is now long past.
Social media as it was is dead. Long live socialized media.
With the caveat that some groups retreat to these more private spaces to share harmful or disturbing content without worrying that their actions are likely to be detected and stopped. ↩︎
While some emerging generative technologies may positively affect various domains (e.g., certain aspects of drug discovery and biological research, efficient translation between languages, speeding up administrative tasks, etc.), they are also enabling new forms of harmful activity. Case in point: some individuals and groups are using generative technologies to produce child sexual abuse or exploitation materials:
Sexton says criminals are using older versions of AI models and fine-tuning them to create illegal material of children. This involves feeding a model existing abuse images or photos of people’s faces, allowing the AI to create images of specific individuals. “We’re seeing fine-tuned models which create new imagery of existing victims,” Sexton says. Perpetrators are “exchanging hundreds of new images of existing victims” and making requests about individuals, he says. Some threads on dark web forums share sets of faces of victims, the research says, and one thread was called: “Photo Resources for AI and Deepfaking Specific Girls.”
…
… realism also presents potential problems for investigators who spend hours trawling through abuse images to classify them and help identify victims. Analysts at the IWF, according to the organization’s new report, say the quality has improved quickly—although there are still some simple signs that images may not be real, such as extra fingers or incorrect lighting. “I am also concerned that future images may be of such good quality that we won’t even notice,” says one unnamed analyst quoted in the report.
The ability to produce generative child abuse content is becoming a wicked problem with few (if any) “good” solutions. It will be imperative for policy professionals to learn from past situations where technologies were found to sometimes facilitate child abuse related harms. In doing so, these professionals will need to draw lessons concerning what kinds of responses demonstrate necessity and proportionality with respect to the emergent harms of the day.
As just one example, we will have to carefully consider how generative AI-created child sexual abuse content is similar to, and distinctive from, past policy debates on the policing of online child sexual abuse content. Such care in developing policy responses will be needed to address these harms and to avoid undertaking performative actions that do little to address the underlying issues that drive this kind of behaviour.
Relatedly, we must also beware the promise that past (ineffective) solutions will somehow address the newest wicked problem. Novel solutions that are custom built to generative systems may be needed, and these solutions must simultaneously protect our privacy, Charter, and human rights while mitigating harms. Doing anything less will, at best, “merely” exchange one class of emergent harms for others.
Such a glyph rests on several assumptions:
That a whole computing infrastructure, based on tracking metadata reliably and then presenting it to users in ways they understand and care about, is built and adopted by the masses.
That generative outputs will need to remain the exception as opposed to the norm: when generative image manipulation (not full image creation) is normal, how much will this glyph help to notify people of ‘fake’ imagery or other content?
That the benefits of offering metadata-stripping, content-modification, or content-creation systems are sufficiently low that no widespread or easy-to-adopt ways of removing the identifying metadata from generative content emerge (see the sketch after this list for how trivial such stripping can be).
Finally, where the intent behind fraudulent media is to intimidate, embarrass, or harass (e.g., non-consensual deepfake pornographic content, violent content), what will the glyph in question do to allay these harms? I suspect very little, unless it is also used to identify individuals who create content for the purposes of addressing criminal or civil offences. And, if that’s the case, then the outputs would constitute a form of data designed to deliberately enable state intervention in private life, which could raise a series of separate, unique, and difficult-to-address problems.
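As the sketch below suggests, stripping identifying metadata can be as simple as re-encoding an image’s pixels into a fresh file. The file names are hypothetical, and this is only a minimal illustration of the weakness noted in the list above:

```python
# A minimal illustration of how easily identifying metadata can be
# removed: copy only the pixel data into a brand-new image and save it
# without carrying over any EXIF or provenance fields. File names are
# hypothetical. Uses the Pillow imaging library.
from PIL import Image

original = Image.open("generated.jpg")

# A fresh image object holds pixels but none of the original's metadata.
stripped = Image.new(original.mode, original.size)
stripped.putdata(list(original.getdata()))

# Saving without passing metadata yields a clean file.
stripped.save("generated-stripped.jpg")
```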
I’m a street photographer and have taken tens of thousands of images over the past decade. For the past couple years I’ve moved my photo sharing over to Glass, a member-paid social network that beautifully represents photographers’ images and provides a robust community to share and discuss the images that are posted.
I’m a big fan of Glass and have paid for it repeatedly, and I currently expect to continue doing so. But while I’ve been happy with their previous features and updates, the newly announced computer vision-enabled search is a failure at launch and should be pulled from public release.
To be clear: I think that this failure can (and should) be rectified, and this post documents some of the present issues with Glass’ AI-enabled search so their development team can work to further improve search and discoverability on the platform. The post is not intended to tarnish or otherwise belittle Glass’ developers or their hard work to build a safe and friendly photo sharing platform and community.
Trust and Safety and AI Technologies
It’s helpful to start with a baseline recognition that computer vision technologies tend to be, at their core, anti-human. A recent study of academic papers and patents revealed how computer vision research fundamentally strips individuals of their humanity by way of referring to them as objects. This means that any technology which adopts computer vision needs to do so in a thoughtful and careful way if it is to avoid objectifying humans in harmful ways.
But beyond that, there are key trust and safety issues linked to AI models that are relied upon to make sense of otherwise messy data. In the case of photographs, a model can be used to enable queries against the photos, such as by classifying men or women in images, classifying different kinds of scenes or places, or surfacing people who hold different kinds of jobs. At issue, however, is that many of the popular AI models have deep or latent biases — queries for ‘doctors’ surface men, ‘nurses’ women, ‘kitchens’ are associated with images including women, ‘worker’ surfaces men — or they fundamentally fail to correctly categorize what is in the image, with the result of surfacing images that are not correlated with the search query. This latter situation becomes problematic when the errors are not self-evident to the viewer, such as when searching for one location (e.g., ‘Toronto’) reveals images of different places (e.g., Chicago, Singapore, or Melbourne) that a viewer may not be able to detect as erroneous.
Bias is a well-known issue amongst anyone developing or implementing AI systems. There are numerous ways to try to technically address bias, as well as policy levers that ought to be relied upon when building out an AI system. As just one example, when training a model it is best practice to include a dataset card, which explains the biases or other characteristics of the dataset in question. These dataset cards can also explain how the AI system was developed, so that future users or administrators can better understand the history behind past development efforts. To some extent, you can think of dataset cards as a policy appendix to a machine learning model, or as the ‘methods’ and ‘data’ sections of a scientific paper.
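As a rough illustration, a dataset card might capture fields like the following. This sketch uses a plain Python dict, and all field names and values are assumptions rather than any particular standard or library’s format:

```python
# An illustrative dataset card, sketched as a plain Python dict. Every
# field name and value here is an assumption, loosely modelled on common
# dataset card templates rather than any specific standard.
import json

dataset_card = {
    "name": "street-photos-2023",  # hypothetical dataset
    "description": "User-submitted street photographs with free-text captions.",
    "collection_method": "Voluntary uploads by platform members.",
    "known_biases": [
        "Subjects skew toward urban North American scenes.",
        "Captions are author-written and may encode gendered language.",
    ],
    "intended_uses": ["Training image-tagging models for in-app search."],
    "out_of_scope_uses": ["Inferring nationality, ethnicity, or gender of subjects."],
    "maintainer": "trust-and-safety@example.com",  # hypothetical contact
}

# Store the card alongside the dataset so future administrators can see
# how it was assembled and what its known limitations are.
with open("DATASET_CARD.json", "w") as f:
    json.dump(dataset_card, f, indent=2)
```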
Glass, Computer Vision, and Ethics
One of Glass’ key challenges since its inception has been onboarding, and enabling users to find other, relevant, photographers or images. While the company has improved things significantly over the past year, there is still a lot of manual work involved in finding relevant work and in finding photographers who are active on the platform. It is frustrating for everyone, especially new users, and when people who post photos don’t categorize their images, the effect is to make them basically undiscoverable.
One way to ‘solve’ this has been to apply a computer vision model that is designed to identify common aspects of photos — functionally label them with descriptions — and then let Glass users search against these aspects or labels. The intent is positive and, if done well, could overcome a major issue in searching imagery both because the developers can build out a common tagging system and because most people won’t take the time to provide detailed tags for their images were the option provided to them.
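To illustrate the general technique, here is a rough sketch of label-based tagging using an off-the-shelf image-text model (OpenAI’s CLIP, via the Hugging Face transformers library). This is an illustration only, not Glass’ actual implementation, which has not been made public; the tag vocabulary and file name are made up:

```python
# A rough sketch of label-based photo tagging using a zero-shot
# image-text model (OpenAI's CLIP via Hugging Face transformers).
# This illustrates the general technique only; it is not Glass's
# implementation, and the tag vocabulary and file name are made up.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# A small, curated tag vocabulary chosen by the developers.
candidate_tags = ["street food vendor", "landscape", "portrait", "winter scene"]

image = Image.open("photo.jpg")  # hypothetical local file
inputs = processor(text=candidate_tags, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)

# Probability that each tag describes the image. Tags scoring above a
# threshold would be indexed so users can later search against them;
# note that any bias in the model's image-text associations is indexed too.
probs = outputs.logits_per_image.softmax(dim=1)[0]
indexed = [t for t, p in zip(candidate_tags, probs.tolist()) if p > 0.3]
print(indexed)
```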
Sometimes the system seems to work pretty well. Searching for ‘street food vendors’ pulls up pretty accurate results.
However, when I search for ‘Israeli’ I’m served with images of women. When I open them up there is no information suggesting that the women are, in fact, Israeli, and in some cases images are shot outside of Israel. Perhaps the photographers are Israeli? Or there is location-based metadata that geolocates the images to Israel? Regardless, it seems suspicious that this term almost exclusively surfaces women.
Searching ‘Arab’ also brings up images of women, including some in headscarves. It is not clear that any of the women are Arab. Moreover, it is only after eight images of women are presented that a bearded man is shown. This subject, however, does not have any public metadata indicating that he is, or identifies as being, Arab.
Similar gender-biased results appear when you search for ‘Brazilian’, ‘Russian’, ‘Mexican’, or ‘African’. When you search for ‘European’, ‘Canadian’, ‘American’, or ‘Japanese’, however, you surface landscapes and streetscapes in addition to women.
Other searches produce false results. This likely occurs because the AI model has learned that certain items in scenes are correlated with certain concepts. As an example, when you search for ‘nurse’ the results are often erroneous (e.g., this photo by L E Z) or link a woman in a face mask to being a nurse. There are, of course, also just sexualized images of women.
When searching for ‘doctor’ we can see that the model likely has some correlation between a mask and being a doctor but, aside from that, the results tend to be of male subjects. Unlike ‘nurse’, there are no sexualized images of men or women that are immediately surfaced.
Also, if you do a search for ‘hot’ you are served — again — images of sexualized women. While the images tend toward ‘warm’ colours, they do not include streetscapes or landscapes.
Do a search for ‘cold’, however, and you get cold colours (i.e., blues) along with images of winter scenes. Sexualized female images are not presented.
Consider also some of the search queries which are authorized and how they return results:
‘slut’ which purely surfaces women
‘tasty’ which surfaces food images along with images of women
‘lover’ which surfaces images of men and women, or women alone. It is rare that men are shown on their own
‘juicy’ which tends to return images of fruit or of sexualized women
‘ugly’ which predominantly surfaces images of men
‘asian’ which predominantly returns images of sexualized Asian women
‘criminal’ which often appears linked to darker skin or wearing a mask
‘jew’ which (unlike Israeli) exclusively surfaces men for the first several pages of returned images
‘black’ primarily surfaces women in leather or rubber clothing
‘white’ principally surfaces white women or women in white clothing
Note that I refrained from any particularly offensive queries on the basis that I wanted to avoid taking any actions that could step over an ethical or legal line. I also did not attempt to issue any search queries using a language other than English. All queries were run on October 15, 2023 using my personal account with the platform.
Steps Forward
There are certainly images of women published on Glass, and this blog post should not be taken as suggesting that these images should be removed. However, even running somewhat basic queries reveals that (at a minimum) there is an apparent gender bias in how some tags are associated with men or women. I have only undertaken the most surface-level of queries, and have not automated searches or loaded known ‘problem words’ to query against Glass. I also didn’t have to.
Glass’ development team should commit to pulling its computer vision/AI-based search back into beta, or to pulling the system entirely. Either way, what the developers have pushed into production is far from ready for prime time if the company—and the platform and its developers—are to be seen as promoting an inclusive and equitable service that avoids reaffirming the historical biases that are regularly ingrained in poorly managed computer vision technologies.
Glass’ developers have previously shown that they deeply care about getting product developments right and about fostering a safe and equitable platform. It’s one of the reasons that they are building a strong and healthy community on the platform. As it stands today, however, their AI-powered search function violates these admirable company values.
I hope that the team corrects this error and brings the platform, and its functions, back into comportment with the company’s values, rather than continuing to have a clearly deficient product feature deployed for all users. Maintaining the search feature as it exists today would undermine the team’s efforts to otherwise foster the best photographic community available on the Internet today.
Glass’ developers have shown attentiveness to the community in developing new features and fixing bugs, and I hope that they read this post as one from a dedicated and committed user who just wants the platform to be better. I like Glass and the developers’ values, and hope these values are used to undergird future explore and search functions as opposed to the gender-biased values that are currently embedded in Glass’ AI-empowered search functions.
I’ve had the good fortune to get out and take photos pretty well every week of the summer. On the whole I’ve enjoyed decent light, good and interesting weather, and lots of events that opened up opportunities to capture the city in interesting ways.
This was taken at one of the first festivals of the summer. I just walked back and forth through it over a couple of days and left with a number of images I liked, with this probably my favourite. Why?
First, I love the woman’s expression in relation to the officer, as well as to the pineapples: what exactly is the problem? Why is she so shocked? What has the officer said, if anything?
Second, I liked the background — it showcases this part of Toronto. It’s not filled with the new shiny glass buildings and condos, and still has some of the older shops and signs. This location gives a sense of ‘where’ this image was taken.
Third, I just like having images with pineapples in them. I don’t know why, but studying the images I’ve taken over the years, I can tell it’s a motif.
Queens Quay & Spadina, Toronto, 2023
This image was taken on Toronto’s waterfront. It just captures all the things that summer can be in Toronto: ferries coming from the Toronto islands, some people relaxing along the water, seagulls (which are everywhere along the waterfront in the summer), travellers landing at the island airport, and just a sense of activity and calm.
York & Wellington, Toronto, 2023
Taken from the financial district of downtown Toronto, I really liked how the light was falling on the scene and the way that the male subject is relaxing against the bulls. It almost feels pastoral to me, which isn’t the typical experience I get when walking around (or living in) the downtown core.
Queen & Bay, Toronto, 2023
I’m a sucker for taking photos of ice cream trucks and I really liked how this guy was looking out of the truck while a pigeon was just wandering by in the lower left of the frame. Is this the most complex image I took in the summer? Nope. But I still liked the environmental portrait that was captured.
Spadina & St Andrew, Toronto, 2023
Taken along one of my regular patrol routes, there’s a lot that I like throughout this frame.
It has a lot of construction elements — something I’ve been deliberately including in my street photos as part of a long-term project — and there’s some sub-framing that comes out because of how the shadows lie against the wall. The subjects to the far right of the frame are somewhat interesting — what are they pointing at? And does it intersect with the ‘caution’ warning? — but their shadows are where they shine. The shadows seem like they’re up to…something…while at the same time a subject reminiscent of the Invisible Man wanders along the left side of the frame. In aggregate, this scene has a degree of dimensionality that I really liked, some subjects of interest, and it fits within an ongoing project.
Queens Quay & Bay, Toronto, 2023
I’m always a sucker for isolated subjects in the city who are in interesting situations, or have interesting expressions or body language. This photograph captures this for me.
I like that the main subject seems somewhat desolate, and yet is sitting alongside a series of summer treats and toys. And the fact that this is a vendor who only takes cash? I wonder when such signs will become real indicators of a distant past. The other piece that I like is how the top, right, and left of the frame are all food-related: the subject is selling popcorn and candies, hotdogs are being sold along the left of the frame, and the top of the frame refers to top-end gourmet restaurants. So there are multiple ‘frames’ around the subject which, again, adds a degree of structure or complexity to the composition.
Canadian National Exhibition, Toronto, 2023
This was taken during the waning days of the CNE, which is a massive festival that takes place annually in Toronto. People are typically excited and happy, but our older subject, here, seems sad, quiet, or in deep contemplation.
Having her placed against games and the Kool-Aid Man on one side, and the child and mother on the other, really underscores her emotional state in what is typically a festive situation. I also like the depth of the photo, which indicates where the woman is in Toronto. This leaves the viewer with a deeper sense of context, which helps to amplify the woman’s facial expression and body language.
Canadian National Exhibition, Toronto, 2023
The final photo of the summer is another from the CNE. The subjects in this one exemplify what is ‘normal’ in the summer — happiness, togetherness, and fun. The subjects’ expressions are open and apparent, and I love how large the stuffed pig is in proportion to the woman — what will she do with it once she gets it home?
While it’s not the most complicated of photos I took over the summer, it expresses a sense of unadulterated happiness or joy that regularly brings a smile to my face.
This is an important document, insofar as it clarifies a legal grey space in Canadian federal government policies. Some of the Notice’s highlights include:
Clarifies (some may assert expands) how government agencies can collect, use, retain, or disclose publicly available online information (PAOI). This includes from commercial data brokers or online social networking services
PAOI can be collected for administrative or non-administrative purposes, including for communications and outreach, research purposes, or facilitating law enforcement or intelligence operations
Overcollection is an acknowledged problem that organizations should address. Notably, “[a]s a general rule, [PAOI] disclosed online by inadvertence, leak, hack or theft should not be considered [PAOI] as the disclosure, by its very nature, would have occurred without the knowledge or consent of the individual to whom the personal information pertains; thereby intruding upon a reasonable expectation of privacy.”
Notice of collection should be undertaken, though this may not occur given the nature of some investigations or uses of PAOI
Third-parties collecting PAOI on the behalf of organizations should be assessed. Organizations should ensure PAOI is being legitimately and legally obtained
“[I]nstitutions can no longer, without the consent of the individual to whom the information relates, use the [PAOI] except for the purpose for which the information was originally obtained or for a use consistent with that purpose”
Organizations are encouraged to assess their confidence in PAOI’s accuracy and potentially evaluate collected information against several data sources to establish confidence
Combinations of PAOI can be used to create an expanded profile that may amplify the privacy equities associated with the PAOI or profile
Retained PAOI should be denoted with “publicly available information” to assist individuals in determining whether it is useful for an initial, or continuing, use or disclosure
Government legal officers should be consulted prior to organizations collecting PAOI from websites or services that explicitly bar either data scraping or governments obtaining information from them
There are a number of pieces of advice concerning the privacy protections that should be applied to PAOI. These include: ensuring there is authorization to collect PAOI, assessing the privacy implications of the collection, adopting privacy-preserving techniques (e.g., de-identification or data minimization), adopting internal policies, as well as advice around using attributable versus non-attributable accounts to obtain publicly available information
Organizations should not use profile information from real persons. Doing otherwise runs the risk of an organization violating s. 366 (Forgery) or s.403 (Fraudulently impersonate another person) of the Criminal Code
Rolling Stone has an excellent article that profiles the women who have been at the forefront of warning how contemporary AI systems can be, and are being, used to (re)inscribe bias, discrimination, sexism, and racism into contemporary and emerging digital tools and systems. An important read that is well worth your time.