Categories
Links Writing

Older Adults’ Perception of Smart Home Technologies

Percy Campbell et al.’s article, “User Perception of Smart Home Surveillance Among Adults Aged 50 Years and Older: Scoping Review,” is a really interesting piece of work on older adults’ perceptions of Smart Home Technologies (SHTs). The authors conducted a review of other studies on this topic to, ultimately, derive a series of aggregated insights that clarify the state of the literature and make clear how policy makers could start to think about the issues older adults associate with SHTs.

Some key themes/issues that arose from the studies included:

  • Privacy: different SHTs were perceived differently. Key, however, was that privacy concerns were sometimes highly contextual based on region; one possible effect is that it can be challenging to generalize from a single study about specific privacy interests to a global population.
  • Collection of Data — Why and How: people were generally unclear about what was being collected or for what purpose. A lack of literacy may raise issues of ongoing, meaningful consent to collection.
  • Benefits and Risks: data breaches/hacks, malfunction, affordability, and user trust were all possible challenges or risks. However, participants in the studies also generally found that these technologies offered considerable benefits; most significantly, they perceived that their physical safety was enhanced.
  • Safety Perceptions: all types of SHTs were seen as useful for safety purposes, especially in the event of an accident or emergency. Safety-enhancing features may be preferred in SHTs by those 50+ years of age.

Given the privacy, safety, and other themes, and given that regulatory systems are sometimes outpaced by advances in technology, the authors propose a data justice framework to regulate or govern SHTs. This entails:

  • Visibility: there are benefits to being ‘seen’ by SHTs but privacy protections must also apply so that individuals can selectively remove themselves from being visible to commercial and other parties.
  • Digital engagement/disengagement: individuals should be supported in making autonomous decisions about how engaged with, or in control of, systems they are. They should also be able to disengage, or to have only certain SHTs used to monitor or affect them.
  • Right to challenge: individuals should be able to challenge decisions made about them by SHTs. This is particularly important in the face of AI, which may have ageist biases built into it.

While I still think that regulatory systems can play a role in this space — if only regulators are both appropriately resourced and empowered! — I take the broader point that regulatory approaches should also include ‘data justice’ components. At the same time, I think that most contemporary or recently updated Western privacy and human rights legislation already includes these precepts, and that there is a real danger in asserting the need to build a new (more liberal/individualistic) approach to collective action problems that regulators, generally, are better equipped to address than individuals are.

Categories
Links

Deskilling and Human-in-the-Loop

I found boyd’s “Deskilling on the Job” to be a useful framing for how to be broadly concerned, or at least thoughtful, about using emerging A.I. technologies in professional as well as training environments.

Most technologies serve to augment human activity. In sensitive situations we often already require a human in the loop to respond to dangerous errors (see: dam operators, nuclear power staff, etc.). However, if emerging A.I. systems’ risks are to be mitigated by also placing humans in the loop, then it behooves policymakers to ask: how well does this actually work when we thrust humans into correcting often highly complicated failures moments before a disaster?

Not to spoil things, but it often goes poorly, and we then blame the humans in the loop instead of the technical design of the system.1

AI technologies offer an amazing bevy of possibilities. But thinking more carefully about how to integrate them into society while also digging into the history of, and scholarly writing on, automation will almost certainly help us avoid obvious, if recurring, errors in how policy makers think about adding guardrails around AI systems.


  1. If this idea of humans-in-the-loop and the regularity of errors in automated systems interests you, I’d highly encourage you to get a copy of ‘Normal Accidents’ by Perrow. ↩︎
Categories
Links

Externalizing Costs: The Table Saw Edition

Steve Gass, a physicist and lawyer by training, gave an interview about SawStop, a table saw designed to immediately stop and retract the blade if it detects human flesh. He discussed how and why he created it, but also addressed the pressing social need it serves: there are around 150 table saw injuries a day, and about 8 of them are amputations.

Given that he’s designed a technology to massively cut down on these injuries,1 you may wonder why it hasn’t been widely adopted. The reason, unsurprisingly, is that other table saw manufacturers just externalize the harmful social costs of their products. As Gass notes in his interview with MachinePix Weekly:

The fundamental question came down to economics. Almost a societal economic structure question. The CPSC says table saws result in about $4B in damage annually. The market for table saws is about $200-400M. This is a product that does almost 10x in damage as the market size. There’s a disconnect—these costs are borne by individuals, the medical system, workers comp—and not paid by the power tools company. Because of that, there’s not that much incentive to improve the safety of these tools.

As is depressingly normal, even if companies did want to integrate Gass’ technologies it would add somewhat to their current bill of materials and, as such, risk making their products less competitive when juxtaposed against other companies’ table saws. The result is a massive cost to the economy that is borne by taxpayers and insurance companies.


  1. Pardon the pun. ↩︎
Categories
Links Writing

Safe Streets and Systemic Racism

Sabat Ismail, writing at Spacing Toronto, interrogates who safe streets are meant to be safe for. North American calls for adopting Nordic models of urban cityscapes are often focused on redesigning streets for cycling whilst ignoring that Nordic safety models are borne out of broader conceptions of social equity. Given the broader (white) recognition of the violent threat that police can represent to Black Canadians, cycling organizations that principally advocate for safe streets must carefully think through how to make streets safe, and appreciate why calls for greater law enforcement to protect non-automobile users may run counter to an equitable sense of safety. To this point, Ismail writes:

I recognize the ways that the safety of marginalized communities and particularly Black and Indigenous people is disregarded at every turn and that, in turn, we are often provided policing and enforcement as the only option to keep us safe. The options for “safety” presented provide a false choice – because we do not have the power to determine safety or to be imagined within its folds.

Redesigning streets without considering how the design of urban environments is rife with broader sets of values runs the very real risk of further systematizing racism while espousing values of freedom and equality. The values undergirding the concept of safe streets must be assessed by a diverse set of residents to understand what might equitably provide safety for all people; doing anything less will likely re-embed existing systems of power in urban design and law, to the ongoing detriment and harm of non-white inhabitants of North American cities.

Categories
Links Photography Roundup Writing

The Roundup for April 28-May 4, 2018 Edition

Hoop Dreams by Christopher Parsons

In the wake of the Toronto attack any number of journalists are trying to become experts on the ‘incel’ community, which defines itself as a community of men who are involuntarily celibate and who see themselves as deserving of intercourse with women. This has led to some suggestions that maybe it’s appropriate to think about policy solutions to the ‘problem’. At issue, of course, is that some persons have failed to recognize what the problem actually is. Consider Ross Douthat, who links Amia Srinivasan’s ruminations on the links between desire and politics with incels, effectively conjoining a misogynistic subculture with “the overweight and disabled, minority groups treated as unattractive by the majority, trans women unable to find partners and other victims … of a society that still makes us prisoners of patriarchal and also racist-sexist-homophobic rules of sexual desire.” Douthat continues, ultimately arguing that a combination of commerce, technology, and efforts to destigmatize sex work will mean that, “at a certain point, without anyone formally debating the idea of a right to sex, right-thinking people will simply come to agree that some such right exists, and that it makes sense to look to some combination of changed laws, new technologies and evolved mores to fulfill it.”

Douthat’s entire argumentative structure — that the ‘problem’ to solve is an inability to engage in sexual, if not romantic, relationships — is predicated on the notion that there is such a thing as a legitimate right to intercourse. There is not. There is a legitimate right to safe, respectful, and destigmatized sexual relationships and activities. There is a right to sexual education, and to sexual health and wellbeing, but there is no right to intercourse: such a right would imply that the act of penetrating another person is necessary and appropriate. That is clearly not the case.

Instead, the problem with the incel community is linked with misogyny. Specifically, as Jessica Valenti writes, the problem is with misogynist terrorism, a situation where certain men’s disdain towards women drives mass murders. Part of solving this particular problem is linked with addressing the underlying culture in America, and the world more generally. Specifically, she writes:

Part of the problem is that American culture still largely sees men’s sexism as something innate rather than deviant. And in a world where sexism is deemed natural, the misogynist tendencies of mass shooters become afterthoughts rather than predictable and stark warnings.

The truth is that in addition to not protecting women, we are failing boys: failing to raise them to believe they can be men without inflicting pain on others, failing to teach them that they are not entitled to women’s sexual attention and failing to allow them an outlet for understandable human fear and foibles that will not label them “weak” or unworthy.

It’s essential that men, and boys, learn about how to engage with other humans in non-destructive ways. Such a process is borderline revolutionary because it entails reshaping how cultural, social, legal, and economic relationships are structured, and any such restructuring must be motivated by a rebalancing of power relationships across genders and races (and, ultimately, geographies). The outcome will be that the privilege that straight white men have enjoyed for centuries will be diminished and, correspondingly, restrict the social and economic opportunities that some men have enjoyed solely because of their gender and race. But those changes are essential if we’re to actually confront the misogyny and racism that underlies not just incel culture, but that of mainstream society and politics as well.


Inspiring Quotation of the Week

Writing—I can really only speak to writing here—always, always only starts out as shit: an infant of monstrous aspect; bawling, ugly, terrible, and it stays terrible for a long, long time (sometimes forever). Unlike cooking, for example, where largely edible, if raw, ingredients are assembled, cut, heated, and otherwise manipulated into something both digestible and palatable, writing is closer to having to reverse-engineer a meal out of rotten food.

  • David Rakoff

New Apps

Great Photography Shots

I’d never seen x-ray photos of flowers before. It’s an absolutely breathtaking form of image making.

Photo manipulation by Edmanep

Music I’m Digging

Neat Podcast Episodes

Good Reads for the Week

Cool Things

Categories
Links

For some safety experts, Uber’s self-driving taxi test isn’t something to hail

Washington Post:

Even so, the effort is raising concern from safety experts who say the technology has major limitations that can be very dangerous. Self-driving cars have trouble seeing in bad weather. Sudden downpours, snow and especially puddles make it difficult for autonomous vehicles to detect lines on pavement and thereby stay in one lane.

Walker Smith added that self-driving cars have sometimes confused bridges for other obstacles. “People need to understand both the potential and the limitations of these systems, and inviting them inside is part of that education,” he said.

The vehicles also have difficulty understanding human gestures — for example, a crosswalk guard in front of a local elementary school may not be understood, said Mary Cummings, director of Duke University’s Humans and Autonomy Lab, at a Senate hearing in March. She recommended that the vehicles not be allowed to operate near schools.

Then there’s the human factor: Researchers have shown that people like to test and prank robots. Today, a GPS jammer, which some people keep in their trunks to block police from tracking them, will easily throw off a self-driving car’s ability to sense where it is, Cummings said.

Current self-driving cars often cannot see which lane they’re in if it’s raining. They don’t understand what a bridge is versus other road terrain. They don’t understand what a crosswalk guard is. And they rely on a notoriously brittle location technology.

What could go wrong with testing them in urban centres, then?

Categories
Links Writing

Fragmentation leaves Android phones vulnerable to hackers

Via the Washington Post:

“You have potentially millions of Androids making their way into the work space, accessing confidential documents,” said Christopher Soghoian, a former Federal Trade Commission technology expert who now works for the ACLU. “It’s like a really dry forest, and it’s just waiting for a match.”

The high degree of fragmentation in the Android ecosystem is incredibly problematic; fragmentation combined with delays in providing updates effectively externalizes the security problems stemming from mobile OS vulnerabilities onto individual phone owners. Those owners are (typically) the parties least able, in the owner/carrier/manufacturer/OS creator relationship, to remedy the flaws. At the moment, Google tends to (try to) respond promptly to flaws. The manufacturers and vendors then have to certify and process any updates, which can take months. It’s inexcusable that these parties can not only sit on OS updates but can also continue to knowingly sell vulnerable phones.

Imagine if, after a car line was reported to have a problem that required the line’s recall and refurbishment, dealers continued to sell the car. Imagine that they didn’t even notify buyers that there was a problem, just that ‘enhancements’ (i.e. the seat no longer ejected when you hit something at 60 km/h, plus a cool new clock display on the dashboard) were coming. The dealers would be subject to some kind of legal action or, failing that, consumers could choose to work with dealers who sold safe cars. Why, exactly, aren’t phone carriers being subjected to the same scrutiny and held to the same safety standards?

Categories
Links

Advice on Browsing the Web Safely

Global Voices has a series of good suggestions on how to browse the web safely. Many users may not need to take the more extreme precautions – such as browsing from a USB-drive-mounted operating system – but the other pieces of advice are helpful. Well worth the (quick) read.