Categories
Writing

Lessig Blog, v2: A time for silence

lessig:

A week ago today, Aaron gave up. And since I received the call late Friday night telling me that, like so many others who were close to him, I have not rested. Not slept, really. Not connected with my kids, at all. Not held my wife except to comfort her tears, or for her to comfort mine.

Instead…

I am still struggling to come to terms with Aaron’s death. I was first incredibly depressed. Then mad. I’m still at that point.

I was one step removed from him in more ways than I can count and, based on my grief, I can’t imagine the pain experienced by my friends and colleagues. His causes overlapped with my own. His principles often as well. I can understand and sympathize – and, to a large extent, support – his advocacy tactics. I can impose my own understandings on why he took his life and be saddened, but not necessarily surprised and certainly unable to lash out at him for his decision.

What is perhaps most significant to my mind, now, is that the challenges that faced Aaron similarly bear down on many of the members of the digital and civil rights community. Threats of outlandish prosecution. Warnings of how advocacy will be treated as criminal behaviour of the highest sort. Attempts to legally force and coerce colleagues to turn on one another.

Aaron can, and does, serve as a focus for some of the problems that some members of this community experience on a sadly common basis. We need to move forward to better help, support, and uplift our own. We need to work harder to make sure that suicide isn’t seen as a way to resolve the problems that some of our community experience. To this end we have to buttress against the despondency, isolation, and fear imposed by elements of government with the hope, togetherness, and laughter that make this community so important and productive.

Categories
Writing

Could Google+ Depend on Google Now’s Success?

MG Siegler recently argued that:

Google+ is a turd.

I’m not sure why everyone seems afraid to admit this. I think it’s similar to the reason why some seem reluctant to call Windows 8 a turd when it’s already abundantly clear: people are scared that such a bold statement could come back to bite them in the ass. But it won’t. Both are clearly turds.

Google continues to try to cram Google+ down people’s throats, but it just won’t stay down. People are gonna keep puking it right back up. The only compelling feature of Google+ is Hangouts; everything else is a carbon copy of some social activity that people can (and already do) do elsewhere. Google simply made a bad call and started chasing the wrong thing (social) far too late.

I wonder how long it will take Google to admit defeat here? I’m sure we’ll see a lot more of the shoving of Google+ in our faces first — Chrome, you’re next. But I really wish Google would take all the energy being put behind this dog and use it to blow out their truly interesting and innovative products, like Google Now.

I think that the success of Google+ could depend on Google’s ability to link signals from their social networking product with their Now product. Currently, Now can ascertain things like when you’re near certain locations or about to perform certain actions (e.g. near a bus stop/station or about to take a flight) and provide relevant and helpful data to the Android phone user. This is really cool and, if you’re comfortable with this degree of personalized data mining, potentially convenient.

What Now presently lacks is the ability to tell me that when I have a break in my day (based on Google Calendar analysis) and a friend also has a break (based on an analysis of their calendar), we could meet for coffee or a meal. It similarly lacks an awareness of my colleagues and friends that would let it suggest upcoming special non-birthday dates. The same goes for mass-mining of check-ins (to figure out what my social community eats, and where they do it often) and preferred news and website content.
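As a toy illustration of the kind of calendar analysis imagined here – entirely hypothetical, since Now offers nothing like it – matching mutual breaks is essentially an interval-intersection problem:

```python
# Hypothetical sketch of mutual-free-time matching: given each person's
# busy blocks for the day (as (start, end) hour pairs), find gaps long
# enough for both to meet. None of this reflects an actual Google API.
def free_slots(busy, day_start=9, day_end=17):
    """Invert sorted busy blocks into free (start, end) gaps."""
    slots, cursor = [], day_start
    for start, end in sorted(busy):
        if start > cursor:
            slots.append((cursor, start))
        cursor = max(cursor, end)
    if cursor < day_end:
        slots.append((cursor, day_end))
    return slots

def mutual_breaks(busy_a, busy_b, min_hours=0.5):
    """Intersect two people's free slots, keeping overlaps long enough to meet."""
    overlaps = []
    for a1, a2 in free_slots(busy_a):
        for b1, b2 in free_slots(busy_b):
            lo, hi = max(a1, b1), min(a2, b2)
            if hi - lo >= min_hours:
                overlaps.append((lo, hi))
    return overlaps
```

The hard part, of course, isn’t the interval arithmetic – it’s having permission to read both calendars in the first place, which is exactly where Google+ adoption would come in.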

The thing is, all of this functionality could be implemented if there were widescale adoption and use of Google+. This means that updated versions of Android need to get to millions of handsets or, alternately, Chrome needs to deploy Now functionality (something that code analyses suggest is imminent). Either could encourage people to adopt Google+ to get heightened personalized data mining. Yes, you read that right: (perceived) helpful surveillance could get people to intentionally adopt products that facilitate useful personalized insights.

The key issue – beyond pure legal and regulatory concerns – will be whether this kind of mining is seen as ‘creepy’ or not. If the Now product is seen as cool, feature rich, opt-in, and not privacy infringing – and is adopted by a significant portion of the masses – then Google could offer personalized services in excess of those offered by Twitter and Facebook today. This might be the ‘nudge’ necessary to get a significant portion of the social graph onto Google and consequently elicit a network effect sufficient to turn Google+ into a viable and useful social networking community.

If Google+ is seen as a gateway to improved Now information, and if users see Now as a feature they want more of in their life, then Google+ could see a fresh (if somewhat forced) breath of life. A key question, however, is whether the advantages of a cool product offering are sufficient to get people to ‘jump ship’ onto a largely empty social networking platform. It’ll be interesting to watch because if Google is successful they’ll have found a way to create a social graph in a novel manner, one that other companies may subsequently attempt to replicate.

Categories
Links Writing

Belkin #Fails At Password Creation

WPA2-PSK is recognized as a pretty reasonable way for most consumers to secure their wifi access points. That said, the mechanism falls pretty flat on its face when router manufacturers screw up, and it looks like Belkin has screwed up badly. From a Register article we see that:

Each of the eight characters of the default passphrase are created by substituting a corresponding hex-digit of the WAN MAC address using a static substitution table. Since the WAN MAC address is the WLAN MAC address + one or two (depending on the model), a wireless attacker can easily guess the wan mac address of the device and thus calculate the default WPA2 passphrase.
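To make the quoted weakness concrete, here is a rough sketch of the derivation in Python. The substitution table below is a made-up placeholder (the real table was recovered by the researchers), but the attack works identically with any fixed table:

```python
# Sketch of the Belkin weakness described above. SUBSTITUTION is a
# hypothetical stand-in for the static table the researchers recovered;
# because the table is fixed, knowing it once breaks every device.
SUBSTITUTION = {
    '0': '9', '1': '4', '2': '2', '3': '8',
    '4': '6', '5': '3', '6': 'd', '7': 'f',
    '8': '1', '9': 'b', 'a': '0', 'b': '7',
    'c': 'a', 'd': 'c', 'e': '5', 'f': 'e',
}

def default_passphrase(wan_mac: str) -> str:
    """Map eight hex digits of the WAN MAC through the static table."""
    digits = wan_mac.replace(':', '').lower()[-8:]  # assuming the last 8 digits are used
    return ''.join(SUBSTITUTION[d] for d in digits)

def candidate_passphrases(wlan_mac: str):
    """The WAN MAC is the WLAN MAC + 1 or + 2, so an attacker who sees the
    broadcast WLAN MAC only has a couple of candidates to try."""
    base = int(wlan_mac.replace(':', ''), 16)
    for offset in (1, 2):
        wan_mac = format(base + offset, '012x')
        yield default_passphrase(wan_mac)
```

The point of the sketch is how little the attacker needs: the WLAN MAC is broadcast in the clear, so the entire ‘secret’ reduces to two offline table lookups.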

This is just a really poor mechanism for calculating the password. As yet the manufacturer has been totally silent on the issue, and unwilling to disclose how they intend to defend against potential attacks; this leaves open the possibility that Belkin will fix things instead of just abandoning consumers (which seems to be, sadly, a pretty standard vendor response when their errors undermine users’ privacy and security). Here’s hoping that Belkin decides to not be like most router vendors…

Categories
Writing

Ubiquitous Police Surveillance and Guilt by Location

The Times Colonist has a particularly good opinion piece concerning authorities’ use of automatic license plate recognition. This technology was recently the subject of an investigation in British Columbia, with the provincial information and privacy commissioner asserting that many of the current uses of the technology must stop. For more information, you can read the decision (.pdf) or some press coverage about the decision.

When speaking about authorities’ interests in retaining locational information about people who aren’t immediately of interest to police, the author of the opinion piece writes:

And the concept [of collecting such information] goes against the golden thread that winds its way throughout our justice system – the presumption of innocence unless proven otherwise. A person shouldn’t become the focus of an investigation just because he or she happened to drive along a certain street at a certain time.

But a person who hasn’t done anything wrong shouldn’t worry, right? Ask that to people whose lives have been ruined when they have been investigated or charged for a crime and later exonerated. That stigma of being the target of a police investigation is not easily erased, even when a person is cleared of all wrongdoing.

This latter point – that the stigma of a false investigation can significantly alter a person’s life possibilities for an extensive period of time – is often forgotten or glossed over when reporting on new policing surveillance practices. In an era where information is in abundance, and the attention span to monitor stories and issues is at a premium, a false charge may be legally overturned without the population more generally ever correcting their false impressions. This can create a long-standing disadvantage for falsely accused persons as they try to carry on with their lives.

Moreover, the very potential that information could be used against you turns the (popular) understanding of guilt on its head: instead of authorities clearly linking a person’s presence at a location with a crime, it becomes the responsibility of each individual to demonstrate the innocence of being in place X at time Y. Given that these license plate scanners can capture where people are at any time of the day, there is no guarantee that a person will even remember why they were at X at Y. While such lapses ought to be understood as the reasonable failings of a reasonable human’s mind, the danger is that an inability to justify one’s presence at a particular place could be taken as an indication of potential guilt. As a result of such ‘suspicious’ behaviour, an individual who was just driving at the ‘wrong place’ at the ‘wrong time’ could be subjected to more intrusive police surveillance, simply because a scanner identified them at a particular place at a particular time.

Fortunately, the privacy commissioner has come out strongly against this ubiquitous form of surveillance. Her stance should limit the dystopian risks of license plate scanners in her jurisdiction. Now it’s up to the authorities to respect the decision and moderate how and why they use the technology.

Categories
Writing

On Publicness and the Academy

Alex Reid has written a short piece about his position concerning the question: if an academic speaks in public, is it right for members of the audience to record/write/talk about what was said?

While I can’t say that I agree with one of the positions he assumes – that as an academic you should exclusively be publishing close-to-complete work (i.e. drafts or early works in progress you don’t want talked about need not apply!) – it’s worth the read, especially given that many academics are loath to have ‘early’ work broadcast beyond tightly controlled confines and populations.

Alex has a great punchline, emphasizing how academics are for the first time really, widely, seeing their work being public and thus critiqued/engaged with. It’s scary for a lot of people but it’s definitely the new reality of academe. The post is well worth the few minutes it’ll take you to read!

Categories
Links Writing

Social Media Used to Target Advocate/Journalist

While it comes as no surprise that police monitored Facebook during last year’s Occupy protests, in the case of Occupy Miami an advocate/journalist was specifically targeted after his Facebook profile was subjected to police surveillance. An email produced in the court case revealed:

the police had been monitoring Miller’s Facebook page and had sent out a notice warning officers in charge of evicting the Occupy Miami protestors that Miller was planning to cover the process.

Significantly, the police tried to destroy evidence showing that they had unlawfully targeted the advocate, footage that (after having been forensically recovered) revealed that the charges laid against the advocate were blatantly false. That authorities conduct such surveillance – often without the targets of surveillance knowing that they have been targeted or, when targeted, why – matters for the general population because lawfully exercising one’s rights increasingly leads to citizens being punished for doing so. Moreover, when the surveillance is accompanied by deliberate attempts to undermine citizens’ capacities to respond to unlawful detentions and false charges, we have a very, very real problem that can affect any citizen.

We know from academic research conducted by scholars such as Jeffrey Monaghan and Kevin Walby that Canadian authorities use broad catch-all caricatures during major events to identify ‘problem populations.’ We also know that many of the suspects identified during such events are identically labeled regardless of whether they actually belong to the caricatured population. The capacity to sort ‘effectively’, in a way resembling fact or reality, is marginal at best. Consequently, we can’t just say that the case of Occupy surveillance is an ‘American thing’: Canadian authorities do the same thing to Canadian citizens of all ages, be they high school or university students, employed middle-aged citizens, or the elderly. These are surveillance and sorting processes that are widely adopted with relatively poor regulation or oversight. They speak to the significant expansion of what constitutes general policing, as well as to the state-borne risks that citizens face – even in ‘safe’ countries – when using social media in an unreflective manner.

Categories
Aside Writing

Ubuntu’s Privacy FUBAR

The EFF has a particularly good accounting of how the most recent changes to Ubuntu are intensely problematic from a privacy perspective. Specifically, performing local searches will (and does) leak information to third-parties such as Facebook and Amazon. Though not explicitly mentioned, remember that in many jurisdictions if you ‘give up’ or ‘abandon’ information to third-parties then you often lose considerable (legal) privacy protections. As such, Ubuntu’s decision to leak data to third-parties whenever users perform local searches on their computer could have significant implications for Ubuntu users’ legal protections concerning personal search information. If Microsoft or Apple did something similar then there would almost certainly be complaints filed to federal bodies: will similar reactions emerge from the Linux and Ubuntu communities?

Categories
Writing

Skype Discloses Subscriber Info to Private Investigators

In a not-particularly-surprising move, Skype handed over a 16-year-old’s subscriber information to a firm hired by PayPal. No warrant was required, as the information was provided to a private party, and that party subsequently gave it to police. In essence, a very large telecommunications service provider (TSP) made available personally identifiable information that ultimately led to an arrest, without authorities having to convince a judge that they had legitimate grounds to get that information from the TSP.

At a talk I recently attended, a retired Assistant RCMP Commissioner emphasized time and time again that Canadians need to be more worried about corporations like Skype, Google, and Facebook than about the federal or provincial governments. He correctly, I believe, spoke to the social harms that these companies can and do cause to individuals who both subscribe and do not subscribe to the companies’ service offerings.

Non-controversially, we know that many large companies can take actions that are harmful to individuals, as can states themselves. What is less recognized, however, is that there are more and more cases where private intermediaries are acting as one or two degrees of separation between public institutions and large private data stores. Such ‘intermediary protection’ often lets states access and use personal data that they otherwise cannot access without considerable difficulty. Worse, where authorities refuse to bring intermediary-provided data to court it can be challenging for accused persons to argue that an investigation was predicated on inappropriate access to their personal data. More time has to be spent considering the role of these data intermediaries and thinking through how to prevent the disclosure of personal data to state authorities in the absence of judicial oversight. Failure to tackle this problem will simply lead to more and more inappropriate access to corporate data by authorities, and critically to access without adequate or necessary judicial oversight.

Categories
Writing

Could Email Undermine the 2012 American Election?

In the aftermath of Hurricane Sandy, some of the polling stations that would have been used by Americans to cast ballots are gone. Moreover, some citizens in New Jersey are unlikely to either find their new polling station or take the time to find a station and vote. Quite simply, they’re rebuilding their lives: presidential politics aren’t necessarily top of mind at the moment.

In the wake of the disaster, New Jersey will let some voters cast their ballots by fax and email. One American expert has identified a range of possible attack vectors that could be used to compromise people’s votes. He’s quoted as saying,

Those are just some of the more obvious and potentially catastrophic ways a direct security failure could affect this election … The email voting scheme has so many ways it can fail or that doubt can be cast on the integrity of the results, that if a race somewhere in New Jersey is decided by email ballots, it seems almost guaranteed that we’re going to have a bunch of mini-2000-in-Floridas all over the state.

In addition to basic security concerns around voting, it’s critical to understand that voting by email (effectively) removes secrecy provisions. Messages will not have to be encrypted, meaning that if employees cast their ballots at work then their employer(s) could ascertain how their employees are voting. This is an incredibly serious issue.

In the best of worlds, the New Jersey elections won’t rely on the emailed votes to determine a winner. This said, even if the votes don’t change the local results – if individuals win seats by sufficient margins that the emailed ‘ballots’ wouldn’t affect who won – the national vote could be endangered if the New Jersey voting system is connected to the national system. The risk here is that if an attacker could compromise the New Jersey voting infrastructure (perhaps by sending an infected attachment in an email message) then the rest of the infrastructure could also be compromised. Such an attack, were it to occur, could compromise not just the New Jersey results but, potentially, races across the United States.

While it’s evident why the government decided to let people vote by email – to ensure that Americans could cast their ballot despite the horrific natural disaster – these good intentions could produce very, very bad results. Worse, it could encourage trust and confidence in online voting systems more generally, systems that simply cannot be adequately secured (for more as to why, see this and this). While paper ballots are infuriating for many, they remain an ideal means of confidently expressing voting intentions. While alternate approaches certainly need to be considered to let people vote, especially in times of crisis, voting by email is not an idea that should have been contemplated, let alone adopted, as a solution to the Sandy-related voting problems.

Categories
Writing

Google’s ‘Friendly Tracking’: Fitfully Creepy?

Kashmir Hill wrote an article last week about how Google Now is informing some Nexus owners of how active they have been over the past week. She rightfully notes that this is really just making transparent the tracking that smartphones do all the time, though putting it to (arguably) good and helpful use. This said, Google’s actions raise a series of interesting issues and questions.

To begin, Google’s actions are putting a ‘friendly face’ on locational tracking. Their presentation of this data also reveals some of the ways that Google can – and apparently is – using locational data: for calculating not just distance but, based on the rate of movement between locations, the means by which users are getting from point A to B. This isn’t surprising, given that Google has had to develop algorithms to determine whether subscribers’ phones are moving in cars (in fast or slow traffic) for some of their traffic alert systems. Determining whether you’re walking/biking instead of driving is presumably just a happy outcome of that algorithmic determination. That said: is this mode of analyzing movement and location necessarily something that users want Google to be processing? Can they genuinely be expected to have consented to this surveillance – beyond clauses buried in jargon-ridden Terms of Service and Privacy Policies – and, moreover, can Now users get both the raw data and the categories into which their locational data has been ‘sorted’ by Google? Can they have both sets of data fully, and permanently, expunged from Google’s databases?
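A minimal sketch of the kind of speed-based classification described above; the distance formula is the standard haversine, while the thresholds are illustrative guesses, not Google’s actual values:

```python
# Illustrative sketch: infer mode of transport from the average speed
# between two GPS fixes. Thresholds are invented for the example and do
# not reflect any real Google Now internals.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in km."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def classify_movement(p1, p2, seconds):
    """Guess how the user travelled between two fixes taken `seconds` apart."""
    kmh = haversine_km(*p1, *p2) / (seconds / 3600.0)
    if kmh < 7:
        return 'walking'
    if kmh < 25:
        return 'cycling'
    return 'driving'
```

Even this crude version shows why the inference is cheap: two timestamped coordinates are enough to start labelling behaviour, which is precisely what makes the consent question above so important.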

Friendliness – or not, if you see this mode of tracking and notification as problematic – aside, I think that Google’s alerts speak to the important role that ambient technology can play in encouraging public fitness. In the interests of disclosure, I’ve used a non-GPS-based system to track the relative levels of my activity for the past six or seven months. It’s been the single best $100 that I’ve spent in the past five years and has led to very important, and positive, changes in my personal health. I specifically chose a non-GPS system because I worry about the implications of linking health/fitness information with where individuals physically move: I see such data as a potential gold mine for health insurers and employers. This is where I see the primary concerns: how can individuals be assured that GPS-related fitness information won’t be made available to health insurers who are setting Android users’ health premiums? How can they prevent the information from leaking to employers, or anyone else that might have an interest in this data?

Past this issue of data flow control I actually think that making basic fitness information very, very clear to people is a good idea. A comfortable one? No, not necessarily. No one really wants to see how little they may have been active. But I’m not certain that this mode of fitness analysis is necessarily creepy; it can definitely be unpleasant, however.

Of course, individuals need to be able to opt out of this kind of tracking if they’d like. Really, it should be opt-in (from a privacy perspective), though from a public health perspective I can’t help but wonder if it shouldn’t be opt-out. This is an area where there are competing public goods, and unlike a debate around security and privacy (which tends to feature pretty drawn out, well entrenched battle lines) I’m not sure we’ve had a good discussion about the nature of locational tracking as it relates to basic facets of public fitness and, by extension, public health.

In the end, this is actually a tracking technology that I’m largely on the fence about, and my core reasons for having problems with it are that (a) I don’t think people had any real idea that they had opted in to the fitness analysis; and (b) I don’t trust third parties not to get access to this data for purposes at odds with the data subject’s own interests. If both (a) and (b) could be resolved, however, I think I’d have a much harder time disagreeing with such ‘fitness alerts’ being integrated into smartphones, given the significant problems of obesity amongst Western citizens.

What are your thoughts on this topic?