Categories
Writing

Thoughts on 1Password ‘Home’ Edition

People are worried that someone’s going to steal their data or secretly access their personal devices. Border agents are accessing devices with worrying regularity. Travellers are being separated from their devices and electronics when they fly. Devices are stolen with depressing regularity. And then there’s the ongoing concern that jealous spouses, partners, or family members will try to see with whom their partner has been emailing, Snapchatting, or WhatsApping.

Few people are well positioned to defend against all of these kinds of intrusions. Some might put a password on their device. Others might be provided with updates for their devices (and even install the updates!). But few consumers are well situated to determine which software is better or worse at providing security and user privacy, or to make informed decisions about how much a security product is actually worth.

Consider a longstanding question that plagues regular consumers: which version of Windows is ‘the most secure’? Security experts often advise consumers to encrypt their devices to prevent many of the issues linked to theft. Unfortunately, only the professional and enterprise versions of Windows offer BitLocker, which provides strong full disk encryption.1 These professional versions are rarely provided by default to consumers when they buy their laptops or desktops — they get the ‘Home’ editions instead — because why would everyday folks want to encrypt their data at rest using the best security available? (See above list for reasons.)

Consumers ask the same security-related questions about different applications they use. Consider:

  • Which messaging software gives you good functionality and protects your chats from snoops?
  • Which cloud services are safe for storing your data?
  • Which VoIP system encrypts your calls securely, so no one else can listen in?
  • And so on…

Enter the Password Managers

Password managers all generally offer the same kind of security promise: use the manager, generate unique passwords, and thus reduce the likelihood that one website’s security failure will result in all of a person’s accounts being compromised. ‘Security people’ have been pushing regular consumers to adopt these managers for a long time. It’s generally an uphill fight, because trusting a service with all your passwords is scary. It’s also a hill that got a little steeper following an announcement by AgileBits this week.

AgileBits sells a password manager called ‘1Password’. The company has recognized that people are worried about their devices being seized at borders, or about border agents compelling people to log into their various services and devices. Such services include 1Password itself, which is pitched as a safe place to hold your logins, credit card information, identity information, and very private notes. Recognizing that the company has encouraged people to store super sensitive information in one place, and has thus created a goldmine for border agents, AgileBits has released a cool travel mode for 1Password to reduce the likelihood that a border agent will get access to that stash of private and secret data.

1Password Home Edition

But that cool travel mode that’s now integrated into 1Password? It’s only available to people who pay a monthly subscription for the software. So all those people who were already skeptical of password managers, who were so hard to convince to use a manager in the first place, but who we finally got using 1Password or a similar service? Or those people who resist monthly payments and would rather just buy their software once and be done with it? Yeah, they’re unlikely to subscribe to AgileBits’ monthly service. And so those users who’ve been taught to store all their stuff in 1Password are effectively building up a prime private information goldmine for border agents, and AgileBits is willing to sell them out to the feds because they’re not paying up.

People who already sunk money into 1Password to buy the software outright are now users of the 1Password ‘Home’ version. Or, to be blunt: they get the segregated kind of security that Microsoft is well known for. It’s disappointing that, in AgileBits’ efforts to ‘convert’ people to ongoing payments, the company has decided to penalize some of its existing user base. But I guess it’s great for border agents!

I’m sure AgileBits and 1Password will survive, just as Microsoft does, but it certainly is a sad day when some users get more security than others. And it’s especially sad when a company whose business is predicated on aggregating sensitive data in one location decides it would rather exploit that vulnerability for its own profit instead of trying to protect all of its users equally.

NOTE: This was first published on Medium on May 24, 2017.


  1. Windows 8 and 10 do offer ‘Device Encryption’ but not all devices support this kind of encryption. Moreover, it relies on signing into Windows with a Microsoft Account and uploads the recovery key to Microsoft’s servers, meaning the user isn’t in full control of their own security. Unauthorized parties can, potentially, access the recovery key and subsequently decrypt computers secured with Device Encryption. ↩︎
Categories
Aside Writing

Limits of Data Access Requests

Last week I wrote about the limits of data access requests as they relate to ride sharing applications like Uber. A data access request involves contacting a private company and requesting a copy of your personal information, along with an account of the ways in which that data is processed and disclosed, and the periods of time for which it is retained.
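If you want to make such a request yourself, a minimal sketch of the sort of letter involved might look like the following; the template wording, company name, and contact details are illustrative placeholders rather than a legal form.

```python
# Sketch: draft a basic data access request letter (PIPEDA-style).
# The template wording and all details below are illustrative placeholders.

REQUEST_TEMPLATE = """\
To the Privacy Officer, {company}:

Under applicable data protection law, I request:
  1. A copy of all personal information you hold about me;
  2. An account of how that information is processed;
  3. A list of third parties to whom it has been disclosed; and
  4. The retention periods for each category of that information.

My account identifier: {account_id}

Please respond within the statutory time limit.

Sincerely,
{name}
"""


def draft_request(company: str, name: str, account_id: str) -> str:
    """Fill in the access-request template for one company."""
    return REQUEST_TEMPLATE.format(company=company, name=name, account_id=account_id)


if __name__ == "__main__":
    print(draft_request("Uber", "Jane Doe", "jane.doe@example.com"))
```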

Research has repeatedly shown that companies are very poor at comprehensively responding to data access requests. Sometimes this is because of divides between technical teams that collect and use the data, policy teams that determine what is and isn’t appropriate to do with data, and legal teams that ascertain whether collections and uses of data comport with the law. In other situations companies simply refuse to respond because they adopt a confused-nationalist understanding of law: if the company doesn’t have an office somewhere then that jurisdiction’s laws aren’t seen as applying to the company, even if the company does business in the jurisdiction.

Automated Data Export As Solution?

Some companies, such as Facebook and Google, have developed automated data download services. Ostensibly these services are designed so that you can download the data you’ve input into the companies, thus revealing precisely what is collected about you. In reality, these services don’t let you export all of the information that the companies collect. As a result, people who use these download services tend to end up with a false impression of just what information the companies collect and how it’s used.

A shining example of the kinds of information that are not revealed to users of these services has come to light. A recently leaked document from Facebook Australia revealed that:

Facebook’s algorithms can determine, and allow advertisers to pinpoint, "moments when young people need a confidence boost." If that phrase isn’t clear enough, Facebook’s document offers a litany of teen emotional states that the company claims it can estimate based on how teens use the service, including "worthless," "insecure," "defeated," "anxious," "silly," "useless," "stupid," "overwhelmed," "stressed," and "a failure."

This targeting of emotions isn’t necessarily surprising: in a past exposé we learned that Facebook conducted experiments during an American presidential election to see if it could sway voters. Indeed, the company’s raison d’être is to figure out how to pitch ads to customers, and figuring out when Facebook users are more or less likely to be affected by advertisements is just good business. If you use the self-download service provided by Facebook, or any other data broker, you will not receive data on how and why your data is exploited: without understanding how their algorithms act on the data they collect from you, you can never really understand how your personal information is processed.
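To make that gap concrete, here is a toy sketch; the scoring heuristic is entirely invented, since the real models are proprietary, but it shows the difference between the data you input (which an export tool returns) and the inferences derived from it (which no export tool returns).

```python
# Toy illustration only: the real inference models are proprietary and unknown.
# The point is that data exports return `posts`, never anything like `inferred`.

posts = [
    "I feel so useless today",
    "nobody ever likes my photos",
    "great run this morning!",
]

# A made-up "confidence boost" heuristic standing in for a real model.
NEGATIVE_MARKERS = {"useless", "worthless", "insecure", "nobody", "stupid"}


def toy_vulnerability_score(text: str) -> float:
    """Fraction of words in a post that match the negative-marker list."""
    words = text.lower().split()
    hits = sum(1 for word in words if word in NEGATIVE_MARKERS)
    return hits / max(len(words), 1)


# Derived data: an inference about emotional state, not something the user typed.
inferred = {
    "needs_confidence_boost": any(toy_vulnerability_score(p) > 0.1 for p in posts),
}

print("What a data export returns:", posts)
print("What it never returns:", inferred)
```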

But that raison d’être of pitching ads to people — which is why Facebook could internally justify the deliberate targeting of vulnerable youth — ignores the baseline ethical question of whether it is appropriate to exploit our psychology to sell us products. To be clear, this isn’t a company stalking you around the Internet with ads for a car or couch or jewelry that you were browsing for. This is a deliberate effort to mine your communications to sell products at times of psychological vulnerability. The difference is between somewhat stupid tracking and the deliberate exploitation of our emotional state.1

Solving for Bad Actors

There are laws around what you can do with the information provided by children. Whether Facebook’s actions run afoul of such laws may never actually be tested in a court or privacy commissioner’s decision. In part, this is because actually mounting legal challenges is extremely difficult, expensive, and time consuming. These hurdles automatically tilt the balance towards activities such as this continuing, even if Facebook stops this particular activity. But part of the problem is also Australia’s historically weak privacy commissioner, as well as the limitations of such offices around the world: privacy commissioners’ offices are often understaffed, under-resourced, and unable to chase every legally and ethically questionable practice undertaken by private companies. Companies know about these limitations and, as such, know they can get away with unethical and frankly illegal activities unless someone talks to the press about the activities in question.

So what’s the solution? The rote advice is to stop using Facebook. While that might be good advice for some, for a lot of other people leaving Facebook is very, very challenging. You might use it to sign into a lot of other services, and so can’t easily abandon it. You might have stored years of photos or conversations that Facebook doesn’t give you a nice way to pull out. It might be a place where all of your friends and family congregate to share information, and so leaving would amount to being excised from your core communities. And depending on where you live, you might rely on Facebook for finding jobs, community events, or other activities that are essential to your life.

In essence, solving for Facebook, Google, Uber, and all the other large data brokers is a collective action problem. It’s not a problem that is best solved on an individual basis.

A more realistic kind of advice would be this: complain to your local politicians. File complaints with your domestic privacy commissioners. File complaints with every conference, academic association, and industry event that takes Facebook money.2 Make it very public and very clear that you, and the groups you are associated with, are offended that the company in question is profiting off the psychological exploitation of children and adults alike.3 Now, will your efforts to raise attention to the issue and draw negative attention to companies and groups profiting from Facebook and other data brokers stop unethical data exploitation tomorrow? No. But by consistently raising our concerns about how large data brokers collect and use personal information, and attaching some degree of negative publicity to all those who benefit from such practices, we can decrease the public stock of a company.

History is dotted with individuals who are seen as standing up to end bad practices by governments and private companies alike. But behind them tends to be a mass of citizens who support those individuals: while standing up en masse may mean that we don’t each get individual praise for stopping some tasteless and unethical practice, our collective standing up makes it more likely that such practices will be stopped. By each working a little, we can accomplish what we’d be hard pressed to change as individuals.

NOTE: This blog was first published on Medium on May 1, 2017.


  1. Other advertising companies adopt the same practices as Facebook. So I’m not suggesting that Facebook is worst-of-class and letting the others off the hook. ↩︎
  2. Replace ‘Facebook’ with whatever company you think is behaving inappropriately, unethically, or perhaps illegally. ↩︎
  3. Surely you don’t think that Facebook is only targeting kids, right? ↩︎
Categories
Writing

Uber and the Limits of Privacy Law

When was the last time that you thought long and hard about the information companies are collecting, sharing, and selling about you? Maybe you thought about it after reading that some company had suffered a data breach or questionably used your data, and then put the worries out of your mind.

What you may not know is that most contemporary Western nation-states have established data protection and privacy legislation over the past several decades. A core element of these laws is the data access right: the right for individuals to compel companies to disclose what information the companies have collected, stored, and shared about them.

In Canada, federal commercial privacy legislation lets Canadian citizens and residents request their personal information. They can use an online application to make those requests to telecommunications companies, online dating companies, or fitness wearable companies. Or they can make requests directly to specific companies on their own.

So, what happens when you make a request to a ride sharing company? A company like Uber? It might surprise you, but they tend to provide a lot of information about you, pretty quickly, and in surprisingly digestible formats. You can see when you used the application to book a ride, the coordinates of the pickup, where you were dropped off, and so forth.

But you don’t necessarily get all of the information that ride sharing companies collect about you. In the case of Uber, the company was recently found to be fingerprinting the phones its application was installed on. There’s some reason to believe that this was done for anti-fraud purposes but, regardless, it arguably constitutes the collection of personal information. Per Canadian privacy legislation, such information is defined as “information about an identifiable individual”, and decisions by the Commissioner have found that if there is even an instant where machine identifiers are linked with identifiable subscriber data, the machine identifiers also constitute personal information. Given that Uber was collecting the fingerprints while the application was installed, it was likely linking those fingerprints with subscriber data, even if only momentarily before subsequently separating the identifiers from the other data.
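Uber’s actual fingerprinting technique hasn’t been made public, but a minimal sketch of the general idea, using invented device attributes, might look like the following. Note how a single join against subscriber records, however brief, links the fingerprint to an identifiable individual.

```python
import hashlib

# Invented device attributes, standing in for whatever identifiers were used.
device_attrs = {
    "model": "iPhone 6",
    "os_version": "10.3.1",
    "carrier": "ExampleTel",
}


def fingerprint(attrs: dict) -> str:
    """Derive a stable pseudonymous identifier from device attributes."""
    canonical = "|".join(f"{key}={value}" for key, value in sorted(attrs.items()))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]


fp = fingerprint(device_attrs)

# The moment the fingerprint is joined with subscriber data -- even briefly --
# it becomes "information about an identifiable individual".
subscribers = {fp: {"name": "Jane Doe", "email": "jane.doe@example.com"}}
print(fp, "->", subscribers[fp])
```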

So if Uber had a legal duty to inform individuals about the personal information that it collected, and failed to do so, what is the recourse? Either the Federal Office of the Privacy Commissioner of Canada could launch an investigation or someone who requested their personal information from Uber could file a formal complaint with the Office. That complaint would, pretty simply, argue that Uber had failed to meet its legal obligations by not disclosing the tracking information.

But even if Uber were found to have violated Canadian law, there isn’t a huge amount of recourse for affected individuals. There aren’t any fines that can be levied by the Canadian federal commissioner. And Uber might decide that it doesn’t want to implement any recommendations the Privacy Commissioner provides: in Canada, to enforce an order, a company has to be taken to court. Even when companies like Facebook have received recommendations, they have selectively implemented them and ignored those that would impact their business model. So ‘enforcement’ tends to be limited to moral suasion when applied by the federal privacy commissioner.1

But the limits of enforcement speak to only part of the problem. What is worse is that we only know about Uber’s deceptive practices because of journalism. It isn’t because the company was forthcoming and proactively disclosed this information well in advance of fingerprinting devices. Other companies can read that signal and know that they can probably engage in questionable and unlawful practices with a pretty low expectation of being caught or punished.

In a recent article published by a summer fellow at the Citizen Lab, Adrian Fong argued that enforcing data protection and privacy laws against individual private companies is likely untenable: too few companies will figure out how to deal with data access requests, fewer will be inclined to respond to them, and fewer still will understand whether they are obligated to respond to such requests in the first place. Instead, Fong argues that application stores — such as Google’s and Apple’s respective app stores — could include comprehensive data access rights as part of the contracts that app developers agree to with the app store owners. Failure to comply with the data access rights aspect of a contract could lead to an app being removed from the store. Were Google and Apple to seriously implement such a practice, their ability to remove bad actors, such as Uber, from app stores could lead to a modification of business practices.

Ultimately, however, I’m not certain that the ‘solution’ to Uber is better privacy law. It’s probably not even just better regulation. Rather, ‘solving’ for companies like Uber demands changing how engineers and businesspeople are educated and trained, and modifying the grounds on which they’re rewarded and punished for their actions. Greater emphasis on ethical practices and the politics of code needs to be ingrained in their respective educational curricula, just as arts and humanities students should be exposed in more depth to the hard sciences. And engineers, generally, need to learn that they’re not just solving hard problems such as preventing fraudulent rides: they’re also embedding power structures in the code they develop, and those structures can’t just run roughshod over the laws that democratic publics have established to govern private behaviours. Or, if they do run afoul of the law — be it national data protection law or contract law — there should at least be serious consequences. Doing otherwise will simply incentivize companies to act unethically on the basis that there are few, or no, consequences for behaving like a bad actor.

NOTE: this was originally posted to Medium.


  1. Some of Canada’s provincial commissioners do have order making powers. ↩︎
Categories
Links

The London Tube Is Tracking Riders with Their Phones

From Wired:

An agency like TfL could also use uber-accurate tracking data to send out real-time service updates. “If no passengers are using a particular stairway, it could alert TfL that there’s something wrong with the stairway—a missing step or a scary person,” Kaufman says. (Send emergency services stat.)

The Underground won’t exactly know what it can do with this data until it starts crunching the numbers. That will take a few months. Meanwhile, TfL has set about quelling a mini-privacy panic—if riders don’t want to share data with the agency, Sager Weinstein recommends shutting off your mobile device’s Wi-Fi.

So, on the one hand, they’ll apply norms and biases to ascertain why their data ‘says’ certain things. But to draw these conclusions, the London transit authority will collect information from customers, and the only way to disable this collection is to reduce the functionality of your device when you’re in a public space. Sounds like a recipe for great consensual collection of data and subsequent data ‘analysis’.
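TfL hasn’t published how its collection works, but the underlying mechanism is well understood: a phone with Wi-Fi enabled periodically broadcasts probe requests that include a hardware address, which any passive listener can count. A minimal sketch of that mechanism, assuming the scapy library, root privileges, and a wireless interface in monitor mode (the interface name is a placeholder), might look like this:

```python
# Sketch of the underlying mechanism, not TfL's actual system. Note that
# modern phones increasingly randomize the MAC address in probe requests,
# which complicates this sort of counting.
from scapy.all import sniff, Dot11ProbeReq

seen_devices = set()


def handle(pkt):
    """Record the source MAC of each Wi-Fi probe request overheard."""
    if pkt.haslayer(Dot11ProbeReq):
        seen_devices.add(pkt.addr2)
        print(f"{len(seen_devices)} distinct devices seen so far")


# "wlan0mon" is a placeholder for a monitor-mode interface.
sniff(iface="wlan0mon", prn=handle, store=False)
```

The only opt-out available to a rider is exactly the one TfL suggests: turning Wi-Fi off entirely.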

Categories
Links

Privacy and Policing in a Digital World

As the federal government holds public consultations on what changes should be made to Bill C-51, the controversial anti-terrorism legislation passed by the Conservative government, various police agencies such as the RCMP and the Canadian Association of Chiefs of Police have petitioned to gain new powers to access telephone and internet data. Meanwhile nearly half of Canadians believe they should have the right to complete digital privacy. The Agenda examines the question of how to balance privacy rights with effective policing in the digital realm.

I was part of a panel that discussed some of the powers that the Government of Canada is opening for discussion as part of its National Security consultation, which ends on December 15, 2016. If you want to provide comments to the government, see: https://www.canada.ca/en/services/defence/nationalsecurity/consultation-national-security.html

Categories
Aside Links

The Subtle Ways Your Digital Assistant Might Manipulate You

From Wired:

Amazon’s Echo and Alphabet’s Home cost less than $200 today, and that price will likely drop. So who will pay our butler’s salary, especially as it offers additional services? Advertisers, most likely. Our butler may recommend services and products that further the super-platform’s financial interests, rather than our own interests. By serving its true masters—the platforms—it may distort our view of the market and lead us to services and products that its masters wish to promote.

But the potential harm transcends the search bias issue, which Google is currently defending in Europe. The increase in the super-platform’s economic power can translate into political power. As we increasingly rely on one or two head butlers, the super-platform will learn about our political beliefs and have the power to affect our views and the public debate.

The discussions about algorithmic bias often have an almost science fiction feel to them. But as personal assistants are monetized, with platform owners inking deals with advertisers and adopting secretive business practices designed to extract value from users, the threat of attitude shaping will become even more important. Why did your assistant recommend a particular route? (Answer: because it took you past businesses the platform owner believes you are predisposed to spend money at.) Why did your assistant present a particular piece of news? (Answer: because the piece in question conformed with your existing views and thus increased the time you spent on the site, during which you were exposed to the platform’s associated advertising partners’ content.)
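A toy sketch makes the route example concrete; the scoring logic and the ‘ad weight’ knob below are entirely hypothetical, but they show how a recommendation can quietly trade the user’s convenience against the platform’s revenue.

```python
# Purely hypothetical scoring: how a recommender could weigh platform
# revenue against user convenience without ever disclosing the trade-off.
routes = [
    {"name": "direct",   "minutes": 12, "sponsored_stops": 0},
    {"name": "via_mall", "minutes": 15, "sponsored_stops": 3},
]

AD_WEIGHT = 2.0  # invented knob: minutes of user time each sponsored stop is "worth"


def platform_score(route: dict) -> float:
    # Lower is "better": travel time, discounted by expected ad revenue.
    return route["minutes"] - AD_WEIGHT * route["sponsored_stops"]


best = min(routes, key=platform_score)
print("Recommended route:", best["name"])  # via_mall: 15 - 6 = 9 beats direct's 12
```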

We are shifting to a world where algorithms are functionally what we call magic. A type of magic that can be used to exploit us while we think that algorithmically-designed digital assistants are markedly changing our lives for the better.

Categories
Links Quotations

RCMP is overstating Canada’s ‘surveillance lag’ | Toronto Star

From a piece that I wrote with Tamir Israel for the Toronto Star:

The RCMP has been lobbying the government behind the scenes for increased surveillance powers on the faulty premise that their investigative powers are lagging behind those of foreign police services.

The centrepiece of the RCMP’s pitch is captured in an infographic that purports to show foreign governments are legislating powers that are more responsive to investigative challenges posed by the digital world. On the basis of this comparison, the RCMP appears to have convinced the federal government to transform a process intended to curb the excesses of Bill C-51 into one dominated by proposals for additional surveillance powers.

The RCMP’s lobbying effort misleadingly leaves an impression that Canadian law enforcement efforts are being confounded by digital activities.

An op-ed that I published earlier this week with my colleague Tamir Israel, which calls out the RCMP for deliberately misleading the public with regard to government agencies’ existing surveillance powers and capabilities.

Categories
Links Quotations

Pleading the Case: How the RCMP Fails to Justify Calls for New Investigatory Powers

The powers that the government is proposing in its national security consultation — that all communications made by all Canadians be retained regardless of guilt, that all communications be accessible to state agencies on the basis that any Canadian could potentially commit a crime, that security of communications infrastructure should be secondary to government access to communications — are deeply disproportionate to the challenges government agencies are facing. The cases chosen by authorities to be selectively revealed to journalists do not reveal a crisis of policing but that authorities continue to face the ever-present challenges of how to prioritize cases, how to assign resources, and how to pursue investigations to conclusion. Authorities have never had a perfect view into the private lives of citizens and that is likely to continue to be the case, but they presently have a far better view into the lives of most citizens, using existing powers, than ever before in history.

The powers discussed in its consultation, and that the RCMP has implicitly argued for by revealing these cases, presume that all communications in Canada ought to be accessible to government agencies upon their demand. Implementing the powers outlined in the national security consultation would require private businesses to assume significant costs in order to intercept and retain any Canadian’s communications. And such powers would threaten the security of all Canadians — by introducing backdoors into Canada’s communications ecosystem — in order to potentially collect evidence pursuant to a small number of cases, while simultaneously exposing all Canadians to the prospect of criminals or foreign governments exploiting the backdoors the RCMP is implicitly calling for.

While the government routinely frames lawful interception, mandated decryption, and other investigatory powers principally as a ‘privacy-versus-security’ debate, the debate is better framed as one of ‘security-or-less-security’. Do Canadians want to endanger their daily communications and become less secure in their routine activities so that the RCMP and our security services can better intercept data they cannot read, or retain information they cannot process? Or do Canadians want the strongest security possible, so that their businesses, personal relationships, religious observations, and other aspects of their daily lives are kept safe from third persons who want to capture and exploit their sensitive and oftentimes confidential information? Do we want to be safer from cybercriminals, or more likely to be victimized by them because we have handed such powers to government agencies?

Categories
Links

Secret Backdoor in Some U.S. Phones Sent Data to China, Analysts Say – NYTimes.com

From the New York Times:

International customers and users of disposable or prepaid phones are the people most affected by the software. But the scope is unclear. The Chinese company that wrote the software, Shanghai Adups Technology Company, says its code runs on more than 700 million phones, cars and other smart devices. One American phone manufacturer, BLU Products, said that 120,000 of its phones had been affected and that it had updated the software to eliminate the feature.

Kryptowire, the security firm that discovered the vulnerability, said the Adups software transmitted the full contents of text messages, contact lists, call logs, location information and other data to a Chinese server. The code comes preinstalled on phones and the surveillance is not disclosed to users, said Tom Karygiannis, a vice president of Kryptowire, which is based in Fairfax, Va. “Even if you wanted to, you wouldn’t have known about it,” he said.

The manufacturer of the American-branded phones didn’t know of this exfiltration vector. Consumers had no idea of the vector. And Google apparently had no idea that this data was being exfiltrated. But sure, trust mobile devices for moderately confidential work…

Categories
Links

Privacy experts fear Donald Trump accessing global surveillance network

Thomas Drake, an NSA whistleblower who predated Snowden, offered an equally bleak assessment. He said: “The electronic infrastructure is fully in place – and ex post facto legalised by Congress and executive orders – and ripe for further abuse under an autocratic, power-obsessed president. History is just not kind here. Trump leans quite autocratic. The temptations to use secret NSA surveillance powers, some still not fully revealed, will present themselves to him as sirens.”

Bush and Cheney functionally authorized the NSA to undertake unlawful operations and actively sought to keep the authorizing courts from understanding what was going on. At the same time, that administration established black sites and novel detention rules for persons kidnapped by the CIA from around the world.

Obama and Biden developed legal theories that were accompanied by authorizing legislation to make the NSA’s previously unlawful activities lawful. The Obama presidency also failed to close Gitmo, or to convince the American public that torture should be forbidden or that criminal (as opposed to military) courts are the appropriate way of dealing with terrorism suspects. And throughout, the NSA deliberately misled and lied to its authorizing court, the CIA deliberately withheld documents from investigators and spied on those working for the intelligence oversight committees, and the FBI continued to conceal its own surveillance operations as best it could.

There are a lot of things to be worried about when it comes to the United States’ current trajectory. But one of the more significant items to note is that the most sophisticated and best financed surveillance and policing infrastructure in the world is going to be working at the behest of an entirely unproven, misogynistic, racist, and bigoted president.

It’s cause to be very, very nervous for the next few years.