Categories
Photography

Gear Burn Out

Don’t stare into the sun.

– Mothers from around the world

During the recent eclipse some photographers rented cameras and lenses to try to get some shots of it. They didn’t, however, adequately protect the gear. The results are shocking.

Categories
Links Photography

National Geographic Photos of the Year

These are absolutely amazing shots; I have to admit my preference for the People’s Awards is definitely ‘Colourful Markets’. The vibrancy of the image combined with the elevated angle of the shot is really magical.

Categories
Photography

Big Mac Lighting

Philippe Echaroux was challenged to use an ordinary image-making device to take exceptional portraits. He used a straw, a Big Mac box, and a flashlight to create a light box and, along with an iPhone, got some remarkable shots. It doesn’t matter what tool you use to take photos so long as you’re knowledgeable about its strengths and weaknesses and possess an adventurous spirit.

Categories
RPG

See the Sketches J.R.R. Tolkien Used to Build Middle-Earth

Many of these are amazing, in that they show how the maps of one of the most adored fantasy worlds began just like those used in most homebrew D&D games.

Categories
Photography

Space Training

Photo made with Olympus EM10ii and M.14-42mm F3.5-5.6 II R lens at the Canadian National Exhibition on August 27, 2017 in Toronto, Ontario. Edited in Apple Photos.
Categories
Links

Cider Profiles

From the AV Club:

English ciders, for example, tend to be still, dry, and higher in alcohol than most ciders. (English ciders are often considered the red wine of the cider world.) Spanish ciders are more often compared to sour beers, with a funkier taste. French ciders are the most approachable of European ciders, as they have a champagne-like sparkle and are lower in alcohol content. Terroir isn’t all that differentiates European ciders from American ones, however, as their use of wild yeasts results in a bolder, more offbeat flavor profile.

American ciders are harder to pin down, as the unique processes brewers have been applying to craft beer—barrel-aging, hopping, the addition of spices and other fruits—are also being used by cider makers, resulting in a variety of different tastes. What most American ciders have in common, however, is lightness, crispness, and an easy-going approachability.

As someone who appreciates well-crafted beers and liquors, and has recently tried to get into cider, this is really helpful in orienting myself. Thus far I think my preferred kind of cider tends to be semi-experimental (I had a truly delightful gin barrel-aged dry cider earlier this summer) but knowing what to look for in flavour profiles is definitely helpful going forward.

Categories
Writing

Thoughts on 1Password ‘Home’ Edition

People are worried that someone’s going to steal their data or secretly access their personal devices. Border agents are accessing devices with worrying regularity. Travellers are being separated from their devices and electronics when they fly. Devices are stolen with depressing regularity. And then there’s the ongoing concern that jealous spouses, partners, or family members will try to see with whom their partner has been emailing, Snapchatting, or WhatsApping.

Few people are well positioned to defend against all of these kinds of intrusions. Some might put a password on their device. Others might be provided with updates for their devices (and even install those updates!). But few consumers are well situated to determine which software is better or worse at providing security and user privacy, or to make informed decisions about how much a security product is actually worth.

Consider a longstanding question that plagues regular consumers: which version of Windows is ‘the most secure’? Security experts often advise consumers to encrypt their devices to prevent many of the issues linked to theft. Unfortunately, only the professional or enterprise versions of Windows offer BitLocker, which provides strong full disk encryption.1 These professional versions are rarely provided by default to consumers when they buy their laptops or desktops — they get the ‘Home’ editions instead — because why would everyday folks want to encrypt their data at rest using the best security available? (See above list for reasons.)

Consumers ask the same security-related questions about different applications they use. Consider:

  • Which messaging software gives you good functionality and protects your chats from snoops?
  • Which cloud services are safe places to store your data?
  • Which VoIP system encrypts your calls securely, so no one else can listen in?
  • And so on…

Enter the Password Managers

Password managers all generally offer the same kind of security promises: use the manager, generate unique passwords, and thus reduce the likelihood that one website’s security failure will result in all of a person’s accounts being victimized. ‘Security people’ have been pushing regular consumers to adopt these managers for a long time. It’s generally an uphill fight because trusting a service with all your passwords is scary. It’s also a hill that got a little steeper following an announcement by AgileBits this week.
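That core promise — one unique, random password per site, so a breach at one service can’t cascade into the others — can be sketched in a few lines. This is purely an illustrative toy using Python’s `secrets` module, not how 1Password or any real manager actually generates or stores credentials:

```python
# Toy sketch of per-site password generation (illustrative only; real
# password managers use their own generation and encrypted storage schemes).
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Return a random password drawn from letters, digits, and symbols."""
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*-_"
    return "".join(secrets.choice(alphabet) for _ in range(length))

# A different password for every site means one website's security
# failure can't be replayed against all of a person's other accounts.
vault = {site: generate_password() for site in ("example.com", "shop.example")}
```

The point of the sketch is the independence of each entry: nothing about one generated password tells an attacker anything about the others, which is exactly the property a manager gives you that password reuse throws away.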

AgileBits sells a password manager called ‘1Password’. The company has recognized that people are worried about their devices being seized at borders, or about border agents compelling people to log into their various services and devices. Such services could include 1Password itself, which is pitched as a safe place to hold your logins, credit card information, identity information, and very private notes. Recognizing that the company has encouraged people to store super sensitive information in one place, and thus to create a goldmine for border agents, AgileBits has released a cool travel mode for 1Password to reduce the likelihood that a border agent will get access to that stash of private and secret data.

1Password Home Edition

But that cool travel mode that’s now integrated into 1Password? It’s only available to people who pay a monthly subscription for the software. So all those people who were already skeptical of password managers, who were very hard to convince to use a manager in the first place, but who we finally got to use 1Password or a similar service? Or those people who resist monthly payments for things and would rather just buy their software once and be done with it? Yeah, they’re unlikely to subscribe to AgileBits’ monthly service. And so those users who’ve been taught to store all their stuff in 1Password are effectively building up a prime private information goldmine for border agents, and AgileBits is willing to sell them out to the feds because they’re not paying up.

People who already sunk money into 1Password to buy the software are now users of the 1Password Home version. Or, to be blunt: they get the segregated kinds of security that Microsoft is well known for. It’s disappointing that, in its efforts to ‘convert’ people to ongoing payments, AgileBits has decided to penalize some of its existing user base. But I guess it’s great for border agents!

I’m sure AgileBits and 1Password will survive, just as Microsoft does, but it’s certainly a sad day when some users get more security than others. And it’s especially sad when a company whose business is predicated on aggregating sensitive data in one location decides it would rather exploit that vulnerability for its own profit instead of trying to protect all of its users equally.

NOTE: This was first published on Medium on May 24, 2017.


  1. Windows 8 and 10 do offer ‘Device Encryption’ but not all devices support this kind of encryption. Moreover, it relies on signing into Windows with a Microsoft Account and uploads the recovery key to Microsoft’s servers, meaning the user isn’t in full control of their own security. Unauthorized parties can, potentially, access the recovery key and subsequently decrypt computers secured with Device Encryption. ↩︎
Categories
Writing

When ‘Contact Us’ Forms Become Life Threatening

Journalists targeted by security services can write about relatively banal subjects. They might report on the amount and quality of food available in markets. They might write about the slow construction of roads. They might write about dismal housing conditions. They might even just include comments about a politician that are seen as unfavourable, such as noting that the politician wiped sweat from their brow before answering a question. Risky reporting from extremely hostile environments needn’t involve writing about government surveillance, policing, or corruption: far, far less ‘sensitive’ reporting can be enough for a government to cast a reporter as an enemy of the state.

The rationale for such hyper-vigilance on the part of dictatorships and authoritarian countries is that such governments regularly depend on international relief funds or the international community’s decision to not harshly impede the country’s access to global markets. Negative press coverage could cut off relief funds or monies from international organizations following a realization that the country lacks the ‘freedoms’ and ‘progress’ the government and most media publicly report on. If the international community realizes that the country in question is grossly violating human rights it might also limit the country’s access to capital markets. In either situation, limiting funds available to the government can endanger the reigning government or hinder leaders from stockpiling stolen wealth.

Calling for Help

Reaching out to international journalism protection organizations, or to foreign governments that might offer asylum, can raise serious negative publicity concerns for dictatorial or authoritarian governments. If a country’s journalists are fleeing because they believe they are in danger, and that fact rises to public attention, it could negatively affect a leader’s public image and the government’s access to funds. On this basis governments may place particular journalists under surveillance and punish them should they do anything to threaten the public image of the leader or country. Such surveillance is also utilized when reporters who are in a country are covering, and writing about, facts that stand in contravention to government propaganda.

The potential for electronic surveillance is particularly high, and serious, when the major telecommunications providers in a country tend to fully comply with, or willingly provide assistance to, state security and intelligence services. This degree of surveillance makes contacting international organizations that assist journalists risky; when a foreign organization does not encrypt communications sent to it, the organization’s security practices may further endanger a journalist calling for help. One of the many journalists covered in Bad News: Last Journalists in a Dictatorship, who feared that his life was in danger from the Rwandan government, stated,

[h]e had written to the Committee to Protect Journalists, in New York, but someone in the president’s office had then shown him the application that he had filled out online. He didn’t trust people living abroad any longer. (Bad News: Last Journalists in a Dictatorship, 83-4)

Such surveillance could have taken place in a few different ways: the local network or computer the journalist used to prepare and send the application might have been compromised. Alternately, the national network might have been subject to surveillance for ‘sensitive’ materials. Though the former case is a prevalent problem (e.g., Internet cafes being compromised by state actors) it’s not one that international journalist organizations are well suited to fix. The latter situation, however, where the national network itself is hostile, is something that media organizations can address.

Network inspection technologies can be configured to look for particular pieces of metadata and content that are of interest to government monitors. By sorting for certain kinds of metadata, such as websites visited, content selection can be applied relatively efficiently and automated analysis of that content subsequently be employed. That content analysis, however, depends on the government in question having access to plaintext communications.

Many journalism organizations historically have had ‘contact us’ pages on their websites, and many continue to have and use these pages. Some organizations secure their contact forms by using SSL encryption. But many organizations do not, including organizations that actively assert they will provide assistance to international journalists in need. These latter organizations make it trivial for states that are hostile to journalists to monitor in-country journalists who are making requests or issuing claims using these insecure contact forms.

Mitigating Threats

One way that journalism protection organizations can somewhat mitigate the risk of government surveillance is to implement SSL on their websites, which encrypts communications sent to the organization’s web server. It is still apparent to network monitors what website was visited but not which pages. And if the journalist sends a message using a ‘contact us’ form the data communicated will be encrypted, thus preventing network snoops from figuring out what is being said.

SSL isn’t a bulletproof solution to stopping governments from monitoring messages sent using contact forms. But it raises the difficulty of intercepting, decrypting, and analyzing the calls for help sent by at-risk journalists. And such security is relatively trivial to implement now that free certificate authorities like ‘Let’s Encrypt’ exist.
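The exposure is also easy for an organization to spot in its own pages: any contact form whose submission target isn’t an https:// URL sends its contents in plaintext. Here is a rough, hypothetical audit sketch in Python — the function names, and the idea of feeding it saved page HTML, are my own illustration, not any organization’s actual tooling:

```python
# Hypothetical sketch: flag contact forms that submit over unencrypted HTTP.
# Page HTML is supplied as a string; a real audit would fetch each page first.
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

class FormAuditor(HTMLParser):
    def __init__(self, page_url: str):
        super().__init__()
        self.page_url = page_url
        self.insecure_actions = []

    def handle_starttag(self, tag, attrs):
        if tag != "form":
            return
        action = dict(attrs).get("action", "")
        # Resolve relative actions against the page URL, then check the scheme.
        target = urljoin(self.page_url, action)
        if urlparse(target).scheme != "https":
            self.insecure_actions.append(target)

def audit(page_url: str, html: str) -> list:
    """Return the submission targets on this page that aren't HTTPS."""
    auditor = FormAuditor(page_url)
    auditor.feed(html)
    return auditor.insecure_actions
```

Note that a form with a relative action like `/send` inherits the page’s own scheme, which is why serving the whole site over HTTPS — not just bolting encryption onto one form — is the fix the post is arguing for.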

Ideally journalism organizations would either add SSL to their websites — to inhibit adversarial states from reading messages sent to these organizations — or provide only alternate means of communicating with them. That might mean mandating email, and listing hosts that provide server-to-server encryption (i.e. those that have implemented STARTTLS), messaging applications that provide sufficient security to evade most state actors (everything from WhatsApp or Signal, to even Hangouts if the US Government and NSA aren’t the actors you’re hiding from), or any other communications channel that should be secure from surveillance by countries outside the Five Eyes alliance.

No organization wants to be responsible for putting people at risk, especially when those people are just trying to find help in dangerous situations. Organizations that exist to, in part, protect journalists thus need to do the bare minimum and ensure their baseline contact forms are secured. Doing anything else is just enabling state surveillance of at-risk journalists, and stands as antithetical to the organizations’ missions.

NOTE: This post was previously published on Medium.

Categories
Aside Writing

Limits of Data Access Requests

Last week I wrote about the limits of data access requests as they relate to ride sharing applications like Uber. A data access request involves contacting a private company and requesting a copy of your personal information, the ways in which that data is processed and disclosed, and the periods of time for which it is retained.

Research has repeatedly shown that companies are very poor at comprehensively responding to data access requests. Sometimes this is because of divides between technical teams that collect and use the data, policy teams that determine what is and isn’t appropriate to do with data, and legal teams that ascertain whether collections and uses of data comport with the law. In other situations companies simply refuse to respond because they adopt a confused-nationalist understanding of law: if the company doesn’t have an office somewhere then that jurisdiction’s laws aren’t seen as applying to the company, even if the company does business in the jurisdiction.

Automated Data Export As Solution?

Some companies, such as Facebook and Google, have developed automated data download services. Ostensibly these services are designed so that you can download the data you’ve input into the companies, thus revealing precisely what is collected about you. In reality, these services don’t let you export all of the information that these respective companies collect. As a result, people who use these download services tend to end up with a false impression of just what information the companies collect and how it’s used.

A shining example of the kinds of information that are not revealed to users of these services has come to light. A recently leaked document from Facebook Australia revealed that:

Facebook’s algorithms can determine, and allow advertisers to pinpoint, "moments when young people need a confidence boost." If that phrase isn’t clear enough, Facebook’s document offers a litany of teen emotional states that the company claims it can estimate based on how teens use the service, including "worthless," "insecure," "defeated," "anxious," "silly," "useless," "stupid," "overwhelmed," "stressed," and "a failure."

This targeting of emotions isn’t necessarily surprising: in a past exposé we learned that Facebook conducted experiments during an American presidential election to see if they could sway voters. Indeed, the company’s raison d’être is to figure out how to pitch ads to customers, and figuring out when Facebook users are more or less likely to be affected by advertisements is just good business. If you use the self-download service provided by Facebook, or any other data broker, you will not receive data on how and why your data is exploited: without understanding how their algorithms act on the data they collect from you, you can never really understand how your personal information is processed.

But that raison d’être of pitching ads to people — which is why Facebook could internally justify the deliberate targeting of vulnerable youth — ignores baseline ethics of whether it is appropriate to exploit our psychology to sell us products. To be clear, this isn’t a company stalking you around the Internet with ads for a car or couch or jewelry that you were browsing about. This is a deliberate effort to mine your communications to sell products at times of psychological vulnerability. The difference is between somewhat stupid tracking versus deliberate exploitation of our emotional state.1

Solving for Bad Actors

There are laws around what you can do with the information provided by children. Whether Facebook’s actions run afoul of such law may never actually be tested in a court or privacy commissioner’s decision. In part, this is because actually mounting legal challenges is extremely difficult, expensive, and time-consuming. These hurdles automatically tilt the balance towards activities such as this continuing, even if Facebook stops this particular activity. But, also, part of the problem is Australia’s historically weak privacy commissioner as well as the limitations of such offices around the world: Privacy Commissioners’ Offices are often understaffed, under-resourced, and unable to chase every legally and ethically questionable practice undertaken by private companies. Companies know about these limitations and, as such, know they can get away with unethical and frankly illegal activities unless someone talks to the press about the activities in question.

So what’s the solution? The rote advice is to stop using Facebook. While that might be good advice for some, for a lot of other people leaving Facebook is very, very challenging. You might use it to sign into a lot of other services and so don’t think you can easily abandon Facebook. You might have stored years of photos or conversations and Facebook doesn’t give you a nice way to pull them out. It might be a place where all of your friends and family congregate to share information and so leaving would amount to being excised from your core communities. And depending on where you live you might rely on Facebook for finding jobs, community events, or other activities that are essential to your life.

In essence, solving for Facebook, Google, Uber, and all the other large data broker problems is a collective action problem. It’s not a problem that is best solved on an individualistic basis.

A more realistic kind of advice would be this: file complaints to your local politicians. File complaints to your domestic privacy commissioners. File complaints to every conference, academic association, and industry event that takes Facebook money.2 Make it very public and very clear that you and groups you are associated with are offended by the company in question that is profiting off the psychological exploitation of children and adults alike.3 Now, will your efforts to raise attention to the issue and draw negative attention to companies and groups profiting from Facebook and other data brokers stop unethical data exploitation tomorrow? No. But by consistently raising our concerns about how large data brokers collect and use personal information, and attributing some degree of negative publicity to all those who benefit from such practices, we can decrease the public stock of a company.

History is dotted with individuals who are seen as standing up to end bad practices by governments and private companies alike. But behind them tend to be a mass of citizens who support those individuals: while standing up en masse may mean that we don’t each get individual praise for stopping some tasteless and unethical practices, our collective standing up will make it more likely that such practices will be stopped. By each working a little, we can collectively achieve what we’d be hard pressed to change as individuals.

NOTE: This blog was first published on Medium on May 1, 2017.


  1. Other advertising companies adopt the same practices as Facebook. So I’m not suggesting that Facebook is worst-of-class and letting the others off the hook. ↩︎
  2. Replace ‘Facebook’ with whatever company you think is behaving inappropriately, unethically, or perhaps illegally. ↩︎
  3. Surely you don’t think that Facebook is only targeting kids, right? ↩︎
Categories
Writing

Uber and the Limits of Privacy Law

When was the last time that you thought long and hard about the information companies are collecting, sharing, and selling about you? Maybe you thought about it after reading some company had suffered a data breach or questionably used your data, and then set the worries out of your mind.

What you may not know is that most contemporary Western nation-states have established data protection and privacy legislation over the past several decades. A core element of these laws is data access rights: the right for individuals to compel companies to disclose what information the companies have collected, stored, and shared about them.

In Canada, federal commercial privacy legislation lets Canadian citizens and residents request their personal information. They can use an online application to make those requests to telecommunications companies, online dating companies, or fitness wearable companies. Or they can make requests themselves to specific companies on their own.

So, what happens when you make a request to a ride sharing company? A company like Uber? It might surprise you but they tend to provide you with a lot of information about you, pretty quickly, and in surprisingly digestible formats. You can see when you used a ride sharing application to book a ride, the coordinates of the pickup, where you were dropped off, and so forth.

But you don’t necessarily get all of the information that ride sharing companies collect about you. In the case of Uber, the company was recently found to be fingerprinting the phones its application was installed on. There’s some reason to believe that this was for anti-fraud purposes but, regardless, the collection of that information arguably constitutes the collection of personal information. Per Canadian privacy legislation, such information is defined as “information about an identifiable individual” and decisions by the Commissioner have found that if there is even an instant where machine identifiers are linked with identifiable subscriber data, the machine identifiers also constitute personal information. Given that Uber was collecting the fingerprints while the application was installed, it likely was linking those fingerprints with subscriber data, even if only momentarily before subsequently separating the identifiers and other data.

So if Uber had a legal duty to inform individuals about the personal information that it collected, and failed to do so, what is the recourse? Either the Federal Office of the Privacy Commissioner of Canada could launch an investigation or someone who requested their personal information from Uber could file a formal complaint with the Office. That complaint would, pretty simply, argue that Uber had failed to meet its legal obligations by not disclosing the tracking information.

But even if Uber was found to have violated Canadian law there isn’t a huge amount of recourse for affected individuals. There aren’t any fines that can be levied by the Canadian federal commissioner. And Uber might decide that it doesn’t want to implement any recommendations the Privacy Commissioner provides: in Canada, to enforce an order, a company has to be taken to court. Even when companies like Facebook have received recommendations they have selectively implemented them and ignored those that would impact their business model. So ‘enforcement’ tends to be limited to moral suasion when applied by the federal privacy commissioner.1

But the limits of enforcement speak to only part of the problem. What is worse is that we only know about Uber’s deceptive practices because of journalism. It isn’t because the company was forthcoming and proactively disclosed this information well in advance of fingerprinting devices. Other companies can read that signal and know that they can probably engage in questionable and unlawful practices with a pretty low expectation of being caught or punished.

In a recent article published by a summer fellow for the Citizen Lab, Adrian Fong argued that enforcing data protection and privacy laws on individual private companies is likely an untenable practice. Too few companies will be able to figure out how to deal with data access requests, fewer will be inclined to respond to them, and even fewer will understand whether they are obligated to respond to such requests or not in the first place. Instead, Fong argues that application stores — such as Google’s and Apple’s respective App stores — could include comprehensive data access rights as part of the contracts that app developers agree to with the app store owners. Failure to comply with the data access rights aspect of a contract could lead to an app being removed from the app store. Were Google and Apple to seriously implement such a practice then their ability to remove bad actors, such as Uber, from app stores could lead to a modification of business practices.

Ultimately, however, I’m not certain that the ‘solution’ to Uber is better privacy law. It’s probably not even just better regulation. Rather, ‘solving’ for companies like Uber demands changing how engineers and businesspeople are educated and trained, and modifying the grounds on which they’re rewarded and punished for their actions. Greater emphasis on ethical practices and the politics of code needs to be ingrained in their respective educational curricula, just as arts and humanities students should be exposed in more depth to the hard sciences. And engineers, generally, need to learn that they’re not just solving hard problems such as preventing fraudulent rides: they’re also embedding power structures in the code they develop, and those structures can’t just run roughshod over the laws that democratic publics have established to govern private behaviours. Or, at least, if they do run afoul of the law — be it national data protection law or contract law — there will at least be serious consequences. Doing otherwise will simply incentivize companies to act unethically on the basis that there are few, or no, consequences for behaving like a bad actor.

NOTE: this was originally posted to Medium.


  1. Some of Canada’s provincial commissioners do have order making powers. ↩︎