Categories
Roundup Writing

The Roundup for January 6-12, 2018 Edition

The Descent
The Descent by Christopher Parsons

I was listening to ‘Tips from the Top Floor’ this week and, in response to the question of where a listener should consider posting their photos online, Chris Marquardt launched into a good series of questions about why people share news, photos, and other media online. Is it to draw attention to things? Is it to generate likes? Is it to elicit feedback? Or is it for some other reason?

It’s not the first time that I’ve thought about why I, personally, produce and share materials. And there are very different reasons for how and why I write and share in different mediums. Some venues, like Twitter, are where I and my professional colleagues tend to share information with one another while also engaging in (limited) conversations. My professional website, today, is a space where I publish mostly- or totally-complete work to make it accessible to colleagues who are interested in my longer-form materials. My personal website, Excited Pixels, is largely for me: I write, and collect information, because doing so helps me think about the issues and products that I find interesting or noteworthy. I’d be lying if I said I always shared or linked material here that I thought was interesting but my goal is to at least have some material to go back through.

Other places, like Flickr when I used it, were where I stored my photos in case of a serious data disaster like a hard drive crash or house fire. Earlier social networks were really used to share information with my friends (as opposed to colleagues), though I’ve largely stopped publicly writing about deeply personal or day-to-day content at this point.

There was one major new social network service that I regularly used last year: Instagram. It was very, very helpful in forcing me to take more photos and get a lot more comfortable with my cameras and some basic editing software. I can see a difference in the photos I take a year later but, equally as important, I can get the kinds of photos I want faster because I understand my gear a lot better today. I did enjoy looking at really amazing photos on a regular basis but found that the site takes a lot of time, in part because the almost-daily curation of content was a pain in the bum. Furthermore, the time that I spent there meant I wasn’t spending time elsewhere, such as here, or engaging in any number of other pursuits.

This year my ‘new’ social network to try out is micro.blog. And to be honest I don’t know exactly what I think about it. As a plus, the people who are currently using it produce a lot of signal and not a lot of noise, and the blogging tends to be more personal than is common today. It feels like a community of people who have come, and are coming, together. It’s a new network and so there are UI things that are still being developed, and the actual way that it works remains a borderline mystery to me,1 but it’s interesting to watch. And why do I post there? I…don’t entirely know. In part because I’m curious to see how the network develops: it’s sort of like watching Twitter, back when I joined, but where most of the users are more mature and self-aware and mindful of what is being posted.

I’ve always tended to delete almost as much content as I post, not so much because I self-censor2 as because I want to be careful and mindful in what I permanently add to the Internet. One of the benefits of blogging in different venues since the early aughts is that I lived through the blowback that can arise when the stakes were relatively low and consequences minimal. That’s less the case today as a result of the memory of the ‘net combined with the speed at which errors can spin out of control. What once could be forgotten, even online, is now likely semi-permanent at best, and the speed at which an error can go viral, today, is unlike almost any other time in history. Still, the questions raised by Chris apply as much to text-based social media and content production and sharing as they do to photography. It’s helpful to be reminded periodically that the best content is that with which we deliberately engage.


Related to photography, one of my personal goals for this year is to print more of my stuff! The last time I did a lot of printing of my own material was in 2016 and I really want to refresh my frames!

There are a few different ways I’m planning on making my photos a little more physical. First, I’m going to be printing a ‘best of’ album for 2017. I imposed a 50-photo limit to force me to cull, cull, and then cull a lot more. In my initial analysis what’s most striking is that while I might not think that the photos are necessarily the best technical shots I took, they all possess similar kinds of tension and drama. So over the next few months I think that I’m going to consider what went into getting the ‘best of’ shots and then see if there are interesting or novel ways to better fill my shots with more drama.

Second, I’m going to be printing a bunch of photos on canvas for the first time! At the moment I’m thinking I’ll try printing a bunch of 8×8” black and white photos and, above them, 2-3 much larger colour prints (likely in gold frames) to draw some contrast on the empty wall that I have available to me.

Third, I’m going to probably print a bunch of 4×6 shots for the purpose of sending them to family members. I’d like to include a short message on each of the photos; it gives a nice thing to put up on a fridge or wall3 and also a physical artifact with my thoughts about the recipient or whatever is on my mind at the time. I’d actually intended to print and send these to my dad and stepmother last year, in an effort to start repairing our relationship, but sadly wasn’t able to. But I don’t see why a good idea can’t be recycled and used to maintain the relatively good relationships I have with my surviving family members!


Quotation That Resonated With Me

“Belief, as I use the word here, is the insistence that the truth is what one would ‘lief’ or wish it to be. The believer will open his mind to the truth on condition that it fits in with his preconceived ideas and wishes. Faith, on the other hand, is an unreserved opening of the mind to the truth, whatever it may turn out to be. Faith has no preconceptions; it is a plunge into the unknown. Belief clings, but faith lets go.”

-Alan Watts

Amazing Videos

https://m.youtube.com/watch

Great Photography Shots

I really appreciated Helena Georgiou’s portfolio, where she captures ordinary people passing by interesting and vibrant parts of the urban landscape.

Person walking across bridge

Black and white umbrella on yellow grid

Woman in blue and white walking along a yellow, blue, and white wall.

Music I’m Digging

Neat Podcast Episodes

Good Reads for the Week


Footnotes

  1. Little things like…I have no idea what I’m paying a monthly fee for, exactly. I think I need to pay to be a member of the network, or to post to the network, or something? But I really have no idea, and the support documentation you get when you sign up is utterly unclear about just how things work, or why, which makes sense given its relative youth and the technical sophistication of a lot of its early members. I have faith this will improve as its user numbers grow.
  2. Or, at least I don’t self-censor too often. Except when I need to do so to avoid legal jeopardy.
  3. Some of my family have almost entirely bare walls…so I can imagine these photos migrating onto at least one person’s walls.
Categories
Aside Links

The Problem of Botting on Instagram

Calder Wilson at PetaPixel:

Instagram’s Terms of Use make it clear that botting is a no-no. Over the past couple of years the platform has implemented anti-spam/anti-bot restrictions, which do things like prevent accounts from liking too many photos in a short amount of time or commenting the same thing again and again. It’s obvious they oppose using bots ideologically, and it’s very easy to determine who’s using them or not, so why don’t they do something about it?

For one thing, Instagram is killing it right now. Every time Facebook reports their financial earnings, they need to show robust growth in their flagship products; almost just as importantly, they need to show healthy engagement. Growth and engagement are the life forces of Facebook’s stock, and any decrease in either can send shares south.

Now, consider that my @canonbw account was liking over 30,000 photos every month along with thousands and thousands of comments. That doesn’t even include the activity generated from people responding and liking my images/following me in return. If I took every Instagram user I know in my life who doesn’t use a bot, it’s more than likely that my single account generated more “activity” than everyone else over the last year combined.

If we take into account the massive number of people botting every day all around the world, the number of likes and comments are astronomical. It’s very unlikely that this huge engagement engine will ever be shut down by Facebook Inc. The relationship between Instagram and botters is seemingly symbiotic, but I argue that in the long run, Instagram suffers.

The problems linked with false engagement fuel the life of Facebook as a public company, while turning the actual product space into one that is as demoralizing as Facebook itself. A growing number of academic articles are finding correlations between Facebook use and depression, in part linked to how much content is liked. While Instagram use remains relatively strongly correlated with happiness, will this persist with the growing rise of bots?

Categories
Aside Links

Exploited for Advertising

As part of a long-feature for The Guardian:

The techniques these companies use are not always generic: they can be algorithmically tailored to each person. An internal Facebook report leaked this year, for example, revealed that the company can identify when teens feel “insecure”, “worthless” and “need a confidence boost”. Such granular information, Harris adds, is “a perfect model of what buttons you can push in a particular person”.

Tech companies can exploit such vulnerabilities to keep people hooked; manipulating, for example, when people receive “likes” for their posts, ensuring they arrive when an individual is likely to feel vulnerable, or in need of approval, or maybe just bored. And the very same techniques can be sold to the highest bidder. “There’s no ethics,” he says. A company paying Facebook to use its levers of persuasion could be a car business targeting tailored advertisements to different types of users who want a new vehicle. Or it could be a Moscow-based troll farm seeking to turn voters in a swing county in Wisconsin.

Harris believes that tech companies never deliberately set out to make their products addictive. They were responding to the incentives of an advertising economy, experimenting with techniques that might capture people’s attention, even stumbling across highly effective design by accident.

The problems facing many Internet users today are predicated on how companies’ services are paid for: companies do everything they can to capture and hold your attention regardless of your own interests. If there were alternate models of financing social media companies, such as paying small monthly or yearly fees, imagine how different online communications would be: communities would likely be smaller, yes, but the developers would be motivated to do whatever they could to support the communities instead of advertisers targeting those communities. Silicon Valley has absorbed many of the best minds for the past decade and a half in order to make advertisements better. Imagine what would be different if all that talent had been channeled towards less socially destructive outputs.

Categories
Writing

WhatsApp Profits

Facebook’s purchase of WhatsApp made sense in terms of buying a potential competitor before it got too large to threaten Facebook’s understanding of social relationships. The decision to secure communications between WhatsApp users only solidified Facebook’s position that it was less interested in mining the content of communications than in understanding the relationships between each user.

However, as businesses turn to WhatsApp to communicate with their customers a new revenue opportunity has opened for Facebook: compelling businesses to pay some kind of a fee to continue using the service for commercial communications.

WhatsApp will eventually charge companies to use some future features in the two free business tools it started testing this summer, WhatsApp’s chief operating officer, Matt Idema, said in an interview.

The new tools, which help businesses from local bakeries to global airlines talk to customers over the app, reflect a different approach to monetization than other Facebook products, which rely on advertising.

This is Facebook flipping who ‘pays’ for using WhatsApp. Whereas in the past customers paid a small yearly fee, now customers will get it free and businesses will be charged to use it. It remains to be seen, however, whether WhatsApp is ‘sticky’ enough for consumers to genuinely expect businesses to use it for customer communications. Further, Facebook’s payment model will also stand as a contrast between WhatsApp and its Asian competitors, such as LINE and WeChat, which have transformed their messaging platforms into whole social networks that can also be used for robust commercial transactions. Is this the beginning of an equivalent pivot on Facebook’s part or are they, instead, trying out an entirely separate business model in the hopes of not cannibalizing Facebook itself?

Categories
Aside Writing

Limits of Data Access Requests

Last week I wrote about the limits of data access requests, as they related to ride-sharing applications like Uber. A data access request involves you contacting a private company and requesting a copy of your personal information, as well as the ways in which that data is processed, disclosed, and the periods of time for which data is retained.

Research has repeatedly shown that companies are very poor at comprehensively responding to data access requests. Sometimes this is because of divides between technical teams that collect and use the data, policy teams that determine what is and isn’t appropriate to do with data, and legal teams that ascertain whether collections and uses of data comport with the law. In other situations companies simply refuse to respond because they adopt a confused-nationalist understanding of law: if the company doesn’t have an office somewhere then that jurisdiction’s laws aren’t seen as applying to the company, even if the company does business in the jurisdiction.

Automated Data Export As Solution?

Some companies, such as Facebook and Google, have developed automated data download services. Ostensibly these services are designed so that you can download the data you’ve input into the companies, thus revealing precisely what is collected about you. In reality, these services don’t let you export all of the information that these respective companies collect. As a result, when people use these download services they tend to end up with a false impression of just what information the companies collect and how it’s used.

A shining example of the kinds of information that are not revealed to users of these services has come to light. A recently leaked document from Facebook Australia revealed that:

Facebook’s algorithms can determine, and allow advertisers to pinpoint, "moments when young people need a confidence boost." If that phrase isn’t clear enough, Facebook’s document offers a litany of teen emotional states that the company claims it can estimate based on how teens use the service, including "worthless," "insecure," "defeated," "anxious," "silly," "useless," "stupid," "overwhelmed," "stressed," and "a failure."

This targeting of emotions isn’t necessarily surprising: in a past exposé we learned that Facebook conducted experiments during an American presidential election to see if they could sway voters. Indeed, the company’s raison d’être is to figure out how to pitch ads to customers, and figuring out when Facebook users are more or less likely to be affected by advertisements is just good business. If you use the self-download service provided by Facebook, or any other data broker, you will not receive data on how and why your data is exploited: without understanding how their algorithms act on the data they collect from you, you can never really understand how your personal information is processed.

But that raison d’être of pitching ads to people — which is why Facebook could internally justify the deliberate targeting of vulnerable youth — ignores baseline ethics of whether it is appropriate to exploit our psychology to sell us products. To be clear, this isn’t a company stalking you around the Internet with ads for a car or couch or jewelry that you were browsing for. This is a deliberate effort to mine your communications to sell products at times of psychological vulnerability. The difference is between somewhat stupid tracking versus deliberate exploitation of our emotional state.1

Solving for Bad Actors

There are laws around what you can do with the information provided by children. Whether Facebook’s actions run afoul of such laws may never actually be tested in a court or privacy commissioner’s decision. In part, this is because actually mounting legal challenges is extremely challenging, expensive, and time-consuming. These hurdles automatically tilt the balance towards activities such as this continuing, even if Facebook stops this particular activity. But, also, part of the problem is Australia’s historically weak privacy commissioner as well as the limitations of such offices around the world: privacy commissioners’ offices are often understaffed, under-resourced, and unable to chase every legally and ethically questionable practice undertaken by private companies. Companies know about these limitations and, as such, know they can get away with unethical and frankly illegal activities unless someone talks to the press about the activities in question.

So what’s the solution? The rote advice is to stop using Facebook. While that might be good advice for some, for a lot of other people leaving Facebook is very, very challenging. You might use it to sign into a lot of other services and so don’t think you can easily abandon Facebook. You might have stored years of photos or conversations and Facebook doesn’t give you a nice way to pull them out. It might be a place where all of your friends and family congregate to share information and so leaving would amount to being excised from your core communities. And depending on where you live you might rely on Facebook for finding jobs, community events, or other activities that are essential to your life.

In essence, solving for Facebook, Google, Uber, and all the other large data broker problems is a collective action problem. It’s not a problem that is best solved on an individualistic basis.

A more realistic kind of advice would be this: file complaints to your local politicians. File complaints to your domestic privacy commissioners. File complaints to every conference, academic association, and industry event that takes Facebook money.2 Make it very public and very clear that you and groups you are associated with are offended by the company in question that is profiting off the psychological exploitation of children and adults alike.3 Now, will your efforts to raise attention to the issue and draw negative attention to companies and groups profiting from Facebook and other data brokers stop unethical data exploitation tomorrow? No. But by consistently raising our concerns about how large data brokers collect and use personal information, and attributing some degree of negative publicity to all those who benefit from such practices, we can decrease the public stock of a company.

History is dotted with individuals who are seen as standing up to end bad practices by governments and private companies alike. But behind them tend to be a mass of citizens who are supportive of those individuals: while standing up en masse may mean that we don’t each get individual praise for stopping some tasteless and unethical practices, our collective standing up will make it more likely that such practices will be stopped. By each working a little, we can collectively accomplish what we’d be hard pressed to change as individuals.

NOTE: This blog was first published on Medium on May 1, 2017.


  1. Other advertising companies adopt the same practices as Facebook. So I’m not suggesting that Facebook is worst-of-class and letting the others off the hook. ↩︎
  2. Replace ‘Facebook’ with whatever company you think is behaving inappropriately, unethically, or perhaps illegally. ↩︎
  3. Surely you don’t think that Facebook is only targeting kids, right? ↩︎
Categories
Links Writing

Partnering to help curb the spread of terrorist content online

Facebook, Microsoft, Twitter, and YouTube are coming together to help curb the spread of terrorist content online. There is no place for content that promotes terrorism on our hosted consumer services. When alerted, we take swift action against this kind of content in accordance with our respective policies.

Starting today, we commit to the creation of a shared industry database of “hashes” — unique digital “fingerprints” — for violent terrorist imagery or terrorist recruitment videos or images that we have removed from our services. By sharing this information with each other, we may use the shared hashes to help identify potential terrorist content on our respective hosted consumer platforms. We hope this collaboration will lead to greater efficiency as we continue to enforce our policies to help curb the pressing global issue of terrorist content online.

The creation of the industry database of hashes shows the world that these companies are ‘doing something’ without that something being particularly onerous: any change to a file will result in it having a different hash, making it undetectable by the filtering system being rolled out by these companies. But that technical deficiency is actually the least interesting aspect of what these companies are doing. Rather than being compelled to inhibit speech – by way of a law that might not hold up to a First Amendment challenge in the United States – the companies are voluntarily adopting this process.
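To illustrate that deficiency, here is a minimal sketch in Python. It assumes an exact cryptographic digest (SHA-256) stands in for whatever matching scheme the companies actually use, which has not been publicly specified:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return a hex-encoded SHA-256 digest as the content's 'fingerprint'."""
    return hashlib.sha256(data).hexdigest()

original = b"...bytes of some flagged video file..."
modified = original + b"\x00"  # append a single byte to the file

h1 = fingerprint(original)
h2 = fingerprint(modified)

# A one-byte change yields a completely unrelated digest, so an
# exact-match hash database will fail to recognize the altered file.
print(h1 == h2)  # False
```

Perceptual hashing schemes (such as Microsoft’s PhotoDNA) tolerate small alterations better than a cryptographic digest does, but re-encoded or deliberately perturbed files can still slip past them.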

The result is that some files will be more challenging to find without someone putting in the effort to seek them out. But it also means that the governments of the world cannot say that the companies aren’t doing anything, and most people aren’t going to be interested in the nuances of the technical deficits of this mode of censorship. So what we’re witnessing is (another) privatized method of censorship that is arguably more designed to rebut political barbs about the discoverability of horrible material on these companies’ services than intended to ‘solve’ the actual problem of the content’s creation and baseline availability.

While a realist might argue that anything is better than nothing, I think that the very existence of these kinds of filtering and censoring programs is inherently dangerous. While it’s all well and good for ‘bad content’ to be blocked, who will be defining what is ‘bad’? And how likely is it that, at some point, ‘good’ content will be either intentionally or accidentally blocked? These are systems that can be used in a multitude of ways once established, and which are often incredibly challenging to retire once in operation.

Categories
Links Writing

Can @Jack Save Twitter?

A long read by the author of Hatching Twitter: A True Story of Money, Power, Friendship, and Betrayal, which unpacks the return of one of Twitter’s co-founders. It’s an instructive read into the poisonous culture of Twitter and the backbiting that characterizes the company…and seemingly has meant that it’s been unable to really determine what it’s about, for whom, and how it will be profitable to investors. The end is particularly telling, insofar as Twitter is seen as having one last chance — to succeed in ‘live’ events — or else have to potentially sell to a Microsoft or equivalent staid technology company.

Categories
Links

Social Media Privacy – Part I

Social Media Privacy – Part I:

One in three anglophone Canadians say that not a single day goes by without checking into their social media feeds. Use of such applications has increased. On top of that, there is growing concern over how much information is being shared online and who may have access to it. Has the government been doing enough to protect Canadians? Is the social media industry being proactive or reactive? Will government institutions such as CSIS and CSES increase their monitoring of users in light of recent events? We will explore the current situation, what the future holds and what social media users can do to protect their information.

This week’s expert guests are:

  • Christopher Parsons, Postdoctoral Fellow at the Citizen Lab in the Munk School of Global Affairs at the University of Toronto and a Principal at Block G Privacy and Security Consulting
  • Avner Levin, Director of the Privacy and Cyber Crime Institute at Ryerson University, Associate Professor at the Ted Rogers School of Management, and Chair of the Law & Business Department
  • Sharon Polsky, President of the Privacy and Access Council of Canada


Categories
Links

Should you worry about social media surveillance?

Should you worry about social media surveillance?


Categories
Links Quotations

The Canadian Government Wants to Pay More People to Creep Your Facebook

The Canadian Government Wants to Pay More People to Creep Your Facebook:

But government social media monitoring could very easily cross over into a legal gray area. Christopher Parsons, a cybersurveillance researcher at the University of Toronto’s Citizen Lab, said the collection of personal data from online sources needs to be rigorously justified, and even when it is, the data needs to be handled and stored safely.

“The government can’t just collect information about Canadians—even from public sourced data repositories such as social media—just because it wants to,” said Parsons in an email to me. “There have to be terms set on the collection, handling, disclosure, and disposal of personal information that the government wants to gather. As a result, even when data is collected for legitimate reasons that doesn’t mean the data can then be used in any way that the government (subsequently) decides.”

Strict oversights into how the government gleans and uses this intelligence—even in the service of testing policy reactions, as Parsons thinks this service will likely do—is required.

According to Parsons, that comes in the form of internal “privacy impact assessments” related to the specific social media surveillance program.

“Government agencies are supposed to conduct such assessments before collecting Canadians’ personal information and explain the specifics of how and why they will collect Canadians’ personal data,” said Parsons.

In the medium term, it appears Canadians can count on more of their tweets to be sucked up into a government social media surveillance system—then potentially shared across government departments.

Parsons told me that the sharing of the personal data of Canadians, in general, is only becoming more pervasive across government agencies.

“There has been a marked increase in the sharing of personal data between and across different departments because information is initially being collected for vague or far-sweeping reasons. Were social media information collected for similarly vague reasons then the government could then try to expansively share collected information across government,” he said.