Photography and Social Media

(Passer By by Christopher Parsons)

Why do we want to share our photos online? What platforms are better or worse to use in sharing images? These are some of the questions I’ve been pondering for the past few weeks.

Backstory

About a month ago a colleague stated that she would be leaving Instagram given the nature of Facebook’s activities and the company’s seeming lack of remorse. Her decision has stuck with me and left me wondering whether I want to follow her lead.

I deleted my Facebook accounts some time ago, and have almost entirely migrated my community away from WhatsApp. But as an amateur photographer I’ve hesitated to leave an app that was, at least initially, designed with photographers in mind. I’ve used the application over the years to develop and improve my photographic abilities and so there’s an element of ‘sunk cost’ that has historically factored into my decision to stay or leave.

But Instagram isn’t really for photographers anymore. The app is increasingly stuffed with either videos or ads, and is meant to create a soft landing point for when/if Facebook truly pivots away from its main Facebook app.1 The company’s pivot makes it a lot easier to justify leaving the application though, at the same time, it leaves me wondering what application or platform, if any, I want to move my photos over to.

The Competition(?)

Over the past week or two I’ve tried Flickr.2 While it’s the OG of photo-sharing sites, its mobile apps are just broken. I can’t create albums unless I use the web app. Sharing straight from the Apple Photos app is janky. I worry (for no good reason, really) about the cost of the professional version (do I even need that as an amateur?) as well as the annoyance of tagging photos in order to ‘find my tribe.’

It’s also not apparent to me how much community truly exists on Flickr: the whole platform seems a bit like a graveyard with only a small handful of active photographers still inhabiting the space.

I’m also trying Glass at the moment. It’s not perfect: search is non-existent, you can’t share your gallery of photos with non-Glass users at the moment, discovery is a bit rough, there’s no Web version, and it’s currently iPhone only. However, I do like that the app (and its creators) is focused on sharing images and that it has a clear monetization schema in the form of a yearly subscription. The company’s formal roadmap also indicates that some of these rough edges may be filed away in the coming months.

I also like that Glass doesn’t require me to develop a tagging system (that’s all done in the app using presets), lets me share quickly and easily from the Photos app, looks modern, and has a relatively low yearly subscription cost. And, at least so far, most of the comments are better than on the other platforms, which I think is important to developing my own photography.

Finally, there’s my blog here! And while I like to host photo series here, this site isn’t really designed as a photo blog first and foremost. Part of the problem is that WordPress continues to suck for posting media in my experience but, more substantively, this blog hosts a lot more text than images. I don’t foresee changing this focus anytime in the near or even distant future.

The Necessity of Photo Sharing?

It’s an entirely fair question to ask why even bother sharing photos with strangers. Why not just keep my images on my devices and engage in my own self-critique?

I do engage in such critique, but I’ve personally learned more from putting my images into the public eye than I would just by keeping them on my own devices.3 Some of that learning has come from comments but, also, from what people have ‘liked’ or left emoji comments on. These kinds of signals have helped me better understand what makes a photograph better or worse.

However, at this point I don’t think that likes and emojis are the source of my future photography development: I want actual feedback, even if it’s limited to just a sentence or so. I’m hoping that Glass might provide that kind of feedback though I guess only time will tell.


  1. For a good take on Facebook and why it’s functionally ‘over’ as a positive brand, check out M.G. Siegler’s article, “Facebook is Too Big, Fail.” ↩︎
  2. This is my second time with Flickr, as I closed a very old account several years ago given that I just wasn’t using it. ↩︎
  3. If I’m entirely honest, I bet I’ve learned as much or more from reading photography teaching/course books, but that’s a different kind of learning entirely. ↩︎
Link

The So-Called Privacy Problems with WhatsApp

(Photo by Anton on Pexels.com)

ProPublica, which is typically known for its excellent journalism, published a particularly terrible piece earlier this week that fundamentally miscast how encryption works and how Facebook vis-a-vis WhatsApp works to keep communications secured. The article, “How Facebook Undermines Privacy Protections for Its 2 Billion WhatsApp Users,” focuses on two so-called problems.

The So-Called Privacy Problems with WhatsApp

First, the authors explain that WhatsApp has a system whereby recipients of messages can report content they have received to WhatsApp on the basis that it is abusive or otherwise violates WhatsApp’s Terms of Service. The article frames this reporting process as a way of undermining privacy on the basis that secured messages are not kept solely between the sender(s) and recipient(s) of the communications but can be sent to other parties, such as WhatsApp. In effect, the ability to voluntarily forward messages to WhatsApp that someone has received is cast as breaking the privacy promises that have been made by WhatsApp.

Second, the authors note that WhatsApp collects a large volume of metadata in the course of people using the application. Using lawful processes, government agencies have compelled WhatsApp to disclose metadata on some of their users in order to pursue investigations and secure convictions against individuals. The case the article focuses on involves a government employee who leaked confidential banking information to BuzzFeed, which subsequently reported on it.

Assessing the Problems

In the case of forwarding messages for abuse reporting purposes, encryption is not broken and the feature is not new. These kinds of processes offer a mechanism that lets individuals self-identify and report on problematic content. Such content can include child grooming, the communications of illicit or inappropriate messages or audio-visual content, or other abusive information.
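To make the mechanics concrete, here is a minimal sketch (in Python, with hypothetical function names and a hypothetical endpoint; it is not WhatsApp’s actual implementation) of what user-initiated reporting looks like in an end-to-end encrypted messenger: the recipient’s device already holds the decrypted plaintext, and ‘reporting’ simply means the client voluntarily re-sends some of that plaintext to the provider’s moderation queue.

```python
# A minimal sketch of user-initiated abuse reporting in an end-to-end
# encrypted messenger. All names here (report_abuse, MODERATION_ENDPOINT,
# etc.) are hypothetical; this is not WhatsApp's actual implementation.

import json
import urllib.request

MODERATION_ENDPOINT = "https://moderation.example.com/reports"  # hypothetical URL


def report_abuse(recent_plaintexts: list[str], reporter_id: str, reported_id: str) -> None:
    """Voluntarily forward messages the recipient has already decrypted.

    The provider never gains the ability to decrypt traffic on its own; it
    only receives what the reporting user chooses to hand over, sent over
    TLS like any other API call. No keys are disclosed and the encryption
    protocol itself is untouched.
    """
    payload = {
        "reporter": reporter_id,
        "reported": reported_id,
        # plaintext that already exists on the recipient's device after decryption
        "messages": recent_plaintexts,
    }
    request = urllib.request.Request(
        MODERATION_ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(request)
```

The point of the sketch is simply that the reporting path sits entirely on the recipient’s side of the encryption boundary: nothing about the cryptography changes when such a feature is added.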

What we do learn, however, is that the ‘reactive’ and ‘proactive’ methods of detecting abuse need to be fixed. In the case of the former, only about 1,000 people are responsible for receiving and reviewing the reported content after it has first been filtered by an AI:

Seated at computers in pods organized by work assignments, these hourly workers use special Facebook software to sift through streams of private messages, images and videos that have been reported by WhatsApp users as improper and then screened by the company’s artificial intelligence systems. These contractors pass judgment on whatever flashes on their screen — claims of everything from fraud or spam to child porn and potential terrorist plotting — typically in less than a minute.


Further, the employees are often reliant on machine learning-based translations of content, which makes it challenging to assess what is, in fact, being communicated in abusive messages. As reported,

… using Facebook’s language-translation tool, which reviewers said could be so inaccurate that it sometimes labeled messages in Arabic as being in Spanish. The tool also offered little guidance on local slang, political context or sexual innuendo. “In the three years I’ve been there,” one moderator said, “it’s always been horrible.”

There are also proactive modes of watching for abusive content using AI-based systems. As noted in the article,

Artificial intelligence initiates a second set of queues — so-called proactive ones — by scanning unencrypted data that WhatsApp collects about its users and comparing it against suspicious account information and messaging patterns (a new account rapidly sending out a high volume of chats is evidence of spam), as well as terms and images that have previously been deemed abusive. The unencrypted data available for scrutiny is extensive. It includes the names and profile images of a user’s WhatsApp groups as well as their phone number, profile photo, status message, phone battery level, language and time zone, unique mobile phone ID and IP address, wireless signal strength and phone operating system, as well as a list of their electronic devices, any related Facebook and Instagram accounts, the last time they used the app and any previous history of violations.

Unfortunately, the AI often makes mistakes. This led one interviewed content reviewer to state that, “[t]here were a lot of innocent photos on there that were not allowed to be on there … It might have been a photo of a child taking a bath, and there was nothing wrong with it.” Often, “the artificial intelligence is not that intelligent.”
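That ‘proactive’ queue, in other words, is pattern-matching over data WhatsApp already holds in the clear rather than over message content. A toy sketch of that kind of heuristic (in Python, with invented field names and thresholds) might look like the following; its bluntness helps explain why reviewers find the output so unreliable.

```python
# A toy sketch of a 'proactive' abuse heuristic that consumes only unencrypted
# account metadata, in the spirit of what the article describes. Field names
# and thresholds are invented for illustration.

from dataclasses import dataclass, field
from datetime import datetime, timedelta


@dataclass
class AccountMetadata:
    created_at: datetime
    messages_sent_last_hour: int
    distinct_recipients_last_hour: int
    group_names: list[str] = field(default_factory=list)  # unencrypted group subjects
    prior_violations: int = 0


FLAGGED_TERMS = {"example-banned-term"}  # placeholder for previously flagged terms


def queue_for_review(meta: AccountMetadata, now: datetime) -> bool:
    """Return True if the account should be placed in a human-review queue."""
    is_new_account = (now - meta.created_at) < timedelta(days=2)

    # "a new account rapidly sending out a high volume of chats is evidence of spam"
    if is_new_account and meta.messages_sent_last_hour > 500:
        return True
    if meta.distinct_recipients_last_hour > 200:
        return True

    # compare unencrypted group names against terms previously deemed abusive
    if any(term in name.lower() for name in meta.group_names for term in FLAGGED_TERMS):
        return True

    return meta.prior_violations >= 3
```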

The vast collection of metadata has been a long-reported concern associated with WhatsApp and, in fact, is one of the reasons why many individuals advocate for the use of Signal instead. The reporting in the ProPublica article helpfully summarizes the vast amount of metadata that is collected, but that collection, in and of itself, does not present any evidence that Facebook or WhatsApp have transformed the application into one which inappropriately intrudes into persons’ privacy.

ProPublica Sets Back Reasonable Encryption Policy Debates

The ProPublica article harmfully sets back broader policy discussion around what is, and is not, a reasonable approach for platforms to take in moderating abuse when they have integrated strong end-to-end encryption. Such encryption prevents unauthorized third parties–inclusive of the platform providers themselves–from reading or analyzing the content of communications. Enabling a reporting feature means that individuals who receive a communication are empowered to report it to the company, which can subsequently analyze what has been sent and take action if the content violates a terms of service or privacy policy clause.

In suggesting that what WhatsApp has implemented is somehow wrong, the article makes it more challenging for other companies to deploy similar reporting features without fearing that their decision will be reported on as ‘undermining privacy’. While there may be a valid policy discussion to be had–is a reporting process the correct way of dealing with abusive content and messages?–the authors didn’t go there. Nor did they seriously investigate whether additional resources should be devoted to analyzing reported content, or talk with artificial intelligence experts or machine-translation experts about whether Facebook’s efforts to automate the reporting process are adequate, appropriate, or flawed from the start. All of those would be very interesting, valid, and important contributions to the broader discussion about integrating trust and safety features into encrypted messaging applications. But…those are not things that the authors chose to delve into.

The authors could also have discussed the broader importance of (and challenges in) building out messaging systems that deliberately conceal metadata, and the benefits and drawbacks of such systems. While the authors do discuss how metadata can be used to crack down on individuals in government who leak data, as well as assist in criminal investigations and prosecutions, there is little said about what kinds of metadata are most important to conceal and the tradeoffs in doing so. Again, there are some who think that all or most metadata should be concealed, and others who hold opposite views: there is room for a reasonable policy debate to be had and reported on.

Unfortunately, instead of actually taking up and reporting on the very valid policy discussions that are at the edges of their article, the authors chose to be bombastic, asserting that WhatsApp was undermining the privacy protections that individuals thought they had when using the application. It’s bad reporting, insofar as it distorts the facts, and it is particularly disappointing given that ProPublica has shown it has the chops to do good investigative work that is well sourced and nuanced in its outputs. This article, however, absolutely failed to make the cut.

Link

Facebook Prioritizes Growth Over Social Responsibility

Karen Hao writing at MIT Technology Review:

But testing algorithms for fairness is still largely optional at Facebook. None of the teams that work directly on Facebook’s news feed, ad service, or other products are required to do it. Pay incentives are still tied to engagement and growth metrics. And while there are guidelines about which fairness definition to use in any given situation, they aren’t enforced.

The Fairness Flow documentation, which the Responsible AI team wrote later, includes a case study on how to use the tool in such a situation. When deciding whether a misinformation model is fair with respect to political ideology, the team wrote, “fairness” does not mean the model should affect conservative and liberal users equally. If conservatives are posting a greater fraction of misinformation, as judged by public consensus, then the model should flag a greater fraction of conservative content. If liberals are posting more misinformation, it should flag their content more often too.

But members of Kaplan’s team followed exactly the opposite approach: they took “fairness” to mean that these models should not affect conservatives more than liberals. When a model did so, they would stop its deployment and demand a change. Once, they blocked a medical-misinformation detector that had noticeably reduced the reach of anti-vaccine campaigns, the former researcher told me. They told the researchers that the model could not be deployed until the team fixed this discrepancy. But that effectively made the model meaningless. “There’s no point, then,” the researcher says. A model modified in that way “would have literally no impact on the actual problem” of misinformation.

[Kaplan’s] claims about political bias also weakened a proposal to edit the ranking models for the news feed that Facebook’s data scientists believed would strengthen the platform against the manipulation tactics Russia had used during the 2016 US election.

The whole thing with ethics is that they have to be integrated such that they underlie everything that an organization does; they cannot function as public relations add-ons. Sadly, at Facebook the only ethic is growth at all costs, the social implications be damned.

When someone or some organization is responsible for causing significant civil unrest, deaths, or genocide, we expect those who are even partly responsible to be called to account, not just in the public domain but in courts of law and international justice. And when those someones happen to be leading executives of one of the biggest companies in the world, the solution isn’t to berate them in Congressional hearings and hear their weak apologies, but to take real action against them and their companies.

Link

Explaining WhatsApp’s Encryption for Business Communications

Shoshana Wodinsky writing for Gizmodo has a lengthy, and detailed, breakdown of how and why WhatsApp is modifying its terms of service to facilitate consumer-to-business communications. The crux of the shift, really, comes down to:

… in the years since WhatsApp co-founders Jan Koum and Brian Acton cut ties with Facebook for, well, being Facebook, the company slowly turned into something that acted more like its fellow Facebook properties: an app that’s kind of about socializing, but mostly about shopping. These new privacy policies are just WhatsApp’s—and Facebook’s—way of finally saying the quiet part out loud.

What’s going to change? Whenever you’re speaking with a business, those communications will not be considered end-to-end encrypted and, as such, the accessible communications content and metadata can be used for advertising and other marketing, data mining, data targeting, or data exploitation purposes. If you’re just chatting with individuals–that is, not businesses!–then your communications will continue to be end-to-end encrypted.

For an additional, and perhaps longer, discussion of how WhatsApp’s shift in policy–now, admittedly, delayed for a few months following public outrage–is linked to the goal of driving business revenue into the company, check out Alec Muffett’s post over on his blog. (By way of background, Alec’s been in the technical security and privacy space for 30+ years, and is a good and reputable voice on these matters.)

Aside

2018.9.6

I think I’m going to actively crosspost all the photos I post on Instagram here, on my personal website, as well. I find more value posting photos on Instagram because that’s where my community is but, at the same time, I’m loath to leave my content existing exclusively on a third-party’s infrastructure. Especially when that infrastructure is owned by Facebook.

Limits of Data Access Requests

Photo by rawpixel on Unsplash

A data access request involves contacting a private company and requesting a copy of your personal information, as well as details about how that data is processed and disclosed, and the periods of time for which it is retained.

I’ve conducted research over the past decade which has repeatedly shown that companies are often very poor at comprehensively responding to data access requests. Sometimes this is because of divides between technical teams that collect and use the data, policy teams that determine what is and isn’t appropriate to do with data, and legal teams that ascertain whether collections and uses of data comport with the law. In other situations companies simply refuse to respond because they adopt a confused-nationalist understanding of law: if the company doesn’t have an office somewhere in a requesting party’s country then that jurisdiction’s laws aren’t seen as applying to the company, even if the company does business in the jurisdiction.

Automated Data Export As Solution?

Some companies, such as Facebook and Google, have developed automated data download services. Ostensibly these services are designed so that you can download the data you’ve input into the companies, thus revealing precisely what is collected about you. In reality, these services don’t let you export all of the information that these respective companies collect. As a result, when people use these download services they tend to come away with a false impression of just what information the companies collect and how it’s used.

A shining example of the kinds of information that are not revealed to users of these services has come to light. A leaked document from Facebook Australia revealed that:

Facebook’s algorithms can determine, and allow advertisers to pinpoint, “moments when young people need a confidence boost.” If that phrase isn’t clear enough, Facebook’s document offers a litany of teen emotional states that the company claims it can estimate based on how teens use the service, including “worthless,” “insecure,” “defeated,” “anxious,” “silly,” “useless,” “stupid,” “overwhelmed,” “stressed,” and “a failure.”

This targeting of emotions isn’t necessarily surprising: in a past exposé we learned that Facebook conducted experiments during an American presidential election to see if they could sway voters. Indeed, the company’s raison d’être is to figure out how to pitch ads to customers, and figuring out when Facebook users are more or less likely to be affected by advertisements is just good business. If you use the self-download service provided by Facebook, or any other data broker, you will not receive data on how and why your data is exploited: without understanding how their algorithms act on the data they collect from you, you can never really understand how your personal information is processed.

But that raison d’être of pitching ads to people — which is why Facebook could internally justify the deliberate targeting of vulnerable youth — ignores baseline ethics of whether it is appropriate to exploit our psychology to sell us products. To be clear, this isn’t a company stalking you around the Internet with ads for a car or couch or jewelry that you were browsing about. This is a deliberate effort to mine your communications to sell products at times of psychological vulnerability. The difference is between somewhat stupid tracking and the deliberate exploitation of our emotional state.1

Solving for Bad Actors

There are laws around what you can do with the information provided by children. Whether Facebook’s actions run afoul of such law may never actually be tested in a court or privacy commissioner’s decision. In part, this is because mounting legal challenges is extremely difficult, expensive, and time consuming. These hurdles automatically tilt the balance towards activities such as this continuing.

But part of the challenge in stopping such exploitative activities is also linked to Australia’s historically weak privacy commissioner, as well as the limitations of such offices around the world: privacy commissioners’ offices are often understaffed, under-resourced, and unable to chase every legally and ethically questionable practice undertaken by private companies. Companies know about these limitations and, as such, know they can get away with unethical and frankly illegal activities unless someone talks to the press about the activities in question.

So what’s the solution? The rote advice is to stop using Facebook. While that might be good advice for some, for a lot of other people leaving Facebook is very, very challenging. You might use it to sign into a lot of other services, and so you don’t think you can easily abandon it. You might have stored years of photos or conversations that Facebook doesn’t give you a nice way to pull out. It might be the place where all of your friends and family congregate to share information, so leaving would amount to being excised from your core communities. And depending on where you live, you might rely on Facebook for finding jobs, community events, or other activities that are essential to your life.

In essence, solving for Facebook, Google, Uber, and all the other large data broker problems is a collective action problem. It’s not a problem that is best solved on an individualistic basis.

A more realistic kind of advice would be this: file complaints to your local politicians. File complaints to your domestic privacy commissioners. File complaints to every conference, academic association, and industry event that takes Facebook money.2 Make it very public and very clear that you and groups you are associated with are offended by the company in question that is profiting off the psychological exploitation of children and adults alike.3 Now, will your efforts to raise attention to the issue and draw negative attention to companies and groups profiting from Facebook and other data brokers stop unethical data exploitation tomorrow? No. But by consistently raising our concerns about how large data brokers collect and use personal information, and attributing some degree of negative publicity to all those who benefit from such practices, we can decrease the public stock of a company.

History is dotted with individuals who are seen as standing up to end bad practices by governments and private companies alike. But behind them tend to be a mass of citizens who support those individuals: while standing up en masse may mean that we don’t each get individual praise for stopping some tasteless and unethical practices, our collective standing up will make it more likely that such practices are stopped. By each working a little, we can change things that, individually, we’d be hard pressed to change.

(This article was previously published in a slightly different format on a now-defunct Medium account.)

Footnotes:

1 Other advertising companies adopt the same practices as Facebook. So I’m not suggesting that Facebook is worst-of-class and letting the others off the hook.

2 Replace ‘Facebook’ with whatever company you think is behaving inappropriately, unethically, or perhaps illegally.

3 Surely you don’t think that Facebook is only targeting kids, right?

Link

MPs consider contempt charges for Canadian company linked to Cambridge Analytica after raucous committee meeting

AggregateIQ executives came to answer questions before a Canadian parliamentary committee. Then they had the misfortune of dealing with the well-connected UK Information Commissioner, Elizabeth Denham:

At Tuesday’s committee meeting, MPs pressed Silvester and Massingham on their company’s work during the Brexit referendum, for which they are currently under investigation in the UK over possible violations of campaign spending limits. Under questioning from Liberal MP Nathaniel Erskine-Smith, Silvester and Massingham insisted they had fully cooperated with the UK information commissioner Elizabeth Denham. But as another committee member, Liberal MP Frank Baylis, took over the questioning, Erskine-Smith received a text message on his phone from Denham which contradicted the pair’s testimony.

Erskine-Smith handed his phone to Baylis, who read the text aloud.  “AIQ refused to answer her specific questions relating to data usage during the referendum campaign, to the point that the UK is considering taking further legal action to secure the information she needs,” Denham’s message said.

Silvester replied that he had been truthful in all his answers and said he would be keen to follow up with Denham if she had more questions.

It’s definitely a bold move to inform parliamentarians, operating in a friendly but foreign jurisdiction, that they’re being misled by one of their witnesses. So long as such communications don’t overstep boundaries — such as enabling a government official to engage in a public witch hunt of a given person or group — these sorts of communications seem essential when dealing with groups which have spread themselves across multiple jurisdictions and are demonstrably behaving untruthfully.

The Roundup for April 14-20, 2018 Edition

Walkways by Christopher Parsons

Earlier this year, I suggested that the current concerns around Facebook data being accessed by unauthorized third parties wouldn’t result in users leaving the social network in droves. Not just because people would be disinclined to actually leave the social network but because so many services use Facebook.

Specifically, one of the points that I raised was:

3. Facebook is required to log into a lot of third party services. I’m thinking of services from my barber to Tinder. Deleting Facebook means it’s a lot harder to get a haircut and impossible to use something like Tinder.

At least one company, Bumble, is changing its profile confirmation methods: whereas previously all Bumble users linked their Facebook information to their Bumble account for identification, the company is now developing its own verification system. Should a significant number of companies end up following Bumble’s model, this could have a significant impact on Facebook’s popularity, as some of the ‘stickiness’ of the service would be diminished.1

I think that people moving away from Facebook is a good thing. But it’s important to recognize that the company doesn’t just provide social connectivity: Facebook has also made it easier for businesses to secure login credentials and (in other cases) ‘verify’ identity.2 In effect, one of the trickiest parts of onboarding customers has been handled by a third party that was well resourced to both collect the data and secure it from breaches. As smaller companies assume these responsibilities, without the equivalent of Facebook’s security staff, they are going to have to get very good, very fast, at protecting their customers’ information from data breaches. While it’s certainly not impossible for smaller companies to rise to the challenge, it won’t be a cost-free endeavour, either.
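As a rough illustration of that trade-off, here is a minimal Python sketch (the identity-provider endpoint and field names are hypothetical; this is not Facebook’s actual login API) contrasting delegated login with running your own credential store, the latter being the burden a company takes on when it drops third-party login:

```python
# A sketch of why delegated login ("Sign in with a big platform") is sticky for
# small services, versus rolling your own credential system. The identity
# provider URL and field names below are illustrative, not Facebook's real API.

import hashlib
import hmac
import json
import urllib.request

IDP_VERIFY_URL = "https://idp.example.com/verify_token"  # hypothetical endpoint


def login_with_identity_provider(user_token: str) -> dict:
    """Delegated login: ask the provider whether the token is valid and who it
    belongs to. No passwords are stored or checked on our side."""
    with urllib.request.urlopen(f"{IDP_VERIFY_URL}?token={user_token}") as resp:
        return json.load(resp)  # e.g. {"user_id": "...", "verified": true}


# Rolling your own system means storing credentials and defending them.
_PASSWORD_DB: dict[str, bytes] = {}  # email -> salted password hash (toy in-memory store)


def _hash_password(password: str, salt: bytes) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, 200_000)


def register(email: str, password: str) -> None:
    salt = hashlib.sha256(email.encode("utf-8")).digest()  # toy salt; use os.urandom in practice
    _PASSWORD_DB[email] = _hash_password(password, salt)


def login_with_own_system(email: str, password: str) -> bool:
    salt = hashlib.sha256(email.encode("utf-8")).digest()
    stored = _PASSWORD_DB.get(email)
    return stored is not None and hmac.compare_digest(stored, _hash_password(password, salt))
```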

It will be interesting to see if more companies move over to Bumble’s approach or if, instead, businesses and consumers alike merely shake their heads angrily at Facebook and continue to use the service despite its failings. For what it’s worth, I continue to think that people will just shake their heads angrily and little will actually come of the Cambridge Analytica story in terms of affecting the behaviours and desires of most Facebook users, unless there are continued rapid and sustained violations of Facebook users’ trust. But hope springs eternal and so I genuinely do hope that people shift away from Facebook and towards more open, self-owned, and interesting communications and networking platforms.


Thoughtful Quotation of the Week

The brands themselves aren’t the problem, though: we all need some stuff, so we rely on brands to create the things we need. The problem arises when we feel external pressure to acquire as if new trinkets are a shortcut to a more complete life. That external pressure shouldn’t be a sign to consume. If anything, it’s a sign to pause and ask, “Who am I buying this for?”

Great Photography Shots

I was really stunned by Zsolt Hlinka’s architectural photography, which was featured on My Modern MET.

Music I’m Digging

Neat Podcast Episodes

Good Reads for the Week

Cool Things

Footnotes

  1. I think that the other reasons I listed in my earlier post will still hold. Those points were:

    1. Few people vote. And so they aren’t going to care that some shady company was trying to affect voting patterns.
    2. Lots of people rely on Facebook to keep passive track of the people in their lives. Unless communities, not individuals, quit there will be immense pressure to remain part of the network.

  2. I’m aware that it’s easy to establish a fake Facebook account and that such activity is pretty common. Nevertheless, an awful lot of people use their ‘real’ Facebook accounts that have real verification information, such as email addresses and phone numbers.

Facebook Isn’t Going Anywhere

In the wake of the Cambridge Analytica scandal, there are calls for people to delete their Facebook accounts. Similar calls have gone out in the past following Facebook-related scandals. As the years have unfolded following each scandal, Facebook has become more and more integrated into people’s lives while, at the same time, more and more people claim to dislike the service. I’m confident that some thousands of people will delete (or at least deactivate) their accounts. But I don’t think that the Cambridge Analytica scandal is going to be what causes people to flee Facebook en masse, for the following reasons:

  1. Few people vote. And so they aren’t going to care that some shady company was trying to affect voting patterns.
  2. Lots of people rely on Facebook to keep passive track of the people in their lives. Unless communities, not individuals, quit there will be immense pressure to remain part of the network.
  3. Facebook is required to log into a lot of third party services. I’m thinking of services from my barber to Tinder. Deleting Facebook means it’s a lot harder to get a haircut and impossible to use something like Tinder.

Now, does this mean Cambridge Analytica will have no effect? No. In fact, Facebook’s second-worst nightmare is probably an acceleration of decreased use of the social network. So if people use Facebook hesitantly and significantly decrease how often they’re on the service, this could open the potential for other networks to capitalize on the new minutes or hours of attention which are available. But regardless, Facebook isn’t going anywhere barring far more serious political difficulties.

Quote

There’s another theory floating around as to why Facebook cares so much about the way it’s impacting the world, and it’s one that I happen to agree with. When Zuckerberg looks into his big-data crystal ball, he can see a troublesome trend occurring. A few years ago, for example, there wasn’t a single person I knew who didn’t have Facebook on their smartphone. These days, it’s the opposite. This is largely anecdotal, but almost everyone I know has deleted at least one social app from their devices. And Facebook is almost always the first to go. Facebook, Twitter, Instagram, Snapchat, and other sneaky privacy-piercing applications are being removed by people who simply feel icky about what these platforms are doing to them, and to society.

Some people are terrified that these services are listening in to their private conversations. (The company’s anti-privacy tentacles go so far as to track the dust on your phone to see who you might be spending time with.) Others are sick of getting into an argument with a long-lost cousin, or that guy from high school who still works in the same coffee shop, over something that Trump said, or a “news” article that is full of more bias and false facts. And then there’s the main reason I think people are abandoning these platforms: Facebook knows us better than we know ourselves, with its algorithms that can predict if we’re going to cheat on our spouse, start looking for a new job, or buy a new water bottle on Amazon in a few weeks. It knows how to send us the exact right number of pop-ups to get our endorphins going, or not show us how many Likes we really have to set off our insecurities. As a society, we feel like we’re at war with a computer algorithm, and the only winning move is not to play.

There was a time when Facebook made us feel good about using the service—I used to love it. It was fun to connect with old friends, share pictures of your vacation with everyone, or show off a video of your nephew being extra-specially cute. But, over time, Facebook has had to make Wall Street happy, and the only way to feed that beast is to accumulate more, more, more: more clicks, more time spent on the site, more Likes, more people, more connections, more hyper-personalized ads. All of which adds up to more money. But as one recent mea culpa by an early Internet guru aptly noted, “What if we were never meant to be a global species?”

As much as I’d like to believe that users will flee Facebook, I still think the network effect will keep them inside the company’s heavily walled garden. It’ll take a new generation using new applications and interested in different kinds of content creation — and Facebook not buying up whatever is popular with that generation — for the company’s grasp to be loosened.