But testing algorithms for fairness is still largely optional at Facebook. None of the teams that work directly on Facebook’s news feed, ad service, or other products are required to do it. Pay incentives are still tied to engagement and growth metrics. And while there are guidelines about which fairness definition to use in any given situation, they aren’t enforced.
The Fairness Flow documentation, which the Responsible AI team wrote later, includes a case study on how to use the tool in such a situation. When deciding whether a misinformation model is fair with respect to political ideology, the team wrote, “fairness” does not mean the model should affect conservative and liberal users equally. If conservatives are posting a greater fraction of misinformation, as judged by public consensus, then the model should flag a greater fraction of conservative content. If liberals are posting more misinformation, it should flag their content more often too.
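The distinction the documentation draws — that flag rates should track each group's misinformation base rate rather than be equalized across groups — can be sketched as a simple check. This is an illustrative sketch only; the data layout and function name are my own assumptions, not Fairness Flow's actual API:

```python
from collections import defaultdict

def flag_rates_by_group(posts):
    """posts: iterable of (group, is_misinfo, is_flagged) tuples.

    Returns {group: (misinfo_base_rate, flag_rate)} so the two can be
    compared: under this notion of fairness, a group's flag rate should
    track its base rate, not be equalized across groups.
    """
    totals = defaultdict(lambda: [0, 0, 0])  # [n_posts, n_misinfo, n_flagged]
    for group, is_misinfo, is_flagged in posts:
        t = totals[group]
        t[0] += 1
        t[1] += int(is_misinfo)
        t[2] += int(is_flagged)
    return {g: (n_mis / n, n_flag / n)
            for g, (n, n_mis, n_flag) in totals.items()}

# Toy data: group A posts misinformation twice as often as group B, so a
# well-calibrated model should also flag group A's content twice as often.
posts = (
    [("A", True, True)] * 20 + [("A", False, False)] * 80 +
    [("B", True, True)] * 10 + [("B", False, False)] * 90
)
rates = flag_rates_by_group(posts)
# rates["A"] == (0.2, 0.2) and rates["B"] == (0.1, 0.1): the flag rates
# differ across groups, yet the model is "fair" in the calibration sense.
```

In this toy example, demanding equal flag rates for A and B — the interpretation Kaplan's team applied — would force the model to under-flag the group posting more misinformation.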
But members of Kaplan’s team followed exactly the opposite approach: they took “fairness” to mean that these models should not affect conservatives more than liberals. When a model did so, they would stop its deployment and demand a change. Once, they blocked a medical-misinformation detector that had noticeably reduced the reach of anti-vaccine campaigns, the former researcher told me. They told the researchers that the model could not be deployed until the team fixed this discrepancy. But that effectively made the model meaningless. “There’s no point, then,” the researcher says. A model modified in that way “would have literally no impact on the actual problem” of misinformation.
[Kaplan’s] claims about political bias also weakened a proposal to edit the ranking models for the news feed that Facebook’s data scientists believed would strengthen the platform against the manipulation tactics Russia had used during the 2016 US election.
The whole thing with ethics is that they have to be integrated such that they underlie everything that an organization does; they cannot function as public relations add-ons. Sadly, at Facebook the only ethic is growth at all costs, the social implications be damned.
When someone or some organization is responsible for causing significant civil unrest, deaths, or genocide, we expect those who are even partly responsible to be called to account, not just in the public domain but in courts of law and international justice. And when those someones happen to be leading executives of one of the biggest companies in the world, the solution isn’t to berate them in Congressional hearings and hear their weak apologies, but to take real action against them and their companies.
… in the years since WhatsApp co-founders Jan Koum and Brian Acton cut ties with Facebook for, well, being Facebook, the company slowly turned into something that acted more like its fellow Facebook properties: an app that’s kind of about socializing, but mostly about shopping. These new privacy policies are just WhatsApp’s—and Facebook’s—way of finally saying the quiet part out loud.
What’s going to change? Whenever you’re speaking to a business, those communications will not be considered end-to-end encrypted and, as such, the accessible communications content and metadata can be used for advertising and other marketing, data mining, data targeting, or data exploitation purposes. If you’re just chatting with individuals–that is, not businesses!–then your communications will continue to be end-to-end encrypted.
For an additional, and perhaps longer, discussion of how WhatsApp’s shifts in policy–now, admittedly, delayed for a few months following public outrage–are linked to the goal of driving business revenue into the company, check out Alec Muffett’s post over on his blog. (By way of background, Alec has been in the technical security and privacy space for 30+ years, and is a good and reputable voice on these matters.)
I think I’m going to actively crosspost all the photos I post on Instagram here, on my personal website, as well. I find more value posting photos on Instagram because that’s where my community is but, at the same time, I’m loath to leave my content existing exclusively on a third-party’s infrastructure. Especially when that infrastructure is owned by Facebook.
A data access request involves contacting a private company and requesting a copy of your personal information, along with an account of how that data is processed and disclosed, and the periods of time for which it is retained.
I’ve conducted research over the past decade which has repeatedly shown that companies are often very poor at comprehensively responding to data access requests. Sometimes this is because of divides between technical teams that collect and use the data, policy teams that determine what is and isn’t appropriate to do with data, and legal teams that ascertain whether collections and uses of data comport with the law. In other situations companies simply refuse to respond because they adopt a confused-nationalist understanding of law: if the company doesn’t have an office somewhere in a requesting party’s country then that jurisdiction’s laws aren’t seen as applying to the company, even if the company does business in the jurisdiction.
Automated Data Export as a Solution?
Some companies, such as Facebook and Google, have developed automated data download services. Ostensibly these services are designed so that you can download the data you’ve input into the companies, thus revealing precisely what is collected about you. In reality, these services don’t let you export all of the information that these respective companies collect. As a result, people who use these download services tend to end up with a false impression of just what information the companies collect and how it’s used.
A shining example of the kinds of information that are not revealed to users of these services has come to light. A leaked document from Facebook Australia revealed that:
Facebook’s algorithms can determine, and allow advertisers to pinpoint, “moments when young people need a confidence boost.” If that phrase isn’t clear enough, Facebook’s document offers a litany of teen emotional states that the company claims it can estimate based on how teens use the service, including “worthless,” “insecure,” “defeated,” “anxious,” “silly,” “useless,” “stupid,” “overwhelmed,” “stressed,” and “a failure.”
This targeting of emotions isn’t necessarily surprising: in a past exposé we learned that Facebook conducted experiments during an American presidential election to see if they could sway voters. Indeed, the company’s raison d’être is to figure out how to pitch ads to customers, and figuring out when Facebook users are more or less likely to be affected by advertisements is just good business. If you use the self-download service provided by Facebook, or any other data broker, you will not receive data on how and why your data is exploited: without understanding how their algorithms act on the data they collect from you, you can never really understand how your personal information is processed.
But that raison d’être of pitching ads to people — which is why Facebook could internally justify the deliberate targeting of vulnerable youth — ignores baseline ethics of whether it is appropriate to exploit our psychology to sell us products. To be clear, this isn’t a company stalking you around the Internet with ads for a car or couch or jewelry that you were browsing for. This is a deliberate effort to mine your communications to sell products at times of psychological vulnerability. The difference is between somewhat stupid tracking versus deliberate exploitation of our emotional state.1
Solving for Bad Actors
There are laws around what you can do with the information provided by children. Whether Facebook’s actions run afoul of such law may never actually be tested in a court or privacy commissioner’s decision. In part, this is because mounting legal challenges is extremely difficult, expensive, and time-consuming. These hurdles automatically tilt the balance towards activities such as this continuing.
But part of the challenge in stopping such exploitative activities is also linked to Australia’s historically weak privacy commissioner, as well as the limitations of such offices around the world: privacy commissioners’ offices are often understaffed, under-resourced, and unable to chase every legally and ethically questionable practice undertaken by private companies. Companies know about these limitations and, as such, know they can get away with unethical and frankly illegal activities unless someone talks to the press about the activities in question.
So what’s the solution? The rote advice is to stop using Facebook. While that might be good advice for some, for a lot of other people leaving Facebook is very, very challenging. You might use it to sign into a lot of other services and so don’t think you can easily abandon Facebook. You might have stored years of photos or conversations and Facebook doesn’t give you a nice way to pull them out. It might be a place where all of your friends and family congregate to share information and so leaving would amount to being excised from your core communities. And depending on where you live you might rely on Facebook for finding jobs, community events, or other activities that are essential to your life.
In essence, solving for Facebook, Google, Uber, and all the other large data broker problems is a collective action problem. It’s not a problem that is best solved on an individualistic basis.
A more realistic kind of advice would be this: file complaints to your local politicians. File complaints to your domestic privacy commissioners. File complaints to every conference, academic association, and industry event that takes Facebook money.2 Make it very public and very clear that you and groups you are associated with are offended by the company in question that is profiting off the psychological exploitation of children and adults alike.3 Now, will your efforts to raise attention to the issue and draw negative attention to companies and groups profiting from Facebook and other data brokers stop unethical data exploitation tomorrow? No. But by consistently raising our concerns about how large data brokers collect and use personal information, and attributing some degree of negative publicity to all those who benefit from such practices, we can decrease the public stock of a company.
History is dotted with individuals who are seen as standing up to end bad practices by governments and private companies alike. But behind them tend to be a mass of citizens who are supportive of those individuals: while standing up en masse may mean that we don’t each get individual praise for stopping some tasteless and unethical practices, our collective standing up will make it more likely that such practices will be stopped. By each working a little, we can accomplish together what we’d be hard pressed to change as individuals.
(This article was previously published in a slightly different format on a now-defunct Medium account.)
1 Other advertising companies adopt the same practices as Facebook. So I’m not suggesting that Facebook is worst-of-class and letting the others off the hook.
2 Replace ‘Facebook’ with whatever company you think is behaving inappropriately, unethically, or perhaps illegally.
3 Surely you don’t think that Facebook is only targeting kids, right?
Aggregate IQ executives came to answer questions before a Canadian parliamentary committee. Then they had the misfortune of dealing with a well-connected British Information Commissioner, Elizabeth Denham:
At Tuesday’s committee meeting, MPs pressed Silvester and Massingham on their company’s work during the Brexit referendum, for which they are currently under investigation in the UK over possible violations of campaign spending limits. Under questioning from Liberal MP Nathaniel Erskine-Smith, Silvester and Massingham insisted they had fully cooperated with the UK information commissioner Elizabeth Denham. But as another committee member, Liberal MP Frank Baylis, took over the questioning, Erskine-Smith received a text message on his phone from Denham which contradicted the pair’s testimony.
Erskine-Smith handed his phone to Baylis, who read the text aloud. “AIQ refused to answer her specific questions relating to data usage during the referendum campaign, to the point that the UK is considering taking further legal action to secure the information she needs,” Denham’s message said.
Silvester replied that he had been truthful in all his answers and said he would be keen to follow up with Denham if she had more questions.
It’s definitely a bold move to inform parliamentarians, operating in a friendly but foreign jurisdiction, that they’re being misled by one of their witnesses. So long as such communications don’t overstep boundaries — such as enabling a government official to engage in a public witch hunt of a given person or group — these sorts of communications seem essential when dealing with groups which have spread themselves across multiple jurisdictions and are demonstrably behaving untruthfully.
Earlier this year, I suggested that the current concerns around Facebook data being accessed by unauthorized third parties wouldn’t result in users leaving the social network in droves. Not just because people would be disinclined to actually leave the social network but because so many services use Facebook.
Specifically, one of the points that I raised was:
3. Facebook is required to log into a lot of third party services. I’m thinking of services from my barber to Tinder. Deleting Facebook means it’s a lot harder to get a haircut and impossible to use something like Tinder.
At least one company, Bumble, is changing its profile confirmation methods: whereas previously all Bumble users linked their Facebook information to their Bumble account for account identification, the company is now developing their own verification system. Should a significant number of companies end up following Bumble’s model then this could have a significant impact on Facebook’s popularity, as some of the ‘stickiness’ of the service would be diminished.1
I think that people moving away from Facebook is a good thing. But it’s important to recognize that the company doesn’t just provide social connectivity: Facebook has also made it easier for businesses to secure login credentials and (in other cases) ‘verify’ identity.2 In effect, one of the trickiest parts of onboarding customers has been done by a third party that was well resourced to both collect and secure the data from data breaches. As smaller companies assume these responsibilities, without the equivalent of Facebook’s security staff, they are going to have to get very good, very fast, at protecting their customers’ information from data breaches. While it’s certainly not impossible for smaller companies to rise to the challenge, it won’t be a cost-free endeavour, either.
It will be interesting to see if more companies move over to Bumble’s approach or if, instead, businesses and consumers alike merely shake their heads angrily at Facebook and continue to use the service despite its failings. For what it’s worth, I continue to think that people will just shake their heads angrily and little will actually come of the Cambridge Analytica story in terms of affecting the behaviours and desires of most Facebook users, unless there are continued rapid and sustained violations of Facebook users’ trust. But hope springs eternal and so I genuinely do hope that people shift away from Facebook and towards more open, self-owned, and interesting communications and networking platforms.
Thoughtful Quotation of the Week
The brands themselves aren’t the problem, though: we all need some stuff, so we rely on brands to create the things we need. The problem arises when we feel external pressure to acquire as if new trinkets are a shortcut to a more complete life. That external pressure shouldn’t be a sign to consume. If anything, it’s a sign to pause and ask, “Who am I buying this for?”
I think that the other reasons I listed in my earlier post will still hold. Those points were:
1. Few people vote. And so they aren’t going to care that some shady company was trying to affect voting patterns.
2. Lots of people rely on Facebook to keep passive track of the people in their lives. Unless communities, not individuals, quit there will be immense pressure to remain part of the network. ↩
I’m aware that it’s easy to establish a fake Facebook account and that such activity is pretty common. Nevertheless, an awful lot of people use their ‘real’ Facebook accounts that has real verification information, such as email addresses and phone numbers. ↩
In the wake of the Cambridge Analytica scandal, there are calls for people to delete their Facebook accounts. Similar calls have gone out in the past following Facebook-related scandals. As the years have unfolded following each scandal, Facebook has become more and more integrated into people’s lives while, at the same time, more and more people claim to dislike the service. I’m confident that some thousands of people will delete (or at least deactivate) their accounts. But I don’t think that the Cambridge Analytica scandal is going to be what causes people to flee Facebook en masse, for the following reasons:
Few people vote. And so they aren’t going to care that some shady company was trying to affect voting patterns.
Lots of people rely on Facebook to keep passive track of the people in their lives. Unless communities, not individuals, quit there will be immense pressure to remain part of the network.
Facebook is required to log into a lot of third party services. I’m thinking of services from my barber to Tinder. Deleting Facebook means it’s a lot harder to get a haircut and impossible to use something like Tinder.
Now, does this mean Cambridge Analytica will have no effect? No. In fact, Facebook’s second-worst nightmare is probably an acceleration of decreased use of the social network. So if people use Facebook hesitantly and significantly decrease how often they’re on the service this could open the potential for other networks to capitalize on the new minutes or hours of attention which are available. But regardless, Facebook isn’t going anywhere barring far more serious political difficulties.
There’s another theory floating around as to why Facebook cares so much about the way it’s impacting the world, and it’s one that I happen to agree with. When Zuckerberg looks into his big-data crystal ball, he can see a troublesome trend occurring. A few years ago, for example, there wasn’t a single person I knew who didn’t have Facebook on their smartphone. These days, it’s the opposite. This is largely anecdotal, but almost everyone I know has deleted at least one social app from their devices. And Facebook is almost always the first to go. Facebook, Twitter, Instagram, Snapchat, and other sneaky privacy-piercing applications are being removed by people who simply feel icky about what these platforms are doing to them, and to society.
Some people are terrified that these services are listening in to their private conversations. (The company’s anti-privacy tentacles go so far as to track the dust on your phone to see who you might be spending time with.) Others are sick of getting into an argument with a long-lost cousin, or that guy from high school who still works in the same coffee shop, over something that Trump said, or a “news” article that is full of more bias and false facts. And then there’s the main reason I think people are abandoning these platforms: Facebook knows us better than we know ourselves, with its algorithms that can predict if we’re going to cheat on our spouse, start looking for a new job, or buy a new water bottle on Amazon in a few weeks. It knows how to send us the exact right number of pop-ups to get our endorphins going, or not show us how many Likes we really have to set off our insecurities. As a society, we feel like we’re at war with a computer algorithm, and the only winning move is not to play.
There was a time when Facebook made us feel good about using the service—I used to love it. It was fun to connect with old friends, share pictures of your vacation with everyone, or show off a video of your nephew being extra-specially cute. But, over time, Facebook has had to make Wall Street happy, and the only way to feed that beast is to accumulate more, more, more: more clicks, more time spent on the site, more Likes, more people, more connections, more hyper-personalized ads. All of which adds up to more money. But as one recent mea culpa by an early Internet guru aptly noted, “What if we were never meant to be a global species?”
As much as I’d like to believe that users will flee Facebook, I still think the network effect will keep them inside the company’s heavily walled garden. It’ll take a new generation using new applications and interested in different kinds of content creation — and Facebook not buying up whatever is popular to that generation — for the company’s grasp to be loosened.
Having followed Facebook for a long time, I know what really plagues the company is that being open and transparent is not part of its DNA. This combination of secrecy, microtargeting and addiction to growth at any cost is the real challenge. The company’s entire strategy is based on targeting, monetizing and advertising.
Common sense ideas such as being humane, understanding its impact on society and civic infrastructure — well that doesn’t bring any dollars into the coffers. Call me cynical, but reactive apologies are nothing but spin.
Facebook’s purchase of WhatsApp made sense in terms of buying a potential competitor before it got too large to threaten Facebook’s understanding of social relationships. The decision to secure communications between WhatsApp users only solidified Facebook’s position that it was less interested in mining the content of communications than in understanding the relationships between users.
However, as businesses turn to WhatsApp to communicate with their customers, a new revenue opportunity has opened for Facebook: compelling businesses to pay some kind of fee to continue using the service for commercial communications.
WhatsApp will eventually charge companies to use some future features in the two free business tools it started testing this summer, WhatsApp’s chief operating officer, Matt Idema, said in an interview.
The new tools, which help businesses from local bakeries to global airlines talk to customers over the app, reflect a different approach to monetization than other Facebook products, which rely on advertising.
This is Facebook flipping who ‘pays’ for using WhatsApp. Whereas in the past customers paid a small yearly fee, now customers will get it free and businesses will be charged to use it. It remains to be seen, however, whether WhatsApp is ‘sticky’ enough for consumers to genuinely expect businesses to use it for customer communications. Further, Facebook’s payment model will also stand as a contrast between WhatsApp and its Asian competitors, such as LINE and WeChat, which have transformed their messaging platforms into whole social networks that can also be used for robust commercial transactions. Is this the beginning of an equivalent pivot on Facebook’s part or are they, instead, trying out an entirely separate business model in the hopes of not cannibalizing Facebook itself?