Link

Can University Faculty Hold Platforms To Account?

Heidi Tworek has a good piece with the Centre for International Governance Innovation, where she questions whether there will be a sufficient number of faculty in Canada (and elsewhere) to make use of information that digital-first companies might be compelled to make available to researchers. The general argument goes that if companies must make information available to academics then these academics can study the information and, subsequently, hold companies to account and guide evidence-based policymaking.

Tworek’s argument focuses on two key things.

  1. First, there has been a decline in the tenured professoriate in Canada, with the effect that the adjunct faculty who are ‘filling in’ are busy teaching and really don’t have a chance to lead research.
  2. Second, while a vanishingly small number of PhD holders obtain a tenure-track role, a reasonable number may be going into the very digital-first companies that researchers need data from to hold them accountable.

On this latter point, she writes:

If the companies have far more researchers than universities have, transparency regulations may not do as much to address the imbalance of knowledge as many expect.

I don’t think that hiring people with PhDs necessarily means that companies are addressing knowledge imbalances. Whatever is learned by these researchers tends to be sheltered within corporate walls and protected by NDAs. So those researchers going into companies may learn what’s going on but be unable (or unmotivated) to leverage what they know in order to inform policy discussions meant to hold companies to account.

To be clear, I really do agree with a lot in this article. However, I think it does have a few areas for further consideration.

First, more needs to be said about what, specifically, ‘transparency’ encompasses and how it relates to the types of data that are made available, and under what conditions. Transparency is a deeply contested concept, and there are a lot of ways that the revelation of data can create a funhouse-mirror effect, insofar as what researchers ‘see’ can be badly distorted from what is actually happening.

Second, making data available isn’t just about whether universities have the professors to do the work but, really, whether the government and its regulators have the staff time as well. Professors are doing a lot of things whereas regulators can assign staff to just work the data, day in and day out. Focus matters.

Third, and related, I have to admit that I have pretty severe doubts about the ability of professors to seriously take up and make use of information from platforms, at scale and with policy impact, because it’s never going to be their full time jobs to do so. Professors are also going to be required to publish in books or journals, which means their outputs will be delayed and inaccessible to companies, government bureaucrats and regulators, and NGO staff. I’m sure academics will have lovely and insightful discussions…but they won’t happen fast enough, or in accessible places or in plain language, to generally affect policy debates.

So, what might need to be added to start fleshing out how universities are organised to make use of data released by companies and have policy impacts in research outputs?

First, universities in Canada would need to get truly serious about creating a ‘researcher class’ to analyse corporate reporting. This would involve prioritising the hiring of research associates and senior research associates who have few or no teaching responsibilities.1

Second, universities would need to work to create centres such as the Citizen Lab, or related groups.2 These don’t need to be organisations which try to cover the waterfront of all digital issues. They could, instead, be more narrowly focused, which would reduce the number of staff or fellows needed to fulfil the organisation’s mandate. Any and all centres of this type would see a small handful of people with PhDs (who largely lack teaching responsibilities) guide multidisciplinary teams of staff. Those same staff members would not typically need a PhD. They would need to be nimble enough to move quickly while using a peer-review-lite process to validate findings, but not see journal or book outputs as their primary currency for promotion or hiring.

Third, the centres would need a core group of long-term staffers. This core body of long-term researchers is needed to develop policy expertise that graduate students just don’t possess or develop in their short tenure in the university. Moreover, these same long-term researchers can then train graduate student fellows of the centres in question, with the effect of slowly building a cadre of researchers who are equipped to critically assess digital-first companies.

Fourth, the staff at research centres need to be paid well and properly. They cannot be regarded as ‘graduate student plus’ employees but as specialists who will be of interest to government and corporations. This means that the university will need to pay competitive wages in order to secure the staff needed to fulfil centre mandates.

Basically, if universities are to be successful in holding big data companies to account they’ll need to incubate quasi-NGOs and let them loose under the university’s auspices. It is, however, worth asking whether this should be the goal of the university in the first place: should society be outsourcing a large amount of the ‘transparency research’ that is designed to have policy impact or guide evidence-based policy making to academics, or should we instead bolster the capacities of government departments and regulatory agencies to undertake these activities?

Put differently, and in context with Tworek’s argument: I think that assuming that PhD holders working as faculty in universities are the solution to analysing data released by corporations can only hold if you happen to (a) hold or aspire to hold a PhD; or (b) possess or aspire to possess a research-focused, tenure-track job.

I don’t think that either (a) or (b) should guide the majority of the way forward in developing policy proposals as they pertain to holding corporations to account.

Do faculty have a role in holding companies such as Google, Facebook, Amazon, Apple, or Netflix to account? You bet. But if the university, and university researchers, are going to seriously get involved in using data released by companies to hold them to account and have policy impact, then I think we need dedicated and focused researchers. Faculty who are torn between teaching, writing and publishing in inaccessible venues using baroque theoretical lenses, pursuing funding opportunities, undertaking large amounts of departmental service, and supervising graduate students are just not going to be sufficient to address the task at hand.


  1. In the interests of disclosure, I currently hold one of these roles. ↩︎
  2. Again in the interests of disclosure, this is the kind of place I currently work at. ↩︎

Adding Context to Facebook’s CSAM Reporting

In early 2021, John Buckley, Malia Andrus, and Chris Williams published an article entitled, “Understanding the intentions of Child Sexual Abuse Material (CSAM) sharers” on Meta’s research website. They relied on information that Facebook/Meta had submitted to the National Center for Missing & Exploited Children (NCMEC) to better understand why the individuals they reported had likely shared illegal content.

The issue of CSAM on Facebook’s networks has risen in prominence following a 2019 report in the New York Times. That piece indicated that Facebook was responsible for reporting the vast majority of the 45 million online photos and videos of children being sexually abused. Ever since, Facebook has sought to contextualize the information it discloses to NCMEC and explain the efforts it has put in place to prevent CSAM from appearing on its services.

So what was the key finding from the research?

We evaluated 150 accounts that we reported to NCMEC for uploading CSAM in July and August of 2020 and January 2021, and we estimate that more than 75% of these did not exhibit malicious intent (i.e. did not intend to harm a child), but appeared to share for other reasons, such as outrage or poor humor. While this study represents our best understanding, these findings should not be considered a precise measure of the child safety ecosystem.

This finding is significant, as it suggests that the vast majority of the content reported by Facebook—while illegal!—is not deliberately being shared for malicious purposes. Even if we assume that the estimate should be adjusted—perhaps as many as 50% of the individuals were malicious—we are still left with a significant finding.

There are, of course, limitations to the research. First, it excludes all end-to-end encrypted messages, so there is some volume of content that cannot be detected using these methods. Second, it remains unclear how the 150 accounts were selected for analysis, and whether that selection was scientifically robust. Third, and relatedly, there is a question of whether the selected accounts are representative of the broader pool of accounts that are associated with distributing CSAM.
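As a rough illustration of the statistical uncertainty involved, consider the sampling error alone. The sketch below (in Python, with the numbers taken from the quoted finding) assumes the 150 accounts were a simple random sample of reported accounts, which, per the limitations above, is itself open to question.

```python
import math

# Hypothetical illustration: sampling error on the reported estimate,
# assuming (generously) that the 150 accounts were a simple random sample.
n = 150          # accounts reviewed in the Meta study
p_hat = 0.75     # reported share showing no malicious intent

# Normal-approximation 95% confidence interval for a proportion.
standard_error = math.sqrt(p_hat * (1 - p_hat) / n)
margin = 1.96 * standard_error

low, high = p_hat - margin, p_hat + margin
print(f"95% CI: {low:.1%} to {high:.1%}")  # roughly 68% to 82%
```

Even at the low end of that interval a clear majority of reported accounts would appear non-malicious, so sampling error by itself does not undermine the headline claim; the more serious worry is how the accounts were selected in the first place, which no amount of arithmetic can resolve.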

Nevertheless, this seeming sleeper hit of a study has significant implications, insofar as it would substantially shrink the number of problematic accounts/individuals who are deliberately disclosing CSAM to other parties. Clearly more work along this line is required, ideally across Internet platforms, in order to add further context and detail to the extent of the CSAM problem and subsequently define what policy solutions are necessary and proportionate.

Photography and Social Media

(Passer By by Christopher Parsons)

Why do we want to share our photos online? What platforms are better or worse to use in sharing images? These are some of the questions I’ve been pondering for the past few weeks.

Backstory

About a month ago a colleague stated that she would be leaving Instagram given the nature of Facebook’s activities and the company’s seeming lack of remorse. Her decision has stuck with me and left me wondering whether I want to follow her lead.

I deleted my Facebook accounts some time ago, and have almost entirely migrated my community away from WhatsApp. But as an amateur photographer I’ve hesitated to leave an app that was, at least initially, designed with photographers in mind. I’ve used the application over the years to develop and improve my photographic abilities and so there’s an element of ‘sunk cost’ that has historically factored into my decision to stay or leave.

But Instagram isn’t really for photographers anymore. The app is increasingly stuffed with either videos or ads, and is meant to create a soft landing point for when/if Facebook truly pivots away from its main Facebook app.1 The company’s pivot makes it a lot easier to justify leaving the application though, at the same time, it leaves me wondering what application or platform, if any, I want to move my photos over to.

The Competition(?)

Over the past week or two I’ve tried Flickr.2 While it’s the OG of photo sharing sites, its mobile apps are just broken. I can’t create albums unless I use the web app. Sharing straight from the Apple Photos app is janky. I worry (for no good reason, really) about the cost of the professional version (do I even need that as an amateur?) as well as the annoyance of tagging photos in order to ‘find my tribe.’

It’s also not apparent to me how much community truly exists on Flickr: the whole platform seems a bit like a graveyard with only a small handful of active photographers still inhabiting the space.

I’m also trying Glass at the moment. It’s not perfect: search is non-existent, you can’t yet share your gallery of photos with non-Glass users, discovery is a bit rough, there’s no Web version, and it’s currently iPhone only. However, I do like that the app (and its creators) is focused on sharing images and that it has a clear monetization scheme in the form of a yearly subscription. The company’s formal roadmap also indicates that some of these rough edges may be filed away in the coming months.

I also like that Glass doesn’t require me to develop a tagging system (that’s all done in the app using presets), lets me share quickly and easily from the Photos app, looks modern, and has a relatively low yearly subscription cost. And, at least so far, most of the comments are better than on the other platforms, which I think is important to developing my own photography.

Finally, there’s my blog here! And while I like to host photo series here, this site isn’t really designed as a photo blog first and foremost. Part of the problem is that, in my experience, WordPress continues to suck for posting media but, more substantively, this blog hosts a lot more text than images. I don’t foresee changing this focus anytime in the near or even distant future.

The Necessity of Photo Sharing?

It’s an entirely fair question to ask why even bother sharing photos with strangers. Why not just keep my images on my devices and engage in my own self-critique?

I do engage in such critique, but I’ve personally learned more from putting my images into the public eye than I would have just by keeping them on my own devices.3 Some of that is from comments but, also, it’s been based on what people have ‘liked’ or left emoji comments on. These kinds of signals have helped me better understand what makes a better or worse photograph.

However, at this point I don’t think that likes and emojis are the source of my future photography development: I want actual feedback, even if it’s limited to just a sentence or so. I’m hoping that Glass might provide that kind of feedback though I guess only time will tell.


  1. For a good take on Facebook and why it’s functionally ‘over’ as a positive brand check out M.G. Siegler’s article, “Facebook is Too Big, Fail.” ↩︎
  2. This is my second time with Flickr, as I closed a very old account several years ago given that I just wasn’t using it. ↩︎
  3. If I’m entirely honest, I bet I’ve learned as much or more from reading photography teaching/course books, but that’s a different kind of learning entirely. ↩︎
Link

The So-Called Privacy Problems with WhatsApp

(Photo by Anton on Pexels.com)

ProPublica, which is typically known for its excellent journalism, published a particularly terrible piece earlier this week that fundamentally miscast how encryption works and how Facebook, through WhatsApp, works to keep communications secured. The article, “How Facebook Undermines Privacy Protections for Its 2 Billion WhatsApp Users,” focuses on two so-called problems.

The So-Called Privacy Problems with WhatsApp

First, the authors explain that WhatsApp has a system whereby recipients of messages can report content they have received to WhatsApp on the grounds that it is abusive or otherwise violates WhatsApp’s Terms of Service. The article frames this reporting process as undermining privacy on the basis that secured messages are not kept solely between the sender(s) and recipient(s) of the communications but can be sent to other parties, such as WhatsApp. In effect, the ability to voluntarily forward messages that someone has received to WhatsApp is cast as breaking the privacy promises that WhatsApp has made.

Second, the authors note that WhatsApp collects a large volume of metadata in the course of users’ interactions with the application. Using lawful processes, government agencies have compelled WhatsApp to disclose metadata on some of their users in order to pursue investigations and secure convictions against individuals. The case the article focuses on involves a government employee who leaked confidential banking records to Buzzfeed, which subsequently reported on them.

Assessing the Problems

In the case of forwarding messages for abuse reporting purposes, encryption is not broken and the feature is not new. These kinds of processes offer a mechanism that lets individuals self-identify and report problematic content. Such content can include child grooming, illicit or inappropriate messages or audio-visual content, or other abusive information.
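To make concrete why recipient-initiated reporting is compatible with end-to-end encryption, here is a minimal conceptual sketch (in Python, with hypothetical names; it is not WhatsApp’s actual code or protocol). The report is assembled on the recipient’s device from messages that the device has already decrypted in the normal course of the conversation, and is then sent to the platform over an ordinary secure client-to-server channel; the sender-to-recipient encryption is never weakened or bypassed.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class DecryptedMessage:
    sender_id: str
    timestamp: int
    plaintext: str  # already decrypted locally, as in any normal chat

def build_abuse_report(recent_messages: List[DecryptedMessage],
                       reporter_id: str,
                       reason: str) -> dict:
    """Assemble a report entirely on the recipient's device.

    Nothing here touches encryption keys: the recipient chooses to forward
    content they can already read, plus some context for reviewers.
    """
    return {
        "reporter": reporter_id,
        "reason": reason,  # e.g. "spam", "grooming", "threats"
        "reported_sender": recent_messages[-1].sender_id,
        "messages": [
            {"from": m.sender_id, "at": m.timestamp, "text": m.plaintext}
            for m in recent_messages[-5:]  # a few recent messages for context
        ],
    }

def submit_report(report: dict) -> None:
    # Hypothetical endpoint: the report travels over the client's ordinary
    # TLS connection to the platform, like any other API call.
    ...
```

Mechanically, this is no different from a recipient copying a message they can already read and emailing it to the company; whether it is the right policy is a separate question, but it does not break encryption.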

What we do learn, however, is that the ‘reactive’ and ‘proactive’ methods of detecting abuse need to be fixed. In the case of the former, only about 1,000 people are responsible for intaking and reviewing the reported content after it has first been filtered by an AI:

Seated at computers in pods organized by work assignments, these hourly workers use special Facebook software to sift through streams of private messages, images and videos that have been reported by WhatsApp users as improper and then screened by the company’s artificial intelligence systems. These contractors pass judgment on whatever flashes on their screen — claims of everything from fraud or spam to child porn and potential terrorist plotting — typically in less than a minute.


Further, the employees are often reliant on machine learning-based translations of content, which makes it challenging to assess what is, in fact, being communicated in abusive messages. As reported,

… using Facebook’s language-translation tool, which reviewers said could be so inaccurate that it sometimes labeled messages in Arabic as being in Spanish. The tool also offered little guidance on local slang, political context or sexual innuendo. “In the three years I’ve been there,” one moderator said, “it’s always been horrible.”

There are also proactive modes of watching for abusive content using AI-based systems. As noted in the article,

Artificial intelligence initiates a second set of queues — so-called proactive ones — by scanning unencrypted data that WhatsApp collects about its users and comparing it against suspicious account information and messaging patterns (a new account rapidly sending out a high volume of chats is evidence of spam), as well as terms and images that have previously been deemed abusive. The unencrypted data available for scrutiny is extensive. It includes the names and profile images of a user’s WhatsApp groups as well as their phone number, profile photo, status message, phone battery level, language and time zone, unique mobile phone ID and IP address, wireless signal strength and phone operating system, as well as a list of their electronic devices, any related Facebook and Instagram accounts, the last time they used the app and any previous history of violations.
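As a toy illustration of what such a ‘proactive queue’ might look like, the sketch below (hypothetical thresholds and field names, not Facebook’s actual system) scores accounts using only unencrypted behavioural metadata of the sort described above, without ever reading message content.

```python
from dataclasses import dataclass

@dataclass
class AccountMetadata:
    account_age_hours: float
    messages_last_hour: int
    distinct_recipients_last_hour: int
    prior_violations: int

def proactive_spam_score(meta: AccountMetadata) -> float:
    """Toy heuristic: the score rises for new accounts blasting out chats.

    All thresholds are invented for illustration; a production system would
    rely on trained models over far more signals.
    """
    score = 0.0
    if meta.account_age_hours < 24 and meta.messages_last_hour > 100:
        score += 0.6  # new account rapidly sending a high volume of chats
    if meta.distinct_recipients_last_hour > 50:
        score += 0.2  # fanning out to many unrelated recipients
    if meta.prior_violations > 0:
        score += 0.2  # previous history of violations
    return min(score, 1.0)

# Accounts scoring above some threshold would be queued for human review,
# not actioned automatically.
suspect = AccountMetadata(account_age_hours=3, messages_last_hour=400,
                          distinct_recipients_last_hour=120, prior_violations=0)
print(proactive_spam_score(suspect))  # 0.8
```

The coarseness of signals like these also hints at why such systems misfire, which is exactly what the reviewers quoted next describe.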

Unfortunately, the AI often makes mistakes. This led one interviewed content reviewer to state that, “[t]here were a lot of innocent photos on there that were not allowed to be on there … It might have been a photo of a child taking a bath, and there was nothing wrong with it.” Often, “the artificial intelligence is not that intelligent.”

The vast collection of metadata has been a long-reported concern associated with WhatsApp and, in fact, is one of the many reasons why individuals advocate for the use of Signal instead. The reporting in the ProPublica article helpfully summarizes the vast amount of metadata that is collected, but that collection, in and of itself, does not present any evidence that Facebook or WhatsApp have transformed the application into one which inappropriately intrudes into persons’ privacy.

ProPublica Sets Back Reasonable Encryption Policy Debates

The ProPublica article harmfully sets back broader policy discussion around what is, and is not, a reasonable approach for platforms to take in moderating abuse when they have integrated strong end-to-end encryption. Such encryption prevents unauthorized third parties–inclusive of the platform providers themselves–from reading or analyzing the content of communications. Enabling a reporting feature means that individuals who receive a communication are empowered to report it to a company, and the company can subsequently analyze what has been sent and take action if the content violates a terms of service or privacy policy clause.

By suggesting that what WhatsApp has implemented is somehow wrong, the article makes it more challenging for other companies to deploy similar reporting features without fearing that their decision will be reported on as ‘undermining privacy’. While there may be a valid policy discussion to be had–is a reporting process the correct way of dealing with abusive content and messages?–the authors didn’t go there. Nor did they seriously investigate whether additional resources should be devoted to analyzing reported content, or talk with artificial intelligence experts or machine-translation experts about whether Facebook’s efforts to automate the reporting process are adequate, appropriate, or flawed from the start. All of those would be very interesting, valid, and important contributions to the broader discussion about integrating trust and safety features into encrypted messaging applications. But…those are not things that the authors chose to delve into.

The authors could also have discussed the broader importance (and challenges) of building out messaging systems that deliberately conceal metadata, and the benefits and drawbacks of such systems. While the authors do discuss how metadata can be used to crack down on individuals in government who leak data, as well as assist in criminal investigations and prosecutions, there is little said about what kinds of metadata are most important to conceal and the tradeoffs in doing so. Again, there are some who think that all or most metadata should be concealed, and others who hold opposite views: there is room for a reasonable policy debate to be had and reported on.

Unfortunately, instead of actually taking up and reporting on the very valid policy discussions that sit at the edges of their article, the authors chose to be bombastic and assert that WhatsApp was undermining the privacy protections that individuals thought they had when using the application. It’s bad reporting, insofar as it distorts the facts, and is particularly disappointing given that ProPublica has shown it has the chops to do investigative work that is well sourced and nuanced in its outputs. This article, however, absolutely failed to make the cut.

Link

Facebook Prioritizes Growth Over Social Responsibility

Karen Hao writing at MIT Technology Review:

But testing algorithms for fairness is still largely optional at Facebook. None of the teams that work directly on Facebook’s news feed, ad service, or other products are required to do it. Pay incentives are still tied to engagement and growth metrics. And while there are guidelines about which fairness definition to use in any given situation, they aren’t enforced.

The Fairness Flow documentation, which the Responsible AI team wrote later, includes a case study on how to use the tool in such a situation. When deciding whether a misinformation model is fair with respect to political ideology, the team wrote, “fairness” does not mean the model should affect conservative and liberal users equally. If conservatives are posting a greater fraction of misinformation, as judged by public consensus, then the model should flag a greater fraction of conservative content. If liberals are posting more misinformation, it should flag their content more often too.

But members of Kaplan’s team followed exactly the opposite approach: they took “fairness” to mean that these models should not affect conservatives more than liberals. When a model did so, they would stop its deployment and demand a change. Once, they blocked a medical-misinformation detector that had noticeably reduced the reach of anti-vaccine campaigns, the former researcher told me. They told the researchers that the model could not be deployed until the team fixed this discrepancy. But that effectively made the model meaningless. “There’s no point, then,” the researcher says. A model modified in that way “would have literally no impact on the actual problem” of misinformation.

[Kaplan’s] claims about political bias also weakened a proposal to edit the ranking models for the news feed that Facebook’s data scientists believed would strengthen the platform against the manipulation tactics Russia had used during the 2016 US election.

The whole thing with ethics is that they have to be integrated such that they underlie everything that an organization does; they cannot function as public relations add-ons. Sadly, at Facebook the only ethic is growth at all costs, the social implications be damned.

When someone or some organization is responsible for causing significant civil unrest, deaths, or genocide, we expect those who are even partly responsible to be called to account, not just in the public domain but in courts of law and international justice. And when those someones happen to be leading executives of one of the biggest companies in the world, the solution isn’t to berate them in Congressional hearings and hear their weak apologies, but to take real action against them and their companies.

Link

Explaining WhatsApp’s Encryption for Business Communications

Shoshana Wodinsky, writing for Gizmodo, has a lengthy and detailed breakdown of how and why WhatsApp is modifying its terms of service to facilitate consumer-to-business communications. The crux of the shift, really, comes down to this:

… in the years since WhatsApp co-founders Jan Koum and Brian Acton cut ties with Facebook for, well, being Facebook, the company slowly turned into something that acted more like its fellow Facebook properties: an app that’s kind of about socializing, but mostly about shopping. These new privacy policies are just WhatsApp’s—and Facebook’s—way of finally saying the quiet part out loud.

What’s going to change? Namely, whenever you’re speaking to a business those communications will not be considered end-to-end encrypted and, as such, the communications content and metadata that is accessible can be used for advertising and other marketing, data mining, data targeting, or data exploitation purposes. If you’re just chatting with individuals–that is, not businesses!–then your communications will continue to be end-to-end encrypted.

For an additional, and perhaps longer, discussion of how WhatsApp’s shifts in policy–now, admittedly, delayed for a few months following public outrage–are linked to the goal of driving business revenue into the company, check out Alec Muffett’s post over on his blog. (By way of background, Alec has been in the technical security and privacy space for 30+ years, and is a good and reputable voice on these matters.)

Aside

2018.9.6

I think I’m going to actively crosspost all the photos I post on Instagram here, on my personal website, as well. I find more value posting photos on Instagram because that’s where my community is but, at the same time, I’m loath to leave my content existing exclusively on a third-party’s infrastructure. Especially when that infrastructure is owned by Facebook.

Limits of Data Access Requests

Photo by rawpixel on Unsplash

A data access request involves contacting a private company and requesting a copy of your personal information, along with details about how that data is processed and disclosed, and the periods of time for which it is retained.

I’ve conducted research over the past decade which has repeatedly shown that companies are often very poor at comprehensively responding to data access requests. Sometimes this is because of divides between technical teams that collect and use the data, policy teams that determine what is and isn’t appropriate to do with data, and legal teams that ascertain whether collections and uses of data comport with the law. In other situations companies simply refuse to respond because they adopt a confused-nationalist understanding of law: if the company doesn’t have an office somewhere in a requesting party’s country then that jurisdiction’s laws aren’t seen as applying to the company, even if the company does business in the jurisdiction.

Automated Data Export As Solution?

Some companies, such as Facebook and Google, have developed automated data download services. Ostensibly these services are designed so that you can download the data you’ve input into the companies, thus revealing precisely what is collected about you. In reality, these services don’t let you export all of the information that these respective companies collect. As a result, when people use these download services they tend to end up with a false impression of just what information the companies collect and how it’s used.

A shining example of the kinds of information that are not revealed to users of these services has come to light. A leaked document from Facebook Australia revealed that:

Facebook’s algorithms can determine, and allow advertisers to pinpoint, “moments when young people need a confidence boost.” If that phrase isn’t clear enough, Facebook’s document offers a litany of teen emotional states that the company claims it can estimate based on how teens use the service, including “worthless,” “insecure,” “defeated,” “anxious,” “silly,” “useless,” “stupid,” “overwhelmed,” “stressed,” and “a failure.”

This targeting of emotions isn’t necessarily surprising: in a past exposé we learned that Facebook conducted experiments during an American presidential election to see if it could sway voters. Indeed, the company’s raison d’être is to figure out how to pitch ads to customers, and figuring out when Facebook users are more or less likely to be affected by advertisements is just good business. If you use the self-download service provided by Facebook, or any other data broker, you will not receive data on how and why your data is exploited: without understanding how their algorithms act on the data they collect from you, you can never really understand how your personal information is processed.

But that raison d’être of pitching ads to people — which is why Facebook could internally justify the deliberate targeting of vulnerable youth — ignores the baseline ethical question of whether it is appropriate to exploit our psychology to sell us products. To be clear, this isn’t a company stalking you around the Internet with ads for a car or couch or jewelry that you were browsing for. This is a deliberate effort to mine your communications to sell products at times of psychological vulnerability. The difference is between somewhat stupid tracking and the deliberate exploitation of our emotional state.1

Solving for Bad Actors

There are laws around what you can do with the information provided by children. Whether Facebook’s actions run afoul of such laws may never actually be tested in a court or privacy commissioner’s decision. In part, this is because mounting legal challenges is extremely difficult, expensive, and time consuming. These hurdles automatically tilt the balance towards activities such as this continuing.

But part of the challenge in stopping such exploitative activities is also linked to Australia’s historically weak privacy commissioner, as well as the limitations of such offices around the world: privacy commissioners’ offices are often understaffed, under-resourced, and unable to chase every legally and ethically questionable practice undertaken by private companies. Companies know about these limitations and, as such, know they can get away with unethical and frankly illegal activities unless someone talks to the press about the activities in question.

So what’s the solution? The rote advice is to stop using Facebook. While that might be good advice for some, for a lot of other people leaving Facebook is very, very challenging. You might use it to sign into a lot of other services and so don’t think you can easily abandon Facebook. You might have stored years of photos or conversations and Facebook doesn’t give you a nice way to pull them out. It might be a place where all of your friends and family congregate to share information and so leaving would amount to being excised from your core communities. And depending on where you live you might rely on Facebook for finding jobs, community events, or other activities that are essential to your life.

In essence, solving the problems posed by Facebook, Google, Uber, and all the other large data brokers is a collective action problem. It’s not a problem that is best solved on an individual basis.

A more realistic kind of advice would be this: file complaints with your local politicians. File complaints with your domestic privacy commissioners. File complaints with every conference, academic association, and industry event that takes Facebook money.2 Make it very public and very clear that you, and the groups you are associated with, are offended that the company in question is profiting off the psychological exploitation of children and adults alike.3 Now, will your efforts to raise attention to the issue, and to draw negative attention to companies and groups profiting from Facebook and other data brokers, stop unethical data exploitation tomorrow? No. But by consistently raising our concerns about how large data brokers collect and use personal information, and attaching some degree of negative publicity to all those who benefit from such practices, we can decrease the public stock of a company.

History is dotted with individuals who are seen as standing up to end bad practices by governments and private companies alike. But behind them there tends to be a mass of citizens who support those individuals: while standing up en masse may mean that we don’t each get individual praise for stopping some tasteless and unethical practices, our collective standing up will make it more likely that such practices will be stopped. By each working a little, we can change things that we’d be hard pressed to change as individuals.

(This article was previously published in a slightly different format on a now-defunct Medium account.)

Footnotes:

1 Other advertising companies adopt the same practices as Facebook. So I’m not suggesting that Facebook is worst-of-class and letting the others off the hook.

2 Replace ‘Facebook’ with whatever company you think is behaving inappropriately, unethically, or perhaps illegally.

3 Surely you don’t think that Facebook is only targeting kids, right?

Link

MPs consider contempt charges for Canadian company linked to Cambridge Analytica after raucous committee meeting

Aggregate IQ executives came to answer questions before a Canadian parliamentary committee. Then they had the misfortune of dealing with a well-connected British Information Commissioner, Elizabeth Denham:

At Tuesday’s committee meeting, MPs pressed Silvester and Massingham on their company’s work during the Brexit referendum, for which they are currently under investigation in the UK over possible violations of campaign spending limits. Under questioning from Liberal MP Nathaniel Erskine-Smith, Silvester and Massingham insisted they had fully cooperated with the UK information commissioner Elizabeth Denham. But as another committee member, Liberal MP Frank Baylis, took over the questioning, Erskine-Smith received a text message on his phone from Denham which contradicted the pair’s testimony.

Erskine-Smith handed his phone to Baylis, who read the text aloud.  “AIQ refused to answer her specific questions relating to data usage during the referendum campaign, to the point that the UK is considering taking further legal action to secure the information she needs,” Denham’s message said.

Silvester replied that he had been truthful in all his answers and said he would be keen to follow up with Denham if she had more questions.

It’s definitely a bold move to inform parliamentarians, operating in a friendly but foreign jurisdiction, that they’re being misled by one of their witnesses. So long as such communications don’t overstep boundaries — such as enabling a government official to engage in a public witch hunt of a given person or group — they seem essential when dealing with groups which have spread themselves across multiple jurisdictions and are demonstrably behaving untruthfully.

The Roundup for April 14-20, 2018 Edition

Walkways by Christopher Parsons

Earlier this year, I suggested that the current concerns around Facebook data being accessed by unauthorized third parties wouldn’t result in users leaving the social network in droves. Not just because people would be disinclined to actually leave the social network but because so many services use Facebook.

Specifically, one of the points that I raised was:

3. Facebook is required to log into a lot of third party services. I’m thinking of services from my barber to Tinder. Deleting Facebook means it’s a lot harder to get a haircut and impossible to use something like Tinder.

At least one company, Bumble, is changing its profile confirmation methods: whereas previously all Bumble users linked their Facebook information to their Bumble account for account identification, the company is now developing its own verification system. Should a significant number of companies end up following Bumble’s model, this could have a real impact on Facebook’s popularity, as some of the ‘stickiness’ of the service would be diminished.1

I think that people moving away from Facebook is a good thing. But it’s important to recognize that the company doesn’t just provide social connectivity: Facebook has also made it easier for businesses to secure login credentials and (in other cases) ‘verify’ identity.2 In effect, one of the trickiest parts of onboarding customers has been done by a third party that was well resourced to both collect and secure the data from data breaches. As smaller companies assume these responsibilities, without the equivalent of Facebook’s security staff, they are going to have to get very good, very fast, at protecting their customers’ information from data breaches. While it’s certainly not impossible for smaller companies to rise to the challenge, it won’t be a cost-free endeavour, either.

It will be interesting to see if more companies move over to Bumble’s approach or if, instead, businesses and consumers alike merely shake their heads angrily at Facebook and continue to use the service despite its failings. For what it’s worth, I continue to think that people will just shake their heads angrily and little will actually come of the Cambridge Analytica story in terms of affecting the behaviours and desires of most Facebook users, unless there are continued, rapid, and sustained violations of Facebook users’ trust. But hope springs eternal and so I genuinely do hope that people shift away from Facebook and towards more open, self-owned, and interesting communications and networking platforms.


Thoughtful Quotation of the Week

The brands themselves aren’t the problem, though: we all need some stuff, so we rely on brands to create the things we need. The problem arises when we feel external pressure to acquire as if new trinkets are a shortcut to a more complete life. That external pressure shouldn’t be a sign to consume. If anything, it’s a sign to pause and ask, “Who am I buying this for?”

Great Photography Shots

I was really stunned by Zsolt Hlinka’s architectural photography, which was featured on My Modern MET.

Music I’m Digging

Neat Podcast Episodes

Good Reads for the Week

Cool Things

Footnotes

  1. I think that the other reasons I listed in my earlier post will still hold. Those points were:

    1. Few people vote. And so they aren’t going to care that some shady company was trying to affect voting patterns.
    2. Lots of people rely on Facebook to keep passive track of the people in their lives. Unless communities, not individuals, quit there will be immense pressure to remain part of the network.

  2. I’m aware that it’s easy to establish a fake Facebook account and that such activity is pretty common. Nevertheless, an awful lot of people use their ‘real’ Facebook accounts that have real verification information, such as email addresses and phone numbers.