Link

The Problem of Botting on Instagram

Calder Wilson at PetaPixel:

Instagram’s Terms of Use make it clear that botting is a no-no. Over the past couple of years the platform has implemented anti-spam/anti-bot restrictions, which do things like prevent accounts from liking too many photos in a short amount of time or commenting the same thing again and again. It’s obvious they oppose using bots ideologically, and it’s very easy to determine who’s using them, so why don’t they do something about it?

For one thing, Instagram is killing it right now. Every time Facebook reports their financial earnings, they need to show robust growth in their flagship products; almost as importantly, they need to show healthy engagement. Growth and engagement are the life forces of Facebook’s stock, and any decrease in either can send shares south.

Now, consider that my @canonbw account was liking over 30,000 photos every month along with thousands and thousands of comments. That doesn’t even include the activity generated from people responding and liking my images/following me in return. If I took every Instagram user I know in my life who doesn’t use a bot, it’s more than likely that my single account generated more “activity” than everyone else over the last year combined.

If we take into account the massive number of people botting every day all around the world, the number of likes and comments is astronomical. It’s very unlikely that this huge engagement engine will ever be shut down by Facebook Inc. The relationship between Instagram and botters is seemingly symbiotic, but I argue that in the long run, Instagram suffers.

This false engagement fuels the life of Facebook as a public company while turning the actual product space into one as demoralizing as Facebook itself. A growing number of academic articles are finding correlations between Facebook use and depression, linked in part to how much content is liked. Instagram use remains relatively strongly correlated with happiness, but will this persist as bots continue to proliferate?

Link

Exploited for Advertising

From a long feature in The Guardian:

The techniques these companies use are not always generic: they can be algorithmically tailored to each person. An internal Facebook report leaked this year, for example, revealed that the company can identify when teens feel “insecure”, “worthless” and “need a confidence boost”. Such granular information, Harris adds, is “a perfect model of what buttons you can push in a particular person”.

Tech companies can exploit such vulnerabilities to keep people hooked; manipulating, for example, when people receive “likes” for their posts, ensuring they arrive when an individual is likely to feel vulnerable, or in need of approval, or maybe just bored. And the very same techniques can be sold to the highest bidder. “There’s no ethics,” he says. A company paying Facebook to use its levers of persuasion could be a car business targeting tailored advertisements to different types of users who want a new vehicle. Or it could be a Moscow-based troll farm seeking to turn voters in a swing county in Wisconsin.

Harris believes that tech companies never deliberately set out to make their products addictive. They were responding to the incentives of an advertising economy, experimenting with techniques that might capture people’s attention, even stumbling across highly effective design by accident.

The problems facing many Internet users today are predicated on how companies’ services are paid for: companies do everything they can to capture and hold your attention, regardless of your own interests. If social media companies were financed differently, such as through small monthly or yearly fees, imagine how different online communication would be: communities would likely be smaller, yes, but developers would be motivated to do whatever they could to support those communities rather than the advertisers targeting them. Silicon Valley has absorbed many of the best minds of the past decade and a half in order to make advertisements better. Imagine what would be different if all that energy had been channelled toward less socially destructive ends.

WhatsApp Profits

Facebook’s purchase of WhatsApp made sense in terms of buying a potential competitor before it grew too large to threaten Facebook’s understanding of social relationships. The decision to secure communications between WhatsApp users only solidified Facebook’s position that it was less interested in mining the content of communications than in understanding the relationships between users.

However, as businesses turn to WhatsApp to communicate with their customers, a new revenue opportunity has opened for Facebook: charging businesses some kind of fee to continue using the service for commercial communications.

WhatsApp will eventually charge companies to use some future features in the two free business tools it started testing this summer, WhatsApp’s chief operating officer, Matt Idema, said in an interview.

The new tools, which help businesses from local bakeries to global airlines talk to customers over the app, reflect a different approach to monetization than other Facebook products, which rely on advertising.

This is Facebook flipping who ‘pays’ for WhatsApp. Whereas in the past users paid a small yearly fee, now they get the service free and businesses will be charged to use it. It remains to be seen, however, whether WhatsApp is ‘sticky’ enough that consumers will genuinely expect businesses to use it for customer communications. Further, Facebook’s payment model stands in contrast to WhatsApp’s Asian competitors, such as LINE and WeChat, which have transformed their messaging platforms into whole social networks that can also be used for robust commercial transactions. Is this the beginning of an equivalent pivot on Facebook’s part, or are they instead trying out an entirely separate business model in the hopes of not cannibalizing Facebook itself?

Link

Partnering to help curb the spread of terrorist content online

Facebook, Microsoft, Twitter, and YouTube are coming together to help curb the spread of terrorist content online. There is no place for content that promotes terrorism on our hosted consumer services. When alerted, we take swift action against this kind of content in accordance with our respective policies.

Starting today, we commit to the creation of a shared industry database of “hashes” — unique digital “fingerprints” — for violent terrorist imagery or terrorist recruitment videos or images that we have removed from our services. By sharing this information with each other, we may use the shared hashes to help identify potential terrorist content on our respective hosted consumer platforms. We hope this collaboration will lead to greater efficiency as we continue to enforce our policies to help curb the pressing global issue of terrorist content online.

The creation of the industry database of hashes shows the world that these companies are ‘doing something’ without that something being particularly onerous: any change to a file gives it a different hash, making it undetectable by the filtering system these companies are rolling out. But that technical deficiency is actually the least interesting aspect of what these companies are doing. Rather than being compelled to inhibit speech – by way of a law that might not survive a First Amendment challenge in the United States – the companies are voluntarily adopting this process.
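The fragility of exact-match fingerprinting is easy to demonstrate. The announcement does not specify which hashing scheme the companies use – robust systems typically rely on perceptual hashes such as PhotoDNA rather than cryptographic ones – but assuming a plain cryptographic hash like SHA-256 for illustration, flipping a single byte of a file (a re-encode, a crop, a watermark) yields a completely different fingerprint:

```python
import hashlib

# Stand-in bytes for a flagged file (hypothetical content for illustration)
original = b"example video file contents"

# Simulate a trivial edit: flip one bit of the first byte
modified = bytearray(original)
modified[0] ^= 0x01

h_original = hashlib.sha256(original).hexdigest()
h_modified = hashlib.sha256(bytes(modified)).hexdigest()

print(h_original)
print(h_modified)
print(h_original == h_modified)  # False: the fingerprints no longer match
```

A database of exact hashes would treat the modified file as entirely new content, which is precisely the weakness described above.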

The result is that some files will be more challenging to find without someone putting in the effort to seek them out. But it also means that the governments of the world cannot say that the companies aren’t doing anything, and most people aren’t going to be interested in the nuances of the technical deficits of this mode of censorship. So what we’re witnessing is (another) privatized method of censorship that is arguably more designed to rebut political barbs about the discoverability of horrible material on these companies’ services than intended to ‘solve’ the actual problem of the content’s creation and baseline availability.

While a realist might argue that anything is better than nothing, I think the very existence of these kinds of filtering and censorship programs is inherently dangerous. It may be all well and good for ‘bad’ content to be blocked, but who will define what is ‘bad’? And how likely is it that, at some point, ‘good’ content will be either intentionally or accidentally blocked? Once established, these systems can be used in a multitude of ways, and they are often incredibly challenging to retire once in operation.

Link

This is not surveillance as we know it: the anatomy of Facebook messages

There are a lot of issues related to ‘wiretapping the Internet.’ A 2012 post from Privacy International nicely details the amount of metadata and the number of data fields linked to just a single Facebook message, as well as the challenges in ‘just’ picking out certain fields from large lists.

As the organization notes:

Fundamentally, the whole of the request to the Facebook page must be read, at which point the type of message is known, and only then can the technology pretend it didn’t see the earlier parts. Whether this information is kept is often dismissed as “technical detail”, but in fact it is the fundamental point.

We should be wary of governments harvesting large amounts of data and promising to dispose of it; while such disposal might initially occur, once the data is accessible, laws to legitimize its capture, retention, storage, and processing will almost certainly follow.

Quote

Social utopians like Haque, Tapscott and Jarvis are, of course, wrong. The age of networked intelligence isn’t very intelligent. The tragic truth is that getting naked, being yourself in the full public gaze of today’s digital network, doesn’t always result in the breaking down of ancient taboos. There is little evidence that networks like Facebook, Skype and Twitter are making us any more forgiving or tolerant. Indeed, if anything, these viral tools of mass exposure seem to be making society not only more prurient and voyeuristic, but also fuelling a mob culture of intolerance, schadenfreude and revengefulness.

* Andrew Keen, #digitalvertigo: how today’s online social revolution is dividing, diminishing, and disorienting us

Quote

You see, the thing about humans is that we have a really short attention span, and really bad memories. It’s actually hard for me to remember a time before I had a phone that could effectively replace my entire computer in most situations. A phone that I could make video calls from any spot in the world, one that would let me log into our team’s IRC channel while on the floor of a major media event in any city and communicate with our whole staff. A device that was small enough to fit into the front pocket of my arguably-too-tight jeans that would let me connect and share my most important thoughts about developing news and world events — in real time! — with millions of people at once. A device that would underpin and enable modern social movements and political revolutions, generally shrink our sense of the size of humanity, and mesmerize and delight almost everyone who used it.

* Joshua Topolsky, “Reasons to be excited”