Amazon’s Echo and Alphabet’s Home each cost less than $200 today, and those prices will likely drop. So who will pay our butler’s salary, especially as it offers additional services? Advertisers, most likely. Our butler may then recommend services and products that further the super-platform’s financial interests rather than our own. By serving its true masters, the platforms, it may distort our view of the market and steer us toward the services and products its masters wish to promote.
But the potential harm transcends the search-bias issue that Google is currently defending against in Europe. An increase in a super-platform’s economic power can translate into political power. As we increasingly rely on one or two head butlers, the super-platform will learn about our political beliefs and gain the power to shape our views and the public debate.
Discussions of algorithmic bias often have an almost science-fiction feel to them. But as super-platforms monetize their personal assistants, inking deals with advertisers and devising opaque business practices to extract value from users, the threat of attitude shaping will grow more pressing. Why did your assistant recommend a particular route? (Answer: because it took you past businesses where the platform owner believes you are predisposed to spend money.) Why did your assistant present a particular piece of news? (Answer: because the piece conformed to your existing views and thus increased the time you spent on the site, during which you were exposed to content from the platform’s advertising partners.)
We are shifting to a world in which algorithms function as what we call magic: a kind of magic that can be used to exploit us even as we believe that algorithmically designed digital assistants are changing our lives markedly for the better.