The problem … was that the surveillance technology sold to Iran in 2008 is standard “lawful intercept” functionality required by law in Europe, so that police can track criminals. Unfortunately, with the same technology in the hands of a regime that defines “crime” broadly to include political dissent and “blasphemy,” the result is an efficient antidissident surveillance machine.
Wired has run a decent piece on unilateral American seizures of domain names, accomplished by leaning on critical infrastructure governed by US law. A key bit from the article to get you interested:
Bodog.com was registered with a Canadian registrar, a VeriSign subcontractor, but the United States shuttered the site without any intervention from Canadian authorities or companies.
Instead, the feds went straight to VeriSign. It’s a powerful company deeply enmeshed in the backbone operations of the internet, including managing the .com infrastructure and operating root name servers. VeriSign has a cozy relationship with the federal government, and has long had a contract from the U.S. government to help manage the internet’s “root file” that is key to having a unified internet name system.
These domain seizures are a big deal. Despite what some have written, even a .ca address (such as the country-code top-level domain address linked to this website) could be subject to a takedown that leverages the root file. In effect, US copyright law, combined with American control of critical Internet infrastructure, is being used to radically extend America’s capability to mediate the speech rights of foreign citizens.
The capacity for the US to unilaterally shape the constitution of the Web is not a small matter: such actions threaten the sovereign right of countries like Canada, Russia, and Australia, and of Europe generally, to establish the policy and law that govern the lives of their citizens. Something must be done, and soon, before the Web – and the Internet with it – truly begins to fracture.
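The leverage described above comes from where a name sits in the DNS delegation chain: the root zone points to the .com registry (operated by VeriSign), which in turn holds the records for every .com domain. A removal at the registry or root level therefore makes a name unresolvable worldwide, no matter which country's registrar sold it. Here is a minimal toy model of that delegation; the zone contents are invented for illustration, and only the delegation structure reflects how DNS actually works:

```python
# Toy model of DNS delegation: root zone -> TLD registry -> domain record.
# The zone data below is made up; only the chain of lookups is realistic.

ZONES = {
    "root": {"com": "verisign-registry"},           # root delegates .com to VeriSign
    "verisign-registry": {"bodog.com": "1.2.3.4"},  # the .com registry holds the record
}

def resolve(name: str):
    """Follow the delegation chain from the root zone down to an address."""
    tld = name.rsplit(".", 1)[-1]
    registry = ZONES["root"].get(tld)
    if registry is None:
        return None                      # no delegation for this TLD
    return ZONES[registry].get(name)     # registry answers (or doesn't)

print(resolve("bodog.com"))   # resolves while the registry record exists

# A seizure acts at the registry, not at the (possibly foreign) registrar:
del ZONES["verisign-registry"]["bodog.com"]
print(resolve("bodog.com"))   # None: unresolvable everywhere, regardless of
                              # where the domain was originally registered
```

The point of the sketch is that the registrar never appears in the lookup path: control over the registry (or the root file above it) is sufficient to erase a name globally.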
Facebook Censorship
I’ve tried to think of something comprehensive to say about the Facebook censorship rules for a few days now. I still don’t have something that really captures how absurd and offensive many of the items listed are. So, rather than give a holistic analysis of the document, here are a few thoughts:
Sex and Nudity
- Point (1) indicates that permitting foreplay images between members of the same gender is somehow exceptional, given the statement “Foreplay allowed (Kissing, groping, etc.) even for same sex (man-man/woman-woman).” That this needs to be clearly stated is suggestive of a basic level of discomfort with same-sex relationships.
- Point (12) seems intensely hard to police, with enforcement being contingent on an employee’s own awareness of sexual fetishes. Moreover, given that the definition of a fetish is often derived from the use of inanimate objects as a stimulus to achieve sexual enjoyment/arousal, a high level of subjectivity will almost necessarily come into monitoring for the depiction of sexual fetishes “in any form.”
Hate Content
- The note that “Humor overrules hate speech UNLESS slur words are present or the humor is not evident” is concerning because it implies that, in some circumstances, Facebook recognizes hate speech as somehow appropriate. I would suggest that one person’s capacity to detect humour is a particularly poor (and, arguably, inappropriate) evaluation metric.
Graphic Content
- Point (1) seems immediately hard to govern, especially given that many Facebook members will support state-sanctioned violence towards targeted individuals. Example: would graphic comments supporting American efforts to torture Osama bin Laden be inappropriate? Is it OK to call for violence towards ‘bad’ people and not towards ‘good’ ones?
- Point (6) prohibits the exhibition of what might be termed ‘grisly’ images that clearly show the penetration of skin. Blood or other aspects of a violent act are permitted, but the barrier of the skin is seen as special. This is suggestive of the ‘kinds’ of violence that Facebook recognizes as more or less appropriate for public viewing while imposing a particular cultural norm on a global network.
- There is “No exception for news or awareness related content.” Thus, any news that is shared by Facebook members must conform to a specific norm of ‘appropriateness’ and failure to conform results in the removal of the content. Such an attitude speaks poorly of the company’s willingness to act as a site for individuals to communicate fully and openly: Facebook is declaring that their monetization depends, in part, on everyone being happy (or at least not shocked) and thus prohibits certain modes of expression.
Credible Threats
- Point (3), that any threat to a head of state should be escalated, regardless of credibility, is problematic for three reasons. First: it will capture a vast number of users in a dragnet, and it is unclear just how little would place a user within this net (e.g. would “I fucking hate X and wish we’d just kill X” qualify?). Second: it stinks of an effort to pass responsibility to another party, so that if a particular message is ever linked to an attack then Facebook would be minimally responsible. Third: the number of potential threats can outpace professional security audit staff’s capacity to separate real from false threats. Dragnet surveillance for this kind of behaviour is a poor means of identifying actual threats.
Those are some of my thoughts about this particular document. There are others that are still crystallizing and once/if I develop a full thought about the document I’ll be sure to post it.
Making Sense of Twitter ‘Censorship’
Jillian York, the Director of International Freedom of Expression at the EFF, has a good (and quick) thought on Twitter’s recent decision to ‘censor’ some Tweets in particular geographical areas.
Let’s be clear: This is censorship. There’s no way around that. But alas, Twitter is not above the law. Just about every company hosting user-generated content has, at one point or another, gotten an order or government request to take down content. Google lays out its orders in its Transparency Report. Other companies are less forthright. In any case, Twitter has two options in the event of a request: Fail to comply, and risk being blocked by the government in question, or comply (read: censor). And if they have “boots on the ground”, so to speak, in the country in question? No choice.
In the event that a company chooses to comply with government requests and censor content, there are a number of mitigating steps the company can take. The most important, of course, is transparency, something that Twitter has promised. Google is also transparent in its content removal (Facebook? Not so much). Twitter’s move to geolocate their censorship is also smart, given the alternative (censoring it worldwide, that is) – particularly since it appears a user can manually change his or her location.
I tend to agree with her position. I’m not particularly happy that Twitter is making this move, but I can appreciate that, from an Internet governance – and national sovereignty – perspective, Twitter’s new policy ‘fits’ with international practices. Further, the company’s unwillingness to censor globally is positive, and limits the damage caused by state-mandated censorship.
Admittedly, I’d like to see the company go a bit further, in line with its drive towards transparency. Perhaps if you did a keyword search in a particular geographic area you might receive a notice reading, “Some items in this search have been censored in your region,” or something along those lines. Still, Twitter is arguably the best ‘good’ company prominent in the social networking environment at the moment, so I’ll hope it takes additional steps towards full transparency rather than lambasting the company for its policy changes right now.