Alec Muffett has a terrific piece that clearly articulates why, exactly, passwords are beneficial elements of a broader security apparatus. He also notes core ‘risks’ associated with passwords, and how many of these risks can be mitigated (spoiler alert: just use a strong password manager).
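The core of that recommendation is that a password manager can generate secrets far stronger than anything a person will memorize. As a rough illustration (a minimal sketch using Python's standard `secrets` module, not anything from Muffett's piece), "strong" in practice means long and uniformly random:

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Generate a random password drawn uniformly from letters, digits, and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    # secrets.choice uses a cryptographically secure RNG, unlike random.choice
    return "".join(secrets.choice(alphabet) for _ in range(length))

pw = generate_password()
print(pw)
```

A 20-character password over a ~94-symbol alphabet gives well over 100 bits of entropy, which is the kind of secret a manager can store but no one should be expected to remember.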
2012.11.24
The issue here is not whether Anonymous activists can be rightfully prosecuted: acts of civil disobedience, by definition, are violations of the law designed to protest or create a cost for injustices. The issue is how selectively these cyber-attack laws are enforced: massive cyber-attacks aimed at a group critical of US policy (WikiLeaks) were either perpetrated by the US government or retroactively sanctioned by it, while relatively trivial, largely symbolic attacks in defense of the group were punished with the harshest possible application of law enforcement resources and threats of criminal punishment.
That the US government largely succeeded in using extra-legal and extra-judicial means to cripple an adverse journalistic outlet is a truly consequential episode: nobody, regardless of one’s views on WikiLeaks, should want any government to have that power. But the manifestly overzealous prosecutions of Anonymous activists, in stark contrast to the (at best) indifference to the attacks on WikiLeaks, makes all of that even worse. In line with its unprecedented persecution of whistleblowers generally, this is yet another case of the US government exploiting the force of law to entrench its own power and shield its actions from scrutiny.
Glenn Greenwald, “Prosecution of Anonymous activists highlights war for Internet control”
Axel Arnbak and Nico van Eijk have a thought-provoking paper about regulating systemic vulnerabilities in the HTTPS value chain. They focus on constitutional values to establish a baseline against which to measure regulation; it’s a clever move that offers a good lens for critiquing legislative efforts meant to regulate SSL. The paper is here, and the full abstract is below:
Hypertext Transfer Protocol Secure (‘HTTPS’) has evolved into the de facto standard for secure web browsing. Through the certificate-based authentication protocol, web services and internet users protect valuable communications and transactions against interception and alteration by cybercriminals, governments and business. In only one decade, it has facilitated trust in a thriving global E-Commerce economy, while every internet user has come to depend on HTTPS for social, political and economic activities on the internet.
Recent breaches and malpractices at several Certificate Authorities (CAs) have led to a collapse of trust in these central mediators of HTTPS communications as they revealed ‘fundamental weaknesses in the design of HTTPS’ (ENISA 2011). In particular, the breach at Dutch CA Diginotar shows how a successful attack on one of the 650 Certificate Authorities across 54 jurisdictions enables attackers to create false SSL-certificates for any given website or service. Moreover, Diginotar kept the breach silent. So for 90 days, web browsers continued to trust Diginotar certificates, enabling attackers to intercept the communications of 300,000 Iranians. In its aftermath, Dutch public authorities took over operations at Diginotar and convinced Microsoft to delay updates to its market-leading web browser to ensure ‘the continuity of the internet’. These bold interventions lacked a legitimate basis.
While serving as the de facto standard for secure web browsing, in many ways the security of HTTPS is broken. Given our dependence on secure web browsing, the security of HTTPS has become a top priority in telecommunications policy. In June 2012, the European Commission proposed a new Regulation on eSignatures. As the HTTPS ecosystem is by and large unregulated across the world, the proposal presents a paradigm shift in the governance of HTTPS. This paper examines if, and if so, how the European regulatory framework should legitimately address the systemic vulnerabilities of the HTTPS ecosystem.
To this end, the HTTPS authentication model is conceptualised using actor-based value chain analysis and the systemic vulnerabilities of the HTTPS ecosystem are described through the lens of several landmark breaches. The paper then explores the rationales for regulatory intervention, discusses the EU eSignatures Regulation and abstracts from the EU proposal to develop general insights for HTTPS governance. Our findings should thus be relevant for anyone interested in HTTPS, cybersecurity and internet governance – both in Europe and abroad.
HTTPS governance appraises the incentive structure of the entire HTTPS authentication value chain, untangles the concept of information security and connects its balancing of public and private interests to underlying values, in particular constitutional rights such as privacy, communications secrecy and freedom of communication.
In the long term, a robust technical and policy overhaul must address the systemic weaknesses of HTTPS, as each CA is a single point of failure for the security of the entire ecosystem. In the short term, specific regulatory measures to be considered throughout the value chain may include proportional liability provisions, meaningful security breach notifications and internal security requirements, but both legitimacy and effectiveness will depend on the exact wording of the regulatory provisions.
The research finds that the EU eSignatures proposal lacks an integral vision on the HTTPS value chain and a coherent normative assessment of the underlying values of HTTPS governance. These omissions lead to sub-optimal provisions on liability, security requirements, security breach notifications and supervision in terms of legitimacy and addressing the systemic security vulnerabilities of the HTTPS ecosystem.
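The abstract’s claim that each CA is a single point of failure follows directly from how browsers validate certificates: a certificate is accepted if it chains to *any* CA in the trust store, not to one particular CA per domain. A toy model makes the point (the CA names are invented and an HMAC stands in for a real certificate signature; this is an illustration, not how TLS is actually implemented):

```python
import hmac
import hashlib

# Hypothetical trust store: the browser trusts every listed CA equally.
ca_keys = {"HonestCA": b"honest-secret", "DiginotarLikeCA": b"leaked-secret"}

def issue_cert(ca: str, domain: str) -> bytes:
    """A toy 'certificate': an HMAC over the domain under the CA's signing key."""
    return hmac.new(ca_keys[ca], domain.encode(), hashlib.sha256).digest()

def browser_accepts(domain: str, cert: bytes) -> bool:
    """Accept if ANY trusted CA could have issued the cert -- no per-domain pinning."""
    return any(
        hmac.compare_digest(cert, hmac.new(key, domain.encode(), hashlib.sha256).digest())
        for key in ca_keys.values()
    )

# google.com never dealt with DiginotarLikeCA, but an attacker holding that
# CA's key can still mint a certificate the browser accepts for the domain.
forged = issue_cert("DiginotarLikeCA", "google.com")
print(browser_accepts("google.com", forged))  # True: one compromised CA breaks every domain
```

Mitigations such as certificate pinning or Certificate Transparency amount to narrowing that `any(...)` check, which is exactly the design problem the paper argues regulation should confront.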
While it comes as no surprise that police monitored Facebook during last year’s Occupy protests, in the case of Occupy Miami an advocate/journalist was specifically targeted after his Facebook profile was subjected to police surveillance. An email produced in the court case revealed:
the police had been monitoring Miller’s Facebook page and had sent out a notice warning officers in charge of evicting the Occupy Miami protestors that Miller was planning to cover the process.
Significantly, the police tried to destroy evidence showing that they had unlawfully targeted the advocate, footage that (after having been forensically recovered) revealed that the charges laid against the advocate were blatantly false. That authorities conduct such surveillance – often without the targets of surveillance knowing that they have been targeted or, when targeted, why – matters for the general population because lawfully exercising one’s rights increasingly leads to citizens being punished for doing so. Moreover, when the surveillance is accompanied by deliberate attempts to undermine citizens’ capacities to respond to unlawful detentions and false charges, we have a very, very real problem that can affect any citizen.
We know from academic research conducted by scholars such as Jeffrey Monaghan and Kevin Walby that Canadian authorities use broad catch-all caricatures during major events to identify ‘problem populations.’ We also know that many of the suspects identified during such events are labeled this way regardless of whether they actually belong to the caricatured population; the capacity to ‘effectively’ sort in a way resembling fact or reality is marginal at best. Consequently, we can’t just say that the case of Occupy surveillance is an ‘American thing’: Canadian authorities do the same thing to Canadian citizens of all ages, be they high school or university students, employed middle-aged citizens, or the elderly. These are surveillance and sorting processes that are widely adopted with relatively poor regulation or oversight. They speak to the significant expansion of what constitutes general policing, as well as to the state-borne risks facing citizens, even in ‘safe’ countries, who use social media in an unreflective manner.
In a not-particularly-surprising move, Skype handed over a 16-year-old’s subscriber information to a firm hired by PayPal. No warrant was required, as the information was provided to a private party, and that party subsequently gave it to police. In essence, a very large telecommunications service provider (TSP) made available personally identifiable information that, ultimately, led to an arrest without authorities having to convince a judge that they had legitimate grounds to get that information from the TSP.
At a talk I recently attended, a retired Assistant RCMP Commissioner emphasized time and time again that Canadians need to be more worried about corporations like Skype, Google, and Facebook than they do the federal or provincial governments. He correctly, I believe, spoke to the social harms that these companies can and do cause to individuals who both subscribe and do not subscribe to the companies’ service offerings.
Non-controversially, we know that many large companies can take actions that are harmful to individuals, as can states themselves. What is less recognized, however, is that there are more and more cases where private intermediaries are acting as one or two degrees of separation between public institutions and large private data stores. Such ‘intermediary protection’ often lets states access and use personal data that they otherwise cannot access without considerable difficulty. Worse, where authorities refuse to bring intermediary-provided data to court it can be challenging for accused persons to argue that an investigation was predicated on inappropriate access to their personal data. More time has to be spent considering the role of these data intermediaries and thinking through how to prevent the disclosure of personal data to state authorities in the absence of judicial oversight. Failure to tackle this problem will simply lead to more and more inappropriate access to corporate data by authorities, and critically to access without adequate or necessary judicial oversight.
Bit9 has released a report that outlines a host of fairly serious concerns around Android devices and app permissions. To be upfront: Android isn’t special in this regard; if you have a BlackBerry, iPhone, or Windows Phone device you’ll also find a pile of apps that make very, very strange permission requests (e.g., why does a wallpaper application need access to your GPS and contact book?). The video (above) is a quick overview of some findings; the executive summary can be found here and the full report here (.pdf).
I need to create responses to the above security questions before I can purchase items through Apple’s digital stores. The problem: I actually don’t know the (legitimate/real) answers to any of the questions.
Admittedly, the best security procedure, in the face of any vendor authentication questions, is to produce garbage/unrelated responses to whatever questions vendors ask. This said, it’s a bit insane that I have to do this for the questions Apple has provided. Now, is this a problem that most people can overcome? Of course. They just write in answers and (somewhere) they write down their responses. I actually could use 1Password for this, a terrific password and identity manager that I highly recommend. This said, I’m not going to bother. Purchasing the $20 piece of software just isn’t worth the effort for me: in effect, Apple has succeeded in dissuading me from making an impulse purchase. That’s really not great for the business of app developers (Apple, really, doesn’t care that much given the relative amount that the app store contributes to their overall yearly profits).
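The garbage-answer approach above is easy to automate. A minimal sketch (the question strings are placeholders, and `secrets.token_urlsafe` is just one reasonable way to produce unguessable strings) of generating unrelated answers you would then store in a password manager:

```python
import secrets

# Placeholder security questions of the kind a vendor might ask
questions = [
    "What was the first car you owned?",
    "Where was your least favourite job?",
    "What was the name of your first pet?",
]

def garbage_answer(n_bytes: int = 12) -> str:
    """An unguessable, unrelated answer; keep the mapping in a password manager."""
    return secrets.token_urlsafe(n_bytes)

answers = {q: garbage_answer() for q in questions}
for q, a in answers.items():
    print(f"{q} -> {a}")
```

The point is that the answer has no relationship to your life, so it cannot be researched or socially engineered the way a real mother’s maiden name can.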
You might wonder why these questions are being asked. I suspect they’re largely in response to the Mat Honan hack. In short, a Wired reporter’s Apple, Amazon, Twitter, and Google accounts were hacked so a third-party could masquerade as Mat on Twitter. This led to a ridiculous level of criticism in the press concerning how Apple authenticated users’ identities. I have no doubt that these questions – again, pictured above – are largely meant to better authenticate users and thus avoid identity fraud.
The problem of authentication fraud can be devilishly hard for companies to address. In the case of Apple, there is no option for the user to generate their own questions and responses. This might be seen as good security amongst ‘professionals’ – it prevents really, really crappy questions and easily found responses – but it creates an incredibly poor user experience. While writing down passwords isn’t the horrific nightmare scenario that some security analysts declare, expecting people to find those responses when they’re in trouble – such as when their accounts have been hacked – will meet with mixed results at best. Further, given how other companies tend to follow Apple’s lead, it’s only a matter of time until more and more (less security conscious) companies adopt similar or identical security questions/answers. Such adoptions will limit the relative novelty of Apple’s authentication questions and thus reduce their capacity to genuinely authenticate users’ identities. Consequently, such questions (in the short and long terms) will likely just leave customers frustrated.
Ultimately, this kind of authentication really is less than ideal; more nuanced and (to the user) transparent analytics protocols to detect aberrant behaviours and then recover accounts would be far, far superior to what Apple is presently rolling out. Hopefully it won’t take further authentication failures on Apple’s part for the company to realize the error of its ways and correct it.
Patrick Ball has a good and highly accessible article over on Wired about why certain means of securing communications are problematic. It’s highly recommended. Rather than leave you with the overview of “this is what is said and why it’s important,” let me leave you with a key quotation from the article that (to my mind) nicely speaks to the author’s general mindset: “Good security is about not trusting people. It’s about studying math and software and assuring that the program cannot be turned to bad intent.”