Categories
Links Writing

SSL Skeleton Keys

From the Ars lede:

Critics are calling for the ouster of Trustwave as a trusted issuer of secure sockets layer certificates after it admitted minting a credential it knew would be used by a customer to impersonate websites it didn’t own.

The so-called subordinate root certificate allowed the customer to issue SSL credentials that Internet Explorer and other major browsers would accept as valid for any server on the Internet. The unnamed buyer of this skeleton key used it to perform what amounted to man-in-the-middle attacks that monitored users of its internal network as they accessed SSL-encrypted websites and services. The data-loss-prevention system used a hardware security module to ensure the private key at the heart of the root certificate wasn’t accidentally leaked or retrieved by hackers.

It’s not new that these keys are issued – and, in fact, governments are strongly believed to compel such keys from authorities in their jurisdiction – but the significance of these keys cannot be overstated. SSL is intended to encourage trust: if you see that a site is using SSL then that site is supposed to be ‘safe’. This is the lesson that the Internet industry has been pounding into end-users/consumers for ages. eCommerce largely depends on consumers ‘getting’ this message.

The problem is that the lesson is increasingly untrue.

Given the sale of ‘skeleton key’ certs, the hacking of authorities to generate illegitimate certs for major websites (e.g. addons.mozilla.org, hotmail.com, gmail.com, etc.), and widespread backend problems with SSL implementations, it is practically impossible to claim that SSL makes things ‘safe’. While SSL isn’t quite in the domain of security theatre, it can only be seen as marginally increasing protection rather than making individuals, and their online transactions, safe.
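The structural reason a subordinate root is a skeleton key can be sketched in a few lines. This is a deliberately simplified model of X.509 chain validation, not a real implementation, and every name and certificate below is hypothetical: the point is that a leaf certificate is trusted if it chains to any shipped root, and nothing binds an intermediate to a particular set of domains (absent the rarely used name-constraints extension).

```python
# Toy model of browser chain validation. All names/data are made up for
# illustration; real validation also checks signatures, dates, extensions.

TRUST_STORE = {"RootCA-A", "RootCA-B"}            # roots the browser ships

CERTS = {
    # name: (issuer, is_ca)
    "RootCA-A":        (None,            True),
    "Sub-CA (sold)":   ("RootCA-A",      True),   # Trustwave-style sub-CA
    "mail.google.com": ("Sub-CA (sold)", False),  # cert the sub-CA minted
}

def chains_to_trusted_root(name):
    """Walk issuer links until we hit a root in the trust store."""
    while name is not None:
        if name in TRUST_STORE:
            return True
        issuer, _is_ca = CERTS[name]
        if issuer is not None and not CERTS[issuer][1]:
            return False  # the issuer must itself be a CA
        name = issuer
    return False

# The sub-CA holder can mint a valid-looking cert for ANY hostname:
print(chains_to_trusted_root("mail.google.com"))  # True
```

Nothing in the walk above ever asks whether the sub-CA had any business signing for that hostname, which is exactly why possession of one is equivalent to possessing every site’s credentials.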

This is significant for the end-user/consumer, because they respond psychologically to the difference between ‘safe’ and ‘safer’. Ideally, a next-generation system that is peer-reviewable and trust-agile will be formally adopted by the major players in the near future. Only after the existing problems around SSL are worked out – through trust agility, certificate pinning, and so forth – will the user experience move back towards the ‘safe’ end of the ‘safe/unsafe’ continuum.
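Certificate pinning, one of the mitigations mentioned above, can be sketched briefly. The idea (HPKP-style) is that instead of accepting any certificate that chains to a browser root, the client remembers a hash of the public key it expects for a host and rejects anything else – even a certificate signed by a ‘legitimate’ CA or a sold sub-CA. The keys below are stand-in byte strings, not real SubjectPublicKeyInfo structures.

```python
import base64
import hashlib

def spki_pin(spki_der: bytes) -> str:
    """Base64(SHA-256(SubjectPublicKeyInfo)), as pinning schemes use."""
    return base64.b64encode(hashlib.sha256(spki_der).digest()).decode()

# Pins remembered from a prior legitimate connection (hypothetical key bytes).
PINS = {"example.com": {spki_pin(b"legitimate-key-bytes")}}

def connection_allowed(host: str, presented_spki: bytes) -> bool:
    """Reject a pinned host's connection unless the key matches a pin."""
    pins = PINS.get(host)
    return pins is None or spki_pin(presented_spki) in pins

print(connection_allowed("example.com", b"legitimate-key-bytes"))  # True
print(connection_allowed("example.com", b"mitm-key-from-sub-ca"))  # False
```

The design trade-off is that pinning shifts trust from the CA ecosystem to the first (or out-of-band) observation of a key, which is precisely what makes a Trustwave-style interception box detectable.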

Categories
Videos

Fixing SSL, Moxie-Style

A follow-up to my last post: if you want insight into how to fix the cruft that is SSL, take the time to watch Moxie’s presentation on SSL and The Future of Authenticity.

Categories
Links

Chrome Kills CA Revocation Checks

From Ars:

“While the benefits of online revocation checking are hard to find, the costs are clear: online revocation checks are slow and compromise privacy,” Langley added. That’s because the checks add a median time of 300 milliseconds and a mean of almost 1 second to page loads, making many websites reluctant to use SSL. Marlinspike and others have also complained that the services allow certificate authorities to compile logs of user IP addresses and the sites they visit over time.

Chrome will instead rely on its automatic update mechanism to maintain a list of certificates that have been revoked for security reasons. Langley called on certificate authorities to provide a list of revoked certificates that Google bots can automatically fetch. The time frame for the Chrome changes to go into effect are “on the order of months,” a Google spokesman said.

The problems with CA revocation checks have been particularly prominent over the past 12 months, given the large number of serious CA breaches. While even the Google fetch mechanism isn’t ideal – really, we need to move to an agile trust framework combined (ideally) with browser pinning that can’t be compromised by corporate admins – it’s better. Still, there’s a long way to go until SSL and the CA system are reformed to the point of being actual ‘trusted’ facets of the Internet.
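The mechanism Chrome is moving to can be sketched simply: rather than a blocking online OCSP/CRL fetch per connection (the source of the ~300 ms median stall and the privacy leak to CAs), the browser ships a compact revocation set through its existing update channel and consults it locally. The (issuer, serial) pairs below are invented for illustration.

```python
# Sketch of a locally shipped revocation set, in the spirit of Chrome's
# approach. Entries are hypothetical; a real set is pushed via the
# browser's auto-update mechanism and compiled from CA-published CRLs.

REVOKED = {
    ("DigiNotar Root", "0x0a7b"),
    ("Comodo RSA CA",  "0x1f3c"),
}

def is_revoked(issuer: str, serial: str) -> bool:
    """O(1) local lookup: no network round trip, no CA sees the query."""
    return (issuer, serial) in REVOKED

print(is_revoked("DigiNotar Root", "0x0a7b"))  # True
print(is_revoked("Some Other CA", "0x0001"))   # False
```

The obvious limitation, which the post’s call for trust agility speaks to, is coverage: the local set can only contain revocations the update pipeline knew about and chose to include.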

Categories
Quotations

The most important detail to focus on, is (per comment 12 by Brian Trzupek above) that Trustwave knew when it issued the certificate that it would be used to sign certificates for websites not owned by Trustwave’s corporate customer.

That is, Trustwave sold a certificate knowing that it would be used to perform active man-in-the-middle interception of HTTPS traffic.

This is very very different than the usual argument that is used to justify “legitimate” intermediate certificates: the corporate customer wants to generate lots of certs for internal servers that it owns.

Regardless of the fact that Trustwave has since realized that this is not a good business practice to be engaged in, the damage is done.

With root certificate power comes great responsibility. Trustwave has abused this power and trust, and so the appropriate punishment here is death (of its root certificate).

~Christopher Soghoian, in comment about Trustwave

Categories
Quotations

Phone hacking, for the most part, depends on remote access. Hackers obtain unprotected phone numbers from a variety of sources – Facebook must be a favorite – or by social engineering. PINs, for the most part, are easy to guess. Hacking typically takes place in the legitimate user’s absence.

Unless Apple or Google plans to bar remote access to devices, facial recognition security surely only solves a small part of the problem. Back to the drawing board.

~Kim Davis, from Internet Evolution

Categories
Humour

Security Measures

Security Measures – Ric Stultz, 2012

The security systems are aware, armed, and not taking prisoners.

Categories
Aside Links

iOS is a Security Vampire

I’m sorry, but what Path did is (in some jurisdictions, such as my own) arguably a criminal offence. Want to know what they’ve been up to?

When developer Arun Thampi started looking for a way to port photo and journaling software Path to Mac OS X, he noticed some curious data being sent from the Path iPhone app to the company’s servers. Looking closer, he realized that the app was actually collecting his entire address book — including full names, email addresses, and phone numbers — and uploading it to the central Path service. What’s more, the app hadn’t notified him that it would be collecting the information.

Path CEO Dave Morin responded quickly with an apology, saying that “we upload the address book to our servers in order to help the user find and connect to their friends and family on Path quickly and efficiently as well as to notify them when friends and family join Path. Nothing more.” He also said that the lack of opt-in was an iOS-specific problem that would be fixed by the end of the week. [emphasis added]

No: this isn’t an ‘iOS-specific problem’; it’s an ‘iOS lacks an appropriate security model, and so we chose to abuse it’ problem. I cannot, for the life of me, believe that Apple is willing to let developers access the contact book – with all of its attendant private data – without ever notifying the end user. Path should be tarred, feathered, and legally punished. This wasn’t an ‘accident’ but a deliberate decision, and there should be severe consequences for it.

Also: while the Verge author writes:

Thampi doesn’t think Path is doing anything untoward with the data, and many users don’t have a problem with Path keeping some record of address book contacts.

I think that this misses a broader point. You should not be able to disclose mass amounts of other people’s personal information without their consent. When I provide key contact information, it is for an individual’s usage, not for them to share my information with a series of corporate actors to do with as they please. The notion that a corporation would be so bold as to steal this personal information for its own purposes is absolutely, inexcusably, wrong.

Categories
Aside

Useful Warnings

circa476: Poor Apple….

THIS is the kind of actionable, helpful warning information that should be presented to end-users. It gives them the relevant information they need to choose ‘Cancel’ or ‘Add Anyway’ without scaring them one way or the other. If the jailbreak community can do this, then why the hell can’t big players like Apple, RIM, Google, Microsoft, and the rest?

Categories
Links Writing

Viruses stole City College of S.F. data for years

The viral infestation detailed by the Chronicle is horrific in (at least) two ways: first, that data was leeched from university networks year after year, and second, that it’s only now – and perhaps by happenstance – that the IT staff detected the security breach. From the article:

a closer look revealed a far more nefarious situation, which had been lurking within the college’s electronic systems since 1999. For now, it’s still going on. So far, no cases of identity theft have been linked to the breach. That may change as the investigation continues, and college officials said they might need to bring in the FBI.

Each night at about 10 p.m., at least seven viruses begin trolling the college networks and transmitting data to sites in Russia, China and at least eight other countries, including Iran and the United States, Hotchkiss and his team discovered. Servers and desktops have been infected across the college district’s administrative, instructional and wireless networks. It’s likely that personal computers belonging to anyone who used a flash drive during the past decade to carry information home were also affected.

Some of the stolen data is probably innocuous, such as lesson plans. But an analysis shows that students and faculty have used college computers to do their banking, and the viruses have grabbed the information, Hotchkiss said.

It is precisely for this kind of reason that regular updates of common, lab-based computer equipment must be performed. These computers must factor centrally into campus security plans because of their accessibility to the public and a broad student population. I simply cannot believe that systems were so rarely refreshed, so rarely updated, and so poorly secured that a mass infection of a campus could occur, unless staff were failing to implement a university security and data protection policy. Regardless, what has happened at this campus is an inexcusable failure: lessons should be learned, yes, but heads should damn well roll as well.

Categories
Links

Videoconferencing Systems Laden With Security Holes?

From a piece in The New York Times, we learn that

Rapid7 discovered that hundreds of thousands of businesses were investing in top-quality videoconferencing units, but were setting them up on the cheap. At last count, companies spent an estimated $693 million on group videoconferencing from July to September of last year, according to Wainhouse Research.

The most popular units, sold by Polycom and Cisco, can cost as much as $25,000 and feature encryption, high-definition video capture, and audio that can pick up the sound of a door opening 300 feet away. But administrators are setting them up outside the firewall and are configuring them with a false sense of security that hackers can use against them.

Whether real hackers are exploiting this vulnerability is unknown; no company has announced that it has been hacked. (Nor would one, and most would never know in any case.) But with videoconference systems so ubiquitous, they make for an easy target.

Two months ago, Mr. Moore wrote a computer program that scanned the Internet for videoconference systems that were outside the firewall and configured to automatically answer calls. In less than two hours, he had scanned 3 percent of the Internet.

In that sliver, he discovered 5,000 wide-open conference rooms at law firms, pharmaceutical companies, oil refineries, universities and medical centers. He stumbled into a lawyer-inmate meeting room at a prison, an operating room at a university medical center, and a venture capital pitch meeting where a company’s financials were being projected on a screen. Among the vendors that popped up in Mr. Moore’s scan were Polycom, Cisco, LifeSize, Sony and others. Of those, Polycom — which leads the videoconferencing market in units sold — was the only manufacturer that ships its equipment — from its low-end ViewStation models to its high-end HDX products — with the auto-answer feature enabled by default.

It sounds like there are a whole lot of networking/IT admins who either should be fired (for doing a piss-poor job of establishing network security) or better resourced (if the piss-poor job is the result of understaffing and a lack of proper training). Likely both are required.