Link

Messaging Interoperability and Client Security

Eric Rescorla has a thoughtful and nuanced assessment of recent EU proposals which would compel messaging companies to make their communications services interoperable. To his immense credit he spends time walking the reader through historical and contemporary messaging systems in order to assess the security issues prospectively associated with requiring interoperability. It’s a very good, and compact, read on a dense and challenging subject.

I must admit, however, that I’m unconvinced that demanding interoperability will have only minimal security implications. While much of the expert commentary has focused on whether end-to-end encryption would be compromised, I think too little time has been spent considering the client side of interoperable communications. So if we assume it is possible to facilitate end-to-end communications across messaging companies, and focus just on the clients receiving and sending those communications, what are some risks?1

As it stands today, the dominant messaging companies have large and professional security teams. While none of these teams are perfect, as shown by the success of cyber-mercenary companies such as NSO Group, they are robust and constantly working to improve the security of their products. The attacks used by groups such as NSO Group, Hacking Team, Candiru, and FinFisher have not tended to rely on breaking encryption. Rather, they have sought vulnerabilities in client devices. Due to sandboxing and contemporary OS security practices, this has regularly meant successfully targeting a messaging application and, subsequently, expanding the foothold on the device more generally.

In order for interoperability to ‘work’ properly there will need to be a number of preconditions. As noted in Rescorla’s post, this may include checking what functions an interoperable client possesses to determine whether ‘standard’ or ‘enriched’ client services are available. Moreover, APIs will need to be (relatively) stable or rely on a standardized protocol to facilitate interoperability. Finally, while spam messages are annoying on messaging applications today, they may become even more commonplace where interoperability is required and service providers cannot use their current processes to filter/quality check messages transiting their infrastructure.

What do all the aforementioned elements mean for client security?

  1. Checking for client functionality may reveal whether a targeted client possesses known vulnerabilities, either known generally (following a patch update) or known only to an exploit vendor (which is aware of a vulnerability and actively exploiting it). Where spam filtering is weak, exploit vendors can use spam messages for this reconnaissance without the service provider, client vendor, or client application necessarily being aware of the threat activity.
  2. When or if there is a significant need to rework how keying operates, or to revise identity properties more broadly that are linked to an API, there is a risk that the rollout of updates will be delayed until the revisions have had time to be adopted by clients. While this might be great for competition vis-a-vis interoperability, it will also signal an oncoming change to threat actors, who may accelerate their efforts to get footholds on devices, or be warned that they, too, need to update their tactics, techniques, and procedures (TTPs).
  3. As a more general point, threat actors might work to develop and propagate interoperable clients that they have already compromised; we’ve previously seen nation-state actors do so, and there’s no reason to expect this behaviour to stop in a world of interoperable clients. Alternately, threat actors might try to convince targets to move to ‘better’ clients that contain known vulnerabilities but which are developed and made available by legitimate vendors. Whereas today an exploit developer must target the specific messaging system that delivers a given message, a future world of interoperable messaging will likely expand the set of clients that threat actors can seek to exploit.
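To make the reconnaissance risk in point 1 concrete, here is a minimal sketch of how an attacker could use a client’s advertised capabilities to decide whether a known exploit applies. The capability-banner format, client names, and version numbers are all invented for this illustration; no real messaging protocol is depicted.

```python
from typing import Optional

# Hypothetical illustration: advertised client capabilities acting as an
# exploit-targeting oracle. Everything below is invented for this sketch.

# Versions known (to the attacker) to carry an unpatched flaw, keyed by
# client name.
KNOWN_VULNERABLE = {
    "acme-messenger": {"1.2.0", "1.2.1"},
    "example-chat": {"0.9.4"},
}

def parse_capabilities(banner: str) -> dict:
    """Parse a hypothetical 'name/version feature1 feature2' banner."""
    head, *features = banner.split()
    name, _, version = head.partition("/")
    return {"name": name, "version": version, "features": set(features)}

def is_target(banner: str) -> bool:
    """Return True if the advertised client/version is known-exploitable."""
    caps = parse_capabilities(banner)
    return caps["version"] in KNOWN_VULNERABLE.get(caps["name"], set())
```

A single unsolicited (‘spam’) message that triggers a capability exchange would be enough to run this kind of check against a target, which is why weak spam filtering compounds the problem.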

One of the severe dangers and challenges facing the current internet regulation landscape is that a large volume of new actors have entered the various overlapping policy fields. For a long time there haven’t been that many of us, and anyone who’s been around for 10-15 years tends to be suitably multidisciplinary that they think about how activities in policy domain X might/will have consequences for domains Y and Z. The new raft of politicians and their policy advisors, in contrast, often lack this broad awareness. The result is that proposals are being advanced around the world by ostensibly well-meaning individuals and groups to address issues associated with online harms, speech, CSAM, competition, and security. However, these same parties often lack awareness of how the solutions meant to solve their favoured policy problems will affect neighbouring policy issues. And, where they are aware, they often don’t care, because that’s someone else’s policy domain.

It’s good to see more people participating and more inclusive policy making processes. And seeing actual political action on many issue areas after 10 years of people debating how to move forward is exciting. But too much of that action runs counter to the thoughtful warnings and need for caution that longer-term policy experts have been raising for over a decade.

We are almost certainly moving towards a ‘new Internet’. It remains in question, however, whether this ‘new Internet’ will see resolutions to longstanding challenges or if, instead, the rush to regulate will change the landscape by finally bringing to life the threats that long-term policy wonks have been working to forestall or prevent for much of their working lives. To date, I am increasingly concerned that we will experience the latter rather than witness the former.


  1. For the record, I currently remain unconvinced it is possible to implement end-to-end encryption across platforms generally. ↩︎
Link

Europe Planning A DNS Infrastructure With Built-In Filtering

Catalin Cimpanu, reporting for The Record, has found that the European Union wants to build a recursive DNS service that will be available to EU institutions and the European public. The reasons for building the service are manifold, including concerns that American DNS providers are not GDPR compliant and worries that much of Europe is dependent on (largely) American-based or -owned infrastructure.

As part of the European system, plans are for it to:

… come with built-in filtering capabilities that will be able to block DNS name resolutions for bad domains, such as those hosting malware, phishing sites, or other cybersecurity threats.

This filtering capability would be built using threat intelligence feeds provided by trusted partners, such as national CERT teams, and could be used to defend organizations across Europe from common malicious threats.

It is unclear if DNS4EU usage would be mandatory for all EU or national government organizations, but if so, it would grant organizations like CERT-EU more power and the agility it needs to block cyber-attacks as soon as they are detected.

In addition, EU officials also want to use DNS4EU’s filtering system to also block access to other types of prohibited content, which they say could be done based on court orders. While officials didn’t go into details, this most likely refers to domains showing child sexual abuse materials and copyright-infringing (pirated) content.1

By integrating censorship/blocking provisions at the policy level of the European DNS, there is a real risk that, over time, the same system might be used for untoward ends. Consider the rise of anti-LGBTQ laws in Hungary and Poland, and how those governments might be motivated to block access to ‘prohibited content’ that is identified as such by anti-LGBTQ politicians.

While a reader might hope that the European courts could knock down these kinds of laws, their recurrence alone raises the spectre that content that is deemed socially undesirable by parties in power could be censored, even where there are legitimate human rights grounds that justify accessing the material in question.
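As a rough illustration of the mechanism the quoted report describes, resolver-level filtering amounts to a blocklist lookup performed before normal recursive resolution. The sketch below assumes an invented feed and invented domain names; a real resolver would consume authenticated threat-intelligence feeds and answer blocked queries with NXDOMAIN or a policy-defined sinkhole address.

```python
from typing import Optional

# Minimal sketch of resolver-side DNS filtering driven by a threat feed.
# Feed contents and domains are invented for illustration only.

# Blocklist as ingested from (hypothetical) threat-intelligence feeds.
BLOCKLIST = {
    "malware.example",
    "phish.example",
}

UPSTREAM = {  # stand-in for real recursive resolution
    "news.example": "192.0.2.10",
}

def resolve(domain: str) -> Optional[str]:
    """Return an address, or None (i.e. a blocked/NXDOMAIN response)."""
    # Check the query name and every parent zone against the blocklist,
    # so 'cdn.malware.example' is caught by a 'malware.example' entry.
    labels = domain.split(".")
    for i in range(len(labels) - 1):
        if ".".join(labels[i:]) in BLOCKLIST:
            return None
    return UPSTREAM.get(domain)
```

Note that the lookup behaves identically whether the feed lists malware domains or ‘prohibited content’ added by court order, which is the dual-use concern raised above.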


  1. Boldface not in original. ↩︎

Russia passes ‘Big Brother’ anti-terror laws

Russia has passed legislation which functionally adopts many of the worst — and largely discredited — surveillance provisions that Europe adopted in the past and is now abandoning. Specifically, Russian telecoms will be required to retain data traffic information for six months, as well as assist government agencies in decrypting information. The law will also (further) penalize those who support terrorist activities or engage in other types of social disturbances: the problem is that such accusations are increasingly used to target those disliked by the government, as opposed to those who are actually supporting terrorism or the destruction of Russian society.

It will be particularly interesting to see what effect, if any, the EU has on Russia’s new law. Will the law, which flagrantly violates human rights, inhibit Russia’s ability to trade with EU member nations, or will the infringement be ignored? Or will the EU be so consumed by Brexit that it cannot — or will not — turn its attention to one of its largest trading partners?

Link

Emergency surveillance bill clears Commons

Emergency surveillance bill clears Commons:

This ‘emergency’ follows the European Court of Justice finding that mass data retention laws in Europe are illegal. In response, the UK government is passing a localized data retention and surveillance bill.

Significantly, the article notes that:

The government has insisted the ruling throws into doubt existing regulations, meaning communications companies could begin deleting vital data. Ministers claim the bill only reinforces the status quo and does not create new powers.

At issue is that the existing status quo has been deemed illegal. And yet, in response, Parliament has decided to pass more — still illegal — legislation. And so civil liberties groups will take this to court and spend years fighting, only to have the legislation overturned. After which, the government will likely pass similar, still illegal, legislation. And the wheel of politics will turn on and on and on…

Link

EU votes in favor of universal mobile charger

Awesome news for consumers in Europe who have to deal with the multitude of manufacturers that use proprietary adapters for no clear purpose. Note: I exclude Apple from the ‘no clear purpose’ category, as Lightning adapters are wickedly more awesome to use than microUSB. This is a fact I’m reminded of every time I plug in my Android phone and my wife plugs in her iPhone, especially when doing so in dimly lit situations (i.e. almost every night, in the dark).

On that note, USB Type-C connectors (which, like Lightning connectors, will fit into ports regardless of their orientation) cannot come soon enough!

Brief Thoughts on Google’s ‘Shared Endorsements’ Policy

Simon Davies, one of the world’s most prominent privacy advocates, has filed formal complaints across the EU concerning Google’s ‘Shared Endorsements’ policy. Per this policy, Google may use:

the images, personal data and identities of its users to construe personal endorsements published alongside the company’s advertised products across the Internet

The legality of recent changes to Google’s policies that allow the company to share personal data across all its products and services are currently being investigated by a number of EU data protection authorities. The data protection issues and violations highlighted in my complaint go to the heart of many of the aspects under investigation. Indeed the Shared Endorsements policy is made possible only through company-wide amalgamation of personal data.

In effect, Davies argues that the amalgamation of Google’s services under the company’s harmonized privacy policy/data pooling policy may be illegal and that, moreover, individuals may not know that their images and comments might be revealed to people they know upon leaving reviews of products and services in Google-owned environments.

Admittedly, I find that the shared pooling of information across my networks can be incredibly helpful (e.g. highlighting the reviews/opinions of people I know concerning various subjects and topics). Knowing that a colleague with whom I share book interests likes a book is more helpful to me than a review from someone I don’t know. At the same time, I review products that I’ve purchased online quite often: given how helpful others’ reviews can be when I’m purchasing a product, it seems like a courtesy to contribute information to that private commons. So, while I would prefer a review from a colleague, I’m perfectly willing to make purchasing decisions based on what absolute strangers say/write as well.

The more significant issue with Google’s products, in my opinion, emerges from how the company’s business decisions are narrowing the range of commentary individuals may engage in. Such self-censorship is largely attributable to linking all comments to a person’s real name/public identity. Personally, this means that I often avoid leaving some book reviews, not because I’m ‘ashamed’ of the review but because I worry about whether it could detrimentally affect my future publishing opportunities. My reviews are (I think) reasonably high quality and fair, but I refuse to leave some without some degree of pseudonymity. There is no reason to believe that my decision is unique: those in similar, tight-knit industries likely experience similar pressures to avoid reviewing/commenting on some products, despite being experts concerning the product(s) in question.

I am not from a ‘marginalized’ or ‘repressed’ social population, and Google is seemingly deploying platforms that are meant to serve people like me: people who freely review products online and who find it acceptable that such reviews are publicly shared and oftentimes highlighted to specific users. And yet even I avoid saying certain (legal) things based on the (unknown) consequences linked to such speech acts. Despite being reasonably savvy concerning the collection, use, and sharing of personal information, even I do not fully appreciate or understand how Google collects, retains, processes, or disseminates the information I provide to the company. If even I am censoring legitimate speech because of the vicissitudes of Google’s privacy policies and the uncertainties associated with providing content on their platforms, then there is (to my mind) a very serious problem at the very base of the company’s contemporary data-integration and disclosure operations.

Link

Prism threatens ‘sovereignty’ of all EU data

Caspar Bowden has been aggressively lobbying the EU Parliament over the implications of the FISA Amendments Act for some time. In short, the Act authorizes capturing data from ‘Electronic Communications Service Providers’ when the data possesses foreign intelligence value. The result is that business and personal information, in addition to information directly concerning ‘national security’, can be legitimately collected by the NSA. (For more, see pages 33-35 of this report.)

Caspar’s most recent article outlines the unwillingness of key members of the EU Parliament to take seriously the implications of American surveillance … until it ceases to be an issue for policy wonks and becomes one of politics. Still, the Parliament has yet to retract recent amendments that would detrimentally affect the privacy rights of European citizens: it will be interesting to see whether the politics of the issue reverse the parliamentarians’ decisions or if lobbying by corporate interests wins the day.

Notes EM: My FT oped: Google Revolution Isn’t Worth Our Privacy

evgenymorozov:

Google’s intrusion into the physical world means that, were its privacy policy to stay in place and cover self-driving cars and Google Glass, our internet searches might be linked to our driving routes, while our favourite cat videos might be linked to the actual cats we see in the streets. It also means that everything that Google already knows about us based on our search, email and calendar would enable it to serve us ads linked to the actual physical products and establishments we encounter via Google Glass.

For many this may be a very enticing future. We can have it, but we must also find a way to know – in great detail, not just in summary form – what happens to our data once we share it with Google, and to retain some control over what it can track and for how long.

It would also help if one could drive through the neighbourhood in one of Google’s autonomous vehicles without having to log into Google Plus, the company’s social network, or any other Google service.

The European regulators are not planning to thwart Google’s agenda or nip innovation in the bud. This is an unflattering portrayal that might benefit Google’s lobbying efforts but has no bearing in reality. Quite the opposite: it is only by taking full stock of the revolutionary nature of Google’s agenda that we can get the company to act more responsibly towards its users.

I think that it’s critically important to recognize just what the regulators are trying to establish: some kind of line in the sand, a line that identifies practices that move against the ethos and civil culture of particular nations. There isn’t anything necessarily wrong with this approach to governance. The EU’s approach suggests a deeper engagement with technology than that of some other nations, insofar as some regulators are questioning technical developments and potentialities on the basis of a legally instantiated series of normative rights.

Winner, writing all the way back in 1986 in his book The Whale and the Reactor: A Search for Limits in an Age of High Technology, recognized that frank discussions around technology and the socio-political norms embedded in it are critical to a functioning democracy. The decisions we make with regard to technical systems can have far-reaching consequences, insofar as (some) technologies become ‘necessary’ over time because of sunk costs, network effects, and their relative positioning compared to competing products. Critically, technologies aren’t neutral: they are shaped within a social framework that is crusted with power relationships. As a consequence, it behooves us to think about how technologies enable particular power relations and whether they are relations that we’re comfortable asserting anew, or reaffirming again.

(If you’re interested in reading some of Winner’s stuff, check out his essay, “Do Artifacts Have Politics?”)

Link

EU regulators accuse smart card chipmakers of price-fixing

Looks like some chipmakers might experience some revenue ‘setbacks’ after engaging in anticompetitive behaviour:

The case has been ongoing for years, as the European Commission searched the offices of Infineon Technologies AG, STMicroelectronics NV, Renesas Technology Corp. and Atmel Corp. in 2008. In 2009 it investigated companies that make chips for telephone SIM cards, bank cards and ID cards over price-fixing and customer allocation. NXP Semiconductors NV has admitted that it has been involved in the investigations and could be subject to fines.

Should the EU prove that price-fixing occurred, it can levy fines on the companies. While the commission has been trying to negotiate a settlement, those talks have fallen through, which may lead to stiffer fines.


Quote

The 27 regulators, led by France’s CNIL, gave Google three to four months to make changes to its privacy policy — or face “more contentious” action. In a statement on its website today, the CNIL said that four months on from that report Google has failed “to come into compliance” so will now face additional action.

“On 18 February, the European authorities find that Google does not give a precise answer and operational recommendations. Under these circumstances, they are determined to act and pursue their investigations,” the CNIL said in its statement (translated from French with Google Translate).

According to the statement, the European regulators intend to set up a working group, led by CNIL, to “coordinate their enforcement action” against Google — with the working group due to be established before the summer. An action plan for tackling the issue was drawn up at a meeting of the regulators late last month, and will be “submitted for validation” later this month, they added.