Link

Messaging Interoperability and Client Security

Eric Rescorla has a thoughtful and nuanced assessment of recent EU proposals that would compel messaging companies to make their communications services interoperable. To his immense credit, he spends time walking the reader through historical and contemporary messaging systems in order to assess the security issues prospectively associated with requiring interoperability. It’s a very good, and compact, read on a dense and challenging subject.

I must admit, however, that I’m unconvinced that demanding interoperability will have only minimal security implications. While much of the expert commentary has focused on whether end-to-end encryption would be compromised, I think too little time has been spent considering the client side of interoperable communications. So if we assume it’s possible to facilitate end-to-end communications across messaging companies, and focus just on the clients receiving and sending those communications, what are some risks?[1]

As it stands today, the dominant messaging companies have large and professional security teams. While none of these teams are perfect, as shown by the success of cyber mercenary companies such as NSO Group and others, they are robust and constantly working to improve the security of their products. The attacks used by groups such as NSO Group, Hacking Team, Candiru, and FinFisher have not tended to rely on breaking encryption. Rather, they have sought vulnerabilities in client devices. Due to sandboxing and contemporary OS security practices, this has regularly meant successfully targeting a messaging application and, subsequently, expanding a foothold on the device more generally.

For interoperability to ‘work’ properly, a number of preconditions will need to be met. As noted in Rescorla’s post, these may include checking what functions an interoperable client possesses in order to determine whether ‘standard’ or ‘enriched’ client services are available. Moreover, APIs will need to be (relatively) stable or rely on a standardized protocol to facilitate interoperability. Finally, while spam messages are annoying on messaging applications today, they may become even more commonplace where interoperability is required and service providers cannot use their current processes to filter or quality-check messages transiting their infrastructure.

What do all the aforementioned elements mean for client security?

  1. Checking for client functionality may reveal whether a targeted client possesses known vulnerabilities, either generally (following a patch update) or just to the exploit vendor (where they know of a vulnerability and are actively exploiting it). Where spam filtering is weak, exploit vendors can use spam messages for this kind of reconnaissance without the service provider, client vendor, or client application necessarily being aware of the threat activity (see the sketch after this list).
  2. When or if there is a significant need to rework how keying operates, or how identity properties that are linked to an API are handled more broadly, there is a risk that implementing updates may be delayed until the revisions have had time to be adopted by clients. While this might be great for competition vis-a-vis interoperability, it will also have the effect of signalling an oncoming change to threat actors, who may accelerate their activities to get footholds on devices, or it may warn those actors that they, too, need to update their tactics, techniques, and procedures (TTPs).
  3. As a more general point, threat actors might work to develop and propagate interoperable clients that they have already compromised; we’ve previously seen nation-state actors do so, and there’s no reason to expect this behaviour to stop in a world of interoperable clients. Alternately, threat actors might try to convince targets to move to ‘better’ clients that contain known vulnerabilities but which are developed and made available by legitimate vendors. Whereas, today, an exploit developer must target a specific messaging system and the clients that deliver that system’s messages, a future world of interoperable messaging will likely expand the range of clients that threat actors can seek to exploit.
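
To make the reconnaissance risk in the first point concrete, below is a minimal sketch, in Python, of how a capability-discovery reply could be matched against a list of known-vulnerable builds. Everything here is an assumption for illustration: the field names, client names, and version strings are invented, and no actual interoperability standard or vendor is being described.

```python
# Hypothetical capability-discovery response an interoperable client might
# return when a peer asks which 'standard' or 'enriched' features it supports.
# All field names, client names, and versions below are illustrative assumptions.

KNOWN_VULNERABLE_BUILDS = {
    # (client name, version) pairs an exploit vendor might track internally.
    ("ExampleChat", "2.14.0"),
    ("OtherMessenger", "9.3.1"),
}

def capability_response(client, version, features):
    """Build the kind of capability advertisement a client might send back."""
    return {"client": client, "version": version, "features": features}

def looks_exploitable(response):
    """An attacker's view: does this advertisement match a known-vulnerable build?"""
    return (response["client"], response["version"]) in KNOWN_VULNERABLE_BUILDS

if __name__ == "__main__":
    # A 'spam' message could trigger this exchange without the user noticing.
    reply = capability_response("ExampleChat", "2.14.0", ["text", "reactions"])
    print("Worth targeting?", looks_exploitable(reply))  # -> True
```

The design point is that the discovery step itself is benign; it is the detail of what gets advertised, and who can trigger the advertisement, that determines how useful it becomes for reconnaissance.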

One of the severe dangers and challenges facing the current internet regulation landscape is that a large number of new actors have entered the various overlapping policy fields. For a long time there haven’t been that many of us, and anyone who’s been around for 10-15 years tends to be sufficiently multidisciplinary to think about how activities in policy domain X might, or will, have consequences for domains Y and Z. The new raft of politicians and their policy advisors, in contrast, often lack this broad awareness. The result is that proposals are being advanced around the world by ostensibly well-meaning individuals and groups to address issues associated with online harms, speech, CSAM, competition, and security. However, these same parties often lack awareness of how the solutions meant to solve their favoured policy problems will affect neighbouring policy issues. And, where they are aware, they often don’t care because that’s someone else’s policy domain.

It’s good to see more people participating and more inclusive policy making processes. And seeing actual political action on many issue areas after 10 years of people debating how to move forward is exciting. But too much of that action runs counter to the thoughtful warnings and calls for caution that longer-term policy experts have been raising for over a decade.

We are almost certainly moving towards a ‘new Internet’. It remains in question, however, whether this ‘new Internet’ will see resolutions to longstanding challenges or if, instead, the rush to regulate will change the landscape by finally bringing to life the threats that long-term policy wonks have been working to forestall or prevent for much of their working lives. To date, I remain increasingly concerned that we will experience the latter rather than witness the former.


  [1] For the record, I currently remain unconvinced it is possible to implement end-to-end encryption across platforms generally.
Link

Lawful Access Was the Tip of an Already Existent Iceberg

From a National Post article, published in 2012, we get a taste of the governments’ existing surveillance capabilities and activities:

Medical

The intimate information in medical files might include: erectile dysfunction, anti-psychotic medication, HIV tests, addictions, body mass index, the times you sought help because of stress, depression or sexual trauma. Health records can include psychiatric counselling.

And it isn’t just information about the person named on the file. They contain concerns expressed about a spouse’s drinking or infidelity or drug use by their child; the times they vented about their unstable boss.

Aren’t these out of the hands of anyone other than health-care providers?

Ask Sean Bruyea. The Gulf War veteran found his health records, including psychiatric reports, had been passed around by bureaucrats and sent to a Cabinet Minister in an apparent bid to discredit the outspoken critic.

Financial

Financial records are similarly sensitive: how much you earn, how much you donate to charity, which charities you choose, bankruptcy declarations, who you owe money to.

Financial data in government hands include income tax records, pension information, child tax benefits and much more. Anyone who has received a cheque from the government for any reason or ever paid money to the government is now in a database.

Corporate and business registration, federally and provincially, also requires a lot of personal and financial information. Credit card records offer a detailed profile of spending habits. Although privately held, a court order sees them turned over.

“You can find almost anyone and learn an awful lot about them if you have their credit history,” said a former police officer who now works for a provincial government.

There are also the enormous databanks of the Financial Transactions and Reports Analysis Centre of Canada (FinTRAC), a government agency collecting and disclosing information on suspected money laundering and terrorist financing.

Banks, life insurance companies, securities dealers, accountants, casinos, real estate brokers and others who deal with cash are obligated to report the deals or attempted deals under certain circumstances.

“Behaviour is suspicious, not people,” is FinTRAC’s mantra.

Scholastic

Extensive student records exist on most Canadians, including government student loans.

Local school boards and provincial education ministries have recorded your marks, attendance, illnesses, notes from teachers to parents and notes from home to the school. Many jurisdictions are moving to creating a complete, portable account of each student that follows the person from class to class, school to school.

Like head lice in a shared toque, it never goes away.

Policing

Law-enforcement databanks allow officers anywhere to check if a person is dangerous or a fugitive. Databanks such as the Canadian Police Information Centre list criminal convictions, warrants and other important interactions with police. Also flagged are “emotionally disturbed persons” and those who are HIV-positive.

But there is, increasingly, much more to police databanks, with almost anyone who has a police encounter being entered into one.

It is hard to muster worry that a convicted killer or child molester is flagged in a police computer, but what about you being embedded there for complaining about a noisy party or reporting stolen property?

The PRIME-BC police database contains the names of more than 85% of B.C. residents, according to the B.C. Civil Liberties Association, which warns citizens could be passed up for jobs and volunteer positions because of misleading red flags. In Alberta, TALON, a new, $65-million database, is also raising concerns.

Manitoba, under Mr. Toews when he was the province’s attorney-general, was a trailblazer in recording interaction with young men to note markers of gang activity to help identify and declare them as gang members.

The Toronto-area forces have an enormous, shared combined database.

Federally, also, those convicted of certain offences are ordered to submit their DNA to the DNA databanks, perhaps the ultimate baring of your identity.

Travel

Passport Canada, an agency of Foreign Affairs Canada, keeps a large repository on citizens, including facial-recognition biometrics, those who vouched for your passport application and all trips abroad as well as visa applications.

Canada Border Services Agency keeps track of who is crossing our borders, including where you go and who arrives to visit you.

Recall that thin slip of card for customs you filled out on the airplane when returning to Canada. You wrote your name, address, travelling companions, passport number, where you went, how long you stayed and what you bought.

Those cards — its catalogue of booze and tobacco and all — are kept and can be forwarded to police or other government agencies.

Immigration

The Field Operations Support Systems, used by border and immigration agents, track all immigration-related information.

The Computer Assisted Immigration Processing System tracks every immigration application being processed by overseas offices, including family history, assessment notes, appeals status and concerns raised by citizenship staff.

Both of these large databanks are being consolidated into the Global Case Management System. The consolidation is but one example of the government’s drive of integrating data.

Transportation

Provincial ministries regulating driver’s licences hold a bevy of information, including medical information, address, photograph and its biometric information for facial recognition, driving and vehicle records.

This summer, the Insurance Corporation of British Columbia caused an uproar by offering biometric data from its database to police to help identify participants in the Stanley Cup riot. Critics blasted the potential use of data collected for one purpose for a distinctly different one.

Automatic Licence Plate Recognition (ALPR) creates another powerful tool for surveillance.

Pitched as a way of finding stolen cars and kidnapped children, the technology has appeal, but the portable devices that read hundreds of passing licence plates every minute and run them through registration databases to attach them to an owner are causing concern.

Scanned pictures can be stamped with GPS co-ordinates, date and time information and stored in a database. It can track cars coming and going from any destination.

In Britain, there have been wide complaints of police using ALPR to stop vehicles coming or going to political protests. Privacy watchdogs in B.C. uncovered that those automatically targeted by the RCMP’s ALPR included everyone who has gone to court to establish legal custody of a child, all who had a mental health problem that received police attention, and those linked to others under investigation.

Corporate information

Information collected by private corporations also has a way of making it to government.

407 ETR, the privately run electronic toll highway north of Toronto, scans licence plates so the owner can be billed. Police have accessed the data to track vehicles entering and exiting the highway, cross-referencing it and linking it to their investigations.

More widely used is hydro-electricity data. Special legislation in some provinces sees hydro data turned over to government to help identify homes with unusually high usage.

Drawing a lot of power is a marker for running a marijuana grow operation. More than one hothouse cucumber farmer, hot tub or swimming pool owner has been on the wrong end of that information.

Needless to say, that’s a lot of surveillance in a lot of sectors. The range of activities also speaks to why privacy advocates are often jacks-of-all-trades (there aren’t a lot of them, so they need to learn a little about a lot) and why there are persistent worries around ‘surveillance creep’, or the gradual expansion of state surveillance capabilities. Sure, a new program may not be all that significant on its own, but when combined with everything else, authorities can derive previously-impossible-to-realize insights into Canadians’ private lives.

And, let me tell you from experience: getting access to the personal information that is stored about you by various agencies is often an exercise in futility. Government can learn about you, but it’s often impossible to learn what government has recorded about you.

Link: Lawful Access Was the Tip of an Already Existent Iceberg

Quote

So even in the worst cases, free products don’t usually end too badly. Well, unless you’re a user, or one of the alternatives that gets crushed along the way. But everyone who funds and builds a free product usually comes out of it pretty well, especially if they don’t care what happens to their users.

Free is so prevalent in our industry not because everyone’s irresponsible, but because it works.

In other industries, this is called predatory pricing, and many forms of it are illegal because they’re so destructive to healthy businesses and the welfare of an economy. But the tech industry is far less regulated, younger, and faster-moving than most industries. We celebrate our ability to do things that are illegal or economically infeasible in other markets with productive-sounding words like “disruption”.

* Marco Arment, “Free Works”
Quote

The 27 regulators, led by France’s CNIL, gave Google three to four months to make changes to its privacy policy — or face “more contentious” action. In a statement on its website today, the CNIL said that four months on from that report Google has failed “to come into compliance” so will now face additional action.

“On 18 February, the European authorities find that Google does not give a precise answer and operational recommendations. Under these circumstances, they are determined to act and pursue their investigations,” the CNIL said in its statement (translated from French with Google Translate).

According to the statement, the European regulators intend to set up a working group, led by CNIL, to “coordinate their enforcement action” against Google — with the working group due to be established before the summer. An action plan for tackling the issue was drawn up at a meeting of the regulators late last month, and will be “submitted for validation” later this month, they added.

Policy Matters Too

Nadim Kobeissi recently wrote about Do Not Track, and effectively restated the engineering-based reasons why the proposed standard will fail. The standard, generally, would let users set their web browser to ask websites not to deposit tracking cookies on their computers. Specifically, Nadim wrote:

Do Not Track is not only ineffective: it’s dangerous, both to the users it lulls into a false belief of privacy, and towards the implementation of proper privacy engineering practice. Privacy isn’t achieved by asking those who have the power to violate your privacy to politely not do so — and thus sacrifice advertising revenue — it’s achieved by implementing client-side preventative measures. For browsers, these are available in examples such as EFF’s HTTPS Everywhere, Abine’s DoNotTrackMe, AdBlock, and so on. Those are proper measures from an engineering perspective, since they attempt to guard your privacy whether the website you’re visiting likes it or not.

He is writing as an engineer and, from that perspective, he’s not wrong. Unfortunately, as an engineer he’s entirely missing the broader implications of DNT: specifically, it lets users proactively inform a site that they do not give consent to being tracked. This proactive declaration can suddenly activate a whole host of privacy protections that are established under law; individuals don’t necessarily have to have their declarations respected for them to be legally actionable.
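
For context on the mechanics, the DNT signal itself is just an HTTP request header (DNT: 1 when the user opts out of tracking). Below is a minimal sketch, using Python’s standard http.server, of a site detecting and recording that declaration; the handler name and the logging are my own illustration rather than any real site’s implementation. The technical act is trivial, which is the point: the significance lies in the site having received an explicit, machine-readable refusal of consent that can later be pointed to.

```python
# Minimal sketch of how a site could read a browser's Do Not Track signal.
# Browsers that enable DNT send the header "DNT: 1"; everything else here
# (handler name, log wording) is illustrative, not a real site's implementation.
from http.server import BaseHTTPRequestHandler, HTTPServer

class DNTAwareHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        dnt = self.headers.get("DNT")  # "1" means the user has declined tracking
        tracking_refused = (dnt == "1")
        # Recording the declaration is what could later matter legally:
        # the site was told, in-band, that the user does not consent.
        print(f"{self.client_address[0]} sent DNT={dnt!r}; "
              f"tracking refused: {tracking_refused}")
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        body = "tracking declined\n" if tracking_refused else "no DNT preference\n"
        self.wfile.write(body.encode())

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), DNTAwareHandler).serve_forever()
```

In practice a site would persist that record alongside its analytics decisions, but even this toy handler shows how little engineering the declaration itself requires.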

Now, will most users have any clue if their preferences are being upheld? No, of course not. This is generally true of any number of laws. However, advocates, activists, academic researchers, and lawyers smelling class-action lawsuits will monitor whether websites are intentionally dismissing users’ choice to refuse being tracked. As successful regulatory and legal challenges are mounted, website owners will have to engage in a rational calculus: is the intelligence or money gained from tracking worth the potential regulatory or legal risk? If initial punishments are high enough then major players may decide that it is economically rational to abide by DNT headers, whereas smaller sites (perhaps with less to lose or less knowledge of DNT) may continue to track regardless of what a browser declares to the web server. If we’re lucky, these large players will include analytics engine providers as well as advertiser networks.

Now, does this mean that DNT will necessarily succeed? No, not at all. The process is absolutely mired in confusion and problems: advertisers are trying to water down what DNT ‘means’, and some browser manufacturers are making things harder by trying to be ‘pro-privacy’ and enabling DNT as a default setting for their browsers. Moreover, past efforts to technically express users’ privacy preferences have failed (e.g. P3P), and chances are good that DNT will fail as well. However, simply because there are technical weaknesses associated with the standard does not mean that the protocol, more broadly, will fail: what is coded into standards can facilitate subsequent legal and regulatory defences of users’ privacy, and those defences may significantly improve users’ privacy online.