Repost: End-to-end encryption and child safety in the UK

Originally posted on December 9, 2020. Reposted here without edits.

On December 8, Anne Longfield, the children’s commissioner for England, published a briefing decrying the planned move by social media companies such as Facebook to end-to-end encryption for their direct messaging (“DM”) applications1.

The utterly cynical interpretation of all this is that the government plans to ram through some sort of legislation that will let both police and social media platforms monitor every DM communication in the country’s cyber-space unimpeded while screaming “think of the children!” at the top of their collective lungs. Let us try to take this one step at a time, however.

First, the survey itself from which the briefing’s figures are derived. As best I can tell, all of the commissioner’s percentages on which children receive what kind of DMs are based on the survey detailed in footnote 20 on page 9 of the briefing, which states, in part, that this was a survey of 2,000 children with percentages “weighted to produce nationally representative estimates” but “not…tested for statistical significance”. It is unclear from the footnote how the survey was conducted – via phone? online? – or how the weighting of the results worked, which potentially makes the final results less than fully accurate. As well, one puzzles at the “feel uncomfortable” language, as there may not necessarily be a sexual connotation, especially in videogaming circles. For example, in online “deathmatch” type games2, one can comparatively often encounter players who follow up a loss by hurling abuse of the vilest kind at their teammates via in-game chat – this is hardly a good thing, but isn’t exactly of the same degree as child sexual abuse, which is what the commissioner seems to be so concerned with.

But let us suppose, for a moment, that the survey’s results are correct and of unimpeachable quality.

First, one must briefly define what end-to-end encryption is in the context of DMs. The briefing itself has a helpful diagram on page 7 that depicts the following two arrangements for a scenario in which Alice sends a DM to Bob.

  • No end-to-end encryption. Alice encrypts a message and sends it to the server. The server decrypts Alice’s message, re-encrypts it and sends it to Bob. Bob then decrypts the re-encrypted message. As a consequence, anyone with access to the server can see unencrypted traffic.
    • Parenthetically, this is exactly how mobile phones work, with the call only being encrypted until it hits the phone company’s systems. Police in some jurisdictions have been able to take advantage of this by spoofing cell phone towers with Stingray-type systems, effectively mounting man-in-the-middle attacks.
  • End-to-end encryption. Alice encrypts a message and sends it to the server. Only Bob can decrypt the message – the server merely forwards it along without examining its contents.

The clear advantage of end-to-end encryption from an information security standpoint is that so long as Alice’s and Bob’s devices are not compromised, the attacker never sees the message in plaintext. Most notably, this prevents insider attacks – anyone working for the provider with access to the server that passes the messages between Alice and Bob – but it does also create something of an obstacle for any “wiretapping” efforts by law enforcement3.
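The two flows above can be sketched in a few lines of toy Python. To be clear, this is purely illustrative – a repeating-key XOR stands in for a real cipher, and the keys and messages are made up – but it shows exactly why an insider at the server can read the DM in the first model and cannot in the second:

```python
# Toy sketch (illustration only, NOT real cryptography): the two DM-routing
# models from the briefing's diagram. A repeating-key XOR stands in for a
# proper cipher; Alice, Bob and the server are as in the text above.

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy symmetric 'encryption': XOR each byte with a repeating key."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

msg = b"hello Bob"

# --- Model 1: no end-to-end encryption (hop-by-hop) ---
# Alice shares one key with the server; the server shares another with Bob.
alice_server_key = b"k1"
server_bob_key = b"k2"

wire1 = xor_cipher(msg, alice_server_key)                  # Alice -> server
plaintext_at_server = xor_cipher(wire1, alice_server_key)  # server decrypts...
wire2 = xor_cipher(plaintext_at_server, server_bob_key)    # ...re-encrypts
received = xor_cipher(wire2, server_bob_key)               # server -> Bob

assert plaintext_at_server == msg  # an insider at the server sees the DM

# --- Model 2: end-to-end encryption ---
# Only Alice and Bob hold the key; the server just forwards opaque bytes.
alice_bob_key = b"shared-secret"
wire = xor_cipher(msg, alice_bob_key)           # Alice -> server -> Bob
received_e2e = xor_cipher(wire, alice_bob_key)  # only Bob can do this step

assert received == msg and received_e2e == msg
assert wire != msg  # what the server forwards is not the plaintext
```

In the second model the server never holds a key for `wire`, so there is simply nothing for an insider – or a wiretap on the server – to read.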

To be completely fair, the tug of war between personal privacy and state interests has been going on since the very beginning of computer-based encryption. After all, the argument goes, providing citizens with strong end-to-end encryption mechanisms will allow some of them to conceal “bad” activities from the government. The counter to this, of course, is that sometimes it is the government itself that misuses its surveillance powers – hello, Mr. Nixon, hey there, Mr. Bush – and in any case, any kind of a backdoor or security weakness built into a system for the government’s sake is bound to eventually be discovered, and abused, by private individuals. There really isn’t any “right” answer here, at least not a nonpolitical one, because either alternative entails some kind of a social cost. I myself might be leery of the security services’ propensity to surveil everyone, all of the time, even illegally, if they can get away with it4, and also distrustful of security holes in general no matter how well-intentioned. Someone else, however, might point to the recent wave of criminal arrests in Britain and Continental Europe on the basis of intercepted DMs across a “private” mobile phone network – surely these are not the people one wants to hand end-to-end encryption to.

What makes one think that in this case, the government is simply out for a surveillance power grab is the “think of the children” aspect of the argumentation. Suppose somewhere a truly terrible individual does send an upsetting DM to a teenage girl – the briefing specifically singles out the 14-17 female demographic, with 16% reporting having received such messages on social media platforms from strangers in the past four weeks. Is the only right solution really to disable end-to-end encryption on the DM itself, so that the social media platform can automatically scan it for “offensive content” before forwarding it along? Might not a “report user” button instead, allowing the girl herself to flag what she finds offensive, be just as efficacious while preserving her privacy? And what of the police, if the only way in which they can prosecute a sexual abuse case is to tap the server as it is passing DMs between the perpetrator and the victim – not tapping either of their phones5 instead, not gathering evidence from the victim, not searching the perpetrator’s home computer for the very photos and videos they are sending out via DMs…

Leaving such considerations aside, how much of a role do DMs over social media play in the overall crime picture? For example, the Office for National Statistics estimates friends or relatives to be the perpetrators in something like 70% of instances, although their methodology for deriving this is based on a survey of 18-74 year olds rather than actual case data6. Thus, a significant proportion of sexual abuse has absolutely nothing to do with the problem so stressed by the commissioner in her briefing, that of strangers sending “uncomfortable” DMs to underaged users. Furthermore, Table 32 of the very same ONS report shows that for the year ended March 2019 police in England and Wales recorded 748 cases of exposure or voyeurism involving children; 5,900 instances of “sexual grooming”; and 24,640 cases of “sexual activity involving a child” that is not sexual assault or rape and can sometimes involve things like displaying sexual imagery. At least some of these, as well, do not involve DMs at all but rather occur in the real world. Are we really going to legislatively strip out end-to-end encryption from messaging platforms over, potentially, a few thousand cases per year of someone sending a “tackle out” picture to a teenaged girl against her will, in lieu of, say, adding a prominent “block user” button?

And if what we are really talking about is a case where a stranger contacts an underaged user with the goal of convincing them to a) disclose their real identity and location and b) physically meet so as to c) facilitate actual sexual abuse or rape, they are likely not starting the conversation with an offensive photograph, and the resulting DM exchange would be difficult, to put it mildly, to screen out via automated tools – and yet automated scanning is precisely the solution that Longfield is putting forward.

The point is that the more one looks at the commissioner’s specific angle, the more one starts to feel this whole exercise is a smoke screen. Given that this is the same government which, not too long ago, tried to distribute anti-extremism guidelines listing Extinction Rebellion, of all things, as an extremist group…Suffice it to say, it will be interesting, in this sense, to see how this debate unfolds if or when the “online harms” bill is finally taken up…


  1. https://www.childrenscommissioner.gov.uk/wp-content/uploads/2020/12/cco-access-denied.pdf, retrieved December 9, 2020. Specifically, the briefing argues that “the privacy of direct messaging platforms can conceal some of the most serious crimes against children”, and that end-to-end encryption “risks preventing the police from gathering the evidence they need to prosecute perpetrators of child exploitation and abuse.” In support of this, the briefing notes that according to its own survey, nine in ten children aged 8-17 are using direct messaging services on social media, with 38% saying they have received a message, picture or video that made them “feel uncomfortable”, including 10% who received such a message from a stranger. As such, the commissioner recommends that social media companies not apply end-to-end encryption to children’s accounts, and – somehow – “retain the ability to scan for child sexual abuse material”, which makes no sense unless end-to-end encryption is eliminated for everyone, not just children. All this comes in advance of an “online harms” bill that might possibly be taken up by Parliament in 2021, assuming the government in its infinite incompetence can ever sufficiently sort out Brexit and the COVID-19 pandemic to focus on turning some other aspect of British life into a complete dog’s breakfast. (Me? Bitter?! Surely, you jest…)
  2. The kind where randomly selected teams of players are pitted against one another, for example the Battlefield series of first-person shooters.
  3. Whereas with a telephone call, they could just go to the phone company – the server owner, in this example.
  4. See, for example, the whole post-9/11 NSA warrantless surveillance saga.
  5. To belabour the point, end-to-end encryption does not work if one of the “ends” is compromised.
  6. See https://www.ons.gov.uk/peoplepopulationandcommunity/crimeandjustice/articles/childsexualabuseinenglandandwales/yearendingmarch2019, as well as the tables in the accompanying dataset.