Understanding Apple’s Upcoming Child Safety Features

Published August 9, 2021, 8:49 AM

by Professor Rom Feria

Earlier this week, Apple announced that it will be introducing three (3) new features that protect against Child Sexual Abuse Material (CSAM) when iOS 15, iPadOS 15 and macOS Monterey drop in a month (or two?). The features affect Siri and Search, iMessage and iCloud Photos. You can find details of Apple’s announcement at https://apple.com/child-safety/. Before Apple got a chance to do its press release, privacy and security experts got wind of the news and released their opinions, mostly criticizing Apple for introducing a backdoor and reneging on its privacy commitments. Before I continue, let me just say that I am all for protecting children against these monsters who create, distribute and use CSAM. It is unfortunate that the Philippines is one of the countries where these monsters reside and operate.

Siri and Search

Let’s take on the first child safety feature, which affects Siri and Search. Siri and Search will intervene when a user performs a search related to CSAM — they “will explain to users that interest in this topic is harmful and problematic, and provide resources from partners to get help with this issue.” This is a welcome feature, and it does not come with any privacy issues.

Messages

The second child safety feature involves iMessage — when a child sends a sexually explicit photo, the child will be prompted with a warning before it is sent, and the parent will be informed should the child continue to send it.

When a child receives a sexually explicit photo, the photo will be blurred, the child will get a warning, and the parent will be alerted.

For parents, this is a welcome protection and provides some peace of mind knowing that there is some level of protection in place. However, this is one of the controversial features — some claim it compromises the end-to-end encryption that the Messages app, particularly iMessage, is known for.

First, this feature is OPT-IN — it only works when the child is part of a Family Sharing account and a parent has configured the restrictions and alerts. For accounts belonging to adults, this feature does not work.

Second, this is done ON DEVICE — iMessage end-to-end encryption remains intact and is not compromised. An added step determines whether the photo is sexually explicit before it is queued for encryption prior to sending. On the other end, when the photo is received, it is decrypted, and that same added step determines whether the photo is sexually explicit and then blurs it. Again, end-to-end encryption is NOT compromised.
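The ordering described above — classify before encrypting on send, decrypt before classifying on receive — can be sketched as follows. This is a conceptual illustration only: `looks_sexually_explicit` stands in for Apple’s unpublished on-device machine learning model, and the XOR “encryption” is a placeholder, not iMessage’s actual protocol.

```python
def looks_sexually_explicit(image_bytes: bytes) -> bool:
    # Stand-in for the on-device ML classifier (hypothetical).
    return b"explicit" in image_bytes

def send_photo(image_bytes: bytes, child_account: bool, parent_opted_in: bool) -> dict:
    """The explicit-content check runs BEFORE encryption, on the plaintext."""
    warned = False
    if child_account and parent_opted_in and looks_sexually_explicit(image_bytes):
        warned = True  # child is prompted; parent alerted if the child proceeds
    ciphertext = bytes(b ^ 0x42 for b in image_bytes)  # placeholder "encryption"
    return {"ciphertext": ciphertext, "warned": warned}

def receive_photo(ciphertext: bytes, child_account: bool, parent_opted_in: bool) -> dict:
    """The photo is decrypted first; the same check then decides whether to blur."""
    image_bytes = bytes(b ^ 0x42 for b in ciphertext)
    blurred = child_account and parent_opted_in and looks_sexually_explicit(image_bytes)
    return {"image": image_bytes, "blurred": blurred}
```

The point of the sketch is that the classifier only ever sees plaintext that is already on the device, so nothing about the encrypted transport changes.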

The fears expressed by some folks are that the added step — the one that determines whether the photo is sexually explicit (yes, the machine learning model) — can be tampered with or overridden by governments. What is preventing Apple from pushing an update that filters images belonging to a specific group of people and, instead of alerting parents, alerts the government? I don’t think that Apple could get away with this, though.

iCloud Photos

The third child safety feature involves iCloud Photos. Using some privacy-preserving algorithms, Apple devices will download a database of hash values — long sequences of numbers and letters — computed from the images of known CSAM as maintained by the US National Center for Missing and Exploited Children (NCMEC). From here on, each photo that is uploaded to iCloud Photos will have its own hash value computed, which will then be compared (see the documents CSAM Detection — Technical Summary (PDF) and Apple PSI System — Security Protocol and Analysis (PDF) for details) to the hash values of known CSAM in the database. If there is a match, the photo is tagged before it is uploaded to iCloud Photos. When a threshold of tagged photos is reached, Apple will be alerted to review the photos and determine whether they merit reporting to NCMEC and suspending the iCloud Photos account. Users can appeal if there is a false positive, but according to Apple’s documents, that is very, very rare — a one-in-one-trillion chance per year of incorrectly flagging an account.
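The match-then-threshold flow can be sketched in a few lines. Note the heavy simplifications: Apple’s real system uses NeuralHash (a perceptual hash that matches visually similar images) combined with a private set intersection protocol and threshold secret sharing, so Apple learns nothing until the threshold is crossed; here an ordinary SHA-256 over the raw bytes stands in for the perceptual hash, and the threshold value is an arbitrary choice, since Apple has not published the real one.

```python
import hashlib

THRESHOLD = 3  # illustrative only; Apple has not disclosed the actual threshold

def image_hash(image_bytes: bytes) -> str:
    # Stand-in for NeuralHash; a real perceptual hash also matches
    # resized or slightly edited copies of the same image.
    return hashlib.sha256(image_bytes).hexdigest()

def tag_uploads(photos: list, known_csam_hashes: set) -> dict:
    """Tag photos whose hash appears in the downloaded database, and
    report whether the tagged count has reached the review threshold."""
    tagged = [p for p in photos if image_hash(p) in known_csam_hashes]
    return {"tagged_count": len(tagged),
            "review_triggered": len(tagged) >= THRESHOLD}
```

In the real system the “tagging” happens on device inside the cryptographic protocol, so individual matches are invisible to Apple; only crossing the threshold makes the matched photos reviewable.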

Some things to note:

  1. You need to have iCloud Photos ENABLED (unfortunately, this is ON by default) for this to work. DISABLE iCloud Photos and this becomes useless.
  2. The database of hash values cannot be reverse-engineered to regenerate the CSAM images from NCMEC.
  3. Hash value matching is done LOCALLY, ON DEVICE. This is different from the object recognition done on device — whilst both use a database/model, one analyzes the actual images, whilst the other compares computed hash values.
  4. Apple does not have access to the actual photos UNTIL the threshold is reached, only then will Apple identify the images (how? see the algorithms mentioned above).
  5. Companies such as Microsoft, Google, Facebook (including Instagram) and Twitter all scan photos for CSAM when they are uploaded to their services — in a non-privacy-centric way, I guess.

Whilst the algorithms were vetted by privacy, cryptography and computer vision experts, other privacy experts are voicing their concerns about how this can be abused, even to the point of calling it a possible backdoor on an otherwise privacy-preserving Apple device. What is alarming is that the database from NCMEC could be replaced with other databases provided by other governments — for example, countries that criminalize homosexuality could have their own databases pushed to Apple devices without users knowing, until it is too late. This is probably far-fetched, but then again, it is known that Apple complies with the different laws of different countries — will Apple stick to its guns and deny other uses of this technology, as it has claimed? Personally, I am very concerned that the Philippine government will take advantage of this and use the technology to red-tag its citizens — knowing how the government already prevents Apple from protecting Philippines-based users by disallowing the use of iCloud+ Private Relay due to regulatory restrictions.

What can we do?

I agree that Apple needs to do something to protect children from predators, but I think Apple needs to do more. Either it does this differently, or it provides more transparency (yes, if only Apple’s operating systems were open source), and maybe an independent oversight board of some sort that monitors these databases and machine learning models. Until then, Apple should unbundle this from the upcoming operating system versions and go back to the drawing board.

There is an open letter at https://appleprivacyletter.com asking Apple to halt its roll-out and re-affirm its commitment to end-to-end encryption (which did not change a bit, IMHO) and privacy — as of last count, it already has more than 2,700 signatories. Add your name to the letter (requires a GitHub account as of last check) if you share these concerns and want to make yourself heard. I added my name even though I know that these features will be introduced exclusively in the US first — once the technology is there, there is no turning back.
