How iPhone child-safety photo scanning works - and why privacy advocates are worried

File picture: Pexels

Published Aug 20, 2021

By Tatum Hunter, Reed Albergotti

If you're confused by Apple's soon-to-come child safety features, you're not alone.

Additions to the company's messaging and photo services, announced this month, were technically complicated, and Apple had to release a separate FAQ document afterward. But the stated goal is simple: Curb the spread of child sexual abuse material (CSAM) and explicit photos of minors.

Starting with an update to iOS 15 coming this year, the Messages app will scan for explicit photos on kids' accounts, and photos uploaded to iCloud will be checked against known child sexual abuse material before the images even leave the device.

This may be a win for concerned parents and child safety advocates, but privacy and security experts are raising eyebrows: Why is Apple using fancy technology to scan content and communications on our private property, including phones and tablets?

Here, we break down what the new features entail, how they work and where things could go wrong.

- What is Apple doing?

Apple is rolling out new "child safety" features in updates to its operating systems coming this year, it said Aug. 5. This doesn't affect all iPhone users, and for the near future at least the features will be available only in the United States.

The first change is to the Messages app, which will be able to scan incoming and outgoing photo attachments on children's accounts to identify "sexually explicit" photos. If the feature is enabled and a photo is flagged as explicit, Apple will serve kids a prompt warning of the risks and ask if they really want to see or send the photo. If they are younger than 13, they'll be warned that choosing to proceed means their parents will be notified, if their parents have opted in. Children 13 and older still receive the warnings, but their parents won't be notified regardless of what they choose, Apple says.

The second update involves scanning photos for known child sexual abuse material. If someone uses Apple's iCloud storage - which lets you access the same photos across devices - to upload 30 or more images that match an image database maintained by the National Center for Missing & Exploited Children (NCMEC), Apple shuts down the corresponding iCloud account and sends a report to NCMEC.

The last feature deals with Siri, Apple's automated voice assistant, and the Search function on iOS. People using Siri or Search to look for images of abuse will instead find information on why the content is harmful, as well as resources to get help, Apple says.

- How will it work?

For a photo to be classified as known CSAM, it must appear both in NCMEC's database and in a second database maintained by a different child-safety organization in another country.

If a photo on your phone matches one in the database, your phone will attach a "safety voucher" to the photo when it uploads to iCloud. Apple says its employees won't be able to look at the photos until there have been 30 matches. At that point, the matching photos will be decrypted, and an Apple employee will review them to determine whether they are, indeed, child sexual abuse material. If they are, Apple will notify NCMEC, which will notify authorities.
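
To make the threshold idea concrete, here is a rough sketch in Swift of how match counting might work. It is not Apple's implementation: the real system relies on a perceptual "NeuralHash" and cryptographic safety vouchers, while this illustration substitutes an exact SHA-256 digest, a plain counter and an invented UploadScanner type.

```swift
import Foundation
import CryptoKit

// Illustrative only: a stand-in for on-device matching against known CSAM
// hashes, using an exact cryptographic hash instead of Apple's perceptual
// NeuralHash and a simple counter instead of safety-voucher cryptography.
struct UploadScanner {
    /// Hashes present in both source databases (NCMEC's and the second
    /// organization's), i.e. the intersection described above.
    let knownHashes: Set<String>
    /// Number of matches required before human review can occur.
    let reviewThreshold = 30
    private(set) var matchCount = 0

    /// Checks one photo before upload; returns true only once enough
    /// matches have accumulated to trigger review.
    mutating func scanBeforeUpload(_ photoData: Data) -> Bool {
        let digest = SHA256.hash(data: photoData)
        let hex = digest.map { String(format: "%02x", $0) }.joined()
        guard knownHashes.contains(hex) else { return false }
        matchCount += 1
        return matchCount >= reviewThreshold
    }
}
```

A real perceptual hash matches visually similar images rather than identical bytes, which is part of why Apple pairs the 30-match threshold with human review before anything is reported.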

As for the photo scanning in Messages, Apple says it will use artificial intelligence that can distinguish explicit images from ones that aren't. That AI will run right on your kid's phone or iPad, so messages aren't being sent elsewhere for review, Apple says.
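
Put in code terms, what Apple describes for Messages amounts to a local classification step followed by a few rules about age and parental opt-in. The Swift sketch below is a hypothetical restatement of that flow, not Apple's API: the classifier protocol, the account type and the function names are invented for illustration.

```swift
import Foundation

// Hypothetical stand-in for the on-device model; not a real Apple API.
protocol ExplicitImageClassifier {
    func isExplicit(_ imageData: Data) -> Bool
}

enum MessagePhotoAction {
    case deliverNormally
    case warn                  // risk prompt only
    case warnAndNotifyParents  // proceeding also alerts the parent account
}

struct ChildAccount {
    let age: Int
    let communicationSafetyEnabled: Bool
    let parentalNotificationsEnabled: Bool
}

func screen(_ imageData: Data,
            for account: ChildAccount,
            using classifier: ExplicitImageClassifier) -> MessagePhotoAction {
    // The classification runs locally, so the photo itself never leaves the device.
    guard account.communicationSafetyEnabled, classifier.isExplicit(imageData) else {
        return .deliverNormally
    }
    // Under 13 with parental notifications on: warn, and alert parents if the child proceeds.
    if account.age < 13 && account.parentalNotificationsEnabled {
        return .warnAndNotifyParents
    }
    // 13 and older (or notifications off): warning only; parents are never told.
    return .warn
}
```

The point mirrored in the comments is that the classification happens entirely on the device, so the photo is never uploaded for review.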

- Why are people concerned about scanning photos in Messages?

Message scanning may help prevent kids from sharing sexually explicit photos of themselves and others. But the presence of a machine-learning model scanning every incoming and outgoing photo violates Apple's claims of end-to-end encrypted messaging, according to a statement from the Electronic Frontier Foundation.

"We all deserve privacy. We deserve private thoughts. We deserve private conversations, including children and teenagers," says India McKinney, director of federal affairs at EFF.

Apple declined to comment on the EFF claim.

The new communication safety feature also assumes that the people wielding parental controls always have children's best interest at heart, which isn't necessarily true, McKinney said.

Apple confirmed that the feature stores undeletable copies of explicit photos that children under 13 send or receive, a design that opens the door to potential privacy violations.

John Callery, vice president of technology at the Trevor Project, a nonprofit organization serving LGBTQ youth, mentioned the effect this could have on minors who aren't ready to come out to their parents.

"We've advised various technology partners in the past about how they can ethically incorporate potential safety monitoring features in their products without intruding on youth's privacy - that might look like a notification that does not share any additional information such as a photo or a classification of an image or other content," Callery said.

Apple declined to comment on those particular concerns but emphasized that notifications apply only to children under 13 whose accounts have both Family Sharing and communication safety notifications turned on.

- Why are people concerned about CSAM detection?

Apple says this system is actually more private than those used by competitors such as Facebook and Google, whose systems scan images stored on company servers rather than on individuals' devices.

But security experts say doing the scanning on a person's phone is a bigger invasion of privacy than doing it on a remote server.

Apple says it's using cryptography - or techniques for passing information secretly - to identify CSAM while making sure the company can't access any other photos on its customers' devices or in the cloud.

There's also an emotional factor. Some people find it uncomfortable that a device they own - especially one as personal as a smartphone - could potentially turn them in to the authorities, even if they are law-abiding.

- How could this all go wrong?

Matt Tait, chief operating officer at Corellium and a former security researcher at the British Government Communications Headquarters, and other privacy experts say there would have been little backlash to Apple scanning for child pornography on iCloud itself, which is not end-to-end encrypted and can be subpoenaed by law enforcement. By creating an elaborate method to ensure the matching of child pornography images happens on devices, Apple's leadership seems to "have tied their shoelaces together in a very clever way," Tait said.

Any reasonable person wants to combat child sexual abuse, EFF's McKinney said. But once a method to scan on a device exists, it could be expanded in ways we're less comfortable with. What if authoritarian governments pressured Apple to scan for other types of images, such as pictures of homosexual acts or memes making fun of politicians, she asked.

Apple has put in place elaborate measures to stop abuse from happening. But part of the problem is the unknown. iPhone users don't know exactly where this is all headed, and while they might trust Apple, there is a nagging suspicion among privacy advocates and security researchers that something could go wrong.

- What does Apple say?

In a statement released a few days after it announced the new features, Apple said that if governments tried to force the company to add non-CSAM images, it would "refuse such demands," and that its system "has been designed to prevent that from happening."

Apple, though, has a history of complying with local laws in countries such as China, where it has blocked privacy-protecting "virtual private network" apps from its App Store and moved its servers inside the country at the request of the government.

The CSAM-scanning system, as Apple describes it, has safeguards in place. Apple says it doesn't have access to the original images behind the encoded entries in the combined CSAM database, so it couldn't add new types of content to scan for even if it wanted to do so.

Apple also said that introducing image scanning in Messages and before iCloud upload doesn't break its encryption or compromise its stance on privacy.

- Should I trust Apple?

As it stands, we don't have much choice. Apple has called on independent security researchers to cross-check the company's promises going forward.

But Apple has put up roadblocks that make it difficult for security researchers to inspect its devices. Researchers must first break Apple's digital locks to inspect how the phones operate in the real world.

There's no evidence that Apple will use these features for anything other than child safety. But the company has surprised us before, McKinney said.

Apple went to court in 2016 when it refused to give the FBI access to an iPhone belonging to one of the shooters in a deadly attack in San Bernardino, Calif. "We believe the contents of your iPhone are none of our business," Apple CEO Tim Cook said, refusing to "hack our own users."

Given statements like that, the new child safety measures are "out of the blue," according to McKinney. Would Apple ever reverse its position on photo scanning, too?

- What if I don't want these features?

Parents using Apple's Family Sharing must opt in to the communication safety feature, so if you're not comfortable with your kids' messages being scanned, it doesn't have to happen.

If you don't want Apple checking photos on your device, you can turn off iCloud Photos. To do this, open Settings on an iOS device, tap your name at the top of the screen, then tap iCloud. Select Photos from the list, and tap the slider to the off position.

- What impact will this have on the spread of CSAM?

That's difficult to say. Right now, NCMEC doesn't have data on how many reports from Big Tech companies lead to investigations, arrests or convictions by law enforcement, it says - so it's unclear what effect flagging photos will have on the child sexual abuse problem more broadly.

Apple said its goal is to reduce its role in the spread of CSAM.

- Is Apple legally required to scan for this?

No, but if Apple finds CSAM, it's obligated to report it.

- What do critics want?

Alex Stamos, former chief security officer at Facebook and director of the Stanford Internet Observatory, said in an interview that Apple should hold off on launching its scanning software and share the code with security researchers so they can verify Apple's claims.

Apple said there will be a beta version of the new iOS that security researchers will be able to download to examine CSAM detection before the feature is available to the public.

Meanwhile, EFF's McKinney said that if Apple is concerned about iPhone owners seeing photo content they don't want to see, the company should allow them to report messages and senders. That way, the user has control, rather than Apple and its technology.

The Washington Post
