Apple Continues to Face Concerns Over Release of Anti-Child Porn Tools

In early August 2021, Apple announced well-intentioned plans to combat images of child sexual abuse on iPhones, and the proposal almost immediately sparked heated criticism from technology and privacy experts worldwide. Following that criticism, Apple announced in September that it would not launch its child protection features with iOS 15 as planned, citing feedback from customers, advocacy groups, researchers, and others. Apple said it would take “additional time over the coming months to collect input and make improvements before releasing these critically important child safety features.” The company provided no further details on how it planned to collect that input or with whom it would work.

Last week, Apple shareholders went against the company’s recommendations and voted to pass a shareholder proposal entitled “Civil Rights Audit.” The proposal could result in Apple commissioning a third-party audit of how the company handles civil rights issues and its relations with disgruntled employees. It cited shareholder concerns over racial inequality, pay equity, and privacy, particularly Apple’s child safety measures and its proposed tool for scanning iPhones for child pornography.

Apple Releases Two Child Safety Features but Delays Child Pornography Detection System

Initially proposed as a multi-pronged effort to securely monitor user content for child pornography, Apple’s plan involved in-line image detection tools to protect children using Messages, safety protocols for Siri and Search, and an on-device photo monitoring solution. The last feature proved so contentious that by December, Apple had updated its Expanded Protections for Children page to remove all references to the controversial child sexual abuse material (CSAM) detection feature. However, Apple has not canceled plans to release the tool; documents outlining how the functionality may work are still available on Apple’s website.

While the child pornography detection feature has not received a new launch date, as of early December Apple had already released two of the other child-protection features announced in August. The first change, which affected Siri and Search, was largely uncontroversial: if someone searches for a topic related to the sexual exploitation of children, Apple now directs them to resources for reporting it or for getting help with an attraction to child pornography.

The second feature ties into Apple’s existing Family Sharing system and is designed to protect children from viewing inappropriate imagery. It scans incoming and outgoing pictures in Messages for “sexually explicit” material. Images that meet this description are blurred, and the child is warned about the image’s contents and told not to view it. Affected children are also given resources for getting help, and they can choose to alert someone they trust about a flagged photo; that choice is separate from the choice of whether to unblur and view the image.

Unlike the original version proposed in August, the version released in December with iOS 15.2 does not notify parents if a child decides to view a sexually explicit image. Experts such as Harvard Cyberlaw Clinic instructor Kendra Albert initially objected to that notification because it could out queer and transgender children to their parents, and the original design offered no protection against parents who are violent or abusive. Because the checks are carried out on the device, end-to-end encryption in Messages is not affected; that is one reason the revised features did not meet the same pushback from privacy advocates.
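For readers who want to picture the flow, here is a minimal sketch of the decision logic described above. It assumes a hypothetical on-device classifier and placeholder notification steps; it illustrates the published description of the feature, not Apple’s actual Messages code.

```python
# Illustrative sketch of the Messages flow described above. The explicit-image
# flag, blur step, and trusted-contact alert are placeholders; this is not
# Apple's implementation.
from dataclasses import dataclass

@dataclass
class IncomingImage:
    is_explicit: bool  # assumed output of an on-device classifier

def handle_incoming_image(image: IncomingImage,
                          child_views_anyway: bool,
                          child_alerts_trusted_contact: bool) -> list[str]:
    """Return the steps taken for a single incoming picture."""
    if not image.is_explicit:
        return ["shown normally"]
    steps = ["image blurred", "child warned about contents", "help resources offered"]
    if child_alerts_trusted_contact:
        steps.append("trusted contact alerted (child's choice)")
    if child_views_anyway:
        steps.append("image unblurred at child's request")
    # In the version shipped with iOS 15.2, no notification is sent to
    # parents regardless of the child's choices.
    return steps
```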


Behind the Delay of Apple’s New “NeuralHash” Tool

When Apple announced its new child sexual abuse material (CSAM) detection technology in August, it was not expecting the massive blowback that followed from security researchers, privacy experts, and civil rights advocates. The new tool, which Apple calls NeuralHash, is designed to identify known child pornography on a user’s device without Apple needing to possess the image or see its contents. Rather than scanning every photo and video a user uploads to iCloud on its servers, NeuralHash checks for child pornography on the user’s iPhone; Apple claims this is more privacy friendly than the practices of companies that scan all of a user’s files.

Apple searches for “already known” images of child pornography by comparing hashes, the strings of letters and numbers used to identify an image. NeuralHash computes a hash for each photo on the device and compares it against hashes provided by child protection organizations like the National Center for Missing and Exploited Children. If the tool finds 30 or more matching hashes, the flagged images are sent to Apple to be manually reviewed before the account owner is reported to law enforcement.
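As a rough illustration of that threshold logic, the sketch below counts matches against a list of known hashes. The hash function and the hash list are stand-ins (a plain SHA-256 of the file bytes rather than Apple’s perceptual NeuralHash, and placeholder entries rather than any real database); the 30-image threshold is the figure Apple has cited.

```python
# Illustrative sketch of threshold-based matching against a list of known
# image hashes. The hash function and hash list are stand-ins, not Apple's
# NeuralHash or NCMEC's actual data.
import hashlib
from pathlib import Path

MATCH_THRESHOLD = 30  # the threshold Apple has cited before any review occurs

# Hypothetical hashes supplied by a child-protection organization.
KNOWN_HASHES: set[str] = {
    "placeholder-hash-1",
    "placeholder-hash-2",
}

def compute_hash(image_path: Path) -> str:
    """Stand-in for a perceptual hash; here we simply hash the raw bytes."""
    return hashlib.sha256(image_path.read_bytes()).hexdigest()

def count_matches(image_paths: list[Path]) -> int:
    """Count how many of the user's images match a known hash."""
    return sum(1 for path in image_paths if compute_hash(path) in KNOWN_HASHES)

def exceeds_threshold(image_paths: list[Path]) -> bool:
    """Only when the match count reaches the threshold would review occur."""
    return count_matches(image_paths) >= MATCH_THRESHOLD
```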

Although Apple said the chance of a false positive was about “one in a trillion,” technology experts almost immediately pushed back on the company’s assurance that the technology was unlikely to be abused by organizations or governments with ill intentions. Because NeuralHash “already exists” in iOS 14.3 as “obfuscated code,” security researchers were able to reverse-engineer it into a Python script, published on GitHub and Reddit, so the algorithm could be tested before it was fully implemented on iOS and macOS devices.

Privacy and security researchers were alarmed at how quickly the first “hash collision” was reported: NeuralHash produced the same hash for two different images. This was particularly concerning for Apple, which has often touted its products as the most secure on the market. A collision means the hash can no longer be trusted to reliably identify an image, so an attacker could in principle craft an innocent-looking picture whose hash matches known child pornography and trigger a false positive. Some concerns have even come from Apple’s own employees, with reports of many expressing worry that the company is damaging its industry-leading privacy reputation over the new child pornography detection system. Although Apple has continued to push back against concerns from civil liberties and security experts, those alarms have so far been enough to delay the implementation of NeuralHash.
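To make the collision idea concrete, the toy example below shows two different inputs hashing to the same value. The deliberately weak hash function is illustrative only, but the failure mode researchers demonstrated against NeuralHash is analogous: distinct images mapping to one hash.

```python
# Illustrative collision check: two different inputs that map to the same
# hash value constitute a collision. The toy hash below (first byte of the
# data) collides trivially; NeuralHash hashes perceptual image features,
# but the concern is the same.

def toy_hash(data: bytes) -> int:
    """Deliberately weak stand-in hash: only looks at the first byte."""
    return data[0] if data else 0

def is_collision(a: bytes, b: bytes) -> bool:
    """True when two *different* inputs produce the same hash value."""
    return a != b and toy_hash(a) == toy_hash(b)

# Two clearly different "images" that nevertheless hash identically.
print(is_collision(b"\x2a first image bytes", b"\x2a second image bytes"))  # True
```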


Why Apple May Be Changing Its Approach to Handling Child Pornography

A group of researchers has said Apple’s plan for scanning iPhones for images of child sexual abuse relied on dangerous technology that was not only invasive but possibly ineffective. One question that has emerged is why Apple, which has often been considered to set the industry standard for privacy and security, has chosen to aggressively pursue untested technology that might create a “back door” and, with it, new security concerns. It is a far cry from five years ago, when Apple fought the F.B.I. over unlocking the iPhone of a gunman in San Bernardino, California. “Once you create that back door, it will be used by people whom you don’t want to use it,” said Eva Galperin, the cybersecurity director at the Electronic Frontier Foundation, a digital-rights group. “That is not a theoretical harm. That is a harm we’ve seen happen time and time again.”

One likely reason for Apple’s new approach to combating the spread of child pornography is political pressure. A few years ago, the National Center for Missing and Exploited Children began publishing how often technology companies reported cases of child pornography. Apple reported 265 cases to the authorities in 2020, compared with 20.3 million reported by Facebook, and the company was likely embarrassed to finish at the bottom of the pack. Members of Congress added to the pressure after a 2019 New York Times article reported that tech companies’ reports of people sharing photos and videos of child sexual abuse had doubled.


How “NeuralHash” Differs From What Other Companies Do

Many technology experts believe Apple is opening Pandora’s box by proposing to use NeuralHash to scan iOS and macOS products for child pornography. Other companies, such as Facebook, Google, and Microsoft, also scan users’ photos and videos for child pornography, but they do so only on images stored on the companies’ own servers. In contrast, Apple plans to do much of the scanning directly on people’s iPhones. In its initial proposal, Apple said it would scan only photos that users choose to upload to its iCloud storage service, but the scanning itself still takes place on the device. This would be the first technology built into a phone’s operating system that can search a person’s private data and report it to law enforcement.

The fear is that this could open the door to a far broader class of surveillance: once governments around the world know Apple has the capability, they may be enticed to force the company to analyze more of people’s personal information. Although Apple has insisted this will not occur, the company has a mixed record globally on how it handles government requests and on the extent to which privacy and civil liberties are respected. And although Apple has postponed its plan to use the new technology to scan devices for child pornography, it has only promised to delay the release, not to cancel it.

Little seems certain at this point. But concerns continue to grow.

Confer with us in good health! You may choose to confer with us by Zoom or telephone to avoid Covid risk. Please phone us at 206.826.1400 to schedule your conference.