Apple recently announced a new suite of features to combat the spread of child sexual abuse material. Everyone agrees that child safety is important and that companies must do more to protect children online. But no technology is ever neutral — not even with the lofty goal of safeguarding children from predators.
As advocates warned, Apple's initiative could be the start of a wave of privacy violations, setting a dangerous precedent for the future of digital privacy rights. We need more transparency from Apple about how these policies will be put into practice, as well as strong guardrails, both corporate and regulatory, to prevent abuse of these new features.
The initiative is spread across three new features, two of which raised red flags for privacy advocates. The first is a tool that automatically scans photos stored on users' devices before they are uploaded to iCloud. It then compares those images against a database of hashes of known child sexual abuse images maintained by the National Center for Missing & Exploited Children, a private nonprofit organization established by Congress in 1984. If a match is found, law enforcement may be notified.
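To make the matching step concrete, here is a minimal, purely illustrative sketch of hash-based lookup, not Apple's actual implementation. Apple's system relies on NeuralHash, a perceptual hashing algorithm, combined with cryptographic threshold techniques; the plain SHA-256 digest, the KNOWN_HASHES set and the pending_uploads folder below are assumptions made solely for illustration.

```python
import hashlib
from pathlib import Path

# Hypothetical set of hashes of known abuse images (illustrative only;
# Apple's real system uses NeuralHash, a perceptual hash, plus cryptographic
# threshold techniques rather than a plain SHA-256 lookup like this).
KNOWN_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def photo_matches_database(photo_path: Path) -> bool:
    """Return True if the photo's digest appears in the known-hash set."""
    digest = hashlib.sha256(photo_path.read_bytes()).hexdigest()
    return digest in KNOWN_HASHES

# Hypothetical example: check photos queued for upload and flag any matches.
for photo in Path("pending_uploads").glob("*.jpg"):
    if photo_matches_database(photo):
        print(f"{photo.name} matches a known image and would be flagged.")
```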
The other controversial new feature will flag sexually explicit photos sent or received by minors using the Messages app, with an option for parents to receive notifications if their children view flagged photos. While that sounds like a helpful tool for parents, it also carries risks of its own.
For starters, the flagging feature could have discriminatory effects because of algorithmic bias in how it chooses which images to flag. Instagram, for example, recently faced backlash when it introduced explicit content filters, which critics argue led to discriminatory blocking or downranking of posts by creators from marginalized racial, gender and other backgrounds.
This feature could even lead to greater abuse of vulnerable children, including potentially outing LGBTQ children to homophobic families. As legal scholar and advocate Kendra Albert put it, "These 'child protection' features are going to get queer kids kicked out of their homes, beaten, or worse." The parental notification feature may also prevent children in abusive homes from sharing photographic evidence of their abuse.
The good news is that the parental notification feature will be available only for children under 13, and minor users will be informed that the notification setting is on. In addition, both the risks and the benefits of the Messages scanning feature may be limited, as users can easily use any number of other messaging apps to evade the content flagging and parental notification.
Regardless, the public deserves more transparency about how Apple plans to identify problematic images and what algorithms it plans to use for this feature.
The device scanning feature is even worse. Many tech platforms already include safety features that monitor content for child sexual abuse material, often reporting directly to the National Center for Missing & Exploited Children. But most of those mechanisms scan content that has been uploaded to servers or posted to public sites. Apple's safety tool, by contrast, would scan photos saved on users' devices. That by definition reduces users' control over who can access information stored on their devices, a change that carries a much higher risk of future abuse.
While Apple may mean well, it's not hard to imagine how this safety feature could lead to even greater privacy incursions in the future. Today, the justification is child safety. Tomorrow, it might be counterterrorism or public health or national security. Once we begin giving up our digital rights, it is hard to turn back the clock and restore the protections we once had.
Even worse, Apple's device scanning feature could open the door to abuse by other parties, including state actors. Governments around the world have long called for "backdoor" access to applications and devices — built-in access that would get around standard security measures (like encryption) and allow governments to view, manipulate or control the data and devices you own.
Governments, including illiberal and authoritarian ones, may see this preloaded device scanning feature as a tool they can use for their own ends. That could have devastating consequences, not only for our privacy but also for the privacy of people living in nondemocratic countries where rights are already severely limited. Think of the spyware capabilities that NSO Group, an Israeli company, provided to governments ostensibly to track terrorists and criminals, which some countries then used to monitor activists and journalists.
Now imagine those same capabilities hard-coded into every iPhone and Mac computer on the planet. Apple's new features aren't quite as invasive, but the NSO Group example shows how easily corporations and governments can abuse access to data and devices.
Apple has already responded to privacy advocates' concerns in an FAQ posted to its site: "We have faced demands to build and deploy government-mandated changes that degrade the privacy of users before, and have steadfastly refused those demands. We will continue to refuse them in the future."
Apple has also promised to refuse demands from governments to add other types of images to the list of images that would be flagged. This is encouraging, and we should commend Apple for its pledge. However, the fact remains that any system that collects data can suffer from vulnerabilities that lead to unwanted access, including access by authoritarian governments.
It is true that Apple has, in the past, resisted government demands for access to devices and data, including during its high-profile 2016 legal dispute with the FBI. Investigators demanded that Apple unlock an iPhone recovered from one of the perpetrators of the 2015 terrorist attack in San Bernardino, California; Apple refused. The company has also repeatedly used privacy as a market differentiator, advertising itself as the "good" tech company that preserves users' privacy, in contrast to Facebook, Google, Amazon and the rest.
Apple's new safety features may help curb the spread of child sexual abuse material and online and offline exploitation of children. However, it is just as likely that the bad actors they're trying to deter will get around these new features by eschewing Messages and iCloud in favor of other messaging and cloud storage services. What we may end up with are new software tools that are both ineffective and harmful to our privacy rights.
Child safety and privacy are key concerns for any tech company, and Apple must be accountable on both fronts. The fact that a few new features from a single company can have so much impact on fundamental rights shows once again why we need stronger privacy laws and why Congress needs to act to pass a federal privacy law as soon as possible.

CORRECTION (Aug. 12, 2021, 4:45 p.m. ET): A previous version of this article misattributed this quote: "These 'child protection' features are going to get queer kids kicked out of their homes, beaten, or worse." It is from Kendra Albert, a clinical instructor at Harvard's Cyberlaw Clinic, not Kendra Serra.