Apple has announced new features to combat child abuse imagery. The announcement came shortly after the Financial Times revealed the plans. In the upcoming versions of iOS, iPadOS, macOS, and watchOS, images containing scenes of child abuse will be flagged in iCloud.
- Apple’s Messages app will warn of sexually explicit content in the future
- Child abuse material is flagged in iCloud Photos
- Siri and Search will have additional tools to warn of child abuse
The Financial Times published an article on Thursday afternoon about new tools against child abuse in Apple’s operating systems. Shortly afterwards, Apple confirmed the new features in an official press release and a technical report.
From iOS 15, iPadOS 15, watchOS 8, and macOS Monterey onwards – initially only in the USA – updated devices will include additional functions intended to prevent the distribution of child abuse material and warn against it.
Warning messages for parents and guardians
The Messages app detects when sexually explicit images are sent or received. Such images are initially blurred, and to view one, users must first acknowledge a warning and confirm a dialog.
Parents or legal guardians also have the option to be notified when their child views content that Messages has flagged. According to Apple, the analysis is carried out on the device itself, without the company having access to the content.
The new function will be integrated into the new family account options in iOS 15, iPadOS 15 and macOS Monterey.
Detection in iCloud Photos
The feature likely to get the most attention, however, is the new technology Apple announced for detecting images that contain scenes of child abuse. It will be integrated into iCloud Photos and will be able to identify images registered with the NCMEC (National Center for Missing and Exploited Children).
Although the new system targets files stored in the cloud, the comparison itself takes place on the device – a point Apple has emphasized several times. Only a hash, i.e. a digital fingerprint, is used: the hashes of uploaded photos are compared against hashes of known images from the NCMEC and other organizations.
According to Apple, the hash does not change if the file is resized, colors are removed, or the image’s compression level is altered. In addition, the company cannot interpret the results of the analysis unless an account exceeds an unspecified threshold of positive matches.
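Apple’s actual system uses NeuralHash, a neural-network-based perceptual hash whose internals it has only partially disclosed. As a rough illustration of the general principle – matching images by a fingerprint that survives re-compression rather than by exact bytes – here is a toy "average hash" sketch; the grid values, threshold, and helper names are illustrative assumptions, not Apple’s method:

```python
def average_hash(pixels):
    """Toy perceptual hash of a grayscale image (2D list, values 0-255).

    Pixels brighter than the image's mean become 1-bits, darker become
    0-bits, so the hash survives mild re-compression and brightness shifts.
    """
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return tuple(1 if p > mean else 0 for p in flat)

def hamming_distance(h1, h2):
    """Number of differing bits between two hashes."""
    return sum(a != b for a, b in zip(h1, h2))

def matches(h1, h2, threshold=2):
    """Hashes within a small Hamming distance count as the same image."""
    return hamming_distance(h1, h2) <= threshold

# A 4x4 grayscale image and a copy with a mild brightness shift,
# standing in for a known image and a re-encoded upload:
original = [[200, 30, 30, 200],
            [30, 200, 200, 30],
            [30, 200, 200, 30],
            [200, 30, 30, 200]]
reencoded = [[p + 5 for p in row] for row in original]

print(matches(average_hash(original), average_hash(reencoded)))  # True
```

The key design point this illustrates is that the device never needs to upload or inspect the image content for the comparison; only compact fingerprints are matched against a list of known hashes.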
According to Apple, the system has an error rate of less than one in one trillion per year. When a potentially infringing account is identified, the flagged images are reviewed. If the match is confirmed, Apple deactivates the account and sends a report to the NCMEC. Account owners can, of course, still appeal the decision.
Even before the official announcement, cryptography experts warned that Apple’s new feature could open the door to similar systems for other purposes – for example, surveillance by authoritarian governments that bypasses the protections of end-to-end encryption.
So far, Apple has not announced when the system will be available in other regions or whether this will happen at all. How the whole thing can be reconciled with data protection regulations such as the European GDPR is also still open.
Siri joins in too
Rounding out the package are Siri and the operating system’s search function. The voice assistant will now display information about online safety, including links to resources.
As with the other features, this will initially only be offered in the United States. Again, it is not foreseeable when it will be available in other regions.