The Messages app will use on-device machine learning to warn about sensitive content, while keeping private communications unreadable by Apple.
Next, iOS and iPadOS will use new applications of cryptography to help limit the spread of CSAM online, while designing for user privacy.
https://www.apple.com/child-safety/
There is no ambiguity here. Of course they will scan images in the cloud as well, but they are explicit in saying that it is (also) on the device itself.
Apple is announcing three new ‘features’.
First one scans iMessage photos on device and warns kids and parents.
Second one is the CSAM photo hash compare in the iCloud upload feature.
Third one is the Siri search protection/warning feature.
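To make the second feature concrete, here is a deliberately simplified sketch of the idea of hashing a photo on device and checking it against a known-hash database before upload. The real system uses a perceptual hash (NeuralHash) and cryptographic private set intersection, not plain exact-match SHA-256 as below; the function and sample hashes here are purely illustrative assumptions.

```python
import hashlib

# Hypothetical blocklist of known hashes; in the real system this is a
# database of perceptual hashes provided by child-safety organizations.
KNOWN_HASHES = {hashlib.sha256(b"known-bad-image").hexdigest()}

def matches_database(image_bytes: bytes) -> bool:
    """Hash the image on device and check it against the blocklist.

    Toy sketch only: exact SHA-256 matching, unlike the announced
    design, which uses NeuralHash plus private set intersection so the
    device never learns the database contents.
    """
    return hashlib.sha256(image_bytes).hexdigest() in KNOWN_HASHES
```

The key point the thread is debating: in this design the comparison happens on the device, as part of the upload path, rather than purely server-side.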
But surely the iCloud upload feature runs on the device. And if it were only in the cloud, they wouldn't need to mention iOS or iPadOS at all.
To start, once you upload something to the cloud you realize - or at least are expected to realize - that it is under the full control of another entity.
Because of that you might not use iCloud or you might not upload everything to iCloud.
I certainly hope you didn’t get yourself all worked up without actually understanding what you’re mad at :)
It is not related to the CSAM database feature.
Read details here: https://daringfireball.net/2021/08/apple_child_safety_initia...