
I, too, have worked on similar detection technology using state-of-the-art neural networks. There is no way there won't be false positives; I suspect many, many more than true positives.
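To put numbers on that intuition: the base rate matters more than the error rate. Here's a rough back-of-the-envelope sketch, with entirely hypothetical figures (no real system publishes these rates), of how a tiny per-image error rate still swamps an even rarer true-positive rate at scale:

    # All figures hypothetical, for illustration only.
    photos_scanned_per_year = 1_500_000_000_000  # ~1.5 trillion photos across a large user base
    false_positive_rate = 1e-9                   # one-in-a-billion per-image error
    true_positive_prevalence = 1e-10             # genuine matches are rarer still among ordinary users

    print(f"expected false positives: {photos_scanned_per_year * false_positive_rate:,.0f}")      # 1,500
    print(f"expected true positives:  {photos_scanned_per_year * true_positive_prevalence:,.0f}")  # 150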

It is very likely that, as a result of this, thousands of innocent people will have their most private images viewed by unaccountable strangers and will be wrongly suspected, or even tried and sentenced. This includes children, teenagers, transsexuals, parents and other groups this is allegedly supposed to protect.

The willful ignorance, and even pride, of the politicians and managers who directed and voted for these measures disgusts me to the core. They have no idea what they are doing, and if they do, they are simply evil.

It's a (in my mind entirely unconstitutional) slippery slope that can lead to further abuses of telecommunications privacy and human rights, and its chilling effect limits freedom of expression.

Devices should exclusively act in the interest of their owners.



Microsoft, Facebook, Google and Apple have scanned data stored on their servers for CSAM for over a decade already. The difference is that Apple is moving the scan on-device. Has there been any report of even a single person who has been a victim of a PhotoDNA false positive in those ten years? I'm not trying to wave away the concerns about on-device privacy, but I'd want evidence that such a significant scale of wrongful conviction is plausible as a result of Apple's change.

I can believe that a couple of false positives would inevitably occur, assuming Apple has good intentions (which is not a given), but I'm not seeing how thousands could be wrongfully prosecuted unless Apple weren't using the system as they state they will. At least in the US, I'm not seeing how a conviction can be made on the basis of a perceptual hash alone, without the actual CSAM. The courts would still need the actual evidence to prosecute people. Getting people arrested with a doctored meme that causes a hash collision would at most waste the court's time, and it would only damage the credibility of perceptual hashing systems in future cases.

Also, thousands of PhotoDNA false positives surfacing in public court cases would cause Apple's reputation to collapse. They seem confident enough that such an extreme false positive rate is impossible to go ahead with this change. And I don't see how merely moving the hashing workload to the device fundamentally changes the hashing mechanism or increases the chance of wrongful conviction over the current status quo of server-side scanning (assuming it only applies to images uploaded to iCloud, which could change, of course). The proper time to be outraged at the wrongful conviction problem was ten years ago, when the major tech companies started adopting PhotoDNA.
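For context on the mechanism: perceptual hashes like PhotoDNA are not cryptographic hashes. They map visually similar images to nearby bit strings and are compared by Hamming distance, which is also why deliberate collisions are conceivable in the first place. PhotoDNA itself is proprietary, but a minimal difference-hash ("dHash") sketch shows the general shape of the idea:

    from PIL import Image  # pip install Pillow

    def dhash(path, hash_size=8):
        # Downscale to grayscale and compare each pixel to its right neighbor;
        # the resulting 64 bits change little under resizing or recompression.
        img = Image.open(path).convert("L").resize((hash_size + 1, hash_size))
        px = list(img.getdata())
        bits = 0
        for row in range(hash_size):
            for col in range(hash_size):
                i = row * (hash_size + 1) + col
                bits = (bits << 1) | (px[i] > px[i + 1])
        return bits

    def hamming(a, b):
        return bin(a ^ b).count("1")

    # Near-duplicates (recompressed, slightly cropped) land within a small
    # Hamming distance; unrelated images usually don't -- "usually" being
    # exactly where false positives come from.
    # hamming(dhash("original.jpg"), dhash("recompressed.jpg"))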

On the other hand, if we're talking about what the CCP might do, I would completely agree.


> I'm not seeing how a conviction can be made on the basis of a perceptual hash alone without the actual CSAM

This is a good point, but it's not just about people getting wrongly convicted: even the remote possibility this system introduces of strangers viewing your personal files is disturbing. In the US, it violates the 4th Amendment protection against unreasonable search; a company being the middleman doesn't change that. Privacy is a shield of the individual, and here the presumption of innocence is discarded before any trial even happens. An extremely low false positive rate or the perceived harmlessness of the current government doesn't matter; the system's existence is inherently wrong. It's an extension of the warrantless surveillance culture modern nations are already so good at.

"It is better that ten guilty persons escape than that one innocent suffer." - https://en.wikipedia.org/wiki/Blackstone%27s_ratio

In a future with brain-computer interfaces, would you like such an algorithm to search your mind for illegal information too?

Is it still your device if it acts against you?


> thousands of innocent people will have their most private of images viewed by unaccountable strangers, will be wrongly suspected or even tried and sentenced

Apple says: "The threshold is set to provide an extremely high level of accuracy and ensures less than a one in one trillion chance per year of incorrectly flagging a given account."

What evidence do you have against that statement?

Next, flagged accounts are reviewed by humans. So, yes, there is a minuscule chance a human might see a derivative of some wrongly flagged images. But there is no reason to believe that they "will be wrongly suspected or even tried and sentenced".
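The arithmetic behind a multi-match threshold is at least easy to sketch. Assuming, hypothetically, independent per-image false matches, the chance that an innocent account accumulates enough of them to cross the threshold falls off combinatorially (Poisson approximation, all parameters made up):

    from math import exp, factorial

    def p_account_flagged(rate, n_photos, threshold):
        # P(at least `threshold` false matches among `n_photos`, each
        # false-matching independently at `rate`). Summing the upper tail
        # directly avoids catastrophic cancellation at tiny probabilities.
        lam = rate * n_photos
        return sum(exp(-lam) * lam**k / factorial(k)
                   for k in range(threshold, threshold + 50))

    # Hypothetical: 1-in-a-million per-image false-match rate, 10,000 photos
    # uploaded per year, threshold of 10 matching images.
    print(p_account_flagged(1e-6, 10_000, 10))  # ~2.7e-27

This is presumably the kind of calculation behind Apple's one-in-a-trillion figure: with a plausible per-image rate and a reasonable threshold, the account-level probability becomes astronomically small.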


> Apple says: "The threshold is set to provide an extremely high level of accuracy and ensures less than a one in one trillion chance per year of incorrectly flagging a given account."

I'd rather have evidence for that statement first, since these are just funny numbers. I couldn't find false-positive rates for PhotoDNA either. How many people have been legally affected by false positives so far, and how many had their images viewed? The thing is, how exactly the system works has to be kept secret, because otherwise it can be circumvented. So these technical numbers will be unverifiable. The outcomes, however, will be verifiable, and that might be a nice basis for a FOIA request.
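Taking the quoted claim at face value, the expected outcomes are easy to bound, which is exactly why the unverifiable input numbers matter (account count hypothetical):

    accounts = 1_000_000_000   # hypothetical ~1 billion accounts
    print(accounts * 1e-12)    # Apple's stated rate: 0.001, ~one wrong flag per 1,000 years
    # But if the real per-account rate were, say, 1e-7 (still tiny-sounding),
    # that would be ~100 wrongly flagged accounts every single year.
    print(accounts * 1e-7)     # 100.0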

But who knows, it might not matter: it's a closed-source, effectively unauditable program that will soon run on millions of devices against the interests of their owners, with no one really accountable, so false positives can be treated as 'collateral damage'.



