
Minority Report-style policing based on AI embedded in iPhones? That's a killer feature! They could call it iSWAT or something.

Let's see if fear of AI is greater than brand loyalty and convenience. An unstoppable force meets an immovable object.

Apparently Apple has a team working as AI police. These people decide who gets reported.

> While noting the 1-in-1 trillion probability of a false positive, Apple said it "manually reviews all reports made to NCMEC to ensure reporting accuracy." (https://arstechnica.com/tech-policy/2021/08/apple-explains-h...)

Don't even get me started on the 1-in-a-trillion probability; we know adversarial images can be created that will trick the system even without access to the model.



One in a trillion? I wonder what assumptions went into computing that number, and I cheer for the "courageous" person selling that bluff inside the company. Anybody in tech knows that assumptions that were valid yesterday are very likely to break today or tomorrow, and with them, whatever piece of software relied on them. Apple should know better; they get hit by zero-day exploits regularly. And that's before even accounting for the political pressures.

I can't exactly buy a new phone on a whim, but I'm not trusting my well-being to any "one in a trillion" claim in a company's marketing.


One in a trillion. How many photos are taken every day on iPhones? I can assure you that "one in a trillion" events would occur at least once a week.


1000 photos per iPhone per week sounds a bit much.

Not arguing with the general sentiment though.
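The disagreement above is just arithmetic. Here's a back-of-envelope sketch; the device count and photo volume are assumptions for illustration, not Apple's figures, and the claimed rate is applied naively per photo (Apple's actual threshold scheme works differently):

```python
# Expected "one in a trillion" hits per week, under assumed volumes.
active_iphones = 1_000_000_000      # assumption: ~1 billion active devices
photos_per_device_per_week = 100    # assumption (the parent calls 1000 too high)
p_false_positive = 1e-12            # the claimed 1-in-a-trillion rate

photos_per_week = active_iphones * photos_per_device_per_week
expected_hits = photos_per_week * p_false_positive
print(expected_hits)  # ~0.1 expected false positives per week
```

At 1000 photos per device per week the expectation rises to about one hit per week, which is roughly the grandparent's point; at 100 it's closer to one every ten weeks.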


> we know adversarial images could be created that will trick the system

It could maybe be used to perform a kind of DDoS attack on the human verification stage by increasing false positives, but I doubt a non-child-porn image would fool a human into thinking it looks like child porn just because of some carefully applied noise.

Also, in what conceivable way is this like Minority Report, i.e. three psychic humans, floating in a pond, hallucinating the future?


Cultural differences around child/baby nude photos might also come into play. What seems a cute photo to some is CP to others.


The photos to be matched will come from the police, so they will be material the police consider prosecutable.


How about a real porn image with a young-looking actress and some carefully-applied noise?


I guess that would be for the defence to bring up in court, given that the images will be supplied by the police, who will consider them to be genuine child pornography.


Didn't the system in Minority Report work absolutely amazingly, except for a few edge cases involving the super rich, who would easily get away with murder in today's society anyway? I know the movie is used as an example of something very bad, but I never got why.


They send to jail the guy who didn't kill his wife with the scissors at the beginning of the movie. Did they have any other false positives before?


But with murders they have an objective statistic: the murder rate fell to zero, which is pretty cool by itself. Most murders are not premeditated. I guess stopping people who would kill in the heat of the moment would have been enough, but they weirdly went further than that. And at the beginning of the movie the guy really was going to kill his wife, if you believe the future-predicting technology they have.


I think the point was that they had no way of knowing, and that free will is real, so it is wrong to punish someone deemed fated to commit a crime.


>Let's see if fear of AI is greater than brand loyalty and convenience

I don't think so. I think corporations and governments are getting better and better at PR, so we'll see, slowly but surely, more and more features like this.



