On Apple’s “Expanded Protections for Children” – A Personal Story (areoform.wordpress.com)
961 points by areoform on Aug 6, 2021 | 522 comments


The issue here has nothing to do with children; 'children' is always the excuse. The real issue is the installation of a spyware engine.

The very existence of such a spyware engine means that each person will behave as if 'someone is looking'.

This means zero privacy, and a 'no privacy' feeling. Everyone is more 'careful' when people are around. And this has a prolonged effect on society's freedom, democracy, and the rights of the individual, because independent thinking happens only where there is true privacy.

Even the fear of falsely triggering the system, and of one's private thoughts/pictures/events being exposed to other people while the false alarm is 'sorted out', would make people behave differently. Even if they have nothing to do with any children, the fear that their private moments could become public just because some AI decides so would make each person think twice about each step.

This is a huge attack on the privacy of the individual and should not be taken lightly. The effect will also be huge. Just imagine what they will do next if they get away with this. Just imagine what other companies will do once this is accepted for Apple devices.

I think this can/should also be considered fraud, because when one bought an Apple device one was never told that spyware would be installed some day later.

Edit: I think it should be illegal/made illegal to install any spyware on any device for any excuse without a warrant. Spyware is a search of the 'home' without a warrant, to some degree, isn't it? Done through the hands of some private company under some excuse... What about the Fourth Amendment?


> Just imagine what other companies will do once this is accepted for Apple devices.

No need to imagine.

China flat out censors chat programs and discussion sites, and some words will instantly get you put on "the list".

Winnie the Pooh, for example.

Oh, you say that you're just interested in the wonderful literary works of A. A. Milne, but that's what the anti-government types always say! Prove your innocence in this kangaroo court, and you'll be let off with merely a stint in a re-education camp.


I was using Tantan last year, a Chinese dating app. I tried to send a woman I was flirting with a message like “that pic of you is sexy”, and the app warned me against using vulgar language.

I can’t say sexy. In a dating app. With a woman I matched with. What am I supposed to do? Say she’s doing a great job of maximizing her gene pool’s potential for aesthetic presentation in the context of a romantic setting?


> Say she’s doing a great job of maximizing her gene pool’s potential for aesthetic presentation in the context of a romantic setting?

If she goes for that, she's a keeper.


Actually, it's good advice from Tantan. Chinese dating culture is very different. I'm not sure "sexy" is going to be received well, especially if she uses a translator (which is common).


The woman I was flirting with was in Morocco XD


Tinder does the same thing.


s/sexy/beautiful/


Not even remotely interchangeable.

s/sexy/hot


> I can’t say sexy. In a dating app. With a woman I matched with.

Would you go into a Christian church and start preaching about Hindu gods? Some might, but nearly everyone would say it's disrespectful of a different culture.

So it seems to me like it'd be a cultural difference. In Western culture sex is prominent (America, Europe, etc) even if repressed (America). In Chinese culture... well I'm going to assume it's still very much repressed.


Seems like you're exercising a lot of social control over the couple. Why shouldn't they be allowed to privately communicate their feelings? Simply chalking up social control to "cultural differences" ignores the fact that these aren't cultural mores developed independently, but the results of a highly moralizing tyrannical government trying to assert social control for the material benefit of those at the top.


Indeed. After all, the application is supposed to be only a conduit for a private conversation between two people.

Two adults can negotiate their own communications protocol. Just because you met someone in a Catholic church, doesn't mean they aren't interested in Hindu gods or in sexuality. People are not defined by the culture they live in.

A dating app that tells you what you can or cannot say in a 1:1 conversation isn't a conduit for private communication - it's acting like a chaperone on the date. Except normally, chaperones are there to prevent sex, not police thoughts.


Might be useful to other non-natives also: https://en.wikipedia.org/wiki/Chaperone_(social)


> Why shouldn't they be allowed to privately communicate their feelings?

I agree.

> Simply chalking up social control to “cultural differences” ignores the fact that these aren’t cultural mores developed independently, but the results of a highly moralizing tyrannical government trying to assert social control

I've never been to China. But even from an outsider's point of view your statement strikes me as self-centered. What right do you have to enforce your own morals upon an entire other nation?

> a highly moralizing tyrannical government trying to assert social control for the material benefit of those at the top.

Western countries are arguably the same. How does that make the Chinese government wrong and Western countries right?

> Seems like you’re exercising a lot of social control over the couple.

To circle back to this comment; am I? The control is over the internet. Until they've met in real life, you're just a stranger on it. Strangers on the internet can be very dangerous, with very little repercussion. Governments have an obligation to protect their citizens. So you should speak kindly and respectfully. You never know what words might upset the other end.

When you meet in person then you're putting your own person at risk for the words you say instead of just an online pseudonym. When you meet in person the stakes have been raised. What does the government do to prevent the couple from talking sexy after they've met in real life? I'm sure there's a lot but once you've met in real life then either you're physically in their culture or they're physically in your culture and respect boundaries must have changed.

Can that system be abused for the people at the top? Absolutely. Is it abused for the people at the top? Well I live in America and so I see a lot of anti-China propaganda so I'd argue that it probably is. But you'd be blind to think that China's alone in that.


It depends on the context. The fact that it's a dating app means both people expect a certain privacy in their words and a certain "vulgarity", like calling each other sexy, without repercussions.

Just like a porn website doesn't shield you from porn because it's "on the internet", a dating website shouldn't shield you from intimate language.


> What right do you have to enforce your own morals upon an entire other nation?

Aren't they suggesting that the individuals involved decide what kind of language is appropriate for their conversation? I can't imagine how you'd call that "enforc[ing] your own morals" upon anyone, let alone an entire nation!


> What right do you have to enforce your own morals upon an entire other nation?

Careful with this argument. What right did the Northern States have to enforce their own morals upon the Southern States (which had seceded, and were therefore their own country)?

I'm not saying you're wrong. I'm saying this is not valid justification.


> What right did the Northern States have to enforce their own morals upon the Southern States (which had seceded, and were therefore their own country)?

That's a very good argument. But let me counter: we fought a war over it. That's what gives us the right to enforce the North's morals over the South's. Do you want to fight a war against China over our differences?

Even disregarding the war itself: that was about slavery and human rights (freedom), with strong undertones of racism. So if you want to make privacy a human-rights issue, then sure, yes. I'll even agree with you: I think privacy should be a human right! But until privacy is a human right that is recognized and enforced, then again: what right do we have to enforce our own morals upon an entire nation?

The UN's UDHR Article 12 [0] is very light on privacy (and what it even means), and the UN (IMO) has a very, very poor history of enforcing human rights. Moreover, it states:

> No one shall be subjected ... to attacks upon his honour and reputation. Everyone has the right to the protection of the law against such interference or attacks.

And I would argue that statement extends to calling someone "sexy" which might be an insult to someone with different morals than yours.

[0]: https://www.un.org/en/about-us/universal-declaration-of-huma...


The example you described is not analogous to texting a woman in a dating app.

But the point about cultural differences does stand (not that it should result in a government being able to police how two adults communicate.)


You would think any western nation would make a counter draft for freedom. But just look at these cowards...

The US has its own problems, but who should take something like the EU seriously? For what stands this union of countries? For absolutely nothing. Extremely weak, especially considering the union was advertised as standing for common values.


The common values of the European Union are there, visible for all: graft through subsidies and comfy posts for you and your political allies. Tried and proven, works beautifully, all the politicians love it!


It's always to protect children or protect from terrorists or organized crime.

And it is always pretended that the bad guys can only exist on the client side and not on the supervisor side.


There is a nonzero price to enforce norms and laws.

For instance, let's imagine a technology emerges that can render any wall in a building transparent and penetrable for a short time, without affecting the structural integrity of the building. How much easier it would make the job of law enforcement! How many crimes it would help prevent, or at least detect! Of course it will be a securely guarded technology, so that it won't fall into the hands of random strangers, malicious hackers, or burglars; only law enforcement agencies would use it, and only for legitimate purposes! Honest.

Would you endorse such a technology in your town? Mandate it for your neighborhood? Why? Are you comfortable with the idea that a number of crimes will go undetected or unprevented because this feature is not implemented? These are not rhetorical questions.


There is a complicated legal and political process because there are a lot of things which are criminal that really shouldn't be. Historically, the law has been used to come down hard on people who:

* Say bad things about the king.

* Said good things about the king.

* Are homosexual and attempting to live their life.

* Are black and attempting to live their life.

* Are foreign, regardless of activity.

* Are female and attempting to own property.

* Are local but not the right type of local.

* The actual person is OK but they're trying to help the Jews.

* Own the wrong book.

* Worship the wrong god.

And in hindsight it is generally agreed that those laws were poorly thought out. Giving the police tools to enforce 100% compliance with the law is by no means a sane thing to do. And if that is the plan, it would behove us to make sure the law is good first. Which it obviously isn't: there are gaping holes in every legal system.


[flagged]


I'll gladly wager $500 donated to the charity of the winner's choice that your prediction here:

>I predict that in 5-10 years in USA it would become obligatory to participate in "pride parades". People who avoid them would be put on "the list".

is not accurate. We can do it on Longbets.org. I'll give you the full ten year window.


Nobody is forcing people to change their gender or brainwashing children with homosexual propaganda. You should examine your biases.


Maybe you should read the news on the other side of the fence. In the UK, LGBT training is a compulsory class for children. Many Muslim families are pulling their children out of school because they're not being given the choice to withdraw their children from classes where opinions about sexuality are being presented as fact in contrary to the principles they want to raise their child with.

https://www.bbc.co.uk/news/uk-england-48294017


> In the UK, LGBT training is a compulsory class for children.

This is way overhyped. It's barely more than “gay people exist, don't bully them” most places. One lesson in PSHE class – maybe two or three, if you get lucky and your teachers decide to teach you about binary trans people too, or if sexuality is mentioned in the Equality Act lesson (which mostly focuses on disability and maternity leave).

> opinions about sexuality are being presented as fact

By this logic, we should ban other children from talking about their religions in schools; they're likely to say things like “Jesus is God” or “God isn't real”. It's unlikely a religious child would get particularly confused with a teacher exposing them to different views on morality, given how much they're already exposed to that.


Many? You sure about that? Also, "opinions about sexuality are being presented as fact in contrary to the principles they want to raise their child with". How do you think things like that help? We want a tolerant and inclusive society, thank-you. I know gay Muslims in the UK. Can you imagine what life is like for them?


What does "LGBT training" mean? What I gather is that what they are teaching kids is, in essence, "be kind to each other". Do these families that are protesting not have the same biases against gay people that the person I originally responded to has?


I’ve seen some media in the last few years that definitely qualifies as LGBT propaganda and would therefore be banned under Russian (and other countries’) laws.

The most recent I can think of is that creepy Blues Clues episode… why do prepubescent children need to know about Pride and any sexuality at all?

The reasonable middle to me is to introduce LGBT issues in sex-ed when kids are going through puberty and treat it with decorum. You can punish homo/transphobia when it occurs, not try to program it out of literal toddlers with a creepy song about families marching.


I assume you'd also be against teaching children about the existence of things like marriage prior to starting sex-ed as teenagers.


I never learned about marriage in school, I learned about it from my parents since it is a global social norm. I would teach them about false equivalence in school however.

Anyway the issue isn’t with kids learning about LGBT, it’s HOW they learn about it. I wouldn’t promote nightclubbing or pickup artistry to my kids. I do also have issues with how aspects of mainstream ‘straight culture’ are pushed to kids as well.

For example, drag is an inherently adult form of entertainment now being pushed to kids. Grown men throwing money at drag kids makes my stomach turn. Two moms or dads taking their kid to play in the park does not.


Or as someone on slashdot put it:

> Destroying the privacy of several billion people is not an adequate price to pay for capturing a dozen or even a hundred bad guys.

> Sure it did get them some. So would carpet-bombing New York City. Success alone is a worthless measure without taking cost into account.

-- http://yro.slashdot.org/comments.pl?sid=4631081&cid=45871537


Wouldn’t a better analogy be:

I run a shipping company. You want to send a package. It is illegal for me to handle and ship certain things (e.g., nuclear bombs). Further, I don’t want to handle any of those things. Furthermore, if I find those things (e.g., when a package breaks open), I am legally required to alert the authorities.

What assurance can you provide to me (the shipping company) that there is nothing illegal in your box? Suppose I ask you to attest, but then I find out later that a bunch of people have been lying on their attestation forms, which means I have been unknowingly, undesirably made party to illegal activities? Every other company solves this by simply opening everyone's package and looking. Suppose my shipping company's clever engineers invent a detector I can give to you that doesn't require me to look inside the package but can tell me with some certainty that there are no illegal things in the package. What statistical properties would the detector need to have to satisfy you that this was better than forcing every package open?
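The statistical question at the end can be made concrete with a quick base-rate calculation (a sketch; all the numbers below are hypothetical): when the thing a detector looks for is rare, even a very accurate detector produces mostly false alarms.

```python
# Base-rate sketch: how often does a "positive" from an accurate
# detector actually mean contraband? All numbers are hypothetical.

def positive_predictive_value(prevalence, sensitivity, false_positive_rate):
    """P(contraband | detector flags the package), via Bayes' rule."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * false_positive_rate
    return true_pos / (true_pos + false_pos)

# Suppose 1 in 100,000 packages is illegal, the detector catches 99%
# of those, and wrongly flags 0.1% of legal packages.
ppv = positive_predictive_value(prevalence=1e-5,
                                sensitivity=0.99,
                                false_positive_rate=1e-3)
print(f"{ppv:.2%} of flagged packages actually contain contraband")
# With these numbers, roughly 99% of flagged packages are false alarms.
```

So the statistical properties that would satisfy anyone depend heavily on the base rate of the illegal material, not just on the detector's headline accuracy.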


This isn't an analogy, it's just reality, right up to the sentence "every other company solves this by simply opening everyone's package". No shipping company opens every package today, and in the real world there are no magical package-opening robots, though there are some specialized detectors. So we already know from experiment that imperfect detection is better than forcing every package open.


It is an analogy to what Apple is doing. Analogically you are sending something to Apple to handle for you (either store or ship) and they have legal obligations and business requirements. Every other company manages those by ‘opening your package’ is analogous to server-side image scanning like PhotoDNA.


PhotoDNA is analogous to the detectors. The analog of “opening every package” would be human review of every picture.

We already have real world evidence for “what statistical properties the detector would need to have” to be better than opening every package, because the real delivery companies are literally doing this today. There’s nothing hypothetical about the question.


Perhaps the analogy could have been clearer and the hypothetical more broad. I was trying to press on the difference between the shipper doing a very specific test vs some functionally similar but unknown test being done by the shipping company (the server-side test is unknown, it could be hashes, non-hash based image comparison, or manual review). I wasn’t asking about something like the sensitivity or specificity of those tests, rather, why we would be fine with one and not the other when the difference is not whether but when, where, and what test is applied.


Bomb sniffing dogs are your “magical package opening robots”


That's a pretty good analogy, but what you do with the detector's answer is the problem. If you just say, "The detector says no, we won't accept your package", then there's no problem.

Apple could do the same, simply letting the user know that this/these images cannot be uploaded to iCloud, and then do nothing else.

The problem is that the results of these scans are pretty useless; they don't prove anything. Yet Apple knows that law enforcement and politicians will believe the system is 100% correct (they don't even understand that DNA evidence can be wrong) and will demand customer names from Apple.


On the subject of rendering walls transparent for the last 10 years or so: https://news.ycombinator.com/item?id=8914962

Or

http://www.usatoday.com/story/news/2015/01/19/police-radar-s...


I love cyberpunk because its predictions are all coming true, so I feel that I've got a fair warning.

This is stuff directly from Deus Ex (1999). Repercussions for posting the wrong memes are also mentioned in Deus Ex, and are also coming true. (The epidemic situation is not as dire as depicted there, though. At least not yet.)


Transparent walls - well, walls are transparent to many types of electromagnetic waves, which can be visualized:

https://www.sciencealert.com/wi-fi-signals-can-identify-you-...

https://www.nbcnews.com/tech/tech-news/mit-device-can-detect...


>Of course it will be a securely guarded technology, so that it won't fall to hands of random strangers, malicious hackers, or burglars; only the law enforcement agencies would use it, and only for legitimate purposes! Honest.

Here lies the problem. You can't guarantee that. Law enforcement itself has criminals; in some countries, law enforcement is even one of the largest collections of criminals. And then something like Trump happens. Do you really want to give such powers to people like Trump and his followers, not to mention more malicious ones?

There is no way that this technology is only used for good.

And on top of that, it doesn't stop crime, just changes its modus operandi.

If they can't use their own computers to handle certain data, they use hacked ones and hide it there. You know the trick where drug smugglers use tourists' suitcases to smuggle their drugs?


Of course I would campaign for it to be accessible. It would be a complete revolution in archeology, architecture, engineering, maintenance, rescue and firefighting and geology should it also work on rocks.

Not sure I see the relation to Apple laying Fully Automated AI ThinkPol groundwork though.


I love the different angle you've taken!

Let's assume that it only works with thin walls, so it has limited archaeology use. Firefighting is a very fair point.

Would you, reader, trust the firefighters to not abuse such a power by mistake? Would you trust the chance of it being abused for the chance of being rescued during a serious fire? Not an easy question.


You can put checks and balances in place. Firefighters have vehicles with ladders, but they don't drive up to houses and break-and-enter. If it can be made obvious when they're using it, and there's independent oversight, I think the trade-off is definitely worth it.


I don't mind your downvotes, as long as you honestly think about the transparent wall proposition.


Didn’t downvote, but didn’t understand context of your comment given who you replied to. Maybe that’s why downvotes?


I think the context is suitable: the arguments for increasing efforts of law enforcement, because "think of the children", accompanied with promises that only the good guys will have the new technical means, and the law enforcement won't even make honest mistakes using it, to say nothing of ever abusing it.


> And it is always pretended that the bad guys can only exist on the client side and not on the supervisor side.

This is very much the crux of it.

Back during the immediate post-9/11 era there was a huge push to restrict all kinds of civil liberties (and the birth of today's surveillance state) in the name of preventing the next terror attack, which could be "a mushroom cloud" in the words of George W. Bush.

There's a problem with this reasoning.

Osama bin Laden wasn't some random dude who got radicalized. He was a member of one of the wealthiest families in the world. He was born rich, grew up rich, attended Harvard, and had a family that rubbed shoulders with heads of state and were directly connected to the Saudi royal family.

Osama bin Laden was a member of what I've heard described as the "superclass," those who are beyond just being "merely rich" in that they possess not only vast wealth but internationally diversified wealth and powerful political connections.

He was of the social class that is behind the glass of a surveillance state, and if he wanted to avoid any possibility of surveillance himself he and other members of his class could easily afford expensive security consultants and specialized devices. They could also afford armies of attorneys to get them off any lists and out of any trouble.

If someone detonates an atomic bomb in Washington DC, it will not be some random middle class youth who got radicalized on a 'chan board or a Facebook group. It will not be some random protestor or dissident. It will be someone like Osama bin Laden. It will be someone with the money, expertise, connections, and background to find, recruit, and pay the personnel required to obtain or build a nuclear device. It will be someone with the connections to organize the logistics to smuggle it into the country and put it in position.

It will be a member of the global elite.

People with bin Laden's level of wealth and privilege are the dangerous ones. The lens of scrutiny should be aimed at them. One member of the middle class radicalized with toxic ideology might shoot up a school, but one elite radicalized with the same ideology could blow up a city or engineer a super-plague.


The risk to the existing order comes from outsiders with leadership abilities.

Osama bin Laden's $30MM (per Wikipedia) inheritance from his estranged family no doubt greased some skids, but mission-driven people with charisma come from all socioeconomic backgrounds, and some of them succeed. (FWIW he did not attend Harvard, and AFAIR never set foot in the US.)

The general pattern is that the wealthy are, on average, supportive of the existing order. They are the winners of the current game. Of course there are always rebellious children who are motivated by religion or power or fame etc, but most of them are feckless due to their upbringing. And ultimately, there are so few of them.

Leadership abilities can be found across all economic strata. Most are happy to leverage their talents into moderate economic advantage, but some are driven by "larger" causes.

I would say that the intersection of leadership abilities, belief in a "larger" cause, belief in victimization, and a willingness to harm innocent people (or to blame them for their inaction against your oppressor) ... is what leads to the risk of violence against the existing order.


Looks like you’re right about Harvard. I’ve heard that claim for years but it seems to be a confusion with another member of the (large) family and controversy about Saudi money at Harvard.

https://en.m.wikipedia.org/wiki/Osama_bin_Laden

My point is that people with money and power are far more able to execute large scale crimes, and not just terrorism. Meanwhile surveillance is experienced more and more as you move down the socioeconomic pyramid.

Apple is catching flak for this but overall their devices respect privacy more than most cheaper devices. They also cost a lot more. Your average cheapo phone or laptop comes absolutely stuffed to the gills with spyware and runs older OSes with bad security. The poorer you are, the more spyware riddled and insecure your devices probably are.


Bin Laden did take a course at Oxford and palled around with Britain's best and brightest of the young. So there is some connection with the West's elite universities.


Bin Laden certainly has been to Oxford, but I’ve not heard of any connexion with the university.¹ I wouldn’t say that those attending language schools and similar tend to interact much with students of the university. With the exception of a few cramming programmes for admissions, the ‘best and brightest’ (I assume by this you mean those sufficiently academically able to get into Oxbridge as matriculated students—I daresay the average Imperial mathmo is brighter than the average Cambridge land economist) tend not to take courses in Oxford outside the university, since they mostly use the name ‘Oxford’ to entice gullible tourists who don’t realise that these places have nothing to do with the university.

1: http://news.bbc.co.uk/1/hi/uk/1595205.stm


You are forgetting that if the alt-right movement is not itself a product of GamerGate, it got an enormous JATO boost from GamerGate. One of the darkest eras in recent American history -- the Trump administration -- happened in large part because some basement nerds were pissed that a woman wrote a God-damned video game.

Do not discount the effects of chan boards and Facebook groups so foolishly.


I lived through that idiocy and you're not wrong, but I really don't think all those trolls and Pajama Nazis would be particularly dangerous without elite backing.

Some of it may have been organic at first, but things don't stay organic for long these days. The instant there's even a whiff of a popular movement that can be exploited the propagandists are on it... especially if it's a movement that can be exploited so as to win an election or make money. The thing that made that stuff dangerous was elites and their water-carriers like Steve Bannon, Donald Trump, Milo Yiannopolis, Rupert Murdoch, etc. empowering and steering all those useful idiots.

The Pajama Nazis have a camp of doppelgängers that I've come to call Basement Bolsheviks, but they haven't had much impact since that camp of idiots doesn't seem useful to anyone with money and power (yet?).


Or that being on the client side makes someone a bad guy.


> I think this can/should also be considered as a fraud because when one bought apple device one was never told that spyware would be installed some day later.

I have a hunch that somewhere in the hundreds of pages of EULA no one ever reads there is a "You accept that Apple has the right to modify its software at any time for any reason" clause.


In a sensible world there would be a law that allows you to pick and choose which non-essential apps and services can run on the hardware you own.


Sorry, the spyware engine is considered "essential" for regulatory compliance reasons.

You have no idea the inherent power of being the "setter of definitions" in a legal context.


That's why I said sensible.


I'm pointing out that sensibility is determined by multi-person consensus which by definition will tend to exclude a lot of viewpoints.

The presence of legitimate common sense (an actual concordance of principles, life experience, understanding, and the existence of a fungible nominative signpost that will be reliably reproduced within error bars) is surprisingly difficult to hew and maintain. It morphs over time, and is exactly why these multi-national tech companies implementing things like this is so terrifyingly dangerous. If they implement the capability to do a thing, they are now the movers of the political Overton window by realizing the means. They are literally shaping the rhetorical landscape.

Example: Let's say the push for this intrusive client-side scanning came from the U.K., and the U.K. marketplace alone accounted for enough business to make it too painful NOT to do (a simplifying economic/business assumption for illustrative purposes, and just picking on the U.K. because they are $not_my_jurisdiction). In the U.S. (Third Party Doctrine aside), this feature would be a U.S. Constitutional violation (4th Amendment) if mandated by the legislature short of an Amendment, yet since it would be done out of expedience in the absence of someone telling them NOT to do it, they'd ship it by default on American phones anyway. The mere existence of the capability greatly increases the willingness of jurisdictions to use it. This means the most privacy-eroding jurisdictions are creating a race to the bottom, infecting everyone else, unless populations unambiguously refuse and say no.

I don't know why this jurisdictional backdooring isn't more obvious to people than it is. Or maybe I'm just starting to grasp how politics actually works vs. working in theory, but this really does seem to be completely screwing up how process is supposed to go. This is nothing short of a private entity taking the practical reins of power, and the political edifice coming along behind and post-facto rationalizing what the actual trailblazers are doing.

To be frank, this terrifies me even more than Congress being in charge. Congress's process rounds out most things to some level of effectual benign nature. Tech companies though? If you even start asking the important questions about higher order effects, either out the window you go, or everyone looks at you like a nut.

This is not a good way to go.


It turns out Sensible is also a word that is up for definition.


That sort of Spyware Engine has existed for a while. Every browser checks sites and downloads using Google's Safe Browsing [1]. Firefox constantly phones home to googleusercontent and AWS. Apple sends hashes of every application you run on macOS to its Akamai-hosted service -- try blocking Akamai on your firewall while still connected to the internet and the OS becomes unresponsive. OS features that scan and report home have been around for a long time.
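For illustration, the exact-match flavor of such a check (the shape that Safe Browsing lookups and app-hash checks ultimately reduce to) is tiny. This is a hedged sketch, not any vendor's actual implementation; the blocklist entry is just the well-known SHA-256 of the empty file, standing in for a "known-bad" hash:

```python
# Sketch of a client-side hash blocklist check, for illustration only.
# Real systems (Safe Browsing, Apple's app checks) add hash prefixes,
# server lookups, and caching, but the core shape is this.
import hashlib

# Hypothetical blocklist; this entry is the SHA-256 of the empty file.
BLOCKLIST = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def is_flagged(data: bytes) -> bool:
    """Exact-match check: only a byte-identical file triggers it."""
    return hashlib.sha256(data).hexdigest() in BLOCKLIST

print(is_flagged(b""))       # True: empty file is on the toy list
print(is_flagged(b"hello"))  # False: any changed byte changes the hash
```

Note the contrast with the photo case: a cryptographic hash only matches byte-identical files, which is why CSAM scanning uses perceptual hashes instead, with the false-positive risks that implies.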

I think Apple checking photos before they are uploaded to its own servers is their way around the Fourth Amendment, while using methods relatively similar to what they already use for malware.

Devil's advocacy aside, I buy the slippery slope argument, and I find these methods reprehensible and open to abuse. I agree with all of it. I think this new addition is a step down the slope. I don't use macOS, nor Windows, nor any cloud services personally. I keep an iPhone 7 around to chat with "iMessage" friends -- and this is the straw that stops me from using it even for that. I have never used iCloud for photos, even when I was using an iPhone regularly.

I used to have an Android phone, which I also never sync'd with Google's cloud/drive. Then after an update, the sync-to-cloud option was automagically turned on again and my photos started to upload. I deleted them from the cloud and swore off ever using an Android phone again.

[1]: https://support.mozilla.org/en-US/kb/how-does-phishing-and-m...

Addendum: That Mozilla support link is also interesting in terms of privacy. It's flagged as a privileged page for me so it loads Google Analytics that would otherwise be blocked by browser extensions.


It seems like the Fourth Amendment would be more applicable in Apple's case, as it's doing the searching within your own home, where searches are explicitly forbidden.


The other issue here is this:

If one idiot with a company in his hands is allowed to install a Spyware Engine to amplify his idiocy, what prevents another idiot with a company in his hands from doing the same?

And how about specially crafted 'normal' pictures sent to an individual to trigger the system? A malicious mind would have a perfect tool to take someone down / get them into trouble.


"And how about specially crafted 'normal' pictures sent to an individual to trigger the system? A malicious mind would have a perfect tool to take someone down / get them into trouble."

Exactly this was my first thought. It's sad that these days my first thought, whenever some new "system" to prevent or catch something is introduced, is "How could this system be abused to frame somebody?"


Of course, that can be the basis for blackmail, but I guess what's going to happen most frequently is a parent sending a bathroom or beach picture of a child to his/her spouse or parents, getting flagged, and... what? This is going to happen tens of thousands of times pretty soon.


What confuses me is the requirement for this capability. The technology is only used on photos that are being uploaded to iCloud.

Apple can decrypt data on iCloud, so why not just do this offline? Doesn't make much sense.

It makes me think that the technology will be expanded to encompass all offline images at some point.


Is it? The article I read said it also includes photos saved on your iPhone, even if they aren't uploaded to iCloud.


Apple have a fairly detailed page on the approach, including independent evaluations, which states that it's only for iCloud. But there's no reason why this couldn't change in the future.

https://www.apple.com/child-safety/


Taken at face value alone, that should reassure sceptics that this new scanner will only be used for good purposes.

However there is a very long list of companies not just doing things for good, but also for profit.

And when profit enters the equation, then the original mission statement tends to get distorted into something bad/worse.

Of course Apple is free to decide, but the consumer has the option to either loudly disagree or exit the Apple ecosystem altogether.


The thing is, that it's as always - anyone arguing against this system will be branded as a pedophile sympathiser.


Well, simple minded people will always try to use that rhetoric.

I see the same pattern with pro-vaccine and "anti-vax", either you support vaccines or you are labeled "anti-vax", all is black-and-white without any nuances.

That sort of discussion rarely leads to any productive outcome.


From what I read, it runs on your phone, so it's offline, but only when iCloud Photos is enabled.


The fact that they've decided to move this processing from their servers to the local device means they 100% intend to scan all of your photos and not only those that you upload to iCloud in the future.


I don't think they do. I also don't think they can decrypt iCloud as easily as you're making it out to be.

However... yeah. This is a massive breach of trust. Having any spyware agent on my device is disgusting. I can't imagine the abuse cases this sort of thing can be used for.


> I also don't think they can easily decrypt icloud as you're making it out to be.

https://www.reuters.com/article/us-apple-fbi-icloud-exclusiv...


According to some reports they’ve been doing it server side for a while:

https://nakedsecurity.sophos.com/2020/01/09/apples-scanning-...



> each person would behave as 'someone is looking'.

They (that is, we) should already. We should assume being looked at, unless we can reasonably prove it's not so. In all public places, for certain. Using all public services, for certain.

This decision erodes trust in personal handheld terminals (quaintly still named "phones"). It is great that the announcement was made publicly; I can easily imagine a similar feature being deployed tacitly in other countries and on other platforms.


A strong belief of mine is: the technology should never betray its owner. Regardless who the owner is, regardless if they are benevolent, regardless of their wishes or background.


> This means zero privacy. This means 'no privacy' feeling. Everyone is more 'careful' when people around. And this has prolonged effect on society's freedom, democracy and rights of individual because independent thinking happens when true privacy is there.

100% agree, this is called a "chilling effect", shutting down discussion and giving more power to the status quo.


I agree with you. But on the other hand Google Drive has been doing this already since forever so I'm not really sure what's changed through this.


The location of the scanning moved from their hardware to consumer hardware.


If your consumer hardware auto-updates, it's "their" hardware where "their" is the set of all software vendors with keys allowing them to auto-update anything on your computer.


> The issue here has nothing to do with children, they would always use 'children' as excuse. The issue here is installation of Spyware Engine.

I’m always surprised when people say things like this.

As if we haven’t been carrying around highly capable “Spyware Engines” this whole time.

Apple doesn’t need the cover of stamping out child porn to spy on you. They can do it just fine, if they want to, without it.


The fourth amendment applies to the government, not apple.


It seems that this happens before photos are uploaded to iCloud, or on send in iMessage. You can avoid both and still use an iPhone.


They are doing everything client side, so it seems like an apartment complex saying you are not allowed to have fires inside your apartment, so they installed a smoke detector in your apartment. Most people would agree that it's a reasonable check, because they don't plan on burning down their apartment, but yes, if they keep pressing on it can definitely become Orwellian.

> The issue here has nothing to do with children, they would always use 'children' as excuse. The issue here is installation of Spyware Engine.

I'm not sure that's entirely true. There is a reason stuff like this is debatable and is being considered by a reputable company. It is not black and white, and both sides have some valid arguments. There are absolutely atrocious monsters in this world, and Apple is in a tough spot. They are locking down their devices, and that keeps both bad and good guys out. You may think good riddance, neither should be able to get in, but there are downsides to building a completely impenetrable device (e.g. Tor is great for freedom, but also makes it a lot easier for criminals to operate). They are receiving pressure from law enforcement, and no doubt people in their company have glimpsed the horror of child abuse and want to do something about it. They wouldn't be able to sleep at night if they did nothing, but they also won't be able to sleep at night because of the incredibly slippery slope they just stepped out on.

We also don't live in a libertarian society, and I don't think most people would like the reality of living in one. On the state-security vs. personal-liberty spectrum, I think the United States is a lot closer to the libertarian end than to the CCP, though there is no spot on that spectrum where everyone will be happy. Finding the least bad spot is an incredibly difficult problem for Apple.


> They are doing everything client side

That's what they say. And we should trust them on this because...? More importantly, even if they're doing everything client-side now, why should we trust it won't change in a few years?

Also, they aren't doing everything client-side. For this measure to be useful, it has to send out the hashes, or that there were matches detected, to the company, and ultimately the law enforcement.

> it seems like an apartment complex saying you are not allowed to have fires inside your apartment so they installed a smoke detector in your apartment

This situation is not like a fire. A fire in one apartment threatens the entire building (and everyone in it), and it spreads very quickly once started. This is more like installing chemical sensors in every apartment that automatically call the police when they sniff out illegal drugs. You have nothing to fear if you're not an illegal drug user... unless the sensor has a false positive against your medication. Or the volatiles from your cooking. Or the cleaning agents. Or a jar with a mix of benign substances that someone sent you to screw with you. Or...

Perhaps people would mind less if they could trust the system to work. But anyone with even a little bit of exposure to the tech industry already knows that these systems don't work, and that the data collected is routinely abused.


> And we should trust them on this because...? More importantly, even if they're doing everything client-side now, why should we trust it won't change in a few years?

If you use an Apple phone, you have no choice but to trust them. They produce your phone and control everything on that phone. Did you trust them before? If so, why and why would you not trust them now with this feature? If you didn't trust them before and don't trust them after this feature, you probably shouldn't use one of their phones.

> This situation is not like a fire. A fire in one apartment threatens the entire building (and everyone in it), and it spreads very quickly like started. This is more like installing chemical sensor...

I would consider a sensor that detects drugs in apartments a lot more Orwellian, and I'm pretty sure that something like that would be a lot more controversial than what Apple is doing here. They are not adding a feature to catch people who use drugs, just child predators. There is a massive gap between the two, and there are zero people who consider letting child predators operate to be a good idea. I'm also pretty sure there are zero people who agree that letting apartments burn down is a good idea.

> Perhaps people would mind less if they could trust the system will work. But anyone with even little bit of exposure to tech industry already knows that these systems don't work, and data collected is routinely abused.

Yep, exactly, this changes nothing really. I hate apple products and I don't use any, but their reputation is the least bad and I trust them more than others.

There are horrible monsters in this world; we don't live in a libertarian society, most people don't want to, and a lot of people will be completely fine with accepting client-side scanning of their pictures to catch those horrible monsters out there. This issue is not black and white, right versus wrong. There are trade-offs, and it was a judgement call by Apple. Everyone has some line where they would be OK with this feature. If it helps avoid the scarring of 1,000 children per year, maybe you don't think it's worth it, but there is some line where everyone concedes. I don't really know what an acceptable number of horribly scarred children is to trade for not having one's pictures scanned on one's phone, but if they keep their word and it doesn't turn into an Orwellian nightmare, then I would be incredibly grateful for every scumbag they help take off the streets.


>> has effect that each person would behave as 'someone is looking'

I don’t think that’s true. Compare the difference in how people react to cctv vs how they react to “Surveillance Camera Man”.

This is a CCTV system; you can forget it's there. Until it incorrectly flags you, anyway.


But you don't take your clothes off in places with cctv, because you know that you don't want to be recorded doing that.

People are aware where and how they are recorded.

Phones are (well.. were) private, people take nudes, do naked video calls and all the other related stuff, because they trust that there is only one other person watching this, and they usually trust that person.

Now we're one step away from the trump-beats-cnn-gif meme getting into the hash database to flag all the Trump supporters. When the next Wikileaks happens, they can track who had the documents/photos/videos first, possibly even before they got leaked. Someone (4chan, ...) can email you photos and videos that are of something random, but crafted in a way to create a false positive (a slower form of swatting).

This is basically setting up CCTV in your bedroom, saying it's there "to protect the children", and trusting some stupid algorithm to look at you.
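The crafted-false-positive worry can be made concrete with a toy perceptual hash. This is an aHash-style sketch of my own, not Apple's NeuralHash, but it shows the underlying tension: a perceptual hash must ignore small pixel differences, and that tolerance is exactly what makes near-collisions constructible:

```python
# Toy aHash-style perceptual hash (illustration only, NOT NeuralHash):
# hash bit i is 1 iff pixel i is brighter than the image's mean.

def average_hash(pixels):
    """pixels: 8x8 grid of grayscale values (0-255) -> 64-bit int."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    h = 0
    for p in flat:
        h = (h << 1) | (1 if p > mean else 0)
    return h

# A toy "image": left half dark, right half bright.
original = [[30] * 4 + [220] * 4 for _ in range(8)]

# The same image with mild noise added: different bytes, same hash,
# because no pixel crosses the brightness threshold.
noisy = [[p + (i + j) % 5 for j, p in enumerate(row)]
         for i, row in enumerate(original)]

assert original != noisy
assert average_hash(original) == average_hash(noisy)
```

The same property cuts the other way: an attacker can perturb an innocuous image until its hash lands near a flagged one, which is the "slower form of swatting" scenario described above.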


Someone stole my aunt's laptop a while ago, but she managed to hunt the person down and get it back, only to find the bloody thing had its password reset, accounts wiped, etc.

She gave it to me to get back in working order, and I had to ask her if she was interested in pressing charges, because handing it to me might compromise the chain of custody of any investigation. She decided not to proceed with filing charges, and just wanted me to investigate and restore the bloody thing.

The person in question synced their mobile content to that laptop. I had more than enough evidence of phone use while driving, who they were associating with, where they were hanging out, what they were doing, etc. I didn't even have to try digging.

Do not underestimate the level of mindless information spillage by modern devices, nor the level of exposure guaranteed always on, officially acknowledged hooks into your device sensor or input feeds offers.


We have had to remind (older) family members at times to not send pictures of their young children/relatives if the pictures have any nudity or could possibly be considered suggestive.

Even if not illegal in any way, it’s one of those things where even an accusation can be seriously damaging. The prospect of your phone continually scanning your photos will make that even worse.


> "The very fact of existence of such Spyware Engine has effect that each person would behave as 'someone is looking'. This means zero privacy. This means 'no privacy' feeling. Everyone is more 'careful' when people around."

These are pressures which likely help keep society relatively stable. People's behaviour is guided and influenced by what other people might think of it, and the human ancestral environment up until cities(?) was small groups and families and villages where everyone knew everyone else's business. I don't think we should take it as given that changing that is unarguably for the better. Historically there was always the chance that someone would eavesdrop, or see your diary lying around, or open your mail.

> "And this has prolonged effect on society's freedom, democracy and rights of individual because independent thinking happens when true privacy is there"

Citation needed because "Please don't machine-process my photos because I get so anxious I can't think" sounds like a problem you need to deal with, not a problem for everyone else to deal with. So does "The effect will be also huge. Just imagine what they will do next if they get away with this." - always jumping to the worst possible doom scenario is something people see therapists about.

> "The issue here is installation of Spyware Engine."

This is no more than a bank having to make sure you aren't storing stolen diamonds in their bank vaults, and they've now found a way to check only the things people are about to bring into the bank to alert the bank as early as possible.


I'm glad you've nailed the crux of the problem - if this is no more than a bank making sure you aren't storing stolen diamonds in their bank vaults -- then surely the analogy to bank vaults here is 'the phone you purchased from Apple'.

But you are right - as long as they have the right to push code to your device at any time, it isn't really yours. Shame it costs so much to keep the bank's vaults in our pockets at all times.


This system does not stop you keeping whatever you want on the phone you purchased from Apple, it only affects what you try to send to iCloud, so no the analogy to bank vaults is not your phone, it is asking Apple to hold your data on their servers and people trying to send them illegal data to hold.


But changing that is only a single line-of-code change away. If someone amasses troops along your border for 'military exercises', do you just trust them not to walk over when they find no defenses there?


I'm really starting to develop a visceral hatred for Apple.

They talk sanctimoniously about privacy out of one side of their mouth and put on a family-friendly face by banning anything that offends prude sensibilities, while having no qualms about doing as much business as they possibly can in China. Now this.

They are the ultimate dystopian corporation; I find them even worse than Facebook in that regard. At least Facebook has some interest in free speech and does not operate in China.

I guess the reason I am so angry is that I like the devices and engineering marvels they can produce. But I cannot be a customer of this.


> at least Facebook has some interest in free speech and does not operate in China.

That wasn’t really their choice, and they’ve been trying to find a way back into the market since the day they were banned. They still make billions of dollars every year on ads from Chinese buyers that run globally.


It's almost comical when you go back and watch their "1984" ad


I wouldn't use Facebook as a pillar of privacy.. in any way shape or form.

Further, they are looking into doing something similar to Apple:

https://bgr.com/tech/whatsapp-refuses-to-use-facebooks-creep...


> Most cloud services — Dropbox, Google, and Microsoft to name a few — already scan user files for content that might violate their terms of service or be potentially illegal, like CSAM.


There's an expectation that what you upload to the cloud will get some scrutiny, it being on somebody else's machine.

But here they'll scan the local files on your own device, which you supposedly "own" (but not really, because Apple has already locked it up to make sure you can only install stuff they have approved and taken a cut of).


This is false. It's not scanning your local files.

Only the files you upload to iCloud Photos.


The compute is on the edge; the data is local, just marked for cloud sync. This way, you get to pay for the capital and energy costs of the scanning instead of Apple.


Am I misunderstanding that this is the literal last step for a photo destined for iCloud, and a way to keep apple from having to decrypt your private photos to scan them? With iCloud sync off, does this scan still happen?


Correct. Functionally there is no difference to what Google does with their own cloud photo service. The only difference is where the processing occurs.

Turning off cloud photo syncing will disable the scan on both Android and iOS.


It's only a matter of time until this comes to non-Apple devices as well, I think. Personally, I'm biding my time with Android phones hoping that phones where I personally control the software running on them can become usable.


Mark Zuckerberg famously asked Xi Jinping to name his daughter. Cultural taboos aside, Facebook would love LOVE LOVE to operate in China.


Facebook wanted to enter China AF, but China was smart not to allow it. Zuck has since sort of given up and become less friendly to China.


Facebook helped build the guide rails that led to genocide.. not sure why you would look to them for anything.


I don't know about you, but I just decided to switch from an iPhone back to a Nokia feature phone (where is mobile Linux when you need it?) and start backing out of Apple's ecosystem. I'm quite positive I can live without apps, just as I used to before Y2K. And, in case you're wondering, no: I have nothing to do with illegal adult imagery. [Edit #1] This article gets (at least) one thing right: when it comes to certain kinds of crime, an accusation is ALL it takes to destroy your life. [Edit #2] I used to laugh at all the "we have your password, send Bitcoin or..." spam. The first "you have a picture of a gray cat on your phone, send Bitcoin or..." message would give me a friggin' heart attack.


I switched to an iphone about a year and half ago and I’m feeling it too. My current plan is to switch to a lineageOS device. Apple has completely run off the rails in the past year, from aggressive politicking to now complete abandonment of their core values.

I didn’t even realize apple had the ability to do something like this? Can someone explain how this is even possible for them to do? This wasn’t an update was it? Just a flipped switch?


They write the OS that runs on your iPhone, so of course they have the ability to do this or anything else.


But without an update to the OS, or even an app on the OS, how are they able to make this change? I'm not in this space, but I'd love to understand the technicals here, especially the ramifications of "what else are they capable of doing without my permission?"


Ah, thanks for clarifying the question; it's a really good one, and I haven't seen the answer to it. I'm guessing it was part of a recent OS update, and the feature then gets enabled on a certain date, or remotely, or something like that.

And your final question, "What else are they capable of doing without my permission?" -- the answer is anything they want. Concretely, who knows exactly what capabilities are installed NOW that they can take advantage of?

As an example, I do remember that a looong time ago Google remotely removed apps from Android phones; I'm pretty sure Apple could do, or might even have done, the same.


Apple has the ability to do so [0]. I believe they have used it against malware in the past, but finding articles is proving difficult.

Another option for Apple is to simply revoke the code-signing certificate used to sign the app, which renders the application un-runnable.

[0]: https://www.macrumors.com/2008/08/06/apples-ability-to-deact...


The change is scheduled to ship with an OS update (iOS 15): https://www.macrumors.com/2021/08/05/apple-new-child-safety-...

Nothing technical prevents them from putting something in ring zero which does silent updates without announcing them, but that's not what's happening here.


They can force update your phone, so pretty much anything they want.

This means they could intercept even e2e-encrypted messages, such as Signal's, if they wanted.

When you control the kernel, you can do anything to anything running on that system.


That's the issue: Apple devices run an opaque blob signed by the entity developing it, which is also in possession of the source code it does not share with anyone, so users can't inspect it to see what it actually does.

For comparison in a Linux distro:

- everything is built from source on distro infrastructure; users can inspect any and all source code of everything running on their machine

- software updates are transparent and not enforced; the user can read the changelog or compare the source code of the updates

- software updates of individual packages don't usually come directly from the upstream open source developers but via a package maintainer in the distro, for each distro

- if an upstream project introduced fishy stuff like Apple is doing, it would almost certainly be noticed by either one of the package maintainers or by users, due to changes in the source or in software behavior, alerting others to do source code analysis and stopping the attempt from affecting users


Privacy was never a "core value" at Apple. Have people forgotten about PRISM? During that, Apple bent over backwards to give data to the NSA, while other companies like Yahoo sued to try to prevent it. Privacy is a marketing tactic Apple adopted when they realized it could be used against competitors like Google, whose business model is based on knowing as much as possible about their consumers.


A Linux phone would be good, but not sufficient.

Once things like what Apple is doing become normalized, the next step is making it illegal to distribute or use software that doesn't do it. So running your Linux phone will be illegal, and why would you want to do that anyway, except to look at child pornography?

Similarly, non-sanctionable digital currencies like Bitcoin will be made illegal.

Avoiding modern tech is a workaround, but it will get increasingly hard as the world becomes increasingly reliant on it. (Cash, also, will have to go away.)


Same. I’ve been using Apple products for years. The trade-offs they made were acceptable to me. That’s now over.

I’ve been looking at trying something like a Pixel but with Calyx as a first step while waiting for the PinePhone and the Libre M to mature further.


In case you hadn't heard this story: all PinePhones in a recent batch of shipments were redirected to New Zealand, regardless of their final destination [1]. Some people were suggesting that this could mean they were tampered with.

[1]https://news.ycombinator.com/item?id=28002447


I wonder if it was the cheapest Five-Eyes country to ship them all through.


> while waiting for the PinePhone and the Libre M to mature further.

This can't happen fast enough... though I can't get behind Purism/Librem; it's just too fucking expensive, and basic privacy and ownership of your device shouldn't be a privilege.


Librem 5 you mean.


What if you send a photo to an iPhone user and it gets false-positived? Aren't you also at risk?

Basically, is it worth interacting with Apple users at all if you are worried about that problem?


Don't forget hosted email providers scanning attachments. Best to stick to voice and snail mail.


Actually voicemail and calls can (and are) scanned for content as well. Best to stick with in-person conversations only, not around any technology that has a microphone.


I’m in the same boat. I loved Apple’s stand on privacy. That seems to have now been thrown out the window. I despise child abuse but this isn’t how you protect children. What are my options on Android? Any smaller phones? Everything I see there is enormous. I loved my original iPhone SE. :(


Any Android One phone is clean (aside from Google ofc). Sadly the program will probably be cancelled because many manufacturers want their own spyware installed on phones. I think Android will at some point fail.

There are the Librem and PinePhones, and they don't look bad, but don't expect many apps or much interoperability. Some people regard that as a plus.


Pixel 5 is pretty good - not too big, good battery, fingerprint scanner and 90hz screen.


Pixel 4a?


5.8 inch screen. :(

I’d like something like my 4 inch iPhone SE but I know that is impossible. How about 4.7 inch range like the new iPhone SE?

Edit: If anyone is ever reading this and looking, I went with a Pixel 2, has a 5 inch screen and a good camera. I'll see how it goes. Will be nice to not be locked into Apple anymore.


Pixel 4a and Pixel 2 are almost the same size.

https://www.phonearena.com/phones/size/Google-Pixel-4a,Googl...

Keep in mind that bezels have gotten much smaller over time.


Oh I see. Thank you, I didn’t think about that.


Yep, I'm gonna go into a dumb phone if those changes go in. Privacy was the whole reason I was in Apple's ecosystem. If they're gonna spy on me, even if it's a good reason, then there's no privacy anymore. Basically: fuck them.


Sailfish OS is mobile Linux and runs on smartphones. I haven't personally used it, but the website claims it runs on the Sony Xperia 10 II from 2020.

It costs $50, though.

https://jolla.com


Purchasing Sailfish has some geographical restrictions.

"Sailfish X is currently available in the countries of the European Union, Norway and Switzerland ("Authorized Countries") and the use of our website and services to purchase Sailfish X outside of the Authorized Countries is prohibited."[0]

https://shop.jolla.com


This is effectively just (understandable) CYA on Jolla's part, likely due to some countries respecting the insanity that is software patents.

In practice, only the act of buying the actual Sailfish OS license at https://shop.jolla.com needs to be done from the EU/supported countries. That can be done via an EU VPN, and none of the Jolla-provided services (RPM repositories needed for system updates, etc.) are geoblocked. I went to Japan with a Sailfish OS phone twice with no issues at all.

For reference:

- getting SFOS from outside of EU:

https://together.jolla.com/question/184861/ask-purchase-sail...

https://forum.sailfishos.org/t/has-jolla-abandoned-sailors-i...

- stats from the community Sailfish OS software repository showing a lot of traffic from countries outside of the officially supported area

https://openrepos.net/statistics/global


Yes, this is the main problem with SFOS.

I think it is because of the USA's allowance of trivial patents, which acts like a trade block.


I have been using Sailfish OS since its launch in 2013 and it works fine, even for regular users (if you install it for them).

I have recently installed it on the Xperia 10 II you mention, and we use a couple of Xperia X devices in the family. I used the Jolla 1 & Jolla C before, back when Jolla still manufactured their own devices.

Frankly, until some of the PinePhone distros mature, this is the only really usable Linux-distro-based independent mobile OS. And it has been available for years, yet people don't seem to be aware of it, or seem to ignore it for some reason.

And if you have any questions about Sailfish OS, fire away! :)


> Frankly, until some of the PinePhone distros mature, this is the only really usable Linux-distro-based independent mobile OS.

Did you take PureOS into account? It is the Linux OS installed on the Librem 5 phone. https://puri.sm/products/librem-5/


I know about it, but it still seems much less mature and is available only on limited hardware. Hopefully this will get addressed over time.


I've been aware of it but the fact that there's proprietary parts makes it a non-starter for me. Why leave one closed-source ecosystem to go join another? I'll get a phone I can flash with postmarketOS/Plasma Mobile/Ubuntu Touch instead.


Because Sailfish OS works now and is miles better than both iOS and Android in terms of openness and privacy?

And while I agree that the closed-source parts are stupid, I still think Sailfish OS is a good stepping stone to fully open distros and hardware once they reach sufficient maturity.


How's the Anbox support on it? I'd like to keep using the large F-Droid app ecosystem until native alternatives are as good.

Also have they resolved the kernel updating issue? All phones which ship with Android 11 or lower are unable to have their kernel updated because each device uses a modified kernel[1]. Phones which ship with Android 12 like the Pixel 6 all use the same Android Common Kernel, so it's enabled Google to guarantee the Pixel 6 will have at least 5 years of kernel and OS updates for the first time in Android's history.

[1] https://lwn.net/Articles/830979/


The 50€ for a Sailfish OS license also pays for a (proprietary) Android emulation layer (called Alien Dalvik, originally by the Myriad Group) that works really well. More here:

https://jolla.zendesk.com/hc/en-us/articles/201440787-What-A...

You can even install microG or full-blown Google apps if you really want & are fine with the implications (still less problematic having Google stuff in the emulator than on a native device IMHO).

There have actually also been some attempts to get Anbox running on Sailfish OS, as the community ports don't get the Android emulation layer; only officially supported devices do.

As for major version kernel update issues - they did not, but I am not really sure it's that problematic in the end.

So basically they base their port on the best kernel version + blob bundle from the Sony open device program at the time, then stick with it. Apparently the way updates of these low-level things work on devices originally shipping with Android is too stupid and fragile to make supportable over-the-air updates possible for Sailfish OS.

They could port the adaptation to a newer bundle from the open device program and either mandate re-flashing or support two versions based on different kernels on a single device. Neither is a very good option imho.

Still, not updating the major kernel version does not mean they don't support the devices or never rebuild the kernel. All the Xperia devices in the program are still getting OS updates, including security fixes for the kernel. They actually only recently dropped support for their Jolla 1 handset released in 2013.

It's just when you really need the latest kernel features or the most advanced Android emulation layer you might need to get the most recently supported device.


Thanks for the info! The limited supported hardware and kernel updates have me hesitant still, but I'll give it another look - it's been years since I last did.


While not as fast as I would like (+ I think keeping parts of it closed is stupid and never gave them any benefit), there is definitely continuous progress being made in Sailfish OS, not to mention a very nice community of users, app developers, translators, device porters and contributors to the open parts of Sailfish OS.

That's what kept me there for all those years. :)


One of the reasons I want the Xperia 10 II is because of the TOF sensor and the camera array. Does the Sailfish OS offer software which utilises those?


For some reason the default camera only uses the "normal" camera but the Advanced Camera application from the community repos (Open Repos) can use all three without issues.

No idea about the ToF sensor - I would guess it could be used unless it needs some weird Android blobs to function.


The only thing I'd struggle to lose is a map (Google Maps/Apple Maps). For me, it's essential if I'm going somewhere unfamiliar.


Doesn't good old https://maps.google.com or https://osm.org work on phones without apps?


I don't know, would it? It requires fairly modern browser features that I don't remember the old school phones' browsers having.


The mobile web interface of Google Maps is hideous and unusable. I had to use it from time to time, when OSM didn't have the data I needed, and I regretted every second of it.


It's by design, so you use the app.


Most online banking sites in Germany require a phone nowadays, some even one that passes Google SafetyNet for Android phones.


AFAICS, no, they don't. If you don't install the bank's app, you are usually offered a stand-alone hardware TAN generator. I currently have 4 TAN generators for different bank accounts (because everybody seems to use a different protocol).

On the positive side, a stand-alone air-gapped TAN generator feels much safer than using a (possibly back-doored) smartphone to do online banking.


For people outside Germany, here's what a TAN generator looks like: https://www.pcdirekt.de/hardware/tan-generator/


Obviously I can't speak for every country and every bank, but at least in two countries and with two banks they are trying really hard to get me off the hardware tokens and onto mobile phones.

They have done all kinds of dark patterns; for example, they update the website (which eventually becomes a mandatory update), but that requires relinquishing the hardware token, or they update their mobile app, and if you even try to log in to the new app they automatically cancel your token, etc.

When my token's battery died, they didn't want to give me a new one. They said they didn't offer the service anymore. Only after escalating N times and explaining that I needed a hardware token because their SMS-based 2FA didn't work internationally did they give me a new token. Suddenly it wasn't discontinued anymore. Now they have moved off SMS-based 2FA to some mobile app, so I suspect next time I need a token I won't be able to get it.

Make no mistake, enjoy your TAN generator while you can, because you will not be able to enjoy it for long.

In the country I live in right now, the only way to get a proof for your COVID-19 vaccination is if you "voluntarily" enroll into some phone-based authentication scheme with your government that requires a modern, non-jailbroken iOS or Android device.


Yes, exactly. Maybe you can protect yourself (a bit) by having two phones. One for browsing and communication and one to get official (banking/vaccination/etc) stuff done.


In Germany (and I'd assume standardized throughout Europe) the COVID vaccination digital certificate is just that: a digitally signed certificate (encoding your name + vaccination metadata) passed around via QR code. While you can use an app to read and present it, just having a paper copy of the QR code (plus some ID document proving that the name on the certificate matches your person) should be enough to prove your vaccination status.


There is nowhere to get a paper copy from in my country. At least not without going through insurmountable hoops. I tried.

You have to log in to a government website, and you do it either through a mobile phone registered with the government, or with a smartcard and a proprietary Java application. Pick your poison. (And they don't want to issue new smartcards either.)


I'm not in Germany, but I am currently moving banks because the only way to do online banking is to have an Android or Apple phone, either for 2FA or for mobile banking. They don't have hardware generators. I currently have an old Windows phone (unsupported) and a Pinephone (unsupported), so no online banking for me.


This is so weird to me. I'm over here in the US using a website with a regular old password, no 2FA or anything. Same thing at multiple institutions.

I've run into a few that require 2FA - either SMS or email, your choice.


There is a new EU regulation that requires certain security considerations. Sadly, all banks decided that the best way to implement those was to develop a proprietary solution for themselves and completely avoid anything even close to being standardized.


What bothers me is they all had 2FA in the form of certificates or SMS auth, and most of them moved off of that to some other solution (current bank has 3rd party login which I can do with a cert, the old bank only has an app). What's wrong with cert auth? It can't be less secure than an android app.


Some banks offer it (usually those with higher fees), but unless you count every separate Sparkasse and VR as their own Bank (for non-Germans: It’s similar-ish to a franchising system), I wouldn’t say "usually".


You mean a phone to receive a text for MFA? "Most" can't be more than that.

However there are a couple of new "Fintech" banks in the UK that are mobile app only. No website and I assume very difficult to get anyone on the phone. That seems crazy to me. Access to your bank is at the mercy of Google/Apple.


They probably require something like SmartID, which means that you need a smartphone that's not rooted - i.e. you can't even use LineageOS.


Rooted is unrelated; some apps require it, others don't. But several require their proprietary app to access the online banking website from your computer.


Rooted is related in that some apps will refuse to work, or will disable features if they detect that the phone is rooted.


Until SafetyNet is improved, you can still hide the fact that you're rooted via MagiskHide.


I wrote that, yes. But my comment was about the broader situation and not just about the apps requiring root.


Mine has it as an option, but it also supports this[1] external TAN generator. I use it since I do not understand how a single device that holds all the data needed for online banking can be considered two-factor, when it's one factor at best.

[1]https://www.mediamarkt.de/de/product/_reinersct-tanjack%C2%A...


Some banks in Germany support chipTAN which doesn't require a phone. You can find a list of banks supporting chipTAN here: https://www.kostenloser-girokonto-vergleich.de/diese-banken-...


That list is outdated. I was doubtful DKB (which you can’t even use with rooted Android devices, I had to cancel my account via customer support because they decided to cut me off with a minor update) allows chipTAN, and the latest comment is that they stopped supporting it.


I'm regularly using chipTAN with a stand-alone TAN generator (similar to [1], based on optical flicker-code) for a DKB account. No problems so far.

Deutsche bank uses PhotoTAN, which supports hardware TAN generators [2].

Some banks use different but similar TAN generators (e.g. SecurePlus [3]), and I also witnessed Postbank accounts operated with USB-based TAN generators [4] called "BestSign".

[1] https://www.voelkner.de/products/476429/REINER-SCT-tanJack-o...

[2] https://genostore.de/db/phototan-lesegeraet/92/phototan-lese...

[3] https://cb.kobilshop.com/secureplus-generator/1/secureplus-g... securePlus Generator

[4] https://www.seal-one.com/devices.en#DiVc3100KLink BestSign


You are right, and I stand corrected.


I just picked up a TomTom on eBay for $30.00; I forgot how awesome having a dedicated GPS can be. No hot phone, no worries about data connection.


The feature phones do have Google Maps, but from what I've read they all take some time to acquire a lock (could be a software issue?).


Probably hardware. GPS either requires a bootstrap starting point or it has to completely solve the equations to calculate the location (slow!).

Expensive phones triangulate via towers (fast!) and use that to bootstrap the GPS unit. Some GPS units don’t support that bootstrapping and are, thus, slower.


My 10 year old Garmin still works and I can get updated maps on it if I ever need it.


Just take a mobile device and avoid matching it to any identity, if - like many - you have needed pocket computing power since forever and since forever you have been a private individual.


That's not really possible. iPhones require an iCloud account which, even if not directly linked to you, serves as a pool of information that Apple can gather about you and work out your identity from. The second you switch on your phone, it connects to several cell towers and enables your position to be triangulated. As you move around between work, home, the shops, it's constantly informing the providers of your location. Even if you switch SIMs regularly, your device's IMEI number is static and serves as a unique fingerprint. Conveniently, spoofing IMEI numbers is illegal. Guess why.

Theoretically it might be possible to combat cell tower triangulation by having the modem only respond to one tower at a time and insert artificial latency to make it harder to work out your distance. But, another convenient fact: cell modems are all closed source, proprietary black boxes. They also have DMA (Direct Memory Access) to your phone.

Perhaps I should be wearing a tinfoil hat but these don't seem like coincidences to me. They seem like intentional measures to weaponise a tool in our everyday lives.


> iPhones require an iCloud account

Nope. I have several dev devices. None of which have an iCloud account setup.

Didn’t do anything special; just skipped it at setup.


I think a good balance between usability and privacy is to use LineageOS with MicroG (an open implementation of GApps APIs) or something like Sailfish OS.


If a Spyware Engine is allowed to be installed by some idiot with a company in his hands to amplify his idiocy, what prevents another idiot with a company in his hands from doing the same?

And how about specially crafted 'normal' pictures sent to an individual to trigger the system? A certain kind of mind would have a perfect tool to take someone down / get them into trouble.


> (where is mobile Linux when you need it?)

Here it is: https://puri.sm/products/librem-5 and https://www.pine64.org/pinephone/.


It's not (only) about the hardware; both PureOS and Plasma Mobile are nowhere near production-grade software and likely cannot even be considered beta-quality by an end user.


What is not “production-grade” about PureOS? It’s the same OS they install on their laptops and the mobile options are baked into GTK. Do you use PureOS? As far as I understand it’s just Debian, with some changes to Gnome and both of those run in production in lots of places.


The HN audience can have those phones as daily drivers (some do). I am not suggesting that they are as good as iPhones.


I see your point. There's also postmarketOS; PINE64 PinePhone is listed as one of its primary supported devices[1].

[1] https://wiki.postmarketos.org/wiki/Devices


postmarketOS mostly only supports very old devices well, plus the PinePhone, which is also based on completely outdated hardware. I wish there was something a little more recent with Linux support.


http://wiki.postmarketos.org/wiki/Devices

It also supports the Librem 5, as well as the OnePlus 6 and 6T.


I had a look at OnePlus and checked Wikipedia to find out which country it comes from, and... wow:

* benchmark results faked by software

* personal data collection

* backdoor

None of these appear on the English page, so here's a link to the French one: https://fr.wikipedia.org/wiki/OnePlus#Controverses


Yep..... I'm not endorsing them per se, I'm just saying work is going on to get them to work on pmOS (and Mobian).


Which is why I hope the Steam Deck succeeds. It'll open up many more options for Linux portables, and hopefully Linux mobile based on SteamOS will follow.


Steam Deck is nothing like a phone, size wise.


It's more like a small tablet, I presume. That's fine as long as more apps appear natively on SteamOS, and as long as it can also run on the mobile device.


Which model did you choose?

I'm looking at doing the same, and I noticed Nokia has published the source code for the 8110 (I believe KaiOS runs on top of Linux).


I'll get a small folding phone with decent battery life - anything that's dumb as a fencepost and fits inside my pocket. I was getting tired of carrying a large holster on my belt, anyway.


Mobile Linux is here : https://puri.sm/products/librem-5/


> where is mobile Linux when you need it?)

There's the Pinephone these days, but it's not ready for general consumption.


As my iPhone battery fades I'll be looking more and more at Pinephone.


I don't understand how this is supposed to do anything except generate false positives.

If someone takes a new picture of a child being abused, that won't be on the database so won't get flagged.

Are they expecting child-abusers to take photos of existing photos of children being abused?

Or are they hoping to catch abusers sending each other known photos of child abuse unencrypted? Because that's:

a: stupid (the even-remotely-smart abusers will encrypt their abuse) and

b: ripe for misuse (I, using a non-Apple device, send your Apple device a photo of abuse. You're now flagged as an abuser and your life shuts down)

As others have said, I don't think this has anything to do with child abuse, because it plain doesn't work as a method of preventing child abuse. It totally works for flagging "politically unacceptable" images, though. Maybe the CCP got fed up of all those memes?


I'm surprised how shocked people are about this new thing from Apple. Both Google and Microsoft are already doing this stuff and have been since 2014[0][1]. How come Apple is getting such a huge amount of hate for this, while nobody even seems aware that the others have already implemented such technologies?

I also hope nobody here ever gets mad about the rampant child pornography going around on platforms like Kik, because that would be incredibly hypocritical.

0: https://www.theverge.com/2014/8/5/5970141/how-google-scans-y...

1: https://nakedsecurity.sophos.com/2014/08/10/microsoft-scans-...


Yeah, you expect it from Google and MS; Apple was the one tech giant that promised to value your privacy, so now that they've turned on that, there are no mainstream options left. Hence the strong dismay.

(Honestly Apple probably doesn't really have an option -- the govt wants this, and I'm sure they can exert all kinds of leverage to get it if Apple doesn't comply.)


Unless Apple is starting to view YOUR phone as THEIR property - this is an invasion.

Android does not scan your phone. Google scans content uploaded to ITS machines - because content hosted on ITS machines is a liability.


Right and how you reduce that liability is to do the scanning client-side and reject the upload/send. Google does this with Drive uploads and WhatsApp does it on message sends.

You can't be in hot water for hosting/distributing CSAM if it never gets on your servers/network in the first place.


It has to do with the way Apple advertises being a herald of privacy.

Also your links say that Google and Microsoft scan e-mails on the cloud. Not phone storage as in Apple's case.


> Also your links say that Google and Microsoft scan e-mails on the cloud. Not phone storage as in Apple's case.

Another confirmation that Google and MS have access to the data you store in the cloud and Apple does not. It's very unfortunate that scanning this locally is viewed as a negative instead of a positive.


What confirmation? Apple already has access to your cloud data:

FBI 'persuaded Apple to halt iCloud encryption' - https://www.bbc.com/news/technology-51207744


I was talking with my roommate about this yesterday. Similarly to how in the article there was a cat picture that the NN said was guac, I wonder if we will start to see the rise of "booby-trapped images" where you might be downloading a wallpaper but, unknown to you, that image matches CSAM hashes.


Hash collisions are theoretically impossible. It's a completely different issue than a NN misclassifying an image. It reflects poorly on the author that they made this argument.


I'm sorry, but this is 100% incorrect. In fact the opposite is true.

Hash collisions are 100% impossible to avoid. You are mapping an infinite set (the set of all possible images) to a finite set (a fixed length number).

Cryptographic hashes are designed so that collisions are hard to construct at will. But this is not a cryptographic hash at all, and I wouldn't be surprised if constructing an image that matches a given hash were easy.
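The pigeonhole argument is easy to demonstrate concretely. A minimal sketch in Python (assuming nothing about Apple's actual scheme: SHA-256 is deliberately truncated to 24 bits here to stand in for a small hash space, and a birthday-style search finds two distinct inputs with the same hash after only a few thousand attempts):

```python
import hashlib
from itertools import count

def short_hash(data: bytes, bits: int = 24) -> int:
    """Truncate SHA-256 to `bits` bits to model a small hash space."""
    digest = int.from_bytes(hashlib.sha256(data).digest(), "big")
    return digest >> (256 - bits)

def find_collision(bits: int = 24):
    """Birthday search: a collision is expected after roughly 2**(bits/2) tries."""
    seen = {}
    for i in count():
        h = short_hash(str(i).encode(), bits)
        if h in seen:
            return seen[h], i, h  # two distinct inputs, shared hash
        seen[h] = i

a, b, h = find_collision()
print(f"inputs {a} and {b} share the same 24-bit hash {h:#x}")
```

A real perceptual hash has far more bits, but the underlying point stands: an infinite input space mapped to a finite output guarantees collisions exist; the only question is how hard they are to find.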


>I'm sorry, but this is 100% incorrect. In fact the opposite is true.

>Hash collisions are 100% impossible to avoid. You are mapping an infinite set (the set of all possible images) to a finite set (a fixed length number).

If you want to be needlessly pedantic I guess. But for 99.999999% of usage hash collisions don't exist in practice.

>Cryptographic hashes are designed so that collisions are hard to construct at will. But this is not a cryptographic hash at all and I wouldn't be surprised that constructing an image that matches a given hash is easy.

You got a cite to back this up? Because they claim otherwise.


> If you want to be needlessly pedantic I guess. But for 99.999999% of usage hash collisions don't exist in practice.

Oh right, that's very soothing. The very device I spent $1000 on has a 0.0001% chance of ruining my life by causing a no-knock raid due to a false positive. They should put this stuff in their ads; makes me wanna buy more Apple products.

I'd much rather sell all my current Apple devices and permanently switch to linux than do this.


There are probably a billion Apple devices out there [0]. Each of those devices has at least a couple hundred unique images on it [1].

If you're correct, and the chance of a hash collision is 0.0001% then for each hash in the database, that's 200,000 collisions.

Assuming the database has 1,000 hashes [2] and there's no overlap in collisions for any given person, that's 200 million people's lives ruined.

[0] rough guess based on: https://www.statista.com/statistics/276306/global-apple-ipho... the exact number might be off but I think the order of magnitude is about right.

[1] My gf has taken at least 3 photos every day for the last 5 years (so well over 5,000 unique photos) - 200 is very conservative.

[2] Again, very conservative. This is a collection of all known child porn images. I wouldn't be surprised if the number is actually two orders of magnitude higher.

[edit] replied to the wrong comment. Bugger. Hopefully contributed to the conversation anyway.
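For what it's worth, the back-of-envelope arithmetic above can be written out directly (taking the parent comment's 0.0001% per-image collision figure and my rough counts [0][1][2] at face value, and assuming collisions are independent - both big assumptions):

```python
devices = 1e9             # rough count of Apple devices [0]
images_per_device = 200   # conservative unique images per device [1]
p_collision = 1e-6        # the 0.0001% per-image figure being assumed
db_hashes = 1000          # conservative database size [2]

collisions_per_hash = devices * images_per_device * p_collision
total_flagged = collisions_per_hash * db_hashes
print(f"{collisions_per_hash:,.0f} collisions per database hash")
print(f"{total_flagged:,.0f} flagged images in total")
```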


Apple says:

>less than a one in one trillion chance per year of incorrectly flagging a given account

If we gave every single human being an iPhone we'd expect an incorrect flag every 160 years or so.


In all fairness, we've all seen marketing twist developer words many, many times. "less than one in one trillion" sounds like a very neat, nice number that somebody decided to fixate on.


This.

And I guess we'll find out.

The horrible thing is, of course, that this whole process will probably be automated and there'll be no recourse to a human with any power to work things out properly. If there are thousands of cases, they'll have to take to Twitter to shine some light on it. Except that in this case they're self-identifying as suspected child abusers. How many people are going to risk the negatives that go with that? Will we ever find out how many people were actually false positives?


No, it's one of the 99.999999% of times where there's 0 risk.


I'm pointing out that it is technologically possible to do so, and I do not want to bet my life that Apple's code has no bugs. I've been writing code for 20 years, no one can convince me this shit won't ruin at least one life in the next decade after its release.


If you are talking about cryptographic hashes, then yes. But perceptual hashes are designed to be similar if the images are similar. I still have to read the tech paper, but most likely it uses some sort of perceptual hashing or AI recognition.
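As a toy illustration of that property, here is a minimal "average hash" in plain Python. This is emphatically not Apple's NeuralHash - just the simplest possible perceptual hash - but it shares the defining trait: small changes to the input produce identical or nearby hashes, whereas for a cryptographic hash a one-bit change scrambles everything.

```python
def average_hash(pixels):
    """Toy perceptual hash: 1 bit per pixel, set if above the mean.

    `pixels` is a flat list of grayscale values (e.g. an 8x8 thumbnail,
    giving a 64-bit hash). Real systems (pHash, NeuralHash) are far more
    sophisticated, but share the property that similar images yield
    nearby hashes.
    """
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

img = [10, 200, 30, 220, 15, 210, 25, 230]   # toy 8-pixel "image"
noisy = [p + 3 for p in img]                  # slightly brightened copy
print(hamming(average_hash(img), average_hash(noisy)))  # prints 0
```

A slightly brightened copy hashes identically (distance 0), while an unrelated image lands far away in Hamming distance - exactly the behaviour a matching system wants, and exactly what makes deliberate near-collisions a different threat model than for cryptographic hashes.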


>If someone takes a new picture of a child being abused, that won't be on the database so won't get flagged.

Hah. That's an interesting point.

If anything you are encouraging new child porn.


It's a measure to attack the distribution and consumption of child porn, which is a driver for its production. It can also help provide evidence in unearthing child porn rings. Seems obvious.


https://en.m.wikipedia.org/wiki/Four_Horsemen_of_the_Infocal...

There's way more to it than appears at first blush. This is much more than a kiddie porn thing, it's a fundamental shift in the Overton window.


Just a quick reminder:

* Microsoft scans and deletes stuff on your drive it believes is harmful [1]. There was also some indication a while back that they would delete 'pirated content' - i.e. anything some algorithm detects you "shouldn't have" [2].

* Google Drive has been deleting 'pirated content' for quite a while based on hashes [3]. I imagine other cloud services do this as well [4] and I recommend you do not sync your important files with such a service.

This was always on the cards. Next it will be 'terrorist materials', then it will be 'harmful content' and soon simply 'DMCA content' stored locally will be reported to the police. Freedoms definitely erode if not defended.

On a related note, projects like Pine64's PinePhone are at an early stage (read: difficult to daily drive) but definitely a step in the right direction [5]. Some day in the near future you will be able to get a decent mobile experience without having to worry about spyware/adware/malware.

[1] https://news.ycombinator.com/item?id=27914752

[2] https://www.indiatoday.in/technology/news/story/windows-10-w...

[3] https://torrentfreak.com/google-drive-uses-hash-matching-det...

[4] https://www.extremetech.com/computing/179495-how-dropbox-kno...

[5] https://wiki.pine64.org/index.php/PinePhone
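Hash matching of the kind Google Drive reportedly uses [3] is trivial to sketch, which also exposes its main limitation: exact digests only catch byte-identical copies. A hypothetical Python version (the blocklist below contains just the SHA-256 of the string "test", purely for illustration):

```python
import hashlib

# Hypothetical set of known-bad SHA-256 digests; this one is sha256(b"test").
BLOCKLIST = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def is_flagged(path: str) -> bool:
    """Hash a file in chunks and look the digest up in the blocklist."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest() in BLOCKLIST
```

Re-encoding the file or flipping a single byte changes the digest and evades this check entirely - which is exactly why scanning systems move toward perceptual hashing, with all the false-positive trade-offs discussed elsewhere in this thread.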


> Google Drive have been deleting 'pirated content' for quite a while based on hashes [3].

> This week we received a tip from a reader who was unable to share a link to a screener copy of a Hollywood blockbuster. Instead of a public link, Google drive warned that sharing the file in question could violate its terms of service.

What you linked to says that GDrive prevents users from sharing pirated content, not using GDrive for storing them (or that GDrive was deleting them).


> What you linked to says that GDrive prevents users from sharing pirated content, not using GDrive for storing them (or that GDrive was deleting them).

I performed an experiment to test whether GDrive deletes pirated content - it does.


How can I reproduce your experiment?


> How can I reproduce your experiment?

Pirate a copy of some modern Disney/Pixar film and upload it to your GDrive. Within a few hours it'll likely get DMCA'd.

EDIT: There is no requirement to share a link, Google will scan your drive once it processes the video.

There are other such reported cases of DMCA violations, etc, so this isn't isolated [1] [2].

[1] https://support.google.com/accounts/thread/21194028/drive-sh...

[2] https://forum.ragezone.com/f872/thats-kinda-messed-rh-bin-10...


Just want to comment on the links:

[1]: the guy was sharing Windows ISOs.

[2]: Reqvim seems to have been sharing a modified binary (the original was created by someone else).

If you've experienced it I'm wondering if you might have uploaded it to a shared folder. But I'll keep your anecdote in mind, not too interested in reproducing it on my primary account, and not sure if enforcement would be different if I were to use a burner.

I'll also note that as mentioned by a commenter in [2], Google isn't required by DMCA to remove content on GDrive that isn't being shared, though this might be a user's breach of TOS.


> There was also some indication a while back that they would delete 'pirated content' - i.e. anything some algorithm detects you "shouldn't have" [2]

You should read the linked article more carefully. It makes an incorrect conclusion. IANAL, but if you read the EULA lines carefully, you'll see that they're saying, basically, "We might auto-update software, which may incidentally break your pirated version, and we're sorry and don't want to get sued if this happens to you".


Ah, likely the wrong article. I have tried to find one that did state this but it doesn't surface now.

I do remember very clearly that Microsoft was deleting pirated versions of Office on purpose, and this EULA basically said "oops, we told you it might happen".


I think it's only a matter of time before this sort of thing finds its way to Android phones, but I think there's a very clear line between scanning material that I'm uploading to a company vs. scanning material that is on a device that I ostensibly own.


> I think it's only a matter of time before this sort of thing finds its way to Android phones, but I think there's a very clear line between scanning material that I'm uploading to a company vs. scanning material that is on a device that I ostensibly own.

The tools already exist and the legal framework is essentially in place - every time you go through border security you essentially are told "we can search everything on your person (including your device) and there is nothing you can do about it".

It's not such a stretch that some 'national security' enforcement is put in place. Imagine having the same applied to your work place or public transport. Essentially the same that is happening with vaccine passports, where the number of places you can go is slowly eroded to the point that you can't viably go to any public place any more.

New Zealand, for example, has for many years had the ability to perform a 'digital strip search' where they can force you to unlock your device [0]. If you live within the Five Eyes you can almost guarantee this is already in effect for you or will be soon.

On the plus side, this has led to some 'hilarious' stories where TSA border security physically searched a person for Bitcoin [1].

Fun times to live in.

[0] https://www.washingtonpost.com/news/morning-mix/wp/2018/10/0...

[1] https://www.forbes.com/sites/kashmirhill/2014/03/03/why-the-...


I have a very common name too... I was an inch away from being denied a loan for my house because there was this guy with my same name, surname and date of birth who was in the CRIF (a sort of registry for people with very low credit scores here in Italy).

I know the feeling.

And while we all agree that child porn should be stopped and people should go to jail for it, apparently "what happens on your iPhone stays on your iPhone" only until Apple says so.

https://twitter.com/chrisvelazco/status/1081330848262062080


I have an unusual, hard to spell surname. One time when I lived in the UK I found it hard to open a new bank account.

After a bit of investigating I found out that I had been placed on the Credit Industry Fraud Avoidance System because different spellings of my name had been used in previous applications. This was seen as an attempt to get past credit checks; in reality, the people entering my name into their systems mis-spelled/mis-typed it. As a result of someone's carelessness, I was flagged as risky.


Had the same problem in France. It was resolved quickly, in a few days, but the bank cancelled my credit card, without warning, on a Friday. I have one of the most common first names along with one of the most common last names; there are thousands of people with the same name as me. The guy who was on the no-banking list was not the same age and wasn't born in or living in the same city - everything else was different.


I don't think my name is particularly common, but I got stopped while going through immigration in Australia a while back, and was quizzed in a private room by 2 men in black suits for quite a while - who I was, why I was there, who I worked for, that kind of thing.

In the end, they thanked me for cooperating (they were actually quite friendly and polite in general, not like going to the US, for example) and explained that they were looking for someone with the same name as me.


I'm also Italian. My dad shared his name and surname with somebody with a shady background who incidentally lived close to our house.

He was denied an operation at the bank once, but was quickly able to clarify the issue and unblock it (probably he got lucky).

But he had to have himself removed from the white pages and the doorbell nameplate (il citofono :D) because of violent threats.


> was quickly able to clarify the issue and unblock it

This is a major part of the problem with algorithms: what if the issue is "computer says no" and that's the most detailed explanation you can get?


Regulators are aware and are creating rules to prevent this. For example, the upcoming EU AI law requires companies to provide a detailed explanation for, e.g., credit scoring if requested.


Re: Twitter image link - This is why you should never ever trust any corporation and their marketing BS. PASS LAWS INSTEAD.


> PASS LAWS INSTEAD.

Do you suppose any government is going to pass laws prohibiting companies from scanning photos for CSAM either on-device or in the cloud? When half the time it's being done at the insistence of those selfsame governments?


I know this isn't the answer, but I am happy to have a globally unique name.

I was lucky to have not one but two uncommon surnames. (grandmother remarried, children got an extra surname). There are only a dozen people with that combination of surnames.


I, too, have pretty good reason to believe I have a globally unique name. Which makes me wonder, how common is that? I have no idea how even to begin coming up with a rough estimate. I can’t even make an educated guess about the order of magnitude.


> There are only a dozen people with that combination of surnames.

How can you be sure of that?


I have no idea how it works in other countries, but in Poland there are publicly accessible `.csv` files with data about names and surnames in the PESEL system (the ID assigned to people with Polish citizenship) since its creation. A double surname would be marked as a separate entry. If something like this exists in other countries, checking the rarity of a combination of surnames would be quite easy.
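For illustration, here's a sketch of what such a lookup could look like. The column names and the inline sample data are invented for this example; the real dane.gov.pl files use Polish headers, so adapt accordingly:

```python
import csv
import io

# Invented sample standing in for the registry CSV; real files are much
# larger and use Polish column headers.
sample = """surname,count
NOWAK,140000
NOWAK-KOWALSKA,12
"""

def surname_count(csv_text: str, surname: str) -> int:
    """Return how many living people carry `surname`, or 0 if it's absent."""
    for row in csv.DictReader(io.StringIO(sample if csv_text is None else csv_text)):
        if row["surname"] == surname.upper():
            return int(row["count"])
    return 0

print(surname_count(sample, "Nowak-Kowalska"))  # 12
print(surname_count(sample, "Smith"))           # 0
```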


Do you have a link to these? I'm curious if my name is unique


You can download the CSV's here: https://dane.gov.pl/pl/dataset/1681,nazwiska-osob-zyjacych-w....

For the benefit of non-Polish-speakers:

- Nazwiska - surnames

- żeńskie - female

- męskie - male


hm, I was expecting to get first-names and last-names. Is there a CSV I'm missing?


I'm the only person in the UK with my sequence of names, they told me so when I was getting my covid jab and didn't have my NHS number on me so they had to find me by name and match to my photo ID.

French immigrant mother so French first name and middle name, and then an uncommon British (Welsh I think, dad's parents didn't learn English until they got to secondary school) second name.


Google, Facebook, Twitter are some indications. Not 100% guaranteed though.


Use of casual (non-GUID) identifiers will result in namespace collisions.

Casual identifiers have to be stored as plaintext to facilitate record matching (linking) across data stores.

The only way to enable full field level encryption for data at rest is to issue everyone GUIDs.

If we're not willing to do that, for whatever reason, the status quo (a boring dystopia) will persist.

cite: Translucent Databases


Minority report style police based on AI embedded in iPhones? That's a killer feature! They could call it iSWAT or something.

Let's see if fear of AI is greater than brand loyalty and convenience. An unstoppable force meets an immovable object.

Apparently Apple has a team working as AI police. These people decide who gets reported.

> While noting the 1-in-1 trillion probability of a false positive, Apple said it "manually reviews all reports made to NCMEC to ensure reporting accuracy." (https://arstechnica.com/tech-policy/2021/08/apple-explains-h...)

Don't even get me started with the 1 in a trillion probability, we know adversarial images could be created that will trick the system even when you don't have access to the model.


One in a trillion? I wonder what assumptions went into computing that number, and I cheer for the "courageous" person bluffing it inside the company. Anybody in tech knows that whatever assumptions were valid yesterday are very likely to break today or tomorrow, and with them, whatever piece of software relied on them. Apple should know better; they have zero-day exploits regularly. And that's before even accounting for the political pressures.

I can't exactly buy a new phone on a whim, but I'm not trusting my well-being to any "one in a trillion" in a company's marketing.


One in a trillion. How many photos are taken every day on iPhones? I can assure you these "one in a trillion" events will happen at least once a week.


1000 Photos per iPhone per week sounds a bit much.

Not arguing with the general sentiment though.
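Back-of-the-envelope, with assumed numbers. Note that Apple's quoted figure is actually one in a trillion per account per year; treating it naively as a per-photo rate here just shows the scale involved:

```python
# All inputs below are assumptions, not published figures.
devices = 1_000_000_000          # rough order of active iPhones
photos_per_device_per_week = 50  # assumed; "1000 a week" is indeed a lot
p_false_positive = 1e-12         # naively read as a per-photo rate

photos_per_week = devices * photos_per_device_per_week  # 5e10
expected = photos_per_week * p_false_positive
print(expected)  # ~0.05 expected false positives per week under these assumptions
```

Under these (deliberately conservative) assumptions the per-photo reading gives only a handful of false positives per year worldwide; the dispute in the thread is really about whether the quoted rate and its assumptions hold up at all.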


> we know adversarial images could be created that will trick the system

It could maybe be used to perform a kind of DDoS attack on the human verification stage by increasing false-positives, but I doubt a non-child-porn image would fool a human into thinking it looks like child porn just because of some carefully-applied noise.

Also, in what conceivable way is this like Minority Report, ie three psychic humans, floating in a pond, hallucinating the future?


Cultural differences around child/baby nude photos might also come into play. What seems a cute photo to some is CP to others.


Photos to be matched will be from the police, so will be material they think is prosecutable.


How about a real porn image with a young-looking actress and some carefully-applied noise?


I guess that would be for the defence to bring up in court, given that the images will be supplied by the police, who will consider them to be genuine child pornography.


Didn't the system in Minority Report work absolutely amazingly, except for several edge cases involving the super rich, who would easily get away with murder in today's society anyway? I know the movie is used as an example of something very bad, but I never got why.


They sent to jail the guy who didn't kill his wife with the scissors at the beginning of the movie. Did they have any other false positives before?


But with murders they have objective statistics that the murder rate fell to zero, which is pretty cool by itself. Most murders are not premeditated. I guess stopping people who would kill in the heat of the moment would be enough, but they weirdly went further than that. At the beginning of the movie the guy was going to kill his wife, if you believe the future-predicting technology they have.


I think the point was they had no way of knowing, and that free will is real so it is wrong to punish someone deemed to be fated to crime.


> Let's see if fear of AI is greater than brand loyalty and convenience

I don't think so - I think corporations and governments are getting better and better at PR and so, we'll see, slowly but surely, more and more features like this.


Just swap out the child abuse hashes for Tiananmen square photo hashes and january 6th insurrection photo hashes and you got yourself some pretty dystopian technology.

Wow, come to think of it, will the hash database on your device change when traveling between countries? A popular photo of two women kissing on your phone could be perfectly legal in the US, but suddenly get you jailed if you travel through Saudi Arabia and forget to delete it.


Hash database checks would be a privacy-friendly alternative to customs agents physically accessing your phone when you enter a country. This would be more efficient as well, so checking 100% of travellers' phones (and connected cloud storage) would be possible. A very effective way of protecting your country from IS terrorists!


> A very effective way of protecting your country from IS terrorists!

I sense irony. But I'll still reply:

> Hash database checks would be a privacy friendly alternative to custom agents physically accessing your phone when you enter a country

The border agent could always claim that their system detected something and therefore they need to look through all your private files.


That's the thing - if Apple can do this, they will do this. If the U.S. government will do this, they will do this. If the CCP can do this, they will do this. Complaining, protesting, boycotting, even passing laws won't matter - if they can do this, they will do this. The only way to stop it is to make it impossible - Tor, Freenet, I2P, etc. I don't hold much hope that truly encrypted, unbreakable networks will actually take off, though... because in an actually unbreakable network, the kind of censorship that people support is as impossible as the kind of censorship that people oppose.


If macOS is any indication, I bet the next iteration will send your picture hashes to Apple's servers. Then all they need to do is pass along some location data (if they don't already) and checking against location-specific hash databases becomes trivial.


> the hash database on your device

Is there one? Reading a BBC News story on this, it sounds like checking is done when you try to upload an image to iCloud, so it's on the server.


The database and check is on your device, and triggered when you try to upload. There is not much point to moving scanning to the device-side, unless this technology is intended to be extended so that in the future it doesn't check for an iCloud upload trigger.


I wonder how much local storage space is used to support this self-incrimination application?


Not much actually since it's just strings of hashes. It will however use CPU cycles when it hashes your device's photos.
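Back-of-the-envelope, assuming (hypothetically; the real database size hasn't been published) on the order of a million 256-bit hashes:

```python
# Rough storage cost of the on-device hash list; both numbers are assumptions.
n_hashes = 1_000_000   # assumed count of known-CSAM hashes
bytes_per_hash = 32    # a 256-bit hash
megabytes = n_hashes * bytes_per_hash / 1_000_000
print(megabytes)  # 32.0 MB — trivial next to a photo library
```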


If you use an Apple phone as intended, every photo you take is automatically uploaded to iCloud. It takes no user action and does not imply an intent to share or distribute the photo.


^^^^^

THIS THIS THIS THIS!!

How will this work when traveling? If I have legal non-CSAM pornography, but I travel to another country where pornography is completely illegal, will I get arrested?

Who controls the hashes? When do the hashes get updated?

Fuck I feel so betrayed by Apple.


Time to switch to Graphene OS my man.


Interesting - glancing through the 330-and-counting comments here, I see an enormous amount of harsh language about Apple, feelings of betrayal by Apple, promises to never buy another Apple product, etc.

But virtually zero mentions of the U.S. Congress, President, judges, political parties, etc. The folks who are presumably behind this new Apple feature. (Does anyone see an "Add Evil Feature...Profit" story here that does not require massive government arm-twisting?)

IANAH (...Not A Historian), but I'll guess that if the well-educated citizens of a country do not believe that it is a real democracy - or feel that being politically active on issues that they care about is an unacceptable burden - then (at best) it very soon won't be.


It smells like this is government-influenced. The leverage they are using is the tech anti-trust circus going on right now and the "save the children" decoy they used for the drug war.


time for some myth busting:

1. It's been turned on for all major cloud providers for some time. Upload something into google cloud/photos/dropbox? Yeah, the system's there as well. It just seems most people have been unaware of this so far.

2. It's only for the US. From my understanding Europe and ROW doesn't get this. At least for now.

3. I don't like this way of doing things, but I do understand that there's been major push from the US Government for this kind of tech. Basically, the tech companies had two choices: either add an actual backdoor, or provide another way of checking the content of all their users. They went with the latter.

4. If anyone has a problem with this solution, they should contact their US Representative, as they've been pushing this like crazy "for the children" :)

5. it's not scanning your local files. only the ones uploaded to apple photos. just like all the other actors.


> 3. I don't like this way of doing things, but I do understand that there's been major push from the US Government for this kind of tech. Basically, the tech companies had two choices: either add an actual backdoor, or provide another way of checking the content of all their users. They went with the latter.

That was the wrong decision. Had they just removed the strong encryption of cloud content, this would be the standard case you are referring to.

Now, since it's local, the question is no longer what it's currently being used for, but how long before they are obligated by the government to activate it for local files also.


This was still a problem when they were doing it server side. Especially since it was implemented stealthily.


The issue is what they are going to do next without telling you.

The issue is installation of a Spyware Engine, not its current usage.


What, other than paranoia, makes you think they are going to do anything next without telling you? They told us about this, we're all talking about it.

The issue is your calling it "Spyware Engine" as if it's anything that wasn't happening before - mesh networking for "Find my phone", watching bluetooth and wifi and GPS locations and reporting back to HQ for things like mapping and traffic monitoring and route planning, and possibly advertising, these are much more "spyware" than this is - they happen more routinely, to more people, and report more back.

UK ISPs have been legally obliged to block certain websites from access for years, Google knows where you live from nearby WiFi SSIDs and can correlate all your browsing traffic back almost entirely to your real identity all the time, GMail and Office365 and FaceBook and Microsoft know everyone you contact and what you talk about and therefore what kind of relation you are to each other, all the big providers know where you live and where you work and whose house you spend time in from location tracking and correlating cookies with IPs. This thing from Apple is way more narrowly constrained and scoped than any of that.


> 2. It's only for the US. From my understanding Europe and ROW doesn't get this. At least for now.

AFAIK the EU is actually pondering legislation to make proactive scanning for CSAM mandatory. Right now, in the US at least, proactive scanning like practically all cloud providers do is not required by law.



happy times :(

i fear more governments will take this position now that the big ones managed to push it thru.


> 1. It's been turned on for all major cloud providers for some time. Upload something into google cloud/photos/dropbox? Yeah, the system's there as well. It just seems most people have been unaware of this so far.

Two things. First, this feels like the outrage about the CPU ID in Pentium III's, until someone figured out it had been there all along in Pentium II's.

Second, are there any statistics which show that this scanning has been effective? Searching reveals a lot of conflicting information about child predation statistics.


Apple has gifted a golden gift to ransomware attackers. What's more scary than losing all your data? Getting you arrested.


Now we just need to place a photo on our opponents' phones - journalists, politicians, someone whose job you want, you name it - and their phone will call the police itself and bring the proof too, and it's their burden to prove their innocence.


Just send a cat video with 1 frame swapped out via e-mail.


Think of all those leakers or hackers who just happened to always have kiddie porn on their devices. Somehow. This has been the default state of things.


Planting something bad on someone was always possible. That hasn’t changed.


What's changed is that I could've once taken reasonable measures to mitigate that risk. All of that's now in Apple's hands and they've got no reason to protect me.


Yes, but now you won't have to bribe anyone or pump up probable cause for a search warrant to find such a plant. It makes it 100 times easier.


An attacker who has a way to place multiple illegal images onto someone’s phone and into their iCloud Photo Library without their consent… it’s an interesting idea but how is it possible?


Not very long ago we would have said, "an attacker who has a way to remotely control someone's iPhone without their consent simply by sending a text message... It's an interesting idea but how is it possible?" Then we learned about NSO Pegasus and that it was not only possible, but was already being done to the phones of unsuspecting victims across the world.


There's the whole NSO Pegasus affair, which answers the "how" for doing it remotely. Social engineering is also possible, though: someone drunk out on the town makes an easy target for a skilled social engineer.

All in all - it’s very possible.


These sort of databases and automated systems can easily be weaponized by anybody.

Suppose your target posts a picture from their iPhone to a social network. You just have to get that picture's perceptual hash added to the offensive database for bad things to happen to them.

To do this: you grab their picture and overlay some offensive content on top, rendered very transparently, so that the new image is about 95% original + 5% offensive content. You also generate a second image with a ratio of 98% original + 2% offensive content.

Then you post the pair of images on a public site that usually gets scanned for offensive content. (You may even add the original image for good measure.) Given that the offensive and original images can be exactly reconstructed, and that even at low transparency the human eye can still distinguish the offensive image, it won't take long before all the images get labelled as offensive, their hashes added to the database, and a collision with your victim's picture registered.

Launch a script and do it for every pic they post. Of course there are plenty of variations of the above technique.
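The blend attack above can be demoed with a toy average hash, a much simpler stand-in for real perceptual hashes like PhotoDNA or NeuralHash. At 95% original, the toy hash doesn't change at all:

```python
# Toy average-hash (aHash) demo of the blend-poisoning idea. Real systems
# use far more sophisticated perceptual hashes; this only illustrates why
# near-duplicates of an image hash identically.

def average_hash(pixels):
    """64-bit hash of an 8x8 grayscale image: bit = 1 where pixel > mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def blend(a, b, alpha):
    """alpha * a + (1 - alpha) * b, pixelwise."""
    return [[alpha * pa + (1 - alpha) * pb for pa, pb in zip(ra, rb)]
            for ra, rb in zip(a, b)]

# An 8x8 "original" image: left half dark, right half bright.
original = [[0.0] * 4 + [1.0] * 4 for _ in range(8)]
# Some unrelated "offensive" content: a checkerboard.
overlay = [[(i + j) % 2 for j in range(8)] for i in range(8)]

nearly_original = blend(original, overlay, 0.95)  # 95% original, 5% overlay
d = hamming(average_hash(original), average_hash(nearly_original))
print(d)  # 0 — a hash match on the poisoned image also flags the original
```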


It remains to be seen if this is possible at all… let alone “easily” done.

Have any other big tech companies that scan for illegal images been shown to be vulnerable to this?


For sensitive content you are allowed to store, like nudity, you can use a neural network trained on a dataset you can manually check and sanitize, and you won't be susceptible to this attack.

But if you can't store the data and are only allowed to store hashes, you become dependent on the quality of the hash database, which is troublesome to verify and therefore susceptible to poisoning.

How hard the game will be for the various players is not really the point, given that it's a game that shouldn't be played: only Apple knows the rules and can verify them, and not even Apple should have legal access to the raw offensive database to check that it hasn't been poisoned.

If they use a neural network, it can easily turn into a PR disaster once someone takes the weights they release and uses a generative neural network to hallucinate offensive content from them.


Apple’s documentation discusses how they use a neural network to generate the bits which get turned into the hash.


"You have an illegal video called 'CollateralMurder.mp4' [1] and you have been reported to the authorities."

How long until this is reality?

[1] Collateral Murder is footage of a war crime committed by the US; it was leaked via WikiLeaks and is the reason Assange is still behind bars.


How can the public confirm that NCMEC doesn't already contain non-CSAM images of US officials engaging in illegal, compromising or embarrassing situations?

Are there nonpublic images from Jeffrey Epstein's blackmail collections already in the database? Is there any system that could be developed to trustworthily prevent this situation without distributing child porn?


I will selfishly say that I care less about child abuse than about my iPhone being entirely mine, with only actors/people who have my explicit opt-in consent able to look at it. Perhaps the root of Apple's thinking is that not many people feel that way, but it's not even something I have to think about: I know clearly that this is enough to make me move off the Apple ecosystem.

If anyone thinks or says otherwise they ought to think about what else they would agree to where the other person can do what they want to them without their consent.


> False positives

If this were a problem, would we not be seeing a slew of complaints about innocents being dragged through the mud with OneDrive and PhotoDNA? The only thing unique about Apple's implementation is that it's client-side.


But there is a difference. Sure, the same technology might be integrated into several cloud storage providers, but I can choose whether to use a storage provider or not. If Apple activates this technology on your phone, you cannot opt out. Your phone will be searched.


Yeah this is what really irks me, cloud providers always struck me as "optional", this is adding a feature which can trivially be tweaked to simply scan your entire phone for wrongthink and send the results to the CCP.

If Google decides to implement this, which doesn't seem very unlikely because of the optics of NOT implementing this, people who aren't tech savvy won't really have a realistic way to opt out.

I just don't think there is very much evidence that in practice false positives are a big issue and the article is very much pushing that argument.


If you think Apple is going to install something onto your phone to scan everything for wrongthink and send it to the CCP, you shouldn’t be installing their updates or using an iPhone anyway. They can already do that at any time. This feature doesn’t make it any easier. On the contrary, this feature seems specialized for its purpose, and to slide down your slippery slope they would need some different and more general kind of spyware.


I think my "slippery slope" concerns are less technical and more about Overton window shifting. If you have this practice of scanning images in place, it's not as huge a leap to suggest we should register hate propaganda images (for instance, infographics) and not allow Apple to lawfully host them on its servers, and so on. You get fewer and fewer steps away from justifying banning things like propaganda against the government or individuals in the government.

I would agree on a purely technical level your phone is already pwned by Apple so worrying about this on a technical level is closing the barn door after the horse got out. However from a social perspective one of the holdouts against photo scanning has stopped being a holdout, and doing things from client-side makes it seem less wrong to do other things client-side.

Besides the privacy stuff, this also is a bit more of a slide towards software that you purchase being ultimately controlled by and for the benefit of parties other than yourself.


Apple is a US company and this feature is rolling out to US customers in partnership with US law enforcement. I don't think we have to invoke the CCP boogeyman to be worried about this.


This comparison of images to known hashes is only for Apple's iCloud Photo feature, which is optional.

That said, many hosted providers/social networks have similar features - they just have server-side implementations and might not have felt the need for disclosure.


Google already has this.


But this will be triggered only while uploading to iCloud, so if you are not using iCloud for photos the algorithm will never run? Or that's what I understood.


For now. The next step is to do on-device scanning with fuzzy hashes, whatever that means.


What does that mean?


Correct, Apple confirmed this hash-matching is only for their own iCloud Photos service (and appears to be designed with that service in mind).


Only if you use iCloud photos, so you do have a choice.

In practice any major cloud provider is going to do this or is already doing it. We need a better regulatory approach; it isn't practical to put the responsibility on providers.


"On-device scanning" and "only if you use iCloud photos" doesn't make any sense, does it? If that's really the case, they're just preparing the ground for the next step, which they hope won't get as much publicity...


Microsoft and Google hash your unencrypted photos on their own servers. Apple could easily do the same… I wonder why they didn’t…


Probably because they are also adding a feature to parental controls which, when enabled, checks images sent to the phones of your children 13 and under; if an image matches a known bad image, the child is given the option to accept it or not, with a warning that if they accept it their parents will be notified and given a copy of the image.

That has to be done on the phone because Messages is end-to-end encrypted. If they are going to have to have hash matching on the phone anyway for that, it makes sense to also use that for checking images that are to be sent to the cloud.


The detection of sexual images in kids’ messages doesn’t use the same hashing setup as the iCloud Photos detection feature.


No, there is a plan to also do on device scanning.


They've said the on-device scanning is done only when you are using iCloud Photos. So it's on-device scanning but only when you are deciding to share your photos with Apple... At least that's how it is for now.

https://www.macrumors.com/2021/08/05/apple-csam-detection-di...

"CSAM image scanning is not an optional feature and it happens automatically, but Apple has confirmed to MacRumors that it cannot detect known CSAM images if the iCloud Photos feature is turned off."


There are two things going on. I thought the iMessage stuff was on-device, but maybe I'm misreading things. It's unfortunately cloaked in secrecy…


> We need a better regulatory approach

We won't get one though, because the "think of the children!!!" crap is extremely pervasive and anyone going against it will be smeared as a pedo defender.


It is unrealistic to expect corporations, which exist solely via regulations of various governments, to act as if they are immune from government regulatory control. In fact, a lot of negative press for FAANG (add extra letters as needed) specifically makes the point that they are acting as if they are bigger than any government.

In that mindset, the answer is absolutely to combat and to attempt to change broken laws first.

Also, it isn't just "think of the children". For instance, there have been some _terrible_ proposals under the banner of Right to Repair. People tend to not want to invest the time in understanding the ramifications of the actual proposals, and instead vote for or against the concept. One of the reasons ballot measures are both empowering and terrifying.

Good regulations take time and care - and generally, less is more.


In cloud storage they can manually review the picture server-side for false positives before handing it to law enforcement.

You are consenting to the upload and are aware that it can be searched, so it's not a legal problem.

Law enforcement will also review the picture in the cloud instead of busting down your door to search your phone, the whole scenario from the article.

When they do it client-side, they can't just upload your content on a hash collision. (Or maybe they do, which is also a problem in itself.)


This is a lie. Did you try reading the actual announcement by Apple? It's right here: https://www.apple.com/child-safety/.

- The scanning will be performed only if photos are to be uploaded to iCloud.

- The database will be encrypted multiple times in a way that it can't be clearly read.

- There's no notification to Apple in any way in case of matches.

- Instead, each match result is again encrypted in a way that is inaccessible to Apple and uploaded together with the photo.

- If there are a lot of positive matches, they eventually become able to decrypt them. That's when they will do a manual check, lock the account if it's correct, and notify the authorities.


There are multiple features. This particular one is about client-side scanning before upload to Apple's iCloud Photo service, which is optional to use.

Presumably it is client-side so that they can do anonymization/encryption of photos on the server, and treat any data access outside the account (and the accounts it has shared the photo with) as an audited, cross-organizational event.

But if you want to use another hosted service, you can... and likely get their implementation of a similar system. Presumably this is US regulatory compliance.


Who would back up their phone to the cloud if they were involved in illegal activities?

This is going to bite more innocent people through false positives than criminals who already know how to get away with these things.


Presumably the cloud synchronization checks are not a feature Apple wanted to add, but one which they had to under US regulations. Other providers have done this for years server-side, but Apple needed a different approach since the photos are E2E encrypted[1]

It is not a ML model but a list of known image hashes, and is only enabled for US-based accounts, furthering my suspicions this was minimum-effort for regulatory compliance.

Note they _do_ have a feature (also announced today) that uses ML models, but it is meant for local filtering and parental controls/notifications. This feature is also US-only and the parental notifications policy is fixed and age-based. I believe this is both to fit into regulations (e.g. US recognition of rights based on age) and into cultural norms.

I suspect they will have different rules in different jurisdictions when this rolls out further in the future.

[1]: With separate key escrow HSMs for account recovery and legal compliance with e.g. court-ordered access.


this is false. it's only for photos uploaded to apple's cloud.

the tech runs locally, but only on those photos.


Why do some people seem to think that it will stay limited to that forever? It's not just you, it's multiple people in here who think that this is ok. They are our devices. They should not be running software that can get us arrested on our devices. It won't stop there. It never stops there when they can ask for more.


If you take their technical summary [1] at face value, they designed it to be limited.

Even if the hashing and matching happen on the local device, a match can only be revealed server-side. The hash database distributed to local devices will be blind hashed with a server-side secret key and the locally derived hash match will need to be decrypted with that key to be read by Apple. So theoretically if the local device doesn't upload content to iCloud, no content matching can be revealed, even if the hashing and matching has been done locally.

Of course, you also need to trust that Apple won't be uploading those locally derived hashes to iCloud without the user's permission if iCloud backups are disabled.

[1]: https://www.apple.com/child-safety/pdf/CSAM_Detection_Techni...
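A toy sketch of the blinding idea, using discrete-log-style blinding as a stand-in. This is not Apple's actual PSI/threshold construction, just an illustration of why a match can only be confirmed where the server's key lives:

```python
import hashlib

# Toy "blinded database": the server exponentiates each hash with a secret
# key before distributing it, so a device-side hash only becomes comparable
# after passing through that key again. NOT real crypto, NOT Apple's scheme.
P = 2**127 - 1            # a Mersenne prime; fine for a toy
SERVER_KEY = 123456789    # secret exponent that never leaves the server

def h(data: bytes) -> int:
    """Stand-in for a perceptual hash, mapped into the group."""
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % P

def server_blind(value: int) -> int:
    return pow(value, SERVER_KEY, P)

# The device only ever sees blinded values, which it cannot invert or
# match against on its own.
database = [b"known-image-1", b"known-image-2"]
blinded_db = {server_blind(h(x)) for x in database}

def server_check(image: bytes) -> bool:
    """Only the key holder can turn a raw hash into a comparable value."""
    return server_blind(h(image)) in blinded_db

print(server_check(b"known-image-1"), server_check(b"cat-photo"))  # True False
```

Without SERVER_KEY, the device can compute h(image) but has no way to test it against blinded_db, which matches the summary's claim that matches can only be revealed server-side.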


because we’re not discussing hypotheticals, but real life.

and in real life governments elected by the people have been pushing for this for years. the result has been google and all the other cloud providers already implementing this. apple was the last big one to hold out.

will they expand this in the future? sure, whatever. the system is so broken, and i’m so powerless, that at this point in time it doesn’t matter what i want.

at least it will only apply to the US. the ROW is spared. at least for now.


You can opt out by not using iCloud for your photos. If you prefer, you can install Google Photos or OneDrive onto your iPhone and let your images be scanned in the clear in the cloud instead.


This is only applied to photos being uploaded to iCloud.


I might be mistaken, but how is it not the same thing?

- You don't like storage provider - you don't use it

- You don't like ecosystem/smartphone provider - you don't use it

On top of that, you can opt out even now by disabling updates; it just means that you won't have access to the newest iOS and take the risk that at some point software developers will stop supporting the OS you decided to stick with.


>You don't like ecosystem/smartphone provider - you don't use it

This just doesn't work like that in our day and age. We are one ecosystem (Android) away from complete domination by this scanning technology. You could argue that I could use a Librem or something, but at that point all Librem users automatically become suspicious, because "all major manufacturers have this, so he probably has something to hide".


Apples and oranges: unless you want to severely affect your quality of life, owning a smartphone is a pretty basic thing for adults.

You could just as well live in the woods, away from society, but that's obviously not the solution to bad laws.


If only it were that easy. Social pressure to use certain services often trumps personal preferences. Sure, I can change providers from X to Y, but unless I successfully convince my mother to do the same, I'll end up having both X and Y, as my mom will still use X.


> The only thing unique about Apples implemention is that it's client-side.

Yeah, the only unique thing is the installation of a Spyware Engine on a personal device that will do who-knows-what later. What can possibly go wrong?


The system is used during synchronization to iCloud Photos, which is an optional feature.

Other hosting providers do similar scans, but they do not do them within the client-side component. Presumably, this is done to give the hosting environment no access to the actual data (e.g. data access such as by subpoena becomes a cross-organizational auditable event)


This is definitely one of those topics where going public can have immediate negative consequences for someone even if it turns out to have been a false positive.


> The only thing unique about Apples implemention is that it's client-side.

This is just another case of 'your phone' is not yours. People store their lives and thoughts on a device they have no control over.

I am not against this use per se. I am worried about what it means legally. The minimum we need is a clearer legal framework. This is a private company accessing personal data without consent and without any formal requirement from law enforcement.


> This is just another case of 'your phone' is not yours. People store their lives and thoughts on a device they have no control over.

This is specific to photo synchronization to the cloud, which is an optional service.

In that sense, it isn't really doing anything different than say a OneDrive photo sync app - except it is doing the check on the client side, rather than having the server decrypt to do the check.

It is part of the default install and setup wizard, but that's a battle we lost decades ago in the PC world. You still have to opt into iCloud synchronization for this to be enabled.


Yes. The concern is reasonable, but given that this technology is widely implemented in basically every cloud storage offering, there isn't much to suggest that the described outcome will actually happen.


Do we know the described outcome is not happening with PhotoDNA, etc?

As in the story, it wouldn't become a well-known issue, since it would happen only to a few unlucky individuals.

The same goes for the unintended asset freezing.


Maybe anecdotal evidence, but I've read multiple posts/news stories about people affected by sanctions because of a shared name. I figure that if there had been a case where PhotoDNA actually led to an innocent person being arrested, it would have made headlines somewhere, at least here.


Mega anyone?


I was seriously considering switching to Apple for my next phone because of all their privacy marketing. With this update, I'm not touching Apple with a 10 foot pole.


A second degree acquaintance of mine in Germany had his house raided by police and all of his electronics (including his work laptop and phone) temporarily confiscated on suspicion of possession of child pornography because...

...an anonymous member of a child pornography forum used the handle "$NAME_$YEAR" and had his location set to $TOWN. The innocent bystander's middle name is $NAME and he was born in $TOWN in $YEAR but had not lived there for decades. He only learned about this absolutely nonsensical justification and got his stuff back by hiring a lawyer who requested access to the DA's investigation file.

If you can get a judge to sign a warrant with flimsy non-evidence like this, what will they do when the robot says "95% match detected"?

This gives me pause...


If it is really only to be used when iCloud Photos is turned on, why not do the hashing on Apple's servers when the photos are uploaded to the service? It is my understanding that this kind of hash-checking of files is a pretty standard thing on major cloud storage services.

I can only think of two reasons for this:

- for a future expansion into files that are not uploaded, even with cloud services turned off

- Apple doesn't want to foot the energy costs of doing it

Finally, it should be noted that this will also apply to macOS Monterey, not just iOS/iPadOS.


Third possibility: Apple is going to E2E encrypt iCloud photos, but the only way the U.S. government will allow it while still complying with CSAM laws is if the photos to be uploaded are scanned on the device.


There are no CSAM laws that require this sort of privacy violation, and it would be a gross overstep by any government to mandate it.

If they enabled E2EE iCloud after this, it would be just for marketing purposes: because the client-side encryption is being circumvented, the whole promise of privacy would be rendered a lie.


> client-side encryption is being circumvented

Client side encryption of what? A plain photo? That you designate to upload to cloud? At which point it's encrypted, or hash-checked then encrypted. Where's the circumvention?



They have to have hashing on the phone to support the new parental control option for Messages. If parents enable this option, it checks incoming images on their child's phone against the hashes, gives the child a chance to reject matching images, and if the child chooses to accept the image notifies the parents and gives them a copy of the image.


No, that "feature" isn't based on hashes from the CSAM database. It's instead based on a machine learning model trained on the CSAM database, which allows them to identify arbitrary images of underage nudity. The two are independent of each other.


> why not do the hashing on Apple's servers when the photos are uploaded to the service?

Their reasoning seems to be that it would break E2E encryption.


So... the obvious Serious Issues here are privacy and child pronography. This blog though, is about an issue I don't know how to define concisely.

It's about getting labeled, for whatever reason, as "high risk." I flagged some PayPal thing years ago, when I moved countries. It was too much trouble to sort out, so I abandoned my $20 and moved on. PayPal has me in a no-recourse bucket. Most transactions run through such risk assessments. False positives are tolerated. Child pornography scanning, as this post lays out, will have similar issues. We're already used to credit scores and insurance behaving like this.

I'm not diminishing the privacy aspects, but this neo-Kafkaesque world of automated, risk-assessing bureaucracy is also a world we're constantly walking towards. It's also bad.


Basically, these companies know that false positives happen. To them, it's just a statistic - they can afford to wrongly ban 0.1% of customers in the name of preventing abuse. But for the person they're wrongly banning, that's not so fun.

I'm worried this will apply to more serious things some day. "This algorithm says you're guilty of a crime, and it's 99.8% accurate, which we've determined is acceptable, so there will be no way to appeal your verdict."


>The company claimed the system had an "extremely high level of accuracy and ensures less than a one in one trillion chance per year of incorrectly flagging a given account".

>Apple says that it will manually review each report to confirm there is a match. It can then take steps to disable a user's account and report to law enforcement.

They say all reports will be manually reviewed, so the blog post, while painting a scary picture, doesn't really match the reality of how this works, i.e. automated systems flagging with no human intervention. I know the blogger tries to illustrate how human intervention isn't enough, but in this case we're talking about a human simply comparing an image with another image, not a hypothetical lazy legal department "playing it safe" instead of doing due diligence.
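For what a "one in a trillion per account per year" claim could even mean: with a small per-image false-match rate and a multi-image threshold before any human review, the per-account probability collapses combinatorially. Every number below is invented for illustration; Apple has published neither its per-image rate nor the exact threshold.

```python
from math import comb

# All figures are assumptions, not Apple's published values.
p = 1e-6        # assumed probability a single photo falsely matches the DB
n = 10_000      # assumed photos uploaded per account per year
threshold = 30  # assumed false matches needed before any human review

# Binomial tail P(X >= threshold), summed term by term to stay in float range
# (comb(n, k) alone overflows a float for mid-range k).
term = float(comb(n, threshold)) * p**threshold * (1 - p) ** (n - threshold)
p_account_flagged = 0.0
k = threshold
while term > 0.0 and k < n:
    p_account_flagged += term
    # ratio of successive binomial terms: P(k+1) / P(k)
    term *= (n - k) / (k + 1) * p / (1 - p)
    k += 1

print(p_account_flagged)  # vanishingly small under these assumptions
```

The blogger's concern survives this math in a different place: the calculation assumes matches are independent accidents, which is exactly what adversarially crafted near-collision images would violate.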


As a survivor of child abuse, one of the major obstacles to recovery is to identify and let go of this generalized fear that goes with you everywhere you go. I find it extremely ironic that, under the pretense of "saving the children," one would implement technologies that subject the general population to that same type of overbearing fear.


As a stockholder who owns thousands of dollars of Apple equipment and respects the company, I think this is a huge mistake.

This technology benefits practically nobody, consumes resources, and is bad for the reputation of the "privacy-aware" company.

If something like this is to be implemented the door is opened for scanning everything else.

Here is a question: if I know my photos will be scanned for child pornography, would I keep child pornography on my phone?


I ask this question in earnest - what's in this for Apple, why is Apple doing this?

I haven't seen anyone who likes this direction taken by Apple. Why would they do something that has the potential to detract from demand for their products? Apple always said, "We're different: we sell you our products, we don't sell your data." Now they're giving away the data, and for what? Child safety? I don't buy it.

I remember when Apple refused to help law enforcement unlock the iPhone of a mass shooter in California. Why the change of tack now?


> why is Apple doing this?

They don't want child pornography stored on their servers?


Is it maybe due to FOSTA-SESTA? Maybe they're worried about being liable for crimes under that bill?


Prepping to meet China's expectations?


Seems that most comments are on the privacy side of the argument. So I will try to defend this :)

"Apple’s system is less invasive in that the screening is done on the phone, and “only if there is a match is notification sent back to those searching”, said Alan Woodward, a computer security professor at the University of Surrey. “This decentralised approach is about the best approach you could adopt if you do go down this route.”

So it will only happen when you upload data to iCloud. The question then is: is it okay for Apple to make sure no illegal content is stored on its servers? I think you could argue both ways, but to me at least it is less privacy-invasive than it seems.

It is about expectations of privacy, in the same way you can expect less privacy in a public space in the real world. It should be clear by now that you can expect less privacy on the public web. Of course, a counterargument is that it is perhaps more like renting a hotel room than a totally public space. But you can also expect less privacy in a hotel room than you can in your own home.

For many years, privacy and freedom of speech have been hallmark features of the internet. I was (and kinda still am) a big proponent of this. But we have seen what happens if you take it to its extreme, from disinformation campaigns to illegal activities. So to me (and, incidentally, Facebook), we should get better (inter)national rules about how we want to use the public space of the internet, just as we have rules and laws in physical public spaces.

The internet is blending more and more into the real world. So governments should make clear in what way we can expect privacy and free speech, where we can expect it (self-hosting, public platforms, etc.), and what responsibilities companies have to enforce it.

The hardline arguments of privacy advocates are the same ones the NRA uses to prevent any form of gun control: it starts with A, so what is then going to prevent them from doing B? Well, rules, regulations and laws. You should watch carefully that privacy isn't watered down, and law enforcement should make sure they don't misuse the rules there are. But to me, checking hashes of known child-abuse photos is the same as doing a basic background check before giving someone an assault weapon.

That being said, it will stop no one from encrypting images before uploading them to iCloud.


The problem with client-side verification is that it doesn't take much to have the system look at all local files, not just the files being uploaded to iCloud.

Now that Apple has some form of client-side verification, an authoritarian government with enough economic clout can pressure Apple into extending the system to look for things beyond child pornography. The government could even just provide Apple with its own hashes, which may or may not be CP, and mandate that Apple check those hashes too. So in the end it all depends on how willing Apple is to push back against such governments, and whether they are willing to take a hit to their profit margins by being effectively banned from a country.


But they can do that anyway. For all you know, it is already running. As proposed, the system only scans before uploading, so it is not scanning all data on the phone. Of course, anything can change. But since it is a closed-source OS, nothing is stopping them from having that ability anyway.

And suddenly I remember that Apple is playing by Chinese rules in China anyway.

https://www.nytimes.com/2021/05/17/technology/apple-china-ce...


If they only care about their servers, why are they installing it client-side? There is clearly more to it than that.


One reason could be so they don't have to be able to decrypt on their end a file that you send them to keep for you.


The issue here is the installation of a spyware engine, not its current usage.

The issue here is what they are going to do next without telling you.


Also, it is not really a spyware engine. It checks checksums, so you need to know the content in advance for it to be of any use. People forget that if you are in the Apple ecosystem, they already know more than enough about you; they don't need a spyware engine.

The thing is that they do their best to do as little server-side processing as possible. Tasks like sorting photos and face recognition are all done locally, so as to prevent misuse and avoid ending up with a big database of all sorts of data.

They have chosen the same route for a very specific set of content checking. Of course this tech can be used in malicious ways (banning books and whatnot), but if you live in a country with a government like that, it's best not to trust any device that comes with preinstalled software anyway.


>Also it is not really a spyware engine.

Well, "it checks..." is exactly what spyware does; at least that part is a Spyware Engine. Which part of it is not? Have you seen the source code?

>The thing is that they do their best

Yeah, right. Or so they say...

>of course this tech can be used in malicious ways

It is already used in malicious ways: it works as a Spyware Engine, and we have no information about what else it does under a BS umbrella like "child protection" or whatever other BS they invent to justify attempts at legalizing a Spyware Engine.


At any time Apple could turn evil and include invasive spyware that sells you out. If you’re afraid this can happen, this new feature doesn’t make it any easier. You shouldn’t use Apple products at all if you distrust them to that degree.


Will it help? Imagine what other companies will do if Apple gets away with this...


Considering how frequently my innocent images (petting my cat, a photo of a pepper growing in my garden, pointing to a word printed on a page in a book) trip Discord's automatic bad-image blocker and can't be uploaded because they are deemed bad, I'm a little worried about what happens when they turn this technology on. Instantly, many 100% innocent photos people have taken will be quietly flagged, and people will be put on lists without ever knowing it. As they go through life, this non-thing they were never charged with (and so could never be found innocent of) will follow them, haunt them, and impact their business relationships and more. All because of pictures a computer thought were something else.


Sounds like you're describing a completely different system than the one being discussed. It compares to a corpus of known-bad images.


The contents of this corpus are unverifiable. They are claimed to be only known-bad images.

However, the public doesn't (and shouldn't) have the ability to inspect these images to verify that non-CSAM but politically threatening images are not included (e.g. pictures of politicians doing cocaine with adult prostitutes, or images of war crimes committed by US troops).

I think that distinction is important. Because even if the NCMEC database is only child porn right now (which we cannot know), the public will also not know if the scope of the database later changes because now Apple will call the police on whoever possesses an image that matches this database. Adding this capability increases the incentive for misuse of the database.


This post starts out alright, but the idea that police will raid your house without even requesting a copy of the picture that triggered the system is outright ridiculous.

A better (and more related to his issues) scenario would be Apple automatically kicking you off their platform and you losing access to all your data, including your passwords which you stored in iCloud Keychain. Because that is an actual realistic scenario.


Yeah, they would never do that...

https://en.wikipedia.org/wiki/No-knock_warrant#Controversy

Key quote near the bottom:

Due to errors or acting on bad or faulty tips without double-checking information, Chicago Police Department has raided many wrong addresses.

So yes, raids without double-checking the information (i.e. looking at the picture that led to the tip) have been happening quite recently.


In The Netherlands, it is common now for the police to report on a bust "also quite a substantial amount of cash was found".

Sometimes, this is the only substantial allegation.

Show me the man, I will show you the crime. (Stalin? Beria?)


Qu'on me donne six lignes écrites de la main du plus honnête homme, j'y trouverai de quoi le faire pendre.

If you give me six lines written by the hand of the most honest of men, I will find something in them which will hang him.

Cardinal Richelieu


This is great, I can already see reverse ransomware. It does not remove desired data from your device, it adds undesired data and then deletes itself including any evidence that you are a victim instead of a criminal.


I don't buy the equivalence here. Notice that while the author was blackballed by an arse-covering bank, they weren't actually arrested for sanctions-busting. This is broadly because you need a bit more than "appears to match entry in database" before you get to probable cause for an arrest.


In the same way neural nets can be fooled into identifying images wrongly, I think nerd brains can be fooled into making false comparisons between non-comparable scenarios, especially as soon as "privacy" is involved.


I wonder how Apple will fish out “child porn” from tons of innocuous pictures of babies, often mostly in the nude (children like to wear no clothes). Is their algo clever enough to understand context?

I somehow doubt it.


In theory they are simply matching against a known database of existing images, and as such your baby/family images will not be a match - they are personal images that have not been catalogued elsewhere. There is also meant to be human verification of any flagged images.

In practice ?

(Edit: This is for the iCloud scanning. The (i)Message warning around sexting is a whole other issue)


> There is also meant to be human verification of any flagged images.

"Rest assured our system is safe, a random guy working in a digital sweatshop at the end of the world will get to see your naked baby pictures just to make sure you're not a criminal"


This is definitely an 'added' risk they need to qualify - how they will stop any verification from leaking private images.

One would hope that such checks would be minimal and therefore performed by suitably qualified (and equally audited) personnel. As you allude to, it is more likely that such work is outsourced, in the same way the manufacturing of the phone is.


Amazon in the pursuit of Alexa quality control has most recordings manually reviewed by real people to see whether the response was appropriate. This includes any accidental triggers. It probably does not surprise that these devices have recorded sensitive information like bank details and passwords, or have recorded instances of domestic abuse which employees have been forced to listen to [1]. It absolutely wouldn't surprise me to learn that Apple will have a team of people reviewing everything that gets flagged, who will get to see everyone's private photos and also probably be exposed to real CSAM.

[1] https://www.theguardian.com/technology/2019/apr/11/amazon-st...


Apple never gets to see your photo. Even if the photo matches the database and even if you have enough photos matched to reach the threshold that lets Apple decrypt the hashes, the photo itself is not decrypted or sent to anyone as part of this process. Only the related metadata.


Source?


Jeepers good point, actually this went from an inconvenience to a total show stopper for me.

What will the job interview sound like for this job? “Do you have any experience in recognising child porn?”, “yes, many years, and great enthusiasm for the job”.

I’m looking for my old Huawei phone now.


>There is also meant to be human verification of any flagged images.

That shouldn't make anyone sleep soundly at night. The person at the other end has no incentive to not turn your life upside down.


Wait, if the Apple algorithm flags a photo of my kid in his bath, some unknown human is going to check the picture for verification purposes? That's even more cringe.


Your kid in the bath won’t be in the database being matched against.


If it’s a fixed db, not an “artificial sort-of intelligence”.

But that’s a stupid way around it, surely. Child porn is surely, and tragically, in continuous production. And really, it’s the production that is the biggest issue, this is the continued abuse of children. And that’s the one bit the algo won’t match against?


Apple has documentation explaining how it all works with a database of known images.


Matching against 200k known photos, I'd be extremely surprised if at least some false positives weren't generated.
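A back-of-the-envelope version of that intuition, with every number invented (Apple has published no per-comparison false-positive rate): even a tiny rate multiplied by a large database and a huge user base yields a non-trivial absolute count of false matches.

```python
# All figures are invented for illustration only.
fp_rate_per_comparison = 1e-15  # assumed chance one photo falsely matches one DB entry
db_size = 200_000               # roughly the database figure discussed above
photos_per_user = 5_000         # assumed average photo library size
users = 1_000_000_000           # order of magnitude of active devices

comparisons = db_size * photos_per_user * users
expected_false_matches = comparisons * fp_rate_per_comparison
print(expected_false_matches)  # on the order of 1,000 false matches worldwide
```

A per-account match threshold keeps most of these from ever being reported, but the expected count landing in the hundreds or thousands rather than near zero, even under generous assumptions, is why "at least some false positives" is a safe bet.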


> There is also meant to be human verification of any flagged images.

So Apple does have access to everything you store on your phone? Either they're lying about being concerned about privacy or you don't have your facts straight.


The only images scanned are ones uploaded to iCloud photos, so these are images Apple already has.


At this rate Apple is no better than Xiaomi or Huawei: using forced labor to make their phones; producing phones in China; biggest market is China; checking your phone for prohibited content. Maybe it's time to switch to Xiaomi: it is 3x cheaper and better tech.


> biggest market is China

When I first read this I thought it was impossible. But it seems like, while Apple’s total revenue from the US is double that of China, actual iPhone sales volumes may not be as far apart. The numbers aren’t publicly available, but Bloomberg has an estimate as of 2017 showing US iPhone sales of 70 million vs 50M in China:

https://www.businessinsider.com/apple-iphone-sales-region-ch...

It looks like iPhone sales were going down in China as of then, but their GDP has also grown rapidly in recent years and I’ve seen other charts showing Apple’s China revenue basically increasing in concert with Chinese GDP, so it may not be a stretch to say that China is the biggest iPhone market today. Whoa.


Better tech, not really. Otherwise it’s accurate.


I recommend people watch [0], a discussion of Apple's CSAM scanning by Alex Stamos, Matthew Green, Riana Pfefferkorn and David Thiel. Gives a well-balanced overview of what's happening and what the trade-offs are, IMO.

[0] https://www.youtube.com/watch?v=dbYZVNSOVy4

Edit: tracking -> scanning


Tomorrow, are all Macs (heck, PCs) also going to come under the same ambit? How is one supposed to work on classified business or government tasks if a piece of software is actively going to rummage through it all?


Unpopular opinion here. I see the slippery slope, but I would say that this is the bare minimum price Apple is willing to pay to enforce privacy and encryption all over the ecosystem. They needed to give something to the governments to take the “think of the children” element from the broader privacy discourse. I think for example that the radical stance of Telegram on this is much more strategically detrimental to our broader right to privacy than this compromise Apple has settled with. That said, we need full transparency here. Apple’s statement that the probability of false flagging is one in a trillion is not enough and can’t be taken at face value.


> needed to give something to the governments

Why? Do car companies also feel compelled to give your car's location to all governments? There is no need for Apple to do anything; it's a choice. They are doing tit-for-tat with governments, and that's always nefarious.


I thought modern cars did this already?


Perhaps not automatically, but many cars do save GPS data in their diagnostic data, which could easily be accessed if a case was brought against you.


they do?


Because this is the biggest capitalistic entity in history, and the only risk it faces is governmental actions. When it comes to governmental action against privacy, that means undermining encryption for everyone.


When Apple does it, you can do nothing as a user. You can, however, complain to your representative in a democratic society.


Apple can't even build a self-driving car, which is an easier problem than building a neural network that can't be tricked. If they claim one in a trillion, they didn't try hard enough to attack their own system. They should publish their model and get it peer-reviewed publicly; without that, their number is unscientific.


Agreed 100%. That's what I mean by having more transparency.


The point the article is making is that there is no technical way to give up on full E2E with a backdoor that can't be misused. Both politicians and Tim Cook are fully aware of it.


This is so awful. I trusted Apple because of their stance on privacy — I felt as though I could trust them so I bought into their ecosystem and now my iPhone is basically the nerve center of my life. I keep everything on my iPhone and access all my primary services through it. All with the confidence that all I have to do is push the lock button five times and all of it is locked away behind a very long diceware pass-phrase and that nobody, not even the government, could pry into it. So reading all the headlines tonight was like a physical punch to the gut. All of a sudden I’m tasked with migrating all my shit somewhere else? I literally don’t even know where to begin… it’s going to take me weeks to research everything and find a new device. I don’t even know if there is an alternative device.

And no matter what move I make next, it’s going to be a downgrade in hardware specs. What a bummer.

And here’s another thing. In the OP, they make it sound as if the imaginary mark was a female. We all know that females won’t be affected in any way. Most females will probably not even care about this. Because they don’t know what it’s like to be nervous around children or to have to make sure you are never alone with children even by accident or happenstance because everyone assumes that you are a pedophile, rapist kidnapper. Leery looks from adults all the time whenever there are young girls around. It’s a uniquely male experience. It’s also uniquely male to have your guts cut open and ripped out by your cell mate. When people start being brought in for a hash match and then they find incidental images of young girls that look like they might not be 18, innocent MEN are going to die.


Btw, now there is a very nice attack vector for "swatting" people you don't like:

You create an anonymous account, especially pretending to be underage, and you send blacklisted pictures to the target. They will be automatically stored on the victim's device upon receipt, and maybe automatically uploaded to iCloud for backup.

Now you just have to wait for an automatic apple shit to trigger and have the company and cops ruin the life of your victim automatically.

It is SaaS: Swat as a service :-)


This is exactly right and my biggest issue with this.

On my iPhone all images I look at on WhatsApp get automatically added to my photos, which in turn automatically get uploaded to the cloud.


It seems like it isn’t possible to store images in someone’s iCloud Photo Library without their involvement. This would be a major security incident and immediately patched, if someone were able to find a way to do this.


Don't most messaging apps these days automatically save all photos by default? I believe this is the default setting on WhatsApp and even Telegram. Even if the photo won't be saved until the recipient opens the message, all that's needed is for them to forget about this setting. Then the photos are automatically uploaded to iCloud, also by default.


Reminds me of the German novel QualityLand, where rich people give their children names with deliberate misspellings just to avoid cases like that.


Pre-internet, I used to go drinking with a chap called John, one of the pubs we went to was next to the main police station, so unsurprisingly full of plod. One night we walked into the pub and it fell almost silent; it was only later we found out why -- John was the spitting image for the just-arrested Peter Sutcliffe: the Yorkshire ripper.


This person has a personal story about how bank bureaucracy cannot be bothered to get him off of some watch list. Everything else in this story has not happened, and requires a substantial extrapolation of what could conceivably happen.

But all the things that could conceivably happen could always conceivably happen. My sense is that Apple is trying to placate some groups to avoid providing more intrusive changes, such as being able to decode iMessages and removing end-to-end encryption. While there is a lot of concern about a slippery slope, one could imagine this is actually a path that avoids much steeper, harder to avoid slopes.

Apple's announcement of a limited program of photo matching does not increase or decrease their ability to scan files in the future; broader scanning is not what they are describing now, and it's unclear why they would provide such a capability, since their brand has emphasized privacy so strongly.


Anyone know how well these similarity neural nets deal with something as simple as, say, rotation? It would be trivial to generate 4 hashes for the 90° cardinal rotations (and, I guess 4 more for axis flips), but how about, for example, 11°?

It would really suck if everyone starts implementing this and there's such a simple way to circumvent it.
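A quick way to see why the question matters: simple perceptual hashes are not rotation invariant at all. Below is a toy average-hash (aHash), not Apple's NeuralHash (which, per Apple's documentation, is a learned descriptor trained to survive such transforms); on a plain 8x8 brightness gradient, even a 90° rotation flips half the bits.

```python
def ahash(img):
    # Average hash: one bit per cell, above/below the mean of the whole grid.
    flat = [v for row in img for v in row]
    mean = sum(flat) / len(flat)
    return [1 if v > mean else 0 for v in flat]

def rot90(img):
    # Rotate the grid 90 degrees clockwise.
    return [list(row) for row in zip(*img[::-1])]

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

# An 8x8 brightness gradient as a stand-in for a downscaled photo.
img = [[r * 8 + c for c in range(8)] for r in range(8)]

h0, h90 = ahash(img), ahash(rot90(img))
print(hamming(h0, h90))  # 32 of 64 bits flip on a plain 90-degree rotation
```

Precomputing the four cardinal rotations (and flips) is cheap, as the comment above suggests, but robustness to arbitrary angles like 11° is exactly what pushes these systems toward learned embeddings rather than fixed pixel-statistics hashes.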


The neural net is specifically designed to handle such cases. This is covered in Apple’s documentation about this.


With an M1 Mac, or heck, any device with soldered storage chips, you can't really be sure data is wiped correctly, whether on your own device or a secondhand one, before getting caught by a stupid client-side application checking for arbitrary documents.


The poster was mistaken for someone that was correctly sanctioned (at least in the legal sense). The risk she perceives with Apple is that they will falsely accuse people of crime.

These are completely different situations. Not to mention how infinitely unlikely it is that _several_ of your photos would be similar enough to known abusive material to be flagged.

>There aren’t enough people (nor PR capital) to look through all of your photos.

This is a nonsense sentence. They ARE going through all your photos, both in this effort, comparing hashes, and for classifying your photos with ML, a highly publicised feature. Both happen on device.


> The risk she perceives with Apple is that they will falsely accuse people of crime.

In my country, there was not so long ago a case of a person wrongly accused of murder[0]. He spent 18 years in prison, despite there being no convincing evidence of him committing the crime. The people invested in the case wanted to have a scapegoat ASAP. Cases related to pedophilia are often very emotionally-charged, given the nature of the crimes. It can easily lead to hastily made decisions that can ruin many lives.

> Not to mention how infinitely unlikely it is that _several_ of your photos would be similar enough to known abusive material to be flagged.

Given enough time and enough attempts, there still is a possibility of something like that occurring. And this is with an assumption that the algorithm used will be correctly implemented with no false positives at all – something that I doubt even a manual analysis by humans would be able to achieve given the volume of data to check.

[0]: https://pl.wikipedia.org/wiki/Sprawa_Tomasza_Komendy (in Polish)


My point was that the article has nothing to do with the Apple situation. Being confused for someone rightly accused of a crime is very different from being falsely accused.

And I'm not sure what your main point is. Yes, sometimes people are wrongly accused, even convicted. Is that argument against having a legal system, or what are you trying to say?


I am trying to say that overeagerness in searching for a crime where there is none could cause more damage than it is worth. Those committing the crimes would quickly switch to other methods, while the innocent would be caught by accident.

I am also not saying that such people should be left free and unpunished, but that the proposed method could cause far more harm than good, in my humble opinion of course.

A legal system of some kind is obviously necessary, but the argument of thinking about `insert a group of people you should care about` can easily lead to abuse of the established laws.


> Eventually, the forensics will return nothing. Your lawyer will succeed in getting you to walk free.

If you're lucky. If you're unlucky, they'll insist that you have it stored "encrypted" in a file whose extension their automated scanners don't recognize and keep you locked up until you produce the password. And insisting that there is no password and that the file isn't an encrypted file (it's some system file and you don't even know what it's for) is exactly what a criminal would say.


Regarding the OFAC story on the front, Paypal's implementation of this is stunningly brain-dead. They do naive string matching for any field in the transaction, including some rather short OFAC listed aliases. So you can have a situation where a very short string, like "JONAS" in a non-payee-name field, like a "note" or "memo", causes the transaction to be held. Then they won't release the funds until you produce scanned identifications, detailed write-ups, etc.
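A hedged sketch of the naive screening described above (not PayPal's actual code, and the alias set is illustrative): substring-matching every transaction field against short OFAC aliases will hold perfectly harmless memos.

```python
OFAC_ALIASES = {"JONAS", "KIM", "PARK"}  # illustrative short listed aliases

def naive_screen(txn_fields):
    """Return the aliases that appear as substrings in ANY field."""
    hits = set()
    for field in txn_fields.values():
        text = field.upper()
        for alias in OFAC_ALIASES:
            if alias in text:
                hits.add(alias)
    return hits

# a birthday note trips the filter on the memo field alone
txn = {"payee": "Alice Smith", "memo": "Happy birthday Jonas!"}
assert naive_screen(txn) == {"JONAS"}

# even worse: the alias matches inside an unrelated word
assert naive_screen({"memo": "kimchi recipe"}) == {"KIM"}
```

Screening only the payee name field, and matching on whole tokens rather than raw substrings, would avoid most of these holds.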


There are a lot of common names in OFAC when you get to the alias list. (because, apparently, a common name works well as an alias)

Jesus. Lopez. Mohammed. Mark Taylor. Sally Jones. Park. Wong. Kim.

Chuck Taylor used to be on there, but isn't anymore. (not the shoes)

Since there are a lot of alias matches, you get a lot of people who wind up matching when there's no reasonable way it's an actual match (i.e., they're happily living in the US with an ordinary bank account there, and are not the dead dictator of Liberia, or someone in Colombia or North Korea).


Right. The interesting part is that there doesn't seem to be good guidance from OFAC on exactly how tight/loose the matching needs to be, and on what parts/fields of the transaction need to be scanned or not. So, everyone implements it differently.
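Since OFAC doesn't mandate a matching algorithm, a sketch of one common approach (illustrative, not any particular bank's implementation): edit-distance-based fuzzy matching, where each institution picks its own threshold, which is exactly how the same name gets flagged at one bank and waved through at another.

```python
def levenshtein(a, b):
    # classic dynamic-programming edit distance
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def is_match(name, listed, max_dist=2):
    # max_dist is the knob each institution tunes differently
    return levenshtein(name.upper(), listed.upper()) <= max_dist

assert is_match("Mohammed", "Mohamad")       # a spelling variant gets flagged
assert not is_match("Mohammed", "Margaret")  # an unrelated name passes
```

A looser threshold catches more transliteration variants but holds many more innocent customers; a tighter one does the reverse.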


This article makes a good case for turning off every Apple device you own, and never turning them back on. And it's not because Apple has implemented this algorithm per se; it's because the way they have implemented it punts the problems directly to law enforcement. Maybe they will propose (or have proposed) at least a manual review process after flagging, before involving law enforcement. Otherwise it's Tuttle/Buttle all the way down...


It won't be long until people use deep-learning techniques to modify harmless pictures so they falsely trigger this alarm, and strew them randomly around the web.


Look at the evolution: from mainframes (expensive machines owned and controlled by others) to Personal Computers (cool! computers for the masses! we can do whatever we want with them!), to the smartphone (we can now bring the PC with us!), to a device you can't control, owned by dystopian corporations that now continuously looks into your data to see if you break the law.


What I don't understand is that Apple seems to believe this has no possibility of opening them up to very visible and expensive lawsuits over false positives. I'd assume that if they hit the wrong person it would get ugly for them, especially as civil rights organizations would love to support anyone falsely accused, if only for the publicity.


Hmmm. So after the past 5 years of posing as the most privacy-focused company out there, we get this? Major red flag.

I want my home server to work as my personal cloud: music, photos, movies, file collections. And everything should be end-to-end encrypted so that no one has any way to look inside it. With 5G and broadband this might even work without much optimization. Does anyone know of such a solution?


Nextcloud


I'm curious what sort of hash algorithm they intend to use: plain SHA hashing, or some spatial hashing where you can calculate a meaningful difference between hashes and maybe even infer content? I'd assume the latter might be interesting with regard to building a profile for choosing relevant ads.
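The distinction matters, sketched below with a toy difference hash (Apple's NeuralHash is a learned perceptual hash; this is only an illustration of the idea): a cryptographic hash like SHA-256 changes completely on a one-pixel edit, while a perceptual hash changes by only a few bits, so distances between hashes carry information about the image.

```python
import hashlib

def dhash_bits(pixels):
    # perceptual hash: sign of adjacent-pixel differences per row
    return [1 if a < b else 0
            for row in pixels for a, b in zip(row, row[1:])]

img  = [[10, 20, 30, 40], [15, 25, 35, 45]]
img2 = [[10, 20, 30, 41], [15, 25, 35, 45]]  # one pixel nudged

sha  = hashlib.sha256(str(img).encode()).hexdigest()
sha2 = hashlib.sha256(str(img2).encode()).hexdigest()
assert sha != sha2  # cryptographic: completely different digests

diff = sum(a != b for a, b in zip(dhash_bits(img), dhash_bits(img2)))
assert diff == 0  # perceptual: the tiny edit doesn't flip any gradient bit
```

It's that comparability of perceptual hashes that makes near-duplicate matching possible, and also what makes them leak more about content than a SHA digest would.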


I don't understand the hullabaloo about this. Apple is a private company. They are selling their product. You are free to not buy it. That's what people have been saying anytime the topic of anti trust is brought up in the context of app store. So same deal here. Lol.


Out of interest, what happens if an image gets flagged client-side? With a server-side CSAM scanner, a human can presumably do a sanity check before the authorities are informed. But will Apple have the ability to pull the file from your phone, or does it get reported directly?

Both quite concerning options TBH.


>> Random noise due to lighting conditions that takes an innocent photo (or, more likely, an intimate photo of you or your partner) and makes the algorithm assume that it is similar to a restricted photo.

What algo is Apple using to compare 'similar' hashes from images, does anyone know?



Does Apple openly reveal such details about their software? Could revealing it make it possible to drive somebody to find out how to trick it?


Ironically, this happened with Apple recently: https://news.ycombinator.com/item?id=27878333 misapplication of face recognition.


Remember when Apple touted privacy as their strength? In a closed proprietary system, it's always just a promise. It's the same story now as it was then. Never trust a company promising you privacy: if you can't verify your privacy, consider it absent.


Given the massive progress seen in image generation and synthesis with NNs, I'd imagine it will only be a matter of time until the production of CP doesn't require a child. Get ready, because the situation is about to get a whole lot more complicated.


Don't like this story? Get involved with local and national government to remove qualified immunity for false accusers, and put stiff penalties on companies that erroneously implicate wrong doing and put bonus penalties for the deaths of pets by SWAT.


My next phone will not be an iPhone.


Will it help? Imagine what other companies will do if Apple gets away with this...


I plan to get a high end Android phone and load an open OS onto it. TBH, I thought Apple would be a good steward for iOS. Oh well…


I've recently bought a PinePhone; I'll be glad to get out of the Apple ecosystem.


So this guy is probably having a life altering problem, he might never have money again, and he worries about Apple? Is this second part just to attract attention to his peril? Or am I reading something wrong here.


> Apple has decided to scan images on your phone.

I wonder what defense there would be if Apple decided to scan the images on your desktop/laptop as well. Is there a limit to how intrusive tech companies can get?


Ummmm… I have bad news:

“These features are coming later this year in updates to iOS 15, iPadOS 15, watchOS 8, and macOS Monterey.”

https://www.apple.com/child-safety/ :^)


And then the next Pegasus will be used to plant CSAM instead of spying, just to burn someone publicly. Good luck getting to the forensic level of Amnesty International or Citizen Lab to prove anything.


I guess Framework laptop now needs to start with slim phones as well.


That's pretty much the Fairphone https://www.fairphone.com/en/


I wonder if the post author's problem could be resolved by changing his legal name to some unique name?

And, consequently, a sanctioned person could probably do the same to avoid sanctions.


At least these popups will instill in children the knowledge that technology is not trustworthy and secure by default. Better than making them believe otherwise.


It is time to give up mobile phones. The human race is simply too immature to wield such a technology without mass abuse, corruption, and generation of misery at scale. Seriously, how many spam calls a day do you receive? Do you even bother to answer calls from unknown numbers? How about your texting experience? Do you receive a lot of unsolicited text spam? The telecommunication companies are flawed to the core and are not addressing these core issues at all. Their product is a crime engine masquerading as a communications device.


Custom Android ROMs are bound to make a bigger comeback if Android manufacturers aren't going to take a stand against similar crap


As someone who has used custom ROMs exclusively for 10+ years, there is hardly any comeback possible, since they would have needed to be widely used in the first place. How many phones ran custom ROMs at the peak, 0.1%? Plus, good luck installing a custom ROM without affecting the phone's functionality, or installing one at all: Sony will mess up your camera, Huawei/Honor doesn't allow it at all, Oppo/Realme will disable the fingerprint sensor, and Samsung will trip Knox so some features never work again. So what's left? Pretty much only Pixel and Xiaomi (unless you want a tablet), and the Pixel is hardly distributed anywhere in the world.


Custom ROMs are like Linux. Only advanced users will use them.

Regarding the functionality we lose when using a custom ROM, I think the trick is to use a device from a "modding friendly" brand. Samsung, Sony or Huawei are not modding friendly. Get a OnePlus, Asus, Xiaomi/Poco/Redmi or Google Pixel and most if not all hardware will work fine.

(My experience is based using the OnePlus 3, OnePlus 8 Pro and Redmi Note 9S with LineageOS and a Asus Zenfone 6 with OmniROM. Fingerprint works fine and even more exotic features like the IR blaster on the 9S or the flip camera on the Zenfone 6 works.)

Now, by replacing the original OS you also remove some things that makes the phone better. Eg: the stock camera app, which often gives you better processing (there are modded GCam apps that might help fix this, alternatively you can save DNG/raw files). One useful feature that I had on the Zenfone 6 that custom ROMs can't give me is the ability to stop charging at a certain % (can be done with Magisk, but it's not supported by the ROM itself).

The main problem for me are the apps. Many require Google services and Google's safetynet and both are a problem if you decide to use "pure" Android. Even if one decides to install Google Apps to have the Play Store and their other services, there's still the safetynet issue that gets triggered just for unlocking your device.


Can someone clear this for me, why wouldn't the author sue the bank which locked their cash?


I do not know the details, so I can only speculate. A lawsuit may yet happen if that is the only way to free up the money. However, a lawsuit is expensive and time consuming, so it is probably not the best place to start. Also, I don't believe the bank has any obligation to do business with a customer, so the most you could hope for would be to free up the funds. I believe the author would prefer to continue to be able to bank unimpeded (and switching banks won't help, as the problem will follow until it is cleared up). Finally, the bank has a pretty good defense: they are required by law to impose the sanctions, so prudence requires them to act first and ask questions later.


Because banks are 100% allowed to freeze transactions and freeze accounts, they have extremely broad leeway in who they do business with.

This stuff happens all the time. Banks will even shutdown your account for suspicious activity and refuse to tell you what activity was suspicious.


The laws enforce this behaviour. The terms of service in the contract you signed allows them to. He has no recourse.


In less than 4 years it will be shown this was used for completely different purposes against political enemies, but accepted because they were your political enemies. And then it will be used against you, after which there will be an outrage thread and "how could this have happened???" posts.


Time to (re)watch Brazil!


> It’s a common name, like Robert Brown or Mary Johnson. Shouldn’t it be easy to prove that I’m not person X?

I bet it's probably more like Abdul Mohammad and the 100 variations of spelling that in English

edit: as anyone that gets OFAC list updates daily would notice


Even if it is, would that matter? Should we disconnect millions of people from the global economy because of their given name?

Also, it’s hard not to read your comment and wonder if you hate a large proportion of the human species.

> I bet its probably more like Abdul Mohammad and the 100 variations of spelling that in English


Dude, it's a name, chill out. Names have many variations, some more than others. Don't turn this into something it's not...


at first I was going to ignore this comment and all the leading questions, but then I saw you were OP.

it wasn't a xenophobic or islamophobic comment nor was it condoning the reality or your experience with it, it was an observation from how you wrote your post and what I know about the world


Apologies for divining malintent. No, I don't have a name like that. And it shouldn't matter even if I have. If you read my name, you wouldn't think I'd be on a list. But here I am.

The law of large numbers.


"won't someone please think of the children" ..


What happens when parents take photos of their children?


The algorithm is supposed to check for known hashes. So, a new photo shouldn't trigger it.


Don't reply with your wallet is a small start.


The irony is the Big Tech giants rub shoulders with known pedophiles (Bill Gates + Jeffrey Epstein).

George Orwell was right; he just wasn't pessimistic enough about how bad things would get.


So HN seems to irrationally hate Bitcoin and crypto, but the entire first part of this story could have been solved with that.


Ctrl-f crypto, ah there it is. The solution to their banking madness ignored on HN and grayed out, of course.


On the other hand, using Bitcoin means there is no realistic protection against theft and unavoidable mistakes. Running a business with no way to correct mistakes and no protection against scams seems hard in practice.


Nice boilerplate. Did you read the article?

The author in question got cut off from transferring funds due to whatever reason (a confusion in identity at this point we understand).

Funds could easily have been transferred to him via crypto.


Yes, I read the article. All I said is that while Bitcoin resolves some issues, its use presents many more.


I am detailing a solution to the author's problem. And you...?


The downvotes only prove my point.


This is a serious situation that has been unfolding for years now in the world of fraud in general (from both the retailers and the banks' perspective).

I used to work in a company doing fraud prevention. The first thing that shocked me when I started in the role, was the sheer indifference people working there had for rejected transactions. The way these were supposed to be handled was like this:

1. Transaction is rejected because of a rule/condition/bank decline.
2. Investigate the transaction manually and identify the root cause.
3. Contact the customer and provide information on how they can resolve this (ask for more details if you are unsure, mark them as safe to allow transactions if it was a clear false positive, or ask them to speak to their bank or payment provider if the system wasn't the one that rejected the transaction).

The general view, though, was that if the customer thinks it's important, they should get in touch with the company, and there shouldn't be proactive work done on these transactions since chances are "most of them are dodgy". The problem was that there were thousands of these every single day and a very small team working in that department: fewer than 10 people. I pointed out that even someone fast at identifying these and working through them (once you've developed experience spotting the cause of the decline and doing the administrative work of allowing the user to complete transactions, emailing them, and then keeping an eye on their account for when they try to make a purchase) couldn't get through more than maybe 200-300 transactions in a day. That's 200-300 for someone working ONLY on this task, almost automatically, with a deep understanding of what happened in each of them. Realistically it was more like 50-100 transactions per day, and the rejection list could have up to 3,000-4,000 entries per day.

I argued for improving the rules that were pushing rejects into that queue, for better analysis before a rule was added, and for accepting higher risk by letting lower-value transactions through to reduce friction and make the queue more manageable. I ended up getting some of what I asked for, but rejections only went down to 1,000-2,000 per day, and people just assumed those remaining were fraudulent for sure since now we were "better at spotting criminals"...
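A back-of-the-envelope check of the staffing gap, using the approximate figures above (midpoints assumed), shows why the queue was structurally unworkable:

```python
daily_rejections = 3500   # midpoint of 3,000-4,000 per day
agents = 10               # "fewer than 10 people", rounded up
per_agent_per_day = 75    # midpoint of the realistic 50-100 reviews/day

reviewed = agents * per_agent_per_day          # 750 transactions/day
backlog_per_day = daily_rejections - reviewed  # 2,750 never get a human look
share_unreviewed = backlog_per_day / daily_rejections

assert reviewed == 750
assert backlog_per_day == 2750
assert round(share_unreviewed, 2) == 0.79  # ~4 in 5 rejections unreviewed
```

Even with generous assumptions, roughly 80% of rejected transactions could never receive a manual review, which is the gap "the AI will handle it" was quietly papering over.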

The introduction of another system that employed machine learning made fraud agents even more indifferent towards those queues, since now "you can't fool AI". It was a very sad state of affairs, and since I left I seriously doubt it has improved in any way. I remember the email chains from users who tried to make important purchases, or even just regular ones, and couldn't get in touch with a human. I remember the frustration, the friction, and the blanket statements in team meetings saying that "it's fine as long as we protect the company from losses, so what if a few people have issues"... Those few people could have been them in other scenarios.

I think this will only keep getting worse with time. Each new piece of software that attempts to bring "efficiency" to these kinds of tasks can cause a huge array of problems downstream. You end up with companies that have 5 people working in a department that should have 500, because machine learning will "take care of the issue". I know I haven't even touched on the privacy concerns this raises (and there are PLENTY, covered in great detail in this post and in the one made by the EFF), but we're forgetting here that those affected by these things are other people.

And as long as governments say "You need to regulate this SOMEHOW", companies will come back with "We'll use the power of AI and ML to train NNs to easily identify any problems and thus bring efficiency and safety to all of our customers". We're people, not machines; we have a vast array of personal circumstances and elements that make us unique, and saying that a blanket solution will "fix all problems" is absolutely inhuman and shows a crass misunderstanding of how these tools work and what their unintended consequences are.

TL;DR - using ML/AI/NN will cause a huge array of problems. We need more people working these queues, which grow rapidly, so that innocent users do not get caught in your "high-tech solution". For each criminal the fraud-prevention rules catch, a few hundred innocent people cannot complete their activities or are labelled criminals until proven innocent.


Thank you so much for reading my post and writing this comment.

What you highlighted is the reason why I wrote it. Most of the discussions are of an abstract harm against an immediate one. They other those involved.

My experience isn’t unique. It’s fairly mild.

There are people who have gone to jail because of a mistake by an algorithm. And many who have been arrested. Here’s one with Apple involved, https://www.businessinsider.com/tech/a-teen-is-suing-apple-f...

The harm is real. And I have been on both sides. I am a survivor of childhood sexual abuse. I know this would not have helped in my case. And in the cases of the other survivors I know. It was our environments that led to our experiences. The adults ignored it, were in on it, or in denial.

Apple is trying to solve a complex social problem with an algorithm. One that we are supposed to trust has been made without errors or bugs.

I can’t see the upside here.


Really sorry to hear about your experiences and I must agree that this isn't the way out of this, it just isn't. Situations have unbelievable complexities and these are blanket approaches that will invariably do more harm than good. I think a frustrating argument that I hear from parents and other people on the other side of this is the classical "if you have nothing to hide then X".

They step back on that when the problem is turned around into "why does your house have walls? why do you have locks and doors?". It's not because you have something to hide, it's because of privacy and safety. And we need these spaces to provide us with a modicum of safety and privacy because some discussions can only emerge in those spaces, some ideas need to be discussed in those safe spaces and because I wouldn't want that the world in which modern children grow up in has entirely given up their right to privacy. We always end up using the bad as something that needs to be guarded against, but we seem to fail to see all the positive experiences that have emerged in those safe private spaces as we were growing up...

Hope you are better now and thank you for the write up!


How can a company that openly deals with the Chinese Communist Party claim any kind of moral high ground? It just boggles the mind.


I have been telling Alexa all along that “Biden is not my president”. Do you think I am in trouble? (Please don’t downvote for a joke).


First it will be to catch those pesky pedophiles. And terrorists, they fell out of fashion recently, but let's not forget about them. Then, obviously, we need to cut hate speech, homophobia, transgenderphobia, antifeminism, racism and all those bad, bad things. And fake news and misinformation, all those covid vaccination/lockdown skeptics in the first place.

And at the end we will be allowed to talk only about the stuff that three or four corporations and their political associates are fine with.


terrorists didn't fall out of fashion, they just need a break to recover; hence the retreat from Afghanistan. You can't fight terrorism if you run out of terrorists


Many thanks for posting this. It's a very well-written and timely piece.


This sort of automated crime-detection system will be reserved for peons, i.e. 99% of the population.


I don't understand why people are suddenly losing their minds over this. The technology has been in use by basically every cloud storage platform and social network for ages now.

Where are all the stories of people jailed for false positives? Or even arrested and released? There aren't any, because they don't arrest people on Apple/Google/Facebook's say-so; I assume they just open an investigation. I would also guess that people consuming this content tend not to have just a single image that might flag the system, so it must be extremely unlikely for someone to trigger two or more false positives.


Can’t the blockchain help with misidentifying individuals?

As for the photo scanning isn’t it already happening?


Out of curiosity: How, specifically, would you like to solve this with blockchain tech?


> Can’t the blockchain help with misidentifying individuals?

It can absolutely help with misidentifying individuals forever and ever.


You must be into Ethereum



