Reverse engineering project: https://archive.is/YQWOM
Hash collision: https://archive.is/OhQUd
What this means: Apple may change NeuralHash to mitigate this before actually deploying it (assuming they aren't lying and quietly using it already). It also doesn't mean anyone knows which hashes would cause an image to be flagged; that's probably not possible unless someone leaks the list.
But it may be possible for malicious actors to guess which images would be included in the database, hash them, and find collisions that they could then get people to download.
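For anyone who hasn't dug into the links, here's a toy sketch of why that attack shape is plausible. This is NOT NeuralHash (which is a neural network) and every name and number below is made up; it's just a crude block-average hash, but it shows the core problem with perceptual hashes: nearly invisible pixel changes can steer one image onto another image's hash. The public collision against NeuralHash does the analogous thing with gradient descent on the model instead of per-block arithmetic.

```python
# Toy illustration only: a simplified 64-bit "block average" perceptual hash,
# not Apple's NeuralHash. Shows how an attacker could nudge an innocuous image
# onto the hash of an image they guess is in the database.
import numpy as np

def toy_phash(img: np.ndarray) -> int:
    """Toy 64-bit perceptual hash of a 64x64 grayscale image:
    average each 8x8 block and threshold at 128."""
    blocks = img.reshape(8, 8, 8, 8).mean(axis=(1, 3))   # 8x8 grid of block means
    bits = (blocks > 128).astype(np.uint8).ravel()
    return int("".join(map(str, bits)), 2)

def forge_collision(benign: np.ndarray, target_hash: int) -> np.ndarray:
    """Brighten/darken each 8x8 block of `benign` just enough to flip its hash
    bit toward `target_hash`. (Values are left as floats and may drift slightly
    outside 0-255; a real attack would also constrain that.)"""
    img = benign.astype(np.float64).copy()
    target_bits = [(target_hash >> (63 - k)) & 1 for k in range(64)]
    for k, want in enumerate(target_bits):
        by, bx = divmod(k, 8)
        block = img[by*8:(by+1)*8, bx*8:(bx+1)*8]        # view, modified in place
        if bool(block.mean() > 128) != bool(want):
            # shift the whole block barely past the threshold
            block += (129 - block.mean()) if want else (127 - block.mean())
    return img

# Hypothetical demo: `guessed` stands in for an image an attacker suspects is
# in the database; `innocent` is a harmless picture they want to get flagged.
rng = np.random.default_rng(0)
guessed  = rng.integers(0, 256, (64, 64)).astype(np.float64)
innocent = rng.integers(0, 256, (64, 64)).astype(np.float64)
forged = forge_collision(innocent, toy_phash(guessed))
assert toy_phash(forged) == toy_phash(guessed)   # same hash...
print(np.abs(forged - innocent).max())           # ...small per-pixel changes
```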
Update
Apple says the collision is "not a concern". By which I assume they mean "we don't care."
The same bad actors who traffic in child porn, groom children, ruin lives, and worm their way into communities where they aren't welcome to subvert them are already on the case weaponizing this.
Yep, the people who actually have child porn have both motive and means to exploit it.
They probably don't. To be fair, none of the links explicitly give collision probabilities; they only note that perceptual hashes are more collision-prone than cryptographic ones.
Is it statistically significant? Don't know.
Would Apple have reason to give a damn, other than reducing false positives (i.e. the money spent employing humans for the final manual verification, as per their PR)? Probably not.
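Rough numbers, since none of the links give any: assume the hash behaves like a uniformly random n-bit function on unrelated images (it doesn't, quite, which is the whole "perceptual" problem) and take n = 96, which is what the reverse-engineered model reportedly outputs. The library and database sizes below are made up.

```python
# Back-of-envelope only. Assumes the hash acts like a random n-bit function on
# *unrelated* images. Perceptual hashes correlate on similar-looking photos, so
# the real accidental rate is higher, and this says nothing about adversarial
# collisions like the one in the OP.
n_bits  = 96                  # assumed hash length (per the reverse engineering)
space   = 2 ** n_bits
photos  = 10_000              # photos in one user's library (assumption)
db_size = 1_000_000           # hashes in the database (assumption)

expected_matches = photos * db_size / space   # expected accidental matches per user
print(f"per user:        {expected_matches:.3e}")         # ~1.3e-19
print(f"across 1e9 users: {expected_matches * 1e9:.3e}")  # still ~1.3e-10
```

So purely accidental collisions are a rounding error under that assumption; the concern is the adversarial case.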
Are Apple and the government going to abuse this? Their track records speak for themselves. We don't need proofs of concept to confirm that privacy will keep being degraded.
Edit: This is obviously a great thread and a great find, btw. I forgot to note that, but it's especially important given the recent crap littering the forum.
It's checked client-side, but the answer isn't "True" or "False"; it's a key that can either be used to decrypt the content or does nothing.
Likewise, the hashes are encrypted and theoretically not possible to discover.
Apple could put anything they want in the database. All we have is their word that they won't.
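To make that "key that decrypts or does nothing" idea concrete, here's a toy sketch. This is NOT Apple's actual protocol (which uses private set intersection, blinded hashes, and threshold secret sharing, so a single match decrypts nothing on its own); it only shows how a server can be unable to open a voucher unless the image's hash is in its list. All names and parameters are made up.

```python
# Toy sketch of the voucher idea, not Apple's real PSI/threshold construction.
# The "key" is derived from the image hash, so the server can only open the
# voucher when that hash appears in its database.
import hashlib, hmac, os

def derive_key(image_hash: bytes) -> bytes:
    return hashlib.sha256(b"voucher-key|" + image_hash).digest()

def make_voucher(image_hash: bytes, payload: bytes) -> dict:
    """Client side: encrypt the payload under a key derived from the image's
    hash. The client never learns whether the hash is in the database."""
    key = derive_key(image_hash)
    stream = hashlib.sha256(key + b"|stream").digest()
    ct = bytes(p ^ s for p, s in zip(payload, stream))      # toy stream cipher
    tag = hmac.new(key, ct, hashlib.sha256).digest()
    return {"ciphertext": ct, "tag": tag}

def try_open(voucher: dict, db_hashes: list[bytes]) -> bytes | None:
    """Server side: re-derive a key from every database hash and see whether
    any of them authenticates the voucher. A non-matching image yields nothing."""
    for h in db_hashes:
        key = derive_key(h)
        expected = hmac.new(key, voucher["ciphertext"], hashlib.sha256).digest()
        if hmac.compare_digest(expected, voucher["tag"]):
            stream = hashlib.sha256(key + b"|stream").digest()
            return bytes(c ^ s for c, s in zip(voucher["ciphertext"], stream))
    return None   # the key "does nothing": server learns nothing about the photo

# Hypothetical usage: 96-bit hashes, stand-in payload
database = [os.urandom(12), os.urandom(12)]
print(try_open(make_voucher(database[0], b"thumbnail bytes"), database))   # decrypts
print(try_open(make_voucher(os.urandom(12), b"thumbnail bytes"), database))  # None
```

And that's exactly where the trust problem bites: whoever controls `database` decides what the key opens.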