Reverse engineering project: https://archive.is/YQWOM
Hash collision: https://archive.is/OhQUd
What this means: Apple may change NeuralHash to mitigate this before actually putting it into use (provided they aren't lying and aren't already using it). It also doesn't mean anyone outside Apple knows which hashes would cause an image to be flagged (probably not possible unless someone leaks the list).
But it may be possible for malicious actors to guess which images would be included in the database, hash them, and find collisions that they could then get people to download.
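For intuition, here's a minimal, self-contained sketch of why collisions against perceptual hashes are cheap. NeuralHash itself is a neural-network hash, and the published collision was found by optimizing an image against the extracted model; the toy average-hash below is just a stand-in to show the principle, and every name and parameter in it is mine, not Apple's.

```python
import numpy as np

def average_hash(img, hash_size=8):
    # Toy perceptual hash: block-average down to hash_size x hash_size,
    # then threshold each cell against the overall mean. Real schemes
    # (pHash, NeuralHash) are fancier, but share the key property that
    # huge numbers of distinct images map to the same short bit string.
    bh, bw = img.shape[0] // hash_size, img.shape[1] // hash_size
    blocks = img[:bh * hash_size, :bw * hash_size].reshape(
        hash_size, bh, hash_size, bw).mean(axis=(1, 3))
    return (blocks > blocks.mean()).flatten()

def forge_collision(target_bits, hash_size=8, scale=32, rng=None):
    # Build an unrelated image whose hash equals target_bits by painting
    # each block bright or dark to match the target bit, then burying
    # arbitrary "content" in noise the thresholding step ignores.
    # Against NeuralHash the published attack used gradient descent on
    # the extracted model instead of this direct construction.
    rng = rng or np.random.default_rng()
    bits = target_bits.reshape(hash_size, hash_size)
    img = np.where(bits, 200.0, 50.0)            # per-block brightness
    img = np.kron(img, np.ones((scale, scale)))  # upscale to full size
    img += rng.uniform(-20, 20, img.shape)       # arbitrary detail
    return img

rng = np.random.default_rng(0)
original = rng.uniform(0, 255, size=(256, 256))  # stand-in "database" image
target = average_hash(original)
forged = forge_collision(target, rng=rng)
print("hashes match:", np.array_equal(average_hash(forged), target))  # True
```

The point of the toy: the hash throws away almost all of the image, so an attacker who knows (or can guess) a target hash has enormous freedom in what the colliding image actually looks like.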
Update
Apple says the collision is "not a concern". By which I assume they mean "we don't care."
They probably don't. To be fair, none of the links give explicit collision probabilities; they only note that perceptual hashes are more prone to collisions than cryptographic ones.
Is it statistically significant? Don't know.
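For scale, a hedged back-of-envelope (the numbers here are my assumptions, not from the links): the reverse-engineering work found NeuralHash emits 96-bit hashes, so if its outputs behaved like uniform random bit strings, the birthday bound would make accidental collisions vanishingly rare. Perceptual hashes deliberately cluster similar images, so the real rate is higher; how much higher is exactly the number nobody has published.

```python
from math import exp

bits = 96   # NeuralHash output length, per the reverse-engineering repo
n = 10**9   # hypothetical number of photos hashed (my assumption)

# Birthday bound under a uniformity assumption that perceptual hashes
# do NOT satisfy; real-world collision rates would be higher than this.
p_any_collision = 1 - exp(-n * (n - 1) / (2 * 2**bits))
print(f"P(any accidental collision) ~ {p_any_collision:.1e}")  # ~6.3e-12
```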
Would Apple have reason to give a damn, other than reducing false positives (i.e. money spent employing humans for the dirty work of final manual verification, as per their PR)? Probably not.
Are Apple and the government going to abuse this? Their track record speaks for itself. We don't need proofs of concept to confirm the continued degradation of privacy.
Edit: This is obviously a great thread and a great find, btw. I forgot to note that, but it's especially important given the recent crap littering the forum.