I'm a bit nervous to consider the age of AI imagery/video because we could feasibly see more images like this and there will still be a chance that they're real.
Just today someone pulled out their phone in the breakroom and said, "hey, check out this cool photo of the eclipse," and the moment I saw it I said, "it's a cool photo, sure, but that's AI generated." She pouted, and I had to add, "like, really obviously AI generated." She had the audacity to suggest maybe it was shot with a different lens, like an infrared camera, when I brought up that the day was completely cloudy up there; the eclipse straight-up wasn't visible in that location.
It was just a "photo" of Niagara Falls. Landscape, water on rocks. But while she was apparently fooled, it took me less than a second to see all sorts of errors in it (sun too big, Cascade Falls too close to Horseshoe, three main waterfalls instead of just the two, sun at the wrong angle in the sky, cloud cover "framing" the highlight points, there were no tourists, there were no BUILDINGS, etc. etc.).
So while it's chancy to be living in the era of AI generation, to quote the Old Texts, "I've seen a lot of shoops in my time." You should familiarize yourself with AI image generation, use it a reasonable amount if possible, and see the things it tends to do. And always take a second glance at any images that seem suspect, because they likely are.
They do that already. There are phones where, if you take a photo of the moon, the AI generates details for the moon. Mostly Chinese phones/apps.
What if technology improves so much in 10 years that it will become impossible to tell?
10 years? Try maybe 2-3. We're very, very close, and with some images it's already hard to tell. Small imperfections like fingers and other inconsistencies are what's holding it back, and I'd expect those to be ironed out sooner rather than later. On the one hand it's fascinating; on the other, horrifying, because we will not be able to tell the fakes apart.
I'm imagining the potential improvement, since we've only just entered AI's infancy. With a "simpler" image prompt, such as a famous person eating shit, it could be more likely to produce a "perfect" image. Of course, it doesn't need to be a picture, either. Deepfake video and voice generation also apply.
The half-joke is that there are always degenerate politicians, such that any gross claim has a chance of being true.
I'm surprised he/other pols don't just say "it's fake" and ignore the story. Maybe in this case there was another witness.
The same reason that guys who get grilled by the cops with no proof still confess.
When people know they're busted, regardless of whether it can be proved, most will panic and immediately dig themselves into a hole before they can pull it together and realize they could lie their way out of it.