I'm a bit nervous to consider the age of AI imagery/video because we could feasibly see more images like this and there will still be a chance that they're real.
Just today someone pulled out their phone in the breakroom and said "hey, check out this cool photo of the eclipse", and the moment I saw it I said "it's a cool photo, sure, but that's AI generated." She pouted, and I had to add "like, really obviously AI generated". She had the audacity to suggest maybe it was shot with a different lens, like an infrared camera, when I brought up that the day was completely cloudy up there; the eclipse straight-up wasn't visible in that location.
It was just a "photo" of Niagara Falls. Landscape, water on rocks. But while she was apparently fooled, it took me less than a second to spot all sorts of errors in it (sun too big, Cascade Falls too close to Horseshoe, three main waterfalls instead of just the two, sun at the wrong angle in the sky, cloud cover conveniently "framing" the highlights, there were no tourists, there were no BUILDINGS, etc. etc.).
So while it's chancy to be in the era of AI generation, to quote the Old Texts, "I've seen a lot of shoops in my time". You should familiarize yourself with AI image generation, use it a reasonable amount if possible, see the things it tends to do. And always take a second glance at any images that seem suspect, because they likely are.
What if technology improves so much in 10 years that it will become impossible to tell?
10 years? Try maybe 2-3. We're very, very close, and with some of them it's already hard to tell. Small imperfections like fingers and other inconsistencies are what's holding it back, and I'd expect those to be ironed out sooner rather than later. On the one hand it's fascinating, on the other it's horrifying, because we won't be able to tell the fakes apart from the real thing.