From what I learned from https://www.thispersondoesnotexist.com/: look at eye position. Place your cursor on an eye and refresh (hit F5). Head direction, age, etc. don't matter. You'll always be on an eye.
Edit (I'm bored): Apart from that... you can't do shit. Yes, some images have artifacts, especially when jewelry or other people are present.
But eventually (took me like 30-40 refreshes; if I used the code directly I could probably just set the parameters) it ends up like this. Sure, if we already know it's an AI-generated image we can talk about her hair on the top left looking slightly strange, but only because we have this resolution and not a Twitter avatar.
And algorithms that can detect AI-generated images probably exist; the whole idea behind generating images like this is a GAN, basically two algorithms fighting each other: one tries to create a "real"-looking image and the other tries to detect the "fake" one.
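For anyone curious what "two algorithms fighting" looks like in code, here's a rough toy sketch (PyTorch, made-up layer sizes and fake 2-D "data", nothing to do with the actual StyleGAN behind the site): a generator learns to produce samples while a discriminator learns to flag them as fake, and each one's loss pushes against the other's.

```python
# Toy GAN sketch: generator vs. discriminator on made-up 2-D data.
# Hyperparameters and network sizes are illustrative only.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 2

# Generator: maps random noise to a "fake" sample.
G = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, data_dim))
# Discriminator: scores how "real" a sample looks (1 = real, 0 = fake).
D = nn.Sequential(nn.Linear(data_dim, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid())

opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

for step in range(1000):
    real = torch.randn(32, data_dim) * 0.5 + 2.0      # stand-in for "real" data
    fake = G(torch.randn(32, latent_dim))             # generator's attempt

    # Discriminator tries to tell real from fake.
    d_loss = bce(D(real), torch.ones(32, 1)) + bce(D(fake.detach()), torch.zeros(32, 1))
    opt_D.zero_grad(); d_loss.backward(); opt_D.step()

    # Generator tries to fool the discriminator into scoring fakes as real.
    g_loss = bce(D(fake), torch.ones(32, 1))
    opt_G.zero_grad(); g_loss.backward(); opt_G.step()
```

The point of the adversarial setup is exactly the tension described above: as the discriminator gets better at spotting fakes, the generator is forced to produce more convincing ones, which is why the end results are so hard for humans to catch.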
Lesson learned: you cannot (as a human) reliably detect AI-generated images of people.
Also, that site is old by now. You could easily train newer models that don't have the restrictions you mention (like eye position). And I'm sure with some effort you could create multiple images of the same "person" in different environments, etc.