It will be admitted, or dismissed as impossible to verify, depending on the regime's needs.
It will make it easier for the regime to make up "proof."
I'm looking forward to the first famous court defense against audio or video evidence where the defendant simply says "Nope, not me." "It's fake." "It's a total fabrication!" You're going to need forensic experts specifically trained in figuring this out, probably even using AI itself to determine whether something was AI-generated.
The real danger in this stuff is not how it would be abused in a court of law, but in the court of public opinion. A lie can travel halfway around the world while the truth is still putting on its shoes.
A LOT of what's admitted in court could be dismissed easily, given how unreliable witness testimony can be and how VERY easily juries are convinced if you 'tell a story'.
AI being the latest addition may make a lot of recordings easier to deny ('oh, that's AI, not me admitting to crimes'); whether it's allowed or dismissed will come down to the judge's biases.
I don't think it's too dissimilar to past risks and concerns about CGI editing, at least in a court of law.
For the average viewer, sure, it'll be tough to tell the difference, but it's going to be pretty damn difficult to conceal from tools specifically designed to analyze video/audio footage for signs of tampering or artificial production.
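To give a flavor of what those analysis tools look for, here's a toy sketch (nothing like a real forensic suite): a lot of TTS models synthesize at 16-24 kHz, so a file claiming to be a 44.1 kHz recording with almost no energy above ~11 kHz deserves a second look. The file name and cutoff here are made up for illustration.

```python
# Toy heuristic, not a real forensic check: flag "44.1 kHz" audio whose
# spectrum is suspiciously empty above ~11 kHz, which can indicate
# band-limited (e.g., upsampled synthetic) audio.
import numpy as np
from scipy.io import wavfile
from scipy.signal import welch

def high_band_energy_ratio(path, cutoff_hz=11_000):
    rate, samples = wavfile.read(path)      # PCM WAV, mono or stereo
    if samples.ndim > 1:
        samples = samples.mean(axis=1)      # mix down to mono
    freqs, power = welch(samples.astype(np.float64), fs=rate, nperseg=4096)
    return power[freqs >= cutoff_hz].sum() / power.sum()

ratio = high_band_energy_ratio("exhibit_a.wav")  # hypothetical file
print(f"energy above 11 kHz: {ratio:.4%}")
if ratio < 0.001:
    print("suspicious: almost no high-frequency content for the claimed sample rate")
```

That one check alone is trivially defeated by adding noise, but it's the flavor of signal-level tests such tools stack dozens of.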
No, because that's not how courts work.
While there are always dirty prosecutors like FlufferBoy2000 who try to pull in evidence that has no one to speak for it, you can't actually do that. If you remember, that was when ADA Kraus in the Rittenhouse case was given drone footage by an anonymous source at 3 AM a few days before trial. The footage supposedly came from a drone company that seems to have been a shell company that existed for only a few weeks before disappearing entirely; Fox News was given completely different footage; and the prosecutors lied about the footage they shared with the defense.
All evidence actually needs to have a person attached to it. A document isn't a document, it's a document that someone had to pick up, that someone made, that someone interpreted. Everyone involved with that evidence can be dragged into court.
Here's Judge "Badass" Borowski explaining to a race-grifter that you can't just randomly take screenshots, say they're publicly available, and then use them as evidence. You need a person to authenticate them and lay a foundation before the court can proceed with them.
So okay, you've got a video you want to sneak in that's actually AI-generated. Well, how did you get it? The court needs to know that. It needs to be submitted long before the trial begins. You have to share it with the defense. Who can vouch for the video's authenticity? They can be subject to cross-examination. Who received the video evidence? They can be subject to cross-examination. Can you have an expert witness testify that this is genuine footage? They can be subject to cross, and the other side can get their own experts involved. That also means they need access to your copy of the footage so it can be examined.
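On that last point, "their copy matches your copy" is usually checked mechanically rather than by eyeball. A minimal sketch using SHA-256 (file names are made up): both sides hash their file and compare digests, and any re-encode, trim, or splice shows up as a mismatch.

```python
# Minimal integrity check: hash both parties' copies of the footage.
# Identical digests mean bit-for-bit identical files; any edit,
# re-encode, or truncation changes the hash.
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):  # read in 1 MiB chunks
            digest.update(chunk)
    return digest.hexdigest()

prosecution = sha256_of("footage_prosecution.mp4")  # hypothetical files
defense = sha256_of("footage_defense.mp4")
print("copies match" if prosecution == defense else "copies differ: investigate")
```

That doesn't prove the footage is genuine, of course; it only proves everyone is examining the same bits.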
To be honest, the worst thing about our court system is that it could totally work if: a) judges actually followed the law, b) prosecutors didn't have so much undue power.
I'll say this: recording and editing audio off my phone is a lot harder than it appears. If I could just copy my voice onto a written sentence, I would happily do it.
We just need that one Romulan senator to determine whether it's real or not. "It's a faaaaaake!"
But in all seriousness, it's likely there are ways for an AI to determine whether another AI generated that voice file or whether it's genuine.
If not, it'll be like ArtemisFoul said, and the regime will deem it true or untrue depending on what their needs are.
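For the curious, the supervised version of that detection idea looks roughly like this. It's a back-of-the-envelope sketch: real detectors use deep networks over rich features (vocoder artifacts, phase coherence, and so on), and here random vectors stand in for features extracted from labeled genuine/synthetic clips.

```python
# Sketch of "AI detecting AI": train a classifier on labeled real vs.
# synthetic clips. Random vectors stand in for real extracted features;
# the point is the shape of the pipeline, not the numbers.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, size=(200, 8))   # stand-in features, genuine clips
fake = rng.normal(0.5, 1.0, size=(200, 8))   # stand-in features, synthetic clips

X = np.vstack([real, fake])
y = np.array([0] * len(real) + [1] * len(fake))  # 0 = genuine, 1 = synthetic

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```

The catch is the arms race: generators end up trained against the very artifacts detectors key on.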
The defense against this, for a professor, is to just record everything you do in class and with students. Then, when some clown goes to HR with an AI version of you saying stuff, you pull up the timestamped recording and show what really happened.
The only problem is that this lifelog recording can be subpoenaed and used against you, but that's an easy legal fix.
Just in time for a woman to be caught saying the genocidal agenda?
"It was a deepfake!"
Where would people get that idea?
I don't get what you're implying there.
Yes, I said that would be a good idea, and I stand by it, especially if people would believe she actually said it.
I doubt we'll ever get lucky enough to expose their inner beliefs unless someone defects from their side.