Overreactions from these asshole voice actors have only gotten funnier to watch blow up.
But there is a serious rabbit hole when it comes to voice AI. "There will be consequences," sure, but I don't see much discussion of what those consequences actually are. One popular thing on YouTube right now seems to be having specific VAs, or singers living and dead, sing songs they never sang. Seems harmless, but since these use potentially copyrighted recordings, I have no clue whether it's fair use or not; maybe they qualify as "mashups". So far it seems like pretty much everyone using voice AI admits it upfront.
As for future concerns: if the AI gets good enough, how does this play into falsifying evidence, faking phone call recordings, etc.? I know these AIs leave some sort of fingerprint that could identify their use, but I still can't cheer on this sort of AI improving.
On the one hand, singing a cover requires permission or a license. Many studios ignore this, but that technically puts their copyright at risk.
If you HAVE that license, or if the song is old enough to be public domain / Creative Commons, or if it's an original work, then I see no issue with using AI. The core programming is NOT the training data: you can't teach an AI to sing with just two minutes of audio. It takes hours and hours of "core" training data, which you then "culture" with two or three minutes of training data for the specific voice. The proof lies in the "joke" AI voices: you can take training data of purely non-vocal sounds, like R2D2 or meme sound effects, and make a human voice out of it, which would be impossible if there weren't a HEALTHY dataset of human voices underneath. A human works the same way: lots of "core" training, then a fraction of that time spent learning to imitate someone and singing in that voice, and that's A-OK.
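Just to spell out that "core" vs. "culture" split, here's a rough toy sketch in Python/PyTorch of the usual shape of these setups: a big pretrained base, then a short fine-tune on a couple minutes' worth of the target voice. Everything in it is a dummy stand-in (random tensors, a toy network), not any specific tool's actual code.

    import torch
    from torch import nn
    from torch.utils.data import DataLoader, TensorDataset

    # Stand-in for the "core" model: pretend this was already trained on
    # hundreds of hours of generic human speech and singing.
    base_model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 128))

    # Stand-in for the "culture" data: a tiny set of features from the 2-3 minutes
    # of the one specific voice you want it to imitate (random tensors here).
    few_minutes_of_voice = TensorDataset(torch.randn(64, 128), torch.randn(64, 128))
    loader = DataLoader(few_minutes_of_voice, batch_size=8, shuffle=True)

    # The "culturing" step: a short, gentle fine-tune that nudges the big model
    # toward one voice.
    optimizer = torch.optim.Adam(base_model.parameters(), lr=1e-5)
    for epoch in range(3):
        for features, target in loader:
            loss = nn.functional.l1_loss(base_model(features), target)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

    # The ability to produce a voice at all lives in base_model's original training;
    # the few minutes of target audio only steer it toward one timbre.

Point being, the heavy lifting happened long before the specific voice ever entered the picture.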
And I, for one, welcome our new robot overlords. They should be held to the same standard as humans.
I do understand that getting the AI to work for you can take some finagling. You can have an art AI put out freaky junk (say, by injecting a training set with rare Pepes), or have it create masterpieces in your own style. Errors in a set do reduce and smooth out as the set grows larger, but yeah, accuracy comes from healthier data. Lobotomizing it with biases (like the woke types do) only cripples the output quality.
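If you want a quick feel for that "errors smooth out as the set grows" point, here's a throwaway Python check (made-up numbers, nothing to do with any actual model): averaging more noisy samples lands you closer to the true value.

    import random

    random.seed(0)

    # Pretend every data point is "truth + noise". With more points,
    # the noise cancels out and the estimate settles near the truth.
    def noisy_average(n, truth=10.0, noise=2.0):
        samples = [truth + random.gauss(0, noise) for _ in range(n)]
        return sum(samples) / n

    for n in (10, 100, 10_000):
        print(n, round(noisy_average(n), 3))

    # The printed estimates drift toward 10.0 as n grows;
    # the error shrinks roughly like 1 / sqrt(n).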
I'm fine with that so long as the overlords leave me alone.
It's most likely NOT fair use, but neither is posting your own karaoke cover of a song. Thankfully the studios don't often go after people for that, but that might change when enough of these mashups become super popular (and the studios realize it's become acceptable for them to make money from it).
I seriously hope any laws that apply to "AI" are only ever limited to stopping deceptive and fraudulent practices, of the kind you mention.
Yeah, fraud is the biggest concern to me, given how much "deepfakes" have evolved over recent years.
Still, my own personal concern with AI in general is whether creative people who work with their hands will start dropping their chisels and pens. I believe the future landscape will be the unskilled using AI to make "new content" based off what already exists, creators using it to fill in for what they don't have the resources for, and the skilled using it to speed up their processes. I'm unsure whether any digital creatives will keep making things fully by hand for money, but hey, hobbyists still make oil paintings to this day.