On the one hand, singing a cover requires permission or a license. Many studios ignore this, but that technically exposes them to copyright claims.
If you HAVE that license, or the song is old enough to be public domain/Creative Commons, or it's an original work, then I see no issue with AI use. The core model is NOT the training data: you can't teach an AI to sing with just 2 minutes of audio. It takes hours and hours of "core" training data, which you then "culture" with 2-3 minutes of training data for the specific voice. The proof lies in "joke" AI voices: you can take training data of purely non-vocal sounds, like R2D2 or meme sound effects, and get a human voice out of it, which would be impossible if the model weren't built on a HEALTHY dataset of human voices. If it's fine for a human to build up years of "core" training, spend a fraction of that time learning to imitate someone, and then sing in that voice, the AI case is A-OK too.
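That "core vs. culture" split is basically transfer learning, and you can see the effect with toy numbers. This is a minimal sketch in plain numpy, not a real voice model: the data, slopes, and the `gd_fit` helper are all made up for illustration. The point is that a model pretrained on a big generic dataset only needs a few fine-tuning steps on a tiny target sample, while the same tiny budget from scratch lands nowhere close.

```python
import numpy as np

rng = np.random.default_rng(42)

def gd_fit(x, y, w0, steps, lr=0.05):
    """Full-batch gradient descent on 1-D least squares: min mean (w*x - y)^2."""
    w = w0
    for _ in range(steps):
        grad = 2.0 * np.mean((w * x - y) * x)
        w -= lr * grad
    return w

# "Core" training: a big generic dataset (stand-in for hours of audio).
x_core = rng.normal(size=2000)
y_core = 2.0 * x_core + rng.normal(scale=0.1, size=2000)
w_base = gd_fit(x_core, y_core, w0=0.0, steps=300)   # converges near 2.0

# "Culturing": a tiny target dataset (stand-in for 2-3 minutes of one voice).
# Its true slope, 2.3, is close to but not the same as the base task.
x_tune = rng.normal(size=40)
y_tune = 2.3 * x_tune + rng.normal(scale=0.1, size=40)

w_warm = gd_fit(x_tune, y_tune, w0=w_base, steps=5)  # fine-tune from the base
w_cold = gd_fit(x_tune, y_tune, w0=0.0,    steps=5)  # same tiny budget, no base

print(f"warm-start error: {abs(w_warm - 2.3):.3f}, cold-start error: {abs(w_cold - 2.3):.3f}")
```

With the same five steps of "culturing", the warm start ends up far closer to the target than the cold start, because almost all of the work was already done by the "core" phase.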
And I, for one, welcome our new robot overlords. They should be held to the same standard as humans.
I do understand that getting the AI to work for you can take some finagling. You can have an art AI put out freaky junk (say, by injecting a training set with rare Pepes), or have it create masterpieces in your own style. Error probability in a set does shrink and smooth out as the set grows, but yes, accuracy ultimately comes from healthier data. Lobotomizing the model with biases (like the woke types do) only cripples the output quality.
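On the "error smooths out as the set grows" point: that's just the law of large numbers, and it shows up even with toy numbers (plain numpy, nothing art- or voice-specific; `rms_error` and the sample sizes are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(7)

def rms_error(n, reps=100):
    """RMS error of the mean of n noisy draws, averaged over reps trials."""
    means = rng.random((reps, n)).mean(axis=1)  # true mean of each draw is 0.5
    return float(np.sqrt(np.mean((means - 0.5) ** 2)))

errs = [rms_error(n) for n in (10, 1_000, 100_000)]
print(errs)  # each 100x jump in data cuts the error by roughly 10x
```

The error shrinks roughly like 1/sqrt(n), which is why bigger sets smooth out the freak results, while a skewed set just converges confidently on the skew.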
And I, for one, welcome our new robot overlords. They should be held to the same standard as humans.
I'm fine with that so long as the overlords leave me alone.