
My working assumption is that AI voice synthesis will develop (or, behind closed doors, already has) much the same way AI image synthesis did.

Image models are trained on the work of a huge number of artists. Asked to, they can often recreate one particular artist's style recognizably, but they can also mash a bunch of different artists' work together into a generic gestalt image that is still aesthetically pleasing enough for an entertainment project. In an abstract sense it is still built off their work, but given the black-box nature of large AI models, all it takes is a couple of third-party contractors and a few conveniently unrecorded steps in the process to make it practically impossible to prove that one specific artist's work was needed to build a model that does a good voice.

With enough third-party voice data and some legally (and arguably morally) acceptable training methods, e.g. asking a panel of human proof-listeners "which of these clips sounds most like Homer Simpson?", it's even plausible to effectively impersonate someone's voice without ever training directly on audio clips of them (a toy sketch of that idea is below). We already accept that openly impersonating someone's voice is not stealing: whoever does the physical speaking owns that audio recording, and it's only a problem if you try to pass it off as actually spoken by the impersonatee.
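
To make the mechanism concrete, here's a hypothetical toy sketch (plain Python, every name and number in it invented for illustration): it "tunes" a single voice parameter using only A/B answers from stand-in listeners, never the target's own recordings. Real voice models are vastly more complex; the point is only that the optimization signal can come from human similarity judgments rather than the target's audio.

```python
import random

# Hypothetical toy sketch: "tune" a voice purely from listener preferences.
# Nothing here is a real voice model; a single float stands in for all the
# parameters a synthesizer would actually have.

TARGET_TIMBRE = 0.73  # what listeners have in their heads; never given to the optimizer


def synthesize(params: float) -> float:
    """Stand-in for rendering an audio clip from model parameters."""
    return params  # in this toy, the "clip" is just the parameter value


def listeners_prefer(clip_a: float, clip_b: float) -> bool:
    """Stand-in for a panel asked: 'which clip sounds more like the target?'"""
    return abs(clip_a - TARGET_TIMBRE) < abs(clip_b - TARGET_TIMBRE)


best = random.random()  # start from a generic, unrelated voice
for _ in range(2000):
    candidate = min(1.0, max(0.0, best + random.gauss(0.0, 0.05)))
    if listeners_prefer(synthesize(candidate), synthesize(best)):
        best = candidate

print(f"converged parameter: {best:.3f} (target was {TARGET_TIMBRE})")
```

The optimizer never sees the target; it only drifts toward whatever the listeners keep voting for, which is exactly the "no direct training on their clips" loophole described above.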

So realistically, the only thing keeping voice acting intact as a profession would be an honor system among producers around not abusing that plausible deniability, and I cannot think of a more dishonorable bunch of wretches than entertainment media producers. I like to consider myself a realist, so I also consider voice acting a career with a looming expiry date.

As you say, you don't need union action to make people read contracts. That's why I view the union action not as a measure to encourage informed consent, but as a simple coercive measure to force union terms on union members, terms that put impractical minimum charges or procedural requirements on letting your voice work feed AI development. All of it is an effort to strangle the "with enough third-party data" scenario above that lets voice AI become a generalized tool able to replace even those unwilling to directly participate in its development.

I'm not even stridently 100% anti-union, but union intervention in this instance is largely indistinguishable from government intervention; it's just a smaller, more focused government still taking choices out of individuals' hands. It stops the individuals willing to accept even lucrative short-term contracts that will drastically cut future demand for them, for the sake of those who would refuse those contracts because they are unwilling to change.
