My use of the word "profound" refers to the tools required to make the internal code of a Neural Network readable to humans.
A weighted neuron probability matrix is not a copy of an image, nor does it contain an image in any meaningful sense. Yes, it can produce images in a particular style of art. That isn't the same thing.
We're talking past each other. I'm speaking about an aesthetic sense of "profound" and I made that pretty clear as well. I don't care about the legal ramifications either.
When fed certain prompts, some AI art models return the same pose at the same angle and remain highly resistant to further parameters. For example: person sleeping. The AI always returns the character with their head on their arm, seen in profile. No matter what additional tags or weights are added, it will return that basic structure for "sleeping." Whether or not the neural network contains an exact copy of some art (it doesn't, apparently) doesn't mean anything to me. It's obvious that it's mimicking a shallow pool of training data (or whatever semantic distinction you feel is appropriate), and aesthetically I don't consider that a profound transformation.
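The behavior is easy to check. Below is a minimal sketch, assuming a Stable Diffusion checkpoint loaded through the Hugging Face diffusers library (the toolkit and checkpoint are assumptions; the post doesn't name a model): it renders "person sleeping" next to prompts that explicitly request other poses, with the seed held fixed so the only variable is the prompt text.

```python
import torch
from diffusers import StableDiffusionPipeline

# Hypothetical checkpoint choice; any text-to-image pipeline would do.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

prompts = [
    "person sleeping",
    "person sleeping on their back, arms at their sides, viewed from above",
    "person sleeping face down, no pillow, overhead shot",
]

for i, prompt in enumerate(prompts):
    # Same seed for every prompt: any change in pose has to come from the text.
    generator = torch.Generator("cuda").manual_seed(1234)
    image = pipe(prompt, generator=generator).images[0]
    image.save(f"sleeping_{i}.png")
```

If the head-on-arm profile shows up in all three outputs, that supports the "shallow pool" reading; if the extra tags actually move the pose, it doesn't.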
Okay. So what?
Your objection is that the NN was trained with a small pool of images of sleeping people? Or that the NN doesn't know what a person skipping rope looks like?
You realize that this is an issue with the training data set. You said so yourself. I infer that you know that with a bigger training set the objections you are raising would be minimized. It would be trivial to train an NN to show more than two hundred sleeping poses. The hardest part would be assembling a high-quality set of training data.
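To make "assembling a high-quality set of training data" concrete, here is a sketch of the caption metadata such a fine-tune would need, so that "sleeping" gets paired with more than one composition. File names and captions are hypothetical; the metadata.jsonl layout is the convention the Hugging Face datasets imagefolder loader reads, which is an assumed choice of tooling.

```python
import json
from pathlib import Path

data_dir = Path("sleeping_poses")  # hypothetical folder of curated images
data_dir.mkdir(exist_ok=True)

# Each image gets a caption that actually names the pose, so the model has
# something other than "head on arm, profile view" to associate with "sleeping."
captions = {
    "on_back_01.png": "person sleeping on their back, arms at their sides",
    "curled_side_02.png": "person sleeping curled up on their side",
    "face_down_03.png": "person sleeping face down, arms under the pillow",
}

# The `datasets` imagefolder loader picks up metadata.jsonl via its
# `file_name` column; every other key becomes a dataset field (here, `text`).
with open(data_dir / "metadata.jsonl", "w") as f:
    for file_name, text in captions.items():
        f.write(json.dumps({"file_name": file_name, "text": text}) + "\n")
```

Writing the training loop is the easy part; collecting and captioning a few hundred genuinely distinct poses is the work.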
A mathematical model of nerve cells creates art on demand, as per the training it is given, and that isn't transformative enough for you, because the results closely match the given examples.
You are entitled to your opinion. You are clearly wrong, but you can be as wrong as you like.
I mean, you are not considering the process, only the product. You appear to see an image generated by a Neural Network as no different from a photocopy. You seem to think the law should treat the two the same way.
Good luck with that.
Do you have some kind of disorder that keeps you from reading where I said (twice now) that I don't care about the legal issues?
And you're correct, I'm only considering the product, not the process. Ultimately that's all that matters to me. It'll be fascinating to see how much workflow it can replace with its current capabilities.