Firstly, you have misread my post, which is surprising because you quoted the relevant passage in your replies.
Go back and read the post again.
My use of the word "profound" is in reference to the tools required to make the internal code of a Neural Network readable to humans.
You seem to think that there is a complete, unaltered copy of an image stored somewhere within the neural network. This just isn't true.
Information is stored inside a neural network as matrices of numeric weights connecting neurons; each neuron affects the adjacent neurons it is connected to. The neural network responds to the image at the pixel level. An individual neuron's response is just an activation intensity, and by itself it doesn't do much. Together, the neurons are very good at recognizing or creating patterns.
You can't print out the code and look at the images. You can't even reconstruct the images used in training from that code, not with any tools, no matter how profoundly they interpret the data. The training data set just isn't there. What you are proposing would be roughly equivalent to looking at slices of a human brain and expecting to see the story of the person's fifteenth birthday party. Yes, the person with that brain can write stories and accounts of their memorable party. No, you can't recover the story by analyzing the brain tissue. The story isn't in the cells.
A matrix of weighted neuron connections is not a copy of an image, nor does it contain an image in any meaningful sense. Yes, it can produce images in a particular, specific style of art. That isn't the same thing.
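The point that weights are plain numbers, with no image hiding inside them, can be sketched in a few lines of Python. The pixel values and weights below are made up purely for illustration:

```python
import math

# A single artificial "neuron": a weighted sum of its inputs passed through
# an activation function. The weights are just numbers; no training image
# is stored in them, and none can be read back out.
def neuron(inputs, weights, bias):
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid: the "activation intensity"

pixels = [0.2, 0.9, 0.4]     # hypothetical pixel intensities
weights = [0.5, -1.2, 0.8]   # learned values, nothing more
print(neuron(pixels, weights, 0.1))
```

On its own this neuron does almost nothing, which is the point: pattern recognition only emerges from many such units wired together.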
Training the neural network doesn't even happen the way you seem to think it does. The image isn't fed into the neural network. Instead, the neural network produces an image, and that product is compared against an image from the training data set. The training framework is playing the "Hotter / Colder" game with the neural network. FIRST the NN creates something. THEN the training framework responds: "Colder. Try again. Colder. Try again. Warmer. Try again. Hot. Getting hot!" This repeats literally thousands or millions of times in an automated process.
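The "Hotter / Colder" loop above can be sketched as a toy training run. This is a deliberate oversimplification (a single weight, and one target number standing in for a training example), just to show that only the adjusted weight survives the process, not the example itself:

```python
# Toy "hotter / colder" training loop: the model guesses FIRST, a loss says
# how cold the guess is, and the weight gets nudged. The target value stands
# in for one training example; after training, only the weight remains.
target = 0.75   # stands in for a training example
weight = 0.0    # the model's single parameter
lr = 0.1        # how big each "try again" adjustment is

for step in range(1000):
    guess = weight                 # FIRST: the model produces something
    loss = (guess - target) ** 2   # THEN: compare to the training example
    grad = 2 * (guess - target)    # colder/warmer: which way to adjust
    weight -= lr * grad            # adjust and try again

print(round(weight, 4))  # converges toward 0.75
```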
Your ignorance of the subject is clearly very profound, and you have devolved into arguing a semantic issue based on your misreading of my post, then declared a technical victory entirely on semantics.
The law does not support your distinction, nor does any competent analysis of Deep Learning Neural Networks.
Have a nice day.
We're talking past each other. I'm speaking about an aesthetic sense of "profound" and I made that pretty clear as well. I don't care about the legal ramifications either.
There are AI art models that, when fed certain prompts, will return the same pose from the same angle and remain highly resistant to further parameters. For example: "person sleeping." The AI always returns the character with their head on their arm, seen in profile. No matter what additional tags or weights are added, it returns that basic structure for "sleeping." Whether the neural network holds an exact copy of some art (it doesn't, apparently) doesn't mean anything to me. It's obviously mimicking a shallow pool of training data (or whatever semantic distinction you feel is appropriate), and aesthetically I don't consider that a profound transformation.
Okay. So what?
Your objection is that the NN was trained with a small pool of images for sleeping people? Or that the NN doesn't know what a person skipping rope looks like?
You realize that this is an issue with the training data set. You said so yourself. I infer that you know that with a bigger training set, the objections you are raising would be minimized. It would be trivial to train a NN to show more than two hundred sleeping poses. The hardest part would be assembling a high-quality set of training data.
A mathematical model of nerve cells creates art on demand, as per the training it is given, and that isn't transformative enough for you, because the results closely match the given examples.
You are entitled to your opinion. You are clearly wrong, but you can be as wrong as you like.
I mean, you are not considering the process, only the product. You appear to see an image generated by a neural network as no different from a photocopy, and you seem to think the law should treat the two the same way.
Good luck with that.
Do you have some kind of disorder that keeps you from reading where I said (twice now) that I don't care about the legal issues?
And you're correct, I'm only considering the product, not the process. Ultimately that's all that matters to me. It'll be fascinating to see how much workflow it can replace with its current capabilities.