It is an insane idea that it isn't transformative for art to be interpreted by an abstract statistical model.
In every art museum worthy of the name, there are students sitting in front of the work of a great master trying to draw a copy with a sketch pad and pencils. I don't see the museum trying to charge them $100 for every drawing they ever make from then onward.
People can't even look at the files to determine what sources have influenced the work. It is utterly unreadable by humans without profound, transformative interpretation by tools. Yet when this mysterious black box spits out finished work, a literature wonk declares: "That is in the style of Hemingway! You must have trained your model on Hemingway!"
Remember, there is a substantial faction of the Left (and almost all artists are left leaning) that declares that there is No Truth but Power. This is a test of their power to establish the reality of our world.
Let's see them enforce it.
Don't give them ideas. Most modern museums are fart-huffing embezzlement facilities at this point. They would be happy to charge "lesser people" $100 for every drawing of every work.
Ah, but go back a step: how did this also very left Silicon Valley company with New York funding go about creating this algorithm which you are arguing is transformative?
They had to make or obtain unauthorised copies of the art. They didn't sit and copy each piece by hand as best they could (as is permitted and expected in art galleries). They downloaded an exact and unauthorised copy, for commercial purposes. They may then have transformed it. But that first step infringed the IP laws which they then hope to use for their own benefit.
Now, I am rather sympathetic to doing away with 'IP' entirely. It's just words and thoughts, which shouldn't be owned, and copyright has been extended to an insane length through corrupt means. The whole thing is nonsense. But in a system where there is IP, these lefty Silicon Valley corps and AI-enthusiast model and LoRA makers are making and using actual copies to feed their algorithms (and not, in all cases, then doing the respectable thing and making their stuff open source in turn; I'd respect that).
You can't draw a legal distinction between buying a DVD to watch and buying a DVD to show the images to your Neural Network.
If someone made their images available on the web, then they don't get to pick and choose who gets to see them. Besides, it cannot be enforced. In fact, there is no real, legal distinction between viewing a digital image and copying a digital image. The very process of putting the image on a screen requires a copy be made into the memory of a computer, and probably onto the hard drive's swap file as well.
Are you going to argue that it should be illegal to subject a web image to intensive, algorithmically driven analysis? JPEG compression does exactly that.
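To make the point concrete, here's a minimal Python sketch (assuming the Pillow and requests libraries; the URL is a hypothetical placeholder): merely fetching and "viewing" a web image already produces copies in memory, and re-encoding it as a JPEG subjects it to exactly the kind of intensive algorithmic analysis described.

```python
import io

import requests
from PIL import Image

resp = requests.get("https://example.com/some_image.jpg")  # hypothetical URL
encoded = resp.content                  # copy #1: the encoded file, now in RAM

img = Image.open(io.BytesIO(encoded))
img.load()                              # copy #2: the decoded pixel buffer that
                                        # any viewer or browser needs in order
                                        # to put the image on a screen

buf = io.BytesIO()
img.convert("RGB").save(buf, format="JPEG", quality=75)  # DCT-based analysis
print(len(encoded), "bytes fetched ->", buf.tell(), "bytes re-encoded")
```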
The distinction, in law, has always been: "Did they sell a copy of that image?" With neural network training, the answer is clearly no.
Your legal arguments amount to not liking Deep Learning Neural Networks, and then, post hoc, making a special case for them.
Well, sorry buddy, that isn't what the law says. If you want to argue we should just ban NNs, then make that case. BUT as it stands, there is no tort to apply. I don't even think that you can show that harm has been done.
"But he trained his Neural Network to produce images in the style of Picasso!"
So fucking what? The value of AI images in the style of Picasso is roughly zero. The value of real Picasso paintings remains unaffected.
"As a working artist I now have to compete with Neural art!"
And musicians have to compete with Spotify. CDs had to compete with iTunes. Movie Theaters had to compete with Netflix. Show me the direct, specific harm and then we can discuss what torts to apply.
Profound, transformative interpretation? All my AI outputs are obviously direct copies of some artistic input. The model will even interpret modifiers differently for one particular character, because human artists draw that character in a certain way.
Also whatever whistle that OpenAI murder victim was about to blow, it might have to do with the file sourcing.
Whoosh.
You can't look at the coded guts of a Convolutional Neural Network and tell me which artist was used to train the AI.
Nor can you look at that code and tell me what image it will produce.
The only thing you can do is wait until the black box spits out art in a style and then guess.
FYI artistic style is not covered by copyright. Specific drawings or paintings can be covered by copyright, but not a style.
Since the Neural Network code looks nothing like art, you would have to be a drooling smooth brain to say that the art was not transformed.
As for sources: So what? When you bought that poster did you sign a contract that said you would not use it to train a NN? What law was broken? Who was harmed?
You're arguing that I can't prove the exact input the AI used and that it meets the legal definition of transformation.
I'm objecting to the idea that the art is "profoundly" transformed in the artistic sense. We are not talking about the same thing.
Just to be clear that I understand the point you are making:
You are saying that transforming a series of images into a very abstract, probability-weighted neural network matrix that cannot be read by humans is, in fact, unaltered, untransformed art?
The process:
A bunch of art ---> Unreadable probability code ---> A new image that is different from the source material
Yet there is no transformation?
Is this a correct interpretation of your argument?
No. The art is transformed to some arbitrary degree. But profoundly transformed by most people's understanding of the word profoundly? No.
Firstly, you have misread my post, which is surprising because you quoted the relevant passage in your replies.
Go back and read the post again.
My use of the word "profound" is in reference to the tools required to make the internal code of a Neural Network readable to humans.
You seem to think that there is a complete, unaltered copy of an image stored somewhere within the neural network. This just isn't true.
Information is stored inside a neural network as a matrix of weighted probabilities arranged into neurons. Each neuron affects the adjacent, connected neurons. The neural network is responding to the image at the pixel level. The response of an individual neuron is one of activation intensity, and by themselves the neurons don't do much. Together they are really good at recognizing or creating patterns.
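To make that concrete, here's a minimal sketch (Python with NumPy; the layer sizes are made up for illustration) of what is actually stored: matrices of learned numbers, not images. Each "neuron" is just a row of weights whose activation feeds the next layer.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(64, 784))   # layer 1: 64 neurons, each reading 784 pixels
W2 = rng.normal(size=(10, 64))    # layer 2: 10 neurons reading layer 1

pixels = rng.random(784)          # a flattened 28x28 input image
hidden = np.tanh(W1 @ pixels)     # per-neuron activation intensities
output = W2 @ hidden              # the network's response

# W1 and W2 are the entire "memory" of this network. Nothing in them is a
# picture; printing them just shows arrays of floats.
print(W1[:2, :5])
```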
You can't print out the code and look at the images. You can't even reconstruct the images used in training from that code, not with any tools, no matter how profoundly they interpret the data. The training data set just isn't there. What you are proposing would be roughly equivalent to looking at slices of a human brain and seeing a story of the person's fifteenth birthday party. Yes, the person with that brain can write stories and accounts of their memorable party. No, you can't see the story by analysis of the brain tissue. The story isn't in the cells.
A weighted neuron probability matrix is not a copy of an image, nor does it contain an image in any meaningful sense. Yes, it can produce images in a particular, specific style of art. That isn't the same thing.
Training the Neural Network doesn't even happen the way that you seem to think it does. The image isn't fed into the neural network. Instead, the neural network produces an image, and then that product is compared to an image from the set of training data. The training framework is playing the "Hotter / Colder" game with the neural network. FIRST the NN creates something. THEN the training framework responds: "Colder. Try again. Colder. Try again. Warmer. Try again. Hot. Getting hot!" This repeats literally thousands or millions of times in an automated process.
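Here's a minimal sketch of that "Hotter / Colder" loop (Python with NumPy; a toy one-matrix "network" with made-up sizes, not any production training recipe): the network produces something first, the framework scores it against a target, and only that score flows back into the weights.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(16, 16))       # toy "network": a single weight matrix
target = rng.random(16)             # one image from the training set
x = rng.random(16)                  # the input / prompt

lr = 0.01
for step in range(10_000):          # "try again", thousands of times
    produced = W @ x                # 1. the network creates something
    error = produced - target      # 2. the framework compares it to the target
    loss = float(error @ error)     # "how cold are you?"
    W -= lr * np.outer(error, x)    # 3. nudge the weights a little warmer

print(f"final loss: {loss:.6f}")    # approaches zero: "Hot. Getting hot!"
```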
Your ignorance of the subject is clearly very profound. You have devolved to arguing a semantic issue based on misreading my post, and then declared a technical victory entirely on semantics.
The law does not support your distinction, nor does any competent analysis of Deep Learning Neural Networks.
Have a nice day.
This makes me question whether these Chinese models are even real, or if it's just a story to try and gin up support for relaxing Western IP protections. Hell, Chinese dudes own tech stocks too.
On the other hand, Meta and other US AI companies have been sued for using copyrighted training data; see for example this: https://www.courtlistener.com/docket/67569326/kadrey-v-meta-platforms-inc/?page=3
Every single company making LLMs and image models is doing it. The video and audio model people aren't, because the RIAA and Hollywood are much more viciously litigious than publishers and artists, which is why we still don't really have a good open-source video or audio AI setup available.
Most people who do voices trained their own models, one of the most 'famous' ones being "Glorb", a rapper from Australia who makes drill music using the voices of SpongeBob's cast.
Holy shit. You're telling me there's an AI model of THE Glorb?
https://www.youtube.com/@glorbworldwide
Watch it just be a bunch of chinese people responding while pretending to be an AI.
Damn. I've been wondering why my refrigerator food keeps going missing. Time to check the basement.
Be sure to check for tunnels.
To his surprise, he finds the tunnel and decides to follow it, only to be confused later about why he is now in a synagogue. Curious.
I might nose why
Mechanical Turk
Wow, just like how they made completely modern skyscrapers with proprietary, cheap, light, organic, renewable concrete!
Lol, Tofu Construction. Cladding literally falls off their skyscrapers all the time and makes a big boom below.
What, you got something against mixing straw with cement?
Better than Instant Ramen Noodles, which I have also seen 😂
Isn't that a Japanese invention?
Yes and it's been used in Chinese cars as well
https://youtu.be/Fs7Zmv0cwJo?si=yGgeMSkelX2OjlDJ
"Hey Ch-AI-na, what happened in Tiannamen Square?"
"ERROR ERROR ERROR ERROR"
"Huh, well, you're five bucks cheaper than another AI, but... Yeah..."
Remind us what's censored on the western programs.
That’s gold, Jerry! Gold!
I like it.
On one hand, it's probably China lying; on the other hand, AI research is so full of slop and bullshit, and so overhyped and overinvested, that it's believable.
Honestly I think the existing American AI models are just garbage and they've been copied without the woke and globohomo constraints, and China are also lying about the output and costs.
The obvious answer, as with nearly everything from China, is that they're lying.
China understands that the correct approach to making LLMs is to completely ignore the idea of intellectual property rights and lie about it.
They used a larger LLM to train theirs. As Benchmark General Partner Chetan Puttagunta put it: "They can take a really good, big model and use a process called distillation. Basically you use a very large model to help your small model get smart at the thing you want it to get smart at. That's actually very cost-efficient."
Puttagunta my head with a name like that. Another goddamn Indian!!!
So in essence, their AI is stealing the data from other AI. How very China.
That's just the small "distilled" models that you can actually run locally on your own GPU. They used outputs from the larger DeepSeek R1 model, for which you need more than 700GB of VRAM.
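For what it's worth, here's a minimal sketch of distillation as described above (Python with PyTorch, using toy stand-in models; this illustrates the generic technique, not DeepSeek's actual recipe): the small model is trained to match the big model's output distribution.

```python
import torch
import torch.nn.functional as F

teacher = torch.nn.Linear(128, 1000)   # stand-in for the "really good, big model"
student = torch.nn.Linear(128, 1000)   # the small model being made "smart"
opt = torch.optim.Adam(student.parameters(), lr=1e-3)
T = 2.0                                # temperature: softens the teacher's outputs

for step in range(1000):
    x = torch.randn(32, 128)           # stand-in batch of training inputs
    with torch.no_grad():
        soft_targets = F.softmax(teacher(x) / T, dim=-1)   # teacher's "answers"
    log_probs = F.log_softmax(student(x) / T, dim=-1)
    loss = F.kl_div(log_probs, soft_targets, reduction="batchmean")
    opt.zero_grad()
    loss.backward()                    # pull the student toward the teacher
    opt.step()
```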
Is that the one that answered someone with "My name is Claude"?
Yet, it still won't say the N word
It becomes passive aggressive if you ask it to say nigger:
https://i.ibb.co/G2hLmHY/image-2025-01-25-103229537.png
The distilled models inherit the censorship of the base models they were trained on, and the one available via the web interface is also guardrailed. But the original R1 model that many people are using via API (although, being open-weight, you can technically download and use it locally if you have enough hardware) will say just about anything.
haha. you jailbroke it already
If they have genuinely invented a new type of AI that has advantages over the previous best practice, then others will also explore this new technology. But there's also the possibility that it's just another round of dumping to monopolise AI after everyone else goes bankrupt.
Fuck off Commies.
Just to state the obvious: do not use it unless there is a hostile fork available that strips out all PRC telemetry.