“The EU developed AI Act instead of AI. To paraphrase Geoffrey Howe, the EU is like a man who knows 364 ways of making love, but doesn’t know any women.”
He's quite handy.
Not quite. The EU is like a man who knows the 364 forms to fill out before making love to a woman, while not knowing the first thing about it, nor knowing any women.
And is always with a group of arab dudes
Have they just been artificially selected to be the ideal peasant caste over there? They cheer for shit like this but balk at anything protecting rights or freedoms because "we don't want to be Americanized"
You in America, of all people, should know that the intelligentsia and mass media never represent a people.
You're right of course. I feel for those based eurofrens (we know you're out there) who don't even have the luxury of coming online to vent.
I'd have to look at what they're actually proposing. Their demands could be reasonable - like making it explicit that someone cannot absolve themselves of responsibility by delegating a task to an AI - or unreasonable - like wanting to censor factual information about EU policy.
Among other bureaucratic bullshit, they're requiring AI companies to disclose their training methods and training data to fulfill copyright obligations in the EU. This alone will be a disaster, because most of the data available for training AI models, even publicly accessible data, carries some form of copyright protection unless it's explicitly marked as public domain or released under a free-distribution license. (Checking the copyright status of every single piece of available data isn't feasible either; a sketch of why is below.)
Unintended consequence: only the larger multi-billion-dollar companies with the resources to acquire rights to the data will thrive.
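To make the infeasibility point concrete, here's a minimal, purely illustrative Python sketch. It assumes each scraped record carries a hypothetical `license` field, which real web-scale corpora mostly don't, and shows how a cautious filter that keeps only explicitly permissive licenses ends up discarding most of the data:

```python
# Illustrative sketch only: assumes each record has an optional "license" field,
# metadata that real web scrapes usually lack.
PERMISSIVE = {"public-domain", "cc0", "cc-by", "cc-by-sa", "mit", "apache-2.0"}

def filter_by_license(records):
    """Keep only records whose license metadata is explicitly permissive."""
    kept, dropped = [], 0
    for rec in records:
        license_tag = (rec.get("license") or "").strip().lower()
        if license_tag in PERMISSIVE:
            kept.append(rec)
        else:
            # Missing metadata means unknown status; the cautious move is to drop
            # the record, which throws out the bulk of a typical scraped corpus.
            dropped += 1
    return kept, dropped

corpus = [
    {"text": "some blog post...", "license": None},
    {"text": "a Wikipedia article...", "license": "CC-BY-SA"},
    {"text": "a news article...", "license": None},
]
kept, dropped = filter_by_license(corpus)
print(f"kept {len(kept)} of {len(corpus)}; dropped {dropped} with unknown license status")
```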
The copyright aspect is so dumb. If a human being loves Santana and learns to play guitar, is that a copyright violation? Of course not.
It's incredibly dumb considering the amount of data it takes to train AI models, and that the process is completely transformative and does not seek to memorize the training data. Their justification: "it's about transparency, fairness and accountability."
That's the next step on the slippery slope (which somehow never turns out to be a fallacy; they always grab more): people thinking too much about copyrighted content without paying the government its cut!
With how most musicians act, a lot of them would argue it was and you owe them money.
Not a musician, but I remember Penn Jillette, of all people, arguing with someone that copyright makes it illegal to put a TV in a public place showing his performances. "No, stores can't even turn on an FM radio and play music off the radio. That's against the law!"
He's technically correct, and it's only tangential to this, but it's one of those things that hardened my stance against strict IP law and its defenders. Don't get me started on Metallica.
Unintended… lol.
Or they'll forego acquiring rights and say "Go ahead. Sue us. If you can afford to. Oh, you're from the government? That's cute. I know your Senator personally."
"common sense ai control", huh?
i know this music.
"Common sense AI control" like not letting Silicon Valley get away with "It wasn't me, it was the algorithm!"
First thing, it will ban AI from making fun of the EU for being a trash organization that kills innovation.
6 million ÷ 365 = JAIL
Wouldn’t be the first time the government tried to ban math.
They don't like us talking about 41% or when 13 is 50 either
Does this AI regulation include banning AI bots like Neuro sama from denying the holocaust?
It is my understanding that Neuro didn't even deny the holocaust. Someone saying "I don't know" isn't always a dog whistle - it can be a genuine admission of ignorance.
Just shows who has the real power
Europe's innovation came to a grinding halt after the EU was formed. They're so far behind on the AI curve it's hilarious.
EU declaring up front that their citizens will not be benefiting from AI.
I'm sure it'll still be used to throw them in jail though.
The EU has no citizens, only subjects.
I trust governments to regulate AI even less than I trust them with the internet, which is quite a high bar to clear.
I think this meme sums it up:
https://uploads.dailydot.com/2025/01/deepseek-memes.jpg?q=65&auto=format&w=1600&ar=2:1&fit=crop
EU competing against DeepSeek with RentSeek.
Fuck the EU
Here's what ChatGPT has to say about the actual AI Act text compared to common complaints in this thread:
Key Provisions of the AI Act:
Risk-Based Classification: AI systems are categorized based on their potential risk levels (a rough illustrative mapping of these tiers appears after this list):
Unacceptable Risk: Practices such as social scoring by governments and real-time biometric identification in public spaces are prohibited.
High Risk: AI applications in critical sectors like healthcare, transportation, and law enforcement are subject to stringent requirements.
Limited and Minimal Risk: Applications with lower risk levels face fewer obligations but are encouraged to adhere to voluntary codes of conduct.
Transparency Obligations: Developers and users of AI systems must disclose when individuals are interacting with AI, especially in cases of deep fakes or AI-generated content. This ensures that users are aware when content is artificially created or manipulated.
Data Governance: The Act emphasizes the quality and governance of data used to train AI systems, ensuring respect for fundamental rights, including privacy and data protection.
Oversight and Enforcement: National supervisory authorities are designated to oversee compliance, with the power to impose fines for violations.
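For a loose illustration of how that tiering works (the tier names follow the summary above, but the example mapping of use cases to tiers is an assumption for demonstration, not the Act's actual annex list):

```python
# Rough illustration of the Act's risk tiers; the example use-case mapping is
# invented for demonstration, not taken from the Act's annexes.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "stringent requirements"
    LIMITED = "transparency obligations"
    MINIMAL = "voluntary codes of conduct"

EXAMPLE_USE_CASES = {
    "government social scoring": RiskTier.UNACCEPTABLE,
    "AI-assisted medical diagnosis": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filtering": RiskTier.MINIMAL,
}

for use_case, tier in EXAMPLE_USE_CASES.items():
    print(f"{use_case}: {tier.name} ({tier.value})")
```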
Addressing Public Concerns:
Some individuals have expressed apprehension that the AI Act could:
Curtail Free Speech: The requirement to label AI-generated content aims to prevent misinformation and uphold transparency, not to suppress free expression. The Act explicitly states that compliance with transparency obligations should not impede the right to freedom of expression and the arts.
Impose Strict Copyright Demands: The Act acknowledges the challenges in training AI models with vast amounts of data, some of which may be protected by copyright. It emphasizes that any use of copyrighted content requires authorization unless exceptions apply. Providers of general-purpose AI models are obligated to produce summaries about the content used for training and implement policies to comply with EU copyright law.
In summary, the AI Act seeks to balance innovation with the protection of fundamental rights, including free speech and intellectual property. While it introduces obligations to ensure transparency and accountability in AI systems, it also provides exceptions and clarifications to prevent undue restrictions on expression and to address concerns related to copyright in AI training data.
It makes generally good sense, except the copyright stuff. More from follow-up questions at ChatGPT:
Developers of general-purpose AI models must provide a detailed summary of copyrighted materials used in training. This means companies will have to disclose if they have used copyrighted music, texts, or images in training datasets. (A rough sketch of what such a summary step might look like follows these notes.)
The "Memorization" Question – The Act does not seem to differentiate explicitly between models that "memorize" data and those that simply learn patterns. It leans toward requiring compliance regardless of whether the AI retains exact copies of training data or just abstracts patterns.
Transformative Use? – The Act does not explicitly recognize AI training as transformative use in the way some U.S. interpretations of fair use might. While one could argue that training data is used in a transformative way (since it does not reproduce original works verbatim), the regulation does not currently provide an exemption for AI training solely on the basis of transformation. Instead, it leans on existing copyright laws, meaning AI developers may need permission from rights holders to train models using copyrighted data.
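Nobody knows yet what a "sufficiently detailed summary" will have to look like in practice. As a rough, hypothetical sketch (the record fields and the per-source level of detail are my assumptions, not anything the Act specifies), a provider might aggregate counts per data source rather than enumerate every item:

```python
# Hypothetical per-source training-data summary; the record fields and the level
# of aggregation are assumptions, not something the Act actually specifies.
from collections import Counter

def summarize_sources(records):
    """Aggregate a training corpus into per-source document counts."""
    counts = Counter(rec.get("source", "unknown") for rec in records)
    return [{"source": src, "documents": n} for src, n in counts.most_common()]

corpus = [
    {"source": "common-crawl", "text": "..."},
    {"source": "wikipedia", "text": "..."},
    {"source": "common-crawl", "text": "..."},
]
for row in summarize_sources(corpus):
    print(row)
```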
And the conclusions:
High Compliance Costs – AI companies will have to negotiate licenses for vast amounts of data or manually filter out copyrighted content, which is expensive and time-consuming.
Barrier to Entry for Startups – Large companies may be able to afford licensing deals, but smaller developers may struggle to access enough data, making AI development an elite, corporate-dominated field.
Competitive Disadvantage – Non-EU companies (like OpenAI, Google, or Anthropic) trained their models under less restrictive laws and may continue innovating without the same limitations.
Chilling Effect on AI Research – If research institutions and developers fear legal risks, they might avoid training models on essential datasets, stifling breakthroughs.
Finally:
It risks making the EU a consumer, not a leader, in AI. If startups can't train models efficiently, they'll be forced to license AI from non-EU companies, reducing the region’s sovereignty and competitiveness in AI development.
A possible outcome? AI innovation shifts elsewhere while the EU mostly regulates and consumes foreign AI models.