It makes good general sense, except for the copyright stuff. More from follow-up questions to ChatGPT:
Developers of general-purpose AI models must provide a detailed summary of copyrighted materials used in training. This means companies will have to disclose if they have used copyrighted music, texts, or images in training datasets.
The "Memorization" Question – The Act does not seem to differentiate explicitly between models that "memorize" data and those that simply learn patterns. It leans toward requiring compliance regardless of whether the AI retains exact copies of training data or just abstracts patterns.
Transformative Use? – The Act does not explicitly recognize AI training as transformative use in the way some U.S. interpretations of fair use might. While one could argue that training data is used in a transformative way (since it does not reproduce original works verbatim), the regulation does not currently provide an exemption for AI training solely on the basis of transformation. Instead, it leans on existing copyright laws, meaning AI developers may need permission from rights holders to train models using copyrighted data.
And the conclusions:
High Compliance Costs – AI companies will have to negotiate licenses for vast amounts of data or manually filter out copyrighted content, which is expensive and time-consuming.
Barrier to Entry for Startups – Large companies may be able to afford licensing deals, but smaller developers may struggle to access enough data, making AI development an elite, corporate-dominated field.
Competitive Disadvantage – Non-EU companies (like OpenAI, Google, or Anthropic) trained their models under less restrictive laws and may continue innovating without the same limitations.
Chilling Effect on AI Research – If research institutions and developers fear legal risks, they might avoid training models on essential datasets, stifling breakthroughs.
Finally:
It risks making the EU a consumer, not a leader, in AI. If startups can't train models efficiently, they'll be forced to license AI from non-EU companies, reducing the region’s sovereignty and competitiveness in AI development.
A possible outcome? AI innovation shifts elsewhere while the EU mostly regulates and consumes foreign AI models.