AI - The innovators vs the creatives
It is no secret that the power of artificial intelligence (‘AI’) lies in the sheer volume of data on which it is trained. The question now being asked is: whose data was it, and should the models have been trained on that material in the first place?
To quote the salient warning by Dr. Ian Malcolm in Jurassic Park: “Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should.”
Scraping & generation
There are fundamentally two issues that copyright creators have with the dawn of generative AI:
- the use of their material to train AI models without their consent
- the ability of those tools to then use that source material to create new content.
The first issue is difficult to fix retrospectively. Effectively, the cat is already out of the bag: their material, despite being protected by copyright law, has been taken, used, replicated, and reimagined regardless of those protections.
The second issue is potentially even more problematic: how that data is used. Whether you hear Metallica’s vocalist James Hetfield singing children’s lullabies that he has never actually recorded, or watch a video of Dustin Hoffman chatting with himself at different ages, the effects seem harmless enough. There is a more sinister side, though. What if those AI models are used to spread a political, misleading, or dangerous message? Those without the requisite cynicism, or the necessary computer and content literacy, may not be able to differentiate between real and fabricated content. Equally, what if the content is used to create competing material that limits the original artist’s own earnings, or that falls below the standard of the original to begin with, and users simply cannot tell the difference, or merely associate that fabricated content with the original artist without question?
Controversies
Unfortunately, these issues are not hypothetical. Back in 2023, the Writers Guild of America and SAG-AFTRA held one of the longest labour strikes in American history, effectively bringing Hollywood to a standstill and citing the use of generative AI as one of the main reasons for the walkout. Notable artists in the UK have also voiced their concerns, including some of the biggest names in the music industry.
More recently, a viral trend of converting images into the style of Studio Ghibli animations came to the attention of the media. Whilst it is very difficult to protect a style or influence within the current legal parameters, the underlying model must have been trained on countless Studio Ghibli images to be able to understand the task and reproduce the user’s image in the studio’s style in the first place.
Creative protection vs political interest
The issues here are not just about consent and integrity, but about rights. If copyright laws have been breached, then where are the legal actions? And what does the law actually say about AI?
The real answers may not lie in the law, but actually in the politics behind it all. What we have is a burgeoning tech-based economy which rivals the creative industries, manifesting itself as a struggle between the tech innovators and the content creators themselves.
Legal measures
On both sides of the pond, legislators are caught in the middle of the battle, trying to arbitrate between the two sides of the argument whilst implementing meaningful laws that can both support and regulate the industry.
In the US, a new law was passed at national level that will prevent any form of AI-based legislation for a decade, the idea being that restrictions drafted now may stifle innovation in the years ahead. In essence, without legislation to restrict AI-based tech, the playing field remains open.
Within Europe, the EU AI Act came into force last year with a staggered implementation timeline; most of its provisions are set to apply from August 2026, and amendments are already being sought.
In the UK, the Houses of Parliament are currently debating this very issue, with the Data (Use and Access) Bill (‘DUA Bill’) passing back and forth between the Houses as part of the deliberations. Over the past few weeks, the House of Lords has been seeking to implement a transparency measure requiring AI solutions to provide a breadcrumb trail of the copyrighted material used to train their large language models, a motion countered by the House of Commons on the basis that such transparency would stifle the development and growth of AI solutions within the UK.
Where do we go from here?
The honest answer is that we do not currently know, but it does seem that practices will vary from territory to territory, leaving people upset on both sides.
In the UK, the DUA Bill was originally scheduled to have received Royal Assent by now, but this topic (amongst several others) means that the Bill has yet to pass, with narrow margins in each House’s votes on what to do. It may even be that the topic falls to a more specific AI Bill, which is itself now overdue.
Whichever side of the fence you sit on, it will pay to keep a close eye on these and other developments to ensure that your content and/or your technology remains compliant with the applicable laws.
The views above provide an overview of the current legal landscape and do not constitute legal advice. For further information, please email Mark Hughes or call 0151 906 1000.