Matthew Lesh is Country Manager at Freshwater Strategy and author of a new report, available on Onward Corner, about copyright and AI
In 1841, Thomas Babington Macaulay (later, Lord Macaulay) described copyright in a House of Commons debate as a monopoly. “It is good that authors should be remunerated,” he said. “Yet monopoly is an evil… the evil ought not to last a day longer than is necessary.”
This debate reflects Britain’s pioneering role in defining the appropriate bounds of intellectual property. As far back as 1710, the Statute of Anne was the first law anywhere in the world to grant authors exclusive rights over their works. It emerged from the recognition that society benefits when creators can earn a living from their work, thereby encouraging the production of more books, ideas, and cultural and scientific goods.
But copyright was never meant to be absolute; instead, as the statute’s name made clear, it was An Act for the Encouragement of Learning. Copyright’s purpose is to stimulate knowledge, not to lock it away. That’s why there have always been limitations.
Copyright restricts the reproduction of a work, not the broader exchange of ideas, styles, or influences that allow creativity to flourish. You have always been allowed to take others’ ideas and build on them; we are all standing on the shoulders of giants. Copyright was also always time-limited, originally to just 14 years, with the option of extension if the author was still alive.
Later, ideas about moral rights emerged on the continent (blame the French) and seeped into the UK through membership of the European Union. Self-interest has also at times outweighed the public interest. Most famously, Disney repeatedly succeeded in lobbying for extensions to keep profiting from Mickey Mouse long after Walt’s death.
Now, as artificial intelligence fundamentally reshapes digital technologies and has the potential to revolutionise the global economy, the copyright debate has taken on a new dimension. At the heart of the discussion is whether AI developers should be able to use publicly available information, such as websites, blogs, and web forums, to train their models.
AI is typically trained on vast amounts of data to learn patterns, which are represented as mathematical weights, enabling it to generate new content without storing or reproducing the underlying data. Importantly, the higher the quantity and quality of the input, the better the output.
Copyright law prohibits the unauthorised reproduction of content. As AI models learn from but do not (for the most part) reproduce training data, their operation aligns with copyright principles: learning, not copying. Analogously, it is akin to a person reading many books, synthesising the information, and writing and publishing an article on the same topic.
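The distinction between learning and copying can be shown with a toy sketch. This hypothetical character-bigram model bears no resemblance to the scale or architecture of a real large language model, but it illustrates the principle: training retains only aggregate statistics (the "weights"), not the source text, and generation produces new sequences from those statistics rather than reproducing the original.

```python
from collections import defaultdict
import random

def train_bigram(text):
    # Count how often each character follows each other character.
    # Only these frequency statistics are kept, not the text itself.
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(text, text[1:]):
        counts[a][b] += 1
    return {a: dict(bs) for a, bs in counts.items()}

def generate(model, start, length, seed=0):
    # Sample new text from the learned statistics alone.
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = model.get(out[-1])
        if not followers:
            break
        chars, weights = zip(*followers.items())
        out.append(rng.choices(chars, weights=weights)[0])
    return "".join(out)

corpus = "an act for the encouragement of learning"
model = train_bigram(corpus)
print(generate(model, "a", 20))
```

After training, the model holds nothing but a table of character-pair frequencies; the corpus could be deleted and generation would work identically. Real models are vastly more complex, but the shape of the argument is the same: the output is drawn from learned patterns, not a stored copy.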
However, the UK’s copyright regime creates legal uncertainty for AI. It is generally accepted that if a model reproduces the training material in its output, it would constitute a violation of copyright.
But the ability to train – to learn – sits on shaky foundations. As part of the training process, an AI developer will typically make a copy, which, even if later deleted, could infringe copyright law (as the meaning of a provision allowing temporary and incidental copying is contested).
In practice, this legal uncertainty has prompted developers to train their models in jurisdictions with more permissive laws, such as the US, Japan, and Singapore. Notably, none of the most prominent large language models, like OpenAI’s ChatGPT or Anthropic’s Claude, are trained in the UK. Even British companies, like London-based Stability AI, have firmly stated that their AI training takes place entirely outside the UK.
The current Labour government, as well as the last Tory one, has promised reform to address this issue at various points. In 2022, a government consultation led by the Intellectual Property Office proposed a copyright exception for AI training, known as text and data mining (TDM), for any purpose. This was also backed by policy reviews undertaken for the current and previous governments, including Sir Patrick Vallance’s Pro-innovation Regulation of Technologies Review in 2023 and Matt Clifford’s AI Opportunities Action Plan in January 2025.
The latest consultation proposing reform, which came out in December last year, was met with an aggressive campaign of opposition from rights holders: news media, book publishers, and the music industry. They advocate an opt-in model, requiring AI developers to pay a licence fee to use materials in AI training, along with cumbersome transparency requirements. Some even advocate extraterritoriality, meaning that AI models that fail to comply with UK-specific rules would be inaccessible in Britain.
Unfortunately, the rights holders’ campaign has led the government to backtrack from its earlier commitments to reform; ministers are still deciding how to respond. The Conservative opposition, who backed reform while in government, also appear to have flipped their position, fearing the wrath of the news publishers and the creative sector, which have been lobbying hard on the issue.
It is perhaps understandable that these interest groups are seeking a transfer of cash from a lucrative AI industry. But requiring payments could come with severe consequences for the UK’s AI sector – as well as for the broader economy and society – for little realistic benefit to the creative sector. Changes that make the UK a more difficult place to train and adopt AI than even the EU (which has already begun reforming its laws), let alone the more hospitable jurisdictions, would prompt an exodus of top AI companies, talent, and investment.
In practice, the rights holders’ proposals are unlikely to result in significant payments to copyright holders. Instead, AI companies will continue to train their models elsewhere. This approach risks the UK becoming an AI laggard, with less advanced models available for British consumers across all sectors. Ironically, one of the biggest losers would be the creative industry, which already relies on cutting-edge AI tools to produce world-leading films and music. To put it bluntly, British data would still be used to train AI, as it is today; it would just be done elsewhere, to the benefit of others.
The UK has many advantages in AI, from world-leading academic and policy initiatives to pioneering labs such as Google DeepMind and Stability AI. Britain is ranked third globally for AI investment, and AI could add £550 billion to the UK economy by 2035. We are on the cusp of breakthroughs in life-saving medical treatments, driverless cars, better public services, and cutting-edge scientific discoveries.
The government says it wants the UK to be an AI ‘superpower’. There are already several barriers. Data centres — essential for AI development and adoption — are being hindered by restrictive planning rules. The UK has the highest industrial electricity costs among developed countries, making energy-intensive industries such as AI significantly less competitive.
But there is something we can do to make Britain a more hospitable place for this technology: adopt a Japan-style copyright regime for AI. This would make Britain one of the best places in the world to develop AI and help avoid the risk of more talent and capital going elsewhere.