Have your say on the Productivity Commission report on copyright and AI

The Productivity Commission is looking for feedback on its interim report on harnessing data and digital technology.

The report includes recommendations for a text and data mining (TDM) exception to Australia’s Copyright Act, which would allow the AI industry to use writers’ and artists’ work without consent or compensation to train large language models (LLMs) – with potentially devastating effects for Australia’s creators and creative industries.

How to have your say

  • You can read the interim report and have your say on the draft recommendations by Monday 15 September 2025 (5pm AEST) on the Productivity Commission website.
    • You are also welcome to copy and paste anything you find useful from my own draft submission (below).
  • The Australian Society of Authors (ASA) also recommends you contact your local MPs to:
    • Share your concerns about the theft of your work by overseas tech giants. Allowing this theft is unjust – it amounts to wage theft for the creative industries.
    • Outline the risks to your livelihood and Australian culture if copyright laws are watered down to benefit overseas tech giants.
  • Help spread the word.

Draft submission

(Last updated 11 September 2025)

I welcome the opportunity to provide feedback on the Productivity Commission’s Interim Report on harnessing data and digital technology – in particular, its recommendations around specific Artificial Intelligence (AI) legislation, mandatory guardrails for high-risk AI, copyright, text and data mining (TDM), and the training of large language models (LLMs) for AI platforms.

I do not oppose ethical, transparent and fair AI in principle. I recognise its extraordinary potential to improve disability access, support literacy and reduce inequalities. However, existing platforms have so far failed – and continue to fail – on ethics, transparency and fairness.

Response to AI’s productivity potential

Part 1 of the Interim Report outlines AI’s productivity potential. However, while the report acknowledges that, ‘as with any new technology, AI can raise risks’, it focuses on the potential risks that ‘poorly designed’ or ‘burdensome’ regulation poses to the AI industry, rather than on the systemic risks the AI industry creates for Australian creators, creative industries and the broader economy. These include:

  • The use of AI-generated data increasing the risk of Australian businesses acting on incorrect information, given a recent study showing that AI search engines currently have error rates of 60-96% due to incorrect and/or unverified source material and ‘hallucinations’ (false information created by the Generative AI process itself). Any time saved by using these platforms is therefore offset by the productivity lost in verifying their output. (AI search engines fail accuracy test, study finds 60% error rate, Techspot website, 11 March 2025)
  • Well-documented AI biases creating additional barriers for disabled people and people of colour to enter or progress through the workforce – risking higher unemployment and reduced organisational performance and economic productivity, given extensive academic and industry evidence that more diverse organisations deliver better financial performance.
  • The impact on organisations’ ability to meet their legal and fiduciary duties and duty of care, given they have no recourse against AI programs that give them bad advice or supply information obtained in illegal or unethical ways.
  • The impact on organisational credibility and viability from endorsing AI business models that are based on theft, fuel human rights abuses, and have a disproportionate and devastating environmental impact (including energy and water use, emissions and e-waste), as more stakeholders divest from organisations that no longer live up to their values.
  • The impact on human capital and capabilities, with research already reporting that AI is de-skilling Australia’s workforce and eroding creativity, critical thinking, learning and cognitive ability more broadly.

Yes, AI offers new opportunities and efficiencies, but it also poses significant risks to Australian creators and creative industries, as well as to the broader economy and workforce.

It is imperative that the development and use of AI is carefully regulated, and that the interests of the AI industry are not prioritised or safeguarded over the rights of human creators and content holders, or of Australian workers, creative industries and businesses.

Response to Draft Recommendation 1.2

AI-specific regulation should be a last resort

The Interim Report reflects a reluctance to regulate AI technology. However, tech giants are businesses whose interests lie in generating profit. Past poor conduct, particularly relating to copyright, demonstrates we cannot rely on them to do the right thing.

It is government’s role to take social interests into account and set sensible regulation that mitigates the risks of this technology and protects people from harm. Additional protections are required to ensure the principles of consent, credit and compensation underpin the use of copyright works in the training of LLMs and AI platforms, and in any other process related to text and data mining or the production of Generative AI.

I support calls from the Media, Entertainment and Arts Alliance (MEAA), the Australian Society of Authors (ASA), the Copyright Agency and others for the Australian Government to introduce standalone AI legislation that requires AI developers, as a condition of doing business in Australia, to:

  • Disclose all data sources for AI training.
  • Obtain consent from creators for the use of copyright material for AI training.
  • Give credit (attribution) to creators for use of their work.
  • Pay reasonable compensation to creators for use of their work in AI training.
  • Identify Indigenous Cultural and Intellectual Property (ICIP) and comply with cultural protocols before making use of such material for AI training.
  • Disclose copyright works used for AI training (and for what purpose) to minimise copyright infringement and/or bias.

Response to Draft Recommendation 1.3

Pause steps to implement mandatory guardrails for high-risk AI

The exploitative impact of Generative AI on Australia’s creators and creative industries makes these platforms particularly high-risk.

Mandatory guardrails are key to ethical AI development. With foundational AI models built off the back of creators’ intellectual property, without consent or remuneration, the transparency provided by mandatory guardrails is a necessary first step towards any future expansion of the industry.

Planned steps to introduce mandatory guardrails should not be paused. Doing so would prioritise the productivity of multinational tech companies over the Australian economy, and move Australia away from international best practice (such as the European Union’s requirement for AI developers to be transparent about the copyright works they have used to train their models).

Response to Information Request 1.1

Are reforms to the copyright regime (including licensing arrangements) required? How would an exception covering text and data mining affect the development and use of AI in Australia? What are the costs, benefits and risks of a text and data mining exception likely to be?

Access to legally sourced content for AI-related development and activities in Australia is enabled by the current copyright regime, which should be upheld and enforced. Australia should not weaken those protections through the introduction of a TDM exception.

The suggestion to do so in the Interim Report:

  • Does not reference or appear to respond to relevant evidence about Australia’s specific economic context that would support a TDM exception.
  • Does not reference or appear to respond to ICIP, risking the perpetuation of the ongoing impacts of colonisation and significant harm to creators and communities, as well as to Australia’s reputation and human rights record. As always, the impact of unfair and unethical AI falls heaviest on those already systemically under-valued, under-represented and marginalised – including First Nations creators, whose ICIP rights have been dismissed without consideration of specific permissions or cultural protocols, of what it means to appropriate, misuse or misrepresent language or customs, or of what it means to steal from, quote, or even simply refer to First Nations people who have died.
  • Attempts to align Australia with international precedents that were implemented before recent revelations about the AI industry’s theft of copyright material, while not referencing or appearing to respond to the outcomes of those precedents – which currently include more than 40 lawsuits in the US and challenges to proposed TDM exceptions in the UK. For example:
    • The UK House of Lords notes that: ‘LLMs may offer immense value to society. But that does not warrant the violation of copyright law or its underpinning principles. We do not believe it is fair for tech firms to use rightsholder data for commercial purposes without permission or compensation, and to gain vast financial rewards in the process. There is compelling evidence that the UK benefits economically, politically and societally from upholding a globally respected copyright regime.’ (Communications and Digital Committee, Large language models and generative AI (House of Lords Paper No 54, Session 2023-24) 2 February 2024, para 245)
    • The New Zealand Society of Authors Te Puni Kaituhi o Aotearoa has also strongly condemned ‘the appropriation of New Zealand Aotearoa authors intellectual property’, noting ‘our writers should not be the ones deprived of lost revenue in the development of this new technology.’ (Books scraped in the LibGEN dataset by Meta?, NZSA website, 24 March 2025)
    • Anthropic has recently signed a $1.5 billion settlement with book authors. With approximately 500,000 books covered by the lawsuit, authors are expected to receive around $3,000 per book, making it the largest publicly reported copyright recovery in history. Beyond the financial compensation, Anthropic will also have to destroy the LibGen and PiLiMi datasets.
  • Is likely to have minimal impact, given the majority of AI models have already been developed overseas, and it is unclear whether – or what – LLM training would occur in Australia.
  • Is unnecessary, given developers can already gain legal access to Australian content for AI development under existing copyright and licensing arrangements. Keeping Australia free of a TDM exception helps Australian creators, because they can be paid for AI-related activity that occurs here.
  • Would have immediate and harmful effects, including:
    • Creating significant new barriers for Australia’s creative industries, and for individual creators’ ability to earn a liveable wage and sustain a creative practice or career – at a time when we are already experiencing a national cultural workforce crisis, and the average income of Australian creators is well below the minimum wage.
      • ‘Authors earn a living through their copyright, and in Australia make on average just $18,200 per year from their creative practice,’ CEO of the Australian Society of Authors Lucy Hayward said in response to Meta’s theft. ‘It is not only unfair, it is appalling that one of the world’s wealthiest companies has chosen to use creators’ work without permission or payment – work that has been essential to the development of AI technology.’ (Australian authors’ books included in AI training dataset, ASA website, 25 March 2025)
    • Reducing investment in Australian content and culture, with the AI industry’s disrespect and devaluation of cultural labour discouraging creators and creative industries from producing new work. Given their very low earnings, even a small disruption to income may mean the permanent loss of many professional creators, resulting in a contraction of authentic Australian voices and our unique Australian perspective. Who will tell authentic Australian stories when there are no local creators left to do so?
    • Reducing the reliability of content, given LLMs are trained using incorrect and/or unverified source material, create ‘hallucinations’, and have demonstrable in-built and learnt biases – all of which call truth and trustworthiness into question. Australians need to be able to trust that the news they read, the film and television they watch, and the music they listen to has been produced by creators whose work has not been compromised by AI.
    • Establishing dangerous precedents, as already seen in Meta’s legal defence that LLMs ‘transform’ source material into new work and should therefore count as ‘fair use’ – which sends the message to other multinational corporations that theft of Australian intellectual property is not only okay, but good business.
  • Appears to prioritise the interests of multinational tech companies (which can afford to pay to license content for AI training) over local creators and creative industries (who cannot afford to have their work stolen), while placing the onus on creators to contest the ‘fairness’ of the use and bring claims against technology giants. It is patently unfair to oblige authors to give their work away for free to AI companies – some of the wealthiest companies on the planet – to develop AI tools intended to displace them and their work.
  • Appears to undermine:
    • The Australian Copyright Act, by infringing copyright owners’ exclusive right to reproduce their material – the right through which creators earn a living – without permission, licence or compensation. A TDM exception amounts to government-sanctioned wage theft for the creative industries.
    • The Moral Rights of Australian creators, by creating risks regarding attribution of authorship, and of work being treated in unauthorised or prejudicial ways.
    • Cultural protocols, at a time when the Australian Government is exploring standalone legislation for ICIP.
    • The United Nations Declaration on the Rights of Indigenous Peoples, which Australia has endorsed, which outlines the right of First Nations people to maintain, control and protect their knowledge and cultural expression.
    • The Australian Government’s previous rejection of fair use provisions.
    • Australia’s obligations under the Berne Convention for the Protection of Literary and Artistic Works. It is difficult to see how the use of copyright material in AI processes would satisfy the three-step test Australia must comply with as a signatory, which requires that any limitation or exception to the exclusive rights of creators must:
      • Apply only in certain special cases;
      • Not conflict with a ‘normal exploitation’ of the work; and
      • Not unreasonably prejudice the legitimate interests of creators.
    • Emerging licensing markets (such as the HarperCollins AI deal).

A TDM exception doesn’t even work for AI companies in the long term. More high-quality, human-authored ‘data’ will be required to refine AI models – and there will be no high-quality data if the copyright industries are decimated.

The introduction of a TDM exception to the Copyright Act in the name of ‘reform’ would extract the work of Australian creators and creative industries without permission, under the guise of innovation. It would further erode the rights of copyright owners and restrict their ability to obtain remuneration for their work, thereby reducing the very productivity and economic benefits it purports to achieve.

Licensing is the fair and sustainable solution: enabling creators to say yes or no to AI training, and to be compensated fairly for that use. An ethical, transparent and fair AI industry requires content that is legitimately sourced under Australia’s well-established copyright licensing arrangements, with full consent, credit and compensation provided to copyright owners.

I welcome the Australian Government’s recognition that AI must be regulated to ensure its safe and responsible development and use. (Supporting responsible AI: discussion paper, Department of Industry, Science and Resources, January 2024)

I support calls for the Australian Government to:

  • Safeguard the rights of Australian creators to ensure career sustainability and sector productivity.
  • Introduce comprehensive AI legislation and mandatory guardrails, including provisions for ICIP developed in collaboration with First Nations peoples.
  • Introduce a scheme to compensate Australians whose works have been used for AI training, potentially through a levy on big tech developers who must be made to pay for the work they’ve stolen from Australian creators. It may be too late to undo the theft, but it is not too late for creators to be paid.
  • Require AI-generated products to be labelled as wholly or partially AI-generated, in order for educational, research and cultural institutions – as well as consumers – to be able to easily identify AI-generated and AI-informed work.
  • Require AI companies to publish the carbon footprints and human rights reports of LLMs, so consumers can choose (and companies be more motivated to provide) ethical, transparent and fair AI.

Again, none of this is to say that I cannot imagine a future in which ethical, transparent and fair AI is incorporated into all of our personal, professional and creative lives.

However, while Generative AI platforms may seem free for the end user, for now they come at a price – a price that Australia should not be so willing to pay.

Subscribe or support

For future updates, subscribe to my free occasional enews.

If any of my work or writing has been of value to you, I’d appreciate you joining me as an advocate, ally or accomplice from just $2.50/month on Patreon.

Author: katelarsenkeys

Writer. Rabble-rouser. Arts, Cultural and Non-Profit Consultant.
