Raining on the parade of Generative AI in the arts

The Victorian Government released its updated cultural policy, Creative State 2028, at the end of last year. But I won’t be surprised if you missed it.

The launch was quickly overshadowed by the news that 24 previously funded organisations would no longer receive support through its principal Creative Enterprise Program (CEP) – including my beloved Writers Victoria. Overall, CEP funding dropped from $21.2 million for 93 organisations in 2022-25 to $17.9 million for 81 organisations in 2026-29, with several of the successful organisations further destabilised by receiving two-year grants instead of the four-year grants they’d applied for.

Last year’s Parliamentary Inquiry into cultural and creative industries in Victoria had already found that it had never been harder for Victorians to make a living in the cultural and creative industries – and these cuts made it harder still.

So, it’s no wonder Creative State has stayed mostly under the radar, nor – given the State’s financial circumstances – that the critique it’s received has been mostly about not putting its money where its mouth is. ‘The new policy presents some promising directions,’ NAVA’s Penelope Benton wrote for ArtsHub, ‘but it stops short of committing investment or clear timelines.’

Lofty ambitions

That includes the area of Artificial Intelligence (or, specifically, Generative AI), one of the focus areas of Creative State 2028 under Pillar 4 (‘an inspiring creative future’), which aims to ‘build a strong, resilient and prosperous creative sector and promote the value of creativity for more Victorians.’

‘The use of generative AI in content creation and creative practice represents a new chapter in the relationship between technology and creativity. With the rise of widely available consumer tools, generative AI is now used to make stories, songs, videos, and other content, challenging our conceptions of originality, creativity and even how we relate to art itself. Generative AI offers potential opportunities for creatives to incorporate emerging technologies into their practice, at the same time, the protection of artists and creative workers’ intellectual property is vital.’

Creative State 2028 also notes the Victorian Government’s support of the principles for Generative AI and creative work set out by Creative Australia, including the need to ‘support artists and users to ethically engage with AI.’

Low-bar outcomes

Unfortunately, it committed Creative Victoria to just one action (#20) to achieve this aim: to ‘partner with ACMI to host a forum on the opportunities and risks of generative AI for the creative industries and advocate for adequate protections and fair remuneration of intellectual property.’

And, as one of the people invited to the one-day Cultural Industries and Artificial Intelligence Forum at ACMI last month, I can verify that this box has definitely been ticked.

But when a closed, unpaid forum that many of us had to lose a day of work to attend (which limits diversity and representation in the room) is held under the Chatham House Rule (which limits how we can talk about it) and doesn’t appear to capture or share what was discussed (which limits its usefulness), how low is the bar that lets us say this job is done?

And while I suspect I may have been invited because of my work on the issues and ethics of Generative AI, and perhaps even specifically to rain on some parades, I left the forum ethically depleted, discomforted by the difficult reality of navigating ‘how to be good humans in a digital world’, and queasy from the complicity of Generative AI in art making, speech writing and organisational governance when so many people (and our planet) are ‘going under the wheels’ of this problematic technology. 

Ethical and practical concerns

My contributions to the forum included many of the ethical and operational objections I have written and spoken about before. 

  • Loss of copyright and intellectual property: As we saw with Meta’s theft of millions of Aussie books to train its Large Language Model (LLM), AI datasets are primarily built on unlicensed, uncredited and uncompensated source material. With technology moving faster than legal precedent, there are also concerns about output – it remains unclear who is the legal author, owner or copyright holder of any text, images or code AI platforms generate for their users. Which makes it all the more ironic that AI platforms have announced plans to charge for, and require attribution of, generated content they used our stolen work to create.
  • Loss of cultural workforce and art work: Not only is our work being stolen by AI platforms, it’s also being put in competition with AI-generated ‘art work’ (and even the first AI actors). Given our already-low earnings and precarious working conditions, even small disruptions to income or wellbeing may see the permanent loss of even more artists, and the contraction of Australian stories and perspectives.
  • Loss of informed and unbiased decision making: With research showing AI search results are wrong 60% to 96% of the time, the use of AI-generated data dramatically increases organisations’ use of incorrect information from unverified source material. Users are also at risk of ‘hallucinations’ (false information created through the Generative AI process itself), as well as in-built and learned biases (for which many AI models have already been criticised) – all of which calls truth and trustworthiness into question, and can offset any time saved through automation with the additional workload required to fact-check and proofread.
  • Data insecurity and sovereignty issues: Those who upload their work or board papers to third-party AI generators don’t have control over those platforms’ data security (and no recourse when they inevitably leak). Nor do they have any way of ensuring the data they upload is only used to answer the questions they ask of it. Organisations that use AI to analyse member or audience data or operational performance also risk privacy concerns and loss of control of confidential information – which is particularly problematic in terms of cultural safety, Indigenous Cultural and Intellectual Property and Indigenous Data Sovereignty. 
  • Legal and fiduciary duties: Any one of these problems can create legal exposure for boards and board members, and unquestioned reliance on AI-generated data is no excuse when boards fail to meet their responsibilities.
  • Cookie-cutter creativity: AI-generated data decreases access to different perspectives, nuance, creativity and craft, which means organisations that use AI for strategic plans, pitches and external communications are increasingly easy to spot (as are AI-generated job and grant applications). This is a particularly hypocritical look for arts, cultural or other creative organisations and practitioners that use AI to write about creativity while simultaneously stifling it. 
  • Deskilling: While some Forum participants saw a need for more AI skills in the workforce, less was said about how AI actively deskills its users. A 2025 study from MIT’s Media Lab showed that growing reliance on AI also reduces practice, diminishing cognitive ability, reading, learning, and creative and critical thinking – without which, it becomes even more likely that we’ll consume incorrect or biased information as fact.
  • Environmental destruction: Nearly every organisation I know of has some sort of strategic priority on reducing its environmental footprint, but few have considered how using Generative AI makes meeting those ambitions harder (if not impossible) – when a single AI query can use 10 to 50 times more energy than a standard internet search, and require a full bottle of water to cool the massive, wasteful and polluting server farms behind it.
  • Social injustice: Similarly, many organisations with a social justice mandate (in which I include all those who provide access to art, culture or self-expression) fail to consider how their use of Generative AI makes them complicit in human rights abuses happening all over the world. In the week of the Cultural Industries and Artificial Intelligence Forum alone, this included Grok’s sexual abuse of women and children through deepfake technology, ChatGPT becoming the biggest supporter of Trump and ICE, and Anthropic’s AI being deployed on the front lines of several illegal wars. Not to mention the ongoing impact of Generative AI in compounding racism and colonisation, ableism and eugenics – through privileging white and non-disabled datasets to produce harmful content about First Nations people and people of colour, Deaf and disabled people, and other marginalised communities. Interestingly, while some of the arguments in favour of AI can also be ableist (such as NaNoWriMo’s fumbled attempt to say disabled writers need AI), some of the strongest arguments in its favour are its potential benefits for disability access and communication. However, this means the ethical weight of these problematic technologies falls disproportionately onto those it hurts most, who need it most, and who have the least power to change it.
  • Reputation and risk management: Artists and organisations that use Generative AI or act on AI-generated data also risk backlash from stakeholders and audiences when doing so appears hypocritical, or no longer reflects their shared values – leading those supporters to divest.

Separating the art from the platform

I spent much of the Forum contemplating the synergies between this discomfort and the ‘separating the art from the artist’ debate – but I think it goes further, in that my enjoyment of AI art is diminished regardless of who makes it. Because – for me, for now – the technology is inherently problematic.

It’s not just another tech disruption that will begrudgingly-but-inevitably be adopted. It’s not merely a tool or raw material for us to innocently experiment with without those experiments causing harm. Nor is it a future we can imagine without putting that future at risk – especially for those more vulnerable than ourselves, and those not in the room where these conversations are held.

Instead, it is a capitalist technology offering built to the lowest possible standards, one that relies on colonising, culturally violent and extractive business models that only entrench existing wealth while creating deep collateral damage.

And while Generative AI may seem free to the end user, it always, always comes at a price. As Australian artist and advocate Matt Chun reminds us:

‘When we use AI, we exploit human artists and writers. AI is not ‘inspired’ by its source material: that material is stolen. When we use AI, we contribute to climate catastrophe. The water, energy consumption and access requirements of AI are unprecedented and exponential. [And] when we use AI, we help train the very technology that imperialist and colonial militaries are already using to surveil, target, oppress and mass-murder in places like Palestine and West Papua.’

And yes, okay, it can be funny. And yes, of course, it’s much easier to generate a fully-rendered image, essay or tune than it is to spend thousands of hours learning how to do that yourself. But all of those silly little caricatures we created while freely giving away all our personal information will be little consolation when it’s too late.

Justifications and advocacy

Over the course of the Forum, I heard a range of obfuscations and justifications from artists and organisations that choose to use these technologies – either unknowingly, or in spite of their hypocrisies and deficits.

Within organisations, those issues can be deepened when boards, staff or collaborators aren’t allowed to opt out of AI that’s been enshrined in policies or artistic processes, or even to voice their objections out loud.

Those of us who object to Generative AI can be cast in the role of technophobes. And as a digital evangelist and practitioner myself, it’s been strange and uncomfortable to witness my own transition to a digital nay-sayer over the last year – specifically around the issues and ethics of Generative AI.

And with Generative AI already so deeply embedded into third-party platforms that we don’t always know when we’re using it, it’s not surprising that our concerns are often dismissed with harrumphs that ‘the cat is out of the bag,’ ‘the genie is out of the bottle’ or ‘the AI train has already left the station.’ (Though it’s interesting to note that many of these comments come from those with a vested interest in AI business-as-usual.)

But that doesn’t mean we can’t make a better bottle, or decide not to use that bottle until doing so doesn’t cause as much harm – particularly when doing so is both in everyone’s best interests and completely within our control. We can, we should, and we need to. Because this is not a neutral conversation, nor one that can be separated from our work – and ignoring it or silencing it only exacerbates the harm it can cause.

No, not-yet or conditional-yes

But in spite of its challenges, I was grateful to the Forum for providing evidence of skepticism, resistance and the different forms of advocacy that are happening despite our sector’s exhaustion. And grateful for the reminder that our processes matter as much as our outcomes, and that those with the privilege of merely playing with AI have a responsibility to make it better for those it harms and for those who need it most.

But that doesn’t have to mean boycotts and picket lines. To use another cliché, ‘the AI ship has sailed,’ and we have all been made complicit in the debris it leaves in its wake.

But it can start by asking: how do we say ‘no’ or ‘not yet’ to what we can avoid using, or a ‘conditional yes’ that mitigates the effects of what we can’t?

  • Such as organisations that choose to put a No-AI policy in place;
  • Or choose not to use Generative AI until more ethical and sustainable platforms are available;
  • Or specify not only how AI can be used (in line with organisational values, policies and legal responsibilities) but also how its effects will be mitigated:
    • Both internally, such as fact-checking;
    • And through external lobbying, such as:
      • Calling for the legislation, guardrails and policy assurances that vanished from last year’s Productivity Commission report; and/or
      • Applying direct pressure to Generative AI providers to publish the carbon footprints and human rights reports of their LLMs, so consumers can choose (and companies be more motivated to provide) greener and more ethical AI.

Author: katelarsenkeys

Writer. Rabble-rouser. Arts, Cultural and Non-Profit Consultant.
