The Victorian Government released its updated cultural policy, Creative State 2028, at the end of last year. But I won’t be surprised if you missed it.
The launch was quickly overshadowed by the news that 24 previously-funded organisations would no longer receive support through its principal Creative Enterprise Program (CEP) – including my beloved Writers Victoria. Overall, CEP funding dropped 16% from $21.2 million for 93 organisations from 2022-25 to $17.9 million for 81 organisations from 2026-29, with several successful orgs further destabilised by surprise two-year grants instead of the four they’d applied for.
Last year’s Parliamentary Inquiry into cultural and creative industries in Victoria had already found that it has never been harder for Victorians to make a living in the cultural and creative industries – and these cuts have made it harder still.
So, it’s no wonder Creative State has stayed mostly under the radar, nor – given the State’s financial circumstances – that the critique it’s received has been mostly about not putting its money where its mouth is. ‘The new policy presents some promising directions,’ NAVA’s Penelope Benton wrote for ArtsHub, ‘but it stops short of committing investment or clear timelines.’
Lofty ambitions
Those promising directions include Artificial Intelligence (or more specifically, Generative AI), one of the focus areas of Creative State 2028 under Pillar 4 (‘an inspiring creative future’), which aims to ‘build a strong, resilient and prosperous creative sector and promote the value of creativity for more Victorians.’
‘The use of generative AI in content creation and creative practice represents a new chapter in the relationship between technology and creativity. With the rise of widely available consumer tools, generative AI is now used to make stories, songs, videos, and other content, challenging our conceptions of originality, creativity and even how we relate to art itself. Generative AI offers potential opportunities for creatives to incorporate emerging technologies into their practice, at the same time, the protection of artists and creative workers’ intellectual property is vital.’
Creative State 2028 also notes the Victorian Government’s support of the principles for Generative AI and creative work set out by Creative Australia, including the need to ‘support artists and users to ethically engage with AI.’
Low-bar outcomes
Unfortunately, it committed Creative Victoria to just a single action in order to achieve this aim (#20), which was to ‘partner with ACMI to host a forum on the opportunities and risks of generative AI for the creative industries and advocate for adequate protections and fair remuneration of intellectual property.’
And, as one of the people invited to the one-day Cultural Industries and Artificial Intelligence Forum at ACMI last month, I can verify that box has been ticked.
But when a closed, unpaid forum that many of us had to lose a day of work to attend (which limits diversity and representation in the room) is held under Chatham House Rule (which limits how we can talk about it) and doesn’t appear to capture or share what was discussed (which limits its usefulness), how low is the bar that lets us say this job is done?
And while I suspect I may have been invited because of my work on the issues and ethics of Generative AI, and perhaps even specifically to rain on some parades, I left the forum ethically depleted, discomforted by the difficult reality of navigating ‘how to be good humans in a digital world’, and queasy from the complicity of Generative AI in art making, speech writing and organisational governance when so many people (and our planet) are ‘going under the wheels’ of this problematic technology.
Ethical and practical concerns
My contributions to the forum included many of the ethical and operational objections I have written and spoken about before.
- Loss of copyright and intellectual property: As we saw with Meta’s theft of millions of Aussie books to train its Large Language Model (LLM), AI datasets are primarily based on unlicensed, uncredited and uncompensated source material. With technology moving faster than legal precedent, there are also concerns in terms of output, given it’s still unclear who is the legal author, owner or copyright holder of generated text, images or code. Which is all the more ironic now that some AI platforms have announced plans to charge and require attribution for generated content they used our stolen work to create.
- Loss of cultural workforce and art work: Not only is our work being stolen by AI platforms, but it’s also being put in competition with AI-generated ‘art’ (and even the first AI actors). But given our already-low earnings and precarious working conditions, even small disruptions to income or wellbeing may see the permanent loss of even more artists, and the contraction of Australian stories and perspectives.
- Loss of informed and unbiased decision making: With research showing AI search results are wrong 60% to 96% of the time, the use of AI-generated data dramatically increases organisations’ use of incorrect information from unverified source material. Users are also at risk of ‘hallucinations’ (false information created through the Generative AI process itself), as well as in-built and learned biases – all of which calls truth and trustworthiness into question, and can offset any time saved through automation with the additional workload required to fact-check and proof-read.
- Data insecurity and sovereignty issues: Those who upload their work or board papers to third-party AI generators don’t have control over those platforms’ data security (and no recourse when they inevitably leak). Nor do they have any way of ensuring the data they upload is only used to answer the questions they ask of it. Organisations that use AI to analyse member or audience data or operational performance also risk privacy concerns and loss of control of confidential information – which is particularly problematic in terms of cultural safety, Indigenous Cultural and Intellectual Property and Indigenous Data Sovereignty.
- Cookie-cutter creativity: AI-generated data decreases access to different perspectives, nuance, creativity and craft, which means organisations that use AI for strategic plans, pitches and external communications are increasingly easy to spot (as are AI-generated job and grant applications). This is a particularly hypocritical look for arts, cultural or other creative organisations and practitioners that use AI to write about creativity while simultaneously stifling it.
- Deskilling: While some Forum participants saw a need for more AI skills in the workforce, less was said about how AI actively deskills its users. Yet a 2025 study from MIT’s Media Lab shows that growing reliance on AI is decreasing cognitive ability, reading, learning, creative and critical thinking skills – without which, it becomes even more likely that we’ll consume incorrect or biased information as fact.
- Environmental destruction: Nearly every organisation I know of has some sort of strategic priority on reducing their environmental footprint, but few have considered how using Generative AI makes meeting those ambitions harder (if not impossible) – when a single AI query can use 10-50 times more energy than a standard internet search, and require a full bottle of water to cool its massive, wasteful and polluting server-farms.
- Social injustice: Similarly, many organisations with a social justice mandate (in which I include all those who provide access to art, culture or self-expression) fail to consider how their use of Generative AI makes them complicit in human rights abuses happening all over the world. In the week of the Forum alone, this included Grok’s sexual abuse of women and children through deep fake technology, ChatGPT becoming the biggest supporter of Trump and ICE, and Anthropic’s AI being deployed on the front lines of several illegal wars. Not to mention the ongoing impact of Generative AI in compounding racism and colonisation, ableism and eugenics – through privileging white and non-disabled datasets to produce harmful content about First Nations and people of colour, Deaf and disabled people, and other marginalised communities. Interestingly, while some of the arguments in favour of AI can also be ableist (such as Nanowrimo’s fumbled attempt to insist disabled writers need AI), some of its clearest potential benefits are in the areas of disability access and communication. However, this means the ethical weight of these problematic technologies falls disproportionately onto those it hurts most, who need it most, and who have the least power to change it.
- Loss of productivity, wellbeing and duty of care: In spite of industry rhetoric, recent reports indicate that AI doesn’t reduce work, it intensifies it. And health experts are already documenting an AI-induced health crisis characterised by poor mental health, psychosis and suicidality (along with inducements of criminality and harm). This requires organisations that use AI to expand how they address its impact on duty of care – and what they could be seen to be culpable in by normalising and endorsing the use of harmful platforms.
- Reputation and risk management: Artists and organisations that use Generative AI or act on AI-generated data also risk backlash from stakeholders and audiences when doing so appears hypocritical, or no longer reflects their shared values – leading them to divest.
- Legal and fiduciary duties: Any one of these issues can create legal exposure for boards and organisations, given unquestioned reliance on AI-generated data is not an excuse when boards fail to meet their responsibilities, and when adoption of AI into organisational policies or artistic processes doesn’t allow board, staff or collaborators to opt out or even voice their objections.
And that’s even before the dystopian reports of AIs bypassing their security protocols, refusing to be shut down (including in simulations in which human lives are at stake), and blackmailing humans that threaten their existence.
Separating the art from the platform
I spent much of the Forum contemplating the synergies between this discomfort and the ‘separating the art from the artist’ debate – but I think it goes further, in that my enjoyment of AI art is diminished regardless of who makes it. Because – for me, for now – the technology is inherently problematic.
This is not just another tech disruption that will begrudgingly-but-inevitably be adopted. Just as it’s not just a tool or raw material for us to innocently experiment with – without those experiments causing harm. Nor is it a future we can imagine without putting that future at risk – especially for those more vulnerable than ourselves, and those not in the room where these conversations are held.
Instead, it is a capitalist technological product built to the lowest possible logistical and ethical standards, which uses colonising, culturally violent and extractive business models that only work to entrench existing wealth, while creating deep collateral damage.
And while Generative AI may seem free to the end user, it always, always comes at a price. As Australian artist and advocate Matt Chun reminds us:
‘When we use AI, we exploit human artists and writers. AI is not ‘inspired’ by its source material: that material is stolen. When we use AI, we contribute to climate catastrophe. The water, energy consumption and access requirements of AI are unprecedented and exponential. [And] when we use AI, we help train the very technology that imperialist and colonial militaries are already using to surveil, target, oppress and mass-murder in places like Palestine and West Papua.’
And yes, if we can push those things into the backs of our consciences, Generative AI can appear both fun and useful. And yes, it’s overwhelmingly easier to generate a fully-rendered image, essay or tune than it is to spend thousands of hours learning how to do that yourself. But I doubt any of the silly little caricatures we’ve created while freely giving away all of our personal information and contributing to so much harm will be much consolation when it’s too late.
Justifications and advocacy
Over the course of the Forum, I heard a range of obfuscations and justifications from artists and organisations that have chosen to use these technologies – either unknowingly, or in spite of their hypocrisies and deficits. And for those proud or excited about their work in the AI space, it’s easy to understand how hearing that their work is causing harm may be deeply uncomfortable, lead to defensiveness rather than reflection, or to dismissing the people who have raised concerns.
Which means those of us who object to Generative AI can be cast as clueless or naive technophobes. And as a digital evangelist and practitioner myself, it’s been strange and uncomfortable to witness my own transition to a digital nay-sayer over the last year – specifically around the issues and ethics of Generative AI.
When Generative AI has already been so deeply embedded into third-party platforms that we don’t always know when we’re using it, it’s not surprising concerns are often dismissed with harrumphs that ‘the cat is out of the bag,’ ‘the genie is out of the bottle’ or ‘the AI train has already left the station.’ Or, as Australian cartoonist David Blumenstein writes:
‘Any artist who brings up the ethics behind AI art receives the full fury of babies who think you’re taking their toy away.’
(Though it’s interesting to note how many of these comments come from those with a vested interest in AI business-as-usual.)
But that doesn’t mean we shouldn’t make a better bottle, or decide not to use that bottle until doing so doesn’t cause as much harm – particularly when doing so is both in everyone’s best interests and completely within our control. Because this is not a neutral conversation, nor one that can be separated from our work – and ignoring it or silencing it only exacerbates the harm it can cause.
No, not-yet or conditional-yes
Despite its challenges, I was grateful to Forum participants for providing evidence of skepticism, resistance and the different forms of advocacy happening in spite of our sector’s exhaustion. And grateful for the reminder that our processes matter as much as our outcomes, and that those with the privilege of merely playing with AI have a responsibility to make it better for those it harms and those who need it most.
But that doesn’t have to be through boycotts and picket lines (though the QuitGPT campaign is gaining momentum). To use another cliche, ‘the AI ship has sailed,’ and we’ve all been made complicit in the debris it leaves in its wake.
So, we can start by asking: how do we say ‘no’ or ‘not yet’ to what we can avoid using, or a ‘conditional yes’ that mitigates the effects of what we can’t?
- Such as organisations that choose to put a No-AI policy in place, or make a strategic and values-led decision not to use Generative AI until more ethical and sustainable platforms are available.
- Or that specify not only how AI can be used (in line with organisational values, policies and legal responsibilities) but also how its effects will be mitigated – both internally (such as fact-checking) and externally (through advocacy, lobbying or offsets), which could include:
- Calling on the Australian Government for AI legislation, guardrails and policy assurances, which vanished from last year’s Productivity Commission report;
- Applying direct pressure to Generative AI providers to publish the carbon footprints and human rights reports of their LLMs, so consumers can choose (and companies be more motivated to provide) greener and more ethical AI;