My policy on AI
I believe in our humanity
We humans can do so much better than trusting our culture, lives and economy to an advanced version of autocomplete. For many reasons, I do not use generative AI tools in my work, and I humbly suggest you reject them too.
One of the advantages of being older is having seen situations often enough to spot patterns, and being able to draw on previous solutions to develop new ones – “Been there, done that!” is a saying for a reason. The difference between that and what a generative AI model does is that I actually understand and can critically interpret those patterns based on real-world experience. Generative AI, by contrast, uses a probabilistic model, trained on previous human work, to statistically guess at them. As AI has no actual ability to think, only to perform probabilistic maths, these guesses inherently come without any consideration of quality or suitability. Or to put it another way, generative AI simply remixes what has come before, without understanding or critical analysis of the content it extrudes, or the context of its use. It’s not even a new idea. In the 1940s, George Orwell wrote in his book, 1984:
“Here were produced rubbishy newspapers containing almost nothing except sport, crime and astrology, sensational five-cent novelettes, films oozing with sex, and sentimental songs which were composed entirely by mechanical means on a special kind of kaleidoscope known as a versificator.”
It’s quite funny to see George Orwell describe the AI slop extruded from ChatGPT, Gemini, Copilot, Midjourney et al over 70 years before it started getting thrust into every bit of our computing lives. It’s a little less funny when the companies involved are only pushing it in a desperate attempt to stoke growth out of a mature computing industry that’s run out of ideas.
So, I do not use AI tools in my work, primarily because all they can offer is poor quality remixes of what has come before, remixes that cannot understand what a client needs, cannot genuinely interpret what a client says, and cannot suggest to a client when an idea is a bad one (designers sometimes say no, AI never does), or offer reasoned alternative pathways. But there’s more to this than just wanting to produce the best work for clients.
I also do not use AI tools because they place vast additional stress on the electricity, water and ecological systems we all rely on, at a time when we desperately need to use less, not more.
I do not use AI tools because the AI companies themselves have conducted research showing that AI use has significant negative cognitive effects on users – making them less able to produce their own work and less able to think critically – and can lead to psychological damage so great that there have already been deaths directly caused by AI systems.
I do not use AI tools because they are reliant on the wholesale theft of work from my fellow artists, designers, writers, musicians and other creative people in order to train their AI models, without our consent, compensation, notification or redress.
I do not use AI tools because they rely on incredibly poorly paid workers in the “majority world” to create the guardrails AI companies like to talk about. Guardrails are the endlessly evolving attempts to stop damaging content being created, built by workers who moderate, analyse, categorise and label frankly horrific material. AI tools rely on the exploitation of tens of thousands of desperately poor workers in other countries – please remember that.
I do not use AI tools because they embed and propagate negative biases that are impossible to remove from training data, and further the interests of one group of people over another. There are brown people sitting in prison cells right now due to the embedded biases and racism of AI systems. Brown people are dying from preventable, curable illnesses right now because the training data these systems use is skewed towards white people like me, simply because white people have better access to health care.
I do not use AI tools because doing so once again cedes ever more power and control to massive multinational corporations that answer only to their shareholders, and have no long-term interest in our communities or planet. Think about why corporations are so desperate to force this technology on us – they hope for a world where everything is ever more automated, where our work is ever more deskilled and low paid, and where our reliance on them is even greater.
AI is not inevitable, which is exactly why AI proponents keep on insisting in increasingly shrill tones that it is – it is an attempt to remove your agency and to make you believe you have no hope or choice but to “embrace”, “harness” or “unleash” it in your life. But AI is no more inevitable than slavery, which people in the past thought would literally lead to the collapse of western civilisation if it was abolished. AI is no more inevitable than access to radioactive materials, which you used to be able to buy in children’s chemistry sets, and companies put into false teeth to give them a healthy whitening glow, until society realised that was an incredibly reckless idea, and banned it. AI is no more inevitable than dying of easily treatable illnesses, or in horrendous agony, as poor people regularly did before the NHS was founded. I repeat, AI is not inevitable.
I believe in humans, in all our messiness and imperfections. But that belief only works in a world where our messiness and imperfections aren’t amplified by automated systems into dystopias. AI use is horrendously damaging for human society, other animals and the wider environment – it is the asbestos of our times – and I will not be complicit in the poisoning of our world, physical and mental, with it.
Copyright ©2025 David Earls
I don’t use cookies or other creepy tracking technology on my site.
∞