The cost of AI

March 31st 2025

This post was originally published on 4/11/2024 and is being updated with new content regularly. I am not an AI expert (nor do I fancy polluting this blog with too much on the topic) but feel it will be useful to collect various media on the subject into one post.

A recent update you should check out is this brilliant talk by Léonie Watson, titled “AI and Accessibility: the Good, the Bad, and the Bollocks”, at the FullFrontal conference last year.


Update 5/3/25: Save the AI :-P

text: save the ai

Do your bit to help advance the AI revolution by sitting in the dark, drinking less water, and showering less. – Save the AI https://savethe.ai/

mastodon.ie/@IanMoore3000/114099075869734276, shared on Mastodon by Aral Balkan

Even though the AI discussions have been going on for ages now, I still have not had time to fully explore the topic or tools myself. To be honest, I am torn on this subject ~ and very concerned about two aspects particularly: the data used to train the LLMs (and in turn the flawed outputs), and the question of sustainability. I can see many good uses which show that AI tools can be of benefit. But so far, in my view, nothing really justifies the cost.

As there is much to learn about how AI is evolving, I wanted to put together a snapshot in time in the form of a list of articles and information out there at the moment. Then Jeremy posted about his day at UX Brighton and his thoughts on what was left Unsaid ~ that triggered this post, as he summed up my worries: the lack of even a mention of the word ‘copyright’ in the context of the sources used for machine learning, the lack of transparency and discussion of the ethics underlying the data used to train the LLMs, and not enough emphasis on the concerns over growing energy consumption…

robot with text: there is no AI just other people's data
via KubikPixel, Nov 2024

It is baffling to me that so many aspects seem to be missing from articles and conversations:

  • With the fast evolution of digital and automation tools ~ why are we not more concerned with the copyright of the sources used, or the manipulation of data used in the training of the LLMs?
  • In a time when we hope to make progress with inclusion and respect for all ~ why are we not talking more about the datasets used which have been proven to produce false outputs and show bias?
  • While we aim for sustainable practices for delivering our web content and worry about the increasing energy consumption ~ why are there not more conversations and actions regarding the extreme energy usage required to build, deploy, train and use AI tools?

Research & policy

While much of the reading list included in this post is quite depressing, there are some good things afoot ~ such as ongoing work by the Digital Good Network, who are doing some excellent research into Public Voices in AI, hoping to give everyone a voice and to advise governments and policy makers by sharing their findings and ever-growing insights.

disclaimer: we (eyedea studio) designed and built their site a few years ago ~ and know a few of the Sheffield team quite well, having worked on various projects over the years. And we know that they are good people ~ and amazing researchers too.

Four Frictions: or, How to Resist AI in Education

This is an excellent piece by Sonja Drimmer & Christopher J. Nygren, published on publicbooks.org. I came across it on Mastodon on 21/12/2025 ~ do read this thread by Prof. Emily M. Bender.

I would highly recommend this article, especially if you are a student or an educator.

Related reading

(ordered chronologically, updated February 2026)

Further reading

Official docs

Snapshots of comments on the topic on Mastodon

My concerns

(the following is content written at the date of publishing of this post 4/11/2024)

As someone who works in both design and education, I feel that the bias in coverage towards the mere uses of AI tools is worrying. Media outlets cover the topic mainly in terms of the latest and fastest ways these tools might improve workflows and reduce time spent on certain tasks, like searches for example. This means that many people will not even be aware of the negative aspects involved and cannot make informed decisions.

Similar patterns…

It very much reminds me of how the topic of accessibility had to evolve to become the norm it is today (mostly) ~ thanks to the many people discussing in detail how important accessibility is for good user-centred and inclusive design. I have always considered this one of the core principles of our work, starting in my early days as a web designer, going back to the early 2000s (yikes!).

Back then, accessibility needed to be promoted by many to become the de facto standard. Many articles, case studies and demos of assistive tools had to be produced. People shared their experiences, standards were worked on and evolved, and much discussion was had before the importance of accessibility finally became part of any design work by default. This took a lot of work and effort ~ but progress was made. These days, accessibility is talked about as one essential part of working on the web and digital products. Overall, we have come a long way and accessibility continues to improve hugely across much of the web ツ

For my design work, I always included accessibility as part of early conversations with my clients about the design and choice of technology. They often know little about this but always understand its importance and usually begin to consider accessibility for all their online outputs. They appreciate learning about it and agree that people and their respective abilities have to be part of the design and build process of their websites.

For my teaching, accessibility is at the core of what I teach, hoping to instil the right principles in my students and emphasise that user-centred as well as sustainable design are vital for an inclusive web. I’m proud to say that most of my students take on board these principles and strive to create websites that will work for everyone, regardless of their abilities or choice of devices and browsers, and that are optimised to demand as little energy as possible when accessed.

All this shows to me that only education and knowledge will enable people to make an informed choice. Principles like accessibility and sustainability will influence the direction we go in ~ do we care about people? Do we strive for a free, open and inclusive web and a healthy planet, or don’t we?

Do we respect the work people do, individually and in collaboration and respect their copyright?
Do we want to fight bias and discrimination by people and corporations?
Do we care how new technologies are built and what effect their energy consumption has on our environment?

The way forward

If we are to maintain the web as the brilliant platform it was meant to be, surely we need to keep working together to preserve the human aspect of online content and to focus on fair and inclusive outputs while considering the effect on our planet. We cannot divorce the progress new technology brings from the reality of how it will affect people and our environment.

We need to insist on transparency and aim to educate those around us on all the good, the bad and the ugly that the arrival of AI involves. The fact that so many apps we use now have built-in AI functions without us asking for them is not a good thing. Online services now often include some form of data collection for LLM training purposes ~ too many are opt-out only. This will lead to many people involuntarily feeding the machines without their knowledge. This is just wrong!

It is up to those of us who work on the web to inform others and spread the word. I am now aiming to include AI in conversations with both clients and students. I am no expert on AI, but as long as we point towards this information when it is published by educated people in the field, we can at least raise awareness to some degree.


AI and Accessibility: the Good, the Bad, and the Bollocks

Léonie Watson / FFConf 2024

Depending on what you read, and who you believe, AI is either the ultimate solution or armageddon in motion, so in this talk, Léonie is going to cut through the clickbait, dodge the doomscrollers, and focus on the facts to bring you the good, the bad, and the bollocks of AI and accessibility.

Remy Sharp, Full Frontal Organiser, 3/2/2025

Update 19/11/25:
this is a brilliant snippet – too good not to share ツ
I saw this on Mastodon today and I can relate so much! Emma Thompson’s reaction seems to have struck a chord with many.


Interesting tools & explorations:

Spawning is building an ecosystem-wide solution to meet the needs of rights holders and AI developers. Spawning’s Do Not Train Tool Suite consolidates machine-readable opt-out methods around the Do Not Train Registry, making it simple to declare data preferences and to respect them.

spawning.ai, as seen 26/4/2024

LAION-5B is an open-source foundation dataset used to train AI models such as Stable Diffusion. It contains 5.8 billion image and text pairs—a size too large to make sense of. In this visual investigation, we follow the construction of the dataset to better understand its contents, implications and entanglements.

Visual Story • Models all the Way Down

Talking Art with Steven Zapata

published 17 Oct 2022, Steven Zapata
Illustrator and Concept Designer