The cost of AI
March 31st 2025

This post was originally published on 4/11/2024 and is being updated with new content regularly. I am not an AI expert (nor do I fancy polluting this blog with too much on the topic) but I feel it will be useful to collect various media on the subject into one post.
A recent update you should check out is this brilliant talk by Léonie Watson, titled “AI and Accessibility: the Good, the Bad, and the Bollocks”, at the FullFrontal conference last year.
Update 5/3/25: Save the AI :-P
Do your bit to help advance the AI revolution by sitting in the dark, drinking less water, and showering less. – Save the AI https://savethe.ai/
mastodon.ie/@IanMoore3000/114099075869734276, shared on Mastodon by Aral Balkan
Even though the AI discussions have been going on for ages now, I still have not had time to fully explore the topic or tools myself. To be honest, I am torn on this subject ~ and very concerned about two aspects particularly: the data used to train the LLMs (and in turn the flawed outputs) and the aspect of sustainability. I can see many good uses which show that AI tools can be of benefit. But so far, in my view, nothing really justifies the cost.
As there is much to learn about how AI is evolving, I wanted to put together a snapshot in time in the form of a list of articles and information out there at the moment. Then Jeremy posted about his day at UX Brighton and his thoughts on what was left Unsaid ~ this prompted this post, as he summed up my worries: the lack of even a mention of the word ‘copyright’ in the context of the sources used for machine learning, the lack of transparency and discussion of the ethics underlying the data used to train the LLMs, and not enough emphasis on the concerns over the growing energy consumption…

It is baffling to me that so many aspects seem to be missing from articles and conversations:
- With the fast evolution of digital and automation tools ~ why are we not more concerned with the copyright of the sources used, or the manipulation of data used in the training of the LLMs?
- In a time when we hope to make progress with inclusion and respect for all ~ why are we not talking more about the datasets used which have been proven to produce false outputs and show bias?
- While we aim for sustainable practices for delivering our web content and worry about the increasing energy consumption ~ why are there not more conversations and actions regarding the extreme energy usage required to build, deploy, train and use AI tools?
Research & policy
While much of the reading list included in this post is quite depressing, there are some good things afoot, such as the ongoing work by the Digital Good Network, who are doing some excellent research into Public Voices in AI, hoping to give everyone a voice and to advise governments and policy makers by sharing their findings and ever-growing insights.
disclaimer: we (eyedea studio) designed and built their site a few years ago ~ and know a few of the Sheffield team quite well, having worked on various projects over the years. And we know that they are good people ~ and amazing researchers too.
Four Frictions: or, How to Resist AI in Education
This is an excellent piece by Sonja Drimmer & Christopher J. Nygren, published on publicbooks.org. I came across it on Mastodon on 21/12/2025; do also read this thread by Prof. Emily M. Bender.
I would highly recommend this article, especially if you are a student or an educator.
Related reading
(ordered chronologically, updated February 2026)
- Generative AI: What You Need To Know, Baldur Bjarnason
- How AI assistance impacts the formation of coding skills, Anthropic, 29/01/26
- Making human learning visible in a world of invisible AI, Wonkhe, 26/01/26
- Giving University Exams in the Age of Chatbots, Ploum, 19/01/26
- AI companies will fail. We can salvage something from the wreckage, Cory Doctorow for The Guardian, UK, 18/01/26
- LLMs are a 400-year-long confidence trick, Tom Renner, 13/01/26
- AI Coding Assistants Are Getting Worse, Jamie Twiss, 08/01/26
- Four Frictions: or, How to Resist AI in Education, Sonja Drimmer & Christopher J. Nygren, 16/12/25
- Privacy concerns raised as Grok AI found to be a stalker’s best friend, Graham Cluley, 8/12/25
- I don’t care how well your “AI” works, fiona fokus, 25/11/25
- Leaving Academia.edu: ‘AI’, Coercion, and Information Loss, Peter Tarras, 19/9/25
- UK government trial of M365 Copilot finds no clear productivity boost, Paul Kunert, 4/9/25
- Where’s the Shovelware? Why AI Coding Claims Don’t Add Up, Mike Judge, 3/9/25
- The air is hissing out of the overinflated AI balloon, Steven J. Vaughan-Nichols for The Register, 25/8/25
- An example of what I consider a misleading article about AI and the environment, Andy Masley, 24/8/25
- AI thrives where education has been devalued, Kenan Malik for The Observer, 2/8/2025
- Content Independence Day: no AI crawl without compensation!, Matthew Prince for Cloudflare, 1/7/2025
- Make Fun Of Them, Edward Zitron, 30/6/2025
- Disney And NBCUniversal Sue AI Company Midjourney For Copyright Infringement, by Ted Johnson, 11/06/2025
- ChatGPT Lost a Chess Game to an Atari 2600, by Jon Martindale, 10/06/2025
- Apple Just Pulled the Plug on the AI Hype. Here’s What Their Shocking Study Found, by Rohit Kumar Thakur, 8/6/2025
- Large Language Muddle, by Jason Santa Maria, 3/6/2025
- Therapy Chatbot Tells Recovering Addict to Have a Little Meth as a Treat, by Victor Tangermann, 2/6/2025
- Economics & labor rights in AI skepticism, by Henry Desroches, 2/6/2025
- The promise that wasn’t kept, Sala Malam-Naylor, 28/5/2025
- Stop saying that AI is just a tool and it only matters how it is used, by Frank Elavsky, 25/5/2025
- We did the math on AI’s energy footprint. Here’s the story you haven’t heard, by James O’Donnell and Casey Crownhart for MIT Technology Review, 20/5/2025
- People Are Losing Loved Ones to AI-Fueled Spiritual Fantasies, Miles Klee for RollingStone, 4/5/2025
- Tech Companies Apparently Do Not Understand Why We Dislike AI, by Soatok, 4/5/2025
- “AI-first” is the new Return To Office, Anil Dash, 19/4/2025
- Elon Musk Reportedly Doing Something Horrid to Power His AI Data Center, Joe Wilkins for Futurism, 11/4/2025
- OpenAI CEO Sam Altman Continues to Completely Miss the Point With AI Art, Harrison Jacobs, for Artnews, 9/4/2025
- Vibe Coding: Symptom or Solution?, Ruhollah Delpak, 9/4/2025
- Denial, Jeremy Keith, 7/4/2025
- Karpathy’s ‘Vibe Coding’ Movement Considered Harmful, Namanyay, 27/3/2025
- A Forgotten Manifesto: Mozilla Betrays Its Own Values on Open Source AI, Sam Johnston, 3/3/2025
- Study after study also shows that AI assistants erode the development of critical thinking skills and knowledge retention, Christine Lemmer-Webber on Mastodon, 16/2/2025
- The Case Against AI-Generated Users, by Dan Perkel, Angela Kochoska and Will Notini for ideo.com, January 2025
- AI is Creating a Generation of Illiterate Programmers, by Namanyay, 24/1/2025
- Microsoft eggheads say AI can never be made secure – after testing Redmond’s own products, The Register, 19/1/2025
- AI is guzzling gas, heated world, 19/12/2024
- Generative AI and Climate Change Are on a Collision Course, by Sasha Luccioni for wired.com, 18/12/2024
- AI’s Energy Demands Are Fueling the Climate Crisis, by Omar Ocampo and A.J. Schumann for inequality.org, 16/12/2024
- Every time you use ChatGPT, half a litre of water goes to waste, The Sydney Morning Herald, 2/12/2024
- Unsaid, Jeremy Keith, 2/11/24
- AIs show distinct bias against Black and female résumés in new study, Kyle Orland for ars technica, 1/11/24
- Fixing AI’s energy crisis, Katherine Bourzac for nature, 17/10/24
- I am tired of AI, Bas Dijkstra, 17/9/24
- What price?, Jeremy Keith, 10/9/24
- Why A.I. Isn’t Going to Make Art, Ted Chiang for The New Yorker, 31/8/24
- Study: Transparency is often lacking in datasets used to train large language models, Adam Zewe | MIT News, 30/8/24
- Cut the ‘AI’ bullshit, UCPH, Denise Utochkin, 12/8/24
- Power-hungry AI is driving a surge in tech giant carbon emissions. Nobody knows what to do about it, theconversation.com, 8/7/2024
- Electricity grids creak as AI demands soar, Chris Baraniuk, 21/5/24
- AI isn’t useless. But is it worth it?, Molly White, 17/4/24
- How much electricity does AI consume?, James Vincent for The Verge, 16/2/24
- The AI Boom Could Use a Shocking Amount of Electricity, Lauren Leffer, 13/10/23
Further reading
- AI Consequences
- Simon Willison’s blog
- 404 media: stories on artificial intelligence
- The Register : articles on the topic of AI
Official docs
- Guidance: AI Playbook for the UK Government, gov.uk
- AI and education: guidance for policy-makers, unesco.org
Snapshots of comments on the topic on Mastodon
My concerns
(the following was written at the original publication date of this post, 4/11/2024)
As someone who works in both design and education, I feel that the bias in coverage towards the mere uses of AI tools is worrying. Media outlets cover the topic mostly in terms of the latest and fastest ways these tools might improve workflows and reduce time spent on certain tasks, like searches for example. This means that many people will not even be aware of the negative aspects involved and cannot make informed decisions.
Similar patterns…
It very much reminds me of how the topic of accessibility had to evolve to become the norm it is today (mostly). Thanks to the many people discussing in detail how important accessibility is for good user-centred and inclusive design, I have always considered it one of the core principles of our work. This started in my early days as a web designer, going back to the early 2000s (yikes!).
Back then, accessibility needed to be promoted by many before it became a de facto standard. Many articles, case studies and demos of assistive tools had to be produced. People shared their experiences, standards were worked on and evolved, and much discussion was had before the importance of accessibility finally became part of any design work by default. This took a lot of work and effort, but progress was made. These days, accessibility is talked about as one essential part of working on the web and digital products. Overall, we have come a long way and accessibility continues to improve hugely across much of the web ツ
For my design work, I always included accessibility as part of early conversations with my clients about the design and choice of technology. They often know little about it but always understand its importance and usually begin to consider accessibility for all their online outputs. They appreciate learning about it and agree that people and their respective abilities have to be part of the design and build process of their websites.
For my teaching, accessibility is at the core of what I teach, hoping to instill the right principles in my students and emphasize that user-centred as well as sustainable design are vital for an inclusive web. I’m proud to say that most of my students take on board these principles and strive to create websites that will work for everyone, regardless of their abilities or choice of devices and browsers, and that are optimised to demand as little energy as possible when accessed.
All this shows me that only education and knowledge will enable people to make an informed choice. Principles like accessibility and sustainability will influence the direction we go in ~ do we care about people? Do we strive for a free, open and inclusive web and a healthy planet, or don’t we?
Do we respect the work people do, individually and in collaboration and respect their copyright?
Do we want to fight bias and discrimination by people and corporations?
Do we care how new technologies are built and what effect their energy consumption has on our environment?
The way forward
If we are to maintain the web as the brilliant platform it was meant to be, surely we need to keep working together to keep the human aspect of online content and focus on fair and inclusive outputs while considering the effect on our planet. We cannot divorce the progress new technology brings from the reality of how it will affect people and our environment.
We need to insist on transparency and aim to educate those around us on all the good, the bad and the ugly that the arrival of AI involves. The fact that so many apps we use now have built-in AI functions without us asking for them is not a good thing. Online services now often include some form of data collection for LLM training purposes, and too many are opt-out only. This will lead to many people involuntarily feeding the machines without their knowledge. This is just wrong!
It is up to those of us who work on the web to inform others and spread the word. I now aim to include AI in conversations with both clients and students. I am no expert on AI, but as long as we point to this information when it is published by educated people in the field, we can at least raise awareness to some degree.
AI and Accessibility: the Good, the Bad, and the Bollocks
Léonie Watson / FFConf 2024
Depending on what you read, and who you believe, AI is either the ultimate solution or armageddon in motion, so in this talk, Léonie is going to cut through the clickbait, dodge the doomscrollers, and focus on the facts to bring you the good, the bad, and the bollocks of AI and accessibility.
Remy Sharp, Full Frontal Organiser, 3/2/2025
Update 19/11/25:
this is a brilliant snippet – too good not to share ツ
I saw this on Mastodon today and I can relate so much! Emma Thompson’s reaction seems to have struck a chord with many.
Interesting tools & explorations:

Spawning is building an ecosystem-wide solution to meet the needs of rights holders and AI developers. Spawning’s Do Not Train Tool Suite consolidates machine-readable opt-out methods around the Do Not Train Registry, making it simple to declare data preferences and to respect them.
spawning.ai, as seen 26/4/2024

LAION-5B is an open-source foundation dataset used to train AI models such as Stable Diffusion. It contains 5.8 billion image and text pairs—a size too large to make sense of. In this visual investigation, we follow the construction of the dataset to better understand its contents, implications and entanglements.
Visual Story • Models all the Way Down
Talking Art with Steven Zapata
Illustrator and Concept Designer
