AI and me, a complex relationship
Hi all, a lot has happened since I last wrote in this space. I haven’t been the best host and I’ve neglected my little corner of the web. I’ve personally been through a lot and sought help for my mental health; perhaps I’ll talk about that in the future. For now, please know that if you need help, there are people and places available to support you.
Let’s get to the main topic: my complex relationship with LLMs. When ChatGPT first launched, I was a bit skeptical and, frankly, unimpressed. But being curious by nature, I started digging into the theory behind the technology. To be fair, I found it quite clever and intriguing.
This led me to experiment with different models and build a local version of a Copilot-style coding assistant - which I documented here. At the same time, I’ve been reading extensively on the topic, subscribing to newsletters in English and Italian - Andrew Ng’s The Batch, Ethan Mollick’s One Useful Thing, and Alberto Puliafito’s Artificiale - and reading books about how big tech is reshaping the internet - Cory Doctorow’s Enshittification, Tim Berners-Lee’s This Is for Everyone, and Kyle Chayka’s Filterworld.
Meanwhile, models have improved rapidly, particularly in software engineering. The code these models produce is becoming increasingly refined, which both amazes and scares me. On top of this, my company now expects us to use these tools, and while current expectations are low, I have a feeling the ultimate goal is to hand the ‘development keys’ over to the machines.
OK, that’s a lot in a few lines, but let’s get to the point: I now have a love/hate relationship with these tools.
What I hate:
- The environmental impact of LLMs and the ginormous server farms needed to train and run them. These facilities need tons of energy and water: companies increasingly rely on fossil fuels to power them and draw heavily on local water resources to cool the servers, polluting the areas where they’re located.
- The disruption to workplaces, and the fact that companies are using AI as an excuse to lay off workers. Knowledge workers will be the most affected. In software engineering, my area of expertise, demand for developers has always outstripped supply, which has led to high salaries compared to other “white-collar” jobs. We’re a sort of bug in the capitalist system, where stakeholders expect lower salaries, so AI is the perfect tool to replace or deskill us - and the perfect excuse to cut pay or keep fewer workers, since in their minds one employee with the new tools could match the output of two or more.
- The “Brain Fry” effect of having to review large volumes of quickly produced model output, far more than any human was, or is, used to.
- AI slop. Using the tool without knowing what coding is and what the best practices are will definitely lead to more security problems, more bugs, and badly written code. Slop isn’t only a coding problem but a general one affecting websites, content, video, and politics (think of all the lame memes some parties are pushing out).
- AI-generated content has flooded the web, and it’s really difficult to tell slop from human-curated content.
- The harm done to the humans who work as content moderators, watching hours of harmful footage so that a model can learn to recognise violent material. Add to that the harm caused by deepfake videos and fake content used to discredit or hurt minorities and individuals, and it’s worth remembering that there have been several cases of suicide linked to LLMs.
- It is not a neutral tool: it’s in the hands of a handful of companies that can decide to make it more or less harmful.
- It’s trained on vast amounts of copyrighted material scraped from the Internet without properly compensating the original authors.
- It’s been pushed on a lot of people without asking whether they agreed to use it; it’s been take it or leave it.
- I’m afraid of losing the skills I’ve acquired over many years of doing my job if I let an LLM do it for me.
What I like:
- It’s a powerful technology that, used correctly, could potentially help anyone. Science, medicine, education, and engineering could all benefit from a tool that facilitates new discoveries, innovation, and learning. I honestly don’t know the right recipe for this, but maybe it should be one of those tools that gets democratised rather than held in the hands of a couple of powerful corporations.
- I get instant feedback on my writing in a language that is not my mother tongue, along with suggestions for improvement.
- I can quickly discover and fix a bug in the code I wrote, or experiment with new programming languages.
- I can use it as a brainstorming tool to figure out what to do next, or to make up my mind about ideas and doubts.
- On a personal level, these tools are quite good as coding assistants.
All these conflicting feelings lead me to take these new tools with a pinch of salt: to use them in as human a way as possible, as an aid, while keeping my critical thinking as alive as possible and not conceding to the slop.
Things I like - in random order
Here are some pictures I recently took using an original Olympus Pen from circa 1963, a half-frame film camera.
Today’s Links
Today’s links are mostly related to the topics discussed above:
The “Brain Fry” effect
A third of new websites are AI-generated
AI-generated content on the web
Indian female workers and abusive content
Energy impact of AI and democracy
AI and water consumption, a different POV
Effects of AI-Generated Code
There are some positive ones too:
New Gene Therapy Enables Children to Hear
London Marathon Incredible Record
Foo Fighters New Album