![](https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1497c516-b0d5-4f79-b4e7-d1acdf27b0de_512x512.jpeg)
I’m going to attempt to tackle some shit I’d rather avoid today. If you know me, you know I’m a sceptic of technological utopianism. Call me boring – backwards, even – but I’m a strong proponent of ethical consideration before the development of new technologies, something humanity hasn’t been too good at in the past. I also believe that not everything needs to be more convenient, efficient or easier, and that some things are better left alone. That’s why the sudden scrambling of platforms like Google and Snapchat to integrate AI features for general public use annoys me at best. Don’t misunderstand me – I’m a fan of progress, of increasing the quality of life for those who need it, but I can’t help but feel we might be running before we can walk. Also, to be completely frank – as a creative and a writer, the idea of a machine spitting out a novel or an AI-generated photograph downright scares me sometimes. It’s natural to feel threatened by AI ‘solutions’ to the problem of having to pay people for creative work, just as it’s natural to be fascinated by the magic mirror in your pocket.
Last night, a friend posted about AI apparently now being able to read minds, followed by the tempting suggestion that it’s time to go live on a farm (I’ll unpack that later). I’ve certainly considered running away from it all in the past, but I also come from a long line of farmers, and they didn’t enjoy their backbreaking work very much. As enticing as sticking your head in the sand is, it doesn’t make things any clearer. So, how to deal with the whiplash of rapid technological change? I think the answer is, as so often, trying to understand it.
In my experience, the best solution to feeling powerless or overwhelmed is checking for the monster under the bed and asking it some questions if you do find one. Ignorance can be bliss, but it can also be dangerous, so today’s article is a way of dispelling the (sometimes intentional[1]) mysticism surrounding mind-reading robots and creative genius ChatGPT. This is a good rule of thumb in general (some people make quite a lot of money from the fact that half of us don’t understand what’s really going on anyway). As Joshua Cooper Ramo points out, many of us can no longer comprehend how the networks and systems of power that run our world function practically, which explains why we may feel lost and exposed. Today more than ever, knowledge is power.
The Magic Behind the Mirror
If we’re thinking about whether AI programmes like ChatGPT are as creative as – or even more creative than – the humans who made them, an understanding of how they’re built is key. Where does the data they feed on come from? How are they trained? Did you know that every time you’ve completed a CAPTCHA to access a website, your choice has been used for machine learning?
The GPT in ChatGPT stands for ‘Generative Pre-trained Transformer’, running on an LLM or ‘Large Language Model’. This means that everything it spits out is reconstituted from existing language, in this case roughly 570GB of text harvested from the internet. ChatGPT attaches one word (technically, one ‘token’) at a time to form sentences in order of probability, functioning kind of like a very sophisticated version of what happens when you just keep pressing on Apple’s predictive text function. Sometimes it throws in a less likely word to keep things fresh (a setting called ‘temperature’), but in a sense, ChatGPT is actually as uncreative as you can get. It uses only what millions of people have already written on the internet. It’s a mirror of what has already been done, broken and puzzled together in a new way. To me, that makes its outputs devoid of meaning – whether they’re technically accurate or not. I find strange consolation in the fact that without human creativity, ChatGPT could not exist, and that sans intention and sentiment, art without an artist is without value. That’s enough to keep me creating.
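For the curious, the ‘predictive text’ comparison can be made concrete with a toy sketch. This is emphatically *not* how ChatGPT works internally – real models learn a huge neural network over tokens, not raw word counts – but the ‘pick the next word by probability, with a randomness knob’ step looks roughly like this (the tiny corpus is an invented stand-in for those 570GB):

```python
import random
from collections import defaultdict, Counter

# A tiny made-up corpus standing in for the real training text.
corpus = "the cat sat on the mat the cat ate the fish the dog sat on the rug".split()

# Count which word follows which (a 'bigram' model: the simplest possible
# "predict the next word from context" scheme). Wrapping around so every
# word has at least one follower.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:] + corpus[:1]):
    follows[prev][nxt] += 1

def next_word(prev, temperature=1.0):
    """Pick the next word in proportion to how often it followed `prev`.
    Higher temperature flattens the odds -- the 'throw in a random word' knob."""
    candidates = follows[prev]
    words = list(candidates)
    weights = [count ** (1.0 / temperature) for count in candidates.values()]
    return random.choices(words, weights=weights)[0]

def generate(start, length=5, temperature=1.0):
    """Chain next-word picks together, one at a time, like predictive text."""
    out = [start]
    for _ in range(length):
        out.append(next_word(out[-1], temperature))
    return " ".join(out)

print(generate("the"))
```

Everything it can ever ‘say’ is recombined from the corpus – which is the point being made above.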
In terms of social impact, however, my philosophical plaster isn’t too helpful. Geoffrey Hinton himself,[2] who has been dubbed the ‘Godfather of AI’, has just quit Google in order to speak openly about the dangers of the technology he played a huge part in developing. Hinton’s concerns centre on potential abuses of the technology, particularly the spread of disinformation and disruption of the labour market. Programmes like ChatGPT require human-generated information to function – which means they cannot be considered sentient (yet) – but their reliance on informational ‘food’ also means that if they are trained on false information, they will convincingly process and output that false information at incredible speed. Although Hinton compares the invention of ChatGPT to that of the wheel, I’d liken it more to that of the printing press – a powerful communication tool that could change the way information is shared forever.
CloneGPT?
One unsettling train of thought to follow is whether ChatGPT – or other future kinds of AI – would be better at ‘being me’ than myself. Could AI mimic my writing style? My voice? Could it predict what I was thinking before I’ve even thought it? Some people might find this idea enticing; a convincing clone to take over your responsibilities, whilst you play golf or do yoga all day.[3] Theoretically, it’s possible. Your clone would need to be trained on everything you do – the more data, the more accurate.
It’s highly unlikely that someone would have an interest in cloning me against my will, though. That would be a very expensive endeavour. Still, the fact is that I’ve left a lot of myself behind online already. This very article could be sucked into a language sample for a new LLM, and I’m definitely not excited at the prospect of the next-gen ChatGPT learning from my tone of voice and writing style. The point is – the only way AI could (theoretically) mimic you is by learning from the parts of you it has access to. The more you share publicly, the more tasty data crumbs you leave behind, the more of your behaviour is observed, the more of you could be used for purposes beyond your control (or awareness). We are what we do, how we speak, look, and feel. In the era of big data fuelled by the attention economy, you are the product, so intentional privacy is perhaps the most radical thing you can practice. Keep some secrets and think twice before you overshare.
Psychic Robot?
Next on the list of dystopian developments: AI can now read minds. Although it’s bad, it’s not quite as bad as it sounds – the media just seems to struggle with nuance. The study in question[4] is by Tang, LeBel, Jain et al. and looks at developing a successful “brain-computer interface” which would be a less invasive alternative to existing technologies requiring neural implants. The idea is to allow neurologically impaired people – for example sufferers of strokes or ALS – to communicate non-verbally. The researchers’ ‘semantic decoder’ successfully monitored brain activity in order to deduce thoughts, including the “meaning of perceived speech” (e.g. the content of a podcast that participants were listening to), “imagined speech” and perceived images.
But how? A personalised decoder was trained for each participant. Using an fMRI scanner, researchers trained an AI model to learn how the individual’s brain reacted to stimuli, and then to deduce the meaning of subsequent measured brain activity using a similar “most likely” model. The decoders were able to deduce specific words; however, meanings were often scrambled and sentences paraphrased. Still: time-consuming and in its infancy as it may be, the technology works.
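The “most likely” logic can be boiled down to a cartoon (all the phrases, numbers and the similarity measure below are invented for illustration – the real study works with continuous fMRI recordings and a language model, not three hand-written vectors): a per-participant model predicts the brain response each candidate phrase *would* cause, and the decoder keeps whichever candidate best matches the activity actually measured.

```python
# Hypothetical learned mapping: phrase -> predicted brain response vector.
# In the real study this mapping is learned from hours of per-participant
# fMRI training data; here it's just made up.
predicted_response = {
    "I heard a podcast": [0.9, 0.1, 0.4],
    "I saw a dog":       [0.2, 0.8, 0.3],
    "I felt cold":       [0.1, 0.2, 0.9],
}

def similarity(a, b):
    """Negative squared distance: larger means a closer match."""
    return -sum((x - y) ** 2 for x, y in zip(a, b))

def decode(measured, candidates=predicted_response):
    """Return the candidate phrase whose predicted response best fits the scan."""
    return max(candidates, key=lambda phrase: similarity(candidates[phrase], measured))

# A made-up fMRI measurement that sits closest to the first phrase:
print(decode([0.85, 0.15, 0.35]))  # -> I heard a podcast
```

This also makes the study’s caveat intuitive: the decoder can only choose among meanings it was trained to predict for *that* participant, which is why cooperation is currently required.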
As an afterthought, researchers noted that we should begin to think about implementing policies on “mental privacy”, and that “subject cooperation is currently required to train and apply the decoder [...] future developments might enable decoders to bypass these requirements”. In other words, only the minds of willing participants can be read, and you can resist your thoughts being decoded … at the moment. Yet another reason to practice meditation and mindfulness? I’m not sure if that’s enough.
Perhaps the future of self-defence is neurological, and the tinfoil-hat wearers of the world have been right all along. Regardless, this development means we seriously need to consider our priorities. As much as I agree that a stroke is a tragedy for both the sufferer and their loved ones, and that being able to communicate again would feel incredible for the lucky few, I’m not sure the benefits are worth the possible abuses of decoder technologies.
Social impact / What’s Next?
All of this can feel overwhelming, but I maintain that remaining informed is the best thing you can do for yourself. I’ve pointed out lots of problems today, so I’ll try to balance this dystopian tirade with some practical points.
Eternal Leisure?
The idea of increased redundancies due to developments in AI is nothing new. Looking at a future without work through rose-tinted glasses is tempting, but we are not prepared for that kind of shift. Finding meaning in life without work to keep us occupied is one thing (albeit a thing I think I’d be pretty good at), but accommodating the hordes of potentially unemployed does not seem to be a concern that is being approached with the same urgency as developing the technologies themselves. For the utopian option (which may at best be an irl Wall-E situation), we’d need a robust social safety net, a universal basic income and people in power who are as concerned with the wellbeing of the general public as with saving pennies. I don’t know about you, but I’d rather just keep working for my money.
Ethicists Assemble!
Although ethical considerations are central to embarking on any robust research project, I don’t think enough is being done. In an increasingly complex and rapidly changing world, ethical standards need to be stricter and consider large-scale impact. My (very clever) mother was already talking about systems theory in the 90s, and I honestly can’t believe it isn’t a central, required aspect of academic research, policy-making and education yet. Weighing up whether an avenue of research is somehow useful is not good enough anymore. We need to anticipate whether developing something in the first place is really worth it, all things considered. Let’s focus more brain power on solving the climate crisis and raising the baseline standard of living before we bite off more than we can chew. In a world full of existing problems, how can anyone be bored enough to create solutions to imaginary ones like virtual friends?
Earth to Humanity
Back to the subsistence farm? Maybe not. I do however expect a growing concern around the negative impacts of emerging technologies, especially amongst young people. We are human, and humans belong in the physical world. From zines to digital detoxes, Polaroids to brick phones, there’s a subtle but strong current of people looking to put a foot back on the ground. At the risk of sounding like a hippie, I think many more of us are going to wake up and realise we need to go touch some grass.
Thanks!
That’s it for now. I hope I didn’t scare you too much to come back next week 😊
[1] Why otherwise would Google name their new AI chatbot “Bard”? Come on.
[2] It’s also interesting to note that his mother once told him “Be an academic or a failure”. I often wonder what pushes people to do research for research’s sake, regardless of the damage it could do. In Hinton’s case it seems to have been an overly critical parent.
[3] I Googled this to check whether something like this already exists, and found ‘Clone’. It’s a company developing the first iterations of AI ‘androids’.
[4] Here’s a simpler summary of the study for those of you who don’t want to get into the scientific fineprint, although the headline is a bit sensationalist: