By Flora Leask

'AI and Popular Culture': In Conversation with Dr Lee Barron

Image provided by Lee Barron.

The Broad’s Creative Director Flora Leask chatted with Dr Lee Barron about his new non-fiction book AI and Popular Culture. Lee is an Associate Professor in the Department of Design at Northumbria University, whose research interests take him far and wide – from fashion to tattoo culture, celebrities, and now Artificial Intelligence.

Flora Leask: When did you first become interested in the topic of AI, and how does it connect to other aspects of your research?

Lee Barron: I got into AI as an undergraduate when I first read William Gibson’s Neuromancer, and that novel has followed me ever since.

In my MA I was writing about representations of virtual reality and cyberspace, and then it all went quiet because the technology wasn’t there – the hype disappeared. Then, when teaching branding to fashion students, suddenly VR was back. I thought ‘interesting, I was writing about that back in 1997’. The technology has caught up and become, not quite Neuromancer, but closer to it.

Another strand of my research is smart technologies, including autonomous cars. What’s interesting about that is it’s all based on AI, just not the ‘Terminator’ AI that everyone gets excited about. The reality of AI is far more interesting than malevolent, omnipotent matrixes and ‘Skynets’. It’s much more interwoven into lives – to the extent that most people probably don’t know that they interact with AI every time they watch streaming TV or listen to music on Spotify.

FL: I thought what was most interesting about the book is that you have the science fiction depictions of ‘general’, or superintelligent, AI versus the reality of ‘narrow’, algorithmic, AI.

LB: Absolutely. The science-fiction scenarios are important, but the question is when we’ll actually have ‘general AI’: tomorrow, next month, next year, a thousand years, never?

What’s important is the amount of autonomy we’re already giving over to algorithmic AI. When we’re applying for a loan, do we want a human making that decision, or are we happy with an algorithm? Increasingly it’s an algorithm, and we can’t argue with it when it turns us down. And we can’t even ask why – why did it come to that decision?

FL: Could you talk about the role of ethics in AI studies?

LB: Increasingly our lives are subject to decision-making entities which we don’t have any control over. There’s a whole ethical debate in terms of the drive – if you’ll excuse the pun – towards autonomous, AI-driven cars. If there’s an accident, who’s responsible? Is it the AI, or is it the owner, and what do you do about it?

The ethics of weapons systems is a huge issue, and the research suggests that it’s almost inevitable we’ll have to employ them, because rogue nations are bound to do it. But it means giving life-or-death decisions, literally, to machines.

I think AI tends to be seen as a sort of runaway train. We can see the way in which the job market is likely to be transformed, particularly as algorithmic management and administrative processes increase. And even artistic processes – what AI can do for fashion designers, for example. Is it tool or threat? There have been some interesting examples where AI, because it can scour the net, is coming up with some great design processes. It tends to be more mass market, but still, it’s quite unprecedented. And we’re seeing the impact of ChatGPT: could it write scripts? Could it write advertising copy? Can systems start to engage in graphic design?

For some, this argument is just an extension of technological assistance, so the human now has an artistic partner. The optimistic view is that this will make us better professionals.

One of the things I discuss later on in the book is the ‘deepfake’ element. It’s based on a very similar platform to ChatGPT, and we’re now seeing deepfakes which are obviously fake but make you look twice. There’s that TV show, ‘Deepfake Neighbour Wars’, where you have Greta Thunberg living next to Kim Kardashian on a housing estate, and it’s all created with deepfake technology. It’s not perfect, but in six months’ time, it’ll be considerably better. In two years’ time, you’ll probably assume it was them. We’re starting to see how reality itself is starting to transform.

FL: There’s a lot of fear and uncertainty around what AI can do, but do you think there are positive outcomes?

LB: The notion of having products recommended on streaming platforms is really helpful, and in many cases, I’ve discovered things I never would have thought of.

Another crucial element, particularly in fashion, is the issue of cutting down garment waste. Instead of just guessing how many items we’re going to need, we have AI-informed trend analysis, which helps us to order what we need. There are some really positive outcomes to that. For many, that’s going to play a big part in sustainable manufacturing.

FL: So it’s not all WALL-E scenarios for humanity – which brings me to the popular culture in your book. What were the biggest books, films, and television shows that inspired you?

LB: The Matrix, obviously. And Terminator was really crucial. Actually, one that I really enjoyed was WALL-E. There really is a tech company that’s designing an AI system to recycle a lot more effectively, so WALL-E’s becoming quite realistic in that sense.

One of my favourites was fairly obscure, called Colossus: The Forbin Project, from 1970. It has this gigantic computer, which is unwisely given command over missile systems, in the belief that this will take out human error. What it ends with is not the apocalypse, but the machine logic deciding that it can manage humans way better than they can, because they’re flawed. At the end there is a ‘do what I say or else’ scenario – AI rules.

For many, it’s become a very prescient film, because whilst the AI does wreak havoc on humanity by launching missiles at specific cities, its goal is not to destroy. The system isn’t evil; it thinks it’s doing the right thing, because humans can’t be trusted. What I like about these films is that the lesson is really about giving over too much control.

You can buy AI and Popular Culture here.

And you can find Flora's review of AI and Popular Culture here.
