The Cultured Resident: This article is not written by AI. But it could be…
- Maeve Burrell

In this day and age, I would be hard-pressed to encounter someone who has not heard of ChatGPT. While this form of generative AI sweeps the internet, delivering computer-generated content from text to images, many of us have been wondering what impact it will have on creative industries like art, writing, and even media. This week we are bringing the subject of AI closer to home, examining its implications for generating the very fabric of The Cultured Resident: the articles themselves. The subject highlights the value of a human touch when it comes to news and opinion pieces, however nuanced the topic. So, let's dive right in!
When I google ‘AI generated articles’ to research this week’s column, what the internet primarily feeds me is not a series of articles discussing the potential use of AI in this field, but a plethora of software designed to help me create one. Tools like Grammarly are even slyly offering to make this AI undetectable, with a ‘Humanizer’ designed to cover up the generated text and make it sound more believable with the help of, yes, you guessed it, even more AI. This is only the first sign of how readily available and exploitable AI technology is, with nothing stopping us from using it to crank out an article with speed and ease.
We might think there are a few telltale signs of AI-generated text, from funky syntax to vague phrasing, but as these tools grow more intelligent and better trained by humans, sources suggest the detection software built into browsers is becoming powerless against AI’s disingenuous threat. For more conversational and opinion pieces (like this one!) it may be easier to identify the human voice behind the text, but for short-form informational posts, the concise, educational style and hot-off-the-press content leave readers believing the articles before they have even read them. The relevance of the information and our predisposition to take images and publicised text as empirical fact combine to convince readers of the AI’s credibility. Perhaps we need to be a little more cynical…
In August 2024, Newscatcher stated that ‘60,000 AI-generated news articles are published every day’, or almost 7% of the data it sampled over a month. This number can only have grown as AI becomes more advanced and more available. However, the site also delivers the somewhat hopeful finding that ‘readers were 3.6x more likely on average to visit human articles than those that were AI generated’, which suggests we can still somewhat identify text produced by a human. What’s more, it seems that we prefer it. This illuminates an interesting margin in the media and news industry where entertainment and information intersect: we evidently still find value not just in knowing things, but in hearing them from an embodied point of view.
Surely it would be easier and more efficient to have AI scan the internet and synthesise the most relevant, up-to-date information, delivering a purely informational summary rather than a nuanced take? Skip the flowery language and the potential for individual bias and produce a bullet-pointed, easily digestible, and most importantly OBJECTIVE take. After all, there’s no ulterior motive behind a machine, right? Well, the problems arise when you factor in 1) the fact that AI can still be biased, depending on who has programmed or trained it, as well as on the sources it gleans its information from in the first place, and 2) AI’s sneaky tendency to extrapolate, resulting in some pretty hairy mistruths that teeter riskily on the edge of fake news.
An article from Global Witness gives us a real-life example of the dangers of fake news when applied, for instance, to the US election: ‘the phrase "AI-generated" increasingly sows confusion and distrust in our democratic discourse’. From audio deepfakes to AI-generated images, it is clearly not just in the headlines that we need to be discerning. People are beginning to question the truth of what they perceive on the internet, a scepticism which, taken too far, could have serious implications of its own. Of course, some would argue that while we may use AI to generate articles, we still need humans to check, edit and peer-review them, much as happens elsewhere in publishing. Is AI just a helpful middleman, or a blocker of original thought as we simply correct it and edit it down? You decide.
So, what do the people think? In discussing this topic, I asked how important complete objectivity in news and media really is, or whether such objectivity is a utopian ideal. Perhaps it is better to always pull from several different inputs in order to develop an opinion of our own, though this might not fly in a world of convenience and micro-learning: people want to be informed, and that's about enough. I was presented with the statement ‘well, that depends on how perfect you want people to be at their jobs’, a striking point when considering the ethical responsibility of our media creators and news sharers. It is certainly important not to be blatantly biased, but as we veer further and further into AI, we have started to recognise the value of human nuance, and even error, in connecting us more deeply with the world. While this may seem like a holistic view, it is the same argument behind keeping some physical cashiers alongside the self-checkouts to foster trust and provide accessibility, and it sparks the same panic over the loss of human jobs that self-checkouts did in their day.
A Reuters Institute study on people’s perceptions of AI in journalism and society shines a light on public opinion in this area:
- Both the awareness and use of AI have increased globally, doubling since May 2024
- Across six countries, just 12% say they are very or somewhat comfortable with news made entirely by AI
- 43% are comfortable with news made mostly by a human with some help from AI
- Younger people tend to be more comfortable with AI-driven news production
- People remain sceptical of AI use in a news context, even though they are more comfortable with its use for menial tasks like grammar correction and in ‘less serious’ topic areas such as lifestyle and the arts
All this is to say that humans seem to find value in human-generated art, and believability in human-generated content. But because the article as a form sits uniquely between fact and opinion, it lands precariously close to the realm of AI doctoring.
As a society we seem to be pretty aware of the potential for bias in the news, particularly, for example, in UK media, where certain newspaper titles are canonically associated with a particular political stance. While overtly opinionated publications can facilitate the formation of echo chambers by offering content that aligns with readers’ existing views, their ideological leanings are well known, which makes it equally straightforward for readers to seek out contrasting perspectives that challenge their assumptions. Unlike with computer-generated content, whose objectivity we tend to assume, there is no secrecy or coercion in this human-led bias.
To leave you, as I intend to, with some thought-provoking questions (so no one can throw the title of ‘AI philistine’ your way), we might want to consider together:
- How important is objectivity in our news and reporting, or is there fun and important deduction work in discerning the potential bias and human opinion behind the articles we read?
- Should we protect articles and traditional reporting in the same way that we value human-made art? And how safe are media jobs?
- What is developing faster: generative technology, or our perception, scepticism and purpose-built software to detect it?
- Is there still hope?
Ironically, this article is very much written by a human, and not by AI, and I believe in this instance it is pretty easy to tell. But as our societal values shift more and more from the process to the end result, getting objective news fast, without the potential for human error or even nuance, could render our human-led media obsolete.
Who knows, everyone, they could replace me with a robot yet!






