Authenticity And Trust In Media

Our resident digital philosopher Dave Shapton asks us all to consider whether we know what is real and how much we value authenticity.
Almost since the first shaky moving pictures over a century ago, we’ve measured or quantified content with the same parameters: frame rate, resolution or quality, colour vs black and white, and perhaps a text description of what the material is about. Before digital media, this information might be written on the reel, or on the box or tin containing it. You could add more layers of meaning by storing it on a particular shelf in a designated room. All of these were types of metadata, and they feel very familiar to us. But with the advent of generative AI, there’s a factor that’s as important as all of these: authenticity.
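One way to picture this is as an extra layer in the metadata we already keep. Below is a minimal sketch in Python - the structure and field names are my own invention rather than any real standard - showing how provenance details might sit alongside the familiar technical parameters.

```python
# A minimal, hypothetical sketch (not any real standard) of how an
# "authenticity" layer might sit alongside familiar technical metadata.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ClipMetadata:
    # The parameters we have always recorded
    title: str
    frame_rate: float          # e.g. 25.0
    resolution: str            # e.g. "1920x1080"
    colour: bool               # True for colour, False for black and white
    description: str = ""

    # The new layer: where did this come from, and what happened to it?
    capture_device: Optional[str] = None   # camera or source, if known
    content_hash: Optional[str] = None     # fingerprint of the original file
    edit_history: List[str] = field(default_factory=list)  # every processing step

clip = ClipMetadata(
    title="Interview, 14:00 bulletin",
    frame_rate=25.0,
    resolution="1920x1080",
    colour=True,
    capture_device="ENG camera, serial 0042",
    content_hash="sha256:...",
    edit_history=["colour correction", "AI noise reduction"],
)
print(clip.edit_history)
```

The point is not these particular fields but the principle: a record of where a clip came from and what has been done to it travels with the clip, just as the tin and the shelf once did.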
The quest for authenticity isn’t new. News organisations are bombarded with content and all of it needs verification and authentication. You might think that today’s user-generated content (UGC) is more likely to carry falsehoods, but that’s not necessarily the case, although the quest for engagement might mean that absolute objectivity has to take a back seat. In fact, frequent exposure to individual contributors allows organisations to build up trust, which, ultimately, is the first line of defence against embedded falsehood.
Until the advent of generative AI, the biggest assault on authenticity was wartime and government propaganda. Since the beginning of modern image-based media, governments have been able to impose their will on filmmakers, either explicitly by forbidding them to cover certain topics, or covertly, by tightly controlling journalistic access. You don’t have to look far in today’s news landscape to see this happening in various parts of the world.
About ten years ago, in a thought experiment to see what content production would look like at some almost infinite point in the future, I wrote that one day we would be able to feed a film script into a computer, and the output would be a finished feature film. That sounded implausible then, and it would sound only slightly less implausible now, except that it’s already happening. Today, you can effectively feed text into a computer and get a cinematic output that is good enough to convince anyone that it is real. That decade-old prediction was 100% right and 100% wrong at the same time. Yes, you can produce cinematic-looking videos out of thin air, but no, it didn’t take a thousand years; it took ten.
And remember, this text-to-video technology is as bad today as it will ever be. It will only get better, and within a year or so, we will have completely lost our ability to distinguish between real content (based on events that actually happened) and generated content. This is a very big deal.
Imagine a world where we can no longer know what’s real. That would be bad enough if it only applied to TV and cinema, but if, as seems likely, we start wearing augmented reality glasses or even totally immersive headsets, it won’t just be what’s on a screen that’s deceiving us but our sense of reality itself. Eventually, when display technologies merge to create a seamless, contiguous metaverse, we may lose our sense of objective reality altogether.
It might sound like a futuristic dystopia, but the seeds are already sown. It’s a future we urgently need to prepare for. Specifically, we need to verify authenticity. It’s the only way to distinguish between generated and real content.
The problem is even wider than it might initially have seemed, specifically because of generative text models. These are becoming so adept, and so apparently authentic, that it is getting ever harder to establish the provenance and ground truth of news stories.
It’s a complicated picture, and to solve it - or even to usefully understand it - we may have to resort to somewhat more abstract thinking than we’re used to. But there are some easy ways to approach these issues.
First, here’s a trick question. What are AI models best at? The answer is: sounding plausible.
AI models are trained primarily on published material. Authors - consciously or habitually - try to make their work believable. There’s nothing wrong with that, except that being plausible is not the same as being true. To the gullible, flat-earth theories might sound plausible, but (to the best of my knowledge) they’re not true. And this leads us to the most significant distinction in the entire topic of authenticity: that between belief and knowledge. In news reporting and documentary making, it’s very easy to blur the two, yet they are in entirely different categories of meaning.
The study of knowledge is a philosophical field with its own Greek name: epistemology. Knowledge deserves its own field of study because it’s not a simple concept, even if it seems like one. If we don’t know what knowledge is, we can’t distinguish between truth and belief. If we start treating belief as an indicator of the truth, objectivity becomes impossible. I know this sounds very abstract, so here are some concrete illustrations.
Let’s imagine you believe you are invincible and that you are going to live forever. If that were the case, there would be no need to look both ways when you cross a busy road. But the minute you try to put this into practice, you will notice that oncoming vehicles have no knowledge of your beliefs, and still less behave as if they’re influenced by them. The harsh reality is that your belief doesn't affect the rest of the world. (There are exceptions, especially in news gathering, which we will look at in a minute).
Clearly, then, belief isn’t the same as knowledge. So what is it to “know” something?
It seems fair to say that if something isn’t true, you can’t “know” it. If my dog is called “Rover”, I can’t “know” that it’s called Sally, but I can “believe” it, albeit wrongly. Now, this will sound picky, but imagine that a silent spacecraft has just landed in your back garden while your curtains are closed. If, at that exact moment, you happen to express a belief that there’s a spacecraft outside, would that be “knowledge”? No, because your belief doesn’t mean you knew aliens were about to knock on your patio door. It was just luck. The missing link is justification: knowledge is true, justified belief. Your supposed knowledge was unjustified because you were completely unaware of the real spacecraft on your lawn. The emergence of your belief was merely a coincidence.
So, knowledge is true, justified belief. I know it’s raining because dozens of people on the station platform are holding umbrellas. I know I am related to the great apes (indeed, I am one) because I have studied evolutionary science. That kind of knowledge is called “empirical”, because it comes from experience and evidence. Another type of knowledge (called “a priori”) comes from definition or sheer logical inevitability (“tautology”). Triangles have three sides. You don’t have to experience a triangle to know that; you would never win a competition to design a four-sided triangle. You know, without stepping outside, that the statement “either it’s raining or not raining” is always true.
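That last example can even be checked mechanically: a tautology is true under every possible assignment of its parts, which is why no experience of the weather is needed. A throwaway sketch, purely for illustration:

```python
# Verify that the tautology "p or not p" holds for every possible truth
# value of p - unlike a contingent claim such as plain "p".
for p in (True, False):
    assert (p or not p) is True   # never fails, whatever the weather

print("'Either it's raining or not raining' is true in every case.")
```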
The distinction between belief and knowledge has never been more significant than today, when politics and - increasingly - AI can provide what looks like evidence but might in reality be the product of deliberate attempts to manipulate viewers, or of AI-generated material leaking into your content.
Pausing for a moment, it’s worth mentioning that beliefs can affect reality, often with the help of the media.
If you say something often enough, people might start to believe it’s true. This is called “truth by repeated assertion.” Politicians of all shades often use it to promote their policies. Eventually, the idea becomes part of the conversation and then, without any visible transition, is accepted as the truth. It might change how people vote, and hence the future world, irrespective of whether it is grounded in truth.
All of this raises the question of “balance” in broadcasting. That’s a topic for another day, but - spoiler alert - “truth by repeated assertion” and those who use it are not a good source of balance.
After that brief introduction to knowledge, truth and objectivity, we are in a good position to consider the coal face of authenticity in the age of generative AI. We’ve already seen that in textual content, it’s easy to be beguiled by overwhelming plausibility as a transport mechanism for untruths. The position with images is as bad, if not worse.
It’s not difficult to spot the dangers of AI-generated images. They can show anything you like, and do it convincingly. Admittedly, a picture of the Prime Minister on the back of a flying horse might not figure highly on a plausibility chart, but for almost anything not involving mythical animals (or, in the case of the UK, good weather), it is becoming nearly impossible to judge the veracity of generated images and videos. It’s even harder when a real (as in “captured by a camera in the real world”) image is enhanced by AI. Remember that unnoticed errors are cumulative. There are several points in a production chain where images can be enhanced and, at the same time, unintentionally distanced from the truth. Here’s how it could happen.
Look at the image above; on the left is a picture of me during a poorly lit podcast recording. It lacks detail, but anyone who knows me will know it’s me, without any question.
Now look at the image on the right.
This has been “improved” by an AI photo enhancement program. Everything looks good, including the stitching on my shirt collars. It’s sharp and full of detail. What could possibly go wrong? But there’s a problem: it’s not me anymore. It’s a generic recreation of my face and shoulders. The eyes, mouth, ears and hair are not mine. It is, effectively, an impostor. When I show this image to people who know me, they look startled. Worryingly, when I show it to people who don’t know me, they seem unconcerned.
Again, these errors are cumulative, and any AI-based image enhancement is prone to them. Most of the time, it’s fine: if your enhancement program adds a few blades of grass to make the background look sharper, it doesn’t really matter whether they’re based on the grass that was actually there. But when it does this to faces, imagine the potential issues: a photograph purporting to show a politician’s face that is, in fact, not them. The entire population could “latch on” to the false image and might not recognise the actual politician in future. This kind of error could potentially start wars.
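There is at least a crude way to catch this kind of drift before it propagates: compare the “enhanced” frame with the original capture and flag anything that has moved too far. The sketch below is illustrative only - the file names and threshold are hypothetical, and a raw pixel difference is a rough proxy for what actually matters - but it shows the idea using Pillow and NumPy.

```python
# A rough sketch of one way to flag how far an "enhanced" frame has
# drifted from the original capture. File names are hypothetical, and a
# simple pixel difference is a crude proxy; real checks would be more
# sophisticated (and would ideally compare against a verified original).
import numpy as np
from PIL import Image

original = np.asarray(Image.open("podcast_frame_original.png").convert("L"), dtype=float)
enhanced = np.asarray(Image.open("podcast_frame_enhanced.png").convert("L"), dtype=float)

# Comparison only makes sense at matching dimensions.
if original.shape != enhanced.shape:
    raise ValueError("Frames must be the same size to compare.")

mean_abs_diff = np.mean(np.abs(original - enhanced))
print(f"Mean absolute pixel difference: {mean_abs_diff:.2f} (0-255 scale)")

if mean_abs_diff > 20:  # arbitrary threshold, for illustration only
    print("Warning: enhancement has moved a long way from the capture.")
```

A low score doesn’t prove the enhancement is faithful - it only tells you the output hasn’t wandered far from the pixels the camera recorded - but a high score is a prompt for a human to look before the image travels any further.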
So what’s the answer? It’s a simple one, albeit not even close to foolproof.
We should scrutinise our news-gathering workflows with a process analogous to hygiene. Another way to characterise this is to say that we need to develop a “cognitive immune system” - essentially a framework of scepticism that we apply to incoming content.
Imagine you’re on a train. Someone gets on at a station and says, “I found this sandwich on the platform. Would you like it?”
You wouldn’t take it, for obvious reasons. Of course, routine editorial procedures would encourage you to verify your sources, but what if there’s nothing obviously wrong with the content, or if it has passed your regular tests but AI has somehow “improved” it? This is a serious issue today, and it will become more so when we develop AI-based video codecs. These will have massive advantages, but they will all need to carry an “authenticity” health warning.
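To make the hygiene idea a little more concrete, here is a minimal sketch of an ingest check: fingerprint the incoming file and compare it with the fingerprint recorded at the point of capture. The function names and workflow are assumptions for illustration only; real provenance schemes, which use cryptographically signed manifests rather than a bare hash, go much further.

```python
# A minimal sketch of "ingest hygiene": compare the fingerprint of the
# file you received with the fingerprint recorded at capture time.
# The workflow here is hypothetical; real provenance schemes rely on
# cryptographically signed manifests, not a bare hash.
import hashlib

def fingerprint(path: str) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(path: str, recorded_digest: str) -> bool:
    """True only if the file still matches the digest logged at capture."""
    return fingerprint(path) == recorded_digest

# Usage (hypothetical file and digest):
# ok = verify("incoming/interview.mov", recorded_digest="3a7bd3e2...")
```

Anything that fails such a check isn’t automatically false, but it has lost its chain of custody, and that alone should lower its place on the trust meter.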
Building a network of trusted contributors and trustworthy production workflows will ultimately keep the “authenticity” meter’s needle angled towards the truth. But the more we depend on generative AI techniques to “finish” our content, the greater the opportunity for disastrous errors.
Another analogy:
It’s easy to spot gross issues with aeroplanes: a wing that’s dropped off, or a collapsed undercarriage, for example. But the most dangerous faults are the ones you can’t see, like the stress-induced hairline cracks that made early jet airliners fall out of the sky. Those flaws weren’t obvious, and the fact that they stayed hidden until the crash investigations shows just how hazardous they were.
So we need to watch out for hairline cracks in our media’s authenticity.