16 Feb 2024 — Nitin Verma

Language and authenticity?

Since the launch of ChatGPT to the public in November 2022, large language models (LLMs) have caused much concern about the authenticity of text on the internet. The question often becomes, “How do I know whether this piece of text I’m reading was written by a human or by an AI?”

But why argue about the authenticity of the written word if our main concern is the veracity of the propositional information a piece of writing supplies? Isn’t human language fundamentally unreliable on its own? Until recently, most text on the internet was written by humans, yet human authorship alone should already cast a certain amount of doubt on the veracity of its substance. In fact, we have always had our doubts about this kind of authenticity when it comes to the written word, digital or otherwise.

So, to me, the arrival of LLMs does not make a big difference to the overarching question of lying and deception via the written (or, for that matter, spoken) word. The deluge of AI-generated text we are about to witness in the coming years will be accompanied by many concerned voices and opinions on the issue of authenticity and fakery.

I suspect (what else can I do at this point?) that this cultural awareness will all but certify the inherent unreliability of the written word. And given that text has never been a fully reliable mode of authentic communication about verifiable ideas, not much is about to change. Maybe, in fact, we will be less vulnerable to deception than before.