What if ChatGPT could make us less gullible?

ChatGPT: epistemic threat or BS-spotting boon?

Erwan Lamy
Associate Professor, Information and Operations Management, ESCP Business School

  • ‘AI could be an epistemic threat,’ says ChatGPT, which does not differentiate between reliable and questionable information.
  • Insights from virtue epistemologists’ work on deepfakes have been helpful for thinking about the possible effects of ChatGPT.
  • ChatGPT could help increase our ‘digital sensibility’, allowing us to accurately tell true from false in the mass of images and videos circulating online.

Released by OpenAI on 30 November 2022, ChatGPT can almost perfectly mimic entire discussions or respond to complex questions by producing texts that seem to come straight from a human brain.

This breakthrough raises an array of concerns – economic, ethical and “epistemic”, that is, concerning the production and acquisition of reliable knowledge and information – since this type of artificial intelligence does not, at this stage, differentiate between reliable and questionable information.

‘AI could be an epistemic threat,’ says ChatGPT

“AI could be an epistemic threat since it can generate convincing, but false information. This could challenge our understanding of the world or even endanger the validity of our knowledge. This raises concerns about the possibility of using AI to spread disinformation or manipulate people’s beliefs.”

It's not me saying it, it is … ChatGPT itself! It generated the previous paragraph when asked the following question: “How is AI an epistemic threat?” As we can see from this example, its answers can be very convincing – yet fatuous. Sometimes fake content is obvious, and sometimes it is harder to spot.

In our example, while there is little wrong with the first sentence, the second is a meaningless cliché: what exactly does “challenge our understanding of the world” or “endanger the validity of our knowledge” mean? And the third sentence doesn’t make much sense: these AIs do not spread anything themselves, and they are not well suited to manipulation, since we have little control over what they produce.

But that is just the issue: we have to think to get to the truth.

What we must understand is that ChatGPT is not programmed to answer questions, but to produce plausible-sounding text. Models of this kind become especially powerful when there is a large amount of text available for them to “read” – and in ChatGPT’s case, that amount is phenomenal.

When given a sequence of words, ChatGPT determines the most likely words to follow it. It can therefore “respond” to a question, and in a necessarily believable way, since it calculates the most likely response. But there is no logic or reasoning involved – nothing more than a calculation of probabilities. ChatGPT does not care in the least about the veracity of its responses. This is why Princeton University computer scientists Arvind Narayanan and Sayash Kapoor call it a “bullshit generator”.
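
To see how mechanical this is, here is a minimal sketch of next-token prediction. It is not ChatGPT’s own (closed-source) code: it uses the publicly available GPT-2 model – a smaller ancestor of the models behind ChatGPT – through the Hugging Face transformers library, and the prompt is chosen purely for illustration.

    # A minimal sketch of next-token prediction. Assumes the `torch` and
    # `transformers` packages are installed; the public GPT-2 model stands
    # in for ChatGPT's closed models, which work on the same principle.
    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    prompt = "AI could be an epistemic threat because"
    input_ids = tokenizer.encode(prompt, return_tensors="pt")

    with torch.no_grad():
        logits = model(input_ids).logits  # shape: (1, sequence_length, vocabulary_size)

    # A probability distribution over the *next* token: pure statistics,
    # with no notion of whether any continuation is true or false.
    next_token_probs = torch.softmax(logits[0, -1], dim=-1)
    top = torch.topk(next_token_probs, k=5)
    for prob, token_id in zip(top.values.tolist(), top.indices.tolist()):
        print(f"{tokenizer.decode([token_id])!r}  p = {prob:.3f}")

Whichever candidate word the model then picks, the choice is driven by likelihood alone; nothing in this calculation checks any claim against the world.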

Since Harry Frankfurt made “bullshit” the topic of an essay (first published in 1986) and then a book (2005), it has become a philosophical concept in its own right. Serious researchers in psychology, philosophy, neuroscience and management science now study it. The concept has grown more complex, but here we can stick with its original definition: indifference to the truth. Bullshit is not lying: a liar is concerned with the truth, since they seek to distort it. A bullshitter, on the other hand, has no regard for the truth and seeks simply to captivate – what they say may be right at times and wrong at others, but no matter.

This is precisely the case for the highly talented ChatGPT. When it gets things wrong, it is not obvious – at least, not immediately. A powerful, easy-to-use word generator, available to everyone, with no obligation to stick to the facts? There is indeed reason to worry. It is not hard to imagine unscrupulous content producers using this tool to churn out “information”, especially since ChatGPT even seems capable of fooling academics in their own areas of expertise.

Epistemic vices and virtues

What is at stake is a sense of intellectual ethics. Contrary to popular belief, the production and acquisition of knowledge (scientific or otherwise) is not only a matter of method. It is also a moral matter. Philosophers talk about intellectual vices and virtues, which can be defined as personality traits that hinder or facilitate the acquisition and production of reliable information.

Open-mindedness is an example of an epistemic virtue; dogmatism is an example of a vice. These concepts have been addressed in a growing body of philosophical literature since the early 1990s: virtue epistemology. At the outset this research was essentially technical, since its goal was to define knowledge correctly, but it now also tackles the epistemic problems of our times: disinformation, fake news and, of course, the dangers posed by AI.

Until recently, virtue epistemologists discussing the epistemic consequences of AI focused especially on ‘deepfakes’ – images and videos generated entirely by AI (OpenAI’s DALL·E is one well-known image generator) that can show real individuals in entirely made-up, scandalous situations with a striking degree of realism. Insights from this work have been helpful for thinking about the possible effects of ChatGPT, and perhaps for tempering a pessimism that has probably been somewhat excessive.

The production of deepfakes is obviously a problem, but the widespread availability of such videos may lead the public to develop a general scepticism about images, a form of “intellectual cynicism”. The author who made this suggestion (in 2022) saw it as an epistemic vice, since it would lead people to doubt made-up and evidence-based information alike. But I am not sure that such cynicism would be so bad: it would amount to going back to a time, not that long ago, when images did not play such a significant role in information acquisition. It does not seem to me that this time (pre-1930s) was particularly vicious, epistemically speaking.

Boosting our digital sensibility

In any event, this cynicism could in turn lead to the development of an epistemic virtue: a certain “digital sensibility”, allowing us to accurately tell true from false in the mass of images and videos circulating online.

Such digital sensibility could also be boosted by ChatGPT. The readers of texts produced by this AI, put off by the torrents of bullshit it may spew out, could become warier when reading a text online, or when faced with an image (out of fear of being fooled by a deepfake) – without falling into a form of general scepticism.

More generally, the rise of AI could underscore the need to cultivate epistemic virtues and combat vices, such as the common tendency not to doubt conspiracy theories spreading on social media. Ultimately, these worrying technologies may be good news for intellectual ethics.

• This article was originally published in French by The Conversation.

The views expressed in this article are those of the author alone and not the World Economic Forum.
