Alex Kirsch
Independent Scientist

ChatGPT: dangerous or merely useless?

14.04.2023
My two cents on the ChatGPT debate.

It is here! A computer program that produces text that cannot (easily) be distinguished from text written by humans. A game changer!

Really? I remember an interview with Joseph Weizenbaum, the author of the famous Eliza program, the first chatbot ever created. He related that back in the 1960s he and his colleagues were wondering: what would I talk about with a computer? Since there was no way to represent or acquire any knowledge in a computer that anyone would want to talk about, they settled on the scenario of a psychotherapist. Using the nondirective technique of Carl Rogers, all the chatbot had to do was rephrase the input of the user, who would then infer meaning from the response and continue the conversation all on their own.

The problem of representing knowledge adequately has still not been solved (I would even say, not even tackled). So, naturally, ChatGPT is just a fancier version of Eliza. Whatever a user types in is used as a seed to generate text, and only text. While for humans the motivation to produce text is communication, for ChatGPT it is just to chain words in a grammatically correct way. The confusion arises when humans interpret generated text as a form of communication.
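
The Eliza principle is easy to demonstrate. Here is a minimal sketch in Python (with toy rules of my own invention, not Weizenbaum's original script) of how little machinery is needed to keep such a conversation going:

```python
import re

# Toy Eliza-style rules (hypothetical; not Weizenbaum's original script):
# each rule maps a surface pattern in the user's input to a rephrasing template.
RULES = [
    (re.compile(r"\bI am (.*)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.*)", re.IGNORECASE), "Tell me more about feeling {0}."),
    (re.compile(r"\bmy (.*)", re.IGNORECASE), "Why is your {0} important to you?"),
]

def eliza_reply(user_input: str) -> str:
    """Rephrase the input in Rogerian style. No knowledge, no understanding:
    the program only matches surface patterns and echoes the words back."""
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return "Please go on."  # fallback when no pattern matches

print(eliza_reply("I am unhappy."))          # -> Why do you say you are unhappy?
print(eliza_reply("I feel misunderstood."))  # -> Tell me more about feeling misunderstood.
```

Nothing in these few lines understands anything; the apparent meaning is supplied entirely by the reader. ChatGPT replaces such hand-written patterns with statistics learned from vast amounts of text, but the meaning still comes from us.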

While AI debates in previous waves of hype concentrated on super-human AI and whether it would be the ruin or the salvation of humankind, this time even journalists focus on more practical threats: spam e-mail, generated websites of questionable validity, theses generated in bulk, search engines delivering untrue results, plus the usual trouble with offensive content.

I don't deny these problems; I just question their impact in the long run. If search engines return wrong results (whatever that means, see below), users either don't recognize the difference and it has no impact, or they do realize it and will stop using the service, returning to more traditional methods of information retrieval (such as other search engines, or even books!).

And there is a way to distinguish generated text from real communication: you have to read it. After the initial awe at how real the texts look, you have a hard time when you actually try to get information out of them. They are almost a parody of present management culture and other forms of superficial text production. I may not be able to distinguish generated empty nonsense from manually produced empty nonsense, but I know whether a text is worth my time.

Years ago in Germany we already had debates about plagiarism in PhD theses (predominantly those of politicians). I have always wondered what kind of relationship the advisors had with their students. An advisor should be involved in the production of the thesis, participate in discussions, and help with problems. If the final thesis reflects this process, I don't care how the words were produced. If students choose to tune text generators until they produce adequate content rather than writing the text directly, I doubt that they save a lot of time. In any case, the interesting part of a thesis is the message, the work that was done, the scientific contribution. If evaluators accept a text that is thoughtlessly pieced together (whether manually or by ChatGPT), they are doing a poor job.

Another issue I have with the current debate (and with related debates about fake news that have been around for longer) is the inherent assumption of true facts versus fakes. The latest version of ChatGPT (and several other systems) can also generate images. And panic breaks out: we cannot believe what we see any more! But what exactly is a real image? One that was produced by a camera? Without any post-processing? This would probably put all my personal travel pictures into the fake category, because every modern camera or phone automatically applies some kind of post-processing or color enhancement. You could argue that I did not change the arrangement of objects. But I am certainly guilty of looking for camera angles that make my subject look good. Some of my pictures of Machu Picchu look as if I had been there all alone: a complete lie, of course; I was surrounded by other tourists trying to fake their photos in the same way.

While the point is easiest to illustrate with images, the same holds for texts. In books about traditional herbs, you find statements such as "Hildegard von Bingen already used this herb to treat wounds." OK, a nice historical fact, but does the herb work? Scientific methodology had not been invented in the Middle Ages, and I am pretty sure that a medieval nun performed quite a few rituals for curing wounds that I would not believe to work. Even though the sentence is not really wrong (Hildegard von Bingen did use the herb to treat wounds), it spreads untested beliefs that could very well be wrong (the author apparently did not care enough to check).

After the first excitement, I expect that we will get accustomed to the fact that machines can now produce text, just as we have learned not to click on links in questionable e-mails. If text alone fulfills some need, such as entertainment or management reports, we will use it; otherwise we will return to methods that work better. I also think that this is a historic chance for traditional publishing companies to win back ground from the everything-is-free internet culture by offering high-quality, human-produced and -reviewed content. The only question is whether there are any left that have adequate quality standards.

I am not happy that ChatGPT and related systems are around. But now that they are here, I trust in the common sense of people to develop ways to deal with the situation. We should not forget that all of us have a say in the matter: we decide which tools we use, what we read, and whom we trust.
