AI hallucination
There’s always something to spoil perfection. Just as we discover this wonderful new toy, generative AI, which will create texts for us while we go off and have a cup of tea, we find that it is capable of making mistakes, and of presenting those mistakes in its generated texts with all the plausibility of Donald Trump making election promises. We have to read what it produces diligently, with a very critical eye, to spot the errors that are referred to as AI hallucinations.
The problem can arise from bad data in the sources that the generative AI is drawing on. Or it can arise from a bad prompt that is confusing, ambiguous, or in some other way elicits an unexpected response. Since we cannot expect data to be perfect and we cannot all be expert prompt engineers, funny things will happen: our chatbots will hallucinate from time to time. As a first step it is possible to ask the chatbot to fact-check its own output, but ultimately a human being has to check it.
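To make that first step concrete, here is a minimal Python sketch of the "ask the chatbot to fact-check itself" idea. The `ask_model` function is a hypothetical stand-in for whatever chatbot API you happen to use, and the prompts are only illustrative; the point is the shape of the two-pass check, not any particular product's interface.

```python
def ask_model(prompt: str) -> str:
    """Hypothetical placeholder for a call to your chosen chatbot API.

    In practice this would send the prompt to a hosted model and return
    its text reply; here it just echoes a stub so the sketch runs.
    """
    return f"[model reply to: {prompt[:60]}...]"


question = "When did the Berlin Wall come down, and who was the German chancellor at the time?"

# First pass: let the chatbot generate an answer.
draft = ask_model(question)

# Second pass: ask the chatbot to fact-check its own output.
check = ask_model(
    "Fact-check the following answer sentence by sentence. "
    "Flag any claim you cannot verify and explain why.\n\n"
    f"Question: {question}\n\nAnswer: {draft}"
)

print(draft)
print(check)
# The self-check may catch some slips, but a human still has to read
# both outputs with a critical eye before trusting them.
```

A self-check like this can surface obvious contradictions, but since the same model is doing the checking, it is no substitute for the human review described above.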
Some people don’t like to talk about chatbots hallucinating because the term anthropomorphises the chatbot, making it seem as if it has its own perception of the world which can be distorted. The term hallucination first appeared in the context of computer vision, the use of AI to get a computer to analyse an image and identify the objects and people in it. The use of the term has since shifted from images to texts.