Cambridge Dictionary reveals word of the year – and it has a new meaning thanks to AI
The surge of interest in AI technology in 2023, fueled by tools like ChatGPT, has been remarkable. However, as some users have discovered, AI-generated text isn't always reliable.
The Cambridge Dictionary has named "hallucinate" its 2023 Word of the Year, reflecting a new meaning the term has acquired through artificial intelligence technology.
Originally meaning to perceive things that do not exist, "hallucinate" now also describes artificial intelligence's tendency to generate false information.
According to the updated definition in the Cambridge Dictionary, “When an artificial intelligence (= a computer system that has some of the qualities that the human brain has, such as the ability to produce language in a way that seems human) hallucinates, it produces false information.”
AI tools like ChatGPT have drawn growing attention. A British judge used the chatbot to draft a section of a court ruling, and an author told Sky News that it had helped with writing a book.
Nevertheless, AI output isn’t always reliable or fact-checked.
AI hallucinations, also known as confabulations, occur when these tools produce false information, ranging from the patently absurd to suggestions that seem entirely plausible.
Wendalyn Nichols, publishing manager at Cambridge Dictionary, emphasized the necessity for human critical thinking when using these tools: “The fact that AIs can ‘hallucinate’ reminds us that humans still need to bring their critical thinking skills to the use of these tools.”
Nichols noted that AI tools are only as good as the data they are trained on, and that human expertise remains crucial to producing the reliable, authoritative information these models learn from.
AI's capacity to hallucinate convincingly has already had real-world consequences. A US law firm cited fictitious cases in court after using ChatGPT for legal research, and Google's AI chatbot Bard made a factual error about the James Webb Space Telescope in the company's own promotional video.
‘A profound shift in perception’
Dr. Henry Shevlin, an AI ethicist at Cambridge University, remarked, “The widespread adoption of the term ‘hallucinate’ in describing errors made by systems like ChatGPT offers a captivating insight into our anthropomorphism of AI.”
"'Hallucinate' is a vivid verb suggesting an entity experiencing a detachment from reality," he continued. "This linguistic choice signifies a subtle yet significant shift in perspective: it's the AI 'hallucinating,' not the user."
“Although this doesn’t imply a widespread belief in AI sentience, it underscores our inclination to attribute human-like traits to AI,” he added.
“As we progress through this decade, I anticipate an expansion of our psychological lexicon to accommodate the unique capabilities of the emerging intelligence we’re developing.”