Bane and Boon of Hallucinations in the Context of Generative AI
Author(s): S. M. Nazmuz Sakib (School of Business and Trade, International MBA Institute, Dhaka International University, Bangladesh)
Copyright: 2024
Pages: 24
EISBN13: 9798369373088
Abstract
The phenomenon of hallucination occurs when generative artificial intelligence systems, such as large language models (LLMs) like ChatGPT, produce outputs that are illogical, factually incorrect, or otherwise unreal. In generative AI, hallucinations can unlock creative potential, but they also pose challenges for producing accurate and trustworthy outputs; this abstract covers both concerns. Hallucinations can arise from a variety of factors: if the training data is insufficient, incomplete, or biased, the model may respond inaccurately to novel situations or edge cases. Moreover, generative AI commonly produces content in response to prompts regardless of the model's “understanding” or the quality of its output.
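To make the mechanism concrete, the following is a minimal, hypothetical sketch, not drawn from the chapter: a toy word-level bigram language model trained on a tiny corpus. The corpus, names, and parameters are all illustrative assumptions. Because generation is driven purely by learned co-occurrence statistics, the model still emits fluent-looking text for a prompt it has never seen, guessing rather than abstaining.

import random
from collections import defaultdict

# Hypothetical toy model, for illustration only: a word-level bigram language
# model trained on a tiny corpus. Generation is driven purely by observed
# co-occurrence statistics, with no notion of truth or "understanding".
CORPUS = (
    "the eiffel tower is in paris . "
    "the louvre is in paris . "
    "the colosseum is in rome ."
).split()

# Count which word follows which in the training data.
transitions = defaultdict(list)
for prev, nxt in zip(CORPUS, CORPUS[1:]):
    transitions[prev].append(nxt)

def generate(prompt_word, length=8, seed=0):
    """Sample a continuation; fall back to a random known word for unseen input."""
    rng = random.Random(seed)
    word = prompt_word
    out = [word]
    for _ in range(length):
        # Edge case: no statistics for this word, so the model guesses
        # instead of reporting uncertainty; this is the seed of a hallucination.
        candidates = transitions.get(word) or list(transitions)
        word = rng.choice(candidates)
        out.append(word)
    return " ".join(out)

print(generate("eiffel"))    # in-distribution prompt: statistically grounded
print(generate("atlantis"))  # unseen prompt: fluent but unfounded output

A real LLM is vastly more capable, but the failure mode is analogous: sampling proceeds even when the prompt falls outside what the training data supports.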
Related Content
Ioanna Papasolomou, Maria Ioannou, Maria Kalogirou, Panayiotis Christophi, Theodosis Kokkinos. © 2019. 17 pages.
Mert Ersen, Abdulkadir Keskin, Abdulkadir Atalan. © 2023. 15 pages.
José Duarte Santos, José Pita Castelo. © 2022. 16 pages.
Louis Alarcon, Valentina Arangiaro, Adrian Fernandez Rabe, Felicián Elekes, Juliane Gallersdörfer, Katharina Gramiller, Gloria Rotundo, Kristóf Tölgyesi. © 2022. 15 pages.
Haoyue Yu. © 2025. 16 pages.