BREAKING NEWS – New York City, New York – Some researchers are increasingly convinced that hallucinations cannot be fully eliminated from artificial intelligence (AI) models, a problem that remains a considerable hurdle to large-scale public acceptance.

“We currently do not understand a lot of the black box nature of how machine learning comes to its conclusions,” said Kevin Kane, CEO of quantum encryption company American Binary.

Hallucinations, the term for the inaccurate or nonsensical text AI can produce, have plagued large language models such as ChatGPT for nearly the entire time they have been available to the public.

Kane said he is exploring possible approaches to the problem.