Some time ago, I was playing around with ChatGPT and I started asking random questions because I was confused about some stuff. One of these questions was whether the seahorse emoji exists. You know where this is going…
Why is a SUPER SMART AI failing on such a simple question? To answer this, we have to look at how an AI works.

AI is not a human.
An LLM (Large Language Model) is trained on a bunch of data stolen borrowed from the internet. The more often something shows up in that data, the better the model learns it – and that, roughly, is what an AI “knows.”
An AI sees the word “the” billions of times during training, since it’s so common, so it learns to use it very well. Emojis are a different story: there’s far less data on them – probably only a few thousand references per emoji – so the model can’t pin them down as reliably.
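To make that concrete, here’s a toy sketch in Python (the corpus string is made up for illustration, and this is definitely not how an actual LLM is trained) that just counts how often different tokens show up in a scrap of text. Everyday words swamp emojis, and that imbalance is roughly why the model is shakier on emoji trivia.

```python
# Toy illustration only: count token frequencies in a made-up "internet" snippet.
# Real LLMs don't literally count words, but their knowledge does skew toward
# whatever appears most often in the training data.
from collections import Counter

corpus = (
    "the cat sat on the mat and the dog watched the cat 🐬 "
    "the fish swam past the boat while the sun set over the sea"
)

counts = Counter(corpus.split())
print(counts["the"])  # 8 -- common words appear constantly
print(counts["🐬"])   # 1 -- emojis are comparatively rare
```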
Okay? This doesn’t explain much…
I’M NOT DONE YET!
Another reason why this happens is the Mandela Effect. Read the Wikipedia article (linked) for more information, but here’s a quick summary: the Mandela Effect, a term coined by Fiona Broome and named after Nelson Mandela (whom many people falsely remembered dying in prison in the 1980s), describes a quirk of human memory and belief. Humans trust one another, so when large numbers of people share the same piece of incorrect information, some smart guys call it a Mandela Effect.
Now enough of boring science, here’s the good stuff!
AIs are clearly not humans[citation needed]. As mentioned above, they use data from the internet and billions of parameters to turn it into random blabber scholarly information.
Now it’s time to stop goofing around and tell you why the AI gets so frustrated.
The AI, as explained above, is trained on data from the internet. People share their knowledge online, and with only limited knowledge of emojis, the AI believes a seahorse emoji exists! But when it tries to type one out, it gets confused: there is no seahorse in the Unicode standard’s emoji set, so it keeps substituting the closest emoji it can find – and it never lands on a seahorse, because there isn’t one.
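Don’t just take my word for it – here’s a minimal sketch in Python (standard library only, and obviously not something ChatGPT itself runs) that scans the official Unicode character names for a seahorse. It comes up empty, while a real animal emoji like the dolphin shows up right away. (Exact results depend on the Unicode version bundled with your Python build.)

```python
# Minimal check: search every Unicode code point's official name for a fragment.
# "SEAHORSE" (and "SEA HORSE") return nothing, because no such character exists;
# "DOLPHIN" finds the real emoji at U+1F42C.
import sys
import unicodedata

def find_by_name(fragment: str) -> list[str]:
    hits = []
    for cp in range(sys.maxunicode + 1):
        name = unicodedata.name(chr(cp), "")
        if fragment in name:
            hits.append(f"U+{cp:04X} {name}")
    return hits

print(find_by_name("SEAHORSE"))   # []
print(find_by_name("SEA HORSE"))  # []
print(find_by_name("DOLPHIN"))    # ['U+1F42C DOLPHIN']
```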
A fix?
This whole ordeal teaches us that AI doesn’t actually think. It simply spits out data from the internet. That is a problem, considering that Reddit takes up a nice portion of its training data…
While I don’t work at OpenAI and cannot fix this inconvenient bug, I do have a solution. Remember Apple iOS 18’s Genmoji?
Another Experiment
What if…
We tried the Reasoning model?
Okay… That was something!
So in reasoning mode, it did initially fall for the trap, but after searching the web for updated results (and finding Reddit threads criticizing this same phenomenon), it was able to accurately conclude that the seahorse emoji doesn’t exist. It even criticized itself in the final answer: “LLMs sometimes hallucinate one.”
Cool, huh?
What can we learn from this experiment?
Turns out, the AI just searches the web when it can’t find the answer in its training data. So generative!
The conclusion: If you have a question that can be easily answered, just Google it. And maybe skip the AI overview for once.
Bye, now!