
Hallucinations

In transformers, hallucinations are words or phrases generated by the model that are nonsensical, ungrammatical, or not supported by the input. They can be caused by a number of factors: the model was not trained on enough data, the training data was noisy or low quality, the model was not given enough context, or the model was not given enough constraints during generation. Hallucinations are a problem for transformers because they can make the output text difficult to understand, and they make the model more likely to produce incorrect or misleading information.
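
As a rough illustration of the last two causes (missing context and missing decoding constraints), here is a minimal sketch using the Hugging Face transformers library. The model name, prompts, and settings are illustrative assumptions, not a recipe for eliminating hallucinations; the point is only to show where context and decoding constraints enter the generation call.

```python
# Minimal sketch (assumptions: Hugging Face `transformers` installed,
# "gpt2" used only as a small placeholder model).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# Without grounding context and with loose sampling constraints, the model
# is freer to invent details -- a common source of hallucinations.
unconstrained = generator(
    "The capital of Atlantis is",
    max_new_tokens=20,
    do_sample=True,
    temperature=1.5,  # high temperature encourages unlikely tokens
)

# Supplying context in the prompt and tightening the decoding constraints
# makes fabricated continuations less likely.
constrained = generator(
    "Context: Atlantis is a fictional island from Plato's dialogues.\n"
    "Question: Is Atlantis a real place?\nAnswer:",
    max_new_tokens=20,
    do_sample=False,  # greedy decoding removes sampling randomness
)

print(unconstrained[0]["generated_text"])
print(constrained[0]["generated_text"])
```

In practice, the same idea scales up to retrieval-augmented prompting (supplying relevant documents as context) and more conservative decoding settings, both of which reduce, but do not eliminate, hallucinations.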

See Also: LLM Drawbacks

