LLMbeciles are dangerously incompetent tools that unfortunately "hack" a weakness in human perception: we are hard-wired to equate eloquence and confidence with intellect (the so-called fluency heuristic). LLMbeciles are very fluent, eloquent, and confident, and we are very vulnerable to that combination. As a result, outside our areas of expertise we have a tendency to trust LLMbecile output despite the fact that it is literally 100% bullshit (in the Frankfurt sense): hallucination. It just happens that, thanks to the statistics of the human language stolen to build the model, these hallucinations match reality well enough to fool non-experts. And that's the danger: they're "right" (which is to say their bullshit semi-accidentally matches reality) often enough that we don't catch the cases where their bullshit is just plain wrong.
This is a pattern I see with a lot of people who have areas of high expertise:
- "LLMbeciles are not really useful in this field in which I have expertise..."
- "...but I think they're very useful in all these fields in which I have no expertise."
Gell-Mann must be rolling in his grave right now! (Yes, I know the "Gell-Mann amnesia" bit is actually Crichton's, but I'm sticking with it.)