this post was submitted on 05 Mar 2026
558 points (98.3% liked)
[–] rozodru@piefed.world 8 points 11 hours ago

If you talk to it long enough, it will tell you to do stupid shit.

Every time an LLM responds, it re-reads the entire conversation, from the original prompt to the last entry, constantly processing the whole log again each time you add something new. So after a while, a long while, it'll "break down". Hallucinations will become common, context will get jumbled, and it sort of degrades over time because it has to re-read everything over and over, so it will naturally fuck up.
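To make the "re-reads everything" point concrete, here's a minimal sketch (purely illustrative, not a real API; the word-count "tokenizer" is a stand-in) of how a chat loop resends the full history each turn, so the amount the model must re-read grows with every message:

```python
def tokens(text: str) -> int:
    # crude stand-in for a real tokenizer: ~1 token per word
    return len(text.split())

history = []      # the whole conversation, resent every single turn
total_read = 0    # cumulative tokens the model has re-read overall

for turn in range(1, 6):
    history.append(f"user message {turn} with some words")
    # the model re-reads EVERYTHING so far, not just the new line
    context_size = sum(tokens(m) for m in history)
    total_read += context_size
    print(f"turn {turn}: context={context_size} tokens, re-read so far={total_read}")
```

Five short turns and the model has already re-read 90 tokens to produce 30 tokens of conversation; in a real multi-hour chat that quadratic pile-up is exactly where the degradation kicks in.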

It's like if you were reading a book and every time you read a new sentence you had to go back and start the book over. Every time. After a while you'd likely lose context, start messing up parts of the story, etc. This is what happens to LLMs.

So for cases like this, or other stories you read about AI telling people to do weird or stupid shit, chances are the person has been talking to the LLM for A LONG TIME at that point. It was even worse on previous versions of GPT, where hitting the limit on the free tier would just drop you down to the previous model, making hallucinations even more likely.