this post was submitted on 20 Feb 2026
232 points (99.6% liked)
Not The Onion
you are viewing a single comment's thread
100% not true if they were using a single session to check multiple grants.
Every prompt you send includes your entire prior conversation with the chatbot. When that history exceeds the chatbot's context window, its answers become less and less relevant.
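The mechanism being described can be sketched in a few lines. This is a toy model of a generic chat API (not any specific vendor's): each request resends the full message history, and once the history outgrows the context window, the oldest turns get silently dropped, so the model literally stops "seeing" the start of the conversation. The window is measured in messages here for simplicity; real systems count tokens.

```python
def build_request(history, new_prompt, context_window=8):
    """Append the new prompt, then keep only the most recent
    messages that fit in the (toy) context window."""
    history = history + [new_prompt]
    return history[-context_window:]

history = []
for turn in range(12):
    history = build_request(history, f"user message {turn}")

# The first four messages have been silently dropped:
print(history[0])  # -> "user message 4"
```

After twelve turns with a window of eight, everything before turn 4 is gone, which is why long sessions start to drift.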
You'll notice this if you've ever had a chatbot guide you through something for an hour or more. It eventually gets something wrong, takes you down a rabbit hole, and goes in a big circle. At that point it can be very difficult to get the chatbot to simply respond to your prompt, e.g. if you say "you know what, let's talk about _______ instead," it will keep talking about whatever you were talking about before, staying in your dumb rabbit-hole loop.
So if they did this with multiple grants, eventually it would basically figure out they're looking for "yes, that's DEI" and just respond with different versions of that ad nauseam.
Yeah, but if the people who are hired to review grants are checking for DEI, are they smart enough to understand what they're reading?