this post was submitted on 09 May 2024
118 points (95.4% liked)
PCGaming
You really can't.
You can run checks and fence it in with traditional software, and you can train it more narrowly...
I haven't seen anything that suggests AI hallucinations are actually a solvable problem, because they stem from the fact that these models don't actually think or know anything.
They're only useful when their output is vetted before use, because training a model that gets things 100% right 100% of the time is like capturing lightning in a bottle.
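To make "vetted before use" concrete, here's a minimal sketch of the kind of traditional-software fence I mean: force the model's output through strict validation and reject anything that doesn't pass, instead of trusting it. The field names and checks are made up for illustration; real guardrails would be domain-specific.

```python
import json

def vet_output(raw: str) -> dict:
    """Reject model output that isn't well-formed or fails basic sanity checks."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        # garbled or free-text output never reaches downstream code
        raise ValueError("rejected: not valid JSON")
    # hypothetical schema: only accept the fields we expect, with sane types
    if not isinstance(data.get("answer"), str):
        raise ValueError("rejected: missing or invalid 'answer' field")
    conf = data.get("confidence")
    if conf is not None and not (isinstance(conf, (int, float)) and 0.0 <= conf <= 1.0):
        raise ValueError("rejected: 'confidence' out of range")
    return data

# well-formed output passes through
print(vet_output('{"answer": "42", "confidence": 0.9}'))

# free-text hallucination gets caught instead of trusted
try:
    vet_output("The answer is probably 42")
except ValueError as e:
    print(e)
```

This doesn't stop the model from being wrong inside the fence, of course; it only guarantees the output is structurally safe to consume, which is about the most you can promise.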
It's the 90/90 problem. Except with AI it's looking more and more like a 90/99.99999999 problem.