this post was submitted on 16 May 2025
11 points (100.0% liked)

Hacker News


Posts from the RSS Feed of HackerNews.

The feed sometimes contains ads and posts that have been removed by the mod team at HN.

founded 10 months ago
top 4 comments
[–] Timatal@awful.systems 6 points 2 months ago* (last edited 2 months ago)

This is sort of the type of problem that a specifically trained ML model could be pretty good at.

This isn't that, though; it seems to me to literally be asking an LLM to just make stuff up. Given that, the results are interesting, but I wouldn't trust them.

[–] meyotch@slrpnk.net 3 points 2 months ago

The accuracy is similar to what a carny running the guess-your-weight hustle could achieve.

[–] abcdqfr@lemmy.world 2 points 2 months ago

Can't wait to be called a fat ass with 95% semantic certainty. Foolish machine, you underestimate my power! I'm a complete fat ass!!

[–] Etterra@discuss.online 2 points 2 months ago

Please remember that the LLM does not actually understand anything. It's predictive: it can predict what a person would say, but it doesn't understand the meaning of what it produces.
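To illustrate the "predictive" point, here's a toy next-word predictor built from nothing but bigram counts. This is purely illustrative (real LLMs use neural networks over tokens, not word counts), but it shows how text can be continued plausibly with zero understanding of meaning:

```python
from collections import Counter, defaultdict

# Toy "language model": predict the next word purely from how often
# each word followed the previous one in a tiny corpus. No semantics
# anywhere -- just counting and picking the most frequent continuation.
corpus = "the model predicts the next word the model predicts text".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word`, or None if unseen."""
    if word not in bigrams:
        return None
    return bigrams[word].most_common(1)[0][0]

print(predict_next("the"))  # "model" (follows "the" twice, "next" once)
```

The model "knows" that "model" tends to follow "the" in its data, in the same statistical sense an LLM "knows" what a person would plausibly say next, without any grasp of what the words refer to.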