the_dunk_tank
It's the dunk tank.
This is where you come to post big-brained hot takes by chuds, libs, or even fellow leftists, and tear them to itty-bitty pieces with precision dunkstrikes.
Rule 1: All posts must include links to the subject matter, and no identifying information should be redacted.
Rule 2: If your source is a reactionary website, please use archive.is instead of linking directly.
Rule 3: No sectarianism.
Rule 4: TERFs/SWERFs are not welcome.
Rule 5: No ableism of any kind (that includes stuff like libt*rd)
Rule 6: Do not post fellow hexbears.
Rule 7: Do not individually target other instances' admins or moderators.
Rule 8: The subject of a post cannot be low-hanging fruit, that is, comments/posts made by a private person that have a low number of upvotes/likes/views. Comments/posts made on other instances that are accessible from hexbear are an exception to this. Posts that do not meet this requirement can be posted to !shitreactionariessay@lemmygrad.ml
Rule 9: If you post ironic rage bait, I'm going to make a personal visit to your house to make sure you never make this mistake again.
Thing is, ChatGPT can easily answer this question correctly. So it's not an LLM issue; it's that Google has managed to combine its horrible search results with an LLM to give us the worst of both worlds.
Probably just luck that ChatGPT pulled an answer from a different website.
They're probably trying to make it as cheap as possible, and thus, extra shitty.
I'd bet it's wholly dependent on their shitty results. They're basically passing the model a prompt like "parse these 10 $cached_webpage_results[] to answer this $question", and since the prompt heavily primes the answer, it's going to pull from the shitty included search results rather than its own training. Something like the sketch below.
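To make that concrete, here's a minimal sketch of that kind of prompt-stuffing (retrieval-augmented prompting). Every name in it is hypothetical; this is a guess at the shape of the pipeline, not Google's actual code:

```python
# Hypothetical sketch: stuff cached search results into the prompt ahead
# of the question, so the model answers from those pages instead of its
# training data. build_prompt and all names below are made up for
# illustration; nothing here is a real Google API.

def build_prompt(question: str, cached_webpage_results: list[str]) -> str:
    """Prepend the retrieved pages to the question."""
    pages = "\n\n".join(
        f"[Result {i}]\n{page}"
        for i, page in enumerate(cached_webpage_results, start=1)
    )
    return (
        "Using ONLY the web pages below, answer the question.\n\n"
        f"{pages}\n\n"
        f"Question: {question}\nAnswer:"
    )

# If the retrieved pages are wrong, the "ONLY" instruction primes the
# model to repeat their error rather than fall back on what it learned
# in training -- which is the failure mode described above.
print(build_prompt(
    "What year did the thing happen?",
    ["...snippet from an SEO spam page...", "...another cached result..."],
))
```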
It's shitty, advertising-focused results.