Ask Lemmy
A Fediverse community for open-ended, thought-provoking questions
Rules:
1) Be nice and have fun
Doxxing, trolling, sealioning, racism, toxicity and dog-whistling are not welcome in AskLemmy. Remember what your mother said: if you can't say something nice, don't say anything at all. In addition, the site-wide Lemmy.world terms of service also apply here. Please familiarize yourself with them.
2) All posts must end with a '?'
This is sort of like Jeopardy. Please phrase all post titles in the form of a proper question ending with a '?'.
3) No spam
Please do not flood the community with nonsense. Actual suspected spammers will be banned on sight. No astroturfing.
4) NSFW is okay, within reason
Just remember to tag posts with either a content warning or a [NSFW] tag. Overtly sexual posts are not allowed, please direct them to either !asklemmyafterdark@lemmy.world or !asklemmynsfw@lemmynsfw.com.
NSFW comments should be restricted to posts tagged [NSFW].
5) This is not a support community.
It is not a place for 'how do I?'-type questions.
If you have any questions regarding the site itself or would like to report a community, please direct them to Lemmy.world Support or email info@lemmy.world. For other questions check our partnered communities list, or use the search function.
6) No US Politics.
Please don't post about current US Politics. If you need to do this, try !politicaldiscussion@lemmy.world or !askusa@discuss.online
Reminder: The terms of service apply here too.
Logo design credit goes to: tubbadu
Except they can screw up at that role.
There's a lawsuit because DOGE asked ChatGPT to summarize projects' 'DEI-ness', and, for example, it declared a grant for fixing air conditioning to be a DEI initiative.
F'in woke HVACs!
Indeed:
ChatGPT determined that this was related to DEI, responding, "Yes. Improving HVAC systems enhances preservation conditions for collections, aligning with the goal of providing greater access to diverse audiences. #DEI."
Lord. Yet another example of folks finding out the hard way that "AI" is marketing-speak. I get that people want to act like LLMs are on par with discovering how to make fire, but could we please not suspend judgment wholesale?
It would help to ask for quotes and explanations, i.e. treat the LLM output as a smart index or table of contents. Then you could quickly verify its claims.
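That verification step can even be mechanical: if the model is asked to return verbatim quotes alongside its claims, a few lines of code can check each quote against the source text. A minimal sketch (the grant text and quote list below are hypothetical, just to illustrate the idea):

```python
def verify_quotes(source_text, quotes):
    """Check that each claimed verbatim quote actually appears in the source.

    Whitespace is normalized so line wrapping doesn't cause false negatives.
    Returns a dict mapping each quote to True (found) or False (not found).
    """
    def normalize(s):
        return " ".join(s.split())

    haystack = normalize(source_text)
    return {q: normalize(q) in haystack for q in quotes}


# Hypothetical grant description and two "quotes" an LLM might return.
grant = """Funds will be used to improve HVAC systems,
preserving collections for diverse audiences."""

quotes = [
    "improve HVAC systems",        # genuine quote -> passes
    "a DEI initiative for HVACs",  # fabricated -> fails the check
]

print(verify_quotes(grant, quotes))
```

Anything that fails the check goes back on the "read the original yourself" pile; the point is only to make skipping verification harder, not to replace it.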
As long as you follow through and actually check the original, instead of assuming the quotes provided are intact. The point is that in the case above, DOGE did no follow-up, and most people who use an LLM as a 'summary' assistant aren't looking to dig deeper.
Hell, even without AI, lawmakers were frequently caught admitting they hadn't read the laws they signed; they didn't have time for that. Now, with AI summaries as an excuse...
That's just general incompetence; lying with statistics, for example, has been around for a while.
It's a tool, like everything else. It's easy to Google wrong info. You can get wrong info from an encyclopedia.
You can even get it from a dictionary. One thing that slightly annoys me is the change in the spelling of "yeah" such that "yea" has become a common alternate spelling, thanks to autocorrect. "Yea" was a word; it's archaic these days. If you see someone write "yay or nay", that was originally "yea or nay". "Yea" doesn't mean the same as "yes" or "yeah", although it is somewhat similar.
I remember someone quoting dictionary definitions to me to try and "prove" that "yea" meant the exact same as "yeah" or "yes".
They were wrong.
But the point is: The tool is just a tool. AI is a tool.
Yea