this post was submitted on 16 Feb 2026
91 points (100.0% liked)

Technology


Backstory here: https://www.404media.co/ars-technica-pulls-article-with-ai-fabricated-quotes-about-ai-generated-article/

Personally I think this is a good response. I hope they stay true to it in the future.

top 50 comments
[–] sgibson5150@lemmy.dbzer0.com 3 points 3 hours ago

I read them regularly for years until they started banning folk in the forums for pointing out how problematic it is for Eric Berger to still be slobbering on Elon's knob.

Don't think I'm missing much, though I do miss Beth Mole.

[–] verstra@programming.dev 14 points 9 hours ago

Woah, they take the blame and apologize. This is not often seen and commands respect.

[–] lvxferre@mander.xyz 25 points 12 hours ago (1 children)

Link to the archived version of the article in question.

I actually like the editor's note. Instead of naming and shaming the author (Benj Edwards), it blames "Ars Technica". It also says they looked for further issues. It sounds surprisingly sincere for a corporate apology.

Blaming AT as a whole is important because it acknowledges Edwards wasn't the only one fucking it up. Whatever a journalist submits needs to be reviewed by at least a second person, exactly for this reason: to catch dumb mistakes. Either that system is not in place or it is not working properly.

I do think Edwards is to blame but I wouldn't go so far as saying he should be fired, unless he has a backstory of doing this sort of dumb shit. (AFAIK he doesn't.) "People should be responsible for their tool usage" is not the same as "every infraction deserves capital punishment"; sometimes scolding is enough. I think @totally_human_emdash_user@piefed.blahaj.zone's comment was spot on in this regard: he should've taken sick time off, but this would have cost him vacation time, and even being forced to make this choice is a systemic problem. So ultimately it falls on his employer (AT) again.

[–] Kirk@startrek.website 4 points 4 hours ago (1 children)

I agree with you. For better or worse, I have to imagine a lot of people whose job relies on pumping out regular articles use LLMs to get the ball rolling. Which is what appears to have happened here.

[–] totally_human_emdash_user@piefed.blahaj.zone 4 points 3 hours ago (1 children)

Just to be clear, the article itself was written by him; he was just experimenting with an AI tool to extract quotes (because learning about AI tools is literally his job), and because he had COVID at the time he got mixed up and pasted paraphrased quotes rather than original quotes. (Arguably he should not have been experimenting with a new tool while sick, but I am willing to cut him some slack because he was probably not thinking clearly at the time.)

The serious thing here is actually not so much that he used an AI tool at some point in the process but that fabricated quotes ended up in a published article.

[–] Kirk@startrek.website 2 points 2 hours ago

Thanks for clarifying, after I left that comment I realized I had the order of events reversed!

[–] ryper@lemmy.ca 34 points 16 hours ago (6 children)

Benj Edwards, the author responsible, has posted his side.

[–] Kirk@startrek.website 1 points 4 hours ago

Thanks for sharing, I was wondering if he would say anything about it. They seem to be handling it well.

[–] LukeZaz@beehaw.org 13 points 12 hours ago* (last edited 12 hours ago) (1 children)

This is a good way to handle the situation and an understandable and believable scenario, so I'm perfectly willing to forgive this. I'm a little less okay with an apparent "work in spite of illness" policy, however.

But still, it's a serious blunder, and it needs to be said that any repeat of this at all would be very damning. I can't forgive this level of fuckup twice. Any AI use is a risk, folks; treat it like one.

[–] Kirk@startrek.website 6 points 4 hours ago

When I first became aware of it, I did not expect this story to become a good case for workers' rights and ensuring everyone has enough rest, but here we are.

[–] XLE@piefed.social 8 points 12 hours ago (1 children)

Why would he play with an AI toy while he's doing his job and he's sick?

Of course something was bound to happen.

[–] Assassassin@lemmy.dbzer0.com 1 points 1 hour ago (1 children)

You can't empathize with someone having to work while sick and wanting to use a tool to make that work slightly easier?

[–] XLE@piefed.social 1 points 1 hour ago (1 children)

I tend to empathize with the victims of plagiarism over the perpetrators of it.

[–] Assassassin@lemmy.dbzer0.com 0 points 23 minutes ago

That's an incredibly narrow-minded way to view this issue.

[–] Ashtear@piefed.social 23 points 15 hours ago

Well, I can see how that could happen, and in fact, copy-paste artifacts and unintended summaries/hallucinations have happened to me when grabbing output back from an LLM.

Here's the thing though: I catch it 100% of the time because my writing has version control and I compare diffs. When dealing with something that can exist as plain text, there isn't a good reason not to have that setup. I'm no journalist, but it blows my mind that writers who deal specifically in reported facts apparently don't have systems in place to idiot-proof and preserve their sources of truth.

I get it: at some point back in the analog days there were more editors and copy editors who actually verified these things, and those jobs were sacrificed at the altar of capitalism. I've seen writing quality on the web take a downturn as a result. But for fuck's sake y'all, maybe do the bare minimum and start implementing safeguards before you let your writers use inherently lossy tools?
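The kind of safeguard described above doesn't have to be fancy. A minimal sketch (the function name, the sample strings, and the assumption that quotes are delimited by plain double quotes are all illustrative, not anything Ars actually runs) could just check every direct quote in a draft against the source transcript:

```python
import re

def find_unverified_quotes(article: str, transcript: str) -> list[str]:
    """Return direct quotes from the article that do not appear
    verbatim in the source transcript."""
    # Naive assumption for illustration: quotes use plain double quotes.
    quotes = re.findall(r'"([^"]+)"', article)
    return [q for q in quotes if q not in transcript]

# Hypothetical example data:
transcript = 'Shambaugh said: "I never agreed to that interview."'
article = ('He told us, "I never agreed to that interview." '
           'Later he added, "This is fine."')

# The second quote is not in the transcript, so it gets flagged.
print(find_unverified_quotes(article, transcript))
```

A pre-publish hook running something like this against the interview notes would have caught the paraphrased quotes before they went live, the same way a diff against a known-good draft would.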

[–] artyom@piefed.social 19 points 15 hours ago (1 children)

Thank fuck Bsky has these character limits or else he would have had to put all that text in an easily legible format for reading and copying. Fuck character limits up their stupid asses.

[–] usernamesAreTricky@lemmy.ml 5 points 15 hours ago (1 children)

The author added the entire text in the alt text; if you click on the image and then the ... you can see the full thing. You can easily copy and paste from that, or read it there instead.

[–] artyom@piefed.social 5 points 15 hours ago (1 children)

All the more stupid. Why is it hidden in the alt text and not in the text of the post?

[–] Kirk@startrek.website 0 points 4 hours ago (1 children)

Not using an ActivityPub-based platform has its drawbacks, I guess.

[–] irelephant@lemmy.dbzer0.com 1 points 3 hours ago

Some bluesky clients/instances support longer posts.

[–] kibiz0r@midwest.social 9 points 15 hours ago* (last edited 15 hours ago)

This sounds eerily familiar…

I don’t know if Hearst told him to use a chatbot to generate their “Best of Summer Lists,” but it doesn’t matter. When you give a freelancer an assignment to turn around ten summer lists on a short timescale, everyone understands that his job isn’t to write those lists, it’s to supervise a chatbot.

But his job wasn’t even to supervise the chatbot adequately (single-handedly fact-checking 10 lists of 15 items is a long, labor-intensive process). Rather, it was to take the blame for the factual inaccuracies in those lists. He was, in the phrasing of Dan Davies, “an accountability sink” (or as Madeleine Clare Elish puts it, a “moral crumple zone”).

https://locusmag.com/feature/commentary-cory-doctorow-reverse-centaurs/

[–] artyom@piefed.social 17 points 16 hours ago (1 children)

On Friday afternoon, Ars Technica published an article containing fabricated quotations generated by an AI tool and attributed to a source who did not say them. That is a serious failure of our standards. Direct quotations must always reflect what a source actually said.

That this happened at Ars is especially distressing. We have covered the risks of overreliance on AI tools for years, and our written policy reflects those concerns. In this case, fabricated quotations were published in a manner inconsistent with that policy. We have reviewed recent work and have not identified additional issues. At this time, this appears to be an isolated incident.

Ars Technica does not permit the publication of AI-generated material unless it is clearly labeled and presented for demonstration purposes. That rule is not optional, and it was not followed here.

We regret this failure and apologize to our readers. We have also apologized to Mr. Scott Shambaugh, who was falsely quoted.

Nothing about who put it in there or what you're doing to them?

[–] otter@lemmy.ca 9 points 16 hours ago (22 children)

We are reinforcing our editorial standards following this incident.

It sounds like they will be reminding their team not to do that and scrutinizing articles in the near future

[–] GammaGames@beehaw.org 8 points 16 hours ago (1 children)

I don’t see anything new? It’s a response, I was hoping they’d actually say what happened instead of… just repeating that it did.

[–] locuester@lemmy.zip 13 points 16 hours ago (4 children)
[–] GammaGames@beehaw.org 6 points 15 hours ago (1 children)

Good response, and it sounds like lessons learned!

[–] totally_human_emdash_user@piefed.blahaj.zone 5 points 12 hours ago (1 children)

Agreed, which is why I really do not like how much people are beating on him, but the problem remains that he published an article with fabricated quotes, which hurts not only his own credibility but that of Ars as a whole. I think that it may be best for everyone if he applies the lessons that he learned at another place of employment.

(Also, though, Ars really needs to do something about its culture regarding working while sick, as that makes it inevitable that a mistake like this is going to be made, AI or not.)

[–] GammaGames@beehaw.org 2 points 12 hours ago (1 children)

I think beating him while he’s down is too much. Mean comments on the internet do not compare to having to find a new job while recovering from covid

[–] totally_human_emdash_user@piefed.blahaj.zone 3 points 12 hours ago (1 children)

Fair enough. Realistically, my understanding is that he and the other authors are part of WGA, so Ars would be required to go through an investigative process before firing him, which would probably take enough time that he would have had plenty of time to recover from COVID before having to hunt for a job.

Having said that, I am out for change, not for blood. I think that if Ars announced that the root problem was the lack of sick leave so it was a systemic failure rather than a personal failure (or something along those lines), then that might actually be a pretty good outcome as well.

[–] GammaGames@beehaw.org 3 points 12 hours ago

That would be a good outcome!

[–] jaennaet@sopuli.xyz 5 points 15 hours ago

The question is, how many other articles with fake quotes are there on Ars? And not just Ars, but across the mainstream media
