this post was submitted on 23 Mar 2026
699 points (99.0% liked)

Technology

[–] SchwertImStein@lemmy.dbzer0.com 73 points 2 days ago (2 children)

First, editors can use LLMs to suggest refinements to their own writing, as long as the edits are checked for accuracy.

translation assistance

[–] UnderpantsWeevil@lemmy.world 16 points 2 days ago* (last edited 2 days ago) (2 children)

The former I'm still looking sideways at.

The latter, probably the only truly benevolent use of LLMs. And even then, you'll get plenty of grumbling.

[–] ThunderComplex@lemmy.today 12 points 2 days ago (1 children)

Eh, I think this sounds ok. If you prompt an AI to improve your text, you submit that, and another human reviews it (and maybe asks you to make changes), it should be fine. I can see this giving more people the ability to make edits (e.g. non-native speakers).

[–] Nalivai@lemmy.world 2 points 1 day ago (2 children)

The problem is that it doesn't improve text, it worsens it. And if your grasp of the language isn't good enough, you can edit a page in your own language, or ask the nerds in the discussion section to help you: the result will be better written, they will be happy, and you might learn something.
Asking a slop generator to generate some slop about what you wanted to write will only make things worse.

[–] mirshafie@europe.pub 5 points 1 day ago (8 children)

This is a bit alarmist, I think. It's about how you use it. If your prompt is "please write a funny story about a bunny", you'll get slop. If you write a full-ass Wikipedia article and ask it to simplify and punctuate long passages for increased legibility, you can get valuable feedback.

[–] teuniac_@lemmy.world 3 points 1 day ago (1 children)

I think it's more nuanced than that. It all depends on what you're asking it to do (and a bit of luck that it complies as intended). Using a thesaurus can also either improve or worsen a text.

I'm not a native English speaker, but I have lived in an English-speaking country for many years now. I still make mistakes, but there is no point in my asking for help with English writing, as my mistakes are subtle and I don't realise I've made them. Getting an AI to detect clumsy use of English and grammar mistakes has worked quite well for me before publishing reports. While I don't always use correct grammar when writing, I'm very capable of judging whether an LLM-suggested improvement is actually better.

Of course, letting an LLM rewrite a whole text is much riskier in terms of the original meaning getting lost. But that's not the only way to use it.

[–] ThunderComplex@lemmy.today 2 points 1 day ago

There's definitely a lot of nuance in this topic. I think discarding the whole thing and saying "And if your grasp of the language isn't good enough, you can edit a page in your own language" is a bit naïve. English is the lingua franca of the world, so if you have knowledge about something that should be in Wikipedia but isn't, adding or appending to an English page will reach the widest audience. Ideally you'd then do the same for your native language as well.

As long as there are humans at the beginning and end of the pipeline I at least hope that this won't negatively affect the quality.

[–] Holytimes@sh.itjust.works 1 points 1 day ago

Honestly anything is an improvement over the subpar translation tools we had before. Still ain't great but we can give a W where it's earned.

[–] rodneylives@lemmy.world 6 points 2 days ago
[–] infeeeee@lemmy.zip 412 points 3 days ago (15 children)

Saved you a click:

After much debate, the new policy is in effect: Wikipedia authors are not allowed to use LLMs for generating or rewriting article content. There are two primary exceptions, though.

First, editors can use LLMs to suggest refinements to their own writing, as long as the edits are checked for accuracy. In other words, it’s being treated like any other grammar checker or writing assistance tool. The policy says, “LLMs can go beyond what you ask of them and change the meaning of the text such that it is not supported by the sources cited.”

The second exemption for LLMs is with translation assistance. Editors can use AI tools for the first pass at translating text, but they still need to be fluent enough in both languages to catch errors. As with regular writing refinements, anyone using LLMs also has to check that incorrect information hasn’t been injected.

[–] arcine@jlai.lu 3 points 1 day ago

Treating it like a tool instead of treating it like a God. What a novel idea!

[–] RIotingPacifist@lemmy.world 260 points 3 days ago (7 children)

AIbros: we're creating God!!!

AI users: it can do translation & reformatting pretty well, but you've got to check it's not chatting shit

[–] halcyoncmdr@piefed.social 100 points 3 days ago (3 children)

The takeaway from all LLM-based AI is that the user needs to be smart enough to do whatever they're asking anyway. All output needs to be verified before being used or relied upon.

The "AI" is just streamlining the process to save time.

Relying on it otherwise is stupid and just proves instantly that you are incompetent.

[–] Zagorath@quokk.au 12 points 3 days ago (2 children)

the user needs to be smart enough to do whatever they're asking anyway

I'm gonna say that's ideal but not quite necessary. What's needed is that the user is capable of properly verifying the output. Anyone who could do the task themselves definitely can, but the skill extends more broadly: it's easier to verify a result than it is to obtain that result. Think of how film critics don't necessarily need to be filmmakers, or the P vs NP question in computer science.

[–] Pyro@programming.dev 16 points 3 days ago (8 children)

But if the output has issues, what're you going to do, prompt it again? If you are only able to verify but not do the task, you cannot correct the AI's mistakes yourself.


Seems pretty reasonable to use it as a grammar checker. As long as it's not changing content, just form or readability, that seems like a pretty decent use for it, at least with a purely educational resource like Wikipedia.

[–] Goodlucksil@lemmy.dbzer0.com 12 points 3 days ago

To save you another few clicks: this is the discussion (RfC) that implemented the changes, and the policy is linked at the top.

[–] ji59@hilariouschaos.com 22 points 3 days ago

So, it should be used reasonably, as it should have always been.

[–] daychilde@lemmy.world 18 points 3 days ago

Liar. I already read the article before opening the comments. YOU SAVED ME NOTHING.

;-)

[–] ZILtoid1991@lemmy.world 18 points 2 days ago (1 children)

There should be only one exception: in case someone needs an example of AI-generated text.

[–] UnderpantsWeevil@lemmy.world 8 points 2 days ago

LLMs are excellent tools for mapping one set of words and phrases to another, which is more or less exactly what you need out of a language translator.

[–] eletes@sh.itjust.works 2 points 1 day ago

There should be a Wikipedia LLM with a sole purpose to check that the tone of the text is objective and matches Wikipedia standards.

The LLM should flag any changes it would make, and if the changes are above a threshold, the edit should be flagged for further review by another human.
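The flagging logic the comment describes could be sketched roughly like this. This is a hypothetical illustration, not anything Wikipedia has built: the revised text is assumed to come from some tone-checking model, and the 15% threshold is an arbitrary placeholder.

```python
import difflib

# Hypothetical threshold: flag the edit if more than 15% of the text would change.
REVIEW_THRESHOLD = 0.15

def change_ratio(original: str, revised: str) -> float:
    """Fraction of the text the suggested revision would alter (0.0 = identical)."""
    return 1.0 - difflib.SequenceMatcher(None, original, revised).ratio()

def needs_human_review(original: str, revised: str) -> bool:
    """True if the model's suggested rewrite is large enough to escalate."""
    return change_ratio(original, revised) > REVIEW_THRESHOLD

# 'revised' would come from the hypothetical tone-checking model.
original = "The city is home to a really awesome medieval castle."
revised = "The city contains a medieval castle."
print(needs_human_review(original, revised))  # heavy rewrite, so this escalates
```

The design choice here is that the model never edits anything directly; it only produces a suggested revision, and the size of the diff decides whether a human has to look at it.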

[–] SpaceNoodle@lemmy.world 87 points 3 days ago* (last edited 3 days ago) (1 children)

An extremely measured and level-headed response. Kudos to Wikipedia for maintaining high standards.

[–] kazerniel@lemmy.world 114 points 3 days ago (4 children)

It has to be said, they originally changed their stance due to the considerable editor pushback when they tried to introduce LLM summaries on the top of articles. So kudos to the editor community's resistance! ✊

[–] SpaceNoodle@lemmy.world 41 points 3 days ago* (last edited 3 days ago)

Good point. The real strength of Wikipedia truly lies in the editors.

[–] banshee@lemmy.world 3 points 2 days ago

Does anyone actually like LLM summaries on pages? This seems like a better fit for a browser extension that generates a summary on demand, instead of wasting resources generating one for everyone. Google's documentation is absolutely littered with the mess.

[–] SunlessGameStudios@lemmy.world 45 points 3 days ago* (last edited 3 days ago) (1 children)

I know at least one writing major who won an award for his volunteer work at Wikipedia. He did it as a hobby. They don't really need AI, they need people like him.

[–] yucandu@lemmy.world 23 points 3 days ago (2 children)

Banned the people who openly admit it, anyway.

[–] Mwa@thelemmy.club 17 points 3 days ago

W Wikipedia. It would be better to remove the exceptions, but it's fine tbh.

[–] amateurcrastinator@lemmy.world 7 points 3 days ago (1 children)

But how do they know it is AI-written?

Wikipedia has banned AI-generated text,

Smiling Gus

... with two exceptions

Glaring Gus
