First, editors can use LLMs to suggest refinements to their own writing, as long as the edits are checked for accuracy.
translation assistance
The former I'm still looking sideways at.
The latter, probably the only truly benevolent use of LLMs. And even then, you'll get plenty of grumbling.
Eh, I think this sounds OK. If you prompt an AI to improve your text, submit that, and another human reviews it (and maybe asks you to make changes), it should be fine. I can see this giving more people the ability to make edits (e.g. non-native speakers)
The problem is, it doesn't improve text; it worsens it. And if your grasp of the language isn't good enough, you can edit a page in your own language, or ask the nerds in the discussion section to help you: it will be better written, they will be happy, and you might learn something.
Asking a slop generator to generate some slop about what you wanted to write will make things worse.
This is a bit alarmist I think. It's about how you use it. If your prompt is "please write a funny story about a bunny" you'll get slop. If you write a full-ass Wikipedia article and ask it to simplify and punctuate long passages for increased legibility you can get valuable feedback.
I think it's more nuanced than that. It all depends on what you're asking it to do (and a bit of luck that it complies as intended). Using a thesaurus can also either improve or worsen a text.
I'm not a native English speaker, but have lived in an English speaking country for many years now. I still make mistakes, but there is no point in me asking for help with English writing as my mistakes are subtle and I don't realise I made them. Getting an AI to detect clumsy use of English and grammar mistakes has worked quite well for me before publishing reports. While I don't always use the correct grammar while writing, I'm very capable of judging whether an LLM suggested improvement is actually better.
Of course, letting an LLM rewrite a whole text is much riskier in terms of the original meaning getting lost. But that's not the only way to use it.
There's definitely a lot of nuance in this topic. I think discarding the whole thing and saying "And if your grasp of the language isn't good enough, you can edit a page in your own language" is a bit naïve. English is the lingua franca of the world, so if you have knowledge about something that should be in Wikipedia but isn't, adding or appending to an English page will reach the widest audience. Ideally you'd then do the same for your native language as well.
As long as there are humans at the beginning and end of the pipeline I at least hope that this won't negatively affect the quality.
Honestly anything is an improvement over the subpar translation tools we had before. Still ain't great but we can give a W where it's earned.
Thank you!
Saved you a click:
After much debate, the new policy is in effect: Wikipedia authors are not allowed to use LLMs for generating or rewriting article content. There are two primary exceptions, though.
First, editors can use LLMs to suggest refinements to their own writing, as long as the edits are checked for accuracy. In other words, it's being treated like any other grammar checker or writing assistance tool. The policy cautions, "LLMs can go beyond what you ask of them and change the meaning of the text such that it is not supported by the sources cited."
The second exemption for LLMs is with translation assistance. Editors can use AI tools for the first pass at translating text, but they still need to be fluent enough in both languages to catch errors. As with regular writing refinements, anyone using LLMs also has to check that incorrect information hasn’t been injected.
Treating it like a tool instead of treating it like a God. What a novel idea!
AIbros: we're creating God!!!
AI users: it can do translation & reformatting pretty well but you've got to check it's not chatting shit
The takeaway from all LLM-based AI is the user needs to be smart enough to do whatever they're asking anyway. All output needs to be verified before being used or relied upon.
The "AI" is just streamlining the process to save time.
Relying on it otherwise is stupid and just proves instantly that you are incompetent.
the user needs to be smart enough to do whatever they're asking anyway
I'm gonna say that's ideal but not quite necessary. What's needed is that the user is capable of properly verifying the output. Which anyone who could do it themselves definitely can, but it can be done more broadly. It's an easier skill to verify a result than it is to obtain that result. Think: how film critics don't necessarily need to be filmmakers, or the P=NP question in computer science.
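The verify-without-being-able-to-produce asymmetry can be made concrete with a classic example; this is a minimal illustrative sketch (subset sum is my stand-in here, not something from the thread):

```python
# Checking a proposed answer is cheap; producing one from scratch is not.
# Verifying a candidate subset is linear in its size, while a brute-force
# search must potentially examine all 2^n subsets.
from itertools import combinations

def verify(numbers, subset, target):
    """Cheap check: is the proposed subset drawn from numbers and does it hit target?"""
    return set(subset) <= set(numbers) and sum(subset) == target

def solve(numbers, target):
    """Expensive search: try every subset until one works (exponential)."""
    for r in range(len(numbers) + 1):
        for combo in combinations(numbers, r):
            if sum(combo) == target:
                return list(combo)
    return None

nums = [3, 9, 8, 4, 5, 7]
answer = solve(nums, 15)         # slow path: generate a solution
print(verify(nums, answer, 15))  # fast path: confirm it
```

Same shape as reviewing LLM output: confirming a suggested edit is sound takes far less skill and effort than writing it yourself, though (as the reply below notes) only the slow path lets you fix a bad answer.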
But if the output has issues, what're you going to do, prompt it again? If you are only able to verify but not do the task, you cannot correct the AI's mistakes yourself.
Seems pretty reasonable to use it as a grammar checker. As long as it's not changing content, just form or readability, that seems like a pretty decent use for it, at least with a purely educational resource like Wikipedia.
To save you another few clicks: this is the discussion (RfC) that implemented the changes, and the policy is linked at the top.
So, it should be used reasonably, as it should have always been.
Liar. I already read the article before opening the comments. YOU SAVED ME NOTHING.
;-)
There should be only one exception: In case someone needs an example of an AI-generated text.
LLMs are excellent tools for mapping one set of words and phrases to another, which is more or less exactly what you need out of a language translator.
There should be a Wikipedia LLM with a sole purpose to check that the tone of the text is objective and matches Wikipedia standards.
The LLM should flag any changes it would make, and if the changes are above a threshold, the edit should be flagged for further review by another human.
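That flag-above-a-threshold step could be sketched roughly like this; a minimal sketch using a character-level similarity ratio from Python's `difflib`, where the function name and the 0.8 threshold are arbitrary choices of mine, not anything Wikipedia actually uses:

```python
# Sketch: escalate an LLM tone-check suggestion to human review when it
# rewrites too much of the original text. The 0.8 cutoff is illustrative.
from difflib import SequenceMatcher

REVIEW_THRESHOLD = 0.8  # below this similarity, send to a human reviewer

def needs_human_review(original: str, llm_suggestion: str) -> bool:
    """True if the suggested edit diverges from the original beyond the threshold."""
    similarity = SequenceMatcher(None, original, llm_suggestion).ratio()
    return similarity < REVIEW_THRESHOLD

# Small wording fix: similar enough to surface as a routine suggestion.
print(needs_human_review("The cat sat on teh mat.",
                         "The cat sat on the mat."))

# Wholesale rewrite: flag it for another human to look at.
print(needs_human_review("The cat sat on the mat.",
                         "Felines often choose textiles as resting spots."))
```

A real system would compare meaning, not just characters, but the escalation logic stays the same: measure how much changed, and route big changes to people.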
An extremely measured and level-headed response. Kudos to Wikipedia for maintaining high standards.
It has to be said, they originally changed their stance due to the considerable editor pushback when they tried to introduce LLM summaries on the top of articles. So kudos to the editor community's resistance! ✊
Good point. The real strength of Wikipedia lies in its editors.
Does anyone like LLM summaries in pages? This seems like a better fit for a browser extension to generate a summary on demand instead of wasting resources generating it for everyone. Google's documentation is absolutely littered with the mess.
I know at least one writing major who won an award from his volunteer work at Wikipedia. He did it as a hobby. They don't really need AI, they need people like him.
W Wikipedia, would be better to remove the exceptions but it's fine tbh.
But how do they know it is AI-written?
Wikipedia has banned AI-generated text, with two exceptions