I do appreciate the direct link to exactly what Wales said, and the full conversation with his replies and such. It's definitely a bit heady - Wales points out that editors are overstretched, and he gives an example where he used ChatGPT to draft helpful feedback for a new contributor. Then a bunch of editors file in and point out parts of the GPT response that are inaccurate or go against Wikipedia policy. They also point out how LLMs themselves are already making life hell for editors.
If the site is being flooded by LLM submissions, and then Wikipedia starts using LLMs to provide feedback on rejected articles, when does a human step in to clear out the hallucinations? If I were submitting an article, and I got bot feedback and edited my article with that feedback, and then a human looked at it and told me half the stuff the bot told me was wrong, I would be rightly pissed. If I were a new contributor dipping my toe into the scene for fun, that might just turn me off from Wiki editing forever.
And all of this is without considering the environmental impact of adding yet another major website to the data center load of existing LLMs. But it is clear that there are problems with this idea, even if the environmental costs are a nonfactor.
I've done fact-checking on LLMs for work before, and it quickly becomes evident that many models rely on Wikipedia as a heavily weighted source of truth.
If LLMs have even a small role in producing Wikipedia content, the ouroboros of declining quality will accelerate.
I have studied academic biblical scholarship for over 30 years. All of Wikipedia's biblical pages are riddled with errors. IMO, Wikipedia is a decent starting point but that would be it.
I’d urge you to submit corrections.
Myself and others in this field tried for about 6 months, to no avail. We gave up...they didn't want to hear it.
Can you give an example of an article with an error that you tried to correct? Not trying to cast doubt on your statements, genuinely just curious what kind of roadblocks you hit. I'm no Wikipedia expert, but I have started to dip my toe into editing in recent months.
This was 6 years ago, so I cannot recall precisely which pages. However, I just skimmed over about 15 pages that I thought would be riddled with errors. To my surprise, I only found one instance of a 'citation needed' mark and could find no major errors...maybe a few little hair-splitting examples, but nothing serious.
So, it appears that improvements have been made over the past 6 years. On the other hand, I only looked over roughly 15 pages.
The same caution would probably hold for any encyclopedia. Namely, these can be pretty good starting points, but not for serious scholarly research.
That's cool of you to give it even that brief review! At the end of the day, we 100% agree - Wikipedia is best for surface-level research. But that is what I love it for! Because of Wikipedia, we can all be surface-level experts on almost any topic in minutes. One of the few treasures of living in the 21st century.
I agree. I grew up using hard copy encyclopedias at libraries. It is incredible now to have all of this at our fingertips at home.