submitted 3 months ago by MicroWave@lemmy.world to c/science@lemmy.world

A London librarian has analyzed millions of articles in search of uncommon terms abused by artificial intelligence programs

Librarian Andrew Gray has made a “very surprising” discovery. He analyzed five million scientific studies published last year and detected a sudden rise in the use of certain words, such as meticulously (up 137%), intricate (117%), commendable (83%) and meticulous (59%). The librarian from University College London can only find one explanation for this rise: tens of thousands of researchers are using ChatGPT — or other similar Large Language Model tools with artificial intelligence — to write their studies or at least “polish” them.
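The article doesn't describe Gray's exact pipeline, but the core of this kind of analysis — comparing a word's per-million-word frequency between two publication years — can be sketched as follows. This is an illustrative sketch, not Gray's actual tooling; the corpora here are tiny stand-ins for millions of abstracts.

```python
from collections import Counter
import re

def word_frequencies(texts):
    """Return each lowercase word's rate per million words across a corpus."""
    counts = Counter()
    total = 0
    for text in texts:
        words = re.findall(r"[a-z]+", text.lower())
        counts.update(words)
        total += len(words)
    return {w: c / total * 1_000_000 for w, c in counts.items()}

def percent_change(freq_old, freq_new, word):
    """Percentage rise (or fall) in a word's per-million rate between corpora."""
    old = freq_old.get(word, 0)
    new = freq_new.get(word, 0)
    if old == 0:
        return float("inf")  # word absent in the baseline year
    return (new - old) / old * 100

# Toy corpora standing in for abstracts from two publication years.
abstracts_2022 = ["We meticulously describe a simple method.",
                  "Results were analyzed carefully."]
abstracts_2023 = ["We meticulously describe an intricate method.",
                  "Results were meticulously analyzed."]

f_old = word_frequencies(abstracts_2022)
f_new = word_frequencies(abstracts_2023)
print(percent_change(f_old, f_new, "meticulously"))  # prints 100.0
```

Normalizing to a per-million rate matters because the total volume of papers changes year to year; a raw count comparison would conflate growth in publishing with growth in a word's popularity.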

There are blatant examples. A team of Chinese scientists published a study on lithium batteries on February 17. The work — published in a specialized journal from the publishing house Elsevier — begins like this: “Certainly, here is a possible introduction for your topic: Lithium-metal batteries are promising candidates for….” The authors apparently asked ChatGPT for an introduction and accidentally copied it in as is. A separate article in a different Elsevier journal, published by Israeli researchers on March 8, includes the text: “In summary, the management of bilateral iatrogenic I’m very sorry, but I don’t have access to real-time information or patient-specific data, as I am an AI language model.” And, a couple of months ago, three Chinese scientists published a crazy drawing of a rat with a kind of giant penis, an image generated with artificial intelligence for a study on sperm precursor cells.

[-] RobotToaster@mander.xyz 163 points 3 months ago* (last edited 3 months ago)

In general, if it passed peer review it shouldn't matter how it was written.

The fact that the blatant examples apparently made it past peer review shows how shoddy the process is, though.

[-] floofloof@lemmy.ca 88 points 3 months ago* (last edited 3 months ago)

The editing too. I worked as an editor for academic journals and newspapers about 25 years ago, and nothing like these "blatant" examples would get anywhere near print. We'd remove clichéd language too. Everyone seems to have stopped proofreading and editing.

[-] gravitas_deficiency@sh.itjust.works 36 points 3 months ago

It’s because all the management level types above the editors all got the brainwave to fire the editors and “just use AI” instead, and entirely failed to understand that the technology is in its infancy and really cannot be considered reliable for things like this, especially if it’s used in such simplistic plug-and-play fashion.

[-] teft@lemmy.world 23 points 3 months ago

Publishers not proofreading was long before AI came into play. I’ve noticed it for at least a decade now.

[-] ericjmorey@lemmy.world 6 points 3 months ago

There was a whole season of The Wire that was dedicated to the theme of news publications demanding that more be done with less as budgets were cut. Craigslist was a major factor in the trend as it cut revenue severely for local publications.

[-] aStonedSanta@lemm.ee 9 points 3 months ago

It started before the rise of AI though imo. So I think it was just an easier out

[-] xkforce@lemmy.world 6 points 3 months ago

Good thing the cost to publish went down /s

[-] bassomitron@lemmy.world 43 points 3 months ago

The academic paper system has been in trouble for decades. But man, in the last 10-20 years it seems to have reached such an abysmal state that even the general public is hearing about it more and more with news like this, along with the university scandals last year.

[-] arandomthought@sh.itjust.works 17 points 3 months ago* (last edited 3 months ago)

I hate how much time and energy is wasted on this bullshit...
You'd think the smartest people around would come up with a better system than this. I mean they did, but some of the highest decision-makers have big incentives to keep things as they are. So mark that one more on the "capitalism ruins everything it touches" scoreboard.
¯\\\_(ツ)\_/¯

[-] ericjmorey@lemmy.world 8 points 3 months ago

Incentives matter in any system. The incentives are perverse right now.

[-] DudeImMacGyver@sh.itjust.works 31 points 3 months ago
[-] vorpuni@jlai.lu 19 points 3 months ago

I don't find these journals' processes commendable.

[-] GlitterInfection@lemmy.world 17 points 3 months ago

It's not the reviewer's fault! When they asked ChatGPT to peer review the paper it found nothing wrong.

[-] phdepressed@sh.itjust.works 4 points 3 months ago

What gets through is very people-specific; some journals are known to have lax review but high publication costs. These "predatory" journals and other nepotism issues have been a problem for a while. The scientific community wants to tackle them, but it's been hard to make any real progress. Covid politics and now AI really haven't helped.

this post was submitted on 26 Apr 2024
347 points (95.8% liked)
