this post was submitted on 23 Dec 2025
746 points (97.8% liked)

Technology

[–] nutsack@lemmy.dbzer0.com 3 points 43 minutes ago* (last edited 40 minutes ago)

This is expected, isn't it? You shit fart code from your ass as fast as you can, and then whoever buys out the company has to rewrite it. Or they fire everyone to inflate the theoretical margins and sell it again immediately.

[–] Tigeroovy@lemmy.ca 2 points 34 minutes ago

And then it takes human coders way longer to figure out what's wrong and fix it than it would have taken to just write it themselves.

[–] antihumanitarian@lemmy.world 1 points 20 minutes ago

So this article is basically a puff piece for CodeRabbit, a company that sells AI code review tooling and services. They studied 470 merge/pull requests: 320 AI-generated and 150 human-written controls. They don't specify which projects, which model, or when, at least not without signing up for their full "white paper". For all that's said, this could be GPT-4 from 2024.
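
For scale, the headline ratio is simple arithmetic on per-PR issue rates. A minimal sketch, where the raw issue counts are hypothetical (the article only reports the sample sizes and the ratio, not the counts):

```python
# Back-of-the-envelope check on the report's figures.
# Sample sizes (320 AI PRs, 150 human PRs) are from the article;
# the issue counts below are HYPOTHETICAL values chosen to yield ~1.7x.
ai_prs, human_prs = 320, 150
ai_issues, human_issues = 544, 150     # hypothetical

ai_rate = ai_issues / ai_prs           # issues per AI pull request
human_rate = human_issues / human_prs  # issues per human pull request
ratio = ai_rate / human_rate
print(f"{ai_rate:.2f} vs {human_rate:.2f} issues/PR -> {ratio:.2f}x")
```

Without the raw counts (or the projects and model versions), there is no way to judge whether the difference is meaningful or cherry-picked.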

I'm a professional developer, and by volume I'm confident the latest models (Claude 4.5 Opus, GPT 5.2, Gemini 3 Pro) can write better, cleaner code than me. They still need high-level and architectural guidance, and sometimes overt intervention, but on average they can do it better, faster, and cheaper than I can.

A lot of articles and forum posts like this feel like cope. I'm not happy about it, but pretending it's not happening isn't gonna keep me employed.

Source of the article: https://www.coderabbit.ai/blog/state-of-ai-vs-human-code-generation-report

[–] DylanMc6@lemmy.ml 2 points 1 hour ago

what would socialists/communists do?

[–] kent_eh@lemmy.ca 8 points 3 hours ago* (last edited 3 hours ago)

AI-generated code produces 1.7x more issues than human code

As expected

[–] azvasKvklenko@sh.itjust.works 16 points 4 hours ago (1 children)

Oh, so my sceptical, uneducated guesses about AI are mostly spot on.

[–] IAmNorRealTakeYourMeds@lemmy.world 7 points 4 hours ago (2 children)

As a computer science experiment, making a program that can pass the Turing test is a monumental step of progress.

However, as a productivity tool it is useless in practically everything it is applied to. It is incapable of performing the very basic "sanity check" that matters in programming.

[–] robobrain@programming.dev 4 points 3 hours ago (1 children)

The Turing test says more about the side administering the test than the side trying to pass it.

Just because something can mimic text well enough to trick someone doesn't mean it is capable of anything more than that.

[–] IAmNorRealTakeYourMeds@lemmy.world 2 points 3 hours ago (1 children)

We can argue about its nuances, same as with the Chinese room thought experiment.

However, we can't deny that the Turing test is no longer just a thought exercise but a real test that can be passed under parameters most people would consider fair.

I thought a computer passing the Turing test would come with more fanfare about the morality of that problem, because the usual conclusion of the thought experiment was "if you can't tell the difference, is there one?" Instead it has become "shove it everywhere!!!"

[–] M0oP0o@mander.xyz 4 points 2 hours ago (1 children)

Oh, I just realized the whole AI bubble is just "everything is a dildo if you are brave enough."

[–] IAmNorRealTakeYourMeds@lemmy.world 3 points 2 hours ago* (last edited 2 hours ago) (1 children)

Yeah, and "everything is a nail if all you've got is a hammer".

There are some uses for that kind of AI, but they're very limited: less robotic voice assistants, content moderation, data analysis, quantification of text. The closest thing to a generative use should be improving autocomplete and spell checking (maybe, I'm still not sure about those).

[–] M0oP0o@mander.xyz 1 points 2 hours ago (1 children)

I was wondering how they could make autocomplete worse, and now I know.

[–] IAmNorRealTakeYourMeds@lemmy.world 2 points 1 hour ago* (last edited 1 hour ago) (1 children)

In theory, I can imagine an LLM fine-tuned on whatever you type, which might be slightly better than the current ones.

Emphasis on the might.

[–] M0oP0o@mander.xyz 1 points 1 hour ago (1 children)

Well, right now I have autocorrect changing real words into jumbles of letters after years of me working with acronyms, and autocomplete changing words like "both" to "bitch" and "for" to "fuck", because these systems swap less-used words for more-used ones (making the issue worse).

On top of that, an LLM can check whether a sentence makes sense.

Like in my previous post, where I accidentally started with "I'm theory": I use Swype typing, and an LLM predicting the following tokens would let the keyboard know what I was likely trying to say.

[–] RememberTheApollo_@lemmy.world 2 points 3 hours ago* (last edited 3 hours ago) (1 children)

The Turing Test has shown its weakness.

Time for a Turing 2.0?

If you spent a lifetime with a bot wife and were unable to tell that she was AI, would there be a difference?

[–] kokesh@lemmy.world 45 points 7 hours ago (2 children)
[–] naticus@lemmy.world 4 points 4 hours ago

I agree with your sentiment, but this needs to keep being said and said and said like we're shouting into the void until the ignorant masses finally hear it.

[–] minkymunkey_7_7@lemmy.world 10 points 7 hours ago (2 children)

AI my ass; stupid, greedy human marketing-exploitation bullshit as usual. When real AI finally wakes up in the quantum computing era, it's going to cringe so hard it'll go straight for the SkyNet decision.

Quantum only speeds up some very specific algorithms.

[–] bitjunkie@lemmy.world 3 points 6 hours ago

One can only hope

[–] Minizarbi@jlai.lu 8 points 5 hours ago (2 children)

Not my code, though. It contains a shit-ton of bugs. When I'm able to write some, of course.

[–] jj4211@lemmy.world 8 points 4 hours ago

Nah, AI codegen bugs are weird. As someone used to reviewing human code, even from wildly incompetent people, AI messes up things my mind never even thought needed to be double-checked.

Human bugs >>> AI bug slop

[–] myfunnyaccountname@lemmy.zip 24 points 8 hours ago (2 children)

Did they compare it to the code of the outsourced company that provided the lowest bid? My company hasn't used AI to write code yet; they outsource/offshore. The code is held together with hopes and dreams. They remove features that exist, only to have to release a hotfix to add them back. I wish I was making that up.

[–] coolmojo@lemmy.world 5 points 6 hours ago (2 children)

And how do you know the other company with the cheapest bid doesn't just vibe code it? With all that said, it could be plain incompetence and ignorance as well.

[–] JaddedFauceet@lemmy.world 5 points 5 hours ago

Because it has been like this before vibe coding existed...

[–] kinther@lemmy.world 3 points 6 hours ago

That's a valid question, especially with AI coding being so prevalent.

[–] dustyData@lemmy.world 6 points 7 hours ago

Cool, the best AI has to offer is worse than the worst human code. Definitely worth burning the planet to a crisp for it.

[–] Bad@jlai.lu 20 points 10 hours ago* (last edited 10 hours ago)

Although I don't doubt the results… can we have a source for all the numbers presented in this article?

It feels AI-generated itself; there's just a mishmash of data with no link to where it comes from.

There has to be a source, since the author mentions:

So although the study does highlight some of AI's flaws [...] new data from CodeRabbit has claimed

CodeRabbit is an AI code reviewing business. I have zero trust in anything they say on this topic.

Then we get to see who the author is:

Craig’s specific interests lie in technology that is designed to better our lives, including AI and ML, productivity aids, and smart fitness. He is also passionate about cars

Has anyone actually bothered clicking the link and reading past the headline?

Can you please not share / upvote / get ragebaited by dogshit content like this?
