
The research from Purdue University, first spotted by news outlet Futurism, was presented earlier this month at the Computer-Human Interaction Conference in Hawaii. It examined 517 programming questions from Stack Overflow that were then fed to ChatGPT.

“Our analysis shows that 52% of ChatGPT answers contain incorrect information and 77% are verbose,” the new study explained. “Nonetheless, our user study participants still preferred ChatGPT answers 35% of the time due to their comprehensiveness and well-articulated language style.”

Disturbingly, programmers in the study didn’t always catch the mistakes being produced by the AI chatbot.

“However, they also overlooked the misinformation in the ChatGPT answers 39% of the time,” according to the study. “This implies the need to counter misinformation in ChatGPT answers to programming questions and raise awareness of the risks associated with seemingly correct answers.”

[-] disconnectikacio@lemmy.world 5 points 2 months ago

Yes, there are mistakes, but if you steer it in the right direction, it can give you correct answers.

[-] 1984@lemmy.today 5 points 2 months ago

Actually, the 4o version feels worse than 4. I'm getting tons of wrong answers now.

[-] AIhasUse@lemmy.world 4 points 2 months ago

Yeah, it's not supposed to be better than 4 for logic/reasoning/coding, etc. Its strong points are its natural voice interaction, its ability to react to streaming video, and its fast, efficient inference. The good voice and video features aren't available to many people yet. It is efficient enough that it's going to be available to free users. If you want good reasoning, stick with 4 for now, or better yet, switch to something like Claude Opus. If you really want strong reasoning abilities, at this point you need a setup using agents, but that requires some research and understanding.

[-] originalfrozenbanana@lemm.ee 4 points 2 months ago

C-suites:

tHis iS inCReDibLe! wE cAn SavE sO MUcH oN sTafFiNg cOStS!

[-] S13Ni@lemmy.studio 3 points 2 months ago

It does, but when you feed it error logs it does a pretty good job of finding issues. I tried it out first by making a game of Snake that plays itself. It took some prompting to get all the features I wanted, but in the end it worked great in no time. After that I decided to try to make a distortion VST3 plugin similar to the ZVEX Fuzz Factory guitar pedal. It took lots of prompting to get something that actually builds without errors, but I was quickly able to fix those once I copied the error log into the prompt. After that I kept prompting it further, e.g. "great, now it works, but the Gate knob doesn't seem to do anything and the knobs are not centered."

In the end I got a perfectly functional distortion plugin. I haven't compared it to the actual pedal version yet. It's not that AI will just replace us all, but it can be truly powerful once you go beyond the initial answer.
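For readers curious what a "Snake that plays itself" amounts to, here is a minimal Python sketch of the idea. To be clear, this is my own illustration, not the commenter's actual code: the "self-playing" part is just a greedy policy that steps toward the food while avoiding the snake's own body, and the grid size and food positions are assumptions.

```python
# A toy self-playing Snake: the snake greedily chases each food position,
# growing by one segment every time it eats.
from collections import deque

GRID = 10  # board is GRID x GRID


def greedy_step(head, food, body):
    """Return the in-bounds neighbor of `head` nearest to `food` that
    does not collide with the snake's body, or None if boxed in."""
    hx, hy = head
    fx, fy = food
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    # Try moves in order of how much they shrink the Manhattan distance.
    moves.sort(key=lambda d: abs(hx + d[0] - fx) + abs(hy + d[1] - fy))
    for dx, dy in moves:
        nxt = (hx + dx, hy + dy)
        if 0 <= nxt[0] < GRID and 0 <= nxt[1] < GRID and nxt not in body:
            return nxt
    return None


def play(food_spots):
    """Chase each food position in turn; return the snake's final length."""
    snake = deque([(0, 0)])  # tail at snake[0], head at snake[-1]
    for food in food_spots:
        while snake[-1] != food:
            nxt = greedy_step(snake[-1], food, set(snake))
            if nxt is None:       # no safe move left: game over
                return len(snake)
            snake.append(nxt)     # advance the head
            if nxt != food:
                snake.popleft()   # tail follows unless we just ate
    return len(snake)
```

A greedy policy like this will eventually trap itself on longer games, which is exactly the kind of bug you would iterate on through further prompting, as the comment describes.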

[-] cultsuperstar@lemmy.world 3 points 2 months ago

Not a programmer by any means (I haven't done any programming since college), but I've asked it for help writing Jira queries or Excel messes, and it's been pretty solid with that stuff.
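For context, Jira queries are written in JQL (Jira Query Language), and this is the sort of thing a chatbot tends to handle well. A typical example might look like the following (the project name is a made-up placeholder, not something from the comment):

```
project = WEB AND status = "In Progress" AND assignee = currentUser()
ORDER BY priority DESC, updated DESC
```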

this post was submitted on 25 May 2024
775 points (97.1% liked)

Technology
