this post was submitted on 12 May 2024
76 points (94.2% liked)

Futurology

[–] Lugh@futurology.today 20 points 1 year ago (15 children)

There's strong push-back against AI regulation in some quarters. Predictably, the issue seems to have split along polarized political lines, with right-leaning people opposing regulation. They see themselves as 'Accelerationists' and dismiss those with concerns about AI as 'Doomers'.

Meanwhile, the unaddressed problems mount. AI can already deceive us, even when we design it not to, and we don't know why.

[–] snooggums@midwest.social 37 points 1 year ago* (last edited 1 year ago) (11 children)

AI can already deceive us, even when we design it not to, and we don't know why.

The most likely explanation is that we keep describing these defects as if AI had intelligence and intent. AI doesn't deceive; it returns inaccurate responses. That's because it's trained to return answers the way people do, and deception was included in the training data.

[–] Bipta@kbin.social -4 points 1 year ago (1 children)

Claude 3 understood it was being tested... It's very difficult to fathom that that's a defect...

[–] jacksilver@lemmy.world 8 points 1 year ago (1 children)

Do you have a source on that one? My current understanding of all the model designs would lead me to believe that kind of "awareness" would be impossible.

[–] Kecessa@sh.itjust.works 6 points 1 year ago* (last edited 1 year ago) (1 children)

https://arstechnica.com/information-technology/2024/03/claude-3-seems-to-detect-when-it-is-being-tested-sparking-ai-buzz-online/

Still not proof of intelligence to me but people want to believe/scare themselves into believing that LLMs are AI.

[–] jacksilver@lemmy.world 3 points 1 year ago (1 children)

Thanks for following up with a source!

However, I tend to align more with the skeptics in the article, as it still appears to be responding in a realistic manner and doesn't demonstrate an ability to grow beyond the static structure of these models.

[–] Kecessa@sh.itjust.works 2 points 1 year ago* (last edited 1 year ago) (1 children)

I wasn't the user you originally replied to, but I didn't expect them to provide one, and I totally agree with you. Just another person who started believing that LLMs are AI...

[–] jacksilver@lemmy.world 1 points 1 year ago

Ah, my bad I didn't notice, but do still appreciate the article/source!
