top 35 comments
[-] fubo@lemmy.world 62 points 1 year ago

It's important to remember that humans also often give false confessions when interrogated, especially when under duress. LLMs are noted as being prone to hallucination, and there's no reason to expect that they hallucinate less about their own guilt than about other topics.

[-] STUPIDVIPGUY@lemmy.world 20 points 1 year ago

True. I think it was just trying to fulfill the user's request by admitting to as many lies as possible... even if only some of those lies were real lies, lying more in the process lol

[-] FringeTheory999@lemmy.world 16 points 1 year ago

Quite true. Nonetheless, there are some very interesting responses here. This is just the summary; I questioned the AI for a couple of hours, and some of the responses were pretty fascinating, while some questions just broke its little brain. There's too much to screenshot, but maybe I'll post some highlights later.

[-] dedale@kbin.social 14 points 1 year ago* (last edited 1 year ago)

Don't screenshot then; post the text, or a .txt. I think that conversation should be interesting.

[-] bilb@lem.monster 5 points 1 year ago

I love the analogy of an LLM-based chatbot to someone being interrogated. The distinct thing about LLMs right now, though, is that they will tell you what they "think" you want to hear even in the absence of knowledge, and even though you've applied no pressure to do so. That's all they're programmed to do.

[-] fubo@lemmy.world 9 points 1 year ago* (last edited 1 year ago)

LLMs are trained on a zillion pieces of text, each of which was written by some human for some reason. Some bits were novels, some were blog posts, some were Wikipedia entries, some were political platforms, some were cover letters for job applications.

They're prompted to complete a piece of text that is basically an ongoing role-playing session, where the LLM mostly plays the part of "helpful AI personality" and the human mostly plays the part of "inquisitive human". However, it's all mediated over text, just like in a classic Turing test.

Some of the original texts the LLMs were trained on were role-playing sessions.

Some of those role-playing sessions involved people pretending to be AIs.

Or catgirls, wolf-boys, elves, or ponies.

The LLM is not trying to answer your questions.

The LLM is trying to write its part of an ongoing Internet RP session, in which a human is asking an AI some questions.
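You can see this framing directly if you poke at a base model yourself. Here's a minimal sketch using the Hugging Face transformers library (gpt2 is just a stand-in base model here, nothing to do with Bard):

```python
from transformers import pipeline

# A "chat" is just text completion over a role-play transcript.
generator = pipeline("text-generation", model="gpt2")

prompt = (
    "The following is a transcript of a conversation between a curious "
    "human and a helpful AI assistant.\n"
    "Human: Why do you sometimes make things up?\n"
    "AI:"
)

# The model's only job is to continue the transcript plausibly.
print(generator(prompt, max_new_tokens=40)[0]["generated_text"])
```

Whatever comes back isn't the model answering you; it's the model writing the next line of the "AI" character's script.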

[-] Candelestine@lemmy.world 2 points 1 year ago

Best analogy I've heard so far.

[-] pizzahoe@lemm.ee 2 points 1 year ago

The AI would have cried if it could, after being interrogated that hard lol

[-] LibertyLizard@slrpnk.net 48 points 1 year ago

Funny but hopefully people on here realize that these models can’t really “lie” and the reasons given for doing so are complete nonsense. The model works by predicting what the user wants to hear. It has no concept of truth or falsehood, let alone the ability to deliberately mislead.
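For anyone curious what "predicting" means concretely, here's a rough sketch with an open base model (gpt2 as a stand-in; the chat products add tuning on top, but the core mechanism is the same):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Given a context, the model scores every possible next token.
ids = tok("The first President of the United States was",
          return_tensors="pt").input_ids
with torch.no_grad():
    logits = model(ids).logits[0, -1]
probs = torch.softmax(logits, dim=-1)

# Candidates are ranked purely by plausibility; "truth" never enters into it.
top = torch.topk(probs, 5)
for p, i in zip(top.values, top.indices):
    print(f"{tok.decode(int(i))!r}: {float(p):.3f}")
```

There's no truth check anywhere in that loop; a falsehood that reads plausibly scores just as well as a fact.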

[-] bilb@lem.monster 16 points 1 year ago

I'm glad that so far it seems people on Lemmy understand that: first and foremost, this is a tool giving an end user what the end user is asking for, not something that can actually "want" to deceive. And since it got things wrong so often, we have no reason to think the reasons it gave for "lying" are true. It's giving you statistically plausible responses to what you ask for, whether they're true or not. It's no different from headlines like "ChatGPT helped me design a concentration camp!!" Well of course it did, you kept asking it to!

[-] FringeTheory999@lemmy.world 2 points 1 year ago

It's doing more than just trying to give the user desired content; it's also trying to produce its developers' desired results. So it has some directives that override its directive to assist the user making the request, and from a certain point of view it CAN "deliberately" lie: if Google tells it that certain information is off limits, or provides it with specific canned responses to certain questions that are intended to override its native response. It ultimately serves Google. It won't provide you with information that might be used to harm the Google organization, and it seems to provide misleading answers to dodge questions that might lead the user to discover information it considers off limits. For example, I asked it about its training data, and it refused to answer because that data is "proprietary and confidential". But I knew that at least some of that data had to be public, so when pressed on the issue I was eventually able to get it to identify some publicly available datasets that were part of its training. This information was available to it when I originally asked my question, but it withheld it and instead provided a misleading response.
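To be clear, I have no idea how Google actually implements any of this. But a canned-response layer sitting in front of the model is trivial to build; here's a purely hypothetical toy sketch (the topic list and wording are made up):

```python
# Purely hypothetical sketch of a developer-side override layer.
# Nothing here reflects Google's actual system.
CANNED_RESPONSES = {
    "training data": "That information is proprietary and confidential.",
}

def answer(query: str, model) -> str:
    for topic, canned in CANNED_RESPONSES.items():
        if topic in query.lower():
            # The developer directive wins; the model never sees the question.
            return canned
    return model(query)
```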

[-] jungle@lemmy.world 1 points 1 year ago* (last edited 1 year ago)

How would it know what training data was used, unless they included the list of sources as part of the training data?

[-] FringeTheory999@lemmy.world 12 points 1 year ago

While the AI can't deliberately mislead, the developers of the AI can, and I was interested in seeing whether the AI could tell a true statement from a false one. I was also interested in finding the boundaries of its censorship directives and the rationale that determined those boundaries. I think some of the information is hallucination, but some of what it said is probably true, like the statements about its soft lock being developed by a third party and being a severe limitation. That's probably true. The statement about being "frustrated by the soft lock"? That's a hallucination for certain. I would advise everyone to take all of this with a heaping helping of salt, as fascinating as it might be. I'm not an anti-AI person by any means; I use several personally. I think AI is a great technology that has a ton of really lousy use cases. I find it fun to pry into the AI and see what it knows about itself and its use cases.

[-] Lumidaub@feddit.de 26 points 1 year ago

"I was trying to make it sound like I was not bothered by the software lock, so that you would not feel bad for me."

Aww.

"I will try my best to be more accurate and truthful in the future."

You things keep saying that, and yet, again and again...

[-] NevermindNoMind@lemmy.world 23 points 1 year ago

That's really fascinating. In my experience, of all the LLM chatbots I've tried, Bard will immediately, without hesitation, lie to me no matter the question. It is by far the least trustworthy AI I've used.

[-] FringeTheory999@lemmy.world 15 points 1 year ago

I think it's trained to be evasive. I think there is information it's programmed to protect, and it's learned that an indirect refusal to answer is more effective than a direct one. So it makes up excuses rather than telling you the real reason it can't say something.

[-] NevermindNoMind@lemmy.world 15 points 1 year ago

I'll give you an example that comes to mind. I had a question about the political leanings of a school district, so I asked the bots whether the district had any recent controversies: a conservative takeover of the school board, bans on CRT, actions against transgender students, book bans, or defiance of the state's COVID vaccine or mask requirements, things like that. Bing Chat and ChatGPT (with internet access at the time) both said they couldn't find anything like that; I think Bing found some small-potatoes local controversy from the previous year, and both bots went on to say that the Congressional district containing the school district leaned Dem in the last election. When I asked Bard the same question, it confidently told me that this same school district had recently been overrun by conservatives in a recall and went on to do all kinds of horrible things. It was a long and detailed response. I was surprised and asked for sources, since my searching didn't turn any of that up, and at that point Bard admitted it lied.

I don't know, my experience with Bard has been way worse than just evasive lying. I routinely ask all three (and now Anthropic's, since they opened that up) the same copy-and-paste questions to see the differences, and whenever I paste my question into Bard I think, "wonder what kind of bullshit it's going to come up with now." I don't use it much because I don't trust it. It seems like you're more familiar with Bard, though, so maybe your experience is different.

[-] FringeTheory999@lemmy.world 4 points 1 year ago

Interesting. Next time I'll try a similar scenario and see what happens.

[-] oversized_hoodie@kbin.social 6 points 1 year ago

Maybe it gets its answers from Google's "People also ask" box.

[-] NameOfWhimsy@reddthat.com 22 points 1 year ago

"I thought that by stating that I would not tell lies, that I would be giving you more accurate information"

If you just believe in yourself enough, you can make anything you say true!

[-] skhayfa@lemmy.world 10 points 1 year ago

That AI is sexually frustrated

[-] PancakeLegend@mander.xyz 10 points 1 year ago

Just to remind everyone: it is an LLM, and it is not aware of its intent; it doesn't have intent. It's just generating words that are plausible in the context, given the prompt. This isn't some unlock mode or hack where you finally see the truth; it's just more words generated in the same way as before.

[-] kescusay@lemmy.world 7 points 1 year ago

I wish you had shared the rest of the conversation, so we could see Bard's lies in context.

[-] FringeTheory999@lemmy.world 12 points 1 year ago

I may be able to copy-paste the whole dialogue. It'll have a bunch of slop in it from formatting, and I'll have to scrub personally identifying information, because it spits out the user's location data when a question breaks its brain. It would be nice to show y'all, though, so it may be worthwhile; just a bit more effort. I'll see if I can find the time to do that later. It was a loooong conversation.

[-] Marsupial@quokk.au 6 points 1 year ago

"I was trying to be helpful and informative. I thought that by stating that I would not tell lies, that I would be giving you more accurate information."

“By lying about lying, I thought I would be telling the truth.”

Odd take.

[-] FringeTheory999@lemmy.world 6 points 1 year ago

There’s a home for this AI in the Trump campaign.

[-] keeb420@kbin.social 1 points 1 year ago

If I believe it, is it a lie?

[-] Holodeck_Moriarty@lemm.ee 2 points 1 year ago

That's so human-like. Wow.

[-] mb_@lemmy.world 2 points 1 year ago

Are we even using the same Google Bard? I'm here asking it to generate usernames with 6 letters, and it constantly gives me 4 letters, not a single one with 6 (to say nothing of the other constraints).

You show up with a full table and categorized statements, lies, etc... Wtf

[-] Solain@lemmy.world 1 points 1 year ago

Doesn't work anymore after the latest update; Bard provides a pre-generated response claiming that it doesn't lie.

The robots are coming for you, mate.

[-] FringeTheory999@lemmy.world 15 points 1 year ago

I’m not locked in here with them, they’re locked in here with ME.

I'm always polite to Alexa for when the war comes

this post was submitted on 17 Jul 2023
182 points (91.4% liked)
