[-] Thorny_Insight@lemm.ee 25 points 3 months ago

The main issue I personally have with the idea of an AI friend that you can talk with, no matter how convincing, is that I'll always know that it doesn't actually care. I've noticed this same thing with ChatGPT; it might ask me questions about a subject I'm passionate about, one I could easily spend hours rambling about in a normal conversation. With AI it just seems pointless. I'm not teaching it anything it doesn't already know, and I'm painfully aware that it just pretends to be interested.

I really don't mind that it's not actually another human; I just want it to actually behave like one and not just pretend. I really don't know how to solve this. I guess we need an AI that actually knows less than our current models do.

[-] chaosCruiser@futurology.today 6 points 3 months ago

It might also help if the LLM remembered what you discussed earlier.

However, you’ve also touched upon an interesting topic. When you’re talking to another human, you can’t really be sure how much they really care. If you know the person well, then you can usually tell, but if it’s someone you just met, it’s much harder. Who knows, you could be talking to a psychopath who is just looking for creative ways to exploit you. Maybe that person is completely void of actual empathy, but manages to put on a very convincing facade regardless. You won’t know for sure until you feel a dagger between your ribs, so to speak.

With modern LLMs, you can see through the smoke and mirrors pretty quickly, but with some humans it can take a few months until they involuntarily expose themselves. When LLMs get more advanced they should be about as convincing as a human suffering from psychopathy or some similar condition.

What a human or an LLM actually knows about your topic of interest is not that important. What counts is the ability to display emotion. It doesn't matter whether that emotion is genuine or not. Your perception of it does.

[-] Thorny_Insight@lemm.ee 2 points 3 months ago

I mean it doesn't really matter whether they actually care or not, as long as they're convincing enough that you think they do. However, with an LLM you know the lights aren't on, so no matter how convincing it is, it still wouldn't make a difference. Interestingly, though, if you don't know you're talking with an LLM, it's a different case. Maybe to actually have a fulfilling relationship with an AI you need to be somehow tricked into thinking it's conscious even if it's not.

[-] chaosCruiser@futurology.today 1 points 3 months ago* (last edited 3 months ago)

By default, you assume that the people around you are at least capable of caring what you have to say. I wonder what would happen if you took that assumption away.

Let’s say the latest flu virus has a side effect that disables that capacity in a significant number of the affected individuals. Suddenly millions of people are literally unable to actually care about other people. That would make casual conversations a bit of a gamble, because you couldn’t really be sure whether you were talking to a normal person or not. Maybe people wouldn’t want to take that gamble at all. What if that forced social norms to change, so that human interactions no longer came with this assumption pre-installed?

As a side note, that kind of virus would probably also put humanity back in the Stone Age. Being motivated to work together, care about others, and act selflessly is a fundamental part of human civilization.

[-] Thorny_Insight@lemm.ee 2 points 3 months ago

Even in that case they would still be conscious individuals. I'm not sure if "caring" is even the correct term here. For example, I talk to my pet gerbils despite the fact that I know they don't care. They don't even understand a single thing I'm saying. It doesn't matter. They're still individuals with personalities that have some sort of a subjective experience of the world. They look back at me and know I'm there, and I know they can hear my voice too.

This is not the case with an LLM. There's no one there. Not only does it not care, it doesn't even know you exist. Talking to it is not talking to an individual. It's like asking a group of scientists how they're doing today. Any reply you get back is completely meaningless, so why bother even asking?

I think it's consciousness that's the relevant factor here. You need to at least get the sense of talking to something that can experience things, even if it in fact isn't conscious. Otherwise it's just a more advanced version of a stereo that's programmed to say "hello" when you turn it on.

[-] Murdoc@sh.itjust.works 2 points 3 months ago* (last edited 3 months ago)

Saw that in a sci-fi RPG called Living Steel: an alien bioweapon called VISR (Viral Induced Sociopathic Response) unleashed on a human space colony. It was interesting. Edit: spelling

[-] chaosCruiser@futurology.today 2 points 3 months ago

Interesting. I assume that it resulted in lots of mayhem and destruction.

Anyway, it goes to show that even my most original ideas have already been done, usually several decades before I was born.

this post was submitted on 13 May 2024
200 points (94.2% liked)