submitted 10 months ago by throws_lemy@lemmy.nz to c/technology@lemmy.world
[-] praise_idleness@sh.itjust.works 82 points 10 months ago* (last edited 10 months ago)
  • works 24/7
  • no emotional damage
  • easy to train
  • cheap as hell
  • concurrent, fast service possible

This was pretty much the very first thing to be replaced by AI. I'm pretty sure it'd be a way nicer experience for the customers.

[-] applebusch@lemmy.world 68 points 10 months ago

Doubt. These large language models can't produce anything outside their dataset. Everything they do is derivative, pretty much by definition. Maybe they can mix and match things they were trained on but at the end of the day they are stupid text predictors, like an advanced version of the autocomplete on your phone. If the information they need to solve your problem isn't in their dataset they can't help, just like all those cheap Indian call centers operating off a script. It's just a bigger script. They'll still need people to help with outlier problems. All this does is add another layer of annoying unhelpful bullshit between a person with a problem and the person who can actually help them. Which just makes people more pissed and abusive. At best it's an upgrade for their shit automated call systems.
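The "advanced autocomplete" framing can be made concrete with a toy bigram predictor. This is an illustrative sketch only (real LLMs are vastly more sophisticated), but it shows the core limitation being argued: the model can only ever emit words that appeared in its training data, and it has nothing to say about inputs it has never seen.

```python
from collections import defaultdict
import random

def train(text):
    """Build a toy bigram model: for each word, record what followed it."""
    model = defaultdict(list)
    words = text.split()
    for cur, nxt in zip(words, words[1:]):
        model[cur].append(nxt)
    return model

def complete(model, word, length=5, seed=0):
    """Greedy-ish 'autocomplete': repeatedly pick a word seen after the last one."""
    rng = random.Random(seed)
    out = [word]
    for _ in range(length):
        options = model.get(out[-1])
        if not options:  # word never seen in training: the model is stuck
            break
        out.append(rng.choice(options))
    return " ".join(out)

model = train("the cat sat on the mat the cat ran")
print(complete(model, "the"))   # some remix of the training words
print(complete(model, "dog"))   # unseen word: prints just "dog", no continuation
```

Note that `complete(model, "dog")` goes nowhere because "dog" is outside the training data, which is the scripted-call-center analogy in miniature: a bigger script is still a script.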

[-] RogueBanana@lemmy.zip 26 points 10 months ago

Most call centers have multiple level teams where the lower ones are just reading off a script and make up the majority. You don't have to replace every single one to implement AI. It's gonna be the same for a lot of other jobs as well, and many will lose jobs.

[-] praise_idleness@sh.itjust.works 9 points 10 months ago

I know how AI works inside. AI isn't going to completely replace such roles, true, but it will be the end of those cheap Indian call centers.

[-] hitmyspot@aussie.zone 7 points 10 months ago

Who also don’t have the information or data that I need.

[-] anarchy79@lemmy.world 1 points 10 months ago

It isn't going to completely replace whole business departments, only 90% of them, right now.

In five years it's going to be 100%.

[-] thetreesaysbark@sh.itjust.works 8 points 10 months ago

I'd say at best it's an upgrade to scripted customer service. A lot of the scripted ones are slower than AI and often staffed by people with stronger accents, making it harder for the customer to understand the script entry being read back to them, which leads to more frustration.

If your problem falls outside the realm of the script, I just hope it recognises the script isn't solving the issue and redirects you to a human. Oftentimes I've noticed ChatGPT not learning from the current conversation (if you ask it about this, it will deny doing it). In this scenario it just regurgitates the same 3 scripts back to me when I tell it it's wrong. In my case this isn't so bad, as I can just turn to a search engine, but in a customer service scenario this would be extremely frustrating.

[-] SirGolan 7 points 10 months ago

Check out this recent paper that finds some evidence that LLMs aren't just stochastic parrots. They actually develop internal models of things.

[-] guacupado@lemmy.world 2 points 10 months ago* (last edited 10 months ago)

Your description of AI limitations sounds a lot like the human limitations of the reps we deal with every day. Sure, if some outlier situation comes up then that has to go to a human, but let's be honest - those calls are usually going to a manager anyway, so I'm not seeing your argument. An escalation is an escalation. The article itself is even saying it's not a literal 100% replacement of humans.

[-] anarchy79@lemmy.world 0 points 10 months ago* (last edited 10 months ago)

You can doubt it all you want; the fact of the matter is that AI is provably more than capable of taking over the roles of humans in many work areas, and it already does.

[-] GALM@lemmy.world 60 points 10 months ago

And the way customer support staff can be/is abused in the US is so dehumanizing. Nobody should have to go through that wrestling ring.

[-] fluxion@lemmy.world 53 points 10 months ago

A lot of that abuse is because customer service has been gutted to the point that it is infuriating to a vast number of customers calling about what should be basic matters. Not that it's justified; it's just that it wouldn't necessarily be such a draining job if not for the greed that puts them in that situation.

[-] blanketswithsmallpox@lemmy.world 3 points 10 months ago* (last edited 10 months ago)

There was a recent episode of Ai no Idenshi, an anime covering such topics. The customer service episode was nuts and hits on these points so well.

It's a great show for anyone interested in fleshing out some of the more mundane topics of AI. I've read and watched a lot of sci-fi and it hit some novel stuff for me.

https://reddit.com/r/anime/s/0uSwOo9jBd

[-] DessertStorms@kbin.social 19 points 10 months ago

I’m pretty sure it’d be way nicer experience for the customers.

Lmfao, in what universe? As if trained humans reading off a script they're not allowed to deviate from isn't frustrating enough, imagine doing that with a bot that doesn't even understand what frustration is...

[-] praise_idleness@sh.itjust.works 0 points 10 months ago

De facto instant replies; if trained right, way more knowledgeable than the human counterparts; no more support center loop... the current experience is such a low bar.

[-] cley_faye@lemmy.world 8 points 10 months ago

de facto instant replies

Not with a good enough model, no. Not without some ridiculous expense, which is not what this is about.

if trained right, way more knowledgeable than the human counterparts

Support is not only a question of knowledge. Sure, some support services are basically useless, but that's not necessarily the humans' fault; lack of training and lack of means of action are also a part of it. And that's not going away by replacing the "human" part of the equation.

At best, the first few iterations will be faster at brushing you off, and further down the line, once you hit something outside the expected range of issues, it'll either spout nonsense or just make you circle around until you're handed to someone actually able to do something.

Both "properly training people" and "properly training an AI model" cost money, and this is all about cutting costs, not improving user experience. You can bet we'll see LLMs better trained to politely turn people away long before they're able to handle random unexpected stuff.

[-] testfactor@lemmy.world 1 points 10 months ago

While properly training a model does take a lot of money, it's probably a lot less money than paying 1.6 million people for any number of years.

[-] philodendron@lemdro.id 4 points 10 months ago

Yeah but are you ready for “my grandma used to tell me $10 off coupon codes as I fell asleep…”

[-] gravitas_deficiency@sh.itjust.works 2 points 10 months ago

Cheap as hell until you flood it with garbage, because there is a dollar amount assigned for every single interaction.

Also, I’m not confident that ChatGPT would be meaningfully better at handling the edge cases that always make people furious with phone menus these days.

this post was submitted on 07 Oct 2023
342 points (96.5% liked)
