HedyL

joined 2 years ago
[–] HedyL@awful.systems 1 points 2 hours ago

I think most cons, scams and cults are capable of damaging vulnerable people's mental health even beyond the most obvious harms. The same is probably happening here, the only difference being that this con is capable of auto-generating its own propaganda/PR.

I think this was somewhat inevitable. Had these LLMs been fine-tuned to act like the mediocre autocomplete tools they are (rather than like creepy humanoids), nobody would have paid much attention to them, and investors would quickly have started to focus on the high cost of running them.

This somewhat reminds me of how cryptobros used to claim they were fighting the "legacy financial system", yet they were creating a worse version (almost a parody) of it. This is probably inevitable if you are running an unregulated financial system and are trying to extract as much money from it as possible.

Likewise, if you have a tool capable of messing with people's minds (to some extent) and want to make a lot of money from it, you are going to end up with something that resembles a cult, an MLM or some similarly toxic group.

[–] HedyL@awful.systems 6 points 4 hours ago

I think this has happened before. There are accounts of people who completely lost touch with reality after getting involved with certain scammers, cult leaders, self-help gurus, "life coaches", fortune tellers and the like. However, those perpetrators were real people who could only handle a limited number of victims at any given time. They also had their own specific methods and strategies, which wouldn't work on everybody, not even on all of the people who might have been the most susceptible. ChatGPT, on the other hand, can do this at scale. It was also probably trained on the websites and public utterances of every scammer, self-help author, (wannabe) cult leader, life coach, cryptobro, MLM peddler etc. available, which allows it to generate whatever response works best to keep people "hooked". In my view, this alone is a cause for concern.

[–] HedyL@awful.systems 10 points 14 hours ago

I think we don't know how many people might be at risk of slipping into such mental health crises under the right circumstances. As a society, we are probably good at protecting most of our fellow human beings from this danger (even if we do so unconsciously). We may not yet know what happens when people regularly experience interactions that follow a different pattern (which might be the case with chatbots).

[–] HedyL@awful.systems 9 points 3 days ago (1 children)

Just guessing, but the reported "90% accuracy" is probably related to questions that could easily be answered from an FAQ list. The rest is probably, at least in part, about issues where the company itself f*cked up in some way... There is nothing wrong with answering from an FAQ in theory, but if everyone else gets nicely worded BS answers (for which the company can't be held accountable), that is a nightmare from every customer's point of view.

[–] HedyL@awful.systems 10 points 3 days ago* (last edited 3 days ago)

At the very least, actual humans have an incentive not to BS you too much, because otherwise they might be held accountable. This might also be the reason why call center support workers sound less than helpful sometimes - they are unable to help you (for various technical or corporate reasons) and feel uneasy about this. A bot is probably going to tell you whatever you want to hear while sounding super polite all the time. If all of it turns out to be wrong... well, then this is your problem to deal with.

[–] HedyL@awful.systems 8 points 5 days ago

Almost sounds as if, in order to steal intellectual property, they had to go down the "traditional" route of talking to someone, making promises etc. If it turns out that a chatbot isn't the best tool for plagiarizing something, what is it even good for?

[–] HedyL@awful.systems 3 points 6 days ago (1 children)

And there might be new "vulture funds" that deliberately buy failing software companies simply because those companies hold copyrights that might be exploitable. If there are convincing legal reasons why this likely won't fly, fine. Otherwise, I wouldn't rely on the argument that "this is a theoretical possibility, but who would ever do such a thing?"

[–] HedyL@awful.systems 2 points 6 days ago (4 children)

And, after the end of the AI boom, do we really know what wealthy investors are going to do with the money they cannot throw at startups anymore? Can we be sure they won't be using it to fund lawsuits over alleged copyright infringements instead?

[–] HedyL@awful.systems 9 points 1 week ago

At the very least, many of them were probably unable to differentiate between "coding problems that have been solved a million times and are therefore in the training data" and "coding problems that are specific to a particular situation". I'm not a software developer myself, but that's my best guess.

[–] HedyL@awful.systems 8 points 1 week ago (2 children)

Even the idea of having to use credits to (maybe?) fix some of these errors seems insulting to me. If something like this had been created by a human, the customer would be eligible for a refund.

Yet, under Aron Peterson's LinkedIn posts about these video clips, you can find the usual comments about him being "a Luddite", "in denial" etc.

[–] HedyL@awful.systems 14 points 1 week ago (19 children)

It is funny how, when generating the code, it suddenly appears to have "understood" what the instruction "The dog can not be left unattended" means, while that was clearly not the case for the natural language output.

[–] HedyL@awful.systems 9 points 1 week ago (2 children)

FWIW, due to recent developments, I've found myself increasingly turning to non-search-engine sources for reliable web links, such as Wikipedia source lists, blog posts, podcast notes or even Reddit. This almost feels like a return to the early days of the internet, just in reverse and - sadly - with little hope of improvement in the future.
