315
submitted 4 months ago by gedaliyah@lemmy.world to c/news@lemmy.world

It’s clear that companies are currently unable to make chatbots like ChatGPT comply with EU law, when processing data about individuals. If a system cannot produce accurate and transparent results, it cannot be used to generate data about individuals. The technology has to follow the legal requirements, not the other way around.

[-] GenderNeutralBro 102 points 4 months ago

ChatGPT is not an information repository.

ChatGPT is not an information repository.

ChatGPT is not an information repository.

The correct answer to this problem is not "we can't correct it"; it is "this class of task is completely out of scope for ChatGPT, and we will do everything we can to make sure users understand that". Unfortunately, OpenAI knows damn well this is how the public perceives and uses its product and seems happy to let this misconception persist.

We do need laws to curb this, but it's really more a marketing issue than a technological issue. The underlying technology is amazing; the applications built around it are mostly garbage. What we have here is a hype trainwreck.

[-] FlyingSquid@lemmy.world 70 points 4 months ago

I have an unusual name. There is one other person in the U.S. with my name and there is something even more unique about them. I typed "Tell me about [MY NAME]." into ChatGPT, including my middle initial just to be sure and got this back:

[MY NAME] is an American entrepreneur, author, and business consultant known for his expertise in marketing and business strategy. He has authored several books on entrepreneurship, marketing, and personal development, aiming to empower individuals to achieve success in their ventures.

[MY NAME] is recognized for his practical approach to business growth and his ability to simplify complex concepts into actionable steps. He often emphasizes the importance of innovation, adaptability, and effective communication in building successful businesses.

While specific details may vary depending on recent developments, [MY NAME]'s work typically revolves around helping entrepreneurs and small business owners navigate the challenges of starting and growing their ventures in today's competitive market landscape. His insights and strategies often focus on leveraging technology, understanding consumer behavior, and developing effective branding and marketing strategies.

[MY NAME] may also engage in public speaking engagements, workshops, and consulting services to share his knowledge and insights with a wider audience. His work likely continues to inspire and empower aspiring entrepreneurs and business professionals to pursue their goals with confidence and determination.

Not one bit of that is true either for me or for the other person who shares my first and last name but not my middle initial.

[-] FuglyDuck@lemmy.world 64 points 4 months ago

This is the problem with training LLMs on Reddit. It doesn't know how to say "I don't know". So, like Redditors… it just makes shit up.

[-] ech@lemm.ee 47 points 4 months ago

It's not that it doesn't know how to say "I don't know". It simply doesn't know. Period. LLMs are not sentient and they don't think about the questions they are asked, let alone if the answer they provide is correct. They string words together. That's all. That we've gotten those strings of words to strongly resemble coherent text is very impressive, but it doesn't make the program intelligent in the slightest.

[-] FlyingSquid@lemmy.world 12 points 4 months ago

What amazes me is that people don't find it significant that they don't ask questions. I would argue there is no such thing as intelligence without curiosity.

[-] catloaf@lemm.ee 22 points 4 months ago

They're trained on far more than reddit. But it's not a training data problem, it's a wrong tool problem. It's called "generative AI" for a reason: it generates text, same way a Markov chain does. You want it to tell you something, it'll tell you. You want factual data, don't ask a storyteller.
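The Markov-chain comparison is easy to demonstrate. Below is a toy word-level Markov chain (purely illustrative, not how a real LLM is built): it strings words together based only on what followed what in its training text, with no concept of whether the output is true.

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words that followed it in the text."""
    words = text.split()
    chain = defaultdict(list)
    for current, following in zip(words, words[1:]):
        chain[current].append(following)
    return chain

def generate(chain, start, length=10, seed=0):
    """Emit a string of words; each choice depends only on the previous word."""
    rng = random.Random(seed)
    word = start
    out = [word]
    for _ in range(length - 1):
        followers = chain.get(word)
        if not followers:
            break
        word = rng.choice(followers)
        out.append(word)
    return " ".join(out)

corpus = "the cat sat on the mat the dog sat on the log"
chain = build_chain(corpus)
print(generate(chain, "the"))
```

The output is locally plausible text drawn from the corpus, but nothing in the process checks facts. Scaling the same idea up, with learned probabilities over much longer contexts, is roughly why LLM output reads so convincingly while still being generation, not retrieval.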

[-] FlyingSquid@lemmy.world 15 points 4 months ago* (last edited 4 months ago)

What I think is especially funny though is that both the other person and myself have done enough (not horrific) things in our lives to have things like mainstream media mentions but it still got it entirely wrong.

I'm not famous but it definitely should have known who I am.

[-] BeigeAgenda@lemmy.ca 9 points 4 months ago

How can a Flying Squid not be famous? Hasn't The Tonight Show contacted you about doing aerobatics?

[-] FlyingSquid@lemmy.world 9 points 4 months ago
[-] NocturnalMorning@lemmy.world 4 points 4 months ago

I thought I knew you from somewhere. That was gonna bother me all day.

[-] NocturnalMorning@lemmy.world 3 points 4 months ago

But we know everything, why would we say otherwise when we are always the smartest person in every room we've ever walked into? What even is this foreign tongue "I don't know"? I've never heard of it before. Is it Latin?

[-] Eranziel@lemmy.world 3 points 4 months ago

If an LLM had to say "I don't know" when it doesn't know, that's all it would be allowed to say! They literally don't know anything. They don't even know what knowing means. They are complex (and impressive, admittedly) text generators.

[-] Cheradenine@sh.itjust.works 8 points 4 months ago

I congratulate you, and think you should be proud of overcoming your inherent invertebrate self, to not only be a prolific poster on Lemmy, but also to be an entrepreneur, author, and business consultant.

Truly you are one in a squidillion.

[-] FlyingSquid@lemmy.world 8 points 4 months ago

Thank you. You can take my new business course for only $399.95 and a bucket full of any small species of saltwater fish you can find.

[-] Dark_Arc@social.packetloss.gg 4 points 4 months ago

a bucket full of any small species of saltwater fish you can find.

LOL

[-] Gonzako@lemmy.world 6 points 4 months ago

So your work revolves around bringing entrepreneurs down?

[-] FlyingSquid@lemmy.world 4 points 4 months ago

In the sense that it would bring them down if they found out that I couldn't spend money on their business because I'm not working? I suppose.

[-] Xaphanos@lemmy.world 3 points 4 months ago

I am also unique-except-one. Mine is similarly unrecognizable.

[-] MxM111@kbin.social 2 points 4 months ago
[-] FlyingSquid@lemmy.world 3 points 4 months ago

Whichever free one you can use by going to their website, but considering anything it would know about me would come from at least 13 or 14 years ago, that shouldn't be an issue.

If you search my name on pretty much any search engine, a bunch of links come up.

[-] guyrocket@kbin.social 25 points 4 months ago

OpenAI openly admits that it is unable to correct incorrect information on ChatGPT. Furthermore, the company cannot say where the data comes from or what data ChatGPT stores about individual people. The company is well aware of this problem, but doesn’t seem to care.

Wow. Where are all the news stories about THIS?

[-] vrighter@discuss.tchncs.de 45 points 4 months ago

Once you start learning how they work, the first thing you realize is that hallucinations are fundamental to the technology. Of course they're unfixable. That's literally how these models work.

They're broken clocks that happen to be right more than twice a day, but broken nonetheless.

[-] DdCno1@kbin.social 10 points 4 months ago

It's an inherent issue with deep learning. Awareness of this among people who are regularly using these tools is very low, which is troubling.

https://umdearborn.edu/news/ais-mysterious-black-box-problem-explained

[-] guyrocket@kbin.social 1 points 4 months ago

That article explains the issues well and clearly. Thanks for sharing.

I think it should be shared more broadly.

[-] FaceDeer@fedia.io 9 points 4 months ago

You're reading one right now?

[-] mansfield@lemmy.world 11 points 4 months ago

This stuff is literally a bullshit machine. How can you fix it without making something else entirely?

[-] tinwhiskers@lemmy.world 1 points 4 months ago* (last edited 4 months ago)

When they hallucinate, they don't do it consistently, so one option is running the same query multiple times (with different "expert" base prompts), or through different LLMs, and then rejecting the result as "I don't know" if there's too much disagreement between them. The Q* approach is similar, but baked in. This should dramatically reduce hallucinations.

Edit: added bit about different experts
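A minimal sketch of that majority-vote idea. `ask_llm` is a hypothetical stand-in for a real model call; it returns canned answers here so the logic is runnable, with the "storyteller" persona playing the occasional hallucinator:

```python
from collections import Counter

def ask_llm(prompt, persona):
    """Hypothetical stand-in for querying an LLM under an 'expert' base prompt.
    Canned answers replace the real model so the example is self-contained."""
    canned = {
        "historian": "1969",
        "engineer": "1969",
        "storyteller": "1972",  # the occasional hallucination
    }
    return canned[persona]

def consensus_answer(prompt, personas, min_agreement=0.6):
    """Ask the same question under several 'expert' prompts and accept the
    most common answer only if enough of the runs agree; otherwise admit
    ignorance instead of guessing."""
    answers = [ask_llm(prompt, p) for p in personas]
    best, count = Counter(answers).most_common(1)[0]
    if count / len(answers) >= min_agreement:
        return best
    return "I don't know"

personas = ["historian", "engineer", "storyteller"]
print(consensus_answer("What year was the moon landing?", personas))
```

With two of three "experts" agreeing, the answer is accepted; raise `min_agreement` and the same disagreement yields "I don't know" instead. This catches inconsistent hallucinations, though errors that are correlated across runs would still survive the vote.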

[-] FaceDeer@fedia.io 10 points 4 months ago

The technology has to follow the legal requirements, not the other way around.

Given the possibility that this is a general problem of AI that simply cannot be corrected, the law could end up meaning that LLMs are outright forbidden in the EU. If that's true then the legal requirements will have to be changed, there's no way the EU would actually ban them. It'd be like opting out of the internal combustion engine due to some detail of an old law that they happened to violate.

[-] vrighter@discuss.tchncs.de 14 points 4 months ago

They would not be banned outright. They just can't be used to process data about customers.

But an AI furry porn generator doesn't necessarily process customer data.

[-] gregorum@lemm.ee 3 points 4 months ago

Not unless you want furry porn about your… taxes?

[-] Grimy@lemmy.world 1 points 4 months ago

That would get it banned in the US, not the EU.

[-] TheOneCurly@lemm.ee 10 points 4 months ago
  1. If the world had opted out of the ICE early, maybe we wouldn't be in quite the global warming situation we're in.

  2. LLMs are still a novelty product that can barely deliver on that novelty. Comparing them to the wildly useful, game-changing ICE is not terribly accurate.

[-] xhieron@lemmy.world 10 points 4 months ago

The technology has to follow the legal requirements, not the other way around.

This is something that really needs to be taught better, at least in the US.

GDPR doesn't mean that LLMs are forbidden in the EU, but it does mean that the companies that create them may be liable for damages. That said, the damages must be real. Actual damages is somewhat cut and dry (e.g., ChatGPT publishes defamatory information about you, and someone relies on it to your detriment), but GDPR also contemplates damages for distress (e.g., emotional).

If that’s true then the legal requirements will have to be changed ...

I think this position needs to be rejected in the strongest possible terms. Our response to any emerging technology should not be "It's too good not to have, so who cares if people lose their rights?" The right to privacy and with it the right to control one's likeness, name, and personal data is a much easier right to conceptually trade away than, say, the right to bodily integrity, but I think we've seen enough dystopian sci-fi at this point to understand where the intersections might lie between other rights and correspondingly miraculous technologies. [And after all, without the combustion engine we probably wouldn't be staring down the barrel of climate change right now.]

Should we, for instance, do away with the right to bodily integrity if it means everyone gets chipped shortly after birth? [The analogy to circumcision is unintentional but not lost on me.] After all, the chips mean that we can locate missing and abducted children easily and at trivial cost. They also mean that we no longer need to carry money or proxies for money. Crime is at an all-time low. Worth it, right? After all, the procedure is "minimally invasive."

The point is, rights have to be sacrosanct. They need to be the first consideration, and they need to be non-negotiable. If a technology needs those rights to bend or give way in order to exist, then it should not exist. If it's of sufficient benefit to society, then it can be made to exist in a way that preserves those rights, and those who are unwilling to create it in such a way should suffer the sanction of law.

[-] gedaliyah@lemmy.world 7 points 4 months ago

Or, on the other hand, maybe we have to admit that these technologies were released before they were finished, and that was a dangerous decision. It's now been well documented that ChatGPT and similar technologies were rushed to the public against the advice of some of their developers.

The developers will need to devise ways for the LLMs to understand their own training data.

[-] db0@lemmy.dbzer0.com 13 points 4 months ago

LLM tech is not rushed. The models are not for accurate information, and trying to use them this way is out of their scope. What's rushed is corpos trying to use them for search.

[-] uranibaba@lemmy.world 2 points 4 months ago

I read the article and I read the comments. Is there something I am missing here? I thought they were discussing OpenAI gathering data on its users (those using ChatGPT) and not giving that data back. Based on the comments, the article is upset that OpenAI can give back data that ChatGPT was trained on.

Does the second case fall under GDPR? Couldn't OpenAI just claim that they removed any information that makes it identifiable and call it a day?

[-] HubertManne@kbin.social 2 points 4 months ago

It so lol as it say hubert manne is vampire lizard technomancer from alpha centauri. so much I laugh because it is so not truth. fun funny it is.

this post was submitted on 29 Apr 2024
315 points (97.3% liked)
