Mayonnaise Rule (files.catbox.moe)
submitted 11 months ago by Gork@lemm.ee to c/196@lemmy.blahaj.zone
[-] riodoro1@lemmy.world 118 points 11 months ago* (last edited 11 months ago)

The future of information ladies and gentlemen

[-] casmael@lemm.ee 31 points 11 months ago

Wow, it’s so realistic and smart and easy to use, I can feel my knowledge being revolutionised

[-] huntrss@feddit.de 105 points 11 months ago* (last edited 11 months ago)

It's so human how - instead of admitting its error - it's pulling this bs right out of its ass 🤣

[-] darthfabulous42069@lemm.ee 13 points 11 months ago

🤔 I wonder what the hell it is that's so scary about admitting they're wrong to other people.

[-] Duranie@literature.cafe 30 points 11 months ago

Growing up in an environment where mistakes were unacceptable sets the stage. Our willingness and ability to understand that that's fucked up and change our attitudes about mistakes takes more growth.

For some people it's easier to dig in their heels and double down.

[-] darthfabulous42069@lemm.ee 12 points 11 months ago* (last edited 11 months ago)

🤔🤔🤔 I guess I can empathize. People are always traumatized by whatever their parents tell them. What a shame.

[-] vox@sopuli.xyz 79 points 11 months ago
[-] SpunkyMcGoo@lemmy.world 34 points 11 months ago

"where?" comes across as confrontational, you made it scared :(

[-] hark@lemmy.world 53 points 11 months ago

Large Lying Model. This could make politicians and executives obsolete!

[-] fidodo@lemmy.world 19 points 11 months ago

More like large guessing models. They have no thought process, they just produce words.

[-] TotallynotJessica@lemmy.world 15 points 11 months ago

They don't even guess. Guessing would imply them understanding what you're talking about. They only think about the language, not the concepts. It's the practical embodiment of the Chinese room thought experiment. They generate a response based on the symbols, but not the ideas the symbols represent.

[-] fidodo@lemmy.world 7 points 11 months ago

I'm equating probability with guessing here, but yes there is a nuanced difference.

[-] gerryflap@feddit.nl 50 points 11 months ago

I think these models struggle with this because they don't process text as individual characters, but rather as tokens that often contain parts of a word. So the model never sees the actual characters within a token, and can only infer the contents of a token from the training data itself if the training data contains more information about it. It can get it right, but this depends on how much it can infer from training data and context. It's probably a bit like trying to infer what an English word sounds like when you've only heard 10% of the dictionary spoken aloud and knowing what it sounds like isn't actually that important to you.

More info can be found here: https://platform.openai.com/tokenizer
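A minimal toy sketch of the point above, using a made-up three-entry vocabulary (a real BPE tokenizer has tens of thousands of entries): the model receives opaque token IDs, so the character-level fact "mayonnaise contains two n's" is never directly visible to it.

```python
# Hypothetical vocabulary for illustration only -- not a real tokenizer.
vocab = {"may": 101, "onna": 102, "ise": 103}

def tokenize(word, vocab):
    """Greedy longest-match split of a word into known tokens (toy example)."""
    tokens = []
    i = 0
    while i < len(word):
        for j in range(len(word), i, -1):
            if word[i:j] in vocab:
                tokens.append(word[i:j])
                i = j
                break
        else:
            raise ValueError(f"no token for {word[i:]!r}")
    return tokens

word = "mayonnaise"
tokens = tokenize(word, vocab)
print(tokens)                        # ['may', 'onna', 'ise']
print([vocab[t] for t in tokens])    # [101, 102, 103] -- all the model sees
print(word.count("n"))               # 2 -- the character-level answer it never sees
```

From the token IDs alone there is no way to recover the letter count without having memorized the spelling of each token from training data.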

[-] Krauerking@lemy.lol 11 points 11 months ago

Ok, so tokenization is why I've seen tech nerds get so excited about a system that can come up with synonyms for auto-generated words, with a basic ability to sometimes be correct by looking at the words before and after it....

But it's such a shitty way to look up synonyms! Using the words on either side doesn't mean you found a synonym, just that you found another word that might work, and it still takes the full horsepower of a ridiculously overpowered system.

Or you could have a lookup table that just reads the frickin' word and has alternate synonyms predefined, and that could run in Word 97.

It's ridiculous that we treat this as better in any meaningful way rather than just wasteful development.

[-] Viking_Hippie@lemmy.world 42 points 11 months ago

Mayonnaine: mayo with cocaine. The favorite condiment of Wall Street.

[-] unreachable@lemmy.world 38 points 11 months ago
[-] FakeGreekGirl@lemmy.blahaj.zone 12 points 11 months ago

HOW BABBY IS FORMED

[-] chetradley@lemmy.world 12 points 11 months ago

PRAGERT SEX. Hurt baby top of head?

[-] jkozaka@lemm.ee 35 points 11 months ago* (last edited 11 months ago)

You forgot the rest of the posts, where the LLM gaslights her afterwards. There are too many images to put here, so I'll link to a post with them.
I'm not sure if it's the original post, but it's where I found it initially.

[-] megopie@lemmy.blahaj.zone 34 points 11 months ago

Yah, people don’t seem to get that LLMs cannot consider the meaning or logic of the answers they give. They’re just assembling bits of language in patterns that are likely to come next based on their training data.

The technology of LLMs is fundamentally incapable of weighing choices or doing critical thinking. Maybe new types of models will be able to do that, but those models don’t exist yet.
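"Patterns that are likely to come next" can be sketched in a few lines. This toy bigram sampler uses made-up frequency counts standing in for training data; real LLMs use vastly larger neural models, but the selection step is the same idea: pick a continuation by probability, consulting no meaning or logic anywhere.

```python
import random

# Hypothetical counts of which word followed which in the "training data".
bigram_counts = {
    "the": {"mayonnaise": 3, "answer": 5, "letter": 2},
    "answer": {"is": 9, "was": 1},
}

def next_token(prev, counts, rng):
    """Sample the next token in proportion to its observed frequency."""
    options = counts[prev]
    tokens = list(options)
    weights = [options[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

rng = random.Random(42)
print(next_token("the", bigram_counts, rng))  # one of: mayonnaise, answer, letter
```

Whether the sampled word is *true* never enters the computation, which is why confident nonsense and correct answers come out of the exact same mechanism.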

[-] CurlyMoustache@lemmy.world 13 points 11 months ago* (last edited 11 months ago)

A grown man I work with - he's in his 50s - tells me he asks ChatGPT stuff all the time, and I can't for the life of me figure out why. It's a copycat designed to beat the Turing test. It's not a search engine or Wikipedia; it just gambles that it can pass the Turing test after every prompt you give it.

[-] Ookami38@sh.itjust.works 13 points 11 months ago

Honestly though, with a bit of verification, chatgpt 4 gives waaaaaay better answers than any search engine. Like, it's how it was back when you'd just ask Google a plain-english question and it'd give you SOMETHING at least.

Again, verify everything it tells you, it's still prone to hallucinations, but it's a damn good first step.

[-] CurlyMoustache@lemmy.world 7 points 11 months ago

Sure. But take it for what it is. It is a language model designed to imitate humans writing. What the future holds, I can't say

[-] megopie@lemmy.blahaj.zone 6 points 11 months ago

People want functioning web search back, but rather than address the industry problems that broke an otherwise functional concept, they want a fancy new technology to make the problem go away.

[-] miss_brainfarts@lemmy.blahaj.zone 31 points 11 months ago

The funniest thing is that even when the answer is correct, asking an LLM to explain its reasoning step by step can produce the dumbest results

[-] MacStache@sopuli.xyz 29 points 11 months ago

Artificial Intelligencensence.

[-] mondoman712@lemmy.ml 26 points 11 months ago

I just tried in google gemini

[-] jaemo@sh.itjust.works 24 points 11 months ago

I wonder what we'll rebrand 'using an LLM' as once the bubble bursts and we realize it's only artificial-advanced-grammarly and not 'intelligence'.

[-] Bazz@feddit.de 24 points 11 months ago
[-] sverit@feddit.de 7 points 11 months ago
[-] Wilzax@lemmy.world 23 points 11 months ago

The letter n appears twice in the letter m. The count is correct, the reasoning is not

[-] fidodo@lemmy.world 11 points 11 months ago

That's not what it was doing behind the scenes

[-] sleep_deprived@lemmy.world 21 points 11 months ago

If anybody's curious, I tried it with GPT4 and it got it right.

[-] stebo02@lemmy.dbzer0.com 72 points 11 months ago

I think GPT3.5 bamboozled me

[-] thorbot@lemmy.world 12 points 11 months ago

I fucking love this

[-] Shardikprime@lemmy.world 8 points 11 months ago

Bro you've been hoodwinked

[-] Ookami38@sh.itjust.works 7 points 11 months ago

Ok that got me lmao

[-] AnUnusualRelic@lemmy.world 6 points 11 months ago

Not to mention that all those n look suspiciously similar...

[-] Gork@lemm.ee 6 points 11 months ago
[-] fox2263@lemmy.world 20 points 11 months ago

Their coming fer are jerbs

[-] smotherlove@sh.itjust.works 12 points 11 months ago

It's this dumb and they will still find a way to ruin our lives with it

[-] FakeGreekGirl@lemmy.blahaj.zone 8 points 11 months ago

That's what gets me too. Like, you want to replace all writers, artists, coders, and decision makers... with this?

[-] Happybara@lemmy.world 9 points 11 months ago

Bless its heart, it's doing its best.

[-] Waluigi@feddit.de 7 points 11 months ago

That escalated quickly

this post was submitted on 10 Feb 2024
1035 points (100.0% liked)
