submitted 8 months ago by ElCanut@jlai.lu to c/technology@beehaw.org
[-] Phroon@beehaw.org 152 points 8 months ago

“You may not instantly see why I bring the subject up, but that is because my mind works so phenomenally fast, and I am at a rough estimate thirty billion times more intelligent than you. Let me give you an example. Think of a number, any number.”

“Er, five,” said the mattress.

“Wrong,” said Marvin. “You see?”

― Douglas Adams, Life, the Universe and Everything

[-] AlexisFR@jlai.lu 9 points 8 months ago

The mattress? Like for sleeping?

[-] Asafum@feddit.nl 40 points 8 months ago* (last edited 8 months ago)

Yep! The Hitchhiker's books are so much fun lol

I still think one of my favorite lines is "the ships hung in the sky in much the same way that bricks don't."

[-] Bishma@discuss.tchncs.de 125 points 8 months ago

37 is well represented. Proof that we've taught AI some of our own weird biases.

[-] GenderNeutralBro 44 points 8 months ago

What's special about 37? Just that it's prime or is there a superstition or pop culture reference I don't know?

[-] Bishma@discuss.tchncs.de 103 points 8 months ago

If you discount the pop-culture numbers (for us 7, 42, and 69), it's the number most often chosen by people if you ask them for a random number between 1 and 100. It just seems the most random one to choose for a lot of people. Veritasium just did a video about it.

[-] metallic_z3r0@infosec.pub 28 points 8 months ago

37 is my favorite, because 3x7x37=777 (three sevens), and I think that's neat.

[-] SubArcticTundra@lemmy.ml 12 points 8 months ago
[-] Bishma@discuss.tchncs.de 20 points 8 months ago

I'm curious about that too. Something is twisting weights for 57 fairly strongly in the model, but I'm not sure what. Maybe it's been trained on a bunch of old Heinz 57 Varieties marketing.

[-] Karyoplasma@discuss.tchncs.de 18 points 8 months ago* (last edited 8 months ago)

Probably just because it's prime. It's just that humans are terrible at understanding the concept of randomness. A study by Theodore P. Hill showed that when tasked to pick a random number between 1 and 10, almost a third of the subjects (n was over 8500) picked 7. 10 was the least picked number (if you ditch the few idiots that picked 0).

[-] FiniteBanjo@lemmy.today 11 points 8 months ago

Why would that need to be proven? We're the sample data. It's implied.

[-] jarfil@beehaw.org 10 points 8 months ago

The correctness of the sampling process still needs a proof. Like this.

[-] olicvb@lemmy.ca 63 points 8 months ago

holy crap, the answer to life the universe and everything XD

[-] WarmSoda@lemm.ee 35 points 8 months ago

More than likely it's because of that book and how often it's quoted.

[-] FiniteBanjo@lemmy.today 55 points 8 months ago

No shit, Sherlock, its sample data is the internet.

[-] Appoxo@lemmy.dbzer0.com 42 points 8 months ago
[-] Chadus_Maximus@lemm.ee 21 points 8 months ago

That's a naughty number and we don't allow those.


What does "temperature" on the Y-axis refer to?

[-] gerryflap@feddit.nl 40 points 8 months ago

I'm not a hundred percent sure, but afaik it has to do with how random the output of the GPT model will be. At 0 it will always pick the most probable next continuation of a piece of text according to its own prediction. The higher the temperature, the more chance there is for less probable outputs to get picked. So it's most likely to pick 42, but as the temperature increases you see the chance of (according to the model) less likely numbers increase.

This is how temperature works in the softmax function, which is often used in deep learning.
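As a minimal sketch of that behaviour (the logits and tokens here are invented for illustration, not the model's real scores), temperature divides the logits before the softmax, so low values sharpen the distribution toward the top pick and high values flatten it:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw model scores (logits) into probabilities.

    temperature < 1 sharpens the distribution toward the top choice;
    temperature > 1 flattens it toward uniform.
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for the tokens "42", "37", "57":
logits = [5.0, 3.0, 2.0]

for t in (0.1, 1.0, 2.0):
    probs = softmax_with_temperature(logits, t)
    print(t, [round(p, 3) for p in probs])
```

At temperature 0.1 nearly all the probability mass lands on "42"; at 2.0 the three options come much closer together, which matches the spread visible in the plot as temperature rises.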

[-] HarkMahlberg@kbin.social 30 points 8 months ago* (last edited 8 months ago)

I mean... they didn't specify it had to be random (or even uniform)? But yeah, it's a good showcase of how GPT acquired the same biases as people, from people.

[-] OsrsNeedsF2P@lemmy.ml 22 points 8 months ago

uniform

Reminds me of my previous job where our LLM was grading things too high. The AI "engineer" adjusted the prompt to tell the LLM that the average output should be 3. I had a hard time explaining that wouldn't do anything at all, because all the chats were independent events.

Anyways, I quit that place and the project completely derailed.

[-] lauha@lemmy.one 29 points 8 months ago

Ask humans the same and the most common number is 37.

[-] Catsrules@lemmy.ml 13 points 8 months ago

I saw that YouTube video as well.

[-] ForestOrca@kbin.social 25 points 8 months ago

WAIT A MINUTE!!! You mean Douglas Adams was actually an LLM?

[-] FlashMobOfOne@beehaw.org 22 points 8 months ago

HA, funny that this comes up. DND Beyond doesn't have a d100, so I opened my ChatGPT sub and had it roll a d100 for me a few times so I could use my magic beans properly.

[-] terminhell@lemmy.dbzer0.com 18 points 8 months ago

I use the percentile die for that.

[-] FlashMobOfOne@beehaw.org 8 points 8 months ago

Also an excellent method.

[-] TauriWarrior@aussie.zone 11 points 8 months ago* (last edited 8 months ago)

Opened up DND Beyond to check since I remember rolling it before, and it's there, between the d8 and d10; the picture shows 2 dice.

[-] Urist@lemmy.ml 9 points 8 months ago

Roll two d10, once for each digit, and profit?
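The two-d10 trick above takes only a few lines to simulate; a minimal sketch (the function name is ours), using the usual percentile-dice convention that a double zero reads as 100:

```python
import random

def roll_d100():
    """Simulate a d100 with two d10: one die for the tens digit,
    one for the ones digit. A "00" result is read as 100."""
    tens = random.randint(0, 9)  # tens die: 00, 10, ..., 90
    ones = random.randint(0, 9)  # ones die: 0-9
    value = tens * 10 + ones
    return 100 if value == 0 else value

print([roll_d100() for _ in range(5)])
```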

[-] pipows@lemmy.today 21 points 8 months ago

LLMs aren't thinking or inventing; they're predicting what is supposed to be answered next, so it's expected that they'll produce the same results every time.

[-] xthexder@l.sw0.com 12 points 8 months ago* (last edited 8 months ago)

This graph actually shows a little more about what's happening with the randomness or "temperature" of the LLM.
It's actually predicting the probability of every word (token) it knows of coming next, all at once.
The temperature then says how random it should be when picking from that list of probable next words. A temperature of 0 means it always picks the most likely next word, which in this case ends up being 42.
As the temperature increases, it gets more random (but you can see it still isn't a perfect random distribution with a higher temperature value)
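A toy illustration of that picking step (the logits are made up, with "42" as the model's strong favourite): at temperature 0 the choice degenerates to a plain argmax, while above 0 the next token is sampled from the temperature-scaled distribution.

```python
import math
import random

def sample_next_token(token_logits, temperature):
    """Pick the next token from a {token: logit} dict.

    At temperature 0 this is a deterministic argmax; above 0 we sample
    from the temperature-scaled softmax distribution.
    """
    if temperature == 0:
        return max(token_logits, key=token_logits.get)
    tokens = list(token_logits)
    scaled = [token_logits[t] / temperature for t in tokens]
    m = max(scaled)  # shift for numerical stability
    weights = [math.exp(s - m) for s in scaled]
    return random.choices(tokens, weights=weights, k=1)[0]

# Made-up logits: "42" dominates, the others trail behind.
logits = {"42": 6.0, "37": 3.5, "57": 3.0, "69": 2.5}

print(sample_next_token(logits, 0))    # always "42"
print(sample_next_token(logits, 1.5))  # occasionally something else
```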

[-] DarkFox@pawb.social 17 points 8 months ago

Which model?

When I tried on ChatGPT 4, it wrote a short python script and executed it to get a random integer.

import random

# Pick a random number between 1 and 100
random_number = random.randint(1, 100)
random_number
[-] xyguy@startrek.website 10 points 8 months ago

Only 1000 times? It's interesting that there's such a bias there but it's a computer. Ask it 100,000 times and make sure it's not a fluke.

[-] thesmokingman@programming.dev 8 points 8 months ago

42, 47, and 50 all make sense to me. What’s the significance of 37, 57, and 73?

[-] Rekhyt@beehaw.org 32 points 8 months ago

There's a great Veritasium video recently about this exact thing: https://youtu.be/d6iQrh2TK98

It's a human thing, though. This is just more evidence of LLMs' garbage-in, garbage-out problem: it's human biases being present in a system that people want to claim doesn't have them.

[-] humbletightband@lemmy.dbzer0.com 9 points 8 months ago

People do mention Veritasium, though he doesn't give any significant explanation of the phenomenon.

I still wonder about 47. In Veritasium's plots, all these numbers show a peak, but not 47. I recall from my childhood that I indeed used to notice that number everywhere, but idk why.

[-] Grimpen@lemmy.ca 9 points 8 months ago

Veritasium just released a video about people picking 37 when asked to pick a random number.

[-] PhreakyByNature@feddit.uk 8 points 8 months ago

NEEDS MOAR 69 FELLOW HUMAN

this post was submitted on 10 Apr 2024
437 points (100.0% liked)
