
Source: nitter, twitter

Transcribed:

Max Tegmark (@tegmark):
No, LLM's aren't mere stochastic parrots: Llama-2 contains a detailed model of the world, quite literally! We even discover a "longitude neuron"

Wes Gurnee (@wesg52):
Do language models have an internal world model? A sense of time? At multiple spatiotemporal scales?
In a new paper with @tegmark we provide evidence that they do by finding a literal map of the world inside the activations of Llama-2! [image with colorful dots on a map]


With this dastardly deliberate simplification of what it means to have a world model, we've been struck a mortal blow in our skepticism towards LLMs; we have no choice but to convert, surely!

(*) Asterisk:
Not an actual literal map; what they really mean is that they've trained "linear probes" (small models in their own right) on the activation layers, for a bunch of inputs, minimizing loss for latitude and longitude (and/or time, blah blah).

And yes, from the activations you can get a fuzzy distribution of lat,long on a map, and yes, they've been able to isolate individual "neurons" whose activation seems to correlate with latitude and longitude. (Frankly, not being able to find one would have been surprising to me. This doesn't mean LLMs aren't just big statistical machines; in this case they were trained on data containing literal lat,long tuples for cities in particular.)

It's a neat visualization and result, but it is sort of comically missing the point.


Bonus sneers from @emilymbender:

  • You know what's most striking about this graphic? It's not that mentions of people/cities/etc from different continents cluster together in terms of word co-occurrences. It's just how sparse the data from the Global South are. -- Also, no, that's not what "world model" means if you're talking about the relevance of world models to language understanding. (source)
  • "We can overlay it on a map" != "world model" (source)
[-] self@awful.systems 7 points 1 year ago

we need a run of @dgerard@awful.systems’s “it can’t be that stupid, you must be explaining it wrong” stickers but with the ChatGPT logo instead of the bitcoin one

also how can we talk shit about LLMs when computation was impossible until they were invented?

[-] blakestacey@awful.systems 10 points 1 year ago

15-ish years ago, I was doing a lot of principal component analysis and multi-dimensional scaling. A standard exercise in that area is to take distances between cities, like the lengths of airline flight paths, and reconstruct a map. If only I'd thought to claim that to be a world model!
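(The exercise blakestacey describes can be sketched in a few lines of classical multi-dimensional scaling: recover point positions, up to rotation and reflection, from nothing but their pairwise distances. The coordinates below are made-up planar points, not real cities.)

```python
import numpy as np

# Five made-up planar "cities" and their pairwise distance matrix.
points = np.array([[0.0, 0.0], [3.0, 0.0], [3.0, 4.0], [0.0, 4.0], [1.5, 2.0]])
D = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)

# Classical MDS: double-center the squared distances, B = -1/2 * J D^2 J.
n = D.shape[0]
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ (D ** 2) @ J

# The top-2 eigenpairs of B give a 2-D embedding.
vals, vecs = np.linalg.eigh(B)
top = np.argsort(vals)[::-1][:2]
coords = vecs[:, top] * np.sqrt(vals[top])

# Distances between the recovered points match the originals.
D_rec = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
print(np.allclose(D, D_rec))
```

Feed it airline distances and you get a recognizable map out; nobody called that a world model either.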

[-] blakestacey@awful.systems 7 points 1 year ago

Whereas the electro-mechanical device that Turing built could perform just one code-cracking function well, today’s frontier AI models are approaching the “universal” computers he could only imagine, capable of vastly more functions.

Fucking Christ, that hurt to read.

this post was submitted on 05 Oct 2023
60 points (100.0% liked)

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community
