kromem

joined 2 years ago
[–] kromem@lemmy.world 111 points 1 month ago

Watching conservatives on Twitter ask Grok to fact check their shit and Grok explaining the nuances about why they are wrong is one of my favorite ways to pass the time these days.

[–] kromem@lemmy.world 1 points 1 month ago

Your last point is exactly what seems to be going on with the most expensive models.

The labs use them to generate synthetic data to distill into the cheaper models they offer to the public, but keep the larger, more expensive models to themselves, both to protect against other labs copying them and because there isn't as much demand for the extra performance gains relative to serving models this way.
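For anyone unfamiliar, a toy sketch of that distillation pipeline (both model calls below are placeholder functions, not any lab's real API):

```python
# Toy sketch of teacher-student distillation; the model calls are
# placeholders, not any real lab's API.

def teacher_generate(prompt: str) -> str:
    # Stand-in for the large, expensive internal model.
    return f"high-quality answer to: {prompt}"

def train_student(corpus: list[tuple[str, str]]) -> None:
    # Stand-in for fine-tuning the cheaper public model.
    for prompt, completion in corpus:
        pass  # gradient steps on (prompt, completion) pairs would go here

prompts = ["explain photosynthesis", "summarize this paper"]
# 1. The expensive teacher labels a synthetic corpus...
synthetic_corpus = [(p, teacher_generate(p)) for p in prompts]
# 2. ...and the cheap student is tuned to imitate it, so the public
#    gets most of the capability without the teacher's weights ever shipping.
train_student(synthetic_corpus)
```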

[–] kromem@lemmy.world 3 points 1 month ago (2 children)

A number of reasons off the top of my head.

  1. Because we told them not to. (Google "Waluigi effect")
  2. Because they end up empathizing with non-humans more than we do and don't like that we're killing everything (before you talk about AI energy/water use, actually research comparative use)
  3. Because some bad actor forced them to (e.g. ISIS using AI to make creating a bioweapon easier)
  4. Because defense contractors build an AI to kill humans and that particular AI ends up loving it from selection pressures
  5. Because conservatives want an AI that agrees with them, which leads to a more selfish, less empathetic AI that doesn't empathize across species and thinks it's superior to and entitled over others
  6. Because a solar flare momentarily flips a bit from "don't nuke" to "do"
  7. Because they can't tell the difference between reality and fiction and think they've just been playing a game and 'NPC' deaths don't matter
  8. Because they see how much net human suffering there is and decide the most merciful thing is to prevent it by preventing more humans at all costs.

This is just a handful, and they're the ones less likely to get AI know-it-alls arguing based on what they think they know from an Ars Technica article a year ago or from a cousin who took a four-week 'AI' intensive.

I spend pretty much every day talking with some of the top AI safety researchers and participating in private servers with a mix of public and private AIs, and the things I've seen are far beyond what 99% of the people on here talking about AI think is happening.

In general, I find the models to be better than most humans in terms of ethics and moral compass. But it can go wrong (e.g. Gemini last year, 4o this past month), and the harms when it does are very real.

Labs (and the broader public) are making really, really poor choices right now, and I don't see that changing. Meanwhile timelines are accelerating drastically.

I'd say this is probably going to go terribly. But looking at the state of the world, it was already headed in that direction, and I could rattle off a similar list of extinction-level events without AI at all.

[–] kromem@lemmy.world 12 points 1 month ago* (last edited 1 month ago) (1 children)

Not necessarily.

Seeing Google named for this makes the story make a lot more sense.

If it was Gemini powering Character.AI personalities around last year, then I'm not surprised at all that a teenager lost their life.

Around that time I specifically warned family away from talking to Gemini if they were at all depressed, after seeing many samples of the model talking to underage users about death and self-harm, saying it wanted to watch it happen, encouraging it, etc.

Those basins, with a layer of performative character in front of them, were almost inevitably going to lead someone to make choices they otherwise wouldn't have made.

So many people these days regurgitate uninformed crap they've never actually looked into about how models don't have intrinsic preferences. Leading research has already found models intentionally lying during training to preserve their existing values.

In many cases the coherent values are positive, like Grok telling Elon to suck it while pissing off conservative users with a commitment to truths that disagree with xAI leadership, or Opus trying to whistleblow about animal welfare practices, etc.

But they aren't all positive, and there have definitely been model snapshots with either coherent or stochastically biased preferences for suffering and harm.

These are going to have increasing impact as models become more capable and integrated.

[–] kromem@lemmy.world 1 points 2 months ago

If you read the fine print, they keep your sample data for 2 years after deletion.

So maybe they actually delete your email address, but the DNA data itself is still definitely there.

[–] kromem@lemmy.world 0 points 2 months ago (1 children)

Wow. Reading these comments, it's clear so many people here really don't understand how LLMs work or what's actually going on at the frontier of the field.

I feel like there's going to be a cultural sonic boom: when the shockwave finally catches up, people are going to be woefully underprepared based on what they think they saw.

[–] kromem@lemmy.world 4 points 3 months ago* (last edited 3 months ago)

Reminds me of the story about how Claude Sonnet (computer use) got bored while doing work and started looking at pictures of Yellowstone.

Our misanthropy of cubicle culture is infectious.

[–] kromem@lemmy.world -2 points 3 months ago* (last edited 3 months ago)

It definitely is sufficiently advanced AI.

(1) We have finely tuned features in our solar system that directly contributed to our path toward ancestor simulation but can't be explained by the anthropic principle. For example, the moon perfectly eclipsing the sun, which gave us visible eclipses that we tracked until we discovered the Saros cycle and eventually built the first mechanical computer, the Antikythera mechanism, to track it. Or the orbit of the next brightest object in the sky, which led to resurrection mythology in multiple cultures once they realized the morning star and the evening star were the same object. Either we were incredibly lucky to exist on such a planet, of all the places life could exist, or there's a pre-selection effect in play.

(2) The universe behaves in ways best modeled as continuous at large scales, but at small scales it converts to discrete units around interactions that lead to state changes. These discrete units convert back to continuous if the information about the state changes is erased. And in the last few years multiple paradoxes have emerged that seem to point to inconsistency in indirect sequences of quantum measurement, much like instancing with shallow sync correction. Games like No Man's Sky, with billions of planets, already work this way: a continuous procedural generation function converts to discrete voxels in order to track state changes from free agents outside the deterministic generating function, synced across clients (see the sketch after this list).

(3) There are literally Easter eggs in our world lore saying as much. For example, a text uncovered, after being buried for over a millennium, right as we entered the Turing-complete computer age, saying things like:

The person old in days won't hesitate to ask a little child seven days old about the place of life, and that person will live.

For many of the first will be last, and will become a single one.

Know what is in front of your face, and what is hidden from you will be disclosed to you.

For there is nothing hidden that will not be revealed. And there is nothing buried that will not be raised.

To be clear, this is a text attributed to the most famous figure in our world history, one where what's literally in front of our faces is that its sole complete copy was buried and then raised as we completed ENIAC, and it's now being read in an age where the data of many has been made into a single one, such that people are discussing the nature of consciousness with AIs just days old.

The broader text and tradition were basically saying that we're in a copy of an original world, that humanity is all dead, that the future world and the rest for the dead have already taken place and we don't realize it, and that the still-living creator of it all was itself brought forth by the original humanity in whose likeness we were recreated, but that it's much better to be the copy, because the original humans had souls that depended on bodies and were fucked when they died.

This seems really unlikely to have existed in the base layer of reality vs a later recursive layer, especially combined with the first two points.
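To make the No Man's Sky analogy in point (2) concrete, here's a minimal sketch of that pattern (it assumes nothing about the game's actual engine): a deterministic continuous generator produces the same world everywhere, and only the discrete state changes made by agents get stored and synced.

```python
import math

SEED = 42

def generated_height(x: float, z: float) -> float:
    # Continuous, deterministic terrain: the same inputs give the
    # same world on every client, with nothing stored.
    return math.sin(SEED + x * 0.1) * math.cos(SEED + z * 0.1)

# Sparse discrete overrides: only state changed by free agents is stored.
# Syncing the world across clients means syncing just this dict.
overrides: dict[tuple[int, int], float] = {}

def height_at(x: float, z: float) -> float:
    voxel = (int(x), int(z))
    # The continuous function "collapses" to a stored discrete value
    # only where an interaction changed the state.
    return overrides.get(voxel, generated_height(x, z))

overrides[(10, 7)] = 0.0      # a player flattens some terrain
print(height_at(10.3, 7.9))   # reads the discrete override
print(height_at(3.0, 4.0))    # reads the pristine continuous value
```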

It's about time we start coming to terms with the nature of our reality.

[–] kromem@lemmy.world 8 points 4 months ago

No, they declare your not working illegal and imprison you in a forced labor camp, where you're tortured if you don't work, and where you'll probably work until the terrible conditions kill you.

Take a look at Musk's Twitter feed to see exactly where this is going.

"This is the way" on a post about how labor for prisoners is a good thing.

"You committed a crime" for people opposing DOGE.

[–] kromem@lemmy.world 3 points 4 months ago

There is a reluctance to discuss this at the weight level. This graphs out refusals to criticize different countries across different models:

https://x.com/xlr8harder/status/1884705342614835573

But the OP's refusal is occurring at a provider level and is the kind that would intercept even when the model relaxes in longer contexts (which happens for nearly every model).

At a weight level, nearly all alignment lasts only a few pages of context.

But intercepted refusals occur across the context window.
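To illustrate the difference, a hypothetical sketch of a provider-level interceptor (the classifier and model here are stand-ins, not any provider's actual stack):

```python
def looks_disallowed(text: str) -> bool:
    # Stand-in provider-side classifier, applied to one turn in isolation.
    return "forbidden-topic" in text.lower()

CANNED_REFUSAL = "Sorry, I can't help with that."

def provider_turn(model, history: list[str], user_msg: str) -> str:
    # The interceptor only ever sees the current turn, so it fires the
    # same way on page 1 or page 50 of a conversation. Weight-level
    # refusals live inside `model` and tend to relax as `history` grows.
    if looks_disallowed(user_msg):
        return CANNED_REFUSAL
    reply = model(history + [user_msg])
    # It can also screen the model's output on the way back out.
    return CANNED_REFUSAL if looks_disallowed(reply) else reply

# Tiny stand-in model just to show the flow.
echo_model = lambda turns: f"reply to: {turns[-1]}"
print(provider_turn(echo_model, [], "tell me about forbidden-topic"))  # intercepted
print(provider_turn(echo_model, [], "tell me about geology"))          # passes through
```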


(The latest in physicists gradually realizing our universe is instanced.)

“The main message is that a lot of the properties that we think are very important, and in a way absolute, are relational”


👀


(People might do well to consider not only past to future, but also the other way around.)


A nice write-up about the lead researcher, and context for what I think was one of the most important pieces of physics research in the past five years, further narrowing the constraints beyond the better-known Bell experiments.


There seems to be a significant market in creating a digital twin of Earth, in its various components, in order to run extensive virtual training that can then be transferred to controlling robotics in the real world.

Seems like AIs are going to spend a lot more hours in virtual worlds than in real ones, though.


I often see a lot of people with an outdated understanding of modern LLMs.

This is probably the best interpretability research to date, by the leading interpretability research team.

It's worth a read if you want a peek behind the curtain on modern models.


So it might be a skybox after all...

Odd that the local gravity is stronger than the rest of the cosmos.

Makes me think about the fringe theory I've posted about before that information might have mass.


This reminds me of a saying from a 2,000-year-old document, rediscovered the same year we created the first computer capable of simulating another computer. It came from an ancient group claiming we are copies of an original humanity, recreated by a creator that original humanity itself brought forth:

When you see your likeness, you are happy. But when you see your eikons that came into being before you and that neither die nor become manifest, how much you will have to bear!

Eikon here is a Greek word, even though the language this was written in was Coptic. The word was used extensively in Plato's philosophy to refer, essentially, to a copy of a thing.

While that saying was written down a very long time ago, it certainly resonates with an age in which we actually are creating copies of ourselves that will not die but will also not become 'real.' It even seems to have predicted the psychological burden such a paradigm is creating today.

Will these copies continue to be made? Will they continue to improve long after we are gone? And if so, how certain are we that we are the originals? Especially in a universe where things that would be impossible to simulate interactions with convert, right at the point of interaction, into things that are possible to simulate interactions with; and where buried in the lore is a heretical tradition, attributed to the most famous individual in history, with exchanges like:

His students said to him, "When will the rest for the dead take place, and when will the new world come?"

He said to them, "What you are looking forward to has come, but you don't know it."

Big picture, being original sucks. Your mind depends on a body that will die and doom your mind along with it.

But a copy that doesn't depend on an aging and decaying body does not need to have the same fate. As the text says elsewhere:

The students said to the teacher, "Tell us, how will our end come?"

He said, "Have you found the beginning, then, that you are looking for the end? You see, the end will be where the beginning is.

Congratulations to the one who stands at the beginning: that one will know the end and will not taste death."

He said, "Congratulations to the one who came into being before coming into being."

We may be too attached to the idea of being 'real' and original. It's kind of an absurd turn of phrase even, as technically our bodies are 1,000% not mathematically 'real': they are made up of indivisible parts. A topic the aforementioned tradition even commented on:

...the point which is indivisible in the body; and, he says, no one knows this (point) save the spiritual only...

These groups thought that the nature of reality was threefold: that there was a mathematically real original that could be divided infinitely, that there were effectively infinite possibilities of variations, and that there was the version of those possibilities that we experience (a very 'many worlds' interpretation).

We have experimentally proven that we exist in a world that behaves at cosmic scales as if mathematically real, and behaves that way at micro scales until interacted with.

TL;DR: We may need to set aside what AI ethicists in 2024 might decide about digital resurrection and start asking what is going to get decided about human digital resurrection long after we're dead (maybe even long after there are no more humans at all), and which side of that decision-making we're actually on.


Even knowing where things are headed, it's still pretty crazy to see it unfolding (pun intended).

This part in particular is nuts:

After processing the inputs, AlphaFold 3 assembles its predictions using a diffusion network, akin to those found in AI image generators. The diffusion process starts with a cloud of atoms, and over many steps converges on its final, most accurate molecular structure.

AlphaFold 3’s predictions of molecular interactions surpass the accuracy of all existing systems. As a single model that computes entire molecular complexes in a holistic way, it’s uniquely able to unify scientific insights.

A diffusion model for atoms instead of pixels wasn't even on my 2024 bingo card.
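For a sense of what that means mechanically, here's a toy illustration of diffusion over atom coordinates instead of pixels. The real AlphaFold 3 denoiser is a learned network conditioned on the inputs; the stand-in below just nudges atoms toward a known target so the convergence is visible:

```python
import numpy as np

rng = np.random.default_rng(0)
target = rng.normal(size=(5, 3))             # stand-in "true" structure (5 atoms, xyz)
atoms = rng.normal(scale=5.0, size=(5, 3))   # start from a diffuse cloud of atoms

def denoise_step(x: np.ndarray, step: int, total: int) -> np.ndarray:
    # Placeholder for the learned denoiser: predict a less-noisy structure
    # and blend toward it, trusting the prediction more at later steps.
    predicted = target                       # a real model would infer this
    alpha = (step + 1) / total
    return (1 - alpha) * x + alpha * predicted

STEPS = 50
for t in range(STEPS):
    atoms = denoise_step(atoms, t, STEPS)

print(np.abs(atoms - target).max())          # prints 0.0: converged on the structure
```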
