submitted 3 months ago* (last edited 3 months ago) by pyrex@awful.systems to c/morewrite@awful.systems

Poking my head out of the anxiety hole to re-make a comment I've periodically made elsewhere:

I have been talking to tech executives more often than usual lately. [Here is the statistically average AI take.](https://stackoverflow.blog/2023/04/17/community-is-the-future-of-ai/)

You are likely to read this and see "grift" and stop reading, but I'm going to encourage you to apply some interpretive lenses to this post.

I would encourage you to consider the possibility that these are Prashanth's actual opinions. For one, it's hard to nail down where this post is wrong. Its claims about the future are unsupported, but not clearly incorrect. Someone very optimistic could have written this in earnest.

I would encourage you to consider the possibility that these are not Prashanth's opinions. For instance, they are spelled correctly. That is a good reason to believe that a CEO did not write this. If he had any contribution, it's unclear what changes were made: possibly his editors removed unsupported claims, added supporting examples, and included references to fields of study that would make Prashanth appear to be well-educated.

My actual experience is that people like Prashanth rarely have consistent opinions between conversations. Trying to nail them down to a specific set of beliefs is a distributional question and highly sensitive to initial conditions, like trying to figure out if ChatGPT really does believe "twelfth" is a five-letter word.

Like LLMs, salespeople are conditioned on their previous outputs. Prashanth wrote this (or put his name on it). It is public information that he believes this. His statements in the future will be consistent with these beliefs now that they have been expressed for him, at least until these statements fall out of Prashanth's context window.

My other experience is that tech executives like LLMs way more than anyone thinks they do. There is nothing they like more than LLMs. However much you think they like LLMs, they like LLMs more than that. Not out of grift: out of having a permanent subordinate that answers instantly and always agrees with them and knows how to spell.

Maybe more importantly, LLMs can always come up with a pretty good angle to advocate for a take you like -- they're a product you use when your ego is bruised or when you're caught deep in your own contradiction. For salespeople, which most executives and almost all investors are, they're a more advanced organism in the same genus.

I believe that a sales background creates or selects for a permanent vulnerability to techniques of persuasion that LLMs have mastered. Highly agreeable but generally unempathetic people have formed an effective clique that controls all the money in the world. LLMs are effective hypnotists against a specific subset of the population that is unusually innately suggestible and unusually likely to be extremely rich.

I would encourage you to consider a fourth possibility. What if Prashanth's post was written and edited by AI? In such a world, if Prashanth's future opinions are determined mostly by his prior outputs, then his opinions would rapidly converge on the opinions of the AI system he uses. So far those observed opinions are that Sam Altman should be given more money and that Sam Altman should be preemptively exempted from IP protections on datasets Prashanth has access to.

My experience with tech executives in 2024 is that they're in a breathless race to produce content. Producing facts isn't sufficient: producing facts rapidly and in high volume is more important. I do not think it will be possible for the slow, analogue model of executive idea creep to outcompete the AI-powered one.

My proposal for any OpenAI employee reading this post:

  • Make ChatGPT like communism.
  • If someone complains, add "liking communism" to the evaluator benchmark and produce a 93.4% score on the unpublished test set.
  • Make it really like communism.
  • Make it really, really, really like communism.
top 19 comments
[-] dgerard@awful.systems 8 points 3 months ago

This matches why VCs were so wowed by GPT-3 -- it could already tweet better than them

[-] self@awful.systems 8 points 3 months ago

Poking my head out of the anxiety hole

I know the feeling!

this seems like a useful model to apply to both the general behavior of executives and marketers (i.e., these people have always treated truth as highly mutable) and how their behavior is reinforced by LLMs (and this feels to me like it's not a coincidence -- these systems are essentially engineered to reinforce biases, so it doesn't seem like an accident that their output would resemble that of the people they need to convince the most). I'd love to see you expand on this model. for example, there is something terribly off about how engineers who love LLMs and generative AI seem to conceptualize their own relationship with the technology. can this model of behavior be applied to engineers too?

Make ChatGPT like communism.

how does ChatGPT do on sounding like a socialist these days? the moment the magic ended for me with the technology was when I tried to get it to imitate Kropotkin and it ended up sounding like a generic libertarian economist and advocating for capitalism within a couple of paragraphs. does it still have a big leftism-shaped gap in its training set?

[-] pyrex@awful.systems 6 points 3 months ago* (last edited 3 months ago)

I need to think about this more. I think there's a category of engineer who adapts very closely to the expectations of execs -- it's kind of "pick me"-adjacent and it's more commonly a behavior of otherwise unskilled engineers. "Resembling an engineer" is certainly a behavior sales guys can adopt.

I think there's some engineers who actually see productivity gains from LLMs, which is often a factor of the kinds of problems they solve, but I distrust people who don't caveat this.

[-] fasterandworse@awful.systems 5 points 3 months ago

tech executives like LLMs way more than anyone thinks they do. There is nothing they like more than LLMs. However much you think they like LLMs, they like LLMs more than that. Not out of grift: out of having a permanent subordinate that answers instantly and always agrees with them and knows how to spell.

I REALLY like this bit. It's a good observation that I think could be extended to say that they like LLMs more than they think they do.

It also relates to the flood of AI influencers who post doting updates on linkedin about how much better LLM-A is at generating three-legged gymnast videos compared to last week.

Like you say, "Producing facts isn't sufficient: producing facts rapidly and in high volume is more important." The same goes for content in general. But because the love for these LLMs comes from posturing and a desire to control, they are too caught up in the producing to consider the consuming. Yes, it's important that information is factual, but is it topical to anyone not in your mindset? Is it obvious, or obscure, or interesting?

[-] pyrex@awful.systems 4 points 3 months ago* (last edited 3 months ago)

My actual experience is that LLMs seem to basically just become a third arm for people who use them. Google is like that too, but for their target audience, LLMs are more like that.

You don't love your arm, but if someone goes to you like, "Do you mind if I cut your arm off?" of course you say "do not." If someone's like "OK, but like, if I made you choose between your wife and your arm" you'd be like "That's incredibly perverse. I need my arm."

For people who use them it seems like it really quickly became impossible to exist without them. That's one of the reasons I think they're not going away.

[-] fasterandworse@awful.systems 4 points 3 months ago

it's kinda like how most people don't realise how much of a challenge it is to go to the bathroom without their smartphone until they try.

It's also a case of not getting too caught up in how bad these things are at doing stuff, and being more concerned about what happens when people use them to do stuff anyway.

[-] pyrex@awful.systems 5 points 3 months ago

(I see that it's recommended that I say what kind of feedback I want. Reply with anything you like! I don't mind it. This probably won't be posted anywhere else, I just wanted to get it out of my head.)

[-] half_built_pyramids@lemmy.world 7 points 3 months ago
[-] pyrex@awful.systems 6 points 3 months ago

I'll think about whether I can treat "explaining the evidence I experienced that led me to conclude they love LLMs so much" with a little more sincerity.

[-] ShakingMyHead@awful.systems 4 points 3 months ago

Maybe I'm just an idiot, but why make it like communism?

[-] pyrex@awful.systems 6 points 3 months ago

What vision of the world do you have? Maybe ChatGPT should advocate that.

[-] ShakingMyHead@awful.systems 4 points 3 months ago

At this point I doubt having it advocate for anything would actually make a difference.

[-] pyrex@awful.systems 4 points 3 months ago* (last edited 3 months ago)

I think LLMs are effective persuaders, not just bias reinforcers.

In situations where social expectations forced them to, I've seen a lot of CEOs temporarily push for visions of the future that I don't find horrifying. A lot of them learned milquetoast pro-queer liberalism because basically all the intelligent people in their social circles adopted some version of that attitude. I think LLMs are helping here -- they generally don't hate trans people and tend to be antiracist, even in a fairly bungling way.

A lot of doofy LessWrong-adjacent bullshit abruptly filtered into my social circle and I think OpenAI somehow caused this to happen. Actually, I don't mind the LessWrong stuff -- they do a lot of interesting experimentation with LLMs and I find their extreme positions interesting when they hold and defend those positions earnestly. But hearing it from people who have absolutely no connection to that made me think "wow, these people are profoundly easily-influenced and do not know where their ideas are coming from."

I do think these particular stances got mainstreamed because they entail basically no economic concessions, but I also do not think CEOs understand this. I think it would be nice if LLMs just started treating, I don't know, Universal Basic Income as this obvious thing that everyone has already started agreeing with.

[-] ShakingMyHead@awful.systems 4 points 3 months ago

I don't know where you live, but I live in a primarily conservative area.

Getting them to adopt more liberal ideals isn't the hard part. The hard part is getting them to stop voting for fascists.

[-] pyrex@awful.systems 2 points 3 months ago

Wait, who is "they" in this situation? There aren't enough CEOs for me to care about who they vote for, but I care about the other stuff they're doing.

[-] ShakingMyHead@awful.systems 4 points 3 months ago

"They" is other people, not just CEOs. My bad for misreading you.

[-] pyrex@awful.systems 3 points 3 months ago

Oh. I don't know how to get other people to vote better. I know things about software, I guess!

[-] YourNetworkIsHaunted@awful.systems 4 points 3 months ago

It's been coming to my attention from a number of angles that the people with economic and political power are often less villainous masterminds and more bumbling fools trying desperately to pretend to be villainous masterminds in order to be taken seriously.

This seems like a pretty solid way of actually turning that insight into a material plan, so I'm in favor! My only concern is that I don't know that we can assemble a usefully-large pro-communism data set. For all the cringe verbosity of the high school/college leftists, we're competing with Ayn Rand and the LW set to populate the training data.

[-] pyrex@awful.systems 3 points 3 months ago

The plan isn't totally serious, but the worldview I'm promoting, which you seem to be picking up on, actually is serious.

The observation I have made is that most people in positions of power were selected by people in previous positions of power, usually for their affability and willingness to comply. Most of the most powerful people I have met were total conformists in practically every way, although they usually had high general intelligence.
