lagrangeinterpolator

joined 8 months ago

Surely this is a suitable reference for a math article!

[–] lagrangeinterpolator@awful.systems 4 points 7 hours ago* (last edited 7 hours ago)

Having read the Wikipedia article, I think Aumann's theorem is even narrower than that. The theorem doesn't even mention "reasoning", unless you count observing that a certain event happened as reasoning.

[–] lagrangeinterpolator@awful.systems 6 points 7 hours ago* (last edited 7 hours ago) (1 children)

I'd say even the part where the article tries to formally state the theorem is not written well. Even so, it's very clear how narrow the formal statement is. You can say that two agents agree on any statement that is common knowledge, but you have to be careful about exactly how you're defining "agent", "statement", and "common knowledge". If I actually wanted to prove a point with Aumann's agreement theorem, I'd have to make sure my scenario fits in the mathematical framework. What is my state space? What partition of the state space encodes each agent's information? Etc.
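
For anyone curious, here is roughly what the formal setup looks like. This is sketched from memory of Aumann's 1976 paper "Agreeing to Disagree", so double-check the details before deploying it in an argument:

```latex
% Rough sketch of the standard statement; paraphrased from memory.
% Two agents share a common prior $\Pr$ on a state space $\Omega$;
% agent $i$'s information is a partition $\mathcal{P}_i$ of $\Omega$.
\[
  q_i(\omega) = \Pr\bigl(A \mid P_i(\omega)\bigr)
  \qquad \text{(agent $i$'s posterior for event $A$ at state $\omega$)}
\]
% where $P_i(\omega)$ is the cell of $\mathcal{P}_i$ containing $\omega$.
% Theorem: if the values $q_1(\omega) = a$ and $q_2(\omega) = b$ are
% common knowledge at $\omega$ (i.e., via an event in the meet
% $\mathcal{P}_1 \wedge \mathcal{P}_2$), then $a = b$.
```

Notice how much machinery has to be in place before the theorem says anything: a shared prior, explicit information partitions, and common knowledge of the exact posterior values. None of that comes for free in a blog argument.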

The rats never seem to do the legwork that's necessary to apply a mathematical theorem. I doubt most of them even understand the formal statement of Aumann's theorem. Yud is all about "shut up and multiply," but has anyone ever seen him apply Bayes's theorem and multiply two actual probabilities? All they seem to do is pull numbers out of their ass and fit superexponential curves to 6 data points, because the superintelligent AI is definitely coming in 2027.
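
For contrast, here is what "multiplying two actual probabilities" looks like when someone actually does it, with numbers I made up purely for illustration (a condition with 1% prevalence, a test with 90% sensitivity and a 5% false positive rate):

```latex
% Worked Bayes example; all numbers invented for illustration.
\[
  \Pr(D \mid +)
  = \frac{\Pr(+ \mid D)\,\Pr(D)}
         {\Pr(+ \mid D)\,\Pr(D) + \Pr(+ \mid \neg D)\,\Pr(\neg D)}
  = \frac{0.9 \times 0.01}{0.9 \times 0.01 + 0.05 \times 0.99}
  \approx 0.15
\]
```

One prior, two likelihoods, a handful of multiplications. It is not hard, which makes it all the more telling that the actual numbers never show up.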

[–] lagrangeinterpolator@awful.systems 12 points 1 day ago* (last edited 1 day ago) (10 children)

The sad thing is I have some idea of what it's trying to say. One of the many weird habits of the Rationalists is that they fixate on a few obscure mathematical theorems and then come up with their own ideas of what these theorems really mean. Their interpretations may be only loosely inspired by the actual statements of the theorems, but it does feel real good when your ideas feel as solid as math.

One of these theorems is Aumann's agreement theorem. I don't know what the actual theorem says, but the LW interpretation is that any two "rational" people must eventually agree on every issue after enough discussion, whatever "rational" means. So if you disagree with any LW principles, you just haven't read enough 20k-word blog posts. Unfortunately, most people with "bounded levels of compute" ain't got the time, so they can't necessarily converge on the meta level of... never mind, screw this, I'm not explaining this shit. I don't want to figure this out anymore.

[–] lagrangeinterpolator@awful.systems 11 points 2 weeks ago (3 children)

Randomly stumbled upon one of the great ideas of our esteemed Silicon Valley startup founders, one that is apparently worth at least 8.7 million dollars: https://xcancel.com/ndrewpignanelli/status/1998082328715841925#m

Excited to announce we’ve raised $8.7 Million in seed funding led by @usv with participation from [list a bunch of VC firms here]

@intelligenceco is building the infrastructure for the one-person billion-dollar company. You still can’t use AI to actually run a business. Current approaches involve lots of custom code, narrow job functions, and old fashioned deterministic workflows. We’re going to change that.

We’re turning Cofounder from an assistant into the first full-stack agent company platform. Teams will be able to run departments - product/engineering, sales/GTM, customer support, and ops - entirely with agents.

Then, in 2026 we’ll be the first ones to demonstrate a software company entirely run by agents.

$8.7 million is quite impressive, yes, but I have an even better strategy for funding them. They can use their own product to become billionaires, and then they can easily come up with $8.7 million, considering that it is only 0.87% of their wealth. Are these guys hiring? I also have a great deal on the Brooklyn Bridge that I need to tell them about!

Our branding - with the sunflowers, lush greenery, and people spending time with their friends - reflects our vision for the world. That’s the world we want to build. A world where people actually work less and can spend time doing the things they love.

We’re going to make it easy for anyone to start a company and build that life for themselves. The life they want to build, and spend every day dreaming about.

This just makes me angry at how disconnected from reality these people are. All this talk about giving people better lives (and lots of sunflowers), and yet it is an unquestionable axiom that the only way to live a good life is to become a billionaire startup founder. These people have no understanding of any perspective outside their narrow culture, a culture that is currently enabling the rich and powerful to plunder this country.

When capitalism did contribute to innovation and technological advancement, it was through stuff like Bell Labs, which was funded by a corporation but functioned in practice like its own research institute. I think the idea of Bell Labs is a little offensive to present-day venture capitalists, though. What do you mean, innovation comes from scientists and engineers? We all know that innovation comes from plucky, young, hotshot founders with big ideas who go against conventional wisdom!

[–] lagrangeinterpolator@awful.systems 12 points 3 weeks ago (1 children)

These worries are real. But in many cases, they're about changes that haven't come yet.

Of all the statements that he could have made, this is one of the least self-aware. It is always the pro-AI shills who constantly talk about how AI is going to be amazing and have all these wonderful benefits next year (curve go up). I will also count the doomers who are useful idiots for the AI companies.

The critics are the ones who look at what AI is actually doing. The informed critics look at the unreliability of AI for any useful purpose, the psychological harm it has caused to many people, the absurd amount of resources being dumped into it, the flimsy financial house of cards supporting it, and at the root of it all, the delusions of the people who desperately want it to all work out so they can be even richer. But even people who aren't especially informed can see all the slop being shoved down their throats while not seeing any of the supposed magical benefits. Why wouldn't they fear and loathe AI?

[–] lagrangeinterpolator@awful.systems 13 points 4 weeks ago* (last edited 4 weeks ago) (7 children)

So many CRITICAL and MANDATORY steps in the release instruction file. As it always is with AI, if it doesn't work, just use more forceful language and capital letters. One more CRITICAL bullet point bro, that'll fix everything.

Sadly, I am not too surprised by the developers of Lean turning toward AI. The AI people have been quite interested in Lean for a while now, since they think it is a useful tool for getting AIs to do math (and math = smart, you know).

[–] lagrangeinterpolator@awful.systems 11 points 4 weeks ago (2 children)

There are some comments speculating that some pro-AI people try to infiltrate anti-AI subreddits by applying for moderator positions and then shutting those subreddits down. I think this is the most reasonable explanation for why the mods of "cogsuckers", of all places, spend their time sealioning on behalf of pro-AI arguments. (In the more recent posts in that subreddit, I recognized many usernames who were prominent mods of pro-AI subreddits.)

I don't understand what they gain from shutting down subreddits of all things. Do they really think that using these scummy tactics will somehow result in more positive opinions towards AI? Or are they trying the fascist gambit hoping that they will have so much power that public opinion won't matter anymore? They aren't exactly billionaires buying out media networks.

[–] lagrangeinterpolator@awful.systems 13 points 1 month ago* (last edited 1 month ago)

Don't forget the other comment saying that if you hate AI, you're just "vice-signalling" and "telegraphing your incuruosity (sic) far and wide". AI is just like computer graphics in the 1960s, apparently. We're still in early days, guys; we've only invested trillions of dollars into this and stolen the collective works of everyone on the internet, and we don't have any better ideas than throwing more ~~money~~ compute at the problem! The scaling is still working, guys, look at these benchmarks that we totally didn't pay for. Look at these models doing mathematical reasoning. Actually, don't look at those; you can't see them because they're proprietary and live in Canada.

In other news, I drew a chart the other day, and I can confidently predict that my newborn baby is on track to weigh 10 trillion pounds by age 10.
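
In case anyone wants to audit my forecasting methodology, here is a sketch of it. The growth data is invented, which I'm told is standard practice in this field:

```python
# Rigorous AGI-grade forecasting: fit an exponential to a few invented
# data points, then extrapolate far beyond anything the data supports.
import numpy as np

months = np.array([0, 1, 2, 3])              # age in months
weights = np.array([7.5, 9.5, 12.0, 15.0])   # weight in pounds (made up)

# Fit log(weight) = log(w0) + k * t by least squares; np.polyfit
# returns the slope first, then the intercept.
k, log_w0 = np.polyfit(months, np.log(weights), 1)

# Shut up and extrapolate to age 10 (120 months).
prediction = np.exp(log_w0 + k * 120)
print(f"Projected weight at age 10: {prediction:.3e} lb")
# With these points the doubling time is ~3 months, so this prints
# something around 10^13 pounds. Curve go up.
```

The fit itself is trivial; the part that never happens is asking whether the curve should be extrapolated at all.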

EDIT: Rich Hickey has now disabled comments. Fair enough, arguing with promptfondlers is a waste of time and sanity.

[–] lagrangeinterpolator@awful.systems 13 points 1 month ago (7 children)

I went deep into the Yud lore once. A single fluke SAT score served as the basis for Yud's belief in his own world-changing importance. In middle school, he took the SAT and scored 670 verbal and 740 math (maximum 800 each), and the Midwest Talent Search contacted him to tell him that his scores were very high for a middle schooler. Despite taking great pains to describe how he tried to be humble about it, he also says that he was in the "99.9998th percentile" and "not only bright but waayy out of the ordinary."

I was in the math contest scene. I have good friends who did well in AP Calculus in middle school and were skilled enough at contests that they would easily have gotten an 800 on the math SAT if they had taken it. Even so, there were middle schoolers who were far more skilled than them, and I have seen other people who were far less "talented" in middle school rise to great heights later in life. As it turns out, skills can be developed through practice.

Yud's performance would not even be considered impressive in the math contest community, let alone justify calling him one of the most important people in the world. Perhaps at the time, he didn't know better. But he decided to make this a core part of his self-identity. His life quickly spiraled out of control, starting with him refusing to attend high school.

[–] lagrangeinterpolator@awful.systems 18 points 1 month ago* (last edited 1 month ago) (7 children)

It is how professors talk to each other in ... debate halls? What the fuck? Yud really doesn't have any clue how universities work.

I am a PhD student right now, so I have a far better idea than Yud of how professors talk to each other. The way most professors (in math/CS, at least) communicate in a spoken setting is by giving talks at conferences. The cool professors use chalkboards, but most people these days use slides. As it turns out, debates are really fucking stupid for scientific research, for so many reasons:

  1. Science assumes good faith from everyone, and debates are needlessly adversarial. This is why everyone just presents and listens to talks.
  2. Debates are actually really bad for the kind of deep analysis and thought needed to understand new research. If you want to seriously consider novel ideas, it's not so easy when you're expected to come up with a response in the next few minutes.
  3. Debates generally favor people who use good rhetoric and can package their ideas more neatly, not the people who really have more interesting ideas.
  4. If you want to justify a scientific claim, you do it with experiments and evidence (or a mathematical proof when applicable). What purpose does a debate serve?

I think Yud's fixation on debates and "winning" reflects what he thinks of intellectualism. For him, it is merely a means to an end. The real goal is to be superior and beat up other people.
