corbin

joined 2 years ago
[–] corbin@awful.systems 3 points 8 hours ago

Sometimes, yeah! There was a classic theory of metacompilers in the 1960s with examples like META II. In the 1980s, partial evaluation was put onto solid ground following Futamura's programme, and in the 1990s the most successful team wrote The Book on the topic. My current weekend project is a fork of META II; it evolves by gradual changes to the compiler, punctuated by two self-rebuild cycles.

[–] corbin@awful.systems 12 points 1 day ago (5 children)

I guess that I'm the resident compiler engineer today. Let's go.

So why not write an optimizing compiler in its own language, and then run it on itself?

The process will reach a fixed point after three iterations. In fancier language, Glück 2009 shows that the fourth, fifth, and sixth Futamura projections are equivalent to the third Futamura projection for a fixed choice of (compiler-)compiler and optimizer. This has practical import for cross-compiling; when I used to use Gentoo, I would watch GCC build itself exactly three times, and we still use triples in our targets today.
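
For anybody who wants to see the fixed point with their own eyes, here's a minimal sketch in Python. The `mix` function is a stand-in for a real partial evaluator: it just closes over its static argument instead of emitting genuinely specialized code, so this only shows the shape of the projections, not actual specialization.

```python
# Minimal sketch of the Futamura projections. `mix` is a toy partial
# evaluator: it merely closes over its static argument.

def mix(program, static_input):
    """'Specialize' `program` to `static_input` (toy version)."""
    return lambda dynamic_input: program(static_input, dynamic_input)

def interpreter(source, data):
    """Toy interpreter: a 'source program' here is just a callable."""
    return source(data)

double = lambda d: d * 2

# 1st projection: specializing the interpreter to one program
# gives a compiled program.
compiled_double = mix(interpreter, double)

# 2nd projection: specializing mix to the interpreter gives a compiler.
compiler = mix(mix, interpreter)

# 3rd projection: specializing mix to itself gives a compiler-compiler.
# Applying mix to itself yet again (4th, 5th, 6th, ...) yields nothing
# new; that's the fixed point.
compiler_compiler = mix(mix, mix)

print(compiled_double(21))                          # 42
print(compiler(double)(21))                         # 42
print(compiler_compiler(interpreter)(double)(21))   # 42
```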

[S]uppose you built an optimizing compiler that searched over a sufficiently wide range of possible optimizations, that it did not ordinarily have time to do a full search of its own space — so that, when the optimizing compiler ran out of time, it would just implement whatever speedups it had already discovered.

Oh, it's his lucky day! Yud, you've just been Schmidhuber'd! Starting in 2003, Schmidhuber's lab has published research on Gödel machines, self-improving machines which prove that their self-modifications will always be better than previous iterations. They are named not just after Gödel, but after his First Incompleteness Theorem; Schmidhuber et al proved easily that there will always be at least one speedup theorem which a Gödel machine can never reach (for a given choice of axioms, etc.)

EURISKO used "heuristics" to, for example, design potential space fleets. It also had heuristics for suggesting new heuristics, and metaheuristics could apply to any heuristic, including metaheuristics. … EURISKO could modify even the metaheuristics that modified heuristics. … Still, EURISKO ran out of steam. Its self-improvements did not spark a sufficient number of new self-improvements.

Once again the literature on metaheuristics exists, and it culminates in the discovery of genetic algorithms. As such, we can immediately apply the concept of gene-oriented evolution ("beanbag" or "gene pool" reasoning) and note that, if goals don't change and new genes don't enter the pool, then eventually the population stagnates as the possible range of mutated genes is tested and exhausted. It doesn't matter that some genes are "meta" genes that act on other genes, nor that such actions are indirect. Genes are genes.
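
If you want to watch the stagnation happen, here's a toy sketch (all parameters are arbitrary, purely for illustration): a closed gene pool under selection and crossover, with no mutation ever introducing a new allele. Fitness climbs for a while, then flatlines once the best combination of the existing genes has been found.

```python
import random

# Toy closed gene pool: fixed allele set, selection + crossover only,
# and no mutation ever introduces a new gene. Max possible fitness is
# max(ALLELES) * GENOME_LEN = 24.
ALLELES = [0, 1, 2, 3]
GENOME_LEN = 8
POP_SIZE = 30

def fitness(genome):
    return sum(genome)

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

population = [[random.choice(ALLELES) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for generation in range(201):
    parents = sorted(population, key=fitness, reverse=True)[:POP_SIZE // 2]
    children = [crossover(random.choice(parents), random.choice(parents))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children
    if generation % 20 == 0:
        # Fitness rises early, then plateaus: the closed pool is exhausted.
        print(generation, max(fitness(g) for g in population))
```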

I'm gonna close with a sneer from Jay Bellou, who I hope is not a milkshake duck, in the comments:

All "insights" eventually bottom out in the same way that Eurisko bottomed out; the notion of ever-increasing gain by applying some rule or metarule is a fantasy. You make the same sort of mistake about "insight" as do people like Roger Penrose, who believes that humans can "see" things that no computer could, except that you think that a computer can too, whereas in reality neither humans nor computers have access to any such magical "insight" sauce.

[–] corbin@awful.systems 9 points 4 days ago (3 children)

I'm sorry you had to learn this way. Most of us find out when SciShow says something that triggers the Gell-Mann amnesia effect. Green's background is in biochemistry and environmental studies, and he is trained as a science communicator; outside of the narrow arenas of biology and pop science, he isn't a reliable source. Crash Course is better than the curricula of e.g. Texas, Louisiana, or Florida (and that was the point!) but not better than university-level courses.

[–] corbin@awful.systems 7 points 4 days ago (2 children)

Okay. It feels like your comment is totally disconnected from evidence and reality. Also, it feels like you didn't actually want to make a germane comment. Finally, it feels like you don't have anything of substance to add, regardless of relevance.

[–] corbin@awful.systems 11 points 5 days ago (4 children)

Spoken like somebody who has zero commits in Chrome or Chromium, to be honest.

[–] corbin@awful.systems 5 points 5 days ago (2 children)

A lot of court documents are sealed or redacted, so I can't quite get at all the details. Nonetheless here's what I've got so far:

  • Chrome is just the browser, including Chromium, but not ChromiumOS (a Gentoo fork, basically) or ChromeOS (the branded OS on Chromebooks)
  • Chrome is unaffordable because it was quite expensive to build and continues to be a maintenance burden
  • The government is vaguely aware that forcing a sale of Chrome could be adverse for the market but the court hasn't said anything on the topic yet
  • Via a filing from Apple, the court is aware that Firefox materially depends on Google, although it hasn't done much beyond allowing Apple to file as amicus

The court hasn't cracked open AMD v Intel yet, where it was found that a cash remedy would be better than punishing the ongoing business concerns of a duopoly, but that approach would be one possible solution: instead of selling Chrome, Google would have to pay its competitors a lump sum and change its business practices somewhat.

I am genuinely not sure what happens to "the browser market", as it were. The Brave and Safari teams are relatively small because they make tweaks on top of an existing browser core; the extreme propagation of Electron suggests that once a browser is written, it does not need to be written again. The court may find browsers to be a sort of capital which is worth a lot of money on its own but not expensive to maintain. This would destroy Mozilla along with Google!

[–] corbin@awful.systems 6 points 1 week ago

I encourage NYC neighbors to spread the idea of deranking. It worked in Portland. We had an exceptionally shitty candidate:

Once touted as the law and order candidate, Gonzalez was the only mayoral candidate cited for breaking the law during the 2024 election cycle.

We pushed to derank him. And the result:

… Gonzalez was the subject of an effort to convince voters not to rank him regardless of the voter's other preferred candidates. Gonzalez earned 20% of first ranked choices but ultimately finished the election in third place …

[–] corbin@awful.systems 6 points 2 weeks ago

I don't know about Ed, but I've had scenes from Network stuck in my head for months, particularly the scene where the corporate hatchet man Hackett is explaining that a Saudi conglomerate is about to buy out a failing TV network. He says, "We need that Saudi money bad."

[–] corbin@awful.systems 5 points 2 weeks ago (4 children)

It's the cost of the electricity, not the cost of the GPU!

Empirically, we might estimate that a single training-capable GPU can pull nearly 1 kilowatt; an H100 board on its own is rated to dissipate 700W, and it pulls more than that when memory is active. I happen to live in the Pacific Northwest near lots of wind, rivers, and solar power, so electricity is barely 18 cents/kilowatt-hour, and I'd say that it costs at least a dollar to run such a GPU (at full load) for 6hrs. Also, I estimate that the GPU market is currently offering a 50% discount on average for refurbished/like-new GPUs with about 5yrs of service, and the H100 is about $25k new, so one might depreciate at around $2500/yr. Finally, I picked the H100 because it's around the peak of efficiency for this particular AI season; local inference is going to be more expensive once we compare apples-to-apples units like tokens/watt.

In short, with bad napkin arithmetic, an H100 costs at least $4/day to operate while depreciating only $6.85/day or so; operating costs approach the depreciation rate here, and exceed it wherever electricity is pricier than the Pacific Northwest. This leads to a hot-potato market where reselling the asset is worth more than operating it. In the limit, assets with no depreciation relative to opex are treated like securities, and we're already seeing multiple groups squatting like dragons upon piles of nVidia products while the cost of renting cloudy H100s has jumped from like $2/hr to $9/hr over the past year. VCs are withdrawing, yes, and they're no longer paying the power bills.
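
Here's the same napkin arithmetic in one runnable block, if anybody wants to poke at it; every constant below is my guess from above, not a quoted price.

```python
# Napkin math: operating cost vs. depreciation for one H100-class GPU.
power_kw = 1.0              # rough draw at full training load
price_per_kwh = 0.18        # Pacific Northwest-ish electricity
price_new = 25_000          # rough price for a new H100
resale_fraction = 0.50      # refurb discount after ~5 years of service
service_years = 5

opex_per_day = power_kw * 24 * price_per_kwh                    # ~$4.32
depreciation_per_day = (price_new * resale_fraction
                        / (service_years * 365))                # ~$6.85

print(f"opex:         ${opex_per_day:.2f}/day")
print(f"depreciation: ${depreciation_per_day:.2f}/day")
print(f"opex is {opex_per_day / depreciation_per_day:.0%} of depreciation")
```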

[–] corbin@awful.systems 4 points 2 weeks ago (6 children)

I went into this with negative expectations; I recall being offended in high school that The Flashbulb was artificially sped up, unlike my heroes of neoclassical guitar and progressive-rock keyboards, and I've felt that their recent thoughts on newer music-making technology have been hypocritical. That said, this was a great video and I'm glad you shared it.

Ears and eyes are different. We deconvolve visual data in the brain, but our ears perform an actual Fourier decomposition in physical hardware. As a result, psychoacoustics is a real and non-trivial science, used e.g. in MP3, and it limits what an adversary can do to frustrate classification or learning, because the result still has to sound like music in order to get any playtime among humans. Meanwhile I'm always worried that these adversarial groups are going to accidentally propagate something like the McCollough effect, a genuine cognitohazard that causes edges to become color-coded in the visual cortex for (up to) months after a few minutes of exposure; it's a kind of harm that by definition defies automatic classification.
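
To make the frequency-domain point concrete, here's a tiny sketch (the tone, sample rate, and perturbation are all made up for illustration). This is the kind of spectral view a codec or a classifier works from; an adversarial tweak has to stay small in the bands people actually hear.

```python
import numpy as np

# A 440 Hz tone plus a tiny near-ultrasonic perturbation, then a look
# at the spectrum -- the representation a psychoacoustic model reasons about.
sample_rate = 44_100
t = np.arange(0, 0.1, 1 / sample_rate)
tone = np.sin(2 * np.pi * 440 * t)
tweak = 0.001 * np.sin(2 * np.pi * 18_000 * t)  # near the edge of hearing

spectrum = np.abs(np.fft.rfft(tone + tweak))
freqs = np.fft.rfftfreq(len(t), 1 / sample_rate)

for f in (440, 18_000):
    idx = int(np.argmin(np.abs(freqs - f)))
    print(f"{f:>6} Hz magnitude: {spectrum[idx]:.1f}")
```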

HarmonyCloak seems like a fairly boring adversarial tool for protecting the music industry from the music industry. Their code is incomplete and likely never going to be properly published; again we're seeing an industry-captured research group taking from the Free Software community and not giving back. I think all of the demos shown here are genuine, but he fully admits that this is a compute-intensive process, and I estimate it is going to slide back out of affordability by the end of 2026. This is going to stop being effective as soon as we get back into AI winter, but I'm not going to cry for Nashville.

I really like the two attacks shown near the end, starting around 22:00. The first attack, if genuinely not audible to humans, is likely a Mosquito-style frequency that is above hearing range and physically vibrates the components of the microphone. Hofstadter and the Tortoise would be proud, although I'm concerned about the potential long-term effects on humans. The second attack is again adversarial but specific to models on home-assistant devices which are trained to ignore some loud sounds; I can't tell spectrographically whether that's also done above hearing range or not. I'm reluctant to call for attacks on home assistants, but they're great targets.

Fundamentally this is a video that doesn't want to talk about how musicians actually rip each other off. The "tones and rhythms" that he keeps showing with nice visualizations have been machine-learnable for decades, ranging from beat-finders to frequency-analyzers to chord-spellers to track-isolators built into our music editors. He doubles down on copyright despite building businesses that profit from Free Software. And, most gratingly, he talks about the Pareto principle while ignoring that the typical musician is never able to make a career out of their art.

[–] corbin@awful.systems 5 points 2 weeks ago

It's well-known folklore that reinforcement learning with human feedback (RLHF), the standard post-training paradigm, exacts an "alignment tax": it erodes the degree to which a pre-trained model has learned features of reality as it actually exists. Quoting from the abstract of the 2024 paper, Mitigating the Alignment Tax of RLHF (alternate link):

LLMs acquire a wide range of abilities during pre-training, but aligning LLMs under Reinforcement Learning with Human Feedback (RLHF) can lead to forgetting pretrained abilities, which is also known as the alignment tax.

[–] corbin@awful.systems 5 points 2 weeks ago (3 children)

In practice, the behaviors that the chatbots learn in post-training are FUD and weasel-wording; they appear to not unlearn facts, but to learn so much additional nuance as to bury the facts. The bots perform worse on various standardized tests about the natural world after post-training; there are quantitative downsides to forcing them to adopt any particular etiquette, including speaking like a chud.

The problem is mostly that the uninformed public will think that the chatbot is knowledgeable and well-spoken because it rattles off the same weak-worded hedges as right-wing pundits, and it's addressed by the same improvements in education required to counter those pundits.

Answering your question directly: no, slop machines can't be countered with more slop machines without drowning us all in slop. A more direct approach will be required.

 

Sorry, no sneer today. I'm tired of this to the point where I'm dreaming up new software licenses.

A trans person no longer felt safe in our community and is no longer developing. In response, at least four different forums full of a range of Linux users and developers (Lemmy #1, Lemmy #2, HN, Phoronix (screenshot)) posted their PII and anti-trans hate.

I don't have any solutions. I'm just so fucking disappointed in my peers and I feel a deep inadequacy at my inability to get these fuckwads to be less callous.

 

After a decade of cryptofascism and failed political activism, our dear friend jart is realizing that they don't really have much of a positive legacy. If only there was something they could have done about that.

 

In this big thread, over and over, people praise the Zuck-man for releasing Llama 3's weights. How magnanimous! How courteous! How devious!

Of course, Meta is doing this so that they don't have to worry about another 4chan leak of weights via BitTorrent.

 

Sometimes what is not said is as sneerworthy as what is said.

It is quite telling to me that HN's regulars and throwaway accounts have absolutely nothing to say about the analysis of cultural patterns.

 

Possibly the worst defense yet of Garry Tan's tweeting of death threats towards San Francisco's elected legislature. In yet more evidence for my "HN is a Nazi bar" thesis, this take is from an otherwise-respected cryptographer and security researcher. Choice quote:

sorry, but 2Pac is now dad music, I don't make the rules

Best sneer so far is this comment, which links to this Key & Peele sketch about violent rap lyrics in the context of gang violence.

 

Choice quote:

Actually I feel violated.

It's a KYC interview, not a police interrogation. I've always enjoyed KYC interviews; I get to talk about my business plans, or what I'm going to do with my loan, or how I ended up buying/selling stocks. It's hard to empathize with somebody who feels "violated" by small talk.

 

In today's episode, Yud tries to predict the future of computer science.

 

Choice quote:

Putting “ACAB” on my Tinder profile was an effective signaling move that dramatically improved my chances of matching with the tattooed and pierced cuties I was chasing.

 

As usual, I struggle to form a proper sneer in the face of such sheer wrongheadedness. The article is about a furry who was dating a Nazifur and was battered for it; the comments are full of complaints about the overreach of leftism. Choice quote:

Anti-fascists see fascism everywhere (your local police department) the same way the John Birch Society saw communism everywhere (Dwight Eisenhower.). Or maybe they are just jealous that the fascists have cool uniforms and boots. Or maybe they think their life isn’t meaningful enough and it has to be like a comic book or a WWII movie.

Well, I do wear a Captain America shirt often…

 

A well-respected pirate, neighbor, and Lisper is also a chud. Welcome to HN, the Nazi Bar where everybody's also an expert in technology.

 

Eminent domain? Never heard of it! Sounds like a fantasy from the "economical illiterate."

Edit: This entire thread is a trash fire, by the way. I'm only highlighting the silliest bit from one of the more aggressive landlords.
