this post was submitted on 15 Jun 2025

TechTakes

Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

[–] gerikson@awful.systems 5 points 4 hours ago (1 children)

That hatchet job from Trace is continuing to have some legs, I see. Also, a reread of it turns up some unintentional comedy:

This is the sort of coordination that requires no conspiracy, no backroom dealing—though, as in any group, I’m sure some discussions go on...

Getting referenced in a thread on a different site, one about editing an article about themselves explicitly to make membership in their technofascist singularity cult diaspora sound more respectable and decent. I'm sorry that your blogs aren't considered reliable sources in their own right, and that the "heterodox" thinkers and researchers you extend so much grace to are, in fact, cranks.

[–] antifuchs@awful.systems 11 points 9 hours ago (1 children)

Unilever are looking for an Ice Cream Head of Artificial Intelligence.

I think I have found a new favorite way to refer to true believers.

[–] sinedpick@awful.systems 7 points 7 hours ago* (last edited 7 hours ago) (1 children)

This role is responsible for the creation of a virtual AI Centre of Excellence that will drive the creation of an Enterprise-wide Autonomous AI platform. The platform will connect to all Ice Cream technology solutions providing an AI capability that can provide [blah blah blah...]

it's satire right? brilliantly placed satire by a disgruntled hiring manager having one last laugh out the door right? no one would seriously write this right?

[–] YourNetworkIsHaunted@awful.systems 3 points 3 hours ago (1 children)

I mean it does return a 404 now.

[–] fullsquare@awful.systems 3 points 1 hour ago

maybe they filled that position already

[–] blakestacey@awful.systems 14 points 1 day ago

In other news, I got an "Is your website AI ready" e-mail from my website host. I think I'm in the market for a new website host.

[–] BlueMonday1984@awful.systems 17 points 1 day ago (2 children)

New article from Axios: Publishers facing existential threat from AI, Cloudflare CEO says

Baldur Bjarnason has given his commentary:

Honestly, if search engine traffic is over, it might be time for blogs and blog software to begin to deny all robots by default
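For reference, "deny all robots by default" is a two-line robots.txt. Any well-behaved crawler honors it; the catch, as discussed below, is that the AI scrapers mostly don't.

    User-agent: *
    Disallow: /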

Anyways, personal sidenote/prediction: I suspect the Internet Archive's gonna have a much harder time archiving blogs/websites going forward.

Up until this point, the Archive enjoyed easy access to large swathes of the 'Net - site owners had no real incentive to block new crawlers by default, but the prospect of getting onto search results gave them a strong incentive to actively welcome search engine robots, safe in the knowledge that they'd respect robots.txt and keep their server load to a minimum.

Thanks to the AI bubble and the AI crawlers it's unleashed upon the 'Net, that has changed significantly.

Now, allowing crawlers by default risks AI scraper bots descending upon your website and stealing everything that isn't nailed down, overloading your servers and attacking FOSS work in the process. And you can forget about reining them in with robots.txt - they'll just ignore it and steal anyways, they'll lie about who they are, they'll spam new scrapers when you block the old ones, they'll threaten to exclude you from search results, they'll try every dirty trick they can because these fucks feel entitled to steal your work and fundamentally do not respect you as a person.

Add in the fact that the main upside of allowing crawlers (turning up in search results) has been completely undermined by those very same AI corps, as "AI summaries" (like Google's) steal your traffic through stealing your work, and blocking all robots by default becomes the rational decision to make.

This all kinda goes without saying, but this change in Internet culture all but guarantees the Archive gets caught in the crossfire, crippling its efforts to preserve the web as site owners and bloggers alike treat any and all scrapers as guilty (of AI fuckery) until proven innocent, and the web becomes less open as a whole as people protect themselves from the AI robber barons.

On a wider front, I expect this will cripple any future attempts at making new search engines, too. In addition to AI making it piss-easy to spam search systems with SEO slop, any new start-ups in web search will struggle with quality websites blocking their crawlers by default, whilst slop and garbage will actively welcome their crawlers, leading to your search results inevitably being dogshit and nobody wanting to use your search engine.

[–] smiletolerantly@awful.systems 7 points 16 hours ago

I don't like that it's not open source, and there are opt-in AI features, but I can highly, highly recommend Kagi from a pure search-result standpoint - it's one of the only alternatives with its own search index.

(Give it a try, they've apparently just opened up their search for users without an account to try it out.)

Almost all the slop websites aren't even shown (or are put in a "Listicles" section where they can be accessed, but are not intrusive and do not look like proper results), and you can prioritize/deprioritize sites (for example, I have github/reddit/stackoverflow set to always show on top, and quora and pinterest to never show at all).

Oh, and they have a fediverse "lens" which actually manages to reliably search Lemmy.

This doesn't really address the future of crawling, just the "Google has gone to shit" part 😄

[–] HedyL@awful.systems 8 points 22 hours ago (1 children)

FWIW, due to recent developments, I've found myself increasingly turning to non-search engine sources for reliable web links, such as Wikipedia source lists, blog posts, podcast notes or even Reddit. This almost feels like a return to the early days of the internet, just in reverse and - sadly - with little hope for improvement in the future.

[–] fnix@awful.systems 7 points 13 hours ago

Searching Reddit has really become standard practice for me, a testament to how inhuman the web as a whole has gotten. What a shame.

[–] Soyweiser@awful.systems 7 points 1 day ago (4 children)

Weird conspiracy theory musing: So we know Roko's Basilisk only works on a very specific type of person, one who needs to believe in all the LW stuff about what the AGI future will be like, but who also feels morally responsible and has high empathy. (Else the thing falls apart: you need to care about, feel responsible for, and believe the copies/simulated things are conscious.) We know caring about others/empathy is one of those traits which seem to be rarer on the right than the left, and there is a feeling that a lot of the right is waging a war on empathy (see the things Musk has said, the whole chan culture shit, but also themotte, which somebody once called an 'empathy removal training center' - which stuck, so I also call it that. If you are inside one of these pipelines you can notice it, or if you get out, you can see it looking back; I certainly did when I read more LW/SSC stuff). We also know Roko is a bit of a chud, who wants some sort of 'transhumanist' 'utopia' where nobody is non-white or has blue hair (I assume this is known, but if you care to know more about Roko (why?) search sneerclub (ok, one source as a treat)).

So here is my conspiracy theory: Roko knew what he was doing. It was intentional on Roko's part; he wanted to drive the empathic part of LW mad and discredit them. (That he was apparently banned from several events for sexual harassment is also interesting. Does remind me of another 'lower empathy' thing: the whole manosphere/PUA scene that was a part of early LW, which often trains people to think less of women.)

Note that I don't believe in this, as there is no proof for it. I don't think Roko planned for this (nor considered it in any way), and I think his post was just an honest thought experiment (as was Yud's reaction). It was just an annoying thought which I had to type up, else I'd keep thinking about it. Sorry to make it everybody's problem.

[–] saucerwizard@awful.systems 6 points 1 day ago

iirc he has a lawyer on retainer in case of another sexual harassment claim.

[–] Architeuthis@awful.systems 11 points 1 day ago* (last edited 1 day ago) (1 children)

Not wanting the Basilisk eternal torture dungeon to happen isn't an empathy thing, they just think that a sufficiently high fidelity simulation of you would be literally you, because otherwise brain uploads aren't life extension. It's basically transhumanist cope.

Yud expands on it in some place or other, along the lines that the gap in consciousness between the biological and digital instance isn't that different from the gap created by anesthesia or a night's sleep, it's just on the space axis instead of the time axis, or something like that.

And since he also likes the many-worlds interpretation, it turns out you also share a soul with yourselves in parallel dimensions; this is why the zizians are so eager to throw down, since getting killed in one dimension just lets supradimensional entities know you mean business.

Early 21st century anthropology is going to be such a ridiculous field of study.

[–] Soyweiser@awful.systems 4 points 1 day ago* (last edited 1 day ago) (1 children)

Clearly you do not have low self-esteem. But yes, that is the weak point of this whole thing, and why it is a dumb conspiracy theory. (I'm mismatching the longtermist 'future simulated people are important' utilitarian extremism with the 'simulated yous are yous' extreme weirdness.)

The problem with Yud's argument is that all these simulations will quickly diverge and are no longer the real 'you' - see twins for a strawman example. The copies would have to be run in exactly the same situations, and then wtf is the point? When I slam my toe into a piece of furniture I don't mourn all the many-worlds mes who also just broke a toe again. It's just weird, but due to the immortality cope it makes sense for insiders.

[–] Architeuthis@awful.systems 7 points 1 day ago* (last edited 1 day ago)

I'd say if there's a weak part in your admittedly tongue-in-cheek theory it's requiring Roko to have had a broader scope plan instead of a really catchy brainfart, not the part about making the basilisk thing out to be smarter/nobler than it is.

Reframing the infohazard aspect as an empathy filter definitely has legs in terms of building a narrative.

[–] aio@awful.systems 8 points 1 day ago (2 children)

I thought part of the schtick is that according to the rationalist theory of mind, a simulated version of you suffering is exactly the same as the real you suffering. This relies on their various other philosophical claims about the nature of consciousness, but if you believe this then empathy doesn't have to be a concern.

[–] Amoeba_Girl@awful.systems 8 points 1 day ago (4 children)

The key thing is that the basilisk makes a million billion digibidilion copies of you to torture, and because you know statistics you know that there's almost no chance you're the real you and not a torture copy.

[–] aio@awful.systems 9 points 1 day ago* (last edited 1 day ago)

Yeah for some reason they never covered that in the stats lectures

[–] blakestacey@awful.systems 7 points 1 day ago

I'm the torture copy and so is my wife

[–] o7___o7@awful.systems 2 points 21 hours ago

checks the news

Well shit

[–] Architeuthis@awful.systems 3 points 1 day ago* (last edited 1 day ago)

you know that there’s almost no chance you’re the real you and not a torture copy

If the basilisk's wager was framed like that - that you can't know if you are already living in the torture sim with the basilisk silently judging you - it would be way more compelling than the actual "you are ontologically identical with any software that simulates you at a high enough level, even way after the fact, because [preposterous transhumanist motivated reasoning]".

[–] Soyweiser@awful.systems 2 points 1 day ago

Yeah, you are correct: I'm mismatching longtermism with transhumanist digital immortality, which is why I called it a conspiracy theory, it being wrong and all that. (Even if I do think empathy for perfect copies of yourself is a thing not everyone might have.)

[–] BlueMonday1984@awful.systems 5 points 1 day ago

...Honestly, I can't help but feel you're on to something. I'd have loved to believe this was an honest thought experiment, but after seeing the right openly wage a war on empathy as a concept, I wouldn't be shocked if Roko's Basilisk (and its subsequent effects) was planned from the start.

[–] fullsquare@awful.systems 9 points 1 day ago (1 children)
[–] o7___o7@awful.systems 4 points 1 day ago

This guy is a real self-licking ice cream cone (flavor: pralines and dick)

[–] scruiser@awful.systems 13 points 2 days ago* (last edited 2 days ago) (3 children)

So us sneerclubbers correctly dismissed AI 2027 as bad scifi with a forecasting model basically amounting to "line goes up", but if you end up in any discussions with people that want more detail, titotal did a really detailed breakdown of why their model is bad, even given their assumptions and trying to model "line goes up": https://www.lesswrong.com/posts/PAYfmG2aRbdb74mEp/a-deep-critique-of-ai-2027-s-bad-timeline-models

tl;dr: the AI 2027 model, regardless of inputs and current state, has task time horizons basically going to infinity at some near-future date because they set it up weird. Also, the authors make a lot of other questionable choices and have a lot of other red flags in their modeling. And the curve they show on their fancy graphical interactive webpage for the task-time-horizon fits is unrelated to the model they actually used, and is missing some earlier data points that make it look worse.

[–] Soyweiser@awful.systems 8 points 1 day ago* (last edited 1 day ago) (1 children)

Good for him to try and convince the LW people that the math is wrong. I do think there is a bigger problem with all of this: technological advancement doesn't follow exponential curves, it follows S-curves. (And the whole 'the singularity is near' / 'achtually that is true, but the rate of those S-curves is in fact exponential' thing is just untestable, unscientific hopium - but it is odd the singularity people are now back onto exponential curves for a specific tech.)
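A minimal sketch of the S-curve point (my own illustration, not from the thread; the growth rate r and carrying capacity K are arbitrary made-up parameters): a logistic curve is nearly indistinguishable from an exponential early on, which is exactly why extrapolating the early part of the curve is hopium:

    import math

    def exponential(t, r=1.0):
        return math.exp(r * t)

    def logistic(t, r=1.0, K=1000.0):
        # S-curve starting at 1 and saturating at K; ~exponential while far below K
        return K / (1 + (K - 1) * math.exp(-r * t))

    for t in range(0, 11, 2):
        print(t, round(exponential(t), 1), round(logistic(t), 1))

    # Both grow like e^t at first; the logistic flattens out as it nears K = 1000.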

Also lol at the 2027 guys believing anything about how grok was created. Nice epistemology y'all got there, how's the Mars base?

[–] scruiser@awful.systems 7 points 1 day ago (1 children)

Also lol at the 2027 guys believing anything about how grok was created.

Judging by various comments the AI 2027 authors have made, sucking up to the techbro side of the alt-right was in fact a major goal of AI 2027, and, worryingly, they seem to have succeeded somewhat (allegedly JD Vance has read AI 2027). But lol at the notion they could ever talk any of the techbro billionaires into accepting any meaningful regulation. They still don't understand that their doomerism is free marketing hype for the techbros, not anything any of them are actually treating as meaningfully real.

[–] Soyweiser@awful.systems 6 points 1 day ago

Yeah, I think that is prob also why Thiel supports Moldbug: not because he believes in what Moldbug says, but because Moldbug says things that are convenient for Thiel if others believe them. (Even if Thiel prob believes a lot of the same things, looking at his anti-democracy stuff, and the 'rape crisis is anti-men' stuff - for which he apologized; wonder if he apologized for the apology now that the winds have seemingly changed.)

[–] aio@awful.systems 11 points 2 days ago* (last edited 2 days ago) (1 children)

If the growth is superexponential, we make it so that each successive doubling takes 10% less time.

(From AI 2027, as quoted by titotal.)

This is an incredibly silly sentence and is certainly enough to determine the output of the entire model on its own. It necessarily implies that the predicted value becomes infinite in a finite amount of time, disregarding almost all other features of how it is calculated.

To elaborate, suppose we take as our "base model" any function f which has the property that lim_{t → ∞} f(t) = ∞. Now I define the concept of "super-f" function by saying that each subsequent block of "virtual time" as seen by f, takes 10% less "real time" than the last. This will give us a function like g(t) = f(-log(1 - t)), obtained by inverting the exponential rate of convergence of a geometric series. Then g has a vertical asymptote to infinity regardless of what the function f is, simply because we have compressed an infinite amount of "virtual time" into a finite amount of "real time".
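A quick numeric sanity check of the blow-up (my own sketch, using the quoted 10% figure; t0 is an arbitrary time unit): the times for successive doublings form a geometric series, so all infinitely many doublings complete by t = 10*t0:

    t0 = 1.0        # time for the first doubling (arbitrary units)
    ratio = 0.9     # each doubling takes 10% less time than the last

    elapsed = 0.0
    for k in range(200):
        elapsed += t0 * ratio ** k   # duration of doubling number k

    print(elapsed)              # ~10.0 already by k = 200
    print(t0 / (1 - ratio))     # closed form for the asymptote: 10.0

    # Infinitely many doublings fit before t = 10*t0, so the modeled task
    # horizon hits a vertical asymptote there regardless of the base model f.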

[–] scruiser@awful.systems 6 points 1 day ago

Yeah, AI 2027's model fails back-of-the-envelope sketches as soon as you try working out any features of it, which really calls into question the competency of its authors and everyone that has signal-boosted it. Like, they could have easily generated the same crit-hype bullshit with "just" an exponential model, but for whatever reason they went with this one. (They had a target date they wanted to hit? They correctly realized adding in extraneous details would wow more of their audience? They are incapable of translating their intuitions into math? All three?)

[–] swlabr@awful.systems 6 points 2 days ago (1 children)

titotal?!?!? I heard they were dead! (jk. why did they stop hanging here, I forget...)

[–] scruiser@awful.systems 11 points 2 days ago (1 children)

We did make fun of titotal for the effort they put into meeting rationalists on their own terms and charitably addressing their arguments and, you know, being an EA themselves (albeit one of the saner ones)...

[–] swlabr@awful.systems 5 points 9 hours ago

Ah, right. That. Reminds me of that old adage about monsters and abysses. "Fighting monsters and abyss staring is good and cool, actually. France is bacon." Something like that, don't fact check me.

[–] antifuchs@awful.systems 12 points 2 days ago (3 children)

AllTrails doing their part in the war on genAI by disappearing the people who would trust genAI: https://www.nationalobserver.com/2025/06/17/news/alltrails-ai-tool-search-rescue-members

Amazing. Can't wait for the doomers to claim that somehow this has enough intent to classify as murder. I wonder if they'll end up on one of the weirdly large number of "bad things that happen to people in the national parks" podcasts.

[–] o7___o7@awful.systems 7 points 2 days ago (1 children)

Don't make me tap the sign:

Don't feed the bears!

[–] swlabr@awful.systems 12 points 2 days ago

My AllTrails told me bears keep eating his promptfondlers so I asked how many promptfondlers he has and he said he just goes to AllTrails and gets a new promptfondler afterwards so I said it sounds like he’s just feeding promptfondlers to bears and then his parks service started crying.

[–] BlueMonday1984@awful.systems 7 points 2 days ago* (last edited 1 day ago)

Darwin Award-as-a-service

[–] BlueMonday1984@awful.systems 4 points 2 days ago* (last edited 2 days ago)

ZITRON DROPPED (sadly, it's premium)

[–] fasterandworse@awful.systems 7 points 2 days ago

new rant from me about how boosters asking critics to admit that AI is "useful" is not the win they think it is.
vid: https://www.youtube.com/watch?v=bRcBCji6XvE
audio: https://pnc.st/s/faster-and-worse/94cb1cda/useful-is-nothing
