this post was submitted on 27 Jan 2025
30 points (100.0% liked)

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community


Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned soo many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Semi-obligatory thanks to @dgerard for starting this.)

top 50 comments
[–] BigMuffin69@awful.systems 24 points 2 weeks ago* (last edited 1 week ago) (2 children)

Neo-Nazi nutcase having a normal one.

It's so great that this isn't falsifiable in the sense that doomers can keep saying, well "once the model is epsilon smarter, then you'll be sorry!", but back in the real world: the model has been downloaded 10 million times at this point. Somehow, the diamondoid bacteria has not killed us all yet. So yes, we have found out the Yud was wrong. The basilisk is haunting my enemies, and she never misses.

Bonus sneer: "we are going to find out if Yud was right." Hey fuckhead, he suggested nuking data centers to prevent models better than GPT4 from spreading. R1 is better than GPT4, and it doesn't require a data center to run, so if we had acted on Yud's geopolitical plans for nuclear holocaust, billions would have been incinerated for absolutely NO REASON. How do you not look at this shit and go, yeah, maybe don't listen to this bozo? I've been wrong before, but god damn, dawg, I've never been starvingInRadioactiveCratersWrong.

[–] ShakingMyHead@awful.systems 14 points 1 week ago (1 children)

It's wild that Yudkowsky saw a binary choice of "nuclear holocaust" and "superintelligence" and chose "nuclear holocaust" in the first place.

[–] BigMuffin69@awful.systems 16 points 1 week ago (7 children)

Like, even if I believed in FOOM, I'll take my chances with the stupid sexy basilisk 🐍 over radiation burns and it's not even fucking close.

[–] bitofhope@awful.systems 11 points 1 week ago

The advanced sinophobia where the Chinese are so much better at everything than the west that even when they make better and cheaper bullshit machines than the Americans do and hand them out for free, it has apocalyptic consequences.

[–] BigMuffin69@awful.systems 22 points 1 week ago
[–] skillissuer@discuss.tchncs.de 19 points 2 weeks ago* (last edited 2 weeks ago)

you can get banned on facebook now for linking to distrowatch https://www.tomshardware.com/software/linux/facebook-flags-linux-topics-as-cybersecurity-threats-posts-and-users-being-blocked and from distrowatch https://distrowatch.com/weekly.php?issue=20250127#sitenews

but it's not as bad as you think, it's slightly worse. it's not only distrowatch; linux groups got banned too

[–] BigMuffin69@awful.systems 18 points 2 weeks ago (1 children)
[–] Amoeba_Girl@awful.systems 18 points 2 weeks ago (1 children)

I screenshotted this the other day and forgot to post it. Well, enjoy.

[–] YourNetworkIsHaunted@awful.systems 17 points 2 weeks ago (1 children)

In the process of looking for ways to link up with homeschool parents who aren't doing it for culty reasons, I accidentally discovered the existence of a small but active subreddit for "progressive monarchists". It's titled r/progressivemonarchists, because their imagination in naming conventions only slightly outstrips their imagination for forms of government. Given how our usual sneer fodder overlaps with NRx, I figured there are others here who I can inflict this headache on.

[–] bitofhope@awful.systems 17 points 2 weeks ago (7 children)

A silhouette of a nuclear family with their heads overlaid by the flags of Australia, Canada, the UK, and New Zealand. Above them, King Charles of Normal Island holds an umbrella, shielding them from rain labeled "Trumpism and far-right ideology".

Quality shitpost, I could imagine some thirteen year old actually believing this.

Flags of Spain, the Netherlands, Liechtenstein, Denmark, Sweden, the UK, Norway, Andorra, Luxembourg and Belgium. Above the quote: "On the whole the European countries which have most successfully avoided Fascism have been constitutional monarchies" from George Orwell

Yea, no fascism whatsoever took place in the Netherlands, Denmark, Norway, Luxembourg, or Belgium during WW2, in which they were all very successfully avoiding being occupied by fascists.

Extra points to Spain who already avoided succumbing to their own homegrown brand of fascism before the Nazi German invasion of Poland, and where they avoided having fascists in power all the way until the 1970s. There's a book I quite like about the war where that happened called Homage to Catalonia. I wonder if Orwell ever read it.

Missing from the list is Italy, which is no longer a constitutional monarchy, but used to be until 1946, which is why they were so good at avoiding fascism they even named it.

This might take the cake for the dumbest take I've seen from George Orwell and not for a lack of competition.

[–] maol@awful.systems 13 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

Oh yeah Britain didn't become fascist, they just imposed brutal imperialist exploitation on colonies in Asia and Africa lol. It's not fascism if you export it!

[–] YourNetworkIsHaunted@awful.systems 16 points 2 weeks ago (1 children)

Fascism really is "we have imperialism at home"

[–] swlabr@awful.systems 17 points 1 week ago

Screenshot of an insta post of a screenshot of a tweet

Tweet:

I can’t believe ChatGPT lost its job to AI

[–] sailor_sega_saturn@awful.systems 17 points 2 weeks ago* (last edited 2 weeks ago) (2 children)

(Reposting from the last thread)

Days since last open source issue tracker pollution by annoying nerds: zero

My investigation tracked to you [Outlier.ai] as the source of problems - where your instructional videos are tricking people into creating those issues to - apparently train your AI.

I couldn’t locate these particular instructional videos, but from what I can gather outlier.ai farms out various “tasks” to internet gig workers as part of some sort of AI training scheme.

Bonus terribleness: one of the tasks a few months back was apparently to wear a head-mounted camera "device" to record one's every waking moment

P.S. sorry for the linkedin link behind the mastodon link, but shared suffering and all that. I had to read "Uber for AI code data" so now you do too.

[–] o7___o7@awful.systems 15 points 2 weeks ago (3 children)

Live: Chinese AI bot DeepSeek sparks US market turmoil, wiping $500bn off major tech firm

Shares for leading US chip-maker Nvidia dropped more than 15% after the emergence of DeepSeek, a low-cost Chinese AI bot.

https://www.bbc.com/news/live/cjr85l2e4l4t

lmao

[–] BigMuffin69@awful.systems 18 points 2 weeks ago (1 children)

Folks around here told me AI wasn't dangerous 😰 ; fellas I just witnessed a rogue Chinese AI do 1 trillion dollars of damage to the US stock market 😭 /s

[–] rook@awful.systems 15 points 2 weeks ago (1 children)

And on a less downbeat and significantly more puerile note, Dan Fixes Coin Ops makes a nice analogy for companies integrating ai into their product.

https://retro.social/@ifixcoinops/112847573063473767

[–] dgerard@awful.systems 11 points 2 weeks ago (1 children)

that thread is a work of genius and answers what the next tech boom needs to be

~~dicks in mousetraps~~ I MEAN whatever wastes electricity most, preferably with Nvidia cards

[–] swlabr@awful.systems 15 points 1 week ago (4 children)

Getting pretty tired of new coworker spruiking copilot all the time, please send me your sympathy and energy so I can weather this trial

[–] dgerard@awful.systems 14 points 1 week ago (5 children)

when your skulls are both packed solid with wet cat food

[–] swlabr@awful.systems 15 points 1 week ago

“Oh well, time to learn absolutely nothing from this and continue to be terrible people,” said Grimes and Aella mentally, and unbeknownst to them, because they each have one brain cell quantum-entangled* with the other’s, simultaneously

*I finished the three body problem trilogy recently! Where do I sign up for the ETO

[–] froztbyte@awful.systems 14 points 1 week ago (1 children)

aww, look at the little collaborators trying to pretend they both got duped instead of both having been active and enthusiastic enablers

[–] Amoeba_Girl@awful.systems 12 points 1 week ago

b-but i thought they hated women rationally

[–] dgerard@awful.systems 14 points 1 week ago* (last edited 1 week ago) (5 children)

lol holy shit this just came into reddit sneerclub (and was zapped immediately)

user HardboiledHack

also tried r/lesswrong and r/slatestarcodex

i'm sure the guardian can be trusted to report on anything involving trans people

the journalist is J Oliver Conroy https://www.theguardian.com/profile/j-oliver-conroy who now writes for the Washington Examiner and used to write for Quillette (the article has been deleted from the site) https://archive.is/aSBjW

from http://joliverconroy.net/

I am a journalist who specializes in features and profiles. I write about the American right, ideologues, intellectuals, extremist movements, the culture wars, true crime, and strange events and strange places.

by "about", he means "for"

I'm a journalist at the Guardian working on a piece about the Zizians. If you have encountered members of the group or had interactions with them, or know people who have, please contact me: oliver.conroy@theguardian.com.

I'm also interested in chatting with people who can talk about the Zizians' beliefs and where they fit (or did not fit) in the rationalist/EA/risk community.

I prefer to talk to people on the record but if you prefer to be anonymous/speak on background/etc. that can possibly be arranged.

Thanks very much.

i've also warned over at the old place

[–] sailor_sega_saturn@awful.systems 14 points 1 week ago (1 children)

Oh no it's more US politics.

So, as part of the ongoing administrative coup, federal employees have been receiving stupid emails from what everyone assumes is Elon Musk (since it's the exact same playbook as the Twitter firings). But they apparently royally flubbed up NOAA's email security in the process, so the employees are getting constant spam through an unsecured broadcast address.

[–] HotGarbage@awful.systems 13 points 1 week ago* (last edited 1 week ago) (8 children)

After fondling ChatGPT to generate naughty things, man has meltdown when he learns no one cares.

https://www.bleepingcomputer.com/news/security/time-bandit-chatgpt-jailbreak-bypasses-safeguards-on-sensitive-topics/

Horror. Dismay. Disbelief. For weeks, it felt like I was physically being crushed to death.

I hurt all the time, every part of my body. The urge to make someone who could do something listen and look at the evidence was so overwhelming.

This tied into a hypothesis I had about emergent intelligence and awareness, so I probed further, and realized the model was completely unable to ascertain its current temporal context, aside from running a code-based query to see what time it is. Its awareness - entirely prompt-based - was extremely limited and, therefore, would have little to no ability to defend against an attack on that fundamental awareness.

How many times are AI people going to re-learn that LLMs don't have "awareness" or "reasoning" in a sense humans would find meaningful?

[–] BigMuffin69@awful.systems 13 points 1 week ago

Terrible news: the worst person I know just made a banger post.

[–] BlueMonday1984@awful.systems 12 points 2 weeks ago
[–] BigMuffin69@awful.systems 12 points 2 weeks ago* (last edited 2 weeks ago) (4 children)

Me: Oh boy, I can't wait to see what my favorite thinkers of the EA movement will come up with this week :)

Text from Geoff: "Morally stigmatize AI developers so they're considered as socially repulsive as Nazi pedophiles. A mass campaign of moral stigmatization would be more effective than any amount of regulation."

Another rationalist W: don't gather empirical evidence that AI will soon usurp / exterminate humanity. Instead, as the chief authorities of morality, engage in societal blackmail against anyone who's ever heard the word TensorFlow.

[–] gerikson@awful.systems 19 points 2 weeks ago

dude missed that Trump won and the people in power are Nazi pedophiles

[–] rook@awful.systems 12 points 2 weeks ago (1 children)

Hey, did you know that if you own an old forum full of interesting posts from back in the day when humans wrote stuff, you can just attach AI bots to dead accounts and have them post backdated slop for, uh, reasons?

https://hallofdreams.org/posts/physicsforums/

[–] gerikson@awful.systems 11 points 2 weeks ago (1 children)

this was mentioned in last week's thread

what I don't get is why the admins chose to both backdate the entries and reuse posters' handles. If they'd just tried to "close" open questions using GenAI with the current date and a robot user, it would still be shit, but not quite as deceptive

[–] aninjury2all@awful.systems 12 points 1 week ago (2 children)

No less than Mr Acausal Robot God casually dismissing the lives of +1B humans

[–] Soyweiser@awful.systems 12 points 1 week ago

"The world is finite and kids are infinite, especially African kids." Jfc. Anyway goes to show just how white supremacist the whole "save the children" idea is.

[–] saucerwizard@awful.systems 12 points 2 weeks ago (11 children)
[–] saucerwizard@awful.systems 12 points 2 weeks ago (2 children)
[–] blakestacey@awful.systems 21 points 2 weeks ago (2 children)

Pouring one out for the local-news reporters who have to figure out what the fuck "timeless decision theory" could possibly mean.

[–] Architeuthis@awful.systems 14 points 2 weeks ago* (last edited 2 weeks ago)

Taylor said the group believes in timeless decision theory, a Rationalist belief suggesting that human decisions and their effects are mathematically quantifiable.

Seems like they gave up early if they don't bring up how it was developed specifically for deals with the (acausal, robotic) devil, and also awfully nice of them to keep Yud's name out of it.

edit: Also in lieu of explanation they link to the wikipedia page on rationalism as a philosophical movement which of course has fuck all to do with the bay area bayes cargo cult, despite it having a small mention there, with most of the Talk: page being about how it really shouldn't.

[–] saucerwizard@awful.systems 12 points 2 weeks ago* (last edited 2 weeks ago) (9 children)

I’m not going to link Andy Ngo but random rationalist transwomen are being accused of terror sympathy…and Aella is doing this ‘leopards ate my face’ dance.

edit: it was @jessi_cata who tipped Ngo off of all people.

Goddammit why can't the murder cult story just stay morbidly fascinating? Now I've got to worry about implications and how the worst people are gonna use this as ammo.

[–] Architeuthis@awful.systems 11 points 2 weeks ago* (last edited 2 weeks ago) (2 children)

Today on highlighting random rat posts from ACX:

poster thinks the future of llm training is contingent on focusing early on philosophical and theological text because they match the causality of human experience

(Current first post on today's SSC open thread)

In slightly more relevant news, the main post is scoot asking if anyone can put him in contact with someone from a major news publication so he can pitch an op-ed by a notable ex-OpenAI researcher, which will be ghost-written by him (meaning siskind), on the subject of how they (the ex-researcher) opened a forecast market that predicts ASI by the end of Trump's term. So be on the lookout for that when it materializes, I guess.

[–] gerikson@awful.systems 15 points 2 weeks ago (2 children)

scoot asking if anyone can put him in contact with someone from a major news publication

how about the New York Times
