submitted 2 months ago* (last edited 2 months ago) by zogwarg@awful.systems to c/techtakes@awful.systems

Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Semi-obligatory thanks to @dgerard for starting this)

[-] V0ldek@awful.systems 10 points 2 months ago

Why are you saying that LLMs are useless when they're useless only most of the time

I'm sorry but I've been circling my room for an hour now seeing this and I need to share it with people lest I go insane.

[-] mirrorwitch@awful.systems 15 points 2 months ago

I find the polygraph to be a fascinating artifact. mostly on account of how it doesn't work. it's not that it kinda works, that it more or less works, or that if we just iron out a few kinks the next model will do what polygraphs claim to do. the assumptions behind the technology are wrong. lying is not physiological; a polygraph cannot and will never work. you might as well hire me to read the tarot of the suspects, my rate of success would be as high or higher.

yet the establishment pretends that it works, that it means something. because the State desperately wants to believe that there is a path to absolute surveillance, a way to make even one's deepest subjectivity legible to the State, amenable to central planning (cp. the inefficacy of torture). they want to believe it so much, they want this technology to exist so much, that they throw reality out of the window, ignore not just every researcher ever but the evidence of their own eyes and minds, and pretend very hard, pretend deliberately, willfully, desperately, that the technology does what it cannot do and will never do. just the other day some guy was condemned to use a polygraph in every statement for the rest of his life. again, this is no better than flipping a coin to decide if he's telling the truth, but here's the entire System, the courts the judge the State itself, solemnly condemning the man to the whims of imaginary oracles.

I think this is how "AI" works, but on a larger scale.

[-] dgerard@awful.systems 7 points 2 months ago

see also voice stress analysis, another thing that doesn't work but is sold as working with AI

[-] dgerard@awful.systems 12 points 2 months ago

that dude advocates LLM code autocomplete and he's a cryptographer

like that code's gotta be a bug bounty bonanza

[-] self@awful.systems 10 points 2 months ago

dear fuck:

From 2018 to 2022, I worked on the Go team at Google, where I was in charge of the Go Security team.

Before that, I was at Cloudflare, where I maintained the proprietary Go authoritative DNS server which powers 10% of the Internet, and led the DNSSEC and TLS 1.3 implementations.

Today, I maintain the cryptography packages that ship as part of the Go standard library (crypto/… and golang.org/x/crypto/…), including the TLS, SSH, and low-level implementations, such as elliptic curves, RSA, and ciphers.

I also develop and maintain a set of cryptographic tools, including the file encryption tool age, the development certificate generator mkcert, and the SSH agent yubikey-agent.

I don’t like go but I rely on go programs for security-critical stuff, so their crypto guy’s bluesky posts being purely overconfident “you can’t prove I’m using LLMs to introduce subtle bugs into my code” horseshit is fucking terrible news to me too

but wait, mkcert and age? is that where I know the name from? mkcert’s a huge piece of shit nobody should use that solves a problem browsers created for no real reason, but I fucking use age in all my deployments! this is the guy I’m trusting? the one who’s currently trolling bluesky cause a fraction of its posters don’t like the unreliable plagiarization machine enough? that’s not fucking good!

maybe I shouldn’t be taking this so hard — realistically, this is a Google kid who’s partially funded by a blockchain company; this is someone who loves boot leather so much that most of their posts might just be them reflexively licking. they might just be doing contrarian trolling for a technology they don’t use in their crypto work (because it’s fucking worthless for it) and maybe what we’re seeing is the cognitive dissonance getting to them.

but boy fuck does my anxiety not like this being the personality behind some of the code I rely on

[-] gerikson@awful.systems 8 points 2 months ago

Oh shit, that's where I recognize his name from. Very disappointing he's full on the LLM train.

[-] self@awful.systems 8 points 2 months ago

cryptographers: need strict guarantees on code ordering and timing because even compiler optimizations can introduce exploitable flaws into code that looks secure

the go cryptographer: there’s no reason not to completely trust a system that pastes plagiarized code together so loosely it introduces ordering-based exploits into ordinary C code and has absolutely no concept of a timing attack (but will confidently assert it does)

[-] froztbyte@awful.systems 5 points 2 months ago

yeah. Been following valsorda for a while because reasons, and there’s a certain type of thing they frequently go for: “it’s popular and thus worth it”. The side effects aren’t something they seem to concern themselves with in respect to the gallery of shit

I know that rage exists, but haven’t really tried to make serious use of it yet. Probably worth checking out

[-] self@awful.systems 7 points 2 months ago

I know that rage exists, but haven’t really tried to make serious use of it yet.

oh I make serious use of rage all the time in my work

not the program, but that looks cool too

[-] froztbyte@awful.systems 4 points 2 months ago
[-] FredFig@awful.systems 9 points 2 months ago

Criticizing others for not being perfectly exacting with their language and then jumping in front of the LLM headlights all at once, truly the human mind has no limits.

[-] rook@awful.systems 7 points 2 months ago

Valsorda was on mastodon for a bit (in ‘22 maybe?) and was quite keen on it, but left after a bunch of people got really pissy at him over one of his projects. I can’t actually recall what it even was, but his argument was that people posted stuff publicly on mastodon, so he should be able to do what he liked with those posts even if they asked him not to. I can see why he might not have a problem with LLMs.

Anyone remember what he was actually doing? Text search or network tracing or something else?

[-] dgerard@awful.systems 6 points 2 months ago

oh! was he the guy doing a search engine archiving as much of the fediverse as possible, over the objections of the people being indexed?

yeah that tracks

[-] blakestacey@awful.systems 8 points 2 months ago

So many techbros have decided to scrape the fediverse that they all blur together now... I was able to dig up this:

"I hear I’m supposed to experiment with tech not people, and must not use data for unintended purposes without explicit consent. That all sounds great. But what does it mean?" He whined.

[-] dgerard@awful.systems 6 points 2 months ago

yeah, that's the fucker. as a large language model, he does not have a data type for consent

[-] froztbyte@awful.systems 5 points 2 months ago

I always wondered why he was at google for so long, and cut him a teeny bit of hypothetical slack in light of “hmm maybe it gave him a significantly better life than what he could have had in italy” (which honestly I can understand as a drive, if not necessarily agree with)

that slack's gone now

[-] swlabr@awful.systems 6 points 2 months ago

Some ok anti-AI voices in that thread. But mostly a torrent of shit

this post was submitted on 09 Sep 2024
27 points (100.0% liked)
