this post was submitted on 09 Sep 2024
27 points (100.0% liked)
TechTakes
Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.
This is not debate club. Unless it’s amusing debate.
For actually-good tech, you want our NotAwfulTech community
founded 1 year ago
Why are you saying that LLMs are useless when they're useless only most of the time
I'm sorry but I've been circling my room for an hour now seeing this and I need to share it with people lest I go insane.
I find the polygraph to be a fascinating artifact, mostly on account of how it doesn't work. it's not that it kinda works, that it more or less works, or that if we just iron out a few kinks the next model will do what polygraphs claim to do. the assumptions behind the technology are wrong. lying is not physiological; a polygraph cannot and will never work. you might as well hire me to read the tarot of the suspects, my rate of success would be as high or higher.
yet the establishment pretends that it works, that it means something. because the State desperately wants to believe that there is a path to absolute surveillance, a way to make even one's deepest subjectivity legible to the State, amenable to central planning (cp. the inefficacy of torture). they want to believe it so much, they want this technology to exist so much, that they throw reality out of the window, ignore not just every researcher ever but the evidence of their own eyes and minds, and pretend very hard, pretend deliberately, willfully, desperately, that the technology does what it cannot do and will never do. just the other day some guy was condemned to use a polygraph for every statement for the rest of his life. again, this is no better than flipping a coin to decide if he's telling the truth, but here's the entire System, the courts, the judge, the State itself, solemnly condemning the man to the whims of imaginary oracles.
I think this is how "AI" works, but on a larger scale.
see also voice stress analysis, another thing that doesn't work but is sold as working with AI
that dude advocates LLM code autocomplete and he's a cryptographer
like that code's gotta be a bug bounty bonanza
dear fuck:
I don’t like go but I rely on go programs for security-critical stuff, so their crypto guy’s bluesky posts being purely overconfident “you can’t prove I’m using LLMs to introduce subtle bugs into my code” horseshit is fucking terrible news to me too
but wait, mkcert and age? is that where I know the name from? mkcert’s a huge piece of shit nobody should use that solves a problem browsers created for no real reason, but I fucking use age in all my deployments! this is the guy I’m trusting? the one who’s currently trolling bluesky cause a fraction of its posters don’t like the unreliable plagiarization machine enough? that’s not fucking good!
maybe I shouldn’t be taking this so hard — realistically, this is a Google kid who’s partially funded by a blockchain company; this is someone who loves boot leather so much that most of their posts might just be them reflexively licking. they might just be doing contrarian trolling for a technology they don’t use in their crypto work (because it’s fucking worthless for it) and maybe what we’re seeing is the cognitive dissonance getting to them.
but boy fuck does my anxiety not like this being the personality behind some of the code I rely on
Oh shit, that's where I recognize his name from. Very disappointing he's full on the LLM train.
cryptographers: need strict guarantees on code ordering and timing because even compiler optimizations can introduce exploitable flaws into code that looks secure
the go cryptographer: there’s no reason not to completely trust a system that pastes plagiarized code together so loosely it introduces ordering-based exploits into ordinary C code and has absolutely no concept of a timing attack (but will confidently assert it does)
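for anyone wondering what "no concept of a timing attack" looks like in code, here's a minimal Go sketch (function names mine, not from anyone's actual codebase): the naive comparison is exactly the shape of bug an LLM will happily paste into a token or MAC check, and the whole reason Go's standard library ships `crypto/subtle.ConstantTimeCompare`.

```go
package main

import (
	"crypto/subtle"
	"fmt"
)

// naiveEqual bails out on the first mismatched byte, so how long it runs
// leaks how many leading bytes of the guess were correct -- the classic
// timing side channel against secret-token or MAC comparisons.
func naiveEqual(a, b []byte) bool {
	if len(a) != len(b) {
		return false
	}
	for i := range a {
		if a[i] != b[i] {
			return false // early exit: data-dependent timing
		}
	}
	return true
}

// safeEqual touches every byte no matter where a mismatch occurs, so its
// running time doesn't depend on the secret's contents.
func safeEqual(a, b []byte) bool {
	return subtle.ConstantTimeCompare(a, b) == 1
}

func main() {
	secret := []byte("s3cr3t-token")
	fmt.Println(naiveEqual(secret, []byte("wrong-token!"))) // false
	fmt.Println(safeEqual(secret, secret))                  // true
}
```

both functions return the same booleans; the exploitable difference is only visible in how long they take, which is precisely why "the code looks right" is worthless as a review standard for crypto.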
yeah. Been following valsorda for a while because reasons, and there's a certain type of thing they frequently go for: "it's popular and thus worth it." the side effects aren't something they seem to concern themselves with, in respect to the gallery of shit
I know that rage exists, but haven’t really tried to make serious use of it yet. Probably worth checking out
oh I make serious use of rage all the time in my work
not the program, but that looks cool too
samesies
Criticizing others for not being perfectly exacting with their language and then jumping in front of the LLM headlights all at once, truly the human mind has no limits.
Valsorda was on mastodon for a bit (in ‘22 maybe?) and was quite keen on it, but left after a bunch of people got really pissy at him over one of his projects. I can’t actually recall what it even was, but his argument was that people posted stuff publicly on mastodon, so he should be able to do what he liked with those posts even if they asked him not to. I can see why he might not have a problem with LLMs.
Anyone remember what he was actually doing? Text search or network tracing or something else?
(e: apologies, this turned into more of a wall-of-text sneer than I meant to, but I'll leave it for flavour and detail)
as someone from (and living in) the global south (fairly familiar with but not myself at worse end of the resources spectrum), I cannot tell you how fucking ridiculous it sounds each time I see some North American Fuckwit post shit like that. whether it was the coiners going "banking the unbanked!!!!!" or the llm trash "can help you write professional!!!!!", it's always some Extremely Resourced thinking that just does. not. apply. this side of the world
I probably should make this a long detailed post sometime somewhere, demonstrating just how utterly fucking wrong some of these presumptions are, because oh god they're many:
- the amount of data it takes to communicate with this trash (in a number of markets, you get people buying data bundles in 10/50/100MB increments in day or hour units because that's what they can afford at that point (there is another rant here to be had about exploitative behaviour on the part of telcos, but separate rant))
- just reaching the servers for this shit requires a good network connection, nevermind the interaction latency (higher base latencies = much longer cumulative = much slower "experience"... and this shit was already slow from US networks)
- hell, just having hardware that's capable is sometimes a big blocker - so-called "feature phones" are somewhat common (how much depends on where you are). sideline mention: locally in some areas they're called "trililis", after the way they ring, which I fucking love. and even when you have users with smartphones, the devices are not necessarily good. sometimes it's low resourced (because cost), sometimes it's buggy as fuck (vendors, cost), sometimes it's just plain fucked (because hard knocks life)
- and don't even get me goddamn started on the language. the phenomenon of nigerian english being Too Florid For USA has already featured here previously, but it goes so much beyond that. show me one of these fucking prompts working even half-well in Pedi, Sotho, Swazi, Tsonga, Tswana, Venda, Xhosa, Zulu, or Afrikaans. and those are just the other national (spoken/textual) languages here (in ZA). one single border away there's 25+ more that I know of
and that's to just look at the resource/technical/implementation side of it, and saying nothing about the Northern Saviour dynamic - so many of these fucking people advertise working for a non-profit, wearing it like a badge. wandering around DC a few years back, running into many of these, with so-called focuses on places in africa I've been to and worked in.. it was surreal how wide the gap was between reality and what they had in their heads
oh! was he the guy doing a search engine archiving as much of the fediverse as possible, over the objections of the people being indexed?
yeah that tracks
So many techbros have decided to scrape the fediverse that they all blur together now... I was able to dig up this:
"I hear I’m supposed to experiment with tech not people, and must not use data for unintended purposes without explicit consent. That all sounds great. But what does it mean?" he whined.
yeah, that's the fucker. as a large language model, he does not have a data type for consent
I always wondered why he was at google for so long, and cut a teeny bit of hypothetical slack in light of "hmm maybe it gave him a significantly better life than what he could have in italy" (which honestly I can understand as a drive, if not necessarily agree with)
that slack's gone now
Some ok anti-AI voices in that thread. But mostly a torrent of shit