[-] zogwarg@awful.systems 11 points 2 months ago

Reading about the hubris of young Yud is a bit sad, a proper tragedy. Then I have to remind myself that he remains a manipulator, and that he should be old enough to stop believing in, and promoting, magical thinking.

[-] zogwarg@awful.systems 10 points 3 months ago

Actually reading the Python discussion boards, what's striking is the immense volume of chatter produced by Tim, always couched in:

  • "Hypothetically"
  • "Everyone tells me they are terrified of inclusivity, you wouldn't know because they are terrified of admitting it to YOU"
  • "I'm not saying that you are an awful person 😉" (YMMV: But I find his use of the winking face emoji truly egregious)
  • "Hey I'm liberal like you, let me explain everything wrong with it"
  • "Hey we were inclusive before any of this PC bullshit" proceeds to use unpleasant descriptors of marginalized individuals, and how very welcome they were, despite what he seems to see as "shortcomings"

In his heart he must understand how bad he is, or he wouldn't couch his discourse in so much bad faith, and he wouldn't make so much of a stink about making Python Fellow status easier to remove.

[-] zogwarg@awful.systems 10 points 4 months ago

Aaah!

See text description below

PagerDuty suggestion popup: Resolve incidents faster with Generative AI. Join Early Access to try the new PD Copilot.

[-] zogwarg@awful.systems 10 points 6 months ago

Also according to my freelance interpreter parents:

Compared to other major tools, it was also one of the few not-too-janky solutions for setting up simultaneous interpreting with a separate audio track for the interpreters' output.

Other tools would require big kludges (separate meeting rooms, etc…), unlikely to work for all participants across organizations, or require clunky consecutive translation.

[-] zogwarg@awful.systems 10 points 7 months ago* (last edited 7 months ago)

The article almost looks like satire.

If all script kiddies waste their time trying to use generative AI to produce barely functional malware, we might be marginally safer for a while ^^. Or maybe this is the beginning of an entirely new malware ecology, clueless development using LLMs falling prey to clueless malware using LLMs.

[-] zogwarg@awful.systems 10 points 7 months ago* (last edited 7 months ago)

A choice selection from Musk's deposition, with TurdRationalist™-adjacent brainrot shibboleths:

Q: (By Mr. Bankston) And this quote says from the Isaacson book, "My tweets are like Niagara Falls sometimes and they come too fast," Musk says. "Just dip a cup in there and try to avoid the random turds." Do you think that's an accurate quotation from you?

A: (By Elon) That is actually not -- not accurate. [...] The things that I see on Twitter, not the [...] posts that I make, are like Niagara Falls. [...] my account is the most interacted with in the world, I believe. It is physically impossible for, you know, any one person to see all of the interactions that happen. So the only way I can really gauge the interactions is by sampling them, essentially.

Q: Got you. So would it be fair to say that Isaacson made a mistake here, and what this really should say is not "my tweets are like Niagara Falls," but "everyone else's tweets are like Niagara Falls"?

A: Not exactly. It means [...] all of what I see when I use the X app, [...] all the posts that I see and all the interactions that happen with those posts, are far too numerous [...] for any human being to consume.

Q: Okay. So when this quote talks about random turds; these are other people's random turds?

A: I mean I suppose I -- I could be guilty of a random turd too, but [...] what I'm really referring to is that the only way for me to actually get an understanding of what is happening on the system is to sample it. Like try to do -- just like in statistics, you don't -- you do -- try to do -- you sample a distribution in order to understand what's going on, but you cannot look at every single data point.

I can only gauge truth from first-principled anecdotal sampling of my nazi friends; I can't look at everything, alas. I'll leave community notes to deal with pesky liberals.

[Which, btw, in other parts of the deposition he says: for a community note to be surfaced, people who previously disagreed must vote the same note as being helpful. That doesn't sound at all like it couldn't be gamed, and doesn't at all sound like it would sometimes force "centrism" with nazis.]

On an all too sadly self-aware note:

Elon: I may have done more to financially impair the company than to help it.

You think?

[-] zogwarg@awful.systems 10 points 9 months ago

What's the reward function for simulating me? I live a pretty dull life; what possible ROI? This goes against all the laws of Economics 101! (The only true way to carve reality at the joints.)

[-] zogwarg@awful.systems 11 points 1 year ago

Either way it's a circus of incompetence.

[-] zogwarg@awful.systems 11 points 1 year ago

^^ Quietly progressing from "humans are not the only ones able to do true learning" to "machines are the only ones capable of true learning."

Poetic.

PS: Eek at the *cough* extrapolation rules lawyering 😬.

[-] zogwarg@awful.systems 11 points 1 year ago* (last edited 1 year ago)

Not even that! It looks like a blurry jpeg of those sources if you squint a little!

Also I’ve sort of realized that the visualization is misleading in three ways:

  1. They provide an animation from shallow to deep layers to show the dots coming together, making the final result more impressive than it is (look at how many dots are in the ocean)
  2. You see blobby clouds over sub-continents, with nothing to gauge error within the cloud blobs.
  3. Sorta-relevant but obviously the borders as helpfully drawn for the viewer to conform to “Our” world knowledge aren’t even there at all, it’s still holding up a mirror (dare I say a parrot?) to our cognition.
[-] zogwarg@awful.systems 11 points 1 year ago

I wouldn't be so confident in replacing junior devs with "AI":

  1. Even if it did work without wasting time, it's unsustainable since junior devs need to acquire these skills, senior devs aren't born from the void, and will eventually graduate/retire.
  2. A junior dev willing to engage their brain would still iterate through to the correct implementation for cheaper (and potentially faster) than senior devs needing to spend time reviewing bullshit implementations, and on arcane attempts at unreliable "AI" prompting.

It's copy-pasting from stack-overflow all over again. The main consequence I see for LLM based coding assistants, is a new source of potential flaws to watch out for when doing code reviews.

[-] zogwarg@awful.systems 10 points 1 year ago

That is a delightfully ironic cover ^^, my headcanon is that someone in the distribution pipeline was intentionally taking the piss; surely no one can be that shortsighted; wait ...

