Architeuthis

joined 2 years ago
[–] Architeuthis@awful.systems 5 points 5 days ago

There are days when a 70% error rate seems like low-balling it; it's mostly a luck-of-the-draw thing. And be it 10% or 90%, it's not really automation if a human has to be double- and triple-checking the output 100% of the time.

[–] Architeuthis@awful.systems 13 points 6 days ago (3 children)

Training a model on its own slop supposedly makes it suck more, though. If Microsoft wanted to milk their programmers for quality training data they should probably be banning copilot, not mandating it.

At this point it's an even bet that they are doing this because copilot has groomed the executives into thinking it can do no wrong.

[–] Architeuthis@awful.systems 12 points 1 week ago

LLMs are bad even at faithfully condensing news articles into shorter ones, so I'm assuming that in a significant percentage of conversions the dumbed-down contract will deviate from the original.

[–] Architeuthis@awful.systems 7 points 1 week ago* (last edited 1 week ago)

I posted this article on the general chat at work the other day and one person became really defensive of ChatGPT, and now I keep wondering what stage of being groomed by AI they're currently at and whether it's reversible.

[–] Architeuthis@awful.systems 12 points 1 week ago (1 children)

Not really possible in an environment where the most useless person you know keeps telling everyone how AI made him twelve-point-eight times more productive, especially within hearing distance of management.

[–] Architeuthis@awful.systems 8 points 1 week ago (1 children)

A programmer automating his job is kind of his job, though. That's not so much the problem as the complete enshittification of software engineering that the culture surrounding these dubiously efficient and super sketchy tools seems to herald.

On the more practical side, enterprise subscriptions to the slop machines do come with assurances that your company's IP (meaning your code and whatever else is accessible from your IDE that your copilot instance can and will ingest) and your prompts won't be used for training.

Hilariously, github copilot now has an option to prevent it from being too obvious about stealing other people's code, called duplication detection filter:

If you choose to block suggestions matching public code, GitHub Copilot checks code suggestions with their surrounding code of about 150 characters against public code on GitHub. If there is a match, or a near match, the suggestion is not shown to you.

[–] Architeuthis@awful.systems 99 points 1 week ago (21 children)

Liuson told managers that AI “should be part of your holistic reflections on an individual’s performance and impact.”

who talks like this

[–] Architeuthis@awful.systems 7 points 1 week ago* (last edited 1 week ago)

Good parallel, the hands are definitely strategically hidden to not look terrible.

[–] Architeuthis@awful.systems 2 points 1 week ago

Like, assuming we could reach a sci-fi vision of AGI just as capable as a human being, the primary business case here is literally selling (or rather, licensing out) digital slaves.

Big deal, we'll just configure a few to be in a constant state of unparalleled bliss to cancel out the ones having a hard time of it.

Although I'd guess human level problem solving needn't imply a human-analogous subjective experience in a way that would make suffering and angst meaningful for them.

[–] Architeuthis@awful.systems 9 points 1 week ago* (last edited 1 week ago) (2 children)

Ed Zitron summarizes his premium post in the better offline subreddit: Why Did Microsoft Invest In OpenAI?

Summary of the summary: they fully expected OpenAI would've gone bust by now and MS would be looting the corpse for all it's worth.

[–] Architeuthis@awful.systems 3 points 2 weeks ago* (last edited 2 weeks ago) (3 children)

Fund copyright infringement lawsuits against the people they had been bankrolling the last few years? Sure, if the ROI is there, but I'm guessing they'll likely move on to the next trendy-sounding thing, like a quantum remote diddling stablecoin or whatevertheshit.

 

Sam Altman, the recently fired (and rehired) chief executive of Open AI, was asked earlier this year by his fellow tech billionaire Patrick Collison what he thought of the risks of synthetic biology. ‘I would like to not have another synthetic pathogen cause a global pandemic. I think we can all agree that wasn’t a great experience,’ he replied. ‘Wasn’t that bad compared to what it could have been, but I’m surprised there has not been more global coordination and I think we should have more of that.’

 

original is here, but you aren't missing any context, that's the twit.

I could go on and on about the failings of Shakespear... but really I shouldn't need to: the Bayesian priors are pretty damning. About half the people born since 1600 have been born in the past 100 years, but it gets much worse that that. When Shakespear wrote almost all Europeans were busy farming, and very few people attended university; few people were even literate -- probably as low as ten million people. By contrast there are now upwards of a billion literate people in the Western sphere. What are the odds that the greatest writer would have been born in 1564? The Bayesian priors aren't very favorable.

edited to add: this seems to be an excerpt from the fawning book the big short/moneyball guy wrote about him that was recently released.
