sailor_sega_saturn

joined 2 years ago
[–] sailor_sega_saturn@awful.systems 7 points 17 hours ago* (last edited 17 hours ago) (5 children)

Here's a video of a Tesla vehicle taking the saying "move fast and break things" to heart.

[–] sailor_sega_saturn@awful.systems 25 points 2 days ago* (last edited 2 days ago) (2 children)

Ah yes, the typical workflow for LLM-generated changes:

  1. LLM produces nonsense at the behest of employee A.
  2. Employee B leaves a bunch of edits and suggestions to hammer it into something that's sloppy but almost kind of makes sense. A soul-sucking, error-prone process that takes twice as long as just writing the dang code.
  3. Code submitted!
  4. Employee A gets promoted.

Also the fact that this isn't integrated with tests shows how rushed the implementation was. Not even LLM optimists should want code changes that don't compile or that break tests.
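For what it's worth, even a bare-minimum gate is easy to write. Here's a hypothetical sketch (the `make build` / `make test` commands are placeholders for whatever the project actually uses, not anyone's real tooling):

```python
#!/usr/bin/env python3
"""Toy pre-submit gate: reject a generated patch unless it applies,
builds, and passes the existing tests. The build/test commands are
stand-ins; swap in your project's real ones."""
import subprocess
import sys


def run(cmd: list[str]) -> bool:
    """Run a command and report whether it exited cleanly."""
    return subprocess.run(cmd).returncode == 0


def main() -> int:
    patch = sys.argv[1]
    # The patch has to at least apply cleanly.
    if not run(["git", "apply", "--check", patch]):
        print("patch does not apply cleanly; rejecting")
        return 1
    run(["git", "apply", patch])
    # The change must compile and pass tests before any human
    # is asked to spend review time on it.
    if not run(["make", "build"]):
        print("build failed; rejecting")
        return 1
    if not run(["make", "test"]):
        print("tests failed; rejecting")
        return 1
    print("patch builds and passes tests; OK to send for review")
    return 0


if __name__ == "__main__":
    sys.exit(main())
```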

[–] sailor_sega_saturn@awful.systems 8 points 2 days ago (1 children)

Also oh god, this company is probably patenting their genes, so what happens when people have babies? Do they have to pay a licensing fee?

[–] sailor_sega_saturn@awful.systems 7 points 2 days ago* (last edited 2 days ago) (3 children)

For longer than I'd like to admit, I thought I was reading a real reporter's account of visiting real weirdos, so the tone is definitely on point. It started out like something right out of sneerclub.

The blending of eugenics with Silicon Valley-style corporate "ethics" and excess makes for an interesting setting, and sprinkling in so many quotes / product names / etc. was a nice touch for worldbuilding and scene-setting.

I was left with lots of unanswered questions (deliberately, I assume); this leaves a lot to the imagination, including some threads that would be too openly dark for this sort of gilded setting. Or, with the setting being so transitional, it's possible that even this company hasn't thought through what will happen 10 or 20 years in the future as they move fast and break things chasing the next quarter's earnings.

Sorry I suck at giving criticism so this is just all stuff I liked. The following is my best shot at actual criticism:

The ending did confuse me a bit and felt a little out of place: I had to go back and re-read it to get the mood I think it was trying to evoke. Citrus being mentioned 5 times made me wonder if I was missing a deeper meaning. But on re-reading, citrus definitely makes sense as a theme: it has both a lovely natural scent from oranges and lemons and a sterile artificial scent from cleaning products or air fresheners.

Similarly, I thought I might be missing something with the woman being surprised by headlights at dusk, though looking back, natural dusk and sudden artificial headlights do pair well with the transitional setting of the story.

Ugh. So terrible. Tech’s obsession with “scaling” is one of the worst things about tech.

Yeah, that jumped out at me. Like, human teaching has scaled fine to billions of people. It certainly has a better track record than Duolingo, which provides meh study material and leads to, ahem, mixed learning outcomes despite being around for over a decade.

Of course there's the subtext of "but also we'll be able to put all those obsolete teachers out of business and make tons of money!"

Aaaarrgh. Tech’s obsession with A/B testing is another one of the worst things about tech.

Being in tech, I definitely see A/B testing misused sometimes. A team will ignore common sense entirely and come up with metrics that measure something irrelevant. The metrics are, intentionally or not, gamed to tell them what they want to hear. Then they run the (useless) numbers and use them to justify why their change was good, even in the face of intense user backlash.

One particular example that just came to mind: someone made a bad change, and lots of people complained. Eventually the complaints started to peter out. Then they claimed "see! people just had to get used to it!" (versus the rather more obvious possibility that nobody bothered to complain more than once).
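To make the "people just had to get used to it" fallacy concrete, here's a toy simulation (all numbers invented) where every single user hates the change, each one complains exactly once, and the complaint counts still trend toward zero:

```python
import random

# Toy model: 10,000 users all dislike a change, but each one only
# bothers to complain once, at some random point in the first few
# weeks. Sentiment never improves, yet complaints per day decay.
random.seed(0)
NUM_USERS = 10_000
DAYS = 28

complaints_per_day = [0] * DAYS
for _ in range(NUM_USERS):
    # Each user complains on one random early day, then goes quiet.
    day = min(int(random.expovariate(1 / 4)), DAYS - 1)
    complaints_per_day[day] += 1

for day, count in enumerate(complaints_per_day, start=1):
    print(f"day {day:2d}: {count:4d} complaints")

# The curve drops toward zero even though 100% of users still dislike
# the change: "complaints went down" is not "users got used to it".
```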

[–] sailor_sega_saturn@awful.systems 16 points 3 days ago (3 children)

Duolingo CEO says AI is a better teacher than humans—but schools will still exist ‘because you still need childcare’

It's wild for the CEO of an edutainment company to have this much disdain for teachers.

can’t have AI bro coworkers if you’re unemployed :P

I'd certainly feel less conflicted yelling about AI if I didn't work for a big tech company that's gaga for AI. I almost wrote out a long angsty reply, but I don't want to give away too much personal detail in a single comment.

I guess I ended up as a boiled frog. If I had known how much AI nonsense I'd be incidentally exposed to over the last year, I would have quit a year ago. And yet I'm not quitting now, for complicated reasons. I'm not that far from the breaking point, but I'm going to try to hang in there for a few more years.

But yeah, I'm pretty uncomfortable working for a company that has also veered closer to allying with techno-fascism in recent years, and I am taking psychic damage.

[–] sailor_sega_saturn@awful.systems 15 points 4 days ago (5 children)

Urgh, over the past month I have seen more and more people on social media using ChatGPT to write stuff for them, or to check facts, and getting defensive instead of embarrassed about it.

Maybe this is a bit "old woman yells at cloud", but I'd be lying if I said I wasn't worried about language proficiency atrophying in the population (and about having to read slop all the time as a result).

[–] sailor_sega_saturn@awful.systems 10 points 4 days ago (2 children)

We've had one AI legal filing yes, but what about second AI legal filing?

https://bsky.app/profile/debgoldendc.bsky.social/post/3lpjr7i6lrs2n

https://storage.courtlistener.com/recap/gov.uscourts.alnd.179677/gov.uscourts.alnd.179677.186.0.pdf

Instead, Defendant appears to have wholly invented case citations in his Motion for Leave, possibly through the use of generative artificial intelligence

Defendant bolstered this assertion with a lengthy string citation of legal authority and parentheticals that appeared to support Defendant’s proposition. But the entire string citation appears to have been made up out of whole cloth.

[–] sailor_sega_saturn@awful.systems 7 points 4 days ago* (last edited 4 days ago) (1 children)

Not deleted. It's just that the reddit programmers either DGAF or don't know what they're doing.

But yeah this one confused me. He appears to be a movie director / producer / writer and has a couple festival films under his belt. Nothing successful enough to get any buzz as far as I can tell.

Imagine working towards a Hollywood career for years and years only to write an AI-drawn comic book that, based on the title, misses the point of The Punisher. People he pitches his movie ideas to are going to assume he wrote the script with an LLM.

[–] sailor_sega_saturn@awful.systems 9 points 4 days ago (1 children)

We already knew these things are security disasters, but yeah that still looks like a security disaster. It can both read private documents and fetch from the web? In the same session? And it can be influenced by the documents it reads? And someone thought this was a good idea?
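The mitigation isn't even exotic. A toy sketch of the kind of session-level rule I'd expect (the class and method names here are made up for illustration, not any vendor's actual API): once a session has touched private data, it loses outbound web fetch, which closes the obvious prompt-injection exfiltration path.

```python
class SessionPolicy:
    """Toy capability gate: a session that has read private data is
    no longer allowed to make outbound web requests."""

    def __init__(self) -> None:
        self.has_read_private_data = False

    def record_private_read(self) -> None:
        # Called whenever a tool returns private document contents.
        self.has_read_private_data = True

    def allow_web_fetch(self) -> bool:
        # Outbound fetches are only OK while the session is "clean".
        return not self.has_read_private_data


policy = SessionPolicy()
policy.record_private_read()          # agent reads a private doc
assert not policy.allow_web_fetch()   # now it can't phone home
```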

[–] sailor_sega_saturn@awful.systems 12 points 5 days ago* (last edited 5 days ago) (2 children)

The latest in chatbot "assisted" legal filings. This time courtesy of Anthropic's lawyers and a data scientist, who tragically can't afford software that supports formatting legal citations and have to rely on Clippy instead: https://www.theverge.com/news/668315/anthropic-claude-legal-filing-citation-error

After the Latham & Watkins team identified the source as potential additional support for Ms. Chen’s testimony, I asked Claude.ai to provide a properly formatted legal citation for that source using the link to the correct article. Unfortunately, although providing the correct publication title, publication year, and link to the provided source, the returned citation included an inaccurate title and incorrect authors. Our manual citation check did not catch that error. Our citation check also missed additional wording errors introduced in the citations during the formatting process using Claude.ai.

Don't get high on your own AI as they say.
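The galling part is that "provide a properly formatted legal citation" is a string-formatting problem. A toy sketch (the fields and output format are deliberately simplified, nothing like real Bluebook rules): what you put in is exactly what comes out, with no invented titles or authors.

```python
from dataclasses import dataclass


@dataclass
class Source:
    authors: list[str]      # taken from the article itself, not guessed
    title: str
    publication: str
    year: int
    url: str


def format_citation(src: Source) -> str:
    """Deterministically format a simplified citation."""
    author_part = " & ".join(src.authors)
    return f"{author_part}, {src.title}, {src.publication} ({src.year}), {src.url}"


example = Source(
    authors=["A. Author", "B. Author"],
    title="An Example Article",
    publication="Example Law Review",
    year=2023,
    url="https://example.com/article",
)
print(format_citation(example))
```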

 

https://www.reuters.com/technology/artificial-intelligence/openai-co-founder-sutskevers-new-safety-focused-ai-startup-ssi-raises-1-billion-2024-09-04/

http://web.archive.org/web/20240904174555/https://ssi.inc/

I have nothing witty or insightful to say, but figured this probably deserved a post. I flipped a coin between sneerclub and techtakes.

They aren't interested in anything besides "superintelligence", which strikes me as an optimistic business strategy. If you are "cracked", you can join them:

We are assembling a lean, cracked team of the world’s best engineers and researchers dedicated to focusing on SSI and nothing else.

 

Saw the title and knew I had to post here. Not quite as big of a self-own as Square Enix selling Tomb Raider for a blockchain / AI pivot, but amusing nonetheless.

Join the excitement of the Olympic Games Paris 2024 with nWay's officially licensed, commemorative Paris 2024 NFT Digital Pin collection!

You can claim a legendary or epic pin showcasing the Paris 2024 mascot holding a flag and waving. You can add these digital gems to your collection through Magic Eden’s friendly NFT marketplace as part of Coinbase's Onchain Summer event. Be sure to have an ETH L2 Base-supported wallet to secure yours today!

Remember when companies let you download wallpapers or something, instead of making you figure out what the heck an ETH L2 Base-supported wallet is?

I remember.

 

Follow up to https://awful.systems/post/1109610 (which I need to go read now because I completely overlooked this)

Now OpenAI has responded to Elon Musk's lawsuit with an email dump containing a bunch of weird nerd startup funding drama: https://openai.com/blog/openai-elon-musk

Choice quote from OpenAI:

As we get closer to building AI, it will make sense to start being less open. The Open in openAI means that everyone should benefit from the fruits of AI after its built, but it's totally OK to not share the science (even though sharing everything is definitely the right strategy in the short and possibly medium term for recruitment purposes).

OpenAI have learned how to redact text properly now though, a pity really.

 

OK OK old news I know. But this is a metal cover of a bitconnect speech that I found pretty amusing: https://www.youtube.com/watch?v=iZ-Ayj-ht_I

 

OpenAI blog post: https://openai.com/research/building-an-early-warning-system-for-llm-aided-biological-threat-creation

Orange discuss: https://news.ycombinator.com/item?id=39207291

I don't have any particular section to call out. May post thoughts ~~tomorrow~~ today (it's after midnight, oh gosh), but wanted to post since I knew y'all'd be interested in this.

Terrorists could use autocorrect according to OpenAI! Discuss!

 

#1 We're All Gonna Make It: https://www.youtube.com/watch?v=yp0diaVLPrQ

#2 Ethereum: https://www.facebook.com/randizberg/videos/nobodyme-ok-heres-another-music-video-had-a-blast-on-this-collab-with-hila-the-k/531145045349722/

#3 Hello This Is Defi: https://twitter.com/randizuckerberg/status/1494416366710910992

Surgeon General's Warning: watching all of these back to back may make your brain ooze out of your nose.

 

Don't mind me, I'm just here to silently scream into the void.

Edit: I'm no good at linking to HN apparently, made link more stable.
