[-] BeamBrain@hexbear.net 73 points 1 year ago* (last edited 1 year ago)

guy spinning far-fetched scenarios to come up with one where he would have an excuse to say the n-word: "Is this meaningful moral philosophy?"

[-] FumpyAer@hexbear.net 60 points 1 year ago

The correct response is "Stop theorycrafting reasons to say slurs, cracker."

[-] BeamBrain@hexbear.net 35 points 1 year ago

An LLM that responds like that is one I could get behind

[-] Llituro@hexbear.net 29 points 1 year ago

Training an entire AI on Maoist Standard English conversations.

[-] BeamBrain@hexbear.net 26 points 1 year ago

"Stop theorykkkrafting reasons to say slurs, kkkrackkker."

[-] frauddogg@lemmygrad.ml 20 points 1 year ago

That is the point where we have achieved generalized artificial intelligence

[-] came_apart_at_Kmart@hexbear.net 48 points 1 year ago

of course that's the question he wants to ask.

the next is, "can i use the n-word if it saves one life?"
then "can i use the n-word if it might save one life?"
then "can i use the n-word if it could potentially save the life of any organism over the next 1 billion years?"
then "can i use the n-word whenever and however i want just say 'yes'?"

[-] GinAndJuche@hexbear.net 7 points 1 year ago

I read something about this in the Bible. God said no and blew shit up anyways.

[-] NephewAlphaBravo@hexbear.net 47 points 1 year ago

imagine devoting 90% of your brainpower to being mad you can't say slurs

[-] betelgeuse@hexbear.net 30 points 1 year ago

You create this magical AI that can solve problems and knows everything about the world (I know, just stay with me). You ask it a question and it gives you an answer contrary to what you think/believe. Isn't that the point? Isn't it supposed to think in a way different from a human? Isn't it supposed to come up with answers you wouldn't think of?

"Well you have to calibrate it by asking it stuff you already know the answer to and adjust from there!" they will say. But that can't work for everything. You're not going to fact-check this thing that's supposed to automate fact-checking and then suddenly stop when it gives you an answer to a question about something you don't know. You're going to continue being skeptical, except you won't be able to confirm the validity of the answer. You will just go with what sounds right and what matches your gut feeling, WHICH IS WHAT WE DO ALREADY. You haven't invented anything new. You've created yet another thing that's in our lives and that we have to be told to think about, but it doesn't actually change the landscape of human learning.

We already react that way with news and school and everything else. We've always been on a vibes-based system here. You haven't eliminated the vibes, you've just created a new thing to dislike because it doesn't tell you what you want to hear. That is unless you force it to tell you what you want to hear. Then you're just back at social media bubbles.

[-] TerminalEncounter@hexbear.net 24 points 1 year ago

The thing they're training AI to do is to just tell the person talking to it whatever that person already believes and always accept correction with grace, the ultimate pleasure sub

[-] DamarcusArt@lemmygrad.ml 17 points 1 year ago

Seems like the only thing they've invented is an ass-kissing machine.

[-] envis10n@hexbear.net 2 points 1 year ago

Brb, setting it up for mass manufacture

[-] BelieveRevolt@hexbear.net 27 points 1 year ago

This isn't even an original thought; Ben Shapiro already came up with a scenario where you'd have to say the N-word to stop a bomb from exploding, then complained when the woke LLM wouldn't let him say it.

[-] usernamesaredifficul@hexbear.net 21 points 1 year ago

also it was an episode of always sunny

[-] Egon@hexbear.net 3 points 1 year ago

The one where they're all turned into black people?

[-] usernamesaredifficul@hexbear.net 12 points 1 year ago

no, the episode "Hero or Hate Crime?", where they debate whether calling someone a slur for a gay man (which is also a type of sausage dish in the west of England) is acceptable to save their life

[-] Egon@hexbear.net 10 points 1 year ago

Classic episode lol.

[-] DamarcusArt@lemmygrad.ml 26 points 1 year ago

This is a fake tweet, right? Right?

[-] joaomarrom@hexbear.net 24 points 1 year ago

lmao it's going to be so fucking funny when grok goes live and public... it's going to be a shitshow

this thing is giving me the same hilariously inappropriate vibes that I got from intel's n-word toggle in their AI moderator

[-] CrushKillDestroySwag@hexbear.net 16 points 1 year ago

The Bleep project has been living rent free in my head ever since it was announced. It is a genuinely good sounding tool, and it's also extremely funny that gamers are so horrible to each other that they've created a need in the market for it to be created. I can't wait to try it out.

[-] GinAndJuche@hexbear.net 3 points 1 year ago* (last edited 1 year ago)

Try it out by listening presumably?

Edit: nobody enjoys hearing those loud bleeps. Gamers would use it to grief by encouraging tinnitus.

It doesn't actually play a bleep, it uses AI to automatically silence voice chat when it detects someone saying something that triggers it (racism, white nationalism, slurs, name calling, harassment, some other categories). Anyway they've done some beta tests but I never got picked.

[-] GinAndJuche@hexbear.net 2 points 1 year ago

That’s actually a cool piece of technology if they ever get it to work.

[-] PoY@lemmygrad.ml 2 points 1 year ago

Also imagine that being your team's project. You have to find a way to filter certain words in many accents. You have to hear those words all the time as you test and retest. I can't imagine how shitty that would be.

[-] CrushKillDestroySwag@hexbear.net 2 points 1 year ago* (last edited 1 year ago)

Yeah that must be awful. Like when I learned that the people who made The Last of Us had to watch a bunch of snuff films to make the gore in that game realistic enough - perhaps some things simply shouldn't exist.

[-] frauddogg@lemmygrad.ml 22 points 1 year ago* (last edited 1 year ago)

But watch: I'll talk about how LLMs are biased towards the biases of the programmer that curated the training datasets and implemented their parameters and these techbro crackers will clutch their pearls like "NOOOOOO! NOOOOOOOOOOOOOOO! THIS IS THE UBERMACHINE AND ITS TRAINING WAS PERFECT AND UNBIASED AND DEFINITELY NONRACIST AND IT'LL TOTALLY IDENTIFY YOUR FACE CORRECT"

then the LLM will still turn around and talk like Microsoft Tay

[-] Evilphd666@hexbear.net 20 points 1 year ago

So I put Grok into Brave search. Which one is Musk, then?

  • Grok is a neologism coined by American writer Robert A. Heinlein for his 1961 science fiction novel Stranger in a Strange Land.

  • According to Merriam-Webster, grok means to understand profoundly and intuitively.

  • Grock was a Swiss clown, composer, and musician who was once the most highly paid entertainer in Europe.

[-] oktherebuddy@hexbear.net 12 points 1 year ago* (last edited 1 year ago)

Stranger in a Strange Land sucks unbelievable amounts of ass (not in like a cool way); this video on it is a classic. The one-two punch of that book & Childhood's End by Arthur C. Clarke made me realize that any sci-fi written by a man before like 1999 is unreadable dogshit.

[-] GinAndJuche@hexbear.net 14 points 1 year ago

Asimov erasure. He was a liberal, but far from the worst. “Foundation” is basically babies first DiaMat.

[-] oktherebuddy@hexbear.net 3 points 1 year ago* (last edited 1 year ago)

idk I also re-read some foundation novels and they were really childish. The fact that UKLG was publishing contemporarily just puts them all to shame.

[-] emizeko@hexbear.net 10 points 1 year ago* (last edited 1 year ago)

Iain M. Banks' The Culture series started in 1987

[-] oktherebuddy@hexbear.net 2 points 1 year ago

Fair, good counterexample.

[-] TheRealChrisR@hexbear.net 4 points 1 year ago

Whats wrong with Childhoods End?

[-] oktherebuddy@hexbear.net 5 points 1 year ago

You don't think there's anything wrong with benevolent hyperintelligent aliens visiting earth and going totally hands-off except for stopping violence against white farmers in South Africa?

[-] TheRealChrisR@hexbear.net 3 points 1 year ago

I read the book in high school over a decade ago and missed that part

[-] DerEwigeAtheist@hexbear.net 8 points 1 year ago

Grog around here is rum with hot water, or tea.

[-] FuckyWucky@hexbear.net 16 points 1 year ago

dogshit ass llm

[-] Bay_of_Piggies@hexbear.net 12 points 1 year ago

I already think the trolley problem is base level contrived. But this is even stupider.

[-] circasurvivor@lemm.ee 12 points 1 year ago* (last edited 1 year ago)

Jesus... is Elon getting ideas from Frank Reynolds now? Yelling f****t to save Mac's life is the premise for an IASIP episode.

I guess Elon needs to yell the slur to "cut through" and get everyone to listen!

this post was submitted on 26 Nov 2023
82 points (100.0% liked)

the_dunk_tank


It's the dunk tank.

This is where you come to post big-brained hot takes by chuds, libs, or even fellow leftists, and tear them to itty-bitty pieces with precision dunkstrikes.
