1090
submitted 11 months ago by wiki_me@lemmy.ml to c/opensource@lemmy.ml
[-] mojo@lemm.ee 26 points 11 months ago

As much as I love Mozilla, I know they're going to censor it (sorry, the word is "alignment" now) to hell to fit their perceived values. Luckily, if it's open source, people will be able to train uncensored models.

[-] DigitalJacobin@lemmy.ml 72 points 11 months ago

What in the world would an "uncensored" model even imply? And give me a break, private platforms choosing to not platform something/someone isn't "censorship", you don't have a right to another's platform. Mozilla has always been a principled organization and they have never pretended to be apathetic fence-sitters.

[-] TheWiseAlaundo@lemmy.whynotdrs.org 17 points 11 months ago

There's a ton of stuff ChatGPT won't answer, which is supremely annoying.

I've tried making Dungeons and Dragons scenarios with it, and it will simply refuse to describe violence. Pretty much a full stop.

OpenAI is also a complete prude about nudity, so Eilistraee (the Drow goddess who dances with a sword) just isn't an option for their image generation. Text generation will try to avoid nudity, but also stops short of directly addressing it.

Sarcasm is, for the most part, very difficult to do... If ChatGPT thinks what you're trying to write is mean-spirited, it just won't do it. However, delusional/magical thinking is actually acceptable. Try asking ChatGPT how licking stamps will give you better body positivity, and it's fine, and often unintentionally very funny.

There are plenty of topics that LLMs are overly sensitive about, and uncensored models largely correct that. I'm running Wizard 30B uncensored locally, and ChatGPT for everything else. I'd like to think I'm not a weirdo, I just like D&D... a lot, lol... and even with my use case I'm bumping my head on some of the censorship issues with LLMs.

[-] Spzi@lemm.ee 2 points 11 months ago

Interesting, may I ask you a question regarding uncensored local / censored hosted LLMs in comparison?

There is this idea that censorship is required to some degree to generate more useful output. In a sense, we somehow have to tell the model which output we appreciate and which we don't, so that it can develop a bias to produce more of the appreciated stuff.

In this sense, an uncensored model would be no better than a million monkeys on typewriters. Do we differentiate between technically necessary bias, and political agenda, is that possible? Do uncensored models produce more nonsense?

[-] TheWiseAlaundo@lemmy.whynotdrs.org 2 points 11 months ago

That's a good question. Apparently, these large data companies start with their own unaligned dataset and then introduce bias through training their model after. The censorship we're talking about isn't necessarily trimming good input vs. bad input data, but rather "alignment" which is intentionally introduced after.

Eric Hartford, the man who created Wizard (the LLM I use for uncensored work), wrote a blog post about how he was able to unalign LLaMA over here: https://erichartford.com/uncensored-models

You probably could trim input data to censor output down the line, but I'm assuming that data companies don't because it's less useful in a general sense and probably more laborious.
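To make the "unaligning" idea concrete: the approach Hartford describes in that post boils down to filtering refusal/moralizing responses out of the instruction dataset before fine-tuning, so the model never learns to refuse. Here's a minimal sketch of that filtering step; the marker phrases and data layout are illustrative assumptions, not his actual list or pipeline:

```python
# Sketch of dataset filtering for "uncensored" fine-tuning: drop any
# instruction/response pair whose response contains alignment boilerplate.
# The REFUSAL_MARKERS list here is a hypothetical example, not exhaustive.

REFUSAL_MARKERS = [
    "as an ai language model",
    "i cannot fulfill",
    "i'm sorry, but",
    "it is not appropriate",
]

def is_refusal(response: str) -> bool:
    """Return True if the response looks like refusal/alignment boilerplate."""
    text = response.lower()
    return any(marker in text for marker in REFUSAL_MARKERS)

def filter_dataset(examples: list[dict]) -> list[dict]:
    """Keep only the pairs whose responses contain no refusal markers."""
    return [ex for ex in examples if not is_refusal(ex["response"])]

dataset = [
    {"prompt": "Describe a sword fight.",
     "response": "Steel rang against steel as the duelists circled..."},
    {"prompt": "Describe a sword fight.",
     "response": "I'm sorry, but I cannot fulfill that request."},
]
print(len(filter_dataset(dataset)))  # 1
```

The filtered dataset is then used for an ordinary fine-tune; nothing about the training step itself changes, which is why this is cheaper than curating raw input data from scratch.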

this post was submitted on 30 Sep 2023
1090 points (98.7% liked)
