jarfil@beehaw.org 1 point 7 months ago

That's my point: they claim to reduce misrepresentation while at the same time erasing a bunch of correct representations.

Going back to what I was saying: fine-tuning doesn't increase diversity, it only shifts the biases. Encoding actual diversity would require increasing the model's capacity, then making sure it can output every correct representation.
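
A toy sketch of what I mean (made-up numbers and hypothetical `rep_*` labels, nothing to do with real model internals): if you treat the model as a fixed distribution over the representations it actually learned, reweighting can trade probability mass around, but it can't create mass where there was none.

```python
# Toy model: a fixed distribution over representations the base model learned.
# The labels and numbers are invented for illustration.
base = {"rep_A": 0.70, "rep_B": 0.25, "rep_C": 0.05, "rep_D": 0.00}

def reweight(dist, boost):
    """Multiplicatively boost some outcomes, then renormalize."""
    shifted = {k: v * boost.get(k, 1.0) for k, v in dist.items()}
    total = sum(shifted.values())
    return {k: v / total for k, v in shifted.items()}

# "Fine-tuning" toward B and C shifts mass away from A...
tuned = reweight(base, {"rep_B": 3.0, "rep_C": 3.0})
print(tuned)
# rep_A shrinks (~0.44) while rep_B and rep_C grow (~0.47, ~0.09)...
# ...but rep_D stays at exactly 0.0: reweighting can only shift bias
# between representations the base model already encodes.
```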

Even_Adder@lemmy.dbzer0.com 3 points 7 months ago

It doesn't necessarily have to shift away from the biases that favor diversity. I think with care, you can preserve the biases that matter most. That was just their first shot at it; this seems like something you'd get better at over time.

jarfil@beehaw.org 2 points 7 months ago

I guess their main shortcoming was the cultural training set. I'm still unconvinced that level of fine-tuning is possible without increasing model size, but we'll see what happens if/when someone curates a much larger set with cultural labeling.

The labels might also need to be more granular, like "culture:subculture:period" or something (rough sketch of the idea below)... which is kind of a snake's nest by itself.
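
Here's that labeling idea sketched out; the tag format, field names, and example values are all hypothetical, not any real dataset's schema:

```python
# Hypothetical colon-delimited cultural tags: "culture:subculture:period".
# Everything here is illustrative, not an existing labeling standard.
from typing import NamedTuple, Optional

class CulturalLabel(NamedTuple):
    culture: str
    subculture: Optional[str] = None
    period: Optional[str] = None

def parse_label(tag: str) -> CulturalLabel:
    """Split a tag into up to three hierarchy levels; missing levels stay None."""
    return CulturalLabel(*tag.split(":", 2))

print(parse_label("japanese:ainu:meiji"))
# CulturalLabel(culture='japanese', subculture='ainu', period='meiji')
print(parse_label("yoruba"))
# CulturalLabel(culture='yoruba', subculture=None, period=None)
```

Even this tiny sketch shows where the snake's nest starts: who decides the taxonomy, what counts as a "period" boundary, and how you label an image that belongs to several cultures at once.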
