this post was submitted on 01 Aug 2023 to Technology (529 points, 82.5% liked)
It's not necessarily a matter of fact-checking so much as correcting for systemic biases in the data, and that's rarely easy. Systems run by humans often produce outcomes that reflect the biases of the people involved.
The power of suggestion runs fairly deep with people. You can change a hiring manager's opinion of a resume by changing only the name at the top of it. You can change the terms a college kid enrolled in a winemaking program uses to describe a white wine with a bit of red food coloring. Blind auditions for orchestras result in significantly more women being selected than unblinded auditions.
Correcting for biases is difficult, and it's especially difficult on the very large data sets used to train ChatGPT. I'm really not hopeful that ChatGPT will ever reflect only justified biases rather than the biases of the broader culture.
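To make the difficulty concrete, here's a toy sketch of one common mitigation, inverse-frequency reweighting, where over-represented groups get proportionally smaller training weights. The group labels and corpus are made up for illustration; this only fixes representation imbalance you can already measure and label, which is exactly what's hard at the scale of an LLM training corpus.

```python
from collections import Counter

def reweight(examples):
    """Weight each (group, text) example inversely to its group's
    frequency, so each group contributes equal total weight."""
    counts = Counter(group for group, _ in examples)
    n_groups = len(counts)
    total = len(examples)
    # weight = total / (n_groups * count): each group's weights sum
    # to total / n_groups, regardless of how many examples it has.
    return [(group, text, total / (n_groups * counts[group]))
            for group, text in examples]

# Toy corpus: group "A" is three times over-represented.
corpus = [("A", "doc1"), ("A", "doc2"), ("A", "doc3"), ("B", "doc4")]
weighted = reweight(corpus)
# After reweighting, groups "A" and "B" each sum to 2.0 total weight.
```

Note that this assumes every example carries a clean group label; for the web-scale text behind something like ChatGPT, the biases are mostly unlabeled and entangled with the content itself, which is why reweighting alone doesn't get you far.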