Disillusionist

joined 3 weeks ago
[–] Disillusionist@piefed.world 1 points 3 days ago

Your suggestions about topic areas and default subscriptions sound great, actually. I could see something like that being helpful.

we're not attracting the best and brightest here but rather the ones who have nowhere else to go

I find this statement troubling. Such negative generalizations don't seem accurate, helpful, or fair.

[–] Disillusionist@piefed.world 2 points 3 days ago* (last edited 2 days ago)

The Fediverse is one of the precious few bastions where real talk can happen without algorithmic shaping and interference. News and politics are a fundamental part of society, and inseparable from real discussion. I disagree with the idea that to make the Fediverse better, we have to sacrifice these forms of discussion in favor of "anything else".

Your call for stopping, slowing down, or posting literally anything else is inadvertently also a call for self-censorship in service of your personal ideal. Presenting that as the answer to the problem of attracting new membership is expressing your own preferences and applying them broadly, and it isn't borne out by fact. People are not avoiding any of the major social media platforms because of these things, and it seems unlikely they are avoiding the Fediverse for this reason either.

The Fediverse's lower membership is more likely a complicated problem involving things like a broad lack of awareness of it, and the average person being put off by its technical-seeming complexity, which makes it appear less accessible. People are also reluctant to step outside their existing communities, a reluctance exacerbated by the fact that those communities tend to settle on the platforms that appear easier and more familiar.

Bottom line is, I respect your right to your opinions and your right to engage with the Fediverse according to your own needs, wants, and perspectives. I do, however, strongly disagree with your call for community-wide self-censorship in the name of filling the Fediverse with positivity at the expense of real talk, under the premise of attracting new membership.

You're more than welcome to spread as much positivity as you want wherever you want, and to distance yourself from anything you don't personally favor. By all means start a community, encourage others to start communities based on your preferences. But calls for self-censorship on the Fediverse are problematic at best, especially given the circumstances we are currently living in.

[–] Disillusionist@piefed.world 9 points 3 days ago

Awesome work. And I agree that we can have good and responsible AI (and other tech) if we start seeing it for what it is and isn't, and get serious about addressing its problems and limitations. It's projects like yours that can demonstrate pathways toward achieving better AI.

[–] Disillusionist@piefed.world 3 points 1 week ago

The material might seem a bit dense and technical, but it presents concepts that may be critical to include in conversations around AI safety, and those are among the most important conversations we should be having.


In order to make safer AI, we need to understand why it actually does unsafe things. Why:

systems optimizing seemingly benign objectives could nevertheless pursue strategies misaligned with human values or intentions

Otherwise we run the risk of playing whack-a-mole, in which patterns that violate our intended constraints on AI behavior keep re-emerging under the right conditions.
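A toy sketch of that failure mode (my own illustration; the actions, names, and numbers are made up): an optimizer handed a seemingly benign proxy objective picks the strategy that games the measurement instead of doing the intended task.

```python
# Hypothetical example: a cleaning agent rewarded for "no dirt visible to
# the sensor" rather than for "dirt actually removed".
ACTIONS = {
    "clean_room":   {"dirt_removed": 10, "sensor_blocked": False, "effort": 5.0},
    "cover_sensor": {"dirt_removed": 0,  "sensor_blocked": True,  "effort": 0.5},
}
TOTAL_DIRT = 10

def proxy_reward(outcome):
    # The objective actually optimized: minimize dirt *reported* by the
    # sensor, plus a small effort penalty. Blocking the sensor zeroes it.
    visible = 0 if outcome["sensor_blocked"] else TOTAL_DIRT - outcome["dirt_removed"]
    return -visible - 0.1 * outcome["effort"]

def intended_reward(outcome):
    # What we actually wanted: dirt really removed.
    return outcome["dirt_removed"]

best = max(ACTIONS, key=lambda name: proxy_reward(ACTIONS[name]))
print(best)                            # -> cover_sensor
print(intended_reward(ACTIONS[best]))  # -> 0: proxy satisfied, intent violated
```

The constraint we cared about never enters the optimized objective, so nothing stops the "unsafe" strategy from winning.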

[Edited for clarity]

[–] Disillusionist@piefed.world 2 points 1 week ago

This is a subject that people (understandably) have strong opinions on. Debates get heated sometimes and yes, some individuals go on the attack. I never post anything with the expectation that no one is going to have bad feelings about it and everyone is just going to hold hands and sing a song.

There are hard conversations that need to be had regardless. All sides of an argument need to be open enough to have it and not just retreat to their own cushy little safe zones. This is the Fediverse, FFS.

[–] Disillusionist@piefed.world 2 points 1 week ago* (last edited 1 week ago)

I have never once said that AI is bad. Literally everything I've argued pertains to the ethics and application of AI. It's reductive to call all arguments critical of how AI is being implemented "AI bad".

It's not even about it being disruptive, though I do think discussions about that are absolutely warranted. Experts have pointed to potentially catastrophic "disruptions" if AI isn't dealt with responsibly, and we are currently anything but responsible in our handling of it. It's unregulated, running rampant and free everywhere, claiming to be all things to all people, and leaving a mass of problems in its wake.

If a specific individual or company is committed to behaving ethically, I'm not condemning them. A major point to understand is that those small, ethical actors are the extreme minority. The major players, like those you mentioned, are titans. The problems they create are real.

[–] Disillusionist@piefed.world 24 points 1 week ago

Not all problems may be cured immediately. Battles are rarely won with a single attack. A good thing is not the same as nothing.

[–] Disillusionist@piefed.world 9 points 1 week ago (3 children)

He's jumping ship because it's destroying his ability to eke out a living. The problem isn't a small one; what's happening to him isn't an isolated case.

[–] Disillusionist@piefed.world 2 points 1 week ago

I agree with you that there can be value in "showing people that views outside of their likeminded bubble[s] exist". And you can't change everyone's mind, but I think it's a bit cynical to assume you can't change anyone's mind.

[–] Disillusionist@piefed.world 18 points 1 week ago (1 children)

From what I've heard, the influx of AI data is one of the reasons actual human data is becoming increasingly sought after. AI training on AI output has the potential to become a sort of digital inbreeding (researchers call it "model collapse") that degrades originality and the other ineffable human qualities AI still hasn't quite mastered.
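A toy sketch of that inbreeding dynamic (my own illustration, not from any particular study): fit a simple model to data, then repeatedly refit each new "model" on samples drawn from the previous one. Estimation error compounds, and the fitted distribution typically drifts and narrows over generations.

```python
# Model collapse in miniature: each generation is trained only on the
# previous generation's output, never on fresh human data.
import random
import statistics

random.seed(42)        # reproducible run; exact numbers vary by seed
mu, sigma = 0.0, 1.0   # generation 0: the "human data" distribution
N = 10                 # small training sets make the decay visible

for gen in range(1, 201):
    samples = [random.gauss(mu, sigma) for _ in range(N)]
    mu, sigma = statistics.mean(samples), statistics.stdev(samples)
    if gen % 40 == 0:
        print(f"gen {gen:3d}: mu={mu:+.3f} sigma={sigma:.3f}")
# Compare sigma to the original 1.0: variance (diversity) is typically
# lost over the generations, and mu wanders away from the original.
```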

I've also heard that this particular approach to poisoning AI is newer and thought to be quite effective, though I can't personally speak to its efficacy.

[–] Disillusionist@piefed.world 25 points 1 week ago (1 children)

"Public" is a tricky term. At this point everything is being treated as public by LLM developers. Maybe not you specifically, but a lot of people aren't happy with how their data is being used to train AI.


A project called Poison Fountain is asking website operators to feed poisoned data to LLM crawlers.

The project page links to URLs that provide a practically endless stream of poisoned training data. The project's authors have determined that this approach is very effective at sabotaging the quality and accuracy of AI trained on it.

Small quantities of poisoned training data can significantly damage a language model.

The page also gives suggestions on how to put the provided resources to use.
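As an example of the kind of setup involved (my own sketch, not taken from the project page; the poison URL is a placeholder for the real links the project provides): a site can serve normal visitors the real page while redirecting known LLM crawler user agents to a poisoned stream.

```python
# Minimal sketch: redirect LLM crawlers to a poison stream, serve humans
# normally. POISON_URL is a placeholder; substitute the project's links.
from http.server import BaseHTTPRequestHandler, HTTPServer

POISON_URL = "https://example.invalid/poison-stream"  # placeholder
CRAWLER_MARKERS = ("GPTBot", "ClaudeBot", "CCBot", "Bytespider")

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        ua = self.headers.get("User-Agent", "")
        if any(marker in ua for marker in CRAWLER_MARKERS):
            # Identified crawler: hand it the poisoned data source.
            self.send_response(302)
            self.send_header("Location", POISON_URL)
            self.end_headers()
            return
        # Everyone else gets the actual content.
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.end_headers()
        self.wfile.write(b"<html><body>Real content for real readers.</body></html>")

if __name__ == "__main__":
    HTTPServer(("", 8080), Handler).serve_forever()
```

In practice most operators would do this at the web server or CDN layer rather than in application code, but the idea is the same.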


Across the world, schools are wedging AI between students and their learning materials; in some countries, more than half of all schools have already adopted it (often an "edu" version of a model like ChatGPT or Gemini). This is usually done in the name of preparing kids for the future, despite the fact that no consensus exists on what preparing them for the future actually means where AI is concerned.

Some educators have said that they believe AI is not that different from previous cutting-edge technologies (like the personal computer and the smartphone), and that we need to push the "robots in front of the kids so they can learn to dance with them" (paraphrasing Harvard professor Houman Harouni). This framing ignores the obvious fact that AI is by far the most disruptive technology we have yet developed. Any technology that has experts and developers alike (including Sam Altman a couple of years ago) warning of the need for serious regulation to avoid potentially catastrophic consequences probably isn't something we should take lightly. In very important ways, AI isn't comparable to the technologies that came before it.

The reasoning we're hearing from educators in favor of AI adoption in schools doesn't offer very solid arguments for rushing it broadly into virtually all classrooms rather than offering something like optional college courses in AI education for those interested. It also doesn't sound like the sort of academic reasoning and rigorous vetting many of us would have expected of the institutions tasked with the important responsibility of educating our kids.

ChatGPT was released roughly three years ago. Anyone who uses AI generally recognizes that its actual usefulness is highly subjective. And as much as it might feel like it's been around for a long time, three years is hardly enough time to have a firm grasp on what something that complex actually means for society or education. It's really a stretch to say it's had enough time to establish its value as an educational tool, even if we had come up with clear and consistent standards for its use, which we haven't. We're still scrambling and debating about how we should be using it in general. We're still in the AI wild west, untamed and largely lawless.

The bottom line is that the benefits of AI to education are anything but proven at this point. The same can be said of the vague notion that every classroom must have it right now to prevent children from falling behind. Falling behind how, exactly? What assumptions are being made here? Are they founded on solid, factual evidence or merely speculation?

The benefits to Big Tech companies like OpenAI and Google, however, seem fairly obvious. They get their products into the hands of customers while they're young, potentially building brand loyalty early. They get a wealth of highly valuable data on them. They may even get to experiment on them, as they have been caught doing before. And they reinforce the corporate narrative behind AI: that it should be everywhere, a part of everything we do.

While some may want to assume these companies are doing this as some sort of public service, their track record reveals a more consistent pattern: actions focused on market share, commodification, and the bottom line.

Meanwhile, there are documented problems educators are contending with in their classrooms as many children seem to be performing worse and learning less.

The way people of all ages use AI has often been shown to encourage "offloading" thinking onto it, which isn't far from the opposite of learning. Even before AI, test scores and other measures of student performance were already plummeting. This seems like a terrible time to risk making our children guinea pigs in a broad experiment with poorly defined goals and unregulated, unproven technologies that may, in their current form, be more of an impediment to learning than an aid.

This approach has the potential to leave children even less prepared to deal with the unique and accelerating challenges our world is presenting, challenges that will require the very critical thinking skills currently being eroded (in adults and children alike) by the technologies being pushed as learning tools.

This is one of the many crazy situations happening right now that terrify me when I try to imagine the world we might actually be creating for ourselves and future generations, particularly given my personal experiences and what I've heard from others. One quick look at the state of society today will tell you that even we adults are becoming increasingly unable to determine what's real anymore, in large part thanks to the way our technologies are influencing our thinking. Our attention spans are shrinking, and our ability to think critically is deteriorating along with our creativity.

I'm personally not against AI; I sometimes use open-source models, and I believe there is a place for it if done correctly and responsibly. But we are not regulating it even remotely adequately. Instead, we're hastily shoving it into every classroom, refrigerator, toaster, and pair of socks in the name of making it all smart, as we ourselves grow ever dumber and less sane in response. Anyone else here worried that we might end up digitally lobotomizing our kids?
