anise

joined 4 days ago
[–] anise@quokk.au 1 points 4 hours ago

won't somebody think of the poor friends of childrapists who are unable to go to this one conference now? oh the humanity!

[–] anise@quokk.au 1 points 5 hours ago

of course, this witch is a known and convicted child rapist and trafficker, dead on I say, dead on

[–] anise@quokk.au 8 points 6 hours ago

I'm sure you must've heard about the disastrous effects on the environment and the electrical grid by now, as well as the crisis in computer parts (GPUs, RAM, SSDs and now also HDDs) caused by AI data centres. Besides this, AI output is polluting the internet. This can be used to very quickly spin up a lot of sites to support a narrative, or to fill a site with "content" that increases SEO to get advertiser money. This makes looking anything up these days almost impossible. This is especially a problem because AI is unreliable. AI works purely off of statistics and doesn't have any conception of truth or falsehood, so it generates what is in philosophical terms called "bullshit" (real term!). Output can be true accidentally, but you are never getting "the truth". This property has already been exploited by companies by generating data optimised for LLM consumption in order to advertise their products. Many chatbots are also built in a way that is very dangerous. They are optimised to keep you using them, which is often done by making them agree with pretty much everything you say. This has been shown multiple times to cause psychotic breakdowns in all kinds of people, even if they started out using it to, for example, write code. However, the group most at risk of this are the people using the bots as an alternative to a therapist. unfortunately, AI companies encourage this usage through initiatives like gpt health. It also turns out that AI dependence can harm your ability to learn certain things. This makes sense intuitively, a coder who relies on a chatbot to write parts of the code or to debug is less likely to develop those skills themselves. This is especially a problem with increased usage in schools. Yet more ethical problems arise with the image generation modes of AI, which, unfortunately(but unsurprisingly) turn out to be trained on like… A LOT of child porn. This has been one of the controversies with grok recently. Unfortunately, there is no real way to stop someone from asking for anything in the training data. Best you can do is either try to give negative incentive to the model or to hardecode in a bunch of phrases to automatically reject. This is a fundamental problem with the architecture. Generation of revenge porn, child porn and misinformation has run rampant. AI is also a privacy and security nightmare. Due to the fundamental architecture of AI models there is no way to distinguish between code and instructions. This means that "agents" can be injected with instructions to, for example, leak confidential data. This is a big problem with parties like hospitals attempting to integrate AI into their workflow. This is in addition to pretty much all models being run "in the cloud" due to the high costs associated with running a model. Speaking of costs, all of these models currently operate at a gigantic loss. They are currently essentially circulating an IOU between themselves and a few hardware companies(nvidia), but that cannot last forever. If any of these companies survive, they will be way more expensive to use than now. Many of the current companies are also pretty evil, being explicitly associated by figures like Peter Thiel, whose stated goal in life has been to end democracy. There are also some arguments surrounding copyright. While I do not want to strengthen copyright law and so will be careful around my comments on this topic, it is certainly true that ai often outputs essentially exactly someone elses work without crediting them.

This is all I could think of off the top of my head, but there surely is more. hope this helps!

[–] anise@quokk.au 5 points 13 hours ago (1 children)

I refuse to believe that libertarians ask that question. Surely they would be flooded with so many examples so quickly that they wouldn't try again?

[–] anise@quokk.au 1 points 1 day ago

hey, thanks for reaching out ^ ^. keeping an eye on new accounts makes sense, especially with open registrations. I was wondering about what exactly happened. I figured one mustve blocked/defeded the other but couldn't figure it out from a cursory glance at the modlog. hope no issues occur with them.

ps: I think fediseer lists the instance as registrations closed, I don't know if that is intentional or not.

[–] anise@quokk.au 1 points 2 days ago

oh goddamnit I was about to comment about the format as well but this really is the one¹ format where you shouldn't

¹it really isn't important for any format but it does always hurt me a little

[–] anise@quokk.au 5 points 2 days ago

but still better than master of one

[–] anise@quokk.au 8 points 3 days ago

it really is a card you would’ve seen on mtgcirclejerk

[–] anise@quokk.au 1 points 3 days ago* (last edited 3 days ago) (2 children)

OT: been slowly migrating from dbzer0. I broadly like the community on that instance, but it has become harder and harder for me to justify associating with anything ai-friendly. Because quokk.au is a piefed instance my client hasn't fully caught up, but they seem to be working on it.

E: ok, more issues than I thought, since it displayed this stubsack instead of the most recent one for some reason