this post was submitted on 29 Jan 2025
16 points (94.4% liked)

askchapo

22888 readers
1 user here now

Ask Hexbear is the place to ask and answer ~~thought-provoking~~ questions.

Rules:

  1. Posts must ask a question.

  2. If the question asked is serious, answer seriously.

  3. Questions where you want to learn more about socialism are allowed, but questions in bad faith are not.

  4. Try !feedback@hexbear.net if you have questions regarding moderation, site policy, the site itself, development, volunteering, or the mod team.

founded 4 years ago

Most people do not read the article link that's posted. So I put an AI summary of the link as a comment, behind a spoiler tag, so that anyone who doesn't want to engage with it doesn't have to. I also included the full article text so people could read the article more easily, likewise behind a spoiler so it doesn't take up a full page of a comment. It got removed by a mod as AI slop.

I could use AI on a headline and you would never know the difference. I could also just say a summary is my own and you probably wouldn't know the difference. Not punishing people for being transparent about using LLMs, when they aren't forcing the reader to engage with the output, is a net positive and a good practice to teach. The alternative is that people keep using them and just pretend they aren't.

top 41 comments
[–] WhyEssEff@hexbear.net 33 points 1 month ago* (last edited 1 month ago) (1 children)

honesty is only a virtue unalloyed. the goal is to eradicate AI slop in this space. why would we allow it under the pretense of 'at least they admit it?' that's not the goal. the goal is to remove it entirely. when it's detected, it should be gone.

it is also not at all an accessibility aid. as the exact demographic of person (rather severe presentation of ADHD) who would supposedly be most aided by this, as well as being a data science major, I wholeheartedly reject the idea that it in any way meets an acceptable standard for constituting that. the average person genuinely doesn't know the sheer volume of subtle fuckups and misinformation these diceroll plagiarism boxes output even when provided the exact text they are supposed to paraphrase. rather, its main effect–due to the output 'seeming right'–is a disinformative one, encouraging people to skip the article and defer to the generated 'summary.' I simply do not think this is a sound argument.

just write the summary yourself. I assume you've read the article. It can be a paragraph. let's say you don't want to. we can access the text. we can access these chatbots. we can toss the article at the chatbots on our own time. I don't want AI slop on this forum at all and oppose the normalization of it, especially under flimsy pretenses such as this.

[–] Antiwork@hexbear.net 2 points 1 month ago (1 children)

This:

I don't want AI slop on this forum at all and oppose the normalization of it, especially under flimsy pretenses such as this.

Exactly why it's under a spoiler. So you don't have to engage with it at all.

[–] WhyEssEff@hexbear.net 23 points 1 month ago* (last edited 1 month ago) (1 children)

I […] oppose the normalization of it [on this forum]

[–] Antiwork@hexbear.net 2 points 1 month ago (2 children)

So the opposite effect is the normalization of it hiding in plain sight. I would much rather promote honesty and allow people the option to disengage.

[–] WhyEssEff@hexbear.net 21 points 1 month ago* (last edited 1 month ago) (1 children)

this is just the argument libertarians use for why you can't ever regulate anything? this is not a free-speech radical forum. we're not making market solutions for content here. in the same vein in which we have an automatic slur filter, remove blatant racism, and attempt to weed out subtle racism, the solution isn't normalizing the open racism–the solution is stamping it out with an iron fist whenever it's caught. yes–things slip through the cracks, it's imperfect–but it's infinitely better than Twitter despite its imperfections, and it wards away the people who are incentivized by its normalization. I would personally like this site to strive to be a space free from this slop. There are numerous ethical, labor, environmental, and health issues with its normalization and usage, and I'd like to be in a space carved away from indulgence in it in an open and unabashed manner. I feel uncomfortable with the encouragement of usage of or reliance on it in any capacity or degree of separation, especially systematically. Again:

just write the summary yourself. I assume you've read the article. It can be a paragraph. let's say you don't want to. we can access the text. we can access these chatbots. if we're so inclined, we can toss the article at the chatbots on our own time.

[–] Antiwork@hexbear.net 1 points 1 month ago (1 children)

Comparing using an LLM to racism and other forms of bigotry is def a thing to do.

[–] WhyEssEff@hexbear.net 12 points 1 month ago* (last edited 1 month ago) (1 children)

hey, even though I've emphasized it again, you still haven't responded to my last point. I have to ask:

  1. why can't you write the summaries yourself? it's a minute at most if you're reading the article before you post it
  2. why can't you copy the byline if you refuse to put in the minute of work to summarize the article you've read?
  3. even assuming both are impossible, not happening, why do you assume that the demographic of "people who want AI summaries of articles in their social media posts" doesn't know where and how to access the chatbots that can summarize articles for them? does it have to be in the post itself?
[–] Antiwork@hexbear.net 1 points 1 month ago (1 children)

In this instance one could. I was using my example as an illustration, not as one scenario to pick apart.

The point is that some people hate AI and don't want to see it. Other people are going to use it. Asking people to put up barriers, like we do with content warnings, seemed like a good compromise, but I guess most of you see LLMs on the same level as outward bigotry, which is so mind-boggling to me that I don't really care to engage in the nonsense.

[–] WhyEssEff@hexbear.net 10 points 1 month ago* (last edited 1 month ago) (1 children)

>springboards with a real example: i should be able to do this rule-breaking thing because i'm honest about it and it's for good reasons
>okay, here's what you could do in this real example to not do that and still fulfill those good reasons
>here's how you can ignore that i'm doing that
>no, you shouldn't be doing that, we're not going to allow it and we'll keep enforcing it
>if you don't allow it, everyone else is going to do it, secretly, so allow it if we're open about it
>here is a real example of something we don't allow and how we enforce it and that strategy seems to work better
>why are you comparing my thing to that really bad thing
>hey, you still haven't engaged with my first point, here's how not to do that, can you do that
>actually this is a broader point for hypothetical situations on principle (validating llm usage [cool, good, fine])

[–] Antiwork@hexbear.net 1 points 1 month ago* (last edited 1 month ago) (1 children)

Except the main point was referenced in the original post.

Not punishing people for being transparent about using LLMs, when they aren't forcing the reader to engage with the output, is a net positive and a good practice to teach. The alternative is that people keep using them and just pretend they aren't.

I thought this was the area of the site where we discuss rules. Guess it was just a space to point at the rule and tell me what I should be doing, and then use ad hominem to make yourself feel more right about the rules. Notes taken.

here is a real example of something we don't allow and how we enforce it and that strategy seems to work better

Hahahaha, that's so funny. Here's this thing that is outward bigotry vs. a thing some of us don't like. Yeah, I wonder if there's a difference. There are other things certain people don't like, yet you only put a content warning around those; it's almost like what the thing is matters for whether it gets a content warning vs. removed by a mod.

[–] WhyEssEff@hexbear.net 12 points 1 month ago* (last edited 1 month ago) (2 children)

a thing some of us don't like

which we're not allowing on this forum. we're not free-speech radicals, this is a site that embodies a politic. we have real political stances which we enforce as a general standard of conduct here based on broader consensus among ourselves. we're also taking an iron fist to, say, suggestions that forceful imposition of "western values" is the solution to reactionary tendencies in peripheral countries–an idea that a notable number of self-identified 'progressives' support, but one we don't tolerate on this forum. you're talking about it as if LLMs are in an apolitical vacuum and don't exacerbate real labor problems and real environmental problems and real exploitation around the world.

this isn't a very-intelligent 'you have an iphone, yet you exist in society' situation–you are making a conscious choice to use it and you can stop at any time. it is a service. it provides no real value that cannot be filled by human thought; if we do find real value in it, then it has merely that and none more. it is a service that we lived without until 2022, and–likely–a plurality, if not a majority, continue to do so. it is built on the non-consensual theft of the labor of all who have been preserved on the internet and is maintained by exploitation of the poor in the periphery. it is being used as justification to shepherd in draconian natsec clamps and chauvinist trade policies, and its use has fostered a notable acceleration of environmental damage due to its inefficiencies and the compute power it requires. its development is bankrolled by individuals who seek to use it as a springboard to finally cut the rest of humanity out of their profit model. it is notoriously unreliable, with an entire industry-established term ('hallucination') for its tendency toward misinformation. consistent usage of it results in the degradation and atrophying of internal processing, prior-held skills, and critical thinking (and once again, note that its output is notoriously unreliable) due to said functions being outsourced to it over time. it also fucking sucks at writing, and its output is annoying to read for anyone who has a functional internal metric for it, whether or not they detect its 'author.' its use is mandated neither by broader consensus among the general population nor by any authority in any capacity. just because you personally deem these acceptable doesn't mean we have to tolerate you or anyone else subjecting us to it.

your arguments seem to be coming from the fact that you cannot comprehend the disconnect between your position and the site's position here, but we are not changing the site's position merely because you refuse to engage with the multitude of points people are bringing up and just want it that way. tough shit, I guess.

[–] blunder@hexbear.net 5 points 1 month ago

order-of-lenin

GOOD fuckin post

[–] Antiwork@hexbear.net 1 points 1 month ago (1 children)

I wonder if there's a difference between that and outward bigotry. Nope, must be on the exact same level. If you truly believed they were on the same level, wouldn't you ban all users who admit to using LLMs? Because I would hope you would ban anyone who admits to bigotry and not just remove their comment.

[–] WhyEssEff@hexbear.net 7 points 1 month ago
[–] AssortedBiscuits@hexbear.net 10 points 1 month ago (1 children)
  1. Nobody's going to read the summary, human or AI, because nobody reads on this website. At best, people glance at the headline.

  2. Since nobody reads anyways, saying it's done by AI just normalizes AI for no gain whatsoever.

The real solution is to not bother writing a summary, and if you want to write a summary that nobody will read, at least do it without AI for the sake of not normalizing AI.

[–] Antiwork@hexbear.net 2 points 1 month ago (1 children)

The reason I wanted to use them is that they help me when people post them, but I guess I'm nobody. Ah well.

[–] AssortedBiscuits@hexbear.net 10 points 1 month ago (1 children)

You are just wasting your time. The only person who thinks it's a good idea is you. Nobody else here thinks it's a good idea. At this point, your options are to either reconsider using AI to write summaries or do it anyway and not say so.

[–] Antiwork@hexbear.net 1 points 1 month ago (1 children)

It would be better if you didn't claim to speak for everyone, and yet you continue.

[–] AssortedBiscuits@hexbear.net 8 points 1 month ago* (last edited 1 month ago)

By "here," I mean this entire post that only you the OP think is a good idea. Or is there any comment that I missed?

People who think using AI for article summaries is good:
You

People who think using AI for article summaries is trash:
WhyEssEff
sgtlion (sgtlion only said AI is good for coding and debugging and said that AI is 90% slop)
DoiDoi
MiraculousMM
RotundLadSloopUnion
Leon_Grotsky
imogen_underscore
Infamousblt
blunder
Me

People who are asking clarifying questions:
glans

People who are shitposting:
Lemmygradwontallowme

Do you dispute how I'm characterizing their opinions on using AI for article summaries?

[–] DoiDoi@hexbear.net 30 points 1 month ago* (last edited 1 month ago) (1 children)

All uses of AI are slop 100% of the time

If people already can't commit to reading an article they probably shouldn't suffer further brain damage from the shit machine

[–] sgtlion@hexbear.net 5 points 1 month ago (1 children)

100% remains an overstatement; 90%, plausibly. But I continue to argue that AI significantly helps me in practice with coding, debugging, and learning stuff.

[–] Antiwork@hexbear.net 3 points 1 month ago (1 children)

By using an LLM to learn anything or help you in your career you are the cause of all climate catastrophe.

[–] blunder@hexbear.net 7 points 1 month ago* (last edited 1 month ago)

There may be professional uses which, although they do not justify the environmental or labor impact, do have operational merit; shitposting on this website is most certainly not one of them. This place is for human beings.

[–] imogen_underscore@hexbear.net 20 points 1 month ago

please don't

[–] MiraculousMM@hexbear.net 16 points 1 month ago

I don't feel that any potential benefits of the bazinga plagiarism machine outweigh the very obvious downsides, like how the outputs are often completely wrong, or the massive energy consumption and environmental impact the AI industry runs on.

[–] Leon_Grotsky@hexbear.net 15 points 1 month ago* (last edited 1 month ago) (1 children)

As an Article Reader:

Are you verifying the "AI" is spitting out something legible and actually carrying the spirit of the source material?

If so, how much more effort is that than typing up your own summary, eliminating the uncertainty of the "AI"?

[–] Antiwork@hexbear.net 2 points 1 month ago (1 children)

It has less to do with me and more to do with what can and will happen. People will say they wrote it when they indeed did not.

I think having set boundaries around AI is more helpful than tasking mods with judging what they believe is AI. I've seen people on here reply with AI generations and say that it's AI; I don't really have a problem with that and actually find it to be a good practice. I just think we should go a step further and put it behind a spoiler, so people who don't want to engage with it don't have to.

Removing it entirely will just mean it gets posted without people saying it's AI and without spoiler tags.

[–] RotundLadSloopUnion@hexbear.net 13 points 1 month ago (1 children)

Removing it entirely will just mean it gets posted without people saying it's AI and without spoiler tags.

"Sometimes its hard to tell if something is AI therefore we should stop moderating AI entirely" is not the winning argument you think it is.

I noticed that you haven't addressed any of the environmental concerns people have brought up in this thread. Putting everything else aside, doesn't the effect on the climate bother you in the slightest? Don't you feel that alone is a very compelling argument for refusing to engage with or promote it?

[–] Antiwork@hexbear.net 1 points 1 month ago* (last edited 1 month ago) (1 children)

My point is that trying to block LLMs is playing whack-a-mole. Acknowledging their presence and putting up some guardrails seems to be a better way to go about it, but the community obviously disagrees with me and I'm fine with that. Not a big deal to me at all.

It's not a very compelling argument. I can run an LLM on my own device, as I did for this summary. How much energy do people waste on unnecessary things that also contribute to climate catastrophe?

Gamers used 34 TWh of energy almost a decade ago, and it's probably way higher now. Yet we have a community for gamers on this site. Do they not care about the climate disaster they're adding to every year? We could do this with almost anything and everything that uses electricity that people learn from or enjoy using.
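
For reference, a minimal sketch of what I mean by an on-device summary. This assumes the local model is served by something like Ollama's default HTTP API; the model name and article text are placeholders, not my actual setup:

```python
# Hypothetical sketch: get a one-paragraph summary from a locally hosted
# model via Ollama's default endpoint. Model name and article text are
# placeholders; nothing here leaves the device.
import json
import urllib.request

ARTICLE = "...full article text pasted here..."

payload = json.dumps({
    "model": "llama3.2",  # whichever small model is installed locally
    "prompt": f"Summarize this article in one paragraph:\n\n{ARTICLE}",
    "stream": False,
}).encode("utf-8")

req = urllib.request.Request(
    "http://localhost:11434/api/generate",  # Ollama's default API address
    data=payload,
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    summary = json.loads(resp.read())["response"]

print(summary)
```

Whether even that local compute is acceptable is, I guess, exactly what's in dispute.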

[–] blight@hexbear.net 2 points 1 month ago

the only way you can get away from this nonsense was to make a statement that the people in power were the same people who voted against this and they were not even elected to office in any form of government or anything other then a bunch that is a lie and you know what you’re doing to the right to be a good thing for the country is the one that has done it all you can say that you don’t have a problem and you don’t care what anyone thinks you have to do with the rest and i don’t care about it so i just don’t want you being upset or upset about anything else and you have no one to talk about that i just don’t care about your feelings i do not want it i just don’t like it i just wanted your opinions on this is what you do what i do not have any other than what i do what you do not care what people do not have a problem and that’s it is what you don’t want it doesn’t matter what i do i do whatever you do what you’re the one thing

[–] glans@hexbear.net 13 points 1 month ago (2 children)

So the problem is:

Most people do not read the article link that's posted.

Proposed solution:

So I put an AI summary of the link as a comment

Do you think the AI summary prompts people to read the link?

[–] Lemmygradwontallowme@hexbear.net 6 points 1 month ago* (last edited 1 month ago)

Well, if the AI summary is so horrible they instead resort to the original article itself, I don't see why not?

[–] Antiwork@hexbear.net 4 points 1 month ago

I do. I think it's a step between a headline and an article that actually entices people to read more.

[–] Infamousblt@hexbear.net 10 points 1 month ago (1 children)

I'm glad to hear you have found a way to use AI that isn't stealing the data and work of other people, and that is also using all of the extra green energy capacity we have, thereby not contributing to global warming at all.

If you think that "but it's okay if you don't want to see it" is a good argument in favor of this you've got some more thinking to do.

[–] Antiwork@hexbear.net 4 points 1 month ago (1 children)

Yes, on-device LLMs use so much power that my device was drained of energy by this one summary. And what work is being stolen by a summary?

Somehow this site has turned from "don't blame individuals for global warming" to "anyone who even downloads R1 doesn't care about humanity."

[–] Infamousblt@hexbear.net 9 points 1 month ago (1 children)

It's clear you literally have no clue how AI works at all

[–] Antiwork@hexbear.net 3 points 1 month ago

How much energy did it take to create R1?

[–] merthyr1831@lemmy.ml 5 points 1 month ago

This doesn't need to be an LLM. There are already bots on Lemmy that condense articles and post the result as a comment beneath posts with article links. They're usually quite useful, and they don't rely on LLMs, so they don't hallucinate any content that wasn't in the body of the article itself.

Would not mind seeing that brought here.
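
For the curious, a minimal sketch of the kind of non-LLM, extractive approach such a bot might take: score sentences by word frequency and quote the top few verbatim, so nothing can be hallucinated. The scoring scheme here is an assumption on my part, not how any particular Lemmy bot actually works:

```python
# Hypothetical extractive summarizer: rank sentences by the frequency of
# the words they contain and return the best few verbatim. No LLM, so
# the output can only ever quote the article itself.
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is",
             "it", "that", "for", "on", "with", "as", "was", "are"}

def summarize(text: str, n_sentences: int = 3) -> str:
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    words = [w for w in re.findall(r"[a-z']+", text.lower())
             if w not in STOPWORDS]
    freq = Counter(words)

    def score(sentence: str) -> float:
        tokens = re.findall(r"[a-z']+", sentence.lower())
        if not tokens:
            return 0.0
        return sum(freq[t] for t in tokens if t not in STOPWORDS) / len(tokens)

    # Take the highest-scoring sentences, then restore article order.
    top = set(sorted(sentences, key=score, reverse=True)[:n_sentences])
    return " ".join(s for s in sentences if s in top)

if __name__ == "__main__":
    print(summarize(open("article.txt").read()))
```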

[–] AtmosphericRiversCuomo@hexbear.net 3 points 1 month ago (1 children)

"Just write your own summary" probably burns more fossil fuel than using deepseek, just because of how wasteful our food systems are. I think any work that's not powered by a ham sandwich is better than average.

People here have decided they don't like this tech first, and therefore have decided that the environmental cost of it is too high second. If you use it you're burning the earth, you're not a real leftist, there are no legitimate uses, etc. There's zero room for discussion around it.

Let them have their space the way they want it. It doesn't matter really. The goal posts will keep moving as this tech develops anyway.

[–] Antiwork@hexbear.net 2 points 1 month ago

Yeah, I was thinking about this. It's more about who is pushing it that makes them hate the tech. It's the same people who pushed bitcoin, who are totally ass, so it's just about creating a frame through which to hate it. I was trying to have a discussion about it, but it's obvious they don't want to, and that's fine. I enjoy using it, and what's kinda funny is I've seen countless people on here also talk about what they like about the tech, so I know it's not just a me thing. But like Reddit, once you have a dissenting opinion from the hive mind it's just pile-on time, especially when a few people agree with the mods and admins.

[–] umbrella@lemmy.ml 2 points 1 month ago

eh, i can run it through my own llm if i feel like it. i usually don't.