this post was submitted on 27 Sep 2023
50 points (98.1% liked)

Firefox


Is there an extension that warns you when you are wasting time reading AI-generated crap?

Case in point, I was reading an article that claimed to compare Kubernetes distros and wasted a good few minutes before realizing it was full of crap.

all 15 comments
[–] Nawor3565@lemmy.blahaj.zone 6 points 2 years ago (1 children)

Unfortunately, even OpenAI themselves took down their AI detection tool because it was too inaccurate. It's really, REALLY hard to detect AI writing with current technology, so any such add-on would probably need to use a master list of articles that are manually flagged by humans.
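Something like that is simple enough to sketch. Here's a minimal content script, assuming a hypothetical human-curated list of flagged URLs published as JSON at example.com (no such list actually exists, and the add-on would still need a manifest granting host permissions for that URL):

```typescript
// Minimal sketch of a "flagged article" warning content script.
// Assumes a hypothetical human-curated list of flagged URLs published as JSON
// at https://example.com/flagged-articles.json; no such list actually exists.

interface FlaggedList {
  urls: string[]; // exact article URLs that human reviewers marked as AI filler
}

const LIST_URL = "https://example.com/flagged-articles.json"; // hypothetical endpoint

async function warnIfFlagged(): Promise<void> {
  const response = await fetch(LIST_URL);
  const list = (await response.json()) as FlaggedList;

  if (list.urls.includes(window.location.href)) {
    // No detection model involved: the warning comes purely from human flags.
    const banner = document.createElement("div");
    banner.textContent =
      "Warning: this article was flagged by human reviewers as likely AI-generated filler.";
    banner.style.cssText =
      "position:fixed;top:0;left:0;right:0;padding:8px;background:#c0392b;color:#fff;z-index:99999;text-align:center;";
    document.body.prepend(banner);
  }
}

warnIfFlagged().catch(console.error);
```

The point of the sketch is that all the intelligence lives in the human-maintained list; the add-on itself does no detection at all.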

[–] DogMuffins@discuss.tchncs.de 4 points 2 years ago (2 children)

If you could detect AI-authored stuff, couldn't you use that to train your LLM?

[–] BetaDoggo_@lemmy.world 2 points 2 years ago

It could be used to create a reward model like what is done right now with RLHF.
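As a rough illustration of that idea, here's a toy sketch where a detector score stands in for the reward model; the "detector" is a made-up placeholder heuristic, not a real classifier:

```typescript
// Toy sketch of using an AI-text detector as a reward signal, analogous to the
// reward model in RLHF. The detector below is a placeholder heuristic invented
// for illustration, not a real classifier.

function toyDetectorScore(text: string): number {
  // Placeholder: pretend long, uniform sentences look "more AI". Returns 0..1.
  const sentences = text.split(/[.!?]+/).filter((s) => s.trim().length > 0);
  const avgLen =
    sentences.reduce((sum, s) => sum + s.trim().length, 0) / Math.max(sentences.length, 1);
  return Math.min(avgLen / 200, 1);
}

// Higher reward for generations the detector fails to flag as AI-written.
function rewardFromDetector(generatedText: string): number {
  return 1 - toyDetectorScore(generatedText);
}

console.log(rewardFromDetector("Short, varied sentences. Probably written by a person."));
```

In an actual RLHF-style setup this reward would feed into policy optimization (e.g. PPO), so the model would drift toward outputs the detector can no longer flag.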

[–] apis@beehaw.org 1 points 2 years ago

Suspect it would operate more on the basis of a person confirming that the article is of reasonable quality & accuracy.

So not unlike editors selecting what to publish, what to reject & what to send back for improvements.

If good articles by AI get accepted & poor articles by people get rejected, there may still be knock-on effects, but at face value it might be good enough for those of us just looking for something worth reading.

[–] monobot@lemmy.ml 5 points 2 years ago

I think at some point we will have to introduce human confirmation from the creator side.

I don't mind someone using ChatGPT as a tool to write better articles, but most of the internet is senseless BS.

[–] starman@programming.dev 2 points 2 years ago (2 children)

It's not possible to create 100% reliable ML-generated content detection.

[–] blakeus12@hexbear.net 3 points 2 years ago

Marxist-Leninists can't reliably detect content D:

[–] JohnDClay@sh.itjust.works 2 points 2 years ago (1 children)

I don't even know of any that are 75% reliable. It's a really hard problem.

[–] strawberry@artemis.camp 2 points 2 years ago (1 children)

wasn't OpenAI's AI detector like 25% accurate? at that point it's just random chance mostly

[–] Cwilliams@beehaw.org 2 points 2 years ago

I know there's GPTZero. I personally don't trust it at all, but you could still look into it.

[–] naut@infosec.pub -2 points 2 years ago

[–] Cwilliams@beehaw.org -3 points 2 years ago

Another thought: does it really matter if it's AI-generated or not? As long as you can fact-check the content and the quality isn't horrible, I don't see why it matters if it's written by a real person or not.