World News
A community for discussing events around the World
Rules:
-
Rule 1: Posts must meet the following requirements:
- Post news articles only
- Video links are NOT articles and will be removed.
- Title must match the article headline
- Not United States Internal News
- Recent (Past 30 Days)
- Screenshots/links to other social media sites (Twitter/X/Facebook/Youtube/reddit, etc.) are explicitly forbidden, as are link shorteners.
-
Rule 2: Do not copy the entire article into your post. Quoting the key points in 1-2 paragraphs is allowed (even encouraged!), but large segments of articles posted in the body will result in the post being removed. If you have to stop and think "Is this fair use?", it probably isn't. Archive links, especially the ones created on link submission, are absolutely allowed, but those that avoid paywalls are not.
-
Rule 3: Opinion articles, or articles based on misinformation/propaganda, may be removed. Sources that have a Low or Very Low factual reporting rating or MBFC Credibility Rating may be removed.
-
Rule 4: Posts or comments that are homophobic, transphobic, racist, sexist, anti-religious, or ableist will be removed. “Ironic” prejudice is just prejudiced.
-
Posts and comments must abide by the lemmy.world terms of service UPDATED AS OF 10/19
-
Rule 5: Keep it civil. It's OK to say the subject of an article is behaving like a (pejorative, pejorative). It's NOT OK to say another USER is (pejorative). Strong language is fine, just not directed at other members. Engage in good faith and with respect! Accusing another user of being a bot or paid actor also falls under this rule. Trolling is uncivil and is grounds for removal and/or a community ban.
Similarly, if you see posts along these lines, do not engage. Report them, block them, and live a happier life than they do. We see too many slapfights that boil down to "Mom! He's bugging me!" and "I'm not touching you!" Going forward, slapfights will result in removed comments and temp bans to cool off.
-
Rule 6: Memes, spam, other low-effort posts, reposts, misinformation, advocacy of violence, off-topic content, trolling, offensive content, and content about the moderators or community meta may be removed at any time.
-
Rule 7: We didn't USE to need a rule about how many posts one could make in a day, then someone posted NINETEEN articles in a single day. Not comments, FULL ARTICLES. If you're posting more than, say, 10 or so, consider going outside and touching grass. We reserve the right to limit over-posting so a single user does not dominate the front page.
We ask that users report any comment or post that violates the rules and use critical thinking when reading, posting, or commenting. Users who post off-topic spam, advocate violence, have multiple comments or posts removed, weaponize reports, or violate the code of conduct will be banned.
All posts and comments will be reviewed on a case-by-case basis. This means that some content that violates the rules may be allowed, while other content that does not violate the rules may be removed. The moderators retain the right to remove any content and ban users.
Lemmy World Partners
News !news@lemmy.world
Politics !politics@lemmy.world
World Politics !globalpolitics@lemmy.world
Recommendations
For Firefox users, there is a media bias / propaganda / fact-check plugin:
https://addons.mozilla.org/en-US/firefox/addon/media-bias-fact-check/
- Consider including the article’s mediabiasfactcheck.com/ link
Yeah, this is a weird one. I don't really know how the line gets drawn between training an AI and plagiarism. My gut feeling is that this is like suing somebody for being inspired by your work or for learning a new word from it.
Yeah, I'm not sure how I feel about it... But I somehow instinctively feel that a human being "inspired" by other works is different to a neural network being trained on a novel. I don't know that I can articulate specifically why one feels okay and the other doesn't... But that's how it feels to me.
Part of the problem is that AI research likes to use terminology that sounds like what people do, when that's not what the AI actually does.
Large language models are not intelligent in any sense. They are autocomplete on steroids. This is a computer program that was fed a book someone wrote, then mathematically tweaked to be able to guess the next word in a sentence in a way that resembles that book. That's all it does. It does not think or learn in any sense we'd apply to a human.
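To make "guess the next word" concrete, here's a deliberately tiny sketch of the idea. To be clear, this is NOT how a real LLM works internally (real models use huge neural networks trained on billions of words, not a lookup table); it just illustrates the "predict the most likely next word from what came before" part, with made-up example text:

```python
# Toy "next word guesser" (NOT a real LLM): count which word follows
# which in some source text, then generate by repeatedly picking the
# most common successor of the last word.
from collections import Counter, defaultdict

text = "the cat sat on the mat and the cat slept on the mat"
words = text.split()

# Build a bigram table: for each word, count the words that follow it.
successors = defaultdict(Counter)
for current, following in zip(words, words[1:]):
    successors[current][following] += 1

def next_word(word):
    """Most frequently observed follower of `word`, or None if unseen."""
    if word not in successors:
        return None
    return successors[word].most_common(1)[0][0]

# "Autocomplete" a short continuation, one most-likely word at a time.
out = ["the"]
for _ in range(5):
    guess = next_word(out[-1])
    if guess is None:
        break
    out.append(guess)
print(" ".join(out))  # a short continuation stitched from the source text
```

A real LLM replaces that little count table with billions of learned parameters and conditions on far more than the single previous word, but the output is still produced the same basic way: one predicted token at a time, shaped entirely by the text it was trained on.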
To me, LLMs sound like a massive plagiarism engine, and I think they should need to get a license from the authors whose works they used to make the LLM under whatever terms that author wants to give, just like a publisher needs to get permission to print a copy of the work. But copyright law has no easy "bright line" for what counts and what doesn't. So the courts will have to decide whether what the AI "creates" is similar enough to the original works to count as a violation, or if the AI and its results are transformative enough to count as something new.
I am sick of this trope of trying to argue that system X is or isn't intelligent because it was built to do something that can be done non-intelligently. LLMs are autocomplete; that's just literally what they do. The autocomplete on your phone isn't very intelligent, if at all. Humans are DNA replicators, but so are bacteria, which aren't very intelligent, if at all. You can't argue from the type and/or character of the task whether something that was built to do that task is intelligent or not. LLMs at least appear to be intelligent because they do just about everything the AI skeptics were demanding machines must do in order to prove intelligence just 5 years ago. If you want to argue they're not intelligent, you need to do much more work than just calling them names like fuzzy JPEG, stochastic parrot, and autocomplete on steroids.
I use the term "autocomplete on steroids" because it gets across a vaguely accurate idea of what an LLM is and how it works to people who are thinking of it like sci-fi movie AI. Sorry if it came across as though that was my whole reason for considering them not intelligent.
LLMs do seem to pass a lot of the intelligence tests we've come up with. Talking with one for the first time is a really uncanny experience; it's a totally different thing than the old voice assistants. But they also consistently fail at tasks that would indicate an understanding of a topic. They produce good-looking equations, but the math underneath doesn't make sense. They hallucinate facts that don't fit with the rest of what they themselves are saying, but that look similar to the way right answers are written and defended. They produce really convincing responses, but when they fail they betray some really basic failures to understand what they're saying.
I feel that LLMs are brute-forcing the tests people designed to measure intelligence. They can pass the bar exam, but they also contain thousands of successful bar exams to consult and millions of bits of text to glue those answers together with. But if you ask the LLM to actually do the job of a lawyer, they start producing all kinds of garbage that sounds good but doesn't stand up to scrutiny when someone looks up the hallucinated case references.