this post was submitted on 05 Jun 2025
Politics


On Monday, the FDA publicly announced the agency-wide rollout of a large language model (LLM) called Elsa, which is intended to help FDA employees—"from scientific reviewers to investigators." The FDA said the generative AI is already being used to "accelerate clinical protocol reviews, shorten the time needed for scientific evaluations, and identify high-priority inspection targets."

However, according to a report from NBC News, Elsa could have used some more time in development. FDA staff tested Elsa on Monday with questions about FDA-approved products or other public information, only to find that it provided summaries that were either completely or partially wrong.

According to Stat, Elsa is based on Anthropic's Claude LLM and is being developed by consulting firm Deloitte. Since 2020, Deloitte has been paid $13.8 million to develop the original database of FDA documents that Elsa's training data is derived from. In April, the firm was awarded a $14.7 million contract to scale the tech across the agency. The FDA said that Elsa was built within a high-security GovCloud environment and offers a "secure platform for FDA employees to access internal documents while ensuring all information remains within the agency."

top 3 comments
[–] ExtantHuman@lemm.ee 4 points 4 days ago

This is not what LLMs are designed to do. There are other AI-adjacent technologies that are far better at this kind of data analysis and pattern recognition than the glorified autocorrect that is an LLM.

[–] piccolo@sh.itjust.works 2 points 4 days ago

Hollywood lied to us. AI isn't going to end humanity in a glorious nuclear war; it'll just blindly instruct us to poison ourselves.

[–] pelespirit@sh.itjust.works 1 point 4 days ago

Is this why people were going after Ars Technica yesterday? I knew that something was in the pipeline, but I'm not positive this is the one.