this post was submitted on 03 Mar 2026
305 points (96.9% liked)

Technology

[–] FauxPseudo@lemmy.world 86 points 1 day ago (4 children)

From a Facebook post I made on February 17th:

There are giant AI data firms that promise they can go through massive troves of data and pull out general and specific information from them. Information that is actionable and accurate. Give it 6 million data points and it'll find all the links and organize them for you and unmask hidden details that aren't visible to the naked eye.

Not one of those companies is stepping up to go through the publicly released Epstein files.

[–] Spaniard@lemmy.world 4 points 21 hours ago* (last edited 21 hours ago) (1 children)

Today I asked an AI to tell me which phone providers were available, sorted by price and offers, and it got things wrong constantly. When I pointed this out, the AI corrected most of it, but also removed some entries that were accurate, for some reason.

It would have been quicker if I had done it myself instead of asking the AI. Oh, and it also didn't list all the companies.

Maybe those companies have better AI that makes no mistakes, but I doubt it. I think the LLMs will lie, and no one has time to check whether they're correct.

[–] bleistift2@sopuli.xyz 3 points 20 hours ago* (last edited 18 hours ago) (1 children)

AI info is never up to date. What were you expecting?

[–] Spaniard@lemmy.world 2 points 18 hours ago* (last edited 18 hours ago) (1 children)

Then how come it ended up giving me the right answer, albeit while removing some previously correct answers? (It dropped a few companies for some reason.)

Anyway, that was a small and easy-to-check piece of misinformation, but if they have over three decades of online information about me, there's no way a person is going to confirm the LLM didn't bullshit its way to an answer just to satisfy the human.

[–] madmantis24@lemmy.wtf 3 points 18 hours ago

These models aren't going to produce accurate information about the people they investigate, and it won't even matter whether it's accurate. What "matters" is that their reports will add new layers to the facade of legitimacy of whatever story the authorities using them want to construct.

[–] Randomgal@lemmy.ca 29 points 1 day ago (1 children)

This is what I find crazy. Where are the AI bros chewing through the Epstein files?

[–] osaerisxero@kbin.melroy.org 21 points 1 day ago

I would be shocked if someone hasn't shoved them into a local model somewhere, but all the big ones would filter them to death with content restrictions

[–] General_Effort@lemmy.world 5 points 1 day ago (1 children)

There were reports of people trying to unredact the files almost immediately.

[–] FauxPseudo@lemmy.world 4 points 1 day ago (1 children)

But that's not the same, is it?

[–] General_Effort@lemmy.world 2 points 19 hours ago (1 children)

I don't think you can do literally the same thing on the Epstein files. Maybe I'm misunderstanding what you have in mind.

[–] FauxPseudo@lemmy.world 1 points 18 hours ago (1 children)

In theory, using the released files together with information from public sources, it should be possible to figure out who those redacted names are based on writing style and other factors. We should be able to deanonymize them.
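The core of that writing-style idea is stylometry: build a frequency profile of some feature (character n-grams are a common choice) for the unattributed text, then compare it against profiles built from known authors' samples. A minimal, self-contained sketch follows; the author names and sample texts are invented for illustration, and this toy cosine-similarity approach is far cruder than any real attribution method.

```python
from collections import Counter
from math import sqrt


def char_ngrams(text, n=3):
    """Count character n-grams, a common stylometric feature."""
    text = " ".join(text.lower().split())  # normalize whitespace and case
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))


def cosine(a, b):
    """Cosine similarity between two n-gram count vectors."""
    dot = sum(count * b[gram] for gram, count in a.items() if gram in b)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0


def best_match(unattributed, candidates):
    """Rank known-author samples by similarity to the unattributed text."""
    profile = char_ngrams(unattributed)
    scores = {name: cosine(profile, char_ngrams(sample))
              for name, sample in candidates.items()}
    return max(scores, key=scores.get), scores


# Hypothetical example: which candidate's style is closer?
candidates = {
    "author_a": "We shall proceed with the utmost discretion regarding the island visit.",
    "author_b": "lol yeah that party was totally crazy, cant wait for the next one!!",
}
name, scores = best_match(
    "Proceed with utmost discretion; the island visit is confirmed.", candidates
)
```

Real stylometric attribution needs much longer samples, many more features, and careful controls against false positives, which is one reason the "just run it through an AI" framing undersells the difficulty.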

[–] General_Effort@lemmy.world 1 points 16 hours ago (1 children)

Hmm. Maybe, but it's not the same problem as the ones discussed in the OP. I also have some doubts about the paper, but that's another story. You could try it out?

[–] FauxPseudo@lemmy.world 1 points 13 hours ago (1 children)

I'm not qualified to design the prompts, and home users can't really feed in 3 million+ documents.

[–] General_Effort@lemmy.world 0 points 6 hours ago (1 children)

Prompts are in the appendix: https://arxiv.org/abs/2602.16800

I don't know how far you'd get on the free tier, but it should at least be enough for a proof of principle, and to get other people to chip in. You had no qualms demanding that other people do this for free.

Mind you, this is a serious GDPR violation in Europe, so there will be serious pressure on AI companies to prevent this kind of use.

[–] FauxPseudo@lemmy.world 1 points 3 hours ago

Seriously, I'm not qualified. No amount of appendix prompts and Dunning Kruger is going to change that.

I'm not demanding anything. I'm suggesting that AI can't do what is claimed, or that the people with something to prove aren't interested in proving it.

[–] Mubelotix@jlai.lu 1 points 1 day ago (2 children)

We wouldn't want that tbh. Justice needs to be precise and backed up by tangible facts

[–] KeenFlame@feddit.nu 4 points 1 day ago

Also, don't use DNA tests or chemical analysis. It's invisible hocus pocus and can be wrong! And woe if someone who fucks and tortures kids regularly is wrongly accused of raping kids and ruining their young minds; no, that would be awful.

[–] FauxPseudo@lemmy.world 4 points 1 day ago (1 children)

You can use the results of the AI analysis to identify people and then use that to do a proper investigation. Right now none of that is happening. No speculation. No tangibles. No investigation. No indictment.

Trying to unmask people is a step in the right direction.

[–] SpikesOtherDog@ani.social 1 points 1 day ago (1 children)

I'm not a fan of genAI for most things, and the environmental aspect sucks balls, but this seems like a reasonable use of the tool that's already been built.

[–] FauxPseudo@lemmy.world 1 points 1 day ago (1 children)
[–] SpikesOtherDog@ani.social 1 points 23 hours ago (1 children)

At the very worst, the administration would put out a very confusing statement not to trust AI.

[–] FauxPseudo@lemmy.world 1 points 18 hours ago

That would be fun.