this post was submitted on 31 May 2025
57 points (89.0% liked)

Technology

[–] thomasembree@me.dm 6 points 1 week ago (28 children)

@Kissaki In another thread, people are mocking AI because the free language models they are using are bad at drawing accurate maps. "AI can't even do geography." Their conclusion: anything an AI says can't be trusted, and AI is vastly inferior to human ability.

These same people haven't figured out the difference between using a language model to draw a map and simply asking it a geography question.

[–] 2xsaiko@discuss.tchncs.de 9 points 1 week ago (2 children)

Daniel Stenberg has banned AI-generated bug reports from cURL because they were, without exception, nonsense that wasted the maintainers' time. Just because it gets a hit once doesn't mean it's good at this either.

[–] Kissaki@beehaw.org 10 points 1 week ago (1 children)

It does show that it can be a useful tool, though.

Here, the security researcher was evaluating it and stumbled upon a previously undiscovered security bug. Obviously, they didn't let the AI create the bug report without understanding it. They analyzed and verified the answer themselves, presumably reporting it in a professional and respectful way.

The cURL AI spam sits at the opposite end of that spectrum. But it doesn't really tell us anything about capabilities; it tells us more about people. In my eyes, at least.

[–] 2xsaiko@discuss.tchncs.de 7 points 1 week ago

Yeah, that’s fair. If it's verified beforehand, and what it discovered is an actual issue, why not. It does overwhelmingly attract people who have no idea what they’re doing, though, and who then submit bogus reports because the output looks good to them.

[–] thomasembree@me.dm 1 points 1 week ago* (last edited 1 week ago) (1 children)

@2xsaiko That is a poorly made AI model, then. Whoever put that system in place didn't train the model properly. In fact, I'm going to guess that you chose a random general-purpose model like ChatGPT, Llama, or Gemini.

Or you might not even realize that you need a model specifically trained to handle the kind of thing you are asking.

That isn't a limitation of AI, that is human error. Do you think people are just pretending it works or something?

[–] tuhriel@discuss.tchncs.de 3 points 6 days ago

That is the problem: they get promoted as the one-size-fits-all solution for everything, and people use them exactly as promoted.
