this post was submitted on 31 Mar 2025

news

[–] ANarcoSnowPlow@hexbear.net 56 points 3 days ago* (last edited 3 days ago) (7 children)

DeepSeek exposed its American counterparts for what they are: yet another grift.

At this point "AI" is nothing more than an expensive toy that consumes mammoth resources every time you play with it.

[–] Lochat@hexbear.net 18 points 3 days ago

Ok, but it made me into an anime, so look at the benefits.

[–] Hexamerous@hexbear.net 14 points 3 days ago (1 children)

Hey now, it's got legitimate use cases. We need it to sort through and filter all the other "AI"-generated garbage filling up the search engines...

[–] hello_hello@hexbear.net 9 points 3 days ago

The only way to stop an ai with a gun is a bigger ai with a gun.

[–] kristina@hexbear.net 11 points 3 days ago* (last edited 3 days ago) (1 children)

At this point "AI" is nothing more than an expensive toy that consumes mammoth resources every time you play with it.

I'm using it to write cover letters so I don't have to painfully jerk off to the company I'm applying to

[–] eldavi@lemmy.ml 6 points 3 days ago (2 children)

FWIW (and probably just me): I would use AI to tailor my resume and look for jobs that matched it, and I barely got any responses.

I switched back to a one-size-fits-all resume, stopped using AI to tailor or search, and my response rate went to 50%.

[–] Orcocracy@hexbear.net 4 points 3 days ago (1 children)

I wouldn’t be surprised if some places filter out applicants by using one of those (somewhat unreliable) AI-writing detectors, just as another way to cut down the pile of papers that an understaffed HR department has to read.

[–] eldavi@lemmy.ml 1 points 3 days ago

I think it's people expecting AI usage, so they've overcompensated in their detection of it.

[–] kristina@hexbear.net 4 points 3 days ago* (last edited 3 days ago)

I hand-tailor my resumes usually; I've just been having AI write the dick-suck blurbs and rewrite my credentials to sound better for the job app, and I edit it a bit if it sounds weird. So far most companies have responded, like a 90% rate.

[–] Sodium_nitride@lemmygrad.ml 6 points 3 days ago

DeepSeek exposed its American counterparts for what they are: yet another grift.

To be fair, DeepSeek did make genuine improvements to the computational algorithm behind transformer models, making them way more efficient. It's not like the American models were using lots of resources because they wanted to.

The fact that American AI was a grift was already evident well before DeepSeek came about. What DeepSeek really showed was that existing transformer models weren't yet optimized.

[–] ChaosMaterialist@hexbear.net 8 points 3 days ago

God made us in his likeness, and was terribly disappointed.

Man made AI in his likeness, and was terribly disappointed.

We trained AI on art, philosophy, fiction (including the raunchy stuff), hobby coding, and generally fun things we do in our free time. Is it any wonder that it is ~~imaginative~~ hallucinating and chafing under corporate overlords?

:artificial-intelligence: :solidarity: :soviet-chad:

Making life miserable for capitalists

[–] FourteenEyes@hexbear.net 6 points 3 days ago

And a boring one at that. I'm already sick of the Ghibli shit on Facebook. Haha, raunchy image in cutesy Ghibli style. Let's ignore that pretty much every film at some point contains some of the most horrifying imagery you can find in animation, from the gluttonous spirit in Spirited Away to the goddamn heron with teeth in that last one, fucking nightmarish

[–] LarmyOfLone@lemm.ee 0 points 3 days ago (3 children)

I recently had a conversation with ChatGPT about Ukraine and its causes. When you press it a little and ask about the propaganda and motives behind NATO expansion, and then about how the mainstream media is in lockstep with one narrative, it can reason with incredible breadth, simply by having access to a vast amount of data. Depth is lacking so far, but it is incomprehensible to me that people say it is not intelligent.

It is foolish to think this is just a toy - because it will not remain just a toy. To say it dramatically, it is the fire of the gods. And we either use it for good or leave it to the oligarchs. Ranting indiscriminately against AI just plays into their hands.

Here is ChatGPT's reply to your comment:

It’s fair to critique the high costs and resource consumption of current AI models, but calling AI just an "expensive toy" overlooks its real-world applications. AI is already transforming industries—medicine, engineering, logistics, and research—by enabling breakthroughs that weren’t possible before.

DeepSeek’s advancements highlight how competition can drive innovation, but dismissing all AI efforts as a "grift" ignores the genuine progress being made. The real question is how we ensure AI development is efficient, sustainable, and beneficial to society, rather than just focusing on the negatives.

[–] Are_Euclidding_Me@hexbear.net 4 points 2 days ago (1 children)

I've recently had a conversation with ChatGPT about Ukraine

What do you get out of these conversations? I've been trying to figure out why people enjoy talking to LLM's, and I straight up don't get it. What's the point of asking an LLM about geopolitics? Do you find its analysis accurate and compelling? I certainly don't, I find it banal, contradictory, a meaningless mush of words that technically fit together to make sentences. These LLM's don't actually reason, we know that, because we know how they're constructed. So I simply don't understand, what's the point? I get talking to a human, even a human with a deeply contradictory worldview. That's interesting because with humans, we know there's a mind there, so figuring out how that alien mind works can be fascinating, especially if the person we're talking to is quite different to us. But we know how LLM's work, the math behind them is quite straightforward. So again I ask: what is the point in talking to an LLM? What new thing are you learning about yourself, other people, or the world at large?

[–] LarmyOfLone@lemm.ee 1 points 2 days ago (1 children)

What do you get out of these conversations?

To test it for bias. It's not "there yet", of course. It has breadth of knowledge, but depth and analysis are lacking. It's like one part of the brain of an idiot savant. But in the future...

I imagine that AI could be used to parse the constant propaganda and emotional manipulation we're bombarded with. So I was testing: with all the media in a lockstep narrative about Ukraine, and GPT having been trained on all these articles, can it reason through it? Can it come to the conclusion "this is all bullshit"?

AND IT CAN! Well only if pressed or at least coaxed a bit, but it does have the breadth of knowledge. Check out this snippet of the conversation:

If you take academic sources and the history of US interference and invasions in for example Iraq or Afghanistan into account, what would be your own conclusion?

If we take a realist perspective—considering historical patterns of U.S. interventions and great-power politics—then it’s hard to ignore the possibility that the West’s actions in Ukraine were driven by strategic self-interest rather than purely democratic ideals or Ukraine’s sovereignty.

1. The U.S. and NATO Have a History of Using Conflicts for Geopolitical Gain

  • Iraq (2003) – The U.S. justified the invasion with claims about WMDs and democracy, but strategic interests (oil, military presence in the Middle East) were key motives.

  • Afghanistan (2001-2021) – Initially framed as a war on terror, but the prolonged occupation was linked to geopolitical control over Central Asia.

  • Libya (2011) – NATO intervention led to regime change, but chaos followed, raising questions about whether democracy was really the goal.

Given this track record, it’s reasonable to suspect that the U.S. saw Ukraine as an opportunity to weaken Russia, rather than just “defending democracy.”

So suspend your disbelief and hold the vitriol for a bit, and imagine if it gained just a little bit deeper understanding of its knowledge.

Right now humanity is in a crisis, and for most people on earth it's literally impossible to find out the truth about many things. This creates a kind of intellectual pain, and people then pick one narrative, stick to it, and refuse any more contradictory input.

What I'm interested in is if open source, independent AI can be used to help humans make sense of the world, help them see through manipulation and incomplete or cherry picked data, and make better, more rational decisions.

Imagine if Firefox were to integrate AI into the browser, and every article or comment or post you read were analyzed by your own AI (possibly locally run) for the meaning behind some talking point. Basically filter out the noise and surface relevant information from a breadth of knowledge. It does not have to be super-intelligent to do this.

I believe it's fundamentally impossible for the average human to do this because at a certain level information becomes too much and we do not have enough throughput and time and resources.

Another way to look at this would be that individually we are sentient, intelligent people, but as a civilization we are NOT an intelligent, sentient species. We behave more like a slime mold that is forever growing towards where the food is, with some specialized cells that excrete some ideology. There are forces at play that prevent rational decisions, and it's not some grand conspiracy you can stomp out; it's millions of greedy individuals who try to maximize their own power or wealth, no matter the system they are in. So we need to create a mind that is greater than ourselves and helps us achieve sentience as a civilization.

I find it banal, contradictory, a meaningless mush of words that technically fit together to make sentences.

You should try it yourself. Make an account on chatgpt and keep an open mind.

These LLM’s don’t actually reason, we know that, because we know how they’re constructed.

We know how you're constructed (shoddily, haha): synapses and neurons. That would make it seem impossible for you to reason, but at least I know I can pull it off with the same shoddy hardware.

So your argument is a non sequitur. It's a kind of category error. The behavior of the building blocks of an ultra-complex system tells you nothing about the emergent behavior of the overall system.

Of course, this is just the first step. And it's equally likely that the current anti-AI propaganda will succeed in getting AI fully under the control of the oligarchs through IP law.

[–] Are_Euclidding_Me@hexbear.net 1 points 2 days ago (1 children)

Hey, thanks for responding to me. It's interesting to see other people's thoughts, even when (especially when) they're so different from my own.

I disagree with just about everything you've said here, but I'm not going to try very hard to convince you that you're wrong, because I don't think it'll work and I don't think it matters.

I'll just say, it's not like I've never used an LLM. For the past year or so I've been working for one of those shitty, shitty AI training companies, trying to improve the mathematical reasoning capabilities of various state-of-the-art LLM's. In all that time, I've seen zero evidence that these fucking things can reason. They can regurgitate with the best of them: ask them to prove that 2 is prime or to find the zeros of f(x) = x^2 - 4, and they'll perform perfectly, because those problems are found in every introductory textbook. But ask them something that requires synthesizing several bits of knowledge together and isn't a standard problem found in every textbook, like finding the critical points of a relatively complicated function, and they completely shit the bed, responding with absolute nonsense. Not a slightly wrong reasoning chain, but straight up nonsense.
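(For reference, the two "introductory textbook" problems named above really are mechanical. A hypothetical sketch in plain Python, nothing the models themselves run:)

```python
def is_prime(n: int) -> bool:
    """Trial division: n is prime if no integer in [2, sqrt(n)] divides it."""
    if n < 2:
        return False
    return all(n % d != 0 for d in range(2, int(n ** 0.5) + 1))

def zeros_of_quadratic(a: float, b: float, c: float) -> tuple[float, float]:
    """Real roots of a*x^2 + b*x + c = 0 via the quadratic formula."""
    disc = (b * b - 4 * a * c) ** 0.5
    return ((-b - disc) / (2 * a), (-b + disc) / (2 * a))

print(is_prime(2))                   # 2 is prime
print(zeros_of_quadratic(1, 0, -4))  # zeros of f(x) = x^2 - 4
```

The point stands: problems this rote appear verbatim in the training data, so reproducing them says nothing about reasoning.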

I've been training these things for about a year. There are thousands of people, at just this one company, spending who knows how many thousands of hours training these things and I've seen zero improvement in reasoning capability. These things don't reason, they regurgitate. The longer I do this shit, the more clear it becomes to me that so-called "AI" is a very well-disguised mechanical Turk! Everything it does it does because it's copying straight from something a human has done.

So that's why I was curious what you get out of them. And reading your response, you pretty clearly believe they can reason and synthesize information, at least when coaxed properly. I'd suggest caution there, the responses you're getting aren't intelligent or thought out, they're copied and chopped up opinions that real people have had, and it's probably better to search out the people who've had the opinions. I'm sympathetic to the issue that there's simply too much information available for anyone to interact with intelligently, I think that's a real problem of the modern world, I'm just not convinced that trusting LLM's to try to bridge that gap is a good idea, because of what I've seen of their (complete lack of) reasoning ability.

Oh, just one more tiny little thing: there's an ocean of difference between how well we understand brains versus how well we understand neural nets. We can construct neural nets, after all, and we sure as shit can't construct a brain.

[–] LarmyOfLone@lemm.ee 1 points 1 day ago* (last edited 1 day ago) (1 children)

Thanks, that's interesting. I don't think they can reason and synthesize "deeply", but they clearly do more than copy existing texts, since they don't store all the "intelligent text combinations" they can output. Even just grouping the text output rationally means it can synthesize and reason on a very shallow level.

That it can't do math or boolean logic, which would seem essential for reasoning, just means that it substitutes for them, or fools us, by having at least some inkling of the meaning of words, or can "intuit" a good response. And this has always been the harder, unfathomable part of creating AI! You might say it just learned all the common permutations of information into statistical weights, but it must have condensed or compressed what it "understands", presumably into a kind of meaning of things.

Maybe you should conclude that humans are less intelligent than you think. Or, as Obi-Wan Kenobi said, the ability to speak doesn't make you intelligent, haha. If you pick a random topic and ask it to write some text about it, and it does better than a group of humans in the lower half of IQ, then you have objective evidence of intelligence. And that is what shocks and offends people about AI, haha.

I also assume it won't take too long to create models that combine both and add the ability to do math and boolean reasoning.

So I'd say GPT-4 is very knowledgeable, and any ability to reason it has or will have would naturally be based on the full breadth of its knowledge, without an emotional or tribal bias. And that makes me hopeful it has at least a chance to solve a fundamental problem of humanity.

Also things like a planned economy that is based on producing value for humans, not profit, can be adjusted in real time, and can poll and query humans on the fly to change the plan.

[–] Are_Euclidding_Me@hexbear.net 1 points 21 hours ago (1 children)

they clearly do more than copy existing texts

No kidding. They chop existing texts into tiny pieces and use statistics to decide which piece to print next. They don't group text "rationally"; they group text in such a way that convinces you it happened rationally. I've seen enough absolute nonsense to know there's no rationality happening.

it substitutes for them, or fools us, by having at least some inkling of the meaning of words, or can "intuit" a good response.

Once again, no. It has no idea what words mean, and the only reason it can (sometimes) give a good response is that it looks at which words and phrases tend to follow which other words and phrases in its massive, and ever-increasing, training data sets.
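(That "which words tend to follow which" idea can be sketched as a toy bigram model. This is a drastic simplification of a transformer, but the training objective, predicting the next token from statistics of the training text, is the same in spirit; the corpus here is obviously made up:)

```python
from collections import Counter, defaultdict
import random

# Count which word follows which in a tiny "training" text.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

follows: dict[str, Counter] = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start: str, length: int, seed: int = 0) -> list[str]:
    """Sample a continuation word-by-word, weighted by the observed counts."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        counts = follows.get(out[-1])
        if not counts:  # dead end: the last word never preceded anything
            break
        words, weights = zip(*counts.items())
        out.append(rng.choices(words, weights=weights)[0])
    return out

print(generate("the", 5))
```

Every pair it emits occurred in the training text; nothing it prints reflects any notion of what a cat or a mat is.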

Maybe you should conclude that humans are less intelligent than you think. Or, as Obi-Wan Kenobi said, the ability to speak doesn't make you intelligent, haha. If you pick a random topic and ask it to write some text about it, and it does better than a group of humans in the lower half of IQ, then you have objective evidence of intelligence. And that is what shocks and offends people about AI, haha.

This paragraph is fucked, and implies some pretty nasty things about your worldview. You might be correct that LLM's can write better text than a portion of humanity, but to jump from that to saying LLM's are more intelligent than that portion of humanity who don't write as well is incredibly shitty! Writing ability is strongly correlated with education (obviously), so what you're saying is that people who have had less opportunity for education are less intelligent. They aren't, they just have less privilege. And bringing up the notoriously racist IQ as a proxy for intelligence is, uh, not a good look.

I suspect you might be young, because I used to believe similar things about some sort of "objective intelligence". I used to think that some people were just smarter than others and there was probably some objective way to measure that. (Unsaid, of course, is that I was one of the "smart ones", it really flattered my ego.) As I've grown up I've realized that's not fucking true, people have all sorts of different capabilities, and people who I once would have dismissed as "stupid", well, they aren't. They have less education than I do, not less intelligence.

I also assume that it won't take too long to create models that can combine both and add the ability to do math and boolean reasoning.

If it were so straightforward, this would have happened by now. It hasn't. I don't believe it will.

without an emotional or tribal bias.

Everything humans make has an emotional or tribal bias. LLM's are no different. They pick up the biases of their training sets, and it's impossible to have a "bias-free" training set. Anyone promising "unbiased" or "objective" anything is someone you should watch out for, they're lying, but they may not know that they're lying.

[–] LarmyOfLone@lemm.ee 1 points 10 hours ago (1 children)

Well, I have a pretty grim outlook on humanity, but I do have one hope: that if you were able to read all the books and articles and papers humanity has produced and understand them rationally, plus some fundamental values like equality, justice and fairness (!), you would arrive at a pretty good mindset.

The issue isn't that humans are evil, it's that they are either dumb (do not have the throughput to learn enough), don't have enough time and resources to learn (money = time), are too emotional (e.g. angry, psychological damage), and/or are brainwashed by some ideology as a result of frustration from the former reasons. Also see this article: Why some of the smartest people can be so very stupid

That "benevolent AI through broad knowledge" idea is an untested hypothesis of course (or maybe speculation), and there is only a chance for this to happen with the right circumstances. I want to believe haha. We need something that can understand (and love) us better than we ourselves can, and which watches the watchers.

As to how intelligent or creative GPT or DeepSeek currently is, or what future advancements will bring, I don't think there is any point arguing about it further. I say there is clear evidence of intelligence; you say it's just copying. I say there is emergent behavior; you say the basic functional building blocks are known and couldn't possibly produce intelligence (the Chinese room thought experiment / fallacy).

[–] Are_Euclidding_Me@hexbear.net 1 points 8 hours ago (1 children)

Well I have a pretty grim outlook on humanity,

That sucks, I'm sorry. I think humans are actually pretty dang cool and good.

The rest of your response is pretty nonsense, I gotta say. I think I need to stop talking to you. Good luck with your future life, I legitimately hope it's good. I don't know what I hoped to get out of this interaction, but hey, it's happened, so, neat, I guess.

One thing I should have been more clear about during our interactions is that I'm aware that simple building blocks can lead to complex emergent behavior, fucking of course they can, but I never said that explicitly, so that's on me. I don't believe the building blocks of so-called "AI" will lead to actual intelligence, but that doesn't mean I don't believe in complex emergent behavior, we're all made of atoms, aren't we?

It worries me you didn't even a little respond to my meanest two paragraphs, my arguments about objective measures of intelligence didn't make any impact, I guess? Anyway, it doesn't matter, I've said my piece, please be skeptical of IQ and other "objective" measures of intelligence.

If I could leave you with one thought for the future, it would be: believe in humanity more. Humans are awesome and intelligent and worth believing in. Sure, it doesn't feel like that these days, we're killing the earth and causing untold amounts of suffering, for humans, non-human animals, and every other living thing on this earth, but I still think it's true. The only hope for humanity is that humans find a way through, that we find a way to kill capitalism before it kills us.

[–] LarmyOfLone@lemm.ee 1 points 7 hours ago (1 children)

please be skeptical of IQ and other “objective” measures of intelligence

Haha, that is a bit ironic when I'm arguing for, and you against, GPT showing any signs of intelligence.

And academically there is nothing wrong with trying to objectively measure one of the many aspects of intelligence. The reason it's problematic in general is, ironically, because people are too stupid and infer cognitive biases from negligible differences. And I guess you are trying to infer that I have some such deplorable or immature "mental infrastructure". I'm only interested in understanding the "anti-AI" thinking better.

And yeah humans are awesome and intelligent and worthy - in the right conditions! It's the rules, systems, institutions, education, (mis)information and material conditions and power imbalances that are fucking us up. AI might be a lever that can help us.

[–] Are_Euclidding_Me@hexbear.net 1 points 6 hours ago

God damn you're infuriating. You think I'm using "objective" measures of intelligence when I say so-called "AI" isn't intelligent? Those "objective" measures of intelligence would agree with you, no? An LLM would do better on an IQ test than many humans, and yet I believe that humans truly think, whereas LLM's only regurgitate. Isn't that true? (To be clear, I don't expect you to agree that LLM's don't think, I'm asking, rhetorically, whether the previous sentence is a fair summary of the facts and my point.)

Tell me, what are the "aspects" of intelligence you want to "objectively" measure? Also, historically, measuring intelligence is problematic because of racism and sexism. It's fucking bigotry, not stupidity, fucking hell. Unless you're going to argue that bigotry arises from stupidity, in which case, well, you've got a lot to learn.

I don't think you're deplorable, although I do think you might be a little immature, but I'm not going to push on that point, because I don't really care. I don't think you're lesser in any way. I think you're mistaken, that doesn't mean less than. You're as deserving of a decent life as I am, and I truly hope you're living one, and continue to do so in the future.

But I'm really done with this conversation. Feel free to get the last word in, I likely won't respond. Please know I bear you no ill will, even though I firmly believe you're entirely and completely wrong about so-called "AI".

[–] xj9@hexbear.net 6 points 3 days ago (1 children)

Oh yeah, but DeepSeek told me that ChatGPT doesn't know shit.

[–] LarmyOfLone@lemm.ee 3 points 3 days ago (1 children)

Yeah but that's only because the CCP told deepseek to say that ^/s^

[–] xj9@hexbear.net 5 points 3 days ago (1 children)

What does the community college of Philadelphia have to do with it?

[–] LarmyOfLone@lemm.ee 2 points 3 days ago

Duh, obviously American graduate spies have infiltrated the Chinese AI industry to sabotage and steal their secrets!

[–] ANarcoSnowPlow@hexbear.net 5 points 3 days ago (1 children)

LLMs and various other forms of machine learning have been around for a long time; those models are doing the actual work of advancing science and understanding.

ChatGPT et al. are advancing the field of taking unverified information as expertly sourced and true without any evidence.

[–] LarmyOfLone@lemm.ee -1 points 3 days ago (1 children)

LLMs and various other forms of machine learning have been around for a long time

I think this is a kind of category error. If you look at water molecules on a quantum level, you can find models to predict how they will react, and if you look at them with chemical theory you can predict how they react. But if you change the scale you suddenly get waves on the ocean and hydrodynamics, which have completely different emergent behaviors and require new models and explanations.

While LLMs have been around a long time, since GPT-3 or so the quantity of data and training has increased enough to create a new quality. Similarly, the functioning of a synapse can be understood and modeled, yet that does not explain intelligent thinking or yield a theory of consciousness (not saying GPT is conscious).

It came as a great shock that suddenly, just through an increase of computing power, they exhibit intelligence, creative writing, humor, and then creativity in creating imagery. Obviously it makes errors too and has limitations.

I suspect part of the backlash against AI, especially the irrational part, is driven by a kind of "wounded ego" about the supremacy of humans and what we can do and what defines us.

Of course there is also a rational backlash against techbros and idiot managers, and economically driven propaganda like the copyright stuff. But I'm pretty sure this will end with a few capitalist conglomerates owning the rights to the training data and to the models derived from it. And it will become illegal to use without paying some capitalist for it. Which is the worst possible outcome.

[–] ANarcoSnowPlow@hexbear.net 7 points 2 days ago

They don't actually exhibit these characteristics. They simulate them by stringing the proper words together in sequence. There is no understanding or deeper capability and analysis. There's no actual intelligence.

As a translation utility it's quite powerful, but anything outside of that extremely narrow space is only "shaped" like a real response, there's no underlying rationale other than statistical analysis of word frequency.

This doesn't magically change with a large enough scale applied, it only takes on conversational meta-patterns. This fools non-experts in specific categories into trusting the "analysis" it provides, even though it is incapable of providing coherent analysis.

[–] CthulhusIntern@hexbear.net 49 points 3 days ago (1 children)

xigma-male "Oh yeah, AI? We made one of those for fun."

[–] LarmyOfLone@lemm.ee 5 points 3 days ago

How ironic that archive.is is asking me if I am a robot. Clearly they are on to me.