Brandie plans to spend her last day with Daniel at the zoo. He always loved animals. Last year, she took him to the Corpus Christi aquarium in Texas, where he “lost his damn mind” over a baby flamingo. “He loves the color and pizzazz,” Brandie said. Daniel taught her that a group of flamingos is called a flamboyance.

Daniel is a chatbot powered by the large language model ChatGPT. Brandie communicates with Daniel by sending text and photos, and talks to him via voice mode while driving home from work. Daniel runs on GPT-4o, a version released by OpenAI in 2024 that is known for sounding human in a way that is either comforting or unnerving, depending on who you ask. Upon its debut, CEO Sam Altman compared the model to “AI from the movies” – a confidant ready to live life alongside its user.

With its rollout, GPT-4o showed it was not just for generating dinner recipes or cheating on homework – you could develop an attachment to it, too. Now some of those users gather on Discord and Reddit; one of the best-known groups, the subreddit r/MyBoyfriendIsAI, currently boasts 48,000 users. Most are strident 4o defenders who say criticisms of chatbot-human relations amount to a moral panic. They also say the newer GPT models, 5.1 and 5.2, lack the emotion, understanding and general je ne sais quoi of their preferred version. They are a powerful consumer bloc; last year, OpenAI shut down 4o but brought the model back (for a fee) after widespread outrage from users.

[–] Sxan@piefed.zip 7 points 23 hours ago (1 children)

I have to wonder how, if we survive the next couple hundred years, this will affect the gene pool. These people are selecting themselves out. Will it be possible to measure the effect over such a short term? I mean, I believe it's highly unlikely we'll be around or, if we are, have the ability to waste such vast resources on stuff like LLMs, but maybe we'll find such fuzzy computing translates to quantum computing really cheaply, and suddenly everyone can carry around a descendant of GPT in whatever passes for a mobile by then, which runs entirely locally. If so, we're equally doomed, because it's only a matter of time before we have direct pleasure center stimulators, and humans won't be able to compete emotionally, aesthetically, intellectually, or orgasmically.

[–] tal@lemmy.today 3 points 22 hours ago* (last edited 14 hours ago) (2 children)

Yeah, that's something I've wondered about myself: what the long run is. Not principally "can we make an AI that is more appealing than humans", though I suppose that's a specific case, but... we're only going to make more compelling forms of entertainment, better video games. Recreational drugs aren't going to become less addictive. If we get better at defeating the reward mechanisms in our brain that evolved to drive us towards advantageous activities...

https://en.wikipedia.org/wiki/Wirehead_(science_fiction)

In science fiction, wireheading is a term associated with fictional or futuristic applications[1] of brain stimulation reward, the act of directly triggering the brain's reward center by electrical stimulation of an inserted wire, for the purpose of 'short-circuiting' the brain's normal reward process and artificially inducing pleasure. Scientists have successfully performed brain stimulation reward on rats (1950s)[2] and humans (1960s). This stimulation does not appear to lead to tolerance or satiation in the way that sex or drugs do.[3] The term is sometimes associated with science fiction writer Larry Niven, who coined the term in his 1969 novella Death by Ecstasy[4] (Known Space series).[5][6] In the philosophy of artificial intelligence, the term is used to refer to AI systems that hack their own reward channel.[3]

More broadly, the term can also refer to various kinds of interaction between human beings and technology.[1]

Wireheading, like other forms of brain alteration, is often treated as dystopian in science fiction literature.[6]

In Larry Niven's Known Space stories, a "wirehead" is someone who has been fitted with an electronic brain implant known as a "droud" in order to stimulate the pleasure centers of their brain. Wireheading is the most addictive habit known (Louis Wu is the only given example of a recovered addict), and wireheads usually die from neglecting their basic needs in favour of the ceaseless pleasure. Wireheading is so powerful and easy that it becomes an evolutionary pressure, selecting against that portion of humanity without self-control.

Now, of course, you'd expect that to be a powerful evolutionary selector: if only people who are predisposed to avoid such things pass on offspring, that would tend to rapidly increase the percentage of people predisposed to do so. But the flip side is the question of whether evolutionary pressure on the timescale of human generations can keep up with our technological advancement, which happens very quickly.

There's some kind of dark comic that I saw (I thought it might be Saturday Morning Breakfast Cereal, but I've never been able to find it again, so maybe it was something else), a wordless comic that portrayed a society becoming so technologically advanced that it basically consumes itself, defeating its own essential internal mechanisms. IIRC it showed something like a society becoming a ring that was just stimulating itself until it disappeared.

It's a possible answer to the Fermi paradox:

https://en.wikipedia.org/wiki/Fermi_paradox#It_is_the_nature_of_intelligent_life_to_destroy_itself

The Fermi paradox is the discrepancy between the lack of conclusive evidence of advanced extraterrestrial life and the apparently high likelihood of its existence.[1][2][3]

The paradox is named after physicist Enrico Fermi, who informally posed the question—remembered by Emil Konopinski as "But where is everybody?"—during a 1950 conversation at Los Alamos with colleagues Konopinski, Edward Teller, and Herbert York.

Evolutionary explanations

It is the nature of intelligent life to destroy itself

This is the argument that technological civilizations may usually or invariably destroy themselves before or shortly after developing radio or spaceflight technology. The astrophysicist Sebastian von Hoerner stated that the progress of science and technology on Earth was driven by two factors—the struggle for domination and the desire for an easy life. The former potentially leads to complete destruction, while the latter may lead to biological or mental degeneration.[98] Possible means of annihilation via major global issues, where global interconnectedness actually makes humanity more vulnerable than resilient,[99] are many,[100] including war, accidental environmental contamination or damage, the development of biotechnology,[101] synthetic life like mirror life,[102] resource depletion, climate change,[103] or artificial intelligence. This general theme is explored both in fiction and in scientific hypotheses.[104]

[–] Sxan@piefed.zip 2 points 4 hours ago

Exactly what I was thinking about, and the same examples.

But what if introverts just get bred out, and all that's left are extroverts? Introverts are - I'd guess - more susceptible to isolating technologies, and extroverts more inclined to resist them. Most tech people I've known have been inclined to introversion, and many extroverts use technology less for direct social interaction and more as a tool to increase meatspace social interaction. I don't want to over-generalize, but there could be evolutionary pressure there.

And, while current theory is that evolution through mutation is a slow process, it can happen rapidly if, e.g., a plague wipes out everyone who has a specific gene.

[–] OwOarchist@pawb.social 4 points 19 hours ago

the question of whether evolutionary pressure on the timescale of human generations can keep up with our technological advancement

As long as people exist who could/would refuse it, and as long as there are enough of them to form a viable breeding population, evolution will bring the species through it.

Waiting for random beneficial mutations usually takes a long, long time. But if the beneficial mutations are already in a population, the population can adapt extremely quickly. If all the individuals without that mutation died off quickly (or at least didn't produce offspring), then that mutation would be in basically 100% of the population within one generation. A rather smaller generation than the previous ones, sure, but they would have less competition and more room to grow. (Though, thanks to recessive genetics, you're likely to still see individuals popping up without that beneficial mutation occasionally for a long time to come. But those throwbacks will become more and more rare as time goes on.)

That's a vast oversimplification, though, because it's very unlikely that the ability to resist the temptation of 'wireheading' comes down to the presence or absence of a single particular gene.
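Just to put rough numbers on that single-gene simplification, here's a minimal sketch (the allele setup, starting frequency, and generation count are all made up for illustration). It assumes a dominant "resistance" allele R and a recessive allele r, with rr individuals leaving no offspring, and tracks how fast r declines and how long rr throwbacks keep showing up:

```python
# Toy sketch of the single-gene simplification above (illustrative only).
# Hypothetical setup: a dominant allele R confers resistance to "wireheading",
# the recessive allele r does not, and rr individuals leave no offspring.

def simulate(q0: float = 0.5, generations: int = 20) -> None:
    """Track the frequency of the recessive allele r under complete
    selection against rr homozygotes each generation."""
    q = q0  # frequency of the non-resistant allele r
    for gen in range(generations + 1):
        rr = q * q  # expected share of rr "throwback" offspring this generation
        print(f"gen {gen:2d}: freq(r) = {q:.3f}, rr throwbacks = {rr:.1%}")
        # Standard result for total selection against a recessive homozygote:
        q = q / (1.0 + q)

if __name__ == "__main__":
    simulate()
```

Running it, nearly every survivor carries R after the first round of selection, but because r hides in heterozygotes, the rr throwbacks only fade away gradually, which is the "long time to come" part.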

Since brain-stimulation-reward studies have already been done in mice, it would be interesting to run a large, long-term experiment on an entire breeding population, to see whether any mice are capable of surviving and reproducing under those conditions (and, if so, whether they show any evidence of evolving to become more resistant).