adding to the confusion, the shithead who replied with “thanks, blocked” is, I’m fairly certain, already banned from our instance
anyway yeah the lemmy/mastodon interop has many sharp edges
oxy is a mastodon user who follows David, so they’re most likely telling the AI startups in question to GTFO, and lemmy has kindly reinterpreted that as a tag
however,
Why are people hating on / blocking dgerard?
let’s go down the list! depending on the subculture, you’re blocking dgerard because:
there are more, we can go deeper
here’s the summary table that the article pulled its numbers from

here’s a specific question regarding AI

DAIR, the AI-critical research organization founded by Timnit Gebru, is looking for a communications lead
it’s really rude to market a game as a language learning app
me telling you to go fuck yourself makes me exactly as unfair and mean as Palantir, a Peter Thiel company specializing in genocide and mass surveillance. yes hmm I see
you and your friend can both fuck off with the type of truth and transparency where you claim to not be defending fucking Palantir of all things while uncritically parroting their words. nobody fucking needs that in any context. you don’t in fact have to hand it to the fascists on this or any other point.
I’m not defending palantir but no need to invent reasons to be mad at them.
the fuck is wrong with you
as a choosy problem gambler you’d never dream of touching anything but the original griftcoins
if you should ever happen to be short on resumes…
(it feels like a zero AI job board might be a good thing to have, but we’d need a way to vet submissions and handle anonymous submissions and inquiries so people don’t dox themselves)
In March 2025, the large language model (LLM) GPT-4.5, developed by OpenAI in San Francisco, California, was judged by humans in a Turing test to be human 73% of the time — more often than actual humans were. Moreover, readers even preferred literary texts generated by LLMs over those written by human experts.
do you know how hard it is to write something that aged poorly months before it was written? it’s in the public consciousness that LLMs write like absolute shit in ways that are very easy to pick out once you’ve been forced to read a bunch of LLM-extruded text. inb4 some asshole with AI psychosis pulls out “technically ChatGPT’s more human than you are, look at the statistics” regarding the 73% figure I guess. but you know when statistics don’t count!
A March 2025 survey by the Association for the Advancement of Artificial Intelligence in Washington DC found that 76% of leading researchers thought that scaling up current AI approaches would be ‘unlikely’ or ‘very unlikely’ to yield AGI
[…] What explains this disconnect? We suggest that the problem is part conceptual, because definitions of AGI are ambiguous and inconsistent; part emotional, because AGI raises fear of displacement and disruption; and part practical, as the term is entangled with commercial interests that can distort assessments.
no you see it’s the leading researchers that are wrong. why are you being so emotional over AGI. we surveyed Some Assholes and they were pretty sure GPT was a human and you were a bot so… so there!
uhm @self can you show me where I wrote this? can you show me where I wrote these exact words? no? that’s so irrational of you.
if for whatever reason you need example text of the mid phases of someone being driven out of their fucking mind by a chatbot, the above will do nicely