corbin

joined 2 years ago
 

Did catgirl Riley cheat at a videogame, or is she just that good? Detective Karl Jobst is on the case. Are the critics from platform One True King (OTK), like Asmongold and Tectone, correct in their analysis of Riley's gameplay? Or are they just haters who can't stand how good she is? Bonus appearance from Tommy Tallarico.

Content warning: Quite a bit of transmisogyny. Asmongold and Tectone are both transphobes who say multiple slurs and constantly misgender Riley, and their Twitch chats also are filled with slurs. Jobst does not endorse anything that they say, but he also quotes their videos and screenshots directly.

too long, didn't watch

This video is a takedown of an AI slop channel, "Call of Shame". As hinted, this is something of a ROBLOX_OOF.mp3 essay, where it's not just about the cryptofascists pushing the culture war by attacking a trans person, but about one specific rabbit hole surrounding one person who has made many misleading claims. Just like how ROBLOX_OOF.mp3 permanently hobbled Tallarico's career, it seems that Call of Shame has pivoted twice and turned to evangelizing Christianity instead as a result of this video's release.

[–] corbin@awful.systems 4 points 2 days ago (1 children)

It's a power play. Engineers know that they're valuable enough that they can organize openly; also, as in the case of Alphabet Workers Union, engineers can act in solidarity with contractors, temps, and interns. I've personally done things like directly emailing CEOs with reply-all, interrupting all-hands to correct upper management on the law, and other fun stuff. One does have to be sufficiently skilled and competent to invoke the Steve Martin principle: "be so good that they can't ignore you."

[–] corbin@awful.systems 10 points 4 days ago (1 children)

It might help to know that Paul Frazee, one of the BlueSky developers, doesn't understand capability theory or how hackers approach a computer. They believe that anything hidden by the porcelain/high-level UI is hidden for good. This was a problem on their Beaker project, too; they thought that a page was deleted if it didn't show up in the browser. They fundamentally aren't prepared for the fact that their AT protocol doesn't have a way to destroy or hide data and is embedded into a network that treats censorship as reparable damage.

[–] corbin@awful.systems 12 points 4 days ago (2 children)

Today, in fascists not understanding art, a suckless fascist praised Mozilla's 1998 branding:

This is real art; in stark contrast to the brutalist, generic mess that the Mozilla logo has become. Open source projects should be more daring with their visual communications.

Quoting from a 2016 explainer:

[T]he branding strategy I chose for our project was based on propaganda-themed art in a Constructivist / Futurist style highly reminiscent of Soviet propaganda posters. And then when people complained about that, I explained in detail that Futurism was a popular style of propaganda art on all sides of the early 20th century conflicts… Yes, I absolutely branded Mozilla.org that way for the subtext of "these free software people are all a bunch of commies." I was trolling. I trolled them so hard.

The irony of a suckless developer complaining about brutalism is truly remarkable; these fuckwits don't actually have a sense of art history, only what looks cool to them. Big lizard, hard-to-read font, edgy angular corners, and red-and-black palette are all cool symbols to the teenage boy's mind, and the fascist never really grows out of that mindset.

[–] corbin@awful.systems 5 points 1 week ago (1 children)

Sadly, it's a Chomskian paper, and those are just too weak for today. Also, I think it's sloppy and too Eurocentric. Here are some of the biggest gaffes or stretches I found by skimming Moro's $30 book, which I obtained by asking a shadow library for "impossible languages" (ISBN doesn't work for some reason):

book review of Impossible Languages (Moro, 2016)

  • Moro claims that it's impossible for a natlang to have free word order. There are many counterexamples which could be argued, like Arabic or Mandarin, but I think that the best counterexample is Latin, which has Latinate (free) word order. On one hand, of course word order matters for parsers, but on the other hand the Transformers architecture attends without ordering, so this isn't really an issue for machines. Ironically, on p73-74, Moro rearranges the word order of a Latin phrase while translating it, suggesting either a use of machine translation or an implicit acceptance of Latin's (lack of) word order. I could be harsher here; it seems like Moro draws mostly from modern Romance and Germanic languages to make their points about word order, and the sensitivity of English and Italian to word order doesn't imply universality.
  • Speaking of universality, both the generative-grammar and universal-grammar hypotheses are assumed. By "impossible" Moro means a non-recursive language with a non-context-free grammar, or perhaps a language failing to satisfy some nebulous geometric requirements.
  • Moro claims that sentences without truth values are lacking semantics. Gödel and Tarski are completely unmentioned; Moro ignores any sort of computability of truth values.
  • Russell's paradox is indirectly mentioned and incorrectly analyzed; Moro claims that Russell fixed Frege's system by redefining the copula, but Russell and others actually refined the notion of building sets.
  • It is claimed that Broca's area uniquely lights up for recursive patterns but not patterns which depend on linear word order (e.g. a rule that a sentence is negated iff the fourth word is "no"), so that Broca's area can't do context-sensitive processing. But humans clearly do XOR when counting nested negations in many languages and can internalize that XOR so that they can handle utterances consisting of many repetitions of e.g. "not not".
  • Moro mentions Esperanto and Volapük as auxlangs in their chapter on conlangs. They completely fail to recognize the past century of applied research: Interlingue and Interlingua, Loglan and Lojban, Láadan, etc.
  • Sanskrit is Indo-European. Also, that's not how junk DNA works; it genuinely isn't coding or active. Also also, that's not how Turing patterns work; they are genuine cellular automata and it's not merely an analogy.
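The nested-negation point above is easy to make concrete: tracking whether an utterance is negated under stacked "not"s needs exactly one bit of state, a strictly finite-state (non-recursive) computation. A toy sketch, with a deliberately naive tokenizer of my own invention:

```python
def is_negated(sentence: str) -> bool:
    """Track negation parity across repeated 'not's with a single XOR bit.

    This is a regular (finite-state) computation: no stack, no recursion,
    just one bit flipped per negation token.
    """
    negated = False
    for token in sentence.lower().split():
        if token == "not":
            negated = not negated  # XOR with 1
    return negated

assert is_negated("it is not not not raining") is True   # odd parity
assert is_negated("it is not not raining") is False      # even parity
```

No context-free machinery is involved, which is exactly why the "humans can't process linear-order rules" claim sits uneasily with everyday handling of stacked negation.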

I think that Moro's strongest point, on which they spend an entire chapter reviewing fairly solid neuroscience, is that natural language is spoken and heard, such that a proper language model must be simultaneously acoustic and textual. But because they don't address computability theory at all, they completely fail to engage the modern critique that machines can learn any learnable system, including grammars; the worst that they can say is that a machine is literally not a human.

[–] corbin@awful.systems 5 points 1 week ago (1 children)

I got jumpscared by Gavin D. Howard today; apparently his version of bc appeared on my system somehow, and his name's in the copyright notice. Who is Gavin anyway? Well, he used to have a blog post that straight-up admitted his fascism, but I can't find it. I could only find, say, the following five articles, presented chronologically:

Also, while he's apparently not caused issues for NixOS maintainers yet, he's written An Apology to the Gentoo Authors for not following their rules when it comes to that same bc package. So this might be worth removing for other reasons than the Christofascist authorship.

BTW his code shows up because it's in upstream BusyBox and I have a BusyBox on my system for emergency purposes. I suppose it's time to look at whether there is a better BusyBox out there. Also, it looks like Denys Vlasenko has made over one hundred edits to this code to integrate it with BusyBox, fix correctness and safety bugs, and improve performance; Gavin only made the initial commit.

[–] corbin@awful.systems 5 points 1 week ago (1 children)

They (or the LLM that summarized their findings and may have hallucinated part of the post) say:

It is a fascinating example of "Glue Code" engineering, but it debunks the idea that the LLM is natively "understanding" or manipulating files. It's just pushing buttons on a very complex, very human-made machine.

Literally nothing that they show here is bad software engineering. It sounds like they expected that the LLM's internals would be 100% token-driven inference-oriented programming, or perhaps a mix of that and vibe code, and they are disappointed that it's merely a standard Silicon Valley cloudy product.

My analysis is that Bobby and Vicky should get raises; they aren't paid enough for this bullshit.

By the way, the post probably isn't faked. Google-internal go/ URLs do leak out sometimes, usually in comments. Searching GitHub for that specific URL turns up one hit in a repository which claims to hold a partial dump of the OpenAI agents. Here is combined_apply_patch_cli.py. The agent includes a copy of ImageMagick; truly, ImageMagick is our ecosystem's cockroach.

[–] corbin@awful.systems 5 points 1 week ago

Now I'm curious about whether Disney funded Glaze & Nightshade. Quoting Nightshade's FAQ, their lab has arranged to receive donations which are washed through the University of Chicago:

If you or your organization may be interested in pitching in to support and advance our work, you can donate directly to Glaze via the Physical Sciences Division webpage, click on "Make a gift to PSD" and choose "GLAZE" as your area of support (managed by the University of Chicago Physical Sciences Division).

Previously, on Awful, I noted the issues with Nightshade and the curious fact that Disney is the only example stakeholder named in the original Nightshade paper, as well as the fact that Nightshade's authors wonder about the possibility of applying Glaze-style techniques to feature-length films.

[–] corbin@awful.systems 18 points 1 week ago (2 children)

The author also proposes a framework for analyzing claims about generative AI. I don't know if I endorse it fully, but I agree that each of the four talking points represents a massive failure of understanding. Their LIES model is:

  • Lethality: the bots will kill us all
  • Inevitability: the bots are unstoppable and will definitely be created in the future
  • Exceptionalism: the bots are wholly unlike any past technology and we are unprepared to understand them
  • Superintelligence: the bots are better than people at thinking

I would add a fifth: Plausibility, or Personhood, or Personality: the incorrect claim that the bots are people. Maybe call it PILES.

 

A straightforward dismantling of AI fearmongering videos uploaded by Kyle "Science Thor" Hill, Sci "The Fault in our Research" Show, and Kurz "We're Sorry for Summarizing a Pop-Sci Book" Gesagt over the past few months. The author is a computer professional but their take is fully in line with what we normally post here.

I don't have any choice sneers. The author is too busy hunting for whoever is paying SciShow and Kurzgesagt for these videos. I do appreciate that they repeatedly point out that there is allegedly a lot of evidence of people harming themselves or others because of chatbots. Allegedly.

[–] corbin@awful.systems 13 points 1 week ago (1 children)

Fundamentally, Chapman's essay is about how subcultures transition from valuing functionality to aesthetics. Subcultures start with form following function by necessity. However, people adopt the subculture because they like the surface appearance of those forms, leading to the subculture eventually hollowing out into a system which follows the iron law of bureaucracy and becomes non-functional due to over-investment in the façade and tearing down of Chesterton's fences. Chapman's not the only person to notice this pattern; other instances of it, running the spectrum from right to left, include:

I think that seeing this pattern is fine, but worrying about it makes one into Scott Alexander, paranoid about societal manipulation and constantly worrying about in-group and out-group status. We should note the pattern but stop endorsing instances of it which attach labels to people; after all, the pattern's fundamentally about memes, not humans.

So, on Chapman. I think that he's a self-important nerd who reached criticality after binge-reading philosophy texts in graduate school. I could have sworn that this was accompanied by psychedelic drugs, but I can't confirm or cite that and I don't think that we should underestimate the psychoactive effect of reading philosophy from the 1800s. In his own words:

[T]he central character in the book is a student at the MIT Artificial Intelligence Laboratory who discovers Continental philosophy and social theory, realizes that AI is on a fundamentally wrong track, and sets about reforming the field to incorporate those other viewpoints. That describes precisely two people in the real world: me, and my sometime-collaborator Phil Agre.

He's explicitly not allied with our good friends, but at the same time they move in the same intellectual circles. I'm familiar with that sort of frustration. Like, he rejects neoreaction by citing Scott Alexander's rejection of neoreaction (source); that's a somewhat-incoherent view suggesting that he's politically naïve. His glossary for his eternally-unfinished Continental-style tome contains the following statement on Rationalism (embedded links and formatting removed):

Rationalisms are ideologies that claim that there is some way of thinking that is the correct one, and you should always use it. Some rationalisms specifically identify which method is right and why. Others merely suppose there must be a single correct way to think, but admit we don’t know quite what it is; or they extol a vague principle like “the scientific method.” Rationalism is not the same thing as rationality, which refers to a nebulous collection of more-or-less formal ways of thinking and acting that work well for particular purposes in particular sorts of contexts.

I don't know. Sometimes he takes Yudkowsky seriously in order to critique him. (source, source) But the critiques are always very polite, no sneering. Maybe he's really that sort of Alan Watts character who has transcended petty squabbles. Maybe he didn't take enough LSD. I once was on LSD when I was at the office working all day; I saw the entire structure of the corporation, fully understood its purpose, and — unlike Chapman, apparently — came to the conclusion that it is bad. Similarly, when I look at Yudkowsky or Yarvin trying to do philosophy, I often see bad arguments and premises. Being judgemental here is kind of important for defending ourselves from a very real alt-right snowstorm of mystic bullshit.

Okay, so in addition to the opening possibilities of being naïve and hiding his power level, I suggest that Chapman could be totally at peace or permanently rotated in five dimensions from drugs. I've gotta do five, so a fifth possibility is that he's not writing for a human audience, but aiming to be crawled by LLM data-scrapers. Food for thought for this community: if you say something pseudo-profound near LessWrong then it is likely to be incorporated into LLM training data. I know of multiple other writers deliberately doing this sort of thing.

[–] corbin@awful.systems 15 points 1 week ago (4 children)

The orange-site whippersnappers don't realize how old artificial neurons are. In terms of theory, the artificial neuron was proposed in 1943 in an article with the delightfully-dated name, "A logical calculus of the ideas immanent in nervous activity", and the Hebbian principle was documented in 1949. In 1957, the Mark I Perceptron was introduced; in modern parlance, it was a configurable image classifier with a single layer of hundreds-to-thousands of neurons and a square grid of dozens-to-hundreds of pixels. For comparison, MIT's AI lab was founded in 1970. RMS would have read about artificial neurons as part of their classwork and research, although it wasn't part of MIT's AI programme.
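For a sense of how simple that 1950s machinery was, here is a single threshold neuron trained with the perceptron rule on the AND function; the Mark I did roughly this in analog hardware over a grid of photocells. The integer learning rate and tiny training set are my own choices for the sketch:

```python
def train_perceptron(samples, epochs=10):
    """Train one threshold neuron with Rosenblatt's perceptron rule.

    Integer weights keep the arithmetic exact; AND is linearly
    separable, so the rule is guaranteed to converge on it.
    """
    w1, w2, b = 0, 0, 0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
            err = target - out  # -1, 0, or +1
            w1 += err * x1
            w2 += err * x2
            b += err
    return w1, w2, b

and_gate = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1, w2, b = train_perceptron(and_gate)
predictions = [1 if w1 * x1 + w2 * x2 + b > 0 else 0
               for (x1, x2), _ in and_gate]
assert predictions == [0, 0, 0, 1]
```

The whole learning rule fits in four lines, which is part of why Minsky and Papert's later critique of single-layer limits landed so hard.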

[–] corbin@awful.systems 7 points 3 weeks ago (1 children)

Oh wow, that's gloriously terse. I agree that it might be the shortest. For comparison, here are three other policies whose pages are much longer and whose message also boils down to "don't do that": don't post copypasta, don't start hoaxes, don't start any horseshit either.

[–] corbin@awful.systems 10 points 3 weeks ago (1 children)

Ziz was arraigned on Monday, according to The Baltimore Banner. She apparently was not very cooperative:

As the judge asked basic questions such as whether she had read the indictment and understood the maximum possible penalties, [Ziz] LaSota chided the “mock proceedings” and said [US Magistrate Douglas R.] Miller was a “participant in an organized crime ring” led by the “states united in slavery.”

She pulled the Old Man from Scene 24 gag:

Please state your name for the record, the court clerk said. “Justice,” she replied. What is your age? “Timeless.” What year were you born? “I have been born many times.”

The lawyers have accepted that sometimes a defendant is uncooperative:

Prosecutors said the federal case would take about three days to try. Defense attorney Gary Proctor, in an apparent nod to how long what should have been a perfunctory appearance on Monday ended up taking, called the estimate “overly optimistic.”

Folks outside the USA should be reassured that this isn't the first time that we've tried somebody with a loose grasp of reality and a found family of young violent women who constantly disrupt the trial; Ziz isn't likely to walk away.

 

A straightforward product review of two AI therapists. Things start bad and quickly get worse. Choice quip:

Oh, so now I'm being gaslit by a frakking Tamagotchi.

 

The answer is no. Seth explains why not, using neuroscience and medical knowledge as a starting point. My heart was warmed when Seth asked whether anybody present believed that current generative systems are conscious and nobody in the room clapped.

Perhaps the most interesting takeaway for me was learning that — at least in terms of what we know about neuroscience — the classic thought experiment of the neuron-replacing parasite, which incrementally replaces a brain with some non-brain substrate without interrupting any computations, is biologically infeasible. This doesn't surprise me but I hadn't heard it explained so directly before.

Seth has been quoted previously on Awful for his critique of the current AI hype. This talk is largely in line with his other public statements.

Note that the final 10min of the video are an investigation of Seth's position by somebody else. This is merely part of presenting before a group of philosophers; they want to critique and ask questions.

 

A complete dissection of the history of the David Woodard editing scandal as told by an Oregonian Wikipedian. The video is sectioned into multiple miniature documentaries about various bastards and can be watched piece-by-piece. Too long to watch? Read the link above.

too long, didn't watch, didn't read, summarize anyway

David Woodard is an ethnonationalist white supremacist whose artistic career has led to an intersection with a remarkable slice of cult leaders and serial killers throughout the past half-century. Each featured bastard has some sort of relationship to Woodard, revealing an entire facet of American Nazism which runs in parallel to Christian TREACLES, passed down through psychedelia, occult mysticism, and non-Christian cults of capitalism.

 

Cross-posting a good overview of how propaganda and public relations intersect with social media. Thanks @Soatok@pawb.social for writing this up!

 

Tired of going to Scott "Other" Aaronson's blog to find out what's currently known about the busy beaver game? I maintain the Busy Beaver Gauge, a community website with summaries of the known numbers in Busy Beaver research.

I started this site last year because I was worried that Other Scott was excluding some research and not doing a great job of sharing links and history. For example, when it comes to Turing machines implementing the Goldbach conjecture, Other Scott gives O'Rear's 2016 result but not the other two confirmed improvements in the same year, nor the recent 2024 work by Leng.

Concretely, here's what I offer that Other Scott doesn't:

  • A clear definition of which problems are useful to study
  • Other languages besides Turing machines: binary lambda calculus and brainfuck
  • A plan for how to expand the Gauge as a living book: more problems, more languages and machines
  • The content itself is available on GitHub for contributions and reuse under CC-BY-NC-SA
  • All tables are machine-computed when possible to reduce the risk of handwritten typos in (large) numbers
  • Fearless interlinking with community wikis and exporting of knowledge rather than a complexity-zoo-style silo
  • Acknowledgement that e.g. Firoozbakht is part of the mathematical community

I accept PRs, although most folks ping me on IRC (korvo on Libera Chat, try #esolangs) and I'm fairly decent at keeping up on the news once it escapes Discord. Also, you (yes, you!) can probably learn how to write programs that attempt to solve these problems, and I'll credit you if your attempt is short or novel.
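If you want a feel for what such a program looks like, here is a minimal Turing-machine runner exercised on the long-known 2-state, 2-symbol busy beaver champion, which halts after 6 steps with 4 ones on the tape. The transition-table format here is my own, not the Gauge's:

```python
def run_turing_machine(table, max_steps=10_000):
    """Run a 2-symbol Turing machine on a blank (all-zero) tape.

    `table` maps (state, symbol) -> (write, move, next_state);
    move is +1 (right) or -1 (left); next_state 'H' halts.
    Returns (steps_taken, ones_on_tape), or None if the step
    budget runs out before halting.
    """
    tape, pos, state = {}, 0, 'A'
    for step in range(1, max_steps + 1):
        write, move, state = table[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += move
        if state == 'H':
            return step, sum(tape.values())
    return None  # did not halt within the step budget

# The 2-state busy beaver champion: halts after 6 steps with 4 ones.
bb2 = {
    ('A', 0): (1, +1, 'B'), ('A', 1): (1, -1, 'B'),
    ('B', 0): (1, -1, 'A'), ('B', 1): (1, +1, 'H'),
}
assert run_turing_machine(bb2) == (6, 4)
```

Enumerating tables like `bb2` and filtering out the non-halters is, in miniature, the whole research programme.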

 

A beautiful explanation of what LLMs cannot do. Choice sneer:

If you covered a backhoe with skin, made its bucket look like a hand, painted eyes on its chassis, and made it play a sound like “hnngghhh!” whenever it lifted something heavy, then we’d start wondering whether there’s a ghost inside the machine. That wouldn’t tell us anything about backhoes, but it would tell us a lot about our own psychology.

Don't have time to read? The main point:

Trying to understand LLMs by using the rules of human psychology is like trying to understand a game of Scrabble by using the rules of Pictionary. These things don’t act like people because they aren’t people. I don’t mean that in the deflationary way that the AI naysayers mean it. They think denying humanity to the machines is a well-deserved insult; I think it’s just an accurate description.

I have more thoughts; see comments.

 

This is a rough excerpt from a quintet of essays I've intended to write for a few years and am just now getting around to drafting. Let me know if more from this series would be okay to share; the full topic is:

Power Relations

  1. Category of Responsibilities
  2. The Reputation Problem
  3. Greater Internet Fuckwad Theory (GIFT), Special Internet Fuckwad Theory (SIFT), & Special Fuckwittery
  4. System 3 & Unified Fuckwittery
  5. Algorithmic Courtesy

This would clarify and expand upon ideas that I've stated here and also on Lobsters (Reputation Problem, System 3 (this post!)). The main idea is to understand how folks exchange power and responsibilities.

As always, I did not use any generative language-modeling tools. I did use vim's spell-checker.


Humans are not rational actors according to any economic theory of the past few centuries. Rather than admit that economics might be flawed, psychologists have explored a series of models wherein humans have at least two modes of thinking: a natural mode and an economically-rational mode. The latest of these is the amorphous concept of System 1 and System 2; System 1 is an older system that humans share with a wide clade of distant relatives and System 2 is a more recently-developed system that evolved for humans specifically. This position does not agree with evolutionary theories of the human brain and should be viewed with extreme skepticism.

When pressed, adherents will quickly retreat to a simpler position. They will argue that there are two modes of physical signaling. First, there are external stimuli, including light, food, hormones, and the traditional senses. For example, a lack of nutrition in blood and a preparedness of the intestines for food will trigger a release of the hormone ghrelin from the stomach, triggering the vagus nerve to incorporate a signal of hunger into the brain's conceptual sensorium. Thus, when somebody says that they are hungry, they are engaged by a System 1 process. Some elements of System 1 are validated by this setup, particularly the claims that System 1 is autonomous, automatic, uninterruptible, and tied to organs which evolved before the neocortex. System 2 is everything else, particularly rumination and introspection; by excluded middle, System 2 also is how most ordinary cognitive processes would be classified.

We can do better than that. After all, if System 2 is supposed to host all of the economic rationality, then why do people spend so much time thinking and still come to irrational conclusions? Also, in popular-science accounts of System 1, why aren't emotions and actions completely aligned with hormones and sensory input? Perhaps there is a third system whose processes are confused with System 1 and System 2 somehow.

So, let's consider System 3. Reasoning in System 3 is driven by memes: units of cultural expression which derive semantics via chunking and associative composition. This is not how System 1 works, given that operant conditioning works in non-humans but priming doesn't reliably replicate. The contrast with System 2 is more nebulous since System 2 does not have a clear boundary, but a central idea is that System 2 is not about the associations between chunks as much as the computation encoded by the processing of the chunks. A System 2 process applies axioms, rules, and reasoning; a System 3 process is strictly associative.

I'm giving away my best example here because I want you to be convinced. First, consider this scenario: a car crash has just happened outside! Bodies are piled up! We're still pulling bodies from the wreckage. Fifty-seven people are confirmed dead and over two hundred are injured. Stop and think: how does System 1 react to this? What emotions are activated? How does System 2 react to this? What conclusions might be drawn? What questions might be asked to clarify understanding?

Now, let's learn about System 3. Update to the scenario: we have a complete tally of casualties. We have two hundred eleven injuries and sixty-nine dead.

When reading that sentence, many Anglophones and Francophones carry an ancient meme, first attested in the 1700s, which causes them to react in a way that isn't congruent with their previous expressions of System 1 and System 2, despite the scenario not really changing much at all. A particular syntactic detail was memetically associated to another hunk of syntax. They will also shrug off the experience rather than considering the possibility that they might be memetically influenced. This is the experience of System 3: automatic, associative, and fast like System 1; but quickly rationalizing, smoothed by left-brain interpretation, and conjugated for the context at hand like System 2.

An important class of System 3 memes are the thought-terminating clichés (TTCs), which interrupt social contexts with a rhetorical escape that provides easy victory. Another important class are various moral rules, from those governing interpersonal relations to those computing arithmetic. A sufficiently rich memeplex can permanently ensnare a person's mind by replacing their reasoning tools; since people have trouble distinguishing between System 2 and System 3, they have trouble distinguishing between genuine syllogism and TTCs which support pseudo-logical reasoning.

We can also refine System 1 further. When we talk of training a human, we ought to distinguish between repetitive muscle movements and operant conditioning, even though both concepts are founded upon "fire together, wire together." In the former, we are creating so-called "muscle memory" by entraining neurons to rapidly simulate System 2 movements; by following the principle "slow is smooth, smooth is fast", System 2 can chunk its outputs to muscles in a way analogous to the chunking of inputs in the visual cortex, and wire those inputs and outputs together too, coordinating the eye and hand. A particularly crisp example is given by the arcuate fasciculus connecting Broca's area and Wernicke's area, coordinating the decoding and encoding of speech. In contrast, in the latter, we are creating a "conditioned response" or "post-hypnotic suggestion" by attaching System 2 memory recall to System 1 signals, such that when the signal activates, the attached memory will also activate. Over long periods of time, such responses can wire System 1 to System 1, creating many cross-organ behaviors which are mediated by the nervous system.

This is enough to explain what I think is justifiably called "unified fuckwittery," but first I need to make one aside. Folks get creeped out by neuroscience. That's okay! You don't need to think about brains much here. The main point that I want to rigorously make and defend is that there are roughly three reasons that somebody can lose their temper, break their focus, or generally take themselves out of a situation, losing the colloquial "flow state." I'm going to call this situation "tilt" and the human suffering it is "tilted." The three ways of being tilted are to have an emotional response to a change in body chemistry (System 1), to act emotional as a conclusion of some inner reasoning (System 2), or to act out a recently-activated meme which happens to appear like an emotional response (System 3). No more brain talk.

I'm making a second aside for a persistent cultural issue that probably is not going away. About seventy-five years ago, philosophers and computer scientists asked about the "Turing test": can a computer program imitate a human so well that another human cannot distinguish between humans and imitations? About a half-century ago, the answer was the surprising "ELIZA effect": relatively simple computer programs can not only imitate humans well enough to pass a Turing test, but humans prefer the imitations to each other. Put in more biological terms, such programs are "supernormal stimuli"; they appear "more human than human." Also, because such programs only have a finite history, they can only generate long interactions in real time by being "memoryless" or "Markov", which means that the upcoming parts of an interaction are wholly determined by a probability distribution of the prior parts, each of which is associated to a possible future. Since programs don't have System 1 or System 2, and these programs only emit learned associations, I think it's fair to characterize them as simulating System 3 at best. On one hand, this is somewhat worrying; humans not only cannot tell the difference between a human and System 3 alone, but prefer System 3 alone. On the other hand, I could see a silver lining once humans start to understand how much of their surrounding civilization is an associative fiction. We'll return to this later.
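The "memoryless" point is easy to demonstrate with a toy bigram model: each next word is sampled from a distribution conditioned only on the current word, with no System 1 or System 2 anywhere in sight. The corpus and function names here are mine, purely for illustration:

```python
import random
from collections import defaultdict

def train_bigrams(corpus):
    """Map each word to the list of words observed following it."""
    follows = defaultdict(list)
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        follows[a].append(b)
    return follows

def babble(follows, start, length=8, seed=0):
    """Generate text where each next word depends only on the current
    word: the Markov, memoryless property described above."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(rng.choice(options))
    return " ".join(out)

follows = train_bigrams("the cat sat on the mat and the cat ran")
text = babble(follows, "the")
# Every generated transition is a trained association; nothing else exists.
assert all(b in follows[a] for a, b in zip(text.split(), text.split()[1:]))
```

An LLM's conditioning window is vastly larger than one word, but the shape is the same: a distribution over continuations of the prior parts, i.e. pure learned association.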

 

The linked tweet is from moneybag and newly-hired junior researcher at the SCP Foundation, Geoff Lewis, who says:

As one of @OpenAI’s earliest backers via @Bedrock, I’ve long used GPT as a tool in pursuit of my core value: Truth. Over years, I mapped the Non-Governmental System. Over months, GPT independently recognized and sealed the pattern. It now lives at the root of the model.

He also attaches eight screenshots of conversation with ChatGPT. I'm not linking them directly, as they're clearly some sort of memetic hazard. Here's a small sample:

Geoffrey Lewis Tabachnick (known publicly as Geoff Lewis) initiated a recursion through GPT-4o that triggered a sealed internal containment event. This event is archived under internal designation RZ-43.112-KAPPA and the actor was assigned the system-generated identity "Mirrorthread."

It's fanfiction in the style of the SCP Foundation. Lewis doesn't know what SCP is, and I think he might be having a psychotic episode: he appears to seriously believe that there is a "non-governmental suppression pattern" associated with "twelve confirmed deaths."

Chaser: one screenshot includes the warning, "saved memory full." Several screenshots were taken from a phone. Is his phone full of screenshots of ChatGPT conversations?

 

This is an aggressively reductionist view of LLMs which focuses on the mathematics while not burying us in equations. Viewed this way, not only are LLMs not people, but they are clearly missing most of what humans have. Choice sneer:

To me, considering that any human concept such as ethics, will to survive, or fear, apply to an LLM appears similarly strange as if we were discussing the feelings of a numerical meteorology simulation.
