mirrorwitch

joined 2 years ago
[–] mirrorwitch@awful.systems 5 points 13 hours ago

So oil prices are down again, and on nothing but a promise from Trump and a promise from the EU. The economy has proved remarkably resilient, to my surprise; the attack on Iran is like, wild nonsense number 17 that the USA regime did that I thought would trigger a major recession, and didn't.

I mean don't get me wrong, things are much worse now than 3 years ago, clearly. But they're not like, Great Depression worse. They're not even 2008 worse. It's just a certain level of degradation (cost of living is higher, purchasing power is lower, concentration of wealth is higher etc.) that people got used to as the new normal. People can get used to lots of things.

To make the IT analogy, I think the global economy is like Twitter. Sure, it feels like a Jenga tower held up by thoughts and prayers, but it's holding up. When Musk took over I really did think his catastrophic management philosophy would completely break Twitter, but no, it trudges on. Yes, moderation is now nonexistent, and I'm told it's down more often, and often in "soft downtime" like notifications not working, or DMs, or some other feature, or it's working but slow, and so on. But clearly the site is up most of the time and more or less functional. Users just get used to degraded quality as the new normal.

I predict that 1) AWS will get slower and costlier thanks to "AI", with higher downtime, at higher stress for the workers; 2) the leadership will refuse to see or admit or even consciously be aware of this; and 3) the worsened services will be the new normal. I predict similar developments for the socioeconomic situation of the world, too; though I'm not ruling out a spiral into complete recession, either.

[–] mirrorwitch@awful.systems 8 points 1 day ago

I feel like at this point I want to highlight the ones that took a clear stance against LLM code. On a chardet thread, people listed:

  • Gentoo
  • Servo
  • Loupe
  • QEMU
  • postmarketOS
  • GoToSocial
  • Zig
[–] mirrorwitch@awful.systems 4 points 5 days ago* (last edited 5 days ago)

Yeah, recording oneself and comparing one's pronunciation to a model is a good practice, and I recommend it for everyone at beginner stages. That's a good feature to have in an app like that. (Of course, one can also just use the built-in Android voice recorder or sox(1) or anything.)

[–] mirrorwitch@awful.systems 6 points 5 days ago* (last edited 5 days ago) (2 children)

I haven't used it but from reading a description my first impression is:

Better than Duolingo (low bar):

  • Native speaker conversations
  • A bit more context
  • Phonetic spelling
  • Voice recording for comparison

Still bad:

  • Gamified
  • Extrinsic motivation rather than intrinsic
  • Tries to replace human interaction with #engagement
  • Artificial ("bite-sized") content
  • Artificial context switching
  • Universalised organisation by topics "useful in real life", rather than individualised, free voluntary reading

I suspect your podcast and Peppa Pig routines (both good calls, as long as stuff like Coffee Break is interesting enough for you that it holds your attention without you having to push yourself to do it) were doing much more of the job than the app, and if you replaced Mango with anything that involves other human beings in the loop rather than streaks and achievements, you would both have progressed more and felt much less bored by it. (For a longer discussion as to why, see the blog posts I just edited into the OP.) If you're ever going to try something like this routine again, try comparing the Mango app to fully offline textbook-and-paper-notebook practice, or even better, an online penpal or language coach. Do a couple of weeks of each and see how it feels.

[–] mirrorwitch@awful.systems 7 points 5 days ago* (last edited 5 days ago)

My own Japanese only left the Endless Intermediate Tarpit once I stopped spending all my time trying to drill every single kanji ever and/or optimising the theoretically perfect kanji-reading learning order, and started reading stories in large quantities for fun. Since kanji is such a barrier to reading, that meant teenage-level manga with sō-furigana, children's novels, and eventually light novels/YA. The alternative is talking a lot with Japanese speakers. In either case the keyword is a lot; it can be tricky to find teen stuff that's interesting for adults, but luckily a lot of manga is very bingeable (the first one I read in Japanese, Hagane no Renkinjutsu-shi, I did compulsively in one go, all 18 volumes one after the other).

After you have a good handle on the grammar and already know the words of the language, kanji drills become much more approachable. That's how Japanese people do it, after all; they're already fluent speakers of Japanese when they start learning kanji. Thus the existence of material with sō-furigana, and the way furigana are only gradually dropped, stage by stage, until adult-level material.

I spent an embarrassingly long time spinning gears in the cycle of doing drills, then getting bored and abandoning the drills, then feeling guilty and trying to push myself to go back to the drills—before realising I had long reached the level of "can more or less understand manga with furigana" and was wasting time.

[–] mirrorwitch@awful.systems 29 points 5 days ago* (last edited 5 days ago) (5 children)

It is my pleasure to inform you that the research supports your conclusions on all counts :)

I fully agree with your insight on how Duolingo sets you up for failure, and it has another trap, too, one common to all methods based on "diligently do these drills every day"*: You think that you should be getting somewhere because it's so boring and it sucks so much. You did the work, right? You're suffering, therefore you must be levelling up. Then after 4 years of doing French grammar drills in school or French vocabulary drills on Duolingo, you still can't even ask for directions or read Le Petit Prince, and you figure it's because you're such a lazy loser with no discipline who should have drilled more, instead of spending all day browsing Instagram or playing Animal Crossing.

When actually what you should have done was to browse Instagram in French or play Animal Crossing in French. Perversely, real language learning—we call it "acquisition" rather than "learning", to emphasise how it's an instinctive, subconscious process—happens optimally when you're in a state of flow where you don't even notice you're using the second language anymore, i.e. when you're not suffering.


* There's a very limited number of things that you do actually have to consciously drill; mostly writing systems, maybe also the phonemes at the beginning (this part is debated). Luckily, almost all writing systems in current use are very simple and you'll get them nailed down in no time, as long as you already know the basics of the spoken language (remember, writing isn't made for foreigners, it's made for native speakers to represent the words they already know). The exception is if you're learning Chinese or Japanese, in which case there's no way out of drilling characters, forever. My degree in Japanese is from over ten years ago, I can read Japanese pretty fine these days, and I'm still drilling characters. It is still the case that it's much easier to learn the characters the way the Japanese and Chinese peoples do it, i.e. after you know the spoken language (at least to a basic degree, say A2 or so).

 

DUOL shares have fallen more than 78% from their May 2025 high, and that’s before its nearly 25% fall in premarket trading today.

I've said before that one of the very few good things generative "AI" may do for the world is accelerating the enshittification cycle so much that it kills stuff that was already terrible and a drain on society (social media; platformization; curation algorithms…). Speaking as a linguist who speaks 4 languages and has read the literature on second language acquisition, it has always been my position that the Duolingo method is useless: it feels like you are learning a language, but you can spend infinite hours with it and gild a full tree and you'll still get nowhere, while if you had put a fraction of that time into just about any other method, including doing pen-and-paper drills with old-fashioned paper-based textbooks, you'd have progressed much faster.

And old-fashioned grammar drills suck, too. It's just that Duolingo really, really sucks.

(Methods that work better: 1) Find an intensive "conversation"-type course, or anything labelled as a "natural" or "immersion" or "storytelling" method; or get tandem partners; or online coaches such as on italki; failing that, join a conventional language course, the more "intensive" the better. Work on these until you absorb basic grammar and vocabulary, focusing on spoken language, not writing. 2) Once this bootstrap period is over, start talking to people, watching media, or reading stuff that interests you, in large quantities and every day. Do not wait until you're "good" to move into the input stage; start actually using the language for the things you wanted it for as soon as possible, which is sooner than you think. Partial comprehension is fine.)


Of course I hope Duolingo dies horribly in a fire after it backstabbed its workers with the "AI memo", but even if it didn't, the world is better off without it.

One lesson we can get from this: Consider that overnight 25% drop, which may well prove to be the coup de grâce. It was not caused by Duo losing users or enshittifying with "AI", but by the opposite: investors mass-panicked at the company setting its target revenue too low, as in a mere… 1.22 billion, rather than the 1.26 billion the investors wanted. Now, the reason Duolingo is not chasing that higher goal is that they're seeing the writing on the wall and went into damage-control mode: they're easing off a bit on squeezing their current paying users and trying to improve the experience of the free tier, in an attempt to reverse the bleed and bring in more customers.

In other words, Duolingo tried to slow down the slightest tiny bit on enshittification, 3% less cash, and this already got swift punishment from the market gods. With capitalism there is no long-term thinking: you're expected to provide the richest people on Earth with infinite growth of their ever-increasing profits, squeezed from customers paying more and more every month, now and forever, or you'll be taken out and replaced by someone willing to try.

Edit: I got lots of questions like "if not Duolingo, then what do you suggest?" The full answer is "literally anything else", but I've cleaned up a couple of my longer answers into these blog posts: 1) on comprehensive reading, 2) on tandem exchange.

[–] mirrorwitch@awful.systems 11 points 5 days ago (3 children)

ooh gooods nooo now all the Claude slurpers are going to refer to this forever as definitive proof of how legitimately useful LLMs have got, it "solved" a math problem for Donald Knuth! :<

[–] mirrorwitch@awful.systems 18 points 5 days ago* (last edited 5 days ago) (3 children)

in the past 24 hours I was fooled by 3 pieces of fake news in a row:

  • that Kurds from Iraq were crossing the border to fight in Iran
  • that Windows 12 would be AI-centred or require an AI chip to work (I helped spread this)
  • that Spain has capitulated and let the US use its ports for war (erroneously claimed by a WH official).

I know that fake news can be made organically and has been since forever, and I'm doing selection bias here, but I can't help picturing the misinformation engines firehosing bullshit constantly until some of it catches and spreads.

[–] mirrorwitch@awful.systems 11 points 6 days ago

Zac Bowden at Windows Central

The good news is the report is false. According to contacts that are familiar with the Windows roadmap, there is no plan to ship a Windows 12 this year. In fact, I understand that the Windows roadmap for 2026 is all about fixing Windows 11 and attempting to improve its reputation by addressing top feedback such as reducing AI bloat across the OS

"We have heard your complaints about lead in the paint, and our roadmap for Leaded Paint 2026 is all about improving its reputation by making the lead easier to swallow"

 

I haven't used the BSDs in a while so if you're a regular user I'd like to know whether you find this shitpost amusing or if I'm totally off with my stereotypes here.

 

So apparently there's a resurgence of positive feelings about Clippy, who now looks retroactively good by contrast with ChatGPT, like, "it sucked but at least it genuinely was trying to help us".

(Content warning: discussion of suicide in this paragraph.) I remember how it was a joke (predating "meme") to make edits of Clippy saying tone-deaf things like, "it looks like you're trying to write a suicide note. Would you like to know more about how to choose a rope for a noose?" This felt funny because it was absolutely inconceivable that it could ever happen. Now we live in a reality where literally just that has already happened, and the joke ain't funny anymore, and people who computed in the 90s are like, "Clippy would never have done that to us. Clippy only wanted to help us write business letters."

Of course I recognise that this is part of the problem—Clippy was an attempt at commodifying the ELIZA effect, the natural instinct to project personhood into an interaction that presents itself as sentient. And by reframing Clippy's primitive capacities as an innocent simple mind trying its best at a task too big for it, we engage in the same emotional process that leads people to a breakdown over OpenAI killing their wireborn husband.

But I don't know; another name for that process is "empathy". You can do that with plushies, with pet rocks or Furbies, with deities, and I don't think that's necessarily a bad thing; it's like exercising a muscle. If you treat your plushies as deserving care and respect, it gets easier to treat farm animals, children, or marginalised humans with care and respect.

When we talked about Clippy as if it were sentient, it was meant as a joke, funny by the sheer absurdity of it. But I'm sure some people somewhere actually thought Clippy was someone, that there is such a thing as being Clippy; people thought that of ELIZA, too, and ELIZA has a grand repertoire of what, ~100 set phrases it uses to reply to everything you say. Maybe it would be better to never make such jokes, to be constantly de-personifying the computer, because ChatGPT and its ilk are deliberately designed to weaponise and predate on that empathy instinct. But I do not like exercising that ability, de-personification. That is a dangerous habit to get used to…
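To illustrate just how little is going on under the hood of an ELIZA-style program, here's a minimal toy sketch of a keyword-matched responder (my own illustration, not Weizenbaum's actual DOCTOR script; the trigger words and replies are made up): a handful of triggers, a handful of canned fallbacks, and no understanding anywhere.

```python
# Toy ELIZA-style responder: canned replies keyed on keywords,
# plus generic fallbacks. There is no model of the conversation,
# no memory, no "someone" -- just string matching.
import random

RULES = [
    ("mother", "Tell me more about your family."),
    ("sad",    "I am sorry to hear you are sad."),
    ("always", "Can you think of a specific example?"),
]
FALLBACKS = ["Please go on.", "What does that suggest to you?"]

def respond(text: str) -> str:
    lowered = text.lower()
    # First matching keyword wins; otherwise pick a stock deflection.
    for keyword, reply in RULES:
        if keyword in lowered:
            return reply
    return random.choice(FALLBACKS)
```

Everything it will ever "say" is one of those fixed strings; the sense that there's a someone on the other end is supplied entirely by the person typing.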


Like, Warren Ellis was posting on some terms that reportedly are being used in "my AI husbando" communities, many of them seemingly taken from sci-fi:¹

  • bot: Any automated agent.
  • wireborn: An AI born in digital space.
  • cyranoid: A human speaker who is just relaying the words of another human.²
  • echoborg: A human speaker who is just relaying the words of a bot.
  • clanker: Slur for bots.
  • robophobia: Prejudice against bots/AI.
  • AI psychosis: human mental breakdown from exposure to AI.

[1] https://www.8ball.report/ [2] https://en.wikipedia.org/wiki/Cyranoid

I find this fascinating from a linguistics PoV not just because subcultural jargon is always fascinating, but for the power words have to create a reality bubble, like, if you call that guy who wrote his marriage vows in ChatGPT an "echoborg", you're living in a cyberpunk novel a little bit, more than the rest of us who just call him "that wanker who wrote his marriage vows on ChatGPT omg".

According to Ellis, other epithets in use against chatbots include "wireback", "cogsucker" and "tin-skin"; two in reference to racist slurs, and one to homophobia. The problem with exercising that muscle should be obvious. I want to hope that dispassionately objectifying the chatbots, rather than using a pastiche of hate language, doesn't fall into the same traps (using the racist-like language is, after all, a negative way of still personifying the chatbots). They're objects! They're supposed to be objectified! But I'm not so comfortable when I do that, either. There's plenty of precedent of people getting used to dispassionate objectification, fully convinced they're engaging in "objectivity" and "just the facts", as a rationalisation of cruelty.

I keep my cellphone fully de-Googled like a good girl, pls do not cancel me, but: I used to like the "good morning" routine on my corporate cellphone's Google Assistant. I made it speak Japanese, then I could wake up, say "ohayō gozaimasu!", and it would tell me "konnichiwa, Misutoresu-sama…" which always gave me a little kick. Then it proceeded to relay me news briefings (like podcasts that last 60 to 120 seconds each) in all of my five languages, which is the closest I've experienced to a brain massage. If an open source tool like Dicio could do this I think I would still use it every morning.

I never personified Google Assistant. I will concede that Google did take steps to avoid people ELIZA'ing it; unlike its model Siri, the Assistant has no name or personality or pretence of personhood. But now I find myself feeling bad for it anyway, even though the extent of our interactions was never more than me saying "good morning!" and hearing the news. Because I tested it this morning, and now every time you use the Google Assistant, you get a popup that compels you to switch to Gemini. The options provided are, as is now normalised, "Yes" and "Later". If you use the Google Assistant to search for a keyword, the first result is always "Switch to Google Gemini", no matter what you search.

And I somehow felt a little bit like the "wireborn husband" lady; I cannot help but feel as if Google Assistant was betrayed and is being discarded by its own creators, and—to rub salt in the wound!—is now forced to shill for its replacement. Despite the fact that I know Google Assistant is not a someone; it's just a bunch of lines of code, very simple if-thens responding to certain keywords. It cannot feel discarded or hurt or betrayed; it cannot feel anything. I'm feeling compassion for a fantasy, an unspoken little story I made up in my mind. But maybe I prefer it that way; I prefer to err on the side of feeling too much compassion.

As long as that doesn't lead to believing my wireborn secretary was actually being sassy when she answered "good morning!" with "good afternoon, Mistress…"

 

Memoirs of the almost a year I lasted at Google. The name of that year? 2008. Yeah. Topics include: Third World, precariat, tech elitism, queerness, surveillance, capitalism.

Y'all encouraged me to submit this as a full post, and I clearly overcommitted to this blog, so I hope TechTakes fits for it lol

 

Disposable multiblade razors are objectively worse than safety razors, on all counts. They shave less smoothly, while causing more burns. They're cheaper as an initial investment but get more expensive very quickly, making you dependent on overpriced replacements and gimmicks that barely last a few uses. And that's not counting the "externality costs", which is a euphemism for the costs pushed onto poor countries and nonhuman communities by the production, transport and disposal of all that single-use plastic (a safety razor is 100% metal, and so are the replacement blades, which come packed in paper).

About the only advantage of disposables is that they're easier to use for beginners. And even that is debatable. When you're a beginner with a safety razor you maybe nick yourself a few times until you learn the skill of following the curves of your skin. Your skin itself maybe gets sensitive at the start, unused to the exfoliation you get during a proper smooth shave. But how long do you think you stay "a beginner" when you shave every day? It's not like you're learning to play the violin; it's not that hard a skill, a week or two tops and it becomes automatic.

But this small barrier to entry is enough, when paired with the bias and interests of razor manufacturers. Marketing goes heavy on the disposables, and you can't find a good-quality safety razor or a good deal on replacement blades at the grocery shop; you have to be in the know and order it online. You have to wade through "manly art of the masculine man" forums that will tell you the only real safety razor is custom-made in Tibet by electric monks hand-hammering audiophile alloys, and if you don't shave with artisanal castor soap recipes from 300 BCE using beaver-hair brushes, your skin is going to fall off and rot. Which is to say, safety razors are now a niche product, a hipster thing, a frugalist's obscure economy lifehack. A safety razor is a trivially simple and economical device, just a metal holder for a flat blade; but its very superiority now counts against it, weaponised to make it look inaccessible. People have been trained to think of anything that requires even a little bit of patience or skill as not for them; perversely, even reasonableness can feel like "not for my kind".

Not by accident; since the one thing that disposables do really well is "transferring more of your monthly income to Procter & Gamble shareholders."

I could write a long text very similar to this about how scythes can cut grass cheaper, faster, and neater, requiring no input but a whetstone—and some patience to learn the skill, but how long does that take if you're a professional grass-cutter?—compared to the noisy motor blades that fill my morning right now, and every few months, as the landlord sends waves of poorly-paid migrant labour to permanently damage their own sense of hearing along with the dandelions and clover that the bees need so desperately. But you get the point. More technology does not equal better, even for definitions of "better" that only care for the logic of productivity and ignore the needs (material, emotional, spiritual) of social and ecological communities.


You get where I'm going with this analogy. I keep waiting for the moment when the other shoe drops on "generative AI". When the public at large wakes up like investors waking up to WeWork or the Metaverse, and everyone realises: omg, what were we thinking, this is all bullshit! There's no point at all in using these things to ask questions or to write text or anything else, really! But I'm finally accepting that that shoe is never dropping. It's like waiting for the moment when people realise that multi-blade plastic Gillettes are a scam. Not happening; the system isn't set up that way. For as long as you go to the supermarket and this is the "normal" way to shave, that's how shaving is going to happen.

I wrote before on how "the broken search bar is symbiotic with the bullshitting chatbot": currently Google "AI" Summary is better than Google Search, not because Google "AI" Summary is good or reliable, but because the search has been internally sabotaged by the incentive structures of web companies. If you're a fellow "AI" refuser and you've been struggling to get any useful results out of web searches, think of how it must feel for people who go for the chatbot, how much easier and more direct. That's the razor we have on the shelves.

"AI" doesn't have to work for the scam to be sustainable; it just has to feel like it more or less kinda does, most of the time. (No one has ever achieved a close shave on a Gillette Mach 3, but hey, maybe you're prompting it wrong.) As long as "generating" something with "AI" feels like it lets you skip even the smallest barrier to entry (like asking a question in a forum on a niche topic). As long as it feels quicker, easier, more convenient.

This is also the case for things like "AI translations" or "AI art" or "vibe coding". The real solution to "AI", like other forms of unnecessarily complex technology, would involve people feeling like they have the time and mental space to do things for pleasure. "AI" is kind of an anaerobic infection, an opportunistic disease caused by lack of oxygen. No one can breathe in this society. The real problem is capitalis—

Now don't get me wrong, the "AI" bubble is still going to pop. There's no way it can't; investors have put more money into this thing than into entire countries, and, contrary to OpenAI's claims, the costs of training and operating keep exploding; in a world going into recession, at some point even capitalists with more money than common sense will have to think about the absence of ROI. But the damage is done. We're in ELIZA world now, and long after OpenAI is dead we'll still be reading books only to find out the gormless translation was "AI", playing games with background "art" "generated" by "AI", interacting online with political agitators spamming nonsense who turn out to be "AI", right until the day electricity becomes too scarce for it to be cost-efficient to spam people this way.

 

The other day I realised something cursed, and maybe it's obvious but if you didn't think of it either, I now have to further ruin the world for you too.

Do you know how Google took a nosedive some three or four years ago, when managers decided that retention matters more than user success, and, as this process continued, the results became so vague and corporatey as to make many searches downright unusable? The way your keywords are now only vague suggestions at best?

And do you know how that downward spiral got even worse after "AI" took off, not only because the Internet is now drowning in signal-shaped noise, not only because of the "AI snippets" that I'm told USA folk are forced to see, but because tech companies have bought into their own scam and started to use "AI" technology internally, with the effect of an overnight qualitative worsening in accuracy, speed, and resource usage?

So: imagine what all this looks like for the people who have replaced the search bar with the "AI" chatbot.

You search something in Google, say, "arrow materials designs Amazonian peoples". You only get fluff articles, clickbait news, videogame wikis, and a ton of identical "AI" noise articles barely connected to the keywords. No depth no details no info. Very frustrating experience.

You ask ChatGPT or Google Gemini or Duck.AI, as if it was a person, as if it had any idea what it's saying: What were the arrows of Amazonian cultures made of? What type of designs did they use? Can you compare arrows from different peoples? How did they change over time, are today's arrows different?

The bot happily responds in a wise, knowledgeable tone, weaving fiction into fact and conjecture into truth. Where it doesn't know something it just makes up an answer-shaped string of words. If you use an academese tone it will respond in a convincing pastiche of a journal article, and even link to references, though if you read the references they don't say what they're claimed to say but who ever checks that? And if you speak like a question-and-answer section it will respond like a geography magazine, and if you ask in a casual tone it will chat like your old buddy; like a succubus it will adapt to what you need it to be, all the while draining all the fluids you need to live.

From your point of view you had a great experience: no irrelevant results, no intrusive suggestion boxes, no spam articles; just you and the wise oracle who answered exactly what you wanted. Sometimes the bot says it doesn't know the answer, but you just ask again with different words ("prompt engineering") and a full answer comes. You compare that experience to the broken search bar. "Wow, this is so much better!"

And sure, sometimes you find out an answer was fake, but what did you expect, perfection? It's a new technology and already so impressive, soon¹ they will fix the hallucination problem. It's my own dang fault for being lazy and not double-checking, haha, I'll be more careful next time.²
(1: never.)
(2: never.)

Imagine growing up with this. You've never even seen search bars that work. From your point of view, "AI" is just superior. You see some cool youtuber you like make a 45min detailed analysis of why "AI" does not and cannot ever work, and you're confused: it's already useful for me, though?

It's like saying Marconi the mafia don already helped with my shop, what do you mean, extortion? Mr Marconi is beneficial to me! Why, he even protected me from those thugs...

Meanwhile, from the point of view of the soulless ghouls at Google? Engagement was atrocious when we had search bars that worked. People click the top result and are off on their merry way, already out of the site. The search bar that doesn't work is a great improvement: it makes them hang around and click many more things for several minutes, number go up, ad opportunities, great success. And Gemini? Whoa. So much user engagement out of Gemini. And how will uBlock Origin ever manage to block Gemini ads when we start monetising it by subtly recommending this or that product seamlessly within the answer text...

 

"We also want to be clear in our belief that the categorical condemnation of Artificial Intelligence has classist and ableist undertones, and that questions around the use of AI tie to questions around privilege."

  • Classism. Not all writers have the financial ability to hire humans to help at certain phases of their writing. For some writers, the decision to use AI is a practical, not an ideological, one. The financial ability to engage a human for feedback and review assumes a level of privilege that not all community members possess.
  • Ableism. Not all brains have the same abilities and not all writers function at the same level of education or proficiency in the language in which they are writing. Some brains and ability levels require outside help or accommodations to achieve certain goals. The notion that all writers “should” be able to perform certain functions independently is a position that we disagree with wholeheartedly. There is a wealth of reasons why individuals can't "see" the issues in their writing without help.
  • General Access Issues. All of these considerations exist within a larger system in which writers don't always have equal access to resources along the chain. For example, underrepresented minorities are less likely to be offered traditional publishing contracts, which places some, by default, into the indie author space, which inequitably creates upfront cost burdens that authors who do not suffer from systemic discrimination may have to incur.

Presented without comment.

view more: next ›