[–] planish@sh.itjust.works 1 points 3 hours ago

Why don't browsers know how to render a Markdown content-type yet, all by themselves? It's ubiquitous now and it's not like it's hard to parse, but every site has to translate it into HTML itself for the browser.
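Until browsers do render `text/markdown` natively, the translation step this comment describes happens server-side on every site. A minimal sketch of that step in Python, assuming the third-party `markdown` package; the page contents and port number are just illustrative:

```python
# Minimal sketch of the Markdown-to-HTML conversion every site has to
# do itself, since browsers won't render text/markdown natively.
# Requires: pip install markdown
from http.server import BaseHTTPRequestHandler, HTTPServer

import markdown

PAGE_MD = "# Hello\n\nThis *source* is Markdown, but the browser only ever sees HTML."

class MarkdownHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Convert on every request, because the browser cannot be
        # handed the Markdown directly.
        body = markdown.markdown(PAGE_MD).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), MarkdownHandler).serve_forever()
```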

[–] fruitycoder@sh.itjust.works 2 points 7 hours ago

Firefox addon for regular users when?

[–] TORFdot0@lemmy.world 37 points 18 hours ago (1 children)

Does this mean that if I pretend to be a bot, I can access any cloudflare site ad-free?

[–] bjoern_tantau@swg-empire.de 23 points 17 hours ago (2 children)

"Prove that you're a bot by factorising this large number."

[–] br3d@lemmy.world 16 points 16 hours ago

"Prove you're a bot by failing to click all the motorcycles in this image"

[–] xthexder@l.sw0.com 7 points 15 hours ago (1 children)

How large a number are we talking? This might be impossible for a computer as well, considering that this problem being hard is effectively the basis for most public-key encryption.
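To put the joke on a technical footing, here is a toy Python sketch of naive factorisation. The example number is made up; the point is that the loop count grows with the square root of n, so a 2048-bit RSA modulus is out of reach for any computer using this approach:

```python
# Toy illustration of why "factorise this large number" is hopeless
# as a challenge: trial division is instant for small numbers but
# utterly infeasible for RSA-sized ones.
def trial_division(n: int) -> list[int]:
    """Return the prime factors of n by naive trial division."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)  # whatever remains is prime
    return factors

print(trial_division(8051))          # [83, 97] -- instant
# trial_division(rsa_2048_modulus)   # ~2**1024 iterations: never finishes
```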

[–] bjoern_tantau@swg-empire.de 7 points 14 hours ago

Similar to how ReCAPTCHA was meant to train neural networks for image recognition, the anti-bot protocol is used to train an autist to find an efficient factorisation algorithm.

[–] webghost0101@sopuli.xyz 109 points 21 hours ago* (last edited 21 hours ago) (5 children)

The autistic community has been dying for this kind of accessibility accommodation for years.

I cannot express how deeply this angers me. Though I am happy to exploit the fuck out of this for personal use.

“Markdown offers a cleaner, more semantically clear representation of the content. This means less noise for ~~language models and other text-analysis systems~~ people that process information neurodivergently, resulting in more efficient processing and ~~potentially~~ lower ~~compute costs~~ real life physical exhaustion.”

[–] artyom@piefed.social 52 points 18 hours ago (1 children)

My brother, this is not just autistic people. Everyone wants this. Except the people who make the sites, because all that noise is how they make money.

If you look at private blogs, they're usually devoid of much noise.

[–] MagicShel@lemmy.zip 13 points 17 hours ago

MD is a nearly ideal format. I keep my personal notes and time-management stuff in Obsidian using Markdown, and I write my blog in Markdown too. AsciiDoc is nice as well, for certain use cases.

[–] kernelle@lemmy.dbzer0.com 45 points 19 hours ago

❌️ Adding accessibility features to make the internet usable by anyone

✅️ ~~anyone~~ other computers

[–] deltaspawn0040@lemmy.zip 17 points 18 hours ago (1 children)

Woohoo, the interests of capital have coincidentally aligned with ours in this one brief moment!

[–] FauxLiving@lemmy.world 8 points 14 hours ago

The next step is for Cloudflare to introduce proprietary markdown tags, then release a library to parse their new crap, then update their systems to serve degraded 'legacy' markdown while putting the 'old' markdown behind a paid API, then add features to the library that can only be accessed by API customers, etc., etc.

When I see a commercial entity embrace something, I start looking for the 'extend' and 'extinguish' parts.

[–] SuspciousCarrot78@lemmy.world 12 points 17 hours ago

I have ASD; I made several tools that explicitly convert web sources to .md and JSON.

The shitty thing is, a lot of sites, even if they have stuff available in a simple, beautiful JSON format, refuse to give public access to it. Notoriously, movie session times for local cinemas. That should be a simple lookup... but no.

Oh well, at least cool shit like this still exists

https://github.com/chubin/wttr.in

https://github.com/scrapy/scrapy
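A rough sketch of the kind of web-to-Markdown tool described above, assuming the `requests` and `html2text` packages; the URL is just a placeholder:

```python
# Convert a web page to Markdown.
# Requires: pip install requests html2text
import requests
import html2text

def page_to_markdown(url: str) -> str:
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    converter = html2text.HTML2Text()
    converter.ignore_images = True  # drop image noise
    converter.body_width = 0        # don't hard-wrap lines
    return converter.handle(resp.text)

print(page_to_markdown("https://example.com"))
```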

[–] acosmichippo@lemmy.world 7 points 18 hours ago (1 children)

don’t “reader” views in web browsers essentially accomplish the same thing?

[–] kernelle@lemmy.dbzer0.com 5 points 15 hours ago

Yes, and this reader functionality works by using structured tags in the HTML code. A very small effort that leads to a huge gain in accessibility, which some just do not give an F about.

The kicker here is that these tags also heavily impact SEO, so leaving them out not only makes some sites harder for many people to use or read, it also makes those sites score lower on search engines.
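A minimal sketch of what reader modes lean on: if a page uses a semantic tag like `<article>`, extraction is trivial, and if it's all anonymous `<div>`s, the reader has to guess. The HTML here is invented for illustration; assumes the `beautifulsoup4` package:

```python
# How semantic tags make reader-mode extraction easy.
# Requires: pip install beautifulsoup4
from bs4 import BeautifulSoup

html = """
<html><body>
  <nav>Home | About | 47 tracking links</nav>
  <article>
    <h1>The actual story</h1>
    <p>The content a reader mode wants.</p>
  </article>
  <aside>Subscribe! Cookies! Ads!</aside>
</body></html>
"""

soup = BeautifulSoup(html, "html.parser")
article = soup.find("article")  # one semantic tag does all the work
print(article.get_text(" ", strip=True))
# -> "The actual story The content a reader mode wants."
```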

[–] LedgeDrop@lemmy.zip 32 points 20 hours ago

*jaw-drop* I can go back to lynx now! /s

Potentially, this is actually a fantastic improvement. It (in theory) means you could request Markdown, convert it back to HTML, and strip out ads, JavaScript, tracking cruft, etc. along the way.

I wonder how accurate a Markdown translation this would be. Would/could it handle single-page apps?
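A sketch of the idea above: ask for the Markdown version of a page, then rebuild minimal HTML locally with none of the original scripts or trackers. The `Accept: text/markdown` content negotiation shown here is an assumption about how the feature is exposed, not a confirmed mechanism; assumes the `requests` and `markdown` packages:

```python
# Fetch Markdown (assuming content negotiation), rebuild clean HTML.
# Requires: pip install requests markdown
import requests
import markdown

def fetch_clean_html(url: str) -> str:
    resp = requests.get(url, headers={"Accept": "text/markdown"}, timeout=10)
    resp.raise_for_status()
    # Everything below is generated locally: no ads, no JavaScript.
    return markdown.markdown(resp.text)

# print(fetch_clean_html("https://example.com"))
```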

[–] uninvitedguest@piefed.ca 3 points 18 hours ago (3 children)

A few things come to mind

  1. Is this much different from the "reading view" popularized by Instapaper, read-it-later apps, etc., and now baked into most browsers?
  2. What is a token?
  3. How is it that tokens have become the base unit in which we denominate LLM work/effort/task difficulty/cost?
  4. Does every LLM make use of "tokens" or is it just one or a select few?
  5. If multiple use the idea of tokens, is a token with one LLM/provider equivalent to a token with a different LLM/provider?

[–] wonderingwanderer@sopuli.xyz 8 points 16 hours ago

A token is basically a small unit of text, like a word, part of a word, or a short phrase.

LLMs don't parse text word-by-word because that would miss a lot of idiomatic meaning and other context. "Dave shot a hole in one at the golf course" might be parsed as "{Dave} {shot} {a hole in one} {at the golf course}".

They use NLP to "tokenize" text, meaning they parse it into individual tokens, so depending on the tokenizer I suppose there could be slight variations in how a text is tokenized.

Then the LLM turns each token into a vector (an embedding) and runs it through layers of attention heads to assess the probabilistic relationships between tokens, and uses that process to generate a response via next-token prediction.

It's a bit more complex than that, of course: tensor calculus, billions of weighted parameters, hidden layers, matmuls, masks, softmax, and dropout, plus the "context window", which is how many tokens the model can process at a time. But that's the gist of it.

But a token is just the basic unit that gets run through those processes.
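You can see tokenization for real with OpenAI's open-source `tiktoken` library. Note that the splits fall on subword boundaries chosen by the tokenizer, not on tidy linguistic units:

```python
# Inspect how a real tokenizer splits text into token IDs.
# Requires: pip install tiktoken
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # GPT-4-era encoding
text = "Dave shot a hole in one at the golf course"
token_ids = enc.encode(text)

print(token_ids)  # integers are all the model ever sees
for tid in token_ids:
    print(tid, repr(enc.decode_single_token_bytes(tid)))
```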

[–] wosat@lemmy.world 5 points 15 hours ago

Here's an OpenAI page that allows you to enter text and see how it gets tokenized:

https://platform.openai.com/tokenizer

[–] CandleTiger@programming.dev 4 points 16 hours ago

A token is the word for the base unit of text that an LLM works with. It's always been that way. The LLM does not directly work with characters; they are collected together into chunks, often smaller than a word, and this stream of tokens is what the LLM processes. This is also why LLMs have such trouble with spelling questions like "how many Rs in raspberry?": they do not see the individual letters in the first place, so they do not know.

No, LLMs do not all tokenize the same way. Different tokenizers are (or at least were once) one of the major ways they differed from each other. A simple tokenizer might split words up into one token per syllable, but I think they've gotten much more complicated than that now.

My understanding is very basic and out-of-date.
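The point that tokenizers differ is easy to confirm: the two encodings below both ship with `tiktoken` and split the same text into different token counts and pieces:

```python
# Compare how two real encodings tokenize the same string.
# Requires: pip install tiktoken
import tiktoken

text = "how many Rs in raspberry?"
for name in ("cl100k_base", "o200k_base"):
    enc = tiktoken.get_encoding(name)
    ids = enc.encode(text)
    pieces = [enc.decode([i]) for i in ids]
    print(f"{name}: {len(ids)} tokens -> {pieces}")
```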