Also, if y'all are interested, run local models!
It’s not theoretical.
The cost of hybrid inference is surprisingly low: you can squeeze a ~30B-class Qwen model onto a 16GB RAM machine as long as it has some GPU to offload to. Check out ik_llama.cpp and ubergarm's quants in particular:
https://huggingface.co/ubergarm/models#repos
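The 16GB claim checks out with simple arithmetic. Here's a rough sketch (the bits-per-weight figures are approximations for llama.cpp-style quant formats, and real GGUF files add a little overhead for embeddings and scales):

```python
def quant_size_gib(n_params: float, bits_per_weight: float) -> float:
    """Approximate quantized model size in GiB: params * bpw / 8 bytes."""
    return n_params * bits_per_weight / 8 / 2**30

params = 30e9  # a 30B-class model
# Approximate bits-per-weight for some common llama.cpp quant types
for name, bpw in [("Q8_0", 8.5), ("Q4_K_M", 4.8), ("IQ3_XS", 3.3)]:
    print(f"{name}: ~{quant_size_gib(params, bpw):.1f} GiB")
```

At ~3.3 bits per weight a 30B model lands around 11–12 GiB, so it fits in 16GB RAM with room to spare, and a hybrid setup moves the hottest tensors to whatever VRAM the GPU has.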
But if you aren’t willing to even try, I think that’s another bad omen for local models. Like the Fediverse, it won’t be served to you on a silver platter; you gotta go out and find it.