It is already here; half of the article thumbnails are AI generated.
You are easier to track with AdNauseam.
Being able to run benchmarks doesn't make it a great experience to use, unfortunately. Three quarters of applications don't run or have bugs that the devs don't want to fix.
Windows doesn't run well on ARM, which can be a turnoff for some.
Llama models tuned for conversation are pretty good at it. ChatGPT was too, before getting nerfed a million times.
JPEG-XL support is being tested in Firefox Nightly.
https://tiz-cycling-live.io/livestream.php
Be sure to use an adblocker. Sometimes the stream gets taken down and you have to wait a minute or two for them to repost one.
The best way to run a Llama model locally is with Text generation web UI. The model will most likely be quantized to 4- or 5-bit GGML / GPTQ these days, which makes it possible to run on a "normal" computer.
Phind might make it accessible on their website soon, but it doesn't seem to be the case yet.
EDIT: Quantized versions are available thanks to TheBloke.
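A rough back-of-the-envelope sketch of why quantization makes local inference feasible. The 13B parameter count is an assumption for illustration (the comment doesn't name a model size), and the estimate only covers weight storage, not activations or KV cache:

```python
# Approximate memory needed to hold a model's weights at a given precision.
def model_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Weight storage in gigabytes (ignores activation/KV-cache overhead)."""
    return n_params * bits_per_weight / 8 / 1e9

# Hypothetical 13B-parameter model; real file sizes vary with format overhead.
print(model_size_gb(13e9, 16))  # fp16: 26.0 GB -- needs serious hardware
print(model_size_gb(13e9, 4))   # 4-bit: 6.5 GB -- fits in a typical desktop's RAM
```

Dropping from fp16 to 4-bit cuts the footprint by 4x, which is the difference between needing a datacenter GPU and running on an ordinary machine.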
Specifically problem solving. ChatGPT has multiple models too, it is just hidden from the user.
This is because LibreWolf reports itself as Firefox for privacy, and Vivaldi does the same thing with Chrome. There is no Vivaldi string in their user agent.
I use it + Portmaster + O&O to kill all of Windows' spyware. This is great software and should be recommended if you have to run Windows.
Edited typo
I put Zorin on my parents' computer 2 years ago. While it's a great distro, their Windows app support is just marketing: it's an out-of-date Wine version with an unmaintained launcher. Worse than tinkering with Wine yourself.