> call to end Gaza war
Genocide, not war.
> So there are two timelines to this series?
Here's a guide for you (some spoilers inside). You can start any timeline from the start, or even jump straight into any show, like the new Gquuuuuux, which for all intents and purposes is in its own timeline.
My introduction to the series was with Turn A Gundam (∀ Gundam), which is technically set after the main timeline but is also its own thing, and watching it first was pretty great! I recommend it. Or Gquuuuuux or The Witch from Mercury if you want something new.
I've watched 0079 and Z as well, but I still have to watch ZZ and the movie...
Wasn't there a deal between the USA and China about the tariffs that was supposed to last for a couple more months? How does this relate to that, and will China take such a breach meekly like Russia did, or not?
That's the point of what I was saying: it will depend on the objective.
If it's an LLM made for profit extraction, it will try to keep token generation costs to a minimum by using the smallest, cheapest model as much as possible, while trying to keep people hooked on it, serving ads, harvesting people's data, and many other things.
But if it were an LLM made for the people, it would likely understand that the user was annoyed, prompt the user for more information about the problem, and then try to fix it; in this case by saving a memory with the user's preferences, and perhaps even consulting a more powerful model or a professional for a better solution if the problem was bigger.
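To make that concrete, here's a toy sketch of that second behavior: detect annoyance, save the preference, and escalate to a bigger model. Every name in it is made up for illustration, not taken from any real product:

```python
# Toy sketch of the "made for the people" behavior described above.
# small_llm and big_llm stand in for any callable model interfaces.

def handle_turn(user_msg: str, memory: dict, small_llm, big_llm) -> str:
    """Detect frustration, remember preferences, escalate when needed."""
    sentiment = small_llm(f"Answer with one word, positive or negative: {user_msg}")
    if "negative" in sentiment.lower():
        # Remember what the user said they wanted, then hand the problem
        # to a more capable model instead of looping on the cheap one.
        memory.setdefault("preferences", []).append(user_msg)
        return big_llm(
            f"The user is frustrated. Known preferences: {memory['preferences']}. "
            f"Ask for any missing details, then help resolve: {user_msg}"
        )
    return small_llm(user_msg)  # cheap path for routine turns
```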
> There’s no use getting angry with it, because it doesn’t understand anger.
I just sent the part of the third paragraph, up to where he gets up from the bed, to Qwen3-0.6B at q8_0, that is to say, a very small model, and it gave the following "sentiment analysis" of the text:
**Sentiment Analysis:**
**Negative**
**Explanation:**
The text contains elements of confusion and uncertainty (e.g., "What gives?"), indicating a negative sentiment. While the adjustment of the wake-up time is a positive note, the initial confusion and questioning of the time's discrepancy further contribute to a negative emotional state. The overall tone suggests a challenge or confusion, making the sentiment negative.
So I would say that the only reason an AI four years from now wouldn't be able to "understand anger" is either that it's not an LLM at all, or that it's a very cheap version made for maximum profit and bare-minimum functionality (i.e., capitalism would be at fault, not "LLMs").
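If anyone wants to reproduce this locally, something along these lines should work with llama-cpp-python and a q8_0 GGUF of Qwen3-0.6B (the file name and the exact prompt here are illustrative, not exactly what I ran):

```python
# Rough sketch: sentiment analysis with a tiny local model via llama-cpp-python.
# Assumes a Qwen3-0.6B GGUF at q8_0 has been downloaded; the path is an example.
from llama_cpp import Llama

llm = Llama(model_path="Qwen3-0.6B-Q8_0.gguf", n_ctx=4096, verbose=False)

text = "..."  # paste the paragraph you want analyzed here

out = llm.create_chat_completion(
    messages=[{
        "role": "user",
        "content": f"Do a sentiment analysis of the following text:\n\n{text}",
    }],
)
print(out["choices"][0]["message"]["content"])
```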
Taking a quick look at it, it seems like "China removes basically everything and the US still keeps a lot".
Any better sources on this? Anything directly from China??
The title is very misleading though, as that's "only" the "anti-Communist" part of capitalism's death toll, which might not even reach 0.1% of the total even with the most conservative of estimates.
Me and a lot of people I know use it to help with learning new things, for example. Who wouldn't like a personal teacher to teach you what you need? That seems very useful to me.
Having AI used to improve our lives and our material reality seems like a very good idea to me though...
> and it turns out simply making models bigger does not lead to better outputs.
I'd say that's debatable though, as what we have seen so far could just mean that scaling with the current "low quality" data isn't enough. So, just like R1 might have been impossible earlier, before there was enough high-quality data for RL to work, we might still be a ways off from having good enough data for huge models.
If that were the case, it would be kind of a plateau, but a temporary one that could be raised once other things are improved enough. Who knows for sure though.
Please be careful not to conflate the official confirmation with Kots’s writing that you’re quoting in the OP. This whole “they swore to never be captured” thing is not officially confirmed and there should be a disclaimer about that.
Yep. Even here people seem to fall for all the anti-Korea/racist propaganda.
First of all, DeepSeek-R1-0528-Qwen3-8B is not the DeepSeek model people refer to when talking about DeepSeek, so the title is misleading to say the least. The actual DeepSeek model is the 671B-parameter one, which the article briefly mentions but which is not its main topic, as one would assume from the title. That model is really good, the best open-source model and one of the best in general, and it is possible to run locally, but it requires some 200GB of RAM/VRAM at the smallest quantizations and 800GB+ of RAM/VRAM at full quality.
As for the model the article is actually about, the one you mentioned, it is based on Qwen3-8B, which can run in as little as ~5GB of available RAM/VRAM when quantized to q4_k_m, i.e., it can run on ordinary computers and even on some phones.
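For anyone wondering where those numbers come from, here's a rough back-of-envelope: weight memory is roughly parameter count times bits per weight divided by 8, plus overhead for the KV cache and runtime (the bit widths below are approximate):

```python
# Back-of-envelope memory estimates (weights only; real usage adds KV cache
# and runtime overhead, which is why the quoted figures run a bit higher).
def weight_gb(params: float, bits_per_weight: float) -> float:
    return params * bits_per_weight / 8 / 1e9

print(f"R1 671B at ~2 bits:   {weight_gb(671e9, 2.0):.0f} GB")  # ~168 GB -> 'some 200GB'
print(f"R1 671B at FP8:       {weight_gb(671e9, 8.0):.0f} GB")  # ~671 GB -> '800GB+'
print(f"8B at q4_k_m (~4.5b): {weight_gb(8e9, 4.5):.1f} GB")    # ~4.5 GB -> '~5GB'
```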
As for the target audience: anyone wanting privacy in their LLM use, or simply not wanting to pay for API access for automation tasks or research. As this is a thinking version though, it will take quite a few tokens to get to an answer, so it's better for people who have a GPU, or those who just occasionally need something more powerful locally.