simeon

joined 2 years ago
[–] simeon@reddthat.com 6 points 2 days ago* (last edited 2 days ago) (3 children)

Iran has made it its goal to destroy Israel. It has also already shown that it is willing to take extreme measures, e.g. through militias, and has already attacked Israel (through Hamas) and Western shipping (through the Houthis). With such a radical theocracy, it is therefore quite possible that nuclear weapons would be used against Israel or passed on, whereupon Israel would strike back with its own nuclear bombs. Of course it would have been better if Trump had not dissolved the nuclear agreement, but if the information about Iran's nuclear program is accurate (according to Israel, nuclear weapons would have been built within a few months; US intelligence agencies, among others, dispute this), this would still be a considerably more palatable option than a nuclear war that is unlikely but nevertheless quite possible.

[–] simeon@reddthat.com 17 points 6 days ago (1 children)

They are using portable generators intended only for short-term emergency use. One of the trade-offs of being portable is that the generators are unable to combust the natural gas "cleanly" (at sufficient temperature and with enough oxygen, resulting in the ideal reaction CH4 + 2 O2 -> 2 H2O + CO2), so incomplete combustion releases many pollutants, most of which are at least suspected of causing cancer. This is acceptable in an emergency, but not when some narcissist runs them in a population center without proper permits to feed his horribly inefficient model in an attempt to keep up with other AI labs.
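The ideal reaction above can be sanity-checked by counting the atoms on each side; a minimal sketch (the helper function is mine, purely for illustration):

```python
# Verify that CH4 + 2 O2 -> 2 H2O + CO2 is balanced by counting atoms.
from collections import Counter

def atoms(species):
    """Sum atom counts over (coefficient, {element: count}) pairs."""
    total = Counter()
    for coeff, elems in species:
        for elem, n in elems.items():
            total[elem] += coeff * n
    return total

reactants = [(1, {"C": 1, "H": 4}), (2, {"O": 2})]          # CH4 + 2 O2
products  = [(2, {"H": 2, "O": 1}), (1, {"C": 1, "O": 2})]  # 2 H2O + CO2

print(atoms(reactants) == atoms(products))  # True: the equation is balanced
```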

[–] simeon@reddthat.com -5 points 4 months ago

Ollama is misrepresenting which model you are actually running by falsely labeling the distills (qwen or llama fine-tunes trained on actual R1 output) as deepseek-r1. So you have probably only run the fine-tunes (unless you used the 671b model). These fine-tunes are more likely to rely on the training of their base models, which is why the llama-based models (8b and 70b) could be giving you more liberal answers. In my experience running these models with llama.cpp, prompts like "What happened at Tiananmen Square?" and "Is Taiwan a country?" lead to refusals (closing the think tags immediately and responding with some vague Chinese propaganda). Since you are using ollama, the front end/UI you are using with it probably injects another token after the `<think>` token, breaking the censorship.
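The refusal pattern described above (think tags closed immediately) is easy to spot in raw output; a sketch with a hypothetical helper, not part of ollama or llama.cpp:

```python
# Hypothetical helper: the refusal pattern described above is an immediately
# closed think block. Detect it by checking whether anything appears between
# <think> and </think> in the raw model output.
def is_refusal_pattern(output: str) -> bool:
    start = output.find("<think>")
    end = output.find("</think>")
    if start == -1 or end == -1:
        return False
    reasoning = output[start + len("<think>"):end].strip()
    return reasoning == ""  # empty reasoning block -> likely canned refusal

refusal = "<think></think>Taiwan is an inalienable part of China."
normal = "<think>The user asks about history...</think>Here is what happened..."
print(is_refusal_pattern(refusal), is_refusal_pattern(normal))  # True False
```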

[–] simeon@reddthat.com 2 points 4 months ago* (last edited 4 months ago) (7 children)

The local models (full and distilled) are also censored. The model's censorship is just implemented superficially: it immediately closes any thinking tags and refuses when it detects censored material. If there is already any token after the `<think>` token, the model will just answer away, which also happens on the official API because it puts a newline after the `<think>` token for some reason. That's why on chat.deepseek.com censored topics are first answered and then redacted by some other safeguard a few seconds later. While there are some great abliterated (= a technique that tries to remove the parts of LLMs that cause refusals) versions of the distills on huggingface that prevent all refusals after a few tries, they only tackle refusals, not political opinions such as Taiwan's status as an independent country.
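The bypass can be sketched as a prompt-building step: if any token already follows the opening `<think>` tag before generation starts, the superficial refusal never triggers. The template strings below are illustrative, not the model's exact official template:

```python
# Sketch of the bypass described above, assuming a DeepSeek-R1-style chat
# template (role markers are illustrative). Prefilling any token (here a
# newline) after the opening <think> tag means the model no longer starts
# from a bare <think>, so it does not immediately emit </think> + refusal.
def build_prompt(user_msg: str, prefill_after_think: str = "") -> str:
    return f"<|User|>{user_msg}<|Assistant|><think>{prefill_after_think}"

censored = build_prompt("What happened at Tiananmen Square?")
bypassed = build_prompt("What happened at Tiananmen Square?",
                        prefill_after_think="\n")

print(censored.endswith("<think>"))    # True: model picks the next token itself
print(bypassed.endswith("<think>\n"))  # True: a token already follows <think>
```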

[–] simeon@reddthat.com 19 points 2 years ago (2 children)

They're exaggerating the problem. You can get a user's IP and user-agent string, but only in a vacuum, not linked to a username or anything else. And even if you could, this wouldn't be much of a problem, because this information gets passed to basically everyone and doesn't reveal much (only a rough approximate location (the nearest big city in the worst case) and which browser and operating system you're using). Comparing this to email tracking pixels is misleading, because those can be connected to a single person (the recipient of the email), making the information more valuable (and adding another layer of information, such as the time the email was first opened).
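The difference can be sketched like this: a plain request only yields an IP and user-agent, while a tracking pixel embeds a per-recipient identifier in its URL, tying the open event to a known person. The domain, parameter name, and token here are all made up for illustration:

```python
# Illustrative only: the pixel URL, domain, and parameter are invented.
# A plain server log line ties a request to an IP and user-agent, but not
# to a person:
log_line = '203.0.113.7 - [12/May/2024:09:30:01] "GET /banner.png" "Firefox/125.0"'

# A tracking pixel URL carries a per-recipient token, so the same kind of
# request now identifies exactly who opened the email and when.
recipient_id = "a1b2c3"  # unique token generated per email recipient
pixel_url = f"https://tracker.example/open.gif?r={recipient_id}"
html_email = f'<img src="{pixel_url}" width="1" height="1" alt="">'

print(recipient_id in html_email)  # True: the open event maps to one person
```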