this post was submitted on 19 Aug 2025
34 points (100.0% liked)

Technology

A now-patched flaw in the popular AI model runner Ollama allowed drive-by attacks in which a miscreant could use a malicious website to remotely target someone's personal computer, spy on their local chats, and even control the models the victim's app talks to, in extreme cases by serving poisoned models.
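The article doesn't reproduce the exploit itself, but the underlying class of bug is simple: JavaScript served from any page you visit can fire requests at services listening on your machine's loopback interface. Here's a minimal sketch of that probing step, assuming only Ollama's default port (11434) and its documented /api/tags model-listing endpoint; this is a hypothetical illustration of the general technique, not the Desktop-app service Moberly actually targeted:

```ts
// Hypothetical sketch of the drive-by probing step, not Moberly's exploit.
// A script like this, served from any malicious page, tries to reach a
// service on the visitor's loopback interface.
async function probeLocalOllama(): Promise<void> {
  try {
    // 11434 is Ollama's default API port; GET /api/tags lists installed models.
    const res = await fetch("http://127.0.0.1:11434/api/tags");
    const body = await res.json();
    console.log("local Ollama answered:", body);
    // A real attack would follow up with requests that read chats or
    // repoint the app at attacker-controlled infrastructure.
  } catch {
    // No local service, or the browser's CORS policy blocked the read.
    console.log("probe failed");
  }
}
probeLocalOllama();
```

Whether a page can actually read the response (rather than just fire the request) depends on the local service's CORS and authentication settings, which is exactly where flaws like this one live.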

GitLab's Security Operations senior manager Chris Moberly found and reported the flaw in Ollama Desktop v0.10.0 to the project's maintainers on July 31. According to Moberly, the team fixed the issue within hours and released the patched software as v0.10.1. Make sure you've applied the update: on Tuesday, Moberly published a technical writeup about the attack along with proof-of-concept exploit code.
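If you're not sure what you're running, Ollama's core API exposes a documented version endpoint, and the Desktop app bundles the core server, so the reported version should track the app's. A quick local check (assuming the default port; run it as an ES module so top-level await works):

```ts
// Ask the local Ollama server for its version via the documented
// GET /api/version endpoint (default port 11434).
const res = await fetch("http://127.0.0.1:11434/api/version");
const { version } = (await res.json()) as { version: string };
console.log(`Ollama ${version} (v0.10.1 or later includes the fix)`);
```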

"Exploiting this in the wild would be trivial," Moberly told The Register. "There is a little bit of work to build the proper attack infrastructure and to get the interception service working, but it's something an LLM could write pretty easily."

This makes me less enthusiastic about local models. I mean, nothing on the internet is inherently secure and the patch came quickly, but local LLMs being hackable in the first place opens a new can of worms.

top 4 comments
[–] TehPers@beehaw.org 13 points 8 months ago

> This makes me less enthusiastic about local models. I mean, nothing on the internet is inherently secure and the patch came quickly, but local LLMs being hackable in the first place opens a new can of worms.

Everything downloaded from the internet is hackable. Web browsers are the most notorious targets and regularly have to mitigate exploitable vulnerabilities. What matters is how quickly a project fixes a vulnerability and how it prevents the same class of bug from happening again.

Personally, when I do run Ollama, it's always from within a container. I mostly do this because I find it more convenient to run it this way, but it also adds a degree of separation between its running environment and my personal computer. Note that this is not a sandbox (especially since it still uses my GPU and executes code locally), just a small layer of protection.
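For reference, a setup along those lines using Ollama's official image; the volume and port flags follow the project's standard Docker instructions, and pinning the published port to 127.0.0.1 is an extra precaution added here:

```sh
# Run Ollama in its official container image, keeping models in a named
# volume and exposing the API on the loopback interface only.
docker run -d --name ollama \
  -v ollama:/root/.ollama \
  -p 127.0.0.1:11434:11434 \
  ollama/ollama

# For GPU passthrough, add --gpus=all (requires the NVIDIA Container
# Toolkit); note this weakens the isolation, as mentioned above.
```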

[–] Mihies@programming.dev 2 points 8 months ago

It's always about raising the bar. I guess running it as a non-admin user in a container shouldn't be too shabby.

[–] t3rmit3@beehaw.org 3 points 8 months ago (last edited 8 months ago)

Note that this vuln is in the desktop GUI, not Ollama itself (Ollama Core). It's also unrelated to the models themselves.

[–] herseycokguzelolacak@lemmy.ml 1 points 8 months ago

Or just use llama.cpp. It's better and easier.