Selfhosted
A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.
Rules:
- Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.
- No spam posting.
- Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around self-hosting, please include details to make it clear.
- Don't duplicate the full text of your blog or GitHub here. Just post the link for folks to click.
- Submission headline should match the article title (don't cherry-pick information from the title to fit your agenda).
- No trolling.
- No low-effort posts. This is subjective and will largely be determined by community member reports.
Resources:
- selfh.st Newsletter and index of selfhosted software and apps
- awesome-selfhosted software
- awesome-sysadmin resources
- Self-Hosted Podcast from Jupiter Broadcasting
Any issues with the community? Report them using the report flag.
Questions? DM the mods!
You gotta hook it to a local LLM. Then it's boss.
Any pointers on where to begin?
Install Ollama on a machine with a fast CPU or a GPU and enough RAM. I currently use Qwen3, which takes about 8 GB of RAM; it runs on an NVIDIA GPU, but running it on the CPU is also fast enough. There's a 4 GB version that's also decent for device control. Add the Ollama integration in Home Assistant and connect it to the Ollama instance on the other machine. Then add Ollama as the conversation agent for Home Assistant's voice assistant and expose the HA devices you want it to control. That's about it at a high level.
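If it helps, here's a minimal Python sketch for sanity-checking that the Ollama server answers before you point Home Assistant at it. It assumes Ollama's default port (11434), a Qwen3 model tag, and a made-up LAN address; swap those for whatever you actually pulled and where it runs.

```python
# Quick check that the Ollama box responds before adding the Home Assistant
# Ollama integration. OLLAMA_HOST and MODEL are assumptions for illustration:
# use your own machine's address and the model tag you pulled (e.g. a
# smaller Qwen3 variant if you went with the ~4 GB one).
import json
import urllib.request

OLLAMA_HOST = "http://192.168.1.50:11434"  # hypothetical LAN address of the Ollama machine
MODEL = "qwen3:8b"                         # assumed model tag

payload = {
    "model": MODEL,
    "prompt": "Reply with one short sentence: are you reachable?",
    "stream": False,  # ask for a single JSON object instead of a token stream
}

req = urllib.request.Request(
    f"{OLLAMA_HOST}/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req, timeout=120) as resp:
    body = json.loads(resp.read().decode("utf-8"))

print(body["response"])  # if this prints a reply, Home Assistant should reach it too
```

If that prints a reply, give the Home Assistant Ollama integration the same host and port and carry on with the conversation agent setup.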