I'm curious whether anyone has had much luck getting ROCm running on older AMD hardware. I have a 6700 XT that I've just begun looking into, and it seems it falls outside of official support.

Right now I intend to pass it through to my Debian Docker VM to support transcoding in some containers, in addition to machine learning applications.
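From what I've read so far, the usual trick for RDNA2 cards like the 6700 XT (gfx1031) is to override the GPU target so ROCm treats it as the officially supported gfx1030. I haven't tried it yet, but I'm assuming the container launch would look roughly like this (the image tag is just a placeholder, and /dev/kfd and /dev/dri need to be visible inside the VM):

```bash
# Untested sketch: run a ROCm container against the passed-through 6700 XT.
# HSA_OVERRIDE_GFX_VERSION=10.3.0 is the common community workaround that makes
# the runtime treat gfx1031 as the supported gfx1030 target.
docker run -it --rm \
  --device=/dev/kfd \
  --device=/dev/dri \
  --group-add video \
  --group-add render \
  -e HSA_OVERRIDE_GFX_VERSION=10.3.0 \
  rocm/pytorch:latest \
  python3 -c "import torch; print(torch.cuda.is_available())"
```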

[–] despoticruin@lemmy.zip 2 points 3 weeks ago (1 children)

Their guides specifically call for an exact kernel version, distribution, and hardware. If you're trying to operate outside of those official requirements, it shouldn't come as a surprise when the official documentation doesn't work for you.
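If you do want to stay on a blessed configuration, the usual move is to hold the kernel packages so routine updates can't pull you off the version you validated against. A rough sketch for a Debian/Ubuntu-style host (exact package names vary by distro and kernel flavour):

```bash
# Rough sketch: pin the currently installed kernel so unattended upgrades
# can't move you off the version the ROCm docs were written for.
# Package names differ between Debian and Ubuntu and by kernel flavour.
uname -r                  # note the kernel you validated ROCm against
sudo apt-mark hold linux-image-generic linux-headers-generic
apt-mark showhold         # confirm the hold is in place
```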

[–] panda_abyss@lemmy.ca 0 points 3 weeks ago

do you know how insane it is that their official guides don't work with kernel point updates?

https://github.com/ROCm/ROCm/issues/5824

This has been an issue for a long time. 

I have to maintain a file of which specific kernel+OS+firmware versions I'm on and have downgraded to, just to get the most popular ML library in the world to do a matrix multiply.
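For context, the smoke test I'm talking about is literally a single matrix multiply on the GPU, something like:

```bash
# Minimal ROCm/PyTorch smoke test: one matmul on the GPU.
# PyTorch's ROCm build exposes the device through the torch.cuda API.
python3 -c "
import torch
assert torch.cuda.is_available(), 'ROCm device not visible to PyTorch'
a = torch.randn(1024, 1024, device='cuda')
b = torch.randn(1024, 1024, device='cuda')
print((a @ b).sum().item())
"
```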

I don't get how a bug like this makes it into the production branch, let alone ships requiring firmware downgrades, on their new line of GPUs/chips. How do they not test their latest hardware with their own firmware?