
I'm curious if anyone has had much luck leveraging older AMD hardware with ROCm. I have a 6700 XT that I've just begun looking into, and it seems to fall outside of official support.

Right now I intend to pass it through to my Debian Docker VM to support transcoding in some containers, in addition to machine learning applications.
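
Something like the following is what I have in mind for the passthrough (untested on my end; the devices are the standard ones from the ROCm container docs, and the ollama image is just an example):

```bash
# Expose the GPU to a container: /dev/dri covers VAAPI transcoding,
# /dev/kfd is the ROCm compute interface; the groups grant access to both.
docker run -d \
  --device /dev/kfd --device /dev/dri \
  --group-add video --group-add render \
  -v ollama:/root/.ollama -p 11434:11434 \
  ollama/ollama:rocm
```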

all 24 comments
despoticruin@lemmy.zip 12 points 3 weeks ago* (last edited 3 weeks ago)

You need to set an override in your environment variables to force ROCm to use the gfx1030 kernels, but otherwise you shouldn't have too many issues.
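
To be concrete, the usual override is HSA_OVERRIDE_GFX_VERSION (value below is the one for the 6700 XT, which reports gfx1031):

```bash
# Make the ROCm runtime load the gfx1030 (6800/6900-class) kernels
# instead of looking for gfx1031 ones, which aren't shipped.
export HSA_OVERRIDE_GFX_VERSION=10.3.0
rocminfo | grep -i gfx   # sanity check: should now report a gfx1030 target
```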

It's unofficial, but the 6700 XT uses the exact same core as one of the supported enterprise cards, so just using that card's drivers generally works fine. I use a 6800M personally.

If you're struggling to get ROCm installed at all, stop using the AMD guides and just install the prebuilt binaries directly. Fedora packages them in its repositories, and in my experience ROCm just works once you run dnf install rocm*.
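
Roughly this (package glob as of current Fedora; names may shift between releases):

```bash
# Fedora ships the ROCm userspace in its main repos, so there is no AMD
# repo to add and no kernel pin; the glob pulls in HIP, OpenCL, and tools.
sudo dnf install 'rocm*'
rocminfo   # sanity check: the GPU should show up as an agent
```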

poVoq@slrpnk.net 10 points 3 weeks ago

Mostly bad, but running AI models on Vulkan surprisingly isn't any worse than on ROCm, so there seems to be little point in using ROCm at all.

afk_strats@lemmy.world 6 points 3 weeks ago

ROCm on my 7900 XT is solid. ROCm on my MI50s (Vega) is a NIGHTMARE.

roundup5381@sh.itjust.works 1 point 3 weeks ago

and here I thought the Instinct Vega line was within supported scope, no wonder Nvidia is eating AMD's lunch

PetteriPano@lemmy.world 6 points 3 weeks ago

I run it on a 6650 XT just fine. I have to explicitly set which version I want, but no issues.

You should be in a better spot with a 6700xt.

First_Thunder@lemmy.zip 6 points 3 weeks ago

I have the RX 6650 XT and managed to make it work on NixOS, although there's some environment variable you have to set on ollama-rocm IIRC. GFX something.

bjoern_tantau@swg-empire.de 5 points 3 weeks ago

I had it running on my Vega 64. But it had to be exactly one specific version of ROCm. Been a while since I've played around with that so I don't remember the specifics.

Blaster_M@lemmy.world 5 points 3 weeks ago* (last edited 3 weeks ago)

Yeah, it technically works, but it requires telling ROCm you have a 6800 XT instead.

herseycokguzelolacak@lemmy.ml 5 points 3 weeks ago

Just use Vulkan backends.
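
With llama.cpp, for instance, that's just a build flag (flag name as of recent checkouts; needs the Vulkan headers and glslc installed):

```bash
# Build llama.cpp with its Vulkan backend; no ROCm stack needed,
# the stock Mesa RADV driver is sufficient on RDNA2 cards.
git clone https://github.com/ggml-org/llama.cpp
cmake -S llama.cpp -B build -DGGML_VULKAN=ON
cmake --build build --config Release -j
```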

snekerpimp@lemmy.world 5 points 3 weeks ago

My 6700 XT works pretty well with NixOS, Arch, Debian and Ubuntu. Can't seem to get ComfyUI to recognize it, but ollama and llama.cpp use it just fine. Just because it's not supported doesn't mean it won't work. I have an Instinct MI25 that I flashed to a WX 9100, per this. About 6 months ago it was working great on Debian 12, Ubuntu 20 and 22, NixOS and Arch. Now I can't even get it to work on Ubuntu 20 using ROCm 5.4. Super sad I can't leverage the extra VRAM.

cornshark@lemmy.world 4 points 3 weeks ago

It's a lot easier to just use the Vulkan support for models, and it seems to work well enough.

Zos_Kia@lemmynsfw.com 3 points 3 weeks ago

Kind of a tangent: this depends a lot on your use case, but I've found that transcoding on the GPU is not necessarily a good thing. You generally get larger files, and it's not always faster than the CPU, because ffmpeg can distribute the load across all your CPU cores. If you've got enough of them, you'll get better real-time multipliers than on an old GPU.
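
For comparison, a plain software encode looks like this (illustrative settings, tune to taste):

```bash
# libx264 spreads work across every core, and at a given quality level
# usually produces smaller files than an old card's fixed-function encoder.
ffmpeg -i input.mkv -c:v libx264 -preset slow -crf 20 -c:a copy output.mkv
```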

False@lemmy.world 2 points 3 weeks ago

I had it working on a 5700 XT a couple years ago.

My RX 6600 can use HIP just fine in Blender.

panda_abyss@lemmy.ca 1 point 3 weeks ago

God, after buying an AMD machine last year, I'm never doing it again.

What are you trying to use ROCm for? Their own guides don't work.

despoticruin@lemmy.zip 2 points 3 weeks ago

Their guides specifically call for an exact kernel version, distribution, and hardware. If you are trying to operate outside of the official requirements then it shouldn't come as a surprise when the official documentation doesn't work for you.

panda_abyss@lemmy.ca 0 points 3 weeks ago

Do you know how insane it is that their official guides don't work with kernel point updates?

https://github.com/ROCm/ROCm/issues/5824

This has been an issue for a long time. 

I have to maintain a file of which specific kernel, OS, and firmware versions I'm on and have downgraded to, just to get the most popular ML library in the world to do a matrix multiply.

I don't get how this bug made it into the production branch, let alone shipped requiring firmware downgrades, on their new line of GPUs/chips. How do they not test their latest hardware with their own firmware?

roundup5381@sh.itjust.works 1 point 3 weeks ago

Mostly it's the hardware I have on hand; the first project in mind is ROCm machine learning for Immich. After that it's pretty much trying to understand the technology; I'm sure I'll come up with something fun.
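
From what I've gathered so far (the image tag is my assumption from skimming the Immich docs, so double-check it), the ML service has a ROCm build that takes the same device passthrough as any other ROCm container:

```bash
# Run Immich's machine-learning service with its ROCm image variant,
# passing the GPU through via the standard ROCm device nodes.
docker run -d \
  --device /dev/kfd --device /dev/dri \
  ghcr.io/immich-app/immich-machine-learning:release-rocm
```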

panda_abyss@lemmy.ca 3 points 3 weeks ago

I don't know how the Immich ML works, but if you're going to run LLMs, stick to llama.cpp.

Going beyond that, I've had serious kernel bugs with PyTorch and ONNX that are still unresolved. The most popular ML/AI frameworks basically don't work for me due to drivers.

Vulkan flows are fine and generally comparable in speed so far, so if there's a Vulkan option, try ROCm first and then fall back to Vulkan.

roundup5381@sh.itjust.works 1 point 3 weeks ago

Thanks for the heads up. In truth I'd probably be headed to Vulkan now if it were compatible with Immich. I'll put llama.cpp on my radar.

hydrian@twit.social 1 point 3 weeks ago* (last edited 3 weeks ago)

@roundup5381 I have ROCm 6.4.2 working well on my RX 6600 XT with 8GB. Still haven't had the time to upgrade to ROCm 7.x.

Works well with the #ubuntu packages on my #linuxmint desktop.

roundup5381@sh.itjust.works 1 point 3 weeks ago

It has me considering moving my Docker containers over to something Ubuntu-flavored, given that Ubuntu is specifically supported.

tal@lemmy.today 1 point 3 weeks ago* (last edited 3 weeks ago)

I'm using Debian trixie on two systems with (newer) AMD hardware:

ROCm 7.0.1.70001-42~24.04 on an RX 7900 XTX

ROCm 7.0.2.70002-56~24.04 on an AMD AI Max 395+.