
But in all fairness, it's really llama.cpp that supports AMD.
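(For anyone curious, the AMD path in llama.cpp is its ROCm/hipBLAS backend. A rough sketch of a build from around that time, assuming ROCm is already installed; the flag and binary names are the ones llama.cpp used in early 2024 and may have changed since:)

```sh
# Build llama.cpp with its ROCm (hipBLAS) backend.
# Assumes a working ROCm install; flag/binary names from early-2024 llama.cpp.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make LLAMA_HIPBLAS=1 -j

# Offload all layers to the GPU with -ngl; the model path is a placeholder.
./main -m ./models/model.gguf -ngl 99 -p "Hello"
```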

Now looking forward to the Vulkan support!

sardaukar@lemmy.world 7 points 6 months ago

I've been using it with a Radeon RX 6800 for a few months now; all it needs is a few env vars.
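(For anyone trying to reproduce this, here's a minimal sketch of the kind of environment setup meant, assuming Ollama's ROCm backend; the specific variables and values are a guess, not the commenter's exact setup:)

```sh
# Treat the card as gfx1030 (the RX 6800's architecture); this override
# is the usual workaround when ROCm doesn't list a card as supported.
export HSA_OVERRIDE_GFX_VERSION=10.3.0
# Assumption: the 6800 is GPU 0; restrict ROCm to it.
export ROCR_VISIBLE_DEVICES=0
ollama serve
```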
