this post was submitted on 11 Jan 2025
LocalLLaMA
Welcome to LocalLLaMA! Here we discuss running and developing machine learning models at home. Let's explore cutting-edge open-source neural network technology together.
Get support from the community! Ask questions, share prompts, discuss benchmarks, and get hyped about the latest and greatest model releases! Enjoy talking about our awesome hobby.
As ambassadors of the self-hosting machine learning community, we strive to support each other and share our enthusiasm in a positive, constructive way.
you are viewing a single comment's thread
I believe ExLlama and vLLM offer quantization. But llama.cpp should be able to run on a graphics card as well; maybe the default settings are wrong for your machine. Or do you have an AMD card and need a different build of llama.cpp (one compiled with ROCm support)?
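For example, if you drive llama.cpp through the llama-cpp-python bindings, a minimal sketch like this enables GPU offload. It assumes the package was built with CUDA or ROCm support, and the model path is just a placeholder:

```python
# Minimal sketch using llama-cpp-python (pip install llama-cpp-python).
# GPU offload only works if the package was compiled with CUDA/ROCm support.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/model.Q4_K_M.gguf",  # placeholder path to a GGUF file
    n_gpu_layers=-1,  # -1 offloads all layers to the GPU; lower it if you run out of VRAM
)

out = llm("Q: What does quantization do to a model? A:", max_tokens=64)
print(out["choices"][0]["text"])
```

If `n_gpu_layers` stays at its default of 0, everything runs on the CPU, which is one common reason it feels like llama.cpp "can't" use the graphics card.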
And by the way, you don't need to quantize that model yourself. People have already uploaded it to Hugging Face in several quantized formats: AWQ, GGUF, exl2 ...
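A quick sketch of grabbing one of those pre-quantized GGUF files with the huggingface_hub library; the repo and filename here are just examples, so browse Hugging Face for the quantized uploads of your particular model:

```python
# Sketch using huggingface_hub (pip install huggingface_hub).
# Repo and filename are examples; substitute the quantized upload of your model.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="TheBloke/Mistral-7B-Instruct-v0.2-GGUF",  # example pre-quantized repo
    filename="mistral-7b-instruct-v0.2.Q4_K_M.gguf",   # pick the quant size you want
)
print(path)  # local cache path you can pass to llama.cpp as the model file
```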