LocalLLaMA

I have a GTX 1660 Super (6 GB).

Right now I'm running ollama with:

  • deepseek-r1:8b
  • qwen2.5-coder:7b

Do you recommend any other local models to play with on my GPU?
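For anyone who wants to experiment, here is a minimal sketch using the ollama Python client (an assumption: it needs `pip install ollama` and an ollama server running on the default port) to pull and try another small model that should fit in 6 GB of VRAM, such as the mistral:7b suggested below:

```python
# Minimal sketch: pull and query a small model through the ollama
# Python client. Assumes `pip install ollama` and an ollama server
# running locally on the default port (11434).
import ollama

# Pull a quantized ~7B model; these generally fit in 6 GB of VRAM.
ollama.pull("mistral:7b")

# Ask a quick question to confirm the model runs.
response = ollama.chat(
    model="mistral:7b",
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response["message"]["content"])
```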

[–] possiblylinux127@lemmy.zip 2 points 2 months ago

Mistral

I personally run models on my laptop. It has 48 GB of RAM and an i5-12500U. It runs a little slow, but it's usable.

[–] Disonantezko 2 points 2 months ago

My gear is an old i7-4790 with 16 GB of RAM.

How many tokens per second do you get?

[–] possiblylinux127@lemmy.zip 1 point 2 months ago

The biggest bottleneck is going to be memory bandwidth. I would stick with GPU-only inference, since your GPU's memory has much more bandwidth than system RAM.
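To answer the tokens-per-second question with actual numbers, here is a minimal sketch against ollama's local REST API (assuming the default endpoint at http://localhost:11434; the /api/generate response reports eval_count and eval_duration, the latter in nanoseconds):

```python
# Minimal sketch: measure generation speed in tokens/second through
# ollama's local REST API. Assumes an ollama server on the default
# port (11434) and that the model has already been pulled.
import requests

MODEL = "qwen2.5-coder:7b"  # one of the models from the post

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": MODEL,
        "prompt": "Write a haiku about GPUs.",
        "stream": False,  # return a single JSON object, not a stream
    },
    timeout=300,
)
data = resp.json()

# eval_count = number of generated tokens,
# eval_duration = generation time in nanoseconds.
tokens_per_second = data["eval_count"] / data["eval_duration"] * 1e9
print(f"{MODEL}: {tokens_per_second:.1f} tokens/s")
```

Running the same prompt with and without GPU offload would show the memory-bandwidth gap the comment above describes.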