y0shi

joined 2 weeks ago
[–] y0shi@lemm.ee 2 points 5 hours ago (1 children)

Oh yeah, I had a long discussion with my wife about the choice I think you're referring to: arguments, implications, etc…

[–] y0shi@lemm.ee 4 points 1 day ago

It does! If it takes too long to extract (too fine a grind), you get an over-extracted result and increased bitterness, and the opposite if the water runs through too quickly and the extraction time is shorter. In my experience, with light roasts a fine grind and some over-extraction is preferable in order to get all the flavour out of the beans, but with every new coffee I get, I adjust it to find the balance I'm looking for in my cup (it's different for every person). Usually, adjusting the grind for your target extraction time, let's say 3:00, is good enough to start experimenting with different coffee roast levels!

[–] y0shi@lemm.ee 7 points 1 day ago (4 children)

For me, the parameter that changes the final result the most is, undoubtedly, the grinder setting, which, as you already pointed out, affects the total extraction time.

[–] y0shi@lemm.ee 4 points 6 days ago

Omg, that pool surely brings back some memories!! :’)

[–] y0shi@lemm.ee 2 points 6 days ago

Thanks for your input! I’ll definitely check it out, I’m tired of the YouTube client’s ads and recommendations :’)

[–] y0shi@lemm.ee 5 points 1 week ago (2 children)

This sounds like something I’d consider for my homelab. Would you mind elaborating on the pipeline? What does pruning watched content look like? Or do you keep it forever?

[–] y0shi@lemm.ee 2 points 1 week ago

That sounds like a great way of leveraging existing infrastructure! I host Plex together with other services on a server with an Intel CPU capable of hardware transcoding. I’m quite sure I’d get much better performance with the GPU machine, so I might end up following this path!
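
As a quick sanity check before going down that path, this is roughly how I’d verify that a GPU render node for hardware transcoding is actually exposed (Plex needs access to /dev/dri for Quick Sync / VA-API). Just a sketch assuming a typical Linux /dev/dri layout:

```python
# Check that a GPU render node for hardware transcoding is exposed.
# Plex (and ffmpeg via VA-API / Quick Sync) needs access to /dev/dri.
# Assumes a typical Linux layout; adjust for your setup.
from pathlib import Path

dri = Path("/dev/dri")
nodes = sorted(dri.glob("renderD*")) if dri.exists() else []
if nodes:
    print("render nodes found:", ", ".join(n.name for n in nodes))
else:
    print("no render nodes under /dev/dri, hardware transcoding won't work")
```

If you run Plex in a container, the same check inside the container tells you whether the device was passed through correctly.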

[–] y0shi@lemm.ee 3 points 1 week ago (3 children)

I’ve got an old gaming PC with a decent GPU lying around and I’ve thought of doing that (I currently use it for Linux gaming and GPU-related tasks like photo editing, etc.). However, I’m currently stuck running LLMs locally on demand with ollama. The energy cost of keeping it powered on all the time just for on-demand queries seems a bit overkill to me…
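
For anyone curious what “on demand” looks like in practice, here’s a minimal sketch of a query against ollama’s local HTTP API. It assumes the default port 11434 and that a model such as llama3 has already been pulled (both assumptions, adjust to your setup):

```python
# Minimal on-demand query against a local ollama instance.
# Assumes ollama is listening on its default port (11434) and that
# the "llama3" model has already been pulled; swap in your own model.
import requests

def ask(prompt: str, model: str = "llama3") -> str:
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=300,  # first call can be slow while the model loads into VRAM
    )
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    print(ask("Why does a finer grind increase coffee extraction?"))
```

Since the machine only needs to be up while a query runs, that’s exactly why keeping it powered 24/7 feels wasteful to me.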