[-] Toribor@corndog.social 9 points 13 hours ago

My peen makes it easier for me to build muscle mass! Which I don't take advantage of, so instead I'm just hairy everywhere except the part of my head that would look fashionable with hair. Can I just trade those things for being good with power tools instead?

[-] Toribor@corndog.social 2 points 1 day ago

I've been testing Bazzite out on a variety of hardware. It's very easy to set up and required no additional fiddling at all to get working, even with an Nvidia card, which is the usual source of Linux gaming frustration.

If you're used to the limitations of the Steam Deck OS and haven't had any issues there, then you should have a good experience with Bazzite, which is presented in a very similar way even if it's a little different under the hood.

[-] Toribor@corndog.social 5 points 1 day ago* (last edited 1 day ago)

Bazzite is basically exactly this already. If you have an AMD GPU you can boot straight into Steam. The desktop mode uses KDE like the Steam Deck, and the package manager makes it much easier to layer in additional system packages, which is kind of a pain on the Deck. Plus there are some additional gaming-specific tweaks popularized by tools like CryoUtilities included by default.

[-] Toribor@corndog.social 16 points 1 day ago

Mmmm, dog soup.

[-] Toribor@corndog.social 7 points 1 day ago

I've been running a tabletop campaign for Scum & Villainy which is very much in the Space Opera/Western category. It's been a really fun and evocative setting to game in.

[-] Toribor@corndog.social 26 points 2 days ago

How else are they going to email you 20 times about changes to their privacy policy?

And then the inevitable email when they have to admit that all the data they gathered on you was stolen and that there is nothing you can do about it.

[-] Toribor@corndog.social 6 points 3 days ago

Alternatively, what you're describing sounds like SponsorBlock but for podcasts. You probably wouldn't have to rehost the actual audio files to accomplish this, just have a podcast client/addon that allows user submissions for ad segments and a database somewhere that can host the metadata for ad breaks.

The biggest issue is that you're probably building or forking an existing podcast app to do it, and some podcasts dynamically insert ads, so it's possible that people's downloaded files could have different ad segments/times.
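Roughly, a submitted ad segment might look something like this. This is just a minimal sketch of the idea, not any existing API: the field names, and the notion of keying on the episode GUID plus a rough audio fingerprint to cope with dynamically inserted ads, are my assumptions.

```python
from dataclasses import dataclass

# Hypothetical shape of a user-submitted ad-segment record for a
# SponsorBlock-style podcast database. Field names are illustrative only.
@dataclass
class AdSegment:
    episode_guid: str       # GUID from the podcast's RSS feed
    start_seconds: float    # where the ad break begins
    end_seconds: float      # where the ad break ends
    audio_fingerprint: str  # rough hash of nearby audio, since dynamically
                            # inserted ads can shift timestamps per download
    votes: int = 0          # community voting, like SponsorBlock

def segment_at(segments: list[AdSegment], position: float) -> AdSegment | None:
    """Return the segment covering the current playback position, if any."""
    for seg in segments:
        if seg.start_seconds <= position < seg.end_seconds:
            return seg
    return None
```

The fingerprint field is only there because of the dynamic-ad problem above: raw timestamps alone wouldn't match across everyone's copies of the same episode.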

[-] Toribor@corndog.social 20 points 3 days ago

I usually try to remind people that The Dark Knight is PG-13 and it's arguably a pretty great action movie that doesn't feel like the violence is toned down.

I mean the Joker stabs a guy in the skull with a pencil. It's fast and brutal but it doesn't need a lot of blood or gore to sell the moment.

[-] Toribor@corndog.social 15 points 6 days ago

Well it may not be accurate or effective, but at least it's expensive.

[-] Toribor@corndog.social 17 points 6 days ago* (last edited 6 days ago)

Shouldn't have put the 'implode' action on the shoulder button. It was only a matter of time before he triggered it on accident.

[-] Toribor@corndog.social 23 points 6 days ago

"if you go to another country, you have to adjust to their law"

Big business knows no national boundaries. They'll build factories wherever labor is cheap, put headquarters wherever the taxes are low, and sell their wares wherever consumer rights are weak.

[-] Toribor@corndog.social 5 points 6 days ago* (last edited 6 days ago)

I've been testing Ollama in Docker/WSL with the idea that if I like it I'll eventually move my GPU into my home server and get an upgrade for my gaming PC. When you run a model it has to load the whole thing into VRAM. I use the 8 GB models, so it takes 20-40 seconds to load the model, and then each response is really fast after that and the GPU hit is pretty small. After five minutes (I think that's the default) it will unload the model to free up VRAM.

Basically this means you either need to wait a bit for the model to warm up, or you need to extend that timeout so it stays warm longer. It also means I can't really use my GPU for anything else while the LLM is loaded.
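For what it's worth, that unload timeout can be nudged per request. A minimal sketch against Ollama's local HTTP API, assuming the default port and that the model name (llama3:8b here) is just an example of something already pulled:

```python
import requests

# Ask Ollama to keep the model loaded for 30 minutes after this request
# instead of the default ~5 minutes ("keep_alive": -1 keeps it loaded indefinitely).
response = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3:8b",    # example model; use whatever you've pulled
        "prompt": "Why is the sky blue?",
        "stream": False,         # return one JSON object instead of a stream
        "keep_alive": "30m",     # extend the VRAM-unload timeout
    },
    timeout=120,
)
print(response.json()["response"])
```

The flip side is exactly the tradeoff above: for as long as keep_alive holds the model in VRAM, the GPU isn't free for gaming or anything else.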

I haven't tracked power usage, but besides the VRAM requirements it doesn't seem too resource-intensive. Then again, maybe I just haven't done anything complex enough yet.

