this post was submitted on 04 May 2025
60 points (78.8% liked)

Asklemmy


A loosely moderated place to ask open-ended questions

[–] brucethemoose@lemmy.world 23 points 3 days ago* (last edited 3 days ago) (3 children)

Before it was hot, I used ESRGAN and some other tools for restoring old TV shows. There was a niche community that fine-tuned models just to restore, say, classic SpongeBob or DBZ or whatever they were into.

These days, I am less into media, but keep Qwen3 32B loaded on my desktop… pretty much all the time? For brainstorming, basic questions, making scripts, an agent to search the internet for me, a ‘dumb’ writing editor, whatever. It’s a part of my “degoogling” effort, and I find myself using it way more often since it’s A: totally free/unlimited, B: private and offline on an open source stack, and C: doesn’t support Big Tech at all. It’s kinda amazing how “logical” a 14GB file can be these days, and I can bounce really personal/sensitive ideas off it that I would hardly trust anyone with.

…I’ve pondered getting back into video restoration, with all the shiny locally runnable tools we have now.

[–] grue@lemmy.world 1 points 2 days ago (1 children)

Do you have any recommendations for a local Free Software tool to fix VHS artifacts (bad tracking etc., not just blurriness) in old videos?

[–] brucethemoose@lemmy.world 1 points 1 day ago* (last edited 1 day ago)

Is there one that works well out of the box? Honestly, I'm not sure.

Back in the day, I'd turn to VapourSynth (or AviSynth+) filters and a lot of hand editing: basically go through the trouble sections one by one and see which combination of VHS-specific correction and regeneration looks best.
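That per-section workflow is roughly this shape in plain Python. The "filters" here are stand-in functions, not real VapourSynth calls; a real script would swap in actual plugin invocations:

```python
# Sketch of the hand-editing workflow: apply a different filter chain to
# each hand-marked trouble section, leaving the rest of the clip alone.
# denoise/deghost are hypothetical stand-ins for real VapourSynth filters.

def denoise(frame):          # stand-in for a temporal denoiser
    return f"denoised({frame})"

def deghost(frame):          # stand-in for a VHS ghosting/tracking fix
    return f"deghosted({frame})"

# Hand-edited list of (start, end, filter) for each trouble section.
SECTIONS = [
    (120, 180, denoise),
    (400, 432, deghost),
]

def restore(frames):
    """Apply the matching filter to frames inside a marked section."""
    out = []
    for i, frame in enumerate(frames):
        fixed = frame
        for start, end, filt in SECTIONS:
            if start <= i < end:
                fixed = filt(frame)
                break
        out.append(fixed)
    return out
```

The tedious part is building that `SECTIONS` list by eye, which is exactly the "go through the trouble sections one by one" step.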

These days, we have far more powerful tools. I'd probably start by training a LoRA for Wan 2B or something, then use it to straight-up regenerate damaged sections with video-to-video. Then I'd write a script to detect them, and mix in some "traditional" VapourSynth filters.
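The "script to detect them" step could be as simple as flagging frames whose difference from the previous frame spikes far above the clip's normal motion level (tracking errors and dropouts tend to do that). A minimal sketch, with a made-up threshold that would need tuning per tape:

```python
# Flag frames that differ sharply from the previous one, as a crude
# proxy for VHS damage. Frames are flat lists of pixel values here;
# the threshold is arbitrary and would be tuned against real footage.

def mean_abs_diff(a, b):
    """Average per-pixel absolute difference between two frames."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def find_damaged(frames, threshold=50.0):
    """Return indices of frames that jump well past the motion baseline."""
    damaged = []
    for i in range(1, len(frames)):
        if mean_abs_diff(frames[i - 1], frames[i]) > threshold:
            damaged.append(i)
    return damaged
```

A real pass would compare against a rolling average rather than a fixed threshold, but the idea is the same: detect, then hand the flagged ranges to the regeneration step.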

…But this is all very manual, Python-dev level with some media/ML knowledge, unfortunately. I'm much less familiar with GUI tools that could accomplish this. Paid services likely offer it, but who knows how well they work.

[–] yo_scottie_oh@lemmy.ml 2 points 3 days ago (1 children)

Do you run this on NVIDIA or AMD hardware?

[–] brucethemoose@lemmy.world 4 points 3 days ago

Nvidia.

Back then I had a 980 Ti. Right now I'm lucky enough to have snagged a 3090 before prices shot up.

I would buy a 7900, or a 395 APU, if they were even reasonably affordable for the VRAM, but AMD is not pricing their stuff well…

But FYI you can fit Qwen 32B on a 16GB card with the right backend/settings.
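Roughly what that looks like with an exl2 quant on a TabbyAPI-style backend. This is a sketch of the idea, not a verified config: the key names approximate TabbyAPI's sample config.yml and the quant name is hypothetical, so check both against your actual setup:

```yaml
# Idea: a ~3bpw exl2 quant of a 32B model is roughly 12-13 GB, and a
# quantized (Q4) KV cache keeps the context from eating the rest of a
# 16 GB card. Key names approximate TabbyAPI's sample config.yml.
model:
  model_name: Qwen3-32B-exl2-3.0bpw   # hypothetical quant folder name
  max_seq_len: 16384                  # shrink context to fit VRAM
  cache_mode: Q4                      # quantized KV cache saves VRAM
```

The trade-off is some quality loss from the aggressive weight quant, but 32B at ~3bpw still tends to beat smaller models at higher precision.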

[–] Geometrinen_Gepardi@sopuli.xyz 1 points 3 days ago (1 children)

How do you get it to search the internet?

[–] brucethemoose@lemmy.world 2 points 1 day ago* (last edited 1 day ago)

The front end.

Some UIs (like Open WebUI) have built-in "agents" or extensions that can fetch and parse search results as part of the context, allowing LLMs to "research." There are some finetunes specializing in this, though these days you're probably best off with regular Qwen3.

This is sometimes called tool use.

I also (sometimes) use a custom python script (modified from another repo) for research, getting the LLM to search a bunch of stuff and work through it.

But fundamentally, the LLM isn't "searching" anything; you're just programmatically feeding it text (and maybe acting on its own requests for search terms).
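That mechanism fits in a few lines. In this sketch, `fetch_results` and the final LLM call are stubs; in a real script they'd be a search API and an OpenAI-compatible chat endpoint respectively, neither of which is shown here:

```python
# Minimal illustration of LLM "search": the program runs the query and
# pastes the results into the prompt; the model only ever sees text.
# fetch_results is a stub standing in for a real search API.

def fetch_results(query):
    # Stand-in for a real web/search API call.
    return [
        ("Example page", "Example snippet about " + query),
    ]

def build_prompt(question, results):
    """Paste fetched text into the context so the model can 'research'."""
    lines = ["Use the sources below to answer.", ""]
    for title, snippet in results:
        lines.append(f"[{title}] {snippet}")
    lines += ["", f"Question: {question}"]
    return "\n".join(lines)

def research(question):
    results = fetch_results(question)   # the program searches, not the LLM
    prompt = build_prompt(question, results)
    return prompt  # in a real script, this goes to the LLM backend
```

A fancier "agentic" version just loops: let the model emit a search term, fetch, append, and ask again.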

The backend for all this is a TabbyAPI server, with 2-4 parallel slots for fast processing.