GnuLinuxDude

joined 2 years ago
[–] GnuLinuxDude@lemmy.ml 26 points 2 weeks ago (3 children)

Looking forward to a map produced next year by some thinly veiled US-supported NGO that shows corruption in the world and America will still be not corrupt but America's enemies will be very corrupt.

[–] GnuLinuxDude@lemmy.ml 27 points 3 weeks ago

I think "defense" minister Israel Katz is the one who said Tehran will burn. Regardless, I can't see this as anything other than Israel promising they'll commit a second genocide. Fully backed by the USA, of course.

[–] GnuLinuxDude@lemmy.ml 6 points 3 weeks ago

I have never used a food delivery service because they all feel so fucking scummy and exploitative. This whole business practice seems desperately in need of regulatory overhaul.

[–] GnuLinuxDude@lemmy.ml 7 points 3 weeks ago

Once I "Got" it (and realized the comm this is posted in) this post became good lol

[–] GnuLinuxDude@lemmy.ml 10 points 3 weeks ago (3 children)

What kind of place do you go to to find these things? Sometimes I get really lucky (see my post history about my wonderful new printer), but if I could increase my odds that would be cool.

[–] GnuLinuxDude@lemmy.ml 1 points 3 weeks ago* (last edited 3 weeks ago)

Dude. I thought that was bad. Just now I went to arstechnica to view one article and did the same thing to "support" the site. It was 36MB in one minute.

[–] GnuLinuxDude@lemmy.ml 8 points 3 weeks ago

Trump unilaterally tears up the JCPOA. Biden sits on his ass and fuels the genocide. Trump continues Biden's policy. And here we are.

[–] GnuLinuxDude@lemmy.ml 23 points 3 weeks ago (5 children)

Trump needs to put his dog on a leash. fucking hell.

[–] GnuLinuxDude@lemmy.ml 6 points 3 weeks ago

If this hacked trove of documents news is real that's a pretty fucking huge deal unto itself. If the IAEA is passing along confidential memos that's also a pretty fucking huge deal on top of the huge deal.

[–] GnuLinuxDude@lemmy.ml 10 points 3 weeks ago (1 children)

This seems a bit weird because as detestable as Yeonmi Park is, she's Korean and spends her time spinning lies about Korea. Does she talk about China?

[–] GnuLinuxDude@lemmy.ml 23 points 3 weeks ago (1 children)

Just yesterday I was on a news website. I wanted to support it and the author of the piece so I opened a clean session of firefox. No extensions or blocking of any kind.

The "initial" payload (i.e. what had transferred by the time I lost patience, roughly 30s after the initial page load, and decided to call it) was 14.79MB. But the traffic never stopped. In the network view you could see the browser continually running ad auctions, and about every 15s the ads on the page would cycle. The combination of auctions and ads on my screen kept that tab fully occupied at 25-40% of my CPU. Firefox self-reported the tab as taking over 400MB of RAM.

This was so egregious that I had to run one simple test. I set my DNS on my desktop to my PiHole and re-ran my experiment.

The initial payload went from 14.79MB down to 4.00MB (much of which was fonts and oversized images previewing other articles). And the page took 1/4 the RAM and almost no CPU anymore.

Modern web is dogshit.

This was the website in question. https://www.thenation.com/article/politics/welcomefest-dispatch-centrism-abundance/

[–] GnuLinuxDude@lemmy.ml 5 points 3 weeks ago (1 children)

but the flooding of the art fields with low quality products

It's even worse than that, because the #1 use case is spam, regardless of what others think they personally gain out of it. It is exhausting filtering through the endless garbage spam results. And it isn't just text sites. Searching generic terms on sites like YouTube (e.g. "cats") will quickly lead you to a deluge of AI shit. Where did the real cats go?

It's incredible that DrNik is coming out with a bland, fake movie trailer as an example of how AI is good. It's "super creative" to repeatedly prompt Veo3 to give you synthetic Hobbit-style images that have the vague appearance of looking like VistaVision. Actually, super creative is kinda already done, watch me go hyper creative:

"Whoa, now you can make it look like an 80s rock music video. Whoa, now you can make it look like a 20s silent film. Whoa, now you can make it look like a 90s sci-fi flick. Whoa, now you can make it look like a super hero film."

 

Some context about this here: https://arstechnica.com/information-technology/2023/08/openai-details-how-to-keep-chatgpt-from-gobbling-up-website-data/

the robots.txt would be updated with this entry

User-agent: GPTBot
Disallow: /

Obviously this is meaningless against non-openai scrapers or anyone who just doesn't give a shit.
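For anyone wanting to apply this to their own site, a minimal sketch (assumes shell access to the web root; the robots.txt path here is a placeholder for wherever your server actually serves it from):

```shell
# Append the GPTBot block to the site's robots.txt (creating it if absent).
cat >> robots.txt <<'EOF'
User-agent: GPTBot
Disallow: /
EOF

# Sanity-check that the rule is actually in place.
grep -q 'User-agent: GPTBot' robots.txt && echo "GPTBot rule present"
```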

 

tl;dr question: How do I get the Handbrake Flatpak to operate at a high niceness level in its own cgroup by default? I'm using Fedora Linux.


So if I understand things correctly, niceness in Linux affects how willing the process scheduler is to preempt a process. However, with cgroups, niceness only affects scheduling relative to other processes within the same cgroup. This means a process running with a high niceness in its own cgroup has the same priority as processes in equivalent cgroups, and it will not, in fact, be deprioritized the way one would expect.

So why does this matter to me at all? I have a copy of Handbrake installed from Flatpak. And sometimes I want to encode a video in the background while still having a decently responsive desktop experience so I can do other things, and basically let Handbrake occupy the cpu cycles I'm not using. Handbrake and the video encoding process should be at the bottom priority of everything to the maximum extent possible.

But it does not appear to be enough to just go into htop and set the handbrake process's niceness level to 19 and then start an encode, because of the cgroup business I mentioned above.

Furthermore, in my opinion Handbrake should always be the lowest priority process without my having to intervene. I would like to be able to launch it without having to set its niceness. Does anybody have suggestions on this? Is my understanding of the overall picture even correct?
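Not an authoritative answer, but one direction worth exploring: launch the Flatpak inside a transient systemd scope, so the whole cgroup (not just the process) gets deprioritized. A sketch, assuming systemd's user instance and Handbrake's Flathub app ID fr.handbrake.ghb; the weight values are examples, not recommendations:

```shell
# Run Handbrake in its own transient scope with minimal CPU/IO priority.
# CPUWeight=idle asks the scheduler to give this cgroup CPU time only
# when no other cgroup wants it; IOWeight deprioritizes its disk access.
systemd-run --user --scope \
    -p CPUWeight=idle \
    -p IOWeight=10 \
    flatpak run fr.handbrake.ghb
```

Wrapping that in a small launcher script (or a copied .desktop file with a modified Exec line) would get the always-low-priority behavior without manual intervention each time.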

 

I have been encoding some videos in AV1 lately and I thought I'd share my technique for those who may wish to do some AV1 on their own without having a messy setup. I think this is a pretty clean way, ultimately, to use Av1an's Docker image.

A forewarning: AV1 can be pretty slow to encode with. I've been doing it with DVDs, where the 640x480 resolution means a frame can be processed relatively quickly, but 1920x1080 or 4K sources can be pretty intense, with encode speeds ending up as low as a frame per second.

Forewarning pt. 2: Something I learned is that I CANNOT rely on a fast test encode to guesstimate the resulting file size and picture quality, then maximize my results by lowering the encode speed for the real run. My observation has been that a slower encode speed genuinely improves the picture quality (and file size), so I can't be sure what something will look like without encoding a very short sample at the slow speed itself. OK. Let's begin.
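To make those slow-speed test encodes cheap, one way (assuming ffmpeg is installed; the filenames are placeholders) is to cut a short sample from the source without re-encoding:

```shell
# Copy 60 seconds starting at the 5-minute mark; -c copy avoids re-encoding,
# so the cut is nearly instant and the sample matches the source exactly.
ffmpeg -ss 00:05:00 -i sourcevideo.mp4 -t 60 -c copy sample.mp4
```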

Operating System & Environment

I am using Fedora Linux 38. I'd like to use the Av1an package but that only has an official Arch release. I definitely don't want to spend time compiling this myself, so I will use the official Docker image instead. And I won't use Docker, actually, but Podman. I also use the Fish Shell. Its syntax is very slightly different from Bash's.

Now, Fedora users may know about SELinux. Something that kept happening to me was that the security context of files I'd shuffled around my hard drives would end up incorrect, making Podman unable to see the files I was trying to use. So instead of fixing the context per file (annoying), I just temporarily disabled SELinux.

sudo setenforce Permissive
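And to restore enforcing mode when finished (getenforce shows the current mode):

```shell
getenforce                  # shows the current mode (Permissive after the command above)
sudo setenforce Enforcing   # re-enable SELinux when done
```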

Container image

From here things are pretty straightforward. I'll pull the docker image, which has a full Av1an setup ready to go.

podman pull docker.io/masterofzen/av1an:master

One little note is that you should use the master tag. A confusing thing about this image is that the latest tag is the old python version, and we want the current Rust version.

Executing Av1an

Now, navigate to whatever directory your source video is in. In my case, I losslessly encoded the DVDs with Handbrake into h264 and passed through the audio/chapter markers, etc. This gave me a good source to work with, even though it was a little bloated in file size. I don't think Av1an accepts MPEG-2, which is why I did that.

First I'll explain what the Podman command is doing for those who aren't familiar with Docker/Podman, and then I'll give a full working example.

podman run -v "$(pwd)":/videos:z --userns=keep-id -it --rm docker.io/masterofzen/av1an:master -i sourcevideo.mp4 -s scenes.csv --pix-format yuv420p10le -o output.webm -v "--VIDEO_OPTIONS" --keep -a "--AUDIO_OPTIONS"

  • podman run - Execute a container
  • -v "$(pwd)":/videos:z - Mount the present working directory as /videos in the container, and the :z is an SELinux labeling thing that can be dropped for non-SELinux users.
  • --userns=keep-id - This flag helps keep the user id and group ids consistent between the host and container so that they don't get mangled. Your output file will belong to your user.
  • -it - Execute the command in a visible shell session
  • --rm - Remove the container (not the image, the container) when the command is done executing.

Final example

The rest of the flags are for Av1an itself, or for the encoders. So here's a full working example of how I used it, to encode with aomenc and Opus for the audio. Av1an uses aomenc by default.

podman run -v "$(pwd)":/videos:z --userns=keep-id -it --rm docker.io/masterofzen/av1an:master -i sourcevideo.mp4 -s scenes.csv --pix-format yuv420p10le -o output.webm -v " --cpu-used=3 --enable-qm=1 --threads=4 -b 10 --end-usage=q --cq-level=28 --lag-in-frames=48 --auto-alt-ref=1 --enable-fwd-kf=1" --keep -a "-c:a libopus -b:a 128k"

For an explanation of what the individual flags do, and perhaps some guidance on how to use them effectively, I can only refer you to the guide written by Reddit user BlueSwordM: https://www.reddit.com/r/AV1/comments/t59j32/encoder_tuning_part_4_a_2nd_generation_guide_to/

 

PipeWire 0.3.77 (2023-08-04)

This is a quick bugfix release that is API and ABI compatible with previous 0.3.x releases.

Highlights

  • Fix a bug in ALSA source where the available number of samples was miscalculated and resulted in xruns in some cases.
  • A new L permission was added to make it possible to force a link between nodes even when the nodes can't see each other.
  • The VBAN module now supports midi send and receive as well.
  • Many cleanups and small fixes.
 

cross-posted from: https://lemmy.ml/post/2333026

After approximately 10 months in a release candidacy phase, OpenMW 0.48 has finally been released. A list of changes can be found in the link.

The OpenMW team is proud to announce the release of version 0.48.0 of our open-source engine!

So what does another fruitful year of diligent work bring us this time? The two biggest improvements in this new version of OpenMW are the long-awaited post-processing shader framework and an early version of a brand-new Lua scripting API! Both of these features greatly expand what the engine can deliver in terms of visual fidelity and game logic. As usual, we've also solved numerous problems major and minor, particularly pertaining to the newly overhauled magic system and character animations.

A full list of changes can be found in the link to Gitlab.

What is OpenMW?

"OpenMW is a free, open source, and modern engine which re-implements and extends the 2002 Gamebryo engine for the open-world role-playing game The Elder Scrolls III: Morrowind."

It is an excellent way to play Morrowind on modern systems, and on alternative systems other than MS Windows. It requires a copy of the original game data from Morrowind, as OpenMW does not include assets or any other game data - it is simply a recreation of the game engine. For Linux users, OpenMW can be found on Flathub here. https://flathub.org/apps/org.openmw.OpenMW

223
submitted 2 years ago* (last edited 2 years ago) by GnuLinuxDude@lemmy.ml to c/linux_gaming@lemmy.ml
 
