shrugal

joined 2 years ago
[–] shrugal@lemm.ee 8 points 1 week ago (2 children)

Not OP, but when I was looking for an alternative it was the music analysis and Auto-Playlist/DJ features that set Plexamp apart.

[–] shrugal@lemm.ee 3 points 3 weeks ago (1 children)

Nothing wrong with having to pay for software if the prices are reasonable. It's a product like any other, with real people working on it.

[–] shrugal@lemm.ee 5 points 1 month ago (1 children)

The simple answer is: Yes! If you want to be completely sure no one is accessing your data - now or in the future - then you have to host it yourself. There are companies and countries that are more trustworthy/safe than others, but you never know how politics will change.

I've been using a Synology NAS for ages, and I can wholeheartedly recommend it! Especially if you don't have much experience with Linux and servers, but also if you want something that's more plug-and-play and stable, or if you want access to some of their proprietary services or their really good customer support. Just make sure you get one that supports Docker, because that's how you'll install most if not all of the third-party services.

That being said, building one yourself can also be great fun, and you do have that one additional level of control if everything is open-source and installed by you.

[–] shrugal@lemm.ee 2 points 2 months ago* (last edited 2 months ago) (1 children)

> if I can get it working

It's really as simple as starting one container per chat service, with a config like this:

services:
    beeper-<service>:
        image: ghcr.io/beeper/bridge-manager
        restart: unless-stopped
        environment:
            - MATRIX_ACCESS_TOKEN=<your beeper matrix token>
            - BRIDGE_NAME=sh-<service>
        volumes:
            - ./beeper-<service>:/data

then messaging the @sh-<service>bot:beeper.local bot user, and logging in to your chat account.

[–] shrugal@lemm.ee 8 points 2 months ago (3 children)

I'm using the Beeper Matrix server, but self-host their bridges. That way the decryption and re-encryption is done on my server, and Beeper only sees encrypted Matrix messages. It's extremely easy to set up if you've used Docker before, and much less work than running a full Matrix server yourself.

[–] shrugal@lemm.ee 1 points 3 months ago

I opened specific ports where needed, and I also limit most frontends to local requests only.
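
To give a concrete (hypothetical) example of what "local requests only" can look like: with nginx this is usually just allow/deny rules in front of the proxied location. The subnets and port below are placeholders for your own LAN and frontend.

# sketch: restrict a proxied frontend to private/LAN address ranges
location / {
    allow 192.168.0.0/16;   # typical home LAN ranges - adjust to your network
    allow 10.0.0.0/8;
    allow 172.16.0.0/12;
    deny all;               # everything else gets a 403

    proxy_pass http://127.0.0.1:8080;   # placeholder for the frontend's internal port
}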

[–] shrugal@lemm.ee 1 points 3 months ago* (last edited 3 months ago) (2 children)

I'm using the DS920+, as it's still the best 4-bay Synology NAS for media streaming/encoding tasks afaik. Caches are read-write, and do use the NVMe slots.

The RAM upgrade and added caches definitely made a huge difference. The system averages around 70% RAM usage and goes beyond that for certain tasks, so the current workload wouldn't really be feasible without the extra RAM. And the caches make most IO operations noticeably faster, especially random drive access, e.g. from multiple simultaneous processes.

I have some Arr containers on there, as well as Plex, Audiobookshelf, AppFlowy, some Beeper Matrix bridges, FileFlows for media conversion, my own Piped instance, SearXNG, Vaultwarden, FirefoxSync, and a few smaller ones.

[–] shrugal@lemm.ee 2 points 1 year ago

I switched the account in the app, so it should be using that account and fetching content from LW.

[–] shrugal@lemm.ee 8 points 1 year ago* (last edited 1 year ago)

I agree with everyone here that self-hosting email is never easy, but if you still decide to go down this route then here are two tips that I personally found very helpful, especially when you decide to host it at home:

The first is to get an SMTP relay server. That's just another mail server that yours can log into to actually send its mail, just like an email client would. That way you don't have to worry about your IP's sending reputation, because everyone will only see the relay's reputable IP.
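
The comment doesn't tie this to any particular mail server, but as an illustration, with Postfix a relay (smarthost) setup comes down to a few lines in main.cf (the relay host and credentials file are placeholders):

# /etc/postfix/main.cf - sketch of a relay/smarthost setup, using Postfix as an example MTA
relayhost = [smtp.relay.example.com]:587          # send all outgoing mail through the relay
smtp_sasl_auth_enable = yes                       # authenticate like a normal mail client
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_sasl_security_options = noanonymous
smtp_tls_security_level = encrypt                 # require TLS towards the relay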

Second is to configure a Backup MX. That's an additional MX DNS entry with lower priority than the primary, pointing to a special mail server that accepts any mail for you and keeps trying to deliver it to the primary server for a long time (something like an entire week). So when your primary server is unreachable, other sending servers deliver mail to the backup, and it passes the mail on to the primary as soon as that's back online.
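
In DNS terms that's simply a second MX record with a higher preference number (i.e. lower priority). A sketch with placeholder names:

; example.com zone file - the lower preference value wins
example.com.    IN  MX  10  mail.example.com.            ; primary, self-hosted server
example.com.    IN  MX  50  backupmx.provider.example.   ; the provider's backup MX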

You can get these as separate services, but some DNS providers (like Strato for example) offer both with the base domain package. It makes self-hosting an email server much simpler and more reliable in my experience.

[–] shrugal@lemm.ee 2 points 1 year ago* (last edited 1 year ago) (3 children)

That was my first thought as well! But I also tried LW, which is still on 0.19.3, and had the same problem.

Edit: My bad! I had "show read posts" enabled on my LW account, and read posts are correctly hidden when I disable it. So it really seems to be a problem with the new version.

 

I have "Show read posts" disabled in the settings, but it just stopped working all of a sudden. Since yesterday I'm seeing read posts again.

I tried toggling the setting, clearing cache and switching instances, but no luck so far.

Anybody else who has this problem? Any idea how to fix it?

Edit: Looks like it's a problem with the new Lemmy version!

[–] shrugal@lemm.ee 24 points 1 year ago

Welcome to the Linux community. :)

You will probably never understand everything about Linux and all of its included and associated systems. That's completely fine, no one does! That's why we are many, and it's what asking for advice or help is for. You can just learn whatever interests you at your own pace, and know that there will always be interesting things you haven't seen yet.

 

In this election there won't be any % threshold in some countries, but I still haven't seen any poll numbers for small parties, here in Germany for example. Everything below 2-3% gets lumped in with "Others" as usual, even though about 0.5% would already get them a seat in parliament this time. This makes voting strategically very difficult, because we have no idea whether any small party could even get in.

I get that there are limits to what you can show in a graphic, but even the source links I checked didn't provide more details. Why is that, and has anyone seen poll numbers for small parties, particularly for Germany?

[–] shrugal@lemm.ee 3 points 1 year ago* (last edited 1 year ago)

I really like the idea of creating a decentralized network that has a fair monetization model built right in, instead of relying on donations like the Fediverse. Crypto got a very bad rep, but this kind of stuff is exactly what it's good for imo.

It also has some core features that are missing from other similar messengers, like multi-device sync. And lastly, the devs seem pretty capable and open as well. They are very transparent with their work and seem to have the right ideas about where things should go and which trade-offs to make. E.g. their reasoning for not using the Signal protocol seems solid to me.

So I'm hopeful, but time will tell if it all works out.

 

Hey everyone,

My personal server of choice is a DiskStation right now, and I'm using the default reverse proxy for all my subdomains. I went through a few stages to secure them, and now that I'm finally finished (famous last words heh?!) I thought I'd document my approach and provide some configs and code. I've seen a few unanswered questions here and there about how to do this on Synology, so hopefully this helps a few people.

The guide covers limiting access to local IPs, as well as adding Basic or SSO authentication. The main goal is to integrate well with the GUI and access control profiles, and to leave all existing and autogenerated files untouched, so updates and changes via the GUI still work as expected.

Here is the basic idea:

The nginx server config is located in /etc/nginx/, and the reverse proxies are defined in the sites-available/server.ReverseProxy.conf file inside that folder. There's one server directive for every proxied site, and the DSM config adds an include .acl.<random string>.conf* directive if you set up an access control profile for a site. That * at the end is crucial, because it means we can manually add more configuration files with the same prefix, and they will automatically be included and applied to all sites using this access control profile.
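
To make that concrete, here is a rough sketch of what one generated server block plus a manually added file could look like (server names, ports and the extra file name are made up, and DSM generates the random string itself):

# sites-available/server.ReverseProxy.conf - one generated block per proxied site (sketch)
server {
    server_name myapp.example.com;
    include .acl.<random string>.conf*;    # added by DSM for the access control profile
    location / {
        proxy_pass http://localhost:8080;
    }
}

# .acl.<random string>.custom.conf - our own file, matched by the same glob,
# so it is applied to every site that uses this access control profile
allow 192.168.0.0/16;
deny all;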

There are also include directives for the main and http scopes, as well as for the default DSM server directives. This means we can inject configurations in these places, just by adding correctly named files to the conf.d folder.
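
For example (the exact glob patterns are an assumption on my part, so check the include lines in your own nginx.conf before relying on the names):

# /etc/nginx/nginx.conf on DSM contains globs along the lines of:
#     include conf.d/main.*.conf;    (main scope)
#     include conf.d/http.*.conf;    (http scope)
# so a file like /etc/nginx/conf.d/http.custom.conf gets pulled into the http scope:
proxy_hide_header X-Powered-By;    # example directive injected into the http scope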

For Single Sign-On (SSO) authentication we run a Vouch-Proxy instance to handle the communication between nginx and the OIDC server. We also need to spin up another nginx reverse proxy and forward requests to it, because the built-in one doesn't support the required auth_request directive. Its container script just copies the default reverse proxy configuration with some modifications, and it is set up to reload whenever the original file changes.
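
As a hedged sketch of the auth_request wiring in that second nginx (container name, ports and backend are placeholders; the /validate endpoint follows the usual Vouch-Proxy examples):

# sketch: per-site config in the second nginx container
server {
    listen 8080;
    server_name myapp.example.com;

    auth_request /validate;            # ask Vouch-Proxy before serving any request

    location = /validate {
        internal;
        proxy_pass http://vouch-proxy:9090/validate;   # the Vouch-Proxy container
        proxy_set_header Host $http_host;
        proxy_pass_request_body off;
        proxy_set_header Content-Length "";
    }

    location / {
        proxy_pass http://localhost:8096;   # the actual backend service
    }
}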


1
submitted 1 year ago* (last edited 1 year ago) by shrugal@lemm.ee to c/firefox_addons@lemmy.ml
 

Hey everyone,

I created an addon to bring touchscreen navigation gestures to the desktop version of Firefox, so mainly for 2-in-1 laptops and Linux/Windows tablets. It adds back/forward navigation and pull-to-refresh gestures, shows the same icons as the existing touchpad gestures, and checks beforehand whether you can still scroll in a given direction.

Here is the link: Touch Navigation

 

So I know what AC3 means of course, but what does AC3D mean in some releases?

view more: next ›