CondorWonder

joined 2 years ago
[–] CondorWonder@lemmy.ca 5 points 1 month ago (1 children)

ddrescue doesn’t work properly on audio discs (audio CDs use 2352-byte raw sectors, not the 2048-byte sectors of data discs, and there’s no filesystem to image). Have you tried something like cdparanoia https://www.xiph.org/paranoia/?
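A minimal cdparanoia invocation might look like this; the device path is an assumption, adjust it to your drive:

```shell
# -B rips each track to its own WAV file,
# -d selects the drive (hypothetical path here),
# -L writes a detailed log of read errors and corrections
cdparanoia -B -d /dev/sr0 -L
```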

[–] CondorWonder@lemmy.ca 1 points 1 month ago (1 children)

Ok, so you’re using Docker, and the drive is a USB disk?

I think you need to:

  • create a mount point folder
  • mount the drive in fstab so it’s available on boot
  • create a dedicated user for Jellyfin to run as
  • make sure the new user has read/write access to the media folders (this may mean changing ownership or adding the user to a new or existing group)
  • set the docker container to run as the new user
  • add the mount point as a volume inside the docker container
  • add the folders with media in Jellyfin
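Put together, the steps above might look something like this sketch; the UUID, mount point, UID/GID, and paths are all assumptions, and the official jellyfin/jellyfin image is just one option:

```yaml
# Assumes an fstab entry like:
#   UUID=1234-ABCD  /mnt/media  ext4  defaults,nofail  0  2
# (nofail so boot doesn't hang if the USB drive is unplugged)
# and a dedicated user created with e.g.: useradd -r -u 1100 jellyfin
services:
  jellyfin:
    image: jellyfin/jellyfin
    user: "1100:1100"            # run as the dedicated jellyfin user
    volumes:
      - /mnt/media:/media:ro     # the mounted USB drive
      - ./config:/config         # Jellyfin's own config/database
    ports:
      - "8096:8096"
    restart: unless-stopped
```

After the container is up, add /media as a library inside Jellyfin.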
[–] CondorWonder@lemmy.ca 8 points 1 month ago (3 children)

More info needed. How are you running Jellyfin? How is the drive attached? What OS?

[–] CondorWonder@lemmy.ca 7 points 2 months ago (1 children)

Not sure what you mean by this - Nabu Casa already has a Z-Wave device, the ZWA-2, which is fully supported.

[–] CondorWonder@lemmy.ca 3 points 2 months ago

To me, a strong mesh is the better way to go - ensuring you have a mesh of router devices between the coordinator and the end device has worked well, so that no matter where a device is it stays connected. A better antenna may help, but all it takes is a glitch like your 2.4 GHz Wi-Fi moving to overlap with the Zigbee channel and the device drops out.

I have a TubesZB Zigbee coordinator with an external antenna and I’m not sure I’ll benefit from the ZBT-2, but the 2.4 GHz band is very busy here. I’m tempted to try it and see if it makes any difference. I find my Zigbee network ‘slow’ - sensor updates take 1-2 seconds before HA receives them.

[–] CondorWonder@lemmy.ca 1 points 2 months ago

Bcache can’t differentiate between data and metadata on the cache drive (it’s block-level caching), so if something happens to a write-back cache device you lose data, and possibly the entire array. Personally I wouldn’t use bcache (or ZFS caching) in write-back mode without mirrored cache devices, to ensure resiliency of the array. I don’t know if ZFS is smarter - presumably it can be, since it’s in control of the raw disks; I just didn’t want to deal with out-of-tree modules.
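If you do run a single cache device, keeping bcache in write-through mode limits the blast radius; a sketch using bcache’s sysfs interface (the bcache0 device name is an assumption):

```shell
# Show the current mode; the bracketed entry is the active one,
# e.g. "writethrough [writeback] writearound none"
cat /sys/block/bcache0/bcache/cache_mode

# Switch to writethrough: reads are still cached, but every write also
# lands on the backing device, so losing the cache loses no data
echo writethrough | sudo tee /sys/block/bcache0/bcache/cache_mode
```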

[–] CondorWonder@lemmy.ca 1 points 2 months ago (2 children)

For your second scenario - yes, you can use md under bcache with no issues. There’s more to configure, but once set up it has been solid. I actually do md/RAID1 - LUKS - bcache - btrfs layers for the SSD cache disks, where the data drives just use LUKS - bcache - btrfs. Keep in mind that with bcache, if you lose a cache disk you can’t mount - and of course if you’re doing write-back caching then the array is also lost. With write-through caching you can force-disconnect the cache disk and still mount the disks.
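A sketch of building that stack; every device name here is hypothetical, and this is destructive to whatever is on those disks:

```shell
# Cache-disk stack: md/RAID1 -> LUKS -> bcache cache set
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1
cryptsetup luksFormat /dev/md0
cryptsetup open /dev/md0 cache_crypt
make-bcache -C /dev/mapper/cache_crypt        # -C = cache device

# Data-disk stack: LUKS -> bcache backing device -> btrfs
cryptsetup luksFormat /dev/sda
cryptsetup open /dev/sda data_crypt
make-bcache -B /dev/mapper/data_crypt         # -B = backing device -> /dev/bcache0
mkfs.btrfs /dev/bcache0

# Attach the backing device to the cache set via its UUID (from bcache-super-show)
# echo <cset-uuid> > /sys/block/bcache0/bcache/attach
```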

[–] CondorWonder@lemmy.ca 3 points 3 months ago (1 children)

This. If you have any sort of setup already - just do a backup and restore. All the configuration, automations, etc. will come across exactly as they were, including your subscription setup.

I’ve migrated from a Pi to a mini PC, so it works between different platforms too - there I had to reinstall add-ons, but it was still generally an easy migration.

[–] CondorWonder@lemmy.ca 3 points 3 months ago (4 children)

I work around this with the Uptime integration, then conditions in my automations requiring that uptime be over whatever threshold I want.

You could try using not_from in your state trigger, but I’ve had limited success with that working recently. Something like this:

#…
  - trigger: state
    entity_id:
      - event.inovelli_on_off_switch_config
    not_from:
      - unavailable
      - unknown
#…
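The uptime guard I mentioned could look something like this as a condition; the sensor name and the 5-minute threshold are assumptions (the Uptime integration exposes a timestamp of when HA started):

```yaml
# Only run if Home Assistant has been up for at least 5 minutes,
# so restored entities have settled out of unavailable/unknown
condition:
  - condition: template
    value_template: >
      {{ (now() - states('sensor.uptime') | as_datetime) > timedelta(minutes=5) }}
```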
[–] CondorWonder@lemmy.ca 3 points 4 months ago (1 children)

There’s your answer: you need an active PoE injector that follows 802.3af. None of the ones you pictured are correct; they are passive, not active, and in the worst case can damage your device.

The difference is that an active injector and the device communicate to determine how much power to provide, whereas passive injectors just whack the device with their rated power. The device shouldn’t power up without that negotiation (per the spec).
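Part of that negotiation is power classification; a small sketch of the 802.3af/at class table, showing the power guaranteed at the powered device:

```python
# 802.3af/at classification: max power (watts) available at the powered
# device (PD). Classes 0-3 are 802.3af; class 4 needs 802.3at (PoE+).
PD_POWER_W = {0: 12.95, 1: 3.84, 2: 6.49, 3: 12.95, 4: 25.50}

def pd_power(poe_class: int) -> float:
    """Return the max power (W) a PD of the given class may draw."""
    return PD_POWER_W[poe_class]

print(pd_power(0))  # prints 12.95 (the 802.3af default class)
```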

[–] CondorWonder@lemmy.ca 3 points 4 months ago

Based on what I’ve seen with my use of zram, I don’t think it reserves the total space up front; it consumes whatever is shown in the output of zramctl --output-all. If you’re swapping then yes, it takes memory from the system (up to the 8G disk size), depending on how compressible the swapped content is (e.g. at a 3x ratio, a full 8 GB device costs about 8/3 ≈ 2.7 GB of RAM). That said - it will take memory away from the disk cache if you’re swapping.
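That arithmetic can be sketched as follows; the 8 GB disk size and 3x ratio are the example numbers from above, not measurements:

```python
# Rough RAM actually consumed by a full zram swap device:
# the uncompressed disk size divided by the compression ratio.
def zram_ram_used_gb(disksize_gb: float, compression_ratio: float) -> float:
    """Approximate RAM consumed when the zram device is full."""
    return disksize_gb / compression_ratio

print(round(zram_ram_used_gb(8, 3), 2))  # prints 2.67
```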

Realistically I think your issue is IO, and there’s not much you can do if your disk cache is being flushed. Switching to zswap might help, as it should spill more into disk when you’re under memory pressure.
