For a short moment after it was added to the kernel, it seemed like there was a good chance of BcacheFS becoming an institution within the Linux ecosystem. A new filesystem with built-in multi-drive prioritized caching, replicas, encryption, subvolumes, the works. Anyone paying attention to the saga knows by now that this is not how things turned out, and with the release of Linux 6.18, BcacheFS was stripped out completely. BcacheFS still lives on as an independently maintained project, and can be installed through DKMS, but that is a bit contrived even for my tastes.
While BcacheFS and Linux were still in the honeymoon phase in 2023, I decided to jump in with both feet. Today my main system runs a BcacheFS array composed of two 6TB hard disks and a 2TB NVMe drive. This created a >12TiB volume which transparently keeps the most frequently accessed files on the NVMe, while allowing me to set replication parameters on a per-directory basis. Aside from the nightmare of configuring the thing to boot, the experience has been stellar. Unfortunately, this is the end of the road. I'll be switching back to a more "conventional" LVM-based setup. A potential situation where I need to compile out-of-tree kernel modules on a recovery USB just to chroot into my own system is not one I consider workable.
So today I will spend the day doing the whole hermit crab shell exchange with my files as I take the first drive from the array offline, reformat it, move files from the rest of the array onto it, take another drive offline, and so on. Wish me luck.

Best of luck. The LVM+LUKS+ext4 combo is rock solid. btrfs, bcachefs and zfs always seemed more risk than reward.
I've never been bitten by btrfs after using it for many years, but I think I will be going with XFS on LUKS on LVM (with caching) this time around.
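For anyone curious what that stack looks like, here is a rough sketch of XFS on LUKS on a cached LVM volume. Device names, volume group names, and size allocations are all placeholders, and this is an untested outline rather than a recipe; the exact cache options worth using depend on your LVM version and workload.

```shell
# Hypothetical devices: /dev/sda is the HDD, /dev/nvme0n1 is the NVMe.
pvcreate /dev/sda /dev/nvme0n1
vgcreate vg0 /dev/sda /dev/nvme0n1

# Bulk LV on the HDD, a smaller cache LV on the NVMe.
lvcreate -n data -l 100%PVS vg0 /dev/sda
lvcreate -n cache -l 90%PVS vg0 /dev/nvme0n1

# Attach the NVMe LV as a dm-cache for the bulk LV
# (writethrough is the safer default if the cache device dies).
lvconvert --type cache --cachevol vg0/cache vg0/data

# Encrypt the cached LV, open it, and format the mapped device with XFS.
cryptsetup luksFormat /dev/vg0/data
cryptsetup open /dev/vg0/data data_crypt
mkfs.xfs /dev/mapper/data_crypt
```

Note that the cache sits below LUKS here, so cached blocks on the NVMe are ciphertext; putting the cache above LUKS instead is possible but changes that property.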
I always confused xfs with zfs and wrote it off. I absolutely should migrate some of my big RAID10 volumes to xfs.
btrfs is good, just don't use its raid5 or raid6. The tool even tells you not to, it's experimental.
I don't even use those with LVM. 1, 0, or 10, only.
At least 1 and 10 are stable with btrfs. I personally prefer to mirror for major storage so that rebuilds are simple and fast.
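For reference, a mirrored btrfs filesystem like the ones discussed above can be created and repaired along these lines (device names are hypothetical):

```shell
# raid1 for both data (-d) and metadata (-m) across two disks.
mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc

# After a disk failure, a rebuild is a single command against
# the mounted filesystem: copy the mirror onto a new disk.
btrfs replace start /dev/sdb /dev/sdd /mnt/pool
btrfs replace status /mnt/pool
```

This is part of why mirrors rebuild simply: the replacement is a straight copy of surviving data rather than a parity reconstruction.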
Btrfs has been great for me. Love being able to roll back to a previous snapshot if something gets messed up. I use it for my system drive and xfs for my user drives.
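The rollback workflow mentioned here can be sketched as follows. This assumes a common (but not universal) layout where the root filesystem lives in a subvolume named `@` and snapshots in `@snapshots`, mounted at `/.snapshots`; subvolume names, the device node, and mount points are all examples, not btrfs requirements.

```shell
# Take a cheap read-only snapshot before risky changes.
btrfs subvolume snapshot -r / /.snapshots/pre-upgrade

# To roll back: mount the top-level volume (subvolid=5), move the
# broken root aside, and promote a writable copy of the snapshot.
mount -o subvolid=5 /dev/sda2 /mnt
mv /mnt/@ /mnt/@-broken
btrfs subvolume snapshot /mnt/@snapshots/pre-upgrade /mnt/@
# Then reboot; delete /mnt/@-broken once you're satisfied.
```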
I've been conservative about switching to btrfs, but it's been the default on Fedora for so long, and I've used it on less critical computers without a problem for so long, that I finally switched my server to it when I upgraded its HDD. I hope to take advantage of snapshot-based backups eventually.
Btrfs + zstd compression + bees for dedupe is amazing. Reflinks are the main reason I am able to host a bajillion game servers without running out of space (along with ksm for memory dedupe but that's another topic).
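For readers unfamiliar with reflinks: they are copy-on-write file clones, where the "copy" shares extents with the original until either side is written. A minimal illustration with coreutils (`--reflink=auto` quietly falls back to a normal copy on filesystems without reflink support, so this runs anywhere; on btrfs or XFS the clone consumes no extra space):

```shell
# Create a 4 MiB file, then clone it.
dd if=/dev/zero of=big.bin bs=1M count=4 status=none
cp --reflink=auto big.bin clone.bin   # instant on btrfs/XFS, shares extents
cmp big.bin clone.bin && echo "identical"

# Writing to the clone triggers copy-on-write only for the touched
# extent; the original file is untouched.
printf 'x' | dd of=clone.bin bs=1 count=1 conv=notrunc status=none
cmp -s big.bin clone.bin || echo "diverged"
```

Deduplicating tools like bees work on the same mechanism in reverse: they find identical extents after the fact and reflink them back together.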
Me too. I tried btrfs but I had problems that I couldn't track down or resolve.