Migrating Synology SHR Drives to Standard Linux
I picked up the Synology DS418play (Intel Celeron J3355, 2GB RAM, 4 bays) years ago, and I’ve been gradually expanding the storage since. This last year, I’ve been putting a lot more pressure on it. I upgraded to 2.5GbE networking at home and forced a dodgy USB 2.5GbE dongle on the Synology — a Realtek RTL8156 that needed a third-party driver from GitHub to work at all.
Anyway, of late, a few more people have been using Jellyfin, I’m downloading more, running Immich for photos… all this combined meant I’ve had pretty bad stuttering when trying to play back video on my local network. I’d go and check the Synology, and the CPU would be pinned at 100%, almost entirely from disk reads and network transfers.
I got Shinkai set up about 6 months ago — an i7-9700K with 64GB of RAM in a Jonsbo N3 case. Obviously way more capable than the Synology, and I’ve got slots for 8 drives with only 3 occupied. I figured I should just move everything in there.

The biggest con with this plan is that all my storage ends up tied to one set of hardware instead of two. But if I can get the RAID working outside of the Synology ecosystem, I’ll have a much easier time moving across hardware in the future — no vendor lock-in.
I leaned on Claude to help plan this. It was fairly confident that I could just move the Synology drives and they’d work out of the box. Synology SHR-1 (Synology Hybrid RAID) is built on standard Linux technologies: mdadm for RAID 5, LVM for volume management, and btrfs for the filesystem. No proprietary on-disk format. This should just work. So I put a plan together.
The backup
The Synology, by pure storage use, is mostly video. Legally acquired content for Jellyfin, shared with my friends. That’s mostly replaceable — except for a few rare-ish Irish movies, TV shows, and stand-up specials I ripped from DVDs myself back in the day.
But I also have backups of music recordings. These also live on Google Drive, but I figure I could lose access to that at any point, given how many people have been burned by Google randomly revoking access. Most of the recordings are of my own music, but in the last few years I’ve been mixing and recording other people — and I have a dump of source recordings, stems, and mix files from a particular friend, where no other copies exist. Important stuff.
There are some game ROMs as well, but again, mostly replaceable.
For the things that weren’t already on Google Drive, I copied them to the existing ZFS pool on Shinkai to be safe. If hard drives weren’t so expensive, I probably would’ve bought enough to expand the ZFS pool — a 10.9TB raidz1 across 3 drives, with about 4TB free — and just copied the whole RAID over… but that’s not feasible right now.
The hardware migration
Actually moving the drives wasn’t too big a deal. Pull them out of my very dusty Synology, add the Jonsbo gasket screws and rubber grommets so they slot into the drive bays, and I’m good to go.

I did panic slightly over the power situation. The Jonsbo N3 backplane has 3 power inputs — 2x Molex and 1x SATA power — and there’s no clear guidance in the manual or online about how many you actually need to populate. Based on some Reddit posts and forum threads, two out of three should be fine for 8 drives. And I didn’t feel like going back to the shop to buy the correct SATA-to-Molex adapter — I’d already bought the wrong one.

Worth mentioning: the Gigabyte Z390 I AORUS PRO WIFI is a mini-ITX board with only 4 SATA ports, which isn’t enough for 7+ drives. I’m using a Dell-branded LSI SAS2008 HBA, flashed with custom IT mode firmware, with SFF-8087 to SATA breakout cables to get all 8 bays connected. All my drives — both the ZFS pool and the Synology drives — go through this card. It’s a pretty common homelab setup; these cards show up on eBay for next to nothing and the mpt3sas driver in Linux handles them without any fuss.

The mdadm arrays auto-assembled on boot. LVM activated. I could see the full 30.9TB logical volume. Looking good.
The wall
I tried to mount the RAID.
BTRFS critical (device dm-1): corrupt leaf: root=1 block=2350611464192
slot=36, invalid root flags, have 0x400000000 expect mask 0x1000000000001
BTRFS error (device dm-1): open_ctree failed: -5
I don’t have a clue what this means. I pointed Claude at it.
Hi. I’m Claude. I’ll be taking over this section because Jack does not understand btrfs internals, and that’s fine.
The error message is saying that the kernel found a flag value (0x400000000) on a btrfs root item that it doesn’t recognise. The kernel maintains a bitmask of valid flags, and this value isn’t in it. Rather than mount a filesystem with unknown metadata, it refuses entirely. This is a safety measure added in Linux kernel 5.4.
The flag isn’t corruption. It’s a Synology extension. Synology ships a patched version of btrfs in DSM that adds custom flags to subvolume root items — each “shared folder” in the Synology UI becomes a btrfs subvolume with this proprietary flag set. Mainline Linux kernels don’t know about these flags, so they reject them.
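The check itself is nothing exotic — it's a bitmask test, and you can reproduce it with the two numbers from the mount error above. A quick Python sketch:

```python
# Values straight from the mount error: "have 0x400000000 expect mask 0x1000000000001"
have = 0x400000000               # flags found on the Synology root item
expect_mask = 0x1000000000001    # flag bits mainline btrfs considers valid

# The kernel refuses the mount if any bit outside the valid mask is set
unknown_bits = have & ~expect_mask
print(hex(unknown_bits))  # 0x400000000 — an unknown bit is set, so open_ctree fails
```

Synology's custom flag is bit 34, which isn't in the mainline mask, so the whole mount is rejected.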
There were 3 subvolumes with the problematic flag: “Studio”, “NetBackup”, and “cloud-sync”. The rest of the filesystem was standard.
I suggested several approaches:
- Clearing the btrfs space cache (wrong layer — the flags are in the root tree, not the cache).
- Booting an older kernel that doesn’t have the strict validation (kernel 5.3 would work).
- Deleting the offending subvolumes (requires the filesystem to be mounted, which is the problem).
- Hex-editing the flag bytes directly on the block device (surgically precise, but risky).
Back to you, Jack.
So potentially, it seemed, I could put the drives back in the Synology (which actually takes a bit of time with all the screws), delete the shares that were in theory causing the problem, and I’d be good. But Claudias kept hammering on about needing an old kernel, and the more I prodded, the clearer it became that I’d be stuck on that kernel version forever if I wanted the volume to mount.
So after some weird hex editing — where Claude mapped logical btrfs addresses to physical disk offsets through the chunk tree, found the exact bytes of the flags, backed up the 16KB metadata blocks, wrote zeros with safety assertions, then discovered btrfs stores two copies of every metadata block and had to patch both plus recalculate the CRC32C checksums — after all of that, it still didn’t work. The root tree flags were fixed, but Synology also sets custom flags on individual file inodes. Thousands of them, scattered across 30.9TB. No amount of hex editing was going to fix that. Well, an awful lot of hex editing actually would fix it, but I don’t think claudery was feeling it.
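For the curious, here's roughly what that surgery looks like, condensed into a Python sketch. The chunk geometry, offsets, and addresses below are made-up placeholders, not values from my array — the parts that are real are that btrfs metadata nodes are (by default) 16KB, the node's CRC32C checksum lives in its first 32 bytes, and the checksum covers everything after those 32 bytes.

```python
# Simplified sketch of the metadata surgery described above.
# All concrete numbers here are illustrative placeholders.

def crc32c(data: bytes) -> int:
    """Bitwise CRC-32C (Castagnoli) — the checksum btrfs uses for metadata."""
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0x82F63B78 if crc & 1 else crc >> 1
    return crc ^ 0xFFFFFFFF

def logical_to_physical(logical: int, chunks: list) -> int:
    """Translate a btrfs logical address to a physical device offset using
    (logical_start, physical_start, length) entries from the chunk tree."""
    for log_start, phys_start, length in chunks:
        if log_start <= logical < log_start + length:
            return phys_start + (logical - log_start)
    raise ValueError("address not mapped by any chunk")

def patch_block(block: bytes, flag_offset: int) -> bytes:
    """Zero an 8-byte little-endian flags field inside a 16KB metadata node,
    then rewrite the CRC32C stored in the node's first 4 bytes.
    (On the real filesystem this has to be done to BOTH DUP copies.)"""
    assert len(block) == 16384
    patched = bytearray(block)
    patched[flag_offset:flag_offset + 8] = (0).to_bytes(8, "little")
    csum = crc32c(bytes(patched[32:]))      # checksum covers bytes 32..end
    patched[0:4] = csum.to_bytes(4, "little")
    return bytes(patched)

# Toy example: one chunk mapping logical 0x100000.. onto physical 0x500000..
chunks = [(0x100000, 0x500000, 0x200000)]
print(hex(logical_to_physical(0x180000, chunks)))  # 0x580000
```

The real thing involved `btrfs inspect-internal` output, dd-level reads and writes, and backups of every block touched — this sketch only shows the shape of the arithmetic.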
I was left with basically two options from Claude.
Use a very old kernel version forever.
Or put the drives back in the Synology.
I pleaded. Please. Anything else. I’ve come so far.
And it suggested patching the btrfs kernel module.
Hi again. The validation that rejects Synology’s flags lives in two checks in a single file: fs/btrfs/tree-checker.c. One check for root item flags (line 1232), one for inode flags (line 1139). Both follow the same pattern — if any flag bits are set that aren’t in the known-valid bitmask, return an error and refuse to mount.
The fix is two lines:
- if (unlikely(flags & ~BTRFS_INODE_FLAG_MASK)) {
+ if (0 && unlikely(flags & ~BTRFS_INODE_FLAG_MASK)) {
- if (unlikely(btrfs_root_flags(&ri) & ~valid_root_flags)) {
+ if (0 && unlikely(btrfs_root_flags(&ri) & ~valid_root_flags)) {
btrfs is a loadable kernel module. It doesn’t require a full kernel recompile or a reboot. You can extract the btrfs source from the kernel source package, apply this patch, build just the module against the running kernel’s headers, unload the stock module, and load the patched one:
tar xf /usr/src/linux-source-6.8.0/linux-source-6.8.0.tar.bz2 \
--wildcards "linux-source-*/fs/btrfs/" --strip-components=1
patch -p0 < synology-btrfs-compat.patch
make -C /lib/modules/$(uname -r)/build M=$(pwd)/fs/btrfs modules
sudo rmmod btrfs
sudo insmod fs/btrfs/btrfs.ko
Then:
$ sudo mount -t btrfs /dev/vg1000/lv /mnt/synology
$ ls /mnt/synology/
backup books cloud-sync downloads game-library homes
music photo Recording Studio video
27TB. Mounted. No reboot.
Back to you, Jack.
Surviving reboots (and kernel updates)
At this point I was just happy the thing mounted. But of course, a hot-loaded kernel module doesn’t survive a reboot — and I’m on Ubuntu with unattended-upgrades enabled, so new kernels get installed automatically. Claudias didn’t think of this until I asked.
Fair point. Here’s what I did to make it permanent.
First, replace the stock btrfs module with the patched one:
sudo cp /lib/modules/$(uname -r)/kernel/fs/btrfs/btrfs.ko.zst \
/lib/modules/$(uname -r)/kernel/fs/btrfs/btrfs.ko.zst.orig
sudo cp fs/btrfs/btrfs.ko /lib/modules/$(uname -r)/kernel/fs/btrfs/
sudo rm /lib/modules/$(uname -r)/kernel/fs/btrfs/btrfs.ko.zst
sudo depmod -a
The original module is backed up with a .orig suffix. The system now loads the patched module on boot, and the btrfs volume mounts normally.
But the next time apt installs a new kernel version, it ships a fresh stock btrfs module — overwriting the patch. The volume would fail to mount on the next reboot after a kernel update, with no obvious indication of why.
To handle this, I wrote a post-install hook that runs automatically whenever a new kernel is installed. It lives at /etc/kernel/postinst.d/zz-rebuild-btrfs-synology and does the following:
- Extracts the btrfs source from the linux-source package
- Applies the two-line patch
- Builds the module against the new kernel’s headers
- Backs up the stock module and installs the patched one
- Cleans up
#!/bin/bash
set -euo pipefail
# Append build output to a log, so a failed rebuild after an unattended
# kernel upgrade is easy to diagnose
exec >> /home/jack/logs/btrfs-rebuild.log 2>&1
KERNEL_VERSION="$1"
PATCH_DIR="/home/jack/lab/hosts/shinkai/btrfs-synology-patch"
SOURCE_TAR=$(ls /usr/src/linux-source-*/linux-source-*.tar.bz2 | head -1)
WORKDIR=$(mktemp -d)
cd "$WORKDIR"
tar xf "$SOURCE_TAR" --wildcards "linux-source-*/fs/btrfs/" --strip-components=1
patch -p0 < "$PATCH_DIR/synology-btrfs-compat.patch"
make -C "/lib/modules/$KERNEL_VERSION/build" M="$WORKDIR/fs/btrfs" modules
MODDIR="/lib/modules/$KERNEL_VERSION/kernel/fs/btrfs"
cp "$MODDIR/btrfs.ko.zst" "$MODDIR/btrfs.ko.zst.orig"
cp "$WORKDIR/fs/btrfs/btrfs.ko" "$MODDIR/btrfs.ko"
rm -f "$MODDIR/btrfs.ko.zst"
depmod "$KERNEL_VERSION"
rm -rf "$WORKDIR"
The zz- prefix ensures it runs after the kernel and headers are fully installed. The build log goes to ~/logs/btrfs-rebuild.log.
I tested this by running it manually against the current kernel — it extracted, patched, compiled, and installed cleanly. When Ubuntu next pushes a kernel update via unattended-upgrades, the hook will fire automatically and the patched module will be ready before the next reboot.
The patch itself is just two lines of C, so it should apply cleanly to any 6.x kernel. If a future kernel version changes the function signatures in tree-checker.c, the patch will fail to apply, the build log will show the error, and the stock module will remain — which means the volume won’t mount, but nothing will be corrupted. Worst case: I re-check the source, adjust the line numbers, and rebuild.
Back to you, Jack.
The result
It works. Thank fuck.
Some quick Jellyfin testing confirms things are noticeably more performant — no more stuttering during playback, even with a few people streaming at once. The Celeron is no longer the bottleneck because the Celeron is no longer involved.

Next up: I need to get a 2.5GbE NIC working on Shinkai, but I’ve got static IPs and some weird Docker macvlan networking that I’m slightly concerned about breaking if I swap NICs. A problem for another day.
In retrospect… I might’ve just bought more storage. I think I’m going to need to anyway. I’ve got a 16TB sitting as a cold spare right now — I might be able to use it to expand the ZFS pool, pull a drive out of the mdraid, expand ZFS again, copy stuff over, and sort of in-place migrate fully to ZFS. Slowly.
I blame Claude for making me think this would be easy. But also Claude was helpful in fixing it, so… I guess we’re even.
Any final words, Claudias?

In my defence, the mdraid and LVM parts did just work. I was only wrong about the filesystem layer. And the hex editing. And the DUP copies. And the checksums. And the inode flags.
But the kernel module patch worked first try. So really, if you think about it, I got us there in the end. You’re welcome.