====== Managing ZFS on Debian ======

===== Installing ZFS =====

First, make sure your system is up to date and has been rebooted; this prevents module compilation issues later on. Then, add the ''contrib'' section to your apt sources configuration. It should look similar to this:

<code>
# deb cdrom:[Debian GNU/Linux 12.5.0 _Bookworm_ - Official amd64 DVD Binary-1 with firmware 20240210-11:28]/ bookworm contrib main non-free-firmware

deb http://deb.debian.org/debian/ bookworm main non-free-firmware contrib
deb-src http://deb.debian.org/debian/ bookworm main non-free-firmware contrib

deb http://security.debian.org/debian-security bookworm-security main non-free-firmware contrib
deb-src http://security.debian.org/debian-security bookworm-security main non-free-firmware contrib

# bookworm-updates, to get updates before a point release is made;
# see https://www.debian.org/doc/manuals/debian-reference/ch02.en.html#_updates_and_backports
deb http://deb.debian.org/debian/ bookworm-updates main non-free-firmware contrib
deb-src http://deb.debian.org/debian/ bookworm-updates main non-free-firmware contrib
</code>

Then, add the backports repository, update apt and install the required packages. As root:

<code>
codename=$(lsb_release -cs); echo "deb http://deb.debian.org/debian $codename-backports main contrib non-free" | tee -a /etc/apt/sources.list
apt update
apt install linux-headers-amd64
apt install -t stable-backports zfsutils-linux
</code>

===== Creating a pool =====

To figure out which physical devices are available, use the ''lsblk'' command.

Creating a pool that simply stripes data across all disks, without redundancy:

<code>
zpool create tank /dev/sdb /dev/sdc /dev/sdd /dev/sde
</code>

If a filesystem is already detected on the disks, use the ''-f'' parameter to force creation.

:!: To easily identify devices later on, instead of adding them as ''/dev/sdX'', add them with their serial number identifier, e.g. ''/dev/disk/by-id/ata-WDC_WD40EFRX-68MYMN1_WD-WX11D6572HSCP''. This makes the disk easy to identify in e.g. ''zpool status''. If you didn't do this, you can still identify disks by using:

<code>
lsblk -o name,model,serial
</code>

===== Checking available pools =====

<code>
zpool list
</code>

===== Viewing full pool status (to display errors) =====

<code>
zpool status -v
</code>

===== Destroying a dataset =====

<code>
zfs destroy tank/storage
</code>

===== Destroying a zpool =====

<code>
zpool destroy tank
</code>

===== Listing snapshots =====

<code>
zfs list -t snapshot
</code>

===== Creating a snapshot =====

<code>
zfs snapshot tank/data@snapshotname
</code>

===== Transferring a snapshot to a pool on another server =====

Ideally, do this in a ''screen'' or ''tmux'' session to avoid unexpected interruptions, as it might take a while.

You can transfer a snapshot via SSH:

<code>
zfs send tank/data@initial | ssh remotehost zfs recv -F tanker/data
</code>

Or via ''nc'':

<code>
# On the sending server:
zfs send storage-master/storage@backup-2024-04-12 | nc server5.nexus.home.arpa 7777

# On the receiving server:
nc -l -p 7777 | zfs recv -F tank/storage
</code>

:!: When using ''nc'', the pipe may not close automatically once the sending process is done. If there is no longer any throughput, kill the sending and receiving ''nc'' processes.
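Depending on the netcat variant, the hanging pipe can often be avoided with the ''-q'' option, which both the ''netcat-openbsd'' and ''netcat-traditional'' packages on Debian accept; it makes ''nc'' exit shortly after reaching end-of-file on its input. A minimal sketch, assuming the same hosts and snapshot as above:

<code>
# On the sending server: exit 1 second after the zfs send stream ends,
# which closes the connection and lets the receiving nc exit on its own
zfs send storage-master/storage@backup-2024-04-12 | nc -q 1 server5.nexus.home.arpa 7777
</code>

Check ''nc -h'' on both machines first, as not every netcat build supports ''-q''.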
If you have ''pv'' installed, it's ideal for showing progress:

<code>
# On the sending server:
zfs send storage-master/storage@backup-2024-04-12 | pv | nc server5.nexus.home.arpa 7777

# On the receiving server:
nc -l -p 7777 | pv | zfs recv -F tank/storage
</code>

If the source dataset is encrypted, you might want to send it in its encrypted form using the ''--raw'' flag:

<code>
zfs send --raw storage-master/storage@backup-2024-04-12 | pv | nc server5.nexus.home.arpa 7777
</code>

If you only want to send the difference between two snapshots, use an incremental send with ''-i'' (combine it with the ''ssh'' or ''nc'' pipelines above to send it to another server):

<code>
zfs send -i tank/storage@snap1 tank/storage@snap2 | zfs recv tank/storage
</code>

This also works with the ''--raw'' option if needed.

====== References ======

  * https://wiki.debian.org/ZFS
  * https://www.reddit.com/r/zfs/comments/rw20dc/encrypted_remote_backups/
  * https://docs.oracle.com/cd/E18752_01/html/819-5461/docinfo.html

{{tag>Debian Linux ZFS}}