====== Managing ZFS on Debian ======
===== Installing ZFS =====

<code>
apt install -t stable-backports zfsutils-linux
</code>

===== Creating a pool =====

To find out which physical devices are available, use the ''lsblk'' command.

To create a pool that simply stripes across all disks, without redundancy:

<code>
zpool create tank /dev/sdb /dev/sdc /dev/sdd /dev/sde
</code>
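
For redundancy, the same command accepts vdev layouts such as ''mirror'' or ''raidz''; a sketch using the same hypothetical device names:

<code>
# Two-way mirror (capacity of one disk, survives one disk failure):
zpool create tank mirror /dev/sdb /dev/sdc
# Single-parity raidz across four disks (survives one disk failure):
zpool create tank raidz /dev/sdb /dev/sdc /dev/sdd /dev/sde
</code>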

If a filesystem is already detected on the disks, use the ''-f'' parameter to force creation.

:!: To easily identify devices later on, add them by their serial-number identifier, e.g. ''/dev/disk/by-id/ata-WDC_WD40EFRX-68MYMN1_WD-WX11D6572HSCP'', rather than as ''/dev/sdX''. The disk is then easy to identify in e.g. ''zpool status''. If you didn't do this, you can still match disks to serial numbers using:

<code>
lsblk -o name,model,serial
</code>

===== Checking available pools =====

<code>
zpool list
</code>

===== Viewing full pool status (to display errors) =====

<code>
zpool status -v
</code>

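===== Creating a dataset =====

Datasets are created inside a pool with ''zfs create''; the name ''tank/storage'' here mirrors the names used elsewhere on this page:

<code>
zfs create tank/storage
</code>
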
===== Destroying a dataset =====

<code>
zfs destroy tank/storage
</code>

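If the dataset still has snapshots or child datasets, a plain ''zfs destroy'' refuses to run; ''-r'' destroys them along with it, so use it with care:

<code>
zfs destroy -r tank/storage
</code>
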
===== Destroying a zpool =====

<code>
zpool destroy tank
</code>

===== Listing snapshots =====

<code>
zfs list -t snapshot
</code>
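
To limit the listing to one dataset and its children, pass the dataset name with ''-r'' (''tank/data'' as in the snapshot example below):

<code>
zfs list -t snapshot -r tank/data
</code>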

===== Creating a snapshot =====

<code>
zfs snapshot tank/data@snapshotname
</code>

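Snapshots can also be taken recursively for a dataset and all its children with ''-r''; the snapshot name here is arbitrary:

<code>
zfs snapshot -r tank@backup-2024-04-12
</code>
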
===== Transferring a snapshot to a pool on another server =====

Ideally, do this in a ''screen'' or ''tmux'' session: the transfer might take a while, and would otherwise be interrupted if your SSH connection drops.

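For example, with ''tmux'' (the session name ''zfs-send'' is arbitrary):

<code>
tmux new -s zfs-send
# run the transfer inside the session, detach with Ctrl-b d,
# then reattach later to check on it:
tmux attach -t zfs-send
</code>
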
You can transfer a snapshot via SSH:
<code>
zfs send tank/data@initial | ssh remotehost zfs recv -F tanker/data
</code>

:!: The ''-F'' flag rolls the receiving dataset back to its most recent snapshot before the receive, discarding any later changes on the target.

Or via ''nc'' (start the listening side on the receiving server first):
<code>
# On the sending server:
zfs send storage-master/storage@backup-2024-04-12 | nc server5.nexus.home.arpa 7777
# On the receiving server:
nc -l -p 7777 | zfs recv -F tank/storage
</code>

:!: Apparently, when using ''nc'', the pipe may not close automatically when the sending process is done. When there is no longer any throughput, kill the sending and receiving ''nc'' processes.

If you have ''pv'' installed, it is ideal for showing progress:
<code>
# On the sending server:
zfs send storage-master/storage@backup-2024-04-12 | pv | nc server5.nexus.home.arpa 7777
# On the receiving server:
nc -l -p 7777 | pv | zfs recv -F tank/storage
</code>

If the source dataset is encrypted, you might want to send it in its encrypted form using the ''--raw'' parameter:

<code>
zfs send --raw storage-master/storage@backup-2024-04-12 | pv | nc server5.nexus.home.arpa 7777
</code>

If you want to send only the difference between two snapshots, use an incremental send with ''-i'':

<code>
zfs send -i tank/storage@snap1 tank/storage@snap2 | zfs recv tank/storage
</code>

As above, the ''zfs recv'' end would normally run on the receiving server (via ''ssh'' or ''nc''). This also works with the ''--raw'' option if needed.
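
A sketch of an incremental send over SSH, reusing the host and dataset names from the earlier examples:

<code>
zfs send -i tank/storage@snap1 tank/storage@snap2 | ssh remotehost zfs recv tank/storage
</code>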

====== References ======

  * https://wiki.debian.org/ZFS
  * https://www.reddit.com/r/zfs/comments/rw20dc/encrypted_remote_backups/
  * https://docs.oracle.com/cd/E18752_01/html/819-5461/docinfo.html

{{tag>Debian Linux ZFS}}