ZFS

ZFS is a combined filesystem and volume manager that provides RAID-like redundancy, plus features such as checksumming, compression, and snapshots.

Install ZFS on Unraid

To install the unRAID6-ZFS plugin, copy the URL below into the Install Plugin page in Unraid.

https://raw.githubusercontent.com/Steini1984/unRAID6-ZFS/master/unRAID6-ZFS.plg

Set up the array

First of all, we need to create a pool.

In my example, I'm naming my pool sixpool, and my disks are /dev/sda through /dev/sde in a raidz2 (two disks of redundancy). You should use raidz2 instead of raidz1 (single-disk redundancy): with raidz1, a second disk failure during a long resilver loses the pool.

I'm using lz4, a lightweight compression algorithm. This is a sensible default since it will not noticeably decrease performance (if it does, it will not be by much), and it will save you disk space.

I'm also turning off atime (access time). With atime on, every read of a file triggers a metadata write to record when it was last accessed. This is useless for my workload, and turning it off improves performance a little.

zpool create -o ashift=12 -O recordsize=1M -O xattr=sa -O compression=lz4 -O atime=off sixpool raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde
zpool export sixpool
zpool import -d /dev/disk/by-id sixpool

This pool will then be mounted at /sixpool.
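Once the pool is created, it's worth confirming that it imported cleanly and that the properties set at creation time took effect. A quick sketch (these commands need a live ZFS system with the pool above):

```shell
# Confirm the pool is ONLINE and the disks show up by-id
zpool status sixpool

# Confirm the properties set with -O at creation time
zfs get compression,atime,recordsize sixpool
```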

Create datasets

When you create a dataset, it appears as a "folder" under the /sixpool path.

It's not strictly necessary to make more than one dataset, but if you want to segment your files, for example with different compression levels (one dataset for uncompressed storage, one for archival storage), you can simply create more datasets. Each dataset needs its own unique name.

I have made an example of workloads im creating datasets for, and the compression algorithm i want to use on the datasets:

Dataset    Compression
database   off
work       lz4
archive    gzip-9

To create the datasets, use the following commands:

zfs create sixpool/database
zfs create sixpool/work
zfs create sixpool/archive

And to set the compression level on each dataset, run the following commands (the work dataset is skipped since it inherits the parent's compression: lz4):

zfs set compression=off sixpool/database
zfs set compression=gzip-9 sixpool/archive
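To double-check that each dataset ended up with the intended compression setting, you can walk the whole tree (a sketch; requires the pool above):

```shell
# -r recurses through all datasets; the SOURCE column shows whether
# each value is set locally or inherited from the parent
zfs get -r compression sixpool
```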

Cronjobs in Unraid

It's recommended to run weekly scrubs: a scrub reads every block in the pool, verifies checksums, and repairs any silent corruption using the pool's redundancy.

If you are using Unraid, you should install a plugin called "User Scripts"; you can find it in the "Community Applications" plugin.

My setup in Unraid is as follows:

zpool_scrub_start

Custom schedule (0 0 * * 7)

#!/bin/bash
zpool scrub sixpool

zpool_status

Custom schedule (0 8 * * 7)

I have also configured the Telegram bot in Settings > Notification Settings, and use this script to get a weekly zpool status sent to my Telegram account.

#!/bin/bash
EVENT=$(zpool status) sh /boot/config/plugins/dynamix/notifications/agents/Telegram.sh
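If you would rather only hear from the bot when something is wrong, `zpool status -x` can be used instead: it prints "all pools are healthy" when no pool has problems. A minimal sketch, reusing the same Unraid Telegram notification agent as above:

```shell
#!/bin/bash
# Only notify when `zpool status -x` reports a problem.
# Assumption: it prints "all pools are healthy" when everything is fine.
check_healthy() {
  [ "$1" = "all pools are healthy" ]
}

# Guard so the script is a no-op on machines without ZFS installed.
if command -v zpool >/dev/null 2>&1; then
  status=$(zpool status -x)
  if ! check_healthy "$status"; then
    EVENT="$status" sh /boot/config/plugins/dynamix/notifications/agents/Telegram.sh
  fi
fi
```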

Cronjobs in Linux

If you are setting this up on a regular Linux machine, you can edit your crontab with the command crontab -e and add the following (it's recommended to set up some way to get notifications sent to you):

0 0 * * 7 zpool scrub sixpool
0 8 * * 7 sh /root/telegram_status.sh


telegram_status.sh:

#!/bin/bash
# Bot API token; the Telegram API expects the URL path segment "bot<token>",
# so store it here with the "bot" prefix included.
apikey=""
chat_id=""

# --data-urlencode makes this a POST and URL-encodes the multi-line output
text=$(zpool status)
curl -s "https://api.telegram.org/$apikey/sendMessage" \
  --data-urlencode "chat_id=$chat_id" \
  --data-urlencode "text=$text"