FreeBSD and ZFS Encryption

Recently, I decided to move to FreeBSD for my main fileserver. I’d been using ZFS on Linux (ZoL) for some time, on Ubuntu. A number of things motivated me to make this change; my home router is a Netgate Atom box, which not only has a free BIOS (coreboot / SeaBIOS) but also runs FreeBSD. Getting back into the flow of things with FreeBSD made me realize 1) how much it has improved since I last used it with any regularity (the 5.x days) and 2) how much I actually like it.

I won’t get into the whole systemd debate, which has been discussed to death elsewhere, but I’m not sure I like the direction most GNU/Linux distributions are headed. I also like to challenge myself by using unfamiliar combinations of technology, and what better place to do so than on a home server?

I also wanted to encrypt my entire ZFS filesystem (I believe everyone, everywhere, if they have a computer, should be using whole-disk encryption at this point). This turns out to be a bit of a challenge. ZoL’s PR for native encryption was only just merged, and in the FreeBSD case there was (at least until recently) a distinct lack of interest on the part of the developers, who believe you should be using GELI instead. As of this writing, it looks like encryption support in ZFS will make it to FreeBSD eventually, but, most importantly to me, it’s not there just yet.

At any rate, having recently moved cross-country, I had several encrypted backups of my ZFS array, so I was fine with starting from scratch. I wasn’t super familiar with GELI or GEOM, and decided to just dive in. After some Google searching, which led me to some FreeBSD forum posts as well as a gist by GitHub user kiela, I decided to set it up in the following way:

1) ZFS disks will be encrypted at the block device layer by GELI;

2) The encryption keys will be stored in a vnode-backed memory device, itself encrypted by a randomly generated key and a passphrase;

3) On boot, that keystore will be mounted (I will have to enter a passphrase), and then a separate, randomly generated key stored on the virtual disk will be used to unlock all of the GELI devices.

I decided to go this route, rather than giving every disk a separate key, mostly for administrative reasons, but also for the very simple reason that I don’t want to have to type in 8+ passphrases to reboot (one for each GELI provider in the array).

First, we need to generate our vnode-backed keystore. This is basically the equivalent of a loopback device backed by a file on Linux. To make things more confusing for anyone coming from Linux, FreeBSD uses `mdconfig` for this purpose.
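For anyone mapping this from Linux, here’s the rough equivalence (the Linux half is purely for orientation, and the paths are placeholders):

# Linux: attach a file as a loop device
#   losetup /dev/loop0 /path/to/file.img
# FreeBSD: attach a file as a vnode-backed memory disk
# (mdconfig prints the md unit it allocates, e.g. md0)
mdconfig -a -t vnode -f /path/to/file.img
# ...and detach it again (unit 0 here):
mdconfig -d -u 0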

Let’s start by making our mount directory, generating the actual blocks for the device, and the key we will use to encrypt it:

mkdir -p /boot/keys/store
dd if=/dev/zero of=/boot/keys/zfs-keys.volume bs=4k count=1000
dd if=/dev/random of=/boot/keys/md0.key bs=4096 count=1

Now, let’s create a vnode-backed memory device and initialize GELI encryption on it using this key (and a passphrase we will enter):

mdconfig -a -t vnode -f /boot/keys/zfs-keys.volume
geli init -s 4096 -K /boot/keys/md0.key /dev/md0
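As an aside, geli init drops a backup copy of the provider metadata under /var/backups by default, but you can also take one manually (the destination path here is just an example):

geli backup /dev/md0 /root/md0.eli.backup
# if the metadata is ever clobbered:
# geli restore /root/md0.eli.backup /dev/md0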

Now, attach the device so we can create a filesystem and mount it. I used /boot/keys and /boot/keys/store for this purpose; YMMV.

geli attach -k /boot/keys/md0.key /dev/md0
newfs /dev/md0.eli
mount /dev/md0.eli /boot/keys/store
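For reference, tearing the keystore back down again (not part of the original setup, but handy to know) is just the reverse:

umount /boot/keys/store
geli detach md0.eli
mdconfig -d -u 0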

Lastly, generate the key we will use for our ZFS drives. I didn’t encrypt this key because it’s already stored on an encrypted volume that requires a passphrase to unlock; again, you may want to do this differently.

dd if=/dev/random of=/boot/keys/store/alexandria.key bs=4096 count=1
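One extra step worth considering, though not something I did originally: tighten the permissions on both key files, since md0.key presumably sits on an unencrypted filesystem:

chmod 600 /boot/keys/md0.key
chmod 600 /boot/keys/store/alexandria.key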

Now we’re ready to initialize our actual drives. Mine are da0 through da7. Make sure all the modules we’ll need are loaded, too (including aesni, which GELI will pick up automatically for hardware-accelerated AES).

kldload zfs
kldload geom_eli
kldload aesni

for DRIVE in 0 1 2 3 4 5 6 7; do
  gpart create -s gpt da${DRIVE}
  gpart add -t freebsd-zfs -a 4096 da${DRIVE}
  geli init -P -s 4096 -K /boot/keys/store/alexandria.key /dev/da${DRIVE}p1
  geli attach -p -k /boot/keys/store/alexandria.key /dev/da${DRIVE}p1
done
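A quick sanity check: geli status should now show an ACTIVE .eli provider for each disk.

geli status | grep da
# or inspect one provider in detail:
geli list da0p1.eli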

Now, we do the same for the SSDs we will use for the ZIL (log) and L2ARC (cache). These, too, are encrypted.

for SSD in 0 1; do
  gpart create -s gpt ada${SSD}
  gpart add -t freebsd-zfs -a 4096 -s 100G ada${SSD}
  gpart add -t freebsd-zfs -a 4096 -s 8G ada${SSD}
  geli init -P -s 4096 -K /boot/keys/store/alexandria.key /dev/ada${SSD}p1
  geli init -P -s 4096 -K /boot/keys/store/alexandria.key /dev/ada${SSD}p2
  geli attach -p -k /boot/keys/store/alexandria.key /dev/ada${SSD}p1
  geli attach -p -k /boot/keys/store/alexandria.key /dev/ada${SSD}p2
done
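gpart can confirm the SSD layout (a 100G cache partition and an 8G log partition on each):

gpart show ada0 ada1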

Finally, we can create our zpool. I generally use mirrors, for a variety of reasons which this blog post nicely summarizes.

My primary pool should have about 21TB available to it, when all is said and done.

zpool create alexandria \
  mirror /dev/da0p1.eli /dev/da1p1.eli \
  mirror /dev/da2p1.eli /dev/da3p1.eli \
  mirror /dev/da4p1.eli /dev/da5p1.eli \
  mirror /dev/da6p1.eli /dev/da7p1.eli \
  log mirror /dev/ada0p2.eli /dev/ada1p2.eli \
  cache /dev/ada0p1.eli /dev/ada1p1.eli

Everything looks good:

[root@alexandria ~]# zpool status
  pool: alexandria
 state: ONLINE
  scan: none requested
config:

    NAME            STATE     READ WRITE CKSUM
    alexandria      ONLINE       0     0     0
      mirror-0      ONLINE       0     0     0
        da0p1.eli   ONLINE       0     0     0
        da1p1.eli   ONLINE       0     0     0
      mirror-1      ONLINE       0     0     0
        da2p1.eli   ONLINE       0     0     0
        da3p1.eli   ONLINE       0     0     0
      mirror-2      ONLINE       0     0     0
        da4p1.eli   ONLINE       0     0     0
        da5p1.eli   ONLINE       0     0     0
      mirror-3      ONLINE       0     0     0
        da6p1.eli   ONLINE       0     0     0
        da7p1.eli   ONLINE       0     0     0
    logs
      mirror-4      ONLINE       0     0     0
        ada0p2.eli  ONLINE       0     0     0
        ada1p2.eli  ONLINE       0     0     0
    cache
      ada0p1.eli    ONLINE       0     0     0
      ada1p1.eli    ONLINE       0     0     0
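One thing not shown above: I’d normally set a couple of pool-wide dataset properties right after creation. These are plain ZFS settings, nothing GELI-specific:

zfs set compression=lz4 alexandria
zfs set atime=off alexandria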

Booting

Now, we need to make sure all of this works on boot. Because we’re using GELI, we need to jump through some hoops. I found it easiest to follow a path similar to others on the FreeBSD forums and write an rc.d startup script to handle everything:

#!/bin/sh

# PROVIDE: zfs_crypt
# BEFORE: LOGIN

. /etc/rc.subr

name="zfs_crypt"
rcvar=${name}_enable
start_cmd="${name}_start"
stop_cmd=":"

zfs_crypt_start()
{
        # Load all our modules, just in case
        kldload zfs
        kldload geom_eli
        kldload aesni

        echo "Unlocking encrypted drives."
        # Attach the keystore as md0 and mount it; geli attach will
        # prompt for the passphrase on the console.
        mdconfig -a -t vnode -f /boot/keys/zfs-keys.volume -u 0
        geli attach -k /boot/keys/md0.key /dev/md0
        mount /dev/md0.eli /boot/keys/store
        # Unlock every pool member with the key from the keystore.
        for DRIVE in 0 1 2 3 4 5 6 7; do
                geli attach -p -k /boot/keys/store/alexandria.key /dev/da${DRIVE}p1
        done
        for SSD in 0 1; do
                geli attach -p -k /boot/keys/store/alexandria.key /dev/ada${SSD}p1
                geli attach -p -k /boot/keys/store/alexandria.key /dev/ada${SSD}p2
        done
        # Now that the .eli providers exist, bring up ZFS.
        /etc/rc.d/zfs onestart
}

load_rc_config $name
run_rc_command "$1"
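To wire this in, the script needs to live in an rc.d directory, be executable, and have its rcvar enabled; /usr/local/etc/rc.d is the conventional spot for local scripts:

install -m 755 zfs_crypt /usr/local/etc/rc.d/zfs_crypt
sysrc zfs_crypt_enable="YES"

Note that geli attach will stop and prompt for the keystore passphrase on the console during boot, which is exactly the behavior I wanted.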

Performance?

GELI does add quite a lot of CPU usage, even with the aesni module loaded; with AES-NI the load was somewhat lower, but the system is still pretty busy. My fileserver isn’t the beefiest machine (a 4-core Xeon E3); the load was between 4 and 6 when copying files over gigabit, and it was capable of saturating the link. I am planning on doing some 10Gbit testing in the future, but for my needs right now it was more than capable of handling 100MB/s. All in all, I’m pretty pleased with this setup.
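If you want to verify that AES-NI is actually present and in play, the boot-time CPU feature flags and the loaded modules are the places to look:

grep -i aesni /var/run/dmesg.boot
kldstat | grep aesni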

Using iozone, I was able to get sequential write/rewrite speeds of 353-367 MB/s and read speeds of 1.05-1.07 GB/s (obviously cached). For random I/O, it clocked about 1.1GB/s for reads and 333MB/s for writes. It should be mentioned that the load was spectacularly high (>30), but the system was still extremely responsive. Granted, this is with a 10GB dataset and this box has 32GB of RAM, but I considered this test to be a fairly reasonable synopsis of my usage.
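I didn’t record the exact iozone invocation, but something along these lines covers the same mix of sequential write/rewrite, sequential read, and random I/O against a 10GB file (the target path is just an example):

iozone -i 0 -i 1 -i 2 -s 10g -r 128k -f /alexandria/iozone.tmp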

Given that this pool is backed by spinning rust (and not high-performance rust at that), I think this is more than Good Enough (TM) for my home use. There are probably a lot of tweaks to be made for 10Gbit, and I look forward to revisiting the issue once ZFS-native encryption lands in FreeBSD.

However, for now, I daresay this itch is successfully scratched.
