MergerFS + Snapraid: A durable backup or media storage array

A durable and expandable backup or media storage array with MergerFS and Snapraid.


What does this solve?

I needed to back up a large volume of data (50+ TB) without a large budget, while still keeping the backup medium durable in the event of disk failures.

The most important characteristic of the backup medium I was looking for at the time was the ability to take it offline between backup intervals without having to physically touch the device.

The second characteristic I was looking for was for each disk in the "array" to have its own filesystem (no striping). Without striping, losing more disks than the Snapraid parity count covers only loses the data on those particular disks. Since this is a backup target for a massive number of small files, a resync after multiple HDD failures takes significantly less time and fewer resources if at least some, if not most, of the data is still available on the target after a repair. IO performance is not a concern for this use case.

Aside from this, the backup appliance had to be in close proximity to my main storage, since a large amount of changed data has to be copied over and replaced at each backup based on what changed since the last one. The backup interval is use-case dependent (how much data you can tolerate losing if a failure occurs between backup intervals).


Hardware requirements

Hardware requirements will vary greatly depending on how much someone wants to invest in this type of solution and their storage needs.

I'm using used Dell server hardware so I can leverage the IDRAC Redfish API with Ansible playbooks to fully automate power-up and the backup process from a powered-down state. If you have no need or desire to power the device on and off, or are not using Dell hardware with IDRAC, the IDRAC Redfish part won't be applicable.

Note: Some older IDRAC software versions do not include the Redfish API. The IDRAC firmware will likely have to be updated to a recent version if it isn't already.

I chose used 8TB SAS disks from varying manufacturers, purchased from reputable eBay sellers. A good habit when using pre-owned disks is to validate their health; software such as Hard Disk Sentinel works great, among other options.

For the actual server, as stated above, I'm using a Dell R730XD, also gently used. It holds 12x 3.5" disks on the backplane. I run the OS from an internal USB drive so an HDD slot isn't wasted on the OS; since adding more disks to a MergerFS configuration is a simple process, this could pay off heavily in the future as the dataset grows.

For the operating system I chose the latest stable release of Debian, but any Linux distro should work just as well. The only OS requirement is support for MergerFS, Snapraid, and a filesystem that MergerFS supports.

Ansible can be skipped without issue if you're only looking to build a storage box with MergerFS and Snapraid and have no need for automated processes.


Software Information

The software I'm using for this is all open source to the best of my knowledge. There shouldn't be any licensing issues, but if this were to be used in a business or commercial setting, that should be looked into. I won't be doing so for my personal environment.

Mergerfs - https://github.com/trapexit/mergerfs

MergerFS will group all of the storage disks together as a single mount point. This is heavily configurable. The configuration file I'll lay out later on will be an example from my running environment.

Snapraid - https://www.snapraid.it/

Snapraid will calculate and store parity data on the disks allocated to Snapraid. This can be anywhere from one parity disk up to however many parity disks meet the storage array's durability requirements. Snapraid does not calculate parity in real time, so a sync must be run after any data changes are written to the MergerFS array: on a schedule if the system is perpetually operational, or as part of the backup job if used as a backup system like this one. The configuration file I'll lay out later on will be an example from my running environment.
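For orientation, the day-to-day Snapraid workflow comes down to a few commands (sync is the one that appears again later in the backup job):

snapraid sync    # update parity to reflect data changes since the last sync
snapraid scrub   # verify a portion of the data and parity for silent errors
snapraid status  # report the state of the array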

Ansible - https://github.com/ansible/ansible

Ansible will handle the tasks from a remote host that is always on. My Ansible host lives on a hypervisor with network access to both the backup server's IDRAC interface (for the Redfish API portion) and the Debian OS, against which Ansible runs the playbook tasks via SSH.

Ansible Dell EMC Redfish Powerstate Module - https://docs.ansible.com/ansible/latest/collections/dellemc/openmanage/redfish_powerstate_module.html

This will be the Ansible module used to interact with the Dell server's IDRAC Redfish API.

Debian - The Linux distro I chose for this project (the current stable release, "bookworm").

xfsprogs - XFS filesystem support


MergerFS and Snapraid

This is the first step in the preparation process. The assumption is that an operating system is running and the data disks for the backup array are connected to the server.

List all available disks.

Determine which disks will be used as MergerFS data disks and which will be used as Snapraid parity disks.

lsblk

In my case, I have six 8TB disks. I'll use four for MergerFS data and two for Snapraid parity.

MergerFS

  • /dev/sda
  • /dev/sdb
  • /dev/sdc
  • /dev/sdd

SnapRaid

  • /dev/sde
  • /dev/sdf

For each disk used for MergerFS and for Snapraid parity, I'm wiping any existing filesystem and then creating a single partition occupying the whole disk. Each disk is handled individually here, but this could be scripted so you don't have to repeat the process for every disk (a scripted sketch follows after these steps).

wipefs -a /dev/sda
fdisk /dev/sda

When prompted:

g # Create a new GPT partition table (required for disks larger than 2TB)
n # New partition
'enter' # Default
'enter' # Default
'enter' # Default
w # Write changes to disk

This leaves you with a new partition on /dev/sda, which will be /dev/sda1. Do this for each disk in the list created earlier.
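If you'd rather script the wipe and partitioning across every disk, here's a minimal sketch using parted non-interactively in place of the fdisk prompts above (install it with apt-get install -y parted if it isn't present). The device range /dev/sda through /dev/sdf is an assumption based on my disk list; verify yours with lsblk first, as this is destructive.

#!/bin/bash
# DESTRUCTIVE: confirm these device names against lsblk before running.
for disk in /dev/sd{a..f}; do
    wipefs -a "$disk"                         # remove any existing filesystem signatures
    parted -s "$disk" mklabel gpt             # write a new GPT partition table
    parted -s "$disk" mkpart primary 0% 100%  # one partition spanning the whole disk
done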

Create XFS filesystem for each disk in the list

A default Debian install does not include the XFS userspace tools, so to get started we have to install xfsprogs. I'm going to make sure all packages are up to date first, then install xfsprogs and load the xfs module into the kernel.

apt-get update
apt-get upgrade -y
apt-get install -y xfsprogs
modprobe -v xfs

Once xfsprogs is installed and ready to go, create an XFS filesystem occupying the full partition created on each disk in the previous step.

mkfs.xfs /dev/sda1

After the filesystem is created, note down the UUID returned by blkid; it will be added to /etc/fstab in the next step.

blkid /dev/sda1

/dev/sda1: UUID="c92114f5-9a5d-4d10-ab8d-b531a21efbc0" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="4b99d4ae-71ba-40c1-b60d-f4b6711d234d"

Do this for each data and parity disk.
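If you prefer to loop over all of the disks rather than run these commands one at a time, here's a minimal sketch, again assuming the partitions are /dev/sda1 through /dev/sdf1 (adjust to match your devices):

for part in /dev/sd{a..f}1; do
    mkfs.xfs "$part"  # create an XFS filesystem on the partition
    blkid "$part"     # print the UUID to record for /etc/fstab
done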

Add mount points for the number of data and parity disks

mkdir -p /mnt/disk{1..4}
mkdir -p /mnt/parity{1..2}

Add entries to /etc/fstab

In this step I'm adding all data and parity disks to /etc/fstab so the filesystems mount on boot. The data disks get one specific naming structure for their mount points and the parity disks get another.

This uses the UUIDs noted in the earlier step for each XFS filesystem created.

...

# Data Disks used by MergerFS
UUID=246ccf5d-7645-4310-b2c1-74190427b4a1 /mnt/disk1 xfs rw,noatime,attr2,inode64,logbufs=8,logbsize=32k,noquota 0 0
UUID=2aaaefdc-d364-42d4-b586-c2310f0968b1 /mnt/disk2 xfs rw,noatime,attr2,inode64,logbufs=8,logbsize=32k,noquota 0 0
UUID=c92114f5-9a5d-4d10-ab8d-b531a21efbc0 /mnt/disk3 xfs rw,noatime,attr2,inode64,logbufs=8,logbsize=32k,noquota 0 0
UUID=5515dc1a-ef7e-4605-b519-f4f5dd591d04 /mnt/disk4 xfs rw,noatime,attr2,inode64,logbufs=8,logbsize=32k,noquota 0 0

# Snapraid Parity Disks
UUID=67d6e57f-9547-4a57-909a-d9ede0495189 /mnt/parity1 xfs rw,noatime,attr2,inode64,logbufs=8,logbsize=32k,noquota 0 0
UUID=5240a1cd-2b2a-4961-af0a-79f57c93e44f /mnt/parity2 xfs rw,noatime,attr2,inode64,logbufs=8,logbsize=32k,noquota 0 0

...

Take note of the naming structure of the mount points: data disks are '/mnt/disk#' and parity disks are '/mnt/parity#'. This naming structure is critical to the MergerFS mount and the later configuration.

Reload systemd manager configuration.

systemctl daemon-reload

Mount all disks now in fstab

mount -a

Check mounted disks with lsblk or df

lsblk

Install and configure MergerFS

apt-get install -y mergerfs

MergerFS does not have a configuration file; it is configured entirely with the mount options in /etc/fstab. Below is my entry in /etc/fstab, and this example should work well as a starting point. Take note of the naming structure for the disks - /mnt/disk*. This uses the mount points specified earlier so that all disks sequentially numbered /mnt/disk1-4 are combined into a single MergerFS FUSE mount.

See https://github.com/trapexit/mergerfs for mount options and explanations.

...
# MergerFS
/mnt/disk* /mnt/backup-data fuse.mergerfs defaults,nonempty,allow_other,use_ino,cache.files=off,moveonenospc=true,category.create=mfs,dropcacheonclose=true,minfreespace=250G,fsname=mergerfs 0 0
...

Create the MergerFS mount point

mkdir -p /mnt/backup-data

Reload systemd manager configuration.

systemctl daemon-reload

Mount all disks now in fstab.

mount -a

Check mounted disks with lsblk or df.

lsblk

You should see a large fuse filesystem mounted at /mnt/backup-data at this point.
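As a final check, df should now report the pooled capacity of all four data disks under the fsname set in fstab (mergerfs) at /mnt/backup-data:

df -h /mnt/backup-data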

Install and configure Snapraid

Start off by installing the Snapraid package.

apt-get install -y snapraid

In the above steps we already partitioned and mounted the parity disks we will use. All we have to do here is take care of the configuration. Below is an example configuration file - I use this in my environment.

See https://selfhostedhome.com/combining-different-sized-drives-with-mergerfs-and-snapraid/ as this is what I based my Snapraid configuration on.

# Defines the file to use as parity storage
# It must NOT be in a data disk
# Format: "parity FILE_PATH"
parity /mnt/parity1/snapraid.parity
2-parity /mnt/parity2/snapraid.2-parity

 
# Defines the files to use as content list
# You can use multiple specifications to store more copies
# You must have at least one copy for each parity file plus one; more don't
# hurt
# They can be on the disks used for data, parity or boot,
# but each file must be on a different disk
# Format: "content FILE_PATH"
content /opt/snapraid/snapraid.content
content /mnt/disk1/.snapraid.content
content /mnt/disk2/.snapraid.content
content /mnt/disk3/.snapraid.content
content /mnt/disk4/.snapraid.content

# Defines the data disks to use
# The order is relevant for parity, do not change it
# Format: "disk DISK_NAME DISK_MOUNT_POINT"
disk d1 /mnt/disk1
disk d2 /mnt/disk2
disk d3 /mnt/disk3
disk d4 /mnt/disk4

# Excludes hidden files and directories (uncomment to enable).
#nohidden
 
# Defines files and directories to exclude
# Remember that all the paths are relative to the mount points
# Format: "exclude FILE"
# Format: "exclude DIR/"
# Format: "exclude /PATH/FILE"
# Format: "exclude /PATH/DIR/"
exclude *.unrecoverable
exclude /tmp/
exclude /lost+found/
exclude *.!sync
exclude .AppleDouble
exclude ._AppleDouble
exclude .DS_Store
exclude ._.DS_Store
exclude .Thumbs.db
exclude .fseventsd
exclude .Spotlight-V100
exclude .TemporaryItems
exclude .Trashes
exclude .AppleDB

Once this configuration is in place, you can run Snapraid for the initial sync.

snapraid sync

This will build your initial parity. Going forward this could be scheduled with cron or another task scheduler. Since I'm using this as a backup server, I'm using Ansible to handle it.
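If the box were always on and you preferred plain cron, a hypothetical root crontab entry for a nightly sync at 03:00 (the schedule and log path are assumptions) could look like:

0 3 * * * /usr/bin/snapraid sync >> /var/log/snapraid-sync.log 2>&1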


Automate backup jobs with Ansible

This playbook will have to be tailored to your environment, defining how you mount and transfer the data from your source storage system to the target backup server.

The playbook needs somewhere to run from, with key-based SSH access from the Ansible host to the backup server. I handle the job scheduling with cron at the moment; it's a simple solution that is easy to set up.
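As an illustration, a crontab entry on the Ansible host to kick the playbook off weekly might look like the following; the inventory path, playbook path, and schedule are assumptions, not from my environment:

0 2 * * 0 ansible-playbook -i /etc/ansible/hosts /opt/playbooks/backup-sync.yml >> /var/log/backup-sync.log 2>&1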

Take note of the IDRAC Redfish section. It should be removed if you are not using a Dell server with IDRAC that supports the Redfish API. I'm using it to power up the server, which normally remains powered down unless a backup is running.

---
- name: Power up backup storage and sync data
  hosts: backup-storage # This should match inventory group
  gather_facts: no
  become: true
  tasks:
     - name: Power on Dell backup server
       dellemc.openmanage.redfish_powerstate:
         baseuri: "<idrac-IP>"
         username: "<username>"
         password: "<password>"
         validate_certs: "false"
         reset_type: "On"
       delegate_to:  localhost
     - name: Wait 600 seconds, but only start checking after 60 seconds
       ansible.builtin.wait_for_connection:
         delay: 60
         timeout: 600
       #delegate_to:  localhost
     - name: Gathering facts # Gathering facts later due to powering up occurring prior to being able to ssh.
       setup:
     - name: Mount NFS volume to backup server as read-only
       ansible.posix.mount:
         src: 10.0.0.88:/mnt/bulkz3/media # main storage
         path: /mnt/media
         opts: ro,sync,hard
         state: mounted
         fstype: nfs
     - name: Synchronize two directories on the backup server
       ansible.posix.synchronize:
         src: /mnt/media/
         dest: /mnt/backup-data/media/
         recursive: true
         archive: true
         rsync_opts:
           - "--exclude=recycling-bin"
           - "--exclude=downloads"
       delegate_to: "{{ inventory_hostname }}"
     - name: Execute Snapraid sync
       ansible.builtin.shell: snapraid sync
     - name: Shut down the remote node after a short delay
       community.general.shutdown:
         delay: 15
     - name: Send mail - basic notification without data
       run_once: true
       delegate_to: localhost
       community.general.mail:
         host: <mail-host>
         port: <mail-port>
         username: <mail-username>
         password: <mail-password>
         from: Ansible <from-address>
         to: Reporting <to-address>
         subject: Backup Server data sync tasks have run
         body: '{{ ansible_play_hosts }} data sync has completed.'
         secure: starttls         

Considerations and Use-case

  • This playbook currently lacks error checking and notification of failures.

    • This playbook could generally be improved.
  • No striping. This storage architecture provides low read and write performance because individual files are constrained to a single disk. This is fine for a durable backup system but bad for IO-intensive applications.

    • Understand the use case and limitations.
    • This solution works great for what I'm using it for and the automation can be improved over time.
  • This solution could be great for mass media/data storage while also needing the ability to expand the array size over time with different sized disks.

  • Snapraid parity disks must be equal to or larger than the largest data disk in the array. Any larger disk added will have to become a parity disk.

    • If using multiple Snapraid parity disks, this should be considered with any expansion of the MergerFS pool.
  • This storage solution does not provide high-availability of data.

    • Any data disk failing will lead to the data on that disk being unavailable.
      • Pro: the data on the remaining disks is still available (no striping).
    • It does provide data durability in the event of hardware failures
    • Recovering using Snapraid is an intensive process and does not provide real-time access to the data protected by parity (a hedged recovery sketch follows after this list).
      • The data from a failed disk is unavailable until the Snapraid recovery is completed.
  • MergerFS and Snapraid together are close to a free, though far less feature-rich, alternative to Unraid with similar read/write performance.
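For context on the recovery note above, here is a minimal sketch of rebuilding a failed data disk with Snapraid. The disk name d2 and the log path are assumptions for illustration; refer to the Snapraid manual for the full procedure.

# Replace the failed drive, recreate its XFS filesystem, and mount it at the
# original mount point (e.g. /mnt/disk2) before running the fix.
snapraid -d d2 -l /var/log/snapraid-fix.log fix  # rebuild the contents of data disk d2 from parity
snapraid -d d2 check                             # verify the recovered files
snapraid sync                                    # bring parity back up to date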