MBWE2 Blue Rings fast rsnapshot

Introduction

I am writing this in 2015, for the MBWE blue rings dual drive NAS.
I started by following the main rsnapshot tutorial, and I even got it working. Then I realized: why am I doing it this way? Rsnapshot over ssh is painfully slow on this hardware, around 600 KB/sec. In the past (more than ten years ago) some people would use the "none" cipher with ssh, but that option has been removed. There are also some disabled-by-default ciphers like arcfour, which should be marginally faster. But on a home network, why do I care about encryption at all? And if encryption really matters to you, it is 2015 and there are plenty of newer, more powerful NAS options.
In short, my approach is:

  • Get access (these instructions are elsewhere)
  • Enable telnet (less overhead than ssh for local access)
  • Install optware unfsd (a user-space NFS server)
  • Disable hard drive sleep (letting the drives, WD Red 2TB, spin down would save me about $15/year in electricity, but they would fail sooner)
  • Set up an automatic SMART test, with a report on failure
  • Mount the NFS volume on my computer (a Linux box) and run rsnapshot on it, saving to the MBWE
  • My wife's laptop (a MacBook) will also mount an NFS volume, for Time Machine backup
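
The rsnapshot side of the plan above can be sketched in rsnapshot.conf on the Linux client. This is only a rough sketch: the mount point /mnt/mbwe/backups, the retention counts, and the backup paths are all illustrative assumptions, not values from this setup.

```shell
# Hypothetical excerpt from /etc/rsnapshot.conf on the Linux client.
# NOTE: rsnapshot requires TABS between fields, not spaces.

# Store the snapshots on the NFS-mounted MBWE share (assumed mount point)
snapshot_root	/mnt/mbwe/backups/

# Refuse to create snapshot_root if the NFS mount isn't there --
# prevents silently backing up onto the local disk
no_create_root	1

# Retention: how many snapshots of each interval to keep
retain	daily	7
retain	weekly	4

# What to back up -- plain local paths, no ssh involved
backup	/home/	localhost/
backup	/etc/	localhost/
```

Because the destination is just a mounted filesystem, rsnapshot's hard-link rotation runs at local-disk speed instead of being throttled by ssh encryption.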

Prerequisites

SSH access (obviously): first-steps-with-mbwe
Upgraded firmware according to: ldso-runpath-enabled-firmware (maybe you can do without this, but I don't see why you'd want to)
If you are upgrading hard drives, you need to first follow: upgrade-blue-ring-500gb-the-easy-way (this one works)

Telnet

In /etc/inetd.conf, uncomment:

telnet  stream  tcp     nowait  root    /usr/sbin/telnetd       telnetd

You can leave ssh available; sshd isn't run unless there is an ssh connection. And, of course, if you are setting up your MBWE to be accessible from the internet, you should only expose SSH to the internet, never telnet.
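
After uncommenting the line, inetd has to re-read its configuration before telnet will answer. A sketch, assuming a classic inetd that reloads on SIGHUP and that the MBWE sits at 192.168.1.10 (substitute your own address):

```shell
# On the MBWE: make inetd re-read its configuration
# (classic inetd reloads on SIGHUP; a reboot also works)
kill -HUP "$(pidof inetd)"

# From another machine on the LAN: check that telnet now answers
telnet 192.168.1.10
```

If pidof isn't available on your firmware, find the PID with ps and kill -HUP it by hand.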

Changing the sleep setting

Edit /etc/init.d/S15wdc-heat-monitor, changing the value after -S to 0, as below (-S 0 disables spin-down entirely; or change it to a different sleep value of your choice):

if [ -e /dev/sda ] ; then
    $HDPARM -S 0 /dev/sda
    $SMARTCTL -d ata --smart=on /dev/sda
fi
if [ -e /dev/sdb ] ; then
    $HDPARM -S 0 /dev/sdb
    $SMARTCTL -d ata --smart=on /dev/sdb
fi
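
After a reboot you can verify the drives stay awake, and the "automatic SMART test, report on failure" step from the overview can be a simple scheduled smartctl run. A sketch; the weekly schedule is an illustrative assumption:

```shell
# Check the drive's power state -- should report "active/idle",
# not "standby", once spin-down is disabled
hdparm -C /dev/sda

# Run a short SMART self-test, then check overall health
smartctl -d ata -t short /dev/sda
smartctl -d ata -H /dev/sda   # "PASSED" means healthy

# Hypothetical crontab entry: weekly short self-test, Sundays at 3am
# 0 3 * * 0  /usr/sbin/smartctl -d ata -t short /dev/sda
```

Repeat for /dev/sdb on the dual-drive model. Reporting on failure means checking the -H output and mailing or logging when it isn't PASSED; how you deliver the report depends on what your firmware has installed.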

Setting up user folders

It isn't necessary for the users to have accounts on the MBWE.
As root (the final chmod clears any setuid/setgid bits):

cd /shares/internal
mkdir user1
chown default:default user1
chmod a+rwx user1
chmod a-s user1

NFS exports

Example lines from my /etc/exports (on the MBWE):

/shares/internal/user1     192.168.1.0/255.255.255.0(rw,no_root_squash,sync,no_subtree_check)
/shares/internal/backups      192.168.1.0/255.255.255.0(rw,no_root_squash,sync,no_subtree_check)

192.168.1.0/255.255.255.0: only hosts with IP addresses 192.168.1.1 through 192.168.1.254 can connect to the share. Obviously, you'll need an IP/subnet mask appropriate for your network.
rw: read/write (ro is read-only)
no_root_squash: if the user connecting to the NFS share is root, files are created with root's UID instead of root being mapped to the anonymous user
sync: the server writes data to disk before replying to the client. Seems safer to me.
no_subtree_check: subtree checking is disabled by default in later NFS implementations because it causes more problems than it solves

Good reference on NFS options

After setting up /etc/exports you need to run exportfs -a. Even then, I found it necessary to completely restart the MBWE before I could connect from my Linux client.
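
From the Linux client you can check what the MBWE is exporting and mount a share by hand before setting up anything more elaborate. A sketch, assuming the MBWE is at 192.168.1.10 and the mount point is an arbitrary choice:

```shell
# List the exports the MBWE's NFS server is offering
showmount -e 192.168.1.10

# Mount the backups share by hand to test it (run as root)
mkdir -p /mnt/mbwe/backups
mount -t nfs 192.168.1.10:/shares/internal/backups /mnt/mbwe/backups

# Unmount when done testing
umount /mnt/mbwe/backups
```

If the mount hangs or is refused, unfsd only speaks NFSv3, so forcing the version with -o nfsvers=3 may help on clients that try v4 first.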

Autofs on Linux / OS X clients

I decided to set up autofs to connect to the NFS exports on the MBWE. It simplifies things: accessing the mount point mounts the share, and after a predefined period of inactivity the share is automatically unmounted until the next access. If that seems advantageous to you, your Linux distro / Mac OS X has documentation on how to set it up.
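
On a Linux client, the autofs setup boils down to two small files. This is a sketch only: the map file name, mount point, 300-second timeout, and 192.168.1.10 address are all illustrative assumptions.

```shell
# /etc/auto.master -- shares under /mnt/mbwe, unmounted after 300s idle
/mnt/mbwe   /etc/auto.mbwe   --timeout=300

# /etc/auto.mbwe -- one line per share;
# accessing /mnt/mbwe/backups mounts it on demand
backups   -fstype=nfs,rw   192.168.1.10:/shares/internal/backups
user1     -fstype=nfs,rw   192.168.1.10:/shares/internal/user1
```

After editing both files, restart the autofs service and simply `ls /mnt/mbwe/backups` to trigger the mount.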

Unless otherwise stated, the content of this page is licensed under Creative Commons Attribution-ShareAlike 3.0 License