
Copy zfs filesystems


Sometimes I need to copy or migrate zfs file systems across a network ( backups ) or on the same server ( for testing or upgrades of single devices ).
The process is pretty simple.

Create a zfs snapshot

zfs snapshot zfs-pool/zfs-volume@base

Now send this snapshot to the target device or remote system ( I use pv to show me the progress ).

On the same server:

zfs send zfs-pool/zfs-volume@base | pv | zfs receive zfs-pool/new-zfs-volume

Or to a remote host over ssh:

zfs send zfs-pool/zfs-volume@base | pv | ssh root@remote-host "zfs receive zfs-pool/new-zfs-volume"

Sync incremental changes ( if needed ) by creating a new snapshot

zfs snapshot zfs-pool/zfs-volume@latest

Then send the incremental changes:
-i means send the diff between two snapshots ( in this example, the diff between zfs-pool/zfs-volume@base and zfs-pool/zfs-volume@latest )
-F means receive the diff and apply it even if the destination volume has changed

zfs send -i zfs-pool/zfs-volume@base zfs-pool/zfs-volume@latest | pv | zfs receive -F zfs-pool/new-zfs-volume
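The two steps above can be wrapped in a small helper. The sketch below is my own ( the function name and its dry-run behavior are not from this post ): it only prints the commands it would run, so you can review the pipeline before executing anything.

```shell
# Dry-run sketch of the snapshot + incremental send workflow above.
# zfs_incremental_sync is a hypothetical helper; it echoes the commands
# instead of running them.
zfs_incremental_sync() {
  src="$1"   # source filesystem, e.g. zfs-pool/zfs-volume
  dst="$2"   # destination filesystem, e.g. zfs-pool/new-zfs-volume
  base="$3"  # snapshot already present on both sides
  new="$4"   # new snapshot to create and send
  echo "zfs snapshot ${src}@${new}"
  echo "zfs send -i ${src}@${base} ${src}@${new} | pv | zfs receive -F ${dst}"
}

zfs_incremental_sync zfs-pool/zfs-volume zfs-pool/new-zfs-volume base latest
```

Pipe the printed lines to sh ( or drop the echo ) once you are happy with what it will do.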

Highly Available Redis Cluster


I've done some work on a design for a redis cluster lately. There is a lot of info on the subject, but it is in pieces, so I am going to try and provide a complete document here for one of the ways to do this.


  • sentinel: redis's own monitoring and availability tool; we will use it to monitor our master/slave nodes, and sentinel will promote a slave to master when an issue arises.
  • haproxy: a TCP load balancer ( and one of my favorite open source tools, ever ); haproxy can test whether a redis node is a master or a slave, and we will use it as the front end to which clients connect. haproxy will detect which node is the master and make sure traffic flows to the correct node.
  • keepalived: a VRRP-based network failover tool; we will use keepalived to publish a virtual IP and manage failover between our haproxy nodes.


  • haproxy1 – master haproxy node
  • haproxy2 – slave haproxy node
  • redis1 – master redis node
  • redis2 – slave redis node
  • sentinel – a sentinel quorum node

It will look something like this:

How do we reach 100% availability using the above setup?

Redis replication – Redis has built-in replication; we will set up redis2 as a slave, which will make sure both redis nodes have the same RDB data.

Redis failure – If our Redis master ( redis1 ) fails, both the sentinel node and the slave redis node ( redis2 ) will detect the failure. We use a dedicated sentinel box to make sure we have a quorum, which prevents a false-positive failover in case of a network issue between the two redis nodes. Basically, we make sure two separate systems monitor the master, and both have to agree that the master has failed. If both redis2 and sentinel agree, the redis-sentinel process running on the slave node ( redis2 ) will convert that node to a master. HAproxy monitors the master and slave nodes ( redis1/2 ) at all times, and makes sure traffic is directed to whichever node is currently the master.
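To illustrate the quorum rule just described ( a toy sketch of mine, not part of the actual setup ): a failover only proceeds when at least the quorum of monitors agree the master is down.

```shell
# Toy illustration of quorum-based failover: with a quorum of 2, a
# single monitor losing sight of the master ( e.g. a network split
# between the two redis nodes ) is not enough to trigger a failover.
agree_to_failover() {
  votes=$1   # monitors that currently see the master as down
  quorum=$2  # minimum agreeing monitors required
  if [ "$votes" -ge "$quorum" ]; then
    echo "failover"
  else
    echo "no failover"
  fi
}

agree_to_failover 2 2   # prints: failover
agree_to_failover 1 2   # prints: no failover
```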

HAproxy failure – keepalived monitors the HAproxy process running on its own node, and it also monitors its peer node for network connectivity. In the event of an haproxy failure or a hardware failure, keepalived will switch the virtual IP to the slave haproxy node.

Let’s get to work! 

I am using Ubuntu ( 14.04 LTS ) for this tutorial because all the packages are available without needing to add external sources; however, I tested this on Oracle Linux 7 and CentOS 6.5 without issues.

HAproxy nodes

sudo apt-get install keepalived haproxy

Tweak sysctl to allow haproxy to bind to the virtual IP, even if it is not assigned to the node it's running on.

echo "net.ipv4.ip_nonlocal_bind=1" | sudo tee -a /etc/sysctl.conf && sudo sysctl -p

Edit the haproxy config file and add the redis frontend ( /etc/haproxy/haproxy.cfg ) and reload it.

frontend redis
#haproxy should listen on the virtual ip ( <virtual-ip> is a placeholder, substitute your own )
bind <virtual-ip>:6379 name redis
default_backend redis_backend

backend redis_backend
option tcp-check
#haproxy will look for the following strings to determine the master
tcp-check send PING\r\n
tcp-check expect string +PONG
tcp-check send info\ replication\r\n
tcp-check expect string role:master
tcp-check send QUIT\r\n
tcp-check expect string +OK
#these are the ip's of the two redis nodes ( placeholders, substitute your own )
server redis1 <redis1-ip>:6379 check inter 1s
server redis2 <redis2-ip>:6379 check inter 1s

sudo /etc/init.d/haproxy reload
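The tcp-check sequence above effectively asks each node for its replication role and only keeps servers whose reply contains role:master. As a rough illustration ( my own sketch, not haproxy code ), the decision boils down to a substring match on the INFO replication reply:

```shell
# Sketch of the decision the tcp-check makes: a server stays in the
# pool only if its "info replication" reply contains "role:master".
would_be_up() {
  printf '%s\n' "$1" | grep -q 'role:master'
}

would_be_up "role:master" && echo "traffic goes here"
would_be_up "role:slave"  || echo "taken out of rotation"
```

After a sentinel failover the roles swap, the check result flips, and haproxy starts sending traffic to the new master automatically.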

Edit keepalived’s configuration file ( /etc/keepalived/keepalived.conf ) and reload it.

vrrp_script chk_haproxy {
script "killall -0 haproxy" # verify the pid existence
interval 2 # check every 2 seconds
weight 2 # add 2 points of prio if OK
}

vrrp_instance VI_1 {
interface eth0 # interface to monitor
state MASTER
virtual_router_id 51 # Assign one ID for this route
priority 101 # 101 on master, 100 on backup
virtual_ipaddress { # the virtual IP ( placeholder, substitute your own )
<virtual-ip>
}
track_script {
chk_haproxy
}
}

sudo /etc/init.d/keepalived reload

On the redis and sentinel boxes

sudo apt-get install redis-server

Modify redis to listen on an externally reachable address ( by default it listens on 127.0.0.1 only ).
Find this line in /etc/redis/redis.conf

bind 127.0.0.1

And change it to each node's own address ( <redis1-ip> / <redis2-ip> are placeholders, substitute your IPs ). On redis1:

bind <redis1-ip>

and on redis2:

bind <redis2-ip>
Restart redis server on both nodes

sudo /etc/init.d/redis-server restart

Confirm redis is listening on the correct IPs using:

moti@redis1:~# sudo netstat -tnlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address        Foreign Address   State    PID/Program name
tcp        0      0 <redis1-ip>:6379     0.0.0.0:*         LISTEN   1787/redis-server
tcp        0      0 127.0.0.1:6379       0.0.0.0:*         LISTEN   1787/redis-server

Setup redis2 as the slave node

redis-cli slaveof <redis1-ip> 6379

Make sure you have a working master/slave setup before configuring sentinel. You can confirm each node's role by running:

redis-cli info | grep ^role

Your slave should come back with something similar to this

role:slave

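A tiny helper to pull just the role out of the INFO output can make scripted checks easier. This is my own sketch ( the sample output below is illustrative, not from a live node ):

```shell
# Hypothetical helper: extract the role field from "redis-cli info"
# style output. The sample text is illustrative only.
sample_info="# Replication
role:slave
master_host:redis1
master_link_status:up"

get_role() {
  printf '%s\n' "$1" | grep '^role:' | cut -d: -f2
}

get_role "$sample_info"   # prints: slave
```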
Set up the sentinel config files ( /etc/redis/sentinel.conf )

port 26379
daemonize yes
pidfile "/var/run/redis/"
loglevel notice
syslog-enabled yes

# Master setup ( <redis1-ip> is a placeholder for the master's address )
sentinel monitor mymaster <redis1-ip> 6379 2
sentinel down-after-milliseconds mymaster 5000
sentinel failover-timeout mymaster 900000
sentinel config-epoch mymaster 21

# Slave setup ( placeholder addresses again )
sentinel known-slave mymaster <redis2-ip> 6379
sentinel known-sentinel mymaster <sentinel1-ip> 26379 d0e835d7a3c263764df51dccddfc184897967995
sentinel known-sentinel mymaster <sentinel2-ip> 26379 60459b991198710c271490fb6903f79c48a41584
sentinel monitor resque <redis1-ip> 6379 2

On the sentinel node, prevent redis-server from starting on boot, and make sure it's not running

sudo update-rc.d redis-server disable
sudo /etc/init.d/redis-server stop

Start redis-sentinel on all nodes

sudo /usr/bin/redis-server /etc/redis/sentinel.conf --sentinel

At this point, you should have a redis cluster with a master/slave pair and a sentinel quorum.

Here is a sample log entry showing a master-down situation

Aug 13 14:02:04 redis2 redis[1852]: +odown master mymaster 6379 #quorum 3/2
Aug 13 14:02:28 redis2 redis[1852]: +new-epoch 28
Aug 13 14:02:28 redis2 redis[1852]: +vote-for-leader 100540f4bb8e1ee5292af3d1a25371c6943485de 28
Aug 13 14:02:29 redis2 redis[1852]: +sdown master resque 6379
Aug 13 14:02:29 redis2 redis[1852]: +odown master resque 6379 #quorum 3/2
Aug 13 14:02:29 redis2 redis[1852]: +new-epoch 29
Aug 13 14:02:29 redis2 redis[1852]: +try-failover master resque 6379
Aug 13 14:02:29 redis2 redis[1852]: +vote-for-leader 8b183893db09b6b2eb9be506358d60352276c767 29
Aug 13 14:02:29 redis2 redis[1852]: voted for 8b183893db09b6b2eb9be506358d60352276c767 29
Aug 13 14:02:29 redis2 redis[1852]: voted for 8b183893db09b6b2eb9be506358d60352276c767 29
Aug 13 14:02:29 redis2 redis[1852]: +elected-leader master resque 6379
Aug 13 14:02:29 redis2 redis[1852]: +failover-state-select-slave master resque 6379
Aug 13 14:02:29 redis2 redis[1852]: +selected-slave slave 6379 @ resque 6379
Aug 13 14:02:29 redis2 redis[1852]: +failover-state-send-slaveof-noone slave 6379 @ resque 6379
Aug 13 14:02:29 redis2 redis[1852]: +vote-for-leader 100540f4bb8e1ee5292af3d1a25371c6943485de 29

At which point redis2 is promoted to master, once all nodes agree that redis1 is down

Aug 13 14:02:29 redis2 redis[1852]: +failover-state-wait-promotion slave 6379 @ resque 6379
Aug 13 14:02:29 redis2 redis[1852]: +switch-master resque 6379 6379
Aug 13 14:02:29 redis2 redis[1852]: +slave slave 6379 @ resque 6379

And redis-cli agrees

moti@redis2:~# redis-cli info | grep role
role:master

– there's no init script for redis-sentinel to start it on boot; you need to write one or use the one from ( see link below ).
– if using iptables ( you should! ), make sure the redis boxes can talk to each other ( open ports 6379/tcp and 26379/tcp between all 3 nodes )

Resources used:


USB Serial, Mac OS and a Cisco switch


List device names

ls /dev/cu.*

A list similar to this should show up.

mba:~ root# ls /dev/cu.*
/dev/cu.Bluetooth-Incoming-Port /dev/cu.Bluetooth-Modem /dev/cu.usbserial

Connect using cu ( note the device name matches the listing above )

cu -l /dev/cu.usbserial -s 9600

If you do not see a usb serial device, you can list all your usb devices with this command

system_profiler SPUSBDataType

Which in my case showed my Prolific usb adapter

USB-Serial Controller D:

Product ID: 0x2303
Vendor ID: 0x067b (Prolific Technology, Inc.)
Version: 4.00
Speed: Up to 12 Mb/sec
Manufacturer: Prolific Technology Inc.
Location ID: 0x14200000 / 6
Current Available (mA): 500
Current Required (mA): 100

Stop the %BMP-5-BMP_DISCOVER: DHCP DISCOVER messages on a new F10 switch


When configuring new switches, the flood of these messages over the serial console can be quite intrusive.

This is because Force10 switches ship in "Jumpstart mode"; run this command to disable it ( reload the switch after executing it ).

FTOS# reload-type normal-reload

Setup a stack on Force10 S4810


Set stack switch priority on the switch you want as master

stack-unit 0 priority 14

Enable the stack groups; this will make a specific interface a stack-port ( see below for a mapping of ports to stack-groups ).
In this case I am using the 40G ports ( port 48 ). Once enabled, you will not be able to use that port as a regular interface.

stack-unit 0 stack-group 12

accept the prompt/warning

Setting ports Fo 0/48 as stack group will make their interface configs obsolete after a reload.
[confirm yes/no]:yes

Save your config and reload ( both switches )

check your configuration

sh system
sh system stack-ports

Physical port to stack-group mapping

10G ports

Ports 0-3 – stack-group 0
Ports 4-7 – stack-group 1
Ports 8-11 – stack-group 2
Ports 12-15 – stack-group 3
Ports 16-19 – stack-group 4
Ports 20-23 – stack-group 5
Ports 24-27 – stack-group 6
Ports 28-31 – stack-group 7
Ports 32-35 – stack-group 8
Ports 36-39 – stack-group 9
Ports 40-43 – stack-group 10
Ports 44-47 – stack-group 11

40G ports are numbered 48, 52, 56, and 60

Ports 48 – stack-group 12
Ports 52 – stack-group 13
Ports 56 – stack-group 14
Ports 60 – stack-group 15
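The mapping above is just the port number divided by four ( integer division ), for both the 10G and the 40G ports. A quick sketch, assuming that pattern holds for your line card:

```shell
# stack-group = port / 4 ( integer division ), per the tables above.
stack_group_for_port() {
  echo $(( $1 / 4 ))
}

stack_group_for_port 0    # prints: 0
stack_group_for_port 47   # prints: 11
stack_group_for_port 48   # prints: 12
stack_group_for_port 60   # prints: 15
```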