Growing an mdadm RAID 1 Array for Offsite Data Backup

UPDATE: I am no longer using this method and do not recommend adding a third drive to the array and then removing it for off-site storage. A better solution, which I am using now, is to rsync the RAID drive to a LUKS-encrypted external hard drive that can be stored off-site.
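That newer approach looks roughly like this (a minimal sketch; /dev/sdX1, the mapper name, and the mount points are placeholders for your own setup):

sudo cryptsetup luksOpen /dev/sdX1 offsite     # unlock the encrypted external drive
sudo mount /dev/mapper/offsite /mnt/offsite
sudo rsync -aHAX --delete /mnt/raid/ /mnt/offsite/     # mirror the RAID mount onto the backup
sudo umount /mnt/offsite
sudo cryptsetup luksClose offsite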

I have two hard drives in a RAID 1 array managed by Linux mdadm. I have a third hard drive of the same size that I would like to make part of the array, update once a week, and store off-site. This ensures that if the two RAID drives are destroyed in a fire or stolen, I still have a recent backup in a separate location.

  • View RAID devices

 cat /proc/mdstat
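On a healthy two-disk RAID 1 the output looks something like this (illustrative only, reconstructed to match the detail output below rather than captured live):

Personalities : [raid1]
md1 : active raid1 sdd[3] sdb[2]
      2930135512 blocks super 1.2 [2/2] [UU]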

  • View a specific RAID device

sudo mdadm --detail /dev/md1

/dev/md1:
Version : 1.2
Creation Time : Sat Nov 10 11:28:44 2012
Raid Level : raid1
Array Size : 2930135512 (2794.40 GiB 3000.46 GB)
Used Dev Size : 2930135512 (2794.40 GiB 3000.46 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Sun Jun 29 12:18:44 2014
State : active
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0

  • List hard drives:

  sudo blkid
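As an aside, lsblk prints a tree of drives and partitions that can be easier to scan than blkid's output:

sudo lsblk -o NAME,SIZE,TYPE,MOUNTPOINT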

  • Add hard drive /dev/sde as a spare

sudo mdadm --add --verbose /dev/md1 /dev/sde
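A note on why the disk lands as a spare: with Raid Devices still set to 2, mdadm parks the third disk as a hot spare rather than syncing it as an active mirror. If you wanted three active mirrors instead (not what this walkthrough does), mdadm can grow the device count:

sudo mdadm --grow /dev/md1 --raid-devices=3     # promote the spare to a third active mirror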

  • View new RAID status

sudo mdadm --detail /dev/md1
/dev/md1:
Version : 1.2
Creation Time : Sat Nov 10 11:28:44 2012
Raid Level : raid1
Array Size : 2930135512 (2794.40 GiB 3000.46 GB)
Used Dev Size : 2930135512 (2794.40 GiB 3000.46 GB)
Raid Devices : 2
Total Devices : 3
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Sun Jun 29 13:38:26 2014
State : active
Active Devices : 2
Working Devices : 3
Failed Devices : 0
Spare Devices : 1
Name : server-lvgrm:1
UUID : 3450fa67:fc8ab4e4:f87aa203:f9f0fdbe
Events : 598614
Number   Major   Minor   RaidDevice State
3       8       48        0      active sync   /dev/sdd
2       8       16        1      active sync   /dev/sdb
4       8       64        -      spare   /dev/sde

  • To sync /dev/sde, one of the active drives must be failed (here, /dev/sdd)

sudo mdadm /dev/md1 -v --fail /dev/sdd

  • View failed device

sudo mdadm --detail /dev/md1

/dev/md1:
Version : 1.2
Creation Time : Sat Nov 10 11:28:44 2012
Raid Level : raid1
Array Size : 2930135512 (2794.40 GiB 3000.46 GB)
Used Dev Size : 2930135512 (2794.40 GiB 3000.46 GB)
Raid Devices : 2
Total Devices : 3
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Sun Jun 29 13:51:56 2014
State : active, degraded, recovering
Active Devices : 1
Working Devices : 2
Failed Devices : 1
Spare Devices : 1
Rebuild Status : 0% complete
Name : server-lvgrm:1
UUID : 3450fa67:fc8ab4e4:f87aa203:f9f0fdbe
Events : 598616
Number   Major   Minor   RaidDevice State
4       8       64        0      spare rebuilding   /dev/sde
2       8       16        1      active sync   /dev/sdb
3       8       48        -      faulty spare   /dev/sdd

  • Monitor the status of the RAID rebuild from the terminal

watch -n 60 cat /proc/mdstat     # -n 60 refreshes every 60 seconds
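If you would rather block in a script than watch the counter, mdadm's --wait option returns once the rebuild finishes:

sudo mdadm --wait /dev/md1     # blocks until any resync/recovery completes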

  • To exit the watch command, press Ctrl+C.
  • For a 3 TB hard drive, the first sync takes 2-3 days.
  • The spare drive (/dev/sde) will be used as the off-site backup drive.
  • Once the spare drive is 100% synced, it will need to be failed so that it can be removed.

sudo mdadm /dev/md1 -v --fail /dev/sde

  • View failed device

sudo mdadm --detail /dev/md1
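Once /dev/sde shows as faulty, it can be detached and taken off-site, and the drive failed earlier (/dev/sdd) can be brought back into the mirror. A sketch of that final step, assuming the device names above; because the array has an internal write-intent bitmap, re-adding /dev/sdd only copies the blocks that changed during the rebuild:

sudo mdadm /dev/md1 --remove /dev/sde     # detach the synced backup drive for off-site storage
sudo mdadm /dev/md1 --remove /dev/sdd     # clear the faulty flag on the original mirror
sudo mdadm /dev/md1 --re-add /dev/sdd     # bring it back; the bitmap keeps the resync short

The weekly refresh is the same cycle again: --add the off-site drive, --fail /dev/sdd, wait for the rebuild, then --fail and --remove /dev/sde.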