Encrypt Hard Drive for Secure Storage in Linux Ubuntu 14.04

I need to be able to back up my data to an external hard drive that will be stored off site, to protect my data if my house burns down or the contents are stolen. I also want the data to be private, which means an encrypted hard drive. I am using Ubuntu 14.04. This machine is a headless server, so all commands are entered from the terminal over ssh. I have incorporated all of these commands into a Python backup script for ease of use. Terminal code will look like this sentence. If you have a monitor hooked up to your computer, GUI tools are available.

  • References
  • Install cryptsetup
    • sudo apt-get install cryptsetup
  • Identify the correct hard drive to use. You will be erasing all data on the drive.
  • Fill the hard drive with random data. I saw arguments that this step is not required; however, I felt it was safer to do this and I am not in a rush. This step takes a long time.
    • run command as root
      • sudo -s
      • This method is fast if the CPU supports AES-NI (hardware acceleration). See http://serverfault.com/questions/6440/is-there-an-alternative-to-dev-urandom
        • openssl enc -aes-256-ctr -pass pass:"$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64)" -nosalt < /dev/zero | pv -pterb > /dev/sdj
      • This was another command posted, but it was not as fast
        • dd if=/dev/urandom of=/dev/sdj
      • I then played a youtube playlist in the background to help generate random data
  • Encrypt the hard drive
    • sudo cryptsetup -v luksFormat /dev/sdj
      • Replace /dev/sdj with your drive. You will be asked to confirm (type YES) and to choose a passphrase.
  • Verify the encryption is using LUKS
    • sudo cryptsetup -v isLuks /dev/disk/by-uuid/2228745a-0db3-48c7-b582-5a3ddf7e7c70
      • Output should be “Command successful.” if LUKS device
  • Open the encrypted device (decrypt/unlock the device)
    • The first time the encrypted device is opened, a symbolic link called a “mapping” is created, which becomes the name of the LUKS device.
      • For example I chose a descriptive name “backup.serv-offsite” and the LUKS device will be created at /dev/mapper/backup.serv-offsite. “/dev/mapper/backup.serv-offsite” will only be created when the LUKS device is opened.
    • sudo cryptsetup -v luksOpen /dev/disk/by-uuid/2228745a-0db3-48c7-b582-5a3ddf7e7c70 backup.serv-offsite
  • See if the LUKS device is already unlocked/open.
    • sudo cryptsetup status backup.serv-offsite
    • output if device is opened
      • /dev/mapper/backup.serv-offsite is active and is in use.
      • type: LUKS1
      • cipher: aes-xts-plain64
      • keysize: 512 bits
      • device: /dev/sde
      • offset: 4096 sectors
      • size: 3907025072 sectors
      • mode: read/write
    • output if device is closed
      • /dev/mapper/backup.serv-offsite is inactive.
  • Create a filesystem once device is opened
    • sudo mkfs.ext4 /dev/mapper/backup.serv-offsite
  • Mount the filesystem
    • sudo mount --verbose -t ext4 /dev/mapper/backup.serv-offsite /media/backup.serv-offsite
      • /dev/mapper/backup.serv-offsite is the opened LUKS device
      • /media/backup.serv-offsite is the mount point for the LUKS device
  • Unmount the filesystem
    • sudo umount /media/backup.serv-offsite
  • Close the LUKS device so that the data stays private
    • sudo cryptsetup -v luksClose backup.serv-offsite
  • Verify the LUKS device was closed and the data is no longer available
    • sudo cryptsetup status backup.serv-offsite
  • Done
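The open/closed check above can be scripted. Here is a small sketch, assuming Python 3, that decides whether a LUKS device is open by parsing `cryptsetup status` output of the form shown above; the parsing helper is my own, not part of cryptsetup.

```python
def parse_cryptsetup_status(output):
    """Parse the output of `cryptsetup status <name>` into a dict.

    The first line says whether the mapping is active; the remaining
    lines are "key: value" pairs (type, cipher, keysize, device, ...).
    """
    lines = output.strip().splitlines()
    info = {"active": "is active" in lines[0]}
    for line in lines[1:]:
        if ":" in line:
            key, _, value = line.partition(":")
            info[key.strip()] = value.strip()
    return info
```

In a backup script the output would come from subprocess, for example `subprocess.check_output(["cryptsetup", "status", "backup.serv-offsite"], universal_newlines=True)` (run as root).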

Identify a hard drive serial number in Linux Ubuntu 14.04

It is very important to accurately identify a hard drive before you erase the drive or prepare the drive for encryption by writing random data over the whole drive. If you choose the wrong drive, you can lose all the data on that drive. Below are some commands to identify your drives. I am using Linux Ubuntu 14.04. All commands are entered from the terminal. Terminal code will look like this sentence.

  • For the drive you are about to erase, find the serial number on the drive and record the last 5 digits (you do not need to write down the whole serial number and you will save time).
    • Here are instructions for Western Digital drives. Other manufacturers are similar.
    • from http://wdc.custhelp.com/app/answers/detail/a_id/249/~/how-to-find-the-serial-number-of-a-wd-drive-or-product

      The serial number is located on the label on the back or side of the drive. Please note that the location will vary according to the drive. It is usually proceeded by a S/N: or SN:. If the serial number is unable to be read, many customers have taken digital pictures of the label and enlarged it on their computer to make it easier to read.

    • Here are Google image results of serial numbers on hard drives if you are still unsure.
    • show serial number in software
      • List all physical devices with the “blkid” command
        • sudo blkid
          • output is
          • /dev/sdb: UUID="2628745a-0db3-48c7-b582-5a3ddf7e7c70" TYPE="crypto_LUKS"
          • Remember or copy the UUID. You can identify the device by its UUID (preferred because the UUID doesn’t change) as /dev/disk/by-uuid/2628745a-0db3-48c7-b582-5a3ddf7e7c70
      • show serial number with “hdparm” command
        • you will have to replace /dev/sdb with what is found by the blkid command above, or use the UUID
        • Identify disk by UUID
          • sudo hdparm -I /dev/disk/by-uuid/2628745a-0db3-48c7-b582-5a3ddf7e7c70 | grep -e "/dev" -e "Serial Number" -e "device size with M = 1000"
        • Identify disk by /dev/sdx
          • sudo hdparm -I /dev/sdb | grep -e "/dev" -e "Serial Number" -e "device size with M = 1000"
        • output below
          • /dev/sdb:
          • Serial Number:      WD-WCC4MKC41F22
          • device size with M = 1000*1000:     2000398 MBytes (2000 GB)
        • verify this serial number matches what is on the outside of the hard drive
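Since I script these checks in Python, here is a sketch, assuming Python 3, that pulls the serial number out of `hdparm -I` output like the sample above so it can be compared against the label on the drive (the parsing helper is my own):

```python
def extract_serial(hdparm_output):
    """Return the serial number from `hdparm -I` output, or None."""
    for line in hdparm_output.splitlines():
        if "Serial Number:" in line:
            return line.split("Serial Number:", 1)[1].strip()
    return None
```

The output would come from something like `subprocess.check_output(["hdparm", "-I", "/dev/sdb"], universal_newlines=True)` run as root.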

Open a LUKS device with Python 3 script

Here is code from a script that unlocks a LUKS device before making a backup to the device. I use LUKS for encrypting hard drives for off-site storage.


import os
import subprocess
import sys

# These values are defined earlier in the full script; example values shown here.
luks_device_uuid = "/dev/disk/by-uuid/2228745a-0db3-48c7-b582-5a3ddf7e7c70"
luks_name = "backup.serv-offsite"
luks_location = "/dev/mapper/" + luks_name
luks_keyfile = "/root/keys/backup.keyfile"

print("Determine if the LUKS device mapper link exists. If the link exists, then LUKS is open.")
answer = os.path.islink(luks_location)
print("Result of LUKS device being open:", answer)

if not answer:
    # LUKS device is closed, attempt to open it
    print("Attempting to open LUKS device:", luks_device_uuid)
    subprocess.call(["cryptsetup", "luksOpen", "--key-file", luks_keyfile,
                     luks_device_uuid, luks_name])
    answer = os.path.islink(luks_location)

    # test whether luksOpen failed and we need to exit the program
    if not answer:
        # luksOpen failed, exit the program
        print("LUKS device failed to open. The answer variable is:", answer)
        sys.exit("Exiting program now.")
    else:
        # luksOpen was successful
        print("LUKS device is now open:", answer)
        print("LUKS location is:", luks_location)
        print("LUKS device mapper name is:", luks_name)
else:
    # LUKS device was already open
    print("LUKS device is already open:", answer)
    print("LUKS location is:", luks_location)
    print("LUKS device mapper name is:", luks_name)
sys.exit("exiting program")

Next blog will be mounting the device with python.

Linux Screen command

Use this command to start a long process while ssh into a machine to keep the process running if the ssh connection gets dropped.

http://ss64.com/bash/screen.html

  • screen -r         Resume a detached screen session
  • Control-a ?     Display brief help
  • Control-a "     List all windows for selection
  • Control-a d     Detach screen from this terminal
  • Control-a c     Create new window running a shell
  • screen -t backup   Start a screen with title “backup”
  • Control-a f    Toggle flow on, off or auto
  • Control-a n    Switch to the Next window
  • Control-a p    Switch to the Previous window
  • Control-a A    Accept a title name for the current window

Screen Customizations

Adding a Calendar to OwnCloud and Thunderbird

Here are brief steps to add calendars to OwnCloud and then Thunderbird

  • Download calendar from website in ical format and save it to your hard drive
  • Import Calendar into Owncloud
    • Owncloud manual
    • Login to Owncloud account
    • go to the Files tab
      • select a folder to import a calendar into
      • click on the upload arrow at the top of the page
        • a new window will appear; select your ical calendar (it has a .ics extension, example: US-holidays.ics)
        • The new file “US-holidays.ics” should now be visible in the Files tab
      • click on the “US-holidays.ics” file name
        • a new dialog box appears “Import a calendar file”
          • at “Please choose a calendar”
            • select “create a new calendar”
            • type in calendar name
            • choose a color for the calendar
            • select the box “Remove all events from the selected calendar”
          • click button “import”
  • Adding a Calendar to Mozilla Lightning calendar application

jobs command linux, ubuntu 14.04

Instructions for use of the jobs command from the terminal.

To stop (suspend) a running process, press Ctrl+Z:

$ ctrl + z

[1]+  Stopped                 watch -n 60 cat /proc/mdstat

List jobs with process id

$ jobs -l

[1]+ 31337 Stopped                 watch -n 60 cat /proc/mdstat

Kill job #1. If you kill the process while it is stopped, you will have to bring it to the foreground before the kill takes effect.

$ kill %1

$ fg %1

watch -n 60 cat /proc/mdstat

Check that the process is no longer running. No jobs will be listed and the output will be blank.

jobs -l

References

http://www.cyberciti.biz/howto/unix-linux-job-control-command-examples-for-bash-ksh-shell/

 

Stale NFS File Handle Error on Ubuntu Linux Server

updated 2017-1205

After rebooting my Ubuntu server, I am sometimes unable to mount my NFS shares on the server from my desktop or laptop (Debian 9.0 or Ubuntu 17.10). The NFS shares are automatically mounted at boot through the file /etc/fstab. I think the problem is that the server gets a new file handle for the share, and the desktop machine still has the old file handle, which gets rejected by the server. Here is my solution; all commands are run from the command line. Command line commands will look like this sentence.

  • show mounted filesystems from the client computer
    • $ df -h
      • output: df: ‘/mnt/nfs/backup.serv’: Stale file handle
        • If the file system is mounted, I need to unmount the share with the old file handle and then mount the share again with the new file handle. Remounting the share causes the desktop client to get the updated file handle from the server.

          • sudo umount -v /mnt/nfs/backup.serv
        • If the file system is not mounted
          • Try exporting the file system from the server
            • exportfs -vf
            • exportfs -vra
            • I put this in root's crontab on the server so it re-exports every 15 minutes. Open the crontab editor with the following command
              • sudo crontab -e
              • Insert the next two lines (use the full path, since cron's PATH may not include /usr/sbin)
                • # re-export nfs shares every 15 minutes
                • */15 * * * * /usr/sbin/exportfs -ra
            • From the client, try to mount the nfs share
              • sudo mount -av
          • If the above does not work then restart the nfs-kernel-server on the nfs file server. See section restarting nfs-kernel-server
  • Mounting nfs share from client
    • /media/chad/backup.serv is the mount point for my NFS share
    • Have the client mount all file systems
      • $ sudo mount -av
        • -a mounts all file systems in fstab that are not mounted
    • show mounted file systems
      • $ df -h
        • output: 192.xxx.xxx.xxx:/media/backup.serv 1.8T 913G 828G 53% /media/chad/backup.serv
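To automate the check above, here is a sketch, assuming Python 3, that scans `df` output for stale NFS handles; the helper is my own, and the pattern accepts both plain and curly quotes since df's quoting varies by locale:

```python
import re

def find_stale_mounts(df_output):
    """Return mount points that df flagged with 'Stale file handle'.

    df prints lines like: df: '/mnt/nfs/backup.serv': Stale file handle
    """
    pattern = re.compile(
        r"df: [\u2018']([^\u2019']+)[\u2019']: Stale file handle")
    return pattern.findall(df_output)
```

The output would come from `subprocess.check_output(["df", "-h"], universal_newlines=True)`; a non-empty result means the shares need to be unmounted and remounted.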

Restarting nfs-kernel-server

    • If you are getting the error “mount.nfs: mount(2): Stale file handle” on your client when you try to mount your nfs mount, then you need to restart the nfs-kernel-server from the server.
    • ssh into the server
    • run restart command (debian or ubuntu)
      • $ sudo service nfs-kernel-server restart
    • from the client, remount the nfs partition

Miscellaneous

Good information about the problem, and a script to find stale NFS file handles, at http://serverfault.com/questions/617610/stale-nfs-file-handle-after-reboot

 

mts files to mkv, Home Video Files for MythTV

I own a video camera (Panasonic HC-V720) that saves its files as .mts. The mts format is not recognized by MythTV. I converted these files to mkv files using the HandBrake software for watching on MythTV. ffmpeg is a command-line program that will also convert to mkv files. The mkv (Matroska) format is an open, non-proprietary format.

Handbrake was installed from the terminal (Ubuntu 14.04) with the command:

sudo apt-get install handbrake
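For the ffmpeg route, AVCHD .mts files typically contain H.264 video and AC-3 audio, both of which the Matroska container can hold, so the streams can usually be copied without re-encoding. A sketch in Python, assuming Python 3 with ffmpeg on the PATH (the file names are examples):

```python
import subprocess

def build_ffmpeg_cmd(src, dst):
    """Build an ffmpeg command that remuxes an .mts file into .mkv
    by stream copy (-c copy), i.e. without re-encoding."""
    return ["ffmpeg", "-i", src, "-c", "copy", dst]

# To actually run the conversion (requires ffmpeg to be installed):
# subprocess.run(build_ffmpeg_cmd("clip.MTS", "clip.mkv"), check=True)
```

A stream copy finishes in seconds; re-encoding (as HandBrake does) takes much longer but can shrink the file.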

 

Panasonic HC-V720 Transferring Files to Linux

I need to move files off of the video camera (vc) onto my linux computer.

  • Remove SD card and put SD card into a card reader attached to the linux computer.
  • To move only the video files, they are .MTS files hidden away in PRIVATE\AVCHD\BDMV\STREAM directory of the card.
  • To move still pictures, they are found in folders located in DCIM
  • After file transfer is complete, delete all files and directories from the PRIVATE AND DCIM directories leaving only those two directories on the card.
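The video-transfer step above can be sketched in Python 3. The directory names are the ones on the card; the function itself is my own:

```python
import shutil
from pathlib import Path

def copy_camera_videos(card_root, dest_dir):
    """Copy every .MTS file from the card's AVCHD stream directory
    (PRIVATE/AVCHD/BDMV/STREAM) into dest_dir; returns copied names."""
    stream_dir = Path(card_root) / "PRIVATE" / "AVCHD" / "BDMV" / "STREAM"
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    copied = []
    for clip in sorted(stream_dir.glob("*.MTS")):
        shutil.copy2(clip, dest / clip.name)  # copy2 preserves timestamps
        copied.append(clip.name)
    return copied
```

Still pictures under DCIM would be handled the same way with a different glob pattern.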

 

Growing a MDADM Raid 1 Array for Offsite Data Backup

UPDATE: I am no longer using this method and do not recommend adding a third drive to the array and then removing the drive from the array for off-site storage. A better solution, which I now use, is to rsync the raid drive to a LUKS-encrypted external hard drive that can be stored off-site.

I have two hard drives in a Raid 1 using Linux mdadm. I have a third hard drive of the same size that I would like to be a part of the raid and updated once a week and stored off site. This ensures that if the two raid hard drives are destroyed in a fire or stolen, I would have a recent backup in a separate location.

  • view Raid devices

 cat /proc/mdstat

  • view a specific raid device

sudo mdadm --detail /dev/md1

/dev/md1:
Version : 1.2
Creation Time : Sat Nov 10 11:28:44 2012
Raid Level : raid1
Array Size : 2930135512 (2794.40 GiB 3000.46 GB)
Used Dev Size : 2930135512 (2794.40 GiB 3000.46 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Sun Jun 29 12:18:44 2014
State : active
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0

  • List hard drives:

  sudo blkid

  • Add hard drive /dev/sde as a spare

sudo mdadm --add --verbose /dev/md1 /dev/sde

  • View new Raid status

$ sudo mdadm --detail /dev/md1
/dev/md1:
Version : 1.2
Creation Time : Sat Nov 10 11:28:44 2012
Raid Level : raid1
Array Size : 2930135512 (2794.40 GiB 3000.46 GB)
Used Dev Size : 2930135512 (2794.40 GiB 3000.46 GB)
Raid Devices : 2
Total Devices : 3
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Sun Jun 29 13:38:26 2014
State : active
Active Devices : 2
Working Devices : 3
Failed Devices : 0
Spare Devices : 1
Name : server-lvgrm:1
UUID : 3450fa67:fc8ab4e4:f87aa203:f9f0fdbe
Events : 598614
Number   Major   Minor   RaidDevice State
3       8       48        0      active sync   /dev/sdd
2       8       16        1      active sync   /dev/sdb
4       8       64        -      spare   /dev/sde

  • to sync /dev/sde, one of the other drives (/dev/sdd) will have to be failed

sudo mdadm /dev/md1 -v --fail /dev/sdd

  • View failed device

sudo mdadm --detail /dev/md1

/dev/md1:
Version : 1.2
Creation Time : Sat Nov 10 11:28:44 2012
Raid Level : raid1
Array Size : 2930135512 (2794.40 GiB 3000.46 GB)
Used Dev Size : 2930135512 (2794.40 GiB 3000.46 GB)
Raid Devices : 2
Total Devices : 3
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Sun Jun 29 13:51:56 2014
State : active, degraded, recovering
Active Devices : 1
Working Devices : 2
Failed Devices : 1
Spare Devices : 1
Rebuild Status : 0% complete
Name : server-lvgrm:1
UUID : 3450fa67:fc8ab4e4:f87aa203:f9f0fdbe
Events : 598616
Number   Major   Minor   RaidDevice State
4       8       64        0      spare rebuilding   /dev/sde
2       8       16        1      active sync   /dev/sdb
3       8       48        –      faulty spare   /dev/sdd

  • Monitor status of Raid rebuild from terminal

watch -n 60 cat /proc/mdstat

  • to exit the “watch” command, press “ctrl” + “c” (“ctrl” + “z” only suspends it)
  • for a 3 TB hard drive this takes 2-3 days for the first sync
  • The spare drive (/dev/sde) will be used as the offsite backup drive.
  • Once the spare drive is 100% synced, this drive will need to be failed so that it can be removed.

sudo mdadm /dev/md1 -v --fail /dev/sde

  • View failed device

sudo mdadm --detail /dev/md1
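When scripting these steps (for example, to wait until the rebuild finishes before failing and removing the spare), it helps to parse the `mdadm --detail` output shown above. A sketch, assuming Python 3; the parsing helper is my own:

```python
def parse_mdadm_detail(output):
    """Parse `mdadm --detail /dev/mdX` output into a dict of the
    'Key : Value' fields (device table rows are skipped)."""
    info = {}
    for line in output.splitlines():
        if " : " in line:
            key, _, value = line.partition(" : ")
            info[key.strip()] = value.strip()
    return info
```

A script could poll this until "Rebuild Status" disappears and "State" no longer contains "recovering", with the output taken from `subprocess.check_output(["mdadm", "--detail", "/dev/md1"], universal_newlines=True)`.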