automysqlbackup: line 551: mysqlshow: command not found

I have been receiving the following error message when backing up my MariaDB database every day: “/usr/local/bin/automysqlbackup: line 551: mysqlshow: command not found.” This most likely started when I switched my database from MySQL to MariaDB. The automysqlbackup program has been reliable over the years at making nightly backups of my database. It is a popular piece of software used by many, and I highly recommend it. It can be downloaded at https://sourceforge.net/projects/automysqlbackup/. It is no longer actively maintained, yet it still works great; the last update was 2019-07-20.

I finally became tired of the daily email warning about the missing “mysqlshow” command and decided to take action. After some investigation, I discovered that MariaDB provides the “mariadb-show” command in place of “mysqlshow”. Backups were being made correctly, but I wanted the error message to go away. The fix was simply opening automysqlbackup in a command line text editor (nano), searching (Ctrl + W in nano) for “mysqlshow”, and replacing it with “mariadb-show”. The following lines contained “mysqlshow” to be replaced: 532, 538, 541, 544, 551, 555, 559. Nano also has a search-and-replace command (Ctrl + \) to make this faster. I saved the file under a new name, “automysqlbackup-mariadb”, and updated the cron.daily entry with the new script name. I am attaching the new script here for inspection and use.

mariadb-show reference: https://mariadb.com/docs/server/clients-and-utilities/administrative-tools/mariadb-show
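If you would rather script the change than edit the file by hand, a few lines of Python can make the same substitution. This is only a sketch of the approach described above; the install paths are assumptions for a typical setup, so adjust them for your system.

#!/usr/bin/env python3
# Sketch: copy automysqlbackup and swap "mysqlshow" for "mariadb-show".
# The paths below are assumptions for a typical install; adjust for your system.

from pathlib import Path

src = Path("/usr/local/bin/automysqlbackup")            # original script
dst = Path("/usr/local/bin/automysqlbackup-mariadb")    # new MariaDB-friendly copy

text = src.read_text()
dst.write_text(text.replace("mysqlshow", "mariadb-show"))
dst.chmod(0o755)  # keep the copy executable for cron

print("Replaced", text.count("mysqlshow"), "occurrence(s) of mysqlshow")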

From the MariaDB website, 2025-11-30:

MariaDB Community Server is one of the most popular database servers in the world. The MariaDB database has been downloaded over 1 billion times and is the default over MySQL in the majority of Linux distributions. Created by MySQL’s original developers, MariaDB is compatible with MySQL and Oracle, and is guaranteed to always be open source.

Thank you, MariaDB, for making a great open-source product!

Rotate your Rsync Backups with rotate-backups, similar to Time Machine

I use rsync on Ubuntu GNOME 15.10 to back up my data to my server running Debian 8. This creates incremental backups similar to Apple's Time Machine. The backup runs every 2 hours, which creates more backups than needed at the expense of hard drive space. I used to delete the files from the server manually, trying to keep 1 monthly backup, 8 weekly backups, 30 daily backups, and 2 weeks of the every-2-hour backups. Manually selecting the files was time consuming, so I was not consistent about removing the extra backups. My backup scripts are written in Python, and I was going to write a script that would delete old backups that were no longer needed. Even better than writing your own script is finding one that has already been written, such as https://rotate-backups.readthedocs.org/en/latest/#rotate-backups-simple-command-line-interface-for-backup-rotation. This script automatically deletes your old backups, and you can configure how many backups you want to keep.
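Since my backup scripts are in Python, here is a minimal sketch of how the retention policy above could be handed to rotate-backups. The flag names come from the rotate-backups documentation, but the counts and the backup path are example values meant to approximate my policy, so verify everything against rotate-backups --help before trusting it with real data.

#!/usr/bin/env python3
# Sketch: call the rotate-backups command line tool from Python with a
# retention scheme roughly matching the policy described above.

import subprocess

backup_dir = "/srv/backups/laptop"  # example path, not my real backup location

subprocess.call([
    "rotate-backups",
    "--hourly", "336",   # roughly 2 weeks of the every-2-hour backups
    "--daily", "30",     # 30 daily backups
    "--weekly", "8",     # 8 weekly backups
    "--monthly", "1",    # 1 monthly backup
    "--dry-run",         # preview what would be deleted; drop this to actually delete
    backup_dir,
])

rotate-backups works out the age of each backup from the timestamp in its file or directory name, so the rsync snapshot directories need to be named with a date and time.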

This script is well documented and easy to use. I give it my highest recommendation.

Open a LUKS Device with a Python 3 Script

Here is code from a script that unlocks a LUKS device before making a backup to it; the variables at the top are shown here with example values. I use LUKS to encrypt hard drives for off-site storage.


print ("Determine if luks device mapper link exists. If link exists, then luks is open")
answer = os.path.islink(luks_location)
print ("Result of luks device being open: ", answer)

if answer == False:
    #luks device is closed, attempt to open
    print ("Attempting to open luks device: ", luks_device_uuid)
    subprocess.call("cryptsetup luksOpen " + "--key-file " + repr(luks_keyfile) + " " + repr(luks_device_uuid) + " " + repr(luks_name), shell=True)
    answer = os.path.islink(luks_location)

    # test to see if luksOpen failed and need to exit program
    if answer == False:
        # luksopen failed, exit program
        print ("luks device failed to open. The answer variable is: ", answer)
        sys.exit ("Exiting program now.")
    else:
        # luksopen was successful
        print ("luks device is now open: ", answer)
        print ("luks location is: ", luks_location)
        print ("luks device mapper name is: ", luks_name)

else:
    # luks device was already open
    print ("luks device is already open: ", answer)
    print ("luks location is: ", luks_location)
    print ("luks device mapper name is: ", luks_name)
sys.exit("exiting program")

The next blog post will cover mounting the device with Python.

Growing an mdadm RAID 1 Array for Off-site Data Backup

UPDATE: I am no longer using this method and do not recommend adding a third drive to the array and then removing it for off-site storage. A better solution, which I am using now, is to rsync the RAID drive to a LUKS-encrypted external hard drive that can be stored off-site.

I have two hard drives in a RAID 1 array using Linux mdadm. I have a third hard drive of the same size that I would like to add to the array, update once a week, and store off-site. This ensures that if the two RAID hard drives are destroyed in a fire or stolen, I would still have a recent backup in a separate location.

  • View RAID devices

cat /proc/mdstat

  • View a specific RAID device

sudo mdadm --detail /dev/md1

/dev/md1:
Version : 1.2
Creation Time : Sat Nov 10 11:28:44 2012
Raid Level : raid1
Array Size : 2930135512 (2794.40 GiB 3000.46 GB)
Used Dev Size : 2930135512 (2794.40 GiB 3000.46 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Sun Jun 29 12:18:44 2014
State : active
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0

  • List hard drives:

  sudo blkid

  • Add hard drive /dev/sde as a spare

sudo mdadm --add --verbose /dev/md1 /dev/sde

  • View new Raid status

sudo mdadm --detail /dev/md1
/dev/md1:
Version : 1.2
Creation Time : Sat Nov 10 11:28:44 2012
Raid Level : raid1
Array Size : 2930135512 (2794.40 GiB 3000.46 GB)
Used Dev Size : 2930135512 (2794.40 GiB 3000.46 GB)
Raid Devices : 2
Total Devices : 3
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Sun Jun 29 13:38:26 2014
State : active
Active Devices : 2
Working Devices : 3
Failed Devices : 0
Spare Devices : 1
Name : server-lvgrm:1
UUID : 3450fa67:fc8ab4e4:f87aa203:f9f0fdbe
Events : 598614
Number   Major   Minor   RaidDevice State
3       8       48        0      active sync   /dev/sdd
2       8       16        1      active sync   /dev/sdb
4       8       64        -      spare   /dev/sde

  • To sync /dev/sde, one of the active drives has to be failed (here /dev/sdd)

sudo mdadm /dev/md1 -v --fail /dev/sdd

  • View failed device

sudo mdadm --detail /dev/md1

/dev/md1:
Version : 1.2
Creation Time : Sat Nov 10 11:28:44 2012
Raid Level : raid1
Array Size : 2930135512 (2794.40 GiB 3000.46 GB)
Used Dev Size : 2930135512 (2794.40 GiB 3000.46 GB)
Raid Devices : 2
Total Devices : 3
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Sun Jun 29 13:51:56 2014
State : active, degraded, recovering
Active Devices : 1
Working Devices : 2
Failed Devices : 1
Spare Devices : 1
Rebuild Status : 0% complete
Name : server-lvgrm:1
UUID : 3450fa67:fc8ab4e4:f87aa203:f9f0fdbe
Events : 598616
Number   Major   Minor   RaidDevice State
4       8       64        0      spare rebuilding   /dev/sde
2       8       16        1      active sync   /dev/sdb
3       8       48        -      faulty spare   /dev/sdd

  • Monitor status of Raid rebuild from terminal

watch -n 60 cat /proc/mdstat   # -n 60 refreshes the output every 60 seconds

  • To exit the “watch” command, press Ctrl + C
  • For a 3 TB hard drive, the first sync takes 2-3 days
  • The spare drive (/dev/sde) will be used as the off-site backup drive
  • Once the spare drive is 100% synced, it will need to be failed so that it can be removed (a small Python check for this is sketched at the end of this post).

sudo mdadm /dev/md1 -v --fail /dev/sde

  • View failed device

sudo mdadm --detail /dev/md1
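Since my backup scripts are written in Python, here is a minimal sketch of the kind of check that could run before failing the spare: it reads /proc/mdstat and only reports the array as synced when no resync or recovery is in progress. The array name md1 is an assumption taken from the output above, and this is just a sketch, not the script I actually use.

#!/usr/bin/env python3
# Sketch: check /proc/mdstat before failing the synced spare for off-site removal.
# Assumes md1 is the only array being rebuilt, as in the mdadm output above.

import sys

ARRAY = "md1"  # assumed array name from the examples above

with open("/proc/mdstat") as f:
    mdstat = f.read()

if ARRAY not in mdstat:
    sys.exit(ARRAY + " was not found in /proc/mdstat")

# mdstat shows a "recovery" or "resync" progress line while a rebuild is running.
if "recovery" in mdstat or "resync" in mdstat:
    print("A rebuild or resync is still in progress; wait before failing the spare.")
    print(mdstat)
else:
    print("No rebuild in progress; the spare should be fully synced and safe to fail and remove.")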