Adding a Calendar to ownCloud and Thunderbird

Here are brief steps to add calendars to ownCloud and then Thunderbird.

  • Download the calendar from a website in iCal format and save it to your hard drive
  • Import the calendar into ownCloud
    • ownCloud manual
    • Log in to your ownCloud account
    • Go to the Files tab
      • Select a folder to hold the imported calendar
      • Click on the upload arrow at the top of the page
        • A new window will appear; select your iCal calendar (it has a .ics extension, for example US-holidays.ics; a minimal sample file is shown after this list)
        • The new file “US-holidays.ics” should now be visible in the Files tab
      • Click on the “US-holidays.ics” file name
        • A new dialog box appears: “Import a calendar file”
          • At “Please choose a calendar”
            • Select “create a new calendar”
            • Type in a calendar name
            • Choose a color for the calendar
            • Check the box “Remove all events from the selected calendar”
          • Click the “import” button
  • Adding a calendar to the Mozilla Lightning calendar application
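
For reference, here is a minimal sketch of what an iCal (.ics) file contains; the calendar name and event below are invented for illustration:

BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Example//US Holidays//EN
BEGIN:VEVENT
UID:us-holidays-0001@example.com
DTSTAMP:20140101T000000Z
DTSTART;VALUE=DATE:20140704
SUMMARY:Independence Day
END:VEVENT
END:VCALENDAR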

The jobs Command on Linux (Ubuntu 14.04)

Instructions for using the jobs command from the terminal.

To suspend (stop) a running foreground process, press Ctrl+Z:

[1]+  Stopped                 watch -n 60 cat /proc/mdstat

List jobs with their process IDs:

$ jobs -l

[1]+ 31337 Stopped                 watch -n 60 cat /proc/mdstat

Kill job #1. If you kill the job while it is stopped, you will have to bring it to the foreground before the signal is delivered and the process actually exits.

$ kill %1

$ fg %1

watch -n 60 cat /proc/mdstat
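
Alternatively, SIGKILL is delivered even while a job is stopped, so a stopped job can be killed without bringing it to the foreground first:

$ kill -9 %1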

Check that the process is no longer running. No jobs will be listed and the output will be blank.

$ jobs -l

References

http://www.cyberciti.biz/howto/unix-linux-job-control-command-examples-for-bash-ksh-shell/

 

Stale NFS File Handle Error on Ubuntu Linux Server

Updated 2017-12-05

After rebooting my Ubuntu server, I am sometimes unable to mount the server's NFS shares from my desktop or laptop (Debian 9 or Ubuntu 17.10). The NFS shares are normally mounted automatically at boot through the file /etc/fstab. I think the problem is that the server gets a new file handle for the share while the desktop machine still holds the old file handle, which the server then rejects. Here is my solution; all commands are run from the command line.

  • Show mounted filesystems from the client computer
    • $ df -h
      • output: df: ‘/mnt/nfs/backup.serv’: Stale file handle
        • If the file system is mounted, I need to unmount the share with the old file handle and then mount it again. Remounting the share causes the desktop client to get the new, updated file handle from the server.
          • $ sudo umount -v /mnt/nfs/backup.serv
        • If the file system is not mounted
          • Try re-exporting the file system from the server
            • $ sudo exportfs -vf
            • $ sudo exportfs -vra
            • I put this in a crontab on the server so the shares are re-exported every 15 minutes. Open the crontab editor with the following command
              • $ sudo crontab -e
              • Insert the next two lines
                • # re-export nfs shares every 15 minutes
                • */15 * * * * exportfs -ra
            • From the client, try to mount the nfs share
              • $ sudo mount -av
          • If the above does not work, then restart the nfs-kernel-server on the NFS file server. See the section Restarting nfs-kernel-server below.
  • Mounting the NFS share from the client
    • /media/chad/backup.serv is the mount point for my NFS share
    • Have the client mount all file systems
      • $ sudo mount -av
        • -a mounts all file systems listed in fstab that are not already mounted (see the example /etc/exports and /etc/fstab entries after this list)
    • Show mounted file systems
      • $ df -h
        • output: 192.xxx.xxx.xxx:/media/backup.serv 1.8T 913G 828G 53% /media/chad/backup.serv
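
For reference, both sides of this setup are one-line entries: the share is defined in /etc/exports on the server, and the automatic mount comes from /etc/fstab on the client. These are minimal sketches with placeholder options, not my exact configuration:

# /etc/exports on the server
/media/backup.serv 192.xxx.xxx.xxx(rw,sync,no_subtree_check)

# /etc/fstab on the client
192.xxx.xxx.xxx:/media/backup.serv /media/chad/backup.serv nfs defaults 0 0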

Restarting nfs-kernel-server

    • If you are getting the error “mount.nfs: mount(2): Stale file handle” on your client when you try to mount your NFS share, then you need to restart the nfs-kernel-server on the server.
    • ssh into the server
    • Run the restart command (Debian or Ubuntu)
      • $ sudo service nfs-kernel-server restart
    • From the client, remount the NFS partition as shown below
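
A minimal remount from the client, assuming the share is listed in /etc/fstab (the mount point is the example path from above):

$ sudo umount -f /mnt/nfs/backup.serv
$ sudo mount -av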

Miscellaneous

Good information about the problem, and a script to find stale NFS file handles, is available at http://serverfault.com/questions/617610/stale-nfs-file-handle-after-reboot

 

MTS Files to MKV: Home Video Files for MythTV

I own a video camera (Panasonic HC-V720) that saves its files as MTS. The MTS format is not recognized by MythTV, so I converted these files with the HandBrake software to MKV files for watching in MythTV. ffmpeg is a command line program that will also convert to MKV files. The MKV (Matroska) format is an open, non-proprietary format.

HandBrake was installed from the terminal (Ubuntu 14.04) with the command:

sudo apt-get install handbrake
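
Since ffmpeg can do the same conversion from the command line, here is a sketch with placeholder file names. MTS files usually already contain H.264 video, so a straight remux into the MKV container (no re-encode) is often enough:

$ ffmpeg -i input.MTS -c copy output.mkv

If the copied streams give trouble in MythTV, re-encoding is the fallback:

$ ffmpeg -i input.MTS -c:v libx264 -c:a aac output.mkv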

 

Panasonic HC-V720 Transferring Files to Linux

I need to move files off of the video camera onto my Linux computer.

  • Remove the SD card and put it into a card reader attached to the Linux computer.
  • To move only the video files: they are .MTS files hidden away in the PRIVATE/AVCHD/BDMV/STREAM directory of the card.
  • To move still pictures: they are found in folders located in DCIM.
  • After the file transfer is complete, delete all files and directories from the PRIVATE and DCIM directories, leaving only those two directories on the card. A sketch of the copy commands is shown after this list.
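
A minimal sketch of the copy, assuming the card is mounted at /media/chad/CAM_SD (the mount point and destination folders are placeholders):

$ cp /media/chad/CAM_SD/PRIVATE/AVCHD/BDMV/STREAM/*.MTS ~/Videos/
$ cp -r /media/chad/CAM_SD/DCIM/* ~/Pictures/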

 

Growing an mdadm RAID 1 Array for Offsite Data Backup

UPDATE: I am not using this method any more and do not recommend adding a third drive to the array and then removing it for off-site storage. A better solution, which I am using now, is to rsync the RAID drive to a LUKS-encrypted external hard drive that can be stored off-site.
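
A minimal sketch of that rsync, assuming the RAID array is mounted at /mnt/raid and the unlocked LUKS drive at /mnt/offsite (both paths are placeholders):

$ sudo rsync -aAXv --delete /mnt/raid/ /mnt/offsite/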

I have two hard drives in a RAID 1 array using Linux mdadm. I have a third hard drive of the same size that I would like to make part of the array, update once a week, and store off-site. This ensures that if the two RAID hard drives are destroyed in a fire or stolen, I will have a recent backup in a separate location.

  • View RAID devices

$ cat /proc/mdstat

  • View a specific RAID device

$ sudo mdadm --detail /dev/md1

/dev/md1:
Version : 1.2
Creation Time : Sat Nov 10 11:28:44 2012
Raid Level : raid1
Array Size : 2930135512 (2794.40 GiB 3000.46 GB)
Used Dev Size : 2930135512 (2794.40 GiB 3000.46 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Sun Jun 29 12:18:44 2014
State : active
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0

  • List hard drives:

$ sudo blkid

  • Add hard drive /dev/sde as a spare

$ sudo mdadm --add --verbose /dev/md1 /dev/sde

  • View new Raid status

$ sudo mdadm --detail /dev/md1
/dev/md1:
Version : 1.2
Creation Time : Sat Nov 10 11:28:44 2012
Raid Level : raid1
Array Size : 2930135512 (2794.40 GiB 3000.46 GB)
Used Dev Size : 2930135512 (2794.40 GiB 3000.46 GB)
Raid Devices : 2
Total Devices : 3
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Sun Jun 29 13:38:26 2014
State : active
Active Devices : 2
Working Devices : 3
Failed Devices : 0
Spare Devices : 1
Name : server-lvgrm:1
UUID : 3450fa67:fc8ab4e4:f87aa203:f9f0fdbe
Events : 598614
Number   Major   Minor   RaidDevice State
3       8       48        0      active sync   /dev/sdd
2       8       16        1      active sync   /dev/sdb
4       8       64        -      spare   /dev/sde

  • To sync /dev/sde, I will have to fail one of the other drives (/dev/sdd)

$ sudo mdadm /dev/md1 -v --fail /dev/sdd

  • View failed device

$ sudo mdadm --detail /dev/md1

/dev/md1:
Version : 1.2
Creation Time : Sat Nov 10 11:28:44 2012
Raid Level : raid1
Array Size : 2930135512 (2794.40 GiB 3000.46 GB)
Used Dev Size : 2930135512 (2794.40 GiB 3000.46 GB)
Raid Devices : 2
Total Devices : 3
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Sun Jun 29 13:51:56 2014
State : active, degraded, recovering
Active Devices : 1
Working Devices : 2
Failed Devices : 1
Spare Devices : 1
Rebuild Status : 0% complete
Name : server-lvgrm:1
UUID : 3450fa67:fc8ab4e4:f87aa203:f9f0fdbe
Events : 598616
Number   Major   Minor   RaidDevice State
4       8       64        0      spare rebuilding   /dev/sde
2       8       16        1      active sync   /dev/sdb
3       8       48        -      faulty spare   /dev/sdd

  • Monitor the status of the RAID rebuild from the terminal

$ watch -n 60 cat /proc/mdstat   # -n 60 refreshes the output every 60 seconds

  • To exit the watch command, type Ctrl+C (Ctrl+Z only suspends it; see the jobs command section above for cleaning up a stopped job)
  • For a 3 TB hard drive, the first sync takes 2-3 days
  • The spare drive (/dev/sde) will be used as the offsite backup drive.
  • Once the spare drive is 100% synced, it will need to be failed so that it can be removed (the removal command is sketched at the end of this section).

$ sudo mdadm /dev/md1 -v --fail /dev/sde

  • View failed device

$ sudo mdadm --detail /dev/md1
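
Once /dev/sde shows as faulty, it can be removed from the array and the drive pulled for off-site storage. A minimal sketch using the same device names as above:

$ sudo mdadm /dev/md1 -v --remove /dev/sde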