Adding a reminder notification in XFCE systray that I should launch a backup script

I’ve started using borg and borgmatic to back up my machines. I won’t be using fully automated backups via a crontab at first. Instead, I’ve added a recurring reminder that appears on my XFCE desktop to tell me it may be time to run a backup.

I’m using yad (a zenity on steroids) to display the notifications on the desktop, triggered periodically by anacron.

Clicking the notification icon starts a shell script that performs the backups by running borgmatic.

Here are some bits of my setup:
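Here’s a minimal sketch of what such a reminder script can look like (the icon name, stamp file, age threshold and backup script path below are placeholders, not my exact setup):

```shell
#!/bin/sh
# Sketch of the reminder: if the last backup is older than MAX_AGE_DAYS,
# put a clickable icon in the systray. Run it periodically from anacron,
# e.g. with a daily entry in a user anacrontab such as:
#   1  10  backup-reminder  $HOME/bin/backup-reminder.sh
# Paths and the icon name are placeholders.

STAMP="$HOME/.cache/last-borg-backup"   # touched by the backup script on success
MAX_AGE_DAYS=7

days_since() {
    # print the age of file $1 in whole days (a huge value if it doesn't exist)
    if [ -f "$1" ]; then
        now=$(date +%s)
        mtime=$(date -r "$1" +%s)
        echo $(( (now - mtime) / 86400 ))
    else
        echo 9999
    fi
}

if [ "$(days_since "$STAMP")" -ge "$MAX_AGE_DAYS" ] \
   && command -v yad >/dev/null 2>&1; then
    # yad --notification shows a systray icon; clicking it runs --command.
    # Backgrounded so the anacron job returns immediately.
    yad --notification --image=drive-harddisk \
        --text="Time to run backups (click to start borgmatic)" \
        --command="$HOME/bin/run-backup.sh" &
fi
```

The backup script can then touch the stamp file after a successful borgmatic run, which resets the reminder.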

Continue reading “Adding a reminder notification in XFCE systray that I should launch a backup script”

Offline backup/mirror of a Moodle course, using httrack

I haven’t found much detail online on how to mirror a Moodle course with httrack so that it can be browsed offline.

This could be useful both for backup purposes and for distant learners with connectivity issues.

In my case, a login/password dialog grants access to the Moodle platform; httrack can handle it by capturing the POST form submission with the “catchurl” option.

The strategy I’ve used is to add filters so that everything is excluded, and only explicitly whitelisted patterns are mirrored. This makes it possible to run the backup as a user with high privileges, while avoiding getting lost in loops or in the complex link-following of the UI rendering variants of Moodle’s interface.
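In spirit, the filter list looks like the sketch below (site name, course id and allowed paths are placeholders; the real list depends on the course):

```shell
#!/bin/sh
# Sketch of the exclude-all-then-whitelist filter strategy for httrack.
# Site, course id and the allowed path patterns are placeholders.

site=moodle.example.org
course_id=1234

# The first "-*" filter rejects everything; each subsequent "+" filter
# re-allows only the pages and files we actually want in the mirror.
set -- \
    "-*" \
    "+$site/course/view.php?id=$course_id*" \
    "+$site/mod/resource/*" \
    "+$site/pluginfile.php/*" \
    "+*.css" "+*.js"

# With the authenticated session captured beforehand via "catchurl",
# the mirror run would then look like:
#   httrack "https://$site/course/view.php?id=$course_id" -O ./moodle-mirror "$@"
printf '%s\n' "$@"
```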
Continue reading “Offline backup/mirror of a Moodle course, using httrack”

Offline backup mediawiki with httrack

I recently needed to restore the contents of a wiki running MediaWiki. Unfortunately there were no backups, and my only option was to restore from an outdated version available in Google’s cache.

The problem was that I only had the HTML “output” version, and copy-pasting it into the wiki sources at restore time lost all formatting and links.

Thus I’ve come up with the following script, which is cron-ed to make systematic backups in the background, both of an offline-viewable version of the wiki, as static HTML pages, and of the wiki pages’ sources, for possible restoration.

It uses the marvelous httrack and wget tools.

Here we go:

#! /bin/sh

site=wiki.my.site
topurl=http://$site

backupdir=/home/me/backup-websites/$site

httrack -%i -w $topurl/index.php/Special:Allpages \
-O "$backupdir" -%P -N0 -s0 -p7 -S -a -K0 -%k -A25000 \
-F "Mozilla/4.5 (compatible; HTTrack 3.0x; Windows 98)" -%F '' \
-%s -x -%x -%u \
"-$site/index.php/Special:*" \
"-$site/index.php?title=Special:*" \
"+$site/index.php/Special:Recentchanges" \
"-$site/index.php/Utilisateur:*" \
"-$site/index.php/Discussion_Utilisateur:*" \
"-$site/index.php/Aide:*" \
"+*.css" \
"-$site/index.php?title=*&oldid=*" \
"-$site/index.php?title=*&action=edit" \
"-$site/index.php?title=*&curid=*" \
"+$site/index.php?title=*&action=history" \
"-$site/index.php?title=*&action=history&*" \
"-$site/index.php?title=*&curid=*&action=history*" \
"-$site/index.php?title=*&limit=*&action=history"

for page in $(grep "link updated: $site/index.php/" "$backupdir/hts-log.txt" |
              sed "s,^.*link updated: $site/index.php/,," |
              sed 's/ ->.*//' | grep -v Special:)
do
    # fetch each page's wiki source through MediaWiki's raw action
    wget -nv -O "$backupdir/$site/index.php/${page}_raw.txt" \
        "$topurl/index.php?title=$page&action=raw"
done

Hope this helps,

Scripting MySQL database backups on phpMyAdmin with curl

Sometimes you can only access your MySQL database through phpMyAdmin (the tool I blogged about previously won’t be helpful, then), but you may still wish to back up the database on a regular basis.

phpMyAdmin allows you to export the database, but you may want to do it in an unattended way.

I’ve written a shell script which uses curl to do so.

I couldn’t find any such script… so I hope I didn’t reinvent the wheel 😉
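The general shape of the approach is sketched below. Note that the URL, form field names and export endpoint differ between phpMyAdmin versions, so everything here is a placeholder to illustrate the idea, not the actual script:

```shell
#!/bin/sh
# Hypothetical sketch of an unattended phpMyAdmin export with curl.
# The base URL, database name, and the login/export form fields are
# placeholders; they vary between phpMyAdmin versions.

base=https://example.org/phpmyadmin
db=mydb
jar=$(mktemp)                        # cookie jar holding the PHP session
outfile="$db-$(date +%F).sql"

# helper that keeps the session cookies across requests
pma() {
    curl -s -b "$jar" -c "$jar" "$@"
}

# 1. log in, storing the session cookie in $jar:
#      pma -d "pma_username=backup" -d "pma_password=secret" "$base/index.php"
# 2. request an SQL export of the database as a downloadable file:
#      pma -d "what=sql" -d "db=$db" -d "asfile=sendit" \
#          -o "$outfile" "$base/export.php"
echo "$outfile"
```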

Update 2008/04/15 : I have made some modifications to the script, and it is now in SVN. You may grab a copy from the picoforge project’s websvn.

How to back up MySQL databases: mysql_backup

I have a couple of databases in MySQL on my machines (either for things like my weblog, or for more serious applications at work ;). The machines are probably backed up regularly using filesystem-based tools… so I suppose my databases are backed up once in a while, since they are stored somewhere under /var/lib/mysql…

However, if for some reason the MySQL server crashed and had a problem opening the “raw” backup (maybe no longer backward-compatible after an upgrade), it would be difficult to get my data back…

The idea is to complement the raw backup with a “full-text” dump of the data. mysqldump can be used to do so.
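For instance, a minimal dump loop could look like this (database names, credentials and the target directory are placeholders):

```shell
#!/bin/sh
# Minimal sketch of a plain-text complement to the raw /var/lib/mysql backup.
# Database names and the target directory below are placeholders.

backupdir="$HOME/backup-mysql"
stamp=$(date +%Y-%m-%d)
mkdir -p "$backupdir"

if command -v mysqldump >/dev/null 2>&1; then
    for db in weblog workapp; do
        # mysqldump emits plain SQL, which stays readable and restorable
        # even across server upgrades, unlike the raw table files
        mysqldump --single-transaction "$db" | gzip \
            > "$backupdir/$db-$stamp.sql.gz"
    done
fi
```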

I found the mysql_backup(.pl) script developed by Peter Falkenberg Brown really useful. This GPL’ed tool backs up selected MySQL databases into files (one per table), which are then compressed (.tar.gz) and named after the date of the backup, then rotated à la logrotate (btw, for uses of logrotate for similar needs, see this post by benj). It’s easily set up and configured, with many options. A must-have in my opinion.

Thought I had blogged about it before… but it seems that I did not 😉

Update 2008/04/15 : Btw, if you can’t access the database directly, but have phpMyAdmin available, see the linked post in the trackbacks of this post for a tool I wrote.