Mar 21 2018

Parted is a flexible tool for working with partition tables under Linux. Unfortunately it sometimes seems rather stupid. For example, when you create a new partition you may get the warning “The resulting partition is not properly aligned for best performance”. It could, of course, proceed to suggest the proper alignment, but it doesn’t, so in theory you are left to figure out the correct alignment yourself.

Fortunately there is a simple way to get parted to do that for you anyway, as described e.g. in this blogpost by Jari Turkia under “Attempt 4: The simple way”: Use percentages.

parted /dev/somedevice mkpart primary ext4 0% 100%

It took me a while to find that one again, so I made a blog post of it, so I can easily come back to it when I need it again.
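Using percentages gives parted some slack, so it can pick a properly aligned boundary itself. If you ever do need to compute an aligned start sector by hand, the rounding rule is simple; a minimal sketch, assuming the usual 1 MiB alignment on 512-byte sectors (2048 sectors = 1 MiB):

```shell
#!/bin/sh
# Round a start sector up to the next alignment boundary.
align_up() {
  sector=$1; boundary=$2
  echo $(( (sector + boundary - 1) / boundary * boundary ))
}
align_up 34 2048    # GPT's first usable sector (34) rounds up to 2048
```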

Mar 14 2018

In theory it is easy to detect when a user plugs a USB device into a Linux computer and notify them about what was detected. In practice it’s still easy, as long as you know how to do it.

First thing to do is add a file to


The file name should follow the convention NN-SomeDescriptiveName.rules, where NN is a two-digit number. In our case it should be one of the last scripts to execute, since by then all of the initialization done by other scripts should be finished, and printing the name to the console is not the most important part of the initialization anyway. So let’s go with


That file defines what kind of events we are interested in. In this case, we are interested in the connection of a USB hard drive, so it looks like this:

ACTION=="add", SUBSYSTEM=="block", SUBSYSTEMS=="usb", KERNEL=="sd?1", RUN+="/usr/local/bin/"

Note that the first 4 entries are conditions using a C-like syntax, so there must be a double equals sign (it took me nearly an hour to find out that I had missed one; debugging these events is not easy).

  • ACTION=="add": We want to know when a new device is being added
  • SUBSYSTEM=="block": The device must be a block device (e.g. a hard disk)
  • SUBSYSTEMS=="usb": And it must be connected via USB
  • KERNEL=="sd?1": And the device name must match /dev/sd?1, i.e. it must be the first partition on a disk that is accessed as SCSI (basically everything nowadays)

If these conditions are met, the last directive will be executed. It is also C-like: it appends something to the RUN variable; in our case we want to call the script


Once you have created this file, make sure to let udev know that you did that.

sudo /etc/init.d/udev restart

should work on most Linux distributions.

The first script should be a simple one, just to check whether it is actually being called:

#!/bin/bash
exit 15

All it does is exit with exit code 15. This exit code will show up in /var/log/syslog, so we can check whether our script was executed at all. Don’t forget to make it executable with

sudo chmod u+x /usr/local/bin/

Once we are sure it does, we change it to do the real work:


#!/bin/bash
# $DEVLINKS is a space separated list of /dev/* links to the device; keep
# what follows "by-label/" and cut at the first space to get the label.
temp=${DEVLINKS#*by-label/}
DevLabel=${temp%% *}
temp="${DEVNAME} (${DevLabel}) connected"
echo "$temp" | wall
echo "$temp" > /dev/console
exit 0

udev passes information about the device in a number of environment variables. In our case we only want to know the device name and the partition label.

The device name is easy, it’s being passed in $DEVNAME. The device label is trickier. I only found it in $DEVLINKS which contains a list of /dev/* entries that link to the device, one of them being /dev/disk/by-label/[partition-label] which is the label of the partition of the device (and our device is the first partition, see the KERNEL filter above).

So first we use a bit of bash magic to extract the label from $DEVLINKS, then we create the string $temp we want to write, and last we send it to all logged-in users using the wall command and, for good measure, to the local console.
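To see the two-step extraction in isolation, here is the same parameter expansion run against a made-up $DEVLINKS value (the label "BACKUP" and the link paths are invented for the demo):

```shell
#!/bin/bash
# Fake DEVLINKS as udev would pass it: space separated /dev/* links.
DEVLINKS="/dev/disk/by-id/usb-Foo-0:0-part1 /dev/disk/by-label/BACKUP /dev/disk/by-uuid/ab12-cd34"
temp=${DEVLINKS#*by-label/}   # strip everything up to and including "by-label/"
DevLabel=${temp%% *}          # strip everything from the first space onwards
echo "$DevLabel"              # prints: BACKUP
```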

Finally, we exit the script with exit code 0.

That’s it. Easy, when you know how to do it. Hard, if you have to find out about all the parts using Google (which was as unhelpful as ever and “found” lots of unrelated stuff, even when I put the words I wanted it to look for in quotation marks 🙁).

Some caveats:

  • Scripts called by udev are restricted in what they are allowed to do. E.g. they usually cannot write to /tmp. It took me a while to figure that out, this answer on helped.
  • Also, sending an email didn’t work for me. Probably another restriction.
  • Writing to the system console is done by writing to /dev/console. (Google was only moderately helpful here again.)
Mar 01 2018

Sometimes you need a large file for testing purposes or just to take up space that should not be available on the file system.

There are several options on how to generate such a file on Linux:

  • The traditional method is using dd, setting if (the input file) to either /dev/zero or /dev/urandom.
  • A more modern method is using truncate, which generates sparse files, which may or may not be what you want.
  • An alternative modern method is using fallocate, which does not generate sparse files.

Let’s say you want to create a 500 GiB file:

Using dd and filling it with 0 is done like this:

 dd if=/dev/zero of=500gbfile bs=500M count=1024

Using truncate (which creates a sparse file that reads as all zeros but does not actually use that much space on disk) is done like this:

truncate -s 500G 500gbfile

Using fallocate (which actually allocates the space; the file reads as all zeros) is done like this:

fallocate -l 500G 500gbfile
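The difference between the sparse and the allocated variants is easy to see on a small scale; a quick demo with a 1 MiB file (filenames are made up):

```shell
#!/bin/sh
cd "$(mktemp -d)"
truncate -s 1M sparsefile                              # sparse: no blocks allocated
dd if=/dev/zero of=fullfile bs=1M count=1 2>/dev/null  # actually written zeros
du -k --apparent-size sparsefile fullfile              # same apparent size for both
du -k sparsefile fullfile                              # but the allocated space differs
```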

Source: This article on StackOverflow.

Feb 22 2018

Keeping track of changes in the Linux configuration can be a chore but sometimes it is vital to know what was changed. As a software developer I am used to using version control systems for source code, so why not use one for configuration files?

Enter etckeeper, a tool that does exactly that: it tracks changes to files in /etc (including access rights) in a git repository (alternatively it can be configured to use Mercurial, Bazaar or Darcs; unfortunately svn is not supported).

It hooks into apt to automatically track changes made by updates. Manual changes can be committed explicitly, but there is also a daily cron job that commits them automatically.
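Under the hood this is ordinary git usage; the idea can be sketched in a scratch directory standing in for /etc (the file name and config line are made up):

```shell
#!/bin/sh
set -e
cd "$(mktemp -d)"                  # stand-in for /etc
git init -q .
git config user.email demo@example.com && git config user.name demo
echo "PermitRootLogin no" > sshd_config
git add -A && git commit -qm "baseline"
echo "PermitRootLogin yes" >> sshd_config
git add -A && git commit -qm "changed sshd_config"
git log --oneline                  # the full history of the change
```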

Feb 16 2018

Let’s say you have a directory of backups looking like this:

 \-> user1\
          \-> [date1]_[time1]
          \-> [date2]_[time2]
          \-> some more sub directories with date and time in the name
 \-> user2\
          \-> [date3]_[time3]
          \-> [date4]_[time4]
          \-> some more sub directories with date and time in the name
 \-> some more user sub directories

Where [dateN] is the date of the backup starting with a 4 digit year, followed by a two digit month and day, e.g. 20160531.

Now, you run out of disk space and you want to delete the oldest backups, let’s say those from 2015 and 2016. How do you do that?

You could, of course, write a program or, if you are more of a scripting person, a script that

  1. recurses through the first level of sub directories
  2. looks for sub directories starting with 2015 or 2016
  3. deletes these recursively

Or, you could combine the shell commands find and rm:

find . -maxdepth 2 -mindepth 2 -type d -name "2015*" -exec rm -r {} \;
find . -maxdepth 2 -mindepth 2 -type d -name "2016*" -exec rm -r {} \;

find searches for files and directories that match the given query and does something for each entry found; in this case it calls the command rm. But let’s have a look at the specific commands above. They restrict the results with the following conditions:

  • "." (a dot) means: Start in the current directory
  • "-maxdepth 2" means: Recurse sub directories at most two levels deep
  • "-mindepth 2" means: Recurse sub directories at least two levels deep
  • "-type d" means: Only process directories (not files, devices, or links)
  • "-name "2015*"" means: Process only entries whose name matches the shell wildcard "2015*", i.e. starts with "2015"
  • "-exec rm -r {} \;" means: For each entry execute the command "rm -r {}", where {} is a placeholder for the current entry name.

If you want to test the find command without risking data loss, leave out the -exec part at the end. The find command will then simply print the entries it finds:

find . -maxdepth 2 -mindepth 2 -type d -name "2015*"
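Both year patterns can also be handled in a single pass with a shell character class, and "-exec … +" batches the rm calls instead of running one rm per directory. A demo on a throwaway fixture tree (all names are made up):

```shell
#!/bin/sh
set -e
cd "$(mktemp -d)"
mkdir -p user1/20151231_0100 user1/20170101_0200 user2/20160601_0300
# "201[56]*" matches names starting with 2015 or 2016; "+" batches the rm calls
find . -maxdepth 2 -mindepth 2 -type d -name "201[56]*" -exec rm -r {} +
find . -mindepth 2 -type d    # prints: ./user1/20170101_0200
```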
Feb 07 2018

Since the last time I looked up how to configure the Web Proxy, apparently somebody came up with WPAD – the Web Proxy Auto-Discovery Protocol (Or maybe I simply missed it).

The idea is quite neat: in the DHCP server, add an entry telling the browser a URL from which it can download the proxy configuration. Alternatively, the name server can have a wpad entry which is then used to request the configuration.

I decided I’d go with the dhcp entry (but just in case I also added a wpad entry to bind). So, what exactly needs to be done?

1. Add an option local-pac-server with number 252 and type text to the dhcp configuration and set its value to the download url of a wpad.dat file. For the isc-dhcp-server this can be done with an entry like this in the /etc/dhcp/dhcpd.conf file:

option local-pac-server code 252 = text;
option local-pac-server "";

2. Set up a web server on that computer that serves the file. I didn’t want to install a full-blown Apache for this, so I went with mini_httpd. I simply installed the Ubuntu package for it and made two changes to the configuration file /etc/mini-httpd.conf:

# was:
# host=localhost
host=[the IP address]
# was
# data_dir=/var/www/http

3. Create a wpad.dat file in the root of the data_dir like this:

function FindProxyForURL(url, host) {
return "PROXY; DIRECT";
}
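A wpad.dat can do more than return a constant; PAC files are JavaScript with a few helper functions such as isInNet() available. A slightly larger sketch (the proxy address 192.168.1.10:3128 and the local subnet are made up):

```javascript
function FindProxyForURL(url, host) {
    // Local traffic goes direct (hypothetical subnet)
    if (isInNet(host, "192.168.0.0", "255.255.0.0"))
        return "DIRECT";
    // Everything else via the (hypothetical) proxy, with direct as fallback
    return "PROXY 192.168.1.10:3128; DIRECT";
}
```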

4. Wonder WhyTF this was so complicated:

  1. Why not simply configure the string “PROXY; DIRECT” in the dhcp server?
  2. Why return a text file with a JavaScript function rather than a text file with just the string that JavaScript function returns?

This worked fine even for the Avira Antivirus updater.

Feb 07 2018

In olden times, we would add entries for name resolution to /etc/resolv.conf and be done with it. Nowadays, with these newfangled scripts that change the configuration all the time, this file simply gets overwritten by a tool/library called resolvconf, so if we want to add something permanently to it, we must do it somewhere else.

Fortunately it’s quite easy, once you know where:
resolvconf uses the directory /etc/resolvconf/resolv.conf.d as the base for its entries. It usually contains three files:

  • head
  • base
  • tail

(And sometimes a file called original which contains the original contents of /etc/resolv.conf before resolvconf was installed. This file is ignored.)

To add something permanently, just edit the file “head” and be done with it.

But wait, there is more:
Most likely you don’t want to add the information to “head” but rather to the iface entries in the /etc/network/interfaces file. There you can add one or more name servers like this:

iface eth0 inet static
    dns-nameservers 8.8.8.8 8.8.4.4
Feb 03 2018

(Disclaimer: I am by no means an expert with XenServer. So please don’t take anything you read here for granted. It’s my own experience and what I found in documentation and online.)

Switching a XenServer Linux VM from hardware assisted virtualization to paravirtualization nowadays is quite simple, since most Linux distributions already come with a Xen aware kernel (Ubuntu 16.04 definitely does, but you should check). So, most of what is described here is no longer necessary.

The very first thing to do, is this: Take a snapshot of your working HVM. So, if anything goes wrong, you can easily revert to the snapshot. If the VM is too large (e.g. you just virtualized a large file server), you can get away with temporarily detaching the data disks from the VM, take the snapshot and re-attach the data disks again.

Now, if you haven’t already done that, install the XenTools as described in point 30 and following of the article linked above:

  1. Attach the “guesttools.iso” image to the virtual DVD drive of your VM
  2. Mount it from the console (or via ssh)

    mount /dev/disk/by-label/XenServer\\x20Tools /mnt/cdrom
  3. Go to the directory named “Linux” and run the script
  4. Reboot to make sure the VM still works

Once you have verified that, get the boot parameters from the first menu entry in /boot/grub/grub.cfg

You need:

  • The kernel (or make sure that /vmlinuz points to the right kernel)
  • The ramdisk (or make sure that /initrd.img points to the right ramdisk)
  • The boot parameters for the kernel, in particular the UUID of the root partition

You should verify that the UUID is correct, otherwise you will end up with an unbootable system!
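One way to sanity-check it is to pull the UUID out of the grub.cfg kernel line and compare it with what blkid reports for the root device on the VM. The extraction itself looks like this (the sample line, kernel version, and UUID are all made up):

```shell
#!/bin/sh
# A grub.cfg kernel line as it might appear (fabricated for the demo):
line='linux /vmlinuz-4.4.0-116-generic root=UUID=0a1b2c3d-4e5f-6071-8293-a4b5c6d7e8f9 ro quiet'
# Capture the value after root=UUID= up to the next space:
uuid=$(printf '%s\n' "$line" | sed -n 's/.*root=UUID=\([^ ]*\).*/\1/p')
echo "$uuid"    # compare against the output of "blkid" for the root partition
```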

Shut down your VM.

Once you have all that, use either the script listed on the linked article or download my slightly modified version.

Log in to the console of the physical host on which the VM resides (I suggest using ssh) and execute the script.

It will ask you for the name of the VM to paravirtualize. If you are not sure, enter L to get a list.

After entering the name, you will be asked several questions. For most of them it should be safe to simply press enter and go with the default, but for one it isn’t:

Specify Kernel arguments (root=UUID=... ro quiet):

You must specify the kernel parameters, in particular the UUID of the root partition, here. If you don’t, your VM will not boot.

Once the script exits, everything is done. XenCenter should now show the VM as being in Virtualization mode “Paravirtualization (PV)”.

Boot the VM and enjoy.

What? It doesn’t boot? You get a grub error? OK, you did make a snapshot as I told you above, didn’t you? If yes, simply revert to that snapshot and try again, just in case you made a mistake. You did not make a snapshot? You’re an idiot. Yes, I mean that, I was an idiot too and I regretted it, why do you think I wrote the previous article Switching a XenServer VM from PVM back to HVM? Try the steps I list there and you might get lucky.

Jan 25 2018

If I want to clone a Linux system (or the boot partition/drive of any operating system), I usually use Clonezilla and make an image of the boot disk or boot partition. Unfortunately those image files can become quite large, and it is a pain in the lower back to restore them to a smaller hard disk than the original one.

This becomes an issue if you want to move such a system to a virtual machine where you want to keep the size of the boot vdisk to a minimum. So, what other options are there?

In Linux almost everything is a file and there isn’t much “magic” involved in the boot process, so why not take an existing Linux VM and simply synchronize the files? This won’t be perfect, of course, since the disk device names will change, so you will most likely end up with an unbootable target system. But we all know how to fix that, don’t we? Simply fix the grub configuration and edit fstab and we are done.

The command to synchronize the files over the network is rsync:

rsync -av --one-file-system --numeric-ids -X -H --acls --delete --sparse --exclude=proc --exclude=dev --exclude=var/log / IpOfTheTargetSystem:/

Where IpOfTheTargetSystem is the IP address of the target system. (And don’t forget the trailing colon and forward slash).

This must be executed as root on the source system. Make sure that ssh login as root works on the target system first, otherwise you will wonder what the problem is later.

Note that this will overwrite the /etc and /boot directories, which has the advantage of cloning your configuration but also the disadvantage mentioned above: you will most likely end up with an unbootable system. Also, notice the --delete switch? It will, without asking you, delete all files on the target system that do not exist on the source system.

Also, don’t just execute any command you find on the Internet! Who knows what nefarious purpose I have by posting it here?

(Actually this is mostly for me so I can look up all the switches I had to find out for it to do what I want.)

Jan 24 2018

The instructions on how to install Webmin on Debian (and thereby also Ubuntu) seem a bit outdated, because edits to the file /etc/apt/sources.list should be replaced by adding a file to the directory /etc/apt/sources.list.d/.

So, instead of adding

deb sarge contrib

to the file


create a new file


with that content and possibly a comment why you added it.

The rest seems to be up to date:

“You should also fetch and install my GPG key with which the repository is signed, with the commands:”

cd /root
apt-key add jcameron-key.asc

“You will now be able to install with the commands:”

apt-get update
apt-get install apt-transport-https
apt-get install webmin

“All dependencies should be resolved automatically.”

The reason why I installed Webmin was that updating a server from Ubuntu 14.04 to 16.04 broke the Webmin installation. I kept getting the error “module proc does not exist”. Google did not turn up anything useful for this so I decided to simply uninstall Webmin:

apt remove webmin
apt autoremove

And then I reinstalled it with the procedure described above. The error went away. I also got a new UI which will take a while to get used to.
