… even though you can open and read it fine in an editor:

You should check its Linux access permissions. If it is not marked as executable, this might be the cause.
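Checking this from a shell is a one-liner; a quick sketch (the path is the one from this example, adjust as needed):

```shell
# Report whether a file has the executable bit set for the current user.
f=/home/netlogon/logon.cmd   # path from this example; adjust as needed
if [ -x "$f" ]; then
  echo "$f is executable"
else
  echo "$f is NOT executable"
fi
```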

Change it with chmod like this:

root@server:/home/netlogon$ ls -la
total 12
drwxrwxr-x+  2 root root 4096 Apr 13 09:04 .
drwxr-xr-x  46 root root 4096 Mar  6 12:08 ..
-rw-rw-r--   1 root root 2535 Mar  6 14:32 logon.cmd
root@server:/home/netlogon$ chmod +x logon.cmd
root@server:/home/netlogon$ ls -la
total 12
drwxrwxr-x+  2 root root 4096 Apr 13 09:04 .
drwxr-xr-x  46 root root 4096 Mar  6 12:08 ..
-rwxrwxr-x   1 root root 2535 Mar  6 14:32 logon.cmd

The same goes for other executables on Samba shares. In my case this was the last known problem left from a recent server migration: It worked before, didn’t work after. Something changed with the Samba configuration, or maybe it was a change in Samba itself.

Recently SourceForge’s service has declined to a point where it gets really annoying. Basically every time I tried to commit a change to the svn repository of one of my projects, I ran into timeouts and other errors. I want to spend my time working on my projects, not convincing their infrastructure to accept my changes. So I went looking for alternatives and discovered that basically every hosting service nowadays supports git, some add Mercurial, and only very few also offer Subversion. Some claim to offer Subversion access to git repositories, but whenever I actually tried that, it turned out that it either didn’t work at all or had limitations. Usually the limitation was that svn:externals properties were not supported.

OK, this blog post was supposed to be about getting a dump of a remote svn repository, not about hosting services and their limitations…

The usual advice you find when you google for “svn dump repository” is to use svnadmin. Unfortunately it turns out that svnadmin only works for local repositories, so it is not suitable for dumping a repository on SourceForge. There are a few tools that claim to sync repositories, but none of them worked for me. Eventually I found a reference to svnrdump, which is included in TortoiseSVN and is exactly what I was looking for:

svnrdump dump http://svn.code.sf.net/p/dzlib/code > dzlib.svndump

creates a dump of the repository in the same format as svnadmin does.
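A dump created this way can be sanity-checked by loading it into a fresh local repository; a sketch (file and path names follow the example above, the check repository location is made up):

```shell
# Create an empty local repository and load the dump into it.
svnadmin create /tmp/check-repo
svnadmin load /tmp/check-repo < dzlib.svndump
# If the load succeeds, the history should be browsable:
svn log -l 5 file:///tmp/check-repo
```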
In theory it can also load that dump into a remote repository, but that requires that the target repository has been prepared for that operation. Unfortunately, even that operation timed out just now while trying to download revision 162 (of 744), so I’m back to square one.

Edit: I eventually succeeded with the above command. But, as Ondrej Kelle pointed out in his comment on my Google+ post, for SourceForge there is a simpler and more robust way: Create a local copy of the repository.

rsync -a -v svn.code.sf.net::p/dzlib/code/ .

It’s even documented in the SourceForge support documentation (I only found it after I knew of this option). It requires the rsync tool, which is not available under Windows by default, but hey, are there any developers who don’t have access to a Linux computer or VM?

Once you have got that local copy, you can simply use svnadmin to create a dump from it:

svnadmin dump --incremental --deltas path\to\repository > repository.dump

I added the options --incremental and --deltas after initially loading the dump into a fresh repository failed when it encountered the first binary file. With these options, the problem did not occur.

svnadmin load --file repository.dump path\to\new\repository

Note: The following failed for me under Windows 8.1:

type repository.dump | svnadmin load path\to\new\repository

Starting with Delphi 2007, EmBorCodera switched to msbuild for the build system. The newly introduced .dproj file used since then is a valid build script for msbuild, but unfortunately the format has changed between Delphi 2007 and 2009.
This means that there is a difference if you want to make command line builds and specify the build configuration:

With Delphi 2007, you use:

msbuild /target:rebuild /p:Configuration=Release %dprname%

You might also want to add the /p:DCC_Quiet=true option to reduce the number of empty lines in the output:

msbuild /target:rebuild /p:Configuration=Release /p:DCC_Quiet=true %dprname%

With Delphi 2009 and later, you use:

msbuild /target:rebuild /p:config=Release %dprname%

(Source: This answer on StackOverflow.)

Until today I wasn’t aware of this difference, so my automated GExperts builds for Delphi 2007 always used the build configuration that was selected in the IDE, which probably means that I have released debug builds rather than release builds for that Delphi version.

Have you ever wondered which functions of GExperts you have used the most? Or how often at all? How much time it has saved you? Now you can find out: I just added Usage Statistics to GExperts, which tells you exactly how often you have called each of the experts, in the current session and also in total (data between sessions is saved in the registry). Currently it can be accessed via a button in the Configuration dialog, but I am not sure yet where to put it.

So far, it has told me nothing new: My most used functions are, in descending order: But this is just after a few minutes of actually using GExperts after I added the statistics, so there might still be some surprises. I am wondering about the Code Proofreader, for example.

I plan to do a new release shortly (oh my, the last one is over a year old!), maybe even during the Easter holidays, but don’t hold your breath. In the meantime, you can simply compile your own GExperts dll.

A colleague of mine asked me today how this could be: Given this exception handler:

try
  // some code that calls methods
except
  on e: Exception do
    LogError(e.Message);
end;

How could e be nil? (And e.Message result in an Access Violation?)
It turned out to be an error in one of the methods that were called:

if SomeCondition then
  raise Exception('some error message');

Can you spot the problem here? For whatever reason the .Create call is missing! So instead of an exception object, raise was working on a string typecast to Exception. Changing it as follows fixed the error:

if SomeCondition then
  raise Exception.Create('some error message');

We did a grep search for “raise Exception(” in our code base and found 4 more cases where this problem existed. 4 bugs fewer, probably still quite a few to go. But even worse: Grepping for “raise e[a-z]*(” turned up two more cases. One in my own dzlib (unit u_dzVclUtils), another in the jcl (that one has already been fixed, my copy is a bit dated).

Edit: In the comments to the corresponding Google+ post, David Hoyle pointed out another common mistake regarding exceptions: Forgetting to actually raise them:

if SomeCondition then
  Exception.Create('some error message');

So I did a grep for ” e[a-z]*\.Create\(” and found several: one in the (old) SynEdit version used in GExperts, several in Indy 10 and the JVCL (old versions again) and one in System.JSON.Types of the current Delphi 10.2.3 RTL (RSP-20192). None in my own code this time. 🙂

Parted is a flexible tool for working with partition tables under Linux. Unfortunately it sometimes seems rather stupid: For example, when you create a new partition, you may get the warning “The resulting partition is not properly aligned for best performance”. It could then of course proceed to suggest the proper alignment, but it doesn’t, so in theory you are left to figure out the right alignment yourself. Fortunately there is a simple way to get parted to do that for you anyway, as described e.g. in this blog post by Jari Turkia under “Attempt 4: The simple way”: Use percentages.
parted /dev/somedevice mkpart primary ext4 0% 100%

It took me a while to find that one again, so I made a blog post of it, so I can easily come back to it when I need it again.

In theory it is easy to detect when the user plugs a USB device into a Linux computer and notify him what was detected. In practice it’s still easy, as long as you know how to do it.

First thing to do is add a file to /etc/udev/rules.d. The file name should follow the convention of NN-SomeDescriptiveName.rules, where NN is a two digit number. In our case it should be one of the last scripts to execute, since by then all of the initialization by other scripts should be done, and also printing the name to the console is not the most important part of the initialization. So let’s go with

99-notify_user_of_usb-drive.rules

That file defines what kind of events we are interested in. In this case we are interested in the connection of a USB hard drive, so it looks like this:

ACTION=="add", SUBSYSTEM=="block", SUBSYSTEMS=="usb", KERNEL=="sd?1", RUN+="/usr/local/bin/usb-device-added.sh"

Note that the first 4 entries are conditions which are using C syntax, so there must be a double equal sign (took me nearly an hour to find out that I missed one; debugging these events is not easy).

• ACTION=="add": We want to know when a new device is being added
• SUBSYSTEM=="block": The device must be a block device (e.g. hard disk)
• SUBSYSTEMS=="usb": And it must be connected via USB
• KERNEL=="sd?1": And the device name must match /dev/sd?1, which means it must be the first partition on a disk that is accessed as SCSI (basically everything nowadays)

If these conditions are met, the last directive will be executed. It’s also C-like: It appends something to the RUN variable. In our case we want to call the script

/usr/local/bin/usb-device-added.sh

Once you have created this file, make sure to let udev know that you did that.

sudo /etc/init.d/udev restart

should work on most Linux distributions.
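On systemd-based systems the rules can also be reloaded without restarting the whole service; a sketch using udevadm:

```shell
# Reload udev rules without restarting the daemon.
sudo udevadm control --reload-rules
# Optionally replay "add" events for existing block devices:
sudo udevadm trigger --subsystem-match=block --action=add
```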
The first script should be simple, just to check whether it is actually being called:

#!/bin/bash
exit 15

All it does is exit with an exit code of 15. This will show up in /var/log/syslog, so we can check whether our script has been executed at all. Don’t forget to make it executable with

sudo chmod u+x /usr/local/bin/usb-device-added.sh

Once we are sure it does, we change it to do the real work:

#!/bin/bash
temp=${DEVLINKS#*/dev/disk/by-label/}
DevLabel=${temp%% *}
temp="${DEVNAME} (${DevLabel}) connected"
echo $temp | wall
echo $temp > /dev/console
exit 0

udev passes information about the device using many environment parameters. In our case we only want to know the device name and the partition label. The device name is easy: it is passed in $DEVNAME. The device label is trickier. I only found it in $DEVLINKS, which contains a list of /dev/* entries that link to the device, one of them being /dev/disk/by-label/[partition-label], which is the label of the partition of the device (and our device is the first partition, see the KERNEL filter above). So first we use a bit of bash magic to extract the label from $DEVLINKS, then we create the string $temp we want to write, and last we send it to all logged-on users using the wall command and, for good measure, to the local console.

Finally, we exit the script with exit code 0.
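The two parameter expansions can be tried directly in any bash shell; the example value for $DEVLINKS below is made up, the real one comes from udev:

```shell
# Made-up example value; the real $DEVLINKS is set by udev.
DEVLINKS="/dev/disk/by-id/usb-Foo_1 /dev/disk/by-label/BACKUP /dev/disk/by-uuid/1234"

# ${var#pattern} strips the shortest matching prefix:
temp=${DEVLINKS#*/dev/disk/by-label/}
echo "$temp"           # -> "BACKUP /dev/disk/by-uuid/1234"

# ${var%% pattern} strips the longest matching suffix (from the first space):
DevLabel=${temp%% *}
echo "$DevLabel"       # -> "BACKUP"
```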

That’s it. Easy when you know how to do it, hard if you have to find out about all the parts using Google (which was unhelpful as always and “found” lots of unrelated stuff even when I put the words I wanted it to look for in quotation marks 🙁).
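When a rule does not fire, watching udev events live helps narrow it down; a sketch (the device path is an example):

```shell
# Watch udev events together with the environment passed to RUN scripts.
sudo udevadm monitor --environment --udev
# Dry-run the rules against a specific device to see which rules match:
sudo udevadm test /sys/class/block/sdb1
```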

Some caveats:

• Scripts called by udev are restricted in what they are allowed to do. E.g. they usually cannot write to /tmp. It took me a while to figure that out, this answer on unix.stackexchange.com helped.
• Also, sending an email didn’t work for me. Probably another restriction.
• Writing to the system console is done by writing to /dev/console. (Google was only moderately helpful here again.)
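For debugging output that survives these restrictions, writing to syslog via logger tends to work; a sketch (the tag name is made up):

```shell
#!/bin/bash
# Debug variant of the script: log the udev environment via syslog
# instead of writing to /tmp (which udev-spawned scripts often cannot do).
logger -t usb-device-added "DEVNAME=${DEVNAME} DEVLINKS=${DEVLINKS}"
exit 0
```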

Sometimes you need a large file for testing purposes, or just to take up space so that it is no longer available on the file system.

There are several options on how to generate such a file on Linux:

• The traditional method is using dd, setting if= (the input file) to either /dev/zero or /dev/urandom.
• A more modern method is using truncate, which generates sparse files, which may or may not be what you want.
• An alternative modern method is using fallocate, which does not generate sparse files.

Let’s say you want to create a 500 GiB file:

Using dd and filling it with 0 is done like this:

 dd if=/dev/zero of=500gbfile bs=500M count=1024
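The size arithmetic: bs=500M with count=1024 writes 500 MiB × 1024 = 512000 MiB = 500 GiB. A scaled-down variant that is quick to try:

```shell
# Same pattern, scaled down: 1 MiB blocks x 16 = a 16 MiB file of zeroes.
dd if=/dev/zero of=testfile bs=1M count=16
stat -c %s testfile   # 16777216 bytes = 16 MiB
```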


Using truncate (the file will read as all zeroes but will not actually occupy that much space on disk) is done like this:

truncate -s 500G 500gbfile


Using fallocate (which actually allocates the space; the file also reads as all zeroes) is done like this:

fallocate -l 500G 500gbfile
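The practical difference between the sparse and the allocated variant shows up when comparing apparent size and actual disk usage; a scaled-down sketch:

```shell
# Both files report the same apparent size...
truncate -s 100M sparsefile
fallocate -l 100M fullfile
ls -lh sparsefile fullfile
# ...but only the fallocate'd one occupies real disk space:
du -h sparsefile fullfile   # sparsefile uses ~0, fullfile uses 100M
```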


Keeping track of changes in the Linux configuration can be a chore but sometimes it is vital to know what was changed. As a software developer I am used to using version control systems for source code, so why not use one for configuration files?

Enter etckeeper, a tool that does exactly that: It tracks changes to files in /etc (including access rights) in a git repository (alternatively it can be configured to use Mercurial, Bazaar or Darcs; unfortunately svn is not supported).

It hooks into apt to automatically track changes made by updates. Manual updates can be committed explicitly, but there is also a daily cron job that does it automatically.
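A typical setup and workflow might look like this (Debian/Ubuntu package names assumed; on most systems the package installation already initializes the repository):

```shell
# Install and initialize etckeeper (sketch; needs root).
sudo apt-get install etckeeper
sudo etckeeper init                       # usually done by the package install
sudo etckeeper commit "initial import of /etc"
# Later: see what the last recorded change to /etc was.
sudo git -C /etc log --stat -1
```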

… make lemonade. Yes, that would be nice, but unfortunately we are not talking about life and lemons here.

Today a coworker had a problem with Mozilla Thunderbird: She could no longer open attachments sent to her. Saving these attachments resulted in 0 byte files.

This nearly drove me nuts:

The email source looked fine to me. When she forwarded these emails to me, I could open and save these attachments fine, so it wasn’t the emails.

Also, attachments sent to a different account in her Thunderbird worked fine. So it wasn’t Thunderbird or the virus scanner or some kind of access rights problem.

As a test, I installed a new Thunderbird Portable for her account, guess what? It could also open these attachments.

I eventually solved the problem by removing that apparently defective account from Thunderbird and adding it again. Everything started to work normally.

There are days when you just have to accept that you don’t understand why it works now and why it didn’t work before. I definitely hate it when that happens.