This is just a note to myself about splitting a tar archive into multiple smaller files on the fly and joining them again. It is based on this answer to this question on Stack Overflow.
```
# create archives
$ tar cz . | split -b 100GiB - /mnt/8TB/backup.tgz_
```
This uses the tar command to create a gzip-compressed archive of all files in the directory tree starting at the current directory. The output is written to stdout.

That output is then piped into the split command, which splits it into files of 100 GiB each and writes them using the given prefix, appending aa, ab, ac and so on. The "-" tells split to read from stdin rather than from a file.
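With split's default suffix length of two, the parts end up named backup.tgz_aa, backup.tgz_ab, backup.tgz_ac, and so on. A quick way to check on them while the backup runs (not part of the original commands, just a convenience):

```
# list the parts written so far, with human-readable sizes
$ ls -lh /mnt/8TB/backup.tgz_*
```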
The amount of input data is massive and contains very many hard links (it's a daily dirvish backup dating back two years).
To restore from this backup, the following command will be used:
```
# uncompress
$ cat /mnt/8TB/backup.tgz_* | tar xz
```
This uses the cat command to send all the part files to stdout, implicitly concatenating them in the process; the shell expands the glob in lexicographic order, so the parts are joined in the right sequence. The result is then piped into tar again, which extracts it as a gzip-compressed archive into the current directory. (I have yet to try that, the backup is still running.)
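Not from the original note, but a sketch of how the parts could be sanity-checked before they are actually needed: tar's list mode reads and decompresses the entire stream without writing any files, so a corrupt or missing part should surface as an error here.

```
# read and decompress the whole concatenated stream without extracting;
# discard the listing so only errors from gzip/tar are shown
# (depending on the tar build, an explicit "-f -" may be needed)
$ cat /mnt/8TB/backup.tgz_* | tar tz > /dev/null
```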