In normal mode, bup takes an uncompressed tar archive and splits it internally. rsync-like deduplication kicks in automagically, so only the incremental changes are transferred instead of the whole archive.
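The effect of that deduplication can be illustrated with a toy sketch. bup actually finds chunk boundaries with a rolling checksum; the fixed-size chunks below are a simplification, and all file names are made up:

```shell
# Toy illustration of chunk-based deduplication: identical chunks hash
# identically, so only changed chunks need to be stored or sent again.
set -e
d=$(mktemp -d)
printf 'AAAABBBB' > "$d/v1"     # version 1 of a file
printf 'AAAACCCC' > "$d/v2"     # version 2: first half unchanged
split -b 4 "$d/v1" "$d/v1_"     # cut both versions into 4-byte chunks
split -b 4 "$d/v2" "$d/v2_"
# 4 chunks in total, but only 3 distinct hashes: the AAAA chunk is shared
unique_chunks=$(sha256sum "$d"/v1_* "$d"/v2_* | awk '{print $1}' | sort -u | wc -l)
echo "$unique_chunks"
rm -rf "$d"
```

With real bup the boundaries shift with the content, so insertions in the middle of a file still leave most chunks intact.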
simplebackup already has a hook for code to run after all files have been gathered into a temporary directory just before this directory is tarred and bzipped:
# These commands are executed after copying all files into the
# $WORKDIR, but before putting them into the backup archive.
# postcopy() is executed outside a possible $CHROOT.
# Apache logs get too big and nobody reads them anyway,
# so don't include them all in the backup:
# echo removing older apache logfiles
# rm "$WORKDIR"/var/log/apache/*.log.??.gz
# EXPERIMENTAL BUP MADNESS
tar -cvf - "$WORKDIR" | \
su - mitch -c "bup split -r yggdrasil.mitch.h.shuttle.de: -n $NAME -vv"
: # empty functions don't work
Now that was easy.
Note that I use su to switch the user for the bup process. The backup needs root permissions to reach every file, while bup uses ssh to connect to the remote system. I don't want root to run ssh with a pre-shared key, so I change the user. Combining both processes via a pipe is great because there are no problems with file permissions.
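A minimal stand-in for that pipeline shows why the permissions work out: the left side (root in the real setup) reads the files, while the right side only ever sees a tar byte stream on stdin and never opens the files itself. Here plain wc substitutes for "su - mitch -c 'bup split …'", and the paths are invented:

```shell
# The receiving process needs no access to the archived files,
# only to its stdin.
set -e
demo=$(mktemp -d)
mkdir -p "$demo/data"
echo 'hello' > "$demo/data/file.txt"
chmod 600 "$demo/data/file.txt"   # readable only by the archiving side
bytes=$(tar -cf - -C "$demo" data | wc -c)
echo "streamed $bytes bytes"
rm -rf "$demo"
```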
Also, the remote incremental bup backup is in addition to the normal locally stored .tar.gz that simplebackup produces. No need for any remote operation when I just accidentally delete one file.
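That is the point of keeping the local tarball: pulling a single file back out is one tar invocation, no remote round-trip needed. A sketch with made-up paths and file names:

```shell
# Restore one file from the local tarball without touching the
# remote bup repository.
set -e
work=$(mktemp -d)
mkdir -p "$work/root/etc"
echo 'setting=1' > "$work/root/etc/app.conf"
tar -czf "$work/backup.tar.gz" -C "$work/root" etc   # the nightly tarball
rm "$work/root/etc/app.conf"                         # the accident
tar -xzf "$work/backup.tar.gz" -C "$work/root" etc/app.conf
restored=$(cat "$work/root/etc/app.conf")
echo "$restored"
rm -rf "$work"
```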
My first full bup backup is currently running (and will be for the next few hours, thanks to slow DSL upload speeds).
Further runs will definitely be done less verbosely :)
Transferring full backups as huge tarballs over UUCP makes distribution easier (e.g. storing them on an external harddisk), but incrementals transfer much faster. I have not yet made up my mind which is better.