
Accessing your Backup Space

If you have a Legacy Virtual Machine or Dedicated Server with Bytemark, you may have access to some backup space. This backup space is guaranteed to be physically separate from your machine, and you can access it using the rsync protocol, the Samba protocol, or NFS (not recommended).
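As a sketch of the Samba route on Linux, the backup space can be attached with a CIFS mount. The share name and mount options below are assumptions based on the naming scheme described later in this article; substitute the details for your own backup space:

```shell
# Hypothetical example: mount the backup space over Samba/CIFS.
# "example" is an assumption -- use your own server name.
mkdir -p /mnt/backup
mount -t cifs //example.backup.bytemark.co.uk/example /mnt/backup -o guest
```

Because access is restricted by IP address, no password should be needed from your server itself.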

If you have a Cloud Server, you can add manual and scheduled backups using Bytemark Panel. See the Cloud Server Backups guide for more information.

Please note: We no longer provide backup space with servers. This article only applies to existing Bytemark customers who already have backup space.

If you are not sure how to use your backup space effectively, we have provided a simple example below showing how to back up your machine via rsync. If your server is running Windows rather than Linux, you can access your backup space by mapping a network drive to it.
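On Windows, the mapping can be done from the command prompt. This is a sketch; the share name is an assumption based on the naming scheme described below:

```
rem Hypothetical example: map the backup space to drive Z:
net use Z: \\example.backup.bytemark.co.uk\example
```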

Your backup space will usually be accessible at an address with the format:

servername.backup.bytemark.co.uk

So, if your server’s hostname is joebloggs.vm.bytemark.co.uk or joebloggs.dh.bytemark.co.uk, the address for your backup space is likely to be:

joebloggs.backup.bytemark.co.uk

If you have multiple servers, your backup spaces can be merged, with access granted to all of your servers.

Please note: To keep your backup space secure, access is restricted to your server’s IP address.

How to back up your server

Here, we’ll demonstrate how you can back up the important directories of your machine via rsync. The backup will contain the contents of the following directories:

/etc
/home
/root

Before we can back up to our rsync server, we must first create a temporary local backup. This temporary ‘tree’ will then be transferred to your backup space. There are many ways of generating your local backup, ranging from a simple shell script to a backup system such as backup2l. We have given two examples below.

A simple example

The following shell script will create a temporary tree for you, located beneath /var/backups/today.

#!/bin/sh
prefix=/var/backups/today
if [ ! -d "$prefix" ]; then
    mkdir -p "$prefix"
fi
for i in /etc /home /root ; do
    cp -R "$i" "$prefix/"
done

Now that we have created the backup locally, we can copy it via rsync with the following command (don’t forget to replace both instances of example with your server name):

rsync --delete -qazr /var/backups/today example.backup.bytemark.co.uk::example/

This will copy the contents of /var/backups/today to your backup space.
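To run this backup automatically, the script and rsync command could be scheduled with cron. The script path below is an assumption; adjust it to wherever you save the script, and replace example as before:

```
# Hypothetical /etc/cron.d entry: build the backup tree and copy it nightly at 3am
0 3 * * * root /usr/local/bin/make-backup-tree && rsync --delete -qazr /var/backups/today example.backup.bytemark.co.uk::example/
```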

Comprehensive example

backup2l is a fully-featured backup solution which is available to Debian & Ubuntu users. It is included and already configured in Symbiosis. You can read more about it at the backup2l website.

The following configuration file will back up your system to /var/backups/localhost, including all your MySQL databases. Once the backup is complete, it will be copied over to the remote system:

##################################################
# Configuration file for backup2l #
##################################################

# Define the backup2l version for which the configuration file is written.
# This way, future versions can automatically warn if the syntax has changed.
FOR_VERSION=1.4

##################################################
# Volume identification

# This is the prefix for all output files;
# multiple volumes can be handled by using different configuration files
VOLNAME="all"

##################################################
# Source files

# List of directories to make backups of.
# All paths MUST be absolute and start with a '/'!
SRCLIST=(/etc /root /srv /var/mail /usr/local /var/backups/mysql /var/www)

# The following expression specifies the files not to be archived.
# See the find(1) man page for further info. It is discouraged to
# use anything different from conditions (e. g. actions) as it may have
# unforeseeable side effects.

# This example skips all files and directories with a path name containing
# '.nobackup' and all .o files:
SKIPCOND=(-path "*.nobackup*" -o -name "*.o" -o -name "*~" )

# If you want to exclude several directories use the following expression:
# SKIPCOND=(-path '/path1' -o -path '/path1/*' -o -path '/path2' -o -path '/path2/*')

# NOTE: If you do not have anything to skip, use:
# SKIPCOND=(-false) # "SKIPCOND=()" does not work

##################################################
# Destination

# Mount point of backup device (optional)
#BACKUP_DEV="/disk2"

# Destination directory for backups;
# it must exist and must not be the top-level of BACKUP_DEV
BACKUP_DIR="/var/backups/localhost"

##################################################
# Backup parameters

# Number of levels of differential backups (1..9)
MAX_LEVEL=3

# Maximum number of differential backups per level (1..9)
MAX_PER_LEVEL=8

# Maximum number of full backups (1..8)
MAX_FULL=2

# For differential backups: number of generations to keep per level;
# old backups are removed such that at least GENERATIONS * MAX_PER_LEVEL
# recent versions are still available for the respective level
GENERATIONS=1

# If the following variable is 1, a check file is automatically generated
CREATE_CHECK_FILE=1

##################################################
# Pre-/Post-backup functions

# This user-defined bash function is executed before a backup is made
PRE_BACKUP ()
{
    echo " writing dpkg selections to /root/.dpkg-selections.log..."
    dpkg --get-selections | diff - /root/.dpkg-selections.log > /dev/null || dpkg --get-selections > /root/.dpkg-selections.log
    
    echo " dumping databases"
    mkdir -p /var/backups/mysql

    # get username + password from the Debian maintenance credentials
    user=$(grep user /etc/mysql/debian.cnf | awk '{print $3}' | head -n 1)
    pass=$(grep pass /etc/mysql/debian.cnf | awk '{print $3}' | head -n 1)

    for i in /var/lib/mysql/*/; do
        name=$(basename "$i")

        # do the dump
        mysqldump --user="$user" --password="$pass" "$name" | gzip > "/var/backups/mysql/$name.gz"
    done

}

# This user-defined bash function is executed after a backup is made
POST_BACKUP ()
{
    rsync --delete -qazr /var/backups/localhost example.backup.bytemark.co.uk::example/
}

##################################################
# Misc.

# Create a backup when invoked without arguments?
AUTORUN=0

# Size units
SIZE_UNITS="" # set to "B", "K", "M" or "G" to obtain unified units in summary list

# Archive driver for new backups (optional, default = "DRIVER_TAR_GZ")
# CREATE_DRIVER="DRIVER_MY_AFIOZ"

##################################################
# User-defined archive drivers (optional)

# This section demonstrates how user-defined archive drivers can be added.
# The example shows a modified version of the "afioz" driver with some additional parameters
# one may want to pass to afio in order to tune the speed, archive size, etc.
# An archive driver consists of a bash function named
# "DRIVER_<your-driver-name>" implementing the (sometimes simple) operations "-test", "-suffix",
# "-create", "-toc", and "-extract".

# If you do not want to write your own archive driver, you can remove the remainder of this file.

# USER_DRIVER_LIST="DRIVER_MY_AFIOZ" # uncomment to register the driver(s) below (optional)

DRIVER_MY_AFIOZ ()
{
    case $1 in
        -test)
            # This function should check whether all prerequisites are met, especially if all
            # required tools are installed. This prevents backup2l from failing in inconvenient
            # situations, e. g. during a backup or restore operation. If everything is ok, the
            # string "ok" should be returned. Everything else is interpreted as a failure.
            require_tools afio
            # The function 'require_tools' checks for the existence of all tools passed as
            # arguments. If one of the tools is not found by which(1), an error message is
            # displayed and the function does not return.
            echo "ok"
            ;;
        -suffix)
            # This function should return the suffix of backup archive files. If the driver
            # does not create a file (e. g. transfers the backup data immediately to a tape
            # or network device), an empty string has to be returned. backup2l uses this suffix
            # to select a driver for unpacking. If a user-configured driver supports the same
            # suffix as a built-in driver, the user driver is preferred (as in this case).
            echo "afioz"
            ;;
        -create) # Arguments: $2 = BID, $3 = archive file name, $4 = file list file
            # This function is called to create a backup file. The argument $3 is the full file
            # name of the archive file including path and suffix. $4 contains a list of files
            # (full pathname) to be backed up. Directories are not contained, they are handled
            # by backup2l directly without using the driver. All output to stderr should be
            # directed to stdout ("2>&1").
            afio -Zo -G 9 -M 30m -T 2k $3 < $4 2>&1
                # This line passes some additional options to afio (see afio(1)):
                # '-G 9' maximizes the compression by gzip.
                # '-M 30m' increases the size of the internal file buffer. Larger files have to
                # be compressed twice.
                # '-T 2k' prevents the compression of files smaller than 2k in order to save time.
            ;;
        -toc) # Arguments: $2 = BID, $3 = archive file name
            # This function is used to validate the correct generation of an archive file.
            # The output is compared to the list file passed to the '-create' function.
            # Any difference is reported as an error.
            afio -Zt $3 | sed 's#^#/#'
            # The sed command adds a leading slash to each entry.
            ;;
        -extract) # Arguments: $2 = BID, $3 = archive file name, $4 = file list file
            # This function is called by backup2l's restore procedure for each archive.
            # It is extremely important that only those files contained in $4 are restored.
            # Otherwise it may happen that files are overwritten by incorrect (e. g. older)
            # versions of the same file.
            afio -Zinw $4 $3 2>&1
            ;;
    esac
}

##################################################
# More sample archive drivers (optional)

# This is an unordered collection of drivers that may be useful for you,
# either to use them directly or to derive your own drivers.

# Here's a version of the standard DRIVER_TAR_GZ driver,
# modified to split the output archive file into multiple sections.
# (donated by Michael Moedt)
DRIVER_TAR_GZ_SPLIT ()
{
    case $1 in
        -test)
            require_tools tar split cat
            echo "ok"
            ;;
        -suffix)
            echo "tgz_split"
            ;;
        -create) # Arguments: $2 = BID, $3 = archive file name, $4 = file list file
            mkdir -p ${3}
            tar cz -T $4 --no-recursion | split --bytes=725100100 - ${3}/part_
            ;;
        -toc) # Arguments: $2 = BID, $3 = archive file name
            cat ${3}/part_* | tar tz | sed 's#^#/#'
            ;;
        -extract) # Arguments: $2 = BID, $3 = archive file name, $4 = file list file
            cat ${3}/part_* | tar xz --same-permission --same-owner -T $4 2>&1
            ;;
    esac
}

# This driver uses afio and bzip2, where bzip2 is invoked by afio.
# (donated by Carl Staelin)
DRIVER_MY_AFIOBZ2 ()
{
    case $1 in
        -test)
            require_tools afio bzip2
            echo "ok"
            ;;
        -suffix)
            echo "afio-bz2"
            ;;
        -create) # Arguments: $2 = BID, $3 = archive file name, $4 = file list file
            afio -z -1 m -P bzip2 -Q -9 -Z -M 50m -T 1k - <$4 >$3 2>&1
            # This line passes some additional options to afio (see afio(1)):
            # '-P bzip2' utilizes bzip2 as an external compressor
            # '-Q 9' maximizes the compression by bzip2.
            # '-M 50m' increases the size of the internal file buffer. Larger files have to
            # be compressed twice.
            # '-T 1k' prevents the compression of files smaller than 1k in order to save time.
            ;;
        -toc) # Arguments: $2 = BID, $3 = archive file name
            afio -t -Z -P bzip2 -Q -d - <$3 | sed 's#^#/#'
            # The sed command adds a leading slash to each entry.
            ;;
        -extract) # Arguments: $2 = BID, $3 = archive file name, $4 = file list file
            afio -Zinw $4 -P bzip2 -Q -d - <$3 2>&1
            ;;
    esac
}

# This driver uses afio and bzip2, such that the I/O stream is piped through bzip2.
# (donated by Carl Staelin)
DRIVER_MY_AFIO_BZ2 ()
{
    case $1 in
        -test)
            require_tools afio bzip2
            echo "ok"
            ;;
        -suffix)
            echo "afio.bz2"
            ;;
        -create) # Arguments: $2 = BID, $3 = archive file name, $4 = file list file
            afio -o - < $4 | bzip2 --best > $3 2>&1
            ;;
        -toc) # Arguments: $2 = BID, $3 = archive file name
            bzip2 -d < $3 | afio -t - | sed 's#^#/#'
            # The sed command adds a leading slash to each entry.
            ;;
        -extract) # Arguments: $2 = BID, $3 = archive file name, $4 = file list file
            bzip2 -d < $3 | afio -inw $4 - 2>&1
            ;;
    esac
}
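With a configuration like the one above saved as, say, /etc/backup2l.conf (the path is an assumption; use whatever location your installation expects), a backup run can be started manually:

```shell
# -c selects the configuration file, -b performs a backup
backup2l -c /etc/backup2l.conf -b
```

Since AUTORUN is set to 0 above, backup2l will only make a backup when explicitly invoked with -b.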
Updated on February 20, 2019
