Back up a running container over the network with ezvzdump


Ezvzdump

Ezvzdump (not to be confused with vzdump) is a shell script that, like vzdump, lets you dump out a running VE. The key differences are that ezvzdump can utilize past dumps to speed things up, and that a remote host can be specified, so backups and tar archiving can happen over the network.

The dump files it creates are compatible with those created by vzdump, so you still use `vzdump --restore` to restore them.
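For example, assuming the script has produced an archive at /backup/1000.tar (as in the default settings below), restoring it into VEID 1000 would look something like:

 vzdump --restore /backup/1000.tar 1000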


Download


#!/bin/bash
# 
# ezvzdump 
#
# Copyright (C) 2008 Alex Lance (alla at cyber.com.au)
# Sponsored by Silverband Pty. Ltd. 
# 
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by the Free
# Software Foundation, either version 3 of the License, or (at your option) any
# later version.
# 
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
# more details: http://www.gnu.org/licenses/gpl.txt
# 
#
# Instructions 
# ------------
#
# This script rsyncs a VE to a specified local directory, suspends the running
# VE, rsyncs again, and then resumes the VE. This creates a stable snapshot of
# the VE directories with minimal downtime.
#
# Once the VE has been dumped out locally, it is rsynced to a remote host. When
# the rsync has completed, a tar archive is created on the remote host. The tar
# archive is compatible with the vzdump tar format, so the VE tar file may be
# later restored with the `vzdump --restore` command.
#
# By tarring the files together on the remote host, the burden of creating
# the tar archive is taken away from the hardware node, and given to the remote
# host / backup server. This ensures that minimal additional CPU/disk resources
# are used on the machine that is running the VEs.
#
# This script runs slowly the first time you use it, but from then on it
# utilizes the locally and remotely stored snapshot directories so that the
# rsyncs complete quickly. The script does not wait for the remote tar process
# to complete; it simply kicks off the tar archive creation and then
# immediately continues.
#
# This script uses rsync and ssh, and assumes that you already have ssh keys
# set up between your hosts (see the example below). It was written because
# vzdump takes too long, does not utilize a local cache, and does not do
# network backups. This script also puts less strain on the hardware node, by
# finishing backups more quickly and making the remote host do the heavy
# lifting.
#
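#
# For example, one way to set up the ssh keys from the hardware node to the
# backup host (a one-off step using standard OpenSSH tools; adjust the user
# and hostname to suit your environment):
#
#   ssh-keygen -t rsa
#   ssh-copy-id root@somehost.example.com
#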


# This section contains variables that require user customisation. 
# NOTE: all directory names *must* end in a forward-slash. 
 
# This variable contains a space delimited list of VEID's to be backed up.
# You can use VEIDS="$(cd /vz/private/ && echo *)"  to back up all your VEs.
VEIDS="1000 2000 3000"              

VZ_CONF="/etc/vz/conf/"             # the path to your openvz $VEID.conf files
VZ_PRIVATE="/vz/private/"           # the path to the running VEs
LOCAL_DIR="/vz/ezvzdump/"           # the local rsync cache / destination directory 
 
# The remote host and path that this script will rsync the VEs to.
REMOTE_HOST="somehost.example.com"
REMOTE_DIR="/backup/"
 
# Default rsync flags (please note the potentially unsafe delete flags).
# You can also remove the v flag to get less verbose logging.
RSYNC_DEFAULT="rsync -ravH --delete-after --delete-excluded" 
RSYNC_EXCLUDE="/usr/portage /var/log"
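# As an illustration, for a VEID of 1000 the loop further below expands these
# excludes into:
#   --exclude=1000/usr/portage --exclude=1000/var/log
# (the paths are relative to the rsync transfer root, i.e. ${VZ_PRIVATE}).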

# Path to vzctl executable
VZCTL="vzctl"
 

# Nice debugging messages...
function e { 
  echo -e "$(date '+%F %T'): $1"
}
function die {
  e "Error: $1" >&2
  exit 1;
}

# Make sure all is sane
[ ! -d "${VZ_CONF}" ]    && die "\$VZ_CONF directory doesn't exist. ($VZ_CONF)"
[ ! -d "${VZ_PRIVATE}" ] && die "\$VZ_PRIVATE directory doesn't exist. ($VZ_PRIVATE)"
[ ! -d "${LOCAL_DIR}" ]  && die "\$LOCAL_DIR directory doesn't exist. ($LOCAL_DIR)"


# Loop through each VEID
for VEID in $VEIDS; do

  echo ""
  e "Beginning backup of VEID: $VEID";

  # Build up the --exclude string for the rsync command
  RSYNC="${RSYNC_DEFAULT}"
  for path in $RSYNC_EXCLUDE; do
    RSYNC+=" --exclude=${VEID}${path}"
  done;
   

  e "Commencing initial ${RSYNC} ${VZ_PRIVATE}${VEID} ${LOCAL_DIR}"
  [ ! -d "${VZ_PRIVATE}${VEID}" ] && die "\$VZ_PRIVATE\$VEID directory doesn't exist. (${VZ_PRIVATE}${VEID})"
  ${RSYNC} ${VZ_PRIVATE}${VEID} ${LOCAL_DIR}

  # If the VE is running, suspend, re-rsync and then resume it ...
  if [ -n "$(${VZCTL} status ${VEID} | grep running)" ]; then

    e "Suspending VEID: $VEID"
    ${VZCTL} chkpnt $VEID --suspend
  
    e "Commencing second pass rsync ..."
    ${RSYNC} ${VZ_PRIVATE}${VEID} ${LOCAL_DIR}
  
    e "Resuming VEID: $VEID"
    ${VZCTL} chkpnt $VEID --resume

    e "Done."

  else
    e "# # # Skipping suspend/re-rsync/resume, as the VEID: ${VEID} is not curently running."
  fi

 
  # Copy VE config files over into the VE storage/cache area
  if [ ! -d "${LOCAL_DIR}${VEID}/etc/vzdump" ]; then 
    e "Creating directory for openvz config files: mkdir ${LOCAL_DIR}${VEID}/etc/vzdump"
    mkdir ${LOCAL_DIR}${VEID}/etc/vzdump
  fi

  e "Copying main config file: cp ${VZ_CONF}${VEID}.conf ${LOCAL_DIR}${VEID}/etc/vzdump/vps.conf"
  [ ! -f "${VZ_CONF}${VEID}.conf" ] && die "Unable to find ${VZ_CONF}${VEID}.conf"
  cp ${VZ_CONF}${VEID}.conf ${LOCAL_DIR}${VEID}/etc/vzdump/vps.conf

  for ext in start stop mount umount; do
    if [ -f "${VZ_CONF}${VEID}.${ext}" ]; then
      e "Copying other config file: cp ${VZ_CONF}${VEID}.${ext} ${LOCAL_DIR}${VEID}/etc/vzdump/vps.${ext}"
      cp ${VZ_CONF}${VEID}.${ext} ${LOCAL_DIR}${VEID}/etc/vzdump/vps.${ext}
    fi
  done;

  # Run the remote rsync
  if [ -n "${REMOTE_HOST}" ] && [ -n "${REMOTE_DIR}" ]; then
    e "Commencing remote ${RSYNC} ${LOCAL_DIR}${VEID} ${REMOTE_HOST}:${REMOTE_DIR}${VEID}"
    ${RSYNC} ${LOCAL_DIR}${VEID}/ ${REMOTE_HOST}:${REMOTE_DIR}${VEID}/
  fi

  # Move old $VEID.tar archive to $VEID.tar.backup
  e "Checking for existing file ${REMOTE_HOST}:${REMOTE_DIR}${VEID}.tar (and moving it to .backup if exists)"
  ssh ${REMOTE_HOST} "[ -f ${REMOTE_DIR}${VEID}.tar ] && mv -f ${REMOTE_DIR}${VEID}.tar ${REMOTE_DIR}${VEID}.tar.backup"

  # Create a remote tar archive - note you can remove the ampersand from the end if you
  # don't want multiple tar processes running on the remote host simultaneously.
  e "Making a tar archive on remote host (this process will run in the background on the remote host)."
  ssh ${REMOTE_HOST} "tar cf ${REMOTE_DIR}${VEID}.tar -C ${REMOTE_DIR}${VEID} ./ 2>/dev/null " &
  
  e "Done."

done;
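
A typical way to run ezvzdump unattended is from cron on the hardware node. For example (an illustrative crontab entry; the install path /usr/local/sbin/ezvzdump and the 2:30am schedule are assumptions, not part of the script):

 30 2 * * * /usr/local/sbin/ezvzdump >> /var/log/ezvzdump.log 2>&1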