Affa

Warning.png Work in Progress:
This page is a work in progress. The contents of this page may be in flux; please have a look at the page history to see the list of changes.



PythonIcon.png Skill level: Advanced
The instructions on this page may require deviations from standard procedures. A good understanding of Linux and Koozali SME Server is recommended.


Warning.png Warning:
Affa version 2.x code is deprecated and unmaintained. smeserver-affa has a backlog of 32 open bugs (see the link below), and a number of unpredictable problems have been reported after a restore or rise, such as missing groups, service crashes and CGI errors.


October 2014 - Some work has recently been carried out on smeserver-affa v2 to fix some of the bugs, see smeserver-affa Bugs. A new Affa version 3 is available on Sourceforge. Additional features helpful for SME have been ported by Arnaud. Keep in mind that Affa 3 has not been fully tested yet. Please also check SME server Affa v3 development.




Maintainer

Development and maintenance of Affa for SME Server (Affa version 2.x) by the original maintainer has been discontinued.

Description

The main purpose of the Affa package is to turn a SME Server into a dedicated backup box in a few minutes. Affa backs up as many SME servers as you like, or any other servers that have sshd running and rsync installed. Once configured, Affa runs reliably unattended and sends warning messages in case of errors.

All backup archives are full backups, as Affa makes use of the hardlink technique. Therefore a new full backup only needs disk space for the differences plus the filesystem overhead for the hardlinks and directories (which is typically 2-3%).
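The effect of the hardlink technique can be seen with a generic shell illustration (this is not Affa's actual code, just a sketch of the underlying idea):

mkdir -p demo/snap.1 && echo "payload" > demo/snap.1/file
cp -al demo/snap.1 demo/snap.0      # second "full" snapshot; files are hard links, not copies
du -sh demo/snap.0 demo/snap.1      # the second snapshot consumes almost no additional space
ls -li demo/snap.*/file             # identical inode numbers = one physical copy of the data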

Affa is based on the rsync program and supports the rsync --compress option. This allows you to run backups over the internet or a VPN. A typical setup is one or more Affa backup servers placed in different locations, which back up the production server(s) over the VPN.

A special feature is the rise option, which allows you to rise the backup server to your production server from a backup archive in case of a total loss of your production server. The rise is executed within an extremely short time, even with huge amounts of data. The rise feature uses hardlinks and therefore does not use up additional disk space.

Affa is a command line tool for system administrators and is intentionally designed without a GUI. Therefore it can be efficiently managed on the console and over slow internet connections.

Note: This document refers to Affa version 2, which is no longer maintained. Information on Affa 3 will be added here gradually once it has been tested more thoroughly.

Affa features at a glance

  • Affa is secure: All transfers and connections are made by using the ssh protocol with public/private key authentication
  • Periodically runs unattended full backups. Only differences to the previous full backup are transferred over the network
  • Using rsync with optional bandwidth limit and compression allows backups over the internet
  • Uses hardlink technique, i.e. physical disk space only needed for the differences between two full backups
  • Keeps a configurable number of scheduled, daily, weekly, monthly and yearly full backup archives
  • The archives are browseable. No need to unpack them first.
  • Archives can be mapped to a Samba share.
  • Backup jobs are started by the cron daemon
  • Interrupted jobs continue with already transferred data when restarted
  • Backs up the default e-smith directories and files when the property SMEServer is set to yes
  • Additional directories and files can be included
  • Directories and files can be excluded from the backup
  • Non-SME server linux systems can be backed up by setting the SMEServer property to no and using an include list
  • In ESXi mode, running virtual machines can be backed up. See Backup of ESXi Virtual Machines using Affa
  • Configurable nice level for rsync processes on the backup and source server
  • Optional run of custom programs before and after a job run (e.g. running tape backup)
  • Checks the disk space left after a job run with warning levels strict, normal or risky
  • Extensive checking of failure conditions
  • Sends failure messages to a configurable list of email addresses
  • Sends a warning message, if the backup server runs out of disk space
  • Installs an optional watchdog on the source server in case the backup server fails (SME Server only)
  • Watchdog sends warning, if an expected backup did not run (SME Server only)
  • Watchdog sends a daily reminder message, if the error continues unchecked (SME Server only)
  • Option to display current status of all jobs showing times of last and next run, size and disk usage
  • Status can be mailed on a daily, weekly or monthly schedule
  • Option to display all existing archives of a job showing date, number of files, size and bytes transferred from the source
  • Option to send (and revoke) the public key to the source server (SME Server and ESXi only)
  • Option to rise the backup server to a production server from a backup (SME Server only)
  • The rise feature does not physically move data and therefore is extremely fast and needs (almost) no extra disk space
  • Rise option can be run remotely as the NIC driver configuration of the backup server is preserved
  • Compares installed RPMs on source with backup server. Sends warning message, if not in sync
  • Undo rise option to restore the backup server
  • Configurable via an e-smith-style db, with one record for each job and a default record for all jobs
  • Logs to /var/log/affa/JOB.log and /var/log/affa/affa.log with optional debug switch for higher verbosity
  • Log files are rotated weekly, with 5 logs kept

Installation or Update of Affa 2

Smecontribs for sme8

yum install --enablerepo=smecontribs smeserver-affa

The latest test version can be found here: smeserver-affa-2.0.0-rc6.el5.sme.noarch.rpm. Note that this RPM is NOT signed.

Please post bug reports in the bug tracker

When you have installed Affa for the first time, run the following command to initialize the Affa database

affa --make-cronjobs

and log out and log back in to the console to put the bash auto-completion (TAB key) into effect.

Obsolete way to install

Important.png Note:
SME7 is now EOL and has been moved to the obsolete tree

SME7 version: smeserver-affa-2.0.0-rc4.noarch.rpm

wget http://mirror.contribs.org/smeserver/contribs/michaelw/sme7/Affa2/smeserver-affa-2.0.0-rc4.noarch.rpm
wget http://mirror.contribs.org/smeserver/contribs/michaelw/sme7/Affa2/perl-Compress-Bzip2-2.09-1.2.el4.rf.i386.rpm
/usr/bin/yum --enablerepo=smecontribs localinstall \
smeserver-affa-2.0.0-rc4.noarch.rpm \
perl-Compress-Bzip2-2.09-1.2.el4.rf.i386.rpm

SME8 version: SME8 is based on CentOS5, therefore use the perl-Compress-Bzip2 el5 rpm:

wget http://mirror.contribs.org/releases/8/smecontribs/i386/repoview/smeserver-affa.html

For SME8 32bit:

wget http://apt.sw.be/redhat/el5/en/i386/rpmforge/RPMS/perl-Compress-Bzip2-2.09-1.2.el5.rf.i386.rpm
/usr/bin/yum --enablerepo=smecontribs localinstall \
smeserver-affa-2.0.0-rc4.noarch.rpm \
perl-Compress-Bzip2-2.09-1.2.el5.rf.i386.rpm

For SME8 64bit:

wget http://apt.sw.be/redhat/el5/en/x86_64/rpmforge/RPMS/perl-Compress-Bzip2-2.09-1.2.el5.rf.x86_64.rpm
/usr/bin/yum --enablerepo=smecontribs localinstall \
smeserver-affa-2.0.0-rc4.noarch.rpm \
perl-Compress-Bzip2-2.09-1.2.el5.rf.x86_64.rpm

When you have installed Affa for the first time, run the following command to initialize the Affa database

affa --make-cronjobs

and log out and log back in to the console to put the bash auto-completion (TAB key) into effect.

Quick start example for Affa 2

You have a SME 7 production server with hostname 'prodbox' and IP 10.200.48.1.
Set up a second SME 7 box as your backup server with hostname 'affabox' and IP 10.200.48.2.

  1. log into the 'affabox' and install the packages as described above.
  2. copy the config helper script sample cp /usr/lib/affa/jobconfig-sample.pl /root/prodbox-job.pl
  3. edit /root/prodbox-job.pl and set my $jobname='prodbox'; and 'remoteHostName'=>'10.200.48.1', (see the sketch after this list). Optionally, if the remote server's ssh port is configured to e.g. 2222, also set 'sshPort'=>2222,
  4. save the file and run it: /root/prodbox-job.pl. This writes the configuration, i.e. it creates the database entries and sets up the cronjobs.
  5. generate the DSA keys and send the public key to the 'prodbox'
    Important.png Note:
    When initially doing this step, you will need to temporarily enable "Allow secure shell access using standard passwords" on the production server

    affa --send-key prodbox
    
  6. run the job manually affa --run prodbox
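For orientation, here is a minimal sketch of the lines to adjust in /root/prodbox-job.pl, assuming the sample script uses the same hash syntax as the job scripts shown further down this page (adapt host name, IP and port to your setup):

my $jobname='prodbox';

'remoteHostName'=>'10.200.48.1',
'sshPort'=>2222,      # only needed if sshd on the production server listens on a non-standard port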

Configuration

The configuration is stored in an e-smith style database. Use the db command to configure Affa. The jobname is the record key with the type 'job'.
To setup a new job with the name 'prodbox' enter:

db affa set prodbox job

then set the properties

db affa setprop prodbox remoteHostName 192.168.1.1
db affa setprop prodbox TimeSchedule '0030,0730,1130,1330,1730,2030'
db affa setprop prodbox Description 'My Production Server'
db affa setprop prodbox status enabled

and so on...

Alternatively you can use a script as described above in the 'Quick start' chapter.

To verify your work, type:

db affa show prodbox

Finally set up the cronjobs:

affa --make-cronjobs
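Multivalued properties such as Include and Exclude use an indexed syntax (compare the FAQ at the bottom of this page). A short sketch with hypothetical paths:

db affa setprop prodbox Include[0] '/opt/mydata'
db affa setprop prodbox Exclude[0] '/home/e-smith/files/ibays/scratch/files/tmp'
db affa show prodbox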


Installation of Affa 3

The following instructions assume that a fresh Affa 3 installation is made on a fresh, dedicated SME server which serves as the dedicated backup server ('affabox'). No jobs are migrated from Affa 2. If you use an existing SME server, simply skip the instructions on how to set up a fresh SME box.

Setup a dedicated machine with SME 8.1 or SME 9.0 from CDROM. Use the following example settings:

  1. Domain name: athome.xx (use your existing domain name)
  2. Host name: affabox (must not match an existing host name)
  3. IP address: 192.168.0.10 (must not match an existing IP address)
  4. Subnet: 255.255.255.0
  5. Role: Server only
  6. Gateway: 192.168.0.1 (use your existing gateway)
  7. DHCP: deactivate
  8. DNS server: 192.168.0.1 (use your existing DNS server)
  9. Activate changes: yes
  10. The fresh server will then restart with the above settings.

To update your server, log in as user root on the local console. Use the 'top' command to display running processes and wait until 'yum' has stopped running.

yum clean all
yum update

The fresh server will be updated. Afterwards run:

signal-event post-upgrade
signal-event reboot

The server will reboot. Afterwards it is ready for installation of Affa 3.

Login as root on local or remote SSH console of affabox.

To satisfy several dependencies, both the Dag repository and the stephdl repository have to be enabled before continuing.

First retrieve the required RPM package:

wget http://www.guedel.eu/phocadownload/uploads_AG/smeserver-affa-3.2.2.1-0.noarch.rpm

Then install it with both repositories enabled:

yum localinstall smeserver-affa-3.2.2.1-0.noarch.rpm --enablerepo=dag --enablerepo=stephdl

Now manually create the directory for the archive files:

mkdir /var/affa

Creation of backup jobs

Job configurations in Affa 3 are no longer stored in an e-smith style database; they are stored in configuration files located in /etc/affa. Create your configuration files in this directory, e.g.

touch /etc/affa/backup-smeserver.conf

where backup-smeserver is your jobname, and add content as described below.

Quick start example

You have a SME production server with hostname 'smeserver' and IP 192.168.0.2.
You have a second SME box as your backup server with hostname 'affabox' and IP 192.168.0.10.

Login to your 'affabox' as root and edit /etc/affa/backup-smeserver.conf. Using e.g. the nano editor, create the following example job configuration file for jobname backup-smeserver:

[backup-smeserver]
remoteHostName=192.168.0.2
SMEServer=yes
Watchdog=yes
RPMCheck=yes
ConnectionCheckTimeout=120
Debug=no
Description=Backup of 192.168.0.2 smeserver.athome.xx
DiskSpaceWarn=strict
RootDir=/var/affa
TimeSchedule=0630
localNice=15
remoteNice=15
rsync--inplace=yes
rsyncCompress=no
rsyncTimeout=900
scheduledKeep=1
dailyKeep=7
weeklyKeep=4
monthlyKeep=12
yearlyKeep=1
status=enabled

Then save your job configuration file.

Now check that your configuration is OK:

affa --configcheck

This should report no errors. Now create and send the key files to your production smeserver:

affa --send-key backup-smeserver

The following output should appear on the console:

Job backup-smeserver: Generating DSA keys...
Successfully created DSA key pair.
root@192.168.0.2's password: <Enter password of 192.168.0.2/smeserver and hit ENTER>
Public key sent to 192.168.0.2

Now run your job manually, both for test purposes and to accept the host key of the remote server:

affa --run backup-smeserver

The following output should appear on the console:

The authenticity of host 'backup-smeserver (192.168.0.2)' can't be established.
RSA key fingerprint is 3b..........65.
Are you sure you want to continue connecting (yes/no)? <yes>

Now your backup job should run for the first time. Depending on the volume of the files, this takes quite a while.

Once the job is done, check that the archive is available:

affa --list-archives backup-smeserver

or run the job a second time:

affa --run backup-smeserver

Note that you won't be asked for the password again. Check that the second archive is available too:

affa --list-archives backup-smeserver

From here on you can work as with Affa 2. Modify your config file as required and as described below. Automate the backup using the command affa --make-cronjobs (see below).
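To check that the job is actually scheduled, a minimal sketch (the cron file name /etc/cron.d/affa is taken from the clean-up section further down this page; treat the exact file name as an assumption):

affa --make-cronjobs        # writes the cron entries for all enabled jobs
cat /etc/cron.d/affa        # verify that an entry for backup-smeserver at 06:30 is present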

Job configuration properties

Note 1: The default values shown in this table are the Affa program defaults and are not to be confused with the preset values in the job configuration files, e.g. backup-smeserver.conf.

Note 2: Affa 3 for SME is a fork of Affa 3 for CentOS. It adds back a few options which had been removed. In general the configuration properties described here apply. stephdl added the following functions for SME:

  • The parameter “SMEServer”
  • The parameter and function “Watchdog”
  • The parameter and function “RPMCheck”
  • The functions “--rise” and “--undo-rise”

Consequently the list below should reproduce the list for the unforked Affa 3 version, plus the additional properties above. For details refer to stephdl's website.

Property Value Multivalue Default Description
remoteHostName FQHN or IP no none FQHN or IP of the source host (mandatory)
remoteUser account no root The user account to be used with all ssh logins. May be case sensitive, e.g. Administrator account on Windows
Description text string no none Any text that describes the job
TimeSchedule HHMM yes 2230 The time the job will run. Use exactly 4 digits, no colon, no point. Important: Using the proper format HHMM is essential. Badly formatted TimeSchedule will cause strange Perl errors.
status enabled or disabled no enabled When set to disabled, no cron entries will be made. You can still run a job manually.
Include full path yes none File(s) or directory(s) to be included in the backup.
Exclude full path yes none File(s) or directory(s) to be excluded from the backup.
RootDir full path no /var/affa Where to store the backup archives. Do not use /home/e-smith or /root, as these are included in the backup and therefore the rise option will not work! Recommended: /var/affa
scheduledKeep integer >= 1 no 1 how many of the scheduled backups to be kept
dailyKeep integer >= 0 no 7 how many of the daily backups to be kept
weeklyKeep integer >= 0 no 4 how many of the weekly backups to be kept
monthlyKeep integer >= 0 no 12 how many of the monthly backups to be kept
yearlyKeep integer >= 0 no 2 how many of the yearly backups to be kept
EmailAddress name@domain.com yes root comma separated list of mail addresses, where the messages should be sent to
Note: By default Affa only sends messages on errors, never on success (see property chattyOnSuccess).
RetryAttempts integer >= 0 no 4 When set to a value > 0, Affa re-runs a failed job RetryAttempts times with a delay of RetryAfter seconds.
RetryAfter seconds >= 0 no 900 When set to a value > 0, wait RetryAfter seconds before re-running the job after an error. Only applicable with RetryAttempts > 0.
RetryNotification yes or no no no When set to no, Affa does not send an error message when a job has failed and RetryAttempts is configured. An error message is only sent when the last attempt has failed.
NRPEtrigger hours no 24 NRPE reports a job as critical when the last successful run is older than NRPEtrigger hours. To exclude the job from monitoring, set it to a value < 0.
SambaShare yes or no no no Access to the job archives via CIFS protocol.
SambaValidUser local account yes affa User who has permission to access the job archives via the CIFS protocol.
preJobCommand, preJobCommandRemote path relative to /etc/affa/scripts/ yes none Scripts to be executed before a job run. The job name and type (scheduled, daily etc.) are passed as arguments to the program. The preJobCommandRemote scripts are copied to the remote server and then executed there. The scripts are executed in alphabetical order. Use a numerical prefix if you need a specific order e.g. 01scriptB, 02remotescriptA, 03scriptA. Use the included prescript-sample.pl and prescriptRemote-sample.pl as a starting point for own scripts.
postJobCommand, postJobCommandRemote path relative to /etc/affa/scripts/ yes none Scripts to be executed after a job run. The job name, the type (scheduled, daily etc.) and the exit code of the job run are passed as arguments to the program. The postJobCommandRemote scripts are copied to the remote server and then executed there. The scripts are executed in alphabetical order. Use a numerical prefix if you need a specific order e.g. 01scriptB, 02remotescriptA, 03scriptA. Use the included postscript-sample.pl and postscriptRemote-sample.pl as a starting point for own scripts.
dedup yes or no no no The purpose of deduplication is to remove duplicate files to save backup space. When set to 'yes', file deduplication is run after the synchronization has been completed. It looks for files that have identical content, user, group and permissions and replaces duplicates by hardlinks. Deduplication scans the just completed archive and the previous one, which usually is scheduled.0 and daily.0, or scheduled.0 and scheduled.1. Consider this scenario: a user has renamed directories or files. Rsync sees those as new ones and copies them. Deduplication finds the identical copies in the previous archive and replaces them by hardlinks. To use deduplication the Freedup program needs to be installed. Affa actually runs freedup -upg scheduled.0 <previous_archive>.

dedupKill yes or no no no When set to 'no' the job ignores affa --kill or affa --killall while deduplication is running. This is useful in the context of killAt, which is typically used to stop bandwidth utilisation.
sshPort service port no 22 When sshd on the remote host listens on a non-standard port, set the port here.
ConnectionCheckTimeout seconds no 120 Affa checks the ssh connection before the rsync process is started and exits with an error after the configured time if the host did not respond.
BandwidthLimit integer>=0 kilobytes per second no 0 Limits the data transfer rate. A value of zero specifies no limit.
rsyncTimeout seconds no 900 Rsync exits after the configured time if no data was transferred. This avoids infinitely hanging in case of a network error.
rsyncCompress yes or no no yes Compress the transferred data. May be useful with slow internet connections. Increases CPU load on remote and backup host.
rsync--inplace yes or no no yes Set to no if the rsync version on the remote host does not support this option.
rsync--modify-window integer >= 0 no 0 When comparing two timestamps, rsync treats the timestamps as being equal if they differ by no more than the modify-window value. This is normally 0 for an exact match. A value > 0 is useful if you can't get the clocks of the remote host and the Affa server in sync.
rsyncOptions string no none Additional option string to be passed to rsync
localNice -19...+19 no 0 run rsync local process niced.
remoteNice -19...+19 no 0 run rsync process on source niced.


SMEServer yes or no yes When set to yes, the default e-smith directories are automatically included and the property RPMCheck=yes can be used.
RPMCheck yes or no no Only applicable to jobs that back up a SME 7 server. Compares the installed packages of the source host with those of this Affa backup host and sends a message with a diff list if they are not in sync. This check is useful if you want to have the option to rise the backup server to a production server from a backup.
DiskSpaceWarn strict or normal or risky or none strict Runs a disk space check after a job has been completed. With level 'strict' a warning message will be sent if the available space is less than the size of the just completed backup. With level 'normal'/'risky' the message is sent if less than 50%/10% of the backup size is still available.

Note: When RootDir is a symbolic link to another filesystem the disk usage of the local filesystem rather than the linked filesystem is checked. Set the mountpoint as the RootDir to get correct checking.

Watchdog yes or no yes Only applicable to jobs that back up a SME 7 server.
When a job is started, Affa installs a watchdog script on the source in /etc/cron.d/, which sends a warning message if the next scheduled job (taken from the TimeSchedule property + 10 minutes) did not run. This guarantees that you will be notified even in case of an Affa server outage. The watchdog script sends a daily reminder message if the error continues. The next job run replaces the watchdog script with a new trigger time.


chattyOnSuccess integer >= 0 0 when set to a value>0, Affa sends a message on a successfully completed job run and decrements the chattyOnSuccess value. When the value has reached zero, Affa falls back to the default and only sends messages on errors.
AutomountDevice, AutomountPoint full path Device and mountpoint of the backup device (e.g. a USB disk). The device is automounted before a job starts and unmounted after job completion. Both properties must be set for the automount function to work.
AutomountOptions string An option string passed to the mount command
AutoUnmount yes or no yes When set to 'no' the automounted device stays mounted after the Affa run.
Debug yes or no no Set to yes to increase log verbosity.
status enabled or disabled enabled When set to disabled, no cron entries will be made. You can still run a job manually.
rsyncdMode yes or no no set to yes to connect to the rsync daemon on the remote host (instead of running rsync over ssh)
rsyncdModule string AFFA the rsyncd module name (only applicable with rsyncdMode=yes)
rsyncdUser string affa the username for authentication to the rsync daemon (only applicable with rsyncdMode=yes)
rsyncdPassword string the password for authentication to the rsync daemon (only applicable with rsyncdMode=yes)
chunkFiles string Filenames that Affa should chunk and compress. Multiple file names are to be separated by the slash (/) character, e.g. 'mysql.dump/pgsql.dump' chunks the two files mysql.dump and pgsql.dump. With ESXi=yes it is implicitly set to '*.vmdk'. See also command line options --chunk-archive and --unchunk-archive

Default configuration properties

All properties can be set as defaults in the [GlobalAffaConfig] section of a configuration file. This is useful when you set up many similar jobs.

Example: You want to set the property 'localNice' to 19 for all jobs. Then add the following section in one of your configuration files:

[GlobalAffaConfig]
localNice=19

Consequently you don't need to set this property for your individual jobs. Properties set in a job section override the defaults.

The special properties 'sendStatus' and 'globalStatus' are only applicable to the GlobalAffaConfig section. 'sendStatus' controls the status report sent by email and can be set to 'never', 'daily', 'weekly' or 'monthly'.
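For example, to have the status report mailed once a week, add the property to the global section (a sketch):

[GlobalAffaConfig]
sendStatus=weekly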

Global enable / disable

All jobs can be disabled for execution through the cronjobs by setting 'globalStatus' to 'disabled'.

[GlobalAffaConfig]
globalStatus=disabled

To re-enable the jobs, either delete the above configuration line or set:

globalStatus=jobs

which is the default value.

You can also set the value to

globalStatus=enabled

which enforces job execution through the cronjobs and overrides the status values specified in the individual job sections.

Usage and command line options

Important.png Note:
Options can be abbreviated to uniqueness, e.g. --mak is equal to --make-cronjobs


affa --run JOB

Starts a job run. Usually done by the cronjob.


affa --make-cronjobs

Configures the cronjobs as scheduled in the job records. By default this command will be executed by 'cronjobs' every 15 minutes, so you don't need to run it manually; you can also just wait at most 15 minutes for your updated job configurations to become effective.


affa --send-key JOB
affa --send-key --host=TARGETHOST [--port=PORT]

This first generates the DSA key for the Affa Server, if not already done. Then it sends the public key to the host 'remoteHostName' as configured in the record of job JOB and generates the job specific ssh known host entry.

Important.png Note:
When initially doing this step, you will need to temporarily enable "Allow secure shell access using standard passwords" on the production server.


Important.png Note:
By default, the --send-key option works for a SME Server as a remote server and for systems where the keys are stored in /root/.ssh/authorized_keys2 and the commands /bin/cat, /bin/touch, /bin/grep and /bin/mv are available.



affa --full-restore JOB [ARCHIVE]

Does a full restore of the standard and additionally included files and directories from the backup ARCHIVE onto the remote source server as defined in the JOB record. If ARCHIVE is not given, the archive 'scheduled.0' is used as the default. The full restore reconstructs the server as it was at the time of the backup. After the restore the source host reboots.


affa --rise [--all] JOB [ARCHIVE]

Runs a full restore on the Affa server (!) of all standard files and directories from the backup ARCHIVE of job JOB. In other words: after completion, the Affa box reboots as a clone of the source server. Ensure that the source server has been powered off before you reboot the Affa box, otherwise the network interface will not come up. This is important when you run --rise remotely. The --rise feature only works with SME servers and should only be used on dedicated backup servers.

With option --all, all files and directories of the archive as defined by the Include[] properties are restored. Files or directories with the same name on the Affa server will be overwritten and cannot be restored by an undo-rise. This should not be an issue on a dedicated Affa server which does not hold any other data. After a possible undo-rise, those additionally restored data must be removed manually.

Please note that the rise process backs up the Affa server itself before doing the restore from the archive. This backup is used by a possible undo-rise run to restore the Affa server. Only the standard files and directories are backed up. Data in non-standard locations (like /opt) is untouched and will still exist after the rise run. See also: Backup_server_config#Standard_backup_.26_restore_inclusions


affa --undo-rise

This feature reverts a risen Affa box to a backup server. After a reboot, all configured jobs will work again.


affa --unchunk-archive JOB ARCHIVE

Concatenates and uncompresses all chunked files in the archive ARCHIVE of job JOB to make the archive ready for a restore. Note that the most recent archive scheduled.0 is never chunked and can therefore be restored as it is, without prior unchunking.


affa --chunk-archive JOB ARCHIVE

Chunks and compresses all files that match the names in the property chunkFiles to save disk space. This should only be used as the reversal of --unchunk-archive. Chunks are stored as bzip2 files in a directory named FILENAME.affa-chunks with 8 levels of depth.


affa --create-backup-file JOB [ARCHIVE] [--outfile=FILE]

Creates a gzipped tar archive FILE from the ARCHIVE of job JOB. During creation the MD5 checksum is calculated, against which the written tar is then checked. The default ARCHIVE is scheduled.0 and the default FILE is ./smeserver.tgz. The checksum is written to FILE.md5sum.

A smeserver.tgz file stored on a USB disk can be used for a restore during a SME Server fresh install.
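A usage sketch, assuming a job named prodbox and a USB disk mounted at /mnt/usbdisk (both hypothetical):

affa --create-backup-file prodbox daily.3 --outfile=/mnt/usbdisk/smeserver.tgz
cat /mnt/usbdisk/smeserver.tgz.md5sum    # compare by hand with the output of: md5sum /mnt/usbdisk/smeserver.tgz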


affa --list-archives [--csv] JOB

Displays a table of all present archives of job JOB with date, number of files, size and bytes received.

Affa version 2.0.0-rc4 on affa1.mydomain.de (10.204.48.2)
+------------------------------------------------------------------------------+
| Job: primmail                                                                |
| Description: Mailserver Albstadt                                             |
| Directory: /var/affa/primmail/                                               |
| Hostname: 10.204.144.2                                                       |
| Email: admin@mydomain.de                                            |
+-------+-----------------------+-----------+----------------+--------+--------+
| Run   | Completion date       | Exec Time |          Files |   Size |  Recvd |
+-------+-----------------------+-----------+----------------+--------+--------+
| M 9   | Sun 2008 Mar 02 06:11 |  0h29m47s |         679010 |   80GB |   60MB |
| M 8   | Sun 2008 Mar 30 06:12 |  0h26m59s |         701683 |   83GB |   48MB |
| M 7   | Sun 2008 Apr 27 06:16 |  0h28m01s |         731332 |   87GB |   47MB |
| M 6   | Sun 2008 Jun 01 06:20 |  0h27m37s |         755529 |   90GB |   50MB |
| M 5   | Sun 2008 Jun 29 06:24 |  0h28m39s |         800200 |   94GB |   52MB |
| M 4   | Sun 2008 Jul 27 06:28 |  0h29m10s |         835398 |   99GB |   59MB |
| M 3   | Sun 2008 Aug 31 06:34 |  0h34m53s |         869409 |  101GB |   65MB |
| M 2   | Sun 2008 Sep 28 06:38 |  0h33m19s |         910889 |  106GB |   59MB |
| M 1   | Sun 2008 Oct 26 06:42 |  0h38m41s |         932627 |  110GB |   61MB |
| M 0   | Sun 2008 Nov 30 06:48 |  0h51m44s |         954090 |  116GB |   75MB |
+-------+-----------------------+-----------+----------------+--------+--------+
| W 3   | Sun 2008 Dec 14 06:50 |  0h53m34s |         962267 |  118GB |   68MB |
| W 2   | Sun 2008 Dec 21 06:52 |  0h53m11s |         974914 |  120GB |   69MB |
| W 1   | Wed 2008 Dec 24 06:52 |  0h48m58s |         974536 |  120GB |   64MB |
| W 0   | Sun 2008 Dec 28 06:58 |  0h39m01s |         977003 |  120GB |   61MB |
+-------+-----------------------+-----------+----------------+--------+--------+
| D 6   | Tue 2008 Dec 30 06:52 |  0h54m45s |         976872 |  120GB |   77MB |
| D 5   | Wed 2008 Dec 31 06:52 |  0h49m10s |         977764 |  120GB |   63MB |
| D 4   | Thu 2009 Jan 01 06:28 |  0h28m09s |         977879 |  120GB |   60MB |
| D 3   | Fri 2009 Jan 02 06:53 |  0h53m21s |         978144 |  120GB |   68MB |
| D 2   | Sat 2009 Jan 03 06:53 |  0h53m15s |         978072 |  120GB |   60MB |
| D 1   | Sun 2009 Jan 04 06:53 |  0h53m16s |         976478 |  120GB |   70MB |
| D 0   | Mon 2009 Jan 05 06:52 |  0h52m42s |         977062 |  120GB |   61MB |
+-------+-----------------------+-----------+----------------+--------+--------+
| S 1   | Mon 2009 Jan 05 17:57 |  0h27m48s |         976353 |  120GB |   98MB |
| S 0   | Tue 2009 Jan 06 06:52 |  0h52m55s |         976539 |  120GB |   70MB |
+-------+-----------------------+-----------+----------------+--------+--------+

With --csv, the output is in machine readable colon separated format.


affa --status [--csv]

Displays a table of all configured jobs with enable status, time of last and next run, size of the most recent archive, execution time of the last run and the number of scheduled (S), daily (D), weekly (W), monthly (M) and yearly (Y) archives. 'Last' shows 'failed' if a job did not run in the last 24 hours. For disabled jobs 'Last' therefore always shows 'failed' after 24 hours; to see the date and time of the last run of those jobs use the --list-archives option. The column 'Next' shows the time when the next run will be started, if 'Enabled' is 'yes'.

Affa version 2.0.0-rc4 on backup.mydomain.de (10.204.48.2)
+----------------+-----+-------+-----------+-------+-------+----------------+
| Job            | ENA |  Last | Exec Time |  Next |  Size | N of S,D,W,M,Y |
+----------------+-----+-------+-----------+-------+-------+----------------+
| erp-alb-rsyncd | yes | 00:20 |  0h00m14s | 00:20 |  46MB |  1, 7, 4,10, 0 |
| erp-fra-rsyncd | yes | 02:46 |  0h01m03s | 02:45 | 712MB |  1, 7, 4,10, 0 |
| esxi-TS-W2K    | yes | 00:37 |  4h22m08s | 20:15 |  60GB |  1, 7, 1, 0, 0 |
| esxi-W2KR2     | yes | 02:22 |  3h52m09s | 22:30 |  40GB |  1, 7, 1, 0, 0 |
| helpdesk       | yes | 00:25 |  0h00m47s | 00:25 | 117MB |  1, 7, 4,10, 0 |
| imageserv      | yes | running (pid 16477)               |  1, 7, 4,10, 0 |
| intraweb       | yes | 06:34 |  0h04m50s | 18:00 | 2.1GB |  2, 7, 4,10, 0 |
| pdcalb         | yes | 21:19 |  0h14m27s | 21:05 |  89GB |  1, 7, 4,10, 0 |
| pdcfra         | yes | 04:18 |  0h03m33s | 04:15 |  33GB |  1, 7, 4,10, 0 |
| primmail       | yes | 06:52 |  0h52m55s | 17:30 | 120GB |  2, 7, 4,10, 0 |
| rayofhope      | yes | 21:52 |  0h07m40s | 21:45 |  26GB |  1, 7, 4,10, 0 |
| smecrmpg       | yes | 22:35 |  0h00m21s | 22:35 |  70MB |  1, 7, 4,10, 0 |
| sozserv        | yes | 02:09 |  0h04m03s | 02:05 | 5.9GB |  1, 7, 4,10, 0 |
| wiki           | yes | 03:45 |  0h00m27s | 03:45 | 278MB |  1, 7, 4, 8, 0 |
+----------------+-----+-------+-----------+-------+-------+----------------+
| az32share      |  no | -     |  0h00m22s | 03:05 | 1.3GB |  1, 7, 4, 8, 0 |
+----------------+-----+-------+-----------+-------+-------+----------------+

With --csv, the output is printed in a machine readable colon separated format.


affa --disk-usage

Shows the current disk usage of all root dir filesystems

Affa version 2.0.0-rc4 on backup2.mydomain.de (10.204.0.52)
+------+--------+--------+----------------------------------------------------+
| Use% |   Used |  Avail | Root Dir                                           |
+------+--------+--------+----------------------------------------------------+
|  71% |  938GB |  365GB | /var/affa                                          |
|  20% |  194GB |  759GB | /mnt/affadev                                       |
+------+--------+--------+----------------------------------------------------+

With --csv, the output is printed in a machine readable colon separated format.


affa --show-schedule [--all]

Prints a 'graphical' timetable for all enabled jobs. The resolution is 30 minutes. An 'X' character marks the scheduled start times. The duration of the last run is marked with '=' characters.

Affa version 2.0.0-rc4 on backup.mydomain.de (10.204.48.2)
          TIME 0:00     4:00     8:00     12:00    16:00    20:00
erp-alb-rsyncd X------- -------- -------- -------- -------- --------
      helpdesk X------- -------- -------- -------- -------- --------
    esxi-W2KR2 =====--- -------- -------- -------- -------- -----X==
   esxi-TS-W2K ==------ -------- -------- -------- -------- X=======
     imageserv ---X---- -------- -------- -------- -------- --------
       sozserv ----X--- -------- -------- -------- -------- --------
erp-fra-rsyncd -----X-- -------- -------- -------- -------- --------
          wiki -------X -------- -------- -------- -------- --------
        pdcfra -------- X------- -------- -------- -------- --------
      primmail -------- ----X=-- -------- -------- ---X---- --------
      intraweb -------- -----X-- -------- -------- ----X--- --------
        pdcalb -------- -------- -------- -------- -------- --X-----
     rayofhope -------- -------- -------- -------- -------- ---X----
      smecrmpg -------- -------- -------- -------- -------- -----X--
1 disabled jobs not listed. Use --all to display.


affa --send-status

Sends the status table, the disk-usage and the archive list of all jobs to the email addresses configured in the 'DefaultAffaConfig' record. Used by the cronjob 'affa-status'.


affa --mailtest JOB

Sends a test email to the email addresses configured in the JOB record. With property Watchdog=yes, a test email is sent from the remote host, too. Use this to verify that your mail processing is functional.
Note: By default Affa only sends messages on errors, never on success (see property chattyOnSuccess).


affa --cleanup JOB

After you have lowered a keep value, e.g. scheduledKeep, archives with higher indices will no longer be shifted and would otherwise exist forever. This option finds these archives and deletes them.


affa --rename-job JOB NEWNAME

Renames the job JOB to NEWNAME including all database records and archive directories.


affa --move-archive JOB NEWROOTDIR

Moves the archive directory of job JOB to the rootdir NEWROOTDIR and adjusts the property RootDir. NEWROOTDIR must be a full path starting with a slash. As moving across filesystems (e.g. from an external USB drive to the local disk) is not possible, Affa uses a copy command in this case and deletes the source directory after that. Depending on the archive size, copying across filesystems can take a long time.


affa --delete-job [--revoke-key] JOB

Irreversibly deletes a job including all archives, configuration and report databases. When the --revoke-key option is given, the public key on the remote server is deleted as well.


affa --revoke-key JOB
affa --revoke-key --host=TARGETHOST [--port=PORT] [--remoteOS=cygwin]

Deletes the public DSA key on the remote server.

Important.png Note:
By default, the --revoke-key option works for a SME Server as a remote server and for systems where the keys are stored in /root/.ssh/authorized_keys2. With remoteOS=cygwin it works for a Cygwin/Windows remote server.



affa --check-connections

Checks the ssh login for all configured jobs. For jobs where the public key has not yet been sent, you are prompted for the password and the key is then sent.


affa --kill JOB

Terminates the running job JOB and all its child processes (rsync processes).

affa --debug

Enables verbose logging. Overrides job and global configurations.

Example setups

Dedicated backup server

  • Set up a job for every server you want to backup
  • Send the public keys to every server
affa --send-key JOBNAME
  • Check whether password-less logins are working
affa --check-connections
  • Check whether the scheduled jobs are evenly distributed over the day
affa --show-schedule
  • Create the cron jobs
affa --make-cronjobs
  • Check the status after 24 hours
affa --status


Backup of single ibays

Suppose you want to back up the ibays 'staff1' and 'staff2' on your production server with WAN IP 82.123.1.1 to an Affa server in a different location over the internet every night at 2:30am.

  • log into the Affa server and install the packages as described above.
  • copy the config helper script sample
cp /usr/lib/affa/jobconfig-sample.pl /root/ibay-staff-job.pl
  • edit /root/ibay-staff-job.pl and set
my $jobname='ibay-staff';

and

'remoteHostName'=>'82.123.1.1',
'TimeSchedule'=>'0230',
'SMEServer'=>'no',
'Include[0]'=>'/home/e-smith/files/ibays/staff1',
'Include[1]'=>'/home/e-smith/files/ibays/staff2',
  • write the configuration
/root/ibay-staff-job.pl
  • send the public key to the production server
affa --send-key ibay-staff
  • check next morning
affa --list-archives ibay-staff
affa --status
ls /var/affa/ibay-staff

Two production servers backup each other

You have two sites connected via a VPN and a SME Server running at each site. In that case you don't need a dedicated Affa backup server: both production servers can additionally act as Affa backup servers, backing up the server of the other site. Simply install Affa and configure a job that backs up the other one. You can use all Affa features except the rise feature. When using the rise feature the server becomes one of the backed-up systems, which is less useful in this scenario, as it would give you a running copy of the server of the other site while the server of this site is down.

To get redundancy and a faster restore you can configure a local backup to an external USB or NAS device.

Please pay attention that you do not back up the archives back to the other site. Set the RootDir property to a path which is not included in the SME default backup list. When leaving the RootDir property at its default '/var/affa', this is guaranteed.

Use Affa to backup to a NFS-mounted NAS or a local attached USB drive

You want to back up your SME production server with hostname 'prodbox' and IP 10.200.48.1 to a mounted filesystem instead of setting up a dedicated Affa box.

Setup NAS

You have a FreeNAS box with IP 10.200.48.2 up and running with the NFS service enabled. The disk is mounted at /mnt/affashare. You have authorized the IP address of your prodbox server to access the share /mnt/affashare.

  • log into the 'prodbox' and install the NFS packages
/usr/bin/yum install --enablerepo=smecontribs smeserver-nfs
  • now enable and start the portmapper service
config setprop portmap status enabled
service portmap start
  • mount the NFS share
mkdir -p /mnt/affadevice
mount 10.200.48.2:/mnt/affashare /mnt/affadevice


Alternatively setup a USB drive
  • log into the 'prodbox'
  • connect a USB hard disk to the USB bus. Now you must determine which device name the kernel has assigned to the drive. View /var/log/messages and search for 'Initializing USB Mass Storage driver'. A few lines below you'll find the name of the device. In this example it is sdh. Replace /dev/sdh with your device in the following instructions.
  • use the fdisk program to create a linux partition. Verify that this is really the attached USB drive before you continue!
fdisk /dev/sdh

You'll most likely find an existing vfat DOS partition, which you have to delete first. In the following we assume that you have created a single partition /dev/sdh1.

  • now format the drive with an ext3 filesystem
mkfs.ext3 /dev/sdh1
  • make the mount point
mkdir -p /mnt/affadevice
  • add the following line to the /etc/fstab
/dev/sdh1 /mnt/affadevice ext3 defaults
  • mount the drive
mount /mnt/affadevice
  • crosscheck your work using the df command
df
Copying an Affa USB hard drive archive to a new disk

Affa uses copious amounts of hard links to conserve disk space for its backups. If you are in the situation where you want to copy such a disk archive to a new (bigger) disk, you need to ensure that the hard links are copied correctly, or the destination copy may become significantly bigger than the total size of the source archive.

One way to copy across file systems (i.e. two different USB disks) and preserve the hard links is as follows:

  • mount both USB drives, but with different mount points, e.g. /media/backup1 and /media/backup2, and then:
mkdir /media/backup2/archive 
cd /media/backup1/archive
tar cpf - . | ( cd /media/backup2/archive && tar xpf - )

where archive is the name of the Affa job you want to copy to the other disk.
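An alternative copy method that also preserves hard links (not from the original article; it assumes a reasonably recent rsync, and note that -H can be slow and memory-hungry when there are very many hard links):

rsync -aH /media/backup1/archive/ /media/backup2/archive/   # -H preserves hard links across the copy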

Setup Affa

You want to run backups at 11:30, 15:30 and 19:30, and you want to keep the last 3 scheduled backups, 7 daily, 5 weekly, 12 monthly and 1 yearly backup.

  • log into the 'prodbox' and install the Affa packages as described above.
  • copy the config helper script sample
cp /usr/lib/affa/jobconfig-sample.pl /root/prodbox-job.pl
  • edit /root/prodbox-job.pl and set
my $jobname='prodbox';

and

'remoteHostName'=>'localhost',
'TimeSchedule'=>'1130,1530,1930',
'scheduledKeep'=>3,
'dailyKeep'=>7,
'weeklyKeep'=>5,
'monthlyKeep'=>12,
'yearlyKeep'=>1,
'RootDir'=>'/mnt/affadevice',

Review the other properties and change them to your needs.

  • write the configuration
/root/prodbox-job.pl
  • run the job manually
affa --run prodbox


Limitations

With this kind of setup you cannot use the Affa rise feature, as it requires the backup archive to be located on the same filesystem as the server installation. The rise option uses hardlinks, which do not work across filesystems.

Automount

Having the backup archives in the same filesystem is always a risk, which can be minimized by using the automount feature. The external filesystem is then only mounted during a job run.

In the NAS example set

'AutomountDevice'=>'10.200.48.2:/mnt/affashare',
'AutomountPoint'=>'/mnt/affadevice',

and skip the manual mount step above.

In the USB drive example set

'AutomountDevice'=>'/dev/sdc1',
'AutomountPoint'=>'/mnt/affadevice',

and skip the steps for creating the mount point, adding the fstab entry, mounting and checking with df.

The mount point will be created automatically if it does not exist.
To access the archive directory outside of a job run, you need to mount the device manually.
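The two snippets above use the Affa 2 helper-script syntax. In an Affa 3 job configuration file the same properties are plain key=value lines; a sketch for the NAS case, assuming the property names from the table above carry over unchanged:

AutomountDevice=10.200.48.2:/mnt/affashare
AutomountPoint=/mnt/affadevice
AutoUnmount=yes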

Restore

Restore single files or directories

Example 1: It is Tuesday, January 6th 2009, when user 'briedlin' asks you to restore the messages of his mailbox 'orders', which he accidentally deleted on Monday.

  • You first must check what backup archives are available. The jobname of this server backup is 'primmail'. To get a listing of all archives run
affa --list-archives primmail

(see the example listing in the chapter Affa#Usage_and_command_line_options).

  • Choose the daily.0 archive, which was created Monday night. Now restore the mailbox 'orders' by running the rsync command (note the trailing slash!) on the Affa backup server:
export RDIR=/home/e-smith/files/users/briedlin/Maildir/.orders/  # this variable is used to shorten the next command line
rsync -av /var/affa/primmail/daily.0/$RDIR 10.204.48.1:$RDIR

If the servers are configured to use a different ssh port eg 2222, then instead do:

export RDIR=/home/e-smith/files/users/briedlin/Maildir/.orders/  # this variable is used to shorten the next command line
rsync -av -e 'ssh -p 2222' /var/affa/primmail/daily.0/$RDIR 10.204.48.1:$RDIR


Example 2: A user has deleted the file orderform.pdf from ibay 'docs' on the server 10.204.48.1 and asks you to restore it.

  • You have searched and found the latest version of this file in weekly archive of job 'prodserv'.
  • To copy it back to the server 10.204.48.1 run on the Affa server
export RFILE=/home/e-smith/files/ibays/docs/files/orderform.pdf  # this variable is used to shorten the next command line
rsync -av /var/affa/prodserv/weekly.1/$RFILE 10.204.48.1:$RFILE
  • If the servers are configured to use a different ssh port eg 2222, then instead do:
export RFILE=/home/e-smith/files/ibays/docs/files/orderform.pdf  # this variable is used to shorten the next command line
rsync -av -e 'ssh -p 2222' /var/affa/prodserv/weekly.1/$RFILE 10.204.48.1:$RFILE


Example 3: Restoring a file with special characters in the path

Do not use shell variables to shorten the command; it would complicate things more than it would help. Quote the source and destination paths. In the destination path, escape blank characters with a double backslash and brackets with a single backslash. On the Affa backup server do:

rsync -av "/var/affa/fileshare/daily.4/home/e-smith/files/ibays/mechfiles/files/Valve Control (Design Gr)/VALVE LIST FOR ISSUED.xls" "192.168.1.7:/home/e-smith/files/ibays/mechfiles/files/Valve\\ Control\\ \(Design\\ Gr\)/VALVE\\ LIST\\ FOR\\ ISSUED.xls"

If the servers are configured to use a different ssh port eg 2222, then instead do:

rsync -av -e 'ssh -p 2222' "/var/affa/fileshare/daily.4/home/e-smith/files/ibays/mechfiles/files/Valve Control (Design Gr)/VALVE LIST FOR ISSUED.xls" "192.168.1.7:/home/e-smith/files/ibays/mechfiles/files/Valve\\ Control\\ \(Design\\ Gr\)/VALVE\\ LIST\\ FOR\\ ISSUED.xls"

Full restore

To run a full restore of user and configuration data run on the Affa server

affa --full-restore <JOB> [<ARCHIVE>]

This rsyncs the data from the backup ARCHIVE back to the 'remoteHostName' defined in the configuration of the job JOB.

Example: You have backed up your production server 'prodsrv' as job 'prodbox'. To restore from the latest backup run

affa --full-restore prodbox

To restore from the older archive daily.3 run

affa --full-restore prodbox daily.3


Warning.png Warning:
A full restore reconstructs the server as it was at the time of the backup. That means that all files created and all server configuration changes made after the backup will be lost. After the restore is done, the restored server reboots automatically.


Moving a SME 7 server installation to new hardware using the Affa rise feature

Please see this Howto: Moving SME to new Hardware


Restore from USB drive on new server

This tip comes from http://forums.contribs.org/index.php?topic=42412.0

Q) I have complete backups made by Affa stored on a USB hard drive connected to our Affa backup server. I need to restore an earlier monthly backup of our job "mailbackup" to a test server rather than back to the original system. The instructions explain how to rise the backup server to its current backup, but I want to restore a point from about a month ago, before some strange things happened, and I want to do it on a machine that is neither our backup server nor our production server. I tried to figure out how, but I am lost in the options. My goal is to do some testing.

A) On your test server set up a job 'mailbackup' identical to the one on your backup server, but set the property RootDir to /var/affa and the property status to disabled. Connect the USB drive and copy the archive of the job mailbackup to /var/affa (preserving the hard links, see the copy method described above). Then run affa --rise mailbackup ARCHIVE.

Remember that Affa places job setup scripts into the archive directories; in this case it is mailbackup-setup.pl. Copy it to your test server, change the RootDir and status properties and execute it. A condensed sketch of the whole procedure follows below.
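A condensed sketch of the procedure, assuming the USB drive is mounted at /media/usbdisk and using the hard-link-preserving tar copy described earlier on this page (all paths, the setup script location and the archive name are placeholders to adjust):

mkdir -p /var/affa/mailbackup
cd /media/usbdisk/mailbackup
tar cpf - . | ( cd /var/affa/mailbackup && tar xpf - )   # copy the job archive, preserving hard links
cp /var/affa/mailbackup/mailbackup-setup.pl /root/       # adjust the path if the script sits inside an archive subdirectory
# edit RootDir and status in /root/mailbackup-setup.pl, then write the job configuration:
/root/mailbackup-setup.pl
affa --rise mailbackup monthly.1                         # monthly.1 is a placeholder for the archive you want to test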

FAQ

What files and directories are included by default?

With SMEServer=no nothing at all.

With SMEServer=yes the SME default backup list. The following command displays the list:

perl -e 'use esmith::Backup;$b=new esmith::Backup;print join("\n",$b->restore_list)."\n"' 


Can I exclusively backup image files from a specific directory?

Yes. Assuming you want to back up all gif and jpg files from the directory /home/e-smith/files/ibays/pictures/files, use this configuration:

db affa setprop JOBNAME SMEServer no           # do not include SME Server default directories and files
db affa setprop JOBNAME Include[0] '/home/e-smith/files/ibays/pictures/files' # start searching here
db affa setprop JOBNAME Include[1] '*/'        # search the tree recursively
db affa setprop JOBNAME Include[2] '*.gif'     # copy files that match these patterns
db affa setprop JOBNAME Include[3] '*.jpg'  
db affa setprop JOBNAME Exclude[0] '*'         # exclude all others
db affa setprop JOBNAME rsyncOptions '-m'      # do not create empty folders


How can I move a backup archive from my production server in a remote branch office to the Affa server using a USB disk? I want to avoid downloading the first full backup over the internet.

Install Affa on your production server, then connect and mount the USB disk. Set up a job that writes the archive to the USB disk. After the job has run, connect the USB disk to your Affa server and set up the job there in the same way; use the setup script from the archive directory, you only need to change the remoteHostName property. Now use the --move-archive option to move the archive to the local disk. If not already done, send the public key to your production server. You're done. A sketch of the final steps on the Affa server follows below.
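A minimal sketch of those final steps, assuming the job is called 'branchbox' (hypothetical) and the USB disk is mounted at /media/usbdisk:

/media/usbdisk/branchbox/branchbox-setup.pl   # write the job configuration from the setup script (adjust remoteHostName first)
affa --move-archive branchbox /var/affa       # move the archive from the USB disk to the local root dir
affa --send-key branchbox                     # only needed if the public key has not been sent yet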


How do I back up two SME servers behind a firewall?

First you need to configure port forwardings for the ssh service on your firewall; use a non-standard port, e.g. 2200->22, for your second server. Then set up a job on your Affa server for each of your production servers and set the job property sshPort=2200 for the second server, as sketched below.
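A short sketch with hypothetical job names server1 and server2:

db affa setprop server2 sshPort 2200     # the second server is reached via the forwarded port 2200
db affa show server2                     # verify; server1 keeps the default port 22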

Uninstall

This removes the installed Affa package, all configuration data and all backup archives.

Run the following commands for each job:

affa --revoke-key JOBNAME
affa --delete-job JOBNAME

Verify that all jobs have been deleted

affa --status

Remove the Affa package

rpm -e smeserver-affa

and, if these packages are not needed by any other packages:

rpm -e perl-Filesys-DiskFree perl-Compress-Bzip2

Cleaning up

rm -f /etc/cron.d/affa /etc/cron.d/affa-status
rm -rf /home/e-smith/db/affa /home/e-smith/db/affa-report /home/e-smith/db/affa-rpmlist
rm -rf /var/affa /var/log/affa

Troubleshooting

Affa stops working after the prodserver was rebuilt.

Description:

  • Affa stopped working or is broken.
  • When affaserver tries to make an ssh connection to prodserver it fails.
  • In the /var/log/affa/jobname.log file you will see this message: "SSH connection to prodserver ip failed. Did you send the public key".
  • Resending the keys does not solve the problem.

Solution:

  • This is probably because the key for prodserver that is stored on the affaserver no longer matches the key of prodserver.
  • You need to remove the existing key so that the next ssh connection fetches the key from prodserver again.
  • From the command line, edit the /root/.ssh/known_hosts file and remove the line for prodserver. It starts with: prodserver ip ssh-rsa (a shortcut using ssh-keygen is sketched below).
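Instead of editing the file by hand, the stale entry can also be removed with ssh-keygen (a generic OpenSSH command, not Affa-specific; replace 10.200.48.1 with the IP address or host name of your prodserver as it appears in the file):

ssh-keygen -R 10.200.48.1 -f /root/.ssh/known_hosts   # removes all keys belonging to that host from the file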

Unable to send mail after a rise.

Description:

  • Sending mail fails after a rise on the backup server when backing up from SME 7 to SME 8.
  • The qpsmtpd log shows:
@400000005125f42409a8cd1c 4280 vvirus::clamav plugin (data_post): clamscan results: /var/spool/qpsmtpd/1361441818:4280:0: Access denied. ERROR
@400000005125f42409aa9624 4280 virus::clamav plugin (data_post): ClamAV error: /usr/bin/clamdscan --stdout  --config-file=/etc/clamd.conf --no-summary /var/spool/qpsmtpd/1361441818:4280:0 2>&1: 2
  • /var/spool permissions problem
#on the working server:
ls -al /var/spool
drwxr-s---   2 qpsmtpd clamav 4096 Jun 10 12:31 qpsmtpd

#On the broken (restored) server:
ls -al /var/spool
drwxr-x---   2 qpsmtpd clamav 4096 Jun 10 10:46 qpsmtpd

#On the fresh install before the restore:
ls -al /var/spool
drwxr-s---   2 qpsmtpd clamav 4096 Apr 17 13:47 qpsmtpd

see SME bug #4399


Solution: from the command line on the backup server, after the rise, run: chmod g+s /var/spool/qpsmtpd

Users cannot log in to the server - Important

Full restore or rise via Affa on SME 8: all seems OK, but users can't log in to the server (Samba, Horde, etc.). The user accounts seem to be there in the server-manager. See bugzilla:6983 and bugzilla:7687.

Warning.png Work in Progress:
You have to read the bug content, as there seem to be some other problems with this workaround.


This is due to an incomplete restore/rise of the files contained in the /etc folder, particularly the files containing users and passwords. First things first: if you complete an "affa --full-restore prodserv", do not delete your existing Affa backup. If you complete an "affa --rise --all prodserv", do not continue with "/bin/rm -rf /var/affa" per the wiki (or delete your archive in any other way).

  • IF YOU USED THE RISE FEATURE

Login as "root" on your production server and apply these command lines

mkdir /etc/backup
cp -r /etc/{passwd,samba,e-smith,group,gshadow,shadow,ssh,sudoers} /etc/backup
yes | cp -r /var/affa/"JOBNAME"/"ARCHIVE"/etc/* /etc/

And finally reconfigure the server

signal-event post-upgrade; signal-event reboot
  • IF YOU USED FULL-RESTORE (complete steps 1, 2 and 3)

Login as "root" to your AFFA server and apply these command lines

export RDIR=/var/affa/"JOBNAME"/"ARCHIVE"/etc/
rsync -av $RDIR "PRODSERV-IP":/etc/

On your production server "PRODSERV", run the following via ssh in a root terminal:

signal-event post-upgrade; signal-event reboot

Bug report

Affa is listed in the contribs section of the bugtracker. Please report all bugs, new feature requests and documentation issues there.

Additional information

For details of performance, changelog etc see Affa:Additional information
