https://wiki.koozali.org/api.php?action=feedcontributions&user=Arnaud&feedformat=atomSME Server - User contributions [en]2024-03-29T07:28:25ZUser contributionsMediaWiki 1.35.5https://wiki.koozali.org/index.php?title=OwnCloud&diff=34426OwnCloud2018-02-26T10:01:03Z<p>Arnaud: php opcache</p>
<hr />
<div>{{usefulnote}}<br />
{{Level|Medium}}<br />
{{Note box|Please note that OwnCloud has been forked by Nextcloud. It seems that the main developers have left OwnCloud and moved over to Nextcloud as per June 2016. Please see [http://nextcloud.com their website] for more info. Specific Nextcloud instructions will be on the [[Nextcloud]] wiki page.}}<br />
<br />
==About==<br />
<span style="float: right;"><br />
[[File:Owncloud.png|150px]]<br />
</span>"OwnCloud ( http://owncloud.org ) gives you universal access to your files through a web interface or WebDAV. It also provides a platform to easily view & sync your contacts, calendars and bookmarks across all your devices and enables basic editing right on the web. Installation has minimal server requirements, doesn’t need special permissions and is quick. ownCloud is extendable via a simple but powerful API for applications and plugins."<br />
<br />
{{Note box|Owncloud requires a higher version of PHP than SME Server 9.x currently provides. Presently, the ownCloud maintainers recommend PHP version 5.5. Please see '''[[PHP Software Collections]]''' on how to install a supported higher version of PHP alongside the default version of PHP on SME Server 9.x}}<br />
<br />
==Forum discussion==<br />
This how-to can be discussed on the forums [http://forums.contribs.org/index.php/topic,49709.msg248772.html#msg248772 here] or [http://forums.contribs.org/index.php/topic,49515.msg248880.html#msg248880 here]. Search for other posts.<br />
<br />
==Installation==<br />
Download the latest bz2 archive version or find the latest link from here https://owncloud.org/install/#instructions-server<br />
<br />
* Create an ibay in server manager, with the following typical settings:<br />
Information bay name - owncloud, Description - owncloud site, Group - Admin, User access - Write = group, Read = everyone, <br />
Public access via web - Entire Internet (no password required), Execution of dynamic content - Enabled<br />
* Check that you have the correct PHP version running for the ownCloud version you want to install: [https://doc.owncloud.org/server/8.0/admin_manual/installation/source_installation.html Version 8 doc]. Please see '''[[PHP Software Collections]]''' on how to install a supported higher version of PHP in an ibay on SME Server 9.x<br />
* Extract ownCloud into the html directory of the new ibay (in this example called 'owncloud' )<br />
In the following code, change the version number of owncloud to suit the current version downloaded from the owncloud website. The latest version and changelog can be found [https://owncloud.org/changelog/ '''here''']<br />
cd /home/e-smith/files/ibays/owncloud/html/<br />
wget http://download.owncloud.org/community/owncloud-9.X.X.tar.bz2<br />
tar xvf owncloud-9.X.X.tar.bz2 <br />
mv owncloud/* .<br />
mv owncloud/.htaccess .<br />
rm index.html <br />
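The download-and-extract steps above can be sketched with the release pinned in a single variable, so only one value needs changing per release. This is only a sketch; OC_VERSION below is an example value, substitute the release you actually downloaded.

```shell
# Derive the archive name and download URL from one version variable.
# OC_VERSION is an example - substitute the release you downloaded.
OC_VERSION="9.1.4"
ARCHIVE="owncloud-${OC_VERSION}.tar.bz2"
URL="http://download.owncloud.org/community/${ARCHIVE}"
echo "Would fetch: ${URL}"
# The actual steps, as in the how-to:
#   cd /home/e-smith/files/ibays/owncloud/html/
#   wget "$URL" && tar xvf "$ARCHIVE"
#   mv owncloud/* . && mv owncloud/.htaccess . && rm index.html
```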
* Set appropriate ibay settings at the command line prompt:<br />
<nowiki>db accounts setprop owncloud \<br />
AllowOverride All \<br />
FollowSymLinks enabled \<br />
Group www \<br />
PHPBaseDir /home/e-smith/files/ibays/owncloud/:/tmp/:/dev/urandom \<br />
PublicAccess global \<br />
UserAccess wr-group-rd-everyone<br />
</nowiki><br />
<br />
* Update the ibay<br />
signal-event ibay-modify owncloud<br />
<br />
* You also need to enable the "allow_url_fopen" PHP setting. Create a custom template fragment for httpd.conf:<br />
<nowiki><br />
mkdir -p /etc/e-smith/templates-custom/etc/httpd/conf/httpd.conf<br />
nano -w /etc/e-smith/templates-custom/etc/httpd/conf/httpd.conf/99allow_url_fopen<br />
</nowiki><br />
<br />
* Paste the following into the new fragment, then save it:<br />
<br />
<nowiki><br />
<Directory /home/e-smith/files/ibays/owncloud/html><br />
php_admin_flag allow_url_fopen on<br />
</Directory><br />
</nowiki><br />
<br />
* Then, at the command line prompt, enter:<br />
<br />
<nowiki><br />
expand-template /etc/httpd/conf/httpd.conf<br />
/etc/init.d/httpd-e-smith restart<br />
</nowiki><br />
<br />
* Create a new MySQL database (in this example the database name is owncloud; change '''owncloud''', '''username''' and '''password''' to your own choices as required)<br />
<br />
Log in as root and issue the following commands:<br />
mysql<br />
create database '''owncloud''';<br />
grant all privileges on '''owncloud'''.* to '''username@localhost''' identified by '<nowiki/>'''password'''';<br />
flush privileges;<br />
exit<br />
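The interactive session above can also be prepared non-interactively. The sketch below only builds the SQL string; the database name, username and password are the same placeholders as above, and the final `mysql -e` step is shown as a comment rather than executed.

```shell
# Build the same SQL as the interactive session above.
# DB/DBUSER/DBPASS are the how-to's placeholders - change them.
DB="owncloud"; DBUSER="username"; DBPASS="password"
SQL="create database ${DB}; grant all privileges on ${DB}.* to ${DBUSER}@localhost identified by '${DBPASS}'; flush privileges;"
echo "$SQL"
# To apply, as root on the SME server:  mysql -e "$SQL"
```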
<br />
{{Note box|To be able to run OwnCloud 9.x on a stock Koozali SME Server 9.x, you must enable the InnoDB engine of MySQL _before_ you run the OwnCloud installation wizard. To enable the InnoDB engine, please see [[MySQL#Enable_InnoDB_engine|THIS SECTION]]. This is because Koozali SME Server ships a lower version of MySQL than the recommended 5.5.x, which has the InnoDB engine enabled by default.}}<br />
<br />
* Browse to http://yourserver/owncloud and follow the install script<br />
<br />
==Upgrade==<br />
<br />
Upgrading from one minor version to the next (e.g., from 8.1.2 to 8.1.3) can be done using the updater app inside ownCloud. Just log in to the web site as admin, go to the admin page, and scroll down to the update center. However, major version upgrades (e.g., 8.1.x to 8.2.x) will require some work at the command line. The steps below assume that you've used PHP software collections to install an appropriate version of PHP for ownCloud, and that you've set your ownCloud ibay to use PHP version 5.6.<br />
<br />
You'll first need to download the distribution .tar.bz2 file. Place it in the ownCloud ibay, in the files subdirectory.<br />
<br />
Second, put ownCloud in maintenance mode, so that no changes will be made to the database or to the files on disk while you do your upgrade. Do this with<br />
# cd ~owncloud/../html<br />
# sudo -u www scl enable php56 'php occ maintenance:mode --on'<br />
<br />
Third, take backups of the ownCloud directory and database.<br />
# cd ~owncloud/..<br />
# rsync -Aaxv html owncloud-dirbkp_`date +"%Y%m%d"`/<br />
# mysqldump --lock-tables owncloud > owncloud-sqlbkp_`date +"%Y%m%d"`.sql<br />
<br />
Next, unpack the ownCloud distribution file.<br />
# cd ~owncloud/..<br />
# tar xjfv files/owncloud-8.2.0.tar.bz2<br />
<br />
Then you'll need to rename a couple of directories, and move some files around.<br />
# cd ~owncloud/..<br />
# mv html html_old<br />
# mv owncloud html<br />
# mv html_old/data/ html/<br />
# cp html_old/config/config.php html/config/<br />
<br />
Once the directories are renamed and the files are back in the right place, you'll want to kick off the upgrade at the command line and then disable maintenance mode. If you do not run the upgrade from the command line, the web interface will prompt you to do this the next time you log in as admin. However, for a larger installation, PHP may time out before finishing the operation. Therefore, it's recommended to do it from the command line.<br />
# cd ~owncloud/../html<br />
# sudo -u www scl enable php56 'php occ upgrade'<br />
# sudo -u www scl enable php56 'php occ maintenance:mode --off'<br />
<br />
Your ownCloud installation is now upgraded, and you can log into the web interface to ensure that everything works properly. Once you've confirmed this, you'll probably want to clean up the ibay a little bit.<br />
# cd ~owncloud/..<br />
# bzip2 owncloud-sqlbkp_20151027.sql<br />
# tar -cvjSf owncloud-dirbkp_20151027.tar.bz2 owncloud-dirbkp_20151027/<br />
# rm -rf html_old<br />
{{Note box|It is only possible to upgrade one major version at a time, e.g. you cannot upgrade directly from version 6 to 8; you must upgrade from 6 to 7, then from 7 to 8.}}<br />
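The one-major-release-at-a-time rule can be expressed as a small guard you might run before starting an upgrade. This is a sketch only; the version numbers below are example values.

```shell
# Refuse to skip a major release (versions here are example values).
FROM="7.0.15"
TO="9.1.4"
FROM_MAJOR=${FROM%%.*}
TO_MAJOR=${TO%%.*}
if [ $((TO_MAJOR - FROM_MAJOR)) -gt 1 ]; then
  NEXT=$((FROM_MAJOR + 1))
  echo "Upgrade one major release at a time: go to ${NEXT}.x before ${TO}"
fi
```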
<br />
==Optional Addons & Settings==<br />
===Security===<br />
<br />
====Require HTTPS====<br />
You can force https redirection whenever you use the http protocol. See [[Https_redirection]] for more information.<br />
<br />
In Owncloud versions prior to 8.1.1 you could also force https by checking 'Enforce HTTPS' in the Security section of the Admin section.<br />
<br />
====Strict Transport Security====<br />
Starting with ownCloud version 8.1.1, the admin panel may display this warning:<br />
<br />
 The "Strict-Transport-Security" HTTP header is not configured to at least "15768000" seconds.<br />
<br />
HTTP Strict Transport Security can be enabled as described at [[HSTS and HPKP]].<br />
<br />
====Fail2Ban====<br />
If you have installed the [[Fail2ban|Fail2Ban]] contrib, you can configure it to block ownCloud logins from a given IP address after repeated failures. To do this, you will first need to create a definition file telling Fail2ban what a failed login looks like, then create a template fragment to activate that definition, and finally restart fail2ban.<br />
<br />
First, create the definition file:<br />
# nano -w /etc/fail2ban/filter.d/owncloud.conf<br />
<br />
This file should contain the following text:<br />
[Definition]<br />
 failregex={"reqId":".*","remoteAddr":".*","app":"core","message":"Login failed: '.*' \(Remote IP: '<HOST>'\)","level":2,"time":".*"}<br />
<br />
ignoreregex =<br />
<br />
Press Ctrl-X to exit, and Y to save the file.<br />
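You can sanity-check the filter with grep, outside fail2ban. In the sketch below, fail2ban's &lt;HOST&gt; tag is replaced by a generic IPv4 pattern, and the log line is a made-up sample in the format the filter targets (not copied from a real installation).

```shell
# Check the filter pattern against a sample ownCloud log line.
# PATTERN substitutes a generic IPv4 pattern for fail2ban's <HOST> tag;
# LINE is a hypothetical sample in the expected log format.
PATTERN='"message":"Login failed: '\''.*'\'' \(Remote IP: '\''[0-9.]+'\''\)"'
LINE='{"reqId":"x1","remoteAddr":"203.0.113.7","app":"core","message":"Login failed: '\''bob'\'' (Remote IP: '\''203.0.113.7'\'')","level":2,"time":"2016-01-01T00:00:00+00:00"}'
MATCHED=$(echo "$LINE" | grep -Ec "$PATTERN")
echo "matching lines: $MATCHED"
```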
<br />
Next, create the template fragment:<br />
# mkdir -p /etc/e-smith/templates-custom/etc/fail2ban/jail.conf<br />
# nano -w /etc/e-smith/templates-custom/etc/fail2ban/jail.conf/30Service55owncloud <br />
<br />
That file should contain the following:<br />
[owncloud]<br />
enabled = true<br />
filter = owncloud<br />
action = smeserver-iptables[port="$port",protocol=tcp,bantime=$bantime]<br />
logpath = /home/e-smith/files/ibays/owncloud/html/data/owncloud.log<br />
maxretry = 3<br />
port = 80,443<br />
protocol = tcp<br />
<br />
Again, Ctrl-X to exit, and Y to save. Then reconfigure the fail2ban service:<br />
# signal-event fail2ban-conf<br />
<br />
For more information, see [https://forum.owncloud.org/viewtopic.php?f=8&t=28678 this thread] on the ownCloud forum.<br />
<br />
===User login and permissions===<br />
ownCloud provides several ways for users to log in and authenticate. In addition to ownCloud's internal user authentication, you can authenticate SME users: <br />
<br />
====LDAP Authentication====<br />
Configure owncloud to authenticate against the internal SME OpenLDAP server (all SME users)<br />
<br />
* Log into your ownCloud web app as the admin who installed ownCloud<br />
* Go to apps and enable the LDAP authentication backend<br />
* Go to the LDAP section of the admin page and use the settings below<br />
<br />
'''Server tab:'''<br />
Host: localhost<br />
Port: 389<br />
Base DN: dc=[yourdomain],dc=[com] (as seen in the 'domains' section in SME server manager)<br />
<br />
'''Users tab:'''<br />
Select username and email address<br />
<br />
Only these object classes: sambaSamAccount<br />
<br />
'''Groups'''<br />
Only these object classes: sambaGroupMapping <br />
<br />
'''Advanced tab:'''<br />
Special attributes -> email field: mail<br />
<br />
For the variables between brackets [], use your specific settings. Test by creating a specific user and logging in with that username and password, as opposed to using the admin's credentials.<br />
For more info on using SME OpenLDAP for authentication please see: [[LDAP_Authentication_for_applications|LDAP Authentication for applications]]<br />
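The Base DN can be derived mechanically from the domain name shown in server-manager. A small sketch; mycompany.com below is an example value.

```shell
# Turn a dotted domain name into an LDAP Base DN.
# DOMAIN is an example - use the domain shown in server-manager.
DOMAIN="mycompany.com"
BASEDN="dc=$(echo "$DOMAIN" | sed 's/\./,dc=/g')"
echo "$BASEDN"
```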
<br />
<br />
=====LDAP Tweaks=====<br />
When a user is created in SME and ownCloud is integrated via LDAP, it seems to take a while before the new user can log into ownCloud. Also, the error "Can't get cache storage, user not logged in" begins to appear in the ownCloud warning log.<br />
To fix this, and allow the new user to log in immediately, go to Settings -> Admin -> LDAP/Advanced Tab<br />
and change the Cache Time To Live by +1 or -1. This forces the cache to update, and the new user will be able to log in to ownCloud straight away.<br />
<br />
{{note box| When setting up ldap following these instructions you may need to uncheck "tls" in the advanced tab of the ldap admin section in owncloud.}}<br />
<br />
====IMAP Authentication====<br />
Configure owncloud to authenticate against an IMAP server<br />
<br />
<br />
=====Enable and Configure IMAP Authentication=====<br />
# Login to your running owncloud configuration as an administrator<br />
# Enable the app named "External user support" <br />Note: "External user support" is included with owncloud as an "Internal app". There is an older add-on app named "IMAP user backend" that did not work for me.<br />
# Edit <owncloud>/config/config.php and insert the text below inside the $CONFIG array<br />
#* in front of the last closing parenthesis in the file (make sure there's a comma at the end of the preceding entry)<br />
#* Replace [mail.yourdomain.com] with your mail server name or IP address<br />
#* Remove "/novalidate-cert" if your chosen mail server uses a signed SSL certificate):<br />
<nowiki> 'user_backends' => <br />
array (<br />
0 => <br />
array (<br />
'class' => 'OC_User_IMAP',<br />
'arguments' => <br />
array (<br />
0 => '{[mail.yourdomain.com]:993/imap/ssl/novalidate-cert}',<br />
),<br />
),<br />
),</nowiki><br />
<br />
When done, users will be able to login to your owncloud web interface using their mail server username and password. <br />
More information can be found here: http://owncloud.org/support/custom-user-configuration/<br />
<br />
======Enable Automatic IMAP User Creation======<br />
This discussion on the Owncloud bug tracker describes patches that provide more automated IMAP user management:<br />
https://github.com/owncloud/apps/issues/302<br />
<br />
The patches provide these capabilities:<br />
* Create new Owncloud users when unrecognized users login using IMAP<br />
:* Add new users to a group named for the domain portion of their email address<br />
:* Optionally set an Owncloud quota based on the user's email domain<br />
* Update the Owncloud password for any existing user who logs in using IMAP<br />
<br />
======Sync Issues (Turning off magic quotes in php)======<br />
I had issues when deleting files in my client. Turning off magic quotes fixed the problem. Edit your php.ini using custom templates.<br />
* See http://wiki.contribs.org/PHP<br />
cd /etc/e-smith/templates-custom/etc/php.ini<br />
* Edit template-begin file<br />
* Add the following and save file: <br />
php_value magic_quotes_gpc = Off<br />
php_value magic_quotes_runtime = Off<br />
php_value magic_quotes_sybase = Off<br />
<br />
expand-template /etc/php.ini<br />
sv t httpd-e-smith<br />
<br />
======Allow Sharing with IMAP Users======<br />
To allow sharing with your IMAP users, you need to manually add each IMAP user account to the oc_users table using commands like those below at a shell prompt on your SME server:<br />
mysql owncloud<br />
insert into oc_users (uid,password) values ('[xyzuser@mail.yourdomain.com]','*');<br />
* '''owncloud''': replace with the name of the database you created for owncloud<br />
* '''[xyzuser@mail.yourdomain.com]''': replace with the exact login string used by the user when they login<br />
<br />
=====Problems=====<br />
* By default:<br />
:* Users who login using IMAP Authentication are not automatically added to the owncloud "oc_users" table. Consequently, they are not offered as options when other users want to share files with them.<br />
:* IMAP authenticated users will be unable to connect using the owncloud client sync program.<br />
* If your IMAP server allows alternative username syntax, owncloud will create a separate account for each username format that allows successful authentication. That is, if your IMAP server allows users to optionally include or omit the domain name, you might create a user account entry for "xyzuser@mail.yourdomain.com", then find that the user always logs in using simply "xyzuser", which succeeds during the IMAP authentication, but is a different account within owncloud.<br />
======Using the Owncloud Client====== <br />
In order for an IMAP-authenticated user to configure the owncloud client sync program:<br />
* The system administrator must add the username to the oc_users table manually as described above<br />
<br />
Then the user must:<br />
* login to the web interface<br />
* open his/her personal settings<br />
* "change" the password in owncloud to match the password on the IMAP server.<br />
<br />
===WebDAV interface ===<br />
The ownCloud admin panel may display the following warning. <br />
<br />
Setup Warning Your web server is not yet properly setup to allow files synchronization because the WebDAV interface seems to be broken. Please double check the installation guides.<br />
<br />
The issue can be caused by a DNS domain name that does not match the SSL certificate. If you feel it is a false positive, simply add 'check_for_working_webdav' => false, to your /config/config.php file to get rid of it. This workaround was successful with ownCloud 7. It has not been tested on ownCloud 6 or ownCloud 8.<br />
<br />
nano /home/e-smith/files/ibays/owncloud/html/config/config.php<br />
and add<br />
'check_for_working_webdav' => false,<br />
<br />
===Maintenance tasks===<br />
Owncloud requires periodic maintenance tasks to run, and offers three different methods to execute them: AJAX, webcron, or through the system's cron process. They may be chosen on the Admin page, under the Cron heading. AJAX checks for tasks each time an owncloud page is loaded. It is simple to use and requires no configuration, but not very reliable since it depends on users logging into the web interface. Webcron relies on using an outside service like [http://www.easycron.com easycron.com] to load http://yourserver.tld/owncloud/cron.php on a schedule, roughly every 15 minutes. This is simple to configure, but can incur additional cost for the webcron service. Finally, the system's cron process runs commands on a specified schedule. It requires the most complex configuration, but does not rely on any external user or service to operate.<br />
<br />
To set up regular maintenance using the system cron process, first select the Cron option under the Cron heading on the Admin page. Then, from the system shell, run the following commands:<br />
# export EDITOR=nano<br />
# crontab -u apache -e<br />
<br />
This will open the nano editor with a blank file. Enter the following:<br />
*/15 * * * * scl enable php56 'php -f /home/e-smith/files/ibays/owncloud/html/cron.php > /dev/null 2>&1'<br />
<br />
Then press Ctrl-X to exit, and Y to save. This command assumes you've installed the PHP software collections as described above, and that you've chosen to run your ownCloud installation with PHP 5.6. If you'd chosen PHP 5.5, it would read "scl enable php55" instead.<br />
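The crontab entry can be assembled from the collection name, which makes the PHP-version dependency explicit. A sketch only; php56 is the collection assumed in the text above, and the cron.php path matches the ibay used throughout this how-to.

```shell
# Build the crontab entry for a given PHP Software Collection.
# SCL is the collection assumed above (php55 or php56).
SCL="php56"
OC_CRON="/home/e-smith/files/ibays/owncloud/html/cron.php"
CRON_LINE="*/15 * * * * scl enable ${SCL} 'php -f ${OC_CRON} > /dev/null 2>&1'"
echo "$CRON_LINE"
```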
<br />
===Memory Caching===<br />
For improved performance, use of a memory cache for compiled PHP code and data is recommended. As of version 8.1, ownCloud will complain on the Admin page if you don't have this configured. Also, as of version 8.2, ownCloud implements transactional file locking, and it's preferred that this be managed with a memory cache as well. The only suitable memory cache system to support file locking is [http://redis.io/ Redis].<br />
<br />
====Redis====<br />
You'll need to install Redis, as well as its associated PHP module, configure your system to start Redis on startup, and change some configuration settings for both Redis and ownCloud.<br />
<br />
To install Redis, you'll need to have both the [[Remi]] and [[Epel]] repositories set up on your server. Then do<br />
# yum --enablerepo=remi,epel install redis php54-php-pecl-redis php55-php-pecl-redis php56-php-pecl-redis php71-php-pecl-redis php70-php-pecl-redis<br />
# config set redis service status enabled<br />
# cd /etc/rc7.d<br />
# ln -s /etc/rc.d/init.d/e-smith-service S80redis<br />
<br />
You'll need to adjust two entries in the Redis configuration file:<br />
# nano -w /etc/redis.conf<br />
<br />
Find the line that mentions "unixsocket", uncomment it and the following line, and edit them to appear as follows:<br />
unixsocket /var/run/redis/redis.sock<br />
unixsocketperm 777<br />
and add the www user to the redis group (so the web server can use the socket):<br />
 usermod -a -G redis www<br />
Then start the redis server:<br />
# /etc/rc.d/init.d/redis start<br />
<br />
You'll now need to edit the ownCloud config file:<br />
# nano -w ~owncloud/../html/config/config.php<br />
<br />
Add the following lines:<br />
 'filelocking.enabled' => true,<br />
'memcache.locking' => '\OC\Memcache\Redis',<br />
'memcache.local' => '\OC\Memcache\Redis',<br />
'redis' => array(<br />
'host' => '/var/run/redis/redis.sock',<br />
'port' => 0,<br />
'timeout' => 0.0,<br />
),<br />
<br />
Save the file and exit nano. You should now be able to log in to your ownCloud installation as the admin user, and not see a warning message about memory cache or file locking.<br />
<br />
====APCu====<br />
If you don't want to use the memory cache for file locking, ownCloud supports a number of other cache backends, but the easiest to configure is said to be APCu. To configure this, you'll need to install the PHP extension, and activate it in your config.php file. To install the extension, run the following command:<br />
# yum --enablerepo=remi install php54-php-pecl-apcu php55-php-pecl-apcu php56-php-pecl-apcu<br />
<br />
If you've set up a system cron job for the ownCloud maintenance operations, you'll need to tell the cache to operate from the CLI as well as from the web. To do this, assuming you're using PHP 5.6 for ownCloud (change the version below as appropriate):<br />
# nano -w /opt/remi/php56/root/etc/php.d/40-apcu.ini<br />
<br />
Find the line with "apc.enable_cli", uncomment it (remove the semicolon), and change the value to 1. It should look like this:<br />
apc.enable_cli=1<br />
<br />
Then restart the web server:<br />
# /etc/init.d/httpd-e-smith restart<br />
<br />
The yum command above installs the extension for all PHP Software Collection versions. To activate it, add this line to the ownCloud config.php file:<br />
'memcache.local' => '\OC\Memcache\APCu',<br />
<br />
Further information about caching can be found in the [https://doc.owncloud.org/server/8.1/admin_manual/configuration_server/performance_tuning.html#caching ownCloud documentation].<br />
<br />
=== PHP opcache ===<br />
In Nextcloud, logged in as admin, go to Parameters => Basic parameters and check that Nextcloud does not complain about the following PHP opcache parameters:<syntaxhighlight><br />
opcache.enable=1<br />
opcache.enable_cli=1<br />
opcache.interned_strings_buffer=8<br />
opcache.max_accelerated_files=10000<br />
opcache.memory_consumption=128<br />
opcache.save_comments=1<br />
opcache.revalidate_freq=1<br />
</syntaxhighlight>If it does:<br />
<br />
==== In short ====<br />
* Edit ''/etc/opt/remi/php70/php.d/10-opcache.ini'' and make sure that the above given parameters are set.<br />
* Restart php: signal-event php-update<br />
<br />
==== With explanations ====<br />
Check the PHP parameters for the ibay where Nextcloud is installed:<br />
* create a file ''phptest.php'' in .../path/to/ibay-Nextcloud/html and enter in it:<br />
<syntaxhighlight lang="php"><br />
<?php<br />
phpinfo();<br />
phpinfo(INFO_MODULES);<br />
?><br />
</syntaxhighlight><br />
* from the browser, go to http://server/ibay_nextcloud/phptest.php and have a look at the parameters and paths concerning opcache<br />
* modify the proper files with the parameters that Nextcloud requires. The use of templates-custom may be necessary, depending on your configuration.<br />
* if necessary expand the templates, then remove ''phptest.php''<br />
* restart php<br />
<br />
===Mozilla Sync===<br />
Mozilla Sync is an addon for OwnCloud that allows users to securely synchronize Firefox settings across multiple devices (tabs, bookmarks, history, saved passwords).<br />
<br />
The owncloud instructions for installing and configuring Mozilla Sync addon can be found here: https://github.com/owncloud/mozilla_sync/blob/master/INSTALL.md<br />
<br />
The Mozilla Sync addon is included by default as of OwnCloud 6 and simply needs to be 'enabled' by an admin. <br />
<br />
====Install Mozilla Sync in OwnCloud====<br />
These instructions assume that you have installed owncloud into an ibay named '''owncloud''':<br />
<br />
Using a zip archive:<br />
<nowiki>cd /home/e-smith/files/ibays/owncloud/files<br />
wget https://github.com/owncloud/mozilla_sync/archive/master.zip -O master.zip<br />
sudo -u www unzip master.zip<br />
mv mozilla_sync-master /home/e-smith/files/ibays/owncloud/html/apps/mozilla_sync</nowiki><br />
<br />
Using Git:<br />
<nowiki>cd /home/e-smith/files/ibays/owncloud/html/apps<br />
sudo -u www git clone https://github.com/owncloud/mozilla_sync.git<br />
cd mozilla_sync<br />
sudo -u www git checkout master</nowiki><br />
<br />
====Server settings to enable Mozilla Sync====<br />
Now, on the server:<br />
* login to your owncloud instance as an administrator<br />
* Open the 'Apps' page<br />
* 'Enable' the Mozilla Sync app<br />
* Verify that all 'Mozilla Sync' users have a unique email address configured in their user profile (Mozilla sync (in Firefox) authenticates using the user's email address).<br />
<br />
====Firefox settings to enable Mozilla Sync====<br />
Set up sync using the email address you verified above, and the password of the user that matches that email address. <br />
<br />
The address for your 'custom server' should look like:<br><br />
<nowiki>https://'''<path-to-your-owncloud>'''/remote.php/mozilla_sync/</nowiki><br />
<br />
==History and advanced topics==<br />
The history and some advanced topics can be viewed on these How-To page sections:<br />
*[http://wiki.contribs.org/index.php?title=OwnCloud&action=history Revision history of this How-To]<br />
*[http://wiki.contribs.org/Talk:OwnCloud Discussion/Talk page about this How-To]<br />
<br />
==References==<br />
* [https://plus.google.com/101306541175806407286/posts/TD8bGMsEqGL User-Guide for Installing OwnCloud on SME 8.0]<br />
==Interesting relations to other Contribs or How-to's==<br />
* [https://forums.zarafa.com/showthread.php?7866-A-simple-owncloud-app-for-integrating-zarafa-auth-and-webapp(link-to-source-inside) ownCloud Attachments integration] for [http://wiki.contribs.org/Zarafa Zarafa]<br />
* Mounting a ownCloud space as a '''[[Davfs2|local file system]]'''<br />
----<br />
[[Category:Howto]]<br />
[[Category: Backup]]<br />
[[Category: Webapps]]</div>Arnaudhttps://wiki.koozali.org/index.php?title=OwnCloud&diff=34425OwnCloud2018-02-26T09:44:32Z<p>Arnaud: Settings for PHP opcache</p>
<hr />
<div>{{usefulnote}}<br />
{{Level|Medium}}<br />
{{Note box|Please note that OwnCloud has been forked by Nextcloud. It seems that the main developers have left OwnCloud and moved over to Nextcloud as per June 2016. Please see [http://nextcloud.com their website] for more info. Specific Nextcloud instructions will be on the [[Nextcloud]] wiki page.}}<br />
<br />
==About==<br />
<span style="float: right;"><br />
[[File:Owncloud.png|150px]]<br />
</span>"OwnCloud ( http://owncloud.org ) gives you universal access to your files through a web interface or WebDAV. It also provides a platform to easily view & sync your contacts, calendars and bookmarks across all your devices and enables basic editing right on the web. Installation has minimal server requirements, doesn’t need special permissions and is quick. ownCloud is extendable via a simple but powerful API for applications and plugins."<br />
<br />
{{Note box|Owncloud requires a higher version of PHP than SME Server 9.x currently provides. Presently, the ownCloud maintainers recommend PHP version 5.5. Please see '''[[PHP Software Collections]]''' on how to install a supported higher version of PHP alongside the default version of PHP on SME Server 9.x}}<br />
<br />
==Forum discussion==<br />
This how-to can be discussed on the forums [http://forums.contribs.org/index.php/topic,49709.msg248772.html#msg248772 here] or [http://forums.contribs.org/index.php/topic,49515.msg248880.html#msg248880 here]. Search for other posts.<br />
<br />
==Installation==<br />
Download the latest bz2 archive version or find the latest link from here https://owncloud.org/install/#instructions-server<br />
<br />
* Create an ibay in server manager, with the following typical settings:<br />
Information bay name - owncloud, Description - owncloud site, Group - Admin, User access - Write = group, Read = everyone, <br />
Public access via web - Entire Internet (no password required), Execution of dynamic content - Enabled<br />
* Check that you have the correct PHP version running for the Owncloud version you like to install: [https://doc.owncloud.org/server/8.0/admin_manual/installation/source_installation.html Version 8 doc]. Please see '''[[PHP Software Collections]]'''<nowiki> on how to install a supported higher version of PHP in an Ibay on SME Server 9.x}}</nowiki><br />
* Extract ownCloud into the html directory of the new ibay (in this example called 'owncloud' )<br />
In the following code, change the version number of owncloud to suit the current version downloaded from the owncloud website. The latest version and changelog can be found [https://owncloud.org/changelog/ '''here''']<br />
cd /home/e-smith/files/ibays/owncloud/html/<br />
wget http://download.owncloud.org/community/owncloud-9.X.X.tar.bz2<br />
tar xvf owncloud-9.X.X.tar.bz2 <br />
mv owncloud/* .<br />
mv owncloud/.htaccess .<br />
rm index.html <br />
* Set appropriate ibay settings at the command line prompt:<br />
<nowiki>db accounts setprop owncloud \<br />
AllowOverride All \<br />
FollowSymLinks enabled \<br />
Group www \<br />
PHPBaseDir /home/e-smith/files/ibays/owncloud/:/tmp/:/dev/urandom \<br />
PublicAccess global \<br />
UserAccess wr-group-rd-everyone<br />
</nowiki><br />
<br />
* Update the ibay<br />
signal-event ibay-modify owncloud<br />
<br />
* you also need to allow the "allow_url_fopen"<br />
<nowiki><br />
mkdir -p /etc/e-smith/templates-custom/etc/httpd/conf/httpd.conf<br />
nano -w /etc/e-smith/templates-custom/etc/httpd/conf/httpd.conf/99allow_url_fopen<br />
</nowiki><br />
<br />
* and paste the following and save this<br />
<br />
<nowiki><br />
<Directory /home/e-smith/files/ibays/owncloud/html><br />
php_admin_flag allow_url_fopen on<br />
</Directory><br />
</nowiki><br />
<br />
* then at the command line prompt enter:<br />
<br />
<nowiki><br />
expand-template /etc/httpd/conf/httpd.conf<br />
/etc/init.d/httpd-e-smith restart<br />
</nowiki><br />
<br />
4. Create a new MySQL database (In this example the database name is owncloud. Change '''owncloud''', '''username''' and '''password''' with your own choices as required)<br />
<br />
Login as root and issue the following command:<br />
mysql<br />
create database '''owncloud''';<br />
grant all privileges on '''owncloud'''.* to '''username@localhost''' identified by '<nowiki/>'''password'''';<br />
flush privileges;<br />
exit<br />
<br />
{{Note box|To be able to run OwnCloud 9.x on a stock Koozali SME Server 9.x, you must enable the InnoDB engine of MySQL _before_ you run the OwnCloud installation wizzard. To enable the InnoDB engine please [[MySQL#Enable_InnoDB_engine|THIS SECTION]]. This is due to the fact that Koozali SME Server comes with a lower version of MySQL than the recommended 5.5.x version which has the InnoDB engine enabled by default.}}<br />
<br />
5. Browse to http://yourserver/owncloud and follow install script<br />
<br />
==Upgrade==<br />
<br />
Upgrading from one minor version to the next (e.g., from 8.1.2 to 8.1.3) can be done using the updater app inside ownCloud. Just log in to the web site as admin, go to the admin page, and scroll down to the update center. However, major version upgrades (e.g., 8.1.x to 8.2.x) will require some work at the command line. The steps below assume that you've used PHP software collections to install an appropriate version of PHP for ownCloud, and that you've set your ownCloud ibay to use PHP version 5.6.<br />
<br />
You'll first need to download the distribution .tar.bz2 file. Place it in the ownCloud ibay, in the files subdirectory.<br />
<br />
Second, put ownCloud in maintenance mode, so that no changes will be made to the database or to the files on disk while you do your upgrade. Do this with<br />
# cd ~owncloud/../html<br />
# sudo -u www scl enable php56 'php occ maintenance:mode --on'<br />
<br />
Third, take backups of the ownCloud directory and database.<br />
# cd ~owncloud/..<br />
# rsync -Aaxv html owncloud-dirbkp_`date +"%Y%m%d"`/<br />
# mysqldump --lock-tables owncloud > owncloud-sqlbkp_`date +"%Y%m%d"`.sql<br />
<br />
Next, unpack the ownCloud distribution file.<br />
# cd ~owncloud/..<br />
 # tar xvjf files/owncloud-8.2.0.tar.bz2<br />
<br />
Then you'll need to rename a couple of directories, and move some files around.<br />
# cd ~owncloud/..<br />
# mv html html_old<br />
# mv owncloud html<br />
# mv html_old/data/ html/<br />
# cp html_old/config/config.php html/config/<br />
<br />
Once the directories are renamed and the files are back in the right place, you'll want to kick off the upgrade at the command line and then disable maintenance mode. If you do not run the upgrade from the command line, the web interface will prompt you to do this the next time you log in as admin. However, for a larger installation, PHP may time out before finishing the operation. Therefore, it's recommended to do it from the command line.<br />
# cd ~owncloud/../html<br />
# sudo -u www scl enable php56 'php occ upgrade'<br />
# sudo -u www scl enable php56 'php occ maintenance:mode --off'<br />
<br />
Your ownCloud installation is now upgraded, and you can log into the web interface to ensure that everything works properly. Once you've confirmed this, you'll probably want to clean up the ibay a little bit.<br />
# cd ~owncloud/..<br />
# bzip2 owncloud-sqlbkp_20151027.sql<br />
# tar -cvjSf owncloud-dirbkp_20151027.tar.bz2 owncloud-dirbkp_20151027/<br />
# rm -rf html_old<br />
{{Note box|It is only possible to upgrade one major version at a time. E.g. you cannot upgrade from version 6 to 8 directly; you have to upgrade twice, from 6 to 7 and then from 7 to 8.}}<br />
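The one-version-at-a-time rule can be sketched as a small pre-flight check. This is only an illustration; the version numbers are examples, and in practice you would read them from your installed version and the name of the downloaded tarball:<br />

```shell
# Allow the upgrade only when the jump is at most one major version.
single_step() {
  # $1 = installed major version, $2 = target major version
  [ $(( $2 - $1 )) -le 1 ]
}
if single_step 7 8; then echo "ok: single-step upgrade"; fi
if ! single_step 6 8; then echo "refused: upgrade one major version at a time"; fi
```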
<br />
==Optional Addons & Settings==<br />
===Security===<br />
<br />
====Require HTTPS====<br />
You can force https redirection whenever you use the http protocol. See [[Https_redirection]] for more information.<br />
<br />
In Owncloud versions prior to 8.1.1 you could also force https by checking 'Enforce HTTPS' in the Security section of the Admin section.<br />
<br />
====Strict Transport Security====<br />
Starting with ownCloud version 8.1.1, the admin panel may display this warning:<br />
<br />
 The "Strict-Transport-Security" HTTP header is not configured to at least "15768000" seconds.<br />
<br />
HTTP Strict Transport Security can be enabled as described at [[HSTS and HPKP]].<br />
<br />
====Fail2Ban====<br />
If you have installed the [[Fail2ban|Fail2Ban]] contrib, you can configure it to block ownCloud logins from a given IP address after repeated failures. To do this, you will first need to create a definition file telling Fail2ban what a failed login looks like, then create a template fragment to activate that definition, and finally restart fail2ban.<br />
<br />
First, create the definition file:<br />
# nano -w /etc/fail2ban/filter.d/owncloud.conf<br />
<br />
This file should contain the following text:<br />
[Definition]<br />
 failregex={"reqId":".*","remoteAddr":".*","app":"core","message":"Login failed: '.*' \(Remote IP: '<HOST>'\)","level":2,"time":".*"}<br />
<br />
ignoreregex =<br />
<br />
Press Ctrl-X to exit, and Y to save the file.<br />
<br />
Next, create the template fragment:<br />
# mkdir -p /etc/e-smith/templates-custom/etc/fail2ban/jail.conf<br />
# nano -w /etc/e-smith/templates-custom/etc/fail2ban/jail.conf/30Service55owncloud <br />
<br />
That file should contain the following:<br />
[owncloud]<br />
enabled = true<br />
filter = owncloud<br />
action = smeserver-iptables[port="$port",protocol=tcp,bantime=$bantime]<br />
logpath = /home/e-smith/files/ibays/owncloud/html/data/owncloud.log<br />
maxretry = 3<br />
port = 80,443<br />
protocol = tcp<br />
<br />
Again, Ctrl-X to exit, and Y to save. Then reconfigure the fail2ban service:<br />
# signal-event fail2ban-conf<br />
<br />
For more information, see [https://forum.owncloud.org/viewtopic.php?f=8&t=28678 this thread] on the ownCloud forum.<br />
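Before restarting fail2ban, you can sanity-check the idea behind the filter against a sample log line. This is only a rough sketch, not fail2ban itself: the sample line is made up to mimic the owncloud.log format, and the pattern is a simplified version of the failregex with <HOST> replaced by an IP pattern:<br />

```shell
# A made-up owncloud.log failed-login line (placeholder user and IP).
SAMPLE='{"reqId":"x","remoteAddr":"10.0.0.5","app":"core","message":"Login failed: '\''bob'\'' (Remote IP: '\''10.0.0.5'\'')","level":2,"time":"2015-10-27"}'
# Simplified stand-in for the failregex above (grep BRE, not fail2ban syntax).
PATTERN='"message":"Login failed: .* (Remote IP: .[0-9.]*.)'
echo "$SAMPLE" | grep -q "$PATTERN" && echo "filter would match"
```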
<br />
===User login and permissions===<br />
owncloud provides several ways for users to log in and authenticate. In addition to owncloud's internal user authentication, you can authenticate SME users: <br />
<br />
====LDAP Authentication====<br />
Configure owncloud to authenticate against the internal SME OpenLDAP server (all SME users)<br />
<br />
* log into your owncloud webapp as admin who installed owncloud<br />
* go to apps and enable ldap authentication backend<br />
* go to admin, ldap section and use below settings<br />
<br />
'''Server tab:'''<br />
Host: localhost<br />
Port: 389<br />
Base DN: dc=[yourdomain],dc=[com] (as seen in the 'domains' section in SME server manager)<br />
<br />
'''Users tab:'''<br />
Select username and email address<br />
<br />
Only these object classes: sambaSamAccount<br />
<br />
'''Groups'''<br />
Only these object classes: sambaGroupMapping <br />
<br />
'''Advanced tab:'''<br />
Special attributes -> email field: mail<br />
<br />
For the variables between brackets [], use your specific settings. Test by logging in as a specific user with that user's username and password, as opposed to using the admin's credentials.<br />
For more info on using SME OpenLDAP for authentication please see: [[LDAP_Authentication_for_applications|LDAP Authentication for applications]]<br />
<br />
<br />
=====LDAP Tweaks=====<br />
It seems that when creating a user in SME with ownCloud integrated via LDAP, it takes a while before the new user can log into ownCloud. Also, the error "Can't get cache storage, user not logged in" starts appearing in the ownCloud warning log.<br />
To fix this, and allow the new user to log in immediately, go to Settings -> Admin -> LDAP/Advanced Tab.<br />
Change the Cache Time-To-Live by plus or minus 1. This forces the cache to update, and the new user will be able to log in to ownCloud straight away.<br />
<br />
{{note box| When setting up ldap following these instructions you may need to uncheck "tls" in the advanced tab of the ldap admin section in owncloud.}}<br />
<br />
====IMAP Authentication====<br />
Configure owncloud to authenticate against an IMAP server<br />
<br />
<br />
=====Enable and Configure IMAP Authentication=====<br />
# Login to your running owncloud configuration as an administrator<br />
# Enable the app named "External user support" <br />Note: "External user support" is included with owncloud as an "Internal app". There is an older add-on app named "IMAP user backend" that did not work for me.<br />
# Edit <owncloud>/config/config.php and insert the text below inside the $CONFIG array<br />
#* in front of the last closing parenthesis in the file (make sure there's a comma at the end of the preceding entry)<br />
#* Replace [mail.yourdomain.com] with your mail server name or IP address<br />
#* Remove "/novalidate-cert" if your chosen mail server uses a signed SSL certificate:<br />
<nowiki> 'user_backends' => <br />
array (<br />
0 => <br />
array (<br />
'class' => 'OC_User_IMAP',<br />
'arguments' => <br />
array (<br />
0 => '{[mail.yourdomain.com]:993/imap/ssl/novalidate-cert}',<br />
),<br />
),<br />
),</nowiki><br />
<br />
When done, users will be able to login to your owncloud web interface using their mail server username and password. <br />
More information can be found here: http://owncloud.org/support/custom-user-configuration/<br />
<br />
======Enable Automatic IMAP User Creation======<br />
This discussion on the Owncloud bug tracker describes patches that provide more automated IMAP user management:<br />
https://github.com/owncloud/apps/issues/302<br />
<br />
The patches provide these capabilities:<br />
* Create new Owncloud users when unrecognized users login using IMAP<br />
:* Add new users to a group named for the domain portion of their email address<br />
:* Optionally set an Owncloud quota based on the user's email domain<br />
* Update the Owncloud password for any existing user who logs in using IMAP<br />
<br />
======Sync Issues (Turning off magic quotes in php)======<br />
I had issues when deleting files in my client. Turning off magic quotes fixed the problem. Edit your php.ini using custom templates.<br />
* See http://wiki.contribs.org/PHP<br />
cd /etc/e-smith/templates-custom/etc/php.ini<br />
* Edit template-begin file<br />
* Add the following and save file: <br />
php_value magic_quotes_gpc = Off<br />
php_value magic_quotes_runtime = Off<br />
php_value magic_quotes_sybase = Off<br />
<br />
expand-template /etc/php.ini<br />
sv t httpd-e-smith<br />
<br />
======Allow Sharing with IMAP Users======<br />
To allow sharing with your IMAP users, you need to manually add each IMAP user account to the oc_users table using commands like those below at a shell prompt on your SME server:<br />
mysql owncloud<br />
insert into oc_users (uid,password) values ('[xyzuser@mail.yourdomain.com]','*');<br />
* '''owncloud''': replace with the name of the database you created for owncloud<br />
* '''[xyzuser@mail.yourdomain.com]''': replace with the exact login string used by the user when they login<br />
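With several IMAP users to add, the INSERT statements can be generated and piped to <code>mysql owncloud</code> in one go. A minimal sketch; the addresses below are placeholders and must be replaced with the exact login strings your users type:<br />

```shell
# Emit one INSERT per IMAP login name; pipe the output to "mysql owncloud".
gen_inserts() {
  for u in "$@"; do
    echo "INSERT INTO oc_users (uid,password) VALUES ('$u','*');"
  done
}
# Placeholder addresses -- replace with your users' real login strings:
gen_inserts xyzuser@mail.yourdomain.com abcuser@mail.yourdomain.com
```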
<br />
=====Problems=====<br />
* By default:<br />
:* Users who login using IMAP Authentication are not automatically added to the owncloud "oc_users" table. Consequently, they are not offered as options when other users want to share files with them.<br />
:* IMAP authenticated users will be unable to connect using the owncloud client sync program.<br />
* If your IMAP server allows alternative username syntax, owncloud will create a separate account for each username format that authenticates successfully. For example, if your IMAP server lets users include or omit the domain name, an account might be created for "xyzuser@mail.yourdomain.com", but the user may later log in as simply "xyzuser"; IMAP authentication succeeds both times, yet these are two different accounts within owncloud.<br />
======Using the Owncloud Client====== <br />
In order for an IMAP-authenticated user to configure the owncloud client sync program:<br />
* The system administrator must add the username to the oc_users table manually as described above<br />
<br />
Then the user must:<br />
* login to the web interface<br />
* open his/her personal settings<br />
* "change" the password in owncloud to match the password on the IMAP server.<br />
<br />
===WebDAV interface ===<br />
The ownCloud admin panel may display the following warning. <br />
<br />
Setup Warning Your web server is not yet properly setup to allow files synchronization because the WebDAV interface seems to be broken. Please double check the installation guides.<br />
<br />
The issue can be caused by using a DNS domain name that differs from the name in the SSL certificate. If you feel it is a false positive, simply add 'check_for_working_webdav' => false, to your /config/config.php file to get rid of it. This workaround was successful with ownCloud 7. It has not been tested on ownCloud 6 or ownCloud 8.<br />
<br />
nano /home/e-smith/files/ibays/owncloud/html/config/config.php<br />
and add<br />
'check_for_working_webdav' => false,<br />
<br />
===Maintenance tasks===<br />
Owncloud requires periodic maintenance tasks to run, and offers three different methods to execute them: AJAX, webcron, or the system's cron process. They may be chosen on the Admin page, under the Cron heading. AJAX checks for tasks each time an owncloud page is loaded. It is simple to use and requires no configuration, but it is not very reliable, since it depends on users logging into the web interface. Webcron relies on an outside service such as [http://www.easycron.com easycron.com] to load http://yourserver.tld/owncloud/cron.php on a schedule, roughly every 15 minutes. This is simple to configure, but can incur additional cost for the webcron service. Finally, the system's cron process runs commands on a specified schedule. It requires the most complex configuration, but does not rely on any external user or service.<br />
<br />
To set up regular maintenance using the system cron process, first select the Cron option under the Cron heading on the Admin page. Then, from the system shell, run the following commands:<br />
# export EDITOR=nano<br />
# crontab -u apache -e<br />
<br />
This will open the nano editor with a blank file. Enter the following:<br />
*/15 * * * * scl enable php56 'php -f /home/e-smith/files/ibays/owncloud/html/cron.php > /dev/null 2>&1'<br />
<br />
Then press Ctrl-X to exit, and Y to save. This command assumes you've installed the PHP software collections as described above, and that you've chosen to run your ownCloud installation with PHP 5.6. If you'd chosen PHP 5.5, it would read "scl enable php55" instead.<br />
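To keep the PHP collection name and the ibay path in one place, the crontab line can be built like this (a sketch; adjust both variables to your own setup, then paste the printed line into <code>crontab -u apache -e</code>):<br />

```shell
# Build the crontab entry from two variables so only they need changing.
PHPSCL=php56   # or php55, to match the PHP version chosen for the ibay
OCROOT=/home/e-smith/files/ibays/owncloud/html
LINE="*/15 * * * * scl enable $PHPSCL 'php -f $OCROOT/cron.php > /dev/null 2>&1'"
echo "$LINE"
```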
<br />
===Memory Caching===<br />
For improved performance, use of a memory cache for compiled PHP code and data is recommended. As of version 8.1, ownCloud will complain on the Admin page if you don't have this configured. Also, as of version 8.2, ownCloud implements transactional file locking, and it's preferred that this be managed with a memory cache as well. The only suitable memory cache system to support file locking is [http://redis.io/ Redis].<br />
<br />
====Redis====<br />
You'll need to install Redis, as well as its associated PHP module, configure your system to start Redis on startup, and change some configuration settings for both Redis and ownCloud.<br />
<br />
To install Redis, you'll need to have both the [[Remi]] and [[Epel]] repositories set up on your server. Then do<br />
# yum --enablerepo=remi,epel install redis php54-php-pecl-redis php55-php-pecl-redis php56-php-pecl-redis php71-php-pecl-redis php70-php-pecl-redis<br />
# config set redis service status enabled<br />
# cd /etc/rc7.d<br />
# ln -s /etc/rc.d/init.d/e-smith-service S80redis<br />
<br />
You'll need to adjust two entries in the Redis configuration file:<br />
# nano -w /etc/redis.conf<br />
<br />
Find the line that mentions "unixsocket", uncomment it and the following line, and edit them to appear as follows:<br />
unixsocket /var/run/redis/redis.sock<br />
unixsocketperm 777<br />
and add the www user to the redis group:<br />
usermod -a -G redis www<br />
Then start the redis server:<br />
# /etc/rc.d/init.d/redis start<br />
<br />
You'll now need to edit the ownCloud config file:<br />
# nano -w ~owncloud/../html/config/config.php<br />
<br />
Add the following lines:<br />
 'filelocking.enabled' => true,<br />
'memcache.locking' => '\OC\Memcache\Redis',<br />
'memcache.local' => '\OC\Memcache\Redis',<br />
'redis' => array(<br />
'host' => '/var/run/redis/redis.sock',<br />
'port' => 0,<br />
'timeout' => 0.0,<br />
),<br />
<br />
Save the file and exit nano. You should now be able to log in to your ownCloud installation as the admin user, and not see a warning message about memory cache or file locking.<br />
<br />
====APCu====<br />
If you don't want to use the memory cache for file locking, ownCloud supports a number of other cache backends, but the easiest to configure is said to be APCu. To configure this, you'll need to install the PHP extension, and activate it in your config.php file. To install the extension, run the following command:<br />
# yum --enablerepo=remi install php54-php-pecl-apcu php55-php-pecl-apcu php56-php-pecl-apcu<br />
<br />
If you've set up a system cron job for the ownCloud maintenance operations, you'll need to tell the cache to operate from the CLI as well as from the web. To do this, assuming you're using PHP 5.6 for ownCloud (change the version below as appropriate):<br />
# nano -w /opt/remi/php56/root/etc/php.d/40-apcu.ini<br />
<br />
Find the line with "apc.enable_cli", uncomment it (remove the semicolon), and change the value to 1. It should look like this:<br />
apc.enable_cli=1<br />
<br />
Then restart the web server:<br />
# /etc/init.d/httpd-e-smith restart<br />
<br />
The yum command above installs the extension for all PHP Software Collection versions. To activate it, add this line to the ownCloud config.php file:<br />
'memcache.local' => '\OC\Memcache\APCu',<br />
<br />
Further information about caching can be found in the [https://doc.owncloud.org/server/8.1/admin_manual/configuration_server/performance_tuning.html#caching ownCloud documentation].<br />
<br />
=== PHP opcache ===<br />
In Nextcloud, logged in as admin, go to Parameters => Basic parameters and check that Nextcloud does not complain about the following PHP opcache parameters:<syntaxhighlight><br />
opcache.enable=1<br />
opcache.enable_cli=1<br />
opcache.interned_strings_buffer=8<br />
opcache.max_accelerated_files=10000<br />
opcache.memory_consumption=128<br />
opcache.save_comments=1<br />
opcache.revalidate_freq=1<br />
</syntaxhighlight>If it does complain:<br />
<br />
==== In short ====<br />
* Edit ''/etc/opt/remi/php70/php.d/10-opcache.ini'' and make sure the parameters given above are set.<br />
* Restart php: signal-event php-update<br />
<br />
===Mozilla Sync===<br />
Mozilla Sync is an addon for OwnCloud that allows users to securely synchronize Firefox settings across multiple devices (tabs, bookmarks, history, saved passwords).<br />
<br />
The owncloud instructions for installing and configuring Mozilla Sync addon can be found here: https://github.com/owncloud/mozilla_sync/blob/master/INSTALL.md<br />
<br />
The Mozilla Sync addon is included by default as of OwnCloud 6 and simply needs to be 'enabled' by an admin. <br />
<br />
====Install Mozilla Sync in OwnCloud====<br />
These instructions assume that you have installed owncloud into an ibay named '''owncloud''':<br />
<br />
Using a zip archive:<br />
<nowiki>cd /home/e-smith/files/ibays/owncloud/files<br />
wget https://github.com/owncloud/mozilla_sync/archive/master.zip -O master.zip<br />
sudo -u www unzip master.zip<br />
mv mozilla_sync-master /home/e-smith/files/ibays/owncloud/html/apps/mozilla_sync</nowiki><br />
<br />
Using Git:<br />
<nowiki>cd /home/e-smith/files/ibays/owncloud/html/apps<br />
sudo -u www git clone https://github.com/owncloud/mozilla_sync.git<br />
cd mozilla_sync<br />
sudo -u www git checkout master</nowiki><br />
<br />
====Server settings to enable Mozilla Sync====<br />
Now, on the server:<br />
* login to your owncloud instance as an administrator<br />
* Open the 'Apps' page<br />
* 'Enable' the Mozilla Sync app<br />
* Verify that all 'Mozilla Sync' users have a unique email address configured in their user profile (Mozilla sync (in Firefox) authenticates using the user's email address).<br />
<br />
====Firefox settings to enable Mozilla Sync====<br />
Setup sync using the email address you verified above, using the password for the user that matches the email address. <br />
<br />
The address for your 'custom server' should look like:<br><br />
<nowiki>https://'''<path-to-your-owncloud>'''/remote.php/mozilla_sync/</nowiki><br />
<br />
==History and advanced topics==<br />
The history and some advanced topics can be viewed on these How-To page sections:<br />
*[http://wiki.contribs.org/index.php?title=OwnCloud&action=history History of this How-To revisions]<br />
*[http://wiki.contribs.org/Talk:OwnCloud Discussion/Talk page about this How-To]<br />
<br />
==References==<br />
* [https://plus.google.com/101306541175806407286/posts/TD8bGMsEqGL User-Guide for Installing OwnCloud on SME 8.0]<br />
==Interesting relations to other Contribs or How-to's==<br />
* [https://forums.zarafa.com/showthread.php?7866-A-simple-owncloud-app-for-integrating-zarafa-auth-and-webapp(link-to-source-inside) ownCloud Attachments integration] for [http://wiki.contribs.org/Zarafa Zarafa]<br />
* Mounting a ownCloud space as a '''[[Davfs2|local file system]]'''<br />
----<br />
[[Category:Howto]]<br />
[[Category: Backup]]<br />
[[Category: Webapps]]</div>Arnaudhttps://wiki.koozali.org/index.php?title=Client_Authentication:Fedora&diff=34042Client Authentication:Fedora2017-10-29T07:22:02Z<p>Arnaud: /* Automount Ibays at Login */</p>
<hr />
<div><br />
{{Warning box|This is based upon limited testing and a small number of users.}}<br />
==Client Configuration==<br />
===Introduction===<br />
The following describes the Fedora 21 (F21) standard Gnome edition desktop configuration for SME Server 9 authentication using Samba and Winbind. It allows login via the standard Fedora login screen. It is also suitable for Fedora 19 and 20 (F19 and F20) with SME Server 8; note that the Firewall and SELinux Administration GUIs may be slightly different.<br />
===Install Fedora===<br />
*Download the Fedora .iso and install. During the install process change the hostname to something of your choice and your domain name.<br />
<HOSTNAME>.<yourdomain>.<yourtld><br />
{{Tip box| Make sure you set the <HOSTNAME> to something less than 15 characters.<br />
The hostname can be set during the Installation Summary section of the install procedure by selecting Network & Hostname.<br />
<br />
When creating a user account, use a name that is not an SME Server user, such as 'administrator', as this first user effectively becomes a local user for Gnome login. Root is not allowed to log in at the Gnome GDM prompt. You can log in as this user, open the Terminal (cli) and 'su' to root to carry out most of the authentication setup later.}}<br />
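The 15-character limit in the tip above comes from NetBIOS naming, and is easy to check before joining the domain. A minimal sketch; "fedora21" is a placeholder for your chosen short hostname:<br />

```shell
# Check that the short hostname fits within the NetBIOS limit noted above.
SHORTNAME=fedora21   # placeholder; use your own <HOSTNAME>
if [ ${#SHORTNAME} -lt 15 ]; then
  echo "hostname length ok"
else
  echo "hostname too long for NetBIOS"
fi
```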
*When the install has finished, remove the media and reboot.<br />
*Complete the install, login and apply all updates. Logout and Restart.<br />
{{Note box| There may be a lot of updates, it is recommended to apply them all but ensure the security fixes are applied as a minimum.}}<br />
===Additional Packages===<br />
*Open the Terminal and use "su" to log in as root user.<br />
*Use "yum" at the Terminal to install the additional packages.<br />
*If you prefer to use a graphical package manager, install "Yum Extender" from the "Software" application.<br />
*The following shows how to install using yum at the Terminal, the package names are the same if you use the gui. Note: Firewall-config is already installed on F19 & F20.<br />
yum install \<br />
pam_mount policycoreutils-gui authconfig-gtk \<br />
samba samba-winbind samba-winbind-clients \<br />
system-config-samba firewall-config<br />
<br />
===Package Removal===<br />
*Remove the following package (F21 only)<br />
yum remove sssd-libwbclient<br />
<br />
===Firewall Modifications===<br />
*Search for and open “Firewall” and tick<br />
samba<br />
samba-client<br />
as trusted services. Do not forget to select “Permanent” in the configuration drop down box first otherwise the changes will apply to the current session only.<br />
===SELinux Administration===<br />
*Search for and open “SELinux Management” - note that the screen which opens is titled “SELinux Administration” (not Management)<br />
*On the "Status" menu select the "System Default Enforcing Mode" to "Disabled".<br />
===Samba Modifications===<br />
At the Terminal and still as root user, run the following two commands.<br />
systemctl enable smb.service<br />
systemctl start smb.service<br />
*Search for and open “Authentication”. This will open the Authentication Configuration dialogue.<br />
{{Tip box| Do not press the “Join Domain” button until you have completed the changes to all of the Authentication Configuration dialogue boxes as detailed below.}}<br />
*On the Identity & Authentication tab select Winbind as the User Account Database.<br />
*Complete the dialogue box as follows:<br />
Winbind Domain - this is the Windows Workgroup name for your SME Server<br />
Security Model - set this to domain<br />
Winbind Domain Controllers - this is the IP Address of your SME Server<br />
Template shell - set this to /bin/bash<br />
Allow offline login - tick<br />
*On the Advanced Options tab<br />
Enable local access control - tick<br />
Password hashing Algorithm - MD5<br />
Create home directories on first login - tick<br />
*Password options tab.<br />
 No configuration changes on this tab are necessary<br />
*Change back to the Identity & Authentication tab.<br />
*Click on “Join Domain” button and save changes when prompted. Enter 'admin' as the Domain Administrator and then enter your SME Server 'admin' password. Click “OK”, then click “Apply” on the Identity & Authentication tab.<br />
*Open the Terminal and 'su' to root if not already done.<br />
*Open and edit /etc/samba/smb.conf. Under [global] there will be a section commented as having been generated by authconfig. Check this section is as below, some lines may not exist and may need to be added.<br />
:Replace <WORKGROUP> below with the 'Windows Workgroup' name of your SME Server.<br />
:Replace <ip of sme server> below with the ip address of your SME Server.<br />
[global] <br />
#--authconfig--start-line-- <br />
<br />
# Generated by authconfig on 2013/08/10 15:16:23 <br />
# DO NOT EDIT THIS SECTION (delimited by --start-line--/--end-line--) <br />
# Any modification may be deleted or altered by authconfig in future <br />
<br />
workgroup = <WORKGROUP> <br />
# password server = <ip of sme server> (not required – leave as is but comment out) <br />
security = domain <br />
idmap config * : backend = tdb (add this line)<br />
idmap config * : range = 16777216-33554431 (leave whatever range is set by default)<br />
 idmap config DOMAIN : backend = rid (add this line)<br />
idmap config DOMAIN : range = 10000-49999 (add this line)<br />
idmap config DOMAIN : base_rid = 1000 (add this line)<br />
template shell = /bin/bash<br />
# kerberos method = secrets only (comment out if this line exists)<br />
winbind use default domain = yes (change this from false)<br />
winbind offline logon = true <br />
wins server = <ip of sme server> (add all of the following lines)<br />
name resolve order = wins host lmhosts bcast <br />
socket options = TCP_NODELAY <br />
template homedir = /home/%D/%U <br />
winbind enum users = yes <br />
winbind enum groups = yes <br />
winbind cache time = 10 <br />
obey pam restrictions = yes <br />
pam password change = yes <br />
hostname lookups = yes <br />
<br />
#--authconfig--end-line--<br />
<br />
{{Note box| If you run the 'Authentication' tool again your amendments will be lost}}<br />
*To check validation of smb.conf, run<br />
testparm<br />
*The 'Join Domain' above should also have worked so to list users, groups and available shares respectively from the SME server, test with <br />
wbinfo -u<br />
wbinfo -g<br />
smbtree<br />
:If it doesn't appear to have worked then run<br />
net rpc join -D <WORKGROUP> -U admin<br />
:Enter the admin password for the SME server when prompted and you should get a message,<br />
Joined domain <WORKGROUP><br />
===Authentication Modifications===<br />
{{Warning box| Altering the pam system authentication files can seriously affect your ability to log in to the system. Take a backup of the /etc/pam.d directory and /etc/nsswitch.conf. Have a live DVD available to give access and re-apply the backup files if you make a mistake and/or get locked out}}<br />
*Open, check and edit as necessary /etc/nsswitch.conf. On F21 the following 4 lines will require amendment, on F19 and F20 only line 4 should require amendment. Close and Save.<br />
passwd: files winbind <br />
shadow: files winbind<br />
group: files winbind<br />
hosts: files dns wins (ensure the order is correct – put wins at the end)<br />
*Open and edit the /etc/pam.d/system-auth file, and amend as below:<br />
#%PAM-1.0 <br />
# This file is auto-generated. <br />
# User changes will be destroyed the next time authconfig is run. <br />
auth required pam_env.so <br />
auth sufficient pam_fprintd.so <br />
auth sufficient pam_unix.so nullok try_first_pass <br />
auth requisite pam_succeed_if.so uid >= 1000 quiet_success <br />
auth sufficient pam_winbind.so cached_login use_first_pass <br />
auth optional pam_mount.so enable_pam_password <br />
auth required pam_deny.so <br />
<br />
account required pam_access.so <br />
account required pam_unix.so broken_shadow <br />
account sufficient pam_localuser.so <br />
account sufficient pam_succeed_if.so uid < 1000 quiet <br />
account [default=bad success=ok user_unknown=ignore] pam_winbind.so cached_login <br />
account required pam_permit.so <br />
<br />
password requisite pam_pwquality.so try_first_pass retry=3 type= <br />
password sufficient pam_unix.so md5 shadow nullok try_first_pass use_authtok <br />
password sufficient pam_winbind.so use_authtok <br />
password required pam_deny.so <br />
<br />
session optional pam_keyinit.so revoke <br />
session required pam_limits.so <br />
-session optional pam_systemd.so <br />
session optional pam_mkhomedir.so <br />
session [success=1 default=ignore] pam_succeed_if.so service in crond quiet use_uid <br />
session required pam_unix.so <br />
session optional pam_winbind.so cached_login <br />
session optional pam_mount.so enable_pam_password<br />
{{Note box| The following two lines were added to the system-auth file:<br />
auth optional pam_mount.so enable_pam_password<br />
<br />
session optional pam_mount.so enable_pam_password}}<br />
*Open and edit the /etc/pam.d/password-auth file, and amend as below:<br />
#%PAM-1.0 <br />
# This file is auto-generated. <br />
# User changes will be destroyed the next time authconfig is run. <br />
auth required pam_env.so <br />
auth sufficient pam_unix.so nullok try_first_pass <br />
auth requisite pam_succeed_if.so uid >= 1000 quiet_success <br />
auth sufficient pam_winbind.so cached_login use_first_pass <br />
[[Category:Administration]]</div>Arnaudhttps://wiki.koozali.org/index.php?title=Client_Authentication:Fedora&diff=34041Client Authentication:Fedora2017-10-29T07:21:32Z<p>Arnaud: /* Automount User Home Directories at Login */</p>
<hr />
<div><br />
{{Warning box|This is based upon limited testing and a small number of users.}}<br />
==Client Configuration==<br />
===Introduction===<br />
The following describes a Fedora 21 (F21) standard Gnome edition desktop configuration for SME Server 9 authentication using Samba and Winbind. It allows login via the standard Fedora login screen. It is also suitable for Fedora 19 and 20 (F19 and F20) with SME Server 8 - note that the Firewall and SELinux Administration GUIs may be slightly different.<br />
===Install Fedora===<br />
*Download the Fedora .iso and install. During the install process change the hostname to something of your choice and your domain name.<br />
<HOSTNAME>.<yourdomain>.<yourtld><br />
{{Tip box| Make sure you set the <HOSTNAME> to something less than 15 characters.<br />
The hostname can be set during the Installation Summary section of the install procedure by selecting Network & Hostname.<br />
<br />
When creating a user account, use a non-SME Server username such as 'administrator', as this first user effectively becomes a local user for Gnome login. Root is not allowed to log in at the Gnome GDM prompt. You can log in as this user, open the Terminal (cli) and 'su' to root to carry out most of the authentication setup later.}}<br />
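If the hostname needs to be changed after installation, it can also be set from the Terminal (a sketch for systemd-based Fedora releases; the name shown is only an example):

```shell
# Set a fully qualified hostname; keep the short host part under 15 characters
hostnamectl set-hostname fedoraclient.yourdomain.xxx
# Confirm the change
hostnamectl status
```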
*When the install has finished, remove the media and reboot.<br />
*Complete the install, login and apply all updates. Logout and Restart.<br />
{{Note box| There may be a lot of updates, it is recommended to apply them all but ensure the security fixes are applied as a minimum.}}<br />
===Additional Packages===<br />
*Open the Terminal and use "su" to log in as root user.<br />
*Use "yum" at the Terminal to install the additional packages.<br />
*If you prefer to use a graphical package manager then install the "Yum Extender" from the "Software" application.<br />
*The following shows how to install using yum at the Terminal; the package names are the same if you use the GUI. Note: firewall-config is already installed on F19 & F20.<br />
yum install \<br />
pam_mount policycoreutils-gui authconfig-gtk \<br />
samba samba-winbind samba-winbind-clients \<br />
system-config-samba firewall-config<br />
<br />
===Package Removal===<br />
*Remove the following package (F21 only)<br />
yum remove sssd-libwbclient<br />
<br />
===Firewall Modifications===<br />
*Search for and open “Firewall” and tick<br />
samba<br />
samba-client<br />
as trusted services. Do not forget to select “Permanent” in the configuration drop-down box first, otherwise the changes will apply to the current session only.<br />
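The same trusted services can be added from the Terminal instead of the GUI (a sketch assuming the firewalld command line tool is installed; run as root):

```shell
# Add the samba services permanently, then reload so they apply immediately
firewall-cmd --permanent --add-service=samba
firewall-cmd --permanent --add-service=samba-client
firewall-cmd --reload
# Verify the active services
firewall-cmd --list-services
```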
===SELinux Administration===<br />
*Search for and open “SELinux Management” - note that the screen which opens is titled “SELinux Administration” (not Management)<br />
*On the "Status" menu select the "System Default Enforcing Mode" to "Disabled".<br />
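Alternatively, SELinux can be disabled from the Terminal (a sketch assuming the standard /etc/selinux/config layout; a reboot is required for 'disabled' to fully take effect):

```shell
# Persistently disable SELinux (fully effective after the next reboot)
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
# Switch to permissive mode for the current session
setenforce 0
# Check the current mode
getenforce
```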
===Samba Modifications===<br />
At the Terminal and still as root user, run the following two commands.<br />
systemctl enable smb.service<br />
systemctl start smb.service<br />
*Search for and open “Authentication”. This will open the Authentication Configuration dialogue.<br />
{{Tip box| Do not press the “Join Domain” button until you have completed the changes to all of the Authentication Configuration dialogue boxes as detailed below.}}<br />
*On the Identity & Authentication tab select Winbind as the User Account Database.<br />
*Complete the dialogue box as follows:<br />
Winbind Domain - this is the Windows Workgroup name for your SME Server<br />
Security Model - set this to domain<br />
Winbind Domain Controllers - this is the IP Address of your SME Server<br />
Template shell - set this to /bin/bash<br />
Allow offline login - tick<br />
*On the Advanced Options tab<br />
Enable local access control - tick<br />
Password hashing Algorithm - MD5<br />
Create home directories on first login - tick<br />
*Password options tab.<br />
No configuration changes on this tab are necessary <br />
*Change back to the Identity & Authentication tab.<br />
*Click on “Join Domain” button and save changes when prompted. Enter 'admin' as the Domain Administrator and then enter your SME Server 'admin' password. Click “OK”, then click “Apply” on the Identity & Authentication tab.<br />
*Open the Terminal and 'su' to root if not already done.<br />
*Open and edit /etc/samba/smb.conf. Under [global] there will be a section commented as having been generated by authconfig. Check this section is as below, some lines may not exist and may need to be added.<br />
:Replace <WORKGROUP> below with the 'Windows Workgroup' name of your SME Server.<br />
:Replace <ip of sme server> below with the ip address of your SME Server.<br />
[global] <br />
#--authconfig--start-line-- <br />
<br />
# Generated by authconfig on 2013/08/10 15:16:23 <br />
# DO NOT EDIT THIS SECTION (delimited by --start-line--/--end-line--) <br />
# Any modification may be deleted or altered by authconfig in future <br />
<br />
workgroup = <WORKGROUP> <br />
# password server = <ip of sme server> (not required – leave as is but comment out) <br />
security = domain <br />
idmap config * : backend = tdb (add this line)<br />
idmap config * : range = 16777216-33554431 (leave whatever range is set by default)<br />
idmap config DOMAIN : backend = rid (add this line)<br />
idmap config DOMAIN : range = 10000-49999 (add this line)<br />
idmap config DOMAIN : base_rid = 1000 (add this line)<br />
template shell = /bin/bash<br />
# kerberos method = secrets only (comment out if this line exists)<br />
winbind use default domain = yes (change this from false)<br />
winbind offline logon = true <br />
wins server = <ip of sme server> (add all of the following lines)<br />
name resolve order = wins host lmhosts bcast <br />
socket options = TCP_NODELAY <br />
template homedir = /home/%D/%U <br />
winbind enum users = yes <br />
winbind enum groups = yes <br />
winbind cache time = 10 <br />
obey pam restrictions = yes <br />
pam password change = yes <br />
hostname lookups = yes <br />
<br />
#--authconfig--end-line--<br />
<br />
{{Note box| If you run the 'Authentication' tool again your amendments will be lost}}<br />
*To check validation of smb.conf, run<br />
testparm<br />
*The 'Join Domain' above should also have worked. To list users, groups and available shares respectively from the SME server, test with <br />
wbinfo -u<br />
wbinfo -g<br />
smbtree<br />
:If it doesn't appear to have worked then run<br />
net rpc join -D <WORKGROUP> -U admin<br />
:Enter the admin password for the SME server when prompted and you should get a message,<br />
Joined domain <WORKGROUP><br />
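As an extra check that the domain join succeeded, the workstation trust secret can be tested (wbinfo is part of the samba-winbind-clients package installed earlier):

```shell
# Test the machine account trust secret against the SME Server
wbinfo -t
# Show the winbind view of the joined domain
wbinfo --own-domain
```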
===Authentication Modifications===<br />
{{Warning box| Altering the pam system authentication files can seriously affect your ability to log in to the system. Take a backup of the /etc/pam.d directory and /etc/nsswitch.conf. Have a live DVD available to give access and re-apply the backup files if you make a mistake and/or get locked out}}<br />
*Open, check and edit as necessary /etc/nsswitch.conf. On F21 the following 4 lines will require amendment; on F19 and F20 only the fourth line (hosts) should require amendment. Close and Save.<br />
passwd: files winbind <br />
shadow: files winbind<br />
group: files winbind<br />
hosts: files dns wins (ensure the order is correct – put wins at the end)<br />
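Once nsswitch.conf includes winbind, domain accounts should be visible to the standard name service tools (the smb.conf above enables winbind enumeration). A quick sanity check, where the accounts shown will be those of your SME Server:

```shell
# Domain users and groups should now appear alongside local ones
getent passwd | tail
getent group | tail
```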
*Open and edit the /etc/pam.d/system-auth file, and amend as below:<br />
#%PAM-1.0 <br />
# This file is auto-generated. <br />
# User changes will be destroyed the next time authconfig is run. <br />
auth required pam_env.so <br />
auth sufficient pam_fprintd.so <br />
auth sufficient pam_unix.so nullok try_first_pass <br />
auth requisite pam_succeed_if.so uid >= 1000 quiet_success <br />
auth sufficient pam_winbind.so cached_login use_first_pass <br />
auth optional pam_mount.so enable_pam_password <br />
auth required pam_deny.so <br />
<br />
account required pam_access.so <br />
account required pam_unix.so broken_shadow <br />
account sufficient pam_localuser.so <br />
account sufficient pam_succeed_if.so uid < 1000 quiet <br />
account [default=bad success=ok user_unknown=ignore] pam_winbind.so cached_login <br />
account required pam_permit.so <br />
<br />
password requisite pam_pwquality.so try_first_pass retry=3 type= <br />
password sufficient pam_unix.so md5 shadow nullok try_first_pass use_authtok <br />
password sufficient pam_winbind.so use_authtok <br />
password required pam_deny.so <br />
<br />
session optional pam_keyinit.so revoke <br />
session required pam_limits.so <br />
-session optional pam_systemd.so <br />
session optional pam_mkhomedir.so <br />
session [success=1 default=ignore] pam_succeed_if.so service in crond quiet use_uid <br />
session required pam_unix.so <br />
session optional pam_winbind.so cached_login <br />
session optional pam_mount.so enable_pam_password<br />
{{Note box| The following two lines were added to the system-auth file:<br />
auth optional pam_mount.so enable_pam_password<br />
<br />
session optional pam_mount.so enable_pam_password}}<br />
*Open and edit the /etc/pam.d/password-auth file, and amend as below:<br />
#%PAM-1.0 <br />
# This file is auto-generated. <br />
# User changes will be destroyed the next time authconfig is run. <br />
auth required pam_env.so <br />
auth sufficient pam_unix.so nullok try_first_pass <br />
auth requisite pam_succeed_if.so uid >= 1000 quiet_success <br />
auth sufficient pam_winbind.so cached_login use_first_pass <br />
auth optional pam_mount.so enable_pam_password <br />
auth required pam_deny.so <br />
<br />
account required pam_access.so <br />
account required pam_unix.so broken_shadow <br />
account sufficient pam_localuser.so <br />
account sufficient pam_succeed_if.so uid < 1000 quiet <br />
account [default=bad success=ok user_unknown=ignore] pam_winbind.so cached_login <br />
account required pam_permit.so <br />
<br />
password requisite pam_pwquality.so try_first_pass retry=3 type= <br />
password sufficient pam_unix.so md5 shadow nullok try_first_pass use_authtok <br />
password sufficient pam_winbind.so use_authtok <br />
password required pam_deny.so <br />
<br />
session optional pam_keyinit.so revoke <br />
session required pam_limits.so <br />
-session optional pam_systemd.so <br />
session optional pam_mkhomedir.so <br />
session [success=1 default=ignore] pam_succeed_if.so service in crond quiet use_uid <br />
session required pam_unix.so <br />
session optional pam_winbind.so cached_login <br />
session optional pam_mount.so enable_pam_password<br />
{{Note box| The following two lines were added to the password-auth file:<br />
auth optional pam_mount.so enable_pam_password<br />
<br />
session optional pam_mount.so enable_pam_password}}<br />
*Open and edit the /etc/pam.d/gdm-password file, and amend as below:<br />
auth [success=done ignore=ignore default=bad] pam_selinux_permit.so <br />
auth substack password-auth <br />
auth optional pam_gnome_keyring.so <br />
auth optional pam_mount.so <br />
auth include postlogin <br />
<br />
account required pam_nologin.so <br />
account include password-auth <br />
<br />
password include password-auth <br />
<br />
session required pam_selinux.so close <br />
session required pam_loginuid.so <br />
session optional pam_console.so <br />
-session optional pam_ck_connector.so <br />
session required pam_selinux.so open <br />
session optional pam_keyinit.so force revoke <br />
session required pam_namespace.so <br />
session include password-auth <br />
session optional pam_gnome_keyring.so auto_start <br />
session include postlogin <br />
session optional pam_mount.so<br />
{{Note box| The following two lines were added to the gdm-password file:<br />
auth optional pam_mount.so<br />
<br />
session optional pam_mount.so}}<br />
<br />
===Automount User Home Directories at Login===<br />
*Create a new group in SME Server with a Group Name of “nethome” and a Description of “nethome-group”. Add all SME Server users to this group, or at least all SME Server users who will be using the SME Server to authenticate a Fedora client workstation.<br />
{{Note box| The names “nethome” and “nethome-group” can, of course, be anything you like; these are just examples for the purpose of this HowTo. They are, however, a sensible choice as we are going to use a mount point called “nethome”, but again this mount point name can be anything you want.}}<br />
*Open and edit /etc/security/pam_mount.conf.xml file. Find the 'Volume Definitions' section. Add a volume line below the header<br />
<nowiki><!-- Volume Definitions --> </nowiki><br />
<volume sgrp="nethome-group" fstype="cifs" server="<SMESERVER>" path="homes" mountpoint="~/nethome" options="nosuid,nodev,vers=1.0" /><br />
*Replace <SMESERVER> above with the samba name of your SME Server. This will mount the user's 'home' directory from the SME Server into a directory called 'nethome' in their local home directory.<br />
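After a domain user logs in, the mount can be verified from a Terminal (a sketch; 'nethome' is the mount point chosen above):

```shell
# List active CIFS mounts - the SME Server 'homes' share should appear
mount -t cifs
# The user's SME Server home directory content should be visible here
ls ~/nethome
```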
<br />
===Automount Using smeserver-tw-logonscript===<br />
<br />
The [[Smeserver-tw-logonscript]] package provides a convenient and flexible alternative to managing mounts for user home directories and i-bay directories. <br />
<br />
Instead of the hardcoded lines described above, it auto-generates a small user-specific script when the user logs in and then links pam_mount to this user script.<br />
<br />
=== Automount Ibays at Login===<br />
*Edit /etc/security/pam_mount.conf.xml and add a line below the header <br />
<nowiki><!-- Volume Definitions --> </nowiki><br />
<volume sgrp="<GROUPNAME>" fstype="cifs" server="<SMESERVER>" path="<IBAYNAME>" mountpoint="~/<IBAYNAME>" options="user=%(DOMAIN_USER),setuids,acl,vers=1.0" /><br />
*Replace <SMESERVER> with the samba name of your SME server, <IBAYNAME> with the ibay name, <GROUPNAME> with the '''[[description]]''' of the ibay owner group. The description can be recovered with<br />
wbinfo -g<br />
{{Note box| The sgrp param is optional. If used, the ibay will be mounted only if %(DOMAIN_USER) is a member of the ibay's owner group}}<br />
===Login and Test===<br />
*Exit the Terminal cli<br />
*Logout, and Restart Fedora. <br />
*Login as a valid SME server user on your system, just giving username and password. No need for DOMAIN\user, as samba is configured above to use the default Windows Workgroup<br />
*Authentication against SME Server should proceed and the user log in. A home directory on the local machine should be created as /home/DOMAIN/user, with a subdirectory of it called 'nethome' mounted to the user's home directory on the SME server.<br />
===Password Changes===<br />
User password changes made through the web browser (www.yourdomain.xxx/user-password) are implemented correctly. The new password is also recognised when logging in away from the SME Server network (i.e. off-line cached login), which is particularly useful for business laptops.<br />
----<br />
[[Category:Howto]]<br />
[[Category:Administration]]</div>Arnaudhttps://wiki.koozali.org/index.php?title=Affa&diff=33106Affa2017-04-03T19:58:14Z<p>Arnaud: /* Restore from USB drive on new server */</p>
<hr />
<div>{{Languages|Affa}}<br />
{{Level|Advanced}}<br />
{{usefulnote}}<br />
<br />
<br />
=== Maintainer ===<br />
Maintainer(s) Affa3: Arnaud, stephdl<br><br />
Development of Affa2 for SME Server by the original maintainer has been discontinued.<br />
<br />
=== Version ===<br />
{{ #smeversion: affa}}<br />
{{ #smeversion: smeserver-affa}}<br />
<br />
<br />
=== Description ===<br />
The main purpose of this Affa package is to make an SME Server a dedicated backup box in a few minutes. Affa backs up as many SME servers as you like, or any other servers which have sshd running and rsync installed. Once it is configured, Affa runs reliably unattended and sends warning messages in case of any errors. <br />
<br />
All backup archives are full backups, as Affa makes use of the hardlink technique. Therefore a new full backup only needs disk space for the differences plus the filesystem overhead for the hardlinks and directories (which is typically 2-3%).<br />
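The hardlink technique mentioned above can be illustrated with plain shell commands (a toy sketch using hypothetical paths, not Affa itself): a hardlinked copy of a backup tree shares inodes with the original, so unchanged files consume no additional data blocks.

```shell
# Create a toy "backup generation" and a hardlinked copy of it
mkdir -p /tmp/affa-demo/backup.0
echo "data" > /tmp/affa-demo/backup.0/file
cp -al /tmp/affa-demo/backup.0 /tmp/affa-demo/backup.1
# Both directory entries point at the same inode, i.e. no extra disk space
stat -c %i /tmp/affa-demo/backup.0/file /tmp/affa-demo/backup.1/file
```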
<br />
Affa is based on the rsync program and supports the rsync --compress option. This allows you to run backups over the internet or VPN. A typical setup is one or more Affa backup servers placed in different locations, which backup the production server(s) over the VPN.<br />
<br />
A special feature is the rise option, which allows you to rise the backup server to your production server from a backup archive in case of a total loss of your production server. The rise is executed within an extremely short time, even with huge amounts of data. The rise feature uses hardlinks and therefore does not use up additional disk space.<br />
<br />
The rise feature can also be used to upgrade from a lower version to a higher version of SME. This also works from SME 8.1 to 9. For this to work you need 2 separate installations of SME. One is the actual running server and the other is an Affa backup server with a newer version of SME Server, on which you execute the rise command. See more here: [[Moving SME to new Hardware]]<br />
<br />
A simple way to run a server with maximum uptime is to duplicate the hardware, e.g. two separate hardware boxes or virtual servers on separate hardware. One is the actual server and the other is a backup system that, in case of problems, can be promoted to the actual server with the rise command in a very short time. This also allows you to upgrade the SME Server software with minimum downtime!<br />
<br />
Affa is a command line tool for system administrators and is intentionally designed without a GUI. Therefore it can be efficiently managed on the console and over slow internet connections.<br />
<br />
'''Note:''' This document also refers to Affa version 2, which is not maintained anymore. Information on Affa 3 will be gradually added here once it has been more thoroughly tested.<br />
<br />
==== Affa features at a glance ====<br />
* Affa is secure: All transfers and connections are made by using the ssh protocol with public/private key authentication<br />
* Periodically runs unattended full backups. Only differences to the previous full backup are transferred over the network<br />
* Using rsync with optional bandwidth limit and compression allows backups over the internet<br />
* Uses hardlink technique, i.e. physical disk space only needed for the differences between two full backups<br />
* Keeps a configurable number of scheduled, daily, weekly, monthly and yearly full backup archives<br />
* The archives are browseable. No need to unpack them first.<br />
* Archives can be mapped to a Samba share.<br />
* Backup jobs are started by the cron daemon<br />
* Interrupted jobs continue with already transferred data when restarted<br />
* Backs up the default e-smith directories and files when the SMEServer property is set to yes<br />
* Additional directories and files can be included<br />
* Directories and files can be excluded from the backup<br />
* Non-SME server linux systems can be backed up by setting the SMEServer property to no and using an include list<br />
* In ESXi mode, running virtual machines can be backed up. See [[Backup of ESXi Virtual Machines using Affa]]<br />
* Configurable nice level for rsync processes on the backup and source server<br />
* Optional run of custom programs before and after a job run (e.g. running tape backup)<br />
* Checks the disk space left after a job run with warning levels strict, normal or risky<br />
* Extensive checking of failure conditions<br />
* Sends failure messages to a configurable list of email addresses<br />
* Sends a warning message, if the backup server runs out of disk space<br />
* Installs an optional watchdog on the source server in case the backupserver fails (SME Server only)<br />
* Watchdog sends warning, if an expected backup did not run (SME Server only)<br />
* Watchdog sends a daily reminder message, if the error continues unchecked (SME Server only)<br />
* Option to display current status of all jobs showing times of last and next run, size and disk usage<br />
* Status can be mailed on a daily, weekly or monthly schedule<br />
* Option to display all existing archives of a job, showing date, number of files, size and bytes transferred from the source<br />
* Option to send (and revoke) the public key to the source server (SME Server and ESXi only)<br />
* Option to rise the backup server to a production server from a backup (SME Server only)<br />
* The rise feature does not physically move data and therefore is extremely fast and needs (almost) no extra disk space<br />
* Rise option can be run remotely as the NIC driver configuration of the backup server is preserved<br />
* Compares installed RPMs on source with backup server. Sends warning message, if not in sync<br />
* Undo rise option to restore the backup server<br />
* Configurable via a e-smith style db, with one record for each job and a default record for all jobs<br />
* Logs to /var/log/affa/JOB.log and /var/log/affa/affa.log with optional debug switch for higher verbosity<br />
* Log files are rotated weekly, with 5 logs kept<br />
<br />
<div style="text-align:left" class="mw-collapsible mw-collapsed" data-collapsetext="less &uarr;" data-expandtext="Obsolete Affa 2 instructions &darr;"><br />
<br />
=== Installation or Update of Affa 2 ===<br />
<br />
====Smecontribs for sme8====<br />
yum install --enablerepo=smecontribs smeserver-affa<br />
<br />
The latest test version can be found here :<br />
[http://www.reetspetit.com/smeserver/5/noarch/smeserver-affa-2.0.0-rc6.el5.sme.noarch.rpm smeserver-affa-2.0.0-rc6.el5.sme.noarch.rpm] Note this is NOT signed.<br />
<br />
Please post bug reports in the [http://bugs.contribs.org bug tracker]<br />
<br />
When you have installed Affa for the first time run the following command to initialize the Affa database<br />
<br />
affa --make-cronjobs<br />
and logout and re-login to the console for the bash auto-completion (TAB key) to take effect.<br />
====Obsolete way to install====<br />
{{Note box|SME7 is now EOL and has been moved to the obsolete tree}}<br />
SME7 version: '''smeserver-affa-2.0.0-rc4.noarch.rpm'''<br />
wget <nowiki>http://mirror.contribs.org/smeserver/contribs/michaelw/sme7/Affa2/smeserver-affa-2.0.0-rc4.noarch.rpm</nowiki><br />
wget <nowiki>http://mirror.contribs.org/smeserver/contribs/michaelw/sme7/Affa2/perl-Compress-Bzip2-2.09-1.2.el4.rf.i386.rpm</nowiki><br />
/usr/bin/yum --enablerepo=smecontribs localinstall \<br />
smeserver-affa-2.0.0-rc4.noarch.rpm \<br />
perl-Compress-Bzip2-2.09-1.2.el4.rf.i386.rpm<br />
<br />
SME8 version: SME8 is based on CentOS5, therefore use the perl-Compress-Bzip2 el5 rpm: <br />
wget <nowiki>http://mirror.contribs.org/releases/8/smecontribs/i386/repoview/smeserver-affa.html</nowiki><br />
For SME8 32bit:<br />
wget <nowiki>http://apt.sw.be/redhat/el5/en/i386/rpmforge/RPMS/perl-Compress-Bzip2-2.09-1.2.el5.rf.i386.rpm</nowiki><br />
/usr/bin/yum --enablerepo=smecontribs localinstall \<br />
smeserver-affa-2.0.0-rc4.noarch.rpm \<br />
perl-Compress-Bzip2-2.09-1.2.el5.rf.i386.rpm<br />
For SME8 64bit:<br />
wget http://apt.sw.be/redhat/el5/en/x86_64/rpmforge/RPMS/perl-Compress-Bzip2-2.09-1.2.el5.rf.x86_64.rpm<br />
/usr/bin/yum --enablerepo=smecontribs localinstall \<br />
smeserver-affa-2.0.0-rc4.noarch.rpm \<br />
perl-Compress-Bzip2-2.09-1.2.el5.rf.x86_64.rpm<br />
<br />
When you have installed Affa for the first time run the following command to initialize the Affa database<br />
affa --make-cronjobs<br />
and logout and re-login to the console for the bash auto-completion (TAB key) to take effect.<br />
<br />
==== Quick start example for Affa 2 ====<br />
You have an SME 7 production server with hostname 'prodbox' and IP 10.200.48.1.<br><br />
Set up a second SME 7 box as your backup server with hostname 'affabox' and IP 10.200.48.2. <br />
<br />
<ol><li>log into the 'affabox' and install the packages as described above.<br />
</li><li>copy the config helper script sample<br />
cp /usr/lib/affa/jobconfig-sample.pl /root/prodbox-job.pl<br />
</li><li>edit /root/prodbox-job.pl and set<br />
my $jobname='prodbox';<br />
and<br />
'remoteHostName'=>'10.200.48.1',<br />
optionally, if the remote server port is configured to e.g. 2222 then set<br />
'sshPort'=>2222,<br />
</li><li>save and run the 'prodbox-job.pl' file. This will write the configuration (this makes the database entries and sets up the cronjobs)<br />
/root/prodbox-job.pl<br />
</li><li>generate the DSA keys and send the public key to the 'prodbox'<br />
{{Note box|When initially doing this step, you will need to temporarily enable "Allow secure shell access using standard passwords" on the production server}}<br />
affa --send-key prodbox<br />
</li><li>run the job manually<br />
<br />
affa --run prodbox<br />
</li></ol><br />
<br />
==== Configuration ====<br />
The configuration is stored in an e-smith style database. Use the db command to configure Affa.<br />
The jobname is the record key with the type 'job'.<br>To setup a new job with the name 'prodbox' enter:<br />
<br />
db affa set prodbox job<br />
<br />
then set the properties<br />
<br />
db affa setprop prodbox remoteHostName 192.168.1.1<br />
db affa setprop prodbox TimeSchedule '0030,0730,1130,1330,1730,2030'<br />
db affa setprop prodbox Description 'My Production Server'<br />
db affa setprop prodbox status enabled<br />
and so on...<br />
<br />
Alternatively you can use a script as described above in the 'Quick start' chapter.<br />
<br />
To verify your work, type:<br />
<br />
db affa show prodbox<br />
<br />
Finally set up the cronjobs:<br />
affa --make-cronjobs<br />
<br />
</div><br />
<br />
<br />
=== Installation of Affa 3 ===<br />
The following instructions assume that a fresh Affa 3 installation is made on a fresh dedicated SME server which serves as a dedicated backup server ('affabox'). No jobs are migrated from Affa 2. If using an existing SME server, just skip the instructions on how to set up a fresh SME box.<br />
<br />
Set up a dedicated machine with SME 8.1 or SME 9.0 from CDROM. Use the following example settings:<br />
<ol><li>Domain name: athome.xx (use your existing domain name)</li><br />
<li>Host name: affabox (must not match to existing host)</li><br />
<li>IP address: 192.168.0.10 (must not match to existing IP address)</li><br />
<li>Subnet: 255.255.255.0</li><br />
<li>role: Server only</li><br />
<li>Gateway: 192.168.0.1 (use your existing gateway)</li><br />
<li>DHCP: deactivate</li><br />
<li>DNS server: 192.168.0.1 (use your existing DNS server)</li><br />
<li>activate changes: yes</li><br />
<li>The fresh server will then restart with the above settings.</li><br />
</ol><br />
<br />
To update your server, log in as user root on the local console. Use the 'top' command to display running processes. Wait until 'yum' has stopped running.<br />
yum clean all<br />
yum update<br />
The fresh server will be updated.<br />
signal-event post-upgrade<br />
signal-event reboot<br />
The server will reboot. Afterwards it is ready for installation of Affa 3.<br />
<br />
Login as root on local or remote SSH console of ''affabox''.<br />
<br />
= For SME8 =<br />
<br />
To install issue the following command:<br />
yum install smeserver-affa --enablerepo=smecontribs<br />
= For SME9 =<br />
<br />
yum install smeserver-affa --enablerepo=smecontribs<br />
<headertabs/><br />
<br />
The server again needs to be updated.<br />
signal-event post-upgrade<br />
signal-event reboot<br />
The server will reboot. Afterwards Affa 3 is ready for use.<br />
<br />
Now manually create the directory for archive files:<br />
mkdir /var/affa<br />
<br />
{{Warning box| Note that on SME9, a new SSH [https://wiki.contribs.org/AutoBlock AutoBlock] feature interferes with the Affa requirement for regular SSH logins! You might have to disable this feature to allow Affa to operate correctly. }}<br />
<br />
=== Creation of backup jobs ===<br />
Job configurations of Affa 3 are no longer stored in an e-smith style database. They are stored in configuration files located in ''/etc/affa'' . Create your configuration files in this directory, e.g.<br />
touch /etc/affa/backup-smeserver.conf<br />
where ''backup-smeserver'' is your jobname, and add content as described below.<br />
<br />
==== Quick start example ====<br />
You have a SME production server with hostname 'smeserver' and IP 192.168.0.2.<br><br />
You have a second SME box as your backup server with hostname 'affabox' and IP 192.168.0.10. <br />
<br />
Login to your 'affabox' as root and edit ''/etc/affa/backup-smeserver.conf'' . Using e.g. editor ''nano'', create the following example job configuration file for jobname ''backup-smeserver'' :<br />
[backup-smeserver]<br />
remoteHostName=192.168.0.2<br />
SMEServer=yes<br />
Watchdog=yes<br />
RPMCheck=yes<br />
ConnectionCheckTimeout=120<br />
Debug=no<br />
Description=Backup of 192.168.0.2 smeserver.athome.xx<br />
DiskSpaceWarn=strict<br />
RootDir=/var/affa<br />
TimeSchedule=0630<br />
localNice=15<br />
remoteNice=15<br />
rsync--inplace=yes<br />
rsyncCompress=no<br />
rsyncTimeout=900<br />
scheduledKeep=1<br />
dailyKeep=7<br />
weeklyKeep=4<br />
monthlyKeep=12<br />
yearlyKeep=1<br />
status=enabled<br />
<br />
If you have a Letsencrypt certificate on the production server, then you should also include<br />
Include=/etc/dehydrated<br />
<br />
Then save your job configuration file.<br />
<br />
Now check that your configuration is OK:<br />
affa --configcheck<br />
This should throw no errors. Now create and send the key files to your production SME server:<br />
affa --send-key backup-smeserver<br />
The following output should appear on the console:<br />
Job backup-smeserver: Generating DSA keys...<br />
Successfully created DSA key pair.<br />
root@192.168.0.2's password: <Enter password of 192.168.0.2/smeserver and hit ENTER><br />
Public key sent to 192.168.0.2<br />
Now run your job manually, both to test it and to let ssh record the remote host key:<br />
affa --run backup-smeserver<br />
The following output should appear on the console:<br />
The authenticity of host 'backup-smeserver (192.168.0.2)' can't be established.<br />
RSA key fingerprint is 3b..........65.<br />
Are you sure you want to continue connecting (yes/no)? <yes><br />
Now your backup job should run for the first time. Depending on the volume of the files, this takes quite a while.<br />
<br />
Once the job is done, check that the archive is available:<br />
affa --list-archives<br />
<br />
or run the job a second time:<br />
affa --run backup-smeserver<br />
Note that you won't be asked for the password again. Note as well that this second run of the job ''backup-smeserver'' should be considerably faster, because not all files are copied again: only the differences to the previous run are transferred, while unchanged files are stored as hard links to the existing file copies.<br />
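The space saving works through ordinary filesystem hard links: an unchanged file in a new archive is just another directory entry pointing at the same data. A minimal sketch of the principle (scratch paths only, not Affa's actual code):

```shell
# Illustrative only: share an unchanged file between two archive
# directories via a hard link, the mechanism Affa's archives rely on.
tmp=$(mktemp -d)
mkdir -p "$tmp/scheduled.1" "$tmp/scheduled.0"
echo "unchanged payload" > "$tmp/scheduled.1/file.txt"

# Hard-link the unchanged file into the newer archive instead of copying it
ln "$tmp/scheduled.1/file.txt" "$tmp/scheduled.0/file.txt"

# Both directory entries point at the same inode, so the data exists once
inode_prev=$(stat -c %i "$tmp/scheduled.1/file.txt")
inode_new=$(stat -c %i "$tmp/scheduled.0/file.txt")
links=$(stat -c %h "$tmp/scheduled.0/file.txt")
echo "prev=$inode_prev new=$inode_new links=$links"
rm -rf "$tmp"
```

On GNU/Linux, stat -c %h prints the link count; after the ln both names share one inode, so deleting an old archive never touches data still referenced by a newer one.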
<br />
Check that the second archive is available too:<br />
affa --list-archives<br />
<br />
From here you are able to work as with Affa 2. Modify your config file as required and described below. Automate the backup using the command ''affa --make-cronjobs'', see below.<br />
<br />
==== Job configuration properties ====<br />
'''Note 1:''' The default values shown in this table are the Affa program defaults and not to be confused with the preset values in the job configuration files, e.g. backup-smeserver.conf.<br />
<br />
'''Note 2:''' The complete documentation of the settings of Affa 3 is available as a [http://affa.sourceforge.net/AffaPdfMan.pdf PDF manual] and should be consulted.<br />
<br />
'''Note 3:''' Affa 3 for SME is a fork of [http://affa.sourceforge.net/ Affa 3 for CentOS]. It adds a few options back again which have been removed. Generally the configuration properties as described [http://wiki.nikoforge.org/Affa_Configuration/ here] do apply. Arnaud added the following functions for SME:<br />
<ul><br />
<li>The parameter “SMEServer”</li><br />
<li>The parameter and function “Watchdog”</li><br />
<li>The parameter and function “RPMCheck”</li><br />
<li>The functions “--rise” and “--undo-rise”</li><br />
</ul><br />
Consequently the list below reproduces the properties of the unforked Affa 3 version plus the additions above. For details refer to [https://www.guedel.eu/index.php/informatique/sme-server-8/affa-v3 Arnaud's website].<br />
{| border="1" cellpadding="3" cellspacing=0<br />
|-<br />
| '''Property''' || '''Value''' || '''Multivalue''' || '''Default''' || '''Description''' <br />
|-<br />
| remoteHostName <br />
| FQHN or IP<br />
| no<br />
| <none><br />
| FQHN or IP of the source host (mandatory)<br />
|-<br />
| remoteUser<br />
| account<br />
| no<br />
| root<br />
| The user account to be used with all ssh logins. May be case sensitive, e.g. Administrator account on Windows<br />
|-<br />
| Description<br />
| text string<br />
| no<br />
| <none><br />
| Any text that describes the job<br />
|-<br />
| TimeSchedule <br />
| HHMM<br />
| yes<br />
| 2230<br />
| The time the job will run. Use exactly 4 digits, no colon, no point. '''Important:''' Using the proper format HHMM is essential. Badly formatted TimeSchedule will cause strange Perl errors. Multiple runs can be achieved by repeating the line for each run time.<br />
|-<br />
| status <br />
| enabled ''or'' disabled<br />
| no<br />
| enabled<br />
| When set to disabled, no cron entries will be made. You can still run a job manually. <br />
|-<br />
| Include<br />
| full path<br />
| yes<br />
| <none><br />
| File(s) or directory(s) to be included in the backup. <br />
|-<br />
| Exclude<br />
| full path<br />
| yes<br />
| <none><br />
| File(s) or directory(s) to be excluded from the backup. <br />
|-<br />
| RootDir<br />
| full path<br />
| no<br />
| /var/affa<br />
| where to store the backup archives. Do not use /home/e-smith or /root as these are included in the backup and therefore the rise option will not work! Recommended: /var/affa<br />
|-<br />
| scheduledKeep<br />
| integer >= 1<br />
| no<br />
| 1<br />
| how many of the scheduled backups to be kept<br />
|-<br />
| dailyKeep<br />
| integer >= 0<br />
| no<br />
| 7<br />
| how many of the daily backups to be kept<br />
|-<br />
| weeklyKeep<br />
| integer >= 0<br />
| no<br />
| 4<br />
| how many of the weekly backups to be kept<br />
|-<br />
| monthlyKeep<br />
| integer >= 0<br />
| no<br />
| 12<br />
| how many of the monthly backups to be kept<br />
|-<br />
| yearlyKeep<br />
| integer >= 0<br />
| no<br />
| 2<br />
| how many of the yearly backups to be kept<br />
|-<br />
| EmailAddress<br />
| name@domain.com<br />
| yes<br />
| root<br />
| comma separated list of mail addresses, where the messages should be sent to<br>'''Note:''' By default Affa only sends messages on errors, never on success (see property chattyOnSuccess).<br />
|-<br />
| RetryAttempts<br />
| integer >= 0<br />
| no<br />
| 4<br />
| When set to a value>0, Affa re-runs a failed job RetryAttempts times with a delay of RetryAfter seconds.<br />
|-<br />
| RetryAfter<br />
| seconds >= 0<br />
| no<br />
| 900<br />
| When set to a value>0, Affa waits RetryAfter seconds before re-running the job after an error. Only applicable with RetryAttempts>0.<br />
|-<br />
| RetryNotification<br />
| yes ''or'' no<br />
| no<br />
| no<br />
| When set to 'no', Affa does not send an error message after each failed attempt when RetryAttempts is configured; an error message is only sent when the last attempt has failed.<br />
|-<br />
| NRPEtrigger<br />
| hours<br />
| no<br />
| 24<br />
| NRPE reports a job as critical when the last successful run is older than NRPEtrigger hours. To exclude the job from monitoring set it to a value < 0.<br />
|-<br />
| SambaShare<br />
| yes ''or'' no<br />
| no<br />
| no<br />
| Access to the job archives via CIFS protocol.<br />
|-<br />
| SambaValidUser<br />
| local account<br />
| yes<br />
| affa<br />
| User who has permission to access the job archives via the CIFS protocol.<br />
|-<br />
| preJobCommand<br> preJobCommandRemote<br />
| path relative to /etc/affa/scripts/<br />
| yes<br />
| <none><br />
| Scripts to be executed before a job run. The job name and type (scheduled, daily etc.) are passed as arguments to the program. The preJobCommandRemote scripts are copied to the remote server and then executed there. The scripts are executed in alphabetical order. Use a numerical prefix if you need a specific order e.g. 01scriptB, 02remotescriptA, 03scriptA. Use the included prescript-sample.pl and prescriptRemote-sample.pl as a starting point for your own scripts.<br>'''Note:''' If you use the parameter “SMEServer=yes”, you may delete the Include entries for the SME default directories in the job configuration: they will be backed up automatically.<br />
|-<br />
| postJobCommand<br> postJobCommandRemote<br />
| path relative to /etc/affa/scripts/<br />
| yes<br />
| <none><br />
| Scripts to be executed after a job run. The job name, the type (scheduled, daily etc.) and the exit code of the job run are passed as arguments to the program. The postJobCommandRemote scripts are copied to the remote server and then executed there. The scripts are executed in alphabetical order. Use a numerical prefix if you need a specific order e.g. 01scriptB, 02remotescriptA, 03scriptA. Use the included postscript-sample.pl and postscriptRemote-sample.pl as a starting point for your own scripts.<br>'''Note:''' If you use the parameter “SMEServer=yes”, you may delete the Include entries for the SME default directories in the job configuration: they will be backed up automatically.<br />
|-<br />
| dedup<br />
| yes ''or'' no<br />
| no<br />
| no<br />
| The purpose of deduplication is to remove duplicate files to save backup space. When set to 'yes', file deduplication is run after the synchronization has completed. It looks for files that have identical content, user, group and permissions and replaces duplicates with hardlinks. Deduplication scans the just completed archive and the previous one, usually scheduled.0 and daily.0, or scheduled.0 and scheduled.1. Consider this scenario: a user has renamed directories or files. Rsync sees those as new ones and copies them. Deduplication finds the identical copies in the previous archive and replaces them with hardlinks. To use deduplication the Freedup program needs to be installed. Affa actually runs freedup -upg scheduled.0 <previous_archive>.<br />
|-<br />
| dedupKill<br />
| yes ''or'' no<br />
| no<br />
| no<br />
| When set to 'no' the job ignores affa --kill or affa --killall while deduplication is running. This is useful in the context of killAt, which is typically used to stop bandwidth utilisation.<br />
|-<br />
| sshPort<br />
| service port <br />
| no<br />
| 22<br />
| When sshd on the remote host listens on a non-standard port, set the port here.<br />
|-<br />
| ConnectionCheckTimeout<br />
| seconds<br />
| no<br />
| 120<br />
| Affa checks the ssh connection before the rsync process is started and exits with an error after the configured time if the host did not respond.<br />
|- <br />
| BandwidthLimit<br />
| integer>=0 kilobytes per second<br />
| no<br />
| 0<br />
| Limits the data transfer rate. A value of zero specifies no limit.<br />
|- <br />
| rsyncTimeout<br />
| seconds<br />
| no<br />
| 900<br />
| Rsync exits after the configured time if no data was transferred. This avoids infinitely hanging in case of a network error.<br />
|- <br />
| rsyncCompress<br />
| yes ''or'' no <br />
| no<br />
| yes<br />
| Compress the transferred data. May be useful with slow internet connections. Increases CPU load on remote and backup host.<br />
|- <br />
| rsync--inplace<br />
| yes ''or'' no<br />
| no<br />
| yes <br />
| Set to no if the rsync version on the remote host does not support this option.<br />
|-<br />
| rsync--modify-window<br />
| integer >= 0 <br />
| no<br />
| 0<br />
| When comparing two timestamps, rsync treats the timestamps as being equal if they differ by no more than the modify-window value. This is normally 0 for an exact match. A value > 0 is useful if you can't get the clocks of the remote host and the Affa server in sync.<br />
|-<br />
| rsyncOptions<br />
| string<br />
| no<br />
| <none><br />
| Additional option string to be passed to rsync<br />
|-<br />
| localNice <br />
| -19...+19<br />
| no<br />
| 0<br />
| Run the local rsync process niced. <br />
|-<br />
| remoteNice<br />
| -19...+19<br />
| no<br />
| 0<br />
| Run the rsync process on the source host niced. <br />
|-<br />
| killAt<br />
| HHMM<br />
| no<br />
| <none><br />
| The time at which a job will be killed if it was still running. You can use it for example to kill jobs that are running over the internet early in the morning so that your users have the full bandwidth available during office hours. <br />
|-<br />
| resumeKilledAt<br />
| HHMM<br />
| no<br />
| <none><br />
| The time at which a killed job will be resumed. This allows you to start earlier in the evening than the scheduled time.<br />
|-<br />
| chattyOnSuccess<br />
| integer >= 0<br />
| no<br />
| 0 <br />
| When set to a value>0, Affa sends a message on a successfully completed job run and decrements the chattyOnSuccess value. When the value has reached zero, Affa falls back to the default and only sends messages on errors.<br />
|-<br />
| AutomountDevice<br>AutomountPoint<br />
| full path<br />
| no<br />
| <none><br />
| Device and mountpoint of backup device (e.g. USB disk). Device is automounted before a job starts and unmounted after job completion. With both properties empty no automount is done.<br />
|-<br />
| AutomountOptions<br />
| string<br />
| no<br />
| <none><br />
| An option string passed to the mount command.<br />
|-<br />
| AutoUnmount <br />
| yes ''or'' no<br />
| no<br />
| yes<br />
| When set to 'no' the automounted device stays mounted after the Affa run.<br />
|-<br />
| Debug <br />
| yes ''or'' no<br />
| no<br />
| no<br />
| Set to yes to increase log verbosity.<br />
|-<br />
| remoteRsyncBinary <br />
| full path<br />
| no<br />
| /usr/bin/rsync<br />
| If the rsync program on the remote server is located in non-standard location, set it here<br />
|-<br />
| remoteNiceBinary <br />
| full path<br />
| no<br />
| /bin/nice<br />
| If the nice program on the remote server is located in non-standard location, set it here.<br />
|-<br />
| localRsyncBinary <br />
| full path<br />
| no<br />
| /usr/bin/rsync<br />
| If the local rsync program is located in non-standard location, set it here<br />
|-<br />
| localNiceBinary <br />
| full path<br />
| no<br />
| /bin/nice<br />
| If the nice local program is located in non-standard location, set it here.<br />
|-<br />
| RemoteAuthorizedKeysFile <br />
| path relative to remote user's home or full path<br />
| no<br />
| .ssh/authorized_keys2<br />
| If the remote host stores the authorized keys file in a non-standard location, set it here.<br />
|-<br />
| rsyncdMode<br />
| yes ''or'' no<br />
| no<br />
| no<br />
| Set to yes to connect to the rsync daemon on the remote host (instead of running rsync over ssh).<br />
|-<br />
| rsyncdModule<br />
| string<br />
| no<br />
| AFFA<br />
| The rsyncd module name (only applicable with rsyncdMode=yes).<br />
|-<br />
| rsyncdUser<br />
| string<br />
| no<br />
| affa<br />
| The username for authentication to the rsync daemon (only applicable with rsyncdMode=yes).<br />
|-<br />
| rsyncdPassword<br />
| string<br />
| no<br />
| <none><br />
| The password for authentication to the rsync daemon (only applicable with rsyncdMode=yes).<br />
<br />
|-<br />
| globalStatus<br />
| enabled ''or'' disabled ''or'' jobs<br />
| no<br />
| jobs<br />
| Set to enabled or disabled to override the status settings in all job sections. When set to the value jobs, the status settings in the job sections are effective.<br>'''Note:''' This property is allowed only in the [GlobalAffaConfig] section. <br />
|-<br />
| sendStatus<br />
| daily ''or'' weekly ''or'' monthly ''or'' never<br />
| no<br />
| weekly<br />
| Defines how frequently the status will be sent.<br>'''Note:''' This property is allowed only in the [GlobalAffaConfig] section. <br />
|-<br />
| SMEServer<br />
| yes ''or'' no<br />
| no<br />
| no<br />
| When set to yes, the default e-smith directories are automatically included and the property RPMCheck=yes can be used.<br>'''Note:''' this property is not implemented in Affa 3 for CentOS. It has been specifically added to the fork for SME.<br />
|-<br />
| RPMCheck<br />
| yes ''or'' no<br />
| no<br />
| no<br />
| Only applicable to jobs that back up a SME server. Compares the installed packages of the source host with those of this Affa backup host. Sends a message with a diff list if they are not in sync. This check is useful if you want to have the option to rise the backup server to a production server from a backup.<br>'''Note:''' this property is not implemented in Affa 3 for CentOS. It has been specifically added to the fork for SME.<br />
|-<br />
| Watchdog<br />
| yes ''or'' no<br />
| no<br />
| yes<br />
| Only applicable to jobs that back up a SME server. When a job is started, Affa installs a watchdog script on the source in /etc/cron.d/, which sends a warning message if the next scheduled job (taken from the TimeSchedule property + 10 minutes) did not run. This guarantees that you will be notified even in case of an Affa server outage. The watchdog script sends a daily reminder message if the error continues. The next job run replaces the watchdog script with a new trigger time.<br>'''Note:''' this property is not implemented in Affa 3 for CentOS. It has been specifically added to the fork for SME.<br />
|}<br />
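To illustrate the multi-value properties above, here is a sketch of a job section combining repeated TimeSchedule lines with Include/Exclude entries and a kill/resume window (the hostname, paths and mail address are examples only, not defaults):

```ini
[backup-example]
remoteHostName=192.168.0.20
SMEServer=yes
# two runs per day: repeat the TimeSchedule line once per run time
TimeSchedule=0130
TimeSchedule=1230
# multi-value Include/Exclude: one full path per line
Include=/etc/dehydrated
Exclude=/home/e-smith/files/ibays/samples
# free the bandwidth during office hours, resume in the evening
killAt=0700
resumeKilledAt=1900
EmailAddress=admin@example.com
```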
<br />
==== Default configuration properties ====<br />
For all 'name=value' properties, defaults can be set in the [GlobalAffaConfig] section; they are used in every job configuration where the corresponding property is omitted. This is useful when you set up many similar jobs. <br />
<br />
Example: You want to set the property 'localNice' to 19 for all jobs. Then add the following section in one of your configuration files:<br />
[GlobalAffaConfig]<br />
localNice=19<br />
Consequently you don't need to set this property for your individual jobs. Properties set in the job record override the defaults.<br />
<br />
==== Properties for global Settings ====<br />
The following special properties are only applicable to the [GlobalAffaConfig] section:<br />
<ul><br />
<li>sendStatus</li><br />
<li>globalStatus</li><br />
</ul><br />
<br />
All jobs can be disabled for execution through 'cronjobs' by setting 'globalStatus' to 'disabled'.<br />
[GlobalAffaConfig]<br />
globalStatus=disabled<br />
<br />
To re-enable the runs, either delete the above configuration line, or set:<br />
globalStatus=jobs<br />
which is the default value.<br />
<br />
You can also set the value to<br />
globalStatus=enabled<br />
which enforces job execution through 'cronjobs' and overrides the status values specified in the individual job sections.<br />
<br />
=== Usage and command line options ===<br />
{{Note box|Options can be abbreviated to uniqueness, e.g. --mak is equal to --make-cronjobs}}<br />
'''affa --run JOB'''<br />
Starts a job run. Usually done by the cronjob. <br />
<br />
<br />
'''affa --make-cronjobs'''<br />
Configures the cronjobs as scheduled in the job records. Run this command to make changes to time-related properties (TimeSchedule, killAt and resumeKilledAt) effective immediately. Every 15 minutes Affa checks whether a configuration file is newer than the cronjob file and runs this command automatically, so you can also simply wait at most 15 minutes for updated job configurations to become effective.<br />
<br />
'''affa --configcheck'''<br />
Checks the syntax and values in all configuration files found in /etc/affa/. Run this command after modifying the configuration. Lower/upper case errors in property names are corrected automatically.<br />
<br />
'''affa --send-key [JOB JOB ...]'''<br />
This first generates the DSA key pair on the Affa server, if not already done. Then the public key is sent to the host 'remoteHostName' as defined in the section of each job JOB, and the job-specific ssh known-hosts entry is generated.<br />
{{Note box|When initially doing this step, you will need to temporarily enable "Allow secure shell access using standard passwords" on the source server.}}<br />
<br />
'''affa --check-connections [JOB JOB ...]'''<br />
Checks the ssh login and if applicable rsyncd auth for all jobs given as arguments. Without any arguments all jobs are checked.<br />
<br />
'''affa --full-restore [--preserve-newer=no] [--delete=yes] JOB [ARCHIVE]'''<br />
Does a full restore on the remote server of all backed-up files and directories from the backup ARCHIVE. If ARCHIVE is not given, the archive 'scheduled.0' is used as the default. The full restore reconstructs the server as it was at the time of the backup. After the restore the source host reboots.<br />
<br />
With the default --preserve-newer=yes, files on the remote server with a modification time newer than in the backup are not overwritten; pass --preserve-newer=no to overwrite them as well.<br />
<br />
With option --delete=yes all files on the remote server, which are not in the backup, are deleted.<br />
<br />
If the parameter “SMEServer=yes” is set, signal-event pre-restore and signal-event post-upgrade will be run automatically.<br />
<br />
'''affa --list-archives JOB'''<br />
Displays a table of all present archives of job JOB with date, number of files, size and bytes received. While column buTime shows the actual rsync time, the column ddTime shows the length of the subsequent deduplication run. ddYld is the gained deduplication yield in bytes.<br />
<br />
<small><pre><br />
Affa version 3.1.0-0 on affa-2.mydomain.com<br />
+------------------------------------------------------------------------------+<br />
| Job: fshare-ak |<br />
| Description: Fileserver AK |<br />
| Directory: /var/affa/fshare-ak/ |<br />
| Hostname: 10.204.104.4 |<br />
| Email: sysadmin@mydomain.com |<br />
+-----+----------------------+--------+--------+-------+-------+-------+-------+<br />
| Run | Completion date | buTime | ddTime | ddYld | Files | Size | Recvd |<br />
+-----+----------------------+--------+--------+-------+-------+-------+-------+<br />
| Y 0 | Wed 2010-06-23 20:26 | 11m53s | - | - | 412k | 143G | 470M |<br />
+-----+----------------------+--------+--------+-------+-------+-------+-------+<br />
| M11 | Sun 2010-08-29 20:22 | 7m50s | - | - | 417k | 153G | 14M |<br />
| M10 | Sun 2010-09-26 20:23 | 8m57s | - | - | 430k | 156G | 14M |<br />
| M 9 | Sun 2010-10-31 20:25 | 10m05s | - | - | 448k | 161G | 15M |<br />
| M 8 | Sun 2010-11-28 20:31 | 16m42s | - | - | 463k | 167G | 17M |<br />
| M 7 | Sun 2011-01-02 20:48 | 33m07s | - | - | 486k | 173G | 1.1G |<br />
| M 6 | Sun 2011-01-30 20:33 | 18m54s | - | - | 493k | 176G | 24M |<br />
| M 5 | Sun 2011-02-27 20:28 | 13m31s | - | - | 490k | 176G | 19M |<br />
| M 4 | Sun 2011-03-27 20:28 | 13m08s | - | - | 491k | 175G | 17M |<br />
| M 3 | Sun 2011-05-01 20:30 | 15m41s | - | - | 493k | 179G | 18M |<br />
| M 2 | Sun 2011-05-29 20:28 | 13m40s | - | - | 494k | 182G | 31M |<br />
| M 1 | Sun 2011-06-19 20:26 | 11m20s | - | - | 493k | 183G | 17M |<br />
| M 0 | Sun 2011-07-03 20:28 | 13m46s | - | - | 496k | 183G | 18M |<br />
+-----+----------------------+--------+--------+-------+-------+-------+-------+<br />
| W 3 | Sun 2011-07-10 20:28 | 13m38s | - | - | 479k | 180G | 16M |<br />
| W 2 | Mon 2011-07-11 20:35 | 20m27s | - | - | 479k | 180G | 301M |<br />
| W 1 | Fri 2011-07-15 20:30 | 15m53s | - | - | 480k | 180G | 62M |<br />
| W 0 | Sun 2011-07-24 19:39 | 9m25s | - | - | 482k | 181G | 16M |<br />
+-----+----------------------+--------+--------+-------+-------+-------+-------+<br />
| D 6 | Thu 2011-07-28 19:42 | 12m22s | - | - | 483k | 182G | 176M |<br />
| D 5 | Fri 2011-07-29 23:29 | 11m10s | 5h33m | 45G | 483k | 182G | 16M |<br />
| D 4 | Sat 2011-07-30 19:53 | 23m26s | 2h30m | 8.3G | 483k | 182G | 17M |<br />
| D 3 | Sun 2011-07-31 20:07 | 37m31s | 4m47s | 8.3G | 483k | 182G | 17M |<br />
| D 2 | Mon 2011-08-01 20:44 | 1h14m | 7h50m | 8.5G | 484k | 182G | 630M |<br />
| D 1 | Tue 2011-08-02 20:02 | 32m28s | 12h20m | 8.3G | 484k | 182G | 74M |<br />
| D 0 | Wed 2011-08-03 19:58 | 28m46s | 11h01m | 8.5G | 484k | 182G | 214M |<br />
+-----+----------------------+--------+--------+-------+-------+-------+-------+<br />
| S 0 | Thu 2011-08-04 20:00 | 30m28s | 11h52m | 8.5G | 484k | 182G | 203M |<br />
+-----+----------------------+--------+--------+-------+-------+-------+-------+<br />
</pre></small><br />
<br />
'''affa --list-archives [--csv] JOB'''<br />
With --csv, the output is in a machine-readable semicolon-separated format.<br />
<small><pre><br />
Archive:Count;Date;Files;Size;RootDirFilesystemAvail;RootDirFilesystemUsed;valid;TotalBytesReceived;ExecutionTime;DedupTotalFiles;DedupReplacedFiles;DedupSavedBytes;DedupExectime;DedupDate<br />
monthly;00000;201509270631;393237;37252509103;166889260;47876936;yes;1327370;1160638358;99;;;;;<br />
weekly;00003;201510040531;390273;37042612135;164566692;50199504;yes;1117726;45430532;59;;;;;<br />
weekly;00002;201510110531;395553;37299589800;162612204;52153992;yes;1117092;92369237;64;;;;;<br />
weekly;00001;201510180531;403831;37707599172;161855844;52910352;yes;1317083;148317764;64;;;;;<br />
weekly;00000;201510250531;406509;38146200127;161041156;53725040;yes;1104759;64083948;63;;;;;<br />
daily;00006;201510310531;415663;39248862982;162979176;51787020;yes;1958110;198060283;66;;;;;<br />
daily;00005;201511010531;411014;38563389171;163061136;51705060;yes;1156506;48903675;75;;;;;<br />
daily;00004;201511020531;408910;38612857040;162359124;52407072;yes;1007434;210812387;75;;;;;<br />
daily;00003;201511030531;410615;37815861577;162026060;52740136;yes;1524069;162697515;67;;;;;<br />
daily;00002;201511040531;414784;38131734213;161990024;52776172;yes;1571657;134250735;63;;;;;<br />
daily;00001;201511050531;420780;38433895988;161906580;52859616;yes;2021507;149388808;103;;;;;<br />
daily;00000;201511060531;425920;38677614350;161778812;52987384;yes;1987971;166510621;71;;;;;<br />
scheduled;00000;201511070531;425663;38815712018;161612600;53153596;yes;2003540;139603231;68;;;;;<br />
</pre></small><br />
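Because the fields are semicolon-separated, the records are easy to post-process with standard tools. A small sketch using awk on one of the sample records above (field positions taken from the header line: field 1 is the archive type, 3 the completion date, 4 the number of files):

```shell
# Parse one sample record from the --csv output shown above
line='scheduled;00000;201511070531;425663;38815712018;161612600;53153596;yes;2003540;139603231;68;;;;;'
atype=$(echo "$line" | awk -F';' '{print $1}')   # archive type
adate=$(echo "$line" | awk -F';' '{print $3}')   # completion date YYYYMMDDHHMM
afiles=$(echo "$line" | awk -F';' '{print $4}')  # number of files
echo "$atype $adate $afiles"
```

In a monitoring script you would feed the real output of affa --list-archives --csv JOB into the same awk pipeline instead of the hard-coded sample line.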
<br />
'''affa --status [--csv]'''<br />
Displays a table of all configured jobs with enable status, time of last and next run, size of the most recent archive, execution time of the last run and the number of scheduled (S), daily (D), weekly (W), monthly (M) and yearly (Y) archives. If column 'Last' shows 'failed', the job did not run in the last 24h. For disabled jobs 'Last' always shows 'failed' after 24h. To see the date and time of the last run of those jobs use the --list-archives option. Column 'Next' shows the time when the next run will be started, if 'Enabled' is 'yes'. Column 'ddYld' shows the deduplication yield in bytes.<br />
<br />
<small><pre><br />
Affa version 3.1.0-0 on affa-2.mydomain.com<br />
+--------------+-----+-------+--------+-------+-------+-------+----------------+<br />
| Job | ENA | Last | Time | Next | Size | ddYld | N of S,D,W,M,Y |<br />
+--------------+-----+-------+--------+-------+-------+-------+----------------+<br />
| ads-ak | yes | 20:20 | 0m29s | 20:20 | 28M | - | 1, 7, 2, 0, 0 |<br />
| ak-user-ma.. | yes | 22:31 | 1m51s | 22:30 | 170M | - | 3, 7, 4, 6, 0 |<br />
| azubi-1 | yes | 03:39 | 3h24m | 00:15 | 44G | 1.2G | 1, 7, 4,10, 1 |<br />
| dcpant | yes | 03:06 | 1m03s | 03:05 | 1.3G | - | 1, 7, 4,10, 1 |<br />
| eshare | yes | 21:06 | 21m01s | 20:45 | 24G | 178M | 1, 7, 4,10, 1 |<br />
| etherpad | yes | 20:40 | 0m13s | 20:40 | 3.1M | - | 1, 7, 4, 2, 0 |<br />
| fazubi | yes | 08:16 | 11h16m | 21:00 | 132G | 3.3G | 1, 7, 4,10, 1 |<br />
| fschare-rh | yes | de-duplicating (pid 9719) | 1, 7, 4,10, 1 |<br />
| fsh-02 | yes | 07:53 | 12h23m | 19:30 | 182G | 8.5G | 1, 7, 4,10, 1 |<br />
| fshare-ak2 | yes | 00:30 | 0m26s | 00:30 | 415M | 544k | 1, 7, 4,10, 1 |<br />
| helpdesk | yes | 21:27 | 2m16s | 21:25 | 895M | 138k | 1, 7, 4,10, 0 |<br />
| it-share | yes | running rsync (pid 9744) | 1, 7, 1, 0, 0 |<br />
| lightroom | yes | waiting (pid 9528) | 1, 7, 4, 2, 0 |<br />
| localhost | yes | 02:15 | 0m08s | 02:15 | 395k | - | 1, 7, 2, 0, 0 |<br />
| mediawiki | yes | 16:10 | 0m06s | 19:40 | 1.1G | 6.7M | 3, 7, 4,10, 1 |<br />
| mshare | yes | 00:48 | 1h33m | 23:15 | 18G | 2.4G | 1, 7, 4,10, 1 |<br />
| wshare | yes | 00:34 | 1h49m | 22:45 | 23G | 484M | 1, 7, 4,10, 1 |<br />
+--------------+-----+-------+--------+-------+-------+-------+----------------+<br />
2 disabled jobs not listed. Use --all to display.<br />
</pre></small><br />
With --csv, the output is in a machine-readable semicolon-separated format.<br />
<br />
'''affa --show-config-pathes [--csv] [JOB JOB ...]'''<br />
Prints the full paths of the files where the sections of the given jobs are defined.<br />
<br />
'''affa --show-default-config'''<br />
Prints a list of all allowed property names with their default values. These values are used, when omitted in the job sections.<br />
<br />
'''affa --show-schedule [--all]'''<br />
Prints a 'graphical' timetable for all enabled jobs. The resolution is 30 minutes. An 'S' character marks the scheduled start times. The duration of the job runs are marked with '=' characters.<br />
<small><pre><br />
Affa version 3.1.0-0 on affa-2.mydomain.com<br />
TIME 12:00 16:00 20:00 0:00 4:00 8:00<br />
fsh-02 -------- -------S ~~~~~~~~ ~~~~~~~~ ~~~~~~~~ --------<br />
ads-ak -------- -------- S------- -------- -------- --------<br />
etherpad -------- -------- -S------ -------- -------- --------<br />
eshare -------- -------- -S~----- -------- -------- --------<br />
fazubi -------- -------- --S~~~~~ ~~~~~~~~ ~~~~~~~~ ~-------<br />
helpdesk -------- -------- --S----- -------- -------- --------<br />
it-share -------- -------- ---S=~~~ ~~~~~~~~ ~~~~~~~- -------- busy<br />
wshare -------- -------- -----S~~ ~~------ -------- --------<br />
ak-user-management -------- -------- -----S-- -------- -------- --------<br />
fschare-rh -------- -------- -----S=- -------- --~----- -------- busy<br />
mshare -------- -------- ------S= ~~------ -------- --------<br />
azubi-1 -------- -------- -------- S=~~~~~~ -------- --------<br />
fshare-ak2 -------- -------- -------- -S------ -------- --------<br />
mediawiki -------- -------- -------- ---S---- -------- ----S---<br />
localhost -------- -------- -------- ----S--- -------- --------<br />
dcpant -------- -------- -------- ------S- -------- --------<br />
lightroom -------- -------- -------- -------- -------- S------- busy<br />
Symbols: S=scheduled K=kill R=resume '='=rsync '~'=dedup<br />
2 disabled jobs not listed. Use --all to display.<br />
</pre></small><br />
<br />
'''affa --log-tail [JOB]'''<br />
Displays the tail of the logfile of job JOB with live update. This command is identical to tail -n 50 -f /path/to/logfile. Without the JOB argument the global logfile is shown.<br />
<br />
'''affa --send-status'''<br />
Sends the status table, the disk-usage and the archive list of all jobs to the email addresses configured in the [GlobalAffaConfig] section.<br />
<br />
'''affa --disk-usage'''<br />
Shows the current disk usage of all root dir filesystems<br />
<small><pre><br />
Affa version 3.1.0-0 on affa-2.mydomain.de<br />
+------+--------+--------+----------------------------------------------------+<br />
| Use% | Used | Avail | Root Dir |<br />
+------+--------+--------+----------------------------------------------------+<br />
| 71% | 938GB | 365GB | /var/affa |<br />
| 20% | 194GB | 759GB | /mnt/affadev |<br />
+------+--------+--------+----------------------------------------------------+<br />
</pre></small><br />
With --csv, the output is printed in a machine-readable, colon-separated format.<br />
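The exact CSV field order isn't documented here, so the sketch below assumes the fields follow the table's column order (use%, used, avail, root dir), colon-separated. A small shell snippet could then flag nearly full root dirs:<br />

```shell
# Hypothetical parser for one line of `affa --disk-usage --csv` output.
# The field order (use%:used:avail:rootdir) is an assumption based on the table.
line="71:938GB:365GB:/var/affa"        # sample line matching the table above
usage=${line%%:*}                      # first field: percentage used
rootdir=${line##*:}                    # last field: root dir path
if [ "$usage" -ge 90 ]; then
    echo "WARNING: $rootdir is ${usage}% full"
else
    echo "OK: $rootdir at ${usage}% usage"
fi
```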
<br />
'''affa --cleanup JOB'''<br />
After you have lowered a keep value, e.g. scheduledKeep, archives with higher indices will no longer be shifted and would otherwise exist forever. This option finds these archives and deletes them after confirmation.<br />
<br />
'''affa --rename-job JOB NEWNAME'''<br />
Renames the job JOB to NEWNAME including the section name and archive directories.<br />
<br />
'''affa --move-archive JOB NEWROOTDIR'''<br />
Moves the archive directory of job JOB to the rootdir NEWROOTDIR and adjusts the value of property RootDir. NEWROOTDIR must be a full path starting with a slash. As moving across filesystems (e.g. from an external USB drive to the local disk) is not possible, Affa uses a copy command in this case and deletes the source directory after that. Depending on the archive size, copying across filesystems can take a long time.<br />
<br />
'''affa --delete-job [--revoke-key] JOB'''<br />
Irreversibly deletes all archives of job JOB and sets the status property to disabled. With the --revoke-key option, the public key on the remote server is also deleted.<br />
<br />
'''affa --revoke-key JOB'''<br />
Deletes the public dsa key on the remote server.<br />
<br />
'''affa --kill JOB'''<br />
Terminates the running job JOB and all its child processes (rsync processes).<br />
<br />
'''affa --killall'''<br />
Terminates all running jobs.<br />
<br />
'''affa --mailtest JOB'''<br />
Sends a test email to the email addresses configured in the JOB section. Use this to verify that your mail system is working.<br />
{{Note box|By default Affa only sends messages on errors, never on success (see property chattyOnSuccess).}}<br />
<br />
'''affa --nrpe [JOB JOB ...]'''<br />
Checks for jobs that have not run within the last NRPEtrigger hours and reports them as failed. A Nagios/Icinga NRPE-compliant message is printed and the exit status returned.<br />
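On the monitoring side this could be wired up with an NRPE command definition like the following; the command name and binary path are assumptions for illustration, not part of the Affa documentation, and permissions (NRPE usually runs as an unprivileged user) may need attention:<br />

```
# in /etc/nagios/nrpe.cfg on the Affa server (hypothetical example)
command[check_affa]=/usr/bin/affa --nrpe
```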
<br />
'''affa --version'''<br />
Displays the Affa version number and checks Samba and Freedup installation.<br />
<br />
'''affa --warranty'''<br />
Displays the disclaimer of liability.<br />
<br />
'''affa --license'''<br />
Displays the license Affa is released under.<br />
<br />
'''affa --help'''<br />
Displays a short help.<br />
<br />
'''affa --debug'''<br />
Enables verbose logging. Overrides job and global configurations.<br />
<br />
==== SME Specific usage and command line options ====<br />
<br />
These commands are implemented in a fork of Affa 3 project for use with SME. They are not part of the Affa 3 for CentOS project.<br />
<br />
'''affa --rise [--all] JOB [ARCHIVE]'''<br />
Runs a full restore on the Affa server <b>(!!!)</b> of all standard files and directories from the backup ARCHIVE of job JOB. In other words: after completion, the Affa box reboots as a clone of the source server. Ensure that the source server has been powered off before you reboot the Affa box; otherwise the network interface will not come up. This is important when you run --rise remotely. The --rise feature only works with SME servers and should only be used on dedicated backup servers.<br />
<br />
With the --all option, all files and directories of the archive as defined by the include[] properties are restored. Files or directories with the same name on the Affa server will be overwritten and cannot be restored by an undo-rise. This should not be an issue on a dedicated Affa server which does not hold any other data. After a possible undo-rise, those additionally restored data must be removed manually.<br />
<br />
Please note that the rise process backs up the Affa server itself before doing the restore from the archive. This backup is used by a possible undo-rise run to restore the Affa server. Only the standard files and directories are backed up. Data in non-standard locations (like /opt) are untouched and will still exist after the rise run '''if they don't get overwritten by data from the backup''' (e.g. if the backup contains /opt too).<br />
<br />
{{Note box|The above command is not implemented in Affa 3 for CentOS. It has been specifically added to the fork for SME.}}<br />
<br />
'''affa --undo-rise'''<br />
This feature reverts a risen Affa box to a backup server. After a reboot, all configured jobs '''based on standard files and directories''' will work again.<br />
<br />
{{Note box|The above command is not implemented in Affa 3 for CentOS. It has been specifically added to the fork for SME.}}<br />
<br />
=== Example setups ===<br />
==== Dedicated backup server ====<br />
* Setup a dedicated server and install Affa 3 for SME.<br />
* Setup a job for every server you want to backup.<br />
* Send the public keys to every server.<br />
affa --send-key JOBNAME<br />
* Check whether password-less logins are working.<br />
affa --check-connections<br />
* Check whether the scheduled jobs are evenly distributed over the day.<br />
affa --show-schedule<br />
* Create the cron jobs.<br />
affa --make-cronjobs<br />
* Check the status after 24 hours.<br />
affa --status<br />
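The checklist above can be sketched as one shell sequence. The job names are examples, and DRYRUN=echo is a review convention (not an Affa feature) that prints each command instead of executing it; set it to an empty string to run for real:<br />

```shell
# Review-mode sketch of the dedicated-backup-server setup steps above.
DRYRUN=echo                              # set DRYRUN= (empty) to really run
for job in mailserver fileserver; do     # example job names
    $DRYRUN affa --send-key "$job"       # push the public key to each source
done
$DRYRUN affa --check-connections         # verify password-less ssh logins work
$DRYRUN affa --show-schedule             # confirm jobs are spread over the day
$DRYRUN affa --make-cronjobs             # write the cron entries
```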
<br />
==== Backup of single ibays ====<br />
Suppose you want to back up the ibays 'staff1' and 'staff2' on your production server with WAN IP 82.123.1.1 to an Affa server in a different location, over the internet, every night at 2:30am.<br />
* Log into the Affa server.<br />
* Create a dedicated job file ''/etc/affa/ibay-staff.conf'' . Edit it and set<br />
[ibay-staff]<br />
Description=Backup of ibay-staff on 82.123.1.1<br />
remoteHostName=82.123.1.1<br />
TimeSchedule=0230<br />
SMEServer=no<br />
Include=/home/e-smith/files/ibays/staff1<br />
Include=/home/e-smith/files/ibays/staff2<br />
* Save the configuration<br />
* send the public key to the production server<br />
affa --send-key ibay-staff<br />
* check next morning<br />
affa --list-archives ibay-staff<br />
affa --status<br />
ls /var/affa/ibay-staff<br />
<br />
==== Two production servers backup each other ====<br />
You have two sites connected via a VPN and an SME Server running on each site. In that case you don't need a dedicated Affa backup server. Both production servers can additionally act as Affa backup servers, backing up the server of the other site. Simply install Affa and configure a job that backs up the other one. You can use all Affa features except the rise feature.<br />
<br />
With the rise feature, the server would become a clone of the backed-up system, which is less useful in this scenario: it would give you a running copy of the other site's server while this site's server is down.<br />
<br />
To get redundancy and a faster restore you can configure a local backup to an external USB or NAS device.<br />
<br />
Take care that you do not back up the archives to the other site in turn. Set the RootDir property to a path which is not included in the SME default backup list. Leaving the RootDir property at its default '/var/affa' guarantees this.<br />
<br />
==== Use Affa to backup to a NFS-mounted NAS or a local attached USB drive ====<br />
<br />
{{Note box|This chapter still needs to be checked whether it works with Affa 3 for SME.}}<br />
<br />
You want to back up your SME production server with hostname 'prodbox' and IP 10.200.48.1 to a mounted filesystem instead of setting up a dedicated Affa box.<br />
<br />
===== Setup NAS =====<br />
You have a [http://www.freenas.org FreeNAS] box with IP 10.200.48.2 up and running with NFS service enabled. The disk is mounted to /mnt/affashare. You have authorized the IP address of your prodbox server to access share /mnt/affashare.<br />
* log into the 'prodbox' and install the NFS packages<br />
/usr/bin/yum install --enablerepo=smecontribs smeserver-nfs<br />
* now enable and start the portmapper service<br />
config setprop portmap status enabled<br />
service portmap start<br />
<br />
* mount the NFS share<br />
mkdir -p /mnt/affadevice<br />
mount 10.200.48.2:/mnt/affashare /mnt/affadevice<br />
<br />
<br />
===== Alternatively setup a USB drive =====<br />
* log into the 'prodbox' <br />
* connect a USB hard disk to the USB bus. Now you must determine which device name the kernel has assigned to the drive. View /var/log/messages and search for ''Initializing USB Mass Storage driver''. A few lines below you'll find the name of the device. In this example it is ''sdh''. Replace ''/dev/sdh'' with ''your device'' in the following instructions.<br />
* use the fdisk program to create a linux partition. '''Verify that this is really the attached USB drive before you continue!'''<br />
<br />
fdisk /dev/sdh<br />
You'll most likely find an existing vfat DOS partition, which you have to delete first. In the following we assume that you have created a single partition ''/dev/sdh1''.<br />
* now format the drive with an ext3 filesystem<br />
mkfs.ext3 /dev/sdh1<br />
* make the mount point<br />
mkdir -p /mnt/affadevice<br />
* add the following line to the /etc/fstab<br />
 /dev/sdh1 /mnt/affadevice ext3 defaults 0 0<br />
* mount the drive<br />
mount /mnt/affadevice<br />
* crosscheck your work using the df command<br />
df<br />
<br />
===== Copying an Affa USB hard drive archive to a new disk =====<br />
Affa uses copious amounts of hard links to conserve disk space for its backups. If you want to copy such a disk archive to a new (bigger) disk, you need to ensure that the hard links are copied correctly, or the destination copy may become significantly bigger than the source archive.<br />
<br />
One way to copy across file systems (i.e. two different USB disks) and preserve the hard links is as follows:<br />
<br />
* mount both USB drives but with different mount points. e.g. /media/backup1 & /media/backup2 and then:<br />
<br />
mkdir /media/backup2/archive <br />
cd /media/backup1/archive<br />
tar cpf - . | ( cd /media/backup2/archive && tar xpf - )<br />
<br />
where '''archive''' is the name of the AFFA job you want to move to the other disk.<br />
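The tar pipe preserves the hard links because tar stores later occurrences of a linked file as link entries (rsync -aH would be an alternative, since -H also recreates hard links). You can convince yourself on throwaway data:<br />

```shell
# Minimal demonstration that the tar pipe keeps hard links intact.
SRC=$(mktemp -d); DST=$(mktemp -d)
echo data > "$SRC/a"
ln "$SRC/a" "$SRC/b"                     # hard-linked pair, like Affa archives
( cd "$SRC" && tar cpf - . ) | ( cd "$DST" && tar xpf - )
# both names should now resolve to a single inode on the destination:
stat -c %i "$DST/a" "$DST/b"
```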
<br />
===== Setup Affa =====<br />
You want to run backups at 11:30, 15:30 and 19:30, and you want to keep the last 3 scheduled backups plus 7 daily, 5 weekly, 12 monthly and 1 yearly backups.<br />
* log into the 'prodbox' and install the Affa packages as described above.<br />
* create/edit the conf file of the job: nano /etc/affa/prodbox.conf<br />
and set:<br />
[prodbox]<br />
remoteHostName=localhost<br />
TimeSchedule=1130<br />
TimeSchedule=1530<br />
TimeSchedule=1930<br />
scheduledKeep=3<br />
dailyKeep=7<br />
weeklyKeep=5<br />
monthlyKeep=12<br />
yearlyKeep=1<br />
RootDir=/mnt/affadevice<br />
Review the other properties and change them to your needs.<br />
* run the job manually<br />
affa --run prodbox<br />
<br />
===== Limitations =====<br />
With this kind of setup you cannot use the Affa rise feature, as it requires the backup archive to be located on the '''same filesystem''' as the server installation. The rise option uses hardlinks, which do not work across filesystems.<br />
<br />
===== Automount =====<br />
Having the backup archives in the same filesystem is always a risk, which can be minimized by using the automount feature. The external filesystem is then only mounted during a job run.<br />
<br />
In the NAS example set<br />
AutomountDevice=10.200.48.2:/mnt/affashare<br />
AutomountPoint=/mnt/affadevice<br />
and skip the manual mount step above.<br />
<br />
In the USB drive example set<br />
AutomountDevice=/dev/sdc1<br />
AutomountPoint=/mnt/affadevice<br />
and skip the mkdir, fstab, mount and df steps above.<br />
<br />
The mount point will be automatically created if it does not exist. <br><br />
To access the archive directory, you need to mount it manually.<br />
<br />
=== Restore ===<br />
==== Restore single files or directories ====<br />
'''Example 1:''' It's Tuesday, January 6th 2009, when user 'briedlin' asks you to restore the messages of his mailbox 'orders', which he accidentally deleted on Monday.<br />
<br />
* You first must check which backup archives are available. The job name of this server backup is 'primmail'. To get a listing of all archives, run<br />
affa --list-archives primmail<br />
''(see the example listing in chapter [[Affa#Usage_and_command_line_options]])''<br />
* Choose the daily.0 archive, which was created Monday night. Restore the mailbox 'orders' from it using rsync.<br />
* Now run the rsync command ''(note the trailing slash!)'' on the Affa backup server:<br />
export RDIR=/home/e-smith/files/users/briedlin/Maildir/.orders/ ''# this variable is used to shorten the next command line''<br />
rsync -av /var/affa/primmail/daily.0/$RDIR 10.204.48.1:$RDIR<br />
<br />
If the servers are configured to use a different ssh port eg 2222, then instead do:<br />
<br />
export RDIR=/home/e-smith/files/users/briedlin/Maildir/.orders/ ''# this variable is used to shorten the next command line''<br />
rsync -av -e 'ssh -p 2222' /var/affa/primmail/daily.0/$RDIR 10.204.48.1:$RDIR<br />
<br />
<br />
<br />
'''Example 2:''' A user has deleted the file orderform.pdf from ibay 'docs' on the server 10.204.48.1 and asks you to restore it.<br />
<br />
* You have searched and found the latest version of this file in weekly archive of job 'prodserv'.<br />
* To copy it back to the server 10.204.48.1 run on the Affa server<br />
export RFILE=/home/e-smith/files/ibays/docs/files/orderform.pdf ''# this variable is used to shorten the next command line''<br />
rsync -av /var/affa/prodserv/weekly.1/$RFILE 10.204.48.1:$RFILE<br />
<br />
* If the servers are configured to use a different ssh port eg 2222, then instead do:<br />
export RFILE=/home/e-smith/files/ibays/docs/files/orderform.pdf ''# this variable is used to shorten the next command line''<br />
rsync -av -e 'ssh -p 2222' /var/affa/prodserv/weekly.1/$RFILE 10.204.48.1:$RFILE<br />
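Both examples follow the same pattern. A tiny helper function (hypothetical, not part of Affa) makes the structure explicit; it assumes the archive layout /var/affa/JOB/ARCHIVE described earlier:<br />

```shell
# Hypothetical helper that assembles the restore command used in the
# examples above.  It only prints the command, so you can review it first.
build_restore_cmd() {
    job=$1 archive=$2 path=$3 host=$4 port=${5:-22}
    echo "rsync -av -e 'ssh -p $port' /var/affa/$job/$archive$path $host:$path"
}

build_restore_cmd prodserv weekly.1 \
    /home/e-smith/files/ibays/docs/files/orderform.pdf 10.204.48.1
```

Note that the trailing-slash rule from example 1 still applies when restoring whole directories.<br />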
<br />
<br />
<br />
'''Example 3:''' Restoring a file with special characters in the path<br />
<br />
Do not use shell variables to shorten the command; it would complicate things more than it helps.<br />
Quote the source and destination paths. In the destination path, escape blank characters with a double backslash and brackets with a single backslash. On the Affa backup server do:<br />
rsync -av "/var/affa/fileshare/daily.4/home/e-smith/files/ibays/mechfiles/files/Valve Control (Design Gr)/VALVE LIST FOR ISSUED.xls" "192.168.1.7:/home/e-smith/files/ibays/mechfiles/files/Valve\\ Control\\ \(Design\\ Gr\)/VALVE\\ LIST\\ FOR\\ ISSUED.xls"<br />
<br />
If the servers are configured to use a different ssh port eg 2222, then instead do:<br />
<br />
rsync -av -e 'ssh -p 2222' "/var/affa/fileshare/daily.4/home/e-smith/files/ibays/mechfiles/files/Valve Control (Design Gr)/VALVE LIST FOR ISSUED.xls" "192.168.1.7:/home/e-smith/files/ibays/mechfiles/files/Valve\\ Control\\ \(Design\\ Gr\)/VALVE\\ LIST\\ FOR\\ ISSUED.xls"<br />
<br />
==== Full restore ====<br />
Generally: <br />
'''affa --full-restore [optional settings] JOB [ARCHIVE]'''<br />
This rsyncs the data (files and directories) from the backup ARCHIVE back to the 'remoteHostName' defined in the configuration of the job JOB. <br />
If ARCHIVE is not given, the archive 'scheduled.0' is used as the default.<br />
The --full-restore reconstructs the server as it was at the time of the backup and takes the following '''important''' optional settings into consideration:<br />
<br />
With the option --preserve-newer=yes, files on the remote server with a modification time newer than in the backup are not overwritten.<br />
<br />
With the option --delete=yes, all files on the remote server which are not in the backup are deleted.<br />
<br />
If the parameter "SMEServer=yes" is set, signal-event pre-restore and signal-event post-upgrade will be run automatically.<br />
<br />
After the restore is done, the restored server reboots automatically. <br />
<br />
Example:<br />
You have backed up your production server 'prodsrv' as job 'prodbox'. To restore only lost data from the latest backup, run<br />
affa --full-restore prodbox<br />
<br />
To restore exactly the state of the older archive daily.3, run <br />
 affa --full-restore --preserve-newer=no --delete=yes prodbox daily.3<br />
<br />
{{Warning box|A full restore with --preserve-newer=no and --delete=yes reconstructs the server as it was at the time of the backup. That means that all files created and all server configuration changes made after the backup will be lost! }}<br />
<br />
==== Moving a SME server installation to new hardware using the Affa rise feature or the backup-restore functions ====<br />
Please see this Howto: [[Moving SME to new Hardware]]<br />
<br />
==== Restore from USB drive on new server ====<br />
<br />
This tip comes from http://forums.contribs.org/index.php?topic=42412.0<br />
<br />
Q) I have complete backups made with Affa stored on a USB hard drive connected to our Affa backup server. I need to restore an earlier monthly backup of our job "mailbackup" to a test server rather than back to the original system. The instructions explain how to rise the backup server to the current backup, but I want to restore a point from about a month ago, before some strange things happened. And I want to do it on a machine that is neither our backup server nor our production server. I am lost in the options. My goal is to do some testing.<br />
<br />
A) On your test server set up a job "mailbackup" identical to that on your backup server (a copy of the conf file is available in the folder of the archive, as a hidden file ''.mailbackup-setup.ini''), but set property '''RootDir''' to /var/affa and property '''status''' to disabled. Connect the USB drive and copy the archive of the job mailbackup to /var/affa. Then run affa --rise mailbackup ARCHIVE<br />
<br />
=== FAQ ===<br />
'''What files and directories are included by default?'''<br />
<br />
With SMEServer=no, nothing at all.<br />
<br />
With SMEServer=yes, the SME default backup list is the output of the following command:<br />
perl -e 'use esmith::Backup;$b=new esmith::Backup;print join("\n",$b->restore_list)."\n"' <br />
plus the additional folder:<br />
/etc/affa<br />
<br />
{{Note box|The following example needs to be reworked for Affa 3.}}<br />
'''Can I exclusively back up image files from a specific directory?'''<br />
Yes. Assuming you want to back up all gif and jpg files from the directory /home/e-smith/files/ibays/pictures/files, use this configuration:<br />
db affa setprop JOBNAME SMEServer no # do not include SME Server default directories and files<br />
db affa setprop JOBNAME Include[0] '/home/e-smith/files/ibays/pictures/files' # start searching here<br />
db affa setprop JOBNAME Include[1] '*/' # search the tree recursively<br />
db affa setprop JOBNAME Include[2] '*.gif' # copy files that match these patterns<br />
db affa setprop JOBNAME Include[3] '*.jpg' <br />
db affa setprop JOBNAME Exclude[0] '*' # exclude all others<br />
db affa setprop JOBNAME rsyncOptions '-m' # do not create empty folders<br />
<br />
<br />
'''How can I move a backup archive from my production server in a remote branch office to the Affa server using a USB disk? I want to avoid downloading the first full backup over the internet.'''<br />
Install Affa on your production server, then connect and mount the USB disk. Set up a job to write the archive to the USB disk. After the job run, connect the USB disk to your Affa server and set up the job there in the same way. Use the setup script from the archive directory; you only need to change the remoteHostName property. Now use the --move-archive option to move the archive to the local disk. If not already done, send the public key to your production server. You're done.<br />
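Sketched as commands (the job name 'seedjob' is an example, and DRYRUN=echo is a review convention that prints each command instead of executing it):<br />

```shell
DRYRUN=echo                                    # set DRYRUN= (empty) to really run
# 1) On the production server, with the USB disk mounted and the job's
#    RootDir pointing at it:
$DRYRUN affa --run seedjob                     # first full backup goes to the USB disk
# 2) On the Affa server, after attaching the disk and re-creating the job
#    (only remoteHostName changes):
$DRYRUN affa --move-archive seedjob /var/affa  # move archive onto the local disk
$DRYRUN affa --send-key seedjob                # if not already done
```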
<br />
<br />
'''How do I back up two SME servers behind a firewall?'''<br />
First you need to configure port forwarding for the ssh service on your firewall. Use a non-standard port, e.g. 2200->22, for your second server. Set up a job on your Affa server for each of your production servers, and set the job property sshPort=2200 for the second server.<br />
<br />
=== Uninstall ===<br />
This removes the installed Affa package, all configuration data and all backup archives.<br />
<br />
Run the following commands for each job:<br />
affa --revoke-key JOBNAME<br />
affa --delete-job JOBNAME<br />
Verify that all jobs have been deleted:<br />
affa --status<br />
Remove the Affa package:<br />
 yum remove smeserver-affa<br />
Cleaning up:<br />
rm -f /etc/cron.d/affa<br />
rm -rf /etc/affa<br />
rm -rf /var/affa /var/log/affa<br />
<br />
=== Troubleshooting ===<br />
==== Affa stops working after prodserver was rebuilt ====<br />
'''Description:'''<br />
* Affa stopped working or is broken.<br />
* When affaserver tries to make an ssh connection to prodserver it fails.<br />
* In the /var/log/affa/''jobname''.log file you will see this message: "SSH connection to ''prodserver ip'' failed. Did you send the public key?" <br />
* Resending the keys does not solve the problem. <br />
<br />
'''Solution:'''<br />
* This is probably because the host key for prodserver stored on the affaserver no longer matches the current key of prodserver. <br />
* You need to remove the existing key so that the next ssh connection fetches the key from prodserver again. <br />
* On the command line, edit the /root/.ssh/known_hosts file and remove the line for prodserver. It starts with: ''prodserver ip'' ssh-rsa<br />
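This can also be done non-interactively. The snippet below demonstrates the sed approach on a temporary copy; on the real Affa server, point the variable at /root/.ssh/known_hosts, and note that 10.200.48.1 stands in for your prodserver IP. (`ssh-keygen -R <ip> -f /root/.ssh/known_hosts` is an equivalent OpenSSH-provided alternative.)<br />

```shell
# Remove the stale prodserver entry from known_hosts without an editor.
KNOWN_HOSTS=$(mktemp)            # demo copy; use /root/.ssh/known_hosts for real
printf '%s\n' \
    '10.200.48.1 ssh-rsa AAAA...stale-prodserver-key' \
    '10.200.48.9 ssh-rsa AAAA...some-other-host' > "$KNOWN_HOSTS"
sed -i '/^10\.200\.48\.1 /d' "$KNOWN_HOSTS"   # delete the line for prodserver's IP
cat "$KNOWN_HOSTS"                            # only the other host remains
```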
<br />
=== Bug report ===<br />
Affa is listed in the [http://bugs.contribs.org/enter_bug.cgi?product=SME%20Contribs&component=smeserver-affa bugtracker contribs section].<br />
Please report all bugs, new feature requests and documentation issues there.<br />
<br />
<br />
{{#bugzilla:columns=id,product,version,status,summary |sort=id |order=desc |disablecache=1|component=smeserver-affa|noresultsmessage="No open bugs found."}}<br />
<br />
=== Additional information ===<br />
For details of performance, changelog etc see [[:Affa:Additional information]]<br />
<br />
=== References ===<br />
* Wikipedia article http://en.wikipedia.org/wiki/Rsync<br />
* Rsync homepage http://rsync.samba.org/<br />
* Wiki article [[Moving SME to new Hardware]]<br />
* Wiki article [[Backup with Affa and FreeDup]]<br />
* Wiki article [[Rsyncd setup on a windows computer for use with Affa backup]]<br />
* Wiki article [[Backup Windows Computers Using Affa]]<br />
* Wiki article [[Backup of ESXi Virtual Machines using Affa]]<br />
<br />
----<br />
[[Category: Contrib]]<br />
[[Category: Backup]]</div>
<hr />
<div>{{Languages|Affa}}<br />
{{Level|Advanced}}<br />
{{usefulnote}}<br />
<br />
<br />
=== Maintainer ===<br />
Maintainer(s) of Affa 3: Arnaud, stephdl<br><br />
Development of Affa2 for SME Server by the original maintainer has been discontinued.<br />
<br />
=== Version ===<br />
{{ #smeversion: affa}}<br />
{{ #smeversion: smeserver-affa}}<br />
<br />
<br />
=== Description ===<br />
The main purpose of this Affa package is to turn an SME Server into a dedicated backup box in a few minutes. Affa backs up as many SME servers as you like, or any other servers which have sshd running and rsync installed. Once it is configured, Affa runs reliably unattended and sends warning messages in case of any errors. <br />
<br />
All backup archives are full backups, as Affa makes use of the hardlink technique. Therefore a new full backup only needs disk space for the differences plus the filesystem overhead for the hardlinks and directories (which is typically 2-3%).<br />
<br />
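The hardlink technique can be illustrated with plain coreutils: a hard-link copy of a snapshot costs almost no space, and only changed files would be replaced afterwards. This is the general rotating-snapshot idea, not Affa's actual code:<br />

```shell
# Illustration of the hardlink technique on throwaway directories.
WORK=$(mktemp -d)
mkdir "$WORK/snap0"
echo unchanged > "$WORK/snap0/file"
cp -al "$WORK/snap0" "$WORK/snap1"   # -l: hard links instead of data copies
# both snapshots look like full backups, yet share a single inode:
stat -c %i "$WORK/snap0/file" "$WORK/snap1/file"
```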
Affa is based on the rsync program and supports the rsync --compress option. This allows you to run backups over the internet or VPN. A typical setup is one or more Affa backup servers placed in different locations, which backup the production server(s) over the VPN.<br />
<br />
A special feature is the rise option, which allows you to rise the backup server to your production server from a backup archive in case of a total loss of your production server. The rise is executed within an extremely short time, even with huge amounts of data. The rise feature uses hardlinks and therefore does not use up additional disk space.<br />
<br />
The rise feature can also be used to upgrade from a lower to a higher version of SME. This also works from SME 8.1 to 9. For this you need two separate installations of SME: one is the actual running server, and the other is an Affa backup server with a newer version of SME Server, on which you execute the rise command. See more here: [[Moving SME to new Hardware]]<br />
<br />
A simple way to run a server with maximum uptime is to duplicate the hardware, e.g. to have two separate hardware boxes or virtual servers on separate hardware. One is the actual server and the other is a backup system that, in case of problems, can be promoted to the actual server with the rise command in a very short time. This also allows you to upgrade the SME Server software with minimum downtime!<br />
<br />
Affa is a command line tool for system administrators and is intentionally designed without a GUI. Therefore it can be efficiently managed on the console and over slow internet connections.<br />
<br />
'''Note:''' This document also refers to Affa version 2, which is not maintained anymore. Information on Affa 3 will be gradually added here once it has been tested more.<br />
<br />
==== Affa features at a glance ====<br />
* Affa is secure: All transfers and connections are made by using the ssh protocol with public/private key authentication<br />
* Periodically runs unattended full backups. Only differences to the previous full backup are transferred over the network<br />
* Using rsync with optional bandwidth limit and compression allows backups over the internet<br />
* Uses hardlink technique, i.e. physical disk space only needed for the differences between two full backups<br />
* Keeps a configurable number of scheduled, daily, weekly, monthly and yearly full backup archives<br />
* The archives are browseable. No need to unpack them first.<br />
* Archives can be mapped to a Samba share.<br />
* Backup jobs are started by the cron daemon<br />
* Interrupted jobs continue with the already transferred data when restarted<br />
* Backs up the default e-smith directories and files when the property SMEServer is set to yes<br />
* Additional directories and files can be included<br />
* Directories and files can be excluded from the backup<br />
* Non-SME server linux systems can be backed up by setting the SMEServer property to no and using an include list<br />
* In ESXi mode, running virtual machines can be backed up. See [[Backup of ESXi Virtual Machines using Affa]]<br />
* Configurable nice level for rsync processes on the backup and source server<br />
* Optional run of custom programs before and after a job run (e.g. running tape backup)<br />
* Checks the disk space left after a job run with warning levels strict, normal or risky<br />
* Extensive checking of failure conditions<br />
* Sends failure messages to a configurable list of email addresses<br />
* Sends a warning message, if the backup server runs out of disk space<br />
* Installs an optional watchdog on the source server in case the backupserver fails (SME Server only)<br />
* Watchdog sends warning, if an expected backup did not run (SME Server only)<br />
* Watchdog sends a daily reminder message, if the error continues unchecked (SME Server only)<br />
* Option to display current status of all jobs showing times of last and next run, size and disk usage<br />
* Status can be mailed on a daily, weekly or monthly schedule<br />
* Option to display all existing archives of a job, showing date, number of files, size and bytes transferred from the source<br />
* Option to send (and revoke) the public key to the source server (SME Server and ESXi only)<br />
* Option to rise the backup server to a production server from a backup (SME Server only)<br />
* The rise feature does not physically move data and therefore is extremely fast and needs (almost) no extra disk space<br />
* Rise option can be run remotely as the NIC driver configuration of the backup server is preserved<br />
* Compares installed RPMs on source with backup server. Sends warning message, if not in sync<br />
* Undo rise option to restore the backup server<br />
* Configurable via a e-smith style db, with one record for each job and a default record for all jobs<br />
* Logs to /var/log/affa/JOB.log and /var/log/affa/affa.log with optional debug switch for higher verbosity<br />
* Log files are rotated weekly, with 5 logs kept<br />
<br />
<div style="text-align:left" class="mw-collapsible mw-collapsed" data-collapsetext="less &uarr;" data-expandtext="Obsolete Affa 2 instructions &darr;"><br />
<br />
=== Installation or Update of Affa 2 ===<br />
<br />
====Smecontribs for sme8====<br />
yum install --enablerepo=smecontribs smeserver-affa<br />
<br />
The latest test version can be found here :<br />
[http://www.reetspetit.com/smeserver/5/noarch/smeserver-affa-2.0.0-rc6.el5.sme.noarch.rpm smeserver-affa-2.0.0-rc6.el5.sme.noarch.rpm] Note this is NOT signed.<br />
<br />
Please post bug reports in the [http://bugs.contribs.org bug tracker]<br />
<br />
When you have installed Affa for the first time run the following command to initialize the Affa database<br />
<br />
affa --make-cronjobs<br />
and log out and log in again on the console for the bash auto-completion (TAB key) to take effect.<br />
====Obsolete way to install====<br />
{{Note box|SME7 is now EOL and has been moved to the obsolete tree}}<br />
SME7 version: '''smeserver-affa-2.0.0-rc4.noarch.rpm'''<br />
wget <nowiki>http://mirror.contribs.org/smeserver/contribs/michaelw/sme7/Affa2/smeserver-affa-2.0.0-rc4.noarch.rpm</nowiki><br />
wget <nowiki>http://mirror.contribs.org/smeserver/contribs/michaelw/sme7/Affa2/perl-Compress-Bzip2-2.09-1.2.el4.rf.i386.rpm</nowiki><br />
/usr/bin/yum --enablerepo=smecontribs localinstall \<br />
smeserver-affa-2.0.0-rc4.noarch.rpm \<br />
perl-Compress-Bzip2-2.09-1.2.el4.rf.i386.rpm<br />
<br />
SME8 version: SME8 is based on CentOS5, therefore use the perl-Compress-Bzip2 el5 rpm: <br />
wget <nowiki>http://mirror.contribs.org/releases/8/smecontribs/i386/repoview/smeserver-affa.html</nowiki><br />
For SME8 32bit:<br />
wget <nowiki>http://apt.sw.be/redhat/el5/en/i386/rpmforge/RPMS/perl-Compress-Bzip2-2.09-1.2.el5.rf.i386.rpm</nowiki><br />
/usr/bin/yum --enablerepo=smecontribs localinstall \<br />
smeserver-affa-2.0.0-rc4.noarch.rpm \<br />
perl-Compress-Bzip2-2.09-1.2.el5.rf.i386.rpm<br />
For SME8 64bit:<br />
wget http://apt.sw.be/redhat/el5/en/x86_64/rpmforge/RPMS/perl-Compress-Bzip2-2.09-1.2.el5.rf.x86_64.rpm<br />
/usr/bin/yum --enablerepo=smecontribs localinstall \<br />
smeserver-affa-2.0.0-rc4.noarch.rpm \<br />
perl-Compress-Bzip2-2.09-1.2.el5.rf.x86_64.rpm<br />
<br />
When you have installed Affa for the first time run the following command to initialize the Affa database<br />
affa --make-cronjobs<br />
and log out and log in again on the console for the bash auto-completion (TAB key) to take effect.<br />
<br />
==== Quick start example for Affa 2 ====<br />
You have a SME 7 production server with hostname 'prodbox' and IP 10.200.48.1.<br><br />
Set up a second SME 7 box as your backup server with hostname 'affabox' and IP 10.200.48.2. <br />
<br />
<ol><li>log into the 'affabox' and install the packages as described above.<br />
</li><li>copy the config helper script sample<br />
cp /usr/lib/affa/jobconfig-sample.pl /root/prodbox-job.pl<br />
</li><li>edit /root/prodbox-job.pl and set<br />
my $jobname='prodbox';<br />
and<br />
'remoteHostName'=>'10.200.48.1',<br />
optionally, if the remote server's SSH port is configured to e.g. 2222, then set<br />
'sshPort'=>2222,<br />
</li><li>save and run the 'prodbox-job.pl' file. This writes the configuration (it creates the database entries and sets up the cronjobs)<br />
/root/prodbox-job.pl<br />
</li><li>generate the DSA keys and send the public key to the 'prodbox'<br />
{{Note box|When initially doing this step, you will need to temporarily enable "Allow secure shell access using standard passwords" on the production server}}<br />
affa --send-key prodbox<br />
</li><li>run the job manually<br />
<br />
affa --run prodbox<br />
</li></ol><br />
<br />
==== Configuration ====<br />
The configuration is stored in an e-smith style database. Use the db command to configure Affa.<br />
The jobname is the record key with the type 'job'.<br>To set up a new job with the name 'prodbox' enter:<br />
<br />
db affa set prodbox job<br />
<br />
then set the properties<br />
<br />
db affa setprop prodbox remoteHostName 192.168.1.1<br />
db affa setprop prodbox TimeSchedule '0030,0730,1130,1330,1730,2030'<br />
db affa setprop prodbox Description 'My Production Server'<br />
db affa setprop prodbox status enabled<br />
and so on...<br />
<br />
Alternatively, you can use a script as described above in the 'Quick start' chapter.<br />
<br />
To verify your work, type:<br />
<br />
db affa show prodbox<br />
<br />
Finally set up the cronjobs:<br />
affa --make-cronjobs<br />
<br />
</div><br />
<br />
<br />
=== Installation of Affa 3 ===<br />
The following instructions assume that a fresh Affa 3 installation is made on a fresh, dedicated SME server which serves as the backup server ('affabox'). No jobs are migrated from Affa 2. If you use an existing SME server, just skip the instructions on how to set up a fresh SME box.<br />
<br />
Set up a dedicated machine with SME 8.1 or SME 9.0 from CDROM. Use the following example settings:<br />
<ol><li>Domain name: athome.xx (use your existing domain name)</li><br />
<li>Host name: affabox (must not match to existing host)</li><br />
<li>IP address: 192.168.0.10 (must not match to existing IP address)</li><br />
<li>Subnet: 255.255.255.0</li><br />
<li>role: Server only</li><br />
<li>Gateway: 192.168.0.1 (use your existing gateway)</li><br />
<li>DHCP: DEactivate</li><br />
<li>DNS server: 192.168.0.1 (use your existing DNS server)</li><br />
<li>activate changes: yes</li><br />
<li>The fresh server will then restart with the above settings.</li><br />
</ol><br />
<br />
To update your server, log in as user root on the local console. Use the 'top' command to display running processes and wait until 'yum' has stopped running.<br />
yum clean all<br />
yum update<br />
The fresh server will be updated.<br />
signal-event post-upgrade<br />
signal-event reboot<br />
The server will reboot. Afterwards it is ready for installation of Affa 3.<br />
<br />
Login as root on local or remote SSH console of ''affabox''.<br />
<br />
= For SME8 =<br />
<br />
To install issue the following command:<br />
yum install smeserver-affa --enablerepo=smecontribs<br />
= For SME9 =<br />
<br />
yum install smeserver-affa --enablerepo=smecontribs<br />
<headertabs/><br />
<br />
The server again needs to be updated.<br />
signal-event post-upgrade<br />
signal-event reboot<br />
The server will reboot. Afterwards Affa 3 is ready for use.<br />
<br />
Now manually create the directory for archive files:<br />
mkdir /var/affa<br />
<br />
{{Warning box| Note that on SME9, a new SSH [https://wiki.contribs.org/AutoBlock AutoBlock] feature interferes with the Affa requirement for regular SSH logins! You might have to disable this feature to allow Affa to operate correctly. }}<br />
<br />
=== Creation of backup jobs ===<br />
Job configurations of Affa 3 are no longer stored in an e-smith style database. They are stored in configuration files located in ''/etc/affa'' . Create your configuration files in this directory, e.g.<br />
touch /etc/affa/backup-smeserver.conf<br />
where ''backup-smeserver'' is your jobname, and add content as described below.<br />
<br />
==== Quick start example ====<br />
You have an SME production server with hostname 'smeserver' and IP 192.168.0.2.<br><br />
You have a second SME box as your backup server with hostname 'affabox' and IP 192.168.0.10. <br />
<br />
Login to your 'affabox' as root and edit ''/etc/affa/backup-smeserver.conf'' . Using e.g. editor ''nano'', create the following example job configuration file for jobname ''backup-smeserver'' :<br />
[backup-smeserver]<br />
remoteHostName=192.168.0.2<br />
SMEServer=yes<br />
Watchdog=yes<br />
RPMCheck=yes<br />
ConnectionCheckTimeout=120<br />
Debug=no<br />
Description=Backup of 192.168.0.2 smeserver.athome.xx<br />
DiskSpaceWarn=strict<br />
RootDir=/var/affa<br />
TimeSchedule=0630<br />
localNice=15<br />
remoteNice=15<br />
rsync--inplace=yes<br />
rsyncCompress=no<br />
rsyncTimeout=900<br />
scheduledKeep=1<br />
dailyKeep=7<br />
weeklyKeep=4<br />
monthlyKeep=12<br />
yearlyKeep=1<br />
status=enabled<br />
<br />
If you have a Letsencrypt certificate on the production server, then you should also include<br />
Include=/etc/dehydrated<br />
<br />
Then save your job configuration file.<br />
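If you prefer, such a file can also be created non-interactively with a shell heredoc. The following is a minimal sketch using only a subset of the properties shown above; the AFFA_CONF_DIR override is a hypothetical convenience, not an Affa feature, so the snippet can be tried outside ''/etc/affa'' :<br />

```shell
# Sketch: write a minimal Affa 3 job configuration non-interactively.
# Only a subset of the properties from the example above is written here.
# AFFA_CONF_DIR is a hypothetical override (not used by Affa itself);
# without it the file goes to the standard /etc/affa directory.
conf_dir="${AFFA_CONF_DIR:-/etc/affa}"
mkdir -p "$conf_dir"
cat > "$conf_dir/backup-smeserver.conf" <<'EOF'
[backup-smeserver]
remoteHostName=192.168.0.2
SMEServer=yes
TimeSchedule=0630
status=enabled
EOF
```

Afterwards validate the file with ''affa --configcheck'' as described in the next step.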
<br />
Now check that your configuration is OK:<br />
affa --configcheck<br />
This should report no errors. Now create the DSA keys and send the public key to your production smeserver:<br />
affa --send-key backup-smeserver<br />
The following output should appear on the console:<br />
Job backup-smeserver: Generating DSA keys...<br />
Successfully created DSA key pair.<br />
root@192.168.0.2's password: <Enter password of 192.168.0.2/smeserver and hit ENTER><br />
Public key sent to 192.168.0.2<br />
Now run your job manually, both to test the setup and to accept the remote host's key:<br />
affa --run backup-smeserver<br />
The following output should appear on the console:<br />
The authenticity of host 'backup-smeserver (192.168.0.2)' can't be established.<br />
RSA key fingerprint is 3b..........65.<br />
Are you sure you want to continue connecting (yes/no)? <yes><br />
Now your backup job should run for the first time. Depending on the volume of the files, this takes quite a while.<br />
<br />
Once the job is done, check that the archive is available:<br />
affa --list-archives<br />
<br />
or run the job a second time:<br />
affa --run backup-smeserver<br />
Note that you won't be asked for the password again. Note as well that this second run of the job ''backup-smeserver'' should complete considerably faster, because not all files are copied again: only the differences from the previous run are backed up; the rest is stored as hard links to the existing file copies.<br />
<br />
Check that the second archive is available too:<br />
affa --list-archives<br />
<br />
From here you are able to work as with Affa 2. Modify your config file as required and described below. Automate the backup using the command ''affa --make-cronjobs'', see below.<br />
<br />
==== Job configuration properties ====<br />
'''Note 1:''' The default values shown in this table are the Affa program defaults and not to be confused with the preset values in the job configuration files, e.g. backup-smeserver.conf.<br />
<br />
'''Note 2:''' The complete documentation of the Affa 3 settings is available in [http://affa.sourceforge.net/AffaPdfMan.pdf the Affa PDF manual]. It should be taken into consideration.<br />
<br />
'''Note 3:''' Affa 3 for SME is a fork of [http://affa.sourceforge.net/ Affa 3 for CentOS]. It adds a few options back again which have been removed. Generally the configuration properties as described [http://wiki.nikoforge.org/Affa_Configuration/ here] do apply. Arnaud added the following functions for SME:<br />
<ul><br />
<li>The parameter “SMEServer”</li><br />
<li>The parameter and function “Watchdog”</li><br />
<li>The parameter and function “RPMCheck”</li><br />
<li>The functions “--rise” and “--undo-rise”</li><br />
</ul><br />
Consequently the list below reproduces the list for the unforked Affa 3 version, plus the properties above. For details refer to [https://www.guedel.eu/index.php/informatique/sme-server-8/affa-v3 Arnaud's website].<br />
{| border="1" cellpadding="3" cellspacing=0<br />
|-<br />
| '''Property''' || '''Value''' || '''Multivalue''' || '''Default''' || '''Description''' <br />
|-<br />
| remoteHostName <br />
| FQHN or IP<br />
| no<br />
| <none><br />
| FQHN or IP of the source host (mandatory)<br />
|-<br />
| remoteUser<br />
| account<br />
| no<br />
| root<br />
| The user account to be used with all ssh logins. May be case sensitive, e.g. Administrator account on Windows<br />
|-<br />
| Description<br />
| text string<br />
| no<br />
| <none><br />
| Any text that describes the job<br />
|-<br />
| TimeSchedule <br />
| HHMM<br />
| yes<br />
| 2230<br />
| The time the job will run. Use exactly 4 digits, no colon, no point. '''Important:''' Using the proper format HHMM is essential. Badly formatted TimeSchedule will cause strange Perl errors. Multiple runs can be achieved by repeating the line for each run time.<br />
|-<br />
| status <br />
| enabled ''or'' disabled<br />
| no<br />
| enabled<br />
| When set to disabled, no cron entries will be made. You can still run a job manually. <br />
|-<br />
| Include<br />
| full path<br />
| yes<br />
| <none><br />
| File(s) or directory(s) to be included in the backup. <br />
|-<br />
| Exclude<br />
| full path<br />
| yes<br />
| <none><br />
| File(s) or directory(s) to be excluded from the backup. <br />
|-<br />
| RootDir<br />
| full path<br />
| no<br />
| /var/affa<br />
| where to store the backup archives. Do not use /home/e-smith or /root as these are included in the backup and therefore the rise option will not work! Recommended: /var/affa<br />
|-<br />
| scheduledKeep<br />
| integer >= 1<br />
| no<br />
| 1<br />
| how many of the scheduled backups to be kept<br />
|-<br />
| dailyKeep<br />
| integer >= 0<br />
| no<br />
| 7<br />
| how many of the daily backups to be kept<br />
|-<br />
| weeklyKeep<br />
| integer >= 0<br />
| no<br />
| 4<br />
| how many of the weekly backups to be kept<br />
|-<br />
| monthlyKeep<br />
| integer >= 0<br />
| no<br />
| 12<br />
| how many of the monthly backups to be kept<br />
|-<br />
| yearlyKeep<br />
| integer >= 0<br />
| no<br />
| 2<br />
| how many of the yearly backups to be kept<br />
|-<br />
| EmailAddress<br />
| name@domain.com<br />
| yes<br />
| root<br />
| comma separated list of mail addresses, where the messages should be sent to<br>'''Note:''' By default Affa only sends messages on errors, never on success (see property chattyOnSuccess).<br />
|-<br />
| RetryAttempts<br />
| integer >= 0<br />
| no<br />
| 4<br />
| When set to a value>0, Affa re-runs a failed job RetryAttempts times with a delay of RetryAfter seconds.<br />
|-<br />
| RetryAfter<br />
| seconds >= 0<br />
| no<br />
| 900<br />
| when set to a value>0, wait RetryAfter seconds before re-running the job after an error. Only applicable with RetryAttempts>0<br />
|-<br />
| RetryNotification<br />
| yes ''or'' no<br />
| no<br />
| no<br />
| when set to no, Affa does not send an error message when a job has failed and RetryAttempts is configured. An error message is only sent when the last attempt has failed.<br />
|-<br />
| NRPEtrigger<br />
| hours<br />
| no<br />
| 24<br />
| NRPE reports a job as critical when the last successful run is older than NRPEtrigger hours. To exclude the job from monitoring set to a value < 0.<br />
|-<br />
| SambaShare<br />
| yes ''or'' no<br />
| no<br />
| no<br />
| Access to the job archives via CIFS protocol.<br />
|-<br />
| SambaValidUser<br />
| local account<br />
| yes<br />
| affa<br />
| User who has permission to access the job archives via the CIFS protocol.<br />
|-<br />
| preJobCommand<br> preJobCommandRemote<br />
| path relative to /etc/affa/scripts/<br />
| yes<br />
| <none><br />
| Scripts to be executed before a job run. The job name and type (scheduled, daily etc.) are passed as arguments to the program. The preJobCommandRemote scripts are copied to the remote server and then executed there. The scripts are executed in alphabetical order. Use a numerical prefix if you need a specific order, e.g. 01scriptB, 02remotescriptA, 03scriptA. Use the included prescript-sample.pl and prescriptRemote-sample.pl as a starting point for your own scripts.<br>'''Note:''' If you use the parameter “SMEServer=yes”, you may delete the Include entries for the SME default directories in the job configuration: they will be backed up automatically.<br />
|-<br />
| postJobCommand<br> postJobCommandRemote<br />
| path relative to /etc/affa/scripts/<br />
| yes<br />
| <none><br />
| Scripts to be executed after a job run. The job name, the type (scheduled, daily etc.) and the exit code of the job run are passed as arguments to the program. The postJobCommandRemote scripts are copied to the remote server and then executed there. The scripts are executed in alphabetical order. Use a numerical prefix if you need a specific order, e.g. 01scriptB, 02remotescriptA, 03scriptA. Use the included postscript-sample.pl and postscriptRemote-sample.pl as a starting point for your own scripts.<br>'''Note:''' If you use the parameter “SMEServer=yes”, you may delete the Include entries for the SME default directories in the job configuration: they will be backed up automatically.<br />
|-<br />
| dedup<br />
| yes ''or'' no<br />
| no<br />
| no<br />
| The purpose of deduplication is to remove duplicate files to save backup space. When set to 'yes', file deduplication is run after the synchronization has completed. It looks for files that have identical content, user, group and permissions and replaces duplicates with hardlinks. Deduplication scans the just-completed archive and the previous one, which usually is<br />
scheduled.0 and daily.0, or scheduled.0 and scheduled.1. Consider this scenario: a user has renamed directories or files. Rsync sees those as new ones and copies them. Deduplication finds the identical copies in the previous archive and replaces them with hardlinks. To use deduplication the Freedup program needs to be installed. Affa actually runs freedup -upg scheduled.0 <previous_archive>.<br />
|-<br />
| dedupKill<br />
| yes ''or'' no<br />
| no<br />
| no<br />
| When set to 'no' the job ignores affa --kill or affa --killall while deduplication is running. This is useful in the context of killAt, which is typically used to stop bandwidth utilisation.<br />
|-<br />
| sshPort<br />
| service port <br />
| no<br />
| 22<br />
| When sshd on the remote host listens on a non-standard port, set the port here.<br />
|-<br />
| ConnectionCheckTimeout<br />
| seconds<br />
| no<br />
| 120<br />
| Affa checks the ssh connection before the rsync process is started and exits with an error after the configured time if the host did not respond.<br />
|- <br />
| BandwidthLimit<br />
| integer>=0 kilobytes per second<br />
| no<br />
| 0<br />
| Limits the data transfer rate. A value of zero specifies no limit.<br />
|- <br />
| rsyncTimeout<br />
| seconds<br />
| no<br />
| 900<br />
| Rsync exits after the configured time if no data was transferred. This avoids infinitely hanging in case of a network error.<br />
|- <br />
| rsyncCompress<br />
| yes ''or'' no <br />
| no<br />
| yes<br />
| Compress the transferred data. May be useful with slow internet connections. Increases CPU load on remote and backup host.<br />
|- <br />
| rsync--inplace<br />
| yes ''or'' no<br />
| no<br />
| yes <br />
| Set to no if the rsync version on the remote host does not support this option.<br />
|-<br />
| rsync--modify-window<br />
| integer >= 0 <br />
| no<br />
| 0<br />
| When comparing two timestamps, rsync treats the timestamps as being equal if they differ by no more than the modify-window value. This is normally 0 for an exact match. A value >= 0 is useful if you can't get the clocks of the remote host and the Affa server in sync.<br />
|-<br />
| rsyncOptions<br />
| string<br />
| no<br />
| <none><br />
| Additional option string to be passed to rsync<br />
|-<br />
| localNice <br />
| -19...+19<br />
| no<br />
| 0<br />
| run rsync local process niced. <br />
|-<br />
| remoteNice<br />
| -19...+19<br />
| no<br />
| 0<br />
| run rsync process on source niced. <br />
|-<br />
| killAt<br />
| HHMM<br />
| no<br />
| <none><br />
| The time at which a job will be killed if it was still running. You can use it for example to kill jobs that are running over the internet early in the morning so that your users have the full bandwidth available during office hours. <br />
|-<br />
| resumeKilledAt<br />
| HHMM<br />
| no<br />
| <none><br />
| The time at which a killed job will be resumed. This allows you to start earlier in the evening than the scheduled time.<br />
|-<br />
| chattyOnSuccess<br />
| integer >= 0<br />
| no<br />
| 0 <br />
| When set to a value>0, Affa sends a message on a successfully completed job run and decrements the chattyOnSuccess value. When the value has reached zero, Affa falls back to the default and only sends messages on errors.<br />
|-<br />
| AutomountDevice<br>AutomountPoint<br />
| full path<br />
| no<br />
| <none><br />
| Device and mountpoint of backup device (e.g. USB disk). Device is automounted before a job starts and unmounted after job completion. With both properties empty no automount is done.<br />
|-<br />
| AutomountOptions<br />
| string<br />
| no<br />
| <none><br />
| An option string passed to the mount command.<br />
|-<br />
| AutoUnmount <br />
| yes ''or'' no<br />
| no<br />
| yes<br />
| When set to 'no' the automounted device stays mounted after the Affa run.<br />
|-<br />
| Debug <br />
| yes ''or'' no<br />
| no<br />
| no<br />
| Set to yes to increase log verbosity.<br />
|-<br />
| remoteRsyncBinary <br />
| full path<br />
| no<br />
| /usr/bin/rsync<br />
| If the rsync program on the remote server is located in non-standard location, set it here<br />
|-<br />
| remoteNiceBinary <br />
| full path<br />
| no<br />
| /bin/nice<br />
| If the nice program on the remote server is located in non-standard location, set it here.<br />
|-<br />
| localRsyncBinary <br />
| full path<br />
| no<br />
| /usr/bin/rsync<br />
| If the local rsync program is located in non-standard location, set it here<br />
|-<br />
| localNiceBinary <br />
| full path<br />
| no<br />
| /bin/nice<br />
| If the nice local program is located in non-standard location, set it here.<br />
|-<br />
| RemoteAuthorizedKeysFile <br />
| path relative to remote user's home or full path<br />
| no<br />
| .ssh/authorized_keys2<br />
| If the remote host stores the authorized keys file in a non-standard location, set it here.<br />
|-<br />
| rsyncdMode<br />
| yes ''or'' no<br />
| no<br />
| no<br />
| Set to yes to connect to the rsync daemon on the remote host (instead of running rsync over ssh).<br />
|-<br />
| rsyncdModule<br />
| string<br />
| no<br />
| AFFA<br />
| The rsyncd module name (only applicable with rsyncdMode=yes).<br />
|-<br />
| rsyncdUser<br />
| string<br />
| no<br />
| affa<br />
| The username for authentication to the rsync daemon (only applicable with rsyncdMode=yes).<br />
|-<br />
| rsyncdPassword<br />
| string<br />
| no<br />
| <none><br />
| The password for authentication to the rsync daemon (only applicable with rsyncdMode=yes).<br />
<br />
|-<br />
| globalStatus<br />
| enabled ''or'' disabled ''or'' jobs<br />
| no<br />
| jobs<br />
| Set to enabled or disabled to override the status settings in all job sections. When set to jobs, the status settings in the job sections are effective.<br>'''Note:''' This property is allowed only in the [GlobalAffaConfig] section. <br />
|-<br />
| sendStatus<br />
| daily ''or'' weekly ''or'' monthly ''or'' never<br />
| no<br />
| weekly<br />
| Defines how frequently the status will be sent.<br>'''Note:''' This property is allowed only in the [GlobalAffaConfig] section. <br />
|-<br />
| SMEServer<br />
| yes ''or'' no<br />
| no<br />
| no<br />
| When set to yes, the default e-smith directories are automatically included and the property RPMCheck=yes can be used.<br>'''Note:''' this property is not implemented in Affa 3 for CentOS. It has been specifically added to the fork for SME.<br />
|-<br />
| RPMCheck<br />
| yes ''or'' no<br />
| no<br />
| no<br />
| Only applicable to jobs that back up an SME server. Compares the package installation of the source host with this Affa backup host and sends a message with a diff list if they are not in sync. This check is useful if you want the option to rise the backup server to a production server from a backup.<br>'''Note:''' this property is not implemented in Affa 3 for CentOS. It has been specifically added to the fork for SME.<br />
|-<br />
| Watchdog<br />
| yes ''or'' no<br />
| no<br />
| yes<br />
| Only applicable to jobs that back up an SME server. When a job is started, Affa installs a watchdog script on the source in /etc/cron.d/, which sends a warning message if the next scheduled job (taken from the TimeSchedule property + 10 minutes) did not run. This guarantees that you will be notified even in case of an Affa server outage. The watchdog script sends a daily reminder message if the error continues. The next job run replaces the watchdog script with a new trigger time.<br>'''Note:''' this property is not implemented in Affa 3 for CentOS. It has been specifically added to the fork for SME.<br />
|}<br />
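As the TimeSchedule note in the table above warns, badly formatted time values cause obscure Perl errors. A small shell helper (a sketch, not part of Affa) can pre-check HHMM values before you run ''affa --configcheck'' :<br />

```shell
# Sketch: check that a value matches the strict HHMM format required by
# TimeSchedule, killAt and resumeKilledAt (exactly 4 digits, 0000-2359).
# valid_hhmm is a hypothetical helper, not an Affa command.
valid_hhmm() {
  case "$1" in
    [01][0-9][0-5][0-9]|2[0-3][0-5][0-9]) return 0 ;;
  esac
  return 1
}

valid_hhmm 0630 && echo "0630 ok"        # prints "0630 ok"
valid_hhmm 6:30 || echo "6:30 rejected"  # prints "6:30 rejected"
```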
<br />
==== Default configuration properties ====<br />
For all 'name=value' properties, defaults can be set in the [GlobalAffaConfig] section; they are used in every job configuration where the corresponding property is omitted. This is useful when you set up many similar jobs. <br />
<br />
Example: You want to set the property 'localNice' to 19 for all jobs. Then add the following section in one of your configuration files:<br />
[GlobalAffaConfig]<br />
localNice=19<br />
Consequently you don't need to set this property for your individual jobs. Properties set in the job record override the defaults.<br />
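To see which value actually applies to a job, you can mimic this precedence (job section overrides [GlobalAffaConfig]) with a small awk sketch; ''effective_prop'' is a hypothetical helper, not an Affa command:<br />

```shell
# Sketch: print the effective value of a property for a job, assuming a
# value in the job's own section overrides one in [GlobalAffaConfig].
# Usage: effective_prop /etc/affa/backup-smeserver.conf backup-smeserver localNice
effective_prop() {
  awk -v job="$2" -v prop="$3" '
    /^\[/ { sect = substr($0, 2, index($0, "]") - 2) }   # track current [section]
    index($0, prop "=") == 1 {
      val = substr($0, length(prop) + 2)                 # text after "prop="
      if (sect == job) jval = val
      else if (sect == "GlobalAffaConfig") gval = val
    }
    END { print (jval != "" ? jval : gval) }             # job value wins over default
  ' "$1"
}
```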
<br />
==== Properties for global Settings ====<br />
The following special properties are only applicable to the [GlobalAffaConfig] section:<br />
<ul><br />
<li>sendStatus</li><br />
<li>globalStatus</li><br />
</ul><br />
<br />
All jobs can be disabled for execution through cronjobs by setting 'globalStatus' to 'disabled'.<br />
[GlobalAffaConfig]<br />
globalStatus=disabled<br />
<br />
To re-enable the cronjob runs, either delete the above configuration line, or set:<br />
globalStatus=jobs<br />
which is the default value.<br />
<br />
You can also set the value to<br />
globalStatus=enabled<br />
which enforces job execution through cronjobs and overrides the status values specified in the individual job sections.<br />
<br />
=== Usage and command line options ===<br />
{{Note box|Options can be abbreviated to uniqueness, e.g. --mak is equal to --make-cronjobs}}<br />
'''affa --run JOB'''<br />
Starts a job run. Usually done by the cronjob. <br />
<br />
<br />
'''affa --configcheck'''<br />
Checks the syntax and values in all configuration files found in /etc/affa/. Run this command after modifying the configuration. Lower/Upper case errors in property names are corrected automatically.<br />
<br />
'''affa --make-cronjobs'''<br />
Configures the cronjobs. Run this command to make changes of time related properties effective i.e. TimeSchedule, killAt and resumekKilledAt properties. Every 15 minutes Affa checks whether a configuration file is newer than the cronjob file and runs the make-cronjobs command if this is the case.<br />
<br />
'''affa --send-key [JOB JOB ...]'''<br />
This first generates the DSA key pair on the Affa server, if not already done. Then the public key is sent to the host 'remoteHostName' as defined in each job JOB's section, and the job-specific ssh known-hosts entry is created.<br />
{{Note box|When initially doing this step, you will need to temporarily enable "Allow secure shell access using standard passwords" on the source server.}}<br />
<br />
'''affa --check-connections [JOB JOB ...]'''<br />
Checks the ssh login and if applicable rsyncd auth for all jobs given as arguments. Without any arguments all jobs are checked.<br />
<br />
'''affa --full-restore [--preserve-newer=no] [--delete=yes] JOB [ARCHIVE]'''<br />
Does a full restore on the remote server of all backed-up files and directories from the backup ARCHIVE. If ARCHIVE is not given, the archive 'scheduled.0' is used as the default. The full restore reconstructs the server as it was at the time of the backup. After the restore the source host reboots.<br />
<br />
With option --preserve-newer=yes files on the remote server with modification time newer than on the backup are not overwritten.<br />
<br />
With option --delete=yes all files on the remote server, which are not in the backup, are deleted.<br />
<br />
If the parameter “SMEServer=yes” is set, the signal-event pre-restore and signal-event post-upgrade will be used automatically.<br />
<br />
'''affa --list-archives JOB'''<br />
Displays a table of all present archives of job JOB with date, number of files, size and bytes received. While column buTime shows the actual rsync time, the column ddTime shows the length of the subsequent deduplication run. ddYld is the gained deduplication yield in bytes.<br />
<br />
<small><pre><br />
Affa version 3.1.0-0 on affa-2.mydomain.com<br />
+------------------------------------------------------------------------------+<br />
| Job: fshare-ak |<br />
| Description: Fileserver AK |<br />
| Directory: /var/affa/fshare-ak/ |<br />
| Hostname: 10.204.104.4 |<br />
| Email: sysadmin@mydomain.com |<br />
+-----+----------------------+--------+--------+-------+-------+-------+-------+<br />
| Run | Completion date | buTime | ddTime | ddYld | Files | Size | Recvd |<br />
+-----+----------------------+--------+--------+-------+-------+-------+-------+<br />
| Y 0 | Wed 2010-06-23 20:26 | 11m53s | - | - | 412k | 143G | 470M |<br />
+-----+----------------------+--------+--------+-------+-------+-------+-------+<br />
| M11 | Sun 2010-08-29 20:22 | 7m50s | - | - | 417k | 153G | 14M |<br />
| M10 | Sun 2010-09-26 20:23 | 8m57s | - | - | 430k | 156G | 14M |<br />
| M 9 | Sun 2010-10-31 20:25 | 10m05s | - | - | 448k | 161G | 15M |<br />
| M 8 | Sun 2010-11-28 20:31 | 16m42s | - | - | 463k | 167G | 17M |<br />
| M 7 | Sun 2011-01-02 20:48 | 33m07s | - | - | 486k | 173G | 1.1G |<br />
| M 6 | Sun 2011-01-30 20:33 | 18m54s | - | - | 493k | 176G | 24M |<br />
| M 5 | Sun 2011-02-27 20:28 | 13m31s | - | - | 490k | 176G | 19M |<br />
| M 4 | Sun 2011-03-27 20:28 | 13m08s | - | - | 491k | 175G | 17M |<br />
| M 3 | Sun 2011-05-01 20:30 | 15m41s | - | - | 493k | 179G | 18M |<br />
| M 2 | Sun 2011-05-29 20:28 | 13m40s | - | - | 494k | 182G | 31M |<br />
| M 1 | Sun 2011-06-19 20:26 | 11m20s | - | - | 493k | 183G | 17M |<br />
| M 0 | Sun 2011-07-03 20:28 | 13m46s | - | - | 496k | 183G | 18M |<br />
+-----+----------------------+--------+--------+-------+-------+-------+-------+<br />
| W 3 | Sun 2011-07-10 20:28 | 13m38s | - | - | 479k | 180G | 16M |<br />
| W 2 | Mon 2011-07-11 20:35 | 20m27s | - | - | 479k | 180G | 301M |<br />
| W 1 | Fri 2011-07-15 20:30 | 15m53s | - | - | 480k | 180G | 62M |<br />
| W 0 | Sun 2011-07-24 19:39 | 9m25s | - | - | 482k | 181G | 16M |<br />
+-----+----------------------+--------+--------+-------+-------+-------+-------+<br />
| D 6 | Thu 2011-07-28 19:42 | 12m22s | - | - | 483k | 182G | 176M |<br />
| D 5 | Fri 2011-07-29 23:29 | 11m10s | 5h33m | 45G | 483k | 182G | 16M |<br />
| D 4 | Sat 2011-07-30 19:53 | 23m26s | 2h30m | 8.3G | 483k | 182G | 17M |<br />
| D 3 | Sun 2011-07-31 20:07 | 37m31s | 4m47s | 8.3G | 483k | 182G | 17M |<br />
| D 2 | Mon 2011-08-01 20:44 | 1h14m | 7h50m | 8.5G | 484k | 182G | 630M |<br />
| D 1 | Tue 2011-08-02 20:02 | 32m28s | 12h20m | 8.3G | 484k | 182G | 74M |<br />
| D 0 | Wed 2011-08-03 19:58 | 28m46s | 11h01m | 8.5G | 484k | 182G | 214M |<br />
+-----+----------------------+--------+--------+-------+-------+-------+-------+<br />
| S 0 | Thu 2011-08-04 20:00 | 30m28s | 11h52m | 8.5G | 484k | 182G | 203M |<br />
+-----+----------------------+--------+--------+-------+-------+-------+-------+<br />
</pre></small><br />
<br />
'''affa --list-archives [--csv] JOB'''<br />
With --csv, the output is in a machine-readable semicolon-separated format.<br />
<small><pre><br />
Archive:Count;Date;Files;Size;RootDirFilesystemAvail;RootDirFilesystemUsed;valid;TotalBytesReceived;ExecutionTime;DedupTotalFiles;DedupReplacedFiles;DedupSavedBytes;DedupExectime;DedupDate<br />
monthly;00000;201509270631;393237;37252509103;166889260;47876936;yes;1327370;1160638358;99;;;;;<br />
weekly;00003;201510040531;390273;37042612135;164566692;50199504;yes;1117726;45430532;59;;;;;<br />
weekly;00002;201510110531;395553;37299589800;162612204;52153992;yes;1117092;92369237;64;;;;;<br />
weekly;00001;201510180531;403831;37707599172;161855844;52910352;yes;1317083;148317764;64;;;;;<br />
weekly;00000;201510250531;406509;38146200127;161041156;53725040;yes;1104759;64083948;63;;;;;<br />
daily;00006;201510310531;415663;39248862982;162979176;51787020;yes;1958110;198060283;66;;;;;<br />
daily;00005;201511010531;411014;38563389171;163061136;51705060;yes;1156506;48903675;75;;;;;<br />
daily;00004;201511020531;408910;38612857040;162359124;52407072;yes;1007434;210812387;75;;;;;<br />
daily;00003;201511030531;410615;37815861577;162026060;52740136;yes;1524069;162697515;67;;;;;<br />
daily;00002;201511040531;414784;38131734213;161990024;52776172;yes;1571657;134250735;63;;;;;<br />
daily;00001;201511050531;420780;38433895988;161906580;52859616;yes;2021507;149388808;103;;;;;<br />
daily;00000;201511060531;425920;38677614350;161778812;52987384;yes;1987971;166510621;71;;;;;<br />
scheduled;00000;201511070531;425663;38815712018;161612600;53153596;yes;2003540;139603231;68;;;;;<br />
</pre></small><br />
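Since the fields are semicolon-separated, this output is easy to post-process with standard tools. The following sketch sums the Size field per archive type; ''sum_by_type'' is a hypothetical helper, and the field positions assume the sample rows above (archive type in column 1, size in bytes in column 5):<br />

```shell
# Sketch: sum the Size field (5th column) per archive type from the
# --csv output of --list-archives. Pipe the real command into it, e.g.:
#   affa --list-archives --csv backup-smeserver | sum_by_type
sum_by_type() {
  awk -F';' '
    NR > 1 && NF >= 5 { size[$1] += $5 }   # skip the header line, accumulate bytes
    END {
      for (t in size)
        printf "%s: %.1f GiB\n", t, size[t] / (1024 * 1024 * 1024)
    }
  '
}
```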
<br />
'''affa --status [--csv]'''<br />
Displays a table of all configured jobs with enable status, time of last and next run, size of the most recent archive, execution time of the last run and the number of scheduled (S), daily (D), weekly (W), monthly (M) and yearly (Y) archives. If the 'Last' column shows 'failed', the job did not run in the last 24 h. For disabled jobs 'Last' always shows 'failed' after 24 h. To see the date and time of the last run of those jobs use the --list-archives option. Column 'Next' shows the time when the next run will be started, if 'Enabled' is 'yes'. Column 'ddYld' shows the deduplication yield in bytes.<br />
<br />
<small><pre><br />
Affa version 3.1.0-0 on affa-2.mydomain.com<br />
+--------------+-----+-------+--------+-------+-------+-------+----------------+<br />
| Job | ENA | Last | Time | Next | Size | ddYld | N of S,D,W,M,Y |<br />
+--------------+-----+-------+--------+-------+-------+-------+----------------+<br />
| ads-ak | yes | 20:20 | 0m29s | 20:20 | 28M | - | 1, 7, 2, 0, 0 |<br />
| ak-user-ma.. | yes | 22:31 | 1m51s | 22:30 | 170M | - | 3, 7, 4, 6, 0 |<br />
| azubi-1 | yes | 03:39 | 3h24m | 00:15 | 44G | 1.2G | 1, 7, 4,10, 1 |<br />
| dcpant | yes | 03:06 | 1m03s | 03:05 | 1.3G | - | 1, 7, 4,10, 1 |<br />
| eshare | yes | 21:06 | 21m01s | 20:45 | 24G | 178M | 1, 7, 4,10, 1 |<br />
| etherpad | yes | 20:40 | 0m13s | 20:40 | 3.1M | - | 1, 7, 4, 2, 0 |<br />
| fazubi | yes | 08:16 | 11h16m | 21:00 | 132G | 3.3G | 1, 7, 4,10, 1 |<br />
| fschare-rh | yes | de-duplicating (pid 9719) | 1, 7, 4,10, 1 |<br />
| fsh-02 | yes | 07:53 | 12h23m | 19:30 | 182G | 8.5G | 1, 7, 4,10, 1 |<br />
| fshare-ak2 | yes | 00:30 | 0m26s | 00:30 | 415M | 544k | 1, 7, 4,10, 1 |<br />
| helpdesk | yes | 21:27 | 2m16s | 21:25 | 895M | 138k | 1, 7, 4,10, 0 |<br />
| it-share | yes | running rsync (pid 9744) | 1, 7, 1, 0, 0 |<br />
| lightroom | yes | waiting (pid 9528) | 1, 7, 4, 2, 0 |<br />
| localhost | yes | 02:15 | 0m08s | 02:15 | 395k | - | 1, 7, 2, 0, 0 |<br />
| mediawiki | yes | 16:10 | 0m06s | 19:40 | 1.1G | 6.7M | 3, 7, 4,10, 1 |<br />
| mshare | yes | 00:48 | 1h33m | 23:15 | 18G | 2.4G | 1, 7, 4,10, 1 |<br />
| wshare | yes | 00:34 | 1h49m | 22:45 | 23G | 484M | 1, 7, 4,10, 1 |<br />
+--------------+-----+-------+--------+-------+-------+-------+----------------+<br />
2 disabled jobs not listed. Use --all to display.<br />
</pre></small><br />
With --csv, the output is printed in a machine-readable semicolon-separated format.<br />
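The machine-readable output is easy to post-process with standard tools. A minimal sketch, assuming a hypothetical field layout (job name, enabled flag, last-run status, run time); check the actual column order of your Affa version before relying on field positions:

```shell
# Sample of semicolon-separated status lines (field layout is an assumption);
# on a real Affa server you would pipe 'affa --status --csv' instead.
sample='jobA;yes;failed;0m29s
jobB;yes;20:20;1m51s'

# Print the names of jobs whose last run failed.
printf '%s\n' "$sample" | awk -F';' '$3 == "failed" {print $1}'
```

This prints <code>jobA</code> for the sample data above; the same pattern works for the --list-archives and --disk-usage CSV output.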
<br />
'''affa --show-config-pathes [--csv] [JOB JOB ...]'''<br />
Prints the full paths of the files in which the sections of the given jobs are defined.<br />
<br />
'''affa --show-default-config'''<br />
Prints a list of all allowed property names with their default values. These values are used when a property is omitted in the job sections.<br />
<br />
'''affa --show-schedule [--all]'''<br />
Prints a 'graphical' timetable for all enabled jobs. The resolution is 30 minutes. An 'S' character marks the scheduled start times. The duration of the job runs are marked with '=' characters.<br />
<small><pre><br />
Affa version 3.1.0-0 on affa-2.mydomain.com<br />
TIME 12:00 16:00 20:00 0:00 4:00 8:00<br />
fsh-02 -------- -------S ~~~~~~~~ ~~~~~~~~ ~~~~~~~~ --------<br />
ads-ak -------- -------- S------- -------- -------- --------<br />
etherpad -------- -------- -S------ -------- -------- --------<br />
eshare -------- -------- -S~----- -------- -------- --------<br />
fazubi -------- -------- --S~~~~~ ~~~~~~~~ ~~~~~~~~ ~-------<br />
helpdesk -------- -------- --S----- -------- -------- --------<br />
it-share -------- -------- ---S=~~~ ~~~~~~~~ ~~~~~~~- -------- busy<br />
wshare -------- -------- -----S~~ ~~------ -------- --------<br />
ak-user-management -------- -------- -----S-- -------- -------- --------<br />
fschare-rh -------- -------- -----S=- -------- --~----- -------- busy<br />
mshare -------- -------- ------S= ~~------ -------- --------<br />
azubi-1 -------- -------- -------- S=~~~~~~ -------- --------<br />
fshare-ak2 -------- -------- -------- -S------ -------- --------<br />
mediawiki -------- -------- -------- ---S---- -------- ----S---<br />
localhost -------- -------- -------- ----S--- -------- --------<br />
dcpant -------- -------- -------- ------S- -------- --------<br />
lightroom -------- -------- -------- -------- -------- S------- busy<br />
Symbols: S=scheduled K=kill R=resume '='=rsync '~'=dedup<br />
2 disabled jobs not listed. Use --all to display.<br />
</pre></small><br />
<br />
'''affa --log-tail [JOB]'''<br />
Displays the tail of the logfile of job JOB with live update. This command is equivalent to tail -n 50 -f /path/to/logfile. Without the JOB argument, the global logfile is shown.<br />
<br />
'''affa --send-status'''<br />
Sends the status table, the disk-usage and the archive list of all jobs to the email addresses configured in the [GlobalAffaConfig] section.<br />
<br />
'''affa --disk-usage'''<br />
Shows the current disk usage of all root dir filesystems<br />
<small><pre><br />
Affa version 3.1.0-0 on affa-2.mydomain.de<br />
+------+--------+--------+----------------------------------------------------+<br />
| Use% | Used | Avail | Root Dir |<br />
+------+--------+--------+----------------------------------------------------+<br />
| 71% | 938GB | 365GB | /var/affa |<br />
| 20% | 194GB | 759GB | /mnt/affadev |<br />
+------+--------+--------+----------------------------------------------------+<br />
</pre></small><br />
With --csv, the output is printed in a machine-readable semicolon-separated format.<br />
<br />
'''affa --cleanup JOB'''<br />
After you have lowered a keep value, e.g. scheduledKeep, archives with higher indices will no longer be shifted and would otherwise exist forever. This option finds these archives and deletes them after confirmation.<br />
<br />
'''affa --rename-job JOB NEWNAME'''<br />
Renames the job JOB to NEWNAME including the section name and archive directories.<br />
<br />
'''affa --move-archive JOB NEWROOTDIR'''<br />
Moves the archive directory of job JOB to the root dir NEWROOTDIR and adjusts the value of the RootDir property. NEWROOTDIR must be a full path starting with a slash. As moving across filesystems (e.g. from an external USB drive to the local disk) is not possible, Affa uses a copy command in this case and deletes the source directory afterwards. Depending on the archive size, copying across filesystems can take a long time.<br />
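For example, to move the archive of a job to a mounted USB drive (job name and mount point are illustrative):

```
affa --move-archive prodbox /mnt/affadevice
```
<br />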
<br />
'''affa --delete-job [--revoke-key] JOB'''<br />
Irreversibly deletes all archives of job JOB and sets the status property to disabled. With the --revoke-key option, the public key on the remote server is deleted as well.<br />
<br />
'''affa --revoke-key JOB'''<br />
Deletes the public dsa key on the remote server.<br />
<br />
'''affa --kill JOB'''<br />
Terminates the running job JOB and all its child processes (rsync processes).<br />
<br />
'''affa --killall'''<br />
Terminates all running jobs.<br />
<br />
'''affa --mailtest JOB'''<br />
Sends a test email to the email addresses configured in the JOB section. Use this to verify that your mail system is working.<br />
{{Note box|By default Affa only sends messages on errors, never on success (see property chattyOnSuccess).}}<br />
<br />
'''affa --nrpe [JOB JOB ...]'''<br />
Checks for jobs that have not run within the last NRPRtrigger hours and reports them as failed. A Nagios/ICINGA NRPE-compliant message is printed and an exit status is returned.<br />
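As a sketch, such a check could be registered on the Affa server in the NRPE configuration so that Nagios/ICINGA can poll it remotely (the file path and command name below are assumptions, not from the Affa documentation):

```
# hypothetical entry in /etc/nagios/nrpe.cfg on the Affa server
command[check_affa]=/usr/bin/affa --nrpe
```
<br />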
<br />
'''affa --version'''<br />
Displays the Affa version number and checks Samba and Freedup installation.<br />
<br />
'''affa --warranty'''<br />
Displays the disclaimer of liability.<br />
<br />
'''affa --license'''<br />
Displays the license Affa is released under.<br />
<br />
'''affa --help'''<br />
Displays a short help.<br />
<br />
'''affa --debug'''<br />
Enables verbose logging. Overrides job and global configurations.<br />
<br />
==== SME Specific usage and command line options ====<br />
<br />
These commands are implemented in a fork of Affa 3 project for use with SME. They are not part of the Affa 3 for CentOS project.<br />
<br />
'''affa --rise [--all] JOB [ARCHIVE]'''<br />
Runs a full restore on the Affa server <b>(!!!)</b> of all standard files and directories from the backup ARCHIVE of job JOB. In other words: after completion, the Affa box reboots as a clone of the source server. Ensure that the source server has been powered off before you reboot the Affa box; otherwise the network interface will not come up. This is particularly important when you run --rise remotely. The --rise feature only works with SME servers and should only be used on dedicated backup servers.<br />
<br />
With option --all, all files and directories of the archive as defined by the include[] properties are restored. Files or directories with the same name on the Affa server will be overwritten and cannot be restored by an undorise. This should not be an issue on a dedicated Affa server which does not hold any other data. After a possible undorise, the additionally restored data must be removed manually.<br />
<br />
Please note that the rise process backs up the Affa server itself before doing the restore from the archive. This backup is used by a possible undorise run to restore the Affa server. Only the standard files and directories are backed up. Data in non-standard locations (like /opt) is untouched and will still exist after the rise run '''unless it gets overwritten by data from the backup''' (e.g. if the backup contains /opt too).<br />
<br />
{{Note box|The above command is not implemented in Affa 3 for CentOS. It has been specifically added to the fork for SME.}}<br />
<br />
'''affa --undo-rise'''<br />
This feature reverts a risen Affa box to a backup server. After a reboot, all configured jobs '''based on standard files and directories''' will work again.<br />
<br />
{{Note box|The above command is not implemented in Affa 3 for CentOS. It has been specifically added to the fork for SME.}}<br />
<br />
=== Example setups ===<br />
==== Dedicated backup server ====<br />
* Setup a dedicated server and install Affa 3 for SME.<br />
* Setup a job for every server you want to backup.<br />
* Send the public keys to every server.<br />
affa --send-key JOBNAME<br />
* Check whether password-less logins are working.<br />
affa --check-connections<br />
* Check whether the scheduled jobs are evenly distributed over the day.<br />
affa --show-schedule<br />
* Create the cron jobs.<br />
affa --make-cronjobs<br />
* Check the status after 24 hours.<br />
affa --status<br />
<br />
==== Backup of single ibays ====<br />
Suppose you want to back up the ibays 'staff1' and 'staff2' on your production server with WAN IP 82.123.1.1 to an Affa server at a different location, over the internet, every night at 2:30am.<br />
* Log into the Affa server.<br />
* Create a dedicated job file ''/etc/affa/ibay-staff.conf'' . Edit it and set<br />
[ibay-staff]<br />
Description=Backup of ibay-staff on 82.123.1.1<br />
remoteHostName=82.123.1.1<br />
TimeSchedule=0230<br />
SMEServer=no<br />
Include=/home/e-smith/files/ibays/staff1<br />
Include=/home/e-smith/files/ibays/staff2<br />
* Save the configuration<br />
* send the public key to the production server<br />
affa --send-key ibay-staff<br />
* check next morning<br />
affa --list-archives ibay-staff<br />
affa --status<br />
ls /var/affa/ibay-staff<br />
<br />
==== Two production servers backup each other ====<br />
You have two sites connected via a VPN and an SME Server running on each site. In that case you don't need a dedicated Affa backup server. Both production servers can additionally act as Affa backup servers, backing up the server of the other site. Simply install Affa and configure a job that backs up the other one. You can use all Affa features except the rise feature.<br />
<br />
When using the rise feature, the backup server becomes a clone of one of the backed-up systems. This is less useful in this scenario, as it would give you a running copy of the other site's server while this site's server is down.<br />
<br />
To get redundancy and a faster restore you can configure a local backup to an external USB or NAS device.<br />
<br />
Take care that you do not back up the archives back to the other site. Set the RootDir property to a path which is not included in the SME default backup list. Leaving the RootDir property at its default '/var/affa' guarantees this.<br />
<br />
==== Use Affa to backup to a NFS-mounted NAS or a local attached USB drive ====<br />
<br />
{{Note box|This chapter still needs to be checked whether it works with Affa 3 for SME.}}<br />
<br />
You want to back up your SME production server with hostname 'prodbox' and IP 10.200.48.1 to a mounted filesystem instead of setting up a dedicated Affa box.<br />
<br />
===== Setup NAS =====<br />
You have a [http://www.freenas.org FreeNAS] box with IP 10.200.48.2 up and running with NFS service enabled. The disk is mounted to /mnt/affashare. You have authorized the IP address of your prodbox server to access share /mnt/affashare.<br />
* log into the 'prodbox' and install the NFS packages<br />
/usr/bin/yum install --enablerepo=smecontribs smeserver-nfs<br />
* now enable and start the portmapper service<br />
config setprop portmap status enabled<br />
service portmap start<br />
<br />
* mount the NFS share<br />
mkdir -p /mnt/affadevice<br />
mount 10.200.48.2:/mnt/affashare /mnt/affadevice<br />
<br />
<br />
===== Alternatively setup a USB drive =====<br />
* log into the 'prodbox' <br />
* connect a USB hard disk to the USB bus. Now you must determine which device the kernel has assigned to the drive. View /var/log/messages and search for ''Initializing USB Mass Storage driver''. A few lines below you'll find the name of the device. In this example it is ''sdh''. Replace ''/dev/sdh'' with ''your device'' in the following instructions.<br />
* use the fdisk program to create a linux partition. '''Verify that this is really the attached USB drive before you continue!'''<br />
<br />
fdisk /dev/sdh<br />
You'll most likely find an existing vfat DOS partition, which you have to delete first. In the following we assume that you have created a single partition ''/dev/sdh1''.<br />
* now format the drive with an ext3 filesystem<br />
mkfs.ext3 /dev/sdh1<br />
* make the mount point<br />
mkdir -p /mnt/affadevice<br />
* add the following line to the /etc/fstab<br />
 /dev/sdh1 /mnt/affadevice ext3 defaults 0 0<br />
* mount the drive<br />
mount /mnt/affadevice<br />
* crosscheck your work using the df command<br />
df<br />
<br />
===== Copying an Affa USB hard drive archive to a new disk =====<br />
Affa makes extensive use of hard links to save disk space in its backups. If you want to copy such a disk archive to a new (bigger) disk, you need to ensure that the hard links are copied correctly, or the destination copy may become significantly bigger than the source archive.<br />
<br />
One way to copy across file systems (i.e. two different USB disks) and preserve the hard links is as follows:<br />
<br />
* mount both USB drives, but with different mount points, e.g. /media/backup1 and /media/backup2, and then:<br />
<br />
mkdir /media/backup2/archive <br />
cd /media/backup1/archive<br />
tar cpf - . | ( cd /media/backup2/archive && tar xpf - )<br />
<br />
where '''archive''' is the name of the AFFA job you want to move to the other disk.<br />
<br />
===== Setup Affa =====<br />
You want to run backups at 11:30, 15:30 and 19:30, and you want to keep the last 3 scheduled backups, 7 daily, 5 weekly, 12 monthly and 1 yearly backup.<br />
* log into the 'prodbox' and install the Affa packages as described above.<br />
* create/edit the conf file of the job: nano /etc/affa/prodbox.conf<br />
and set:<br />
[prodbox]<br />
remoteHostName=localhost<br />
TimeSchedule=1130<br />
TimeSchedule=1530<br />
TimeSchedule=1930<br />
scheduledKeep=3<br />
dailyKeep=7<br />
weeklyKeep=5<br />
monthlyKeep=12<br />
yearlyKeep=1<br />
RootDir=/mnt/affadevice<br />
Review the other properties and change them to your needs.<br />
* run the job manually<br />
affa --run prodbox<br />
<br />
===== Limitations =====<br />
With this kind of setup you cannot use the Affa rise feature, as it requires the backup archive to be located on the '''same filesystem''' as the server installation. The rise option uses hardlinks, which do not work across filesystems.<br />
<br />
===== Automount =====<br />
Keeping the backup archives in the same filesystem is always a risk. This risk can be minimized by using the automount feature: the external filesystem is then only mounted during a job run.<br />
<br />
In the NAS example set<br />
AutomountDevice=10.200.48.2:/mnt/affashare<br />
AutomountPoint=/mnt/affadevice<br />
and skip the step 2.<br />
<br />
In the USB drive example set<br />
AutomountDevice=/dev/sdc1<br />
AutomountPoint=/mnt/affadevice<br />
and skip the steps 5 to 8.<br />
<br />
The mount point will be automatically created if it does not exist. <br><br />
To access the archive directory, you need to mount it manually.<br />
<br />
=== Restore ===<br />
==== Restore single files or directories ====<br />
'''Example 1:''' It's Tuesday, January 6th 2009, when user 'briedlin' asks you to restore the messages of his mailbox 'orders' that he accidentally deleted on Monday.<br />
<br />
* First check which backup archives are available. The job name of this server backup is 'primmail'. To get a listing of all archives, run <br />
affa --list-archives primmail<br />
''(see the example listing in chapter [[Affa#Usage_and_command_line_options]]''<br />
* Choose the daily.0 archive, which was created Monday night. Now restore the mailbox 'orders' using the rsync command.<br />
* Now run the rsync command ''(note the trailing slash!)'' on the Affa backup server:<br />
export RDIR=/home/e-smith/files/users/briedlin/Maildir/.orders/ ''# this variable is used to shorten the next command line''<br />
rsync -av /var/affa/primmail/daily.0/$RDIR 10.204.48.1:$RDIR<br />
<br />
If the servers are configured to use a different ssh port eg 2222, then instead do:<br />
<br />
export RDIR=/home/e-smith/files/users/briedlin/Maildir/.orders/ ''# this variable is used to shorten the next command line''<br />
rsync -av -e 'ssh -p 2222' /var/affa/primmail/daily.0/$RDIR 10.204.48.1:$RDIR<br />
<br />
<br />
<br />
'''Example 2:''' A user has deleted the file orderform.pdf from ibay 'docs' on the server 10.204.48.1 and asks you to restore it.<br />
<br />
* You have searched and found the latest version of this file in weekly archive of job 'prodserv'.<br />
* To copy it back to the server 10.204.48.1 run on the Affa server<br />
export RFILE=/home/e-smith/files/ibays/docs/files/orderform.pdf ''# this variable is used to shorten the next command line''<br />
rsync -av /var/affa/prodserv/weekly.1/$RFILE 10.204.48.1:$RFILE<br />
<br />
* If the servers are configured to use a different ssh port eg 2222, then instead do:<br />
export RFILE=/home/e-smith/files/ibays/docs/files/orderform.pdf ''# this variable is used to shorten the next command line''<br />
rsync -av -e 'ssh -p 2222' /var/affa/prodserv/weekly.1/$RFILE 10.204.48.1:$RFILE<br />
<br />
<br />
<br />
'''Example 3:''' Restoring a file with special characters in the path<br />
<br />
Do not use shell variables to shorten the command; it would complicate things more than it would help.<br />
Quote the source and destination paths. In the destination path, escape blank characters with a double backslash and brackets with a single backslash. On the Affa backup server do:<br />
rsync -av "/var/affa/fileshare/daily.4/home/e-smith/files/ibays/mechfiles/files/Valve Control (Design Gr)/VALVE LIST FOR ISSUED.xls" "192.168.1.7:/home/e-smith/files/ibays/mechfiles/files/Valve\\ Control\\ \(Design\\ Gr\)/VALVE\\ LIST\\ FOR\\ ISSUED.xls"<br />
<br />
If the servers are configured to use a different ssh port eg 2222, then instead do:<br />
<br />
rsync -av -e 'ssh -p 2222' "/var/affa/fileshare/daily.4/home/e-smith/files/ibays/mechfiles/files/Valve Control (Design Gr)/VALVE LIST FOR ISSUED.xls" "192.168.1.7:/home/e-smith/files/ibays/mechfiles/files/Valve\\ Control\\ \(Design\\ Gr\)/VALVE\\ LIST\\ FOR\\ ISSUED.xls"<br />
<br />
==== Full restore ====<br />
Generally: <br />
'''affa --full-restore [optional settings] JOB [ARCHIVE]'''<br />
This rsyncs the data (files and directories) from the backup ARCHIVE back to the 'remoteHostName' defined in the configuration of the job JOB. <br />
If ARCHIVE is not given, the archive 'scheduled.0' is used as the default.<br />
The --full-restore reconstructs the server as it was at the time of the backup and takes the following IMPORTANT optional settings into consideration:<br />
<br />
With option [--preserve-newer=yes] files on the remote server with modification time newer than on the backup are not overwritten.<br />
<br />
With option [--delete=yes] all files on the remote server, which are not in the backup, are deleted.<br />
<br />
If the parameter “SMEServer=yes” is set, signal-event pre-restore and signal-event post-upgrade will be run automatically.<br />
<br />
After the restore is done, the restored server reboots automatically. <br />
<br />
Example:<br />
You have backed up your production server 'prodsrv' as job 'prodbox'. To restore only lost data from the latest backup, run<br />
affa --full-restore prodbox<br />
<br />
To restore exactly from the older archive daily.3, run <br />
 affa --full-restore --preserve-newer=no --delete=yes prodbox daily.3<br />
<br />
{{Warning box|A full restore with --preserve-newer=no and --delete=yes reconstructs the server exactly as it was at the time of the backup. This means that all files created and all server configuration changes made after the backup will be lost!}}<br />
<br />
==== Moving a SME server installation to new hardware using the Affa rise feature or the backup-restore functions ====<br />
Please see this Howto: [[Moving SME to new Hardware]]<br />
<br />
==== Restore from USB drive on new server ====<br />
<br />
This tip comes from http://forums.contribs.org/index.php?topic=42412.0<br />
<br />
Q) I have complete backups made with Affa stored on a USB hard drive connected to our Affa backup server. I need to restore an earlier monthly backup of our job "mailbackup" to a test server rather than back to the original system. The instructions explain how to rise the backup server to its current backup, but I want to restore a point from about a month ago, before some strange things happened. And I want to do it on a machine that is neither our backup server nor our production server. I got lost in the options trying to figure out how. My goal is to do some testing.<br />
<br />
A) On your test server, set up a job mailbackup identical to that on your backup server, but set the property RootDir to /var/affa and the property status to disabled. Connect the USB drive and copy the archive of the job mailbackup to /var/affa. Then run affa --rise mailbackup ARCHIVE<br />
<br />
Remember that Affa places job setup scripts into the archive directories. In this case it is mailbackup-setup.pl. Copy it to your test server, change the RootDir and status properties and execute it.<br />
<br />
=== FAQ ===<br />
'''What files and directories are included by default?'''<br />
<br />
With SMEServer=no nothing at all.<br />
<br />
With SMEServer=yes, the SME default backup list is the output of the following command:<br />
perl -e 'use esmith::Backup;$b=new esmith::Backup;print join("\n",$b->restore_list)."\n"' <br />
and as additional folder:<br />
/etc/affa<br />
<br />
{{Note box|The following example needs to be reworked for Affa 3.}}<br />
'''Can I exclusively backup image files from a specific directory?'''<br />
Yes. Assuming you want to back up all gif and jpg files from the directory /home/e-smith/files/ibays/pictures/files, use this configuration:<br />
db affa setprop JOBNAME SMEServer no # do not include SME Server default directories and files<br />
db affa setprop JOBNAME Include[0] '/home/e-smith/files/ibays/pictures/files' # start searching here<br />
db affa setprop JOBNAME Include[1] '*/' # search the tree recursively<br />
db affa setprop JOBNAME Include[2] '*.gif' # copy files that match these patterns<br />
db affa setprop JOBNAME Include[3] '*.jpg' <br />
db affa setprop JOBNAME Exclude[0] '*' # exclude all others<br />
db affa setprop JOBNAME rsyncOptions '-m' # do not create empty folders<br />
<br />
<br />
'''How can I move a backup archive from my production server in a remote branch office to the Affa server using a USB disk? I want to avoid downloading the first full backup over the internet.'''<br />
Install Affa on your production server, then connect and mount the USB disk. Set up a job to write the archive to the USB disk. After the job run, connect the USB disk to your Affa server and set up the job there in the same way. Use the setup script from the archive directory; you only need to change the remoteHostName property. Now use the --move-archive option to move the archive to the local disk. If not already done, send the public key to your production server. You're done.<br />
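The steps above can be sketched as a command sequence (the job name and mount point are illustrative):

```
# on the production server, with the USB disk mounted and the job configured:
affa --run branchjob                      # first full backup is written to the USB disk

# on the Affa server, after attaching the USB disk and creating the same job:
affa --move-archive branchjob /var/affa   # move the archive to the local disk
affa --send-key branchjob                 # if not already done
```
<br />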
<br />
<br />
'''How do I back up two SME servers behind a firewall?'''<br />
First you need to configure port forwarding for the ssh service on your firewall. Use a non-standard port, e.g. 2200->22, for your second server. Set up a job on your Affa server for each of your production servers. Set the job property sshPort=2200 for the second server.<br />
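A minimal sketch of the second job's configuration (the host name is illustrative; only sshPort differs from an ordinary job):

```
[server2]
Description=Second SME server behind the firewall
remoteHostName=firewall.example.com
sshPort=2200
SMEServer=yes
```
<br />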
<br />
=== Uninstall ===<br />
This removes the installed Affa package, all configuration data and all backup archives.<br />
<br />
Run the following commands for each job:<br />
affa --revoke-key JOBNAME<br />
affa --delete-job JOBNAME<br />
Verify that all jobs have been deleted:<br />
affa --status<br />
Remove the Affa package:<br />
 yum remove smeserver-affa<br />
Cleaning up:<br />
rm -f /etc/cron.d/affa<br />
rm -rf /etc/affa<br />
rm -rf /var/affa /var/log/affa<br />
<br />
=== Troubleshooting ===<br />
==== Affa stops working after prodserver was rebuilt ==== <br />
'''Description:'''<br />
* Affa stopped working or is broken.<br />
* When affaserver tries to make an ssh connection to prodserver it fails.<br />
* In the /var/log/affa/''jobname''.log file you will see this message: "SSH connection to ''prodserver ip'' failed. Did you send the public key". <br />
* Resending the keys does not solve the problem. <br />
<br />
'''Solution:'''<br />
* This is probably because the key for prodserver stored on the affaserver no longer matches the current key of prodserver. <br />
* You need to remove the existing key so that the next ssh connection fetches the key from prodserver again. <br />
* From the command line, edit the /root/.ssh/known_hosts file and remove the line for prodserver. It starts with: ''prodserver ip'' ssh-rsa<br />
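Instead of editing the file by hand, ssh-keygen -R removes the entry for a given host. A sketch on a scratch file (on the Affa server, drop the -f option to act on /root/.ssh/known_hosts; the prodserver IP and key data below are illustrative):

```shell
# Create a demo known_hosts file containing a stale entry.
printf '192.168.0.2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABfakekey\n' > /tmp/known_hosts.demo
# Remove every entry for that host (a .old backup is kept automatically).
ssh-keygen -R 192.168.0.2 -f /tmp/known_hosts.demo
cat /tmp/known_hosts.demo    # the stale entry is gone
```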
<br />
=== Bug report ===<br />
Affa is listed in the [http://bugs.contribs.org/enter_bug.cgi?product=SME%20Contribs&component=smeserver-affa bugtracker contribs section].<br />
Please report all bugs, new feature requests and documentation issues there.<br />
<br />
<br />
{{#bugzilla:columns=id,product,version,status,summary |sort=id |order=desc |disablecache=1|component=smeserver-affa|noresultsmessage="No open bugs found."}}<br />
<br />
=== Additional information ===<br />
For details of performance, changelog etc see [[:Affa:Additional information]]<br />
<br />
=== References ===<br />
* Wikipedia article http://en.wikipedia.org/wiki/Rsync<br />
* Rsync homepage http://rsync.samba.org/<br />
* Wiki article [[Moving SME to new Hardware]]<br />
* Wiki article [[Backup with Affa and FreeDup]]<br />
* Wiki article [[Rsyncd setup on a windows computer for use with Affa backup]]<br />
* Wiki article [[Backup Windows Computers Using Affa]]<br />
* Wiki article [[Backup of ESXi Virtual Machines using Affa]]<br />
<br />
----<br />
[[Category: Contrib]]<br />
[[Category: Backup]]</div>
https://wiki.koozali.org/index.php?title=Moving_SME_to_new_Hardware&diff=33104 Moving SME to new Hardware (2017-04-03T19:44:24Z)<p>Arnaud: /* Restore the data */</p>
<hr />
<div>{{Languages|Moving_SME_to_new_Hardware}}<br />
{{Level|Advanced}}<br />
{{usefulnote}}<br />
<br />
<br />
=== Introduction ===<br />
There are many ways to move a SME Server installation to new hardware, and also to new versions of the SME Server software. <br />
<br />
This document describes a method using the '''[[Affa|Affa V3]]''' contrib.<br />
<br />
Affa gives 3 possibilities to move to new hardware, depending on the available hardware (1, 2 or 3 servers, a USB disk) and on the acceptable downtime:<br />
* Using the "rise" feature will convert the backup server into the new production server => 2 machines are needed - short downtime.<br />
* Using the normal backup and restore functions:<br />
:* with 2 machines (production server and backup server - long downtime) or 3 machines (old production server, backup server and new production server - short downtime) <br />
:* with only 1 machine and a USB external disk (long downtime) <br />
<br />
<br />
These 3 methods can also be used to upgrade from a lower version of the SME software to a higher one. This also works from SME 8.1 to 9.<br><br />
<br />
In the following it is assumed that <code>'''prodIP'''</code> is the IP address of your '''production server''' and <code>'''newIP'''</code> is the IP address of the new '''server hardware'''. Replace both placeholders by your real IP addresses.<br />
<br />
=== Using the "rise" feature ===<br />
==== Preparation ====<br />
===== Production server =====<br />
Enable remote administration ssh access in the server-manager of <code>prodIP</code>. This involves setting both the 'Secure shell access' for the local network, and the 'Allow administrative command line access over secure shell' setting, which must be set to 'Yes'. Also, you may have to set "Allow secure shell access using standard passwords" to 'Yes' as well.<br><br />
Log into the <code>prodIP</code> box and run an update<br><br />
<br />
yum update<br />
<br />
If packages were updated, signalling the post-upgrade and reboot events is required.<br />
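On SME Server this is typically done with the following commands (a sketch; run them only after an update actually installed packages, as the second one reboots the server):

```
signal-event post-upgrade
signal-event reboot
```
<br />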
<br />
===== New hardware =====<br />
Install SME Server from the latest CDROM/ISO. Assign an unused IP address (<code>newIP</code>) and disable DHCP.<br><br />
Enable remote ssh access in the server-manager on the newIP box.<br />
<br />
'''''Note:''' From now on all further steps can be done remotely via ssh login.''<br><br />
<br />
Log into the <code>newIP</code> box and run an update<br><br />
<br />
yum update<br />
<br />
Signalling post-upgrade and reboot events is required.<br><br />
<br />
<br />
Install the Affa RPM<br />
Follow the latest instructions here: '''[[Affa#Installation_of_Affa_3|Affa Installation]]'''<br />
<br />
Please remember to manually create the directory for archive files:<br />
mkdir /var/affa<br />
<br />
{{Note box|During the transition from SME8 to SME9, contrib packages will be migrated to the SME9 contrib repository. If the contrib is not yet in the SME9 Contrib repository and an entry in the Q&A suggests it will install properly then you will need to install the contrib from the SME8 repository See: http://wiki.contribs.org/SME9.0_Contribs_QA#Setup.}}<br />
<br />
<div style="text-align:left" class="mw-collapsible mw-collapsed" data-collapsetext="less &uarr;" data-expandtext="Obsolete Affa 2 instructions &darr;"><br />
<br />
<br><br />
<br />
{{Note box|During the transition from SME7 to SME8, contrib packages will be migrated to the SME8 contrib repository. If the contrib is not yet in the SME8 Contrib repository and an entry in the Q&A suggests it will install properly then you will need to install the contrib from the SME7 repository See: http://wiki.contribs.org/SME8.0_Contribs_QA#Setup.}}<br />
<br />
Configure Affa<br><br />
db affa set AffaGlobalDisable yes<br />
db affa set prodserv job<br />
db affa setprop prodserv remoteHostName prodIP<br />
db affa setprop prodserv RPMCheck yes<br />
affa --make-cronjobs<br />
<br />
</div><br />
<br />
In this example you have an SME production server (<code>prodIP</code>) with IP 192.168.0.2.<br><br />
You have a second SME box as your backup server (<code>newIP</code>) with IP 192.168.0.10.<br><br />
The affa backup job will be called 'prodserv'.<br><br><br />
<br />
Log in to your <code>newIP</code> as root and edit/create the file '''/etc/affa/prodserv.conf'''. Using e.g. the editor ''nano'', add the following example job configuration for job name '''prodserv''' and adapt the '''Include=XXXXX''' lines according to the supplementary folders that must be backed up (e.g. due to installed contribs that use /opt):<br />
[prodserv]<br />
remoteHostName=192.168.0.2<br />
SMEServer=yes<br />
Watchdog=yes<br />
RPMCheck=yes<br />
ConnectionCheckTimeout=120<br />
Debug=no<br />
Description=Backup of 192.168.0.2 smeserver.athome.xx<br />
DiskSpaceWarn=strict<br />
RootDir=/var/affa<br />
Include=/each/folder/to/save<br />
Include=/another/folder/to/save<br />
TimeSchedule=0630<br />
localNice=15<br />
remoteNice=15<br />
rsync--inplace=yes<br />
rsyncCompress=no<br />
rsyncTimeout=900<br />
scheduledKeep=1<br />
dailyKeep=7<br />
weeklyKeep=4<br />
monthlyKeep=12<br />
yearlyKeep=1<br />
status=disabled<br />
Then save your job configuration file.<br />
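Before running `affa --configcheck`, a quick grep can catch keys that never made it into the file. This is only a sketch: it writes a sample file so it can run anywhere; on a real box you would point conf= at /etc/affa/prodserv.conf, and the key list below is just an example.

```shell
# Write a sample job file, then check it for a few expected keys.
tmp=$(mktemp -d); conf="$tmp/prodserv.conf"
printf '%s\n' '[prodserv]' 'remoteHostName=192.168.0.2' 'RootDir=/var/affa' > "$conf"
missing=""
for key in remoteHostName RootDir TimeSchedule; do
  grep -q "^$key=" "$conf" || missing="$missing $key"   # collect absent keys
done
echo "missing keys:${missing:-none}"
rm -rf "$tmp"
```

Here the sample file deliberately omits TimeSchedule, so the check reports it.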
<br />
If you have a Letsencrypt certificate on the production server, then you should also include<br />
Include=/etc/dehydrated<br />
<br />
Now check that your configuration is OK:<br />
affa --configcheck<br />
This should report no errors.<br> <br />
<br />
Generate DSA keys and send the public key to the <code>prodIP</code> server<br><br />
affa --send-key prodserv<br />
<br />
The <code>prodIP</code> server will prompt<br><br />
 Job prodserv: root@prodIP's password:<br />
Enter the root password of <code>prodIP</code>. The answer will be:<br><br />
Public key sent to prodIP<br />
<br />
==== Copying data ====<br />
Run the Affa job on the <code>newIP</code> box<br />
<br />
affa --run prodserv<br />
<br />
Depending on the amount of data and the speed of hardware and network this first job run can take a really long time.<br />
<br />
Now view the file <code>/var/affa/prodserv/rpms-missing.txt</code><br />
<br />
less /var/affa/prodserv/rpms-missing.txt<br />
<br />
You will find a list of RPMs that are installed on <code>prodIP</code> but not on this server (<code>newIP</code>), as well as RPMs installed in different versions. Install or update the listed RPMs. To verify, run the steps of this chapter again; eventually rpms-missing.txt should list no RPMs.<br />
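The cross-check behind rpms-missing.txt boils down to diffing two sorted package lists, which `comm` can reproduce by hand. A minimal sketch — the two `printf` lines stand in for `rpm -qa | sort` run on each box, and the package names/versions are made up for the demo:

```shell
# Lines that appear only in the production list are packages still
# missing on the new box.
tmp=$(mktemp -d)
printf '%s\n' bash-4.2 httpd-2.4 smeserver-affa-3.2 | sort > "$tmp/prod.txt"
printf '%s\n' bash-4.2 httpd-2.4 | sort > "$tmp/new.txt"
missing=$(comm -13 "$tmp/new.txt" "$tmp/prod.txt")   # suppress cols 1 and 3
echo "missing on newIP: $missing"
rm -rf "$tmp"
```

`comm -13` hides lines unique to the first file and lines common to both, leaving exactly the packages the new server still lacks.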
<br />
In case you are upgrading the SME operating system to a higher version, this list may contain not only the installed contribs but all package changes between the two systems. In this case, run the following command on <code>prodIP</code> to find out which contribs are installed there:<br />
 /sbin/e-smith/audittools/newrpms<br />
<br />
==== Final data synchronization ====<br />
Ask your users to log off.<br><br />
Log into the <code>prodIP</code> box and stop all services that can modify data.<br />
<br />
SVC='qpsmtpd sqpsmtpd crond pop3 dovecot pop3s ftp httpd-e-smith atalk smb qmail' <br />
for s in $SVC; do service $s stop; done<br />
<br />
'''''Note:''' Downtime of the production server starts here''<br />
<br />
Log into the <code>newIP</code> box and run the Affa job again<br />
<br />
affa --run prodserv<br />
<br />
This run will complete very quickly, as only the differences since the last run need to be synchronized.<br />
<br />
==== Switch over to the new hardware ====<br />
Log into the <code>prodIP</code> box and power it off<br />
<br />
poweroff<br />
<br />
<br />
Log into the <code>newIP</code> box and rise this server to become your production server<br />
<br />
affa --rise --all prodserv<br />
<br />
This action will complete very quickly as only hardlinks are used and no data is physically moved.<br><br />
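The speed comes from how the snapshots are stored: Affa's rotating archives share files via hardlinks, and rising only adds new names for existing inodes, so no file data moves. A small illustration of the mechanism (not the real tool):

```shell
# Two directory entries, one inode: this is why "rise" is nearly instant.
tmp=$(mktemp -d)
echo "payload" > "$tmp/snapshot-file"   # pretend this file lives in scheduled.0
ln "$tmp/snapshot-file" "$tmp/risen"    # "rising" relinks names, copies nothing
links=$(stat -c %h "$tmp/risen")        # link count is now 2
[ "$tmp/snapshot-file" -ef "$tmp/risen" ] && same=yes || same=no
echo "link count: $links, same inode: $same"
rm -rf "$tmp"
```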
<br />
'''''Note:''' Do not be scared if your prompt looks different!''<br><br><br />
<br />
Now do a reboot<br />
<br />
reboot<br />
'''''Note:''' Downtime of the production server ends here''<br />
<br />
<br />
You now have an identical copy of your old production server running on the new hardware. Your users can now log on.<br />
<br />
==== Cleaning up ====<br />
Remove the Affa archives<br />
<br />
/bin/rm -rf /var/affa<br />
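If you are curious how much disk space the archives were using, `du` tells you before the `rm` above (a sketch; /var/affa is the RootDir used throughout this howto):

```shell
# Report the size of the Affa archive tree; print a note if it is absent.
out=$(du -sh /var/affa 2>/dev/null || echo "/var/affa not present")
echo "$out"
```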
<br />
Remove the Affa packages and all status and configuration data<br />
<br />
yum remove smeserver-affa perl-Filesys-DiskFree<br />
rm -f /etc/cron.d/affa-status /etc/cron.d/affa<br />
rm -rf /home/e-smith/db/affa /home/e-smith/db/affa-report<br />
rm -rf /var/log/affa <br />
<br />
'''Note:''' Don't forget to clean up /var/affa. Otherwise you will waste disk space and see strange quota reports. See this [http://forums.contribs.org/index.php?topic=40737.msg198064#msg198064 forum thread] for details.<br />
<br />
==== Additional information ====<br />
===== Performance =====<br />
With this method you should be able to move a typical 500 GByte server to new hardware with less than 20 minutes of downtime. The final sync and the rise time do not really depend on the total file size, but on the number of files and directories.<br />
<br />
<br />
<br />
=== Using the backup and restore functions ===<br />
The general approach is to make a backup of the old SME and to restore it on the new SME (upgraded or not).<br />
The "RPMCheck" function can be used indirectly with the backup server (compare its list against the old and the new hardware) to get the list of RPMs that are missing on the new hardware; it is not available for a backup on an external usb disk. <br />
<br />
==== Preparation ====<br />
Take a backup of the running old 'prod server' (see the conf file ''/etc/affa/prodserv.conf'' above).<br />
{{Note box| As with the "rise" method, you can take the first backup (which takes a long time) without stopping the services, and the last one (which is quick) after the users have logged out and the services have been stopped }}<br />
After the backup, set a temporary IP in the configuration file of the backup job:<br />
remoteHostName=tem.po.ra.ry.IP<br />
<br />
For a backup on an external usb disk, set:<br />
remoteHostName=localhost<br />
RootDir=/the/mount/point/of/the/disk<br />
and mount the disk. You should also make a list of the installed RPMs.<br />
<br />
==== Install SME on the new hardware ====<br />
Install at least the same version of SME you were running on the old hardware, or a more recent one (e.g. from SME8 to SME9).<br />
<br />
===== For the method based on a backup server: =====<br />
* Set the '''temporary IP''' as internal address of the new hardware<br />
* From the backup server create the ssh connection between the backup server and the new hardware by sending the ssh key:<br />
affa --send-key prodserv<br />
<br />
The Prod-temp-IP server will prompt<br><br />
 Job prodserv: root@Prod-temp-IP's password:<br />
Enter the root password of Prod-temp-IP. The answer will be:<br><br />
Public key sent to prod-temp-IP<br />
<br />
===== External usb disk: =====<br />
* Install "smeserver-affa" on the new hardware<br />
* Mount the usb disk at the same mount point as used for the backup<br />
* Go into the archive and copy the .ini file into /etc/affa as the configuration file:<br />
cd /mount/point/prodserv/scheduled.0<br />
ls -a ### to see the ini file<br />
cp .prodserv.ini /etc/affa/<br />
mv /etc/affa/.prodserv.ini /etc/affa/prodserv.conf<br />
<br />
===== In case of upgrade SME8 to SME9 =====<br />
The restoration of the default data (parameter "SMEServer=yes" in the job's configuration file) will configure the SME8 yum repositories on the new SME9 server!<br />
<br />
In order to avoid this, there are two possibilities:<br />
* make a copy of both folders ''/etc/yum.repos.d'' and ''/etc/yum.smerepos.d'' before the restore. This will help when reconfiguring the repositories for SME9 by hand after the restore.<br />
* before the last backup, add the following to the configuration file of the backup job:<br />
 Exclude=/etc/yum.repos.d<br />
 Exclude=/etc/yum.smerepos.d<br />
(this of course assumes that you restore from the last backup - scheduled.0 - and not from an older one like weekly.2!)<br />
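Put together, the top of /etc/affa/prodserv.conf for this scenario might look like the fragment below — a sketch based on the example job earlier in this howto, with the Exclude lines as the addition:

```ini
[prodserv]
remoteHostName=192.168.0.2
SMEServer=yes
Exclude=/etc/yum.repos.d
Exclude=/etc/yum.smerepos.d
```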
<br />
==== Restore the data ====<br />
<br />
From the backup server (or from the new production server, in case of restoring from an external usb disk), run the following to reproduce the exact state of the backup:<br />
 affa --full-restore [--preserve-newer=no] [--delete=yes] prodserv<br />
<br />
'''Keep in mind that:'''<br />
:[--preserve-newer=no]: files on the remote server with a modification time newer than in the backup are overwritten by the older ones from the backup.<br />
:[--delete=yes]: all files on the remote server which are not in the backup are deleted. <br />
<br />
{{Warning box|A full restore with --preserve-newer no and --delete yes reconstructs the server as it was at the time of the backup. That means, that all files created or server configuration changes made before the restore will be lost! }}<br />
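Since Affa drives rsync under the hood (note the rsync* options in the job file earlier), the two flags have familiar synchronization semantics. The following is a toy shell model of what a restore with both flags does to the live tree — an illustration only, not the real tool:

```shell
# backup/ is the snapshot, live/ is the server being restored.
tmp=$(mktemp -d); mkdir "$tmp/backup" "$tmp/live"
echo "old config" > "$tmp/backup/app.conf"
echo "edited after backup" > "$tmp/live/app.conf"    # newer than the backup
echo "created after backup" > "$tmp/live/extra.txt"  # not in the backup
# --preserve-newer=no: the backup copy wins even over newer live files
cp -f "$tmp/backup/app.conf" "$tmp/live/app.conf"
# --delete=yes: live files absent from the backup are removed
[ -e "$tmp/backup/extra.txt" ] || rm -f "$tmp/live/extra.txt"
restored=$(cat "$tmp/live/app.conf")
leftover=$(ls "$tmp/live")
echo "app.conf now: $restored; live dir holds: $leftover"
rm -rf "$tmp"
```

After the model restore, the newer edit and the extra file are gone — exactly the behaviour the warning box describes.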
<br />
After the restore, the new production server will reboot.<br />
<br />
'''Note for the case of 3 machines:''' Make sure that the old hardware is switched off or no longer connected to the network before the new hardware reboots, because the new hardware will take over its IP address after the reconfiguration.<br />
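One quick way to verify the old box really is off the network before the reboot is to probe its ssh port. A sketch — the /dev/tcp redirection is bash-specific, and 192.168.0.2 is the example prodIP from this howto, which you must adapt:

```shell
# Probe TCP port 22 on the old server's address with a 2-second timeout.
OLDIP=192.168.0.2
if timeout 2 bash -c ">/dev/tcp/$OLDIP/22" 2>/dev/null; then
  state=up
  echo "WARNING: $OLDIP still answers on ssh; do not reboot yet"
else
  state=down
  echo "$OLDIP unreachable; safe to let the new server take over the address"
fi
```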
<br />
==== Tasks post restore ==== <br />
<br />
* If the backup job should be used for further backups of the new hardware, don't forget to replace the temporary IP in "remoteHostName" in the configuration file of the affa job with the previously set IP of the old server.<br />
* In case of an OS upgrade, check and, if necessary, reconfigure the yum repositories for the new version.<br />
<br />
[[Category: Howto]]<br />
[[Category: Backup]]<br />
----</div>Arnaudhttps://wiki.koozali.org/index.php?title=Moving_SME_to_new_Hardware&diff=33103Moving SME to new Hardware2017-04-03T19:40:51Z<p>Arnaud: /* Restore the data */</p>
<hr />
<div>{{Languages|Moving_SME_to_new_Hardware}}<br />
{{Level|Advanced}}<br />
{{usefulnote}}<br />
<br />
<br />
=== Introduction ===<br />
There are many ways to move a SME Server installation to new hardware and also new versions of the SME Server software. <br />
<br />
This document describes a method using the '''[[Affa|Affa V3]]''' contrib.<br />
<br />
Affa gives 3 possibilities to move to new hardware, depending of the available hardware (1,2 or 3 servers, an usb disk) and on the acceptable downtime:<br />
* Using the "rise" feature will convert the backup server into the new production server => 2 machines are needed - short downtime.<br />
* Using the normal backup and restore functions:<br />
:* with 2 machines (production server and backup server - long downtime) or 3 machines (old production server, backup server and new production server - short downtime) <br />
:* with only 1 machine and an usb external disk (long downtime) <br />
<br />
<br />
These 3 methods can also be used to upgrade from a lower version to a higher version of the SME software. This also work from SME 8.1 to 9.<br><br />
<br />
In the following it is assumed that <code>'''prodIP'''</code> is the IP address of your '''production server''' and <code>'''newIP'''</code> is the IP address of the new '''server hardware'''. Replace both placeholders by your real IP addresses.<br />
<br />
=== Using the "rise" feature ===<br />
==== Preparation ====<br />
===== Production server =====<br />
Enable remote administration ssh access in the server-manager of <code>prodIP</code>. This involves setting both the 'Secure shell access' for the local network, and the 'Allow administrative command line access over secure shell' setting, which must be set to 'Yes'. Also, you may have to set "Allow secure shell access using standard passwords" to 'Yes' as well.<br><br />
Log into the <code>prodIP</code> box and run an update<br><br />
<br />
yum update<br />
<br />
If packages were updated signalling post-upgrade and reboot events is required.<br />
<br />
===== New hardware =====<br />
Install SME Server from the latest CDROM/ISO. Assign an unused IP address (<code>newIP</code>) and disable DHCP.<br><br />
Enable remote ssh access in the server-manager on the newIP box.<br />
<br />
'''''Note:''' From now on all further steps can be done remotely via ssh login.''<br><br />
<br />
Log into the <code>newIP</code> box and run an update<br><br />
<br />
yum update<br />
<br />
Signalling post-upgrade and reboot events is required.<br><br />
<br />
<br />
Install the Affa RPM<br />
Follow the latest instructions here: '''[[Affa#Installation_of_Affa_3|Affa Installation]]'''<br />
<br />
Please remember to manually create directory for archive files:<br />
mkdir /var/affa<br />
<br />
{{Note box|During the transition from SME8 to SME9, contrib packages will be migrated to the SME9 contrib repository. If the contrib is not yet in the SME9 Contrib repository and an entry in the Q&A suggests it will install properly then you will need to install the contrib from the SME8 repository See: http://wiki.contribs.org/SME9.0_Contribs_QA#Setup.}}<br />
<br />
<div style="text-align:left" class="mw-collapsible mw-collapsed" data-collapsetext="less &uarr;" data-expandtext="Obsolete Affa 2 instructions &darr;"><br />
<br />
<br><br />
<br />
{{Note box|During the transition from SME7 to SME8, contrib packages will be migrated to the SME8 contrib repository. If the contrib is not yet in the SME8 Contrib repository and an entry in the Q&A suggests it will install properly then you will need to install the contrib from the SME7 repository See: http://wiki.contribs.org/SME8.0_Contribs_QA#Setup.}}<br />
<br />
Configure Affa<br><br />
db affa set AffaGlobalDisable yes<br />
db affa set prodserv job<br />
db affa setprop prodserv remoteHostName prodIP<br />
db affa setprop prodserv RPMCheck yes<br />
affa --make-cronjobs<br />
<br />
</div><br />
<br />
In this example you have an SME production server (<code>prodIP</code>) with IP 192.168.0.2.<br><br />
You have a second SME box as your backup server (<code>newIP</code>) with IP 192.168.0.10.<br><br />
The affa backup job will be called 'prodserv'.<br><br><br />
<br />
Login to your <code>newIP</code> as root and edit/create the file '''/etc/affa/prodserv.conf''' . Using e.g. editor ''nano'', add the following example job configuration text for jobname '''prodserv''' and adapt '''Include=XXXXX''' according the supplementary folders that must be backuped, e.g. due to installed contribs that works with /opt):<br />
[prodserv]<br />
remoteHostName=192.168.0.2<br />
SMEServer=yes<br />
Watchdog=yes<br />
RPMCheck=yes<br />
ConnectionCheckTimeout=120<br />
Debug=no<br />
Description=Backup of 192.168.0.2 smeserver.athome.xx<br />
DiskSpaceWarn=strict<br />
RootDir=/var/affa<br />
Include=/each/folder/to/save<br />
Include=/another/folder/to/save<br />
TimeSchedule=0630<br />
localNice=15<br />
remoteNice=15<br />
rsync--inplace=yes<br />
rsyncCompress=no<br />
rsyncTimeout=900<br />
scheduledKeep=1<br />
dailyKeep=7<br />
weeklyKeep=4<br />
monthlyKeep=12<br />
yearlyKeep=1<br />
status=disabled<br />
Then save your job configuration file.<br />
<br />
If you have a Letsencrypt certificate on the production server, then you should also include<br />
Include=/etc/dehydrated<br />
<br />
Now check that your configuration is OK:<br />
affa --configcheck<br />
This should throw out no errors.<br> <br />
<br />
Generate DSA keys and send the public key to the <code>prodIP</code> server<br><br />
affa --send-key prodserv<br />
<br />
The answer from ProdIP server will be<br><br />
Job prodserv: root@ProdIP's password:<br />
enter the root ProdIP password. The answer will be:<br><br />
Public key sent to prodIP<br />
<br />
==== Copying data ====<br />
Run the Affa job on the <code>newIP</code> box<br />
<br />
affa --run prodserv<br />
<br />
Depending on the amount of data and the speed of hardware and network this first job run can take a really long time.<br />
<br />
Now view the file <code>/var/affa/prodserv/rpms-missing.txt</code><br />
<br />
less /var/affa/prodserv/rpms-missing.txt<br />
<br />
You will find a list of RPMs which are installed on <code>prodIP</code> but not on this server (<code>newIP</code>) and also RPMs installed with different versions. Install or update the listed RPMs. To verify, you can run the steps of this chapter again. Finally the rpms-missing.txt should not list any RPMs.<br />
<br />
In case you are upgrading the SME operating system to a higher version you may not only find the contribs installed in this list but all changes between the 2 systems - in this case you need to run this command on <code>prodIP</code><br />
/sbin/e-smith/audittools/newrpms<br />
To find out what contribs are installed on <code>prodIP</code>.<br />
<br />
==== Final data synchronization ====<br />
Ask your users to log off.<br><br />
Log into the <code>prodIP</code> box and stop all services that can modify data.<br />
<br />
SVC='qpsmtpd sqpsmtpd crond pop3 dovecot pop3s ftp httpd-e-smith atalk smb qmail' <br />
for s in $SVC; do service $s stop; done<br />
<br />
'''''Note:''' Downtime of the production server starts here''<br />
<br />
Log into the <code>newIP</code> box and run the Affa job again<br />
<br />
affa --run prodserv<br />
<br />
This run will complete very quickly as only differences since the the last run needs to be synchronsized.<br />
<br />
==== Switch over to the new hardware ====<br />
Log into the <code>prodIP</code> box and power it off<br />
<br />
poweroff<br />
<br />
<br />
Log into the <code>newIP</code> box and rise this server to your production server<br />
<br />
affa --rise --all prodserv<br />
<br />
This action will complete very quickly as only hardlinks are used and no data is physically moved.<br><br />
<br />
'''''Note:''' Do not be scared if your prompt looks different!<br><br><br />
<br />
Now do a reboot<br />
<br />
reboot<br />
'''''Note:''' Downtime of the production server ends here''<br />
<br />
<br />
You now have an identical copy of your old production server running on the new hardware. Your users can now log on.<br />
<br />
==== Cleaning up ====<br />
Remove the Affa archives<br />
<br />
/bin/rm -rf /var/affa<br />
<br />
Remove the Affa packages and all status and configuration data<br />
<br />
yum remove smeserver-affa perl-Filesys-DiskFree<br />
rm -f /etc/cron.d/affa-status /etc/cron.d/affa<br />
rm -rf /home/e-smith/db/affa /home/e-smith/db/affa-report<br />
rm -rf /var/log/affa <br />
<br />
'''Note:''' Don't forget to clean up /var/affa. Otherwise you will waste disk space and see strange quota reports. See this [http://forums.contribs.org/index.php?topic=40737.msg198064#msg198064 forum thread] for details.<br />
<br />
==== Additional information ====<br />
===== Performance =====<br />
With this method you should be able to move a typical 500 Gbyte sized server to new hardware with downtime less than 20 minutes. The final sync and the rise time does not really depend on the total files size, but on the number of files and directories.<br />
<br />
<br />
<br />
=== Using the backup and restore fonctions ===<br />
The general working way is to make a backup of the old SME and to restore it on the new SME (upgraded or not).<br />
The fonction "RPMCheck" can be used indirectly with the backup server (compare the list with the old and with the new hardware) to get the list of rpm's that are missing on the new hardware but it is not available for a backup on the external usb disk. <br />
<br />
==== Preparation ====<br />
Take a backup of the running old 'prod server' (see the above conf file ''/etc/affa/prodserv.conf'' ).<br />
{{Note box| Like using the way based on "rise", your can take the first backup (takes long time) without stopping the services and the last one (is quick) after the logout of the users and the stop of the services }}<br />
After the backup, set a temporary IP into the conf file of the backup job:<br />
remoteHostName=tem.po.ra.ry.IP<br />
<br />
For a backup on an external usb disk, set:<br />
remoteHostName=localhost<br />
RootDir=/the/mount/point/of/the/disk<br />
and mount the disk. You should make a list of the installed rpm's too.<br />
<br />
==== Install SME on the new hardware ====<br />
Install at least the same version of SME you were running on the old hardware or a more recent one (e.g. from SME8 to SME9).<br />
<br />
===== For the method based on a backup server: =====<br />
* Set the '''temporary IP''' as internal address of the new hardware<br />
* From the backup server create the ssh connection between the backup server and the new hardware by sending the ssh key:<br />
affa --send-key prodserv<br />
<br />
The answer from Prod-temp-IP server will be<br><br />
Job prodserv: root@Prod-temp-IP's password:<br />
enter the root Prod-temp-IP password. The answer will be:<br><br />
Public key sent to prod-temp-IP<br />
<br />
===== External usb disk: =====<br />
* Install "smeserver-affa" on the new hardware<br />
* mount the usb disk on the same mountpoint than for the backup<br />
* go into the archive and copy the .ini file into /etc/affa as conf file:<br />
cd /mount/point/prodserv/scheduled.0<br />
ls -a ### to see the ini file<br />
cp .prodserv.ini /etc/affa/<br />
mv /etc/affa/.prodserv.ini /etc/affa/prodserv.conf<br />
<br />
===== In case of upgrade SME8 to SME9 =====<br />
The restoration of the default data (parameter "SMEServer=yes" into the conf file of the job) will configure yum repos for SME8 on the new server SME9!<br />
<br />
In order to avoid this there are 2 possibilities:<br />
* make a copy of both folders ''/etc/yum.repos.d'' and ''/etc/yum.smerepos.d'' before the restore. It will be helpful for reconfiguring by hand the repos for SME9 after the restore.<br />
* add following into the conf file of the backup job:<br />
Exclude=/etc/yum.repos.d<br />
Exclude=/etc/yum.smerepos.d<br />
before the last backup (of course if you will restore from the last backup - scheduled.0 - and not from an older one like weekly.2!)<br />
<br />
==== Restore the data ====<br />
<br />
From the backup server (or from the new production server in case of restoring from external usb disk) run:<br />
affa --full-restore [--preserve-newer=no] [--delete=yes] prodserv<br />
To get 1:1 the state of the backup.<br />
<br />
'''Keep in mind that:'''<br />
:[--preserve-newer=no]: files on the remote server with modification time newer than on the backup are overwritten through the older ones of the backup.<br />
:[--delete=yes]: all files on the remote server, which are not in the backup, are deleted. <br />
<br />
{{Warning box|A full restore with --preserve-newer no and --delete yes reconstructs the server as it was at the time of the backup. That means, that all files created or server configuration changes made before the restore will be lost! }}<br />
<br />
After the restore, the new prodserver will reboot.<br />
'''Note for the case of 3 machines:''' Make sure that the old hardware is switched off or no more connect to the network before the new hardware reboots because the new hardware will take its IP after the reconfiguration.<br />
<br />
If the backup job should be used for futher backups of the new hardware, don't forget to replace the temporary IP of "Remotehost" through the presius set IP of the olf server into the conf file of the affa job.<br />
<br />
In case of an OS upgrade, check and if necessary reconfigure the repositories of yum for the new version.<br />
<br />
[[Category: Howto]]<br />
[[Category: Backup]]<br />
----</div>Arnaudhttps://wiki.koozali.org/index.php?title=Moving_SME_to_new_Hardware&diff=33102Moving SME to new Hardware2017-04-03T19:31:13Z<p>Arnaud: /* Preparation */</p>
<hr />
<div>{{Languages|Moving_SME_to_new_Hardware}}<br />
{{Level|Advanced}}<br />
{{usefulnote}}<br />
<br />
<br />
=== Introduction ===<br />
There are many ways to move a SME Server installation to new hardware and also new versions of the SME Server software. <br />
<br />
This document describes a method using the '''[[Affa|Affa V3]]''' contrib.<br />
<br />
Affa gives 3 possibilities to move to new hardware, depending of the available hardware (1,2 or 3 servers, an usb disk) and on the acceptable downtime:<br />
* Using the "rise" feature will convert the backup server into the new production server => 2 machines are needed - short downtime.<br />
* Using the normal backup and restore functions:<br />
:* with 2 machines (production server and backup server - long downtime) or 3 machines (old production server, backup server and new production server - short downtime) <br />
:* with only 1 machine and an usb external disk (long downtime) <br />
<br />
<br />
These 3 methods can also be used to upgrade from a lower version to a higher version of the SME software. This also work from SME 8.1 to 9.<br><br />
<br />
In the following it is assumed that <code>'''prodIP'''</code> is the IP address of your '''production server''' and <code>'''newIP'''</code> is the IP address of the new '''server hardware'''. Replace both placeholders by your real IP addresses.<br />
<br />
=== Using the "rise" feature ===<br />
==== Preparation ====<br />
===== Production server =====<br />
Enable remote administration ssh access in the server-manager of <code>prodIP</code>. This involves setting both the 'Secure shell access' for the local network, and the 'Allow administrative command line access over secure shell' setting, which must be set to 'Yes'. Also, you may have to set "Allow secure shell access using standard passwords" to 'Yes' as well.<br><br />
Log into the <code>prodIP</code> box and run an update<br><br />
<br />
yum update<br />
<br />
If packages were updated signalling post-upgrade and reboot events is required.<br />
<br />
===== New hardware =====<br />
Install SME Server from the latest CDROM/ISO. Assign an unused IP address (<code>newIP</code>) and disable DHCP.<br><br />
Enable remote ssh access in the server-manager on the newIP box.<br />
<br />
'''''Note:''' From now on all further steps can be done remotely via ssh login.''<br><br />
<br />
Log into the <code>newIP</code> box and run an update<br><br />
<br />
yum update<br />
<br />
Signalling post-upgrade and reboot events is required.<br><br />
<br />
<br />
Install the Affa RPM<br />
Follow the latest instructions here: '''[[Affa#Installation_of_Affa_3|Affa Installation]]'''<br />
<br />
Please remember to manually create directory for archive files:<br />
mkdir /var/affa<br />
<br />
{{Note box|During the transition from SME8 to SME9, contrib packages will be migrated to the SME9 contrib repository. If the contrib is not yet in the SME9 Contrib repository and an entry in the Q&A suggests it will install properly then you will need to install the contrib from the SME8 repository See: http://wiki.contribs.org/SME9.0_Contribs_QA#Setup.}}<br />
<br />
<div style="text-align:left" class="mw-collapsible mw-collapsed" data-collapsetext="less &uarr;" data-expandtext="Obsolete Affa 2 instructions &darr;"><br />
<br />
<br><br />
<br />
{{Note box|During the transition from SME7 to SME8, contrib packages will be migrated to the SME8 contrib repository. If the contrib is not yet in the SME8 Contrib repository and an entry in the Q&A suggests it will install properly then you will need to install the contrib from the SME7 repository See: http://wiki.contribs.org/SME8.0_Contribs_QA#Setup.}}<br />
<br />
Configure Affa<br><br />
db affa set AffaGlobalDisable yes<br />
db affa set prodserv job<br />
db affa setprop prodserv remoteHostName prodIP<br />
db affa setprop prodserv RPMCheck yes<br />
affa --make-cronjobs<br />
<br />
</div><br />
<br />
In this example you have an SME production server (<code>prodIP</code>) with IP 192.168.0.2.<br><br />
You have a second SME box as your backup server (<code>newIP</code>) with IP 192.168.0.10.<br><br />
The affa backup job will be called 'prodserv'.<br><br><br />
<br />
Login to your <code>newIP</code> as root and edit/create the file '''/etc/affa/prodserv.conf''' . Using e.g. editor ''nano'', add the following example job configuration text for jobname '''prodserv''' and adapt '''Include=XXXXX''' according the supplementary folders that must be backuped, e.g. due to installed contribs that works with /opt):<br />
[prodserv]<br />
remoteHostName=192.168.0.2<br />
SMEServer=yes<br />
Watchdog=yes<br />
RPMCheck=yes<br />
ConnectionCheckTimeout=120<br />
Debug=no<br />
Description=Backup of 192.168.0.2 smeserver.athome.xx<br />
DiskSpaceWarn=strict<br />
RootDir=/var/affa<br />
Include=/each/folder/to/save<br />
Include=/another/folder/to/save<br />
TimeSchedule=0630<br />
localNice=15<br />
remoteNice=15<br />
rsync--inplace=yes<br />
rsyncCompress=no<br />
rsyncTimeout=900<br />
scheduledKeep=1<br />
dailyKeep=7<br />
weeklyKeep=4<br />
monthlyKeep=12<br />
yearlyKeep=1<br />
status=disabled<br />
Then save your job configuration file.<br />
<br />
If you have a Letsencrypt certificate on the production server, then you should also include<br />
Include=/etc/dehydrated<br />
<br />
Now check that your configuration is OK:<br />
affa --configcheck<br />
This should throw out no errors.<br> <br />
<br />
Generate DSA keys and send the public key to the <code>prodIP</code> server<br><br />
affa --send-key prodserv<br />
<br />
The answer from ProdIP server will be<br><br />
Job prodserv: root@ProdIP's password:<br />
enter the root ProdIP password. The answer will be:<br><br />
Public key sent to prodIP<br />
<br />
==== Copying data ====<br />
Run the Affa job on the <code>newIP</code> box<br />
<br />
affa --run prodserv<br />
<br />
Depending on the amount of data and the speed of hardware and network this first job run can take a really long time.<br />
<br />
Now view the file <code>/var/affa/prodserv/rpms-missing.txt</code><br />
<br />
less /var/affa/prodserv/rpms-missing.txt<br />
<br />
You will find a list of RPMs which are installed on <code>prodIP</code> but not on this server (<code>newIP</code>) and also RPMs installed with different versions. Install or update the listed RPMs. To verify, you can run the steps of this chapter again. Finally the rpms-missing.txt should not list any RPMs.<br />
<br />
In case you are upgrading the SME operating system to a higher version you may not only find the contribs installed in this list but all changes between the 2 systems - in this case you need to run this command on <code>prodIP</code><br />
/sbin/e-smith/audittools/newrpms<br />
To find out what contribs are installed on <code>prodIP</code>.<br />
<br />
==== Final data synchronization ====<br />
Ask your users to log off.<br><br />
Log into the <code>prodIP</code> box and stop all services that can modify data.<br />
<br />
SVC='qpsmtpd sqpsmtpd crond pop3 dovecot pop3s ftp httpd-e-smith atalk smb qmail' <br />
for s in $SVC; do service $s stop; done<br />
<br />
'''''Note:''' Downtime of the production server starts here''<br />
<br />
Log into the <code>newIP</code> box and run the Affa job again<br />
<br />
affa --run prodserv<br />
<br />
This run will complete very quickly as only differences since the the last run needs to be synchronsized.<br />
<br />
==== Switch over to the new hardware ====<br />
Log into the <code>prodIP</code> box and power it off<br />
<br />
poweroff<br />
<br />
<br />
Log into the <code>newIP</code> box and rise this server to your production server<br />
<br />
affa --rise --all prodserv<br />
<br />
This action will complete very quickly as only hardlinks are used and no data is physically moved.<br><br />
<br />
'''''Note:''' Do not be scared if your prompt looks different!<br><br><br />
<br />
Now do a reboot<br />
<br />
reboot<br />
'''''Note:''' Downtime of the production server ends here''<br />
<br />
<br />
You now have an identical copy of your old production server running on the new hardware. Your users can now log on.<br />
<br />
==== Cleaning up ====<br />
Remove the Affa archives<br />
<br />
/bin/rm -rf /var/affa<br />
<br />
Remove the Affa packages and all status and configuration data<br />
<br />
yum remove smeserver-affa perl-Filesys-DiskFree<br />
rm -f /etc/cron.d/affa-status /etc/cron.d/affa<br />
rm -rf /home/e-smith/db/affa /home/e-smith/db/affa-report<br />
rm -rf /var/log/affa <br />
<br />
'''Note:''' Don't forget to clean up /var/affa. Otherwise you will waste disk space and see strange quota reports. See this [http://forums.contribs.org/index.php?topic=40737.msg198064#msg198064 forum thread] for details.<br />
<br />
==== Additional information ====<br />
===== Performance =====<br />
With this method you should be able to move a typical 500 Gbyte sized server to new hardware with downtime less than 20 minutes. The final sync and the rise time does not really depend on the total files size, but on the number of files and directories.<br />
<br />
<br />
<br />
=== Using the backup and restore functions ===<br />
The general approach is to make a backup of the old SME Server and to restore it on the new one (upgraded or not).<br />
The "RPMCheck" function can be used indirectly with the backup server (compare its list against the old and the new hardware) to get the list of RPMs that are missing on the new hardware; it is not available for a backup on an external USB disk. <br />
<br />
==== Preparation ====<br />
Take a backup of the running old 'prod server' (see the above conf file ''/etc/affa/prodserv.conf'' ).<br />
{{Note box| As with the "rise" method, you can take the first backup (which takes a long time) without stopping the services, and the final quick backup after the users have logged out and the services have been stopped }}<br />
After the backup, set a temporary IP into the conf file of the backup job:<br />
remoteHostName=tem.po.ra.ry.IP<br />
<br />
For a backup on an external usb disk, set:<br />
remoteHostName=localhost<br />
RootDir=/the/mount/point/of/the/disk<br />
and mount the disk. You should also make a list of the installed RPMs.<br />
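One way to record and later compare the installed package lists is sketched below; the file paths and the sample package names are placeholders, and the real lists would come from <code>rpm -qa | sort</code> run on each server:<br />

```shell
# Stand-ins for the real `rpm -qa | sort` output of the old and the new
# server (the package names here are placeholders):
printf 'bash\nhttpd\nsmeserver-affa\n' > /tmp/rpms-old.txt
printf 'bash\nhttpd\n'                 > /tmp/rpms-new.txt
# Lines only in the old list = packages still missing on the new server:
comm -23 /tmp/rpms-old.txt /tmp/rpms-new.txt   # prints: smeserver-affa
```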
<br />
==== Install SME on the new hardware ====<br />
Install at least the same version of SME you were running on the old hardware or a more recent one (e.g. from SME8 to SME9).<br />
<br />
===== For the method based on a backup server: =====<br />
* Set the '''temporary IP''' as internal address of the new hardware<br />
* From the backup server create the ssh connection between the backup server and the new hardware by sending the ssh key:<br />
affa --send-key prodserv<br />
<br />
The answer from Prod-temp-IP server will be<br><br />
Job prodserv: root@Prod-temp-IP's password:<br />
enter the root Prod-temp-IP password. The answer will be:<br><br />
Public key sent to prod-temp-IP<br />
<br />
===== External usb disk: =====<br />
* Install "smeserver-affa" on the new hardware<br />
* mount the USB disk on the same mountpoint as used for the backup<br />
* go into the archive and copy the hidden .ini file into /etc/affa as the job configuration file:<br />
cd /mount/point/prodserv/scheduled.0<br />
ls -a ### to see the ini file<br />
cp .prodserv.ini /etc/affa/<br />
mv /etc/affa/.prodserv.ini /etc/affa/prodserv.conf<br />
<br />
===== In case of upgrade SME8 to SME9 =====<br />
Restoring the default data (parameter "SMEServer=yes" in the conf file of the job) will configure the SME8 yum repositories on the new SME9 server!<br />
<br />
To avoid this there are 2 possibilities:<br />
* make a copy of both folders ''/etc/yum.repos.d'' and ''/etc/yum.smerepos.d'' before the restore; this will help when reconfiguring the SME9 repositories by hand after the restore.<br />
* add following into the conf file of the backup job:<br />
Exclude=/etc/yum.repos.d<br />
Exclude=/etc/yum.smerepos.d<br />
before the last backup (of course only if you restore from the last backup - scheduled.0 - and not from an older one such as weekly.2!)<br />
<br />
==== Restore the data ====<br />
<br />
From the backup server (or from the new production server in case of restoring from external usb disk) run:<br />
affa --full-restore [--preserve-newer=no] [--delete=yes] prodserv<br />
to get a 1:1 copy of the state of the backup.<br />
<br />
'''Keep in mind that:'''<br />
:[--preserve-newer=yes]: files on the remote server with modification time newer than on the backup are not overwritten.<br />
:[--delete=yes]: all files on the remote server, which are not in the backup, are deleted. <br />
<br />
{{Warning box|A full restore with --preserve-newer no and --delete yes reconstructs the server as it was at the time of the backup. That means, that all files created or server configuration changes made before the restore will be lost! }}<br />
<br />
After the restore, the new production server will reboot.<br />
Make sure that the old hardware has been switched off or disconnected from the network beforehand, because the new hardware will take over its IP address after the reconfiguration.<br />
<br />
In case of an OS upgrade, check and, if necessary, reconfigure the yum repositories for the new version.<br />
<br />
[[Category: Howto]]<br />
[[Category: Backup]]<br />
----</div>Arnaudhttps://wiki.koozali.org/index.php?title=Moving_SME_to_new_Hardware&diff=33101Moving SME to new Hardware2017-04-03T19:28:40Z<p>Arnaud: /* In case of upgrade SME8 to SME9 */</p>
<hr />
<div>{{Languages|Moving_SME_to_new_Hardware}}<br />
{{Level|Advanced}}<br />
{{usefulnote}}<br />
<br />
<br />
=== Introduction ===<br />
There are many ways to move a SME Server installation to new hardware, and also to new versions of the SME Server software. <br />
<br />
This document describes a method using the '''[[Affa|Affa V3]]''' contrib.<br />
<br />
Affa offers 3 ways to move to new hardware, depending on the available hardware (1, 2 or 3 servers, a USB disk) and on the acceptable downtime:<br />
* Using the "rise" feature will convert the backup server into the new production server => 2 machines are needed - short downtime.<br />
* Using the normal backup and restore functions:<br />
:* with 2 machines (production server and backup server - long downtime) or 3 machines (old production server, backup server and new production server - short downtime) <br />
:* with only 1 machine and an usb external disk (long downtime) <br />
<br />
<br />
These 3 methods can also be used to upgrade from a lower version of the SME software to a higher one. This also works from SME 8.1 to 9.<br><br />
<br />
In the following it is assumed that <code>'''prodIP'''</code> is the IP address of your '''production server''' and <code>'''newIP'''</code> is the IP address of the new '''server hardware'''. Replace both placeholders by your real IP addresses.<br />
<br />
=== Using the "rise" feature ===<br />
==== Preparation ====<br />
===== Production server =====<br />
Enable remote administration ssh access in the server-manager of <code>prodIP</code>. This involves setting both the 'Secure shell access' for the local network, and the 'Allow administrative command line access over secure shell' setting, which must be set to 'Yes'. Also, you may have to set "Allow secure shell access using standard passwords" to 'Yes' as well.<br><br />
Log into the <code>prodIP</code> box and run an update<br><br />
<br />
yum update<br />
<br />
If packages were updated, signalling the post-upgrade and reboot events is required.<br />
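On SME Server this is the usual <code>signal-event post-upgrade</code> followed by <code>signal-event reboot</code> sequence; a sketch (the guard is only there so the snippet is a no-op on non-SME machines):<br />

```shell
# Flush the updated configuration and reboot (standard SME Server events).
# signal-event exists only on SME Server, hence the guard:
if command -v signal-event >/dev/null 2>&1; then
    signal-event post-upgrade
    signal-event reboot
fi
```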
<br />
===== New hardware =====<br />
Install SME Server from the latest CDROM/ISO. Assign an unused IP address (<code>newIP</code>) and disable DHCP.<br><br />
Enable remote ssh access in the server-manager on the newIP box.<br />
<br />
'''''Note:''' From now on all further steps can be done remotely via ssh login.''<br><br />
<br />
Log into the <code>newIP</code> box and run an update<br><br />
<br />
yum update<br />
<br />
Signalling post-upgrade and reboot events is required.<br><br />
<br />
<br />
Install the Affa RPM<br />
Follow the latest instructions here: '''[[Affa#Installation_of_Affa_3|Affa Installation]]'''<br />
<br />
Please remember to manually create the directory for the archive files:<br />
mkdir /var/affa<br />
<br />
{{Note box|During the transition from SME8 to SME9, contrib packages will be migrated to the SME9 contrib repository. If the contrib is not yet in the SME9 Contrib repository and an entry in the Q&A suggests it will install properly then you will need to install the contrib from the SME8 repository See: http://wiki.contribs.org/SME9.0_Contribs_QA#Setup.}}<br />
<br />
<div style="text-align:left" class="mw-collapsible mw-collapsed" data-collapsetext="less &uarr;" data-expandtext="Obsolete Affa 2 instructions &darr;"><br />
<br />
<br><br />
<br />
{{Note box|During the transition from SME7 to SME8, contrib packages will be migrated to the SME8 contrib repository. If the contrib is not yet in the SME8 Contrib repository and an entry in the Q&A suggests it will install properly then you will need to install the contrib from the SME7 repository See: http://wiki.contribs.org/SME8.0_Contribs_QA#Setup.}}<br />
<br />
Configure Affa<br><br />
db affa set AffaGlobalDisable yes<br />
db affa set prodserv job<br />
db affa setprop prodserv remoteHostName prodIP<br />
db affa setprop prodserv RPMCheck yes<br />
affa --make-cronjobs<br />
<br />
</div><br />
<br />
In this example you have an SME production server (<code>prodIP</code>) with IP 192.168.0.2.<br><br />
You have a second SME box as your backup server (<code>newIP</code>) with IP 192.168.0.10.<br><br />
The affa backup job will be called 'prodserv'.<br><br><br />
<br />
Login to your <code>newIP</code> as root and edit/create the file '''/etc/affa/prodserv.conf'''. Using an editor such as ''nano'', add the following example job configuration for jobname '''prodserv''', and adapt the '''Include=XXXXX''' lines according to the supplementary folders that must be backed up (e.g. because of installed contribs that use /opt):<br />
[prodserv]<br />
remoteHostName=192.168.0.2<br />
SMEServer=yes<br />
Watchdog=yes<br />
RPMCheck=yes<br />
ConnectionCheckTimeout=120<br />
Debug=no<br />
Description=Backup of 192.168.0.2 smeserver.athome.xx<br />
DiskSpaceWarn=strict<br />
RootDir=/var/affa<br />
Include=/each/folder/to/save<br />
Include=/another/folder/to/save<br />
TimeSchedule=0630<br />
localNice=15<br />
remoteNice=15<br />
rsync--inplace=yes<br />
rsyncCompress=no<br />
rsyncTimeout=900<br />
scheduledKeep=1<br />
dailyKeep=7<br />
weeklyKeep=4<br />
monthlyKeep=12<br />
yearlyKeep=1<br />
status=disabled<br />
Then save your job configuration file.<br />
<br />
If you have a Letsencrypt certificate on the production server, then you should also include<br />
Include=/etc/dehydrated<br />
<br />
Now check that your configuration is OK:<br />
affa --configcheck<br />
This should report no errors.<br> <br />
<br />
Generate DSA keys and send the public key to the <code>prodIP</code> server<br><br />
affa --send-key prodserv<br />
<br />
The answer from ProdIP server will be<br><br />
Job prodserv: root@ProdIP's password:<br />
enter the root ProdIP password. The answer will be:<br><br />
Public key sent to prodIP<br />
<br />
==== Copying data ====<br />
Run the Affa job on the <code>newIP</code> box<br />
<br />
affa --run prodserv<br />
<br />
Depending on the amount of data and the speed of hardware and network this first job run can take a really long time.<br />
<br />
Now view the file <code>/var/affa/prodserv/rpms-missing.txt</code><br />
<br />
less /var/affa/prodserv/rpms-missing.txt<br />
<br />
You will find a list of RPMs which are installed on <code>prodIP</code> but not on this server (<code>newIP</code>) and also RPMs installed with different versions. Install or update the listed RPMs. To verify, you can run the steps of this chapter again. Finally the rpms-missing.txt should not list any RPMs.<br />
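A hedged sketch of installing the packages from that list (the exact format of <code>rpms-missing.txt</code> may include version information, so review the file first; the guard makes this a no-op when the file is absent, and on SME you may need <code>--enablerepo=smecontribs</code> for contrib packages):<br />

```shell
# Feed the package names from Affa's report to yum. Taking only the first
# column is an assumption in case the file also lists versions - check yours:
missing=/var/affa/prodserv/rpms-missing.txt
if [ -s "$missing" ] && command -v yum >/dev/null 2>&1; then
    yum -y install $(awk '{print $1}' "$missing")
fi
```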
<br />
In case you are upgrading the SME operating system to a higher version, this list may contain not only the installed contribs but all package differences between the 2 systems. In this case, run the following command on <code>prodIP</code><br />
 /sbin/e-smith/audittools/newrpms<br />
to find out which contribs are installed on <code>prodIP</code>.<br />
<br />
==== Final data synchronization ====<br />
Ask your users to log off.<br><br />
Log into the <code>prodIP</code> box and stop all services that can modify data.<br />
<br />
SVC='qpsmtpd sqpsmtpd crond pop3 dovecot pop3s ftp httpd-e-smith atalk smb qmail' <br />
for s in $SVC; do service $s stop; done<br />
<br />
'''''Note:''' Downtime of the production server starts here''<br />
<br />
Log into the <code>newIP</code> box and run the Affa job again<br />
<br />
affa --run prodserv<br />
<br />
This run will complete very quickly as only the differences since the last run need to be synchronized.<br />
<br />
==== Switch over to the new hardware ====<br />
Log into the <code>prodIP</code> box and power it off<br />
<br />
poweroff<br />
<br />
<br />
Log into the <code>newIP</code> box and rise this server to your production server<br />
<br />
affa --rise --all prodserv<br />
<br />
This action will complete very quickly as only hardlinks are used and no data is physically moved.<br><br />
<br />
'''''Note:''' Do not be scared if your prompt looks different!''<br><br><br />
<br />
Now do a reboot<br />
<br />
reboot<br />
'''''Note:''' Downtime of the production server ends here''<br />
<br />
<br />
You now have an identical copy of your old production server running on the new hardware. Your users can now log on.<br />
<br />
==== Cleaning up ====<br />
Remove the Affa archives<br />
<br />
/bin/rm -rf /var/affa<br />
<br />
Remove the Affa packages and all status and configuration data<br />
<br />
yum remove smeserver-affa perl-Filesys-DiskFree<br />
rm -f /etc/cron.d/affa-status /etc/cron.d/affa<br />
rm -rf /home/e-smith/db/affa /home/e-smith/db/affa-report<br />
rm -rf /var/log/affa <br />
<br />
'''Note:''' Don't forget to clean up /var/affa. Otherwise you will waste disk space and see strange quota reports. See this [http://forums.contribs.org/index.php?topic=40737.msg198064#msg198064 forum thread] for details.<br />
<br />
==== Additional information ====<br />
===== Performance =====<br />
With this method you should be able to move a typical 500 GByte server to new hardware with less than 20 minutes of downtime. The final sync and the rise time do not really depend on the total file size, but on the number of files and directories.<br />
<br />
<br />
<br />
=== Using the backup and restore functions ===<br />
The general approach is to make a backup of the old SME Server and to restore it on the new one (upgraded or not).<br />
The "RPMCheck" function can be used indirectly with the backup server (compare its list against the old and the new hardware) to get the list of RPMs that are missing on the new hardware; it is not available for a backup on an external USB disk. <br />
<br />
==== Preparation ====<br />
Take a backup of the running old 'prod server' (see the above conf file ''/etc/affa/prodserv.conf'').<br />
{{Note box| As with the "rise" method, you can take the first backup (which takes a long time) without stopping the services, and the final quick backup after the users have logged out and the services have been stopped }}<br />
After the backup, set a temporary IP into the conf file of the backup job:<br />
remoteHostName=tem.po.ra.ry.IP<br />
<br />
For a backup on an external usb disk, set:<br />
remoteHostName=localhost<br />
RootDir=/the/mount/point/of/the/disk<br />
and mount the disk. You should also make a list of the installed RPMs.<br />
<br />
==== Install SME on the new hardware ====<br />
Install at least the same version of SME you were running on the old hardware or a more recent one (e.g. from SME8 to SME9).<br />
<br />
===== For the method based on a backup server: =====<br />
* Set the '''temporary IP''' as internal address of the new hardware<br />
* From the backup server create the ssh connection between the backup server and the new hardware by sending the ssh key:<br />
affa --send-key prodserv<br />
<br />
The answer from Prod-temp-IP server will be<br><br />
Job prodserv: root@Prod-temp-IP's password:<br />
enter the root Prod-temp-IP password. The answer will be:<br><br />
Public key sent to prod-temp-IP<br />
<br />
===== External usb disk: =====<br />
* Install "smeserver-affa" on the new hardware<br />
* mount the USB disk on the same mountpoint as used for the backup<br />
* go into the archive and copy the hidden .ini file into /etc/affa as the job configuration file:<br />
cd /mount/point/prodserv/scheduled.0<br />
ls -a ### to see the ini file<br />
cp .prodserv.ini /etc/affa/<br />
mv /etc/affa/.prodserv.ini /etc/affa/prodserv.conf<br />
<br />
===== In case of upgrade SME8 to SME9 =====<br />
Restoring the default data (parameter "SMEServer=yes" in the conf file of the job) will configure the SME8 yum repositories on the new SME9 server!<br />
<br />
To avoid this there are 2 possibilities:<br />
* make a copy of both folders ''/etc/yum.repos.d'' and ''/etc/yum.smerepos.d'' before the restore; this will help when reconfiguring the SME9 repositories by hand after the restore.<br />
* add following into the conf file of the backup job:<br />
Exclude=/etc/yum.repos.d<br />
Exclude=/etc/yum.smerepos.d<br />
before the last backup (of course only if you restore from the last backup - scheduled.0 - and not from an older one such as weekly.2!)<br />
<br />
==== Restore the data ====<br />
<br />
From the backup server (or from the new production server in case of restoring from external usb disk) run:<br />
affa --full-restore [--preserve-newer=no] [--delete=yes] prodserv<br />
to get a 1:1 copy of the state of the backup.<br />
<br />
'''Keep in mind that:'''<br />
:[--preserve-newer=yes]: files on the remote server with modification time newer than on the backup are not overwritten.<br />
:[--delete=yes]: all files on the remote server, which are not in the backup, are deleted. <br />
<br />
{{Warning box|A full restore with --preserve-newer no and --delete yes reconstructs the server as it was at the time of the backup. That means, that all files created or server configuration changes made before the restore will be lost! }}<br />
<br />
After the restore, the new production server will reboot.<br />
Make sure that the old hardware has been switched off or disconnected from the network beforehand, because the new hardware will take over its IP address after the reconfiguration.<br />
<br />
In case of an OS upgrade, check and, if necessary, reconfigure the yum repositories for the new version.<br />
<br />
[[Category: Howto]]<br />
[[Category: Backup]]<br />
----</div>Arnaudhttps://wiki.koozali.org/index.php?title=Affa&diff=33082Affa2017-04-01T19:49:51Z<p>Arnaud: /* FAQ */</p>
<hr />
<div>{{Languages|Affa}}<br />
{{Level|Advanced}}<br />
{{usefulnote}}<br />
<br />
<br />
=== Maintainer ===<br />
Maintainer(s) Affa3: Arnaud, stephdl<br><br />
Development of Affa2 for SME Server by the original maintainer has been discontinued.<br />
<br />
=== Version ===<br />
{{ #smeversion: affa}}<br />
{{ #smeversion: smeserver-affa}}<br />
<br />
<br />
=== Description ===<br />
The main purpose of this Affa package is to turn a SME Server into a dedicated backup box in a few minutes. Affa backs up as many SME Servers as you like, or any other servers which have sshd running and rsync installed. Once it is configured, Affa runs reliably unattended and sends warning messages in case of any errors. <br />
<br />
All backup archives are full backups, as Affa makes use of the hardlink technique. Therefore a new full backup only needs disk space for the differences plus the filesystem overhead for the hardlinks and directories (which is typically 2-3%).<br />
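The space behaviour can be illustrated with plain <code>ln</code>; this is an illustration of the hardlink principle, not Affa's actual code, and the paths are placeholders:<br />

```shell
# Two "full" snapshots of an unchanged file cost the data blocks only once:
mkdir -p /tmp/snap.1 /tmp/snap.0
echo "payload" > /tmp/snap.1/a.txt
ln -f /tmp/snap.1/a.txt /tmp/snap.0/a.txt  # second directory entry, same inode
stat -c '%h' /tmp/snap.0/a.txt             # link count: 2
```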
<br />
Affa is based on the rsync program and supports the rsync --compress option. This allows you to run backups over the internet or VPN. A typical setup is one or more Affa backup servers placed in different locations, which backup the production server(s) over the VPN.<br />
<br />
A special feature is the rise option, which allows you to rise the backup server to your production server from a backup archive in case of a dead loss of your production server. The rise is executed within an extremely short time, even with huge amounts of data. The rise feature uses hardlinks and therefore does not use up additional disk space.<br />
<br />
The rise feature can also be used to upgrade from a lower version of SME to a higher one. This also works from SME 8.1 to 9. For this to work you need 2 separate installations of SME: one is the actual running server and the other is an Affa backup server with a newer version of SME Server, where you execute the rise command. See more here: [[Moving SME to new Hardware]]<br />
<br />
A simple way to keep a server running with maximum uptime is to duplicate the hardware, e.g. to have 2 separate hardware boxes or virtual servers on separate hardware. One is the actual server and the other one is a backup system that, in case of problems, can be promoted to the actual server with the rise command in a very short time. This also allows you to upgrade the SME Server software with minimum downtime!<br />
<br />
Affa is a command line tool for system administrators and is intentionally designed without a GUI. Therefore it can be efficiently managed on the console and over slow internet connections.<br />
<br />
'''Note:''' This document also refers to Affa version 2, which is not maintained anymore. Information on Affa 3 will be gradually added here once it has been tested more.<br />
<br />
==== Affa features at a glance ====<br />
* Affa is secure: All transfers and connections are made by using the ssh protocol with public/private key authentication<br />
* Periodically runs unattended full backups. Only differences to the previous full backup are transferred over the network<br />
* Using rsync with optional bandwidth limit and compression allows backups over the internet<br />
* Uses hardlink technique, i.e. physical disk space only needed for the differences between two full backups<br />
* Keeps a configurable number of scheduled, daily, weekly, monthly and yearly full backup archives<br />
* The archives are browseable. No need to unpack them first.<br />
* Archives can be mapped to a Samba share.<br />
* Backup jobs are started by the cron daemon<br />
* Interrupted jobs continue with the already transferred data when restarted<br />
* Backs up the default e-smith directories and files when the property SMEServer is set to yes<br />
* Additional directories and files can be included<br />
* Directories and files can be excluded from the backup<br />
* Non-SME server linux systems can be backed up by setting the SMEServer property to no and using an include list<br />
* In ESXi mode, running virtual machines can be backed up. See [[Backup of ESXi Virtual Machines using Affa]]<br />
* Configurable nice level for rsync processes on the backup and source server<br />
* Optional run of custom programs before and after a job run (e.g. running tape backup)<br />
* Checks the disk space left after a job run with warning levels strict, normal or risky<br />
* Extensive checking of failure conditions<br />
* Sends failure messages to a configurable list of email addresses<br />
* Sends a warning message, if the backup server runs out of disk space<br />
* Installs an optional watchdog on the source server in case the backupserver fails (SME Server only)<br />
* Watchdog sends warning, if an expected backup did not run (SME Server only)<br />
* Watchdog sends a daily reminder message, if the error continues unchecked (SME Server only)<br />
* Option to display current status of all jobs showing times of last and next run, size and disk usage<br />
* Status can be mailed on a daily, weekly or monthly schedule<br />
* Option to display all existing archives of a job, showing date, number of files, size and bytes transferred from the source<br />
* Option to send (and revoke) the public key to the source server (SME Server and ESXi only)<br />
* Option to rise the backup server to a production server from a backup (SME Server only)<br />
* The rise feature does not physically move data and therefore is extremely fast and needs (almost) no extra disk space<br />
* The rise option can be run remotely, as the NIC driver configuration of the backup server is preserved<br />
* Compares installed RPMs on source with backup server. Sends warning message, if not in sync<br />
* Undo rise option to restore the backup server<br />
* Configurable via a e-smith style db, with one record for each job and a default record for all jobs<br />
* Logs to /var/log/affa/JOB.log and /var/log/affa/affa.log with optional debug switch for higher verbosity<br />
* Log files are rotated weekly, with 5 logs kept<br />
<br />
<div style="text-align:left" class="mw-collapsible mw-collapsed" data-collapsetext="less &uarr;" data-expandtext="Obsolete Affa 2 instructions &darr;"><br />
<br />
=== Installation or Update of Affa 2 ===<br />
<br />
====Smecontribs for sme8====<br />
yum install --enablerepo=smecontribs smeserver-affa<br />
<br />
The latest test version can be found here :<br />
[http://www.reetspetit.com/smeserver/5/noarch/smeserver-affa-2.0.0-rc6.el5.sme.noarch.rpm smeserver-affa-2.0.0-rc6.el5.sme.noarch.rpm] Note this is NOT signed.<br />
<br />
Please post bug reports in the [http://bugs.contribs.org bug tracker]<br />
<br />
When you have installed Affa for the first time run the following command to initialize the Affa database<br />
<br />
affa --make-cronjobs<br />
then log out and log back in to the console to bring the bash auto-completion (TAB key) into effect.<br />
====Obsolete way to install====<br />
{{Note box|SME7 is now EOL and has been moved to the obsolete tree}}<br />
SME7 version: '''smeserver-affa-2.0.0-rc4.noarch.rpm'''<br />
wget <nowiki>http://mirror.contribs.org/smeserver/contribs/michaelw/sme7/Affa2/smeserver-affa-2.0.0-rc4.noarch.rpm</nowiki><br />
wget <nowiki>http://mirror.contribs.org/smeserver/contribs/michaelw/sme7/Affa2/perl-Compress-Bzip2-2.09-1.2.el4.rf.i386.rpm</nowiki><br />
/usr/bin/yum --enablerepo=smecontribs localinstall \<br />
smeserver-affa-2.0.0-rc4.noarch.rpm \<br />
perl-Compress-Bzip2-2.09-1.2.el4.rf.i386.rpm<br />
<br />
SME8 version: SME8 is based on CentOS5, therefore use the perl-Compress-Bzip2 el5 rpm: <br />
wget <nowiki>http://mirror.contribs.org/releases/8/smecontribs/i386/repoview/smeserver-affa.html</nowiki><br />
For SME8 32bit:<br />
wget <nowiki>http://apt.sw.be/redhat/el5/en/i386/rpmforge/RPMS/perl-Compress-Bzip2-2.09-1.2.el5.rf.i386.rpm</nowiki><br />
/usr/bin/yum --enablerepo=smecontribs localinstall \<br />
smeserver-affa-2.0.0-rc4.noarch.rpm \<br />
perl-Compress-Bzip2-2.09-1.2.el5.rf.i386.rpm<br />
For SME8 64bit:<br />
wget http://apt.sw.be/redhat/el5/en/x86_64/rpmforge/RPMS/perl-Compress-Bzip2-2.09-1.2.el5.rf.x86_64.rpm<br />
/usr/bin/yum --enablerepo=smecontribs localinstall \<br />
smeserver-affa-2.0.0-rc4.noarch.rpm \<br />
perl-Compress-Bzip2-2.09-1.2.el5.rf.x86_64.rpm<br />
<br />
When you have installed Affa for the first time run the following command to initialize the Affa database<br />
affa --make-cronjobs<br />
then log out and log back in to the console to bring the bash auto-completion (TAB key) into effect.<br />
<br />
==== Quick start example for Affa 2 ====<br />
You have a SME 7 production server with hostname 'prodbox‘ and IP 10.200.48.1.<br><br />
Set up a second SME 7 box as your backupserver with hostname 'affabox‘ and IP 10.200.48.2. <br />
<br />
<ol><li>log into the 'affabox' and install the packages as described above.<br />
</li><li>copy the config helper script sample<br />
cp /usr/lib/affa/jobconfig-sample.pl /root/prodbox-job.pl<br />
</li><li>edit /root/prodbox-job.pl and set<br />
my $jobname='prodbox';<br />
and<br />
'remoteHostName‘=>'10.200.48.1',<br />
optionally, if the remote server port is configured to e.g. 2222 then set<br />
'sshPort'=>2222,<br />
</li><li>save and run the 'prodbox-job.pl' file. This will write the configuration (this makes the database entries and sets up the cronjobs)<br />
/root/prodbox-job.pl<br />
</li><li>generate the DSA keys and send the public key to the 'prodbox'<br />
{{Note box|When initially doing this step, you will need to temporarily enable "Allow secure shell access using standard passwords" on the production server}}<br />
affa --send-key prodbox<br />
</li><li>run the job manually<br />
<br />
affa --run prodbox<br />
</li></ol><br />
<br />
==== Configuration ====<br />
The configuration is stored in an e-smith style database. Use the db command to configure Affa.<br />
The jobname is the record key with the type 'job'.<br>To setup a new job with the name 'prodbox' enter:<br />
<br />
db affa set prodbox job<br />
<br />
then set the properties<br />
<br />
db affa setprop prodbox remoteHostName 192.168.1.1<br />
db affa setprop prodbox TimeSchedule '0030,0730,1130,1330,1730,2030'<br />
db affa setprop prodbox Description 'My Production Server'<br />
db affa setprop prodbox status enabled<br />
and so on...<br />
<br />
Alternatively you can you use a script as described above in the 'Quick start' chapter.<br />
<br />
To verify your work, type:<br />
<br />
db affa show prodbox<br />
<br />
Finally set up the cronjobs:<br />
affa --make-cronjobs<br />
<br />
</div><br />
<br />
<br />
=== Installation of Affa 3 ===<br />
The following instructions assume that a fresh Affa 3 installation is made on a fresh dedicated SME server which serves as a dedicated backup server ('affabox‘). No jobs are migrated from Affa 2. If you are using an existing SME server, just skip the instructions on how to set up a fresh SME box.<br />
<br />
Set up a dedicated machine with SME 8.1 or SME 9.0 from CDROM. Use the following example settings:<br />
<ol><li>Domain name: athome.xx (use your existing domain name)</li><br />
<li>Host name: affabox (must not match to existing host)</li><br />
<li>IP address: 192.168.0.10 (must not match to existing IP address)</li><br />
<li>Subnet: 255.255.255.0</li><br />
<li>role: Server only</li><br />
<li>Gateway: 192.168.0.1 (use your existing gateway)</li><br />
<li>DHCP: DEactivate</li><br />
<li>DNS server: 192.168.0.1 (use your existing DNS server)</li><br />
<li>activate changes: yes</li><br />
<li>The fresh server will then restart with the above settings.</li><br />
</ol><br />
<br />
To update your server, login as user root on the local console. Use the 'top' command to display running jobs and wait until 'yum' has stopped running.<br />
yum clean all<br />
yum update<br />
The fresh server will be updated.<br />
signal-event post-upgrade<br />
signal-event reboot<br />
The server will reboot. Afterwards it is ready for installation of Affa 3.<br />
<br />
Login as root on local or remote SSH console of ''affabox''.<br />
<br />
= For SME8 =<br />
<br />
To install issue the following command:<br />
yum install smeserver-affa --enablerepo=smecontribs<br />
= For SME9 =<br />
<br />
yum install smeserver-affa --enablerepo=smecontribs<br />
<headertabs/><br />
<br />
The server again needs to be updated.<br />
signal-event post-upgrade<br />
signal-event reboot<br />
The server will reboot. Afterwards Affa 3 is ready for use.<br />
<br />
Now manually create the directory for the archive files:<br />
mkdir /var/affa<br />
<br />
{{Warning box| Note that on SME9, a new SSH [https://wiki.contribs.org/AutoBlock AutoBlock] feature interferes with the Affa requirement for regular SSH logins! You might have to disable this feature to allow Affa to operate correctly. }}<br />
<br />
=== Creation of backup jobs ===<br />
Job configurations of Affa 3 are no longer stored in an e-smith style database. They are stored in configuration files located in ''/etc/affa''. Create your configuration files in this directory, e.g.<br />
touch /etc/affa/backup-smeserver.conf<br />
where ''backup-smeserver'' is your jobname, and add content as described below.<br />
<br />
==== Quick start example ====<br />
You have a SME production server with hostname 'smeserver‘ and IP 192.168.0.2.<br><br />
You have a second SME box as your backup server with hostname 'affabox‘ and IP 192.168.0.10. <br />
<br />
Login to your 'affabox' as root and edit ''/etc/affa/backup-smeserver.conf'' . Using e.g. editor ''nano'', create the following example job configuration file for jobname ''backup-smeserver'' :<br />
[backup-smeserver]<br />
remoteHostName=192.168.0.2<br />
SMEServer=yes<br />
Watchdog=yes<br />
RPMCheck=yes<br />
ConnectionCheckTimeout=120<br />
Debug=no<br />
Description=Backup of 192.168.0.2 smeserver.athome.xx<br />
DiskSpaceWarn=strict<br />
RootDir=/var/affa<br />
TimeSchedule=0630<br />
localNice=15<br />
remoteNice=15<br />
rsync--inplace=yes<br />
rsyncCompress=no<br />
rsyncTimeout=900<br />
scheduledKeep=1<br />
dailyKeep=7<br />
weeklyKeep=4<br />
monthlyKeep=12<br />
yearlyKeep=1<br />
status=enabled<br />
<br />
If you have a Letsencrypt certificate on the production server, then you should also include<br />
Include=/etc/dehydrated<br />
<br />
Then save your job configuration file.<br />
<br />
Now check that your configuration is OK:<br />
affa --configcheck<br />
This should throw no errors. Now generate the keys and send the public key to your production smeserver:<br />
affa --send-key backup-smeserver<br />
The following output should appear on the console:<br />
Job backup-smeserver: Generating DSA keys...<br />
Successfully created DSA key pair.<br />
root@192.168.0.2's password: <Enter password of 192.168.0.2/smeserver and hit ENTER><br />
Public key sent to 192.168.0.2<br />
Now run your job manually, both for test purposes and to accept the remote host key:<br />
affa --run backup-smeserver<br />
The following output should appear on the console:<br />
The authenticity of host 'backup-smeserver (192.168.0.2)' can't be established.<br />
RSA key fingerprint is 3b..........65.<br />
Are you sure you want to continue connecting (yes/no)? <yes><br />
Now your backup job should run for the first time. Depending on the volume of the files, this takes quite a while.<br />
<br />
Once the job is done, check that the archive is available:<br />
affa --list-archives<br />
<br />
or run the job a second time:<br />
affa --run backup-smeserver<br />
Note that you won't be asked for the password again. Note as well that this second run of the job ''backup-smeserver'' should run considerably faster, because not all files are copied again: just the difference to the previous run is backed up; the rest is stored as hardlinks to the existing file copies.<br />
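You can see the disk-space effect of the hardlinks with ''du'', which counts hardlinked data only once when both directories are passed in the same invocation. The sketch below uses throw-away demo paths, not Affa's real archive layout:<br />

```shell
# Two "full" archives that share their data via hardlinks cost barely more than one.
mkdir -p /tmp/du-demo/scheduled.1
dd if=/dev/zero of=/tmp/du-demo/scheduled.1/blob bs=1024 count=200 2>/dev/null
cp -al /tmp/du-demo/scheduled.1 /tmp/du-demo/scheduled.0   # hardlinked second archive
ONE=$(du -sk /tmp/du-demo/scheduled.1 | cut -f1)
BOTH=$(du -sk --total /tmp/du-demo/scheduled.* | tail -n1 | cut -f1)
echo "one archive: ${ONE}K, both archives together: ${BOTH}K"
rm -rf /tmp/du-demo
```

The combined size is only a few kilobytes larger than a single archive: the overhead is just the extra directory entries, which matches the 2-3% overhead mentioned above.<br />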
<br />
Check that the second archive is available too:<br />
affa --list-archives<br />
<br />
From here you are able to work as with Affa 2. Modify your config file as required and described below. Automate the backup using the command ''affa --make-cronjobs'', see below.<br />
<br />
==== Job configuration properties ====<br />
'''Note 1:''' The default values shown in this table are the Affa program defaults and not to be confused with the preset values in the job configuration files, e.g. backup-smeserver.conf.<br />
<br />
'''Note 2:''' The complete documentation of the Affa3 settings is available in [http://affa.sourceforge.net/AffaPdfMan.pdf] . It should be taken into consideration!<br />
<br />
'''Note 3:''' Affa 3 for SME is a fork of [http://affa.sourceforge.net/ Affa 3 for CentOS]. It adds a few options back again which have been removed. Generally the configuration properties as described [http://wiki.nikoforge.org/Affa_Configuration/ here] do apply. Arnaud added the following functions for SME:<br />
<ul><br />
<li>The parameter “SMEServer”</li><br />
<li>The parameter and function “Watchdog”</li><br />
<li>The parameter and function “RPMCheck”</li><br />
<li>The functions “--rise” and “--undo-rise”</li><br />
</ul><br />
Consequently the list below should reproduce the list for the unforked Affa 3 version, plus the properties added above. For details refer to [https://www.guedel.eu/index.php/informatique/sme-server-8/affa-v3 Arnaud's website].<br />
{| border="1" cellpadding="3" cellspacing=0<br />
|-<br />
| '''Property''' || '''Value''' || '''Multivalue''' || '''Default''' || '''Description''' <br />
|-<br />
| remoteHostName <br />
| FQHN or IP<br />
| no<br />
| <none><br />
| FQHN or IP of the source host (mandatory)<br />
|-<br />
| remoteUser<br />
| account<br />
| no<br />
| root<br />
| The user account to be used with all ssh logins. May be case sensitive, e.g. Administrator account on Windows<br />
|-<br />
| Description<br />
| text string<br />
| no<br />
| <none><br />
| Any text that describes the job<br />
|-<br />
| TimeSchedule <br />
| HHMM<br />
| yes<br />
| 2230<br />
| The time the job will run. Use exactly 4 digits, no colon, no point. '''Important:''' Using the proper format HHMM is essential. Badly formatted TimeSchedule will cause strange Perl errors. Multiple runs can be achieved by repeating the line for each run time.<br />
|-<br />
| status <br />
| enabled ''or'' disabled<br />
| no<br />
| enabled<br />
| When set to disabled, no cron entries will be made. You can still run a job manually. <br />
|-<br />
| Include<br />
| full path<br />
| yes<br />
| <none><br />
| File(s) or directory(s) to be included in the backup. <br />
|-<br />
| Exclude<br />
| full path<br />
| yes<br />
| <none><br />
| File(s) or directory(s) to be excluded from the backup. <br />
|-<br />
| RootDir<br />
| full path<br />
| no<br />
| /var/affa<br />
| where to store the backup archives. Do not use /home/e-smith or /root as these are included in the backup and therefore the rise option will not work! Recommended: /var/affa<br />
|-<br />
| scheduledKeep<br />
| integer >= 1<br />
| no<br />
| 1<br />
| how many of the scheduled backups to be kept<br />
|-<br />
| dailyKeep<br />
| integer >= 0<br />
| no<br />
| 7<br />
| how many of the daily backups to be kept<br />
|-<br />
| weeklyKeep<br />
| integer >= 0<br />
| no<br />
| 4<br />
| how many of the weekly backups to be kept<br />
|-<br />
| monthlyKeep<br />
| integer >= 0<br />
| no<br />
| 12<br />
| how many of the monthly backups to be kept<br />
|-<br />
| yearlyKeep<br />
| integer >= 0<br />
| no<br />
| 2<br />
| how many of the yearly backups to be kept<br />
|-<br />
| EmailAddress<br />
| name@domain.com<br />
| yes<br />
| root<br />
| comma separated list of mail addresses, where the messages should be sent to<br>'''Note:''' By default Affa only sends messages on errors, never on success (see property chattyOnSuccess).<br />
|-<br />
| RetryAttempts<br />
| integer >= 0<br />
| no<br />
| 4<br />
| When set to a value > 0, Affa re-runs a failed job RetryAttempts times with a delay of RetryAfter seconds.<br />
|-<br />
| RetryAfter<br />
| seconds >= 0<br />
| no<br />
| 900<br />
| When set to a value > 0, wait RetryAfter seconds before re-running the job after an error. Only applicable with RetryAttempts > 0<br />
|-<br />
| RetryNotification<br />
| yes ''or'' no<br />
| no<br />
| no<br />
| when set to no, Affa does not send an error message when a job has failed and RetryAttempts is configured. An error message is only sent when the last attempt has failed.<br />
|-<br />
| NRPEtrigger<br />
| hours<br />
| no<br />
| 24<br />
| NRPE reports a job as critical when the last successful run is older than NRPEtrigger hours. To exclude the job from monitoring, set it to a value < 0.<br />
|-<br />
| SambaShare<br />
| yes ''or'' no<br />
| no<br />
| no<br />
| Access to the job archives via CIFS protocol.<br />
|-<br />
| SambaValidUser<br />
| local account<br />
| yes<br />
| affa<br />
| User who has permission to access the job archives via the CIFS protocol.<br />
|-<br />
| preJobCommand<br> preJobCommandRemote<br />
| path relative to /etc/affa/scripts/<br />
| yes<br />
| <none><br />
| Scripts to be executed before a job run. The job name and type (scheduled, daily etc.) are passed as arguments to the program. The preJobCommandRemote scripts are copied to the remote server and then executed there. The scripts are executed in alphabetical order. Use a numerical prefix if you need a specific order e.g. 01scriptB, 02remotescriptA, 03scriptA. Use the included prescript-sample.pl and prescriptRemote-sample.pl as a starting point for your own scripts.<br>'''Note:''' If you use the parameter “SMEServer=yes”, you may delete the “Include” entries for the SME default directories in the jobconfig.ini: they will be backed up automatically.<br />
|-<br />
| postJobCommand<br> postJobCommandRemote<br />
| path relative to /etc/affa/scripts/<br />
| yes<br />
| <none><br />
| Scripts to be executed after a job run. The job name, the type (scheduled, daily etc.) and the exit code of the job run are passed as arguments to the program. The postJobCommandRemote scripts are copied to the remote server and then executed there. The scripts are executed in alphabetical order. Use a numerical prefix if you need a specific order e.g. 01scriptB, 02remotescriptA, 03scriptA. Use the included postscript-sample.pl and postscriptRemote-sample.pl as a starting point for your own scripts.<br>'''Note:''' If you use the parameter “SMEServer=yes”, you may delete the “Include” entries for the SME default directories in the jobconfig.ini: they will be backed up automatically.<br />
|-<br />
| dedup<br />
| yes ''or'' no<br />
| no<br />
| no<br />
| The purpose of deduplication is to remove duplicate files to save backup space. When set to 'yes', file deduplication is run after the synchronization has completed. It looks for files that have identical content, user, group and permissions and replaces duplicates by hardlinks. Deduplication scans the just completed archive and the previous one, which usually is scheduled.0 and daily.0, or scheduled.0 and scheduled.1. Consider this scenario: a user has renamed directories or files. Rsync sees those as new ones and copies them. Deduplication finds the identical copies in the previous archive and replaces them by hardlinks. To use deduplication the Freedup program needs to be installed. Affa actually runs freedup -upg scheduled.0 <previous_archive>.<br />
|-<br />
| dedupKill<br />
| yes ''or'' no<br />
| no<br />
| no<br />
| When set to 'no', the job ignores affa --kill or affa --killall while deduplication is running. This is useful in the context of killAt, which is typically used to stop bandwidth utilisation.<br />
|-<br />
| sshPort<br />
| service port <br />
| no<br />
| 22<br />
| When sshd on the remote host listens on a non-standard port, set the port here.<br />
|-<br />
| ConnectionCheckTimeout<br />
| seconds<br />
| no<br />
| 120<br />
| Affa checks the ssh connection before the rsync process is started and exits with an error after the configured time if the host did not respond.<br />
|- <br />
| BandwidthLimit<br />
| integer>=0 kilobytes per second<br />
| no<br />
| 0<br />
| Limits the data transfer rate. A value of zero specifies no limit.<br />
|- <br />
| rsyncTimeout<br />
| seconds<br />
| no<br />
| 900<br />
| Rsync exits after the configured time if no data was transferred. This avoids infinitely hanging in case of a network error.<br />
|- <br />
| rsyncCompress<br />
| yes ''or'' no <br />
| no<br />
| yes<br />
| Compress the transferred data. May be useful with slow internet connections. Increases CPU load on remote and backup host.<br />
|- <br />
| rsync--inplace<br />
| yes ''or'' no<br />
| no<br />
| yes <br />
| Set to no if the rsync version on the remote host does not support this option.<br />
|-<br />
| rsync--modify-window<br />
| integer >= 0 <br />
| no<br />
| 0<br />
| When comparing two timestamps, rsync treats the timestamps as being equal if they differ by no more than the modify-window value. This is normally 0 for an exact match. A value > 0 is useful if you can't get the clocks of the remote host and the Affa server in sync.<br />
|-<br />
| rsyncOptions<br />
| string<br />
| no<br />
| <none><br />
| Additional option string to be passed to rsync<br />
|-<br />
| localNice <br />
| -19...+19<br />
| no<br />
| 0<br />
| run rsync local process niced. <br />
|-<br />
| remoteNice<br />
| -19...+19<br />
| no<br />
| 0<br />
| run rsync process on source niced. <br />
|-<br />
| killAt<br />
| HHMM<br />
| no<br />
| <none><br />
| The time at which a job will be killed if it was still running. You can use it for example to kill jobs that are running over the internet early in the morning so that your users have the full bandwidth available during office hours. <br />
|-<br />
| resumeKilledAt<br />
| HHMM<br />
| no<br />
| <none><br />
| The time at which a killed job will be resumed. This allows you to start earlier in the evening than the scheduled time.<br />
|-<br />
| chattyOnSuccess<br />
| integer >= 0<br />
| no<br />
| 0 <br />
| When set to a value>0, Affa sends a message on a successfully completed job run and decrements the chattyOnSuccess value. When the value has reached zero, Affa falls back to the default and only sends messages on errors.<br />
|-<br />
| AutomountDevice<br>AutomountPoint<br />
| full path<br />
| no<br />
| <none><br />
| Device and mountpoint of backup device (e.g. USB disk). Device is automounted before a job starts and unmounted after job completion. With both properties empty no automount is done.<br />
|-<br />
| AutomountOptions<br />
| string<br />
| no<br />
| <none><br />
| An option string passed to the mount command.<br />
|-<br />
| AutoUnmount <br />
| yes ''or'' no<br />
| no<br />
| yes<br />
| When set to 'no' the automounted device stays mounted after the Affa run.<br />
|-<br />
| Debug <br />
| yes ''or'' no<br />
| no<br />
| no<br />
| Set to yes to increase log verbosity.<br />
|-<br />
| remoteRsyncBinary <br />
| full path<br />
| no<br />
| /usr/bin/rsync<br />
| If the rsync program on the remote server is located in non-standard location, set it here<br />
|-<br />
| remoteNiceBinary <br />
| full path<br />
| no<br />
| /bin/nice<br />
| If the nice program on the remote server is located in non-standard location, set it here.<br />
|-<br />
| localRsyncBinary <br />
| full path<br />
| no<br />
| /usr/bin/rsync<br />
| If the local rsync program is located in non-standard location, set it here<br />
|-<br />
| localNiceBinary <br />
| full path<br />
| no<br />
| /bin/nice<br />
| If the nice local program is located in non-standard location, set it here.<br />
|-<br />
| RemoteAuthorizedKeysFile <br />
| path relative to remote user's home or full path<br />
| no<br />
| .ssh/authorized_keys2<br />
| If the remote host stores the authorized keys file in a non-standard location, set it here.<br />
|-<br />
| rsyncdMode<br />
| yes ''or'' no<br />
| no<br />
| no<br />
| Set to yes to connect to the rsync daemon on the remote host (instead of running rsync over ssh).<br />
|-<br />
| rsyncdModule<br />
| string<br />
| no<br />
| AFFA<br />
| The rsyncd module name (only applicable with rsyncdMode=yes).<br />
|-<br />
| rsyncdUser<br />
| string<br />
| no<br />
| affa<br />
| The username for authentication to the rsync daemon (only applicable with rsyncdMode=yes).<br />
|-<br />
| rsyncdPassword<br />
| string<br />
| no<br />
| <none><br />
| The password for authentication to the rsync daemon (only applicable with rsyncdMode=yes).<br />
<br />
|-<br />
| globalStatus<br />
| enabled ''or'' disabled ''or'' jobs<br />
| no<br />
| jobs<br />
| Set to enabled or disabled to override the status settings in all job sections. When set to the value jobs, the status settings in the job sections are effective.<br>'''Note:''' This property is allowed only in the [GlobalAffaConfig] section. <br />
|-<br />
| sendStatus<br />
| daily ''or'' weekly ''or'' monthly ''or'' never<br />
| no<br />
| weekly<br />
| Defines how frequently the status will be sent.<br>'''Note:''' This property is allowed only in the [GlobalAffaConfig] section. <br />
|-<br />
| SMEServer<br />
| yes ''or'' no<br />
| no<br />
| no<br />
| When set to yes, the default e-smith directories are automatically included and the property RPMCheck=yes can be used.<br>'''Note:''' this property is not implemented in Affa 3 for CentOS. It has been specifically added to the fork for SME.<br />
|-<br />
| RPMCheck<br />
| yes ''or'' no<br />
| no<br />
| no<br />
| Only applicable to jobs that back up a SME server. Compares the installed packages of the source host with this Affa backup host. Sends a message with a diff list if they are not in sync. This check is useful if you want to have the option to rise the backup server to a production server from a backup.<br>'''Note:''' this property is not implemented in Affa 3 for CentOS. It has been specifically added to the fork for SME.<br />
|-<br />
| Watchdog<br />
| yes ''or'' no<br />
| no<br />
| yes<br />
| Only applicable to jobs that back up a SME server. When a job is started, Affa installs a watchdog script on the source in /etc/cron.d/, which sends a warning message if the next scheduled job (taken from the TimeSchedule property + 10 minutes) did not run. This guarantees that you will be notified even in case of an Affa server outage. The watchdog script sends a daily reminder message if the error continues. The next job run replaces the watchdog script with a new trigger time.<br>'''Note:''' this property is not implemented in Affa 3 for CentOS. It has been specifically added to the fork for SME.<br />
|}<br />
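Since a badly formatted TimeSchedule causes strange Perl errors (see the table above), a quick grep can flag values that are not exactly four digits before a job runs. The sample file below is hypothetical; on a real system you would grep your files under /etc/affa/:<br />

```shell
# Flag TimeSchedule values that are not exactly 4 digits (HHMM, no colon, no point).
CONF=$(mktemp)
printf '%s\n' '[somejob]' 'TimeSchedule=630' 'TimeSchedule=0630' > "$CONF"
BAD=$(grep '^TimeSchedule=' "$CONF" | grep -Evc '^TimeSchedule=[0-9]{4}$')
echo "$BAD badly formatted TimeSchedule line(s)"   # '630' is flagged, '0630' passes
rm -f "$CONF"
```

This is only a formatting check; affa --configcheck remains the authoritative validation.<br />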
<br />
==== Default configuration properties ====<br />
For all 'name=value' properties, defaults can be set in the [GlobalAffaConfig] section; these are used in every job configuration where the corresponding property is omitted. This is useful when you set up many similar jobs. <br />
<br />
Example: You want to set the property 'localNice' to 19 for all jobs. Then add the following section in one of your configuration files:<br />
[GlobalAffaConfig]<br />
localNice=19<br />
Consequently you don't need to set this property for your individual jobs. Properties set in the job record override the defaults.<br />
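Putting it together, one configuration file can carry the [GlobalAffaConfig] defaults plus one or more job sections. The sketch below writes such a file to a temporary path purely for illustration; the job name and host are invented, and on a real system the file would live under /etc/affa/:<br />

```shell
# Sketch: global defaults plus a job section in one file (illustrative values only).
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
[GlobalAffaConfig]
localNice=19
sendStatus=weekly

[backup-mailserver]
remoteHostName=192.168.0.3
SMEServer=yes
TimeSchedule=0130
status=enabled
EOF
SECTIONS=$(grep -c '^\[' "$CONF")
echo "$SECTIONS sections written"   # the global defaults plus one job
rm -f "$CONF"
```

With this layout, localNice and sendStatus apply to every job unless a job section overrides them.<br />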
<br />
==== Properties for global Settings ====<br />
The following special properties are only applicable to the [GlobalAffaConfig] section:<br />
<ul><br />
<li>sendStatus</li><br />
<li>globalStatus</li><br />
</ul><br />
<br />
All jobs can be disabled for execution through the cronjobs by setting 'globalStatus' to 'disabled'.<br />
[GlobalAffaConfig]<br />
globalStatus=disabled<br />
<br />
To re-enable the job runs, either delete the above configuration line, or set:<br />
globalStatus=jobs<br />
which is the default value.<br />
<br />
You can also set the value to<br />
globalStatus=enabled<br />
which enforces the job execution through the cronjobs and overrides the values specified in the individual job sections.<br />
<br />
=== Usage and command line options ===<br />
{{Note box|Options can be abbreviated to uniqueness, e.g. --mak is equal to --make-cronjobs}}<br />
'''affa --run JOB'''<br />
Starts a job run. Usually done by the cronjob. <br />
<br />
<br />
'''affa --make-cronjobs'''<br />
Configures the cronjobs as scheduled in the job records. Run this command to make changes of time-related properties effective, i.e. the TimeSchedule, killAt and resumeKilledAt properties. Every 15 minutes Affa checks whether a configuration file is newer than the cronjob file and runs the make-cronjobs command itself if this is the case, so you can also just wait a maximum of 15 minutes before your updated job configurations become effective.<br />
<br />
'''affa --configcheck'''<br />
Checks the syntax and values in all configuration files found in /etc/affa/. Run this command after modifying the configuration. Lower/upper case errors in property names are corrected automatically.<br />
<br />
'''affa --send-key [JOB JOB ...]'''<br />
This first generates the DSA key on the Affa server, if not already done. Then the public key is sent to the host 'remoteHostName' as defined in the section of each job JOB, and the job-specific ssh known-hosts entry is generated.<br />
{{Note box|When initially doing this step, you will need to temporarily enable "Allow secure shell access using standard passwords" on the source server.}}<br />
<br />
'''affa --check-connections [JOB JOB ...]'''<br />
Checks the ssh login and, if applicable, the rsyncd authentication for all jobs given as arguments. Without arguments, all jobs are checked.<br />
<br />
'''affa --full-restore [--preserve-newer=no] [--delete=yes] JOB [ARCHIVE]'''<br />
Does a full restore on the remote server of all backed-up files and directories from the backup ARCHIVE. If ARCHIVE is not given, the archive 'scheduled.0' is used as the default. The full restore reconstructs the server as it was at the time of the backup. After the restore the source host reboots.<br />
<br />
With option --preserve-newer=yes files on the remote server with modification time newer than on the backup are not overwritten.<br />
<br />
With option --delete=yes all files on the remote server, which are not in the backup, are deleted.<br />
<br />
If the parameter “SMEServer=yes” is set, the signal-event pre-restore and signal-event post-upgrade will be used automatically.<br />
<br />
'''affa --list-archives JOB'''<br />
Displays a table of all present archives of job JOB with date, number of files, size and bytes received. While the column buTime shows the actual rsync time, the column ddTime shows the length of the subsequent deduplication run. ddYld is the gained deduplication yield in bytes.<br />
<br />
<small><pre><br />
Affa version 3.1.0-0 on affa-2.mydomain.com<br />
+------------------------------------------------------------------------------+<br />
| Job: fshare-ak |<br />
| Description: Fileserver AK |<br />
| Directory: /var/affa/fshare-ak/ |<br />
| Hostname: 10.204.104.4 |<br />
| Email: sysadmin@mydomain.com |<br />
+-----+----------------------+--------+--------+-------+-------+-------+-------+<br />
| Run | Completion date | buTime | ddTime | ddYld | Files | Size | Recvd |<br />
+-----+----------------------+--------+--------+-------+-------+-------+-------+<br />
| Y 0 | Wed 2010-06-23 20:26 | 11m53s | - | - | 412k | 143G | 470M |<br />
+-----+----------------------+--------+--------+-------+-------+-------+-------+<br />
| M11 | Sun 2010-08-29 20:22 | 7m50s | - | - | 417k | 153G | 14M |<br />
| M10 | Sun 2010-09-26 20:23 | 8m57s | - | - | 430k | 156G | 14M |<br />
| M 9 | Sun 2010-10-31 20:25 | 10m05s | - | - | 448k | 161G | 15M |<br />
| M 8 | Sun 2010-11-28 20:31 | 16m42s | - | - | 463k | 167G | 17M |<br />
| M 7 | Sun 2011-01-02 20:48 | 33m07s | - | - | 486k | 173G | 1.1G |<br />
| M 6 | Sun 2011-01-30 20:33 | 18m54s | - | - | 493k | 176G | 24M |<br />
| M 5 | Sun 2011-02-27 20:28 | 13m31s | - | - | 490k | 176G | 19M |<br />
| M 4 | Sun 2011-03-27 20:28 | 13m08s | - | - | 491k | 175G | 17M |<br />
| M 3 | Sun 2011-05-01 20:30 | 15m41s | - | - | 493k | 179G | 18M |<br />
| M 2 | Sun 2011-05-29 20:28 | 13m40s | - | - | 494k | 182G | 31M |<br />
| M 1 | Sun 2011-06-19 20:26 | 11m20s | - | - | 493k | 183G | 17M |<br />
| M 0 | Sun 2011-07-03 20:28 | 13m46s | - | - | 496k | 183G | 18M |<br />
+-----+----------------------+--------+--------+-------+-------+-------+-------+<br />
| W 3 | Sun 2011-07-10 20:28 | 13m38s | - | - | 479k | 180G | 16M |<br />
| W 2 | Mon 2011-07-11 20:35 | 20m27s | - | - | 479k | 180G | 301M |<br />
| W 1 | Fri 2011-07-15 20:30 | 15m53s | - | - | 480k | 180G | 62M |<br />
| W 0 | Sun 2011-07-24 19:39 | 9m25s | - | - | 482k | 181G | 16M |<br />
+-----+----------------------+--------+--------+-------+-------+-------+-------+<br />
| D 6 | Thu 2011-07-28 19:42 | 12m22s | - | - | 483k | 182G | 176M |<br />
| D 5 | Fri 2011-07-29 23:29 | 11m10s | 5h33m | 45G | 483k | 182G | 16M |<br />
| D 4 | Sat 2011-07-30 19:53 | 23m26s | 2h30m | 8.3G | 483k | 182G | 17M |<br />
| D 3 | Sun 2011-07-31 20:07 | 37m31s | 4m47s | 8.3G | 483k | 182G | 17M |<br />
| D 2 | Mon 2011-08-01 20:44 | 1h14m | 7h50m | 8.5G | 484k | 182G | 630M |<br />
| D 1 | Tue 2011-08-02 20:02 | 32m28s | 12h20m | 8.3G | 484k | 182G | 74M |<br />
| D 0 | Wed 2011-08-03 19:58 | 28m46s | 11h01m | 8.5G | 484k | 182G | 214M |<br />
+-----+----------------------+--------+--------+-------+-------+-------+-------+<br />
| S 0 | Thu 2011-08-04 20:00 | 30m28s | 11h52m | 8.5G | 484k | 182G | 203M |<br />
+-----+----------------------+--------+--------+-------+-------+-------+-------+<br />
</pre></small><br />
<br />
'''affa --list-archives [--csv] JOB'''<br />
With --csv, the output is in a machine-readable semicolon-separated format.<br />
<small><pre><br />
Archive:Count;Date;Files;Size;RootDirFilesystemAvail;RootDirFilesystemUsed;valid;TotalBytesReceived;ExecutionTime;DedupTotalFiles;DedupReplacedFiles;DedupSavedBytes;DedupExectime;DedupDate<br />
monthly;00000;201509270631;393237;37252509103;166889260;47876936;yes;1327370;1160638358;99;;;;;<br />
weekly;00003;201510040531;390273;37042612135;164566692;50199504;yes;1117726;45430532;59;;;;;<br />
weekly;00002;201510110531;395553;37299589800;162612204;52153992;yes;1117092;92369237;64;;;;;<br />
weekly;00001;201510180531;403831;37707599172;161855844;52910352;yes;1317083;148317764;64;;;;;<br />
weekly;00000;201510250531;406509;38146200127;161041156;53725040;yes;1104759;64083948;63;;;;;<br />
daily;00006;201510310531;415663;39248862982;162979176;51787020;yes;1958110;198060283;66;;;;;<br />
daily;00005;201511010531;411014;38563389171;163061136;51705060;yes;1156506;48903675;75;;;;;<br />
daily;00004;201511020531;408910;38612857040;162359124;52407072;yes;1007434;210812387;75;;;;;<br />
daily;00003;201511030531;410615;37815861577;162026060;52740136;yes;1524069;162697515;67;;;;;<br />
daily;00002;201511040531;414784;38131734213;161990024;52776172;yes;1571657;134250735;63;;;;;<br />
daily;00001;201511050531;420780;38433895988;161906580;52859616;yes;2021507;149388808;103;;;;;<br />
daily;00000;201511060531;425920;38677614350;161778812;52987384;yes;1987971;166510621;71;;;;;<br />
scheduled;00000;201511070531;425663;38815712018;161612600;53153596;yes;2003540;139603231;68;;;;;<br />
</pre></small><br />
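For scripting, this semicolon-separated output can be post-processed with standard tools. A sketch using awk (the affa_csv function merely replays a sample line from the listing above in place of the real affa command):

```shell
# Print archive name, date stamp and size in GB from the --csv listing.
# affa_csv stands in for 'affa --list-archives --csv JOB' here.
affa_csv() {
  printf '%s\n' 'daily;00000;201511060531;425920;38677614350;161778812;52987384;yes;1987971;166510621;71;;;;;'
}
affa_csv | awk -F';' '$1 !~ /^Archive/ {
  printf "%s.%d  %s  %.1f GB\n", $1, $2, $3, $5/1e9
}'
# -> daily.0  201511060531  38.7 GB
```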
<br />
'''affa --status [--csv]'''<br />
Displays a table of all configured jobs with enable status, time of last and next run, size of the most recent archive, execution time of the last run and the number of scheduled (S), daily (D), weekly (W), monthly (M) and yearly (Y) archives. If 'Last' shows 'failed', the job did not run in the last 24 hours; for disabled jobs 'Last' therefore always shows 'failed' after 24 hours. To see the date and time of the last run of those jobs use the --list-archives option. Column 'Next' shows the time when the next run will be started, if 'Enabled' is 'yes'. Column 'ddYld' shows the deduplication yield in bytes.<br />
<br />
<small><pre><br />
Affa version 3.1.0-0 on affa-2.mydomain.com<br />
+--------------+-----+-------+--------+-------+-------+-------+----------------+<br />
| Job | ENA | Last | Time | Next | Size | ddYld | N of S,D,W,M,Y |<br />
+--------------+-----+-------+--------+-------+-------+-------+----------------+<br />
| ads-ak | yes | 20:20 | 0m29s | 20:20 | 28M | - | 1, 7, 2, 0, 0 |<br />
| ak-user-ma.. | yes | 22:31 | 1m51s | 22:30 | 170M | - | 3, 7, 4, 6, 0 |<br />
| azubi-1 | yes | 03:39 | 3h24m | 00:15 | 44G | 1.2G | 1, 7, 4,10, 1 |<br />
| dcpant | yes | 03:06 | 1m03s | 03:05 | 1.3G | - | 1, 7, 4,10, 1 |<br />
| eshare | yes | 21:06 | 21m01s | 20:45 | 24G | 178M | 1, 7, 4,10, 1 |<br />
| etherpad | yes | 20:40 | 0m13s | 20:40 | 3.1M | - | 1, 7, 4, 2, 0 |<br />
| fazubi | yes | 08:16 | 11h16m | 21:00 | 132G | 3.3G | 1, 7, 4,10, 1 |<br />
| fschare-rh | yes | de-duplicating (pid 9719) | 1, 7, 4,10, 1 |<br />
| fsh-02 | yes | 07:53 | 12h23m | 19:30 | 182G | 8.5G | 1, 7, 4,10, 1 |<br />
| fshare-ak2 | yes | 00:30 | 0m26s | 00:30 | 415M | 544k | 1, 7, 4,10, 1 |<br />
| helpdesk | yes | 21:27 | 2m16s | 21:25 | 895M | 138k | 1, 7, 4,10, 0 |<br />
| it-share | yes | running rsync (pid 9744) | 1, 7, 1, 0, 0 |<br />
| lightroom | yes | waiting (pid 9528) | 1, 7, 4, 2, 0 |<br />
| localhost | yes | 02:15 | 0m08s | 02:15 | 395k | - | 1, 7, 2, 0, 0 |<br />
| mediawiki | yes | 16:10 | 0m06s | 19:40 | 1.1G | 6.7M | 3, 7, 4,10, 1 |<br />
| mshare | yes | 00:48 | 1h33m | 23:15 | 18G | 2.4G | 1, 7, 4,10, 1 |<br />
| wshare | yes | 00:34 | 1h49m | 22:45 | 23G | 484M | 1, 7, 4,10, 1 |<br />
+--------------+-----+-------+--------+-------+-------+-------+----------------+<br />
2 disabled jobs not listed. Use --all to display.<br />
</pre></small><br />
With --csv, the output is in a machine-readable semicolon-separated format.<br />
<br />
'''affa --show-config-pathes [--csv] [JOB JOB ...]'''<br />
Prints the full path of the file in which the section of each job JOB is defined.<br />
<br />
'''affa --show-default-config'''<br />
Prints a list of all allowed property names with their default values. These values are used when a property is omitted in the job section.<br />
<br />
'''affa --show-schedule [--all]'''<br />
Prints a 'graphical' timetable for all enabled jobs. The resolution is 30 minutes. An 'S' character marks the scheduled start times. The duration of each job run is marked with '=' (rsync) and '~' (dedup) characters.<br />
<small><pre><br />
Affa version 3.1.0-0 on affa-2.mydomain.com<br />
TIME 12:00 16:00 20:00 0:00 4:00 8:00<br />
fsh-02 -------- -------S ~~~~~~~~ ~~~~~~~~ ~~~~~~~~ --------<br />
ads-ak -------- -------- S------- -------- -------- --------<br />
etherpad -------- -------- -S------ -------- -------- --------<br />
eshare -------- -------- -S~----- -------- -------- --------<br />
fazubi -------- -------- --S~~~~~ ~~~~~~~~ ~~~~~~~~ ~-------<br />
helpdesk -------- -------- --S----- -------- -------- --------<br />
it-share -------- -------- ---S=~~~ ~~~~~~~~ ~~~~~~~- -------- busy<br />
wshare -------- -------- -----S~~ ~~------ -------- --------<br />
ak-user-management -------- -------- -----S-- -------- -------- --------<br />
fschare-rh -------- -------- -----S=- -------- --~----- -------- busy<br />
mshare -------- -------- ------S= ~~------ -------- --------<br />
azubi-1 -------- -------- -------- S=~~~~~~ -------- --------<br />
fshare-ak2 -------- -------- -------- -S------ -------- --------<br />
mediawiki -------- -------- -------- ---S---- -------- ----S---<br />
localhost -------- -------- -------- ----S--- -------- --------<br />
dcpant -------- -------- -------- ------S- -------- --------<br />
lightroom -------- -------- -------- -------- -------- S------- busy<br />
Symbols: S=scheduled K=kill R=resume '='=rsync '~'=dedup<br />
2 disabled jobs not listed. Use --all to display.<br />
</pre></small><br />
<br />
'''affa --log-tail [JOB]'''<br />
Displays the tail of the logfile of job JOB with live update. This command is identical to tail -n 50 -f /path/to/logfile. Without the JOB argument the global logfile is shown.<br />
<br />
'''affa --send-status'''<br />
Sends the status table, the disk-usage and the archive list of all jobs to the email addresses configured in the [GlobalAffaConfig] section.<br />
<br />
'''affa --disk-usage'''<br />
Shows the current disk usage of all root dir filesystems.<br />
<small><pre><br />
Affa version 3.1.0-0 on affa-2.mydomain.de<br />
+------+--------+--------+----------------------------------------------------+<br />
| Use% | Used | Avail | Root Dir |<br />
+------+--------+--------+----------------------------------------------------+<br />
| 71% | 938GB | 365GB | /var/affa |<br />
| 20% | 194GB | 759GB | /mnt/affadev |<br />
+------+--------+--------+----------------------------------------------------+<br />
</pre></small><br />
With --csv, the output is printed in a machine-readable semicolon-separated format.<br />
<br />
'''affa --cleanup JOB'''<br />
If you have lowered a keep value, e.g. scheduledKeep, archives with higher indices will no longer be shifted and would otherwise exist forever. This option finds these archives and deletes them after confirmation.<br />
<br />
'''affa --rename-job JOB NEWNAME'''<br />
Renames the job JOB to NEWNAME including the section name and archive directories.<br />
<br />
'''affa --move-archive JOB NEWROOTDIR'''<br />
Moves the archive directory of job JOB to the rootdir NEWROOTDIR and adjusts the value of property RootDir. NEWROOTDIR must be a full path starting with a slash. As moving across filesystems (e.g. from an external USB drive to the local disk) is not possible, Affa uses a copy command in this case and deletes the source directory after that. Depending on the archive size, copying across filesystems can take a long time.<br />
<br />
'''affa --delete-job [--revoke-key] JOB'''<br />
Irreversibly deletes all archives of job JOB and sets the status property to disabled. With the --revoke-key option, the public key on the remote server is deleted as well.<br />
<br />
'''affa --revoke-key JOB'''<br />
Deletes the public DSA key on the remote server.<br />
<br />
'''affa --kill JOB'''<br />
Terminates the running job JOB and all its child processes (rsync processes).<br />
<br />
'''affa --killall'''<br />
Terminates all running jobs.<br />
<br />
'''affa --mailtest JOB'''<br />
Sends a test email to the email addresses configured in the JOB section. Use this to verify that your mail system is working.<br />
{{Note box|By default Affa only sends messages on errors, never on success (see property chattyOnSuccess).}}<br />
<br />
'''affa --nrpe [JOB JOB ...]'''<br />
Checks for jobs that have not run within the last NRPRtrigger hours and reports them as failed. A Nagios/ICINGA NRPE-compliant message is printed and an exit status is returned.<br />
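For example, a Nagios NRPE command definition could look like this (the path to the affa binary is an assumption; adjust it to your installation):
 command[check_affa]=/usr/bin/affa --nrpe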
<br />
'''affa --version'''<br />
Displays the Affa version number and checks Samba and Freedup installation.<br />
<br />
'''affa --warranty'''<br />
Displays the disclaimer of liability.<br />
<br />
'''affa --license'''<br />
Displays the license Affa is released under.<br />
<br />
'''affa --help'''<br />
Displays a short help.<br />
<br />
'''affa --debug'''<br />
Enables verbose logging. Overrides job and global configurations.<br />
<br />
==== SME Specific usage and command line options ====<br />
<br />
These commands are implemented in a fork of the Affa 3 project for use with SME. They are not part of the Affa 3 for CentOS project.<br />
<br />
'''affa --rise [--all] JOB [ARCHIVE]'''<br />
Runs a full restore on the Affa server <b>(!!!)</b> of all standard files and directories from the backup ARCHIVE of job JOB. In other words: after completion, the Affa box reboots as a clone of the source server. Ensure that the source server has been powered off before you reboot the Affa box, otherwise the network interface will not come up. This is important when you run --rise remotely. The --rise feature only works with SME servers and should only be used on dedicated backup servers.<br />
<br />
With option --all, all files and directories of the archive as defined by the include[] properties are restored. Files or directories with the same name on the Affa server will be overwritten and cannot be restored by an undo-rise. This should not be an issue on a dedicated Affa server which does not hold any other data. After a possible undo-rise, these additionally restored data must be removed manually.<br />
<br />
Please note that the rise process backs up the Affa server itself before doing the restore from the archive. This backup is used by a possible undo-rise run to restore the Affa server. Only the standard files and directories are backed up. Data in non-standard locations (like /opt) are untouched and will still exist after the rise run '''if they don't get overwritten by data from the backup''' (i.e. if the backup contains /opt too).<br />
<br />
{{Note box|The above command is not implemented in Affa 3 for CentOS. It has been specifically added to the fork for SME.}}<br />
<br />
'''affa --undo-rise'''<br />
This feature reverts a risen Affa box to a backup server. After a reboot, all configured jobs '''based on standard files and directories''' will work again.<br />
<br />
{{Note box|The above command is not implemented in Affa 3 for CentOS. It has been specifically added to the fork for SME.}}<br />
<br />
=== Example setups ===<br />
==== Dedicated backup server ====<br />
* Set up a dedicated server and install Affa 3 for SME.<br />
* Set up a job for every server you want to back up.<br />
* Send the public key to every server.<br />
affa --send-key JOBNAME<br />
* Check whether password-less logins are working.<br />
affa --check-connections<br />
* Check whether the scheduled jobs are evenly distributed over the day.<br />
affa --show-schedule<br />
* Create the cron jobs.<br />
affa --make-cronjobs<br />
* Check the status after 24 hours.<br />
affa --status<br />
<br />
==== Backup of single ibays ====<br />
Suppose you want to back up the ibays 'staff1' and 'staff2' on your production server with WAN IP 82.123.1.1 to an Affa server in a different location over the internet, every night at 2:30am.<br />
* Log into the Affa server.<br />
* Create a dedicated job file ''/etc/affa/ibay-staff.conf''. Edit it and set<br />
[ibay-staff]<br />
Description=Backup of ibay-staff on 82.123.1.1<br />
remoteHostName=82.123.1.1<br />
TimeSchedule=0230<br />
SMEServer=no<br />
Include=/home/e-smith/files/ibays/staff1<br />
Include=/home/e-smith/files/ibays/staff2<br />
* Save the configuration<br />
* send the public key to the production server<br />
affa --send-key ibay-staff<br />
* check next morning<br />
affa --list-archives ibay-staff<br />
affa --status<br />
ls /var/affa/ibay-staff<br />
<br />
==== Two production servers backup each other ====<br />
You have two sites connected via a VPN and an SME Server running at each site. In that case you don't need a dedicated Affa backup server. Both production servers can additionally act as Affa backup servers, backing up the server of the other site. Simply install Affa and configure a job that backs up the other one. You can use all Affa features except the rise feature.<br />
<br />
If you used the rise feature, the server would become a clone of one of the backed-up systems, which is less useful in this scenario: it would give you a running copy of the other site's server while this site's own server is down.<br />
<br />
To get redundancy and a faster restore you can configure a local backup to an external USB or NAS device.<br />
<br />
Take care that you do not back up the archives to the other site in turn. Set the RootDir property to a path which is not included in the SME default backup list. Leaving the RootDir property at its default '/var/affa' guarantees this.<br />
<br />
==== Use Affa to backup to a NFS-mounted NAS or a local attached USB drive ====<br />
<br />
{{Note box|This chapter still needs to be checked whether it works with Affa 3 for SME.}}<br />
<br />
You want to back up your SME production server with hostname 'prodbox' and IP 10.200.48.1 to a mounted filesystem instead of setting up a dedicated Affa box.<br />
<br />
===== Setup NAS =====<br />
You have a [http://www.freenas.org FreeNAS] box with IP 10.200.48.2 up and running with NFS service enabled. The disk is mounted to /mnt/affashare. You have authorized the IP address of your prodbox server to access share /mnt/affashare.<br />
* log into the 'prodbox' and install the NFS packages<br />
/usr/bin/yum install --enablerepo=smecontribs smeserver-nfs<br />
* now enable and start the portmapper service<br />
config setprop portmap status enabled<br />
service portmap start<br />
<br />
* mount the NFS share<br />
mkdir -p /mnt/affadevice<br />
mount 10.200.48.2:/mnt/affashare /mnt/affadevice<br />
<br />
<br />
===== Alternatively setup a USB drive =====<br />
* log into the 'prodbox' <br />
* connect a USB hard disk to the USB bus. Now you must determine which device name the kernel has assigned to the drive. View /var/log/messages and search for ''Initializing USB Mass Storage driver''. A few lines below you'll find the name of the device. In this example it is ''sdh''. Replace ''/dev/sdh'' with ''your device'' in the following instructions.<br />
* use the fdisk program to create a linux partition. '''Verify that this is really the attached USB drive before you continue!'''<br />
<br />
fdisk /dev/sdh<br />
You'll most likely find an existing vfat dos partition, which you have to delete first. In the following we assume that you have created a single partition ''/dev/sdh1''.<br />
* now format the drive with an ext3 filesystem<br />
mkfs.ext3 /dev/sdh1<br />
* make the mount point<br />
mkdir -p /mnt/affadevice<br />
* add the following line to the /etc/fstab<br />
 /dev/sdh1 /mnt/affadevice ext3 defaults 0 0<br />
* mount the drive<br />
mount /mnt/affadevice<br />
* crosscheck your work using the df command<br />
df<br />
<br />
===== Copying an Affa USB hard drive archive to a new disk =====<br />
Affa makes extensive use of hard links to save disk space for its backups. If you want to copy such a disk archive to a new (bigger) disk, you need to ensure that the hard links are copied correctly, or the destination copy may become significantly bigger than the source archive.<br />
<br />
One way to copy across file systems (i.e. two different USB disks) and preserve the hard links is as follows:<br />
<br />
* mount both USB drives, but with different mount points, e.g. /media/backup1 and /media/backup2, and then:<br />
<br />
mkdir /media/backup2/archive <br />
cd /media/backup1/archive<br />
tar cpf - . | ( cd /media/backup2/archive && tar xpf - )<br />
<br />
where '''archive''' is the name of the AFFA job you want to move to the other disk.<br />
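You can verify that the tar pipe preserves hard links with a quick self-contained test (temporary directories only; nothing here touches a real archive):

```shell
# Create two hard-linked names for one file, copy them with the tar pipe,
# and check that the copy still shares a single inode (link count 2).
src=$(mktemp -d); dst=$(mktemp -d)
echo "backup data" > "$src/file.a"
ln "$src/file.a" "$src/file.b"
( cd "$src" && tar cpf - . ) | ( cd "$dst" && tar xpf - )
stat -c %h "$dst/file.a"    # prints 2: the hard link survived the copy
rm -rf "$src" "$dst"
```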
<br />
===== Setup Affa =====<br />
You want to run backups at 11:30, 15:30 and 19:30, and you want to keep the last 3 scheduled backups, 7 daily, 5 weekly, 12 monthly and 1 yearly backup.<br />
* log into the 'prodbox' and install the Affa packages as described above.<br />
* create/edit the conf file of the job: nano /etc/affa/prodbox.conf<br />
and set:<br />
[prodbox]<br />
remoteHostName=localhost<br />
TimeSchedule=1130<br />
TimeSchedule=1530<br />
TimeSchedule=1930<br />
scheduledKeep=3<br />
dailyKeep=7<br />
weeklyKeep=5<br />
monthlyKeep=12<br />
yearlyKeep=1<br />
RootDir=/mnt/affadevice<br />
Review the other properties and change them to your needs.<br />
* run the job manually<br />
affa --run prodbox<br />
<br />
===== Limitations =====<br />
With this kind of setup you cannot use the Affa rise feature, as it requires the backup archive to be located on the '''same filesystem''' as the server installation. The rise option uses hard links, which do not work across filesystems.<br />
<br />
===== Automount =====<br />
Keeping the backup archives in the same filesystem is always a risk, which can be minimized by using the automount feature: the external filesystem is then only mounted during a job run.<br />
<br />
In the NAS example set<br />
AutomountDevice=10.200.48.2:/mnt/affashare<br />
AutomountPoint=/mnt/affadevice<br />
and skip step 2.<br />
<br />
In the USB drive example set<br />
AutomountDevice=/dev/sdc1<br />
AutomountPoint=/mnt/affadevice<br />
and skip steps 5 to 8.<br />
<br />
The mount point will be automatically created, if it does not exist. <br><br />
To access the archive directory, you need to mount it manually.<br />
<br />
=== Restore ===<br />
==== Restore single files or directories ====<br />
'''Example 1:''' It's Tuesday, January 6th 2009, when user 'briedlin' asks you to restore the messages of his mailbox 'orders' that he accidentally deleted on Monday.<br />
<br />
* First check which backup archives are available. The job name of this server backup is 'primmail'. To get a listing of all archives run<br />
affa --list-archives primmail<br />
''(see the example listing in chapter [[Affa#Usage_and_command_line_options]])''<br />
* Choose the daily.0 archive, which was created Monday night.<br />
* Now restore the mailbox 'orders' by running the rsync command ''(note the trailing slash!)'' on the Affa backup server:<br />
export RDIR=/home/e-smith/files/users/briedlin/Maildir/.orders/ ''# this variable is used to shorten the next command line''<br />
rsync -av /var/affa/primmail/daily.0/$RDIR 10.204.48.1:$RDIR<br />
<br />
If the servers are configured to use a different ssh port eg 2222, then instead do:<br />
<br />
export RDIR=/home/e-smith/files/users/briedlin/Maildir/.orders/ ''# this variable is used to shorten the next command line''<br />
rsync -av -e 'ssh -p 2222' /var/affa/primmail/daily.0/$RDIR 10.204.48.1:$RDIR<br />
<br />
<br />
<br />
'''Example 2:''' A user has deleted the file orderform.pdf from ibay 'docs' on the server 10.204.48.1 and asks you to restore it.<br />
<br />
* You have searched and found the latest version of this file in weekly archive of job 'prodserv'.<br />
* To copy it back to the server 10.204.48.1 run on the Affa server<br />
export RFILE=/home/e-smith/files/ibays/docs/files/orderform.pdf ''# this variable is used to shorten the next command line''<br />
rsync -av /var/affa/prodserv/weekly.1/$RFILE 10.204.48.1:$RFILE<br />
<br />
* If the servers are configured to use a different ssh port eg 2222, then instead do:<br />
export RFILE=/home/e-smith/files/ibays/docs/files/orderform.pdf ''# this variable is used to shorten the next command line''<br />
rsync -av -e 'ssh -p 2222' /var/affa/prodserv/weekly.1/$RFILE 10.204.48.1:$RFILE<br />
<br />
<br />
<br />
'''Example 3:''' Restoring a file with special characters in the path<br />
<br />
Do not use shell variables to shorten the command; it would complicate things more than it would help.<br />
Quote the source and destination paths. In the destination path, escape blank characters with a double backslash and brackets with a single backslash. On the Affa backup server do:<br />
rsync -av "/var/affa/fileshare/daily.4/home/e-smith/files/ibays/mechfiles/files/Valve Control (Design Gr)/VALVE LIST FOR ISSUED.xls" "192.168.1.7:/home/e-smith/files/ibays/mechfiles/files/Valve\\ Control\\ \(Design\\ Gr\)/VALVE\\ LIST\\ FOR\\ ISSUED.xls"<br />
<br />
If the servers are configured to use a different ssh port eg 2222, then instead do:<br />
<br />
rsync -av -e 'ssh -p 2222' "/var/affa/fileshare/daily.4/home/e-smith/files/ibays/mechfiles/files/Valve Control (Design Gr)/VALVE LIST FOR ISSUED.xls" "192.168.1.7:/home/e-smith/files/ibays/mechfiles/files/Valve\\ Control\\ \(Design\\ Gr\)/VALVE\\ LIST\\ FOR\\ ISSUED.xls"<br />
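The doubled backslashes are needed because the destination path is parsed twice: once by the local shell and once more by the remote shell that rsync starts over ssh. As a convenience (not an Affa feature), bash's printf %q can generate the remote-side escaping for you:

```shell
# Let bash produce the escaped form of a path with spaces and brackets.
remote_path='/home/e-smith/files/ibays/mechfiles/files/Valve Control (Design Gr)/VALVE LIST FOR ISSUED.xls'
printf '%q\n' "$remote_path"
```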
<br />
==== Full restore ====<br />
Generally: <br />
'''affa --full-restore [optional settings] JOB [ARCHIVE]'''<br />
This rsyncs the data (files and directories) from the backup ARCHIVE back to the 'remoteHostName' defined in the configuration of the job JOB. <br />
If ARCHIVE is not given, the archive 'scheduled.0' is used as the default.<br />
The --full-restore reconstructs the server as it was at the time of the backup and takes the following important optional settings into consideration:<br />
<br />
With option [--preserve-newer=yes] files on the remote server with modification time newer than on the backup are not overwritten.<br />
<br />
With option [--delete=yes] all files on the remote server, which are not in the backup, are deleted.<br />
<br />
If the parameter “SMEServer=yes” is set, the signal-event pre-restore and signal-event post-upgrade will be used automatically.<br />
<br />
After the restore is done, the restored server reboots automatically. <br />
<br />
Example:<br />
You have backed up your production server 'prodsrv' as job 'prodbox'. To restore only lost data from the latest backup run<br />
affa --full-restore prodbox<br />
<br />
To restore exactly from the older archive daily.3, overwriting newer files and deleting files not in the backup, run <br />
 affa --full-restore --preserve-newer=no --delete=yes prodbox daily.3<br />
<br />
{{Warning box|A full restore with --preserve-newer=no and --delete=yes reconstructs the server as it was at the time of the backup. That means that all files created and server configuration changes made after the backup will be lost!}}<br />
<br />
==== Moving a SME server installation to new hardware using the Affa rise feature ====<br />
Please see this Howto: [[Moving SME to new Hardware]]<br />
<br />
<br />
==== Restore from USB drive on new server ====<br />
<br />
This tip comes from http://forums.contribs.org/index.php?topic=42412.0<br />
<br />
Q) I have complete backups made by Affa stored on a USB hard drive connected to our Affa backup server. I need to restore an earlier monthly backup of our job "mailbackup" to a test server rather than back to the original system. From the backup server I see instructions on how to rise that server to the current backup, but I want to restore a point from about a month ago, before some strange things happened. And I want to do it on a machine that is neither our backup server nor our production server. I tried to figure out how, but am lost in the options. My goal is to do some testing.<br />
<br />
A) On your test server, set up a job mailbackup identical to that on your backup server, but set property RootDir to /var/affa and property status to disabled. Connect the USB drive and copy the archive of the job mailbackup to /var/affa. Then run affa --rise mailbackup ARCHIVE<br />
<br />
Remember that Affa places job setup scripts into the archive directories. In your case it is mailbackup-setup.pl. Copy it to your test server, change the RootDir and status properties and execute it.<br />
<br />
=== FAQ ===<br />
'''What files and directories are included by default?'''<br />
<br />
With SMEServer=no nothing at all.<br />
<br />
With SMEServer=yes, the SME default backup list is the output of the following command:<br />
perl -e 'use esmith::Backup;$b=new esmith::Backup;print join("\n",$b->restore_list)."\n"' <br />
plus the additional folder:<br />
/etc/affa<br />
<br />
{{Note box|The following example needs to be reworked for Affa 3.}}<br />
'''Can I exclusively backup image files from a specific directory?'''<br />
Yes. Assuming you want to back up all gif and jpg files from the directory /home/e-smith/files/ibays/pictures/files, use this configuration:<br />
db affa setprop JOBNAME SMEServer no # do not include SME Server default directories and files<br />
db affa setprop JOBNAME Include[0] '/home/e-smith/files/ibays/pictures/files' # start searching here<br />
db affa setprop JOBNAME Include[1] '*/' # search the tree recursively<br />
db affa setprop JOBNAME Include[2] '*.gif' # copy files that match these patterns<br />
db affa setprop JOBNAME Include[3] '*.jpg' <br />
db affa setprop JOBNAME Exclude[0] '*' # exclude all others<br />
db affa setprop JOBNAME rsyncOptions '-m' # do not create empty folders<br />
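In Affa 3 style, the same filters would go into a job configuration file rather than into db commands. A hedged, untested sketch based on the property names used elsewhere on this page (the job name 'pictures' is a placeholder):
 [pictures]
 SMEServer=no
 Include=/home/e-smith/files/ibays/pictures/files
 Include=*/
 Include=*.gif
 Include=*.jpg
 Exclude=*
 rsyncOptions=-m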
<br />
<br />
'''How can I move a backup archive from my production server in a remote branch office to the Affa server using a USB disk? I want to avoid downloading the first full backup over the internet.'''<br />
Install Affa on your production server, connect and mount the USB disk. Setup a job to write the archive to the USB disk. After the job run, connect the USB disk to your Affa server and setup the job in the same way. Use the setup script from the archive directory. You only need to change the remoteHostName property. Now use the --move-archive option to move the archive to the local disk. If not already done, send the public key to your production server. You're done.<br />
<br />
<br />
'''How do I backup two SME servers behind a firewall?'''<br />
First you need to configure port forwarding for the ssh service on your firewall. Use a non-standard port, e.g. 2200->22, for your second server. Set up a job on your Affa server for each of your production servers. Set the job property sshPort=2200 for the second server.<br />
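A hedged sketch of such a job section (hostname, port and schedule are placeholders; property names as used in the examples above):
 [secondserver]
 remoteHostName=firewall.example.com
 sshPort=2200
 TimeSchedule=0300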
<br />
=== Uninstall ===<br />
This removes the installed Affa package, all configuration data and all backup archives.<br />
<br />
Run the following commands for each job:<br />
affa --revoke-key JOBNAME<br />
affa --delete-job JOBNAME<br />
Verify that all jobs have been deleted:<br />
affa --status<br />
Remove the Affa package:<br />
yum remove smeserver-affa<br />
Cleaning up:<br />
rm -f /etc/cron.d/affa<br />
rm -rf /etc/affa<br />
rm -rf /var/affa /var/log/affa<br />
<br />
=== Troubleshooting ===<br />
==== Affa stops working after prodserver was rebuilt ====<br />
'''Description:'''<br />
* Affa stopped working or is broken.<br />
* When affaserver tries to make an ssh connection to prodserver it fails.<br />
* In the /var/log/affa/''jobname''.log file you will see this message: "SSH connection to ''prodserver ip'' failed. Did you send the public key". <br />
* Resending the keys does not solve the problem. <br />
<br />
'''Solution:'''<br />
* This is probably because the key for prodserver stored on the affaserver no longer matches the current key of prodserver. <br />
* You need to remove the existing key so that the next ssh connection retrieves the new key from prodserver. <br />
* From the command line, edit the /root/.ssh/known_hosts file and remove the line for prodserver. It starts with: ''prodserver ip'' ssh-rsa<br />
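The stale entry can also be removed non-interactively with sed. A sketch, demonstrated on a scratch copy of known_hosts (the IP 192.168.0.2 is an example; on the affaserver the real file is /root/.ssh/known_hosts):<br />

```shell
# Delete only the stale prodserver line from known_hosts; all other
# entries are preserved. Shown on a scratch file -- point KNOWN_HOSTS
# at /root/.ssh/known_hosts on the real affaserver.
KNOWN_HOSTS=$(mktemp)
printf '192.168.0.2 ssh-rsa AAAAdummystalekey\nkeep.example ssh-rsa AAAAotherkey\n' > "$KNOWN_HOSTS"

sed -i '/^192\.168\.0\.2 /d' "$KNOWN_HOSTS"
cat "$KNOWN_HOSTS"     # only the keep.example line remains
rm -f "$KNOWN_HOSTS"
```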
<br />
=== Bug report ===<br />
Affa is listed in the [http://bugs.contribs.org/enter_bug.cgi?product=SME%20Contribs&component=smeserver-affa bugtracker contribs section].<br />
Please report all bugs, new feature requests and documentation issues there.<br />
<br />
<br />
{{#bugzilla:columns=id,product,version,status,summary |sort=id |order=desc |disablecache=1|component=smeserver-affa|noresultsmessage="No open bugs found."}}<br />
<br />
=== Additional information ===<br />
For details of performance, changelog etc see [[:Affa:Additional information]]<br />
<br />
=== References ===<br />
* Wikipedia article http://en.wikipedia.org/wiki/Rsync<br />
* Rsync homepage http://rsync.samba.org/<br />
* Wiki article [[Moving SME to new Hardware]]<br />
* Wiki article [[Backup with Affa and FreeDup]]<br />
* Wiki article [[Rsyncd setup on a windows computer for use with Affa backup]]<br />
* Wiki article [[Backup Windows Computers Using Affa]]<br />
* Wiki article [[Backup of ESXi Virtual Machines using Affa]]<br />
<br />
----<br />
[[Category: Contrib]]<br />
[[Category: Backup]]</div>Arnaudhttps://wiki.koozali.org/index.php?title=Affa&diff=33081Affa2017-04-01T19:48:33Z<p>Arnaud: /* FAQ */</p>
<hr />
<div>{{Languages|Affa}}<br />
{{Level|Advanced}}<br />
{{usefulnote}}<br />
<br />
<br />
=== Maintainer ===<br />
Maintainer(s) of Affa 3: Arnaud, stephdl<br><br />
Development of Affa2 for SME Server by the original maintainer has been discontinued.<br />
<br />
=== Version ===<br />
{{ #smeversion: affa}}<br />
{{ #smeversion: smeserver-affa}}<br />
<br />
<br />
=== Description ===<br />
The main purpose of the Affa package is to turn a SME Server into a dedicated backup box in a few minutes. Affa backs up as many SME Servers as you like, or any other server that has sshd running and rsync installed. Once configured, Affa runs reliably unattended and sends warning messages in case of errors. <br />
<br />
All backup archives are full backups, as Affa makes use of the hardlink technique. Therefore a new full backup only needs disk space for the differences plus the filesystem overhead for the hardlinks and directories (which is typically 2-3%).<br />
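The hardlink technique can be illustrated in a few commands (a generic sketch, not Affa's own code):<br />

```shell
# Two "full backups" of an unchanged file: the second one is only a new
# directory entry for the same inode, so it costs no extra data blocks.
d=$(mktemp -d)
echo "unchanged payload" > "$d/archive1.dat"   # first full backup
ln "$d/archive1.dat" "$d/archive2.dat"         # second full backup: a hardlink

stat -c %h "$d/archive1.dat"   # link count is 2 -- both names share one inode
rm -rf "$d"
```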
<br />
Affa is based on the rsync program and supports the rsync --compress option. This allows you to run backups over the internet or VPN. A typical setup is one or more Affa backup servers placed in different locations, which backup the production server(s) over the VPN.<br />
<br />
A special feature is the rise option, which allows you to rise the backup server to your production server from a backup archive in case of a dead loss of your production server. The rise is executed within an extremely short time, even with huge amounts of data. The rise feature uses hardlinks and therefore does not use up additional disk space.<br />
<br />
The rise feature can also be used to upgrade from a lower version of SME to a higher one. This also works from SME 8.1 to 9. For this to work you need 2 separate installations of SME: one is the actual running server and the other is an Affa backup server with a newer version of SME Server, where you execute the rise command. See more here: [[Moving SME to new Hardware]]<br />
<br />
A simple way to run a server with maximum uptime is to duplicate the hardware, e.g. to have 2 separate hardware boxes or virtual servers on separate hardware. One is the actual server and the other is a backup system that, in case of problems, can be promoted to the actual server in a very short time with the rise command. This also allows you to upgrade the SME Server software with minimum downtime!<br />
<br />
Affa is a command line tool for system administrators and is intentionally designed without a GUI. Therefore it can be efficiently managed on the console and over slow internet connections.<br />
<br />
'''Note:''' This document also refers to Affa version 2, which is no longer maintained. Information on Affa 3 will be gradually added here once it has been tested more thoroughly.<br />
<br />
==== Affa features at a glance ====<br />
* Affa is secure: All transfers and connections are made by using the ssh protocol with public/private key authentication<br />
* Periodically runs unattended full backups. Only differences to the previous full backup are transferred over the network<br />
* Using rsync with optional bandwidth limit and compression allows backups over the internet<br />
* Uses hardlink technique, i.e. physical disk space only needed for the differences between two full backups<br />
* Keeps a configurable number of scheduled, daily, weekly, monthly and yearly full backup archives<br />
* The archives are browseable. No need to unpack them first.<br />
* Archives can be mapped to a Samba share.<br />
* Backup jobs are started by the cron daemon<br />
* Interrupted jobs continue with the already transferred data when restarted<br />
* Backs up the default e-smith directories and files when the property SMEServer is set to yes<br />
* Additional directories and files can be included<br />
* Directories and files can be excluded from the backup<br />
* Non-SME server linux systems can be backed up by setting the SMEServer property to no and using an include list<br />
* In ESXi mode, running virtual machines can be backed up. See [[Backup of ESXi Virtual Machines using Affa]]<br />
* Configurable nice level for rsync processes on the backup and source server<br />
* Optional run of custom programs before and after a job run (e.g. running tape backup)<br />
* Checks the disk space left after a job run with warning levels strict, normal or risky<br />
* Extensive checking of failure conditions<br />
* Sends failure messages to a configurable list of email addresses<br />
* Sends a warning message, if the backup server runs out of disk space<br />
* Installs an optional watchdog on the source server in case the backupserver fails (SME Server only)<br />
* Watchdog sends warning, if an expected backup did not run (SME Server only)<br />
* Watchdog sends a daily reminder message, if the error continues unchecked (SME Server only)<br />
* Option to display current status of all jobs showing times of last and next run, size and disk usage<br />
* Status can be mailed on a daily, weekly or monthly schedule<br />
* Option to display all existing archives of a job, showing date, number of files, size and bytes transferred from the source<br />
* Option to send (and revoke) the public key to the source server (SME Server and ESXi only)<br />
* Option to rise the backup server to a production server from a backup (SME Server only)<br />
* The rise feature does not physically move data and therefore is extremely fast and needs (almost) no extra disk space<br />
* The rise option can be run remotely, as the NIC driver configuration of the backup server is preserved<br />
* Compares installed RPMs on source with backup server. Sends warning message, if not in sync<br />
* Undo rise option to restore the backup server<br />
* Configurable via an e-smith style db, with one record for each job and a default record for all jobs<br />
* Logs to /var/log/affa/JOB.log and /var/log/affa/affa.log with optional debug switch for higher verbosity<br />
* Log files are rotated weekly, with 5 logs kept<br />
<br />
<div style="text-align:left" class="mw-collapsible mw-collapsed" data-collapsetext="less &uarr;" data-expandtext="Obsolete Affa 2 instructions &darr;"><br />
<br />
=== Installation or Update of Affa 2 ===<br />
<br />
====Smecontribs for sme8====<br />
yum install --enablerepo=smecontribs smeserver-affa<br />
<br />
The latest test version can be found here :<br />
[http://www.reetspetit.com/smeserver/5/noarch/smeserver-affa-2.0.0-rc6.el5.sme.noarch.rpm smeserver-affa-2.0.0-rc6.el5.sme.noarch.rpm] Note this is NOT signed.<br />
<br />
Please post bug reports in the [http://bugs.contribs.org bug tracker]<br />
<br />
When you have installed Affa for the first time run the following command to initialize the Affa database<br />
<br />
affa --make-cronjobs<br />
and log out and log back in to the console for bash auto-completion (TAB key) to take effect.<br />
====Obsolete way to install====<br />
{{Note box|SME7 is now EOL and has been moved to the obsolete tree}}<br />
SME7 version: '''smeserver-affa-2.0.0-rc4.noarch.rpm'''<br />
wget <nowiki>http://mirror.contribs.org/smeserver/contribs/michaelw/sme7/Affa2/smeserver-affa-2.0.0-rc4.noarch.rpm</nowiki><br />
wget <nowiki>http://mirror.contribs.org/smeserver/contribs/michaelw/sme7/Affa2/perl-Compress-Bzip2-2.09-1.2.el4.rf.i386.rpm</nowiki><br />
/usr/bin/yum --enablerepo=smecontribs localinstall \<br />
smeserver-affa-2.0.0-rc4.noarch.rpm \<br />
perl-Compress-Bzip2-2.09-1.2.el4.rf.i386.rpm<br />
<br />
SME8 version: SME8 is based on CentOS5, therefore use the perl-Compress-Bzip2 el5 rpm: <br />
wget <nowiki>http://mirror.contribs.org/releases/8/smecontribs/i386/repoview/smeserver-affa.html</nowiki><br />
For SME8 32bit:<br />
wget <nowiki>http://apt.sw.be/redhat/el5/en/i386/rpmforge/RPMS/perl-Compress-Bzip2-2.09-1.2.el5.rf.i386.rpm</nowiki><br />
/usr/bin/yum --enablerepo=smecontribs localinstall \<br />
smeserver-affa-2.0.0-rc4.noarch.rpm \<br />
perl-Compress-Bzip2-2.09-1.2.el5.rf.i386.rpm<br />
For SME8 64bit:<br />
wget http://apt.sw.be/redhat/el5/en/x86_64/rpmforge/RPMS/perl-Compress-Bzip2-2.09-1.2.el5.rf.x86_64.rpm<br />
/usr/bin/yum --enablerepo=smecontribs localinstall \<br />
smeserver-affa-2.0.0-rc4.noarch.rpm \<br />
perl-Compress-Bzip2-2.09-1.2.el5.rf.x86_64.rpm<br />
<br />
When you have installed Affa for the first time run the following command to initialize the Affa database<br />
affa --make-cronjobs<br />
and log out and log back in to the console for bash auto-completion (TAB key) to take effect.<br />
<br />
==== Quick start example for Affa 2 ====<br />
You have a SME 7 production server with hostname 'prodbox‘ and IP 10.200.48.1.<br><br />
Set up a second SME 7 box as your backupserver with hostname 'affabox‘ and IP 10.200.48.2. <br />
<br />
<ol><li>log into the 'affabox' and install the packages as described above.<br />
</li><li>copy the config helper script sample<br />
cp /usr/lib/affa/jobconfig-sample.pl /root/prodbox-job.pl<br />
</li><li>edit /root/prodbox-job.pl and set<br />
my $jobname='prodbox';<br />
and<br />
'remoteHostName‘=>'10.200.48.1',<br />
optionally, if the remote server port is configured to e.g. 2222 then set<br />
'sshPort'=>2222,<br />
</li><li>save and run the 'prodbox-job.pl' file. This will write the configuration (this makes the database entries and sets up the cronjobs)<br />
/root/prodbox-job.pl<br />
</li><li>generate the DSA keys and send the public key to the 'prodbox'<br />
{{Note box|When initially doing this step, you will need to temporarily enable "Allow secure shell access using standard passwords" on the production server}}<br />
affa --send-key prodbox<br />
</li><li>run the job manually<br />
<br />
affa --run prodbox<br />
</li></ol><br />
<br />
==== Configuration ====<br />
The configuration is stored in an e-smith style database. Use the db command to configure Affa.<br />
The jobname is the record key with the type 'job'.<br>To setup a new job with the name 'prodbox' enter:<br />
<br />
db affa set prodbox job<br />
<br />
then set the properties<br />
<br />
db affa setprop prodbox remoteHostName 192.168.1.1<br />
db affa setprop prodbox TimeSchedule '0030,0730,1130,1330,1730,2030'<br />
db affa setprop prodbox Description 'My Production Server'<br />
db affa setprop prodbox status enabled<br />
and so on...<br />
<br />
Alternatively you can use a script as described above in the 'Quick start' chapter.<br />
<br />
To verify your work, type:<br />
<br />
db affa show prodbox<br />
<br />
Finally set up the cronjobs:<br />
affa --make-cronjobs<br />
<br />
</div><br />
<br />
<br />
=== Installation of Affa 3 ===<br />
The following instructions assume that a fresh Affa 3 installation is made on a fresh dedicated SME server which serves as a dedicated backup server ('affabox‘). No jobs are migrated from Affa 2. If you are using an existing SME server, just skip the instructions on how to set up a fresh SME box.<br />
<br />
Set up a dedicated machine with SME 8.1 or SME 9.0 from CDROM. Use the following example settings:<br />
<ol><li>Domain name: athome.xx (use your existing domain name)</li><br />
<li>Host name: affabox (must not match to existing host)</li><br />
<li>IP address: 192.168.0.10 (must not match to existing IP address)</li><br />
<li>Subnet: 255.255.255.0</li><br />
<li>role: Server only</li><br />
<li>Gateway: 192.168.0.1 (use your existing gateway)</li><br />
<li>DHCP: DEactivate</li><br />
<li>DNS server: 192.168.0.1 (use your existing DNS server)</li><br />
<li>activate changes: yes</li><br />
<li>The fresh server will then restart with the above settings.</li><br />
</ol><br />
<br />
To update your server, log in as user root on the local console. Use the 'top' command to display running processes and wait until 'yum' has stopped running.<br />
yum clean all<br />
yum update<br />
The fresh server will be updated.<br />
signal-event post-upgrade<br />
signal-event reboot<br />
The server will reboot. Afterwards it is ready for installation of Affa 3.<br />
<br />
Log in as root on the local console or a remote SSH console of ''affabox''.<br />
<br />
= For SME8 =<br />
<br />
To install issue the following command:<br />
yum install smeserver-affa --enablerepo=smecontribs<br />
= For SME9 =<br />
<br />
yum install smeserver-affa --enablerepo=smecontribs<br />
<headertabs/><br />
<br />
The server again needs to be updated.<br />
signal-event post-upgrade<br />
signal-event reboot<br />
The server will reboot. Afterwards Affa 3 is ready for use.<br />
<br />
Now manually create the directory for the archive files:<br />
mkdir /var/affa<br />
<br />
{{Warning box| Note that on SME9, a new SSH [https://wiki.contribs.org/AutoBlock AutoBlock] feature interferes with the Affa requirement for regular SSH logins! You might have to disable this feature to allow Affa to operate correctly. }}<br />
<br />
=== Creation of backup jobs ===<br />
Job configurations of Affa 3 are no longer stored in an e-smith style database. They are stored in configuration files located in ''/etc/affa'' . Create your configuration files in this directory, e.g.<br />
touch /etc/affa/backup-smeserver.conf<br />
where ''backup-smeserver'' is your jobname, and add content as described below.<br />
<br />
==== Quick start example ====<br />
You have a SME production server with hostname 'smeserver‘ and IP 192.168.0.2.<br><br />
You have a second SME box as your backup server with hostname 'affabox‘ and IP 192.168.0.10. <br />
<br />
Login to your 'affabox' as root and edit ''/etc/affa/backup-smeserver.conf'' . Using e.g. editor ''nano'', create the following example job configuration file for jobname ''backup-smeserver'' :<br />
[backup-smeserver]<br />
remoteHostName=192.168.0.2<br />
SMEServer=yes<br />
Watchdog=yes<br />
RPMCheck=yes<br />
ConnectionCheckTimeout=120<br />
Debug=no<br />
Description=Backup of 192.168.0.2 smeserver.athome.xx<br />
DiskSpaceWarn=strict<br />
RootDir=/var/affa<br />
TimeSchedule=0630<br />
localNice=15<br />
remoteNice=15<br />
rsync--inplace=yes<br />
rsyncCompress=no<br />
rsyncTimeout=900<br />
scheduledKeep=1<br />
dailyKeep=7<br />
weeklyKeep=4<br />
monthlyKeep=12<br />
yearlyKeep=1<br />
status=enabled<br />
<br />
If you have a Letsencrypt certificate on the production server, then you should also include<br />
Include=/etc/dehydrated<br />
<br />
Then save your job configuration file.<br />
<br />
Now check that your configuration is OK:<br />
affa --configcheck<br />
This should report no errors. Now create and send the key files to your production smeserver:<br />
affa --send-key backup-smeserver<br />
The following output should appear on the console:<br />
Job backup-smeserver: Generating DSA keys...<br />
Successfully created DSA key pair.<br />
root@192.168.0.2's password: <Enter password of 192.168.0.2/smeserver and hit ENTER><br />
Public key sent to 192.168.0.2<br />
Now run your job manually, both for test purposes and to accept the remote host key:<br />
affa --run backup-smeserver<br />
The following output should appear on the console:<br />
The authenticity of host 'backup-smeserver (192.168.0.2)' can't be established.<br />
RSA key fingerprint is 3b..........65.<br />
Are you sure you want to continue connecting (yes/no)? <yes><br />
Now your backup job should run for the first time. Depending on the volume of the files, this takes quite a while.<br />
<br />
Once the job is done, check that the archive is available:<br />
affa --list-archives<br />
<br />
or run the job a second time:<br />
affa --run backup-smeserver<br />
Note that you won't be asked for the password again. Note as well that this second run of the job ''backup-smeserver'' should run considerably faster, because not all files are copied again: just the difference to the previous run is backed up; the rest is stored as hardlinks to the existing file copies.<br />
<br />
Check that the second archive is available too:<br />
affa --list-archives<br />
<br />
From here you are able to work as with Affa 2. Modify your config file as required and described below. Automate the backup using the command ''affa --make-cronjobs'', see below.<br />
<br />
==== Job configuration properties ====<br />
'''Note 1:''' The default values shown in this table are the Affa program defaults and not to be confused with the preset values in the job configuration files, e.g. backup-smeserver.conf.<br />
<br />
'''Note 2:''' The complete documentation of the Affa 3 settings is available in [http://affa.sourceforge.net/AffaPdfMan.pdf] . It should be taken into consideration!<br />
<br />
'''Note 3:''' Affa 3 for SME is a fork of [http://affa.sourceforge.net/ Affa 3 for CentOS]. It adds back a few options that had been removed. Generally the configuration properties as described [http://wiki.nikoforge.org/Affa_Configuration/ here] do apply. Arnaud added the following functions for SME:<br />
<ul><br />
<li>The parameter “SMEServer”</li><br />
<li>The parameter and function “Watchdog”</li><br />
<li>The parameter and function “RPMCheck”</li><br />
<li>The functions “--rise” and “--undo-rise”</li><br />
</ul><br />
Consequently the list below should reproduce the list for the unforked Affa 3 version, plus the properties added above. For details refer to [https://www.guedel.eu/index.php/informatique/sme-server-8/affa-v3 Arnaud's website].<br />
{| border="1" cellpadding="3" cellspacing=0<br />
|-<br />
| '''Property''' || '''Value''' || '''Multivalue''' || '''Default''' || '''Description''' <br />
|-<br />
| remoteHostName <br />
| FQHN or IP<br />
| no<br />
| <none><br />
| FQHN or IP of the source host (mandatory)<br />
|-<br />
| remoteUser<br />
| account<br />
| no<br />
| root<br />
| The user account to be used with all ssh logins. May be case sensitive, e.g. Administrator account on Windows<br />
|-<br />
| Description<br />
| text string<br />
| no<br />
| <none><br />
| Any text that describes the job<br />
|-<br />
| TimeSchedule <br />
| HHMM<br />
| yes<br />
| 2230<br />
| The time the job will run. Use exactly 4 digits, no colon, no point. '''Important:''' Using the proper format HHMM is essential. Badly formatted TimeSchedule will cause strange Perl errors. Multiple runs can be achieved by repeating the line for each run time.<br />
|-<br />
| status <br />
| enabled ''or'' disabled<br />
| no<br />
| enabled<br />
| When set to disabled, no cron entries will be made. You can still run a job manually. <br />
|-<br />
| Include<br />
| full path<br />
| yes<br />
| <none><br />
| File(s) or directory(s) to be included in the backup. <br />
|-<br />
| Exclude<br />
| full path<br />
| yes<br />
| <none><br />
| File(s) or directory(s) to be excluded from the backup. <br />
|-<br />
| RootDir<br />
| full path<br />
| no<br />
| /var/affa<br />
| where to store the backup archives. Do not use /home/e-smith or /root as these are included in the backup and therefore the rise option will not work! Recommended: /var/affa<br />
|-<br />
| scheduledKeep<br />
| integer >= 1<br />
| no<br />
| 1<br />
| how many of the scheduled backups to be kept<br />
|-<br />
| dailyKeep<br />
| integer >= 0<br />
| no<br />
| 7<br />
| how many of the daily backups to be kept<br />
|-<br />
| weeklyKeep<br />
| integer >= 0<br />
| no<br />
| 4<br />
| how many of the weekly backups to be kept<br />
|-<br />
| monthlyKeep<br />
| integer >= 0<br />
| no<br />
| 12<br />
| how many of the monthly backups to be kept<br />
|-<br />
| yearlyKeep<br />
| integer >= 0<br />
| no<br />
| 2<br />
| how many of the yearly backups to be kept<br />
|-<br />
| EmailAddress<br />
| name@domain.com<br />
| yes<br />
| root<br />
| comma separated list of mail addresses, where the messages should be sent to<br>'''Note:''' By default Affa only sends messages on errors, never on success (see property chattyOnSuccess).<br />
|-<br />
| RetryAttempts<br />
| integer >= 0<br />
| no<br />
| 4<br />
| When set to a value>0, Affa re-runs a failed job RetryAttempts times with a delay of RetryAfter seconds.<br />
|-<br />
| RetryAfter<br />
| seconds >= 0<br />
| no<br />
| 900<br />
| when set to a value>0, wait RetryAfter seconds before re-running the job after an error. Only applicable with RetryAttempts>0<br />
|-<br />
| RetryNotification<br />
| yes ''or'' no<br />
| no<br />
| no<br />
| when set to no, Affa does not send an error message when a job has failed and RetryAttempts is configured. An error message is only sent when the last attempt has failed.<br />
|-<br />
| NRPEtrigger<br />
| hours<br />
| no<br />
| 24<br />
| NRPE reports a job as critical when the last successful run is older than NRPEtrigger hours. To exclude the job from monitoring set to a value < 0.<br />
|-<br />
| SambaShare<br />
| yes ''or'' no<br />
| no<br />
| no<br />
| Access to the job archives via CIFS protocol.<br />
|-<br />
| SambaValidUser<br />
| local account<br />
| yes<br />
| affa<br />
| User who has permission to access the job archives via the CIFS protocol.<br />
|-<br />
| preJobCommand<br> preJobCommandRemote<br />
| path relative to /etc/affa/scripts/<br />
| yes<br />
| <none><br />
| Scripts to be executed before a job run. The job name and type (scheduled, daily etc.) are passed as arguments to the program. The preJobCommandRemote scripts are copied to the remote server and then executed there. The scripts are executed in alphabetical order. Use a numerical prefix if you need a specific order, e.g. 01scriptB, 02remotescriptA, 03scriptA. Use the included prescript-sample.pl and prescriptRemote-sample.pl as a starting point for your own scripts.<br>'''Note:''' If you use the parameter “SMEServer=yes”, you may delete the “Include” entries for the SME default directories in the job configuration file: they will be backed up automatically.<br />
|-<br />
| postJobCommand<br> postJobCommandRemote<br />
| path relative to /etc/affa/scripts/<br />
| yes<br />
| <none><br />
| Scripts to be executed after a job run. The job name, the type (scheduled, daily etc.) and the exit code of the job run are passed as arguments to the program. The postJobCommandRemote scripts are copied to the remote server and then executed there. The scripts are executed in alphabetical order. Use a numerical prefix if you need a specific order, e.g. 01scriptB, 02remotescriptA, 03scriptA. Use the included postscript-sample.pl and postscriptRemote-sample.pl as a starting point for your own scripts.<br>'''Note:''' If you use the parameter “SMEServer=yes”, you may delete the “Include” entries for the SME default directories in the job configuration file: they will be backed up automatically.<br />
|-<br />
| dedup<br />
| yes ''or'' no<br />
| no<br />
| no<br />
| The purpose of deduplication is to remove duplicate files to save backup space. When set to 'yes', file deduplication is run after the synchronization has been completed. It looks for files that have identical content, user, group and permissions and replaces duplicates by hardlinks. Deduplication scans the just completed archive and the previous one, which is usually<br />
scheduled.0 and daily.0, or scheduled.0 and scheduled.1. Consider this scenario: a user has renamed directories or files. Rsync sees those as new ones and copies them. Deduplication finds the identical copies in the previous archive and replaces them by hardlinks. To use deduplication the Freedup program needs to be installed. Affa actually runs freedup -upg scheduled.0 <previous_archive>.<br />
|-<br />
| dedupKill<br />
| yes ''or'' no<br />
| no<br />
| no<br />
| When set to 'no' the job ignores affa --kill or affa --killall while deduplication is running. This is useful in the context of killAt, which is typically used to stop bandwidth utilisation.<br />
|-<br />
| sshPort<br />
| service port <br />
| no<br />
| 22<br />
| When sshd on the remote host listens on a non-standard port, set the port here.<br />
|-<br />
| ConnectionCheckTimeout<br />
| seconds<br />
| no<br />
| 120<br />
| Affa checks the ssh connection before the rsync process is started and exits with an error after the configured time if the host did not respond.<br />
|- <br />
| BandwidthLimit<br />
| integer>=0 kilobytes per second<br />
| no<br />
| 0<br />
| Limits the data transfer rate. A value of zero specifies no limit.<br />
|- <br />
| rsyncTimeout<br />
| seconds<br />
| no<br />
| 900<br />
| Rsync exits after the configured time if no data was transferred. This avoids infinitely hanging in case of a network error.<br />
|- <br />
| rsyncCompress<br />
| yes ''or'' no <br />
| no<br />
| yes<br />
| Compress the transferred data. May be useful with slow internet connections. Increases CPU load on remote and backup host.<br />
|- <br />
| rsync--inplace<br />
| yes ''or'' no<br />
| no<br />
| yes <br />
| Set to no if the rsync version on the remote host does not support this option.<br />
|-<br />
| rsync--modify-window<br />
| integer >= 0 <br />
| no<br />
| 0<br />
| When comparing two timestamps, rsync treats the timestamps as being equal if they differ by no more than the modify-window value. This is normally 0 for an exact match. A value > 0 is useful if you can't get the clocks of the remote host and the Affa server in sync.<br />
|-<br />
| rsyncOptions<br />
| string<br />
| no<br />
| <none><br />
| Additional option string to be passed to rsync<br />
|-<br />
| localNice <br />
| -19...+19<br />
| no<br />
| 0<br />
| run rsync local process niced. <br />
|-<br />
| remoteNice<br />
| -19...+19<br />
| no<br />
| 0<br />
| run rsync process on source niced. <br />
|-<br />
| killAt<br />
| HHMM<br />
| no<br />
| <none><br />
| The time at which a job will be killed if it was still running. You can use it for example to kill jobs that are running over the internet early in the morning so that your users have the full bandwidth available during office hours. <br />
|-<br />
| resumeKilledAt<br />
| HHMM<br />
| no<br />
| <none><br />
| The time at which a killed job will be resumed. This allows you to start earlier in the evening than the scheduled time.<br />
|-<br />
| chattyOnSuccess<br />
| integer >= 0<br />
| no<br />
| 0 <br />
| When set to a value>0, Affa sends a message on a successfully completed job run and decrements the chattyOnSuccess value. When the value has reached zero, Affa falls back to the default and only sends messages on errors.<br />
|-<br />
| AutomountDevice<br>AutomountPoint<br />
| full path<br />
| no<br />
| <none><br />
| Device and mountpoint of backup device (e.g. USB disk). Device is automounted before a job starts and unmounted after job completion. With both properties empty no automount is done.<br />
|-<br />
| AutomountOptions<br />
| string<br />
| no<br />
| <none><br />
| An option string passed to the mount command.<br />
|-<br />
| AutoUnmount <br />
| yes ''or'' no<br />
| no<br />
| yes<br />
| When set to 'no' the automounted device stays mounted after the Affa run.<br />
|-<br />
| Debug <br />
| yes ''or'' no<br />
| no<br />
| no<br />
| Set to yes to increase log verbosity.<br />
|-<br />
| remoteRsyncBinary <br />
| full path<br />
| no<br />
| /usr/bin/rsync<br />
| If the rsync program on the remote server is located in a non-standard location, set it here<br />
|-<br />
| remoteNiceBinary <br />
| full path<br />
| no<br />
| /bin/nice<br />
| If the nice program on the remote server is located in a non-standard location, set it here.<br />
|-<br />
| localRsyncBinary <br />
| full path<br />
| no<br />
| /usr/bin/rsync<br />
| If the local rsync program is located in a non-standard location, set it here<br />
|-<br />
| localNiceBinary <br />
| full path<br />
| no<br />
| /bin/nice<br />
| If the local nice program is located in a non-standard location, set it here.<br />
|-<br />
| RemoteAuthorizedKeysFile <br />
| path relative to remote user's home or full path<br />
| no<br />
| .ssh/authorized_keys2<br />
| If the remote host stores the authorized keys file in a non-standard location, set it here.<br />
|-<br />
| rsyncdMode<br />
| yes ''or'' no<br />
| no<br />
| no<br />
| Set to yes to connect to the rsync daemon on the remote host (instead of running rsync over ssh).<br />
|-<br />
| rsyncdModule<br />
| string<br />
| no<br />
| AFFA<br />
| The rsyncd module name (only applicable with rsyncdMode=yes).<br />
|-<br />
| rsyncdUser<br />
| string<br />
| no<br />
| affa<br />
| The username for authentication to the rsync daemon (only applicable with rsyncdMode=yes).<br />
|-<br />
| rsyncdPassword<br />
| string<br />
| no<br />
| <none><br />
| The password for authentication to the rsync daemon (only applicable with rsyncdMode=yes).<br />
<br />
|-<br />
| globalStatus<br />
| enabled ''or'' disabled ''or'' jobs<br />
| no<br />
| jobs<br />
| Set to enabled or disabled to override the status settings in all job sections. When set to the value jobs, the status settings in the job sections are effective.<br>'''Note:''' This property is allowed only in the [GlobalAffaConfig] section. <br />
|-<br />
| sendStatus<br />
| daily ''or'' weekly ''or'' monthly ''or'' never<br />
| no<br />
| weekly<br />
| Defines how frequently the status will be sent.<br>'''Note:''' This property is allowed only in the [GlobalAffaConfig] section. <br />
|-<br />
| SMEServer<br />
| yes ''or'' no<br />
| no<br />
| no<br />
| When set to yes, the default e-smith directories are automatically included and the property RPMCheck=yes can be used.<br>'''Note:''' this property is not implemented in Affa 3 for CentOS. It has been specifically added to the fork for SME.<br />
|-<br />
| RPMCheck<br />
| yes ''or'' no<br />
| no<br />
| no<br />
| Only applicable to jobs that back up a SME server. Compares the package installation of the source host with that of this Affa backup host and sends a message with a diff list if they are not in sync. This check is useful if you want to have the option to rise the backup server to a production server from a backup.<br>'''Note:''' this property is not implemented in Affa 3 for CentOS. It has been specifically added to the fork for SME.<br />
|-<br />
| Watchdog<br />
| yes ''or'' no<br />
| no<br />
| yes<br />
| Only applicable to jobs that back up a SME server. When a job is started, Affa installs a watchdog script on the source in /etc/cron.d/, which sends a warning message if the next scheduled job (taken from the TimeSchedule property + 10 minutes) did not run. This guarantees that you will be notified even in case of an Affa server outage. The watchdog script sends a daily reminder message if the error continues. The next run replaces the watchdog script with a new trigger time.<br>'''Note:''' this property is not implemented in Affa 3 for CentOS. It has been specifically added to the fork for SME.<br />
|}<br />
<br />
==== Default configuration properties ====<br />
For all 'name=value' properties, defaults can be set in the [GlobalAffaConfig] section; these defaults are used in every job configuration where the corresponding property is omitted. This is useful when you set up many similar jobs. <br />
<br />
Example: You want to set the property 'localNice' to 19 for all jobs. Then add the following section in one of your configuration files:<br />
[GlobalAffaConfig]<br />
localNice=19<br />
Consequently you don't need to set this property for your individual jobs. Properties set in the job record override the defaults.<br />
<br />
==== Properties for global Settings ====<br />
The following special properties are only applicable to the [GlobalAffaConfig] section:<br />
<ul><br />
<li>sendStatus</li><br />
<li>globalStatus</li><br />
</ul><br />
<br />
All jobs can be disabled for execution through 'cronjobs' by setting 'globalStatus' to 'disabled'.<br />
[GlobalAffaConfig]<br />
globalStatus=disabled<br />
<br />
To re-enable run, either delete the above configuration line, or set to:<br />
globalStatus=jobs<br />
which is the default value.<br />
<br />
You can also set the value to<br />
globalStatus=enabled<br />
which enforces job execution through 'cronjobs' and overrides the values specified in the individual job sections.<br />
<br />
=== Usage and command line options ===<br />
{{Note box|Options can be abbreviated to uniqueness, e.g. --mak is equal to --make-cronjobs}}<br />
'''affa --run JOB'''<br />
Starts a job run. Usually done by the cronjob. <br />
<br />
<br />
'''affa --make-cronjobs'''<br />
Configures the cronjobs as scheduled in the job records. Run this command to make changes of time-related properties (TimeSchedule, killAt and resumeKilledAt) effective immediately. Every 15 minutes Affa checks whether a configuration file is newer than the cronjob file and runs make-cronjobs automatically if so, so you can also just wait up to 15 minutes for updated job configurations to become effective.<br />
<br />
'''affa --configcheck'''<br />
Checks the syntax and values in all configuration files found in /etc/affa/. Run this command after modifying the configuration. Lower/upper case errors in property names are corrected automatically.<br />
<br />
'''affa --send-key [JOB JOB ...]'''<br />
This first generates the DSA key on the Affa server, if not already done. Then the public key is sent to the host 'remoteHostName' defined in the section of each given job, and the job-specific ssh known-hosts entry is generated.<br />
{{Note box|When initially doing this step, you will need to temporarily enable "Allow secure shell access using standard passwords" on the source server.}}<br />
<br />
'''affa --check-connections [JOB JOB ...]'''<br />
Checks the ssh login and, if applicable, the rsyncd authentication for all jobs given as arguments. Without any arguments, all jobs are checked.<br />
<br />
'''affa --full-restore [--preserve-newer=no] [--delete=yes] JOB [ARCHIVE]'''<br />
Does a full restore on the remote server of all backed-up files and directories from the backup ARCHIVE. If ARCHIVE is not given, the archive 'scheduled.0' is used as the default. The full restore reconstructs the server as it was at the time of the backup. After the restore, the source host reboots.<br />
<br />
With option --preserve-newer=yes files on the remote server with modification time newer than on the backup are not overwritten.<br />
<br />
With option --delete=yes all files on the remote server, which are not in the backup, are deleted.<br />
<br />
If the parameter “SMEServer=yes” is set, the signal-event pre-restore and signal-event post-upgrade will be used automatically.<br />
<br />
'''affa --list-archives JOB'''<br />
Displays a table of all present archives of job JOB with date, number of files, size and bytes received. While column buTime shows the actual rsync time, column ddTime shows the length of the subsequent deduplication run. ddYld is the gained deduplication yield in bytes.<br />
<br />
<small><pre><br />
Affa version 3.1.0-0 on affa-2.mydomain.com<br />
+------------------------------------------------------------------------------+<br />
| Job: fshare-ak |<br />
| Description: Fileserver AK |<br />
| Directory: /var/affa/fshare-ak/ |<br />
| Hostname: 10.204.104.4 |<br />
| Email: sysadmin@mydomain.com |<br />
+-----+----------------------+--------+--------+-------+-------+-------+-------+<br />
| Run | Completion date | buTime | ddTime | ddYld | Files | Size | Recvd |<br />
+-----+----------------------+--------+--------+-------+-------+-------+-------+<br />
| Y 0 | Wed 2010-06-23 20:26 | 11m53s | - | - | 412k | 143G | 470M |<br />
+-----+----------------------+--------+--------+-------+-------+-------+-------+<br />
| M11 | Sun 2010-08-29 20:22 | 7m50s | - | - | 417k | 153G | 14M |<br />
| M10 | Sun 2010-09-26 20:23 | 8m57s | - | - | 430k | 156G | 14M |<br />
| M 9 | Sun 2010-10-31 20:25 | 10m05s | - | - | 448k | 161G | 15M |<br />
| M 8 | Sun 2010-11-28 20:31 | 16m42s | - | - | 463k | 167G | 17M |<br />
| M 7 | Sun 2011-01-02 20:48 | 33m07s | - | - | 486k | 173G | 1.1G |<br />
| M 6 | Sun 2011-01-30 20:33 | 18m54s | - | - | 493k | 176G | 24M |<br />
| M 5 | Sun 2011-02-27 20:28 | 13m31s | - | - | 490k | 176G | 19M |<br />
| M 4 | Sun 2011-03-27 20:28 | 13m08s | - | - | 491k | 175G | 17M |<br />
| M 3 | Sun 2011-05-01 20:30 | 15m41s | - | - | 493k | 179G | 18M |<br />
| M 2 | Sun 2011-05-29 20:28 | 13m40s | - | - | 494k | 182G | 31M |<br />
| M 1 | Sun 2011-06-19 20:26 | 11m20s | - | - | 493k | 183G | 17M |<br />
| M 0 | Sun 2011-07-03 20:28 | 13m46s | - | - | 496k | 183G | 18M |<br />
+-----+----------------------+--------+--------+-------+-------+-------+-------+<br />
| W 3 | Sun 2011-07-10 20:28 | 13m38s | - | - | 479k | 180G | 16M |<br />
| W 2 | Mon 2011-07-11 20:35 | 20m27s | - | - | 479k | 180G | 301M |<br />
| W 1 | Fri 2011-07-15 20:30 | 15m53s | - | - | 480k | 180G | 62M |<br />
| W 0 | Sun 2011-07-24 19:39 | 9m25s | - | - | 482k | 181G | 16M |<br />
+-----+----------------------+--------+--------+-------+-------+-------+-------+<br />
| D 6 | Thu 2011-07-28 19:42 | 12m22s | - | - | 483k | 182G | 176M |<br />
| D 5 | Fri 2011-07-29 23:29 | 11m10s | 5h33m | 45G | 483k | 182G | 16M |<br />
| D 4 | Sat 2011-07-30 19:53 | 23m26s | 2h30m | 8.3G | 483k | 182G | 17M |<br />
| D 3 | Sun 2011-07-31 20:07 | 37m31s | 4m47s | 8.3G | 483k | 182G | 17M |<br />
| D 2 | Mon 2011-08-01 20:44 | 1h14m | 7h50m | 8.5G | 484k | 182G | 630M |<br />
| D 1 | Tue 2011-08-02 20:02 | 32m28s | 12h20m | 8.3G | 484k | 182G | 74M |<br />
| D 0 | Wed 2011-08-03 19:58 | 28m46s | 11h01m | 8.5G | 484k | 182G | 214M |<br />
+-----+----------------------+--------+--------+-------+-------+-------+-------+<br />
| S 0 | Thu 2011-08-04 20:00 | 30m28s | 11h52m | 8.5G | 484k | 182G | 203M |<br />
+-----+----------------------+--------+--------+-------+-------+-------+-------+<br />
</pre></small><br />
<br />
'''affa --list-archives [--csv] JOB'''<br />
With --csv, the output is in a machine-readable semicolon-separated format.<br />
<small><pre><br />
Archive:Count;Date;Files;Size;RootDirFilesystemAvail;RootDirFilesystemUsed;valid;TotalBytesReceived;ExecutionTime;DedupTotalFiles;DedupReplacedFiles;DedupSavedBytes;DedupExectime;DedupDate<br />
monthly;00000;201509270631;393237;37252509103;166889260;47876936;yes;1327370;1160638358;99;;;;;<br />
weekly;00003;201510040531;390273;37042612135;164566692;50199504;yes;1117726;45430532;59;;;;;<br />
weekly;00002;201510110531;395553;37299589800;162612204;52153992;yes;1117092;92369237;64;;;;;<br />
weekly;00001;201510180531;403831;37707599172;161855844;52910352;yes;1317083;148317764;64;;;;;<br />
weekly;00000;201510250531;406509;38146200127;161041156;53725040;yes;1104759;64083948;63;;;;;<br />
daily;00006;201510310531;415663;39248862982;162979176;51787020;yes;1958110;198060283;66;;;;;<br />
daily;00005;201511010531;411014;38563389171;163061136;51705060;yes;1156506;48903675;75;;;;;<br />
daily;00004;201511020531;408910;38612857040;162359124;52407072;yes;1007434;210812387;75;;;;;<br />
daily;00003;201511030531;410615;37815861577;162026060;52740136;yes;1524069;162697515;67;;;;;<br />
daily;00002;201511040531;414784;38131734213;161990024;52776172;yes;1571657;134250735;63;;;;;<br />
daily;00001;201511050531;420780;38433895988;161906580;52859616;yes;2021507;149388808;103;;;;;<br />
daily;00000;201511060531;425920;38677614350;161778812;52987384;yes;1987971;166510621;71;;;;;<br />
scheduled;00000;201511070531;425663;38815712018;161612600;53153596;yes;2003540;139603231;68;;;;;<br />
</pre></small><br />
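This semicolon-separated output lends itself to post-processing with standard tools. The following is a minimal sketch, not part of Affa itself: the sample records in the here-document are shortened copies of the listing above, and in practice you would pipe the real `affa --list-archives --csv JOB` output into the awk filter instead.

```shell
#!/bin/sh
# Summarize the --csv archive listing: archive name, date and file count.
# Field positions are taken from the header line shown above:
#   $1=archive type, $2=index, $3=date (YYYYMMDDhhmm), $4=files, $5=bytes
awk -F';' 'NR > 1 {                       # NR > 1 skips the header line
    printf "%s.%d %s files=%s\n", $1, $2+0, $3, $4
}' <<'EOF'
Archive:Count;Date;Files;Size
monthly;00000;201509270631;393237;37252509103
daily;00000;201511060531;425920;38677614350
EOF
```

The `$2+0` strips the zero padding from the index, so `monthly;00000;…` prints as `monthly.0`.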
<br />
'''affa --status [--csv]'''<br />
Displays a table of all configured jobs with enable status, time of the last and next run, size of the most recent archive, execution time of the last run and the number of scheduled (S), daily (D), weekly (W), monthly (M) and yearly (Y) archives. If 'Last' shows 'failed', the job did not run in the last 24 h. For disabled jobs, 'Last' always shows 'failed' after 24 h. To see the date and time of the last run of such jobs, use the --list-archives option. Column 'Next' shows the time when the next run will be started, if 'Enabled' is 'yes'. Column 'ddYld' shows the deduplication yield in bytes.<br />
<br />
<small><pre><br />
Affa version 3.1.0-0 on affa-2.mydomain.com<br />
+--------------+-----+-------+--------+-------+-------+-------+----------------+<br />
| Job | ENA | Last | Time | Next | Size | ddYld | N of S,D,W,M,Y |<br />
+--------------+-----+-------+--------+-------+-------+-------+----------------+<br />
| ads-ak | yes | 20:20 | 0m29s | 20:20 | 28M | - | 1, 7, 2, 0, 0 |<br />
| ak-user-ma.. | yes | 22:31 | 1m51s | 22:30 | 170M | - | 3, 7, 4, 6, 0 |<br />
| azubi-1 | yes | 03:39 | 3h24m | 00:15 | 44G | 1.2G | 1, 7, 4,10, 1 |<br />
| dcpant | yes | 03:06 | 1m03s | 03:05 | 1.3G | - | 1, 7, 4,10, 1 |<br />
| eshare | yes | 21:06 | 21m01s | 20:45 | 24G | 178M | 1, 7, 4,10, 1 |<br />
| etherpad | yes | 20:40 | 0m13s | 20:40 | 3.1M | - | 1, 7, 4, 2, 0 |<br />
| fazubi | yes | 08:16 | 11h16m | 21:00 | 132G | 3.3G | 1, 7, 4,10, 1 |<br />
| fschare-rh | yes | de-duplicating (pid 9719) | 1, 7, 4,10, 1 |<br />
| fsh-02 | yes | 07:53 | 12h23m | 19:30 | 182G | 8.5G | 1, 7, 4,10, 1 |<br />
| fshare-ak2 | yes | 00:30 | 0m26s | 00:30 | 415M | 544k | 1, 7, 4,10, 1 |<br />
| helpdesk | yes | 21:27 | 2m16s | 21:25 | 895M | 138k | 1, 7, 4,10, 0 |<br />
| it-share | yes | running rsync (pid 9744) | 1, 7, 1, 0, 0 |<br />
| lightroom | yes | waiting (pid 9528) | 1, 7, 4, 2, 0 |<br />
| localhost | yes | 02:15 | 0m08s | 02:15 | 395k | - | 1, 7, 2, 0, 0 |<br />
| mediawiki | yes | 16:10 | 0m06s | 19:40 | 1.1G | 6.7M | 3, 7, 4,10, 1 |<br />
| mshare | yes | 00:48 | 1h33m | 23:15 | 18G | 2.4G | 1, 7, 4,10, 1 |<br />
| wshare | yes | 00:34 | 1h49m | 22:45 | 23G | 484M | 1, 7, 4,10, 1 |<br />
+--------------+-----+-------+--------+-------+-------+-------+----------------+<br />
2 disabled jobs not listed. Use --all to display.<br />
</pre></small><br />
With --csv, the output is in a machine-readable semicolon-separated format.<br />
<br />
'''affa --show-config-pathes [--csv] [JOB JOB ...]'''<br />
Prints the full paths of the files in which the sections of the given jobs are defined.<br />
<br />
'''affa --show-default-config'''<br />
Prints a list of all allowed property names with their default values. These values are used, when omitted in the job sections.<br />
<br />
'''affa --show-schedule [--all]'''<br />
Prints a 'graphical' timetable for all enabled jobs. The resolution is 30 minutes. An 'S' character marks the scheduled start times. The durations of the job runs are marked with '=' characters.<br />
<small><pre><br />
Affa version 3.1.0-0 on affa-2.mydomain.com<br />
TIME 12:00 16:00 20:00 0:00 4:00 8:00<br />
fsh-02 -------- -------S ~~~~~~~~ ~~~~~~~~ ~~~~~~~~ --------<br />
ads-ak -------- -------- S------- -------- -------- --------<br />
etherpad -------- -------- -S------ -------- -------- --------<br />
eshare -------- -------- -S~----- -------- -------- --------<br />
fazubi -------- -------- --S~~~~~ ~~~~~~~~ ~~~~~~~~ ~-------<br />
helpdesk -------- -------- --S----- -------- -------- --------<br />
it-share -------- -------- ---S=~~~ ~~~~~~~~ ~~~~~~~- -------- busy<br />
wshare -------- -------- -----S~~ ~~------ -------- --------<br />
ak-user-management -------- -------- -----S-- -------- -------- --------<br />
fschare-rh -------- -------- -----S=- -------- --~----- -------- busy<br />
mshare -------- -------- ------S= ~~------ -------- --------<br />
azubi-1 -------- -------- -------- S=~~~~~~ -------- --------<br />
fshare-ak2 -------- -------- -------- -S------ -------- --------<br />
mediawiki -------- -------- -------- ---S---- -------- ----S---<br />
localhost -------- -------- -------- ----S--- -------- --------<br />
dcpant -------- -------- -------- ------S- -------- --------<br />
lightroom -------- -------- -------- -------- -------- S------- busy<br />
Symbols: S=scheduled K=kill R=resume '='=rsync '~'=dedup<br />
2 disabled jobs not listed. Use --all to display.<br />
</pre></small><br />
<br />
'''affa --log-tail [JOB]'''<br />
Displays the tail of the logfile of job JOB with live update. This command is identical to tail -n 50 -f /path/to/logfile. Without the JOB argument the global logfile is shown.<br />
<br />
'''affa --send-status'''<br />
Sends the status table, the disk-usage and the archive list of all jobs to the email addresses configured in the [GlobalAffaConfig] section.<br />
<br />
'''affa --disk-usage'''<br />
Shows the current disk usage of all root dir filesystems.<br />
<small><pre><br />
Affa version 3.1.0-0 on affa-2.mydomain.de<br />
+------+--------+--------+----------------------------------------------------+<br />
| Use% | Used | Avail | Root Dir |<br />
+------+--------+--------+----------------------------------------------------+<br />
| 71% | 938GB | 365GB | /var/affa |<br />
| 20% | 194GB | 759GB | /mnt/affadev |<br />
+------+--------+--------+----------------------------------------------------+<br />
</pre></small><br />
With --csv, the output is printed in a machine-readable semicolon-separated format.<br />
<br />
'''affa --cleanup JOB'''<br />
After you have lowered a keep value, e.g. scheduledKeep, archives with higher indices will no longer be shifted and would exist forever. This option finds these archives and deletes them after confirmation.<br />
<br />
'''affa --rename-job JOB NEWNAME'''<br />
Renames the job JOB to NEWNAME including the section name and archive directories.<br />
<br />
'''affa --move-archive JOB NEWROOTDIR'''<br />
Moves the archive directory of job JOB to the rootdir NEWROOTDIR and adjusts the value of property RootDir. NEWROOTDIR must be a full path starting with a slash. As moving across filesystems (e.g. from an external USB drive to the local disk) is not possible, Affa uses a copy command in this case and deletes the source directory after that. Depending on the archive size, copying across filesystems can take a long time.<br />
<br />
'''affa --delete-job [--revoke-key] JOB'''<br />
Irreversibly deletes all archives of job JOB and sets the status property to disabled. With the --revoke-key option, the public key on the remote server is deleted as well.<br />
<br />
'''affa --revoke-key JOB'''<br />
Deletes the public dsa key on the remote server.<br />
<br />
'''affa --kill JOB'''<br />
Terminates the running job JOB and all its child processes (rsync processes).<br />
<br />
'''affa --killall'''<br />
Terminates all running jobs.<br />
<br />
'''affa --mailtest JOB'''<br />
Sends a test email to the email addresses configured in the JOB section. Use this to verify that your mail system is working.<br />
{{Note box|By default Affa only sends messages on errors, never on success (see property chattyOnSuccess).}}<br />
<br />
'''affa --nrpe [JOB JOB ...]'''<br />
Checks for jobs that did not run in the last NRPRtrigger hours and reports them as failed. A Nagios/ICINGA NRPE-compliant message is printed and an exit status is returned.<br />
<br />
'''affa --version'''<br />
Displays the Affa version number and checks Samba and Freedup installation.<br />
<br />
'''affa --warranty'''<br />
Displays the disclaimer of liability.<br />
<br />
'''affa --license'''<br />
Displays the license Affa is released under.<br />
<br />
'''affa --help'''<br />
Displays a short help.<br />
<br />
'''affa --debug'''<br />
Enables verbose logging. Overrides job and global configurations.<br />
<br />
==== SME Specific usage and command line options ====<br />
<br />
These commands are implemented in a fork of Affa 3 project for use with SME. They are not part of the Affa 3 for CentOS project.<br />
<br />
'''affa --rise [--all] JOB [ARCHIVE]'''<br />
Runs a full restore on the Affa server <b>(!!!)</b> of all standard files and directories from the backup ARCHIVE of job JOB. In other words: after completion, the Affa box reboots as a clone of the source server. Ensure that the source server has been powered off before you reboot the Affa box, otherwise the network interface will not come up. This is important when you run --rise remotely. The --rise feature only works with SME servers and should only be used on dedicated backup servers.<br />
<br />
With option --all, all files and directories of the archive as defined by the include[] properties are restored. Files or directories with the same name on the Affa server will be overwritten and cannot be restored by an undo-rise. This should not be an issue on a dedicated Affa server which does not hold any other data. After a possible undo-rise, any additionally restored data must be removed manually.<br />
<br />
Please note that the rise process backs up the Affa server itself before doing the restore from the archive. This backup is used by a possible undo-rise run to restore the Affa server. Only the standard files and directories are backed up. Data in non-standard locations (like /opt) are untouched and will still exist after the rise run '''if they don't get overwritten by data of the backup''' (e.g. if the backup contains /opt too).<br />
<br />
{{Note box|The above command is not implemented in Affa 3 for CentOS. It has been specifically added to the fork for SME.}}<br />
<br />
'''affa --undo-rise'''<br />
This feature reverts a risen Affa box to a backup server. After a reboot, all configured jobs '''based on standard files and directories''' will work again.<br />
<br />
{{Note box|The above command is not implemented in Affa 3 for CentOS. It has been specifically added to the fork for SME.}}<br />
<br />
=== Example setups ===<br />
==== Dedicated backup server ====<br />
* Set up a dedicated server and install Affa 3 for SME.<br />
* Set up a job for every server you want to back up.<br />
* Send the public keys to every server.<br />
affa --send-key JOBNAME<br />
* Check whether password-less logins are working.<br />
affa --check-connections<br />
* Check whether the scheduled jobs are evenly distributed over the day.<br />
affa --show-schedule<br />
* Create the cron jobs.<br />
affa --make-cronjobs<br />
* Check the status after 24 hours.<br />
affa --status<br />
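The checklist above can be sketched as one small script. This is a hedged sketch: 'mail1' and 'web1' are hypothetical job names, and every command is prefixed with echo so the script only prints what it would run; remove the echo prefixes once the output looks right.

```shell
#!/bin/sh
# Dry-run sketch of the dedicated-backup-server setup steps above.
# JOBS is a hypothetical list; replace it with your real job names.
JOBS="mail1 web1"

for job in $JOBS; do
    echo affa --send-key "$job"      # push the public key to each source host
done
echo affa --check-connections        # verify password-less logins work
echo affa --show-schedule            # check the start times are spread out
echo affa --make-cronjobs            # activate the cron schedules
```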
<br />
==== Backup of single ibays ====<br />
Suppose you want to back up the ibays 'staff1' and 'staff2' on your production server with WAN IP 82.123.1.1 to an Affa server in a different location over the internet, every night at 2:30 am.<br />
* Log into the Affa server.<br />
* Create a dedicated job file ''/etc/affa/ibay-staff.conf'' . Edit it and set<br />
[ibay-staff]<br />
Description=Backup of ibay-staff on 82.123.1.1<br />
remoteHostName=82.123.1.1<br />
TimeSchedule=0230<br />
SMEServer=no<br />
Include=/home/e-smith/files/ibays/staff1<br />
Include=/home/e-smith/files/ibays/staff2<br />
* Save the configuration<br />
* Send the public key to the production server<br />
affa --send-key ibay-staff<br />
* Check the next morning<br />
affa --list-archives ibay-staff<br />
affa --status<br />
ls /var/affa/ibay-staff<br />
<br />
==== Two production servers backup each other ====<br />
You have two sites connected via a VPN and a SME Server running on each site. In that case you don't need a dedicated Affa backup server. Both production servers can additionally act as Affa backup servers, backing up the server of the other site. Simply install Affa and configure a job that backs up the other one. You can use all Affa features except the rise feature.<br />
<br />
When using the rise feature, the server becomes one of the backed-up systems. This is less useful in this scenario, as it would give you a running copy of the other site's server while this site's server is down.<br />
<br />
To get redundancy and a faster restore you can configure a local backup to an external USB or NAS device.<br />
<br />
Take care that you do not back up the archives to the other site. Set the RootDir property to a path which is not included in the SME default backup list. When leaving the RootDir property at its default '/var/affa', this is guaranteed.<br />
<br />
==== Use Affa to backup to a NFS-mounted NAS or a local attached USB drive ====<br />
<br />
{{Note box|This chapter still needs to be checked whether it works with Affa 3 for SME.}}<br />
<br />
You want to back up your SME production server with hostname 'prodbox' and IP 10.200.48.1 to a mounted filesystem instead of setting up a dedicated Affa box.<br />
<br />
===== Setup NAS =====<br />
You have a [http://www.freenas.org FreeNAS] box with IP 10.200.48.2 up and running with NFS service enabled. The disk is mounted to /mnt/affashare. You have authorized the IP address of your prodbox server to access share /mnt/affashare.<br />
* log into the 'prodbox' and install the NFS packages<br />
/usr/bin/yum install --enablerepo=smecontribs smeserver-nfs<br />
* now enable and start the portmapper service<br />
config setprop portmap status enabled<br />
service portmap start<br />
<br />
* mount the NFS share<br />
mkdir -p /mnt/affadevice<br />
mount 10.200.48.2:/mnt/affashare /mnt/affadevice<br />
<br />
<br />
===== Alternatively setup a USB drive =====<br />
* log into the 'prodbox' <br />
* connect a USB hard disk to the USB bus. Now you must determine which device name the kernel has assigned to the drive. View /var/log/messages and search for ''Initializing USB Mass Storage driver''. A few lines below you'll find the name of the device. In this example it is ''sdh''. Replace ''/dev/sdh'' with ''your device'' in the following instructions.<br />
* use the fdisk program to create a linux partition. '''Verify that this is really the attached USB drive before you continue!'''<br />
<br />
fdisk /dev/sdh<br />
You'll most likely find an existing vfat DOS partition, which you have to delete first. In the following we assume that you have created a single partition ''/dev/sdh1''.<br />
* now format the drive with an ext3 filesystem<br />
mkfs.ext3 /dev/sdh1<br />
* make the mount point<br />
mkdir -p /mnt/affadevice<br />
* add the following line to the /etc/fstab<br />
 /dev/sdh1 /mnt/affadevice ext3 defaults 0 0<br />
* mount the drive<br />
mount /mnt/affadevice<br />
* crosscheck your work using the df command<br />
df<br />
<br />
===== Copying an Affa USB hard drive archive to a new disk =====<br />
Affa uses copious amounts of hard links to reduce the disk space needed for its backups. If you want to copy such a disk archive to a new (bigger) disk, you need to ensure that the hard links are copied correctly, or the destination copy may become significantly bigger than the source archive.<br />
<br />
One way to copy across filesystems (e.g. between two different USB disks) and preserve the hard links is as follows:<br />
<br />
* mount both USB drives but with different mount points. e.g. /media/backup1 & /media/backup2 and then:<br />
<br />
mkdir /media/backup2/archive <br />
cd /media/backup1/archive<br />
tar cpf - . | ( cd /media/backup2/archive && tar xpf - )<br />
<br />
where '''archive''' is the name of the Affa job you want to move to the other disk.<br />
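You can check on a small scale that this tar pipeline really preserves hard links. The sketch below builds a throw-away tree with two hard-linked names, copies it the same way, and checks the link count on the destination; the directory names are temporary ones made up for the demo.

```shell
#!/bin/sh
# Demo: the tar pipe keeps hard links intact across a copy.
set -e
src=$(mktemp -d)
dst=$(mktemp -d)

echo data > "$src/a"
ln "$src/a" "$src/b"                 # second name for the same inode

( cd "$src" && tar cpf - . ) | ( cd "$dst" && tar xpf - )

# A preserved hard link shows link count 2; a plain file copy would show 1.
stat -c %h "$dst/a"
rm -rf "$src" "$dst"
```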
<br />
===== Setup Affa =====<br />
You want to run backups at 11:30, 15:30 and 19:30, and you want to keep the last 3 scheduled backups, 7 daily, 5 weekly, 12 monthly and 1 yearly backups.<br />
* log into the 'prodbox' and install the Affa packages as described above.<br />
* create/edit the conf file of the job: nano /etc/affa/prodbox.conf<br />
and set:<br />
[prodbox]<br />
remoteHostName=localhost<br />
TimeSchedule=1130<br />
TimeSchedule=1530<br />
TimeSchedule=1930<br />
scheduledKeep=3<br />
dailyKeep=7<br />
weeklyKeep=5<br />
monthlyKeep=12<br />
yearlyKeep=1<br />
RootDir=/mnt/affadevice<br />
Review the other properties and change them to your needs.<br />
* run the job manually<br />
affa --run prodbox<br />
<br />
===== Limitations =====<br />
With this kind of setup you cannot use the Affa rise feature, as it requires the backup archive to be located on the '''same filesystem''' as the server installation. The rise option uses hard links, which do not work across filesystems.<br />
<br />
===== Automount =====<br />
Having the backup archives in the same filesystem is always a risk, which can be minimized by using the automount feature. The external filesystem is then only mounted during a job run.<br />
<br />
In the NAS example set<br />
AutomountDevice=10.200.48.2:/mnt/affashare<br />
AutomountPoint=/mnt/affadevice<br />
and skip the step 2.<br />
<br />
In the USB drive example set<br />
AutomountDevice=/dev/sdc1<br />
AutomountPoint=/mnt/affadevice<br />
and skip the steps 5 to 8.<br />
<br />
The mount point will be automatically created, if it does not exist. <br><br />
To access the archive directory, you need to mount it manually.<br />
<br />
=== Restore ===<br />
==== Restore single files or directories ====<br />
'''Example 1:''' It's Tuesday January 6th 2009, when user 'briedlin' asks you to restore the messages of his mailbox 'orders' he has accidentally deleted on Monday.<br />
<br />
* First check which backup archives are available. The job name of this server backup is 'primmail'. To get a listing of all archives run<br />
affa --list-archives primmail<br />
''(see the example listing in chapter [[Affa#Usage_and_command_line_options]])''<br />
* Choose the daily.0 archive, which was created Monday night. Restore the mailbox 'orders' by running the rsync command ''(note the trailing slash!)'' on the Affa backup server:<br />
export RDIR=/home/e-smith/files/users/briedlin/Maildir/.orders/ ''# this variable is used to shorten the next command line''<br />
rsync -av /var/affa/primmail/daily.0/$RDIR 10.204.48.1:$RDIR<br />
<br />
If the servers are configured to use a different ssh port eg 2222, then instead do:<br />
<br />
export RDIR=/home/e-smith/files/users/briedlin/Maildir/.orders/ ''# this variable is used to shorten the next command line''<br />
rsync -av -e 'ssh -p 2222' /var/affa/primmail/daily.0/$RDIR 10.204.48.1:$RDIR<br />
<br />
<br />
<br />
'''Example 2:''' A user has deleted the file orderform.pdf from ibay 'docs' on the server 10.204.48.1 and asks you to restore it.<br />
<br />
* You have searched and found the latest version of this file in weekly archive of job 'prodserv'.<br />
* To copy it back to the server 10.204.48.1 run on the Affa server<br />
export RFILE=/home/e-smith/files/ibays/docs/files/orderform.pdf ''# this variable is used to shorten the next command line''<br />
rsync -av /var/affa/prodserv/weekly.1/$RFILE 10.204.48.1:$RFILE<br />
<br />
* If the servers are configured to use a different ssh port eg 2222, then instead do:<br />
export RFILE=/home/e-smith/files/ibays/docs/files/orderform.pdf ''# this variable is used to shorten the next command line''<br />
rsync -av -e 'ssh -p 2222' /var/affa/prodserv/weekly.1/$RFILE 10.204.48.1:$RFILE<br />
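Both examples follow the same pattern, which you can wrap in a small helper. `restore_path` below is a hypothetical shell function, not part of Affa; it only prints the rsync command it would run, so you can inspect the command before executing it (drop the leading `echo` once you have checked what it builds).

```shell
#!/bin/sh
# Hypothetical wrapper for the restore pattern shown in the examples above.
restore_path() {
    job=$1 archive=$2 host=$3 path=$4 port=${5:-22}
    echo rsync -av -e "ssh -p $port" "/var/affa/$job/$archive$path" "$host:$path"
}

# Example 1 above, default ssh port:
restore_path primmail daily.0 10.204.48.1 /home/e-smith/files/users/briedlin/Maildir/.orders/
# Example 2 above, ssh on port 2222:
restore_path prodserv weekly.1 10.204.48.1 /home/e-smith/files/ibays/docs/files/orderform.pdf 2222
```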
<br />
<br />
<br />
'''Example 3:''' Restoring a file with special characters in the path<br />
<br />
Do not use shell variables to shorten the command. It would complicate things more than it would help.<br />
Quote the source and destination path. In the destination path escape blank characters with a double backslash and brackets with a single backslash. On the Affa backup server do:<br />
rsync -av "/var/affa/fileshare/daily.4/home/e-smith/files/ibays/mechfiles/files/Valve Control (Design Gr)/VALVE LIST FOR ISSUED.xls" "192.168.1.7:/home/e-smith/files/ibays/mechfiles/files/Valve\\ Control\\ \(Design\\ Gr\)/VALVE\\ LIST\\ FOR\\ ISSUED.xls"<br />
<br />
If the servers are configured to use a different ssh port eg 2222, then instead do:<br />
<br />
rsync -av -e 'ssh -p 2222' "/var/affa/fileshare/daily.4/home/e-smith/files/ibays/mechfiles/files/Valve Control (Design Gr)/VALVE LIST FOR ISSUED.xls" "192.168.1.7:/home/e-smith/files/ibays/mechfiles/files/Valve\\ Control\\ \(Design\\ Gr\)/VALVE\\ LIST\\ FOR\\ ISSUED.xls"<br />
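Instead of writing the backslash escaping by hand, you can let the shell generate the remote-side quoting with bash's `printf %q` builtin. A minimal sketch, assuming bash is available; the path is the one from the example, and the commented rsync line shows where the escaped value would be used:

```shell
#!/bin/bash
# Let bash produce the escaping the remote shell expects.
file='/home/e-smith/files/ibays/mechfiles/files/Valve Control (Design Gr)/VALVE LIST FOR ISSUED.xls'

# %q quotes the string so it survives one round of shell expansion
# on the remote side.
remote=$(printf '%q' "$file")
echo "$remote"

# The escaped value can then be used as the remote half of the copy:
# rsync -av "/var/affa/fileshare/daily.4$file" "192.168.1.7:$remote"
```

Because `$remote` is expanded exactly once before rsync hands it to the remote shell, single backslashes suffice here, where the hand-typed command above needed doubled ones inside its double quotes.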
<br />
==== Full restore ====<br />
Generally: <br />
'''affa --full-restore [optional settings] JOB [ARCHIVE]'''<br />
This rsyncs the data (files and directories) from the backup ARCHIVE back to the 'remoteHostName' defined in the configuration of the job JOB. <br />
If ARCHIVE is not given, the archive 'scheduled.0' is used as the default.<br />
The --full-restore reconstructs the server as it was at the time of the backup and takes the following IMPORTANT optional settings into consideration:<br />
<br />
With option [--preserve-newer=yes] files on the remote server with modification time newer than on the backup are not overwritten.<br />
<br />
With option [--delete=yes] all files on the remote server, which are not in the backup, are deleted.<br />
<br />
If the parameter “SMEServer=yes” is set, the signal-event pre-restore and signal-event post-upgrade will be used automatically.<br />
<br />
After the restore is done, the restored server reboots automatically. <br />
<br />
Example:<br />
You have backed up your production server 'prodsrv' as job 'prodbox'. To restore only lost data from the latest backup run<br />
affa --full-restore prodbox<br />
<br />
To restore exactly from the older archive daily.3 run <br />
 affa --full-restore --preserve-newer=no --delete=yes prodbox daily.3<br />
<br />
{{Warning box|A full restore with --preserve-newer=no and --delete=yes reconstructs the server as it was at the time of the backup. That means that all files created and all server configuration changes made after the backup will be lost!}}<br />
<br />
==== Moving a SME server installation to new hardware using the Affa rise feature ====<br />
Please see this Howto: [[Moving SME to new Hardware]]<br />
<br />
<br />
==== Restore from USB drive on new server ====<br />
<br />
This tip comes from http://forums.contribs.org/index.php?topic=42412.0<br />
<br />
Q) I have complete backups made by Affa stored on a USB hard drive connected to our Affa backup server. I need to restore an earlier monthly backup of our job "mailbackup" to a test server rather than back to the original system. I see instructions on how to rise the backup server to its current backup, but I want to restore from a point about a month ago, before some strange things happened, and I want to do it on a machine that is neither our backup server nor our production server. I tried to figure out how, but I am lost in the options. My goal is to do some testing.<br />
<br />
A) On your test server, set up a job mailbackup identical to that on your backup server, but set the property RootDir to /var/affa and the property status to disabled. Connect the USB drive and copy the archive of the job mailbackup to /var/affa. Then run affa --rise mailbackup ARCHIVE<br />
<br />
Remember that Affa places a job setup script in each archive directory; in your case it is mailbackup-setup.pl. Copy it to your test server, change the RootDir and status properties and execute it.<br />
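In Affa 3 terms, the job configuration on the test server would differ from the one on the backup server only in the two properties mentioned above. A sketch (assuming the Affa 3 file-based format, e.g. /etc/affa/mailbackup.conf; all other properties stay as on the backup server):<br />

```ini
[mailbackup]
; identical to the job on the backup server, except:
RootDir=/var/affa
status=disabled
```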
<br />
=== FAQ ===<br />
'''What files and directories are included by default?'''<br />
<br />
With SMEServer=no nothing at all.<br />
<br />
With SMEServer=yes the SME default backup list is given by entering the command<br />
perl -e 'use esmith::Backup;$b=new esmith::Backup;print join("\n",$b->restore_list)."\n"' <br />
and as additional folder:<br />
/etc/affa<br />
<br />
{{Note box|The following example needs to be reworked for Affa 3.}}<br />
'''Can I exclusively backup image files from a specific directory?'''<br />
Yes. Assuming you want to back up all gif and jpg files from the directory /home/e-smith/files/ibays/pictures/files, use this configuration:<br />
db affa setprop JOBNAME SMEServer no # do not include SME Server default directories and files<br />
db affa setprop JOBNAME Include[0] '/home/e-smith/files/ibays/pictures/files' # start searching here<br />
db affa setprop JOBNAME Include[1] '*/' # search the tree recursively<br />
db affa setprop JOBNAME Include[2] '*.gif' # copy files that match these patterns<br />
db affa setprop JOBNAME Include[3] '*.jpg' <br />
db affa setprop JOBNAME Exclude[0] '*' # exclude all others<br />
db affa setprop JOBNAME rsyncOptions '-m' # do not create empty folders<br />
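As the note above says, this example still uses the Affa 2 db syntax. An untested sketch of how the same job might look in Affa 3's file-based format (the Include/Exclude patterns are carried over unchanged; remoteHostName is a placeholder, and the mapping should be verified against the Affa 3 manual):<br />

```ini
[pictures]
; hypothetical Affa 3 rework of the Affa 2 example above
remoteHostName=192.168.0.2
SMEServer=no
Include=/home/e-smith/files/ibays/pictures/files
Include=*/
Include=*.gif
Include=*.jpg
Exclude=*
rsyncOptions=-m
```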
<br />
<br />
'''How can I move a backup archive from my production server in a remote branch office to the Affa server using a USB disk? I want to avoid downloading the first full backup over the internet.'''<br />
Install Affa on your production server, then connect and mount the USB disk. Set up a job to write the archive to the USB disk. After the job has run, connect the USB disk to your Affa server and set up the job there in the same way. Use the setup script from the archive directory; you only need to change the remoteHostName property. Now use the --move-archive option to move the archive to the local disk. If not already done, send the public key to your production server. You're done.<br />
<br />
<br />
'''How do I backup two SME servers behind a firewall?'''<br />
First you need to configure port forwards for the ssh service on your firewall. Use a non-standard port, e.g. 2200->22, for your second server. Set up a job on your Affa server for each of your production servers, and set the job property sshPort=2200 for the second server.<br />
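For example, the second server's job configuration might contain (hypothetical host name, matching the forwarding rule above):<br />

```ini
[server2]
remoteHostName=firewall.example.com
sshPort=2200
```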
<br />
=== Uninstall ===<br />
This removes the installed Affa package, all configuration data and all backup archives.<br />
<br />
Run the following commands for each job:<br />
affa --revoke-key JOBNAME<br />
affa --delete-job JOBNAME<br />
Verify that all jobs have been deleted:<br />
affa --status<br />
Remove the Affa package:<br />
 yum remove smeserver-affa<br />
Cleaning up:<br />
rm -f /etc/cron.d/affa<br />
rm -rf /etc/affa<br />
rm -rf /var/affa /var/log/affa<br />
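With several jobs, the per-job commands above can be generated from the job configuration files. This is an untested convenience sketch (it assumes the Affa 3 layout of one /etc/affa/JOB.conf per job, and only prints the commands; pipe the output to sh to execute them):<br />

```shell
# Print the per-job cleanup commands for every job config found in a directory.
list_cleanup_cmds() {
    dir=$1
    for conf in "$dir"/*.conf; do
        [ -e "$conf" ] || continue            # directory empty or missing
        job=$(basename "$conf" .conf)
        echo "affa --revoke-key $job"
        echo "affa --delete-job $job"
    done
}

list_cleanup_cmds /etc/affa
```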
<br />
=== Troubleshooting ===<br />
==== Affa stops working after prodserver was rebuilt ==== <br />
'''Description:'''<br />
* Affa stopped working or is broken.<br />
* When affaserver tries to make an ssh connection to prodserver it fails.<br />
* In the /var/log/affa/''jobname''.log file you will see this message: "SSH connection to ''prodserver ip'' failed. Did you send the public key". <br />
* Resending the keys does not solve the problem. <br />
<br />
'''Solution:'''<br />
* This probably happens because the key for prodserver that is stored on the affaserver no longer matches the current key of prodserver. <br />
* You need to remove the stale key so that the next ssh connection fetches the key from prodserver again. <br />
* From the command line, edit the /root/.ssh/known_hosts file and remove the line for prodserver (it starts with: ''prodserver ip'' ssh-rsa), or run ssh-keygen -R ''prodserver ip''<br />
<br />
=== Bug report ===<br />
Affa is listed in the [http://bugs.contribs.org/enter_bug.cgi?product=SME%20Contribs&component=smeserver-affa bugtracker contribs section].<br />
Please report all bugs, new feature requests and documentation issues there.<br />
<br />
<br />
{{#bugzilla:columns=id,product,version,status,summary |sort=id |order=desc |disablecache=1|component=smeserver-affa|noresultsmessage="No open bugs found."}}<br />
<br />
=== Additional information ===<br />
For details of performance, changelog etc see [[:Affa:Additional information]]<br />
<br />
=== References ===<br />
* Wikipedia article http://en.wikipedia.org/wiki/Rsync<br />
* Rsync homepage http://rsync.samba.org/<br />
* Wiki article [[Moving SME to new Hardware]]<br />
* Wiki article [[Backup with Affa and FreeDup]]<br />
* Wiki article [[Rsyncd setup on a windows computer for use with Affa backup]]<br />
* Wiki article [[Backup Windows Computers Using Affa]]<br />
* Wiki article [[Backup of ESXi Virtual Machines using Affa]]<br />
<br />
----<br />
[[Category: Contrib]]<br />
[[Category: Backup]]</div>Arnaudhttps://wiki.koozali.org/index.php?title=Affa&diff=33080Affa2017-04-01T19:20:58Z<p>Arnaud: /* Setup Affa */</p>
<hr />
<div>{{Languages|Affa}}<br />
{{Level|Advanced}}<br />
{{usefulnote}}<br />
<br />
<br />
=== Maintainer ===<br />
Maintainer(s) Affa3: Arnaud, stephdl<br><br />
Development of Affa2 for SME Server by the original maintainer has been discontinued.<br />
<br />
=== Version ===<br />
{{ #smeversion: affa}}<br />
{{ #smeversion: smeserver-affa}}<br />
<br />
<br />
=== Description ===<br />
The main purpose of the Affa package is to turn an SME Server into a dedicated backup box in a few minutes. Affa backs up as many SME servers as you like, or any other servers that have sshd running and rsync installed. Once it is configured, Affa runs reliably unattended and sends warning messages in case of any errors. <br />
<br />
All backup archives are full backups, as Affa makes use of the hardlink technique. Therefore a new full backup only needs disk space for the differences plus the filesystem overhead for the hardlinks and directories (which is typically 2-3%).<br />
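The effect can be illustrated outside Affa with plain shell commands (a toy demonstration of the hardlink technique, not Affa's actual code):<br />

```shell
# Two "full backup" directories sharing one inode for an unchanged file:
tmp=$(mktemp -d)
mkdir "$tmp/daily.1" "$tmp/daily.0"
echo "payload" > "$tmp/daily.1/file"
ln "$tmp/daily.1/file" "$tmp/daily.0/file"   # unchanged file: hardlink, no extra data space
stat -c %h "$tmp/daily.0/file"               # link count 2: one inode, two archives
rm -rf "$tmp"
```

Only changed files are stored a second time; everything else in a new full backup is a hardlink like the one above.<br />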
<br />
Affa is based on the rsync program and supports the rsync --compress option. This allows you to run backups over the internet or a VPN. A typical setup is one or more Affa backup servers placed in different locations, which back up the production server(s) over the VPN.<br />
<br />
A special feature is the rise option, which allows you to rise the backup server to become your production server from a backup archive in case of a total loss of your production server. The rise is executed within an extremely short time, even with huge amounts of data. The rise feature uses hardlinks and therefore does not use up additional disk space.<br />
<br />
The rise feature can also be used to upgrade from a lower version of SME to a higher one. This also works from SME 8.1 to 9. For this to work you need 2 separate installations of SME. One is the actual running server and the other is an Affa backup server with a newer version of SME Server, on which you execute the rise command. See more here: [[Moving SME to new Hardware]]<br />
<br />
A simple way to keep a server running with maximum uptime is to duplicate the hardware, e.g. to have 2 separate hardware boxes or virtual servers on separate hardware. One is the actual server and the other is a backup system that, in case of problems, can be turned into the actual server with the rise command in a very short time. This also allows you to upgrade the SME Server software with minimum downtime!<br />
<br />
Affa is a command line tool for system administrators and is intentionally designed without a GUI. Therefore it can be efficiently managed on the console and over slow internet connections.<br />
<br />
'''Note:''' This document also refers to Affa version 2, which is no longer maintained. Information on Affa 3 will be gradually added here once it has been tested more.<br />
<br />
==== Affa features at a glance ====<br />
* Affa is secure: All transfers and connections are made by using the ssh protocol with public/private key authentication<br />
* Periodically runs unattended full backups. Only differences to the previous full backup are transferred over the network<br />
* Using rsync with optional bandwidth limit and compression allows backups over the internet<br />
* Uses hardlink technique, i.e. physical disk space only needed for the differences between two full backups<br />
* Keeps a configurable number of scheduled, daily, weekly, monthly and yearly full backup archives<br />
* The archives are browseable. No need to unpack them first.<br />
* Archives can be mapped to a Samba share.<br />
* Backup jobs are started by the cron daemon<br />
* Interrupted jobs continue with the already transferred data when restarted<br />
* Backs up the default e-smith directories and files when the property SMEServer is set to yes<br />
* Additional directories and files can be included<br />
* Directories and files can be excluded from the backup<br />
* Non-SME server linux systems can be backed up by setting the SMEServer property to no and using an include list<br />
* In ESXi mode, running virtual machines can be backed up. See [[Backup of ESXi Virtual Machines using Affa]]<br />
* Configurable nice level for rsync processes on the backup and source server<br />
* Optional run of custom programs before and after a job run (e.g. running tape backup)<br />
* Checks the disk space left after a job run with warning levels strict, normal or risky<br />
* Extensive checking of failure conditions<br />
* Sends failure messages to a configurable list of email addresses<br />
* Sends a warning message, if the backup server runs out of disk space<br />
* Installs an optional watchdog on the source server in case the backupserver fails (SME Server only)<br />
* Watchdog sends warning, if an expected backup did not run (SME Server only)<br />
* Watchdog sends a daily reminder message, if the error continues unchecked (SME Server only)<br />
* Option to display current status of all jobs showing times of last and next run, size and disk usage<br />
* Status can be mailed on a daily, weekly or monthly schedule<br />
* Option to display all existing archives of a job, showing date, number of files, size and bytes transferred from the source<br />
* Option to send (and revoke) the public key to the source server (SME Server and ESXi only)<br />
* Option to rise the backup server to a production server from a backup (SME Server only)<br />
* The rise feature does not physically move data and therefore is extremely fast and needs (almost) no extra disk space<br />
* The rise option can be run remotely, as the NIC driver configuration of the backup server is preserved<br />
* Compares installed RPMs on source with backup server. Sends warning message, if not in sync<br />
* Undo rise option to restore the backup server<br />
* Configurable via a e-smith style db, with one record for each job and a default record for all jobs<br />
* Logs to /var/log/affa/JOB.log and /var/log/affa/affa.log with optional debug switch for higher verbosity<br />
* Log files are rotated weekly, with 5 logs kept<br />
<br />
<div style="text-align:left" class="mw-collapsible mw-collapsed" data-collapsetext="less &uarr;" data-expandtext="Obsolete Affa 2 instructions &darr;"><br />
<br />
=== Installation or Update of Affa 2 ===<br />
<br />
====Smecontribs for sme8====<br />
yum install --enablerepo=smecontribs smeserver-affa<br />
<br />
The latest test version can be found here :<br />
[http://www.reetspetit.com/smeserver/5/noarch/smeserver-affa-2.0.0-rc6.el5.sme.noarch.rpm smeserver-affa-2.0.0-rc6.el5.sme.noarch.rpm] Note this is NOT signed.<br />
<br />
Please post bug reports in the [http://bugs.contribs.org bug tracker]<br />
<br />
When you have installed Affa for the first time run the following command to initialize the Affa database<br />
<br />
affa --make-cronjobs<br />
and log out and log back in to the console for the bash auto-completion (TAB key) to take effect.<br />
====Obsolete way to install====<br />
{{Note box|SME7 is now EOL and has been moved to the obsolete tree}}<br />
SME7 version: '''smeserver-affa-2.0.0-rc4.noarch.rpm'''<br />
wget <nowiki>http://mirror.contribs.org/smeserver/contribs/michaelw/sme7/Affa2/smeserver-affa-2.0.0-rc4.noarch.rpm</nowiki><br />
wget <nowiki>http://mirror.contribs.org/smeserver/contribs/michaelw/sme7/Affa2/perl-Compress-Bzip2-2.09-1.2.el4.rf.i386.rpm</nowiki><br />
/usr/bin/yum --enablerepo=smecontribs localinstall \<br />
smeserver-affa-2.0.0-rc4.noarch.rpm \<br />
perl-Compress-Bzip2-2.09-1.2.el4.rf.i386.rpm<br />
<br />
SME8 version: SME8 is based on CentOS5, therefore use the perl-Compress-Bzip2 el5 rpm: <br />
wget <nowiki>http://mirror.contribs.org/releases/8/smecontribs/i386/repoview/smeserver-affa.html</nowiki><br />
For SME8 32bit:<br />
wget <nowiki>http://apt.sw.be/redhat/el5/en/i386/rpmforge/RPMS/perl-Compress-Bzip2-2.09-1.2.el5.rf.i386.rpm</nowiki><br />
/usr/bin/yum --enablerepo=smecontribs localinstall \<br />
smeserver-affa-2.0.0-rc4.noarch.rpm \<br />
perl-Compress-Bzip2-2.09-1.2.el5.rf.i386.rpm<br />
For SME8 64bit:<br />
wget http://apt.sw.be/redhat/el5/en/x86_64/rpmforge/RPMS/perl-Compress-Bzip2-2.09-1.2.el5.rf.x86_64.rpm<br />
/usr/bin/yum --enablerepo=smecontribs localinstall \<br />
smeserver-affa-2.0.0-rc4.noarch.rpm \<br />
perl-Compress-Bzip2-2.09-1.2.el5.rf.x86_64.rpm<br />
<br />
When you have installed Affa for the first time run the following command to initialize the Affa database<br />
affa --make-cronjobs<br />
and log out and log back in to the console for the bash auto-completion (TAB key) to take effect.<br />
<br />
==== Quick start example for Affa 2 ====<br />
You have a SME 7 production server with hostname 'prodbox‘ and IP 10.200.48.1.<br><br />
Set up a second SME 7 box as your backupserver with hostname 'affabox‘ and IP 10.200.48.2. <br />
<br />
<ol><li>log into the 'affabox' and install the packages as described above.<br />
</li><li>copy the config helper script sample<br />
cp /usr/lib/affa/jobconfig-sample.pl /root/prodbox-job.pl<br />
</li><li>edit /root/prodbox-job.pl and set<br />
my $jobname='prodbox';<br />
and<br />
'remoteHostName‘=>'10.200.48.1',<br />
optionally, if the remote server port is configured to e.g. 2222 then set<br />
'sshPort'=>2222,<br />
</li><li>save and run the 'prodbox-job.pl' file. This will write the configuration (this makes the database entries and sets up the cronjobs)<br />
/root/prodbox-job.pl<br />
</li><li>generate the DSA keys and send the public key to the 'prodbox'<br />
{{Note box|When initially doing this step, you will need to temporarily enable "Allow secure shell access using standard passwords" on the production server}}<br />
affa --send-key prodbox<br />
</li><li>run the job manually<br />
<br />
affa --run prodbox<br />
</li></ol><br />
<br />
==== Configuration ====<br />
The configuration is stored in an e-smith style database. Use the db command to configure Affa.<br />
The jobname is the record key with the type 'job'.<br>To setup a new job with the name 'prodbox' enter:<br />
<br />
db affa set prodbox job<br />
<br />
then set the properties<br />
<br />
db affa setprop prodbox remoteHostName 192.168.1.1<br />
db affa setprop prodbox TimeSchedule '0030,0730,1130,1330,1730,2030'<br />
db affa setprop prodbox Description 'My Production Server'<br />
db affa setprop prodbox status enabled<br />
and so on...<br />
<br />
Alternatively you can use a script as described above in the 'Quick start' chapter.<br />
<br />
To verify your work, type:<br />
<br />
db affa show prodbox<br />
<br />
Finally set up the cronjobs:<br />
affa --make-cronjobs<br />
<br />
</div><br />
<br />
<br />
=== Installation of Affa 3 ===<br />
The following instructions assume a fresh Affa 3 installation on a fresh SME server which serves as a dedicated backup server ('affabox'). No jobs are migrated from Affa 2. If you are using an existing SME server, just skip the instructions on how to set up a fresh SME box.<br />
<br />
Set up a dedicated machine with SME 8.1 or SME 9.0 from CDROM. Use the following example settings:<br />
<ol><li>Domain name: athome.xx (use your existing domain name)</li><br />
<li>Host name: affabox (must not match to existing host)</li><br />
<li>IP address: 192.168.0.10 (must not match to existing IP address)</li><br />
<li>Subnet: 255.255.255.0</li><br />
<li>role: Server only</li><br />
<li>Gateway: 192.168.0.1 (use your existing gateway)</li><br />
<li>DHCP: DEactivate</li><br />
<li>DNS server: 192.168.0.1 (use your existing DNS server)</li><br />
<li>activate changes: yes</li><br />
<li>The fresh server will then restart with the above settings.</li><br />
</ol><br />
<br />
To update your server, login as user root on the local console. Use the 'top' command to display running processes and wait until 'yum' has stopped running.<br />
yum clean all<br />
yum update<br />
The fresh server will be updated.<br />
signal-event post-upgrade<br />
signal-event reboot<br />
The server will reboot. Afterwards it is ready for installation of Affa 3.<br />
<br />
Login as root on local or remote SSH console of ''affabox''.<br />
<br />
= For SME8 =<br />
<br />
To install issue the following command:<br />
yum install smeserver-affa --enablerepo=smecontribs<br />
= For SME9 =<br />
<br />
yum install smeserver-affa --enablerepo=smecontribs<br />
<headertabs/><br />
<br />
The server again needs to be updated.<br />
signal-event post-upgrade<br />
signal-event reboot<br />
The server will reboot. Afterwards Affa 3 is ready for use.<br />
<br />
Now manually create directory for archive files:<br />
mkdir /var/affa<br />
<br />
{{Warning box| Note that on SME9, a new SSH [https://wiki.contribs.org/AutoBlock AutoBlock] feature interferes with the Affa requirement for regular SSH logins! You might have to disable this feature to allow Affa to operate correctly. }}<br />
<br />
=== Creation of backup jobs ===<br />
Job configurations of Affa 3 are no longer stored in an e-smith style database. They are stored in configuration files located in ''/etc/affa''. Create your configuration files in this directory, e.g.<br />
touch /etc/affa/backup-smeserver.conf<br />
where ''backup-smeserver'' is your jobname, and add content as described below.<br />
<br />
==== Quick start example ====<br />
You have a SME production server with hostname 'smeserver‘ and IP 192.168.0.2.<br><br />
You have a second SME box as your backup server with hostname 'affabox‘ and IP 192.168.0.10. <br />
<br />
Login to your 'affabox' as root and edit ''/etc/affa/backup-smeserver.conf''. Using e.g. the editor ''nano'', create the following example job configuration file for the jobname ''backup-smeserver'':<br />
[backup-smeserver]<br />
remoteHostName=192.168.0.2<br />
SMEServer=yes<br />
Watchdog=yes<br />
RPMCheck=yes<br />
ConnectionCheckTimeout=120<br />
Debug=no<br />
Description=Backup of 192.168.0.2 smeserver.athome.xx<br />
DiskSpaceWarn=strict<br />
RootDir=/var/affa<br />
TimeSchedule=0630<br />
localNice=15<br />
remoteNice=15<br />
rsync--inplace=yes<br />
rsyncCompress=no<br />
rsyncTimeout=900<br />
scheduledKeep=1<br />
dailyKeep=7<br />
weeklyKeep=4<br />
monthlyKeep=12<br />
yearlyKeep=1<br />
status=enabled<br />
<br />
If you have a Letsencrypt certificate on the production server, then you should also include<br />
Include=/etc/dehydrated<br />
<br />
Then save your job configuration file.<br />
<br />
Now check that your configuration is OK:<br />
affa --configcheck<br />
This should report no errors. Now create and send the key files to your production smeserver:<br />
affa --send-key backup-smeserver<br />
The following output should appear on the console:<br />
 Job backup-smeserver: Generating DSA keys...<br />
Successfully created DSA key pair.<br />
root@192.168.0.2's password: <Enter password of 192.168.0.2/smeserver and hit ENTER><br />
Public key sent to 192.168.0.2<br />
Now run your job manually, both for test purposes and to confirm the remote host key:<br />
affa --run backup-smeserver<br />
The following output should appear on the console:<br />
The authenticity of host 'backup-smeserver (192.168.0.2)' can't be established.<br />
RSA key fingerprint is 3b..........65.<br />
 Are you sure you want to continue connecting (yes/no)? <yes><br />
Now your backup job should run for the first time. Depending on the volume of the files, this takes quite a while.<br />
<br />
Once the job is done, check that the archive is available:<br />
 affa --list-archives<br />
<br />
or run the job a second time:<br />
affa --run backup-smeserver<br />
Note that you won't be asked for the password again. Note as well that this second run of the job ''backup-smeserver'' should run considerably faster, because not all files are copied again: only the difference from the previous run is backed up; the rest is stored as hardlinks to the existing file copies.<br />
<br />
Check that the second archive is available too:<br />
affa --list-archives<br />
<br />
From here you are able to work as with Affa 2. Modify your config file as required and described below. Automate the backup using the command ''affa --make-cronjobs'', see below.<br />
<br />
==== Job configuration properties ====<br />
'''Note 1:''' The default values shown in this table are the Affa program defaults and not to be confused with the preset values in the job configuration files, e.g. backup-smeserver.conf.<br />
<br />
'''Note 2:''' The complete documentation of the Affa3 settings is available in [http://affa.sourceforge.net/AffaPdfMan.pdf the Affa PDF manual]. It should be taken into consideration!<br />
<br />
'''Note 3:''' Affa 3 for SME is a fork of [http://affa.sourceforge.net/ Affa 3 for CentOS]. It adds a few options back again which have been removed. Generally the configuration properties as described [http://wiki.nikoforge.org/Affa_Configuration/ here] do apply. Arnaud added the following functions for SME:<br />
<ul><br />
<li>The parameter “SMEServer”</li><br />
<li>The parameter and function “Watchdog”</li><br />
<li>The parameter and function “RPMCheck”</li><br />
<li>The functions “--rise” and “--undo-rise”</li><br />
</ul><br />
Consequently the list below should reproduce the list for the unforked Affa 3 version, plus the properties above. For details refer to [https://www.guedel.eu/index.php/informatique/sme-server-8/affa-v3 Arnaud's website].<br />
{| border="1" cellpadding="3" cellspacing=0<br />
|-<br />
| '''Property''' || '''Value''' || '''Multivalue''' || '''Default''' || '''Description''' <br />
|-<br />
| remoteHostName <br />
| FQHN or IP<br />
| no<br />
| <none><br />
| FQHN or IP of the source host (mandatory)<br />
|-<br />
| remoteUser<br />
| account<br />
| no<br />
| root<br />
| The user account to be used with all ssh logins. May be case sensitive, e.g. Administrator account on Windows<br />
|-<br />
| Description<br />
| text string<br />
| no<br />
| <none><br />
| Any text that describes the job<br />
|-<br />
| TimeSchedule <br />
| HHMM<br />
| yes<br />
| 2230<br />
| The time the job will run. Use exactly 4 digits, no colon, no point. '''Important:''' Using the proper format HHMM is essential. Badly formatted TimeSchedule will cause strange Perl errors. Multiple runs can be achieved by repeating the line for each run time.<br />
|-<br />
| status <br />
| enabled ''or'' disabled<br />
| no<br />
| enabled<br />
| When set to disabled, no cron entries will be made. You can still run a job manually. <br />
|-<br />
| Include<br />
| full path<br />
| yes<br />
| <none><br />
| File(s) or directory(s) to be included in the backup. <br />
|-<br />
| Exclude<br />
| full path<br />
| yes<br />
| <none><br />
| File(s) or directory(s) to be excluded from the backup. <br />
|-<br />
| RootDir<br />
| full path<br />
| no<br />
| /var/affa<br />
| where to store the backup archives. Do not use /home/e-smith or /root as these are included in the backup and therefore the rise option will not work! Recommended: /var/affa<br />
|-<br />
| scheduledKeep<br />
| integer >= 1<br />
| no<br />
| 1<br />
| how many of the scheduled backups to be kept<br />
|-<br />
| dailyKeep<br />
| integer >= 0<br />
| no<br />
| 7<br />
| how many of the daily backups to be kept<br />
|-<br />
| weeklyKeep<br />
| integer >= 0<br />
| no<br />
| 4<br />
| how many of the weekly backups to be kept<br />
|-<br />
| monthlyKeep<br />
| integer >= 0<br />
| no<br />
| 12<br />
| how many of the monthly backups to be kept<br />
|-<br />
| yearlyKeep<br />
| integer >= 0<br />
| no<br />
| 2<br />
| how many of the yearly backups to be kept<br />
|-<br />
| EmailAddress<br />
| name@domain.com<br />
| yes<br />
| root<br />
| comma separated list of mail addresses, where the messages should be sent to<br>'''Note:''' By default Affa only sends messages on errors, never on success (see property chattyOnSuccess).<br />
|-<br />
| RetryAttempts<br />
| integer >= 0<br />
| no<br />
| 4<br />
| When set to a value>0, Affa re-runs a failed job RetryAttempts times with a delay of RetryAfter seconds.<br />
|-<br />
| RetryAfter<br />
| seconds >= 0<br />
| no<br />
| 900<br />
| when set to a value>0, wait RetryAfter seconds before re-running the job after an error. Only applicable with RetryAttempts>0<br />
|-<br />
| RetryNotification<br />
| yes ''or'' no<br />
| no<br />
| no<br />
| when set to no, Affa does not send an error message when a job has failed and RetryAttempts is configured. An error message is only sent when the last attempt has failed.<br />
|-<br />
| NRPEtrigger<br />
| hours<br />
| no<br />
| 24<br />
| NRPE reports a job as critical when the last successful run is older than NRPEtrigger hours. To exclude the job from monitoring, set this to a value < 0.<br />
|-<br />
| SambaShare<br />
| yes ''or'' no<br />
| no<br />
| no<br />
| Access to the job archives via CIFS protocol.<br />
|-<br />
| SambaValidUser<br />
| local account<br />
| yes<br />
| affa<br />
| User who has permission to access the job archives via the CIFS protocol.<br />
|-<br />
| preJobCommand<br> preJobCommandRemote<br />
| path relative to /etc/affa/scripts/<br />
| yes<br />
| <none><br />
| Scripts to be executed before a job run. The job name and type (scheduled, daily etc.) are passed as arguments to the program. The preJobCommandRemote scripts are copied to the remote server and then executed there. The scripts are executed in alphabetical order. Use a numerical prefix if you need a specific order, e.g. 01scriptB, 02remotescriptA, 03scriptA. Use the included prescript-sample.pl and prescriptRemote-sample.pl as a starting point for your own scripts.<br>'''Note:''' If you use the parameter “SMEServer=yes”, you may delete the “Include” entries for the SME default directories in the jobconfig.ini: they will be backed up automatically.<br />
|-<br />
| postJobCommand<br> postJobCommandRemote<br />
| path relative to /etc/affa/scripts/<br />
| yes<br />
| <none><br />
| Scripts to be executed after a job run. The job name, the type (scheduled, daily etc.) and the exit code of the job run are passed as arguments to the program. The postJobCommandRemote scripts are copied to the remote server and then executed there. The scripts are executed in alphabetical order. Use a numerical prefix if you need a specific order, e.g. 01scriptB, 02remotescriptA, 03scriptA. Use the included postscript-sample.pl and postscriptRemote-sample.pl as a starting point for your own scripts.<br>'''Note:''' If you use the parameter “SMEServer=yes”, you may delete the “Include” entries for the SME default directories in the jobconfig.ini: they will be backed up automatically.<br />
|-<br />
| dedup<br />
| yes ''or'' no<br />
| no<br />
| no<br />
| The purpose of deduplication is to remove duplicate files to save backup space. When set to 'yes', file deduplication is run after the synchronization has completed. It looks for files that have identical content, user, group and permissions and replaces duplicates by hardlinks. Deduplication scans the just completed archive and the previous one, which is usually<br />
scheduled.0 and daily.0 or scheduled.0 and scheduled.1. Consider this scenario: a user has renamed directories or files. Rsync sees those as new and copies them. Deduplication finds the identical copies in the previous archive and replaces them by hardlinks. To use deduplication the Freedup program needs to be installed. Affa actually runs freedup -upg scheduled.0 <previous_archive>.<br />
|-<br />
| dedupKill<br />
| yes ''or'' no<br />
| no<br />
| no<br />
| When set to 'no' the job ignores affa --kill or affa --killall while deduplication is running. This is useful in the context of killAt, which is typically used to stop bandwidth utilisation.<br />
|-<br />
| sshPort<br />
| service port <br />
| no<br />
| 22<br />
| When sshd on the remote host listens on a non-standard port, set the port here.<br />
|-<br />
| ConnectionCheckTimeout<br />
| seconds<br />
| no<br />
| 120<br />
| Affa checks the ssh connection before the rsync process is started and exits with an error after the configured time if the host did not respond.<br />
|- <br />
| BandwidthLimit<br />
| integer>=0 kilobytes per second<br />
| no<br />
| 0<br />
| Limits the data transfer rate. A value of zero specifies no limit.<br />
|- <br />
| rsyncTimeout<br />
| seconds<br />
| no<br />
| 900<br />
| Rsync exits after the configured time if no data was transferred. This avoids infinitely hanging in case of a network error.<br />
|- <br />
| rsyncCompress<br />
| yes ''or'' no <br />
| no<br />
| yes<br />
| Compress the transferred data. May be useful with slow internet connections. Increases CPU load on remote and backup host.<br />
|- <br />
| rsync--inplace<br />
| yes ''or'' no<br />
| no<br />
| yes <br />
| Set to no if the rsync version on the remote host does not support this option.<br />
|-<br />
| rsync--modify-window<br />
| integer >= 0 <br />
| no<br />
| 0<br />
| When comparing two timestamps, rsync treats the timestamps as being equal if they differ by no more than the modify-window value. This is normally 0 for an exact match. A value > 0 is useful if you can't get the clocks of the remote host and the Affa server in sync.<br />
|-<br />
| rsyncOptions<br />
| string<br />
| no<br />
| <none><br />
| Additional option string to be passed to rsync<br />
|-<br />
| localNice <br />
| -19...+19<br />
| no<br />
| 0<br />
| run rsync local process niced. <br />
|-<br />
| remoteNice<br />
| -19...+19<br />
| no<br />
| 0<br />
| run rsync process on source niced. <br />
|-<br />
| killAt<br />
| HHMM<br />
| no<br />
| <none><br />
| The time at which a job will be killed if it was still running. You can use it for example to kill jobs that are running over the internet early in the morning so that your users have the full bandwidth available during office hours. <br />
|-<br />
| resumeKilledAt<br />
| HHMM<br />
| no<br />
| <none><br />
| The time at which a killed job will be resumed. This allows you to start earlier in the evening than the scheduled time.<br />
|-<br />
| chattyOnSuccess<br />
| integer >= 0<br />
| no<br />
| 0 <br />
| When set to a value > 0, Affa sends a message on a successfully completed job run and decrements the chattyOnSuccess value. When the value reaches zero, Affa falls back to the default and only sends messages on errors.<br />
|-<br />
| AutomountDevice<br>AutomountPoint<br />
| full path<br />
| no<br />
| <none><br />
| Device and mount point of the backup device (e.g. a USB disk). The device is automounted before a job starts and unmounted after job completion. With both properties empty, no automount is done.<br />
|-<br />
| AutomountOptions<br />
| string<br />
| no<br />
| <none><br />
| An option string passed to the mount command.<br />
|-<br />
| AutoUnmount <br />
| yes ''or'' no<br />
| no<br />
| yes<br />
| When set to 'no' the automounted device stays mounted after the Affa run.<br />
|-<br />
| Debug <br />
| yes ''or'' no<br />
| no<br />
| no<br />
| Set to yes to increase log verbosity.<br />
|-<br />
| remoteRsyncBinary <br />
| full path<br />
| no<br />
| /usr/bin/rsync<br />
| If the rsync program on the remote server is located in a non-standard location, set it here.<br />
|-<br />
| remoteNiceBinary <br />
| full path<br />
| no<br />
| /bin/nice<br />
| If the nice program on the remote server is located in a non-standard location, set it here.<br />
|-<br />
| localRsyncBinary <br />
| full path<br />
| no<br />
| /usr/bin/rsync<br />
| If the local rsync program is located in a non-standard location, set it here.<br />
|-<br />
| localNiceBinary <br />
| full path<br />
| no<br />
| /bin/nice<br />
| If the local nice program is located in a non-standard location, set it here.<br />
|-<br />
| RemoteAuthorizedKeysFile <br />
| path relative to remote user's home or full path<br />
| no<br />
| .ssh/authorized_keys2<br />
| If the remote host stores the authorized keys file in a non-standard location, set it here.<br />
|-<br />
| rsyncdMode<br />
| yes ''or'' no<br />
| no<br />
| no<br />
| Set to yes to connect to the rsync daemon on the remote host (instead of running rsync over ssh).<br />
|-<br />
| rsyncdModule<br />
| string<br />
| no<br />
| AFFA<br />
| The rsyncd module name (only applicable with rsyncdMode=yes).<br />
|-<br />
| rsyncdUser<br />
| string<br />
| no<br />
| affa<br />
| The username for authentication to the rsync daemon (only applicable with rsyncdMode=yes).<br />
|-<br />
| rsyncdPassword<br />
| string<br />
| no<br />
| <none><br />
| The password for authentication to the rsync daemon (only applicable with rsyncdMode=yes).<br />
|-<br />
| globalStatus<br />
| enabled ''or'' disabled ''or'' jobs<br />
| no<br />
| jobs<br />
| Set to enabled or disabled to override the status settings in all job sections. When set to jobs, the status settings in the job sections are effective.<br>'''Note:''' This property is allowed only in the [GlobalAffaConfig] section. <br />
|-<br />
| sendStatus<br />
| daily ''or'' weekly ''or'' monthly ''or'' never<br />
| no<br />
| weekly<br />
| Defines how frequently the status will be sent.<br>'''Note:''' This property is allowed only in the [GlobalAffaConfig] section. <br />
|-<br />
| SMEServer<br />
| yes ''or'' no<br />
| no<br />
| no<br />
| When set to yes, the default e-smith directories are automatically included and the property RPMCheck=yes can be used.<br>'''Note:''' this property is not implemented in Affa 3 for CentOS. It has been specifically added to the fork for SME.<br />
|-<br />
| RPMCheck<br />
| yes ''or'' no<br />
| no<br />
| no<br />
| Only applicable to jobs that back up an SME server. Compares the installed packages of the source host with those on this Affa backup host and sends a message with a diff list if they are not in sync. This check is useful if you want the option to rise the backup server to a production server from a backup.<br>'''Note:''' this property is not implemented in Affa 3 for CentOS. It has been specifically added to the fork for SME.<br />
|-<br />
| Watchdog<br />
| yes ''or'' no<br />
| no<br />
| yes<br />
| Only applicable to jobs that back up an SME server. When a job is started, Affa installs a watchdog script on the source in /etc/cron.d/, which sends a warning message if the next scheduled job (taken from the TimeSchedule property + 10 minutes) did not run. This guarantees that you will be notified even in case of an Affa server outage. The watchdog script sends a daily reminder message if the error continues. The next job run replaces the watchdog script with a new trigger time.<br>'''Note:''' this property is not implemented in Affa 3 for CentOS. It has been specifically added to the fork for SME.<br />
|}<br />
<br />
==== Default configuration properties ====<br />
For all 'name=value' properties, defaults can be set in the [GlobalAffaConfig] section. These defaults are used in every job configuration where the corresponding property is omitted, which is useful when you set up many similar jobs. <br />
<br />
Example: You want to set the property 'localNice' to 19 for all jobs. Then add the following section in one of your configuration files:<br />
[GlobalAffaConfig]<br />
localNice=19<br />
Consequently you don't need to set this property for your individual jobs. Properties set in a job section override the defaults.<br />
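The override behaviour can be sketched with a small shell helper. The parsing below is our own illustration of the precedence rule, not Affa's actual code:<br />

```shell
#!/bin/sh
# Sketch: resolve an Affa property for a job, letting the job section
# override [GlobalAffaConfig] -- mirrors the precedence described above.
# The conf layout is assumed; Affa's real parser may differ.
resolve_prop() {  # usage: resolve_prop CONFFILE SECTION PROPERTY
    conf=$1 section=$2 prop=$3
    # Take the value from the requested section if present...
    val=$(awk -F= -v sec="[$section]" -v p="$prop" '
        $0 == sec         { in_sec = 1; next }
        /^\[/             { in_sec = 0 }
        in_sec && $1 == p { print $2; exit }' "$conf")
    # ...otherwise fall back to the [GlobalAffaConfig] default.
    [ -n "$val" ] || val=$(awk -F= -v p="$prop" '
        $0 == "[GlobalAffaConfig]" { in_sec = 1; next }
        /^\[/                      { in_sec = 0 }
        in_sec && $1 == p          { print $2; exit }' "$conf")
    echo "$val"
}

cat > /tmp/affa-demo.conf <<'EOF'
[GlobalAffaConfig]
localNice=19

[prodbox]
remoteHostName=localhost
localNice=5
EOF

resolve_prop /tmp/affa-demo.conf prodbox localNice      # job overrides: 5
resolve_prop /tmp/affa-demo.conf prodbox remoteHostName # no global default: localhost
```

A job without its own localNice (e.g. a section that only sets remoteHostName) would resolve to the global value 19.<br />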
<br />
==== Properties for global Settings ====<br />
The following special properties are only applicable to the [GlobalAffaConfig] section:<br />
<ul><br />
<li>sendStatus</li><br />
<li>globalStatus</li><br />
</ul><br />
<br />
All jobs can be disabled for execution through 'cronjobs' by setting 'globalStatus' to 'disabled'.<br />
[GlobalAffaConfig]<br />
globalStatus=disabled<br />
<br />
To re-enable runs, either delete the above configuration line or set:<br />
globalStatus=jobs<br />
which is the default value.<br />
<br />
You can also set the value to<br />
globalStatus=enabled<br />
which enforces the job execution through 'cronjobs' and overrides the values specified in the individual job sections.<br />
<br />
=== Usage and command line options ===<br />
{{Note box|Options can be abbreviated to uniqueness, e.g. --mak is equal to --make-cronjobs}}<br />
'''affa --run JOB'''<br />
Starts a job run. Usually done by the cronjob. <br />
<br />
<br />
'''affa --make-cronjobs'''<br />
Configures the cronjobs as scheduled in the job sections. By default this command is executed by 'cronjobs' every 15 minutes, so you don't need to run it manually; you can also just wait at most 15 minutes for your updated job configurations to become effective.<br />
<br />
'''affa --configcheck'''<br />
Checks the syntax and values in all configuration files found in /etc/affa/. Run this command after modifying the configuration. Lower/Upper case errors in property names are corrected automatically.<br />
<br />
'''affa --make-cronjobs'''<br />
Configures the cronjobs. Run this command to make changes to time-related properties (i.e. TimeSchedule, killAt and resumeKilledAt) effective immediately. Every 15 minutes Affa checks whether a configuration file is newer than the cronjob file and runs the make-cronjobs command if this is the case.<br />
<br />
'''affa --send-key [JOB JOB ...]'''<br />
This first generates the DSA key on the Affa server, if not already done. Then the public key is sent to the host 'remoteHostName' as defined in the section of each job JOB, and a job-specific ssh known-hosts entry is generated.<br />
{{Note box|When initially doing this step, you will need to temporarily enable "Allow secure shell access using standard passwords" on the source server.}}<br />
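What --send-key effectively does on the remote side can be sketched as an idempotent append to the authorized keys file. The path and key below are illustrative, not taken from Affa's source:<br />

```shell
#!/bin/sh
# Sketch of the remote-side effect of --send-key: append the Affa
# public key to the authorized keys file only if it is not already
# there, so repeated runs don't duplicate it. Paths/key are examples.
KEYFILE=/tmp/demo_authorized_keys2
PUBKEY='ssh-dss AAAAB3demo affa@backuphost'

install_key() {
    touch "$KEYFILE"
    # -F fixed string, -x whole line, -q quiet
    grep -qxF "$PUBKEY" "$KEYFILE" || echo "$PUBKEY" >> "$KEYFILE"
}

install_key   # first run appends the key
install_key   # second run is a no-op
```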
<br />
'''affa --check-connections [JOB JOB ...]'''<br />
Checks the ssh login and, if applicable, rsyncd authentication for all jobs given as arguments. Without arguments, all jobs are checked.<br />
<br />
'''affa --full-restore [--preserve-newer=no] [--delete=yes] JOB [ARCHIVE]'''<br />
Does a full restore on the remote server of all backed-up files and directories from the backup ARCHIVE. If ARCHIVE is not given, the archive 'scheduled.0' is used as the default. The full restore reconstructs the server as it was at the time of the backup. After the restore, the source host reboots.<br />
<br />
With option --preserve-newer=yes, files on the remote server with a modification time newer than in the backup are not overwritten.<br />
<br />
With option --delete=yes, all files on the remote server which are not in the backup are deleted.<br />
<br />
If the property SMEServer=yes is set, signal-event pre-restore and signal-event post-upgrade are triggered automatically.<br />
<br />
'''affa --list-archives JOB'''<br />
Displays a table of all present archives of job JOB with date, number of files, size and bytes received. While column buTime shows the actual rsync time, column ddTime shows the length of the subsequent deduplication run. ddYld is the deduplication yield gained, in bytes.<br />
<br />
<small><pre><br />
Affa version 3.1.0-0 on affa-2.mydomain.com<br />
+------------------------------------------------------------------------------+<br />
| Job: fshare-ak |<br />
| Description: Fileserver AK |<br />
| Directory: /var/affa/fshare-ak/ |<br />
| Hostname: 10.204.104.4 |<br />
| Email: sysadmin@mydomain.com |<br />
+-----+----------------------+--------+--------+-------+-------+-------+-------+<br />
| Run | Completion date | buTime | ddTime | ddYld | Files | Size | Recvd |<br />
+-----+----------------------+--------+--------+-------+-------+-------+-------+<br />
| Y 0 | Wed 2010-06-23 20:26 | 11m53s | - | - | 412k | 143G | 470M |<br />
+-----+----------------------+--------+--------+-------+-------+-------+-------+<br />
| M11 | Sun 2010-08-29 20:22 | 7m50s | - | - | 417k | 153G | 14M |<br />
| M10 | Sun 2010-09-26 20:23 | 8m57s | - | - | 430k | 156G | 14M |<br />
| M 9 | Sun 2010-10-31 20:25 | 10m05s | - | - | 448k | 161G | 15M |<br />
| M 8 | Sun 2010-11-28 20:31 | 16m42s | - | - | 463k | 167G | 17M |<br />
| M 7 | Sun 2011-01-02 20:48 | 33m07s | - | - | 486k | 173G | 1.1G |<br />
| M 6 | Sun 2011-01-30 20:33 | 18m54s | - | - | 493k | 176G | 24M |<br />
| M 5 | Sun 2011-02-27 20:28 | 13m31s | - | - | 490k | 176G | 19M |<br />
| M 4 | Sun 2011-03-27 20:28 | 13m08s | - | - | 491k | 175G | 17M |<br />
| M 3 | Sun 2011-05-01 20:30 | 15m41s | - | - | 493k | 179G | 18M |<br />
| M 2 | Sun 2011-05-29 20:28 | 13m40s | - | - | 494k | 182G | 31M |<br />
| M 1 | Sun 2011-06-19 20:26 | 11m20s | - | - | 493k | 183G | 17M |<br />
| M 0 | Sun 2011-07-03 20:28 | 13m46s | - | - | 496k | 183G | 18M |<br />
+-----+----------------------+--------+--------+-------+-------+-------+-------+<br />
| W 3 | Sun 2011-07-10 20:28 | 13m38s | - | - | 479k | 180G | 16M |<br />
| W 2 | Mon 2011-07-11 20:35 | 20m27s | - | - | 479k | 180G | 301M |<br />
| W 1 | Fri 2011-07-15 20:30 | 15m53s | - | - | 480k | 180G | 62M |<br />
| W 0 | Sun 2011-07-24 19:39 | 9m25s | - | - | 482k | 181G | 16M |<br />
+-----+----------------------+--------+--------+-------+-------+-------+-------+<br />
| D 6 | Thu 2011-07-28 19:42 | 12m22s | - | - | 483k | 182G | 176M |<br />
| D 5 | Fri 2011-07-29 23:29 | 11m10s | 5h33m | 45G | 483k | 182G | 16M |<br />
| D 4 | Sat 2011-07-30 19:53 | 23m26s | 2h30m | 8.3G | 483k | 182G | 17M |<br />
| D 3 | Sun 2011-07-31 20:07 | 37m31s | 4m47s | 8.3G | 483k | 182G | 17M |<br />
| D 2 | Mon 2011-08-01 20:44 | 1h14m | 7h50m | 8.5G | 484k | 182G | 630M |<br />
| D 1 | Tue 2011-08-02 20:02 | 32m28s | 12h20m | 8.3G | 484k | 182G | 74M |<br />
| D 0 | Wed 2011-08-03 19:58 | 28m46s | 11h01m | 8.5G | 484k | 182G | 214M |<br />
+-----+----------------------+--------+--------+-------+-------+-------+-------+<br />
| S 0 | Thu 2011-08-04 20:00 | 30m28s | 11h52m | 8.5G | 484k | 182G | 203M |<br />
+-----+----------------------+--------+--------+-------+-------+-------+-------+<br />
</pre></small><br />
<br />
'''affa --list-archives [--csv] JOB'''<br />
With --csv, the output is in a machine-readable semicolon-separated format.<br />
<small><pre><br />
Archive:Count;Date;Files;Size;RootDirFilesystemAvail;RootDirFilesystemUsed;valid;TotalBytesReceived;ExecutionTime;DedupTotalFiles;DedupReplacedFiles;DedupSavedBytes;DedupExectime;DedupDate<br />
monthly;00000;201509270631;393237;37252509103;166889260;47876936;yes;1327370;1160638358;99;;;;;<br />
weekly;00003;201510040531;390273;37042612135;164566692;50199504;yes;1117726;45430532;59;;;;;<br />
weekly;00002;201510110531;395553;37299589800;162612204;52153992;yes;1117092;92369237;64;;;;;<br />
weekly;00001;201510180531;403831;37707599172;161855844;52910352;yes;1317083;148317764;64;;;;;<br />
weekly;00000;201510250531;406509;38146200127;161041156;53725040;yes;1104759;64083948;63;;;;;<br />
daily;00006;201510310531;415663;39248862982;162979176;51787020;yes;1958110;198060283;66;;;;;<br />
daily;00005;201511010531;411014;38563389171;163061136;51705060;yes;1156506;48903675;75;;;;;<br />
daily;00004;201511020531;408910;38612857040;162359124;52407072;yes;1007434;210812387;75;;;;;<br />
daily;00003;201511030531;410615;37815861577;162026060;52740136;yes;1524069;162697515;67;;;;;<br />
daily;00002;201511040531;414784;38131734213;161990024;52776172;yes;1571657;134250735;63;;;;;<br />
daily;00001;201511050531;420780;38433895988;161906580;52859616;yes;2021507;149388808;103;;;;;<br />
daily;00000;201511060531;425920;38677614350;161778812;52987384;yes;1987971;166510621;71;;;;;<br />
scheduled;00000;201511070531;425663;38815712018;161612600;53153596;yes;2003540;139603231;68;;;;;<br />
</pre></small><br />
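The semicolon-separated output lends itself to post-processing, for example with a small awk sketch. The field positions used here ($1 cycle, $2 index, $4 files, $5 size in bytes) are assumptions read off the sample above:<br />

```shell
#!/bin/sh
# Sketch: summarize `affa --list-archives --csv JOB` output with awk.
# Field positions are taken from the sample listing and assumed stable.
affa_csv_summary() {
    awk -F';' '
        NR == 1 { next }                  # skip the header line
        {
            n[$1]++
            if ($5 > max) { max = $5; biggest = $1 "." ($2 + 0) }
        }
        END {
            for (c in n) printf "%s archives: %d\n", c, n[c]
            printf "largest: %s (%s bytes)\n", biggest, max
        }'
}

# Feed it a shortened copy of the sample listing above:
affa_csv_summary <<'EOF'
Archive:Count;Date;Files;Size
monthly;00000;201509270631;393237;37252509103
weekly;00001;201510180531;403831;37707599172
weekly;00000;201510250531;406509;38146200127
daily;00000;201511060531;425920;38677614350
EOF
```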
<br />
'''affa --status [--csv]'''<br />
Displays a table of all configured jobs with enable status, time of last and next run, size of the most recent archive, execution time of the last run and the number of scheduled (S), daily (D), weekly (W), monthly (M) and yearly (Y) archives. If 'Last' shows 'failed', the job did not run in the last 24 h. For disabled jobs 'Last' always shows 'failed' after 24 h; to see the date and time of the last run of those jobs use the --list-archives option. Column 'Next' shows the time when the next run will be started, if 'Enabled' is 'yes'. Column 'ddYld' shows the deduplication yield in bytes.<br />
<br />
<small><pre><br />
Affa version 3.1.0-0 on affa-2.mydomain.com<br />
+--------------+-----+-------+--------+-------+-------+-------+----------------+<br />
| Job | ENA | Last | Time | Next | Size | ddYld | N of S,D,W,M,Y |<br />
+--------------+-----+-------+--------+-------+-------+-------+----------------+<br />
| ads-ak | yes | 20:20 | 0m29s | 20:20 | 28M | - | 1, 7, 2, 0, 0 |<br />
| ak-user-ma.. | yes | 22:31 | 1m51s | 22:30 | 170M | - | 3, 7, 4, 6, 0 |<br />
| azubi-1 | yes | 03:39 | 3h24m | 00:15 | 44G | 1.2G | 1, 7, 4,10, 1 |<br />
| dcpant | yes | 03:06 | 1m03s | 03:05 | 1.3G | - | 1, 7, 4,10, 1 |<br />
| eshare | yes | 21:06 | 21m01s | 20:45 | 24G | 178M | 1, 7, 4,10, 1 |<br />
| etherpad | yes | 20:40 | 0m13s | 20:40 | 3.1M | - | 1, 7, 4, 2, 0 |<br />
| fazubi | yes | 08:16 | 11h16m | 21:00 | 132G | 3.3G | 1, 7, 4,10, 1 |<br />
| fschare-rh | yes | de-duplicating (pid 9719) | 1, 7, 4,10, 1 |<br />
| fsh-02 | yes | 07:53 | 12h23m | 19:30 | 182G | 8.5G | 1, 7, 4,10, 1 |<br />
| fshare-ak2 | yes | 00:30 | 0m26s | 00:30 | 415M | 544k | 1, 7, 4,10, 1 |<br />
| helpdesk | yes | 21:27 | 2m16s | 21:25 | 895M | 138k | 1, 7, 4,10, 0 |<br />
| it-share | yes | running rsync (pid 9744) | 1, 7, 1, 0, 0 |<br />
| lightroom | yes | waiting (pid 9528) | 1, 7, 4, 2, 0 |<br />
| localhost | yes | 02:15 | 0m08s | 02:15 | 395k | - | 1, 7, 2, 0, 0 |<br />
| mediawiki | yes | 16:10 | 0m06s | 19:40 | 1.1G | 6.7M | 3, 7, 4,10, 1 |<br />
| mshare | yes | 00:48 | 1h33m | 23:15 | 18G | 2.4G | 1, 7, 4,10, 1 |<br />
| wshare | yes | 00:34 | 1h49m | 22:45 | 23G | 484M | 1, 7, 4,10, 1 |<br />
+--------------+-----+-------+--------+-------+-------+-------+----------------+<br />
2 disabled jobs not listed. Use --all to display.<br />
</pre></small><br />
With --csv, the output is in a machine-readable semicolon-separated format.<br />
<br />
'''affa --show-config-pathes [--csv] [JOB JOB ...]'''<br />
Prints the full path of the file where the section of each given job JOB is defined.<br />
<br />
'''affa --show-default-config'''<br />
Prints a list of all allowed property names with their default values. These values are used when a property is omitted in the job sections.<br />
<br />
'''affa --show-schedule [--all]'''<br />
Prints a 'graphical' timetable for all enabled jobs. The resolution is 30 minutes. An 'S' character marks the scheduled start times. The duration of the job runs are marked with '=' characters.<br />
<small><pre><br />
Affa version 3.1.0-0 on affa-2.mydomain.com<br />
TIME 12:00 16:00 20:00 0:00 4:00 8:00<br />
fsh-02 -------- -------S ~~~~~~~~ ~~~~~~~~ ~~~~~~~~ --------<br />
ads-ak -------- -------- S------- -------- -------- --------<br />
etherpad -------- -------- -S------ -------- -------- --------<br />
eshare -------- -------- -S~----- -------- -------- --------<br />
fazubi -------- -------- --S~~~~~ ~~~~~~~~ ~~~~~~~~ ~-------<br />
helpdesk -------- -------- --S----- -------- -------- --------<br />
it-share -------- -------- ---S=~~~ ~~~~~~~~ ~~~~~~~- -------- busy<br />
wshare -------- -------- -----S~~ ~~------ -------- --------<br />
ak-user-management -------- -------- -----S-- -------- -------- --------<br />
fschare-rh -------- -------- -----S=- -------- --~----- -------- busy<br />
mshare -------- -------- ------S= ~~------ -------- --------<br />
azubi-1 -------- -------- -------- S=~~~~~~ -------- --------<br />
fshare-ak2 -------- -------- -------- -S------ -------- --------<br />
mediawiki -------- -------- -------- ---S---- -------- ----S---<br />
localhost -------- -------- -------- ----S--- -------- --------<br />
dcpant -------- -------- -------- ------S- -------- --------<br />
lightroom -------- -------- -------- -------- -------- S------- busy<br />
Symbols: S=scheduled K=kill R=resume '='=rsync '~'=dedup<br />
2 disabled jobs not listed. Use --all to display.<br />
</pre></small><br />
<br />
'''affa --log-tail [JOB]'''<br />
Displays the tail of the logfile of job JOB with live updates. This command is equivalent to 'tail -n 50 -f /path/to/logfile'. Without the JOB argument, the global logfile is shown.<br />
<br />
'''affa --send-status'''<br />
Sends the status table, the disk-usage and the archive list of all jobs to the email addresses configured in the [GlobalAffaConfig] section.<br />
<br />
'''affa --disk-usage'''<br />
Shows the current disk usage of all root-dir filesystems.<br />
<small><pre><br />
Affa version 3.1.0-0 on affa-2.mydomain.de<br />
+------+--------+--------+----------------------------------------------------+<br />
| Use% | Used | Avail | Root Dir |<br />
+------+--------+--------+----------------------------------------------------+<br />
| 71% | 938GB | 365GB | /var/affa |<br />
| 20% | 194GB | 759GB | /mnt/affadev |<br />
+------+--------+--------+----------------------------------------------------+<br />
</pre></small><br />
With --csv, the output is printed in a machine-readable semicolon-separated format.<br />
<br />
'''affa --cleanup JOB'''<br />
After you have lowered a keep value, e.g. scheduledKeep, archives with higher indices are no longer shifted and would otherwise exist forever. This option finds these archives and deletes them after confirmation.<br />
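A rough sketch of what such a cleanup has to look for, assuming the JOB/daily.N directory layout used in the restore examples on this page (our own illustration, not Affa's code):<br />

```shell
#!/bin/sh
# Sketch: list archive dirs whose index is >= the configured keep
# value, i.e. the stale archives that `affa --cleanup` would offer
# to delete. The ROOT/JOB/cycle.N layout is assumed.
find_stale() {  # usage: find_stale JOBDIR CYCLE KEEP
    jobdir=$1 cycle=$2 keep=$3
    for d in "$jobdir/$cycle".*; do
        [ -d "$d" ] || continue
        idx=${d##*.}
        [ "$idx" -ge "$keep" ] && echo "$d"
    done
}

# Demo: dailyKeep was lowered from 7 to 3, archives 0..6 still exist;
# daily.3 .. daily.6 are stale.
root=$(mktemp -d)
for i in 0 1 2 3 4 5 6; do mkdir -p "$root/prodbox/daily.$i"; done
find_stale "$root/prodbox" daily 3
```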
<br />
'''affa --rename-job JOB NEWNAME'''<br />
Renames the job JOB to NEWNAME including the section name and archive directories.<br />
<br />
'''affa --move-archive JOB NEWROOTDIR'''<br />
Moves the archive directory of job JOB to the root dir NEWROOTDIR and adjusts the value of the RootDir property. NEWROOTDIR must be a full path starting with a slash. As moving across filesystems (e.g. from an external USB drive to the local disk) is not possible, Affa uses a copy command in this case and deletes the source directory afterwards. Depending on the archive size, copying across filesystems can take a long time.<br />
<br />
'''affa --delete-job [--revoke-key] JOB'''<br />
Irreversibly deletes all archives of job JOB and sets the status property to disabled. With the --revoke-key option, the public key on the remote server is also deleted.<br />
<br />
'''affa --revoke-key JOB'''<br />
Deletes the public DSA key on the remote server.<br />
<br />
'''affa --kill JOB'''<br />
Terminates the running job JOB and all its child processes (rsync processes).<br />
<br />
'''affa --killall'''<br />
Terminates all running jobs.<br />
<br />
'''affa --mailtest JOB'''<br />
Sends a test email to the email addresses configured in the JOB section. Use this to verify that your mail system is working.<br />
{{Note box|By default Affa only sends messages on errors, never on success (see property chattyOnSuccess).}}<br />
<br />
'''affa --nrpe [JOB JOB ...]'''<br />
Checks for jobs that did not run within the last NRPRtrigger hours and reports them as failed. A Nagios/ICINGA NRPE-compliant message is printed and the exit status returned.<br />
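NRPE plugins signal state with one status line on stdout plus a matching exit code (0 = OK, 1 = WARNING, 2 = CRITICAL). A minimal sketch of that convention, not Affa's actual implementation:<br />

```shell
#!/bin/sh
# Sketch of the NRPE plugin convention that `affa --nrpe` follows:
# one status line on stdout and a matching exit code
# (0 = OK, 2 = CRITICAL). Function and messages are illustrative.
report_jobs() {  # usage: report_jobs "failed_job1 failed_job2 ..."
    failed=$1
    if [ -z "$failed" ]; then
        echo "AFFA OK - all jobs ran within the trigger window"
        return 0
    fi
    echo "AFFA CRITICAL - failed jobs: $failed"
    return 2
}

report_jobs ""                       # prints the OK line, exits 0
report_jobs "fsh-02 mshare" || true  # prints the CRITICAL line, exits 2
```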
<br />
'''affa --version'''<br />
Displays the Affa version number and checks Samba and Freedup installation.<br />
<br />
'''affa --warranty'''<br />
Displays the disclaimer of liability.<br />
<br />
'''affa --license'''<br />
Displays the license Affa is released under.<br />
<br />
'''affa --help'''<br />
Displays a short help.<br />
<br />
'''affa --debug'''<br />
Enables verbose logging. Overrides job and global configurations.<br />
<br />
==== SME Specific usage and command line options ====<br />
<br />
These commands are implemented in a fork of Affa 3 project for use with SME. They are not part of the Affa 3 for CentOS project.<br />
<br />
'''affa --rise [--all] JOB [ARCHIVE]'''<br />
Runs a full restore on the Affa server <b>(!!!)</b> of all standard files and directories from the backup ARCHIVE of job JOB. In other words: after completion, the Affa box reboots as a clone of the source server. Ensure that the source server has been powered off before you reboot the Affa box, otherwise the network interface will not come up. This is important when you run --rise remotely. The --rise feature only works with SME servers and should only be used on dedicated backup servers.<br />
<br />
With option --all, all files and directories of the archive as defined by the include[] properties are restored. Files or directories with the same name on the Affa server will be overwritten and cannot be restored by an undo-rise. This should not be an issue on a dedicated Affa server, which does not hold any other data. After a possible undo-rise, the additionally restored data must be removed manually.<br />
<br />
Please note that the rise process backs up the Affa server itself before doing the restore from the archive. This backup is used by a possible undo-rise run to restore the Affa server. Only the standard files and directories are backed up. Data in non-standard locations (like /opt) are untouched and will still exist after the rise run '''if they don't get overwritten by data from the backup''' (e.g. if the backup contains /opt too).<br />
<br />
{{Note box|The above command is not implemented in Affa 3 for CentOS. It has been specifically added to the fork for SME.}}<br />
<br />
'''affa --undo-rise'''<br />
This feature reverts a risen Affa box to a backup server. After a reboot, all configured jobs '''based on standard files and directories''' will work again.<br />
<br />
{{Note box|The above command is not implemented in Affa 3 for CentOS. It has been specifically added to the fork for SME.}}<br />
<br />
=== Example setups ===<br />
==== Dedicated backup server ====<br />
* Set up a dedicated server and install Affa 3 for SME.<br />
* Set up a job for every server you want to back up.<br />
* Send the public key to every server.<br />
affa --send-key JOBNAME<br />
* Check whether password-less logins are working.<br />
affa --check-connections<br />
* Check whether the scheduled jobs are evenly distributed over the day.<br />
affa --show-schedule<br />
* Create the cron jobs.<br />
affa --make-cronjobs<br />
* Check the status after 24 hours.<br />
affa --status<br />
<br />
==== Backup of single ibays ====<br />
Suppose you want to back up the ibays 'staff1' and 'staff2' on your production server with WAN IP 82.123.1.1 to an Affa server at a different location over the internet every night at 2:30am.<br />
* Log into the Affa server.<br />
* Create a dedicated job file ''/etc/affa/ibay-staff.conf''. Edit it and set:<br />
[ibay-staff]<br />
Description=Backup of ibay-staff on 82.123.1.1<br />
remoteHostName=82.123.1.1<br />
TimeSchedule=0230<br />
SMEServer=no<br />
Include=/home/e-smith/files/ibays/staff1<br />
Include=/home/e-smith/files/ibays/staff2<br />
* Save the configuration.<br />
* Send the public key to the production server:<br />
affa --send-key ibay-staff<br />
* Check the next morning:<br />
affa --list-archives ibay-staff<br />
affa --status<br />
ls /var/affa/ibay-staff<br />
<br />
==== Two production servers backup each other ====<br />
You have two sites connected via a VPN and an SME Server running at each site. In that case you don't need a dedicated Affa backup server: both production servers can additionally act as Affa backup servers, backing up the server of the other site. Simply install Affa and configure a job that backs up the other one. You can use all Affa features except the rise feature.<br />
<br />
When using the rise feature, the server becomes a clone of one of the backed-up systems, which is less useful in this scenario: it would give you a running copy of the other site's server while this site's server is down.<br />
<br />
To get redundancy and a faster restore you can configure a local backup to an external USB or NAS device.<br />
<br />
Take care that you do not back up the archives to the other site. Set the RootDir property to a path which is not included in the SME default backup list. Leaving the RootDir property at its default '/var/affa' guarantees this.<br />
<br />
==== Use Affa to backup to a NFS-mounted NAS or a local attached USB drive ====<br />
<br />
{{Note box|This chapter still needs to be checked whether it works with Affa 3 for SME.}}<br />
<br />
You want to back up your SME production server with hostname 'prodbox' and IP 10.200.48.1 to a mounted filesystem instead of setting up a dedicated Affa box.<br />
<br />
===== Setup NAS =====<br />
You have a [http://www.freenas.org FreeNAS] box with IP 10.200.48.2 up and running with the NFS service enabled. The disk is mounted at /mnt/affashare, and you have authorized the IP address of your prodbox server to access the share /mnt/affashare.<br />
* log into the 'prodbox' and install the NFS packages<br />
/usr/bin/yum install --enablerepo=smecontribs smeserver-nfs<br />
* now enable and start the portmapper service<br />
config setprop portmap status enabled<br />
service portmap start<br />
<br />
* mount the NFS share<br />
mkdir -p /mnt/affadevice<br />
mount 10.200.48.2:/mnt/affashare /mnt/affadevice<br />
<br />
<br />
===== Alternatively setup a USB drive =====<br />
* log into the 'prodbox' <br />
* connect a USB hard disk to the USB bus. Now you must determine which device name the kernel has assigned to the drive. View /var/log/messages and search for ''Initializing USB Mass Storage driver''. A few lines below you'll find the name of the device. In this example it is ''sdh''. Replace ''/dev/sdh'' with ''your device'' in the following instructions.<br />
* use the fdisk program to create a Linux partition. '''Verify that this is really the attached USB drive before you continue!'''<br />
<br />
fdisk /dev/sdh<br />
You'll most likely find an existing vfat DOS partition, which you have to delete first. In the following we assume that you have created a single partition ''/dev/sdh1''.<br />
* now format the drive with an ext3 filesystem<br />
mkfs.ext3 /dev/sdh1<br />
* make the mount point<br />
mkdir -p /mnt/affadevice<br />
* add the following line to the /etc/fstab<br />
 /dev/sdh1 /mnt/affadevice ext3 defaults 0 0<br />
* mount the drive<br />
mount /mnt/affadevice<br />
* crosscheck your work using the df command<br />
df<br />
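Before a job writes to /mnt/affadevice it is worth verifying the device is really mounted; otherwise the backup would silently land on the root filesystem. A sketch of such a check (Linux-specific via /proc/mounts; the helper name is ours, not part of Affa):<br />

```shell
#!/bin/sh
# Sketch: refuse to back up onto an unmounted mount point, so the
# archive can't silently land on the root filesystem. Linux-specific
# (/proc/mounts); helper name and messages are illustrative.
is_mounted() {  # usage: is_mounted MOUNTPOINT [MOUNTS_FILE]
    awk -v mp="$1" '$2 == mp { found = 1 } END { exit !found }' \
        "${2:-/proc/mounts}"
}

if is_mounted /mnt/affadevice; then
    echo "device mounted, safe to run: affa --run prodbox"
else
    echo "NOT mounted - refusing to write to /mnt/affadevice" >&2
fi
```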
<br />
===== Copying a AFFA USB hard drive archive to a new disk =====<br />
Affa uses copious amounts of hard links to save disk space in its backups. If you are in the situation where you want to copy such a disk archive to a new (bigger) disk, you need to ensure that the hard links are copied correctly, or the destination copy may become significantly bigger than the source archive.<br />
<br />
One way to copy across file systems (i.e. two different USB disks) and preserve the hard links is as follows:<br />
<br />
* mount both USB drives, but at different mount points, e.g. /media/backup1 and /media/backup2, and then:<br />
<br />
mkdir /media/backup2/archive <br />
cd /media/backup1/archive<br />
tar cpf - . | ( cd /media/backup2/archive && tar xpf - )<br />
<br />
where '''archive''' is the name of the AFFA job you want to move to the other disk.<br />
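You can verify that the copy preserved the hard links by comparing link counts. A self-contained demonstration of the tar pipe above, using temporary directories and an Affa-style layout:<br />

```shell
#!/bin/sh
# Demonstrates that the tar pipe preserves hard links: two linked
# names still share one inode (link count 2) after the copy, whereas
# a naive recursive copy would duplicate the data.
src=$(mktemp -d); dst=$(mktemp -d)

# daily.1/file is a hard link to daily.0/file (Affa-style dedup layout)
mkdir -p "$src/daily.0" "$src/daily.1"
echo "payload" > "$src/daily.0/file"
ln "$src/daily.0/file" "$src/daily.1/file"

( cd "$src" && tar cpf - . ) | ( cd "$dst" && tar xpf - )

# %h = number of hard links; stat -c is GNU coreutils
stat -c %h "$dst/daily.0/file"   # -> 2
```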
<br />
===== Setup Affa =====<br />
You want to run backups at 11:30, 15:30 and 19:30, and you want to keep the last 3 scheduled backups, 7 daily, 5 weekly, 12 monthly and 1 yearly backup.<br />
* log into the 'prodbox' and install the Affa packages as described above.<br />
* create/edit the conf file of the job: nano /etc/affa/prodbox.conf<br />
and set:<br />
[prodbox]<br />
remoteHostName=localhost<br />
TimeSchedule=1130<br />
TimeSchedule=1530<br />
TimeSchedule=1930<br />
scheduledKeep=3<br />
dailyKeep=7<br />
weeklyKeep=5<br />
monthlyKeep=12<br />
yearlyKeep=1<br />
RootDir=/mnt/affadevice<br />
Review the other properties and change them to your needs.<br />
* run the job manually<br />
affa --run prodbox<br />
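Before waiting for the first scheduled run, a quick grep can confirm that the job file defines the properties this setup relies on. This is a rough sketch of our own; the real check is 'affa --configcheck':<br />

```shell
#!/bin/sh
# Quick sanity check of a job conf before the first run: every property
# this setup relies on must be present. A rough grep sketch -- the real
# validation is `affa --configcheck`.
check_conf() {  # usage: check_conf CONFFILE PROP [PROP ...]
    conf=$1; shift
    missing=0
    for p in "$@"; do
        grep -q "^$p=" "$conf" || { echo "missing: $p"; missing=1; }
    done
    return $missing
}

cat > /tmp/prodbox.conf <<'EOF'
[prodbox]
remoteHostName=localhost
TimeSchedule=1130
RootDir=/mnt/affadevice
EOF

check_conf /tmp/prodbox.conf remoteHostName TimeSchedule RootDir \
    && echo "prodbox.conf looks complete"
check_conf /tmp/prodbox.conf scheduledKeep || true  # reports "missing: scheduledKeep"
```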
<br />
===== Limitations =====<br />
With this kind of setup you cannot use the Affa rise feature, as it requires the backup archive to be located on the '''same filesystem''' as the server installation. The rise option uses hardlinks, which do not work across filesystems.<br />
<br />
===== Automount =====<br />
Having the backup archives on the same filesystem is always a risk, which can be minimized by using the automount feature: the external filesystem is then only mounted during a job run.<br />
<br />
In the NAS example set<br />
AutomountDevice=10.200.48.2:/mnt/affashare<br />
AutomountPoint=/mnt/affadevice<br />
and skip step 2.<br />
<br />
In the USB drive example set<br />
AutomountDevice=/dev/sdc1<br />
AutomountPoint=/mnt/affadevice<br />
and skip steps 5 to 8.<br />
<br />
The mount point will be automatically created if it does not exist. <br><br />
To access the archive directory, you need to mount it manually.<br />
<br />
=== Restore ===<br />
==== Restore single files or directories ====<br />
'''Example 1:''' It's Tuesday, January 6th 2009, when user 'briedlin' asks you to restore the messages of his mailbox 'orders' that he accidentally deleted on Monday.<br />
<br />
* First check which backup archives are available. The job name of this server backup is 'primmail'. To get a listing of all archives run:<br />
affa --list-archives primmail<br />
''(see the example listing in chapter [[Affa#Usage_and_command_line_options]])''<br />
* Choose the daily.0 archive, which was created Monday night.<br />
* Now restore the mailbox 'orders' by running the rsync command ''(note the trailing slash!)'' on the Affa backup server:<br />
export RDIR=/home/e-smith/files/users/briedlin/Maildir/.orders/ ''# this variable is used to shorten the next command line''<br />
rsync -av /var/affa/primmail/daily.0/$RDIR 10.204.48.1:$RDIR<br />
<br />
If the servers are configured to use a different ssh port eg 2222, then instead do:<br />
<br />
export RDIR=/home/e-smith/files/users/briedlin/Maildir/.orders/ ''# this variable is used to shorten the next command line''<br />
rsync -av -e 'ssh -p 2222' /var/affa/primmail/daily.0/$RDIR 10.204.48.1:$RDIR<br />
<br />
<br />
<br />
'''Example 2:''' A user has deleted the file orderform.pdf from ibay 'docs' on the server 10.204.48.1 and asks you to restore it.<br />
<br />
* You have searched and found the latest version of this file in weekly archive of job 'prodserv'.<br />
* To copy it back to the server 10.204.48.1 run on the Affa server<br />
export RFILE=/home/e-smith/files/ibays/docs/files/orderform.pdf ''# this variable is used to shorten the next command line''<br />
rsync -av /var/affa/prodserv/weekly.1/$RFILE 10.204.48.1:$RFILE<br />
<br />
* If the servers are configured to use a different ssh port eg 2222, then instead do:<br />
export RFILE=/home/e-smith/files/ibays/docs/files/orderform.pdf ''# this variable is used to shorten the next command line''<br />
rsync -av -e 'ssh -p 2222' /var/affa/prodserv/weekly.1/$RFILE 10.204.48.1:$RFILE<br />
<br />
<br />
<br />
'''Example 3:''' Restoring a file with special characters in the path<br />
<br />
Do not use shell variables to shorten the command. It would complicate things more than it would help.<br />
Quote the source and destination path. In the destination path escape blank characters with a double backslash and brackets with a single backslash. On the Affa backup server do:<br />
rsync -av "/var/affa/fileshare/daily.4/home/e-smith/files/ibays/mechfiles/files/Valve Control (Design Gr)/VALVE LIST FOR ISSUED.xls" "192.168.1.7:/home/e-smith/files/ibays/mechfiles/files/Valve\\ Control\\ \(Design\\ Gr\)/VALVE\\ LIST\\ FOR\\ ISSUED.xls"<br />
<br />
If the servers are configured to use a different ssh port eg 2222, then instead do:<br />
<br />
rsync -av -e 'ssh -p 2222' "/var/affa/fileshare/daily.4/home/e-smith/files/ibays/mechfiles/files/Valve Control (Design Gr)/VALVE LIST FOR ISSUED.xls" "192.168.1.7:/home/e-smith/files/ibays/mechfiles/files/Valve\\ Control\\ \(Design\\ Gr\)/VALVE\\ LIST\\ FOR\\ ISSUED.xls"<br />
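The reason for the double escaping is that the destination path of a remote rsync is expanded a second time by the shell on the remote host. This can be simulated locally without rsync or a remote host (a standalone demo in /tmp):<br />

```shell
# create a file whose path contains blanks and brackets
mkdir -p "/tmp/esc-demo/Valve Control (Design Gr)"
echo data > "/tmp/esc-demo/Valve Control (Design Gr)/LIST.xls"

# locally (the rsync source side), simple quoting is enough:
ls "/tmp/esc-demo/Valve Control (Design Gr)/LIST.xls"

# the rsync destination passes through a second shell on the remote
# host, so blanks need a double backslash and brackets a single one;
# simulated here by feeding the escaped string to another shell:
sh -c "ls /tmp/esc-demo/Valve\\ Control\\ \(Design\\ Gr\)/LIST.xls"
```

Both ls commands resolve to the same file; only the second one survives the extra round of shell expansion.<br />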
<br />
==== Full restore ====<br />
Generally: <br />
'''affa --full-restore [optional settings] JOB [ARCHIVE]'''<br />
This rsyncs the data (files and directories) from the backup ARCHIVE back to the 'remoteHostname' defined in the configuration of the job JOB. <br />
If ARCHIVE is not given, the archive 'scheduled.0' is used as the default.<br />
The --full-restore reconstructs the server as it was at the time of the backup and takes the following IMPORTANT optional settings into consideration:<br />
<br />
With option [--preserve-newer=yes] files on the remote server with modification time newer than on the backup are not overwritten.<br />
<br />
With option [--delete=yes] all files on the remote server, which are not in the backup, are deleted.<br />
<br />
If the parameter “SMEServer=yes” is set, the signal-event pre-restore and signal-event post-upgrade will be used automatically.<br />
<br />
After the restore is done, the restored server reboots automatically. <br />
<br />
Example:<br />
You have backed up your production server 'prodsrv' as job 'prodbox'. To restore only lost data from the latest backup run<br />
affa --full-restore prodbox<br />
<br />
To restore exactly from the older archive daily.3 run <br />
 affa --full-restore --preserve-newer=no --delete=yes prodbox daily.3<br />
<br />
{{Warning box|A full restore with --preserve-newer=no and --delete=yes reconstructs the server as it was at the time of the backup. That means that all files created or server configuration changes made after the backup will be lost! }}<br />
<br />
==== Moving a SME server installation to new hardware using the Affa rise feature ====<br />
Please see this Howto: [[Moving SME to new Hardware]]<br />
<br />
<br />
==== Restore from USB drive on new server ====<br />
<br />
This tip comes from http://forums.contribs.org/index.php?topic=42412.0<br />
<br />
Q) I have complete backups made with Affa stored on a USB hard drive connected to our Affa backup server. I need to restore an earlier monthly backup of our job "mailbackup" to a test server rather than to the original system. The instructions show how to rise the backup server to its current backup, but I want to restore a point from about a month ago, before some strange things happened, and I want to do it on a machine that is neither our backup server nor our production server. My goal is to do some testing.<br />
<br />
A) On your test server set up a job 'mailbackup' identical to that on your backup server, but set the property RootDir to /var/affa and the property status to disabled. Connect the USB drive and copy the archive of the job 'mailbackup' to /var/affa. Then run affa --rise mailbackup ARCHIVE<br />
<br />
Remember that Affa places job setup scripts into the archive directories. In this case it is mailbackup-setup.pl. Copy it to your test server, change the RootDir and status properties and execute it.<br />
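In Affa 3's ini-style configuration, the two property changes from the answer above would look like this on the test server (a sketch; all other properties stay as written by the mailbackup-setup.pl script):<br />

```ini
[mailbackup]
RootDir=/var/affa
status=disabled
```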
<br />
=== FAQ ===<br />
'''What files and directories are included by default?'''<br />
<br />
With SMEServer=no, nothing at all.<br />
<br />
With SMEServer=yes the SME default backup list:<br />
/etc/e-smith/templates-custom<br />
/etc/e-smith/templates-user-custom<br />
/etc/group<br />
/etc/gshadow<br />
/etc/passwd<br />
/etc/samba/secrets.tdb<br />
/etc/samba/smbpasswd<br />
/etc/shadow<br />
/etc/smbpasswd<br />
/etc/ssh<br />
/etc/sudoers<br />
/home/e-smith<br />
/root<br />
<br />
{{Note box|The following example needs to be reworked for Affa 3.}}<br />
'''Can I exclusively backup image files from a specific directory?'''<br />
Yes. Assuming you want to back up all gif and jpg files from the directory /home/e-smith/files/ibays/pictures/files, use this configuration:<br />
db affa setprop JOBNAME SMEServer no # do not include SME Server default directories and files<br />
db affa setprop JOBNAME Include[0] '/home/e-smith/files/ibays/pictures/files' # start searching here<br />
db affa setprop JOBNAME Include[1] '*/' # search the tree recursively<br />
db affa setprop JOBNAME Include[2] '*.gif' # copy files that match these patterns<br />
db affa setprop JOBNAME Include[3] '*.jpg' <br />
db affa setprop JOBNAME Exclude[0] '*' # exclude all others<br />
db affa setprop JOBNAME rsyncOptions '-m' # do not create empty folders<br />
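As the note above says, this example has not yet been reworked for Affa 3. An untested sketch of how the same filter might look in an Affa 3 job configuration file, e.g. /etc/affa/pictures.conf, using the property names shown elsewhere on this page (verify against the Affa 3 manual before relying on it):<br />

```ini
[pictures]
# do not include SME Server default directories and files
SMEServer=no
# start searching here, recurse, and keep only gif/jpg files
Include=/home/e-smith/files/ibays/pictures/files
Include=*/
Include=*.gif
Include=*.jpg
Exclude=*
# do not create empty folders
rsyncOptions=-m
```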
<br />
<br />
'''How can I move a backup archive from my production server in a remote branch office to the Affa server using a USB disk? I want to avoid downloading the first full backup over the internet.'''<br />
Install Affa on your production server, then connect and mount the USB disk. Set up a job to write the archive to the USB disk. After the job run, connect the USB disk to your Affa server and set up the job in the same way. Use the setup script from the archive directory; you only need to change the remoteHostName property. Now use the --move-archive option to move the archive to the local disk. If not already done, send the public key to your production server. You're done.<br />
<br />
<br />
'''How do I backup two SME servers behind a firewall?'''<br />
First you need to configure port forwarding for the ssh service on your firewall. Use a non-standard port, e.g. 2200->22, for your second server. Set up a job on your Affa server for each of your production servers and set the job property sshPort=2200 for the second server.<br />
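With Affa 3's configuration files, the port ends up in the sshPort property of the second job. A sketch with example hostnames and ports (both jobs point at the firewall's public address; the forwarding rules direct each port to the right server):<br />

```ini
[server1]
remoteHostName=office.example.xx
sshPort=22
Description=First SME server behind the firewall

[server2]
remoteHostName=office.example.xx
sshPort=2200
Description=Second SME server behind the firewall
```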
<br />
=== Uninstall ===<br />
This removes the installed Affa package, all configuration data and all backup archives.<br />
<br />
Run the following commands for each job:<br />
affa --revoke-key JOBNAME<br />
affa --delete-job JOBNAME<br />
Verify that all jobs have been deleted:<br />
affa --status<br />
Remove the Affa package:<br />
yum remove smeserver-affa-3.2.2.1-0.noarch.rpm<br />
Cleaning up:<br />
rm -f /etc/cron.d/affa<br />
rm -rf /etc/affa<br />
rm -rf /var/affa /var/log/affa<br />
<br />
=== Troubleshooting ===<br />
==== Affa stops working after prodserver was rebuilt ==== <br />
'''Description:'''<br />
* Affa stopped working or is broken.<br />
* When affaserver tries to make an ssh connection to prodserver it fails.<br />
* In the /var/log/affa/''jobname''.log file you will see this message: "SSH connection to ''prodserver ip'' failed. Did you send the public key?". <br />
* Resending the keys does not solve the problem. <br />
<br />
'''Solution:'''<br />
* This is probably because the key for prodserver that is stored on the affaserver no longer matches the current key of prodserver. <br />
* You need to remove the existing key so that the next ssh connection retrieves the key from prodserver again. <br />
* From the command line edit the /root/.ssh/known_hosts file and remove the line for prodserver. It starts with: ''prodserver ip'' ssh-rsa<br />
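The edit can also be done from the shell. The sketch below works on a throwaway copy of known_hosts so the effect is visible; on a real affaserver you would point it at /root/.ssh/known_hosts instead, or simply run ssh-keygen -R <prodserver ip>, which removes the entry and keeps a .old backup. The IPs are example values:<br />

```shell
# demo known_hosts with a stale entry for the (example) prodserver IP
printf '10.200.48.1 ssh-rsa AAAAB3NzaSTALEKEY\n192.168.1.7 ssh-rsa AAAAB3NzaOTHERKEY\n' > /tmp/known_hosts.demo

# delete the line that starts with the prodserver IP
sed -i '/^10\.200\.48\.1 /d' /tmp/known_hosts.demo

# only the unrelated entry is left
cat /tmp/known_hosts.demo
```

The next ssh connection to prodserver will then ask to confirm the new host key.<br />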
<br />
=== Bug report ===<br />
Affa is listed in the [http://bugs.contribs.org/enter_bug.cgi?product=SME%20Contribs&component=smeserver-affa bugtracker contribs section].<br />
Please report all bugs, new feature requests and documentation issues there.<br />
<br />
<br />
{{#bugzilla:columns=id,product,version,status,summary |sort=id |order=desc |disablecache=1|component=smeserver-affa|noresultsmessage="No open bugs found."}}<br />
<br />
=== Additional information ===<br />
For details of performance, changelog etc see [[:Affa:Additional information]]<br />
<br />
=== References ===<br />
* Wikipedia article http://en.wikipedia.org/wiki/Rsync<br />
* Rsync homepage http://rsync.samba.org/<br />
* Wiki article [[Moving SME to new Hardware]]<br />
* Wiki article [[Backup with Affa and FreeDup]]<br />
* Wiki article [[Rsyncd setup on a windows computer for use with Affa backup]]<br />
* Wiki article [[Backup Windows Computers Using Affa]]<br />
* Wiki article [[Backup of ESXi Virtual Machines using Affa]]<br />
<br />
----<br />
[[Category: Contrib]]<br />
[[Category: Backup]]</div>
https://wiki.koozali.org/index.php?title=Affa&diff=33079 Affa 2017-04-01T19:15:22Z <p>Arnaud: /* FAQ */</p>
<hr />
<div>{{Languages|Affa}}<br />
{{Level|Advanced}}<br />
{{usefulnote}}<br />
<br />
<br />
=== Maintainer ===<br />
Maintainer(s) Affa3: Arnaud, stephdl<br><br />
Development of Affa2 for SME Server by the original maintainer has been discontinued.<br />
<br />
=== Version ===<br />
{{ #smeversion: affa}}<br />
{{ #smeversion: smeserver-affa}}<br />
<br />
<br />
=== Description ===<br />
The main purpose of the Affa package is to turn a SME Server into a dedicated backup box in a few minutes. Affa backs up as many SME servers as you like, or any other servers which have sshd running and rsync installed. Once it is configured, Affa runs reliably unattended and sends warning messages in case of any errors. <br />
<br />
All backup archives are full backups, as Affa makes use of the hardlink technique. Therefore a new full backup only needs disk space for the differences plus the filesystem overhead for the hardlinks and directories (which is typically 2-3%).<br />
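The hardlink technique can be illustrated with plain shell commands (a standalone demo in /tmp, not an Affa command; the effect is comparable to what rsync's --link-dest option does for unchanged files): a file that is identical in two archives is stored only once, and both directory entries point at the same inode.<br />

```shell
# simulate two full-backup archives that share one unchanged file
mkdir -p /tmp/affa-demo/daily.1 /tmp/affa-demo/daily.0
echo "unchanged payload" > /tmp/affa-demo/daily.1/report.txt

# hardlink the unchanged file into the newer archive
ln /tmp/affa-demo/daily.1/report.txt /tmp/affa-demo/daily.0/report.txt

# both archives list a complete copy...
ls /tmp/affa-demo/daily.0 /tmp/affa-demo/daily.1

# ...but the link count shows a single physical file (prints 2)
stat -c %h /tmp/affa-demo/daily.0/report.txt
```

Deleting one of the two entries does not free the disk space; only removing the last link does, which is why expiring old archives is cheap.<br />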
<br />
Affa is based on the rsync program and supports the rsync --compress option. This allows you to run backups over the internet or VPN. A typical setup is one or more Affa backup servers placed in different locations, which backup the production server(s) over the VPN.<br />
<br />
A special feature is the rise option, which allows you to rise the backup server to your production server from a backup archive in case of a total loss of your production server. The rise is executed within an extremely short time, even with huge amounts of data. The rise feature uses hardlinks and therefore does not use up additional disk space.<br />
<br />
The rise feature can also be used to upgrade from a lower version of SME to a higher one. This also works from SME 8.1 to 9. For this to work you need 2 separate installations of SME: one is the actual running server and the other is an Affa backup server with a newer version of SME Server, on which you execute the rise command. See more here: [[Moving SME to new Hardware]]<br />
<br />
A simple way to run a server with maximum uptime is to duplicate the hardware, e.g. to have 2 separate hardware boxes or virtual servers on separate hardware. One is the actual server and the other one is a backup system that, in case of problems, can be promoted to the actual server with the rise command within a very short time. This also allows you to upgrade the SME Server software with minimum downtime!<br />
<br />
Affa is a command line tool for system administrators and is intentionally designed without a GUI. Therefore it can be efficiently managed on the console and over slow internet connections.<br />
<br />
'''Note:''' This document also refers to Affa version 2, which is not maintained anymore. Information on Affa 3 will be gradually added here once it has been tested more.<br />
<br />
==== Affa features at a glance ====<br />
* Affa is secure: All transfers and connections are made by using the ssh protocol with public/private key authentication<br />
* Periodically runs unattended full backups. Only differences to the previous full backup are transferred over the network<br />
* Using rsync with optional bandwidth limit and compression allows backups over the internet<br />
* Uses hardlink technique, i.e. physical disk space only needed for the differences between two full backups<br />
* Keeps a configurable number of scheduled, daily, weekly, monthly and yearly full backup archives<br />
* The archives are browseable. No need to unpack them first.<br />
* Archives can be mapped to a Samba share.<br />
* Backup jobs are started by the cron daemon<br />
* Interrupted jobs continue with already transferred data when restarted<br />
* Backs up the default e-smith directories and files when the property SMEServer is set to yes<br />
* Additional directories and files can be included<br />
* Directories and files can be excluded from the backup<br />
* Non-SME server linux systems can be backed up by setting the SMEServer property to no and using an include list<br />
* In ESXi mode, running virtual machines can be backed up. See [[Backup of ESXi Virtual Machines using Affa]]<br />
* Configurable nice level for rsync processes on the backup and source server<br />
* Optional run of custom programs before and after a job run (e.g. running tape backup)<br />
* Checks the disk space left after a job run with warning levels strict, normal or risky<br />
* Extensive checking of failure conditions<br />
* Sends failure messages to a configurable list of email addresses<br />
* Sends a warning message, if the backup server runs out of disk space<br />
* Installs an optional watchdog on the source server in case the backupserver fails (SME Server only)<br />
* Watchdog sends warning, if an expected backup did not run (SME Server only)<br />
* Watchdog sends a daily reminder message, if the error continues unchecked (SME Server only)<br />
* Option to display current status of all jobs showing times of last and next run, size and disk usage<br />
* Status can be mailed on a daily, weekly or monthly schedule<br />
* Option to display all existing archives of a job, showing date, number of files, size and bytes transferred from the source<br />
* Option to send (and revoke) the public key to the source server (SME Server and ESXi only)<br />
* Option to rise the backup server to a production server from a backup (SME Server only)<br />
* The rise feature does not physically move data and therefore is extremely fast and needs (almost) no extra disk space<br />
* The rise option can be run remotely, as the NIC driver configuration of the backup server is preserved<br />
* Compares installed RPMs on source with backup server. Sends warning message, if not in sync<br />
* Undo rise option to restore the backup server<br />
* Configurable via a e-smith style db, with one record for each job and a default record for all jobs<br />
* Logs to /var/log/affa/JOB.log and /var/log/affa/affa.log with optional debug switch for higher verbosity<br />
* Log files are rotated weekly, with 5 logs kept<br />
<br />
<div style="text-align:left" class="mw-collapsible mw-collapsed" data-collapsetext="less &uarr;" data-expandtext="Obsolete Affa 2 instructions &darr;"><br />
<br />
=== Installation or Update of Affa 2 ===<br />
<br />
====Smecontribs for sme8====<br />
yum install --enablerepo=smecontribs smeserver-affa<br />
<br />
The latest test version can be found here :<br />
[http://www.reetspetit.com/smeserver/5/noarch/smeserver-affa-2.0.0-rc6.el5.sme.noarch.rpm smeserver-affa-2.0.0-rc6.el5.sme.noarch.rpm] Note this is NOT signed.<br />
<br />
Please post bug reports in the [http://bugs.contribs.org bug tracker]<br />
<br />
When you have installed Affa for the first time run the following command to initialize the Affa database<br />
<br />
affa --make-cronjobs<br />
and log out and log back in to the console for the bash auto-completion (TAB key) to take effect.<br />
====Obsolete way to install====<br />
{{Note box|SME7 is now EOL and has been moved to the obsolete tree}}<br />
SME7 version: '''smeserver-affa-2.0.0-rc4.noarch.rpm'''<br />
wget <nowiki>http://mirror.contribs.org/smeserver/contribs/michaelw/sme7/Affa2/smeserver-affa-2.0.0-rc4.noarch.rpm</nowiki><br />
wget <nowiki>http://mirror.contribs.org/smeserver/contribs/michaelw/sme7/Affa2/perl-Compress-Bzip2-2.09-1.2.el4.rf.i386.rpm</nowiki><br />
/usr/bin/yum --enablerepo=smecontribs localinstall \<br />
smeserver-affa-2.0.0-rc4.noarch.rpm \<br />
perl-Compress-Bzip2-2.09-1.2.el4.rf.i386.rpm<br />
<br />
SME8 version: SME8 is based on CentOS5, therefore use the perl-Compress-Bzip2 el5 rpm: <br />
wget <nowiki>http://mirror.contribs.org/releases/8/smecontribs/i386/repoview/smeserver-affa.html</nowiki><br />
For SME8 32bit:<br />
wget <nowiki>http://apt.sw.be/redhat/el5/en/i386/rpmforge/RPMS/perl-Compress-Bzip2-2.09-1.2.el5.rf.i386.rpm</nowiki><br />
/usr/bin/yum --enablerepo=smecontribs localinstall \<br />
smeserver-affa-2.0.0-rc4.noarch.rpm \<br />
perl-Compress-Bzip2-2.09-1.2.el5.rf.i386.rpm<br />
For SME8 64bit:<br />
wget http://apt.sw.be/redhat/el5/en/x86_64/rpmforge/RPMS/perl-Compress-Bzip2-2.09-1.2.el5.rf.x86_64.rpm<br />
/usr/bin/yum --enablerepo=smecontribs localinstall \<br />
smeserver-affa-2.0.0-rc4.noarch.rpm \<br />
perl-Compress-Bzip2-2.09-1.2.el5.rf.x86_64.rpm<br />
<br />
When you have installed Affa for the first time run the following command to initialize the Affa database<br />
affa --make-cronjobs<br />
and log out and log back in to the console for the bash auto-completion (TAB key) to take effect.<br />
<br />
==== Quick start example for Affa 2 ====<br />
You have a SME 7 production server with hostname 'prodbox‘ and IP 10.200.48.1.<br><br />
Set up a second SME 7 box as your backupserver with hostname 'affabox‘ and IP 10.200.48.2. <br />
<br />
<ol><li>log into the 'affabox' and install the packages as described above.<br />
</li><li>copy the config helper script sample<br />
cp /usr/lib/affa/jobconfig-sample.pl /root/prodbox-job.pl<br />
</li><li>edit /root/prodbox-job.pl and set<br />
my $jobname='prodbox';<br />
and<br />
'remoteHostName‘=>'10.200.48.1',<br />
optionally, if the remote server port is configured to e.g. 2222 then set<br />
'sshPort'=>2222,<br />
</li><li>save and run the 'prodbox-job.pl' file. This will write the configuration (this makes the database entries and sets up the cronjobs)<br />
/root/prodbox-job.pl<br />
</li><li>generate the DSA keys and send the public key to the 'prodbox'<br />
{{Note box|When initially doing this step, you will need to temporarily enable "Allow secure shell access using standard passwords" on the production server}}<br />
affa --send-key prodbox<br />
</li><li>run the job manually<br />
<br />
affa --run prodbox<br />
</li></ol><br />
<br />
==== Configuration ====<br />
The configuration is stored in an e-smith style database. Use the db command to configure Affa.<br />
The jobname is the record key with the type 'job'.<br>To setup a new job with the name 'prodbox' enter:<br />
<br />
db affa set prodbox job<br />
<br />
then set the properties<br />
<br />
db affa setprop prodbox remoteHostName 192.168.1.1<br />
db affa setprop prodbox TimeSchedule '0030,0730,1130,1330,1730,2030'<br />
db affa setprop prodbox Description 'My Production Server'<br />
db affa setprop prodbox status enabled<br />
and so on...<br />
<br />
Alternatively you can you use a script as described above in the 'Quick start' chapter.<br />
<br />
To verify your work, type:<br />
<br />
db affa show prodbox<br />
<br />
Finally set up the cronjobs:<br />
affa --make-cronjobs<br />
<br />
</div><br />
<br />
<br />
=== Installation of Affa 3 ===<br />
The following instructions assume that a fresh Affa 3 installation is made on a fresh dedicated SME server which serves as a dedicated backup server ('affabox'). No jobs are migrated from Affa 2. If you are using an existing SME server, just skip the instructions on how to set up a fresh SME box.<br />
<br />
Setup a dedicated machine with SME 8.1 or SME 9.0 from CDROM. Use the following example settings:<br />
<ol><li>Domain name: athome.xx (use your existing domain name)</li><br />
<li>Host name: affabox (must not match to existing host)</li><br />
<li>IP address: 192.168.0.10 (must not match to existing IP address)</li><br />
<li>Subnet: 255.255.255.0</li><br />
<li>role: Server only</li><br />
<li>Gateway: 192.168.0.1 (use your existing gateway)</li><br />
<li>DHCP: DEactivate</li><br />
<li>DNS server: 192.168.0.1 (use your existing DNS server)</li><br />
<li>activate changes: yes</li><br />
<li>The fresh server will then restart with the above settings.</li><br />
</ol><br />
<br />
To update your server, log in as user root on the local console. Use the 'top' command to display running processes and wait until 'yum' has stopped running.<br />
yum clean all<br />
yum update<br />
The fresh server will be updated.<br />
signal-event post-upgrade<br />
signal-event reboot<br />
The server will reboot. Afterwards it is ready for installation of Affa 3.<br />
<br />
Login as root on local or remote SSH console of ''affabox''.<br />
<br />
= For SME8 =<br />
<br />
To install issue the following command:<br />
yum install smeserver-affa --enablerepo=smecontribs<br />
= For SME9 =<br />
<br />
yum install smeserver-affa --enablerepo=smecontribs<br />
<headertabs/><br />
<br />
The server needs to be updated again.<br />
signal-event post-upgrade<br />
signal-event reboot<br />
The server will reboot. Afterwards Affa 3 is ready for use.<br />
<br />
Now manually create directory for archive files:<br />
mkdir /var/affa<br />
<br />
{{Warning box| Note that on SME9, a new SSH [https://wiki.contribs.org/AutoBlock AutoBlock] feature interferes with the Affa requirement for regular SSH logins! You might have to disable this feature to allow Affa to operate correctly. }}<br />
<br />
=== Creation of backup jobs ===<br />
Job configurations of Affa 3 are no longer stored in an e-smith style database. They are stored in configuration files located in ''/etc/affa'' . Create your configuration files in this directory, e.g.<br />
touch /etc/affa/backup-smeserver.conf<br />
where ''backup-smeserver'' is your jobname, and add content as described below.<br />
<br />
==== Quick start example ====<br />
You have a SME production server with hostname 'smeserver‘ and IP 192.168.0.2.<br><br />
You have a second SME box as your backup server with hostname 'affabox‘ and IP 192.168.0.10. <br />
<br />
Login to your 'affabox' as root and edit ''/etc/affa/backup-smeserver.conf'' . Using e.g. editor ''nano'', create the following example job configuration file for jobname ''backup-smeserver'' :<br />
[backup-smeserver]<br />
remoteHostName=192.168.0.2<br />
SMEServer=yes<br />
Watchdog=yes<br />
RPMCheck=yes<br />
ConnectionCheckTimeout=120<br />
Debug=no<br />
Description=Backup of 192.168.0.2 smeserver.athome.xx<br />
DiskSpaceWarn=strict<br />
RootDir=/var/affa<br />
TimeSchedule=0630<br />
localNice=15<br />
remoteNice=15<br />
rsync--inplace=yes<br />
rsyncCompress=no<br />
rsyncTimeout=900<br />
scheduledKeep=1<br />
dailyKeep=7<br />
weeklyKeep=4<br />
monthlyKeep=12<br />
yearlyKeep=1<br />
status=enabled<br />
<br />
If you have a Letsencrypt certificate on the production server, then you should also include<br />
Include=/etc/dehydrated<br />
<br />
Then save your job configuration file.<br />
<br />
Now check that your configuration is OK:<br />
affa --configcheck<br />
This should report no errors. Now create and send the key files to your production smeserver:<br />
affa --send-key backup-smeserver<br />
The following output should appear on the console:<br />
 Job backup-smeserver: Generating DSA keys...<br />
Successfully created DSA key pair.<br />
root@192.168.0.2's password: <Enter password of 192.168.0.2/smeserver and hit ENTER><br />
Public key sent to 192.168.0.2<br />
Now run your job manually, both to test the setup and to accept the remote host key:<br />
affa --run backup-smeserver<br />
The following output should appear on the console:<br />
The authenticity of host 'backup-smeserver (192.168.0.2)' can't be established.<br />
RSA key fingerprint is 3b..........65.<br />
 Are you sure you want to continue connecting (yes/no)? <yes><br />
Now your backup job should run for the first time. Depending on the volume of the files, this takes quite a while.<br />
<br />
Once the job is done, check that the archive is available:<br />
 affa --list-archives<br />
<br />
or run the job a second time:<br />
affa --run backup-smeserver<br />
Note that you won't be asked for the password again. Note as well that this second run of the job ''backup-smeserver'' should run considerably faster, because not all files are copied again: only the difference to the previous run is backed up; the rest is stored as hardlinks to the existing file copies.<br />
<br />
Check that the second archive is available too:<br />
affa --list-archives<br />
<br />
From here you can work as with Affa 2. Modify your config file as required and as described below. Automate the backup using the command ''affa --make-cronjobs'', see below.<br />
<br />
==== Job configuration properties ====<br />
'''Note 1:''' The default values shown in this table are the Affa program defaults and not to be confused with the preset values in the job configuration files, e.g. backup-smeserver.conf.<br />
<br />
'''Note 2:''' The complete documentation of the Affa 3 settings is available in the [http://affa.sourceforge.net/AffaPdfMan.pdf Affa manual (PDF)]. It should be taken into consideration!<br />
<br />
'''Note 3:''' Affa 3 for SME is a fork of [http://affa.sourceforge.net/ Affa 3 for CentOS]. It adds a few options back again which have been removed. Generally the configuration properties as described [http://wiki.nikoforge.org/Affa_Configuration/ here] do apply. Arnaud added the following functions for SME:<br />
<ul><br />
<li>The parameter “SMEServer”</li><br />
<li>The parameter and function “Watchdog”</li><br />
<li>The parameter and function “RPMCheck”</li><br />
<li>The functions “--rise” and “--undo-rise”</li><br />
</ul><br />
Consequently the list below should reproduce the list for the unforked Affa 3 version, plus adding the properties above. For details refer to [https://www.guedel.eu/index.php/informatique/sme-server-8/affa-v3 Arnaud's website].<br />
{| border="1" cellpadding="3" cellspacing=0<br />
|-<br />
| '''Property''' || '''Value''' || '''Multivalue''' || '''Default''' || '''Description''' <br />
|-<br />
| remoteHostName <br />
| FQHN or IP<br />
| no<br />
| <none><br />
| FQHN or IP of the source host (mandatory)<br />
|-<br />
| remoteUser<br />
| account<br />
| no<br />
| root<br />
| The user account to be used with all ssh logins. May be case sensitive, e.g. Administrator account on Windows<br />
|-<br />
| Description<br />
| text string<br />
| no<br />
| <none><br />
| Any text that describes the job<br />
|-<br />
| TimeSchedule <br />
| HHMM<br />
| yes<br />
| 2230<br />
| The time the job will run. Use exactly 4 digits, no colon, no point. '''Important:''' Using the proper format HHMM is essential. Badly formatted TimeSchedule will cause strange Perl errors. Multiple runs can be achieved by repeating the line for each run time.<br />
|-<br />
| status <br />
| enabled ''or'' disabled<br />
| no<br />
| enabled<br />
| When set to disabled, no cron entries will be made. You can still run a job manually. <br />
|-<br />
| Include<br />
| full path<br />
| yes<br />
| <none><br />
| File(s) or directory(s) to be included in the backup. <br />
|-<br />
| Exclude<br />
| full path<br />
| yes<br />
| <none><br />
| File(s) or directory(s) to be excluded from the backup. <br />
|-<br />
| RootDir<br />
| full path<br />
| no<br />
| /var/affa<br />
| where to store the backup archives. Do not use /home/e-smith or /root as these are included in the backup and therefore the rise option will not work! Recommended: /var/affa<br />
|-<br />
| scheduledKeep<br />
| integer >= 1<br />
| no<br />
| 1<br />
| how many of the scheduled backups to be kept<br />
|-<br />
| dailyKeep<br />
| integer >= 0<br />
| no<br />
| 7<br />
| how many of the daily backups to be kept<br />
|-<br />
| weeklyKeep<br />
| integer >= 0<br />
| no<br />
| 4<br />
| how many of the weekly backups to be kept<br />
|-<br />
| monthlyKeep<br />
| integer >= 0<br />
| no<br />
| 12<br />
| how many of the monthly backups to be kept<br />
|-<br />
| yearlyKeep<br />
| integer >= 0<br />
| no<br />
| 2<br />
| how many of the yearly backups to be kept<br />
|-<br />
| EmailAddress<br />
| name@domain.com<br />
| yes<br />
| root<br />
| comma separated list of mail addresses, where the messages should be sent to<br>'''Note:''' By default Affa only sends messages on errors, never on success (see property chattyOnSuccess).<br />
|-<br />
| RetryAttempts<br />
| integer >= 0<br />
| no<br />
| 4<br />
| When set to a value>0, Affa re-runs a failed job RetryAttempts times with a delay of RetryAfter seconds.<br />
|-<br />
| RetryAfter<br />
| seconds >= 0<br />
| no<br />
| 900<br />
| when set to a value>0, wait RetryAfter seconds before re-running the job after an error. Only applicable with RetryAttempts>0<br />
|-<br />
| RetryNotification<br />
| yes ''or'' no<br />
| no<br />
| no<br />
| when set to no, Affa does not send an error message when a job has failed and RetryAttempts is configured. An error message is only sent when the last attempt has failed.<br />
|-<br />
| NRPEtrigger<br />
| hours<br />
| no<br />
| 24<br />
| NRPE reports a job as critical when the last successful run is older then NRPEtrigger hours. To exclude the job from monitoring set to a value < 0.<br />
|-<br />
| SambaShare<br />
| yes ''or'' no<br />
| no<br />
| no<br />
| Access to the job archives via CIFS protocol.<br />
|-<br />
| SambaValidUser<br />
| local account<br />
| yes<br />
| affa<br />
| User who has permission to access the job archives via the CIFS protocol.<br />
|-<br />
| preJobCommand<br> preJobCommandRemote<br />
| path relative to /etc/affa/scripts/<br />
| yes<br />
| <none><br />
| Scripts to be executed before a job run. The job name and type (scheduled, daily etc.) are passed as arguments to the program. The preJobCommandRemote scripts are copied to the remote server and then executed there. The scripts are executed in alphabetical order. Use a numerical prefix if you need a specific order e.g. 01scriptB, 02remotescriptA, 03scriptA. Use the included prescript-sample.pl and prescriptRemote-sample.pl as a starting point for your own scripts.<br>'''Note:''' If you use the parameter “SMEServer=yes”, you may delete the “Include” entries for the SME default directories in the jobconfig.ini: they will be backed up automatically.<br />
|-<br />
| postJobCommand<br> postJobCommandRemote<br />
| path relative to /etc/affa/scripts/<br />
| yes<br />
| <none><br />
| Scripts to be executed after a job run. The job name, the type (scheduled, daily etc.) and the exit code of the job run are passed as arguments to the script. The postJobCommandRemote scripts are copied to the remote server and then executed there. The scripts are executed in alphabetical order; use a numerical prefix if you need a specific order, e.g. 01scriptB, 02remotescriptA, 03scriptA. Use the included postscript-sample.pl and postscriptRemote-sample.pl as a starting point for your own scripts.<br>'''Note:''' If you use the parameter "SMEServer=yes", you may delete the Include entries for the SME default directories in the jobconfig.ini: they will be backed up automatically.<br />
|-<br />
| dedup<br />
| yes ''or'' no<br />
| no<br />
| no<br />
| The purpose of deduplication is to remove duplicate files to save backup space. When set to 'yes', file deduplication is run after the synchronization has completed. It looks for files that have identical content, user, group and permissions and replaces duplicates with hardlinks. Deduplication scans the just-completed archive and the previous one, which usually means scheduled.0 and daily.0, or scheduled.0 and scheduled.1. Consider this scenario: a user has renamed directories or files. Rsync sees those as new files and copies them. Deduplication finds the identical copies in the previous archive and replaces them with hardlinks. To use deduplication the Freedup program needs to be installed. Affa actually runs freedup -upg scheduled.0 <previous_archive>.<br />
|-<br />
| dedupKill<br />
| yes ''or'' no<br />
| no<br />
| no<br />
| When set to 'no', the job ignores affa --kill or affa --killall while deduplication is running. This is useful in the context of killAt, which is typically used to stop bandwidth utilisation.<br />
|-<br />
| sshPort<br />
| service port <br />
| no<br />
| 22<br />
| When sshd on the remote host listens on a non-standard port, set the port here.<br />
|-<br />
| ConnectionCheckTimeout<br />
| seconds<br />
| no<br />
| 120<br />
| Affa checks the ssh connection before the rsync process is started and exits with an error after the configured time if the host did not respond.<br />
|- <br />
| BandwidthLimit<br />
| integer>=0 kilobytes per second<br />
| no<br />
| 0<br />
| Limits the data transfer rate. A value of zero specifies no limit.<br />
|- <br />
| rsyncTimeout<br />
| seconds<br />
| no<br />
| 900<br />
| Rsync exits after the configured time if no data was transferred. This avoids infinitely hanging in case of a network error.<br />
|- <br />
| rsyncCompress<br />
| yes ''or'' no <br />
| no<br />
| yes<br />
| Compress the transferred data. May be useful with slow internet connections. Increases CPU load on remote and backup host.<br />
|- <br />
| rsync--inplace<br />
| yes ''or'' no<br />
| no<br />
| yes <br />
| Set to no if the rsync version on the remote host does not support this option.<br />
|-<br />
| rsync--modify-window<br />
| integer >= 0 <br />
| no<br />
| 0<br />
| When comparing two timestamps, rsync treats them as equal if they differ by no more than the modify-window value. This is normally 0 for an exact match. A value > 0 is useful if you can't keep the clocks of the remote host and the Affa server in sync.<br />
|-<br />
| rsyncOptions<br />
| string<br />
| no<br />
| <none><br />
| Additional option string to be passed to rsync<br />
|-<br />
| localNice <br />
| -19...+19<br />
| no<br />
| 0<br />
| run rsync local process niced. <br />
|-<br />
| remoteNice<br />
| -19...+19<br />
| no<br />
| 0<br />
| run rsync process on source niced. <br />
|-<br />
| killAt<br />
| HHMM<br />
| no<br />
| <none><br />
| The time at which a job will be killed if it was still running. You can use it for example to kill jobs that are running over the internet early in the morning so that your users have the full bandwidth available during office hours. <br />
|-<br />
| resumeKilledAt<br />
| HHMM<br />
| no<br />
| <none><br />
| The time at which a killed job will be resumed. This allows you to start earlier in the evening than the scheduled time.<br />
|-<br />
| chattyOnSuccess<br />
| integer >= 0<br />
| no<br />
| 0 <br />
| When set to a value>0, Affa sends a message on a successfully completed job run and decrements the chattyOnSuccess value. When the value has reached zero, Affa falls back to the default and only sends messages on errors.<br />
|-<br />
| AutomountDevice<br>AutomountPoint<br />
| full path<br />
| no<br />
| <none><br />
| Device and mountpoint of backup device (e.g. USB disk). Device is automounted before a job starts and unmounted after job completion. With both properties empty no automount is done.<br />
|-<br />
| AutomountOptions<br />
| string<br />
| no<br />
| <none><br />
| An option string passed to the mount command.<br />
|-<br />
| AutoUnmount <br />
| yes ''or'' no<br />
| no<br />
| yes<br />
| When set to 'no', the automounted device stays mounted after the Affa run.<br />
|-<br />
| Debug <br />
| yes ''or'' no<br />
| no<br />
| no<br />
| Set to yes to increase log verbosity.<br />
|-<br />
| remoteRsyncBinary <br />
| full path<br />
| no<br />
| /usr/bin/rsync<br />
| If the rsync program on the remote server is located in a non-standard location, set it here.<br />
|-<br />
| remoteNiceBinary <br />
| full path<br />
| no<br />
| /bin/nice<br />
| If the nice program on the remote server is located in a non-standard location, set it here.<br />
|-<br />
| localRsyncBinary <br />
| full path<br />
| no<br />
| /usr/bin/rsync<br />
| If the local rsync program is located in a non-standard location, set it here.<br />
|-<br />
| localNiceBinary <br />
| full path<br />
| no<br />
| /bin/nice<br />
| If the local nice program is located in a non-standard location, set it here.<br />
|-<br />
| RemoteAuthorizedKeysFile <br />
| path relative to remote user's home or full path<br />
| no<br />
| .ssh/authorized_keys2<br />
| If the remote host stores the authorized keys file in a non-standard location, set it here.<br />
|-<br />
| rsyncdMode<br />
| yes ''or'' no<br />
| no<br />
| no<br />
| Set to yes to connect to the rsync daemon on the remote host (instead of running rsync over ssh).<br />
|-<br />
| rsyncdModule<br />
| string<br />
| no<br />
| AFFA<br />
| The rsyncd module name (only applicable with rsyncdMode=yes).<br />
|-<br />
| rsyncdUser<br />
| string<br />
| no<br />
| affa<br />
| The username for authentication to the rsync daemon (only applicable with rsyncdMode=yes).<br />
|-<br />
| rsyncdPassword<br />
| string<br />
| no<br />
| <none><br />
| The password for authentication to the rsync daemon (only applicable with rsyncdMode=yes).<br />
<br />
|-<br />
| globalStatus<br />
| enabled ''or'' disabled ''or'' jobs<br />
| no<br />
| jobs<br />
| Set to enabled or disabled to override the status settings in all job sections. When set to the value jobs, the status settings in the job sections are effective.<br>'''Note:''' This property is allowed only in the [GlobalAffaConfig] section. <br />
|-<br />
| sendStatus<br />
| daily ''or'' weekly ''or'' monthly ''or'' never<br />
| no<br />
| weekly<br />
| Defines how frequently the status will be sent.<br>'''Note:''' This property is allowed only in the [GlobalAffaConfig] section. <br />
|-<br />
| SMEServer<br />
| yes ''or'' no<br />
| no<br />
| no<br />
| When set to yes, the default e-smith directories are automatically included and the property RPMCheck=yes can be used.<br>'''Note:''' this property is not implemented in Affa 3 for CentOS. It has been specifically added to the fork for SME.<br />
|-<br />
| RPMCheck<br />
| yes ''or'' no<br />
| no<br />
| no<br />
| Only applicable to jobs that back up a SME Server. Compares the installed packages of the source host with those of this Affa backup host and sends a message with a diff list if they are not in sync. This check is useful if you want the option to rise the backup server to a production server from a backup.<br>'''Note:''' this property is not implemented in Affa 3 for CentOS. It has been specifically added to the fork for SME.<br />
|-<br />
| Watchdog<br />
| yes ''or'' no<br />
| no<br />
| yes<br />
| Only applicable to jobs that back up a SME Server. When a job is started, Affa installs a watchdog script on the source in /etc/cron.d/, which sends a warning message if the next scheduled job (taken from the TimeSchedule property + 10 minutes) did not run. This guarantees that you will be notified even in case of an Affa server outage. The watchdog script sends a daily reminder message if the error continues. The next job run replaces the watchdog script with a new trigger time.<br>'''Note:''' this property is not implemented in Affa 3 for CentOS. It has been specifically added to the fork for SME.<br />
|}<br />
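The dedup property above replaces duplicate files with hardlinks. The space-saving effect can be sketched with plain shell commands; this is an illustration of the principle only (Affa itself runs freedup, and the paths below are made up for the demo):<br />

```shell
# Two archive generations holding the same file under different names,
# as happens after a user renames a file between backup runs.
mkdir -p /tmp/dedup-demo/scheduled.1 /tmp/dedup-demo/scheduled.0
echo "same content" > /tmp/dedup-demo/scheduled.1/old-name   # previous archive
echo "same content" > /tmp/dedup-demo/scheduled.0/new-name   # renamed copy, newly transferred

# Replace the duplicate with a hardlink, as freedup would do:
ln -f /tmp/dedup-demo/scheduled.1/old-name /tmp/dedup-demo/scheduled.0/new-name

# Both names now share one inode; the link count is 2 and the content is unchanged.
stat -c '%i %h' /tmp/dedup-demo/scheduled.0/new-name
```

Since both names point at the same inode, the data blocks are stored only once, which is why an archive full of renamed files costs almost no extra space after deduplication.<br />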
<br />
==== Default configuration properties ====<br />
For all 'name=value' properties, defaults can be set in the [GlobalAffaConfig] section; they are used in every job configuration where the corresponding property is omitted. This is useful when you set up many similar jobs. <br />
<br />
Example: You want to set the property 'localNice' to 19 for all jobs. Then add the following section in one of your configuration files:<br />
[GlobalAffaConfig]<br />
localNice=19<br />
Consequently you don't need to set this property for your individual jobs. Properties set in a job section override the defaults.<br />
<br />
==== Properties for global Settings ====<br />
The following special properties are only applicable to the [GlobalAffaConfig] section:<br />
<ul><br />
<li>sendStatus</li><br />
<li>globalStatus</li><br />
</ul><br />
<br />
All jobs can be disabled for execution through 'cronjobs' by setting 'globalStatus' to 'disabled'.<br />
[GlobalAffaConfig]<br />
globalStatus=disabled<br />
<br />
To re-enable the job runs, either delete the above configuration line, or set:<br />
globalStatus=jobs<br />
which is the default value.<br />
<br />
You can also set the value to<br />
globalStatus=enabled<br />
which enforces job execution through 'cronjobs' and overrides the status values specified in the individual job sections.<br />
<br />
=== Usage and command line options ===<br />
{{Note box|Options can be abbreviated to uniqueness, e.g. --mak is equal to --make-cronjobs}}<br />
'''affa --run JOB'''<br />
Starts a job run. Usually done by the cronjob. <br />
<br />
'''affa --make-cronjobs'''<br />
Configures the cronjobs as scheduled in the job sections. Run this command to make changes of the time-related properties (TimeSchedule, killAt and resumeKilledAt) effective immediately. Every 15 minutes 'cronjobs' checks whether a configuration file is newer than the cronjob file and executes this command if so, so instead of running it manually you can also just wait up to 15 minutes for your updated job configurations to become effective.<br />
<br />
'''affa --configcheck'''<br />
Checks the syntax and values in all configuration files found in /etc/affa/. Run this command after modifying the configuration. Lower/upper case errors in property names are corrected automatically.<br />
<br />
'''affa --send-key [JOB JOB ...]'''<br />
This first generates the DSA key on the Affa server, if not already done. Then the public key is sent to the host 'remoteHostName' as defined in the section of each job JOB, and the job-specific ssh known-host entry is generated.<br />
{{Note box|When initially doing this step, you will need to temporarily enable "Allow secure shell access using standard passwords" on the source server.}}<br />
<br />
'''affa --check-connections [JOB JOB ...]'''<br />
Checks the ssh login and if applicable rsyncd auth for all jobs given as arguments. Without any arguments all jobs are checked.<br />
<br />
'''affa --full-restore [--preserve-newer=no] [--delete=yes] JOB [ARCHIVE]'''<br />
Does a full restore on the remote server of all backed-up files and directories from the backup ARCHIVE. If ARCHIVE is not given, the archive 'scheduled.0' is used as the default. The full restore reconstructs the server as it was at the time of the backup. After the restore the source host reboots.<br />
<br />
By default (--preserve-newer=yes), files on the remote server with a modification time newer than in the backup are not overwritten; specify --preserve-newer=no to overwrite them as well.<br />
<br />
With option --delete=yes all files on the remote server, which are not in the backup, are deleted.<br />
<br />
If the parameter “SMEServer=yes” is set, the signal-event pre-restore and signal-event post-upgrade will be used automatically.<br />
<br />
'''affa --list-archives JOB'''<br />
Displays a table of all present archives of job JOB with date, number of files, size and bytes received. While column buTime shows the actual rsync time, the column ddTime shows the length of the subsequent deduplication run. ddYld is the gained deduplication yield in bytes.<br />
<br />
<small><pre><br />
Affa version 3.1.0-0 on affa-2.mydomain.com<br />
+------------------------------------------------------------------------------+<br />
| Job: fshare-ak |<br />
| Description: Fileserver AK |<br />
| Directory: /var/affa/fshare-ak/ |<br />
| Hostname: 10.204.104.4 |<br />
| Email: sysadmin@mydomain.com |<br />
+-----+----------------------+--------+--------+-------+-------+-------+-------+<br />
| Run | Completion date | buTime | ddTime | ddYld | Files | Size | Recvd |<br />
+-----+----------------------+--------+--------+-------+-------+-------+-------+<br />
| Y 0 | Wed 2010-06-23 20:26 | 11m53s | - | - | 412k | 143G | 470M |<br />
+-----+----------------------+--------+--------+-------+-------+-------+-------+<br />
| M11 | Sun 2010-08-29 20:22 | 7m50s | - | - | 417k | 153G | 14M |<br />
| M10 | Sun 2010-09-26 20:23 | 8m57s | - | - | 430k | 156G | 14M |<br />
| M 9 | Sun 2010-10-31 20:25 | 10m05s | - | - | 448k | 161G | 15M |<br />
| M 8 | Sun 2010-11-28 20:31 | 16m42s | - | - | 463k | 167G | 17M |<br />
| M 7 | Sun 2011-01-02 20:48 | 33m07s | - | - | 486k | 173G | 1.1G |<br />
| M 6 | Sun 2011-01-30 20:33 | 18m54s | - | - | 493k | 176G | 24M |<br />
| M 5 | Sun 2011-02-27 20:28 | 13m31s | - | - | 490k | 176G | 19M |<br />
| M 4 | Sun 2011-03-27 20:28 | 13m08s | - | - | 491k | 175G | 17M |<br />
| M 3 | Sun 2011-05-01 20:30 | 15m41s | - | - | 493k | 179G | 18M |<br />
| M 2 | Sun 2011-05-29 20:28 | 13m40s | - | - | 494k | 182G | 31M |<br />
| M 1 | Sun 2011-06-19 20:26 | 11m20s | - | - | 493k | 183G | 17M |<br />
| M 0 | Sun 2011-07-03 20:28 | 13m46s | - | - | 496k | 183G | 18M |<br />
+-----+----------------------+--------+--------+-------+-------+-------+-------+<br />
| W 3 | Sun 2011-07-10 20:28 | 13m38s | - | - | 479k | 180G | 16M |<br />
| W 2 | Mon 2011-07-11 20:35 | 20m27s | - | - | 479k | 180G | 301M |<br />
| W 1 | Fri 2011-07-15 20:30 | 15m53s | - | - | 480k | 180G | 62M |<br />
| W 0 | Sun 2011-07-24 19:39 | 9m25s | - | - | 482k | 181G | 16M |<br />
+-----+----------------------+--------+--------+-------+-------+-------+-------+<br />
| D 6 | Thu 2011-07-28 19:42 | 12m22s | - | - | 483k | 182G | 176M |<br />
| D 5 | Fri 2011-07-29 23:29 | 11m10s | 5h33m | 45G | 483k | 182G | 16M |<br />
| D 4 | Sat 2011-07-30 19:53 | 23m26s | 2h30m | 8.3G | 483k | 182G | 17M |<br />
| D 3 | Sun 2011-07-31 20:07 | 37m31s | 4m47s | 8.3G | 483k | 182G | 17M |<br />
| D 2 | Mon 2011-08-01 20:44 | 1h14m | 7h50m | 8.5G | 484k | 182G | 630M |<br />
| D 1 | Tue 2011-08-02 20:02 | 32m28s | 12h20m | 8.3G | 484k | 182G | 74M |<br />
| D 0 | Wed 2011-08-03 19:58 | 28m46s | 11h01m | 8.5G | 484k | 182G | 214M |<br />
+-----+----------------------+--------+--------+-------+-------+-------+-------+<br />
| S 0 | Thu 2011-08-04 20:00 | 30m28s | 11h52m | 8.5G | 484k | 182G | 203M |<br />
+-----+----------------------+--------+--------+-------+-------+-------+-------+<br />
</pre></small><br />
<br />
'''affa --list-archives [--csv] JOB'''<br />
With --csv, the output is in machine-readable semicolon-separated format.<br />
<small><pre><br />
Archive:Count;Date;Files;Size;RootDirFilesystemAvail;RootDirFilesystemUsed;valid;TotalBytesReceived;ExecutionTime;DedupTotalFiles;DedupReplacedFiles;DedupSavedBytes;DedupExectime;DedupDate<br />
monthly;00000;201509270631;393237;37252509103;166889260;47876936;yes;1327370;1160638358;99;;;;;<br />
weekly;00003;201510040531;390273;37042612135;164566692;50199504;yes;1117726;45430532;59;;;;;<br />
weekly;00002;201510110531;395553;37299589800;162612204;52153992;yes;1117092;92369237;64;;;;;<br />
weekly;00001;201510180531;403831;37707599172;161855844;52910352;yes;1317083;148317764;64;;;;;<br />
weekly;00000;201510250531;406509;38146200127;161041156;53725040;yes;1104759;64083948;63;;;;;<br />
daily;00006;201510310531;415663;39248862982;162979176;51787020;yes;1958110;198060283;66;;;;;<br />
daily;00005;201511010531;411014;38563389171;163061136;51705060;yes;1156506;48903675;75;;;;;<br />
daily;00004;201511020531;408910;38612857040;162359124;52407072;yes;1007434;210812387;75;;;;;<br />
daily;00003;201511030531;410615;37815861577;162026060;52740136;yes;1524069;162697515;67;;;;;<br />
daily;00002;201511040531;414784;38131734213;161990024;52776172;yes;1571657;134250735;63;;;;;<br />
daily;00001;201511050531;420780;38433895988;161906580;52859616;yes;2021507;149388808;103;;;;;<br />
daily;00000;201511060531;425920;38677614350;161778812;52987384;yes;1987971;166510621;71;;;;;<br />
scheduled;00000;201511070531;425663;38815712018;161612600;53153596;yes;2003540;139603231;68;;;;;<br />
</pre></small><br />
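Because the --csv output is plain semicolon-separated text, it can be post-processed with standard tools. A small sketch (the sample line is taken from the listing above; the leading fields are archive type, index, completion date YYYYMMDDhhmm, file count and size in bytes):<br />

```shell
# One data line as emitted by 'affa --list-archives --csv JOB'
line='daily;00000;201511060531;425920;38677614350;161778812;52987384;yes;1987971;166510621;71;;;;;'

# Print a human-readable summary of the archive
echo "$line" | awk -F';' '{ printf "%s.%d: %s files, %.1f GB (completed %s)\n", $1, $2, $4, $5/1e9, $3 }'
# -> daily.0: 425920 files, 38.7 GB (completed 201511060531)
```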
<br />
'''affa --status [--csv]'''<br />
Displays a table of all configured jobs with enable status, time of last and next run, size of the most recent archive, execution time of the last run and the number of scheduled (S), daily (D), weekly (W), monthly (M) and yearly (Y) archives. If 'Last' shows 'failed', the job did not run in the last 24 h; for disabled jobs 'Last' therefore always shows 'failed' after 24 h. To see the date and time of the last run of those jobs use the --list-archives option. Column 'Next' shows the time when the next run will be started, if 'Enabled' is 'yes'. Column 'ddYld' shows the deduplication yield in bytes.<br />
<br />
<small><pre><br />
Affa version 3.1.0-0 on affa-2.mydomain.com<br />
+--------------+-----+-------+--------+-------+-------+-------+----------------+<br />
| Job | ENA | Last | Time | Next | Size | ddYld | N of S,D,W,M,Y |<br />
+--------------+-----+-------+--------+-------+-------+-------+----------------+<br />
| ads-ak | yes | 20:20 | 0m29s | 20:20 | 28M | - | 1, 7, 2, 0, 0 |<br />
| ak-user-ma.. | yes | 22:31 | 1m51s | 22:30 | 170M | - | 3, 7, 4, 6, 0 |<br />
| azubi-1 | yes | 03:39 | 3h24m | 00:15 | 44G | 1.2G | 1, 7, 4,10, 1 |<br />
| dcpant | yes | 03:06 | 1m03s | 03:05 | 1.3G | - | 1, 7, 4,10, 1 |<br />
| eshare | yes | 21:06 | 21m01s | 20:45 | 24G | 178M | 1, 7, 4,10, 1 |<br />
| etherpad | yes | 20:40 | 0m13s | 20:40 | 3.1M | - | 1, 7, 4, 2, 0 |<br />
| fazubi | yes | 08:16 | 11h16m | 21:00 | 132G | 3.3G | 1, 7, 4,10, 1 |<br />
| fschare-rh | yes | de-duplicating (pid 9719) | 1, 7, 4,10, 1 |<br />
| fsh-02 | yes | 07:53 | 12h23m | 19:30 | 182G | 8.5G | 1, 7, 4,10, 1 |<br />
| fshare-ak2 | yes | 00:30 | 0m26s | 00:30 | 415M | 544k | 1, 7, 4,10, 1 |<br />
| helpdesk | yes | 21:27 | 2m16s | 21:25 | 895M | 138k | 1, 7, 4,10, 0 |<br />
| it-share | yes | running rsync (pid 9744) | 1, 7, 1, 0, 0 |<br />
| lightroom | yes | waiting (pid 9528) | 1, 7, 4, 2, 0 |<br />
| localhost | yes | 02:15 | 0m08s | 02:15 | 395k | - | 1, 7, 2, 0, 0 |<br />
| mediawiki | yes | 16:10 | 0m06s | 19:40 | 1.1G | 6.7M | 3, 7, 4,10, 1 |<br />
| mshare | yes | 00:48 | 1h33m | 23:15 | 18G | 2.4G | 1, 7, 4,10, 1 |<br />
| wshare | yes | 00:34 | 1h49m | 22:45 | 23G | 484M | 1, 7, 4,10, 1 |<br />
+--------------+-----+-------+--------+-------+-------+-------+----------------+<br />
2 disabled jobs not listed. Use --all to display.<br />
</pre></small><br />
With --csv, the output is in machine-readable semicolon-separated format.<br />
<br />
'''affa --show-config-pathes [--csv] [JOB JOB ...]'''<br />
Prints the full path of the configuration file in which the section of each job JOB is defined.<br />
<br />
'''affa --show-default-config'''<br />
Prints a list of all allowed property names with their default values. These values are used, when omitted in the job sections.<br />
<br />
'''affa --show-schedule [--all]'''<br />
Prints a 'graphical' timetable for all enabled jobs. The resolution is 30 minutes. An 'S' character marks the scheduled start times. The duration of each job run is marked with '=' (rsync) and '~' (dedup) characters.<br />
<small><pre><br />
Affa version 3.1.0-0 on affa-2.mydomain.com<br />
TIME 12:00 16:00 20:00 0:00 4:00 8:00<br />
fsh-02 -------- -------S ~~~~~~~~ ~~~~~~~~ ~~~~~~~~ --------<br />
ads-ak -------- -------- S------- -------- -------- --------<br />
etherpad -------- -------- -S------ -------- -------- --------<br />
eshare -------- -------- -S~----- -------- -------- --------<br />
fazubi -------- -------- --S~~~~~ ~~~~~~~~ ~~~~~~~~ ~-------<br />
helpdesk -------- -------- --S----- -------- -------- --------<br />
it-share -------- -------- ---S=~~~ ~~~~~~~~ ~~~~~~~- -------- busy<br />
wshare -------- -------- -----S~~ ~~------ -------- --------<br />
ak-user-management -------- -------- -----S-- -------- -------- --------<br />
fschare-rh -------- -------- -----S=- -------- --~----- -------- busy<br />
mshare -------- -------- ------S= ~~------ -------- --------<br />
azubi-1 -------- -------- -------- S=~~~~~~ -------- --------<br />
fshare-ak2 -------- -------- -------- -S------ -------- --------<br />
mediawiki -------- -------- -------- ---S---- -------- ----S---<br />
localhost -------- -------- -------- ----S--- -------- --------<br />
dcpant -------- -------- -------- ------S- -------- --------<br />
lightroom -------- -------- -------- -------- -------- S------- busy<br />
Symbols: S=scheduled K=kill R=resume '='=rsync '~'=dedup<br />
2 disabled jobs not listed. Use --all to display.<br />
</pre></small><br />
<br />
'''affa --log-tail [JOB]'''<br />
Displays the tail of the logfile of job JOB with live update. This command is identical to tail -n 50 -f /path/to/logfile. Without the JOB argument the global logfile is shown.<br />
<br />
'''affa --send-status'''<br />
Sends the status table, the disk-usage and the archive list of all jobs to the email addresses configured in the [GlobalAffaConfig] section.<br />
<br />
'''affa --disk-usage'''<br />
Shows the current disk usage of all RootDir filesystems.<br />
<small><pre><br />
Affa version 3.1.0-0 on affa-2.mydomain.de<br />
+------+--------+--------+----------------------------------------------------+<br />
| Use% | Used | Avail | Root Dir |<br />
+------+--------+--------+----------------------------------------------------+<br />
| 71% | 938GB | 365GB | /var/affa |<br />
| 20% | 194GB | 759GB | /mnt/affadev |<br />
+------+--------+--------+----------------------------------------------------+<br />
</pre></small><br />
With --csv, the output is printed in a machine-readable semicolon-separated format.<br />
<br />
'''affa --cleanup JOB'''<br />
After you have lowered a keep value, e.g. scheduledKeep, archives with higher indices are no longer shifted and would otherwise exist forever. This option finds these archives and deletes them after confirmation.<br />
<br />
'''affa --rename-job JOB NEWNAME'''<br />
Renames the job JOB to NEWNAME including the section name and archive directories.<br />
<br />
'''affa --move-archive JOB NEWROOTDIR'''<br />
Moves the archive directory of job JOB to the rootdir NEWROOTDIR and adjusts the value of property RootDir. NEWROOTDIR must be a full path starting with a slash. As moving across filesystems (e.g. from an external USB drive to the local disk) is not possible, Affa uses a copy command in this case and deletes the source directory after that. Depending on the archive size, copying across filesystems can take a long time.<br />
<br />
'''affa --delete-job [--revoke-key] JOB'''<br />
Irreversibly deletes all archives of job JOB and sets the status property to disabled. With the --revoke-key option, the public key on the remote server is deleted as well.<br />
<br />
'''affa --revoke-key JOB'''<br />
Deletes the public DSA key on the remote server.<br />
<br />
'''affa --kill JOB'''<br />
Terminates the running job JOB and all its child processes (rsync processes).<br />
<br />
'''affa --killall'''<br />
Terminates all running jobs.<br />
<br />
'''affa --mailtest JOB'''<br />
Sends a test email to the email addresses configured in the JOB section. Use this to verify that your mail system is working.<br />
{{Note box|By default Affa only sends messages on errors, never on success (see property chattyOnSuccess).}}<br />
<br />
'''affa --nrpe [JOB JOB ...]'''<br />
Checks for jobs that did not run within the last NRPEtrigger hours and reports them as failed. A Nagios/ICINGA NRPE-compliant message is printed and an exit status is returned.<br />
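On the monitoring side, this check is typically wired up through the NRPE agent's nrpe.cfg on the Affa server. A hypothetical command definition might look like the following; the affa path and the sudo invocation are assumptions (affa needs root privileges to read its job data), so adjust them to your installation:<br />

```
# /etc/nagios/nrpe.cfg (hypothetical paths -- adjust to your installation)
command[check_affa]=/usr/bin/sudo /usr/bin/affa --nrpe
```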
<br />
'''affa --version'''<br />
Displays the Affa version number and checks Samba and Freedup installation.<br />
<br />
'''affa --warranty'''<br />
Displays the disclaimer of liability.<br />
<br />
'''affa --license'''<br />
Displays the license Affa is released under.<br />
<br />
'''affa --help'''<br />
Displays a short help.<br />
<br />
'''affa --debug'''<br />
Enables verbose logging. Overrides job and global configurations.<br />
<br />
==== SME Specific usage and command line options ====<br />
<br />
These commands are implemented in a fork of Affa 3 project for use with SME. They are not part of the Affa 3 for CentOS project.<br />
<br />
'''affa --rise [--all] JOB [ARCHIVE]'''<br />
Runs a full restore on the Affa server <b>(!!!)</b> of all standard files and directories from the backup ARCHIVE of job JOB. In other words: after completion, the Affa box reboots as a clone of the source server. Ensure that the source server has been powered off before you reboot the Affa box, otherwise the network interface will not come up. This is important when you run --rise remotely. The --rise feature only works with SME Servers and should only be used on dedicated backup servers.<br />
<br />
With option --all, all files and directories of the archive as defined by the Include[] properties are restored. Files or directories with the same name on the Affa server will be overwritten and cannot be restored by an undo-rise. This should not be an issue on a dedicated Affa server which does not hold any other data. After a possible undo-rise, such additionally restored data must be removed manually.<br />
<br />
Please note that the rise process backs up the Affa server itself before doing the restore from the archive. This backup is used by a possible undo-rise run to restore the Affa server. Only the standard files and directories are backed up. Data in non-standard locations (like /opt) is untouched and will still exist after the rise run '''if it does not get overwritten by data from the backup''' (e.g. if the backup contains /opt too).<br />
<br />
{{Note box|The above command is not implemented in Affa 3 for CentOS. It has been specifically added to the fork for SME.}}<br />
<br />
'''affa --undo-rise'''<br />
This feature reverts a risen Affa box to a backup server. After a reboot, all configured jobs '''based on standard files and directories''' will work again.<br />
<br />
{{Note box|The above command is not implemented in Affa 3 for CentOS. It has been specifically added to the fork for SME.}}<br />
<br />
=== Example setups ===<br />
==== Dedicated backup server ====<br />
* Set up a dedicated server and install Affa 3 for SME.<br />
* Set up a job for every server you want to back up.<br />
* Send the public key to every server.<br />
affa --send-key JOBNAME<br />
* Check whether password-less logins are working.<br />
affa --check-connections<br />
* Check whether the scheduled jobs are evenly distributed over the day.<br />
affa --show-schedule<br />
* Create the cron jobs.<br />
affa --make-cronjobs<br />
* Check the status after 24 hours.<br />
affa --status<br />
<br />
==== Backup of single ibays ====<br />
Suppose you want to backup the ibays 'staff1' and 'staff2' on your production server with WAN IP 82.123.1.1 to an Affa server in a different location over the internet every night at 2:30am.<br />
* Log into the Affa server.<br />
* Create a dedicated job file ''/etc/affa/ibay-staff.conf'' . Edit it and set<br />
[ibay-staff]<br />
Description=Backup of ibay-staff on 82.123.1.1<br />
remoteHostName=82.123.1.1<br />
TimeSchedule=0230<br />
SMEServer=no<br />
Include=/home/e-smith/files/ibays/staff1<br />
Include=/home/e-smith/files/ibays/staff2<br />
* Save the configuration<br />
* send the public key to the production server<br />
affa --send-key ibay-staff<br />
* check next morning<br />
affa --list-archives ibay-staff<br />
affa --status<br />
ls /var/affa/ibay-staff<br />
<br />
==== Two production servers backup each other ====<br />
You have two sites connected via a VPN and a SME Server running on each site. In that case you don't need a dedicated Affa backup server: both production servers can additionally act as Affa backup servers, backing up the server of the other site. Simply install Affa and configure a job that backs up the other one. You can use all Affa features except the rise feature.<br />
<br />
The rise feature would turn the server into a clone of one of the backed-up systems, which is less useful in this scenario, as it would give you a running copy of the server of the other site while the server of this site is down.<br />
<br />
To get redundancy and a faster restore you can configure a local backup to an external USB or NAS device.<br />
<br />
Take care that you do not back up the archives back to the other site. Set the RootDir property to a path which is not included in the SME default backup list. When leaving the RootDir property at its default '/var/affa', this is guaranteed.<br />
<br />
==== Use Affa to backup to a NFS-mounted NAS or a local attached USB drive ====<br />
<br />
{{Note box|This chapter still needs to be checked whether it works with Affa 3 for SME.}}<br />
<br />
You want to back up your SME production server with hostname 'prodbox' and IP 10.200.48.1 to a mounted filesystem instead of setting up a dedicated Affa box.<br />
<br />
===== Setup NAS =====<br />
You have a [http://www.freenas.org FreeNAS] box with IP 10.200.48.2 up and running with NFS service enabled. The disk is mounted to /mnt/affashare. You have authorized the IP address of your prodbox server to access share /mnt/affashare.<br />
* log into the 'prodbox' and install the NFS packages<br />
/usr/bin/yum install --enablerepo=smecontribs smeserver-nfs<br />
* now enable and start the portmapper service<br />
config setprop portmap status enabled<br />
service portmap start<br />
<br />
* mount the NFS share<br />
mkdir -p /mnt/affadevice<br />
mount 10.200.48.2:/mnt/affashare /mnt/affadevice<br />
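If the share should be remounted automatically after a reboot, an /etc/fstab entry can be added instead of mounting manually. A sketch using the example IP and mount point from above; the mount options may need tuning for your NAS:<br />

```
10.200.48.2:/mnt/affashare  /mnt/affadevice  nfs  defaults  0 0
```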
<br />
<br />
===== Alternatively setup a USB drive =====<br />
* log into the 'prodbox' <br />
* connect a USB hard disk to the USB bus. Now you must determine which device name the kernel has assigned to the drive. View /var/log/messages and search for ''Initializing USB Mass Storage driver''. A few lines below you'll find the name of the device. In this example it is ''sdh''. Replace ''/dev/sdh'' with ''your device'' in the following instructions.<br />
* use the fdisk program to create a Linux partition. '''Verify that this is really the attached USB drive before you continue!'''<br />
<br />
fdisk /dev/sdh<br />
You'll most likely find an existing vfat DOS partition, which you have to delete first. In the following we assume that you have created a single partition ''/dev/sdh1''.<br />
* now format the drive with an ext3 filesystem<br />
mkfs.ext3 /dev/sdh1<br />
* make the mount point<br />
mkdir -p /mnt/affadevice<br />
* add the following line to the /etc/fstab<br />
 /dev/sdh1 /mnt/affadevice ext3 defaults 0 0<br />
* mount the drive<br />
mount /mnt/affadevice<br />
* crosscheck your work using the df command<br />
df<br />
<br />
===== Copying an Affa USB hard drive archive to a new disk =====<br />
Affa makes extensive use of hard links to conserve disk space for its backups. If you want to copy such a disk archive to a new (bigger) disk, you need to ensure that the hard links are copied correctly, or the destination copy may become significantly bigger than the source archive.<br />
<br />
One way to copy across file systems (i.e. two different USB disks) and preserve the hard links is as follows:<br />
<br />
* mount both USB drives but with different mount points. e.g. /media/backup1 & /media/backup2 and then:<br />
<br />
mkdir /media/backup2/archive <br />
cd /media/backup1/archive<br />
tar cpf - . | ( cd /media/backup2/archive && tar xpf - )<br />
<br />
where '''archive''' is the name of the AFFA job you want to move to the other disk.<br />
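The tar pipe preserves hard links because a single tar stream records the link relationships. A minimal self-contained check of this behaviour (using throwaway temp directories, not real archive paths):<br />

```shell
src=$(mktemp -d); dst=$(mktemp -d)
echo data > "$src/a"
ln "$src/a" "$src/b"                                # a and b share one inode
( cd "$src" && tar cpf - . ) | ( cd "$dst" && tar xpf - )
stat -c %i "$dst/a" "$dst/b"                        # identical inode numbers: the link survived
```

If the two inode numbers printed at the end differ, the copy has broken the hard links and will consume far more space than the source.<br />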
<br />
===== Setup Affa =====<br />
You want to run backups at 11:30, 15:30 and 19:30, and you want to keep the last 3 scheduled backups, 7 daily, 5 weekly, 12 monthly and 1 yearly backups.<br />
* log into the 'prodbox' and install the Affa packages as described above.<br />
* create/edit the conf file of the job: nano /etc/affa/JOBNAME.conf<br />
and set:<br />
remoteHostName=localhost<br />
TimeSchedule=1130<br />
TimeSchedule=1530<br />
TimeSchedule=1930<br />
scheduledKeep=3<br />
dailyKeep=7<br />
weeklyKeep=5<br />
monthlyKeep=12<br />
yearlyKeep=1<br />
RootDir=/mnt/affadevice<br />
Review the other properties and change them to your needs.<br />
* run the job manually<br />
affa --run prodbox<br />
<br />
<br />
===== Limitations =====<br />
With this kind of setup you cannot use the Affa rise feature, as it requires the backup archive to be located on the '''same filesystem''' as the server installation. The rise option uses hardlinks, which do not work across filesystems.<br />
<br />
===== Automount =====<br />
Having the backup archives permanently mounted is always a risk; this can be minimized by using the automount feature, so that the external filesystem is only mounted during a job run.<br />
<br />
In the NAS example set<br />
AutomountDevice=10.200.48.2:/mnt/affashare<br />
AutomountPoint=/mnt/affadevice<br />
and skip step 2.<br />
<br />
In the USB drive example set<br />
AutomountDevice=/dev/sdc1<br />
AutomountPoint=/mnt/affadevice<br />
and skip steps 5 to 8.<br />
<br />
The mount point will be automatically created, if it does not exist. <br><br />
To access the archive directory, you need to mount it manually.<br />
<br />
=== Restore ===<br />
==== Restore single files or directories ====<br />
'''Example 1:''' It's Tuesday January 6th 2009, when user 'briedlin' asks you to restore the messages of his mailbox 'orders' he has accidentally deleted on Monday.<br />
<br />
* You first must check what backup archives are available. The jobname of this server backup is 'primmail'. To get a listing of all archives run <br />
affa --list-archives primmail<br />
''(see the example listing in chapter [[Affa#Usage_and_command_line_options]])''<br />
* Choose the daily.0 archive, which was created Monday night.<br />
* Restore the mailbox 'orders' by running the rsync command ''(note the trailing slash!)'' on the Affa backup server:<br />
export RDIR=/home/e-smith/files/users/briedlin/Maildir/.orders/ ''# this variable is used to shorten the next command line''<br />
rsync -av /var/affa/primmail/daily.0/$RDIR 10.204.48.1:$RDIR<br />
<br />
If the servers are configured to use a different ssh port, e.g. 2222, then instead do:<br />
<br />
export RDIR=/home/e-smith/files/users/briedlin/Maildir/.orders/ ''# this variable is used to shorten the next command line''<br />
rsync -av -e 'ssh -p 2222' /var/affa/primmail/daily.0/$RDIR 10.204.48.1:$RDIR<br />
<br />
<br />
<br />
'''Example 2:''' A user has deleted the file orderform.pdf from ibay 'docs' on the server 10.204.48.1 and asks you to restore it.<br />
<br />
* You have searched and found the latest version of this file in weekly archive of job 'prodserv'.<br />
* To copy it back to the server 10.204.48.1 run on the Affa server<br />
export RFILE=/home/e-smith/files/ibays/docs/files/orderform.pdf ''# this variable is used to shorten the next command line''<br />
rsync -av /var/affa/prodserv/weekly.1/$RFILE 10.204.48.1:$RFILE<br />
<br />
* If the servers are configured to use a different ssh port, e.g. 2222, then instead do:<br />
export RFILE=/home/e-smith/files/ibays/docs/files/orderform.pdf ''# this variable is used to shorten the next command line''<br />
rsync -av -e 'ssh -p 2222' /var/affa/prodserv/weekly.1/$RFILE 10.204.48.1:$RFILE<br />
<br />
<br />
<br />
'''Example 3:''' Restoring a file with special characters in the path<br />
<br />
Do not use shell variables to shorten the command. It would complicate things more than it would help.<br />
Quote the source and destination path. In the destination path escape blank characters with a double backslash and brackets with a single backslash. On the Affa backup server do:<br />
rsync -av "/var/affa/fileshare/daily.4/home/e-smith/files/ibays/mechfiles/files/Valve Control (Design Gr)/VALVE LIST FOR ISSUED.xls" "192.168.1.7:/home/e-smith/files/ibays/mechfiles/files/Valve\\ Control\\ \(Design\\ Gr\)/VALVE\\ LIST\\ FOR\\ ISSUED.xls"<br />
<br />
If the servers are configured to use a different ssh port, e.g. 2222, then instead do:<br />
<br />
rsync -av -e 'ssh -p 2222' "/var/affa/fileshare/daily.4/home/e-smith/files/ibays/mechfiles/files/Valve Control (Design Gr)/VALVE LIST FOR ISSUED.xls" "192.168.1.7:/home/e-smith/files/ibays/mechfiles/files/Valve\\ Control\\ \(Design\\ Gr\)/VALVE\\ LIST\\ FOR\\ ISSUED.xls"<br />
<br />
==== Full restore ====<br />
Generally: <br />
'''affa --full-restore [optional settings] JOB [ARCHIVE]'''<br />
This rsyncs the data (files and directories) from the backup ARCHIVE back to the 'remoteHostName' defined in the configuration of the job JOB. <br />
If ARCHIVE is not given, the archive 'scheduled.0' is used as the default.<br />
The --full-restore reconstructs the server as it was at the time of the backup and takes the following IMPORTANT optional settings into consideration:<br />
<br />
With option [--preserve-newer=yes] files on the remote server with modification time newer than on the backup are not overwritten.<br />
<br />
With option [--delete=yes] all files on the remote server, which are not in the backup, are deleted.<br />
<br />
If the parameter “SMEServer=yes” is set, the signal-event pre-restore and signal-event post-upgrade will be used automatically.<br />
<br />
After the restore is done, the restored server reboots automatically. <br />
<br />
Example:<br />
You have backed up your production server 'prodsrv' as job 'prodbox'. To restore only the lost data from the latest backup, run<br />
affa --full-restore prodbox<br />
<br />
To restore exactly from the older archive daily.3 run <br />
 affa --full-restore --preserve-newer=no --delete=yes prodbox daily.3<br />
<br />
{{Warning box|A full restore with --preserve-newer=no and --delete=yes reconstructs the server as it was at the time of the backup. This means that all files created and all server configuration changes made after the backup will be lost! }}<br />
<br />
==== Moving a SME server installation to new hardware using the Affa rise feature ====<br />
Please see this Howto: [[Moving SME to new Hardware]]<br />
<br />
<br />
==== Restore from USB drive on new server ====<br />
<br />
This tip comes from http://forums.contribs.org/index.php?topic=42412.0<br />
<br />
Q) I have complete backups made by Affa stored on a USB hard drive connected to our Affa backup server. I need to restore an earlier monthly backup of our job "mailbackup" to a test server rather than back to the original system. The instructions explain how to rise the backup server to its current backup, but I want to restore a point from about a month ago, before some strange things happened, and I want to do it on a machine that is neither our backup server nor our production server. My goal is to do some testing.<br />
<br />
A) On your test server set up a job mailbackup identical to that on your backup server, but set the property RootDir to /var/affa and the property status to disabled. Connect the USB drive and copy the archive of the job mailbackup to /var/affa. Then run affa --rise mailbackup ARCHIVE<br />
<br />
Remember that Affa places job setup scripts into the archive directories. In your case it is mailbackup-setup.pl. Copy it to your testserver, change the RootDir and status properties and execute it.<br />
<br />
=== FAQ ===<br />
'''What files and directories are included by default?'''<br />
<br />
With SMEServer=no nothing at all.<br />
<br />
With SMEServer=yes the SME default backup list:<br />
/etc/e-smith/templates-custom<br />
/etc/e-smith/templates-user-custom<br />
/etc/group<br />
/etc/gshadow<br />
/etc/passwd<br />
/etc/samba/secrets.tdb<br />
/etc/samba/smbpasswd<br />
/etc/shadow<br />
/etc/smbpasswd<br />
/etc/ssh<br />
/etc/sudoers<br />
/home/e-smith<br />
/root<br />
<br />
{{Note box|The following example needs to be reworked for Affa 3.}}<br />
'''Can I exclusively backup image files from a specific directory?'''<br />
Yes. Assuming you want to backup all gif and jpg files from directory /home/e-smith/files/ibays/pictures/files use this configuration<br />
db affa setprop JOBNAME SMEServer no # do not include SME Server default directories and files<br />
db affa setprop JOBNAME Include[0] '/home/e-smith/files/ibays/pictures/files' # start searching here<br />
db affa setprop JOBNAME Include[1] '*/' # search the tree recursively<br />
db affa setprop JOBNAME Include[2] '*.gif' # copy files that match these patterns<br />
db affa setprop JOBNAME Include[3] '*.jpg' <br />
db affa setprop JOBNAME Exclude[0] '*' # exclude all others<br />
db affa setprop JOBNAME rsyncOptions '-m' # do not create empty folders<br />
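The Include/Exclude chain above maps onto rsync filter rules, so the same logic can be tried out locally with plain rsync before committing it to a job configuration (the paths below are made-up temp directories):<br />

```shell
src=$(mktemp -d); dst=$(mktemp -d)
mkdir -p "$src/pictures/files"
touch "$src/pictures/files/a.jpg" "$src/pictures/files/b.gif" "$src/pictures/files/c.txt"
# -m prunes empty dirs; the final --exclude='*' drops everything not matched above
rsync -a -m --include='*/' --include='*.gif' --include='*.jpg' --exclude='*' "$src/" "$dst/"
ls -R "$dst"            # only the .gif and .jpg files arrive
```

Rsync evaluates the include/exclude rules in order and the first match wins, which is why the catch-all exclude must come last.<br />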
<br />
<br />
'''How can I move a backup archive from my production server in a remote branch office to the Affa server using a USB disk? I want to avoid downloading the first full backup over the internet.'''<br />
Install Affa on your production server, connect and mount the USB disk. Setup a job to write the archive to the USB disk. After the job run, connect the USB disk to your Affa server and setup the job in the same way. Use the setup script from the archive directory. You only need to change the remoteHostName property. Now use the --move-archive option to move the archive to the local disk. If not already done, send the public key to your production server. You're done.<br />
<br />
<br />
'''How do I backup two SME servers behind a firewall?'''<br />
First you need to configure port forwarding for the ssh service on your firewall. Use a non-standard port, e.g. 2200->22, for your second server. Set up a job on your Affa server for each of your production servers. Set the job property sshPort=2200 for the second server.<br />
<br />
=== Uninstall ===<br />
This removes the installed Affa package, all configuration data and all backup archives.<br />
<br />
Run the following commands for each job:<br />
affa --revoke-key JOBNAME<br />
affa --delete-job JOBNAME<br />
Verify that all jobs have been deleted:<br />
affa --status<br />
Remove the Affa package:<br />
 yum remove smeserver-affa<br />
Cleaning up:<br />
rm -f /etc/cron.d/affa<br />
rm -rf /etc/affa<br />
rm -rf /var/affa /var/log/affa<br />
<br />
=== Troubleshooting ===<br />
==== Affa stops working after prodserver was rebuilt ==== <br />
'''Description:'''<br />
* Affa stopped working or is broken.<br />
* When affaserver tries to make an ssh connection to prodserver it fails.<br />
* In the /var/log/affa/''jobname''.log file you will see this message: "SSH connection to ''prodserver ip'' failed. Did you send the public key". <br />
* Resending the keys does not solve the problem. <br />
<br />
'''Solution:'''<br />
* This is probably because the key for prodserver stored on the affaserver no longer matches the current key of prodserver. <br />
* You need to remove the existing key so that the next ssh connection retrieves the key from prodserver again. <br />
* From the command line edit the /root/.ssh/known_hosts file and remove the line for prodserver. It starts with: ''prodserver ip'' ssh-rsa<br />
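Instead of hand-editing known_hosts, the stale entry can also be removed with ssh-keygen -R. The sketch below runs against a throwaway file via -f so it is safe to try anywhere; omit -f to act on /root/.ssh/known_hosts (the IP is the hypothetical prodserver address):<br />

```shell
kh=$(mktemp)
key=$(mktemp -u)
ssh-keygen -q -t rsa -N '' -f "$key"                # generate a throwaway host key
awk '{print "10.200.48.1", $1, $2}' "$key.pub" > "$kh"
ssh-keygen -R 10.200.48.1 -f "$kh"                  # remove the stale entry (backup kept as *.old)
```

After running this against the real known_hosts file, the next ssh connection will prompt to accept prodserver's new key.<br />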
<br />
=== Bug report ===<br />
Affa is listed in the [http://bugs.contribs.org/enter_bug.cgi?product=SME%20Contribs&component=smeserver-affa bugtracker contribs section].<br />
Please report all bugs, new feature requests and documentation issues there.<br />
<br />
<br />
{{#bugzilla:columns=id,product,version,status,summary |sort=id |order=desc |disablecache=1|component=smeserver-affa|noresultsmessage="No open bugs found."}}<br />
<br />
=== Additional information ===<br />
For details of performance, changelog etc see [[:Affa:Additional information]]<br />
<br />
=== References ===<br />
* Wikipedia article http://en.wikipedia.org/wiki/Rsync<br />
* Rsync homepage http://rsync.samba.org/<br />
* Wiki article [[Moving SME to new Hardware]]<br />
* Wiki article [[Backup with Affa and FreeDup]]<br />
* Wiki article [[Rsyncd setup on a windows computer for use with Affa backup]]<br />
* Wiki article [[Backup Windows Computers Using Affa]]<br />
* Wiki article [[Backup of ESXi Virtual Machines using Affa]]<br />
<br />
----<br />
[[Category: Contrib]]<br />
[[Category: Backup]]</div>
<hr />
<div>{{Languages|Affa}}<br />
{{Level|Advanced}}<br />
{{usefulnote}}<br />
<br />
<br />
=== Maintainer ===<br />
Maintainer(s) of Affa 3: Arnaud, stephdl<br><br />
Development of Affa2 for SME Server by the original maintainer has been discontinued.<br />
<br />
=== Version ===<br />
{{ #smeversion: affa}}<br />
{{ #smeversion: smeserver-affa}}<br />
<br />
<br />
=== Description ===<br />
The main purpose of the Affa package is to turn an SME Server into a dedicated backup box in a few minutes. Affa backs up as many SME servers as you like, or any other servers which have sshd running and rsync installed. Once configured, Affa runs reliably unattended and sends warning messages in case of any errors. <br />
<br />
All backup archives are full backups, as Affa makes use of the hardlink technique. Therefore a new full backup only needs disk space for the differences plus the filesystem overhead for the hardlinks and directories (which is typically 2-3%).<br />
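The space saving from hard links can be illustrated with standard tools: two "full backups" that share a hardlinked file consume the data blocks only once (the layout below is a made-up temp directory, not an Affa archive):<br />

```shell
d=$(mktemp -d)
mkdir "$d/full1" "$d/full2"
dd if=/dev/zero of="$d/full1/big" bs=1024 count=1024 2>/dev/null   # 1 MiB file in the first "backup"
ln "$d/full1/big" "$d/full2/big"    # second "backup" costs no extra data blocks
du -sk "$d"                         # roughly 1024K, not 2048K: du counts the shared inode once
```

This is why the `df` output on an Affa server grows only by the delta between runs, even though every archive directory looks like a complete copy.<br />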
<br />
Affa is based on the rsync program and supports the rsync --compress option. This allows you to run backups over the internet or VPN. A typical setup is one or more Affa backup servers placed in different locations, which backup the production server(s) over the VPN.<br />
<br />
A special feature is the rise option, which allows you to rise the backup server to your production server from a backup archive in case of a total loss of your production server. The rise is executed in an extremely short time, even with huge amounts of data. The rise feature uses hardlinks and therefore does not use up additional disk space.<br />
<br />
The rise feature can also be used to upgrade from a lower version of SME to a higher one. This also works from SME 8.1 to 9. You need two separate installations of SME: one is the actual running server and the other is an Affa backup server with a newer version of SME Server, where you execute the rise command. See more here: [[Moving SME to new Hardware]]<br />
<br />
A simple way to run a server with maximum uptime is to duplicate the hardware, e.g. to have two separate hardware boxes or virtual servers on separate hardware. One is the actual server and the other is a backup system that, in case of problems, can be raised to the actual server with the rise command in a very short time. This also allows you to upgrade the SME Server software with minimum downtime!<br />
<br />
Affa is a command line tool for system administrators and is intentionally designed without a GUI. Therefore it can be efficiently managed on the console and over slow internet connections.<br />
<br />
'''Note:''' This document also refers to Affa version 2, which is no longer maintained. Information on Affa 3 will be gradually added here once it has been tested more.<br />
<br />
==== Affa features at a glance ====<br />
* Affa is secure: All transfers and connections are made by using the ssh protocol with public/private key authentication<br />
* Periodically runs unattended full backups. Only differences to the previous full backup are transferred over the network<br />
* Using rsync with optional bandwidth limit and compression allows backups over the internet<br />
* Uses hardlink technique, i.e. physical disk space only needed for the differences between two full backups<br />
* Keeps a configurable number of scheduled, daily, weekly, monthly and yearly full backup archives<br />
* The archives are browseable. No need to unpack them first.<br />
* Archives can be mapped to a Samba share.<br />
* Backup jobs are started by the cron daemon<br />
* Interrupted jobs continue with the already transferred data when restarted<br />
* Backs up the default e-smith directories and files when the property SMEServer is set to yes<br />
* Additional directories and files can be included<br />
* Directories and files can be excluded from the backup<br />
* Non-SME server linux systems can be backed up by setting the SMEServer property to no and using an include list<br />
* In ESXi mode, running virtual machines can be backed up. See [[Backup of ESXi Virtual Machines using Affa]]<br />
* Configurable nice level for rsync processes on the backup and source server<br />
* Optional run of custom programs before and after a job run (e.g. running tape backup)<br />
* Checks the disk space left after a job run with warning levels strict, normal or risky<br />
* Extensive checking of failure conditions<br />
* Sends failure messages to a configurable list of email addresses<br />
* Sends a warning message, if the backup server runs out of disk space<br />
* Installs an optional watchdog on the source server in case the backup server fails (SME Server only)<br />
* Watchdog sends warning, if an expected backup did not run (SME Server only)<br />
* Watchdog sends a daily reminder message, if the error continues unchecked (SME Server only)<br />
* Option to display current status of all jobs showing times of last and next run, size and disk usage<br />
* Status can be mailed on a daily, weekly or monthly schedule<br />
* Option to display all existing archives of a job, showing date, number of files, size and bytes transferred from the source<br />
* Option to send (and revoke) the public key to the source server (SME Server and ESXi only)<br />
* Option to rise the backup server to a production server from a backup (SME Server only)<br />
* The rise feature does not physically move data and therefore is extremely fast and needs (almost) no extra disk space<br />
* Rise option can be run remotely as the NIC driver configuration of the backup server is preserved<br />
* Compares installed RPMs on source with backup server. Sends warning message, if not in sync<br />
* Undo rise option to restore the backup server<br />
* Configurable via a e-smith style db, with one record for each job and a default record for all jobs<br />
* Logs to /var/log/affa/JOB.log and /var/log/affa/affa.log with optional debug switch for higher verbosity<br />
* Log files are rotated weekly, with 5 logs kept<br />
<br />
<div style="text-align:left" class="mw-collapsible mw-collapsed" data-collapsetext="less &uarr;" data-expandtext="Obsolete Affa 2 instructions &darr;"><br />
<br />
=== Installation or Update of Affa 2 ===<br />
<br />
====Smecontribs for sme8====<br />
yum install --enablerepo=smecontribs smeserver-affa<br />
<br />
The latest test version can be found here :<br />
[http://www.reetspetit.com/smeserver/5/noarch/smeserver-affa-2.0.0-rc6.el5.sme.noarch.rpm smeserver-affa-2.0.0-rc6.el5.sme.noarch.rpm] Note this is NOT signed.<br />
<br />
Please post bug reports in the [http://bugs.contribs.org bug tracker]<br />
<br />
When you have installed Affa for the first time run the following command to initialize the Affa database<br />
<br />
affa --make-cronjobs<br />
and logout and re-login to the console to take the bash auto-completion (TAB key) in effect.<br />
====Obsolete way to install====<br />
{{Note box|SME7 is now EOL and has been moved to the obsolete tree}}<br />
SME7 version: '''smeserver-affa-2.0.0-rc4.noarch.rpm'''<br />
wget <nowiki>http://mirror.contribs.org/smeserver/contribs/michaelw/sme7/Affa2/smeserver-affa-2.0.0-rc4.noarch.rpm</nowiki><br />
wget <nowiki>http://mirror.contribs.org/smeserver/contribs/michaelw/sme7/Affa2/perl-Compress-Bzip2-2.09-1.2.el4.rf.i386.rpm</nowiki><br />
/usr/bin/yum --enablerepo=smecontribs localinstall \<br />
smeserver-affa-2.0.0-rc4.noarch.rpm \<br />
perl-Compress-Bzip2-2.09-1.2.el4.rf.i386.rpm<br />
<br />
SME8 version: SME8 is based on CentOS5, therefore use the perl-Compress-Bzip2 el5 rpm: <br />
wget <nowiki>http://mirror.contribs.org/releases/8/smecontribs/i386/repoview/smeserver-affa.html</nowiki><br />
For SME8 32bit:<br />
wget <nowiki>http://apt.sw.be/redhat/el5/en/i386/rpmforge/RPMS/perl-Compress-Bzip2-2.09-1.2.el5.rf.i386.rpm</nowiki><br />
/usr/bin/yum --enablerepo=smecontribs localinstall \<br />
smeserver-affa-2.0.0-rc4.noarch.rpm \<br />
perl-Compress-Bzip2-2.09-1.2.el5.rf.i386.rpm<br />
For SME8 64bit:<br />
wget http://apt.sw.be/redhat/el5/en/x86_64/rpmforge/RPMS/perl-Compress-Bzip2-2.09-1.2.el5.rf.x86_64.rpm<br />
/usr/bin/yum --enablerepo=smecontribs localinstall \<br />
smeserver-affa-2.0.0-rc4.noarch.rpm \<br />
perl-Compress-Bzip2-2.09-1.2.el5.rf.x86_64.rpm<br />
<br />
When you have installed Affa for the first time run the following command to initialize the Affa database<br />
affa --make-cronjobs<br />
and logout and re-login to the console to take the bash auto-completion (TAB key) in effect.<br />
<br />
==== Quick start example for Affa 2 ====<br />
You have a SME 7 production server with hostname 'prodbox' and IP 10.200.48.1.<br><br />
Set up a second SME 7 box as your backupserver with hostname 'affabox‘ and IP 10.200.48.2. <br />
<br />
<ol><li>log into the 'affabox' and install the packages as described above.<br />
</li><li>copy the config helper script sample<br />
cp /usr/lib/affa/jobconfig-sample.pl /root/prodbox-job.pl<br />
</li><li>edit /root/prodbox-job.pl and set<br />
my $jobname='prodbox';<br />
and<br />
 'remoteHostName'=>'10.200.48.1',<br />
optionally, if the remote server port is configured to e.g. 2222 then set<br />
'sshPort'=>2222,<br />
</li><li>save and run the 'prodbox-job.pl' file. This will write the configuration (this makes the database entries and sets up the cronjobs)<br />
/root/prodbox-job.pl<br />
</li><li>generate the DSA keys and send the public key to the 'prodbox'<br />
{{Note box|When initially doing this step, you will need to temporarily enable "Allow secure shell access using standard passwords" on the production server}}<br />
affa --send-key prodbox<br />
</li><li>run the job manually<br />
<br />
affa --run prodbox<br />
</li></ol><br />
<br />
==== Configuration ====<br />
The configuration is stored in an e-smith style database. Use the db command to configure Affa.<br />
The jobname is the record key with the type 'job'.<br>To setup a new job with the name 'prodbox' enter:<br />
<br />
db affa set prodbox job<br />
<br />
then set the properties<br />
<br />
db affa setprop prodbox remoteHostName 192.168.1.1<br />
db affa setprop prodbox TimeSchedule '0030,0730,1130,1330,1730,2030'<br />
db affa setprop prodbox Description 'My Production Server'<br />
db affa setprop prodbox status enabled<br />
and so on...<br />
<br />
Alternatively you can use a script as described above in the 'Quick start' chapter.<br />
<br />
To verify your work, type:<br />
<br />
db affa show prodbox<br />
<br />
Finally set up the cronjobs:<br />
affa --make-cronjobs<br />
<br />
</div><br />
<br />
<br />
=== Installation of Affa 3 ===<br />
The following instructions assume a fresh Affa 3 installation on a fresh dedicated SME server which serves as a dedicated backup server ('affabox'). No jobs are migrated from Affa 2. If you use an existing SME server, just skip the instructions for setting up a fresh SME box.<br />
<br />
Setup a dedicated machine with SME 8.1 or SME 9.0 from CDROM. Use the following example settings:<br />
<ol><li>Domain name: athome.xx (use your existing domain name)</li><br />
<li>Host name: affabox (must not match to existing host)</li><br />
<li>IP address: 192.168.0.10 (must not match to existing IP address)</li><br />
<li>Subnet: 255.255.255.0</li><br />
<li>role: Server only</li><br />
<li>Gateway: 192.168.0.1 (use your existing gateway)</li><br />
<li>DHCP: DEactivate</li><br />
<li>DNS server: 192.168.0.1 (use your existing DNS server)</li><br />
<li>activate changes: yes</li><br />
<li>The fresh server will then restart with the above settings.</li><br />
</ol><br />
<br />
To update your server, login as user root on the local console. Use the 'top' command to display running jobs and wait until 'yum' has stopped running.<br />
yum clean all<br />
yum update<br />
The fresh server will be updated.<br />
signal-event post-upgrade<br />
signal-event reboot<br />
The server will reboot. Afterwards it is ready for installation of Affa 3.<br />
<br />
Login as root on local or remote SSH console of ''affabox''.<br />
<br />
= For SME8 =<br />
<br />
To install issue the following command:<br />
yum install smeserver-affa --enablerepo=smecontribs<br />
= For SME9 =<br />
<br />
yum install smeserver-affa --enablerepo=smecontribs<br />
<headertabs/><br />
<br />
The server configuration again needs to be updated:<br />
signal-event post-upgrade<br />
signal-event reboot<br />
The server will reboot. Afterwards Affa 3 is ready for use.<br />
<br />
Now manually create the directory for the archive files:<br />
mkdir /var/affa<br />
<br />
{{Warning box| Note that on SME9, a new SSH [https://wiki.contribs.org/AutoBlock AutoBlock] feature interferes with the Affa requirement for regular SSH logins! You might have to disable this feature to allow Affa to operate correctly. }}<br />
<br />
=== Creation of backup jobs ===<br />
Job configurations of Affa 3 are no longer stored in an e-smith style database. They are stored in configuration files located in ''/etc/affa'' . Create your configuration files in this directory, e.g.<br />
touch /etc/affa/backup-smeserver.conf<br />
where ''backup-smeserver'' is your jobname, and add content as described below.<br />
<br />
==== Quick start example ====<br />
You have a SME production server with hostname 'smeserver' and IP 192.168.0.2.<br><br />
You have a second SME box as your backup server with hostname 'affabox' and IP 192.168.0.10. <br />
<br />
Login to your 'affabox' as root and edit ''/etc/affa/backup-smeserver.conf'' . Using e.g. editor ''nano'', create the following example job configuration file for jobname ''backup-smeserver'' :<br />
[backup-smeserver]<br />
remoteHostName=192.168.0.2<br />
SMEServer=yes<br />
Watchdog=yes<br />
RPMCheck=yes<br />
ConnectionCheckTimeout=120<br />
Debug=no<br />
Description=Backup of 192.168.0.2 smeserver.athome.xx<br />
DiskSpaceWarn=strict<br />
RootDir=/var/affa<br />
TimeSchedule=0630<br />
localNice=15<br />
remoteNice=15<br />
rsync--inplace=yes<br />
rsyncCompress=no<br />
rsyncTimeout=900<br />
scheduledKeep=1<br />
dailyKeep=7<br />
weeklyKeep=4<br />
monthlyKeep=12<br />
yearlyKeep=1<br />
status=enabled<br />
<br />
If you have a Let's Encrypt certificate on the production server, then you should also include<br />
Include=/etc/dehydrated<br />
<br />
Then save your job configuration file.<br />
<br />
Now check that your configuration is OK:<br />
affa --configcheck<br />
This should report no errors. Now create and send the key files to your production smeserver:<br />
affa --send-key backup-smeserver<br />
The following output should appear on the console:<br />
 Job backup-smeserver: Generating DSA keys...<br />
Successfully created DSA key pair.<br />
root@192.168.0.2's password: <Enter password of 192.168.0.2/smeserver and hit ENTER><br />
Public key sent to 192.168.0.2<br />
Now run your job manually, both for test purposes, as well as to create DSA keys:<br />
affa --run backup-smeserver<br />
The following output should appear on the console:<br />
The authenticity of host 'backup-smeserver (192.168.0.2)' can't be established.<br />
RSA key fingerprint is 3b..........65.<br />
 Are you sure you want to continue connecting (yes/no)? <yes><br />
Now your backup job should run for the first time. Depending on the volume of the files, this takes quite a while.<br />
<br />
Once the job is done, check that the archive is available:<br />
 affa --list-archives<br />
<br />
or run the job a second time:<br />
affa --run backup-smeserver<br />
Note that you won't be asked for the password again. Note as well, that this second run of the job ''backup-smeserver'' should run considerably faster, because not all files are copied again: just the difference to the previous run is backed up, the rest is stored as hard links to the existing file copies.<br />
<br />
Check that the second archive is available too:<br />
affa --list-archives<br />
<br />
From here you are able to work as with Affa 2. Modify your config file as required and described below. Automate the backup using the command ''affa --make-cronjobs'', see below.<br />
<br />
==== Job configuration properties ====<br />
'''Note 1:''' The default values shown in this table are the Affa program defaults and not to be confused with the preset values in the job configuration files, e.g. backup-smeserver.conf.<br />
<br />
'''Note 2:''' The complete documentation of the Affa 3 settings is available at [http://affa.sourceforge.net/AffaPdfMan.pdf] and should be taken into consideration!<br />
<br />
'''Note 3:''' Affa 3 for SME is a fork of [http://affa.sourceforge.net/ Affa 3 for CentOS]. It adds a few options back again which have been removed. Generally the configuration properties as described [http://wiki.nikoforge.org/Affa_Configuration/ here] do apply. Arnaud added the following functions for SME:<br />
<ul><br />
<li>The parameter “SMEServer”</li><br />
<li>The parameter and function “Watchdog”</li><br />
<li>The parameter and function “RPMCheck”</li><br />
<li>The functions “--rise” and “--undo-rise”</li><br />
</ul><br />
Consequently the list below should reproduce the list for the unforked Affa 3 version, plus adding the properties above. For details refer to [https://www.guedel.eu/index.php/informatique/sme-server-8/affa-v3 Arnaud's website].<br />
{| border="1" cellpadding="3" cellspacing=0<br />
|-<br />
| '''Property''' || '''Value''' || '''Multivalue''' || '''Default''' || '''Description''' <br />
|-<br />
| remoteHostName <br />
| FQHN or IP<br />
| no<br />
| <none><br />
| FQHN or IP of the source host (mandatory)<br />
|-<br />
| remoteUser<br />
| account<br />
| no<br />
| root<br />
| The user account to be used with all ssh logins. May be case sensitive, e.g. Administrator account on Windows<br />
|-<br />
| Description<br />
| text string<br />
| no<br />
| <none><br />
| Any text that describes the job<br />
|-<br />
| TimeSchedule <br />
| HHMM<br />
| yes<br />
| 2230<br />
| The time the job will run. Use exactly 4 digits, no colon, no point. '''Important:''' Using the proper format HHMM is essential. Badly formatted TimeSchedule will cause strange Perl errors. Multiple runs can be achieved by repeating the line for each run time.<br />
|-<br />
| status <br />
| enabled ''or'' disabled<br />
| no<br />
| enabled<br />
| When set to disabled, no cron entries will be made. You can still run a job manually. <br />
|-<br />
| Include<br />
| full path<br />
| yes<br />
| <none><br />
| File(s) or directory(s) to be included in the backup. <br />
|-<br />
| Exclude<br />
| full path<br />
| yes<br />
| <none><br />
| File(s) or directory(s) to be excluded from the backup. <br />
|-<br />
| RootDir<br />
| full path<br />
| no<br />
| /var/affa<br />
| Where to store the backup archives. Do not use /home/e-smith or /root, as these are included in the backup and the rise option would therefore not work! Recommended: /var/affa<br />
|-<br />
| scheduledKeep<br />
| integer >= 1<br />
| no<br />
| 1<br />
| how many of the scheduled backups to be kept<br />
|-<br />
| dailyKeep<br />
| integer >= 0<br />
| no<br />
| 7<br />
| how many of the daily backups to be kept<br />
|-<br />
| weeklyKeep<br />
| integer >= 0<br />
| no<br />
| 4<br />
| how many of the weekly backups to be kept<br />
|-<br />
| monthlyKeep<br />
| integer >= 0<br />
| no<br />
| 12<br />
| how many of the monthly backups to be kept<br />
|-<br />
| yearlyKeep<br />
| integer >= 0<br />
| no<br />
| 2<br />
| how many of the yearly backups to be kept<br />
|-<br />
| EmailAddress<br />
| name@domain.com<br />
| yes<br />
| root<br />
| comma separated list of mail addresses, where the messages should be sent to<br>'''Note:''' By default Affa only sends messages on errors, never on success (see property chattyOnSuccess).<br />
|-<br />
| RetryAttempts<br />
| integer >= 0<br />
| no<br />
| 4<br />
| When set to a value >0, Affa re-runs a failed job RetryAttempts times with a delay of RetryAfter seconds.<br />
|-<br />
| RetryAfter<br />
| seconds >= 0<br />
| no<br />
| 900<br />
| When set to a value >0, Affa waits RetryAfter seconds before re-running the job after an error. Only applicable with RetryAttempts>0.<br />
|-<br />
| RetryNotification<br />
| yes ''or'' no<br />
| no<br />
| no<br />
| When set to no and RetryAttempts is configured, Affa does not send an error message each time an attempt fails; an error message is only sent when the last attempt has failed.<br />
|-<br />
| NRPEtrigger<br />
| hours<br />
| no<br />
| 24<br />
| NRPE reports a job as critical when the last successful run is older than NRPEtrigger hours. To exclude the job from monitoring, set this to a value < 0.<br />
|-<br />
| SambaShare<br />
| yes ''or'' no<br />
| no<br />
| no<br />
| Access to the job archives via CIFS protocol.<br />
|-<br />
| SambaValidUser<br />
| local account<br />
| yes<br />
| affa<br />
| User who has permission to access the job archives via the CIFS protocol.<br />
|-<br />
| preJobCommand<br> preJobCommandRemote<br />
| path relative to /etc/affa/scripts/<br />
| yes<br />
| <none><br />
| Scripts to be executed before a job run. The job name and type (scheduled, daily etc.) are passed as arguments to the program. The preJobCommandRemote scripts are copied to the remote server and then executed there. The scripts are executed in alphabetical order. Use a numerical prefix if you need a specific order, e.g. 01scriptB, 02remotescriptA, 03scriptA. Use the included prescript-sample.pl and prescriptRemote-sample.pl as a starting point for your own scripts.<br>'''Note:''' If you use the parameter “SMEServer=yes”, you may delete the “Include” lines for the SME default directories in the jobconfig.ini: they will be backed up automatically.<br />
|-<br />
| postJobCommand<br> postJobCommandRemote<br />
| path relative to /etc/affa/scripts/<br />
| yes<br />
| <none><br />
| Scripts to be executed after a job run. The job name, the type (scheduled, daily etc.) and the exit code of the job run are passed as arguments to the program. The postJobCommandRemote scripts are copied to the remote server and then executed there. The scripts are executed in alphabetical order. Use a numerical prefix if you need a specific order, e.g. 01scriptB, 02remotescriptA, 03scriptA. Use the included postscript-sample.pl and postscriptRemote-sample.pl as a starting point for your own scripts.<br>'''Note:''' If you use the parameter “SMEServer=yes”, you may delete the “Include” lines for the SME default directories in the jobconfig.ini: they will be backed up automatically.<br />
|-<br />
| dedup<br />
| yes ''or'' no<br />
| no<br />
| no<br />
| The purpose of deduplication is to remove duplicate files to save backup space. When set to 'yes', file deduplication is run after the synchronization has completed. It looks for files that have identical content, user, group and permissions, and replaces duplicates by hardlinks. Deduplication scans the just-completed archive and the previous one, usually scheduled.0 and daily.0, or scheduled.0 and scheduled.1. Consider this scenario: a user has renamed directories or files. Rsync sees those as new ones and copies them. Deduplication finds the identical copies in the previous archive and replaces them by hardlinks. To use deduplication the Freedup program needs to be installed. Affa actually runs freedup -upg scheduled.0 <previous_archive>.<br />
|-<br />
| dedupKill<br />
| yes ''or'' no<br />
| no<br />
| no<br />
| When set to 'no' the job ignores affa --kill or affa --killall while deduplication is running. This is useful in the context of killAt, which is typically used to stop bandwidth utilisation.<br />
|-<br />
| sshPort<br />
| service port <br />
| no<br />
| 22<br />
| When sshd on the remote host listens on a non-standard port, set the port here.<br />
|-<br />
| ConnectionCheckTimeout<br />
| seconds<br />
| no<br />
| 120<br />
| Affa checks the ssh connection before the rsync process is started and exits with an error after the configured time if the host did not respond.<br />
|- <br />
| BandwidthLimit<br />
| integer>=0 kilobytes per second<br />
| no<br />
| 0<br />
| Limits the data transfer rate. A value of zero specifies no limit.<br />
|- <br />
| rsyncTimeout<br />
| seconds<br />
| no<br />
| 900<br />
| Rsync exits after the configured time if no data was transferred. This avoids infinitely hanging in case of a network error.<br />
|- <br />
| rsyncCompress<br />
| yes ''or'' no <br />
| no<br />
| yes<br />
| Compress the transferred data. May be useful with slow internet connections. Increases CPU load on remote and backup host.<br />
|- <br />
| rsync--inplace<br />
| yes ''or'' no<br />
| no<br />
| yes <br />
| Set to no if the rsync version on the remote host does not support this option.<br />
|-<br />
| rsync--modify-window<br />
| integer >= 0 <br />
| no<br />
| 0<br />
| When comparing two timestamps, rsync treats the timestamps as being equal if they differ by no more than the modify-window value. This is normally 0 for an exact match. A value > 0 is useful if you can't get the clocks of the remote host and the Affa server in sync.<br />
|-<br />
| rsyncOptions<br />
| string<br />
| no<br />
| <none><br />
| Additional option string to be passed to rsync<br />
|-<br />
| localNice <br />
| -19...+19<br />
| no<br />
| 0<br />
| run rsync local process niced. <br />
|-<br />
| remoteNice<br />
| -19...+19<br />
| no<br />
| 0<br />
| run rsync process on source niced. <br />
|-<br />
| killAt<br />
| HHMM<br />
| no<br />
| <none><br />
| The time at which a job will be killed if it was still running. You can use it for example to kill jobs that are running over the internet early in the morning so that your users have the full bandwidth available during office hours. <br />
|-<br />
| resumeKilledAt<br />
| HHMM<br />
| no<br />
| <none><br />
| The time at which a killed job will be resumed. This allows you to start earlier in the evening than the scheduled time.<br />
|-<br />
| chattyOnSuccess<br />
| integer >= 0<br />
| no<br />
| 0 <br />
| When set to a value>0, Affa sends a message on a successfully completed job run and decrements the chattyOnSuccess value. When the value has reached zero, Affa falls back to the default and only sends messages on errors.<br />
|-<br />
| AutomountDevice<br>AutomountPoint<br />
| full path<br />
| no<br />
| <none><br />
| Device and mountpoint of backup device (e.g. USB disk). Device is automounted before a job starts and unmounted after job completion. With both properties empty no automount is done.<br />
|-<br />
| AutomountOptions<br />
| string<br />
| no<br />
| <none><br />
| An option string passed to the mount command.<br />
|-<br />
| AutoUnmount <br />
| yes ''or'' no<br />
| no<br />
| yes<br />
| When set to 'no' the automounted device stays mounted after the Affa run.<br />
|-<br />
| Debug <br />
| yes ''or'' no<br />
| no<br />
| no<br />
| Set to yes to increase log verbosity.<br />
|-<br />
| remoteRsyncBinary <br />
| full path<br />
| no<br />
| /usr/bin/rsync<br />
| If the rsync program on the remote server is located in a non-standard location, set it here.<br />
|-<br />
| remoteNiceBinary <br />
| full path<br />
| no<br />
| /bin/nice<br />
| If the nice program on the remote server is located in a non-standard location, set it here.<br />
|-<br />
| localRsyncBinary <br />
| full path<br />
| no<br />
| /usr/bin/rsync<br />
| If the local rsync program is located in a non-standard location, set it here.<br />
|-<br />
| localNiceBinary <br />
| full path<br />
| no<br />
| /bin/nice<br />
| If the local nice program is located in a non-standard location, set it here.<br />
|-<br />
| RemoteAuthorizedKeysFile <br />
| path relative to remote user's home or full path<br />
| no<br />
| .ssh/authorized_keys2<br />
| If the remote host stores the authorized keys file in a non-standard location, set it here.<br />
|-<br />
| rsyncdMode<br />
| yes ''or'' no<br />
| no<br />
| no<br />
| Set to yes to connect to the rsync daemon on the remote host (instead of running rsync over ssh).<br />
|-<br />
| rsyncdModule<br />
| string<br />
| no<br />
| AFFA<br />
| The rsyncd module name (only applicable with rsyncdMode=yes).<br />
|-<br />
| rsyncdUser<br />
| string<br />
| no<br />
| affa<br />
| The username for authentication to the rsync daemon (only applicable with rsyncdMode=yes).<br />
|-<br />
| rsyncdPassword<br />
| string<br />
| no<br />
| <none><br />
| The password for authentication to the rsync daemon (only applicable with rsyncdMode=yes).<br />
<br />
|-<br />
| globalStatus<br />
| enabled ''or'' disabled ''or'' jobs<br />
| no<br />
| jobs<br />
| Set to enabled or disabled to override the status settings in all job sections. When set to the value jobs, the status settings in the job sections are effective.<br>'''Note:''' This property is allowed only in the [GlobalAffaConfig] section. <br />
|-<br />
| sendStatus<br />
| daily ''or'' weekly ''or'' monthly ''or'' never<br />
| no<br />
| weekly<br />
| Defines how frequently the status will be sent.<br>'''Note:''' This property is allowed only in the [GlobalAffaConfig] section. <br />
|-<br />
| SMEServer<br />
| yes ''or'' no<br />
| no<br />
| no<br />
| When set to yes, the default e-smith directories are automatically included and the property RPMCheck=yes can be used.<br>'''Note:''' this property is not implemented in Affa 3 for CentOS. It has been specifically added to the fork for SME.<br />
|-<br />
| RPMCheck<br />
| yes ''or'' no<br />
| no<br />
| no<br />
| Only applicable to jobs that back up a SME server. Compares the installed packages of the source host with those of this Affa backup host and sends a message with a diff list if they are not in sync. This check is useful if you want to have the option to rise the backup server to a production server from a backup.<br>'''Note:''' this property is not implemented in Affa 3 for CentOS. It has been specifically added to the fork for SME.<br />
|-<br />
| Watchdog<br />
| yes ''or'' no<br />
| no<br />
| yes<br />
| Only applicable to jobs that back up a SME server. When a job is started, Affa installs a watchdog script on the source in /etc/cron.d/, which sends a warning message if the next scheduled job (taken from the TimeSchedule property + 10 minutes) did not run. This guarantees that you will be notified even in case of an Affa server outage. The watchdog script sends a daily reminder message if the error continues. The next job run replaces the watchdog script with a new trigger time.<br>'''Note:''' this property is not implemented in Affa 3 for CentOS. It has been specifically added to the fork for SME.<br />
|}<br />
<br />
==== Default configuration properties ====<br />
For all 'name=value' properties, defaults can be set in the [GlobalAffaConfig] section; they are used in every job configuration where the corresponding property is omitted. This is useful when you set up many similar jobs. <br />
<br />
Example: You want to set the property 'localNice' to 19 for all jobs. Then add the following section in one of your configuration files:<br />
[GlobalAffaConfig]<br />
localNice=19<br />
Consequently you don't need to set this property for your individual jobs. Properties set in the job record override the defaults.<br />
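As a concrete sketch of this precedence (job names, addresses and values below are examples only, and the file is written to a temporary path rather than under /etc/affa/), a configuration combining a global default with a per-job override could look like this:<br />

```shell
#!/bin/sh
# Sketch: a [GlobalAffaConfig] default plus a per-job override.
# Example values only; on a real server the file would live in /etc/affa/.
set -e
conf=$(mktemp)
cat > "$conf" <<'EOF'
[GlobalAffaConfig]
localNice=19
EmailAddress=admin@example.com

[job-inherits-default]
remoteHostName=192.0.2.10
TimeSchedule=0230

[job-overrides-default]
remoteHostName=192.0.2.11
TimeSchedule=0300
localNice=0
EOF
# job-inherits-default gets localNice=19 from [GlobalAffaConfig];
# job-overrides-default sets its own localNice=0, which wins.
grep -c '^localNice=' "$conf"
rm -f "$conf"
```

The grep simply confirms the property occurs twice in the file: once as the global default and once as the job-level override.<br />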
<br />
==== Properties for global Settings ====<br />
The following special properties are only applicable to the [GlobalAffaConfig] section:<br />
<ul><br />
<li>sendStatus</li><br />
<li>globalStatus</li><br />
</ul><br />
<br />
All jobs can be disabled for execution through cronjobs by setting 'globalStatus' to 'disabled'.<br />
[GlobalAffaConfig]<br />
globalStatus=disabled<br />
<br />
To re-enable runs, either delete the above configuration line, or set:<br />
globalStatus=jobs<br />
which is the default value.<br />
<br />
You can also set the value to<br />
globalStatus=enabled<br />
which enforces the job execution through cronjobs and overrides the status values specified in the individual job sections.<br />
<br />
=== Usage and command line options ===<br />
{{Note box|Options can be abbreviated to uniqueness, e.g. --mak is equal to --make-cronjobs}}<br />
'''affa --run JOB'''<br />
Starts a job run. Usually done by the cronjob. <br />
<br />
<br />
'''affa --make-cronjobs'''<br />
Configures the cronjobs as scheduled in the job records. Run this command to make changes to time-related properties (TimeSchedule, killAt and resumeKilledAt) effective immediately. Every 15 minutes Affa checks whether a configuration file is newer than the cronjob file and runs make-cronjobs automatically if this is the case, so alternatively you can just wait at most 15 minutes for updated job configurations to become effective.<br />
<br />
'''affa --configcheck'''<br />
Checks the syntax and values in all configuration files found in /etc/affa/. Run this command after modifying the configuration. Lower/Upper case errors in property names are corrected automatically.<br />
<br />
'''affa --send-key [JOB JOB ...]'''<br />
This first generates the DSA key on the Affa server, if not already done. Then the public key is sent to the host 'remoteHostName' as defined in the section of each given job JOB, and the job-specific ssh known-hosts entry is generated.<br />
{{Note box|When initially doing this step, you will need to temporarily enable "Allow secure shell access using standard passwords" on the source server.}}<br />
<br />
'''affa --check-connections [JOB JOB ...]'''<br />
Checks the ssh login and if applicable rsyncd auth for all jobs given as arguments. Without any arguments all jobs are checked.<br />
<br />
'''affa --full-restore [--preserve-newer=no] [--delete=yes] JOB [ARCHIVE]'''<br />
Does a full restore on the remote server of all backed-up files and directories from the backup ARCHIVE. If ARCHIVE is not given, the archive 'scheduled.0' is used as the default. The full restore reconstructs the server as it was at the time of the backup. After the restore the source host reboots.<br />
<br />
With the default --preserve-newer=yes, files on the remote server with a modification time newer than in the backup are not overwritten; pass --preserve-newer=no to overwrite them as well.<br />
<br />
With option --delete=yes all files on the remote server, which are not in the backup, are deleted.<br />
<br />
If the parameter “SMEServer=yes” is set, the signal-event pre-restore and signal-event post-upgrade will be used automatically.<br />
<br />
'''affa --list-archives JOB'''<br />
Displays a table of all present archives of job JOB with date, number of files, size and bytes received. While the column buTime shows the actual rsync time, the column ddTime shows the length of the subsequent deduplication run. ddYld is the gained deduplication yield in bytes.<br />
<br />
<small><pre><br />
Affa version 3.1.0-0 on affa-2.mydomain.com<br />
+------------------------------------------------------------------------------+<br />
| Job: fshare-ak |<br />
| Description: Fileserver AK |<br />
| Directory: /var/affa/fshare-ak/ |<br />
| Hostname: 10.204.104.4 |<br />
| Email: sysadmin@mydomain.com |<br />
+-----+----------------------+--------+--------+-------+-------+-------+-------+<br />
| Run | Completion date | buTime | ddTime | ddYld | Files | Size | Recvd |<br />
+-----+----------------------+--------+--------+-------+-------+-------+-------+<br />
| Y 0 | Wed 2010-06-23 20:26 | 11m53s | - | - | 412k | 143G | 470M |<br />
+-----+----------------------+--------+--------+-------+-------+-------+-------+<br />
| M11 | Sun 2010-08-29 20:22 | 7m50s | - | - | 417k | 153G | 14M |<br />
| M10 | Sun 2010-09-26 20:23 | 8m57s | - | - | 430k | 156G | 14M |<br />
| M 9 | Sun 2010-10-31 20:25 | 10m05s | - | - | 448k | 161G | 15M |<br />
| M 8 | Sun 2010-11-28 20:31 | 16m42s | - | - | 463k | 167G | 17M |<br />
| M 7 | Sun 2011-01-02 20:48 | 33m07s | - | - | 486k | 173G | 1.1G |<br />
| M 6 | Sun 2011-01-30 20:33 | 18m54s | - | - | 493k | 176G | 24M |<br />
| M 5 | Sun 2011-02-27 20:28 | 13m31s | - | - | 490k | 176G | 19M |<br />
| M 4 | Sun 2011-03-27 20:28 | 13m08s | - | - | 491k | 175G | 17M |<br />
| M 3 | Sun 2011-05-01 20:30 | 15m41s | - | - | 493k | 179G | 18M |<br />
| M 2 | Sun 2011-05-29 20:28 | 13m40s | - | - | 494k | 182G | 31M |<br />
| M 1 | Sun 2011-06-19 20:26 | 11m20s | - | - | 493k | 183G | 17M |<br />
| M 0 | Sun 2011-07-03 20:28 | 13m46s | - | - | 496k | 183G | 18M |<br />
+-----+----------------------+--------+--------+-------+-------+-------+-------+<br />
| W 3 | Sun 2011-07-10 20:28 | 13m38s | - | - | 479k | 180G | 16M |<br />
| W 2 | Mon 2011-07-11 20:35 | 20m27s | - | - | 479k | 180G | 301M |<br />
| W 1 | Fri 2011-07-15 20:30 | 15m53s | - | - | 480k | 180G | 62M |<br />
| W 0 | Sun 2011-07-24 19:39 | 9m25s | - | - | 482k | 181G | 16M |<br />
+-----+----------------------+--------+--------+-------+-------+-------+-------+<br />
| D 6 | Thu 2011-07-28 19:42 | 12m22s | - | - | 483k | 182G | 176M |<br />
| D 5 | Fri 2011-07-29 23:29 | 11m10s | 5h33m | 45G | 483k | 182G | 16M |<br />
| D 4 | Sat 2011-07-30 19:53 | 23m26s | 2h30m | 8.3G | 483k | 182G | 17M |<br />
| D 3 | Sun 2011-07-31 20:07 | 37m31s | 4m47s | 8.3G | 483k | 182G | 17M |<br />
| D 2 | Mon 2011-08-01 20:44 | 1h14m | 7h50m | 8.5G | 484k | 182G | 630M |<br />
| D 1 | Tue 2011-08-02 20:02 | 32m28s | 12h20m | 8.3G | 484k | 182G | 74M |<br />
| D 0 | Wed 2011-08-03 19:58 | 28m46s | 11h01m | 8.5G | 484k | 182G | 214M |<br />
+-----+----------------------+--------+--------+-------+-------+-------+-------+<br />
| S 0 | Thu 2011-08-04 20:00 | 30m28s | 11h52m | 8.5G | 484k | 182G | 203M |<br />
+-----+----------------------+--------+--------+-------+-------+-------+-------+<br />
</pre></small><br />
<br />
'''affa --list-archives [--csv] JOB'''<br />
With --csv, the output is in a machine-readable semicolon-separated format.<br />
<small><pre><br />
Archive:Count;Date;Files;Size;RootDirFilesystemAvail;RootDirFilesystemUsed;valid;TotalBytesReceived;ExecutionTime;DedupTotalFiles;DedupReplacedFiles;DedupSavedBytes;DedupExectime;DedupDate<br />
monthly;00000;201509270631;393237;37252509103;166889260;47876936;yes;1327370;1160638358;99;;;;;<br />
weekly;00003;201510040531;390273;37042612135;164566692;50199504;yes;1117726;45430532;59;;;;;<br />
weekly;00002;201510110531;395553;37299589800;162612204;52153992;yes;1117092;92369237;64;;;;;<br />
weekly;00001;201510180531;403831;37707599172;161855844;52910352;yes;1317083;148317764;64;;;;;<br />
weekly;00000;201510250531;406509;38146200127;161041156;53725040;yes;1104759;64083948;63;;;;;<br />
daily;00006;201510310531;415663;39248862982;162979176;51787020;yes;1958110;198060283;66;;;;;<br />
daily;00005;201511010531;411014;38563389171;163061136;51705060;yes;1156506;48903675;75;;;;;<br />
daily;00004;201511020531;408910;38612857040;162359124;52407072;yes;1007434;210812387;75;;;;;<br />
daily;00003;201511030531;410615;37815861577;162026060;52740136;yes;1524069;162697515;67;;;;;<br />
daily;00002;201511040531;414784;38131734213;161990024;52776172;yes;1571657;134250735;63;;;;;<br />
daily;00001;201511050531;420780;38433895988;161906580;52859616;yes;2021507;149388808;103;;;;;<br />
daily;00000;201511060531;425920;38677614350;161778812;52987384;yes;1987971;166510621;71;;;;;<br />
scheduled;00000;201511070531;425663;38815712018;161612600;53153596;yes;2003540;139603231;68;;;;;<br />
</pre></small><br />
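Because the field order is fixed, the --csv output is easy to post-process with standard tools. This sketch pipes one sample line from the listing above through awk to extract the archive name, date and file count (field positions taken from the header line shown):<br />

```shell
#!/bin/sh
# Sketch: post-processing one line of 'affa --list-archives --csv' output.
# The sample line is copied from the listing above.
printf '%s\n' 'monthly;00000;201509270631;393237;37252509103;166889260;47876936;yes;1327370;1160638358;99;;;;;' |
awk -F';' '{ printf "%s.%d  date=%s  files=%s\n", $1, $2 + 0, $3, $4 }'
# → monthly.0  date=201509270631  files=393237
```

The same pattern works for the --status and --disk-usage CSV variants; only the field positions differ.<br />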
<br />
'''affa --status [--csv]'''<br />
Displays a table of all configured jobs with enabled status, time of last and next run, size of the most recent archive, execution time of the last run, and the number of scheduled (S), daily (D), weekly (W), monthly (M) and yearly (Y) archives. If 'Last' shows 'failed', the job did not run in the last 24h; for disabled jobs 'Last' therefore always shows 'failed' after 24h. To see the date and time of the last run of those jobs, use the --list-archives option. The column 'Next' shows the time when the next run will be started, if 'Enabled' is 'yes'. The column 'ddYld' shows the deduplication yield in bytes.<br />
<br />
<small><pre><br />
Affa version 3.1.0-0 on affa-2.mydomain.com<br />
+--------------+-----+-------+--------+-------+-------+-------+----------------+<br />
| Job | ENA | Last | Time | Next | Size | ddYld | N of S,D,W,M,Y |<br />
+--------------+-----+-------+--------+-------+-------+-------+----------------+<br />
| ads-ak | yes | 20:20 | 0m29s | 20:20 | 28M | - | 1, 7, 2, 0, 0 |<br />
| ak-user-ma.. | yes | 22:31 | 1m51s | 22:30 | 170M | - | 3, 7, 4, 6, 0 |<br />
| azubi-1 | yes | 03:39 | 3h24m | 00:15 | 44G | 1.2G | 1, 7, 4,10, 1 |<br />
| dcpant | yes | 03:06 | 1m03s | 03:05 | 1.3G | - | 1, 7, 4,10, 1 |<br />
| eshare | yes | 21:06 | 21m01s | 20:45 | 24G | 178M | 1, 7, 4,10, 1 |<br />
| etherpad | yes | 20:40 | 0m13s | 20:40 | 3.1M | - | 1, 7, 4, 2, 0 |<br />
| fazubi | yes | 08:16 | 11h16m | 21:00 | 132G | 3.3G | 1, 7, 4,10, 1 |<br />
| fschare-rh | yes | de-duplicating (pid 9719) | 1, 7, 4,10, 1 |<br />
| fsh-02 | yes | 07:53 | 12h23m | 19:30 | 182G | 8.5G | 1, 7, 4,10, 1 |<br />
| fshare-ak2 | yes | 00:30 | 0m26s | 00:30 | 415M | 544k | 1, 7, 4,10, 1 |<br />
| helpdesk | yes | 21:27 | 2m16s | 21:25 | 895M | 138k | 1, 7, 4,10, 0 |<br />
| it-share | yes | running rsync (pid 9744) | 1, 7, 1, 0, 0 |<br />
| lightroom | yes | waiting (pid 9528) | 1, 7, 4, 2, 0 |<br />
| localhost | yes | 02:15 | 0m08s | 02:15 | 395k | - | 1, 7, 2, 0, 0 |<br />
| mediawiki | yes | 16:10 | 0m06s | 19:40 | 1.1G | 6.7M | 3, 7, 4,10, 1 |<br />
| mshare | yes | 00:48 | 1h33m | 23:15 | 18G | 2.4G | 1, 7, 4,10, 1 |<br />
| wshare | yes | 00:34 | 1h49m | 22:45 | 23G | 484M | 1, 7, 4,10, 1 |<br />
+--------------+-----+-------+--------+-------+-------+-------+----------------+<br />
2 disabled jobs not listed. Use --all to display.<br />
</pre></small><br />
With --csv, the output is in a machine-readable semicolon-separated format.<br />
<br />
'''affa --show-config-pathes [--csv] [JOB JOB ...]'''<br />
Prints the full path of the configuration file in which the section of each given job JOB is defined.<br />
<br />
'''affa --show-default-config'''<br />
Prints a list of all allowed property names with their default values. These values are used when a property is omitted in the job sections.<br />
<br />
'''affa --show-schedule [--all]'''<br />
Prints a 'graphical' timetable for all enabled jobs. The resolution is 30 minutes. An 'S' character marks the scheduled start times. The duration of the job runs are marked with '=' characters.<br />
<small><pre><br />
Affa version 3.1.0-0 on affa-2.mydomain.com<br />
TIME 12:00 16:00 20:00 0:00 4:00 8:00<br />
fsh-02 -------- -------S ~~~~~~~~ ~~~~~~~~ ~~~~~~~~ --------<br />
ads-ak -------- -------- S------- -------- -------- --------<br />
etherpad -------- -------- -S------ -------- -------- --------<br />
eshare -------- -------- -S~----- -------- -------- --------<br />
fazubi -------- -------- --S~~~~~ ~~~~~~~~ ~~~~~~~~ ~-------<br />
helpdesk -------- -------- --S----- -------- -------- --------<br />
it-share -------- -------- ---S=~~~ ~~~~~~~~ ~~~~~~~- -------- busy<br />
wshare -------- -------- -----S~~ ~~------ -------- --------<br />
ak-user-management -------- -------- -----S-- -------- -------- --------<br />
fschare-rh -------- -------- -----S=- -------- --~----- -------- busy<br />
mshare -------- -------- ------S= ~~------ -------- --------<br />
azubi-1 -------- -------- -------- S=~~~~~~ -------- --------<br />
fshare-ak2 -------- -------- -------- -S------ -------- --------<br />
mediawiki -------- -------- -------- ---S---- -------- ----S---<br />
localhost -------- -------- -------- ----S--- -------- --------<br />
dcpant -------- -------- -------- ------S- -------- --------<br />
lightroom -------- -------- -------- -------- -------- S------- busy<br />
Symbols: S=scheduled K=kill R=resume '='=rsync '~'=dedup<br />
2 disabled jobs not listed. Use --all to display.<br />
</pre></small><br />
<br />
'''affa --log-tail [JOB]'''<br />
Displays the tail of the logfile of job JOB with live update. This command is identical to tail -n 50 -f /path/to/logfile. Without the JOB argument the global logfile is shown.<br />
<br />
'''affa --send-status'''<br />
Sends the status table, the disk-usage and the archive list of all jobs to the email addresses configured in the [GlobalAffaConfig] section.<br />
<br />
'''affa --disk-usage'''<br />
Shows the current disk usage of all root dir filesystems<br />
<small><pre><br />
Affa version 3.1.0-0 on affa-2.mydomain.de<br />
+------+--------+--------+----------------------------------------------------+<br />
| Use% | Used | Avail | Root Dir |<br />
+------+--------+--------+----------------------------------------------------+<br />
| 71% | 938GB | 365GB | /var/affa |<br />
| 20% | 194GB | 759GB | /mnt/affadev |<br />
+------+--------+--------+----------------------------------------------------+<br />
</pre></small><br />
With --csv, the output is printed in a machine-readable semicolon-separated format.<br />
<br />
'''affa --cleanup JOB'''<br />
After you have lowered a keep value, e.g. scheduledKeep, archives with higher indices will no longer be shifted and would otherwise exist forever. This option finds these archives and deletes them after confirmation.<br />
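The orphaning is a side effect of the index shift. The toy rotation below (plain directories under a temporary path, not Affa's actual implementation) runs one rotation with scheduledKeep lowered to 2; a scheduled.2 directory left over from an earlier, higher keep value is never shifted again and survives as an orphan:<br />

```shell
#!/bin/sh
# Toy sketch of index-shift rotation with scheduledKeep=2.
# Not Affa's real code; it only illustrates why orphans appear.
set -e
root=$(mktemp -d)
mkdir "$root/scheduled.0" "$root/scheduled.1" "$root/scheduled.2"  # .2 is a leftover

keep=2
rm -rf "$root/scheduled.$((keep - 1))"      # drop the oldest index still kept
i=$((keep - 2))
while [ "$i" -ge 0 ]; do                    # shift the remaining archives up
    mv "$root/scheduled.$i" "$root/scheduled.$((i + 1))"
    i=$((i - 1))
done
mkdir "$root/scheduled.0"                   # slot for the fresh backup

ls "$root"    # scheduled.2 is still there: it was never part of the shift
rm -rf "$root"
```

Running --cleanup is the equivalent of spotting and deleting that untouched scheduled.2.<br />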
<br />
'''affa --rename-job JOB NEWNAME'''<br />
Renames the job JOB to NEWNAME including the section name and archive directories.<br />
<br />
'''affa --move-archive JOB NEWROOTDIR'''<br />
Moves the archive directory of job JOB to the rootdir NEWROOTDIR and adjusts the value of property RootDir. NEWROOTDIR must be a full path starting with a slash. As moving across filesystems (e.g. from an external USB drive to the local disk) is not possible, Affa uses a copy command in this case and deletes the source directory after that. Depending on the archive size, copying across filesystems can take a long time.<br />
<br />
'''affa --delete-job [--revoke-key] JOB'''<br />
Irreversibly deletes all archives of job JOB and sets the status property to disabled. With the --revoke-key option, the public key on the remote server is deleted as well.<br />
<br />
'''affa --revoke-key JOB'''<br />
Deletes the public dsa key on the remote server.<br />
<br />
'''affa --kill JOB'''<br />
Terminates the running job JOB and all its child processes (rsync processes).<br />
<br />
'''affa --killall'''<br />
Terminates all running jobs.<br />
<br />
'''affa --mailtest JOB'''<br />
Sends a test email to the email addresses configured in the JOB section. Use this to verify that your mail system is working.<br />
{{Note box|By default Affa only sends messages on errors, never on success (see property chattyOnSuccess).}}<br />
<br />
'''affa --nrpe [JOB JOB ...]'''<br />
Checks for jobs that have not run in the last NRPEtrigger hours and reports them as failed. A Nagios/ICINGA NRPE-compliant message is printed and the exit status returned.<br />
<br />
'''affa --version'''<br />
Displays the Affa version number and checks Samba and Freedup installation.<br />
<br />
'''affa --warranty'''<br />
Displays the disclaimer of liability.<br />
<br />
'''affa --license'''<br />
Displays the license Affa is released under.<br />
<br />
'''affa --help'''<br />
Displays a short help.<br />
<br />
'''affa --debug'''<br />
Enables verbose logging. Overrides job and global configurations.<br />
<br />
==== SME Specific usage and command line options ====<br />
<br />
These commands are implemented in the fork of the Affa 3 project for use with SME. They are not part of the Affa 3 for CentOS project.<br />
<br />
'''affa --rise [--all] JOB [ARCHIVE]'''<br />
Runs a full restore on the Affa server <b>(!!!)</b> of all standard files and directories from the backup ARCHIVE of job JOB. In other words: after completion, the Affa box reboots as a clone of the source server. Ensure that the source server has been powered off before you reboot the Affa box, otherwise the network interface will not come up. This is important when you run --rise remotely. The --rise feature only works with SME servers and should only be used on dedicated backup servers.<br />
<br />
With the option --all, all files and directories of the archive as defined by the Include properties are restored. Files or directories with the same name on the Affa server will be overwritten and cannot be restored by an undo-rise. This should not be an issue on a dedicated Affa server which does not hold any other data. After a possible undo-rise, those additionally restored data must be removed manually.<br />
<br />
Please note that the rise process backs up the Affa server itself before doing the restore from the archive. This backup is used by a possible undo-rise run to restore the Affa server. Only the standard files and directories are backed up. Data in non-standard locations (like /opt) is untouched and will still exist after the rise run '''if it doesn't get overwritten by data from the backup''' (i.e. if the backup contains /opt too, for example).<br />
<br />
{{Note box|The above command is not implemented in Affa 3 for CentOS. It has been specifically added to the fork for SME.}}<br />
<br />
'''affa --undo-rise'''<br />
This feature reverts a risen Affa box to a backup server. After a reboot, all configured jobs '''based on standard files and directories''' will work again.<br />
<br />
{{Note box|The above command is not implemented in Affa 3 for CentOS. It has been specifically added to the fork for SME.}}<br />
<br />
=== Example setups ===<br />
==== Dedicated backup server ====<br />
* Setup a dedicated server and install Affa 3 for SME.<br />
* Setup a job for every server you want to backup.<br />
* Send the public keys to every server.<br />
affa --send-key JOBNAME<br />
* Check whether password-less logins are working.<br />
affa --check-connections<br />
* Check whether the scheduled jobs are evenly distributed over the day.<br />
affa --show-schedule<br />
* Create the cron jobs.<br />
affa --make-cronjobs<br />
* Check the status after 24 hours.<br />
affa --status<br />
<br />
==== Backup of single ibays ====<br />
Suppose you want to back up the ibays 'staff1' and 'staff2' on your production server with WAN IP 82.123.1.1 to an Affa server in a different location over the internet every night at 2:30am.<br />
* Log into the Affa server.<br />
* Create a dedicated job file ''/etc/affa/ibay-staff.conf'' . Edit it and set<br />
[ibay-staff]<br />
Description=Backup of ibay-staff on 82.123.1.1<br />
remoteHostName=82.123.1.1<br />
TimeSchedule=0230<br />
SMEServer=no<br />
Include=/home/e-smith/files/ibays/staff1<br />
Include=/home/e-smith/files/ibays/staff2<br />
* Save the configuration<br />
* send the public key to the production server<br />
affa --send-key ibay-staff<br />
* check next morning<br />
affa --list-archives ibay-staff<br />
affa --status<br />
ls /var/affa/ibay-staff<br />
<br />
==== Two production servers backup each other ====<br />
You have two sites connected via a VPN and an SME Server running at each site. In that case you don't need a dedicated Affa backup server. Both production servers can additionally act as Affa backup servers, backing up the server at the other site. Simply install Affa and configure a job that backs up the other one. You can use all Affa features except the rise feature.<br />
<br />
When using the rise feature, the server becomes a clone of one of the backed-up systems, which is less useful in this scenario: it would give you a running copy of the other site's server while this site's server is down.<br />
<br />
To get redundancy and a faster restore you can configure a local backup to an external USB or NAS device.<br />
<br />
Take care that you do not back up the archives back to the other site. Set the RootDir property to a path which is not included in the SME default backup list. Leaving the RootDir property at its default '/var/affa' guarantees this.<br />
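As an illustration (not from the original howto — jobnames, VPN addresses and schedules are placeholders), the two mirrored job files might look like this; the # lines are annotations here, not necessarily part of the files:<br />

```ini
# On the site-A server: /etc/affa/siteB.conf
[siteB]
Description=Backup of the site-B production server
remoteHostName=10.0.2.1
SMEServer=yes
RootDir=/var/affa
TimeSchedule=0230

# On the site-B server: /etc/affa/siteA.conf
[siteA]
Description=Backup of the site-A production server
remoteHostName=10.0.1.1
SMEServer=yes
RootDir=/var/affa
TimeSchedule=0330
```

Leaving RootDir at /var/affa keeps the archives out of the SME default backup list, so the two jobs do not copy each other's archives back and forth.<br />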
<br />
==== Use Affa to backup to a NFS-mounted NAS or a local attached USB drive ====<br />
<br />
{{Note box|This chapter still needs to be checked whether it works with Affa 3 for SME.}}<br />
<br />
You want to back up your SME production server with hostname 'prodbox' and IP 10.200.48.1 to a mounted filesystem instead of setting up a dedicated Affa box.<br />
<br />
===== Setup NAS =====<br />
You have a [http://www.freenas.org FreeNAS] box with IP 10.200.48.2 up and running with NFS service enabled. The disk is mounted to /mnt/affashare. You have authorized the IP address of your prodbox server to access share /mnt/affashare.<br />
* log into the 'prodbox' and install the NFS packages<br />
/usr/bin/yum install --enablerepo=smecontribs smeserver-nfs<br />
* now enable and start the portmapper service<br />
config setprop portmap status enabled<br />
service portmap start<br />
<br />
* mount the NFS share<br />
mkdir -p /mnt/affadevice<br />
mount 10.200.48.2:/mnt/affashare /mnt/affadevice<br />
<br />
<br />
===== Alternatively setup a USB drive =====<br />
* log into the 'prodbox' <br />
* connect a USB hard disk to the USB bus. Now you must determine which device name the kernel has assigned to the drive. View /var/log/messages and search for ''Initializing USB Mass Storage driver''. A few lines below you'll find the name of the device. In this example it is ''sdh''. Replace ''/dev/sdh'' with ''your device'' in the following instructions.<br />
* use the fdisk program to create a linux partition. '''Verify that this is really the attached USB drive before you continue!'''<br />
<br />
fdisk /dev/sdh<br />
You'll most likely find an existing vfat DOS partition, which you have to delete first. In the following we assume that you have created a single partition ''/dev/sdh1''.<br />
* now format the drive with an ext3 filesystem<br />
mkfs.ext3 /dev/sdh1<br />
* make the mount point<br />
mkdir -p /mnt/affadevice<br />
* add the following line to the /etc/fstab<br />
/dev/sdh1 /mnt/affadevice ext3 defaults 0 0<br />
* mount the drive<br />
mount /mnt/affadevice<br />
* crosscheck your work using the df command<br />
df<br />
<br />
===== Copying an Affa USB hard drive archive to a new disk =====<br />
Affa makes extensive use of hard links to conserve disk space in its backups. If you want to copy such a disk archive to a new (bigger) disk, you need to ensure that the hard links are copied correctly, or the destination copy may become significantly bigger than the source archive.<br />
<br />
One way to copy across file systems (i.e. two different USB disks) and preserve the hard links is as follows:<br />
<br />
* mount both USB drives, but at different mount points, e.g. /media/backup1 and /media/backup2, and then:<br />
<br />
mkdir /media/backup2/archive <br />
cd /media/backup1/archive<br />
tar cpf - . | ( cd /media/backup2/archive && tar xpf - )<br />
<br />
where '''archive''' is the name of the AFFA job you want to move to the other disk.<br />
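To convince yourself that the tar pipe really preserves the hard links, you can check the link count of a file in the destination tree. The sketch below demonstrates this on temporary directories (the paths and file names are made up for the demonstration, not Affa defaults):<br />

```shell
set -e
src=$(mktemp -d); dst=$(mktemp -d)

# Simulate two Affa archives sharing one unchanged file via a hard link.
mkdir "$src/daily.0" "$src/daily.1"
echo "backup data" > "$src/daily.0/file"
ln "$src/daily.0/file" "$src/daily.1/file"

# Copy the tree through a tar pipe; tar records hard links in the
# archive and recreates them on extraction.
( cd "$src" && tar cpf - . ) | ( cd "$dst" && tar xpf - )

# A link count of 2 shows the two copies still share one inode.
stat -c %h "$dst/daily.0/file"
```

A plain `cp -r` of the same tree would instead produce two independent copies and double the disk usage.<br />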
<br />
===== Setup Affa =====<br />
You want to run backups at 11:30, 15:30 and 19:30, and you want to keep the last 3 scheduled backups, 7 daily, 5 weekly, 12 monthly and 1 yearly backups.<br />
* log into the 'prodbox' and install the Affa packages as described above.<br />
* create/edit the conf file of the job: nano /etc/affa/JOBNAME.conf<br />
and set:<br />
remoteHostName=localhost<br />
TimeSchedule=1130<br />
TimeSchedule=1530<br />
TimeSchedule=1930<br />
scheduledKeep=3<br />
dailyKeep=7<br />
weeklyKeep=5<br />
monthlyKeep=12<br />
yearlyKeep=1<br />
RootDir=/mnt/affadevice<br />
Review the other properties and change them to your needs.<br />
* run the job manually<br />
affa --run JOBNAME<br />
<br />
<br />
===== Limitations =====<br />
With this kind of setup you cannot use the Affa rise feature, as it requires the backup archive to be located on the '''same filesystem''' as the server installation. The rise option uses hardlinks, which do not work across filesystems.<br />
<br />
===== Automount =====<br />
Having the backup archives on the same filesystem is always a risk, which can be minimized by using the automount feature: the external filesystem is then only mounted during a job run.<br />
<br />
In the NAS example set<br />
AutomountDevice=10.200.48.2:/mnt/affashare<br />
AutomountPoint=/mnt/affadevice<br />
and skip the step 2.<br />
<br />
In the USB drive example set<br />
AutomountDevice=/dev/sdc1<br />
AutomountPoint=/mnt/affadevice<br />
and skip the steps 5 to 8.<br />
<br />
The mount point will be automatically created, if it does not exist. <br><br />
To access the archive directory, you need to mount it manually.<br />
<br />
=== Restore ===<br />
==== Restore single files or directories ====<br />
'''Example 1:''' It's Tuesday January 6th 2009, when user 'briedlin' asks you to restore the messages of his mailbox 'orders' he has accidentally deleted on Monday.<br />
<br />
* You first must check what backup archives are available. The jobname of this server backup is 'primmail'. To get a listing of all archives run <br />
affa --list-archives primmail<br />
''(see the example listing in chapter [[Affa#Usage_and_command_line_options]])''<br />
* Choose the daily.0 archive, which was created Monday night.<br />
* Now restore the mailbox 'orders' by running the rsync command ''(note the trailing slash!)'' on the Affa backup server:<br />
export RDIR=/home/e-smith/files/users/briedlin/Maildir/.orders/ ''# this variable is used to shorten the next command line''<br />
rsync -av /var/affa/primmail/daily.0/$RDIR 10.204.48.1:$RDIR<br />
<br />
If the servers are configured to use a different ssh port eg 2222, then instead do:<br />
<br />
export RDIR=/home/e-smith/files/users/briedlin/Maildir/.orders/ ''# this variable is used to shorten the next command line''<br />
rsync -av -e 'ssh -p 2222' /var/affa/primmail/daily.0/$RDIR 10.204.48.1:$RDIR<br />
<br />
<br />
<br />
'''Example 2:''' A user has deleted the file orderform.pdf from ibay 'docs' on the server 10.204.48.1 and asks you to restore it.<br />
<br />
* You have searched and found the latest version of this file in weekly archive of job 'prodserv'.<br />
* To copy it back to the server 10.204.48.1 run on the Affa server<br />
export RFILE=/home/e-smith/files/ibays/docs/files/orderform.pdf ''# this variable is used to shorten the next command line''<br />
rsync -av /var/affa/prodserv/weekly.1/$RFILE 10.204.48.1:$RFILE<br />
<br />
* If the servers are configured to use a different ssh port eg 2222, then instead do:<br />
export RFILE=/home/e-smith/files/ibays/docs/files/orderform.pdf ''# this variable is used to shorten the next command line''<br />
rsync -av -e 'ssh -p 2222' /var/affa/prodserv/weekly.1/$RFILE 10.204.48.1:$RFILE<br />
<br />
<br />
<br />
'''Example 3:''' Restoring a file with special characters in the path<br />
<br />
Do not use shell variables to shorten the command; it would complicate things more than it would help.<br />
Quote the source and destination paths. In the destination path, escape blank characters with a double backslash and brackets with a single backslash. On the Affa backup server do:<br />
rsync -av "/var/affa/fileshare/daily.4/home/e-smith/files/ibays/mechfiles/files/Valve Control (Design Gr)/VALVE LIST FOR ISSUED.xls" "192.168.1.7:/home/e-smith/files/ibays/mechfiles/files/Valve\\ Control\\ \(Design\\ Gr\)/VALVE\\ LIST\\ FOR\\ ISSUED.xls"<br />
<br />
If the servers are configured to use a different ssh port eg 2222, then instead do:<br />
<br />
rsync -av -e 'ssh -p 2222' "/var/affa/fileshare/daily.4/home/e-smith/files/ibays/mechfiles/files/Valve Control (Design Gr)/VALVE LIST FOR ISSUED.xls" "192.168.1.7:/home/e-smith/files/ibays/mechfiles/files/Valve\\ Control\\ \(Design\\ Gr\)/VALVE\\ LIST\\ FOR\\ ISSUED.xls"<br />
<br />
==== Full restore ====<br />
Generally: <br />
'''affa --full-restore [optional settings] JOB [ARCHIVE]'''<br />
This rsyncs the data (files and directories) from the backup ARCHIVE back to the 'remoteHostName' defined in the configuration of the job JOB. <br />
If ARCHIVE is not given, the archive 'scheduled.0' is used as the default.<br />
The --full-restore reconstructs the server as it was at the time of the backup and takes the following important optional settings into consideration:<br />
<br />
With option [--preserve-newer=yes] files on the remote server with modification time newer than on the backup are not overwritten.<br />
<br />
With option [--delete=yes] all files on the remote server, which are not in the backup, are deleted.<br />
<br />
If the property SMEServer=yes is set, signal-event pre-restore and signal-event post-upgrade are run automatically.<br />
<br />
After the restore is done, the restored server reboots automatically. <br />
<br />
Example:<br />
You have backed up your production server 'prodsrv' as job 'prodbox'. To restore only lost data from the latest backup, run<br />
affa --full-restore prodbox<br />
<br />
To restore exactly from the older archive daily.3 run <br />
affa --full-restore --preserve-newer=no --delete=yes prodbox daily.3<br />
<br />
{{Warning box|A full restore with --preserve-newer=no and --delete=yes reconstructs the server as it was at the time of the backup. That means that all files created and all server configuration changes made after the backup will be lost!}}<br />
<br />
==== Moving a SME server installation to new hardware using the Affa rise feature ====<br />
Please see this Howto: [[Moving SME to new Hardware]]<br />
<br />
<br />
==== Restore from USB drive on new server ====<br />
<br />
This tip comes from http://forums.contribs.org/index.php?topic=42412.0<br />
<br />
Q) I have complete backups using affa stored on a usb hard drive connected to our affa backup server. I need to restore an earlier monthly backup of our job "mailbackup" to a test server rather than back to the original system. If I did it from the backup server I see the instructions of how to rise that server to the current backup on that server but I want to restore a point about a month ago before some strange things happened. And I want to do it on a machine that is not our backup server or our production server. I tried to figure out how but am lost in the options. My goal is to do some testing.<br />
<br />
A) On your test server set up a job mailbackup identical to that on your backup server, but set the property RootDir to /var/affa and the property status to disabled. Connect the USB drive and copy the archive of the job mailbackup to /var/affa. Then run affa --rise mailbackup ARCHIVE<br />
<br />
Remember that Affa places job setup scripts into the archive directories. In your case it is mailbackup-setup.pl. Copy it to your testserver, change the RootDir and status properties and execute it.<br />
<br />
=== FAQ ===<br />
'''What files and directories are included by default?'''<br />
<br />
With SMEServer=no, nothing at all.<br />
<br />
With SMEServer=yes, the SME default backup list.<br />
<br />
{{Note box|The list must be added!}}<br />
<br />
<br />
{{Note box|The following example needs to be reworked for Affa 3.}}<br />
'''Can I exclusively backup image files from a specific directory?'''<br />
Yes. Assuming you want to back up all gif and jpg files from the directory /home/e-smith/files/ibays/pictures/files, use this configuration:<br />
db affa setprop JOBNAME SMEServer no # do not include SME Server default directories and files<br />
db affa setprop JOBNAME Include[0] '/home/e-smith/files/ibays/pictures/files' # start searching here<br />
db affa setprop JOBNAME Include[1] '*/' # search the tree recursively<br />
db affa setprop JOBNAME Include[2] '*.gif' # copy files that match these patterns<br />
db affa setprop JOBNAME Include[3] '*.jpg' <br />
db affa setprop JOBNAME Exclude[0] '*' # exclude all others<br />
db affa setprop JOBNAME rsyncOptions '-m' # do not create empty folders<br />
<br />
<br />
'''How can I move a backup archive from my production server in a remote branch office to the Affa server using a USB disk? I want to avoid downloading the first full backup over the internet.'''<br />
Install Affa on your production server, then connect and mount the USB disk. Set up a job to write the archive to the USB disk. After the job run, connect the USB disk to your Affa server and set up the job there in the same way; use the setup script from the archive directory. You only need to change the remoteHostName property. Now use the --move-archive option to move the archive to the local disk. If not already done, send the public key to your production server. You're done.<br />
<br />
<br />
'''How do I backup two SME servers behind a firewall?'''<br />
First you need to configure port forwards for the ssh service on your firewall. Use a non-standard port, e.g. 2200->22, for your second server. Set up a job on your Affa server for each of your production servers. Set the job property sshPort=2200 for the second server.<br />
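Assuming Affa 3 job-file syntax, the job for the second server might then look like this sketch (jobname, WAN address, schedule and description are placeholders, not from the original howto):<br />

```ini
[server2]
Description=Backup of the second server behind the firewall
remoteHostName=82.123.1.1
sshPort=2200
SMEServer=yes
TimeSchedule=0330
```

The job for the first server is identical except that it omits sshPort, so the default ssh port 22 is used.<br />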
<br />
=== Uninstall ===<br />
This removes the installed Affa package, all configuration data and all backup archives.<br />
<br />
Run the following commands for each job:<br />
affa --revoke-key JOBNAME<br />
affa --delete-job JOBNAME<br />
Verify that all jobs have been deleted:<br />
affa --status<br />
Remove the Affa package:<br />
yum remove smeserver-affa<br />
Cleaning up:<br />
rm -f /etc/cron.d/affa<br />
rm -rf /etc/affa<br />
rm -rf /var/affa /var/log/affa<br />
<br />
=== Troubleshooting ===<br />
==== Affa stops working after prodserver was rebuilt ==== <br />
'''Description:'''<br />
* Affa stopped working or is broken.<br />
* When affaserver tries to make an ssh connection to prodserver it fails.<br />
* In the /var/log/affa/''jobname''.log file you will see this message: "SSH connection to ''prodserver ip'' failed. Did you send the public key?". <br />
* Resending the keys does not solve the problem. <br />
<br />
'''Solution:'''<br />
* This probably happens because the key for prodserver stored on the affaserver no longer matches the current key of prodserver. <br />
* You need to remove the stale key so that the next ssh connection fetches the key from prodserver again. <br />
* From the command line, edit the /root/.ssh/known_hosts file and remove the line for prodserver. It starts with: ''prodserver ip'' ssh-rsa<br />
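The stale line can also be removed non-interactively. The sketch below shows the idea on a throwaway copy of a known_hosts file (the IP addresses and key strings are made up — on a real Affa server the file is /root/.ssh/known_hosts and the pattern is your prodserver IP):<br />

```shell
# Build a throwaway known_hosts with a stale entry and another host.
kh=$(mktemp)
cat > "$kh" <<'EOF'
10.204.48.1 ssh-rsa AAAAB3stalekey
10.204.48.9 ssh-rsa AAAAB3otherkey
EOF

# Drop every line that starts with the stale host's IP
# (equivalent to deleting the line by hand in an editor).
sed -i '/^10\.204\.48\.1 /d' "$kh"

cat "$kh"    # only the 10.204.48.9 entry remains
```

On systems with OpenSSH, `ssh-keygen -R ''prodserver ip''` does the same against the real known_hosts file directly.<br />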
<br />
=== Bug report ===<br />
Affa is listed in the [http://bugs.contribs.org/enter_bug.cgi?product=SME%20Contribs&component=smeserver-affa bugtracker contribs section].<br />
Please report all bugs, new feature requests and documentation issues there.<br />
<br />
<br />
{{#bugzilla:columns=id,product,version,status,summary |sort=id |order=desc |disablecache=1|component=smeserver-affa|noresultsmessage="No open bugs found."}}<br />
<br />
=== Additional information ===<br />
For details of performance, changelog etc see [[:Affa:Additional information]]<br />
<br />
=== References ===<br />
* Wikipedia article http://en.wikipedia.org/wiki/Rsync<br />
* Rsync homepage http://rsync.samba.org/<br />
* Wiki article [[Moving SME to new Hardware]]<br />
* Wiki article [[Backup with Affa and FreeDup]]<br />
* Wiki article [[Rsyncd setup on a windows computer for use with Affa backup]]<br />
* Wiki article [[Backup Windows Computers Using Affa]]<br />
* Wiki article [[Backup of ESXi Virtual Machines using Affa]]<br />
<br />
----<br />
[[Category: Contrib]]<br />
[[Category: Backup]]</div>
Arnaud
https://wiki.koozali.org/index.php?title=Affa&diff=33077 Affa 2017-04-01T18:56:07Z
<p>Arnaud: </p>
<hr />
<div>{{Languages|Affa}}<br />
{{Level|Advanced}}<br />
{{usefulnote}}<br />
<br />
<br />
=== Maintainer ===<br />
Maintainer(s) Affa 3: Arnaud, stephdl<br><br />
Development of Affa2 for SME Server by the original maintainer has been discontinued.<br />
<br />
=== Version ===<br />
{{ #smeversion: affa}}<br />
{{ #smeversion: smeserver-affa}}<br />
<br />
<br />
=== Description ===<br />
The main purpose of this Affa package is to turn an SME Server into a dedicated backup box in a few minutes. Affa backs up as many SME servers as you like, or any other servers which have sshd running and rsync installed. Once it is configured, Affa runs reliably unattended and sends warning messages in case of any errors. <br />
<br />
All backup archives are full backups, as Affa makes use of the hardlink technique. Therefore a new full backup only needs disk space for the differences plus the filesystem overhead for the hardlinks and directories (which is typically 2-3%).<br />
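The disk-space effect of the hardlink technique can be demonstrated with standard tools. The sketch below (temporary paths and made-up archive names, not an Affa command) creates a 4 MiB file in one "full backup" and a second "full backup" of it as a hard link:<br />

```shell
d=$(mktemp -d)
mkdir "$d/daily.0" "$d/scheduled.0"

# One 4 MiB file in the first "full backup"...
dd if=/dev/zero of="$d/daily.0/big" bs=1M count=4 2>/dev/null

# ...and a hard link to it in the second: a complete copy of the
# archive that costs no additional data blocks.
ln "$d/daily.0/big" "$d/scheduled.0/big"

# du counts the shared file only once: about 4 MiB, not 8.
du -sk "$d" | cut -f1
```

This is why a new Affa full backup only consumes space for the files that actually changed since the previous run.<br />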
<br />
Affa is based on the rsync program and supports the rsync --compress option. This allows you to run backups over the internet or VPN. A typical setup is one or more Affa backup servers placed in different locations, which backup the production server(s) over the VPN.<br />
<br />
A special feature is the rise option, which allows you to rise the backup server to your production server from a backup archive in case of a total loss of your production server. The rise is executed within an extremely short time, even with huge amounts of data. The rise feature uses hardlinks and therefore does not use up additional disk space.<br />
<br />
The rise feature can also be used to upgrade from a lower version of SME to a higher one; this also works from SME 8.1 to 9. For this you need 2 separate installations of SME: one is the actual running server and the other is an Affa backup server with a newer version of SME Server, on which you execute the rise command. See more here: [[Moving SME to new Hardware]]<br />
<br />
A simple way to keep a server running with maximum uptime is to duplicate the hardware, e.g. to have 2 separate hardware boxes or virtual servers on separate hardware. One is the actual server and the other one is a backup system that, in case of problems, can be promoted to the actual server in a very short time with the rise command. This also allows you to upgrade the SME Server software with minimum downtime!<br />
<br />
Affa is a command line tool for system administrators and is intentionally designed without a GUI. Therefore it can be efficiently managed on the console and over slow internet connections.<br />
<br />
'''Note:''' This document also refers to Affa version 2, which is no longer maintained. Information on Affa 3 will be gradually added here once it has been tested more.<br />
<br />
==== Affa features at a glance ====<br />
* Affa is secure: All transfers and connections are made by using the ssh protocol with public/private key authentication<br />
* Periodically runs unattended full backups. Only differences to the previous full backup are transferred over the network<br />
* Using rsync with optional bandwidth limit and compression allows backups over the internet<br />
* Uses hardlink technique, i.e. physical disk space only needed for the differences between two full backups<br />
* Keeps a configurable number of scheduled, daily, weekly, monthly and yearly full backup archives<br />
* The archives are browseable. No need to unpack them first.<br />
* Archives can be mapped to a Samba share.<br />
* Backup jobs are started by the cron daemon<br />
* Interrupted jobs continue with already transferred data when restarted<br />
* Backs up the default e-smith directories and files when the property SMEServer is set to yes<br />
* Additional directories and files can be included<br />
* Directories and files can be excluded from the backup<br />
* Non-SME server linux systems can be backed up by setting the SMEServer property to no and using an include list<br />
* In ESXi mode, running virtual machines can be backed up. See [[Backup of ESXi Virtual Machines using Affa]]<br />
* Configurable nice level for rsync processes on the backup and source server<br />
* Optional run of custom programs before and after a job run (e.g. running tape backup)<br />
* Checks the disk space left after a job run with warning levels strict, normal or risky<br />
* Extensive checking of failure conditions<br />
* Sends failure messages to a configurable list of email addresses<br />
* Sends a warning message, if the backup server runs out of disk space<br />
* Installs an optional watchdog on the source server in case the backup server fails (SME Server only)<br />
* Watchdog sends warning, if an expected backup did not run (SME Server only)<br />
* Watchdog sends a daily reminder message, if the error continues unchecked (SME Server only)<br />
* Option to display current status of all jobs showing times of last and next run, size and disk usage<br />
* Status can be mailed on a daily, weekly or monthly schedule<br />
* Option to display all existing archives of a job, showing date, number of files, size and bytes transferred from the source<br />
* Option to send (and revoke) the public key to the source server (SME Server and ESXi only)<br />
* Option to rise the backup server to a production server from a backup (SME Server only)<br />
* The rise feature does not physically move data and therefore is extremely fast and needs (almost) no extra disk space<br />
* Rise option can be run remotely, as the NIC driver configuration of the backup server is preserved<br />
* Compares installed RPMs on source with backup server. Sends warning message, if not in sync<br />
* Undo rise option to restore the backup server<br />
* Configurable via a e-smith style db, with one record for each job and a default record for all jobs<br />
* Logs to /var/log/affa/JOB.log and /var/log/affa/affa.log with optional debug switch for higher verbosity<br />
* Log files are rotated weekly, with 5 logs kept<br />
<br />
<div style="text-align:left" class="mw-collapsible mw-collapsed" data-collapsetext="less &uarr;" data-expandtext="Obsolete Affa 2 instructions &darr;"><br />
<br />
=== Installation or Update of Affa 2 ===<br />
<br />
====Smecontribs for sme8====<br />
yum install --enablerepo=smecontribs smeserver-affa<br />
<br />
The latest test version can be found here :<br />
[http://www.reetspetit.com/smeserver/5/noarch/smeserver-affa-2.0.0-rc6.el5.sme.noarch.rpm smeserver-affa-2.0.0-rc6.el5.sme.noarch.rpm] Note this is NOT signed.<br />
<br />
Please post bug reports in the [http://bugs.contribs.org bug tracker]<br />
<br />
When you have installed Affa for the first time run the following command to initialize the Affa database<br />
<br />
affa --make-cronjobs<br />
and log out and log in again on the console for the bash auto-completion (TAB key) to take effect.<br />
====Obsolete way to install====<br />
{{Note box|SME7 is now EOL and has been moved to the obsolete tree}}<br />
SME7 version: '''smeserver-affa-2.0.0-rc4.noarch.rpm'''<br />
wget <nowiki>http://mirror.contribs.org/smeserver/contribs/michaelw/sme7/Affa2/smeserver-affa-2.0.0-rc4.noarch.rpm</nowiki><br />
wget <nowiki>http://mirror.contribs.org/smeserver/contribs/michaelw/sme7/Affa2/perl-Compress-Bzip2-2.09-1.2.el4.rf.i386.rpm</nowiki><br />
/usr/bin/yum --enablerepo=smecontribs localinstall \<br />
smeserver-affa-2.0.0-rc4.noarch.rpm \<br />
perl-Compress-Bzip2-2.09-1.2.el4.rf.i386.rpm<br />
<br />
SME8 version: SME8 is based on CentOS5, therefore use the perl-Compress-Bzip2 el5 rpm: <br />
wget <nowiki>http://mirror.contribs.org/releases/8/smecontribs/i386/repoview/smeserver-affa.html</nowiki><br />
For SME8 32bit:<br />
wget <nowiki>http://apt.sw.be/redhat/el5/en/i386/rpmforge/RPMS/perl-Compress-Bzip2-2.09-1.2.el5.rf.i386.rpm</nowiki><br />
/usr/bin/yum --enablerepo=smecontribs localinstall \<br />
smeserver-affa-2.0.0-rc4.noarch.rpm \<br />
perl-Compress-Bzip2-2.09-1.2.el5.rf.i386.rpm<br />
For SME8 64bit:<br />
wget http://apt.sw.be/redhat/el5/en/x86_64/rpmforge/RPMS/perl-Compress-Bzip2-2.09-1.2.el5.rf.x86_64.rpm<br />
/usr/bin/yum --enablerepo=smecontribs localinstall \<br />
smeserver-affa-2.0.0-rc4.noarch.rpm \<br />
perl-Compress-Bzip2-2.09-1.2.el5.rf.x86_64.rpm<br />
<br />
When you have installed Affa for the first time run the following command to initialize the Affa database<br />
affa --make-cronjobs<br />
and log out and log in again on the console for the bash auto-completion (TAB key) to take effect.<br />
<br />
==== Quick start example for Affa 2 ====<br />
You have a SME 7 production server with hostname 'prodbox' and IP 10.200.48.1.<br><br />
Set up a second SME 7 box as your backup server with hostname 'affabox' and IP 10.200.48.2. <br />
<br />
<ol><li>log into the 'affabox' and install the packages as described above.<br />
</li><li>copy the config helper script sample<br />
cp /usr/lib/affa/jobconfig-sample.pl /root/prodbox-job.pl<br />
</li><li>edit /root/prodbox-job.pl and set<br />
my $jobname='prodbox';<br />
and<br />
'remoteHostName'=>'10.200.48.1',<br />
optionally, if the remote server port is configured to e.g. 2222 then set<br />
'sshPort'=>2222,<br />
</li><li>save and run the 'prodbox-job.pl' file. This will write the configuration (this makes the database entries and sets up the cronjobs)<br />
/root/prodbox-job.pl<br />
</li><li>generate the DSA keys and send the public key to the 'prodbox'<br />
{{Note box|When initially doing this step, you will need to temporarily enable "Allow secure shell access using standard passwords" on the production server}}<br />
affa --send-key prodbox<br />
</li><li>run the job manually<br />
<br />
affa --run prodbox<br />
</li></ol><br />
<br />
==== Configuration ====<br />
The configuration is stored in an e-smith style database. Use the db command to configure Affa.<br />
The jobname is the record key with the type 'job'.<br>To setup a new job with the name 'prodbox' enter:<br />
<br />
db affa set prodbox job<br />
<br />
then set the properties<br />
<br />
db affa setprop prodbox remoteHostName 192.168.1.1<br />
db affa setprop prodbox TimeSchedule '0030,0730,1130,1330,1730,2030'<br />
db affa setprop prodbox Description 'My Production Server'<br />
db affa setprop prodbox status enabled<br />
and so on...<br />
<br />
Alternatively you can you use a script as described above in the 'Quick start' chapter.<br />
<br />
To verify your work, type:<br />
<br />
db affa show prodbox<br />
<br />
Finally set up the cronjobs:<br />
affa --make-cronjobs<br />
<br />
</div><br />
<br />
<br />
=== Installation of Affa 3 ===<br />
The following instructions assume that a fresh Affa 3 installation is made on a fresh dedicated SME server which serves as a dedicated backup server ('affabox'). No jobs are migrated from Affa 2. If you use an existing SME server, just skip the instructions for setting up a fresh SME box.<br />
<br />
Set up a dedicated machine with SME 8.1 or SME 9.0 from CDROM. Use the following example settings:<br />
<ol><li>Domain name: athome.xx (use your existing domain name)</li><br />
<li>Host name: affabox (must not match to existing host)</li><br />
<li>IP address: 192.168.0.10 (must not match to existing IP address)</li><br />
<li>Subnet: 255.255.255.0</li><br />
<li>role: Server only</li><br />
<li>Gateway: 192.168.0.1 (use your existing gateway)</li><br />
<li>DHCP: DEactivate</li><br />
<li>DNS server: 192.168.0.1 (use your existing DNS server)</li><br />
<li>activate changes: yes</li><br />
<li>The fresh server will then restart with the above settings.</li><br />
</ol><br />
<br />
To update your server, log in as user root on the local console. Use the 'top' command to display running jobs and wait until 'yum' has stopped running.<br />
yum clean all<br />
yum update<br />
The fresh server will be updated.<br />
signal-event post-upgrade<br />
signal-event reboot<br />
The server will reboot. Afterwards it is ready for installation of Affa 3.<br />
<br />
Log in as root on a local or remote SSH console of ''affabox''.<br />
<br />
= For SME8 =<br />
<br />
To install issue the following command:<br />
yum install smeserver-affa --enablerepo=smecontribs<br />
= For SME9 =<br />
<br />
yum install smeserver-affa --enablerepo=smecontribs<br />
<headertabs/><br />
<br />
The server configuration again needs to be updated.<br />
signal-event post-upgrade<br />
signal-event reboot<br />
The server will reboot. Afterwards Affa 3 is ready for use.<br />
<br />
Now manually create the directory for the archive files:<br />
mkdir /var/affa<br />
<br />
{{Warning box| Note that on SME9, a new SSH [https://wiki.contribs.org/AutoBlock AutoBlock] feature interferes with the Affa requirement for regular SSH logins! You might have to disable this feature to allow Affa to operate correctly. }}<br />
<br />
=== Creation of backup jobs ===<br />
Job configurations of Affa 3 are no longer stored in an e-smith style database. They are stored in configuration files located in ''/etc/affa'' . Create your configuration files in this directory, e.g.<br />
touch /etc/affa/backup-smeserver.conf<br />
where ''backup-smeserver'' is your jobname, and add content as described below.<br />
<br />
==== Quick start example ====<br />
You have a SME production server with hostname 'smeserver' and IP 192.168.0.2.<br><br />
You have a second SME box as your backup server with hostname 'affabox' and IP 192.168.0.10. <br />
<br />
Log in to your 'affabox' as root and edit ''/etc/affa/backup-smeserver.conf''. Using e.g. the editor ''nano'', create the following example job configuration file for the job ''backup-smeserver'':<br />
[backup-smeserver]<br />
remoteHostName=192.168.0.2<br />
SMEServer=yes<br />
Watchdog=yes<br />
RPMCheck=yes<br />
ConnectionCheckTimeout=120<br />
Debug=no<br />
Description=Backup of 192.168.0.2 smeserver.athome.xx<br />
DiskSpaceWarn=strict<br />
RootDir=/var/affa<br />
TimeSchedule=0630<br />
localNice=15<br />
remoteNice=15<br />
rsync--inplace=yes<br />
rsyncCompress=no<br />
rsyncTimeout=900<br />
scheduledKeep=1<br />
dailyKeep=7<br />
weeklyKeep=4<br />
monthlyKeep=12<br />
yearlyKeep=1<br />
status=enabled<br />
<br />
If you have a Let's Encrypt certificate on the production server, then you should also include:<br />
Include=/etc/dehydrated<br />
<br />
Then save your job configuration file.<br />
<br />
Now check that your configuration is OK:<br />
affa --configcheck<br />
This should report no errors. Now create and send the key files to your production SME server:<br />
affa --send-key backup-smeserver<br />
The following output should appear on the console:<br />
Job backup-smeserver: Generating DSA keys...<br />
Successfully created DSA key pair.<br />
root@192.168.0.2's password: <Enter password of 192.168.0.2/smeserver and hit ENTER><br />
Public key sent to 192.168.0.2<br />
Now run your job manually, both to test the setup and to accept the remote host key:<br />
affa --run backup-smeserver<br />
The following output should appear on the console:<br />
The authenticity of host 'backup-smeserver (192.168.0.2)' can't be established.<br />
RSA key fingerprint is 3b..........65.<br />
Are you sure you want to continue connecting (yes/no)? <yes><br />
Now your backup job should run for the first time. Depending on the volume of the files, this takes quite a while.<br />
<br />
Once the job is done, check that the archive is available:<br />
affa --list-archives backup-smeserver<br />
<br />
or run the job a second time:<br />
affa --run backup-smeserver<br />
Note that you won't be asked for the password again. Also note that this second run of the job ''backup-smeserver'' should complete considerably faster, because not all files are copied again: only the differences from the previous run are transferred; the rest is stored as hard links to the existing file copies.<br />
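The space saving works because unchanged files are stored as hard links: each archive directory has its own directory entry, but identical files share a single inode on disk. A minimal illustration with plain coreutils (this is not Affa itself; ''cp -al'' stands in here for the hard-linking that rsync performs between archives):<br />

```shell
# Illustration only: hard-linked snapshots store unchanged data once.
set -e
tmp=$(mktemp -d)
mkdir "$tmp/scheduled.1"
echo "unchanged data" > "$tmp/scheduled.1/file.txt"
# cp -al creates a hard-linked copy, analogous to what Affa/rsync
# do for files that did not change between two runs
cp -al "$tmp/scheduled.1" "$tmp/scheduled.0"
# Both paths report the same inode number: one copy of the data on disk
stat -c %i "$tmp/scheduled.1/file.txt" "$tmp/scheduled.0/file.txt"
rm -rf "$tmp"
```

Deleting one archive therefore frees only the blocks that no other archive still links to.<br />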
<br />
Check that the second archive is available too:<br />
affa --list-archives backup-smeserver<br />
<br />
From here you can work as with Affa 2. Modify your config file as required and as described below. Automate the backup using the command ''affa --make-cronjobs'', see below.<br />
<br />
==== Job configuration properties ====<br />
'''Note 1:''' The default values shown in this table are the Affa program defaults and not to be confused with the preset values in the job configuration files, e.g. backup-smeserver.conf.<br />
<br />
'''Note 2:''' The complete documentation of the Affa 3 settings is available in [http://affa.sourceforge.net/AffaPdfMan.pdf the Affa PDF manual]. It should be taken into consideration.<br />
<br />
'''Note 3:''' Affa 3 for SME is a fork of [http://affa.sourceforge.net/ Affa 3 for CentOS]. It restores a few options that had been removed upstream. Generally the configuration properties as described [http://wiki.nikoforge.org/Affa_Configuration/ here] do apply. Arnaud added the following functions for SME:<br />
<ul><br />
<li>The parameter “SMEServer”</li><br />
<li>The parameter and function “Watchdog”</li><br />
<li>The parameter and function “RPMCheck”</li><br />
<li>The functions “--rise” and “--undo-rise”</li><br />
</ul><br />
Consequently the list below reproduces the list for the unforked Affa 3 version, plus the properties above. For details refer to [https://www.guedel.eu/index.php/informatique/sme-server-8/affa-v3 Arnaud's website].<br />
{| border="1" cellpadding="3" cellspacing=0<br />
|-<br />
| '''Property''' || '''Value''' || '''Multivalue''' || '''Default''' || '''Description''' <br />
|-<br />
| remoteHostName <br />
| FQHN or IP<br />
| no<br />
| <none><br />
| FQHN or IP of the source host (mandatory)<br />
|-<br />
| remoteUser<br />
| account<br />
| no<br />
| root<br />
| The user account to be used with all ssh logins. May be case sensitive, e.g. Administrator account on Windows<br />
|-<br />
| Description<br />
| text string<br />
| no<br />
| <none><br />
| Any text that describes the job<br />
|-<br />
| TimeSchedule <br />
| HHMM<br />
| yes<br />
| 2230<br />
| The time the job will run. Use exactly 4 digits, no colon, no point. '''Important:''' Using the proper format HHMM is essential. Badly formatted TimeSchedule will cause strange Perl errors. Multiple runs can be achieved by repeating the line for each run time.<br />
|-<br />
| status <br />
| enabled ''or'' disabled<br />
| no<br />
| enabled<br />
| When set to disabled, no cron entries will be made. You can still run a job manually. <br />
|-<br />
| Include<br />
| full path<br />
| yes<br />
| <none><br />
| File(s) or directory(s) to be included in the backup. <br />
|-<br />
| Exclude<br />
| full path<br />
| yes<br />
| <none><br />
| File(s) or directory(s) to be excluded from the backup. <br />
|-<br />
| RootDir<br />
| full path<br />
| no<br />
| /var/affa<br />
| where to store the backup archives. Do not use /home/e-smith or /root as these are included in the backup and therefore the rise option will not work! Recommended: /var/affa<br />
|-<br />
| scheduledKeep<br />
| integer >= 1<br />
| no<br />
| 1<br />
| how many of the scheduled backups to be kept<br />
|-<br />
| dailyKeep<br />
| integer >= 0<br />
| no<br />
| 7<br />
| how many of the daily backups to be kept<br />
|-<br />
| weeklyKeep<br />
| integer >= 0<br />
| no<br />
| 4<br />
| how many of the weekly backups to be kept<br />
|-<br />
| monthlyKeep<br />
| integer >= 0<br />
| no<br />
| 12<br />
| how many of the monthly backups to be kept<br />
|-<br />
| yearlyKeep<br />
| integer >= 0<br />
| no<br />
| 2<br />
| how many of the yearly backups to be kept<br />
|-<br />
| EmailAddress<br />
| name@domain.com<br />
| yes<br />
| root<br />
| comma separated list of mail addresses, where the messages should be sent to<br>'''Note:''' By default Affa only sends messages on errors, never on success (see property chattyOnSuccess).<br />
|-<br />
| RetryAttempts<br />
| integer >= 0<br />
| no<br />
| 4<br />
| When set to a value>0, Affa re-runs a failed job RetryAttempts times with a delay of RetryAfter seconds.<br />
|-<br />
| RetryAfter<br />
| seconds >= 0<br />
| no<br />
| 900<br />
| when set to a value>0, wait RetryAfter seconds before re-running the job after an error. Only applicable with RetryAttempts>0<br />
|-<br />
| RetryNotification<br />
| yes ''or'' no<br />
| no<br />
| no<br />
| when set to no, Affa does not send an error message when a job has failed and RetryAttempts is configured. An error message is only sent when the last attempt has failed.<br />
|-<br />
| NRPEtrigger<br />
| hours<br />
| no<br />
| 24<br />
| NRPE reports a job as critical when the last successful run is older than NRPEtrigger hours. To exclude the job from monitoring set it to a value < 0.<br />
|-<br />
| SambaShare<br />
| yes ''or'' no<br />
| no<br />
| no<br />
| Access to the job archives via CIFS protocol.<br />
|-<br />
| SambaValidUser<br />
| local account<br />
| yes<br />
| affa<br />
| User who has permission to access the job archives via the CIFS protocol.<br />
|-<br />
| preJobCommand<br> preJobCommandRemote<br />
| path relative to /etc/affa/scripts/<br />
| yes<br />
| <none><br />
| Scripts to be executed before a job run. The job name and type (scheduled, daily etc.) are passed as arguments to the program. The preJobCommandRemote scripts are copied to the remote server and then executed there. The scripts are executed in alphabetical order. Use a numerical prefix if you need a specific order, e.g. 01scriptB, 02remotescriptA, 03scriptA. Use the included prescript-sample.pl and prescriptRemote-sample.pl as a starting point for your own scripts.<br>'''Note:''' If you use the parameter “SMEServer=yes”, you may delete the “Include” entries for the SME default directories in the job configuration: they will be backed up automatically.<br />
|-<br />
| postJobCommand<br> postJobCommandRemote<br />
| path relative to /etc/affa/scripts/<br />
| yes<br />
| <none><br />
| Scripts to be executed after a job run. The job name, the type (scheduled, daily etc.) and the exit code of the job run are passed as arguments to the program. The postJobCommandRemote scripts are copied to the remote server and then executed there. The scripts are executed in alphabetical order. Use a numerical prefix if you need a specific order, e.g. 01scriptB, 02remotescriptA, 03scriptA. Use the included postscript-sample.pl and postscriptRemote-sample.pl as a starting point for your own scripts.<br>'''Note:''' If you use the parameter “SMEServer=yes”, you may delete the “Include” entries for the SME default directories in the job configuration: they will be backed up automatically.<br />
|-<br />
| dedup<br />
| yes ''or'' no<br />
| no<br />
| no<br />
| The purpose of the deduplication is to remove duplicate files to save backup space. When set to 'yes', file deduplication is run after the synchronization has completed. It looks for files that have identical content, user, group and permissions, and replaces duplicates with hardlinks. Deduplication scans the just-completed archive and the previous one, which usually is scheduled.0 and daily.0 or scheduled.0 and scheduled.1. Consider this scenario: a user has renamed directories or files. Rsync sees these as new and copies them. Deduplication finds the identical copies in the previous archive and replaces them with hardlinks. To use deduplication the Freedup program needs to be installed. Affa actually runs freedup -upg scheduled.0 <previous_archive>.<br />
|-<br />
| dedupKill<br />
| yes ''or'' no<br />
| no<br />
| no<br />
| When set to 'no' the job ignores affa --kill or affa --killall when deduplication is running. This is useful in the context of killAt, which is typically used to stop bandwidth utilisation.<br />
|-<br />
| sshPort<br />
| service port <br />
| no<br />
| 22<br />
| When sshd on the remote host listens on a non-standard port, set the port here.<br />
|-<br />
| ConnectionCheckTimeout<br />
| seconds<br />
| no<br />
| 120<br />
| Affa checks the ssh connection before the rsync process is started and exits with an error after the configured time if the host did not respond.<br />
|- <br />
| BandwidthLimit<br />
| integer>=0 kilobytes per second<br />
| no<br />
| 0<br />
| Limits the data transfer rate. A value of zero specifies no limit.<br />
|- <br />
| rsyncTimeout<br />
| seconds<br />
| no<br />
| 900<br />
| Rsync exits after the configured time if no data was transferred. This avoids infinitely hanging in case of a network error.<br />
|- <br />
| rsyncCompress<br />
| yes ''or'' no <br />
| no<br />
| yes<br />
| Compress the transferred data. May be useful with slow internet connections. Increases CPU load on remote and backup host.<br />
|- <br />
| rsync--inplace<br />
| yes ''or'' no<br />
| no<br />
| yes <br />
| Set to no if the rsync version on the remote host does not support this option.<br />
|-<br />
| rsync--modify-window<br />
| integer >= 0 <br />
| no<br />
| 0<br />
| When comparing two timestamps, rsync treats the timestamps as being equal if they differ by no more than the modify-window value. This is normally 0 for an exact match. A value > 0 is useful if you can't get the clocks of the remote host and the Affa server in sync.<br />
|-<br />
| rsyncOptions<br />
| string<br />
| no<br />
| <none><br />
| Additional option string to be passed to rsync<br />
|-<br />
| localNice <br />
| -19...+19<br />
| no<br />
| 0<br />
| run rsync local process niced. <br />
|-<br />
| remoteNice<br />
| -19...+19<br />
| no<br />
| 0<br />
| run rsync process on source niced. <br />
|-<br />
| killAt<br />
| HHMM<br />
| no<br />
| <none><br />
| The time at which a job will be killed if it was still running. You can use it for example to kill jobs that are running over the internet early in the morning so that your users have the full bandwidth available during office hours. <br />
|-<br />
| resumeKilledAt<br />
| HHMM<br />
| no<br />
| <none><br />
| The time at which a killed job will be resumed. This allows you to start earlier in the evening than the scheduled time.<br />
|-<br />
| chattyOnSuccess<br />
| integer >= 0<br />
| no<br />
| 0 <br />
| When set to a value>0, Affa sends a message on a successfully completed job run and decrements the chattyOnSuccess value. When the value has reached zero, Affa falls back to the default and only sends messages on errors.<br />
|-<br />
| AutomountDevice<br>AutomountPoint<br />
| full path<br />
| no<br />
| <none><br />
| Device and mountpoint of backup device (e.g. USB disk). Device is automounted before a job starts and unmounted after job completion. With both properties empty no automount is done.<br />
|-<br />
| AutomountOptions<br />
| string<br />
| no<br />
| <none><br />
| An option string passed to the mount command.<br />
|-<br />
| AutoUnmount <br />
| yes ''or'' no<br />
| no<br />
| yes<br />
| When set to 'no' the automounted device stays mounted after the Affa run.<br />
|-<br />
| Debug <br />
| yes ''or'' no<br />
| no<br />
| no<br />
| Set to yes to increase log verbosity.<br />
|-<br />
| remoteRsyncBinary <br />
| full path<br />
| no<br />
| /usr/bin/rsync<br />
| If the rsync program on the remote server is located in non-standard location, set it here<br />
|-<br />
| remoteNiceBinary <br />
| full path<br />
| no<br />
| /bin/nice<br />
| If the nice program on the remote server is located in non-standard location, set it here.<br />
|-<br />
| localRsyncBinary <br />
| full path<br />
| no<br />
| /usr/bin/rsync<br />
| If the local rsync program is located in non-standard location, set it here<br />
|-<br />
| localNiceBinary <br />
| full path<br />
| no<br />
| /bin/nice<br />
| If the nice local program is located in non-standard location, set it here.<br />
|-<br />
| RemoteAuthorizedKeysFile <br />
| path relative to remote user's home or full path<br />
| no<br />
| .ssh/authorized_keys2<br />
| If the remote host stores the authorized keys file in a non-standard location, set it here.<br />
|-<br />
| rsyncdMode<br />
| yes ''or'' no<br />
| no<br />
| no<br />
| Set to yes to connect to the rsync daemon on the remote host (instead of running rsync over ssh).<br />
|-<br />
| rsyncdModule<br />
| string<br />
| no<br />
| AFFA<br />
| The rsyncd module name (only applicable with rsyncdMode=yes).<br />
|-<br />
| rsyncdUser<br />
| string<br />
| no<br />
| affa<br />
| The username for authentication to the rsync daemon (only applicable with rsyncdMode=yes).<br />
|-<br />
| rsyncdPassword<br />
| string<br />
| no<br />
| <none><br />
| The password for authentication to the rsync daemon (only applicable with rsyncdMode=yes).<br />
<br />
|-<br />
| globalStatus<br />
| enabled ''or'' disabled ''or'' jobs<br />
| no<br />
| jobs<br />
| Set to enabled or disabled to override the status settings in all job sections. When set to the value jobs, the status settings in the job sections are effective.<br>'''Note:''' This property is allowed only in the [GlobalAffaConfig] section. <br />
|-<br />
| sendStatus<br />
| daily ''or'' weekly ''or'' monthly ''or'' never<br />
| no<br />
| weekly<br />
| Defines how frequently the status will be sent.<br>'''Note:''' This property is allowed only in the [GlobalAffaConfig] section. <br />
|-<br />
| SMEServer<br />
| yes ''or'' no<br />
| no<br />
| no<br />
| When set to yes, the default e-smith directories are automatically included and the property RPMCheck=yes can be used.<br>'''Note:''' this property is not implemented in Affa 3 for CentOS. It has been specifically added to the fork for SME.<br />
|-<br />
| RPMCheck<br />
| yes ''or'' no<br />
| no<br />
| no<br />
| Only applicable to jobs that back up an SME server. Compares the installed packages of the source host with those of this Affa backup host and sends a message with a diff list if they are not in sync. This check is useful if you want to have the option to rise the backup server to a production server from a backup.<br>'''Note:''' this property is not implemented in Affa 3 for CentOS. It has been specifically added to the fork for SME.<br />
|-<br />
| Watchdog<br />
| yes ''or'' no<br />
| no<br />
| yes<br />
| Only applicable to jobs that back up an SME server. When a job is started, Affa installs a watchdog script on the source in /etc/cron.d/, which sends a warning message if the next scheduled job (taken from the TimeSchedule property + 10 minutes) did not run. This guarantees that you will be notified even in case of an Affa server outage. The watchdog script sends a daily reminder message if the error continues. The next job run replaces the watchdog script with a new trigger time.<br>'''Note:''' this property is not implemented in Affa 3 for CentOS. It has been specifically added to the fork for SME.<br />
|}<br />
<br />
==== Default configuration properties ====<br />
For all 'name=value' properties, defaults can be set in the [GlobalAffaConfig] section; they are used in every job configuration where the corresponding property is omitted. This is useful when you set up many similar jobs. <br />
<br />
Example: You want to set the property 'localNice' to 19 for all jobs. Then add the following section in one of your configuration files:<br />
[GlobalAffaConfig]<br />
localNice=19<br />
Consequently you don't need to set this property for your individual jobs. Properties set in the job record override the defaults.<br />
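As a combined illustration, a single configuration file could carry both the global default and a job that overrides it (property names taken from the table above; the values are examples):<br />
 [GlobalAffaConfig]<br />
 localNice=19<br />
 <br />
 [backup-smeserver]<br />
 remoteHostName=192.168.0.2<br />
 localNice=15<br />
Here every job runs its local rsync process with nice level 19 by default, while the job ''backup-smeserver'' runs with nice level 15.<br />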
<br />
==== Properties for global Settings ====<br />
The following special properties are only applicable to the [GlobalAffaConfig] section:<br />
<ul><br />
<li>sendStatus</li><br />
<li>globalStatus</li><br />
</ul><br />
<br />
All jobs can be disabled for execution through cron by setting 'globalStatus' to 'disabled'.<br />
[GlobalAffaConfig]<br />
globalStatus=disabled<br />
<br />
To re-enable, either delete the above configuration line, or set:<br />
globalStatus=jobs<br />
which is the default value.<br />
<br />
You can also set the value to<br />
globalStatus=enabled<br />
which enforces job execution through cron and overrides the status values specified in the individual job sections.<br />
<br />
=== Usage and command line options ===<br />
{{Note box|Options can be abbreviated to uniqueness, e.g. --mak is equal to --make-cronjobs}}<br />
'''affa --run JOB'''<br />
Starts a job run. Usually done by the cronjob. <br />
<br />
<br />
'''affa --make-cronjobs'''<br />
Configures the cronjobs as scheduled in the job configurations. Every 15 minutes Affa checks whether a configuration file is newer than the cronjob file and runs this command automatically if so, so you don't need to run it manually; run it yourself to make changes of time-related properties (TimeSchedule, killAt, resumeKilledAt) effective immediately instead of waiting up to 15 minutes.<br />
<br />
'''affa --configcheck'''<br />
Checks the syntax and values in all configuration files found in /etc/affa/. Run this command after modifying the configuration. Lower/Upper case errors in property names are corrected automatically.<br />
<br />
'''affa --send-key [JOB JOB ...]'''<br />
This first generates the DSA key pair on the Affa server, if not already done. Then the public key is sent to the host 'remoteHostName' as defined in the section of each job JOB, and the job-specific ssh known-hosts entry is created.<br />
{{Note box|When initially doing this step, you will need to temporarily enable "Allow secure shell access using standard passwords" on the source server.}}<br />
<br />
'''affa --check-connections [JOB JOB ...]'''<br />
Checks the ssh login and if applicable rsyncd auth for all jobs given as arguments. Without any arguments all jobs are checked.<br />
<br />
'''affa --full-restore [--preserve-newer=no] [--delete=yes] JOB [ARCHIVE]'''<br />
Does a full restore on the remote server of all backed-up files and directories from the backup ARCHIVE. If ARCHIVE is not given, the archive 'scheduled.0' is used as the default. The full restore reconstructs the server as it was at the time of the backup. After the restore the source host reboots.<br />
<br />
With option --preserve-newer=yes, files on the remote server with a modification time newer than in the backup are not overwritten.<br />
<br />
With option --delete=yes all files on the remote server, which are not in the backup, are deleted.<br />
<br />
If the parameter “SMEServer=yes” is set, signal-event pre-restore and signal-event post-upgrade will be run automatically.<br />
<br />
'''affa --list-archives JOB'''<br />
Displays a table of all present archives of job JOB with date, number of files, size and bytes received. While the column buTime shows the actual rsync time, the column ddTime shows the length of the subsequent deduplication run. ddYld is the gained deduplication yield in bytes.<br />
<br />
<small><pre><br />
Affa version 3.1.0-0 on affa-2.mydomain.com<br />
+------------------------------------------------------------------------------+<br />
| Job: fshare-ak |<br />
| Description: Fileserver AK |<br />
| Directory: /var/affa/fshare-ak/ |<br />
| Hostname: 10.204.104.4 |<br />
| Email: sysadmin@mydomain.com |<br />
+-----+----------------------+--------+--------+-------+-------+-------+-------+<br />
| Run | Completion date | buTime | ddTime | ddYld | Files | Size | Recvd |<br />
+-----+----------------------+--------+--------+-------+-------+-------+-------+<br />
| Y 0 | Wed 2010-06-23 20:26 | 11m53s | - | - | 412k | 143G | 470M |<br />
+-----+----------------------+--------+--------+-------+-------+-------+-------+<br />
| M11 | Sun 2010-08-29 20:22 | 7m50s | - | - | 417k | 153G | 14M |<br />
| M10 | Sun 2010-09-26 20:23 | 8m57s | - | - | 430k | 156G | 14M |<br />
| M 9 | Sun 2010-10-31 20:25 | 10m05s | - | - | 448k | 161G | 15M |<br />
| M 8 | Sun 2010-11-28 20:31 | 16m42s | - | - | 463k | 167G | 17M |<br />
| M 7 | Sun 2011-01-02 20:48 | 33m07s | - | - | 486k | 173G | 1.1G |<br />
| M 6 | Sun 2011-01-30 20:33 | 18m54s | - | - | 493k | 176G | 24M |<br />
| M 5 | Sun 2011-02-27 20:28 | 13m31s | - | - | 490k | 176G | 19M |<br />
| M 4 | Sun 2011-03-27 20:28 | 13m08s | - | - | 491k | 175G | 17M |<br />
| M 3 | Sun 2011-05-01 20:30 | 15m41s | - | - | 493k | 179G | 18M |<br />
| M 2 | Sun 2011-05-29 20:28 | 13m40s | - | - | 494k | 182G | 31M |<br />
| M 1 | Sun 2011-06-19 20:26 | 11m20s | - | - | 493k | 183G | 17M |<br />
| M 0 | Sun 2011-07-03 20:28 | 13m46s | - | - | 496k | 183G | 18M |<br />
+-----+----------------------+--------+--------+-------+-------+-------+-------+<br />
| W 3 | Sun 2011-07-10 20:28 | 13m38s | - | - | 479k | 180G | 16M |<br />
| W 2 | Mon 2011-07-11 20:35 | 20m27s | - | - | 479k | 180G | 301M |<br />
| W 1 | Fri 2011-07-15 20:30 | 15m53s | - | - | 480k | 180G | 62M |<br />
| W 0 | Sun 2011-07-24 19:39 | 9m25s | - | - | 482k | 181G | 16M |<br />
+-----+----------------------+--------+--------+-------+-------+-------+-------+<br />
| D 6 | Thu 2011-07-28 19:42 | 12m22s | - | - | 483k | 182G | 176M |<br />
| D 5 | Fri 2011-07-29 23:29 | 11m10s | 5h33m | 45G | 483k | 182G | 16M |<br />
| D 4 | Sat 2011-07-30 19:53 | 23m26s | 2h30m | 8.3G | 483k | 182G | 17M |<br />
| D 3 | Sun 2011-07-31 20:07 | 37m31s | 4m47s | 8.3G | 483k | 182G | 17M |<br />
| D 2 | Mon 2011-08-01 20:44 | 1h14m | 7h50m | 8.5G | 484k | 182G | 630M |<br />
| D 1 | Tue 2011-08-02 20:02 | 32m28s | 12h20m | 8.3G | 484k | 182G | 74M |<br />
| D 0 | Wed 2011-08-03 19:58 | 28m46s | 11h01m | 8.5G | 484k | 182G | 214M |<br />
+-----+----------------------+--------+--------+-------+-------+-------+-------+<br />
| S 0 | Thu 2011-08-04 20:00 | 30m28s | 11h52m | 8.5G | 484k | 182G | 203M |<br />
+-----+----------------------+--------+--------+-------+-------+-------+-------+<br />
</pre></small><br />
<br />
'''affa --list-archives [--csv] JOB'''<br />
With --csv, the output is in machine-readable semicolon-separated format.<br />
<small><pre><br />
Archive:Count;Date;Files;Size;RootDirFilesystemAvail;RootDirFilesystemUsed;valid;TotalBytesReceived;ExecutionTime;DedupTotalFiles;DedupReplacedFiles;DedupSavedBytes;DedupExectime;DedupDate<br />
monthly;00000;201509270631;393237;37252509103;166889260;47876936;yes;1327370;1160638358;99;;;;;<br />
weekly;00003;201510040531;390273;37042612135;164566692;50199504;yes;1117726;45430532;59;;;;;<br />
weekly;00002;201510110531;395553;37299589800;162612204;52153992;yes;1117092;92369237;64;;;;;<br />
weekly;00001;201510180531;403831;37707599172;161855844;52910352;yes;1317083;148317764;64;;;;;<br />
weekly;00000;201510250531;406509;38146200127;161041156;53725040;yes;1104759;64083948;63;;;;;<br />
daily;00006;201510310531;415663;39248862982;162979176;51787020;yes;1958110;198060283;66;;;;;<br />
daily;00005;201511010531;411014;38563389171;163061136;51705060;yes;1156506;48903675;75;;;;;<br />
daily;00004;201511020531;408910;38612857040;162359124;52407072;yes;1007434;210812387;75;;;;;<br />
daily;00003;201511030531;410615;37815861577;162026060;52740136;yes;1524069;162697515;67;;;;;<br />
daily;00002;201511040531;414784;38131734213;161990024;52776172;yes;1571657;134250735;63;;;;;<br />
daily;00001;201511050531;420780;38433895988;161906580;52859616;yes;2021507;149388808;103;;;;;<br />
daily;00000;201511060531;425920;38677614350;161778812;52987384;yes;1987971;166510621;71;;;;;<br />
scheduled;00000;201511070531;425663;38815712018;161612600;53153596;yes;2003540;139603231;68;;;;;<br />
</pre></small><br />
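Because the output is semicolon-separated, it can be post-processed with standard text tools. A small hypothetical example that extracts archive name, date and size with ''awk'' (the sample lines below are copied from the output above; in practice you would pipe ''affa --list-archives --csv JOB'' into the same awk command):<br />

```shell
# Parse list-archives CSV output: print <type>.<index>  <date>  <size-bytes>.
# NR > 1 skips the header line; field 1 is the archive type, 2 the index,
# 3 the completion date (YYYYMMDDHHMM) and 5 the size in bytes.
awk -F';' 'NR > 1 { printf "%s.%d  %s  %s\n", $1, $2, $3, $5 }' <<'EOF'
Archive:Count;Date;Files;Size;RootDirFilesystemAvail
daily;00001;201511050531;420780;38433895988
scheduled;00000;201511070531;425663;38815712018
EOF
```

This prints one line per archive, e.g. "daily.1  201511050531  38433895988".<br />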
<br />
'''affa --status [--csv]'''<br />
Displays a table of all configured jobs with enable status, time of last and next run, size of the most recent archive, execution time of the last run and the number of scheduled (S), daily (D), weekly (W), monthly (M) and yearly (Y) archives. If column 'Last' shows 'failed', the job did not run in the last 24h. For disabled jobs 'Last' always shows 'failed' after 24h. To see the date and time of the last run of those jobs use the --list-archives option. Column 'Next' shows the time when the next run will be started, if 'Enabled' is 'yes'. Column 'ddYld' shows the deduplication yield in bytes.<br />
<br />
<small><pre><br />
Affa version 3.1.0-0 on affa-2.mydomain.com<br />
+--------------+-----+-------+--------+-------+-------+-------+----------------+<br />
| Job | ENA | Last | Time | Next | Size | ddYld | N of S,D,W,M,Y |<br />
+--------------+-----+-------+--------+-------+-------+-------+----------------+<br />
| ads-ak | yes | 20:20 | 0m29s | 20:20 | 28M | - | 1, 7, 2, 0, 0 |<br />
| ak-user-ma.. | yes | 22:31 | 1m51s | 22:30 | 170M | - | 3, 7, 4, 6, 0 |<br />
| azubi-1 | yes | 03:39 | 3h24m | 00:15 | 44G | 1.2G | 1, 7, 4,10, 1 |<br />
| dcpant | yes | 03:06 | 1m03s | 03:05 | 1.3G | - | 1, 7, 4,10, 1 |<br />
| eshare | yes | 21:06 | 21m01s | 20:45 | 24G | 178M | 1, 7, 4,10, 1 |<br />
| etherpad | yes | 20:40 | 0m13s | 20:40 | 3.1M | - | 1, 7, 4, 2, 0 |<br />
| fazubi | yes | 08:16 | 11h16m | 21:00 | 132G | 3.3G | 1, 7, 4,10, 1 |<br />
| fschare-rh | yes | de-duplicating (pid 9719) | 1, 7, 4,10, 1 |<br />
| fsh-02 | yes | 07:53 | 12h23m | 19:30 | 182G | 8.5G | 1, 7, 4,10, 1 |<br />
| fshare-ak2 | yes | 00:30 | 0m26s | 00:30 | 415M | 544k | 1, 7, 4,10, 1 |<br />
| helpdesk | yes | 21:27 | 2m16s | 21:25 | 895M | 138k | 1, 7, 4,10, 0 |<br />
| it-share | yes | running rsync (pid 9744) | 1, 7, 1, 0, 0 |<br />
| lightroom | yes | waiting (pid 9528) | 1, 7, 4, 2, 0 |<br />
| localhost | yes | 02:15 | 0m08s | 02:15 | 395k | - | 1, 7, 2, 0, 0 |<br />
| mediawiki | yes | 16:10 | 0m06s | 19:40 | 1.1G | 6.7M | 3, 7, 4,10, 1 |<br />
| mshare | yes | 00:48 | 1h33m | 23:15 | 18G | 2.4G | 1, 7, 4,10, 1 |<br />
| wshare | yes | 00:34 | 1h49m | 22:45 | 23G | 484M | 1, 7, 4,10, 1 |<br />
+--------------+-----+-------+--------+-------+-------+-------+----------------+<br />
2 disabled jobs not listed. Use --all to display.<br />
</pre></small><br />
With --csv, the output is in machine-readable semicolon-separated format.<br />
<br />
'''affa --show-config-pathes [--csv] [JOB JOB ...]'''<br />
Prints the full paths of the files where the sections of the given jobs are defined.<br />
<br />
'''affa --show-default-config'''<br />
Prints a list of all allowed property names with their default values. These values are used, when omitted in the job sections.<br />
<br />
'''affa --show-schedule [--all]'''<br />
Prints a 'graphical' timetable for all enabled jobs. The resolution is 30 minutes. An 'S' character marks the scheduled start times. The duration of the job runs are marked with '=' characters.<br />
<small><pre><br />
Affa version 3.1.0-0 on affa-2.mydomain.com<br />
TIME 12:00 16:00 20:00 0:00 4:00 8:00<br />
fsh-02 -------- -------S ~~~~~~~~ ~~~~~~~~ ~~~~~~~~ --------<br />
ads-ak -------- -------- S------- -------- -------- --------<br />
etherpad -------- -------- -S------ -------- -------- --------<br />
eshare -------- -------- -S~----- -------- -------- --------<br />
fazubi -------- -------- --S~~~~~ ~~~~~~~~ ~~~~~~~~ ~-------<br />
helpdesk -------- -------- --S----- -------- -------- --------<br />
it-share -------- -------- ---S=~~~ ~~~~~~~~ ~~~~~~~- -------- busy<br />
wshare -------- -------- -----S~~ ~~------ -------- --------<br />
ak-user-management -------- -------- -----S-- -------- -------- --------<br />
fschare-rh -------- -------- -----S=- -------- --~----- -------- busy<br />
mshare -------- -------- ------S= ~~------ -------- --------<br />
azubi-1 -------- -------- -------- S=~~~~~~ -------- --------<br />
fshare-ak2 -------- -------- -------- -S------ -------- --------<br />
mediawiki -------- -------- -------- ---S---- -------- ----S---<br />
localhost -------- -------- -------- ----S--- -------- --------<br />
dcpant -------- -------- -------- ------S- -------- --------<br />
lightroom -------- -------- -------- -------- -------- S------- busy<br />
Symbols: S=scheduled K=kill R=resume '='=rsync '~'=dedup<br />
2 disabled jobs not listed. Use --all to display.<br />
</pre></small><br />
<br />
'''affa --log-tail [JOB]'''<br />
Displays the tail of the logfile of job JOB with live update. This command is identical to tail -n 50 -f /path/to/logfile. Without the JOB argument the global logfile is shown.<br />
<br />
'''affa --send-status'''<br />
Sends the status table, the disk-usage and the archive list of all jobs to the email addresses configured in the [GlobalAffaConfig] section.<br />
<br />
'''affa --disk-usage [--csv]'''<br />
Shows the current disk usage of all root dir filesystems.<br />
<small><pre><br />
Affa version 3.1.0-0 on affa-2.mydomain.de<br />
+------+--------+--------+----------------------------------------------------+<br />
| Use% | Used | Avail | Root Dir |<br />
+------+--------+--------+----------------------------------------------------+<br />
| 71% | 938GB | 365GB | /var/affa |<br />
| 20% | 194GB | 759GB | /mnt/affadev |<br />
+------+--------+--------+----------------------------------------------------+<br />
</pre></small><br />
With --csv, the output is printed in a machine-readable semicolon-separated format.<br />
<br />
'''affa --cleanup JOB'''<br />
After you have lowered a keep value, e.g. scheduledKeep, archives with higher indices will no longer be shifted and would otherwise exist forever. This option finds these archives and deletes them after confirmation.<br />
<br />
'''affa --rename-job JOB NEWNAME'''<br />
Renames the job JOB to NEWNAME including the section name and archive directories.<br />
<br />
'''affa --move-archive JOB NEWROOTDIR'''<br />
Moves the archive directory of job JOB to the rootdir NEWROOTDIR and adjusts the value of property RootDir. NEWROOTDIR must be a full path starting with a slash. As moving across filesystems (e.g. from an external USB drive to the local disk) is not possible, Affa uses a copy command in this case and deletes the source directory after that. Depending on the archive size, copying across filesystems can take a long time.<br />
<br />
'''affa --delete-job [--revoke-key] JOB'''<br />
Irreversibly deletes all archives of job JOB and sets the status property to disabled. With the --revoke-key option, the public key on the remote server is also deleted.<br />
<br />
'''affa --revoke-key JOB'''<br />
Deletes the public dsa key on the remote server.<br />
<br />
'''affa --kill JOB'''<br />
Terminates the running job JOB and all its child processes (rsync processes).<br />
<br />
'''affa --killall'''<br />
Terminates all running jobs.<br />
<br />
'''affa --mailtest JOB'''<br />
Sends a test email to the email addresses configured in the JOB section. Use this to verify that your mail system is working.<br />
{{Note box|By default Affa only sends messages on errors, never on success (see property chattyOnSuccess).}}<br />
<br />
'''affa --nrpe [JOB JOB ...]'''<br />
Checks for jobs that have not run within the last NRPRtrigger hours and reports them as failed. A Nagios/ICINGA NRPE compliant message is printed and the exit status returned.<br />
<br />
'''affa --version'''<br />
Displays the Affa version number and checks Samba and Freedup installation.<br />
<br />
'''affa --warranty'''<br />
Displays the disclaimer of liability.<br />
<br />
'''affa --license'''<br />
Displays the license Affa is released under.<br />
<br />
'''affa --help'''<br />
Displays a short help.<br />
<br />
'''affa --debug'''<br />
Enables verbose logging. Overrides job and global configurations.<br />
<br />
==== SME Specific usage and command line options ====<br />
<br />
These commands are implemented in a fork of Affa 3 project for use with SME. They are not part of the Affa 3 for CentOS project.<br />
<br />
'''affa --rise [--all] JOB [ARCHIVE]'''<br />
Runs a full restore on the Affa server <b>(!!!)</b> of all standard files and directories from the backup ARCHIVE of job JOB. In other words: after completion, the Affa box reboots as a clone of the source server. Ensure that the source server has been powered off before you reboot the Affa box, otherwise the network interface will not come up. This is important when you run --rise remotely. The --rise feature only works with SME servers and should only be used on dedicated backup servers.<br />
<br />
With option --all, all files and directories of the archive as defined by the include[] properties are restored. Files or directories with the same name on the Affa server will be overwritten and cannot be restored by an undorise. This should not be an issue on a dedicated Affa server which does not hold any other data. After a possible undorise, this additionally restored data must be removed manually.<br />
<br />
Please note that the rise process backs up the Affa server itself before doing the restore from the archive. This backup is used by a possible undorise run to restore the Affa server. Only the standard files and directories are backed up. Data in non-standard locations (like /opt) is untouched and will still exist after the rise run '''if it doesn't get overwritten by data from the backup''' (e.g. if the backup contains /opt too).<br />
<br />
{{Note box|The above command is not implemented in Affa 3 for CentOS. It has been specifically added to the fork for SME.}}<br />
<br />
'''affa --undo-rise'''<br />
This feature reverts a risen Affa box to a backup server. After a reboot, all configured jobs '''based on standard files and directories''' will work again.<br />
<br />
{{Note box|The above command is not implemented in Affa 3 for CentOS. It has been specifically added to the fork for SME.}}<br />
<br />
=== Example setups ===<br />
==== Dedicated backup server ====<br />
* Setup a dedicated server and install Affa 3 for SME.<br />
* Setup a job for every server you want to backup.<br />
* Send the public keys to every server.<br />
affa --send-key JOBNAME<br />
* Check whether password-less logins are working.<br />
affa --check-connections<br />
* Check whether the scheduled jobs are evenly distributed over the day.<br />
affa --show-schedule<br />
* Create the cron jobs.<br />
affa --make-cronjobs<br />
* Check the status after 24 hours.<br />
affa --status<br />
<br />
==== Backup of single ibays ====<br />
Suppose you want to back up the ibays 'staff1' and 'staff2' on your production server with WAN IP 82.123.1.1 to an Affa server at a different location over the internet every night at 2:30am.<br />
* Log into the Affa server.<br />
* Create a dedicated job file ''/etc/affa/ibay-staff.conf'' . Edit it and set<br />
[ibay-staff]<br />
Description=Backup of ibay-staff on 82.123.1.1<br />
remoteHostName=82.123.1.1<br />
TimeSchedule=0230<br />
SMEServer=no<br />
Include=/home/e-smith/files/ibays/staff1<br />
Include=/home/e-smith/files/ibays/staff2<br />
* Save the configuration<br />
* send the public key to the production server<br />
affa --send-key ibay-staff<br />
* check next morning<br />
affa --list-archives ibay-staff<br />
affa --status<br />
ls /var/affa/ibay-staff<br />
<br />
==== Two production servers backup each other ====<br />
You have two sites connected via a VPN and an SME Server running at each site. In that case you don't need a dedicated Affa backup server. Both production servers can additionally act as Affa backup servers, backing up the server of the other site. Simply install Affa and configure a job that backs up the other one. You can use all Affa features except the rise feature.<br />
<br />
When using the rise feature, the server becomes one of the backed-up systems, which is less useful in this scenario, as it would give you a running copy of the other site's server while this site's own server is down.<br />
<br />
To get redundancy and a faster restore you can configure a local backup to an external USB or NAS device.<br />
<br />
Take care that you do not back up the archives to the other site in turn. Set the RootDir property to a path that is not included in the SME default backup list; leaving RootDir at its default '/var/affa' guarantees this.<br />
<br />
==== Use Affa to backup to a NFS-mounted NAS or a local attached USB drive ====<br />
<br />
{{Note box|This chapter still needs to be checked whether it works with Affa 3 for SME.}}<br />
<br />
You want to back up your SME production server with hostname 'prodbox' and IP 10.200.48.1 to a mounted filesystem instead of setting up a dedicated Affa box.<br />
<br />
===== Setup NAS =====<br />
You have a [http://www.freenas.org FreeNAS] box with IP 10.200.48.2 up and running with NFS service enabled. The disk is mounted to /mnt/affashare. You have authorized the IP address of your prodbox server to access share /mnt/affashare.<br />
* log into the 'prodbox' and install the NFS packages<br />
/usr/bin/yum install --enablerepo=smecontribs smeserver-nfs<br />
* now enable and start the portmapper service<br />
config setprop portmap status enabled<br />
service portmap start<br />
<br />
* mount the NFS share<br />
mkdir -p /mnt/affadevice<br />
mount 10.200.48.2:/mnt/affashare /mnt/affadevice<br />
<br />
<br />
===== Alternatively setup a USB drive =====<br />
* log into the 'prodbox' <br />
* connect a USB hard disk to the USB bus. Now you must determine which device name the kernel has assigned to the drive. View /var/log/messages and search for ''Initializing USB Mass Storage driver''. A few lines below you'll find the name of the device. In this example it is ''sdh''. Replace ''/dev/sdh'' with ''your device'' in the following instructions.<br />
* use the fdisk program to create a linux partition. '''Verify that this is really the attached USB drive before you continue!'''<br />
<br />
fdisk /dev/sdh<br />
You'll most likely find an existing vfat DOS partition, which you have to delete first. In the following we assume that you have created a single partition ''/dev/sdh1''.<br />
* now format the drive with an ext3 filesystem<br />
mkfs.ext3 /dev/sdh1<br />
* make the mount point<br />
mkdir -p /mnt/affadevice<br />
* add the following line to the /etc/fstab<br />
/dev/sdh1 /mnt/affadevice ext3 defaults<br />
* mount the drive<br />
mount /mnt/affadevice<br />
* crosscheck your work using the df command<br />
df<br />
<br />
===== Copying an Affa USB hard drive archive to a new disk =====<br />
Affa uses copious amounts of hard links to conserve disk space for its backups. If you want to copy such a disk archive to a new (bigger) disk, you need to ensure that the hard links are copied correctly, or the destination copy may become significantly bigger than the total of the source archive.<br />
<br />
One way to copy across file systems (i.e. two different USB disks) and preserve the hard links is as follows:<br />
<br />
* mount both USB drives but with different mount points. e.g. /media/backup1 & /media/backup2 and then:<br />
<br />
mkdir /media/backup2/archive <br />
cd /media/backup1/archive<br />
tar cpf - . | ( cd /media/backup2/archive && tar xpf - )<br />
<br />
where '''archive''' is the name of the Affa job you want to move to the other disk.<br />
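You can convince yourself locally that a tar pipe preserves hard links: GNU tar stores the second occurrence of a linked file as a link entry inside the archive. A minimal sketch using throwaway temporary directories (not a real Affa archive):<br />

```shell
set -e
src=$(mktemp -d); dst=$(mktemp -d)
mkdir "$src/daily.0" "$src/daily.1"
echo data > "$src/daily.0/report"
# hard-link the file into the next archive, as Affa does for unchanged files
ln "$src/daily.0/report" "$src/daily.1/report"
( cd "$src" && tar cpf - . ) | ( cd "$dst" && tar xpf - )
# a link count of 2 shows the copied files still share one inode
stat -c %h "$dst/daily.0/report"   # -> 2
```

A plain `cp -r` would instead create two independent files, doubling the space used by every unchanged file.<br />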
<br />
===== Setup Affa =====<br />
You want to run backups at 11:30, 15:30 and 19:30, and you want to keep the last 3 scheduled backups, 7 daily, 5 weekly, 12 monthly and 1 yearly backups.<br />
* log into the 'prodbox' and install the Affa packages as described above.<br />
* create/edit the conf file of the job: nano /etc/affa/JOBNAME.conf<br />
and set:<br />
remoteHostName=localhost<br />
TimeSchedule=1130<br />
TimeSchedule=1530<br />
TimeSchedule=1930<br />
scheduledKeep=3<br />
dailyKeep=7<br />
weeklyKeep=5<br />
monthlyKeep=12<br />
yearlyKeep=1<br />
RootDir=/mnt/affadevice<br />
Review the other properties and change them to your needs.<br />
* run the job manually<br />
affa --run prodbox<br />
<br />
<br />
===== Limitations =====<br />
With this kind of setup you cannot use the Affa rise feature, as it requires the backup archive to be located on the '''same filesystem''' as the server installation. The rise option uses hardlinks, which do not work across filesystems.<br />
<br />
===== Automount =====<br />
Having the backup archives on the same filesystem is always a risk, which can be minimized by using the automount feature. The external filesystem is then only mounted during a job run.<br />
<br />
In the NAS example set<br />
AutomountDevice=10.200.48.2:/mnt/affashare<br />
AutomountPoint=/mnt/affadevice<br />
and skip the manual mount of the NFS share.<br />
<br />
In the USB drive example set<br />
AutomountDevice=/dev/sdc1<br />
AutomountPoint=/mnt/affadevice<br />
and skip the mount point, fstab, mount and df steps.<br />
<br />
The mount point will be automatically created, if it does not exist. <br><br />
To access the archive directory, you need to mount it manually.<br />
<br />
=== Restore ===<br />
==== Restore single files or directories ====<br />
'''Example 1:''' It's Tuesday, January 6th 2009, when user 'briedlin' asks you to restore the messages of his mailbox 'orders', which he accidentally deleted on Monday.<br />
<br />
* You first must check what backup archives are available. The jobname of this server backup is 'primmail'. To get a listing of all archives run <br />
affa --list-archives primmail<br />
''(see the example listing in chapter [[Affa#Usage_and_command_line_options]])''<br />
* Choose the daily.0 archive, which was created Monday night.<br />
* Restore the mailbox 'orders' by running the rsync command ''(note the trailing slash!)'' on the Affa backup server:<br />
export RDIR=/home/e-smith/files/users/briedlin/Maildir/.orders/ ''# this variable is used to shorten the next command line''<br />
rsync -av /var/affa/primmail/daily.0/$RDIR 10.204.48.1:$RDIR<br />
<br />
If the servers are configured to use a different ssh port eg 2222, then instead do:<br />
<br />
export RDIR=/home/e-smith/files/users/briedlin/Maildir/.orders/ ''# this variable is used to shorten the next command line''<br />
rsync -av -e 'ssh -p 2222' /var/affa/primmail/daily.0/$RDIR 10.204.48.1:$RDIR<br />
<br />
<br />
<br />
'''Example 2:''' A user has deleted the file orderform.pdf from ibay 'docs' on the server 10.204.48.1 and asks you to restore it.<br />
<br />
* You have searched and found the latest version of this file in a weekly archive of job 'prodserv'.<br />
* To copy it back to the server 10.204.48.1 run on the Affa server<br />
export RFILE=/home/e-smith/files/ibays/docs/files/orderform.pdf ''# this variable is used to shorten the next command line''<br />
rsync -av /var/affa/prodserv/weekly.1/$RFILE 10.204.48.1:$RFILE<br />
<br />
* If the servers are configured to use a different ssh port eg 2222, then instead do:<br />
export RFILE=/home/e-smith/files/ibays/docs/files/orderform.pdf ''# this variable is used to shorten the next command line''<br />
rsync -av -e 'ssh -p 2222' /var/affa/prodserv/weekly.1/$RFILE 10.204.48.1:$RFILE<br />
<br />
<br />
<br />
'''Example 3:''' Restoring a file with special characters in the path<br />
<br />
Do not use shell variables to shorten the command; it would complicate things more than it would help.<br />
Quote the source and destination paths. In the destination path, escape blank characters with a double backslash and brackets with a single backslash. On the Affa backup server do:<br />
rsync -av "/var/affa/fileshare/daily.4/home/e-smith/files/ibays/mechfiles/files/Valve Control (Design Gr)/VALVE LIST FOR ISSUED.xls" "192.168.1.7:/home/e-smith/files/ibays/mechfiles/files/Valve\\ Control\\ \(Design\\ Gr\)/VALVE\\ LIST\\ FOR\\ ISSUED.xls"<br />
<br />
If the servers are configured to use a different ssh port eg 2222, then instead do:<br />
<br />
rsync -av -e 'ssh -p 2222' "/var/affa/fileshare/daily.4/home/e-smith/files/ibays/mechfiles/files/Valve Control (Design Gr)/VALVE LIST FOR ISSUED.xls" "192.168.1.7:/home/e-smith/files/ibays/mechfiles/files/Valve\\ Control\\ \(Design\\ Gr\)/VALVE\\ LIST\\ FOR\\ ISSUED.xls"<br />
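The double backslashes are needed because the destination path is evaluated twice: once by the local shell (which reduces \\ to \) and once by the shell that rsync spawns on the remote side. The round trip can be reproduced locally, with eval standing in for the remote shell:<br />

```shell
arg="Valve\\ Control\\ \(Design\\ Gr\)"    # as typed inside the double quotes above
echo "$arg"                 # after the local shell: Valve\ Control\ \(Design\ Gr\)
eval "printf '%s\n' $arg"   # after the 'remote' shell: Valve Control (Design Gr)
```

The source path only passes through the local shell once, which is why plain double quotes are enough on that side.<br />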
<br />
==== Full restore ====<br />
Generally: <br />
'''affa --full-restore [optional settings] JOB [ARCHIVE]'''<br />
This rsyncs the data (files and directories) from the backup ARCHIVE back to the 'remoteHostName' defined in the configuration of the job JOB.<br />
If ARCHIVE is not given, the archive 'scheduled.0' is used as the default.<br />
The --full-restore reconstructs the server as it was at the time of the backup and takes the following important optional settings into consideration:<br />
<br />
With option [--preserve-newer=yes] files on the remote server with modification time newer than on the backup are not overwritten.<br />
<br />
With option [--delete=yes] all files on the remote server, which are not in the backup, are deleted.<br />
<br />
If the parameter "SMEServer=yes" is set, signal-event pre-restore and signal-event post-upgrade will be run automatically.<br />
<br />
After the restore is done, the restored server reboots automatically. <br />
<br />
Example:<br />
You have backed up your production server 'prodsrv' as job 'prodbox'. To restore only lost data from the latest backup, run<br />
affa --full-restore prodbox<br />
<br />
To restore exactly from the older archive daily.3 run <br />
affa --full-restore [--preserve-newer=no] [--delete=yes] prodbox daily.3<br />
<br />
{{Warning box| A full restore [--preserve-newer=no] [--delete=yes] reconstructs the server as it was at the time of the backup. That means, that all files created or server configuration changes made after the backup will be lost! }}<br />
<br />
==== Moving a SME server installation to new hardware using the Affa rise feature ====<br />
Please see this Howto: [[Moving SME to new Hardware]]<br />
<br />
<br />
==== Restore from USB drive on new server ====<br />
<br />
This tip comes from http://forums.contribs.org/index.php?topic=42412.0<br />
<br />
Q) I have complete backups using affa stored on a usb hard drive connected to our affa backup server. I need to restore an earlier monthly backup of our job "mailbackup" to a test server rather than back to the original system. If I did it from the backup server I see the instructions of how to rise that server to the current backup on that server but I want to restore a point about a month ago before some strange things happened. And I want to do it on a machine that is not our backup server or our production server. I tried to figure out how but am lost in the options. My goal is to do some testing.<br />
<br />
A) On your testserver, set up a job mailbackup identical to that on your backup server, but set property RootDir to /var/affa and property status to disabled. Connect the USB drive and copy the archive of the job mailbackup to /var/affa. Then run affa --rise mailbackup ARCHIVE<br />
<br />
Remember that Affa places job setup scripts into the archive directories. In your case it is mailbackup-setup.pl. Copy it to your testserver, change the RootDir and status properties and execute it.<br />
<br />
=== FAQ ===<br />
'''What files and directories are included by default?'''<br />
<br />
With SMEServer=no nothing at all.<br />
<br />
With SMEServer=yes the SME default backup list. <br />
<br />
{{Note box|The list must be added!}}<br />
<br />
<br />
{{Note box|The following example needs to be reworked for Affa 3.}}<br />
'''Can I exclusively back up image files from a specific directory?'''<br />
Yes. Assuming you want to back up all gif and jpg files from the directory /home/e-smith/files/ibays/pictures/files, use this configuration:<br />
db affa setprop JOBNAME SMEServer no # do not include SME Server default directories and files<br />
db affa setprop JOBNAME Include[0] '/home/e-smith/files/ibays/pictures/files' # start searching here<br />
db affa setprop JOBNAME Include[1] '*/' # search the tree recursively<br />
db affa setprop JOBNAME Include[2] '*.gif' # copy files that match these patterns<br />
db affa setprop JOBNAME Include[3] '*.jpg' <br />
db affa setprop JOBNAME Exclude[0] '*' # exclude all others<br />
db affa setprop JOBNAME rsyncOptions '-m' # do not create empty folders<br />
<br />
<br />
'''How can I move a backup archive from my production server in a remote branch office to the Affa server using a USB disk? I want to avoid downloading the first full backup over the internet.'''<br />
Install Affa on your production server, connect and mount the USB disk. Setup a job to write the archive to the USB disk. After the job run, connect the USB disk to your Affa server and setup the job in the same way. Use the setup script from the archive directory. You only need to change the remoteHostName property. Now use the --move-archive option to move the archive to the local disk. If not already done, send the public key to your production server. You're done.<br />
<br />
<br />
'''How do I back up two SME servers behind a firewall?'''<br />
First you need to configure port forwarding for the ssh service on your firewall. Use a non-standard port, e.g. 2200->22, for your second server. Set up a job on your Affa server for each of your production servers. Set the job property sshPort=2200 for the second server.<br />
<br />
=== Uninstall ===<br />
This removes the installed Affa package, all configuration data and all backup archives.<br />
<br />
Run the following commands for each job:<br />
affa --revoke-key JOBNAME<br />
affa --delete-job JOBNAME<br />
Verify that all jobs have been deleted:<br />
affa --status<br />
Remove the Affa package:<br />
 yum remove smeserver-affa-3.2.2.1-0.noarch<br />
Cleaning up:<br />
rm -f /etc/cron.d/affa<br />
rm -rf /etc/affa<br />
rm -rf /var/affa /var/log/affa<br />
<br />
=== Troubleshooting ===<br />
==== Affa stops working after prodserver was rebuilt ==== <br />
'''Description:'''<br />
* Affa stopped working or is broken.<br />
* When the affaserver tries to make an ssh connection to the prodserver, it fails.<br />
* In the /var/log/affa/''jobname''.log file you will see this message: "SSH connection to ''prodserver ip'' failed. Did you send the public key".<br />
* Resending the keys does not solve the problem. <br />
<br />
'''Solution:'''<br />
* This is probably because the key for prodserver that is stored on the affaserver no longer matches the actual key of prodserver.<br />
* You need to remove the existing key so that the next ssh connection fetches the new key from prodserver.<br />
* On the command line, edit the /root/.ssh/known_hosts file and remove the line for prodserver. It starts with: ''prodserver ip'' ssh-rsa<br />
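Instead of editing the file by hand, the stale entry can also be removed with ssh-keygen -R, which keeps a known_hosts.old backup. A sketch against a throwaway file; 10.0.0.5 stands in for the prodserver IP and the key is a fake placeholder:<br />

```shell
kh=$(mktemp)   # stand-in for /root/.ssh/known_hosts
echo '10.0.0.5 ssh-rsa AAAAB3NzaC1yc2EFAKEPLACEHOLDERKEY' > "$kh"
ssh-keygen -R 10.0.0.5 -f "$kh"    # removes all lines matching that host
grep '^10.0.0.5' "$kh" || echo "entry removed"
```

On the affaserver itself, running ssh-keygen -R ''prodserver ip'' as root operates on /root/.ssh/known_hosts by default.<br />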
<br />
=== Bug report ===<br />
Affa is listed in the [http://bugs.contribs.org/enter_bug.cgi?product=SME%20Contribs&component=smeserver-affa bugtracker contribs section].<br />
Please report all bugs, new feature requests and documentation issues there.<br />
<br />
<br />
{{#bugzilla:columns=id,product,version,status,summary |sort=id |order=desc |disablecache=1|component=smeserver-affa|noresultsmessage="No open bugs found."}}<br />
<br />
=== Additional information ===<br />
For details of performance, changelog etc see [[:Affa:Additional information]]<br />
<br />
=== References ===<br />
* Wikipedia article http://en.wikipedia.org/wiki/Rsync<br />
* Rsync homepage http://rsync.samba.org/<br />
* Wiki article [[Moving SME to new Hardware]]<br />
* Wiki article [[Backup with Affa and FreeDup]]<br />
* Wiki article [[Rsyncd setup on a windows computer for use with Affa backup]]<br />
* Wiki article [[Backup Windows Computers Using Affa]]<br />
* Wiki article [[Backup of ESXi Virtual Machines using Affa]]<br />
<br />
----<br />
[[Category: Contrib]]<br />
[[Category: Backup]]</div>Arnaudhttps://wiki.koozali.org/index.php?title=Update_contribs&diff=32839Update contribs2017-03-11T15:27:08Z<p>Arnaud: </p>
<hr />
<div>{{usefulnote}}<br />
==Update contribs==<br />
<br />
===Background===<br />
<br />
Due to dependencies, installing contribs very often requires installing rpms coming from other repositories.<br />
<br />
Updating the installed contribs sometimes requires updating these dependencies, and sometimes only the dependencies get updated.<br />
<br />
To avoid serious compatibility problems, it is very important that each rpm is updated only from the repository it was installed from, instead of simply taking the most recent release found across all available repositories.<br />
<br />
Yum doesn't do this job in one command, so small scripts must be used for this purpose.<br />
<br />
===Scripts===<br />
Two scripts are available at this time:<br />
<br />
====script 1====<br />
<br />
#!/bin/bash<br />
<br />
echo "List of available repos beside SME:";<br />
#list the disabled repos<br />
for repo in $(yum repolist disabled |awk '$1 !~ /id|Modules|repolist:/ {print $1}'); do<br />
<br />
echo $repo;<br />
<br />
done<br />
echo "";<br />
<br />
# repo selection<br />
index=0;<br />
repos="";<br />
proceed="proceed";<br />
<br />
until [ "$repos" == "$proceed" ]; do<br />
<br />
repos[$index]=$repos;<br />
((index=index+1));<br />
read -p 'Enter 1x repo name for selection or "proceed" to start the updating: ' repos;<br />
<br />
done<br />
<br />
# remove the repo "proceed"<br />
repos=("${repos[@]:1}");<br />
<br />
<br />
for repo in "${repos[@]}"; do<br />
echo "";<br />
echo "======================================";<br />
echo -e "\tUPDATE from repo: "$repo;<br />
echo "======================================";<br />
<br />
# generate the list of rpm installed from the repo<br />
for rpm in $(/sbin/e-smith/audittools/newrpms |awk -v repo_awk=@$repo 'repo_awk==$3 {print $1}'); do<br />
<br />
rpms=$rpm' '$rpms<br />
<br />
done<br />
<br />
# updating<br />
yum --enablerepo=$repo update $rpms;<br />
<br />
done<br />
<br />
exit 0<br />
<br />
Using this script is very easy:<br />
* you get the list of all available repositories<br />
* enter the names of the repos you want to update from, one at a time<br />
* enter 'proceed' after the last repo<br />
* for each repo, yum shows the list of rpms that could be updated and asks for confirmation (Y/N) before starting<br />
<br />
eg.:<br />
List of available repos beside SME:<br />
centosplus<br />
contrib<br />
epel<br />
extras<br />
fasttrack<br />
fws<br />
remi<br />
smecontribs<br />
smedev<br />
smetest<br />
smeupdates-testing<br />
sogo3<br />
<br />
Enter 1x repo name for selection or "proceed" to start the updating: epel<br />
Enter 1x repo name for selection or "proceed" to start the updating: fws<br />
Enter 1x repo name for selection or "proceed" to start the updating: proceed<br />
<br />
====script 2====<br />
This script is much shorter, runs faster and doesn't require entering the names of the individual repos:<br />
<br />
#!/bin/bash<br />
<br />
for repo in $(/sbin/e-smith/audittools/newrpms |grep \@ |awk ' {print $3}' |sort -u |sed s/@//); do<br />
<br />
# generate the list of rpm installed from the repo<br />
for rpm in $(/sbin/e-smith/audittools/newrpms |awk -v repo_awk=@$repo 'repo_awk==$3 {print $1}'); do<br />
rpms=$rpm' '$rpms<br />
done<br />
<br />
echo -e "\n\n===";<br />
echo -e "Repo: "$repo;<br />
echo -e "\nIncludePkgs: "$rpms;<br />
echo "===";<br />
# updating<br />
yum --enablerepo=$repo --setopt="$repo.includepkgs='$rpms'" update<br />
<br />
done<br />
<br />
exit 0<br />
<br />
The script<br />
* runs 'newrpms' to get all repos that have been used to install non-standard packages<br />
* creates a list of rpms for each such repo<br />
* uses "--setopt" to specify "includepkgs" for each repo during the update<br />
* asks for Y/N and runs the update<br />
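The pipeline the script builds its repo list from can be exercised with sample data. The lines below only mimic the 'name ... @repo' column layout of the newrpms output (the package names are made up):<br />

```shell
sample='smeserver-foo-1.0-2.noarch installed @epel
bar-2.3-1.x86_64 installed @fws
baz-0.9-4.noarch installed @epel
stock-pkg-1.1-1.noarch installed'
# keep lines carrying an @repo tag, take column 3, de-duplicate, strip the @
echo "$sample" | grep @ | awk '{print $3}' | sort -u | sed s/@//
```

This prints one repo name per line (here epel and fws); packages without an @repo tag, i.e. those installed from the stock repositories, are filtered out by the grep.<br />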
<br />
For more details, see the topic of the forum: https://forums.contribs.org/index.php/topic,52795.0.html<br />
<br />
[[Category: Howto]]</div>Arnaudhttps://wiki.koozali.org/index.php?title=Update_contribs&diff=32838Update contribs2017-03-11T15:20:02Z<p>Arnaud: Created page with "{{usefulnote}} ==Update contribs== ===Backgrund=== Due to dependancies, the installation of contribs requires very often the installation of rpms coming from other repositor..."</p>
<hr />
<div>{{usefulnote}}<br />
[[Category: Howto]]</div>Arnaudhttps://wiki.koozali.org/index.php?title=SharedFolders&diff=32352SharedFolders2016-12-08T17:55:24Z<p>Arnaud: /* Encryption */</p>
<hr />
<div>{{Languages}}<br />
{{usefulnote}}<br />
===Maintainer===<br />
[[User:VIP-ire|Daniel B.]]<br/><br />
[http://www.firewall-services.com Firewall Services]<br><br />
mailto:daniel@firewall-services.com<br />
<br />
=== Version ===<br />
<br />
{{ #smeversion: smeserver-shared-folders }}<br />
<br />
=== Description ===<br />
<br />
Shared folders are more or less like ibays (I took a lot of code from the ibay module), but with more flexibility on file permissions, as they support ACLs. There are also some additional options in the panel.<br />
<br />
=== Requirements ===<br />
*Min SME Server 7.X<br />
<br />
=== Installation 8.x and 9.x===<br />
<br />
*install the rpms<br />
<br />
yum --enablerepo=smecontribs install smeserver-shared-folders<br />
<br />
*Mount your file system(s) with the ACL option<br />
<br />
If you use a standard SME server installation:<br />
signal-event post-upgrade; signal-event reboot<br />
should do the trick.<br />
If you don't like to reboot your server every time you install something (just like me :)), you can just run:<br />
expand-template /etc/fstab<br />
Check your file system mounted in /home/e-smith/files/ has the acl option, for example:<br />
<br />
# This file is edited by fstab-sync - see 'man fstab-sync' for details<br />
LABEL=/ / ext3 usrquota,grpquota,'''acl''' 1 1<br />
LABEL=/boot /boot ext3 defaults 1 2<br />
none /dev/pts devpts gid=5,mode=620 0 0<br />
none /dev/shm tmpfs defaults 0 0<br />
none /proc proc defaults 0 0<br />
none /sys sysfs defaults 0 0<br />
LABEL=SWAP-rd/c0d0p3 swap swap defaults 0 0<br />
<br />
You may have to manually add the '''acl''' option in /etc/fstab if you have mounted another file system here. Once the option is in fstab, you can just remount the partitions, for example:<br />
mount -o remount /<br />
or<br />
mount -o remount /home/e-smith/files<br />
<br />
Check everything is ok:<br />
<br />
[root@sme ~]# mount<br />
/dev/sda2 on / type ext3 (rw,usrquota,grpquota,'''acl''')<br />
none on /proc type proc (rw)<br />
none on /sys type sysfs (rw)<br />
none on /dev/pts type devpts (rw,gid=5,mode=620)<br />
usbfs on /proc/bus/usb type usbfs (rw)<br />
/dev/rd/c0d0p1 on /boot type ext3 (rw)<br />
none on /dev/shm type tmpfs (rw)<br />
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)<br />
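The check above can also be scripted; a minimal sketch (not part of the contrib) that reads `mount`-style output and succeeds if the given mount point carries the acl option:

```shell
# has_acl MOUNTPOINT  -- reads the output of `mount` on stdin and
# succeeds if the options field (the last, parenthesised field) of the
# line for MOUNTPOINT contains the acl option.
has_acl() {
  awk -v mp="$1" '$2 == "on" && $3 == mp { print $NF }' | grep -q '[(,]acl[,)]'
}
```

Usage on a live server, e.g. <code>mount | has_acl /home/e-smith/files && echo "acl enabled"</code>.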
<br />
=== Storage ===<br />
All the shares you create will be stored in /home/e-smith/files/shares/<sharename>/files<br />
The trailing /files directory keeps the same structure as ibays. There are no html and cgi-bin directories. You'll never see the "files" directory, as the root (for SMB and HTTP) will always be /home/e-smith/files/shares/&lt;sharename&gt;/files.<br />
<br />
<br />
=== Screenshots ===<br />
<br />
[[File:shares_main.png|center|Shared Folders main menu]]<br />
<br />
[[File:shares_modify.png|center|Create or modify a shared folder]]<br />
<br />
[[File:Shared_folders_permissions.png|center|Configure permissions]]<br />
<br />
=== Features ===<br />
A lot of options are available in the panel. When you create a new shared folder, or modify an existing one, the page is divided into up to 5 parts (depending on other packages you might have installed, like encfs and Pydio):<br />
* The first part lets you enter a name and a comment for this shared folder. This part works exactly the same way as for ibays. The name is initially limited to 12 characters. You can raise this limit with<br />
db configuration set maxShareNameLength 16<br />
<br />
* The second part lets you configure access to this shared folder using the SMB/CIFS protocol. There are three drop-down menus:<br />
**'''SMB Access''' lets you configure the type of access: '''none''' (no access), '''enabled and browse-able''' (enabled, and visible if you browse available shares of the server) or '''enabled, hidden''' (enabled, but you need to know the exact name to access it)<br />
**'''Recycle bin''' lets you configure the network recycle bin option available in samba. Available options are '''none''' (recycle bin is disabled), '''enabled, keep only the latest version''' (enable the recycle bin, but only keep the latest version if two files with the same name are deleted), or '''enabled, keep a copy of all versions''' (enable the recycle bin and keep a copy of every file deleted). When the recycle bin is enabled and a user deletes a file, instead of removing it from the server, samba will move it to the '''Recycle Bin''' directory at the top of the shared folder.<br />
**'''Recycle Bin Retention''': lets you define the retention time of files in the recycle bin. After this period of time, files are deleted.<br />
<br />
* The third part lets you configure access to this shared folder using the HTTP/HTTPS protocol. There are also several options:<br />
**'''Web Access''': works like ibays. The only difference is that when a password is required, users have to use their own login/password instead of a global one. Users need at least read access to this share, or they won't be allowed in. This setting only applies to web access, not FTP.<br />
**'''WebDav Support''': Enables WebDAV on this share. Only effective if Web Access is enabled. Write access with WebDAV will always require a login.<br />
**'''Force secure connections''': if enabled, all the requests will be redirected to use the HTTPS protocol. The redirection is transparent. This is highly recommended if a password is required.<br />
**'''Indexes''': Lets you control whether the server should generate an index of all the files when no index.html file is found. You can disable it if you don't want everyone to have access to the list of files. This setting is global for the shared folder: if enabled, any directory which doesn't have an index.html file will be indexed.<br />
**'''Dynamic content execution (PHP, CGI, SSI)''': this enables the execution of PHP scripts. If disabled, files with php, php3, phtml, cgi or pl as extension won't be allowed. If enabled, you can create a directory named "cgi-bin" in your shared folder; it will allow execution of CGI scripts<br />
<br />
{{warning box|IP addresses and networks which are allowed to access the server-manager (in Security -> Remote Access -> Remote Management) will be considered local for web access. This means those IP and networks will have access to shared folders which are restricted to local networks}}<br />
<br />
=== Pydio ===<br />
If you install the [[Pydio]] contrib, you'll have a new option displayed in the configuration of the shared folders. With this, you can easily enable access to any shared folder through Pydio. This is only available on SME8.<br />
<br />
==== Limitations of Pydio ====<br />
* You cannot work with files bigger than 2GB; this is a limitation of PHP on 32-bit systems<br />
* Files uploaded with Pydio will not be counted towards quota limitations (the same applies to WebDav uploads). This is because the file will belong to the www user, and not the real user.<br />
<br />
=== Encryption ===<br />
<br />
Since smeserver-shared-folders-0.1-50, you can now encrypt all the data stored in a shared folder. For this to work, you first have to install fuse-encfs from the [[Epel|EPEL]] repository<br />
<br />
{{:epel|transcludesection=epel8}}<br />
<br />
After adding it to the database, update the configuration files by issuing:<br />
<br />
signal-event yum-modify<br />
<br />
To install fuse-encfs on sme8<br />
yum --enablerepo=epel install fuse-encfs<br />
<br />
<br />
To install fuse-encfs on sme9<br />
yum --enablerepo=smecontribs --enablerepo=epel install fuse-encfs<br />
<br />
After installing fuse-encfs, make sure the fuse module is loaded<br />
lsmod | grep fuse<br />
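If the module is not listed, you can try loading it manually (a hedged suggestion; on a standard kernel the module is simply named fuse):

```shell
# load the fuse kernel module, then confirm it is present
modprobe fuse
lsmod | grep fuse
```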
<br />
Now, when you create a new shared folder, you have a new option to encrypt the data. You'll have to type a password and select '''enabled''' for the encryption.<br /><br /><br />
<br />
<br />
'''An encrypted shared folder can be "protected" or "enabled", and each state exposes a different, separate data set''' (a shared folder holds one data set for "enabled" and another for "protected"):<br />
* When the share is "protected", you can write to it: files written to the server while the share is protected will not be encrypted; instead, they'll be stored directly in the files folder of the share.<br />
* When the share is "enabled", the .store folder is mounted over the files folder, using encfs. When this happens, the cleartext data set (stored directly in files) is hidden, and you have access to a second, completely separate data set. The data you write while the share is enabled is encrypted on the fly and stored in the .store folder.<br />
:As soon as you "protect" the share (or when the inactivity timeout occurs), the encrypted volume is unmounted, and the unencrypted share becomes available again<br />
<br />
Changing the state of an encrypted shared folder can be done through the 'Shared Folder Encryption' panel in the server-manager. <br />
You can grant access to this panel to all users using the [[UserManager]] contrib. Users will only be able to toggle the mode of shared folders they have access to (and of course, they need to know the correct password).<br />
<br />
'''''Note:''''' If already mounted, Samba loses access to the shared folder when its "protected"/"enabled" status is modified. The share must then be unmounted and mounted again to get access to the other data set.<br />
<br />
<br />
==== Limitations with encryption ====<br />
Before using encryption on a shared folder, you should be aware of some limitations:<br />
* Encryption is done on the fly in user-space using encfs, accessing data may be a bit slower when encryption is enabled<br />
* Encryption can only be enabled at creation time; it's not possible to encrypt an existing shared folder<br />
* Read Only permission is '''not''' supported on encrypted shared folders, only full read/write access is possible (this is a limitation of fuse, which doesn't pass ACL permissions). If you set read-only access for some groups, it'll just be ignored<br />
* The current implementation doesn't provide any option to change the password from the web GUI. If you want to change the password, you can do so from the command line with the encfsctl utility<br />
* There's '''NO WAY''' to recover the data if you forget the password.<br />
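A hedged sketch of changing the password from the command line with encfsctl's passwd subcommand (this assumes the encrypted store is the .store directory described above; protect the share first so the encrypted volume is unmounted):

```shell
# <sharename> is a placeholder -- adjust the path to your share
encfsctl passwd /home/e-smith/files/shares/<sharename>/.store
```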
<br />
=== Additional options ===<br />
<br />
Some advanced settings are not available on the panel, but only with db commands:<br />
{| class="wikitable"<br />
|-<br />
! Database !! Key !! Key type <br />
|-<br />
| accounts<br />
| [name of share]<br />
| share<br />
|}<br />
<br />
<br />
*Various options<br />
{| class="wikitable"<br />
|-<br />
! Option !! Value !! Default !! Description<br />
|-<br />
| ManualPermissions<br />
| enabled / disabled<br />
| disabled<br />
| if you set this key to enabled, the permissions will only affect web access (HTTP, WebDav, Pydio), and will not touch the permissions on the file system. This can be used to host web applications with custom permissions.<br />
|-<br />
| Hide<br />
| yes / no<br />
| no<br />
| If set to yes, this share will not be listed in the server-manager<br />
|-<br />
| Removable<br />
| yes / no<br />
| yes<br />
| If set to no, you won't be able to remove this share through the server-manager<br />
|-<br />
| Audit<br />
| enabled / disabled<br />
| enabled<br />
| Enable or disable activity logging in /var/log/messages<br />
|}<br />
<br />
*Options for Web access:<br />
<br />
**'''AllowOverride''': You can configure the AllowOverride directive of apache if web access is enabled. See this [http://httpd.apache.org/docs/2.0/mod/core.html#allowoverride page] for a list of available options<br />
**'''FollowSymLinks''': (enabled|disabled). Should Apache follow symlinks?<br />
**'''PHPRegisterGlobals''': (enabled|disabled). Enable PHP register global for this share.<br />
**'''PHPAllowUrlFopen''': (enabled|disabled). Enable PHP allow_url_fopen for this share.<br />
**'''PHPMemoryLimit''': Set a memory limit for PHP. This limit only applies to this share<br />
**'''PHPMaxExecutionTime''': Set an execution time limit for PHP. This limit only applies to this share<br />
<br />
*Options for SMB access:<br />
**'''cscPolicy''': (disabled|manual|documents|programs). Controls client-side caching. Don't touch this if you don't know what it means; it can have unexpected behaviour.<br />
**'''OpLocks''': (enabled|disabled). Control the '''oplocks''' option of samba (default is enabled)<br />
**'''VetoOplockFiles''': Control the '''veto oplock files''' of samba<br />
<br />
<br />
Example:<br />
db accounts setprop myshare AllowOverride All PHPMemoryLimit 50M PHPMaxExecutionTime 600<br />
signal-event share-modify myshare<br />
<br />
=== Missing features ===<br />
Even if this contrib brings some new options compared to ibays, some others are missing:<br />
<br />
* There's no access using the atalk protocol. I don't have any Mac OS machine here to test the feature, and anyway newer Mac OS versions can access SMB shares, so I don't think it's a big problem<br />
* there's no support for [[ShadowCopy]]. I may add this in the future<br />
* You cannot choose a shared folder as the content of a virtual domain. This is because it'd require a modification of a core package<br />
<br />
=== Backup and Restore ===<br />
The standard backup process should back up all your shared folders. If you have to restore the data to another server, you'll have to re-install the contrib. Once everything is restored, you'll have to run '''signal-event share-modify-files''' for each share, because permissions on shared folders use extended ACLs, which are not backed up by most tools. This is not a big problem because the ACLs are also stored in the accounts DB. The signal-event will just re-apply the configured ACLs to the files:<br />
<br />
for SHARE in $(db accounts printtype | grep share | cut -d'=' -f1); do<br />
signal-event share-modify-files $SHARE<br />
done<br />
<br />
=== Uninstall ===<br />
If you want to remove the contrib, just run:<br />
yum remove smeserver-shared-folders<br />
<br />
You need to manually remove the files in /home/e-smith/files/shares if you want to remove every share created. You may also want to edit /etc/fstab to remove the acl option.<br />
<br />
=== Source ===<br />
The source for this contrib can be found in the smeserver [http://smeserver.cvs.sourceforge.net/viewvc/smecontribs/rpms/smeserver-shared-folders/ CVS] on sourceforge.<br />
<br />
=== Bugs ===<br />
Please raise bugs under the SME-Contribs section in [http://bugs.contribs.org/enter_bug.cgi bugzilla]<br />
and select the smeserver-shared-folders component or use {{BugzillaFileBug|product=SME%20Contribs|component=smeserver-shared-folders|title=this link}}<br />
<br />
Below is an overview of the current issues for this contrib:<br />
{{#bugzilla:columns=id,product,version,status,summary<br />
|sort=id<br />
|order=desc<br />
|component=smeserver-shared-folders<br />
|noresultsmessage=No open bugs found.}} <br />
<br />
----<br />
[[Category:Contrib]]<br />
[[Category:Administration:File and Directory Access]]</div>Arnaudhttps://wiki.koozali.org/index.php?title=Repairing_RAID_of_the_swap&diff=30433Repairing RAID of the swap2016-02-11T14:52:40Z<p>Arnaud: </p>
<hr />
<div>== Repairing manually the RAID of the swap ==<br />
<br />
Author: Arnaud<br />
<br />
=== Requirements: ===<br />
{{Warning box|'''This howto works for SME 9.1, raid1, nolvm, and only for the RAID device holding the swap.<br />
Some adaptations may be necessary for other versions of SME or for other RAID and LVM parameters!'''}}<br />
<br />
Because the SME is able to run without swap, the job can be done directly from the running SME, without any LiveCD or Rescue-mode.<br />
<br />
Adapt the names of the partitions (hdX, sdX, etc.) to your case.<br />
<br />
=== The starting point: the device for the swap can't get sync ===<br />
It can occur that the RAID refuses to resync completely when a new disk has been added to the SME and this disk is "a little bit" smaller than the disk that is already running. <br />
The RAID sync (e.g. started from the console) works for "/" and for "/boot", but not for the swap, because some space is missing on the added disk.<br />
<br />
==== Look at the current state of the RAID: ====<br />
<br />
# cat /proc/mdstat<br />
Personalities : [raid1] <br />
md0 : active raid1 vda1[0] vdb1[2]<br />
255936 blocks super 1.0 [2/2] [UU]<br />
<br />
md2 : active raid1 vda3[0]<br />
2095104 blocks super 1.1 [2/1] [U_]<br />
<br />
md1 : active raid1 vda2[0] vdb3[2]<br />
18600960 blocks super 1.1 [2/2] [UU]<br />
bitmap: 0/1 pages [0KB], 65536KB chunk<br />
<br />
unused devices: <none> <br />
<br />
As the console also indicates, md2 runs with only 1 disk (vda3). The disk vdb2 is missing from the RAID.<br />
The reason is:<br />
# mdadm --manage /dev/md2 --add /dev/vdb2<br />
 mdadm: /dev/vdb2 not large enough to join array (the second disk was added to the machine afterwards)<br />
<br />
=== The RAID array: ===<br />
* stop the swap:<br />
# swapoff -a <br />
<br />
* get details about /dev/md2 and remember the UUID and the name of the RAID. In my case:<br />
# mdadm --detail /dev/md2<br />
/dev/md2:<br />
Version : 1.1<br />
Creation Time : Mon Feb 1 21:42:29 2016<br />
Raid Level : raid1<br />
Array Size : 2095104 (2046.34 MiB 2145.39 MB)<br />
Used Dev Size : 2095104 (2046.34 MiB 2145.39 MB)<br />
Raid Devices : 2<br />
Total Devices : 1<br />
Persistence : Superblock is persistent<br />
<br />
Update Time : Mon Feb 1 21:42:30 2016<br />
State : clean, degraded <br />
Active Devices : 1<br />
Working Devices : 1<br />
Failed Devices : 0<br />
Spare Devices : 0<br />
<br />
Name : localhost.localdomain:2<br />
UUID : 3ee2fded:12de8ad4:736bc4ee:b74e8f89<br />
Events : 4<br />
<br />
Number Major Minor RaidDevice State<br />
0 252 3 0 active sync /dev/vda3<br />
2 0 0 2 removed <br />
<br />
<br />
* stop the RAID device:<br />
# mdadm --stop /dev/md2<br />
mdadm: stopped /dev/md2<br />
* check that md2 doesn't exist any more:<br />
 # mdadm --remove /dev/md2<br />
 mdadm: error opening /dev/md2: No such file or directory <br />
<br />
* remove the superblock of vda3 (it was previously in the RAID):<br />
# mdadm --zero-superblock /dev/vda3 <br />
<br />
* recreate the RAID device md2 with both disks with the UUID and the name of the old RAID:<br />
# mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/vda3 /dev/vdb2 --uuid=3ee2fded:12de8ad4:736bc4ee:b74e8f89 --name=localhost.localdomain:2<br />
mdadm: Note: this array has metadata at the start and<br />
may not be suitable as a boot device. If you plan to<br />
store '/boot' on this device please ensure that<br />
your boot-loader understands md/v1.x metadata, or use<br />
--metadata=0.90<br />
Continue creating array? y<br />
mdadm: Defaulting to version 1.2 metadata<br />
mdadm: array /dev/md2 started.<br />
<br />
* The RAID automatically starts to resync:<br />
# mdadm --detail /dev/md2<br />
/dev/md2:<br />
Version : 1.2<br />
Creation Time : Fri Feb 5 16:17:21 2016<br />
Raid Level : raid1<br />
Array Size : 2093120 (2044.41 MiB 2143.35 MB)<br />
Used Dev Size : 2093120 (2044.41 MiB 2143.35 MB)<br />
Raid Devices : 2<br />
Total Devices : 2<br />
Persistence : Superblock is persistent<br />
<br />
Update Time : Fri Feb 5 16:18:03 2016<br />
State : clean, resyncing <br />
Active Devices : 2<br />
Working Devices : 2<br />
Failed Devices : 0<br />
Spare Devices : 0<br />
<br />
Resync Status : 19% complete<br />
<br />
Name : localhost.localdomain:2<br />
UUID : 3ee2fded:12de8ad4:736bc4ee:b74e8f89<br />
Events : 3<br />
<br />
Number Major Minor RaidDevice State<br />
0 252 3 0 active sync /dev/vda3<br />
1 252 18 1 active sync /dev/vdb2<br />
<br />
<br />
* Check that the UUID and the name are correct, and wait for the end of the sync.<br />
<br />
=== The swap: ===<br />
* Open /etc/fstab and remember the UUID set for the swap file system:<br />
nano /etc/fstab<br />
<br />
UUID=6844de9b-2c3c-433b-a7b5-c39258dbb85a swap swap defaults 0 0 <br />
<br />
* Create a swap file system on the RAID device with the UUID present into /etc/fstab:<br />
# mkswap /dev/md2 -U 6844de9b-2c3c-433b-a7b5-c39258dbb85a<br />
 mkswap: /dev/md2: warning: don't erase bootbits sectors<br />
 on whole disk. Use -f to force.<br />
 Setting up swapspace version 1, size = 2093116 KiB<br />
 no label, UUID=6844de9b-2c3c-433b-a7b5-c39258dbb85a <br />
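To double-check that the new swap signature carries the UUID expected by /etc/fstab, a hedged sketch (blkid is part of util-linux; run as root):

```shell
# UUID from fstab's swap entry ($3 is the filesystem type field)
awk '$3 == "swap" { sub("UUID=", "", $1); print $1 }' /etc/fstab
# UUID actually written by mkswap; the two values must match
blkid -o value -s UUID /dev/md2
```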
<br />
* Reconfigure the server (a reboot alone should in fact be sufficient…)<br />
signal-event post-upgrade; signal-event reboot <br />
<br />
* Check the RAID in the console or with:<br />
# cat /proc/mdstat <br />
<br />
* Check the swap is running:<br />
# top<br />
<br />
top - 21:27:13 up 2 min, 1 user, load average: 2.46, 1.02, 0.38<br />
Tasks: 213 total, 1 running, 212 sleeping, 0 stopped, 0 zombie<br />
Cpu(s): 0.1%us, 0.2%sy, 0.0%ni, 99.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st<br />
Mem: 2029604k total, 957820k used, 1071784k free, 20912k buffers<br />
Swap: 2093116k total, 0k used, 2093116k free, 413852k cached <br />
etc.........<br />
<br />
{{Tip box|Reusing the "old" UUIDs keeps things simple, because there is no need to change any parameter in the configuration of the SME.}}<br />
<br />
[https://forums.contribs.org/index.php/topic,52216.0.html Link to a topic] of the forum relating to this.<br />
<br />
Enjoy!<br />
<br />
{{Note box| Where was the difficulty??....}}<br />
[[Category:Howto]]</div>
<hr />
<div>== Repairing manually the RAID of the swap ==<br />
<br />
Author: Arnaud<br />
<br />
=== Requirements: ===<br />
{{Warning box|'''This howto works for SME9.1, raid1, nolvm and only for the Raid-device concerning the swap.<br />
Some adaptations may be necessary for other version of SME or for other parameters concerning the RAID and the LVM! '''}}<br />
<br />
Because the SME is able to run without swap, the job can be done directly from the running SME, without any LiveCD or Rescue-mode.<br />
<br />
Adapt the name of the partitions (hdX. sdX etc…) to your case.<br />
<br />
=== The starting point: the device for the swap can't get sync ===<br />
It can occur that the raid refuses to resync completely when a new disk has been added to the SME and that this disk is "a little bit" smaller than the disk what is already running. <br />
The raid sync (e.g. started from the console) works of "/", for "/boot" but not for the swap because of missing some space on the added disk.<br />
<br />
==== look to the current state of the RAID: ====<br />
<br />
# cat /proc/mdstat<br />
Personalities : [raid1] <br />
md0 : active raid1 vda1[0] vdb1[2]<br />
255936 blocks super 1.0 [2/2] [UU]<br />
<br />
md2 : active raid1 vda3[0]<br />
2095104 blocks super 1.1 [2/1] [U_]<br />
<br />
md1 : active raid1 vda2[0] vdb3[2]<br />
18600960 blocks super 1.1 [2/2] [UU]<br />
bitmap: 0/1 pages [0KB], 65536KB chunk<br />
<br />
unused devices: <none> <br />
<br />
As indicated over the console too, md2 runs with only 1 disk (vda3). The disk vdb2 is missing into the RAID.<br />
The reason is:<br />
# mdadm --manage /dev/md2 --add /dev/vdb2<br />
mdadm: /dev/vdb2 not large enough to join array (the disk#2 has been added to the machine afterwards)<br />
<br />
==== The RAID array: ====<br />
* stop the swap:<br />
# swapoff -a <br />
<br />
* get details about /dev/md2 and remember the UUID and the name of the RAID. In my case:<br />
# mdadm --detail /dev/md2<br />
/dev/md2:<br />
Version : 1.1<br />
Creation Time : Mon Feb 1 21:42:29 2016<br />
Raid Level : raid1<br />
Array Size : 2095104 (2046.34 MiB 2145.39 MB)<br />
Used Dev Size : 2095104 (2046.34 MiB 2145.39 MB)<br />
Raid Devices : 2<br />
Total Devices : 1<br />
Persistence : Superblock is persistent<br />
<br />
Update Time : Mon Feb 1 21:42:30 2016<br />
State : clean, degraded <br />
Active Devices : 1<br />
Working Devices : 1<br />
Failed Devices : 0<br />
Spare Devices : 0<br />
<br />
Name : localhost.localdomain:2<br />
UUID : 3ee2fded:12de8ad4:736bc4ee:b74e8f89<br />
Events : 4<br />
<br />
Number Major Minor RaidDevice State<br />
0 252 3 0 active sync /dev/vda3<br />
2 0 0 2 removed <br />
<br />
<br />
* stop the RAID device:<br />
# mdadm --stop /dev/md2<br />
mdadm: stopped /dev/md2<br />
* check that md2 doesn't exist any more: <code> # mdadm --remove /dev/md2<br />
mdadm: error opening /dev/md2: No such file or directory <br />
<br />
* remove the superblocks of vda3 (was previously into the RAID:<br />
# mdadm --zero-superblock /dev/vda3 <br />
<br />
* recreate the RAID device md2 with both disks with the UUID and the name of the old RAID:<br />
# mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/vda3 /dev/vdb2 --uuid=3ee2fded:12de8ad4:736bc4ee:b74e8f89 --name=localhost.localdomain:2<br />
mdadm: Note: this array has metadata at the start and<br />
may not be suitable as a boot device. If you plan to<br />
store '/boot' on this device please ensure that<br />
your boot-loader understands md/v1.x metadata, or use<br />
--metadata=0.90<br />
Continue creating array? y<br />
mdadm: Defaulting to version 1.2 metadata<br />
mdadm: array /dev/md2 started.<br />
<br />
* The RAID automatically starts to resync:<br />
# mdadm --detail /dev/md2<br />
/dev/md2:<br />
Version : 1.2<br />
Creation Time : Fri Feb 5 16:17:21 2016<br />
Raid Level : raid1<br />
Array Size : 2093120 (2044.41 MiB 2143.35 MB)<br />
Used Dev Size : 2093120 (2044.41 MiB 2143.35 MB)<br />
Raid Devices : 2<br />
Total Devices : 2<br />
Persistence : Superblock is persistent<br />
<br />
Update Time : Fri Feb 5 16:18:03 2016<br />
State : clean, resyncing <br />
Active Devices : 2<br />
Working Devices : 2<br />
Failed Devices : 0<br />
Spare Devices : 0<br />
<br />
Resync Status : 19% complete<br />
<br />
Name : localhost.localdomain:2<br />
UUID : 3ee2fded:12de8ad4:736bc4ee:b74e8f89<br />
Events : 3<br />
<br />
Number Major Minor RaidDevice State<br />
0 252 3 0 active sync /dev/vda3<br />
1 252 18 1 active sync /dev/vdb2<br />
<br />
<br />
* Check that the UUID and the name are correct and wait the end of the sync.<br />
<br />
==== The swap: ====<br />
Open /etc/fstab and remember the UUID set for the swap file system<br />
nano /etc/fstab<br />
<br />
UUID=6844de9b-2c3c-433b-a7b5-c39258dbb85a swap swap defaults 0 0 <br />
<br />
* Create a swap file system on the RAID device with the UUID present into /etc/fstab:<br />
# mkswap /dev/md2 -U 6844de9b-2c3c-433b-a7b5-c39258dbb85a<br />
mkswap : /dev/md2 : attention, ne pas effacer les secteurs de démarrage<br />
tout le disque.Use -f to force.<br />
Configure l'espace d'échange (swap) en version 1, taille = 2093116 Ko<br />
pas d'étiquette, UUID=6844de9b-2c3c-433b-a7b5-c39258dbb85a <br />
<br />
* Reconfigure the server (only reboot should be sufficient in fact…)<br />
signal-event post-upgrade; signal-event reboot <br />
<br />
* Check the RAID in the console or with:<br />
# cat /proc/mdstat <br />
<br />
* Check the swap is running:<br />
# top<br />
<br />
top - 21:27:13 up 2 min, 1 user, load average: 2.46, 1.02, 0.38<br />
Tasks: 213 total, 1 running, 212 sleeping, 0 stopped, 0 zombie<br />
Cpu(s): 0.1%us, 0.2%sy, 0.0%ni, 99.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st<br />
Mem: 2029604k total, 957820k used, 1071784k free, 20912k buffers<br />
Swap: 2093116k total, 0k used, 2093116k free, 413852k cached <br />
etc.........<br />
<br />
{{Tip box|Reusing the "old" UUIDs makes the thing simple because there is no need to change any parameter in the configuration of the SME.}}<br />
<br />
[https://forums.contribs.org/index.php/topic,52216.0.html Link to a topic] of the forum relating to this.<br />
<br />
Enjoy!<br />
<br />
{{Note box| Where was the difficulty??....}}<br />
[[Category:Howto]]</div>Arnaudhttps://wiki.koozali.org/index.php?title=Repairing_RAID_of_the_swap&diff=30430Repairing RAID of the swap2016-02-10T20:10:28Z<p>Arnaud: </p>
<hr />
<div>== Repairing manually the RAID of the swap ==<br />
<br />
Author: Arnaud<br />
<br />
=== Requirements: ===<br />
{{Warning box|'''This howto works for SME9.1, raid1, nolvm and only for the Raid-device concerning the swap.<br />
Some adaptations may be necessary for other version of SME or for other parameters concerning the RAID and the LVM! '''}}<br />
<br />
Because the SME is able to run without swap, the job can be done directly from the running SME, without any LiveCD or Rescue-mode.<br />
<br />
Adapt the name of the partitions (hdX. sdX etc…) to your case.<br />
<br />
=== The starting point: the device for the swap can't get sync ===<br />
This can occur when a new disk has been added to the SME and that this disk is "a little bit" smaller than the disk what is already running. <br />
The raid sync (e.g. started from the console) works of "/", for "/boot" but not for the swap because os missing some space on the added disk.<br />
<br />
==== look to the current state of the RAID: ====<br />
<br />
# cat /proc/mdstat<br />
Personalities : [raid1] <br />
md0 : active raid1 vda1[0] vdb1[2]<br />
255936 blocks super 1.0 [2/2] [UU]<br />
<br />
md2 : active raid1 vda3[0]<br />
2095104 blocks super 1.1 [2/1] [U_]<br />
<br />
md1 : active raid1 vda2[0] vdb3[2]<br />
18600960 blocks super 1.1 [2/2] [UU]<br />
bitmap: 0/1 pages [0KB], 65536KB chunk<br />
<br />
unused devices: <none> <br />
<br />
As indicated over the console too, md2 runs with only 1 disk (vda3). The disk vdb2 is missing into the RAID.<br />
The reason is:<br />
# mdadm --manage /dev/md2 --add /dev/vdb2<br />
mdadm: /dev/vdb2 not large enough to join array (the disk#2 has been added to the machine afterwards)<br />
<br />
==== The RAID array: ====<br />
* stop the swap:<br />
# swapoff -a <br />
<br />
* get details about /dev/md2 and remember the UUID and the name of the RAID. In my case:<br />
# mdadm --detail /dev/md2<br />
/dev/md2:<br />
Version : 1.1<br />
Creation Time : Mon Feb 1 21:42:29 2016<br />
Raid Level : raid1<br />
Array Size : 2095104 (2046.34 MiB 2145.39 MB)<br />
Used Dev Size : 2095104 (2046.34 MiB 2145.39 MB)<br />
Raid Devices : 2<br />
Total Devices : 1<br />
Persistence : Superblock is persistent<br />
<br />
Update Time : Mon Feb 1 21:42:30 2016<br />
State : clean, degraded <br />
Active Devices : 1<br />
Working Devices : 1<br />
Failed Devices : 0<br />
Spare Devices : 0<br />
<br />
Name : localhost.localdomain:2<br />
UUID : 3ee2fded:12de8ad4:736bc4ee:b74e8f89<br />
Events : 4<br />
<br />
Number Major Minor RaidDevice State<br />
0 252 3 0 active sync /dev/vda3<br />
2 0 0 2 removed <br />
<br />
<br />
* stop the RAID device:<br />
# mdadm --stop /dev/md2<br />
mdadm: stopped /dev/md2<br />
* check that md2 doesn't exist any more: <code> # mdadm --remove /dev/md2<br />
mdadm: error opening /dev/md2: No such file or directory <br />
<br />
* remove the superblocks of vda3 (was previously into the RAID:<br />
# mdadm --zero-superblock /dev/vda3 <br />
<br />
* recreate the RAID device md2 with both disks with the UUID and the name of the old RAID:<br />
# mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/vda3 /dev/vdb2 --uuid=3ee2fded:12de8ad4:736bc4ee:b74e8f89 --name=localhost.localdomain:2<br />
mdadm: Note: this array has metadata at the start and<br />
may not be suitable as a boot device. If you plan to<br />
store '/boot' on this device please ensure that<br />
your boot-loader understands md/v1.x metadata, or use<br />
--metadata=0.90<br />
Continue creating array? y<br />
mdadm: Defaulting to version 1.2 metadata<br />
mdadm: array /dev/md2 started.<br />
<br />
* The RAID automatically starts to resync:<br />
# mdadm --detail /dev/md2<br />
/dev/md2:<br />
Version : 1.2<br />
Creation Time : Fri Feb 5 16:17:21 2016<br />
Raid Level : raid1<br />
Array Size : 2093120 (2044.41 MiB 2143.35 MB)<br />
Used Dev Size : 2093120 (2044.41 MiB 2143.35 MB)<br />
Raid Devices : 2<br />
Total Devices : 2<br />
Persistence : Superblock is persistent<br />
<br />
Update Time : Fri Feb 5 16:18:03 2016<br />
State : clean, resyncing <br />
Active Devices : 2<br />
Working Devices : 2<br />
Failed Devices : 0<br />
Spare Devices : 0<br />
<br />
Resync Status : 19% complete<br />
<br />
Name : localhost.localdomain:2<br />
UUID : 3ee2fded:12de8ad4:736bc4ee:b74e8f89<br />
Events : 3<br />
<br />
Number Major Minor RaidDevice State<br />
0 252 3 0 active sync /dev/vda3<br />
1 252 18 1 active sync /dev/vdb2<br />
<br />
<br />
* Check that the UUID and the name are correct and wait the end of the sync.<br />
<br />
==== The swap: ====<br />
Open /etc/fstab and remember the UUID set for the swap file system<br />
nano /etc/fstab<br />
<br />
UUID=6844de9b-2c3c-433b-a7b5-c39258dbb85a swap swap defaults 0 0 <br />
<br />
* Create a swap file system on the RAID device with the UUID present into /etc/fstab:<br />
# mkswap /dev/md2 -U 6844de9b-2c3c-433b-a7b5-c39258dbb85a<br />
mkswap : /dev/md2 : attention, ne pas effacer les secteurs de démarrage<br />
tout le disque.Use -f to force.<br />
Configure l'espace d'échange (swap) en version 1, taille = 2093116 Ko<br />
pas d'étiquette, UUID=6844de9b-2c3c-433b-a7b5-c39258dbb85a <br />
<br />
* Reconfigure the server (only reboot should be sufficient in fact…)<br />
signal-event post-upgrade; signal-event reboot <br />
<br />
* Check the RAID in the console or with:<br />
# cat /proc/mdstat <br />
<br />
* Check the swap is running:<br />
# top<br />
<br />
top - 21:27:13 up 2 min, 1 user, load average: 2.46, 1.02, 0.38<br />
Tasks: 213 total, 1 running, 212 sleeping, 0 stopped, 0 zombie<br />
Cpu(s): 0.1%us, 0.2%sy, 0.0%ni, 99.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st<br />
Mem: 2029604k total, 957820k used, 1071784k free, 20912k buffers<br />
Swap: 2093116k total, 0k used, 2093116k free, 413852k cached <br />
etc.........<br />
<br />
{{Tip box|Reusing the "old" UUIDs keeps things simple: no parameter needs to be changed in the SME configuration.}}<br />
<br />
[https://forums.contribs.org/index.php/topic,52216.0.html Link to a topic] of the forum relating to this.<br />
<br />
Enjoy!<br />
<br />
{{Note box|Where was the difficulty?}}<br />
[[Category:Howto]]</div>Arnaudhttps://wiki.koozali.org/index.php?title=Repairing_RAID_of_the_swap&diff=30429Repairing RAID of the swap2016-02-10T20:07:55Z<p>Arnaud: </p>
<hr />
<div>== Repairing the RAID of the swap manually ==<br />
<br />
Author: Arnaud<br />
<br />
=== Requirements: ===<br />
{{Warning box|'''This howto applies to SME 9.1, raid1, nolvm, and only to the RAID device holding the swap.<br />
Some adaptations may be necessary for other versions of SME or for other RAID and LVM parameters! '''}}<br />
<br />
Because the SME is able to run without swap, the job can be done directly on the running SME, without any LiveCD or rescue mode.<br />
<br />
Adapt the partition names (hdX, sdX, etc.) to your own case.<br />
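To see which names apply on your own system, you can list the partitions the kernel knows about. This is only a sketch: the vdaX/vdbX names used below are the virtio disk names of the machine in this how-to.<br />

```shell
# Print every partition the kernel currently knows about, with its size
# in 1 KiB blocks; use this to map the vdaX/vdbX names of this how-to
# onto your own disks (sdaX, hdaX, ...).
cat /proc/partitions
```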
<br />
=== The starting point: the device for the swap can't get sync ===<br />
This can occur when a new disk has been added to the SME and this disk is "a little bit" smaller than the disk that is already running. <br />
The RAID sync (e.g. started from the console) works for "/" and for "/boot", but not for the swap, because some space is missing on the added disk.<br />
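A quick way to confirm this diagnosis is to compare the sizes of the two swap partitions directly. A sketch, assuming the partition names of this how-to (vda3 and vdb2):<br />

```shell
# Print the size (in 1 KiB blocks) of both swap partitions as reported by
# the kernel; if the second number is smaller, mdadm will refuse the add.
awk '$4 == "vda3" || $4 == "vdb2" { print $4, $3 }' /proc/partitions
```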
<br />
==== Look at the current state of the RAID: ====<br />
<br />
# cat /proc/mdstat<br />
Personalities : [raid1] <br />
md0 : active raid1 vda1[0] vdb1[2]<br />
255936 blocks super 1.0 [2/2] [UU]<br />
<br />
md2 : active raid1 vda3[0]<br />
2095104 blocks super 1.1 [2/1] [U_]<br />
<br />
md1 : active raid1 vda2[0] vdb3[2]<br />
18600960 blocks super 1.1 [2/2] [UU]<br />
bitmap: 0/1 pages [0KB], 65536KB chunk<br />
<br />
unused devices: <none> <br />
<br />
As also indicated on the console, md2 runs with only one disk (vda3). The partition vdb2 is missing from the RAID.<br />
The reason is:<br />
 # mdadm --manage /dev/md2 --add /dev/vdb2<br />
 mdadm: /dev/vdb2 not large enough to join array (disk #2 was added to the machine afterwards)<br />
<br />
==== The RAID array: ====<br />
* stop the swap:<br />
# swapoff -a <br />
<br />
* Get details about /dev/md2 and note the UUID and the name of the RAID. In my case:<br />
# mdadm --detail /dev/md2<br />
/dev/md2:<br />
Version : 1.1<br />
Creation Time : Mon Feb 1 21:42:29 2016<br />
Raid Level : raid1<br />
Array Size : 2095104 (2046.34 MiB 2145.39 MB)<br />
Used Dev Size : 2095104 (2046.34 MiB 2145.39 MB)<br />
Raid Devices : 2<br />
Total Devices : 1<br />
Persistence : Superblock is persistent<br />
<br />
Update Time : Mon Feb 1 21:42:30 2016<br />
State : clean, degraded <br />
Active Devices : 1<br />
Working Devices : 1<br />
Failed Devices : 0<br />
Spare Devices : 0<br />
<br />
Name : localhost.localdomain:2<br />
UUID : 3ee2fded:12de8ad4:736bc4ee:b74e8f89<br />
Events : 4<br />
<br />
Number Major Minor RaidDevice State<br />
0 252 3 0 active sync /dev/vda3<br />
2 0 0 2 removed <br />
<br />
<br />
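To avoid transcription errors, the UUID and the Name shown above can also be extracted with sed and reused verbatim in the later <code>mdadm --create</code> command. A sketch, shown here on sample output; in practice pipe the real command (<code>mdadm --detail /dev/md2 | sed -n ...</code>):<br />

```shell
# Extract the UUID and Name fields from `mdadm --detail` style output.
sed -n 's/.*UUID : //p; s/.*Name : //p' <<'EOF'
           Name : localhost.localdomain:2
           UUID : 3ee2fded:12de8ad4:736bc4ee:b74e8f89
EOF
```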
* stop the RAID device:<br />
# mdadm --stop /dev/md2<br />
mdadm: stopped /dev/md2<br />
* Check that md2 doesn't exist any more:<br />
 # mdadm --remove /dev/md2<br />
mdadm: error opening /dev/md2: No such file or directory <br />
<br />
* Remove the superblock of vda3 (it was previously in the RAID):<br />
# mdadm --zero-superblock /dev/vda3 <br />
<br />
* Recreate the RAID device md2 with both disks, using the UUID and the name of the old RAID:<br />
# mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/vda3 /dev/vdb2 --uuid=3ee2fded:12de8ad4:736bc4ee:b74e8f89 --name=localhost.localdomain:2<br />
mdadm: Note: this array has metadata at the start and<br />
may not be suitable as a boot device. If you plan to<br />
store '/boot' on this device please ensure that<br />
your boot-loader understands md/v1.x metadata, or use<br />
--metadata=0.90<br />
Continue creating array? y<br />
mdadm: Defaulting to version 1.2 metadata<br />
mdadm: array /dev/md2 started.<br />
<br />
* The RAID automatically starts to resync:<br />
# mdadm --detail /dev/md2<br />
/dev/md2:<br />
Version : 1.2<br />
Creation Time : Fri Feb 5 16:17:21 2016<br />
Raid Level : raid1<br />
Array Size : 2093120 (2044.41 MiB 2143.35 MB)<br />
Used Dev Size : 2093120 (2044.41 MiB 2143.35 MB)<br />
Raid Devices : 2<br />
Total Devices : 2<br />
Persistence : Superblock is persistent<br />
<br />
Update Time : Fri Feb 5 16:18:03 2016<br />
State : clean, resyncing <br />
Active Devices : 2<br />
Working Devices : 2<br />
Failed Devices : 0<br />
Spare Devices : 0<br />
<br />
Resync Status : 19% complete<br />
<br />
Name : localhost.localdomain:2<br />
UUID : 3ee2fded:12de8ad4:736bc4ee:b74e8f89<br />
Events : 3<br />
<br />
Number Major Minor RaidDevice State<br />
0 252 3 0 active sync /dev/vda3<br />
1 252 18 1 active sync /dev/vdb2<br />
<br />
<br />
* Check that the UUID and the name are correct, and wait for the end of the sync.<br />
<br />
==== The swap: ====<br />
Open /etc/fstab and note the UUID set for the swap file system:<br />
nano /etc/fstab<br />
<br />
UUID=6844de9b-2c3c-433b-a7b5-c39258dbb85a swap swap defaults 0 0 <br />
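The swap UUID can also be picked out of /etc/fstab programmatically instead of being copied by hand. A sketch, shown here on the example line above; in practice run the awk filter against /etc/fstab itself:<br />

```shell
# Extract the UUID of the entry whose mount point field is "swap".
printf 'UUID=6844de9b-2c3c-433b-a7b5-c39258dbb85a swap swap defaults 0 0\n' |
awk '$2 == "swap" { sub(/^UUID=/, "", $1); print $1 }'
```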
<br />
* Create a swap file system on the RAID device, with the UUID present in /etc/fstab:<br />
# mkswap /dev/md2 -U 6844de9b-2c3c-433b-a7b5-c39258dbb85a<br />
mkswap: /dev/md2: warning: don't erase bootbits sectors<br />
on whole disk. Use -f to force.<br />
Setting up swapspace version 1, size = 2093116 KiB<br />
no label, UUID=6844de9b-2c3c-433b-a7b5-c39258dbb85a <br />
<br />
* Reconfigure the server (a simple reboot should in fact be sufficient):<br />
signal-event post-upgrade; signal-event reboot <br />
<br />
* Check the RAID in the console or with:<br />
# cat /proc/mdstat <br />
<br />
* Check the swap is running:<br />
# top<br />
<br />
top - 21:27:13 up 2 min, 1 user, load average: 2.46, 1.02, 0.38<br />
Tasks: 213 total, 1 running, 212 sleeping, 0 stopped, 0 zombie<br />
Cpu(s): 0.1%us, 0.2%sy, 0.0%ni, 99.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st<br />
Mem: 2029604k total, 957820k used, 1071784k free, 20912k buffers<br />
Swap: 2093116k total, 0k used, 2093116k free, 413852k cached <br />
etc.........<br />
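Instead of top, the active swap areas can also be listed directly; /dev/md2 should appear with the expected size:<br />

```shell
# /proc/swaps lists every active swap area with its size and current usage.
cat /proc/swaps
```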
<br />
{{Tip box|Reusing the "old" UUIDs keeps things simple: no parameter needs to be changed in the SME configuration.}}<br />
<br />
Enjoy!<br />
<br />
{{Note box|Where was the difficulty?}}<br />
[[Category:Howto]]</div>Arnaudhttps://wiki.koozali.org/index.php?title=Client_Authentication:Gentoo_via_sssd/ldap&diff=29137Client Authentication:Gentoo via sssd/ldap2015-10-27T20:52:59Z<p>Arnaud: </p>
<hr />
<div>{{Warning box|This how-to should be validated by Daniel before you use it!}}<br />
<br />
<br />
{{Warning box|This is based upon limited testing and a small number of users.}}<br />
<br />
==Introduction==<br />
This how-to shows how to configure an SME Server (>=8b6) and a Gentoo client for LDAP-based SSSD authentication of the client machine against the user accounts configured on the SME. <br />
<br />
The main advantage compared to nss_ldap is that the authentication information stays in the cache, so authentication keeps working even in offline mode (when the server is not available).<br />
<br />
<br />
Nevertheless, creating a local user with admin rights is recommended for emergencies.<br />
<br />
These lines are a translation of the method given by Daniel: https://wikit.firewall-services.com/doku.php/tuto/ipasserelle/authentification/gentoo_sssd_on_sme. Many thanks to him for it.<br />
<br />
In this how-to we assume that the host name of the SME is "sme-server" and that the domain is "domain.tld".<br />
<br />
==Configuration of the SME-server==<br />
Almost no configuration of the SME is necessary. <br />
*The only thing to do is to create a user (named "auth" in this how-to) via the server-manager and give it a valid password ("something_very_secret" in this how-to).<br />
It is not required to make "auth" a member of any group.<br />
<br />
*In addition, it is recommended to install and configure [[PHPki]] in order to make the managing of the self-created certificates easier.<br />
<br />
<br />
==Configuration of the client Gentoo==<br />
<br />
===Install the required packages===<br />
First of all, install the required package:<br />
emerge sys-auth/sssd<br />
If needed, install the trusted root certificates:<br />
emerge app-misc/ca-certificates<br />
<br />
<br />
===Manage the CA of the SME===<br />
After installing PHPki, go to https://www.domain.tld/phpki and download the certificate authority (ca-certificates.crt) to the client machine.<br />
<br />
Place a copy of it (or of another CA) into /etc/ssl/certs/ and give it 644 permissions:<br />
cp ~/download/ca-certificates.crt /etc/ssl/certs/<br />
chmod 644 /etc/ssl/certs/ca-certificates.crt<br />
<br />
<br />
===Configure sssd===<br />
The configuration of sssd is done in the usual way (as on Ubuntu or Fedora, for example) via the file /etc/sssd/sssd.conf.<br />
<br />
* At the beginning of this file, the domains to use have to be set. In sssd, a domain can be seen as a source of accounts; it is possible to set several domains in order of priority. <br />
* Further down in the file, we will add the configuration of the domain. <br />
<br />
If the file doesn't exist by default, it has to be created, and it needs permissions 600 to allow the daemon to start:<br />
<br />
cat <<'_EOF' > /etc/sssd/sssd.conf<br />
[sssd]<br />
config_file_version = 2<br />
services = nss, pam<br />
domains = LDAP<br />
<br />
[nss]<br />
<br />
[pam]<br />
<br />
[domain/LDAP]<br />
id_provider = ldap<br />
auth_provider = ldap<br />
ldap_schema = rfc2307<br />
ldap_uri = ldap://sme-server.domain.tld<br />
ldap_default_bind_dn = uid=auth,ou=Users,dc=domain,dc=tld<br />
ldap_default_authtok = something_very_secret<br />
ldap_default_authtok_type = password<br />
ldap_search_base = dc=domain,dc=tld<br />
ldap_user_search_base = ou=Users,dc=domain,dc=tld<br />
ldap_group_search_base = ou=Groups,dc=domain,dc=tld<br />
ldap_user_object_class = inetOrgPerson<br />
ldap_user_gecos = cn<br />
ldap_tls_reqcert = hard<br />
ldap_tls_cacert = /etc/ssl/certs/ca-certificates.crt<br />
ldap_id_use_start_tls = true<br />
# uncomment below if the SME is an "iPasserelle"<br />
#ldap_user_shell = desktopLoginShell<br />
# comment out below if the SME is an "iPasserelle"<br />
override_shell = /bin/bash<br />
cache_credentials = true<br />
enumerate = true<br />
# It is possible to filter logins via an LDAP filter<br />
# by uncommenting the two lines below.<br />
# In this example, only members of the group "admins" and the<br />
# user "backup" would be valid on this host.<br />
# posixMemberOf is a parameter only for an iPasserelle<br />
#access_provider = ldap<br />
#ldap_access_filter = (|(posixMemberOf=admins)(uid=backup))<br />
_EOF<br />
<br />
chmod 600 /etc/sssd/sssd.conf<br />
<br />
{{Tip box|Make sure that the file /etc/ssl/certs/ca-certificates.crt contains the CA that signed the certificate of the SME (if PHPki is used, a version > 0.82-13 is required).}}<br />
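A quick sanity check of the downloaded CA file can be done with openssl. A sketch; the path is the one used in this how-to:<br />

```shell
# Show who issued the CA certificate and its validity period; a parse
# error here means the file is missing or is not a valid PEM certificate.
openssl x509 -in /etc/ssl/certs/ca-certificates.crt -noout -subject -dates \
  || echo "ca-certificates.crt is missing or not a valid PEM certificate"
```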
<br />
The best way to check that sssd is working is to start it in interactive mode: <br />
sssd -i -d 5<br />
When all is OK, start sssd and configure its autostart:<br />
/etc/init.d/sssd start<br />
rc-update add sssd default<br />
===Configure nss===<br />
To allow nss to use sssd, edit /etc/nsswitch.conf and add sss as a source for users and groups. <br />
[...]<br />
passwd: files sss<br />
shadow: files sss<br />
group: files sss<br />
[...]<br />
netgroup: files sss<br />
{{Tip box|Normally this has been done automatically}}<br />
<br />
After that, it is possible to check that it works:<br />
getent passwd<br />
should list the LDAP users. If it doesn't work, start debugging by running sssd in interactive mode (with sssd -i -d 5, for example).<br />
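A single LDAP account can also be queried through NSS; "auth" here is the bind user created earlier in this how-to, but any LDAP user name works:<br />

```shell
# Look up one account via NSS/sssd; a missing entry means sssd is not
# wired into nsswitch (or the user does not exist in LDAP).
getent passwd auth || echo "auth: not resolved via NSS"
```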
<br />
<br />
===Configure pam===<br />
pam must be configured in order to use sssd as a source too:<br />
<br />
cp -a /etc/pam.d/system-auth /etc/pam.d/system-auth.pre_sssd<br />
cat <<'EOF'> /etc/pam.d/system-auth<br />
auth required pam_env.so<br />
auth sufficient pam_unix.so try_first_pass likeauth nullok<br />
auth sufficient pam_sss.so use_first_pass<br />
auth required pam_deny.so<br />
<br />
account required pam_unix.so<br />
account [default=bad success=ok user_unknown=ignore] pam_sss.so<br />
account required pam_permit.so<br />
<br />
password required pam_cracklib.so difok=2 minlen=8 dcredit=2 ocredit=2 retry=3<br />
password sufficient pam_unix.so try_first_pass use_authtok nullok sha512 shadow<br />
password sufficient pam_sss.so use_authtok<br />
password required pam_deny.so<br />
<br />
session required pam_limits.so<br />
session required pam_env.so<br />
session optional pam_mkhomedir.so skel=/etc/skel umask=0077<br />
session required pam_unix.so<br />
session optional pam_sss.so<br />
session optional pam_permit.so<br />
<br />
EOF<br />
<br />
[[Category:Howto]]</div>Arnaudhttps://wiki.koozali.org/index.php?title=Client_Authentication:Ubuntu_via_sssd/ldap&diff=29135Client Authentication:Ubuntu via sssd/ldap2015-10-27T20:51:05Z<p>Arnaud: Created page with "{{Warning box|This how-to should be validated by Daniel before you use it!!.}} {{Warning box|This is based upon limited testing and a small number of users.}} ==Introductio..."</p>
<hr />
<div>{{Warning box|This how-to should be validated by Daniel before you use it!}}<br />
<br />
<br />
{{Warning box|This is based upon limited testing and a small number of users.}}<br />
<br />
==Introduction==<br />
This how-to shows how to configure an SME Server (>=8b6) and an Ubuntu client for LDAP-based SSSD authentication of the client machine against the user accounts configured on the SME. <br />
<br />
The main advantage compared to nss_ldap is that the authentication information stays in the cache, so authentication keeps working even in offline mode (when the server is not available).<br />
<br />
Nevertheless, creating a local user with admin rights is recommended for emergencies.<br />
<br />
These lines are a translation of the method given by Daniel: https://wikit.firewall-services.com/doku.php/tuto/ipasserelle/authentification/ubuntu_sssd_on_sme. Many thanks to him for it.<br />
<br />
In this how-to we assume that the host name of the SME is "sme-server" and that the domain is "domain.tld".<br />
<br />
==Configuration of the SME-server==<br />
Almost no configuration of the SME is necessary. <br />
*The only thing to do is to create a user (named "auth" in this how-to) via the server-manager and give it a valid password ("something_very_secret" in this how-to).<br />
It is not required to make "auth" a member of any group.<br />
<br />
*In addition, it is recommended to install and configure [[PHPki]] in order to make the managing of the self-created certificates easier.<br />
<br />
==Configuration of the client Ubuntu==<br />
===Installation of the required packages===<br />
sudo apt-get install sssd libnss-sss libpam-sss auth-client-config<br />
===Create a symbolic link===<br />
There seems to be a bug in the Ubuntu version of sssd, therefore the following link must be created:<br />
ln -s /usr/lib /usr/modules<br />
Without it, sssd can't manage group membership in LDAP (source http://us.generation-nt.com/bug-599644-sssd-unable-resolve-ldap-group-memberships-help-200739341.html)<br />
<br />
===Manage the CA of the SME===<br />
After installing PHPki, go to https://www.domain.tld/phpki and download the certificate authority (ca-certificates.crt) to the client machine.<br />
<br />
Place a copy of it (or of another CA) into /etc/ssl/certs/ and give it 644 permissions:<br />
cp ~/download/ca-certificates.crt /etc/ssl/certs/<br />
chmod 644 /etc/ssl/certs/ca-certificates.crt<br />
<br />
===Configure SSSD===<br />
The configuration of sssd is done in the usual way (as on Fedora, for example) via the file /etc/sssd/sssd.conf.<br />
<br />
* At the beginning of this file, the domains to use have to be set. In sssd, a domain can be seen as a source of accounts; it is possible to set several domains in order of priority. <br />
* Further down in the file, we will add the configuration of the domain. <br />
<br />
If the file doesn't exist by default, it has to be created, and it needs permissions 600 to allow the daemon to start:<br />
<br />
cat <<'_EOF' > /etc/sssd/sssd.conf<br />
[sssd]<br />
config_file_version = 2<br />
services = nss, pam<br />
domains = LDAP<br />
<br />
[nss]<br />
<br />
[pam]<br />
<br />
[domain/LDAP]<br />
id_provider = ldap<br />
auth_provider = ldap<br />
ldap_schema = rfc2307<br />
ldap_uri = ldap://sme-server.domain.tld<br />
ldap_default_bind_dn = uid=auth,ou=Users,dc=domain,dc=tld<br />
ldap_default_authtok = something_very_secret<br />
ldap_default_authtok_type = password<br />
ldap_search_base = dc=domain,dc=tld<br />
ldap_user_search_base = ou=Users,dc=domain,dc=tld<br />
ldap_group_search_base = ou=Groups,dc=domain,dc=tld<br />
ldap_user_object_class = inetOrgPerson<br />
ldap_user_gecos = cn<br />
ldap_tls_reqcert = hard<br />
ldap_tls_cacert = /etc/ssl/certs/ca-certificates.crt<br />
ldap_id_use_start_tls = true<br />
# uncomment below if the SME is a “iPasserelle”<br />
#ldap_user_shell = desktopLoginShell<br />
# comment below if the SME is a “iPasserelle”<br />
override_shell = /bin/bash<br />
cache_credentials = true<br />
enumerate = true<br />
# It is possible to filter logins via an LDAP filter<br />
# by uncommenting the two lines below.<br />
# In this example, only members of the group "admins" and the<br />
# user "backup" would be valid on this host.<br />
# posixMemberOf is a parameter only for an iPasserelle<br />
#access_provider = ldap<br />
#ldap_access_filter = (|(posixMemberOf=admins)(uid=backup))<br />
_EOF<br />
<br />
chmod 600 /etc/sssd/sssd.conf<br />
<br />
{{Tip box|Make sure that the file /etc/ssl/certs/ca-certificates.crt contains the CA that signed the certificate of the SME (if PHPki is used, a version > 0.82-13 is required).}}<br />
<br />
===Configure the system to use SSSD as a source of authentication:===<br />
Use the tool auth-client-config:<br />
vim /etc/auth-client-config/profile.d/sss<br />
and add the following lines:<br />
<br />
[sss]<br />
nss_passwd= passwd: compat sss<br />
nss_group= group: compat sss<br />
nss_shadow= shadow: compat<br />
nss_netgroup= netgroup: nis<br />
<br />
pam_auth= auth [success=3 default=ignore] pam_unix.so nullok_secure try_first_pass<br />
auth requisite pam_succeed_if.so uid >= 500 quiet<br />
auth [success=1 default=ignore] pam_sss.so use_first_pass<br />
auth requisite pam_deny.so<br />
auth required pam_permit.so<br />
<br />
pam_account= account required pam_unix.so<br />
account sufficient pam_localuser.so<br />
account sufficient pam_succeed_if.so uid < 500 quiet<br />
account [default=bad success=ok user_unknown=ignore] pam_sss.so<br />
account required pam_permit.so<br />
<br />
pam_password= password sufficient pam_unix.so obscure sha512<br />
password sufficient pam_sss.so use_authtok<br />
password required pam_deny.so<br />
<br />
pam_session= session required pam_mkhomedir.so skel=/etc/skel/ umask=0077<br />
session optional pam_keyinit.so revoke<br />
session required pam_limits.so<br />
session [success=1 default=ignore] pam_sss.so<br />
session required pam_unix.so<br />
<br />
And enable this:<br />
sudo auth-client-config -a -p sss<br />
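After enabling the profile, a quick check that the chain works is to resolve accounts through NSS. This assumes sssd is already running and enumerate = true is set as in the configuration above; "auth" is the bind user created on the SME earlier.<br />

```shell
# LDAP accounts should now appear alongside the local ones:
getent passwd
# Resolve a single account; "auth" is the SME bind user from this how-to:
getent passwd auth
# If nothing shows up, stop the service and run sssd in the foreground
# with debugging enabled to see what goes wrong:
sssd -i -d 5
```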
<br />
[[Category:Howto]]</div>Arnaudhttps://wiki.koozali.org/index.php?title=Client_Authentication:Gentoo_via_sssd/ldap&diff=29132Client Authentication:Gentoo via sssd/ldap2015-10-27T20:42:01Z<p>Arnaud: Created page with "{{Warning box|This how-to should be validated by Daniel before you use it!!.}} {{Warning box|This is based upon limited testing and a small number of users.}} ==Introductio..."</p>
<hr />
<div>{{Warning box|This how-to should be validated by Daniel before you use it!}}<br />
<br />
<br />
{{Warning box|This is based upon limited testing and a small number of users.}}<br />
<br />
==Introduction==<br />
This how-to shows how to configure an SME Server (>=8b6) and a Gentoo client for LDAP-based SSSD authentication of the client machine against the user accounts configured on the SME. <br />
<br />
The main advantage compared to nss_ldap is that the authentication information is cached, so authentication keeps working even in offline mode (when the server is not available).<br />
<br />
<br />
Nevertheless, creating a local user with admin rights is recommended for emergencies.<br />
<br />
These lines are a translation of the method given by Daniel: https://wikit.firewall-services.com/doku.php/tuto/ipasserelle/authentification/gentoo_sssd_on_sme. Many thanks to him for it.<br />
<br />
In this how-to we assume that the host name of the SME is "sme-server" and the domain is "domain.tld".<br />
<br />
==Configuration of the SME-server==<br />
Almost no configuration of the SME is necessary. <br />
*The only thing to do is to create a user (named "auth" in this how-to) via the server-manager and give it a valid password ("something_very_secret" in this how-to).<br />
It is not required to make "auth" a member of any group.<br />
<br />
*In addition, it is recommended to install and configure [[PHPki]] in order to make the managing of the self-created certificates easier.<br />
<br />
<br />
==Configuration of the client Gentoo==<br />
<br />
===Install the required packages===<br />
First of all, install the required packages:<br />
emerge sys-auth/sssd<br />
If needed, install the trusted root certificates:<br />
emerge app-misc/ca-certificates<br />
<br />
<br />
===Manage the CA of the SME===<br />
After having installed PHPki, go to https://www.domain.tld/phpki and download the certificate authority (ca-certificates.crt) to the client machine.<br />
<br />
Place a copy of it or of another CA into /etc/ssl/certs/ and give it 644 permissions:<br />
cp ~/download/ca-certificates.crt /etc/ssl/certs/<br />
chmod 644 /etc/ssl/certs/ca-certificates.crt<br />
<br />
<br />
===Configure sssd===<br />
The configuration of sssd is done in a classical way (as on Ubuntu or Fedora, for example) via the file /etc/sssd/sssd.conf.<br />
<br />
* At the beginning of this file, the domain to use has to be set. In sssd, a domain is a source of identity information; it is possible to set several domains in order of priority. <br />
* Further down in the file, we will add the configuration of this domain. <br />
<br />
If the file doesn't exist by default, it has to be created; it needs permissions 600 to allow the daemon to start:<br />
<br />
cat <<'_EOF' > /etc/sssd/sssd.conf<br />
[sssd]<br />
config_file_version = 2<br />
services = nss, pam<br />
domains = LDAP<br />
<br />
[nss]<br />
<br />
[pam]<br />
<br />
[domain/LDAP]<br />
id_provider = ldap<br />
auth_provider = ldap<br />
ldap_schema = rfc2307<br />
ldap_uri = ldap://sme-server.domain.tld<br />
ldap_default_bind_dn = uid=auth,ou=Users,dc=domain,dc=tld<br />
ldap_default_authtok = something_very_secret<br />
ldap_default_authtok_type = password<br />
ldap_search_base = dc=domain,dc=tld<br />
ldap_user_search_base = ou=Users,dc=domain,dc=tld<br />
ldap_group_search_base = ou=Groups,dc=domain,dc=tld<br />
ldap_user_object_class = inetOrgPerson<br />
ldap_user_gecos = cn<br />
ldap_tls_reqcert = hard<br />
ldap_tls_cacert = /etc/ssl/certs/ca-certificates.crt<br />
ldap_id_use_start_tls = true<br />
# uncomment below if the SME is a “iPasserelle”<br />
#ldap_user_shell = desktopLoginShell<br />
# comment below if the SME is a “iPasserelle”<br />
override_shell = /bin/bash<br />
cache_credentials = true<br />
enumerate = true<br />
# It is possible to filter the logins via an LDAP filter<br />
# by uncommenting the two lines below.<br />
# In this example, only members of the group admins<br />
# and the user backup will be valid on this host.<br />
# posixMemberOf is a parameter only for an iPasserelle<br />
#access_provider = ldap<br />
#ldap_access_filter = (|(posixMemberOf=admins)(uid=backup))<br />
<br />
_EOF<br />
<br />
chmod 600 /etc/sssd/sssd.conf<br />
<br />
{{Tip box|Make sure that the file /etc/ssl/certs/ca-certificates.crt contains the CA that has signed the certificate of the SME (if PHPki is used, a version > 0.82-13 is required).}}<br />
<br />
The best way to check that sssd is running is to start it in interactive mode: <br />
sssd -i -d 5<br />
When all is OK, start sssd and configure its autostart:<br />
/etc/init.d/sssd start<br />
rc-update add sssd default<br />
===Configure nss===<br />
To allow nss to use sssd, edit /etc/nsswitch.conf and add sss as a source for users and groups. <br />
[...]<br />
passwd: files sss<br />
shadow: files sss<br />
group: files sss<br />
[...]<br />
netgroup: files sss<br />
{{Tip box|Normally this has been done automatically.}}<br />
<br />
Afterwards, it is possible to check that it works:<br />
getent passwd<br />
should list the LDAP users. If it doesn't work, you should start debugging by running sssd in interactive mode (with sssd -i -d 5 for example).<br />
<br />
<br />
===Configure pam===<br />
pam must be configured in order to use sssd as a source too:<br />
<br />
cp -a /etc/pam.d/system-auth /etc/pam.d/system-auth.pre_sssd<br />
cat <<'EOF'> /etc/pam.d/system-auth<br />
auth required pam_env.so<br />
auth sufficient pam_unix.so try_first_pass likeauth nullok<br />
auth sufficient pam_sss.so use_first_pass<br />
auth required pam_deny.so<br />
<br />
account required pam_unix.so<br />
account [default=bad success=ok user_unknown=ignore] pam_sss.so<br />
account required pam_permit.so<br />
<br />
password required pam_cracklib.so difok=2 minlen=8 dcredit=2 ocredit=2 retry=3<br />
password sufficient pam_unix.so try_first_pass use_authtok nullok sha512 shadow<br />
password sufficient pam_sss.so use_authtok<br />
password required pam_deny.so<br />
<br />
session required pam_limits.so<br />
session required pam_env.so<br />
session optional pam_mkhomedir.so skel=/etc/skel umask=0077<br />
session required pam_unix.so<br />
session optional pam_sss.so<br />
session optional pam_permit.so<br />
<br />
EOF<br />
<br />
[[Category:Howto]]</div>Arnaudhttps://wiki.koozali.org/index.php?title=Client_Authentication:Debian_via_sssd/ldap&diff=29131Client Authentication:Debian via sssd/ldap2015-10-27T20:34:06Z<p>Arnaud: Created page with "{{Warning box|This how-to should be validated by Daniel before you use it!!.}} {{Warning box|This is based upon limited testing and a small number of users.}} ==Introductio..."</p>
<hr />
<div>{{Warning box|This how-to should be validated by Daniel before you use it!}}<br />
<br />
<br />
{{Warning box|This is based upon limited testing and a small number of users.}}<br />
<br />
==Introduction==<br />
This how-to shows how to configure an SME Server (>=8b6) and a Debian client (method tested with Debian squeeze) for LDAP-based SSSD authentication of the client machine against the user accounts configured on the SME. <br />
<br />
The main advantage compared to nss_ldap is that the authentication information is cached, so authentication keeps working even in offline mode (when the server is not available).<br />
<br />
Nevertheless, creating a local user with admin rights is recommended for emergencies.<br />
<br />
These lines are a translation of the method given by Daniel: https://wikit.firewall-services.com/doku.php/tuto/ipasserelle/authentification/debian_sssd_on_sme. Many thanks to him for it.<br />
<br />
In this how-to we assume that the host name of the SME is "sme-server" and the domain is "domain.tld".<br />
<br />
==Configuration of the SME-server==<br />
Almost no configuration of the SME is necessary. <br />
*The only thing to do is to create a user (named "auth" in this how-to) via the server-manager and give it a valid password ("something_very_secret" in this how-to).<br />
It is not required to make "auth" a member of any group.<br />
<br />
*In addition, it is recommended to install and configure [[PHPki]] in order to make the managing of the self-created certificates easier.<br />
<br />
<br />
==Configuration of the client Debian==<br />
<br />
===Manage the CA of the SME===<br />
After having installed PHPki, go to https://www.domain.tld/phpki and download the certificate authority (ca-certificates.crt) to the client machine.<br />
<br />
Place a copy of it or of another CA into /etc/ssl/certs/ and give it 644 permissions:<br />
cp ~/download/ca-certificates.crt /etc/ssl/certs/<br />
chmod 644 /etc/ssl/certs/ca-certificates.crt<br />
<br />
===Install the required packages===<br />
apt-get install sssd libnss-sss libpam-sss ca-certificates<br />
<br />
===Configure sssd===<br />
The configuration is done via the file /etc/sssd/sssd.conf.<br />
<br />
* At the beginning of this file, the domain to use has to be set. In sssd, a domain is a source of identity information; it is possible to set several domains in order of priority. <br />
* Further down in the file, we will add the configuration of this domain. <br />
<br />
If the file doesn't exist by default, it has to be created; it needs permissions 600 to allow the daemon to start:<br />
cat <<'_EOF' > /etc/sssd/sssd.conf<br />
[sssd]<br />
config_file_version = 2<br />
services = nss, pam<br />
domains = LDAP<br />
<br />
[nss]<br />
<br />
[pam]<br />
<br />
[domain/LDAP]<br />
id_provider = ldap<br />
auth_provider = ldap<br />
ldap_schema = rfc2307<br />
ldap_uri = ldap://sme-server.domain.tld<br />
ldap_default_bind_dn = uid=auth,ou=Users,dc=domain,dc=tld<br />
ldap_default_authtok = something_very_secret<br />
ldap_default_authtok_type = password<br />
ldap_search_base = dc=domain,dc=tld<br />
ldap_user_search_base = ou=Users,dc=domain,dc=tld<br />
ldap_group_search_base = ou=Groups,dc=domain,dc=tld<br />
ldap_user_object_class = inetOrgPerson<br />
ldap_user_gecos = cn<br />
ldap_tls_reqcert = hard<br />
ldap_tls_cacert = /etc/ssl/certs/ca-certificates.crt<br />
ldap_id_use_start_tls = true<br />
# uncomment below if the SME is a “iPasserelle”<br />
#ldap_user_shell = desktopLoginShell<br />
# comment below if the SME is a “iPasserelle”<br />
override_shell = /bin/bash<br />
cache_credentials = true<br />
enumerate = true<br />
# It is possible to filter the logins via an LDAP filter<br />
# by uncommenting the two lines below.<br />
# In this example, only members of the group admins<br />
# and the user backup will be valid on this host.<br />
# posixMemberOf is a parameter only for an iPasserelle<br />
#access_provider = ldap<br />
#ldap_access_filter = (|(posixMemberOf=admins)(uid=backup))<br />
_EOF<br />
<br />
chmod 600 /etc/sssd/sssd.conf<br />
<br />
<br />
{{Tip box|Make sure that the file /etc/ssl/certs/ca-certificates.crt contains the CA that has signed the certificate of the SME (if PHPki is used, a version > 0.82-13 is required).}}<br />
<br />
<br />
===nsswitch===<br />
Edit /etc/nsswitch.conf and add sss for passwd, group and shadow:<br />
passwd: compat sss<br />
group: compat sss<br />
shadow: compat sss<br />
===pam===<br />
cd /etc/pam.d<br />
cp -a common-account common-account.orig<br />
cat <<'EOF'> common-account<br />
#<br />
# /etc/pam.d/common-account - authorization settings common to all services<br />
#<br />
account [success=1 new_authtok_reqd=done default=ignore] pam_unix.so<br />
# here's the fallback if no module succeeds<br />
account requisite pam_deny.so<br />
# prime the stack with a positive return value if there isn't one already;<br />
# this avoids us returning an error just because nothing sets a success code<br />
# since the modules above will each just jump around<br />
account required pam_permit.so<br />
# and here are more per-package modules (the "Additional" block)<br />
session optional pam_mkhomedir.so skel=/etc/skel umask=0077<br />
account [default=bad success=ok user_unknown=ignore] pam_sss.so<br />
EOF<br />
<br />
cp -a common-auth common-auth.orig<br />
cat <<'EOF'> common-auth<br />
#<br />
# /etc/pam.d/common-auth - authentication settings common to all services<br />
#<br />
# here are the per-package modules (the "Primary" block)<br />
auth [success=2 default=ignore] pam_sss.so<br />
auth [success=1 default=ignore] pam_unix.so nullok_secure try_first_pass<br />
# here's the fallback if no module succeeds<br />
auth requisite pam_deny.so<br />
# prime the stack with a positive return value if there isn't one already;<br />
# this avoids us returning an error just because nothing sets a success code<br />
# since the modules above will each just jump around<br />
auth required pam_permit.so<br />
# and here are more per-package modules (the "Additional" block)<br />
EOF<br />
<br />
cp -a common-password common-password.orig<br />
cat <<'EOF'> common-password<br />
#<br />
# /etc/pam.d/common-password - password-related modules common to all services<br />
# here are the per-package modules (the "Primary" block)<br />
password sufficient pam_sss.so<br />
password [success=1 default=ignore] pam_unix.so obscure try_first_pass sha512<br />
# here's the fallback if no module succeeds<br />
password requisite pam_deny.so<br />
# prime the stack with a positive return value if there isn't one already;<br />
# this avoids us returning an error just because nothing sets a success code<br />
# since the modules above will each just jump around<br />
password required pam_permit.so<br />
# and here are more per-package modules (the "Additional" block)<br />
EOF<br />
<br />
cp -a common-session common-session.orig<br />
cat <<'EOF'> common-session<br />
#<br />
# /etc/pam.d/common-session - session-related modules common to all services<br />
# <br />
# here are the per-package modules (the "Primary" block)<br />
session [default=1] pam_permit.so<br />
# here's the fallback if no module succeeds<br />
session requisite pam_deny.so<br />
# prime the stack with a positive return value if there isn't one already;<br />
# this avoids us returning an error just because nothing sets a success code<br />
# since the modules above will each just jump around<br />
session required pam_permit.so<br />
# and here are more per-package modules (the "Additional" block)<br />
session optional pam_mkhomedir.so skel=/etc/skel umask=0077<br />
session optional pam_sss.so<br />
session required pam_unix.so <br />
EOF<br />
<br />
===Enable at startup===<br />
update-rc.d sssd enable<br />
/etc/init.d/sssd start<br />
<br />
[[Category:Howto]]</div>Arnaudhttps://wiki.koozali.org/index.php?title=Client_Authentication:Centos_via_sssd/ldap&diff=29130Client Authentication:Centos via sssd/ldap2015-10-27T20:25:42Z<p>Arnaud: Created page with "{{Warning box|This how-to should be validated by Daniel before you use it!!.}} {{Warning box|This is based upon limited testing and a small number of users.}} ==Introductio..."</p>
<hr />
<div>{{Warning box|This how-to should be validated by Daniel before you use it!}}<br />
<br />
<br />
{{Warning box|This is based upon limited testing and a small number of users.}}<br />
<br />
==Introduction==<br />
This how-to shows how to configure an SME Server (>=8b6) and a CentOS (>= 5) client for LDAP-based SSSD authentication of the client machine against the user accounts configured on the SME. <br />
<br />
The main advantage compared to nss_ldap is that the authentication information is cached, so authentication keeps working even in offline mode (when the server is not available).<br />
<br />
Nevertheless, creating a local user with admin rights is recommended for emergencies.<br />
<br />
These lines are a translation of the method given by Daniel: https://wikit.firewall-services.com/doku.php/tuto/ipasserelle/authentification/centos_sssd_on_sme. Many thanks to him for it.<br />
<br />
In this how-to we assume that the host name of the SME is "sme-server" and the domain is "domain.tld".<br />
<br />
==Configuration of the SME-server==<br />
Almost no configuration of the SME is necessary. <br />
*The only thing to do is to create a user (named "auth" in this how-to) via the server-manager and give it a valid password ("something_very_secret" in this how-to).<br />
It is not required to make "auth" a member of any group.<br />
<br />
*In addition, it is recommended to install and configure [[PHPki]] in order to make the managing of the self-created certificates easier.<br />
<br />
<br />
==Configuration of the client CentOS==<br />
<br />
===Manage the CA of the SME===<br />
After having installed PHPki, go to https://www.domain.tld/phpki and download the certificate authority (ca.crt) to the client machine.<br />
<br />
Place a copy of it or of another CA into /etc/pki/tls/certs/ and give it 644 permissions:<br />
 cp ~/download/ca.crt /etc/pki/tls/certs/<br />
 chmod 644 /etc/pki/tls/certs/ca.crt<br />
<br />
<br />
===Install the required package===<br />
First of all, install the required package:<br />
yum install sssd<br />
<br />
<br />
===Configure SSSD===<br />
The configuration is done via the file /etc/sssd/sssd.conf.<br />
* At the beginning of this file, the domain to use has to be set. In sssd, a domain is a source of identity information; it is possible to set several domains in order of priority. <br />
* Further down in the file, we will add the configuration of this domain. <br />
<br />
If the file doesn't exist by default, it has to be created; it needs permissions 600 to allow the daemon to start:<br />
cat <<'_EOF' > /etc/sssd/sssd.conf<br />
[sssd]<br />
config_file_version = 2<br />
services = nss, pam<br />
domains = LDAP<br />
<br />
[nss]<br />
<br />
[pam]<br />
<br />
[domain/LDAP]<br />
id_provider = ldap<br />
auth_provider = ldap<br />
ldap_schema = rfc2307<br />
ldap_uri = ldap://sme-server.domain.tld<br />
ldap_default_bind_dn = uid=auth,ou=Users,dc=domain,dc=tld<br />
ldap_default_authtok = something_very_secret<br />
ldap_default_authtok_type = password<br />
ldap_search_base = dc=domain,dc=tld<br />
ldap_user_search_base = ou=Users,dc=domain,dc=tld<br />
ldap_group_search_base = ou=Groups,dc=domain,dc=tld<br />
ldap_user_object_class = inetOrgPerson<br />
ldap_user_gecos = cn<br />
ldap_tls_reqcert = hard<br />
ldap_tls_cacert = /etc/pki/tls/certs/ca.crt<br />
ldap_id_use_start_tls = true<br />
# uncomment below if the SME is a “iPasserelle”<br />
#ldap_user_shell = desktopLoginShell<br />
# comment below if the SME is a “iPasserelle”<br />
override_shell = /bin/bash<br />
cache_credentials = true<br />
enumerate = true<br />
# It is possible to filter the logins via an LDAP filter<br />
# by uncommenting the two lines below.<br />
# In this example, only members of the group admins<br />
# and the user backup will be valid on this host.<br />
# posixMemberOf is a parameter only for an iPasserelle<br />
#access_provider = ldap<br />
#ldap_access_filter = (|(posixMemberOf=admins)(uid=backup))<br />
_EOF<br />
<br />
chmod 600 /etc/sssd/sssd.conf<br />
<br />
{{Tip box|Make sure that the file /etc/pki/tls/certs/ca.crt contains the CA that has signed the certificate of the SME (if PHPki is used, a version > 0.82-13 is required).}}<br />
<br />
<br />
===Configure nss===<br />
<br />
To allow nss to use sssd, check that sss is set as a source for users and groups in /etc/nsswitch.conf: <br />
[...]<br />
passwd: files sss<br />
shadow: files sss<br />
group: files sss<br />
[...]<br />
netgroup: files sss<br />
{{Tip box|This has been done automatically since CentOS 7.}}<br />
<br />
===Check===<br />
getent passwd<br />
should show the LDAP users.<br />
If it doesn't work, you should start debugging by running sssd in interactive mode (with sssd -i -d 5 for example).<br />
<br />
<br />
===Configure pam===<br />
pam must be configured in order to use sssd as a source too:<br />
<br />
====CentOS 5====<br />
rm -f /etc/pam.d/system-auth<br />
cat <<'EOF' > /etc/pam.d/system-auth<br />
#%PAM-1.0<br />
# This file is auto-generated.<br />
# User changes will be destroyed the next time authconfig is run.<br />
auth required pam_env.so<br />
auth sufficient pam_unix.so nullok try_first_pass<br />
auth requisite pam_succeed_if.so uid >= 500 quiet<br />
auth sufficient pam_sss.so use_first_pass<br />
auth required pam_deny.so<br />
<br />
account required pam_unix.so<br />
account sufficient pam_succeed_if.so uid < 500 quiet<br />
account [default=bad success=ok user_unknown=ignore] pam_sss.so<br />
account required pam_permit.so<br />
<br />
password requisite pam_cracklib.so try_first_pass retry=3<br />
password sufficient pam_unix.so md5 shadow nullok try_first_pass use_authtok <br />
password sufficient pam_sss.so use_authtok<br />
password required pam_deny.so<br />
<br />
session optional pam_keyinit.so revoke<br />
session required pam_limits.so<br />
session optional pam_mkhomedir.so skel=/etc/skel umask=0077<br />
session [success=1 default=ignore] pam_succeed_if.so service in crond quiet use_uid<br />
session required pam_unix.so<br />
session optional pam_sss.so<br />
EOF<br />
====CentOS 6 / 7====<br />
rm -f /etc/pam.d/system-auth<br />
cat <<'EOF' > /etc/pam.d/system-auth<br />
#%PAM-1.0<br />
# This file is auto-generated.<br />
# User changes will be destroyed the next time authconfig is run.<br />
auth required pam_env.so<br />
auth sufficient pam_unix.so nullok try_first_pass<br />
auth requisite pam_succeed_if.so uid >= 500 quiet<br />
auth sufficient pam_sss.so use_first_pass<br />
auth required pam_deny.so<br />
<br />
account required pam_unix.so<br />
account sufficient pam_succeed_if.so uid < 500 quiet<br />
account [default=bad success=ok user_unknown=ignore] pam_sss.so<br />
account required pam_permit.so<br />
<br />
password requisite pam_cracklib.so try_first_pass retry=3 type=<br />
password sufficient pam_unix.so sha512 shadow nullok try_first_pass use_authtok<br />
password sufficient pam_sss.so use_authtok<br />
password required pam_deny.so<br />
<br />
session optional pam_keyinit.so revoke<br />
session required pam_limits.so<br />
session optional pam_mkhomedir.so skel=/etc/skel umask=0077<br />
session [success=1 default=ignore] pam_succeed_if.so service in crond quiet use_uid<br />
session required pam_unix.so<br />
session optional pam_sss.so<br />
EOF<br />
rm -f /etc/pam.d/password-auth<br />
ln -sf system-auth /etc/pam.d/password-auth<br />
<br />
That's all; it only has to be tested now.<br />
When it works, the sssd daemon should be enabled at startup:<br />
chkconfig sssd on<br />
or<br />
systemctl enable sssd<br />
<br />
<br />
<br />
[[Category:Howto]]</div>Arnaudhttps://wiki.koozali.org/index.php?title=Client_Authentication:Fedora_via_sssd/ldap&diff=29128Client Authentication:Fedora via sssd/ldap2015-10-27T20:11:04Z<p>Arnaud: </p>
<hr />
<div>{{Warning box|This is based upon limited testing and a small number of users.}}<br />
<br />
==Introduction==<br />
This how-to shows how to configure an SME Server (>=8b6) and a Fedora client (tested with F22 for this how-to) for LDAP-based SSSD authentication of the client machine against the user accounts configured on the SME. This way is simpler than the way via winbind [[Client Authentication:Fedora]]. Furthermore, the winbind method doesn't seem to work on Fedora 22.<br />
<br />
SSSD authentication via LDAP allows logging in with the SME user accounts even if the client machine is offline.<br />
Nevertheless, creating a local user with admin rights is recommended for emergencies.<br />
<br />
These lines are a translation and an application of the method given by Daniel: https://wikit.firewall-services.com/doku.php/tuto/ipasserelle/authentification/fedora_sssd_on_sme. Many thanks to him for it.<br />
<br />
In this how-to we assume that the host name of the SME is "sme-server" and the domain is "domain.tld".<br />
<br />
==Configuration of the SME-server==<br />
Almost no configuration of the SME is necessary. <br />
*The only thing to do is to create a user (named "auth" in this how-to) via the server-manager and give it a valid password ("something_very_secret" in this how-to).<br />
It is not required to make "auth" a member of any group.<br />
<br />
*In addition, it is recommended to install and configure [[PHPki]] in order to make managing the certificates easier and to allow Fedora to accept a non-official certificate for authentication.<br />
<br />
<br />
==Configuration of the client Fedora==<br />
<br />
===Installation===<br />
Install the OS as usual, with no special options.<br />
<br />
===Add the necessary packages===<br />
dnf install oddjob-mkhomedir sssd policycoreutils-python<br />
<br />
===Prepare the /home folder===<br />
The “/home” folders of the users will be placed in the same place as on the SME (/home/e-smith/files/users).<br />
This must be configured to avoid SELinux denials:<br />
mkdir -p /home/e-smith/files/users/<br />
semanage fcontext -a -e /home /home/e-smith/files/users<br />
restorecon -R /home/e-smith/files/users<br />
restorecon -R /var/lib/sss<br />
<br />
===Start and enable the oddjobd daemon===<br />
This daemon will create the personal home folders at the first login of the users.<br />
systemctl start oddjobd.service<br />
systemctl enable oddjobd.service<br />
<br />
===Manage the CA of the SME===<br />
After having installed PHPki, go to https://www.domain.tld/phpki and download the certificate authority (ca.crt) to the client machine.<br />
<br />
Place a copy of it or of another CA into /etc/pki/tls/certs/ and give it 644 permissions:<br />
 cp ~/download/ca.crt /etc/pki/tls/certs/<br />
 chmod 644 /etc/pki/tls/certs/ca.crt<br />
<br />
Place another copy of the ca into /etc/pki/ca-trust/source/anchors/ and install the certificate system wide:<br />
cp ~/download/ca.crt /etc/pki/ca-trust/source/anchors/<br />
update-ca-trust<br />
(Note: to remove the CA system-wide, remove ca.crt from /etc/pki/ca-trust/source/anchors/ and run update-ca-trust again. See https://fedoraproject.org/wiki/Features/SharedSystemCertificates:Testing#How_to_add_a_systemwide_CA)<br />
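To confirm the CA really landed in the system-wide store after update-ca-trust, you can query the trust store with p11-kit's trust tool (installed by default on Fedora). "TestCA" below is a placeholder for your CA's actual common name, and the bundle path is the Fedora default.<br />

```shell
# p11-kit's view of the system trust store; the label line should show
# your CA's common name ("TestCA" is a placeholder):
trust list | grep -i -B2 -A2 'testca'
# The extracted PEM bundle regenerated by update-ca-trust also contains it;
# this just counts how many certificates the bundle holds:
grep -c 'BEGIN CERTIFICATE' /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem
```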
<br />
===Configure SSSD===<br />
The configuration is done via the file /etc/sssd/sssd.conf.<br />
This file doesn't exist by default, so it has to be created; it needs permissions 600 to allow the daemon to start:<br />
cat <<'_EOF' > /etc/sssd/sssd.conf<br />
[sssd]<br />
config_file_version = 2<br />
services = nss, pam<br />
domains = LDAP<br />
<br />
[nss]<br />
<br />
[pam]<br />
<br />
[domain/LDAP]<br />
id_provider = ldap<br />
auth_provider = ldap<br />
ldap_schema = rfc2307<br />
ldap_uri = ldap://sme-server.domain.tld<br />
ldap_default_bind_dn = uid=auth,ou=Users,dc=domain,dc=tld<br />
ldap_default_authtok = something_very_secret<br />
ldap_default_authtok_type = password<br />
ldap_search_base = dc=domain,dc=tld<br />
ldap_user_search_base = ou=Users,dc=domain,dc=tld<br />
ldap_group_search_base = ou=Groups,dc=domain,dc=tld<br />
ldap_user_object_class = inetOrgPerson<br />
ldap_user_gecos = cn<br />
ldap_tls_reqcert = hard<br />
ldap_tls_cacert = /etc/pki/tls/certs/ca.crt<br />
ldap_id_use_start_tls = true<br />
# uncomment below if the SME is a “iPasserelle”<br />
#ldap_user_shell = desktopLoginShell<br />
# comment below if the SME is a “iPasserelle”<br />
override_shell = /bin/bash<br />
cache_credentials = true<br />
enumerate = true<br />
# It is possible to filter the logins via an LDAP filter<br />
# by uncommenting the two lines below.<br />
# In this example, only members of the group admins<br />
# and the user backup will be valid on this host.<br />
# posixMemberOf is a parameter only for an iPasserelle<br />
#access_provider = ldap<br />
#ldap_access_filter = (|(posixMemberOf=admins)(uid=backup))<br />
_EOF<br />
<br />
chmod 600 /etc/sssd/sssd.conf<br />
<br />
{{Tip box|Make sure that the file /etc/pki/tls/certs/ca.crt contains the CA that has signed the certificate of the SME (if PHPki is used, a version > 0.82-13 is required).}}<br />
<br />
===Start and enable the daemon sssd===<br />
<br />
*This daemon must be started automatically when the client machine boots:<br />
systemctl start sssd.service<br />
systemctl enable sssd.service<br />
<br />
{{Tip box|It is possible to debug by starting sssd manually:<br />
systemctl stop sssd.service<br />
sssd -i -d5<br />
}}<br />
<br />
*You should check that sss is set as a source in /etc/nsswitch.conf. Normally this has been done automatically:<br />
[...]<br />
passwd: files sss<br />
shadow: files sss<br />
group: files sss<br />
[...]<br />
<br />
===General configuration===<br />
*The system must be configured in order to use sssd.<br />
This is done very easily by using the tool authconfig:<br />
authconfig --enablemkhomedir --enablesssd --enablesssdauth --update<br />
<br />
*Check that the SME users are visible:<br />
 getent passwd <br />
should show you the users of the SME.<br />
<br />
'''''If it doesn't work:'''''<br />
*check first of all that the password of the user “auth” is given in clear text in the “ldap_default_authtok” parameter of /etc/sssd/sssd.conf.<br />
*Furthermore, for tests and debugging, you can set “ldap_tls_reqcert = never” to avoid problems due to a non-accepted CA.<br />
{{Warning box|It is not recommended to use this parameter for a production server!}}<br />
<br />
{{Tip box|It is possible to give admin permissions to LDAP-users: on Fedora, members of the special, local "wheel" group will be admin, so just add your LDAP users to this group:<br />
gpasswd -a <your ldap login> wheel}}<br />
<br />
=== Automount of the ibays and of the user folders ===<br />
* Make sure that both rpms pam_mount and cifs-utils are installed<br />
 sudo dnf install pam_mount cifs-utils<br />
* Modify /etc/pam.d/lightdm (may be /etc/pam.d/gdm-password with GNOME instead of XFCE) as follows: <br />
#%PAM-1.0<br />
 auth [success=done ignore=ignore default=bad] pam_selinux_permit.so<br />
auth required pam_env.so<br />
auth substack system-auth<br />
-auth optional pam_gnome_keyring.so<br />
-auth optional pam_kwallet.so<br />
auth optional pam_mount.so<br />
auth include postlogin<br />
<br />
account required pam_nologin.so<br />
account include system-auth<br />
<br />
password include system-auth<br />
<br />
session required pam_selinux.so close<br />
session required pam_loginuid.so<br />
session optional pam_console.so<br />
-session optional pam_ck_connector.so<br />
session required pam_selinux.so open<br />
session optional pam_keyinit.so force revoke<br />
session required pam_namespace.so<br />
-session optional pam_gnome_keyring.so auto_start<br />
-session optional pam_kwallet.so<br />
session include system-auth<br />
session optional pam_lastlog.so silent<br />
session optional pam_mount.so<br />
session include postlogin <br />
by adding:<br />
session optional pam_mount.so<br />
session include postlogin<br />
at the end of the file. <br />
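The edit above can also be applied from a script, which avoids adding the lines twice. A minimal sketch, run here against a temporary stand-in file; on a real client you would point PAM_FILE at /etc/pam.d/lightdm and run it as root:<br />

```shell
# Append the two pam_mount session lines to a PAM service file,
# but only if they are not already present.
PAM_FILE=$(mktemp)                  # stand-in for /etc/pam.d/lightdm
printf 'auth     substack      system-auth\nsession  include       system-auth\n' > "$PAM_FILE"

add_line() {
  grep -qF "$1" "$PAM_FILE" || echo "$1" >> "$PAM_FILE"
}
add_line "session    optional      pam_mount.so"
add_line "session    include       postlogin"

# Running it again must not duplicate the lines:
add_line "session    optional      pam_mount.so"
grep -c "pam_mount.so" "$PAM_FILE"  # expect: 1
```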
* Edit the /etc/security/pam_mount.conf.xml file. Find the 'Volume Definitions' section and add after it:<br />
<volume fstype="cifs" server="192.168.2.5" path="ibay1" mountpoint="/media/sme/ibay1" user="*" options="rw,auto,iocharset=utf8" /><br />
<volume fstype="cifs" server="192.168.2.5" path="ibay2" mountpoint="/media/sme/ibay2" user="*" options="rw,auto,iocharset=utf8" /><br />
<!-- .........etc....etc... --> <br />
<volume fstype="cifs" server="192.168.2.5" path="%(USER)" mountpoint="/media/sme/%(USER)/" user="*" options="rw,auto,iocharset=utf8" /><br />
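With many ibays, the volume definitions can be generated rather than typed by hand. A minimal sketch using the example server address and share names from above; note that %(USER) is a pam_mount placeholder and must be emitted literally:<br />

```shell
# Generate pam_mount <volume> definitions for a list of ibays.
SERVER="192.168.2.5"
volumes=""
for share in ibay1 ibay2; do
  volumes="$volumes<volume fstype=\"cifs\" server=\"$SERVER\" path=\"$share\" mountpoint=\"/media/sme/$share\" user=\"*\" options=\"rw,auto,iocharset=utf8\" />
"
done
# The per-user home share uses pam_mount's %(USER) placeholder literally:
volumes="$volumes<volume fstype=\"cifs\" server=\"$SERVER\" path=\"%(USER)\" mountpoint=\"/media/sme/%(USER)/\" user=\"*\" options=\"rw,auto,iocharset=utf8\" />"
printf '%s\n' "$volumes"
```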
* Create all the mount points configured above:<br />
sudo mkdir /media/sme/ibay1<br />
sudo chmod 777 /media/sme/ibay1<br />
sudo mkdir /media/sme/ibay2<br />
sudo chmod 777 /media/sme/ibay2<br />
etc...<br />
<br />
sudo mkdir /media/sme/user1<br />
sudo chmod 777 /media/sme/user1<br />
sudo mkdir /media/sme/user2<br />
sudo chmod 777 /media/sme/user2<br />
etc... <br />
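The repeated mkdir/chmod commands above can be folded into a loop. A minimal sketch that uses a temporary base directory so it runs without root; on a real client you would set BASE=/media/sme and run it with sudo:<br />

```shell
# Create world-writable mount points for each share.
BASE=$(mktemp -d)                # stand-in for /media/sme
for share in ibay1 ibay2 user1 user2; do
  mkdir -p "$BASE/$share"
  chmod 777 "$BASE/$share"
done
ls "$BASE"
```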
<br />
{{note box|With Fedora 21, it was sufficient to create and chmod 777 only /media/sme; the different mount points (ibay1, ibay2, etc.) were created automatically by Fedora at mount time. It seems that this is no longer possible!<br />
Please inform us if there is a way to get this behaviour back.}}<br />
{{Tip box|Even if all the mount points are created, the individual shares (ibays and user folders) are only mounted according to the permissions of the user.}}<br />
{{Tip box|The shares are automatically unmounted when the user logs out.}}<br />
That's all. <br />
Enjoy!<br />
[[Category:Howto]]</div>Arnaudhttps://wiki.koozali.org/index.php?title=Client_Authentication:Fedora_via_sssd/ldap&diff=29127Client Authentication:Fedora via sssd/ldap2015-10-27T19:55:24Z<p>Arnaud: </p>
<hr />
<div>{{Warning box|This is based upon limited testing and a small number of users.}}<br />
<br />
==Introduction==<br />
This how-to shows how to configure an SME Server (>=8b6) and a Fedora client (tested with F22 for this how-to) for LDAP-based SSSD authentication of the client machine against the user accounts configured on the SME. This method is simpler than the winbind method [[Client Authentication:Fedora]]. Furthermore, the winbind method doesn't seem to work on Fedora 22.<br />
<br />
SSSD authentication via LDAP allows logging in with the SME user accounts even if the client machine is offline.<br />
Nevertheless, I recommend creating a local user with admin rights for emergencies.<br />
<br />
These lines are a translation and an application of the method given by Daniel: https://wikit.firewall-services.com/doku.php/tuto/ipasserelle/authentification/fedora_sssd_on_sme. Many thanks to him for it.<br />
<br />
In this how-to, we assume that<br />
the host name of the SME is "sme-server" and the domain is "domain.tld".<br />
<br />
==Configuration of the SME-server==<br />
Almost no configuration of the SME is necessary. <br />
*The only thing to do is to create a user (named "auth" in this how-to) via the server-manager and give him a valid password ("something_very_secret" in this how-to).<br />
It is not required to make "auth" a member of any group.<br />
<br />
*In addition, it is recommended to install and configure [[PHPki]] in order to make managing the certificates easier and to allow Fedora to accept a non-official certificate for authentication.<br />
<br />
<br />
==Configuration of the client Fedora==<br />
<br />
===Installation===<br />
Install the OS as usual, without any particularity.<br />
<br />
===Add the necessary packages===<br />
dnf install oddjob-mkhomedir sssd policycoreutils-python<br />
<br />
===Prepare the /home folder===<br />
The “/home” folders of the users will be placed in the same location as on the SME (/home/e-smith/files/users).<br />
This must be configured to avoid blocks from SELinux:<br />
mkdir -p /home/e-smith/files/users/<br />
semanage fcontext -a -e /home /home/e-smith/files/users<br />
restorecon -R /home/e-smith/files/users<br />
restorecon -R /var/lib/sss<br />
<br />
===Activate and enable the oddjobd daemon===<br />
This daemon will create the personal /home folders at the first login of the users.<br />
systemctl start oddjobd.service<br />
systemctl enable oddjobd.service<br />
<br />
===Manage the CA of the SME===<br />
After having installed PHPki, go to https://www.domain.tld/phpki and download the certificate authority certificate (ca.crt) onto the client machine.<br />
<br />
Place a copy of it or of another CA into /etc/pki/tls/certs/ and give it 644 permissions:<br />
cp ~/download/ca.crt /etc/pki/tls/certs/<br />
chmod 644 /etc/pki/tls/certs/ca.crt<br />
<br />
Place another copy of the ca into /etc/pki/ca-trust/source/anchors/ and install the certificate system wide:<br />
cp ~/download/ca.crt /etc/pki/ca-trust/source/anchors/<br />
update-ca-trust<br />
(Note: to remove the CA system-wide, remove ca.crt from /etc/pki/ca-trust/source/anchors/ and run update-ca-trust again. See https://fedoraproject.org/wiki/Features/SharedSystemCertificates:Testing#How_to_add_a_systemwide_CA)<br />
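Before trusting the downloaded ca.crt, it can be worth inspecting its subject and validity period with the openssl CLI. A minimal sketch; a throwaway CA is generated here so the snippet is self-contained, whereas on the client you would inspect the real ca.crt:<br />

```shell
workdir=$(mktemp -d)
# Generate a throwaway self-signed CA to stand in for the downloaded ca.crt:
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=Example CA" \
  -keyout "$workdir/ca.key" -out "$workdir/ca.crt" 2>/dev/null

# Inspect the subject and validity period before trusting it:
openssl x509 -in "$workdir/ca.crt" -noout -subject -enddate
```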
<br />
===Configure SSSD===<br />
The configuration is done via the file /etc/sssd/sssd.conf.<br />
This file doesn't exist by default. Therefore it has to be created, and it needs permissions 600 to allow the daemon to start:<br />
cat <<'_EOF' > /etc/sssd/sssd.conf<br />
[sssd]<br />
config_file_version = 2<br />
services = nss, pam<br />
domains = LDAP<br />
<br />
[nss]<br />
<br />
[pam]<br />
<br />
[domain/LDAP]<br />
id_provider = ldap<br />
auth_provider = ldap<br />
ldap_schema = rfc2307<br />
ldap_uri = ldap://sme-server.domain.tld<br />
ldap_default_bind_dn = uid=auth,ou=Users,dc=domain,dc=tld<br />
ldap_default_authtok = something_very_secret<br />
ldap_default_authtok_type = password<br />
ldap_search_base = dc=domain,dc=tld<br />
ldap_user_search_base = ou=Users,dc=domain,dc=tld<br />
ldap_group_search_base = ou=Groups,dc=domain,dc=tld<br />
ldap_user_object_class = inetOrgPerson<br />
ldap_user_gecos = cn<br />
ldap_tls_reqcert = hard<br />
ldap_tls_cacert = /etc/pki/tls/certs/ca.crt<br />
ldap_id_use_start_tls = true<br />
# uncomment below if the SME is an “iPasserelle”<br />
#ldap_user_shell = desktopLoginShell<br />
# comment out below if the SME is an “iPasserelle”<br />
override_shell = /bin/bash<br />
cache_credentials = true<br />
enumerate = true<br />
# It is possible to filter the logins via an LDAP filter<br />
# by uncommenting the two lines below.<br />
# In this example, only members of the group admins<br />
# and the user backup will be valid on this host.<br />
# posixMemberOf is a parameter only for an iPasserelle<br />
#access_provider = ldap<br />
#ldap_access_filter = (|(posixMemberOf=admins)(uid=backup))<br />
_EOF<br />
<br />
chmod 600 /etc/sssd/sssd.conf<br />
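Since sssd refuses to start without the 600 permissions, the check can be scripted. A minimal sketch, run here against a temporary stand-in file; on a real client you would set CONF=/etc/sssd/sssd.conf:<br />

```shell
# Verify that an sssd.conf candidate has owner-only (600) permissions.
CONF=$(mktemp)                   # stand-in for /etc/sssd/sssd.conf
printf '[sssd]\nconfig_file_version = 2\n' > "$CONF"
chmod 600 "$CONF"

mode=$(stat -c '%a' "$CONF")
if [ "$mode" = "600" ]; then
  echo "permissions OK ($mode)"
else
  echo "wrong permissions: $mode (sssd will refuse to start)"
fi
```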
<br />
{{Tip box|Make sure that the file /etc/pki/tls/certs/ca.crt contains the CA that has signed the certificate of the SME (if PHPki is used, version > 0.82-13 is required).}}<br />
<br />
===Start and enable the daemon sssd===<br />
<br />
*This daemon must be started automatically when the client machine boots:<br />
systemctl start sssd.service<br />
systemctl enable sssd.service<br />
<br />
{{Tip box|It is possible to debug by starting sssd manually:<br />
systemctl stop sssd.service<br />
sssd -i -d5<br />
}}<br />
<br />
*Check that sss is set as a source in /etc/nsswitch.conf. Normally this has been done automatically:<br />
[...]<br />
passwd: files sss<br />
shadow: files sss<br />
group: files sss<br />
[...]<br />
<br />
===General configuration===<br />
*The system must be configured in order to use sssd.<br />
This is done very easily by using the tool authconfig:<br />
authconfig --enablemkhomedir --enablesssd --enablesssdauth --update<br />
<br />
*Running<br />
getent passwd<br />
should now show the users of the SME.<br />
<br />
'''''If it doesn't work:'''''<br />
*First of all, check that the password of the user “auth” is present in clear text in the “ldap_default_authtok” parameter of /etc/sssd/sssd.conf.<br />
*Furthermore, for testing and debugging, you can set “ldap_tls_reqcert = never” to avoid problems due to a non-accepted CA.<br />
{{Warning box|It is not recommended to use this parameter for a production server!}}<br />
<br />
{{Tip box|It is possible to give admin permissions to LDAP-users: on Fedora, members of the special, local "wheel" group will be admin, so just add your LDAP users to this group:<br />
gpasswd -a <your ldap login> wheel}}<br />
<br />
=== Automount of the ibays and of the user folders ===<br />
* Make sure that both the pam_mount and cifs-utils rpms are installed:<br />
sudo dnf install pam_mount cifs-utils<br />
* Modify /etc/pam.d/lightdm (may be /etc/pam.d/gdm-password with GNOME instead of XFCE) as follows: <br />
#%PAM-1.0<br />
auth [success=done ignore=ignore default=bad] pam_selinux_permit.so<br />
auth required pam_env.so<br />
auth substack system-auth<br />
-auth optional pam_gnome_keyring.so<br />
-auth optional pam_kwallet.so<br />
auth optional pam_mount.so<br />
auth include postlogin<br />
<br />
account required pam_nologin.so<br />
account include system-auth<br />
<br />
password include system-auth<br />
<br />
session required pam_selinux.so close<br />
session required pam_loginuid.so<br />
session optional pam_console.so<br />
-session optional pam_ck_connector.so<br />
session required pam_selinux.so open<br />
session optional pam_keyinit.so force revoke<br />
session required pam_namespace.so<br />
-session optional pam_gnome_keyring.so auto_start<br />
-session optional pam_kwallet.so<br />
session include system-auth<br />
session optional pam_lastlog.so silent<br />
session optional pam_mount.so<br />
session include postlogin <br />
by adding:<br />
session optional pam_mount.so<br />
session include postlogin<br />
at the end of the file. <br />
* Edit the /etc/security/pam_mount.conf.xml file. Find the 'Volume Definitions' section and add after it:<br />
<volume fstype="cifs" server="192.168.2.5" path="ibay1" mountpoint="/media/sme/ibay1" user="*" options="rw,auto,iocharset=utf8" /><br />
<volume fstype="cifs" server="192.168.2.5" path="ibay2" mountpoint="/media/sme/ibay2" user="*" options="rw,auto,iocharset=utf8" /><br />
<!-- .........etc....etc... --> <br />
<volume fstype="cifs" server="192.168.2.5" path="%(USER)" mountpoint="/media/sme/%(USER)/" user="*" options="rw,auto,iocharset=utf8" /><br />
* Create all the mount points configured above:<br />
sudo mkdir /media/sme/ibay1<br />
sudo chmod 777 /media/sme/ibay1<br />
sudo mkdir /media/sme/ibay2<br />
sudo chmod 777 /media/sme/ibay2<br />
etc...<br />
<br />
sudo mkdir /media/sme/user1<br />
sudo chmod 777 /media/sme/user1<br />
sudo mkdir /media/sme/user2<br />
sudo chmod 777 /media/sme/user2<br />
etc... <br />
<br />
{{note box|With Fedora 21, it was sufficient to create and chmod 777 only /media/sme; the different mount points (ibay1, ibay2, etc.) were created automatically by Fedora at mount time. It seems that this is no longer possible!<br />
Please inform us if there is a way to get this behaviour back.}}<br />
{{note box|Even if all the mount points are created, the individual shares (ibays and user folders) are only mounted according to the permissions of the user.}}<br />
{{note box|The shares are automatically unmounted when the user logs out.}}<br />
That's all. <br />
Enjoy!<br />
[[Category:Howto]]</div>Arnaudhttps://wiki.koozali.org/index.php?title=Client_Authentication:Fedora_via_sssd/ldap&diff=29102Client Authentication:Fedora via sssd/ldap2015-10-23T18:52:52Z<p>Arnaud: </p>
<hr />
<div>{{Warning box|This is based upon limited testing and a small number of users.}}<br />
<br />
==Introduction==<br />
This how-to shows how to configure a SME-server (>=8b6) and a client Fedora (tested with F22 for this how-to) for a LDAP based SSSD authentication of the client machine on the configured user accounts of the SME. This way is more simple that the way via winbind [[Client Authentication:Fedora]]. Further more, the winbind method doesn't seem to work on a Fedora 22.<br />
<br />
The SSSD authentication via LDAP allows to login with the SME-user accounts, even if the client machine is offline.<br />
Nevertheless, I recommend to create a local user with the admin rights for the emergency case.<br />
<br />
These lines are a translation and an application of the method given by Daniel: https://wikit.firewall-services.com/doku.php/tuto/ipasserelle/authentification/fedora_sssd_on_sme. Many thanks to him for it.<br />
<br />
In this how-to: we assume that:<br />
the host name of the SME is "sme-server" and the domain is "domain.tld".<br />
<br />
==Configuration of the SME-server==<br />
There is quite no necessary configuration of the SME. <br />
*The only thing to do is to create a user (named "auth" in this how-to) via the server-manager and to give him a valid password ("something_very_secret" in the how-to).<br />
It is not required to make "auth" member of any group.<br />
<br />
*In addition, it is recommended to install and configure [[PHPki]] in order to make the managing of the certificates easier and to allow Fedora to accept a non-official certificate of the authentication.<br />
<br />
<br />
==Configuration of the client Fedora==<br />
<br />
===Installation===<br />
Install the OS as usual, without any particularity.<br />
<br />
===Add the necessary packages===<br />
dnf install oddjob-mkhomedir sssd policycoreutils-python<br />
<br />
===Prepare the /home folder===<br />
The “/home” folders of the users will be placed at the same place as on the SME (/home/e-smith/fies/users).<br />
This must be configured to avoid blocks from SELinux:<br />
mkdir -p /home/e-smith/files/users/<br />
semanage fcontext -a -e /home /home/e-smith/files/users<br />
restorecon -R /home/e-smith/files/users<br />
restorecon -R /var/lib/sss<br />
<br />
===Active and enable the oddjobd daemon===<br />
This daemon will create the personal /home folders at the first login of the users.<br />
systemctl start oddjobd.service<br />
systemctl enable oddjobd.service<br />
<br />
===Manage the CA of the SME===<br />
after having installed phpki, go to https://www.domain.tld/phpki and download on the client machine the certificate of authority (ca.crt).<br />
<br />
Place a copy of it into /etc/phpki/tls/certs/ and give the 644 permissions:<br />
cp ~/download/ca.crt /etc/phpki/tls/certs/<br />
chmod 644 /etc/phpki/tls/certs/ca.crt<br />
<br />
Place another copy of the ca into /etc/pki/ca-trust/source/anchors/ and install the certificate system wide:<br />
cp ~/download/ca.crt /etc/pki/ca-trust/source/anchors/<br />
update-ca-trust<br />
(Note: to remove the ca systemwide: remove ca.crt from /etc/pki/ca-trust/source/anchors/ and run update-ca-trust again. See https://fedoraproject.org/wiki/Features/SharedSystemCertificates:Testing#How_to_add_a_systemwide_CA)<br />
<br />
===Configure SSSD===<br />
The configuration is made by the file /ets/sssd/sssd.conf.<br />
This file doesn't exist by default. Therefore it has to be created and it needs to get the permissions 600 to allow the daemon to start:<br />
cat <<'_EOF' > /etc/sssd/sssd.conf<br />
[sssd]<br />
config_file_version = 2<br />
services = nss, pam<br />
domains = LDAP<br />
<br />
[nss]<br />
<br />
[pam]<br />
<br />
[domain/LDAP]<br />
id_provider = ldap<br />
auth_provider = ldap<br />
ldap_schema = rfc2307<br />
ldap_uri = ldap://sme-server.domain.tld<br />
ldap_default_bind_dn = uid=auth,ou=Users,dc=domain,dc=tld<br />
ldap_default_authtok = something_very_secret<br />
ldap_default_authtok_type = password<br />
ldap_search_base = dc=domain,dc=tld<br />
ldap_user_search_base = ou=Users,dc=domain,dc=tld<br />
ldap_group_search_base = ou=Groups,dc=domain,dc=tld<br />
ldap_user_object_class = inetOrgPerson<br />
ldap_user_gecos = cn<br />
ldap_tls_reqcert = hard<br />
ldap_tls_cacert = /etc/pki/tls/certs/ca.crt<br />
ldap_id_use_start_tls = true<br />
# uncomment below if the SME is a “iPasserelle”<br />
#ldap_user_shell = desktopLoginShell<br />
# comment below if the SME is a “iPasserelle”<br />
override_shell = /bin/bash<br />
cache_credentials = true<br />
enumerate = true<br />
# It is possible to filter the logins via a LDAP-filer<br />
# by commenting the both lines below.<br />
# In this exemple, only the users member of the group netusers<br />
# will be valid on this host.<br />
# posixMemberOF is a parameter only for a iPasserelle<br />
#access_provider = ldap<br />
#ldap_access_filter = (|(posixMemberOf=admins)(uid=backup))<br />
_EOF<br />
<br />
chmod 600 /etc/sssd/sssd.conf<br />
<br />
===Start and enable the daemon sssd===<br />
<br />
*This daemon must be automatically started by running the client machine:<br />
systemctl start sssd.service<br />
systemctl enable sssd.service<br />
<br />
{{Tip box|It is possible to debug by starting sssd manually:<br />
systemctl stop sssd.service<br />
sssd -i -d5<br />
}}<br />
<br />
*You should check that sss is set as a source in the /etc/nsswitch.conf. Normally it has been done automatically<br />
[...]<br />
passwd: files sss<br />
shadow: files sss<br />
group: files sss<br />
[...]<br />
<br />
===General configuration===<br />
*The system must be configured in order to use sssd.<br />
This is done very easily by using the tool authconfig:<br />
authconfig --enablemkhomedir --enablesssd --enablesssdauth --update<br />
<br />
*<br />
getent passwd <br />
should show you the users of the SME.<br />
<br />
'''''If it doesn't work:'''''<br />
*check first of all if the password of the user “auth” is present in clear by the parameter “ldap_default_authtok” of /etc/sssd/sssd.conf.<br />
*Further more, for tests and debug, you can set “ldap_tls_reqcert = never” to avoid problems due to a non accepted CA.<br />
{{Warning box|It is not recommended to use this parameter for a production server!}}<br />
<br />
{{Tip box|It is possible to give admin permissions to LDAP-users: on Fedora, members of the special, local "wheel" group will be admin, so just add your LDAP users to this group:<br />
gpasswd -a <your ldap login> wheel}}<br />
<br />
That's all. <br />
Enjoy!<br />
[[Category:Howto]]</div>Arnaudhttps://wiki.koozali.org/index.php?title=Client_Authentication:Fedora_via_sssd/ldap&diff=29000Client Authentication:Fedora via sssd/ldap2015-10-18T13:59:09Z<p>Arnaud: </p>
<hr />
<div>{{Warning box|This is based upon limited testing and a small number of users.}}<br />
<br />
==Introduction==<br />
This how-to shows how to configure a SME-server (>=8b6) and a client Fedora (tested with F22 for this how-to) for a LDAP based SSSD authentication of the client machine on the configured user accounts of the SME. This way is more simple that the way via winbind [[Client Authentication:Fedora]]. Further more, the winbind method doesn't seem to work on a Fedora 22.<br />
<br />
The SSSD authentication via LDAP allows to login with the SME-user accounts, even if the client machine is offline.<br />
Nevertheless, I recommend to create a local user with the admin rights for the emergency case.<br />
<br />
These lines are a translation and an application of the method given by Daniel: https://wikit.firewall-services.com/doku.php/tuto/ipasserelle/authentification/fedora_sssd_on_sme. Many thanks to him for it.<br />
<br />
In this how-to: we assume that:<br />
the host name of the SME is "sme-server" and the domain is "domain.tld".<br />
<br />
==Configuration of the SME-server==<br />
There is quite no necessary configuration of the SME. <br />
*The only thing to do is to create a user (named "auth" in this how-to) via the server-manager and to give him a valid password ("something_very_secret" in the how-to).<br />
It is not required to make "auth" member of any group.<br />
<br />
*In addition, it is recommended to install and configure [[PHPki]] in order to make the managing of the certificates easier and to allow Fedora to accept a non-official certificate of the authentication.<br />
<br />
<br />
==Configuration of the client Fedora==<br />
<br />
===Installation===<br />
Install the OS as usual, without any particularity.<br />
<br />
===Add the necessary packages===<br />
dnf install oddjob-mkhomedir sssd policycoreutils-python<br />
<br />
===Preparare the /home folder===<br />
The “/home” folders of the users will be placed at the same place as on the SME (/home/e-smith/fies/users).<br />
This must be configured to avoid blocks from SELinux:<br />
mkdir -p /home/e-smith/files/users/<br />
semanage fcontext -a -e /home /home/e-smith/files/users<br />
restorecon -R /home/e-smith/files/users<br />
restorecon -R /var/lib/sss<br />
<br />
===Acive and enable the oddjobd daemon===<br />
This deamon will create the personal /home folders at the first login of the users.<br />
systemctl start oddjobd.service<br />
systemctl enable oddjobd.service<br />
<br />
===Manage the CA of the SME===<br />
after having installed phpki, go to https://www.domain.tld/phpki and download on the client machine the certificate of authority (ca.crt).<br />
<br />
Place a copy of it into /etc/phpki/tls/certs/ and give the 644 permissions:<br />
cp ~/download/ca.crt /etc/phpki/tls/certs/<br />
chmod 644 /etc/phpki/tls/certs/ca.crt<br />
<br />
Place another copy of the ca into /etc/pki/ca-trust/source/anchors/ and install the certificate system wide:<br />
cp ~/download/ca.crt /etc/pki/ca-trust/source/anchors/<br />
update-ca-trust<br />
(Note: to remove the ca systemwide: remove ca.crt from /etc/pki/ca-trust/source/anchors/ and run update-ca-trust again. See https://fedoraproject.org/wiki/Features/SharedSystemCertificates:Testing#How_to_add_a_systemwide_CA)<br />
<br />
===Configure SSSD===<br />
The configuration is made by the file /ets/sssd/sssd.conf.<br />
This file doesn't exist by default. Therefore it has to be created and it needs to get the permissions 600 to allow the daemon to start:<br />
cat <<'_EOF' > /etc/sssd/sssd.conf<br />
[sssd]<br />
config_file_version = 2<br />
services = nss, pam<br />
domains = LDAP<br />
<br />
[nss]<br />
<br />
[pam]<br />
<br />
[domain/LDAP]<br />
id_provider = ldap<br />
auth_provider = ldap<br />
ldap_schema = rfc2307<br />
ldap_uri = ldap://sme-server.domain.tld<br />
ldap_default_bind_dn = uid=auth,ou=Users,dc=domain,dc=tld<br />
ldap_default_authtok = something_very_secret<br />
ldap_default_authtok_type = password<br />
ldap_search_base = dc=domain,dc=tld<br />
ldap_user_search_base = ou=Users,dc=domain,dc=tld<br />
ldap_group_search_base = ou=Groups,dc=domain,dc=tld<br />
ldap_user_object_class = inetOrgPerson<br />
ldap_user_gecos = cn<br />
ldap_tls_reqcert = hard<br />
ldap_tls_cacert = /etc/pki/tls/certs/ca.crt<br />
ldap_id_use_start_tls = true<br />
# uncomment below if the SME is a “iPasserelle”<br />
#ldap_user_shell = desktopLoginShell<br />
# comment below if the SME is a “iPasserelle”<br />
override_shell = /bin/bash<br />
cache_credentials = true<br />
enumerate = true<br />
# It is possible to filter the logins via a LDAP-filer<br />
# by commenting the both lines below.<br />
# In this exemple, only the users member of the group netusers<br />
# will be valid on this host.<br />
# posixMemberOF is a parameter only for a iPasserelle<br />
#access_provider = ldap<br />
#ldap_access_filter = (|(posixMemberOf=admins)(uid=backup))<br />
_EOF<br />
<br />
chmod 600 /etc/sssd/sssd.conf<br />
<br />
===Start and enable the daemon sssd===<br />
<br />
*This daemon must be automatically started by running the client machine:<br />
systemctl start sssd.service<br />
systemctl enable sssd.service<br />
<br />
{{Tip box|It is possible to debug by starting sssd manually:<br />
systemctl stop sssd.service<br />
sssd -i -d5<br />
}}<br />
<br />
*You should check that sss is set as a source in the /etc/nsswitch.conf. Normally it has been done automatically<br />
[...]<br />
passwd: files sss<br />
shadow: files sss<br />
group: files sss<br />
[...]<br />
<br />
===General configuration===<br />
*The system must be configured in order to use sssd.<br />
This is done very easily by using the tool authconfig:<br />
authconfig --enablemkhomedir --enablesssd --enablesssdauth --update<br />
<br />
*<br />
getent passwd <br />
should show you the users of the SME.<br />
<br />
'''''If it doen't work:'''''<br />
*check first of all if the password of the user “auth” is present in clear by the parameter “ldap_default_authtok” of /etc/sssd/sssd.conf.<br />
*Further more, for tests and debug, you can set “ldap_tls_reqcert = never” to avoid problems due to a non accepted CA.<br />
{{Warning box|It is not recommended to use this parameter for a production server!}}<br />
<br />
{{Tip box|It is possible to give admin permissions to LDAP-users: on Fedora, members of the special, local "wheel" group will be admin, so just add your LDAP users to this group:<br />
gpasswd -a <your ldap login> wheel}}<br />
<br />
Thats all. <br />
Enjoy!<br />
[[Category:Howto]]</div>Arnaudhttps://wiki.koozali.org/index.php?title=Client_Authentication:Fedora_via_sssd/ldap&diff=28999Client Authentication:Fedora via sssd/ldap2015-10-18T13:57:22Z<p>Arnaud: Created page with "{{Warning box|This is based upon limited testing and a small number of users.}} ==Introduction== This how-to shows how to configure a SME-server (>=8b6) and a client Fedora (..."</p>
<hr />
<div>{{Warning box|This is based upon limited testing and a small number of users.}}<br />
<br />
==Introduction==<br />
This how-to shows how to configure a SME-server (>=8b6) and a client Fedora (tested with F22 for this how-to) for a LDAP based SSSD authentication of the client machine on the configured user accounts of the SME. This way is more simple that the way via winbind [[Client Authentication:Fedora]]. Further more, the winbind method doesn't seem to work on a Fedora 22.<br />
<br />
The SSSD authentication via LDAP allows to login with the SME-user accounts, even if the client machine is offline.<br />
Nevertheless, I recommend to create a local user with the admin rights for the emergency case.<br />
<br />
These lines are a translation and an application of the method given by Daniel: https://wikit.firewall-services.com/doku.php/tuto/ipasserelle/authentification/fedora_sssd_on_sme. Many thanks to him for it.<br />
<br />
In this how-to: we assume that:<br />
the host name of the SME is "sme-server" and the domain is "domain.tld".<br />
<br />
==Configuration of the SME-server==<br />
There is quite no necessary configuration of the SME. <br />
*The only thing to do is to create a user (named "auth" in this how-to) via the server-manager and to give him a valid password ("something_very_secret" in the how-to).<br />
It is not required to make "auth" member of any group.<br />
<br />
*In addition, it is recommended to install and configure [[PHPki]] in order to make the managing of the certificates easier and to allow Fedora to accept a non-official certificate of the authentication.<br />
<br />
<br />
==Configuration of the client Fedora==<br />
<br />
===Installation===<br />
Install the OS as usual, without any particularity.<br />
<br />
===Add the necessary packages===<br />
dnf install oddjob-mkhomedir sssd policycoreutils-python<br />
<br />
===Preparare the /home folder===<br />
The “/home” folders of the users will be placed at the same place as on the SME (/home/e-smith/fies/users).<br />
This must be configured to avoid blocks from SELinux:<br />
mkdir -p /home/e-smith/files/users/<br />
semanage fcontext -a -e /home /home/e-smith/files/users<br />
restorecon -R /home/e-smith/files/users<br />
restorecon -R /var/lib/sss<br />
<br />
===Acive and enable the oddjobd daemon===<br />
This deamon will create the personal /home folders at the first login of the users.<br />
systemctl start oddjobd.service<br />
systemctl enable oddjobd.service<br />
<br />
===Manage the CA of the SME===<br />
after having installed phpki, go to https://www.domain.tld/phpki and download on the client machine the certificate of authority (ca.crt).<br />
<br />
Place a copy of it into /etc/phpki/tls/certs/ and give the 644 permissions:<br />
cp ~/download/ca.crt /etc/phpki/tls/certs/<br />
chmod 644 /etc/phpki/tls/certs/ca.crt<br />
<br />
Place another copy of the ca into /etc/pki/ca-trust/source/anchors/ and install the certificate system wide:<br />
cp ~/download/ca.crt /etc/pki/ca-trust/source/anchors/<br />
update-ca-trust<br />
(Note: to remove the ca systemwide: remove ca.crt from /etc/pki/ca-trust/source/anchors/ and run update-ca-trust again. See https://fedoraproject.org/wiki/Features/SharedSystemCertificates:Testing#How_to_add_a_systemwide_CA)<br />
<br />
===Configure SSSD===<br />
The configuration is made by the file /ets/sssd/sssd.conf.<br />
This file doesn't exist by default. Therefore it has to be created and it needs to get the permissions 600 to allow the daemon to start:<br />
cat <<'_EOF' > /etc/sssd/sssd.conf<br />
[sssd]<br />
config_file_version = 2<br />
services = nss, pam<br />
domains = LDAP<br />
<br />
[nss]<br />
<br />
[pam]<br />
<br />
[domain/LDAP]<br />
id_provider = ldap<br />
auth_provider = ldap<br />
ldap_schema = rfc2307<br />
ldap_uri = ldap://sme-server.domain.tld<br />
ldap_default_bind_dn = uid=auth,ou=Users,dc=domain,dc=tld<br />
ldap_default_authtok = something_very_secret<br />
ldap_default_authtok_type = password<br />
ldap_search_base = dc=domain,dc=tld<br />
ldap_user_search_base = ou=Users,dc=domain,dc=tld<br />
ldap_group_search_base = ou=Groups,dc=domain,dc=tld<br />
ldap_user_object_class = inetOrgPerson<br />
ldap_user_gecos = cn<br />
ldap_tls_reqcert = hard<br />
ldap_tls_cacert = /etc/pki/tls/certs/ca.crt<br />
ldap_id_use_start_tls = true<br />
# uncomment below if the SME is a “iPasserelle”<br />
#ldap_user_shell = desktopLoginShell<br />
# comment below if the SME is a “iPasserelle”<br />
override_shell = /bin/bash<br />
cache_credentials = true<br />
enumerate = true<br />
# It is possible to filter the logins via a LDAP-filer<br />
# by commenting the both lines below.<br />
# In this exemple, only the users member of the group netusers<br />
# will be valid on this host.<br />
# posixMemberOF is a parameter only for a iPasserelle<br />
#access_provider = ldap<br />
#ldap_access_filter = (|(posixMemberOf=admins)(uid=backup))<br />
_EOF<br />
<br />
chmod 600 /etc/sssd/sssd.conf<br />
<br />
===Start and enable the daemon sssd===<br />
<br />
*This daemon must be automatically started by running the client machine:<br />
systemctl start sssd.service<br />
systemctl enable sssd.service<br />
<br />
{{Tip box|It is possible to debug by starting sssd manually:<br />
systemctl stop sssd.service<br />
sssd -i -d5<br />
}}<br />
<br />
*You should check that sss is set as a source in the /etc/nsswitch.conf. Normally it has been done automatically<br />
[...]<br />
passwd: files sss<br />
shadow: files sss<br />
group: files sss<br />
[...]<br />
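The check above can also be scripted. Here is a minimal sketch (the `check_sss` helper is hypothetical, not part of sssd):<br />

```shell
# Minimal sketch: verify that the sss source is listed for the
# passwd, shadow and group databases in an nsswitch.conf-style file.
check_sss() {
  for db in passwd shadow group; do
    if grep -Eq "^${db}:.*sss" "$1"; then
      echo "$db: ok"
    else
      echo "$db: sss missing"
    fi
  done
}

# Demonstration on a sample file; on a real client, run:
#   check_sss /etc/nsswitch.conf
sample=$(mktemp)
printf 'passwd: files sss\nshadow: files sss\ngroup: files sss\n' > "$sample"
check_sss "$sample"
rm -f "$sample"
```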
<br />
===General configuration===<br />
*The system must be configured in order to use sssd.<br />
This is done very easily by using the tool authconfig:<br />
authconfig --enablemkhomedir --enablesssd --enablesssdauth --update<br />
<br />
*The command<br />
getent passwd <br />
should show you the users of the SME.<br />
<br />
'''''If it doesn't work:'''''<br />
*first of all, check that the password of the user “auth” is present in clear text in the “ldap_default_authtok” parameter of /etc/sssd/sssd.conf.<br />
*furthermore, for testing and debugging, you can set “ldap_tls_reqcert = never” to avoid problems caused by a CA that is not accepted.<br />
{{Warning box|It is not recommended to use this parameter for a production server!}}<br />
<br />
{{Tip box|It is possible to give admin permissions to LDAP-users: on Fedora, members of the special, local "wheel" group will be admin, so just add your LDAP users to this group:<br />
gpasswd -a <your ldap login> wheel}}<br />
<br />
That's all.<br />
Enjoy!</div>Arnaudhttps://wiki.koozali.org/index.php?title=Affa/fr&diff=23569Affa/fr2014-02-05T20:10:16Z<p>Arnaud: </p>
<hr />
<div>{{Languages|Affa}}<br />
{{Level|type=Niveau de compétence: Avancé|Advanced|Les instructions dans cette page requièrent un niveau avancé. Une bonne compréhension de Linux et de SME est recommandée.}}<br />
<br />
{{Warning box| Ce code est obsolète et non maintenu par son développeur, smeserver-affa attend la correction de 32 bugs ouverts. Un nombre important de bugs et d’événements sont associés à l'utilisation d'affa après une restauration ou de l'utilisation de la fonction rise, comme des groupes manquants, crash de services, ou erreur de CGI dans le server-manager}}<br />
voir [http://bugs.contribs.org/buglist.cgi?action=wrap&bug_status=UNCONFIRMED&bug_status=CONFIRMED&bug_status=NEEDINFO&bug_status=IN_PROGRESS&bug_status=RESOLVED&bug_status=VERIFIED&classification=Contribs&component=smeserver-affa%20%28unmaintained%29&list_id=23915&product=SME%20Contribs&resolution=---&resolution=FIXED&resolution=NOTABUG&resolution=WONTFIX&resolution=UPSTREAM&resolution=DUPLICATE&resolution=WORKSFORME&saved_report_id=1 bugs]<br />
=== Développeur ===<br />
Développeur: Michael Weinberger.<br />
Le développement et la maintenance d'Affa pour SME Server ont été interrompus. La nouvelle version Affa 3.X pour CentOS est disponible sur [http://affa.sourceforge.net/ Sourceforge].<br />
<br />
<br />
=== Traducteurs ===<br />
[http://bj-informatique.com/affa.php?lang=fr jibe]<br />
<br />
Pour la mise à jour: [http://www.guedel.eu Arnaud]<br />
<br />
Affa signifie '''A'''utomatische '''F'''estplatten '''F'''ern'''a'''rchivierung, soit approximativement en français : Sauvegarde automatique de disque dur à distance.<br />
<br />
<br />
=== Description ===<br />
Le but principal de ce paquetage Affa est de transformer un serveur SME 7 en un serveur dédié de sauvegardes en quelques minutes. Affa sauvegarde autant de serveurs SME que vous le désirez sur n'importe quel autre serveur ayant sshd en service et rsync installé. Une fois configuré, Affa fonctionne sans surveillance et de manière fiable, envoyant des messages d'avertissement en cas d'erreur. <br />
<br />
Toutes les archives sont des sauvegardes complètes, puisque Affa utilise la technique des hardlink. Une nouvelle sauvegarde complète ne nécessite donc que l'espace disque pour les différences et les nécessités du système de fichier pour les hardlinks et les répertoires (ce qui représente généralement 2-3%).<br />
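Le principe des hardlinks peut être illustré indépendamment d'Affa par l'esquisse suivante (chemins et noms de fichiers hypothétiques) : le clone par hardlinks ne coûte presque rien, et seul le fichier modifié est réécrit.<br />

```shell
# Esquisse du principe des sauvegardes par hardlinks (indépendante d'Affa).
src=$(mktemp -d); dst=$(mktemp -d)
echo "v1" > "$src/fichier.txt"

mkdir -p "$dst/backup.1"
cp "$src/fichier.txt" "$dst/backup.1/"      # première sauvegarde complète
cp -al "$dst/backup.1" "$dst/backup.0"      # clone par hardlinks, quasi gratuit

echo "v2" > "$src/fichier.txt"
# Comme rsync : écrire un fichier temporaire puis le renommer,
# ce qui casse le hardlink sans toucher à l'ancienne archive.
cp "$src/fichier.txt" "$dst/backup.0/fichier.txt.tmp"
mv "$dst/backup.0/fichier.txt.tmp" "$dst/backup.0/fichier.txt"

cat "$dst/backup.1/fichier.txt"   # v1 : l'archive précédente est intacte
cat "$dst/backup.0/fichier.txt"   # v2 : la nouvelle sauvegarde « complète »
```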
<br />
Affa est basé sur le programme rsync et supporte l'option --compress. Cela vous permet de lancer des sauvegardes via internet ou VPN. Une configuration typique consiste en un ou plusieurs serveurs de sauvegarde Affa placés en des lieux déportés, qui sauvegardent les serveurs de production via le VPN.<br />
<br />
Une caractéristique spéciale est l'option "rise", qui vous permet de convertir votre serveur de sauvegarde en serveur de production depuis une archive sauvegardée en cas de perte totale de votre serveur de production. Cette conversion est exécutée en un temps très court, même avec une énorme quantité de données. Cette possibilité de conversion utilise les hardlinks et de ce fait n'utilise pas d'espace disque supplémentaire.<br />
<br />
Affa est un outil en ligne de commande pour les administrateurs système et est conçu intentionnellement sans interface graphique. De ce fait, il peut être géré efficacement depuis la console même via une connexion internet lente.<br />
<br />
'''Note:''' Ce document concerne la version 2 Release Candidate d'Affa. La documentation de la dernière version 1 stable se trouve [http://wiki.contribs.org/index.php?title=Affa&oldid=11974 ici].<br />
<br />
==== Un coup d'oeil aux caractéristiques d'Affa ====<br />
* Affa est sûr : tous les transferts et connexions sont effectués en utilisant le protocole ssh avec une authentification par clés publique/privée.<br />
* Lance périodiquement des sauvegardes complètes sans intervention<br />
* Seules les différences avec la sauvegarde complète précédente sont transférées par le réseau<br />
* Utilise rsync avec l'option de compression pour les connexions lentes, ce qui permet des sauvegardes via internet/VPN<br />
* Utilise la technique des hardlink pour réduire l'espace disque physique à seulement ce qui est nécessaire pour les différences entre deux sauvegardes complètes<br />
* Mémorise un nombre configurable de sauvegardes automatisées, journalières, hebdomadaires, mensuelles et annuelles<br />
* On peut naviguer dans les archives sans avoir à les décompresser au préalable.<br />
* Les archives peuvent être attachées à un partage Samba.<br />
* Les tâches de sauvegarde sont démarrées par le démon cron<br />
* Les travaux interrompus continuent au redémarrage en tenant compte des données déjà transférées<br />
* Sauvegarde les répertoires et fichiers e-smith par défaut, lorsque la propriété SMEServer est mise à 'yes'<br />
* Des répertoires et fichiers supplémentaires peuvent être inclus<br />
* Des répertoires et fichiers peuvent être exclus de la sauvegarde<br />
* Les serveurs Linux non SME peuvent être sauvegardés en mettant la propriété SMEServer à 'no' et en utilisant une liste d'inclusion<br />
* En mode ESXi, les machines virtuelles actives peuvent être sauvegardées. Voir [[Sauvegarde de machines virtuelles ESXi avec Affa]]<br />
* Priorité configurable (nice level) pour les process rsync sur les serveurs backup et source<br />
* Possibilité de lancer des programmes personnels avant et après le lancement d'une tâche (par exemple, lancement d'une sauvegarde sur bande)<br />
* Vérification de l'espace disque disponible après le lancement d'une tâche avec niveaux d'avertissement strict, normal ou risqué<br />
* Vérification étendue des conditions d'échec<br />
* Envoi de messages d'échec à une liste configurable d'adresses email<br />
* Envoi d'un message d'avertissement si le serveur de backup n'a plus d'espace disque disponible<br />
* Installe un chien de garde optionnel sur le serveur source vérifiant la disponibilité du serveur de backup (SME server seulement)<br />
* Le chien de garde envoie des avertissements, si une sauvegarde programmée n'a pas été lancée (SME server seulement)<br />
* Le chien de garde envoie un message journalier de rappel tant que l'erreur n'est pas corrigée (SME server seulement)<br />
* Possibilité d'afficher l'état courant de toutes les tâches, montrant les heures de dernier et prochain lancement, la taille et l'espace disque utilisé<br />
* Le status peut être envoyé par email journellement, hebdomadairement ou mensuellement<br />
* Possibilité d'afficher toutes les archives existantes pour une tâche, montrant la date, le nombre de fichiers et l'espace disque<br />
* Possibilité d'envoyer (et de retirer) la clé publique au serveur source (SME server et ESXi seulement)<br />
* Option pour convertir le serveur de sauvegardes en serveur de production (SME server seulement)<br />
* La fonction de conversion ne déplace pas physiquement les données et est donc extrêmement rapide et ne nécessite (presque) pas d'espace disque<br />
* La fonction de conversion peut être lancée à distance puisque les drivers ethernet du serveur de sauvegarde sont conservés<br />
* Comparaison des RPMs installés entre les serveurs source et de sauvegarde. Envoie un avertissement s'il n'y a pas correspondance.<br />
* Annulation de la conversion pour remise en service du serveur de sauvegardes<br />
* Configurable par commandes style e-smith db, avec un enregistrement pour chaque tâche et un enregistrement par défaut commun à toutes les tâches<br />
* Logs dans /var/log/affa/JOB.log et /var/log/affa/affa.log avec option debug pour plus de détails<br />
* Les fichiers de log sont permutés hebdomadairement, avec 5 fichiers conservés<br />
<br />
<br />
=== Installation ou mise à jour ===<br />
==== Smecontribs pour sme8 ====<br />
yum install --enablerepo=smecontribs smeserver-affa<br />
<br />
Si Affa est installé pour la première fois, entrez la commande suivante pour initialiser la base de données d'Affa.<br />
affa --make-cronjobs<br />
<br />
et déconnectez-vous puis reconnectez-vous à la console pour que la complétion automatique de bash (touche TAB) prenne effet.<br />
==== Ancienne méthode d'installation / méthode pour sme7 ====<br />
{{Note box|SME7 est maintenant en fin de vie et a été transféré dans le répertoire obsolète}}<br />
Version SME7: '''smeserver-affa-2.0.0-rc4.noarch.rpm'''<br />
wget <nowiki>http://mirror.contribs.org/smeserver/contribs/michaelw/sme7/Affa2/smeserver-affa-2.0.0-rc4.noarch.rpm</nowiki><br />
wget <nowiki>http://mirror.contribs.org/smeserver/contribs/michaelw/sme7/Affa2/perl-Compress-Bzip2-2.09-1.2.el4.rf.i386.rpm</nowiki><br />
/usr/bin/yum --enablerepo=smecontribs localinstall \<br />
smeserver-affa-2.0.0-rc4.noarch.rpm \<br />
perl-Compress-Bzip2-2.09-1.2.el4.rf.i386.rpm<br />
<br />
<br />
Version SME8: SME8 est basé sur CentOS5, c'est pourquoi perl-Compress-Bzip2 el5 rpm doit être utilisé. <br />
<br />
Pour SME8 32 bits:<br />
wget <nowiki>http://apt.sw.be/redhat/el5/en/i386/rpmforge/RPMS/perl-Compress-Bzip2-2.09-1.2.el5.rf.i386.rpm</nowiki><br />
/usr/bin/yum --enablerepo=smecontribs localinstall \<br />
smeserver-affa-2.0.0-rc4.noarch.rpm \<br />
perl-Compress-Bzip2-2.09-1.2.el5.rf.i386.rpm<br />
<br />
Pour SME8 64 bits:<br />
wget http://apt.sw.be/redhat/el5/en/x86_64/rpmforge/RPMS/perl-Compress-Bzip2-2.09-1.2.el5.rf.x86_64.rpm<br />
/usr/bin/yum --enablerepo=smecontribs localinstall \<br />
smeserver-affa-2.0.0-rc4.noarch.rpm \<br />
perl-Compress-Bzip2-2.09-1.2.el5.rf.x86_64.rpm<br />
<br />
Après la première installation d'Affa, lancez la commande suivante pour initialiser la base de données Affa.<br />
affa --make-cronjobs<br />
et déconnectez-vous puis reconnectez-vous à la console pour que la complétion automatique de bash (touche TAB) prenne effet.<br />
<br />
=== Exemple de démarrage rapide ===<br />
Vous avez un serveur de production SME 7 dont le nom est 'prodbox' et l'IP 10.200.48.1. Installez un second SME 7 comme serveur de sauvegardes avec le nom 'affabox' et l'IP 10.200.48.2. <br />
<br />
<ol><li>Connectez-vous à 'affabox' et installez les paquetages mentionnés ci-dessus.<br />
</li><li>copiez le script d'exemple de configuration<br />
cp /usr/lib/affa/jobconfig-sample.pl /root/prodbox-job.pl<br />
</li><li>editez /root/prodbox-job.pl et mettez<br />
my $jobname='prodbox';<br />
et<br />
'remoteHostName'=>'10.200.48.1',<br />
Eventuellement, si le port du serveur distant est configuré, par exemple, en 2222, mettez<br />
'sshPort'=>2222,<br />
</li><li>enregistrez puis exécutez le fichier 'prodbox-job.pl'. Cela enregistre la configuration (création des entrées de base de données et des tâches cron)<br />
/root/prodbox-job.pl<br />
</li><li>Générez les clés DSA et envoyez la clé publique à la 'prodbox'<br />
{{Note box|Initialement, lorsque vous faites cette étape, vous devez temporairement mettre oui à "Autoriser l'authentification par SSH à l'aide de mots de passe standards" dans l'accès à distance sur le serveur de production}}<br />
affa --send-key prodbox<br />
</li><li>Lancez la tâche à la main<br />
<br />
affa --run prodbox<br />
</li></ol><br />
<br />
=== Configuration ===<br />
La configuration est stockée dans une base de données style e-smith. Utilisez la commande db pour configurer Affa.<br />
Le nom de tâche est la clé d'enregistrement de type 'job'.<br>Pour configurer un nouveau travail avec le nom 'prodbox', entrez :<br />
<br />
db affa set prodbox job<br />
<br />
Puis positionnez les propriétés<br />
<br />
db affa setprop prodbox remoteHostName 192.168.1.1<br />
db affa setprop prodbox TimeSchedule '0030,0730,1130,1330,1730,2030'<br />
db affa setprop prodbox Description 'My Production Server'<br />
db affa setprop prodbox status enabled<br />
et ainsi de suite...<br />
<br />
Alternativement, vous pouvez utiliser un script comme décrit ci-dessus dans le chapitre 'Démarrage rapide'.<br />
<br />
Pour vérifier votre travail, tapez :<br />
<br />
db affa show prodbox<br />
<br />
Pour finir, mettez en place les tâches cron :<br />
affa --make-cronjobs<br />
<br />
==== Propriétés de configuration de tâches ====<br />
'''Note:''' Les valeurs par défaut données dans ce tableau sont celles du programme Affa et ne doivent pas être confondues avec les valeurs prépositionnées dans les scripts d'aide, par exemple jobconfig-sample.pl.<br />
{| border="1" cellpadding="3" cellspacing=0<br />
|-<br />
| '''Propriété''' || '''Valeur''' || '''Defaut''' || '''Description''' <br />
|-<br />
| remoteHostName <br />
| FQHN ou IP || || FQHN ou IP de l'hôte source (obligatoire)<br />
|-<br />
| TimeSchedule <br />
| HHMM,HHMM,... || || N'a pas besoin d'être ordonné. Au moins une heure est obligatoire. '''Important:''' Utiliser le bon format HHMM est essentiel. Affa ne le vérifie pas. Un TimeSchedule mal formaté provoquera d'étranges erreurs Perl. <br />
|-<br />
| Description<br />
| texte || || <br />
|-<br />
| scheduledKeep<br />
| entier >= 1 || 1|| Combien de sauvegardes programmées doivent être conservées <br />
|-<br />
| dailyKeep<br>weeklyKeep<br>monthlyKeep<br>yearlyKeep<br />
| entier >= 0 || 7<br>4<br>12<br>2<br>|| Combien de sauvegardes quotidiennes, hebdomadaires, mensuelles ou annuelles doivent être conservées <br />
|-<br />
| SMEServer<br />
| yes ''ou'' no || yes || Positionné à 'yes', les répertoires e-smith par défaut sont automatiquement inclus et la propriété RPMCheck=yes peut être utilisée<br />
<br />
|-<br />
| Include[0]<br>Include[1]<br>...<br />
| full path || || Fichiers ou répertoires supplémentaires à inclure à la sauvegarde <br />
|-<br />
| Exclude[0]<br>Exclude[1]<br>...<br />
| full path || || Fichiers ou répertoires supplémentaires à exclure de la sauvegarde <br />
|-<br />
| RPMCheck<br />
| yes ''ou'' no || no || Seulement applicable aux tâches qui sauvegardent un serveur SME 7.<br>Compare les RPM de l'hôte source avec ceux de l'hôte de sauvegarde Affa. Envoie un message avec la liste des différences s'il n'y a pas correspondance. Cette vérification est utile si vous voulez avoir la possibilité de convertir le serveur de sauvegarde en serveur de production.<br />
|-<br />
| DiskSpaceWarn<br />
| strict ''ou'' normal ''ou'' risky ''ou'' none || strict || Lance une vérification de l'espace disque après qu'une tâche s'est terminée. Avec le niveau 'strict', un message d'avertissement sera envoyé si l'espace disponible est inférieur à la taille de la sauvegarde qui vient de s'achever. Avec les niveaux 'normal'/'risky', le message est envoyé si moins de 50%/10% de la taille de la sauvegarde reste disponible.<br />
'''Note:''' Lorsque RootDir est un lien symbolique vers un autre système de fichiers, l'espace disque utilisé du système de fichiers local est vérifié à la place de celui du système de fichiers lié. Positionnez le point de montage comme RootDir pour avoir une vérification correcte.<br />
|-<br />
| localNice <br />
| -19...+19 || 0 || lance le process local rsync par nice. <br />
|-<br />
| remoteNice<br />
| -19...+19 || 0 || lance le process rsync de la source par nice. <br />
|-<br />
| Watchdog<br />
| yes ''ou'' no || yes || Seulement applicable aux tâches qui sauvegardent un serveur SME 7.<br>Quand une tâche est lancée, affa installe un script chien de garde sur la source dans /etc/cron.d/, qui envoie un message d'avertissement si la prochaine tâche programmée (prise dans la propriété TimeSchedule + 10 minutes) n'a pas été lancée. Cela garantit que vous serez prévenu même en cas de panne du serveur affa. Le script chien de garde envoie un message de rappel quotidien si l'erreur persiste. La tâche lancée suivante remplace le script chien de garde avec un nouveau temps de déclenchement. <br />
|-<br />
| sshPort<br />
| port du service || 22 || Lorsque sshd sur le serveur source ou votre firewall écoute sur un port non standard, précisez le port ici.<br />
|-<br />
| ConnectionCheckTimeout<br />
| secondes || 120 || Avant que le process rsync démarre sur l'hôte source distant, affa vérifie la connexion ssh et se termine avec une erreur après le temps configuré, si l'hôte ne répond pas.<br />
|- <br />
| rsyncTimeout<br />
| secondes || 900 || Rsync se termine si aucune donnée n'est transférée pendant le temps configuré. Cela évite un gelage permanent dans le cas d'une erreur sur le réseau.<br />
|- <br />
| BandwidthLimit<br />
| entier>=0 Ko/s || 0 || Limite la vitesse de transfert des données. Une valeur de zéro signifie qu'il n'y a pas de limite.<br />
|- <br />
| rsyncCompress<br />
| yes ''ou'' no || no || Compresse les données transférées. Peut être utile avec les connexions internet lentes. Cela augmente la charge CPU sur les hôtes source et de sauvegarde.<br />
|-<br />
| EmailAddresses<br />
| name@domain.com,name@domain.com,... || admin|| Liste séparée par des virgules des adresses email où les messages seront envoyés.<br>'''Note:''' Par défaut, Affa n'envoie de messages que pour les erreurs, jamais en cas de succès (voir la propriété chattyOnSuccess).<br />
|-<br />
| RetryAttempts<br />
| entier >= 0 || 3 || Quand positionné à une valeur >0, Affa relance une tâche échouée RetryAttempts fois avec un délai de RetryAfter secondes.<br />
|-<br />
| RetryAfter <br />
| secondes >= 3 || 600 || Quand positionné à une valeur >0, attend RetryAfter secondes avant de relancer la tâche suite à une erreur. Utile seulement avec RetryAttempts>0<br />
|-<br />
| RetryNotification<br />
| yes ''ou'' no || yes || Quand positionné à no, Affa n'envoie pas de message d'erreur lorsqu'une tâche a échoué et que RetryAttempts est configuré. Un message d'erreur n'est envoyé que lorsque le dernier essai a échoué.<br />
|-<br />
| chattyOnSuccess<br />
| entier >= 0 || 0 || Quand positionné à une valeur >0, Affa envoie un message pour les tâches terminées avec succès et décrémente la valeur chattyOnSuccess. Lorsque la valeur atteint 0, Affa revient à son comportement par défaut et n'envoie plus de messages qu'en cas d'erreur.<br />
|-<br />
| AutomountDevice<br>AutomountPoint<br />
| full path || || Unité et point de montage de l'unité de sauvegarde (ex. disque USB). L'unité est montée automatiquement avant qu'une tâche soit lancée et démontée après la fin de la tâche. Les deux propriétés doivent être remplies pour que la fonction Automount soit active.<br />
|-<br />
| AutomountOptions<br />
| chaine de caractères || || Une chaine d'options passées à la commande de montage automatique<br />
|-<br />
| AutoUnmount <br />
| yes ''ou'' no || yes || Quand positionné à 'no' l'unité montée automatiquement reste montée après l'exécution d'Affa.<br />
|-<br />
| preJobCommand<br>postJobCommand<br />
| full path || || Programmes (locaux sur le serveur affa) à exécuter avant/après qu'une tâche soit lancée. Le nom de la tâche et le type (programmée, journalière etc.) sont passés en arguments au programme. Le code de sortie est de plus passé à la commande de programme post-tâche. Voir /usr/lib/affa/ pour des exemples de scripts perl.<br />
|-<br />
| RootDir<br />
| full path || /var/affa || Où stocker les archives de sauvegarde. N'utilisez pas /home/e-smith ou /root puisqu'ils sont inclus dans les répertoires à sauvegarder, ce qui empêcherait la fonction de conversion de fonctionner ! Répertoire conseillé : /var/affa<br />
|-<br />
| SambaShare<br />
| yes ''ou'' no|| yes || Accède aux archives de la tâche via SMB.<br />
|-<br />
| Debug <br />
| yes ''ou'' no || no || Positionné à 'yes', donne plus de détails dans le fichier de log.<br />
|-<br />
| status<br />
| enabled ''ou'' disabled || enabled || Positionné à 'disable', aucune entrée cron ne sera faite. Vous pouvez encore lancer les tâches à la main.<br />
|-<br />
| rsync--inplace<br />
| yes ''ou'' no || yes || Positionné à 'no', si la version rsync de la source ne supporte pas cette option (comme rsync sur SME6)<br />
|-<br />
| rsync--modify-window<br />
| entier >= 0 || 0 || En comparant deux horodatages, rsync les traite comme étant égaux s'ils ne diffèrent pas de plus de la valeur modify-window. C'est normalement 0 pour une correspondance exacte. Une valeur > 0 est pratique si vous ne pouvez pas garder les horloges de la source et du serveur Affa synchronisées.<br />
|-<br />
| rsyncOptions<br />
| texte || || Chaîne d'options additionnelles à passer à rsync.<br />
|-<br />
| rsyncdMode<br />
| yes ''ou'' no || no || Positionné à yes pour se connecter au démon rsync de l'hôte distant (au lieu de lancer rsync over ssh).<br />
|-<br />
| rsyncdModule<br />
| texte || AFFA || Le nom du module rsyncd (seulement applicable avec rsyncdMode=yes)<br />
|-<br />
| rsyncdUser<br />
| texte || affa || Le nom d'utilisateur pour authentification au démon rsync (seulement applicable avec rsyncdMode=yes)<br />
|-<br />
| rsyncdPassword<br />
| texte || || Le mot de passe pour authentification au démon rsync (seulement applicable avec rsyncdMode=yes)<br />
|-<br />
| remoteOS<br />
| cygwin || || avec remoteOS=cygwin les options --send-key et --revoke-key utilisent le compte 'Administrator' et le chemin correct pour la clé publique sur un hôte distant sous Windows/Cygwin. <br />
|- <br />
| ESXi<br />
| yes ''ou'' no || no || active le mode de sauvegarde VMware ESXi pour les machines virtuelles.<br>Voir [[Sauvegarde des machines virtuelles ESXi avec Affa]]<br />
|-<br />
| ESXiVMName<br />
| texte || || Le nom de la VM tel qu'affiché dans le client VI.<br />
|-<br />
| ESXiUsername<br />
| texte || || Le nom du client VI avec la permission de créer et détruire les instantanés (snapshots).<br />
|-<br />
| ESXiPassword<br />
| texte || || Le mot de passe de l'utilisateur ESXiUsername<br />
|-<br />
| chunkFiles<br />
| texte || || Noms de fichiers qu'Affa devrait découper et compresser. Plusieurs noms doivent être séparés par un slash (/), ex. 'mysql.dump/pgsql.dump' découpe les deux fichiers mysql.dump et pgsql.dump. Avec ESXi=yes c'est implicitement positionné à '*.vmdk'. Voir aussi les options de ligne de commande --chunk-archive et --unchunk-archive<br />
|}<br />
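Le format HHMM de TimeSchedule étant critique (Affa ne le vérifie pas, voir le tableau ci-dessus), on peut le valider avant de l'enregistrer. Esquisse hypothétique : la fonction valid_schedule ne fait pas partie d'Affa.<br />

```shell
# Esquisse : valider une valeur TimeSchedule (HHMM,HHMM,...)
# avant de la passer à « db affa setprop JOB TimeSchedule ... ».
valid_schedule() {
  printf '%s' "$1" | grep -Eq '^([01][0-9]|2[0-3])[0-5][0-9](,([01][0-9]|2[0-3])[0-5][0-9])*$'
}

valid_schedule "0030,0730,1130,2030" && echo "format valide"
valid_schedule "0030,2460" || echo "format invalide"
```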
<br />
==== Propriétés de la configuration par défaut ====<br />
Toutes les propriétés peuvent être positionnées par défaut dans l'enregistrement DefaultAffaConfig. C'est pratique lorsque vous configurez plusieurs tâches similaires. <br />
<br />
Exemple : Vous voulez positionner la propriété 'localNice' à 19 pour toutes les tâches. Tapez<br />
db affa setprop DefaultAffaConfig localNice 19<br />
et ne positionnez pas cette propriété pour les tâches.<br />
Les propriétés positionnées dans les tâches sont prioritaires sur les propriétés par défaut.<br />
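Cette logique de priorité peut se résumer ainsi (esquisse hypothétique, indépendante du code réel d'Affa) :<br />

```shell
# Esquisse : une propriété définie sur la tâche l'emporte,
# sinon la valeur de DefaultAffaConfig s'applique.
resolve_prop() {
  job_value=$1
  default_value=$2
  if [ -n "$job_value" ]; then
    printf '%s\n' "$job_value"
  else
    printf '%s\n' "$default_value"
  fi
}

resolve_prop ""  19   # aucune valeur sur la tâche : 19 (défaut)
resolve_prop "0" 19   # la tâche définit 0 : c'est 0 qui s'applique
```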
<br />
La propriété spéciale 'sendStatus' ne s'applique que dans l'enregistrement DefaultAffaConfig. Elle contrôle le rapport d'état (status) envoyé par mail et peut prendre les valeurs 'none', 'daily', 'weekly' ou 'monthly'. <br />
<br />
Pour configurer un rapport hebdomadaire, tapez :<br />
db affa setprop DefaultAffaConfig sendStatus weekly<br />
puis configurez cronjob :<br />
affa --make-cronjobs<br />
<br />
==== Désactivation globale ====<br />
Toutes les tâches peuvent être désactivées en positionnant l'enregistrement AffaGlobalDisable sur 'yes'.<br />
db affa set AffaGlobalDisable yes<br />
affa --make-cronjobs<br />
<br />
Pour réactiver :<br />
db affa set AffaGlobalDisable no<br />
affa --make-cronjobs<br />
<br />
=== Utilisation et options de ligne de commande ===<br />
{{Note box|Les options peuvent être abrégées tant qu'elles restent uniques, ex. --mak est identique à --make-cronjobs}}<br />
'''affa --run JOB'''<br />
Lance une tâche. Normalement effectué par cronjob. <br />
<br />
<br />
'''affa --make-cronjobs'''<br />
Configure les tâches cron comme programmées dans les enregistrements de tâches.<br />
<br />
<br />
'''affa --send-key JOB'''<br />
'''affa --send-key --host=TARGETHOST [--port=PORT] [--remoteOS=cygwin]'''<br />
Génère d'abord la clé DSA pour le serveur Affa, si cela n'a pas déjà été fait. Envoie ensuite la clé publique à l'hôte 'remoteHostName' comme configuré dans l'enregistrement de la tâche et génère l'entrée ssh known host (hôte connu) spécifique à la tâche. <br />
{{Note box|Initialement, lorsque vous faites cette étape, vous aurez besoin d'activer temporairement "Autoriser l'authentification par SSH à l'aide de mots de passe standards" sur le serveur de production.}}<br />
{{Note box|<nowiki>l'option --send-key fonctionne pour SME Server comme serveur distant et pour les systèmes sur lesquels les clés publiques sont stockées dans /root/.ssh/authorized_keys2 et sur lesquels les commandes /bin/cat, /bin/touch, /bin/grep et /bin/mv sont disponibles. Avec remoteOS=cygwin cela fonctionne pour les serveurs distants sous Cygwin/Windows.</nowiki>}}<br />
<br />
<br />
'''affa --full-restore JOB [ARCHIVE]'''<br />
Effectue une restauration totale depuis l'archive de sauvegarde [ARCHIVE] sur le serveur source distant comme défini dans l'enregistrement JOB. Si [ARCHIVE] n'est pas donnée, l'archive 'scheduled.0' est utilisée comme défaut. La restauration totale reconstruit le serveur comme il était au moment de la sauvegarde. Après la restauration l'hôte source est redémarré.<br />
<br />
<br />
'''affa --rise [--all] JOB [ARCHIVE]'''<br />
Lance une restauration totale depuis l'archive de sauvegarde [ARCHIVE] sur le serveur Affa <b>(!)</b> comme défini dans l'enregistrement JOB. Autrement dit, une fois terminée, le serveur de sauvegarde reboote comme un clone du serveur source. Assurez-vous d'avoir arrêté le serveur source avant de rebooter le serveur de sauvegardes, sinon l'interface réseau ne sera pas activée. Cela est important lorsque vous lancez --rise à distance. La fonction --rise ne fonctionne qu'avec les serveurs SME 7 et ne devrait être utilisée que sur les serveurs de sauvegardes dédiés.<br />
<br />
Avec l'option --all, tous les fichiers et répertoires de l'archive tels que définis par les propriétés Include[] sont restaurés. Les fichiers et répertoires de même nom sur le serveur Affa seront écrasés et ne pourront pas être restaurés par un undo-rise. Cela ne devrait pas poser de problème sur un serveur Affa dédié qui ne contient aucune autre donnée. Après un éventuel undo-rise, ces données additionnelles restaurées doivent être supprimées manuellement.<br />
<br />
Remarquez que le processus rise sauvegarde le serveur Affa lui-même avant d'effectuer la restauration de l'archive. Cette sauvegarde est utilisée par un éventuel undo-rise pour restaurer le serveur Affa. Seuls les fichiers et répertoires standard sont sauvegardés. Les données stockées dans des emplacements non standard (comme /opt) ne sont pas touchées et existeront encore après le rise. Voir aussi : [[Backup_server_config#Standard_backup_.26_restore_inclusions]]<br />
<br />
<br />
'''affa --undo-rise'''<br />
Cette fonction reconvertit en serveur de sauvegarde un serveur Affa préalablement converti en serveur de production. Après un redémarrage, toutes les tâches configurées fonctionneront à nouveau.<br />
<br />
<br />
'''affa --unchunk-archive JOB ARCHIVE'''<br />
Concatène et décompresse tous les fichiers découpés dans l'archive ARCHIVE de la tâche JOB pour rendre l'archive prête pour une restauration. Notez que l'archive la plus récente scheduled.0 n'est jamais découpée et peut donc être restaurée telle quelle, sans concaténation préalable.<br />
<br />
<br />
'''affa --chunk-archive JOB ARCHIVE'''<br />
Découpe et compresse tous les fichiers dont les noms correspondent à la propriété chunkFiles pour économiser l'espace disque. Cela ne devrait être utilisé que comme l'inverse de --unchunk-archive. Les découpages sont stockés en fichiers bzip2 dans un répertoire nommé FILENAME.affa-chunks avec 8 niveaux de profondeur.<br />
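Le mécanisme de découpage/compression peut être illustré par une esquisse indépendante d'Affa (noms de fichiers hypothétiques ; Affa, lui, range ses morceaux dans un répertoire FILENAME.affa-chunks) :<br />

```shell
# Esquisse : découper un fichier, compresser chaque morceau en bzip2,
# puis reconstituer l'original en concaténant les morceaux décompressés.
work=$(mktemp -d); cd "$work"
printf '0123456789abcdef' > gros.fichier

split -b 4 gros.fichier chunk.          # crée chunk.aa, chunk.ab, ...
for c in chunk.??; do bzip2 "$c"; done  # chaque morceau devient chunk.XX.bz2

# la restauration concatène les morceaux décompressés dans l'ordre
for c in chunk.??.bz2; do bunzip2 -c "$c"; done > restaure.fichier
cmp -s gros.fichier restaure.fichier && echo "archives identiques"
```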
<br />
<br />
<br />
'''affa --create-backup-file JOB [ARCHIVE] [--outfile=FILE]'''<br />
Crée une archive tar gzippée FILE depuis l'archive ARCHIVE de la tâche JOB. Pendant la création, la somme MD5 est calculée, puis comparée à celle du fichier tar enregistré. L'archive par défaut est scheduled.0 et le fichier FILE par défaut est ./smeserver.tgz. La somme de contrôle est écrite dans FILE.md5sum.<br />
<br />
Un fichier smeserver.tgz stocké sur un disque USB peut être utilisé pour restauration lors d'une nouvelle installation de SME Server.<br />
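Avant une restauration, on peut contrôler l'intégrité de l'archive à l'aide du fichier FILE.md5sum, en supposant qu'il est au format lisible par « md5sum -c » (noms de fichiers illustratifs, archive factice créée pour l'exemple) :<br />

```shell
# Esquisse : vérifier une archive avec sa somme de contrôle MD5.
work=$(mktemp -d); cd "$work"
echo "contenu de demonstration" > smeserver.tgz    # archive factice
md5sum smeserver.tgz > smeserver.tgz.md5sum        # somme de contrôle

md5sum -c smeserver.tgz.md5sum                     # affiche « smeserver.tgz: OK »
```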
<br />
<br />
'''affa --list-archives [--csv] JOB'''<br />
Affiche un tableau de toutes les archives de la tâche JOB existantes, avec la date, le nombre de fichiers, la taille et l'espace disque utilisé. <br />
<small><pre><br />
Affa version 2.0.0-rc4 on affa1.mydomain.de (10.204.48.2)<br />
+------------------------------------------------------------------------------+<br />
| Job: primmail |<br />
| Description: Mailserver Albstadt |<br />
| Directory: /var/affa/primmail/ |<br />
| Hostname: 10.204.144.2 |<br />
| Email: admin@mydomain.de |<br />
+-------+-----------------------+-----------+----------------+--------+--------+<br />
| Run | Completion date | Exec Time | Files | Size | Recvd |<br />
+-------+-----------------------+-----------+----------------+--------+--------+<br />
| M 9 | Sun 2008 Mar 02 06:11 | 0h29m47s | 679010 | 80GB | 60MB |<br />
| M 8 | Sun 2008 Mar 30 06:12 | 0h26m59s | 701683 | 83GB | 48MB |<br />
| M 7 | Sun 2008 Apr 27 06:16 | 0h28m01s | 731332 | 87GB | 47MB |<br />
| M 6 | Sun 2008 Jun 01 06:20 | 0h27m37s | 755529 | 90GB | 50MB |<br />
| M 5 | Sun 2008 Jun 29 06:24 | 0h28m39s | 800200 | 94GB | 52MB |<br />
| M 4 | Sun 2008 Jul 27 06:28 | 0h29m10s | 835398 | 99GB | 59MB |<br />
| M 3 | Sun 2008 Aug 31 06:34 | 0h34m53s | 869409 | 101GB | 65MB |<br />
| M 2 | Sun 2008 Sep 28 06:38 | 0h33m19s | 910889 | 106GB | 59MB |<br />
| M 1 | Sun 2008 Oct 26 06:42 | 0h38m41s | 932627 | 110GB | 61MB |<br />
| M 0 | Sun 2008 Nov 30 06:48 | 0h51m44s | 954090 | 116GB | 75MB |<br />
+-------+-----------------------+-----------+----------------+--------+--------+<br />
| W 3 | Sun 2008 Dec 14 06:50 | 0h53m34s | 962267 | 118GB | 68MB |<br />
| W 2 | Sun 2008 Dec 21 06:52 | 0h53m11s | 974914 | 120GB | 69MB |<br />
| W 1 | Wed 2008 Dec 24 06:52 | 0h48m58s | 974536 | 120GB | 64MB |<br />
| W 0 | Sun 2008 Dec 28 06:58 | 0h39m01s | 977003 | 120GB | 61MB |<br />
+-------+-----------------------+-----------+----------------+--------+--------+<br />
| D 6 | Tue 2008 Dec 30 06:52 | 0h54m45s | 976872 | 120GB | 77MB |<br />
| D 5 | Wed 2008 Dec 31 06:52 | 0h49m10s | 977764 | 120GB | 63MB |<br />
| D 4 | Thu 2009 Jan 01 06:28 | 0h28m09s | 977879 | 120GB | 60MB |<br />
| D 3 | Fri 2009 Jan 02 06:53 | 0h53m21s | 978144 | 120GB | 68MB |<br />
| D 2 | Sat 2009 Jan 03 06:53 | 0h53m15s | 978072 | 120GB | 60MB |<br />
| D 1 | Sun 2009 Jan 04 06:53 | 0h53m16s | 976478 | 120GB | 70MB |<br />
| D 0 | Mon 2009 Jan 05 06:52 | 0h52m42s | 977062 | 120GB | 61MB |<br />
+-------+-----------------------+-----------+----------------+--------+--------+<br />
| S 1 | Mon 2009 Jan 05 17:57 | 0h27m48s | 976353 | 120GB | 98MB |<br />
| S 0 | Tue 2009 Jan 06 06:52 | 0h52m55s | 976539 | 120GB | 70MB |<br />
+-------+-----------------------+-----------+----------------+--------+--------+<br />
</pre></small><br />
With --csv, the table is generated in a machine-readable colon-separated (:) format.<br />
<br />
<br />
'''affa --status [--csv]'''<br />
Displays a table of all enabled jobs, with the time of the last and next run, the size and disk space used by the most recent archive, the execution time of the last run, and the number of scheduled (S), daily (D), weekly (W), monthly (M) and yearly (Y) archives. The 'Last' column shows 'failed' if a job has not run within the last 24 hours. For disabled jobs, 'Last' therefore always shows 'failed' after 24 hours; to see the date and time of their last run, use the --list-archives option. The 'Next' column shows when the next run will take place, provided 'Enabled' is 'yes'.<br />
<small><pre><br />
Affa version 2.0.0-rc4 on backup.mydomain.de (10.204.48.2)<br />
+----------------+-----+-------+-----------+-------+-------+----------------+<br />
| Job | ENA | Last | Exec Time | Next | Size | N of S,D,W,M,Y |<br />
+----------------+-----+-------+-----------+-------+-------+----------------+<br />
| erp-alb-rsyncd | yes | 00:20 | 0h00m14s | 00:20 | 46MB | 1, 7, 4,10, 0 |<br />
| erp-fra-rsyncd | yes | 02:46 | 0h01m03s | 02:45 | 712MB | 1, 7, 4,10, 0 |<br />
| esxi-TS-W2K | yes | 00:37 | 4h22m08s | 20:15 | 60GB | 1, 7, 1, 0, 0 |<br />
| esxi-W2KR2 | yes | 02:22 | 3h52m09s | 22:30 | 40GB | 1, 7, 1, 0, 0 |<br />
| helpdesk | yes | 00:25 | 0h00m47s | 00:25 | 117MB | 1, 7, 4,10, 0 |<br />
| imageserv | yes | running (pid 16477) | 1, 7, 4,10, 0 |<br />
| intraweb | yes | 06:34 | 0h04m50s | 18:00 | 2.1GB | 2, 7, 4,10, 0 |<br />
| pdcalb | yes | 21:19 | 0h14m27s | 21:05 | 89GB | 1, 7, 4,10, 0 |<br />
| pdcfra | yes | 04:18 | 0h03m33s | 04:15 | 33GB | 1, 7, 4,10, 0 |<br />
| primmail | yes | 06:52 | 0h52m55s | 17:30 | 120GB | 2, 7, 4,10, 0 |<br />
| rayofhope | yes | 21:52 | 0h07m40s | 21:45 | 26GB | 1, 7, 4,10, 0 |<br />
| smecrmpg | yes | 22:35 | 0h00m21s | 22:35 | 70MB | 1, 7, 4,10, 0 |<br />
| sozserv | yes | 02:09 | 0h04m03s | 02:05 | 5.9GB | 1, 7, 4,10, 0 |<br />
| wiki | yes | 03:45 | 0h00m27s | 03:45 | 278MB | 1, 7, 4, 8, 0 |<br />
+----------------+-----+-------+-----------+-------+-------+----------------+<br />
| az32share | no | - | 0h00m22s | 03:05 | 1.3GB | 1, 7, 4, 8, 0 |<br />
+----------------+-----+-------+-----------+-------+-------+----------------+<br />
</pre></small><br />
With --csv, the table is generated in a machine-readable colon-separated (:) format.<br />
<br />
<br />
'''affa --disk-usage'''<br />
Displays the current disk usage of the root directories of all filesystems.<br />
<small><pre><br />
Affa version 2.0.0-rc4 on backup2.mydomain.de (10.204.0.52)<br />
+------+--------+--------+----------------------------------------------------+<br />
| Use% | Used | Avail | Root Dir |<br />
+------+--------+--------+----------------------------------------------------+<br />
| 71% | 938GB | 365GB | /var/affa |<br />
| 20% | 194GB | 759GB | /mnt/affadev |<br />
+------+--------+--------+----------------------------------------------------+<br />
</pre></small><br />
With --csv, the table is generated in a machine-readable colon-separated (:) format.<br />
<br />
<br />
'''affa --show-schedule [--all]'''<br />
Displays a 'graphical' schedule of all enabled jobs. The resolution is 30 minutes. An 'X' marks the scheduled run times; the duration of the last run is marked with '='.<br />
<small><pre><br />
Affa version 2.0.0-rc4 on backup.mydomain.de (10.204.48.2)<br />
TIME 0:00 4:00 8:00 12:00 16:00 20:00<br />
erp-alb-rsyncd X------- -------- -------- -------- -------- --------<br />
helpdesk X------- -------- -------- -------- -------- --------<br />
esxi-W2KR2 =====--- -------- -------- -------- -------- -----X==<br />
esxi-TS-W2K ==------ -------- -------- -------- -------- X=======<br />
imageserv ---X---- -------- -------- -------- -------- --------<br />
sozserv ----X--- -------- -------- -------- -------- --------<br />
erp-fra-rsyncd -----X-- -------- -------- -------- -------- --------<br />
wiki -------X -------- -------- -------- -------- --------<br />
pdcfra -------- X------- -------- -------- -------- --------<br />
primmail -------- ----X=-- -------- -------- ---X---- --------<br />
intraweb -------- -----X-- -------- -------- ----X--- --------<br />
pdcalb -------- -------- -------- -------- -------- --X-----<br />
rayofhope -------- -------- -------- -------- -------- ---X----<br />
smecrmpg -------- -------- -------- -------- -------- -----X--<br />
1 disabled jobs not listed. Use --all to display.<br />
</pre></small><br />
<br />
<br />
'''affa --send-status'''<br />
Sends the status table, the disk usage and the archive listing of all jobs to the email address configured in the 'DefaultAffaConfig' record. This command is used by the 'affa status' cron job.<br />
<br />
<br />
'''affa --mailtest JOB'''<br />
Sends a test email to the addresses configured in the job record. With the 'Watchdog' property set to 'yes', a test email is also sent from the remote server. Use this to verify that mail delivery works.<br><br />
'''Note:''' By default, Affa only sends messages on failure, never on success (see the 'chattyOnSuccess' property).<br />
<br />
<br />
'''affa --cleanup JOB'''<br />
After you have decreased a keep value (e.g. scheduledKeep), archives with a higher index are no longer deleted automatically and still exist. This option finds those archives and deletes them.<br />
<br />
<br />
'''affa --rename-job JOB NEWNAME'''<br />
Renames the job JOB to NEWNAME, including all database records and archive directories.<br />
<br />
<br />
'''affa --move-archive JOB NEWROOTDIR'''<br />
Moves the archive directory of JOB to NEWROOTDIR and adjusts the RootDir property. NEWROOTDIR must be a full path starting with a slash. Since moving across filesystems (e.g. from a USB disk to a local disk) is not possible, Affa falls back to a copy in that case and deletes the source directory afterwards. Depending on the size of the archive, copying from one filesystem to another can take a long time.<br />
<br />
<br />
'''affa --delete-job [--revoke-key] JOB'''<br />
Irreversibly deletes a job, including all archives, the configuration and the associated databases. When the '--revoke-key' option is given, the public key on the remote server is revoked as well.<br />
<br />
<br />
'''affa --revoke-key JOB'''<br />
'''affa --revoke-key --host=TARGETHOST [--port=PORT] [--remoteOS=cygwin]'''<br />
Revokes the public DSA key on the remote server.<br />
{{Note box|<nowiki>The plain --revoke-key option works with SME Server as the remote server and with systems where the keys are stored in /root/.ssh/authorized_keys2. With remoteOS=cygwin it works for remote servers running Cygwin/Windows.</nowiki>}}<br />
<br />
<br />
'''affa --check-connections'''<br />
Checks the ssh login for all configured jobs. For jobs whose public key has not been sent yet, you are prompted for the password and the key is sent afterwards.<br />
<br />
<br />
'''affa --kill JOB'''<br />
Terminates the running job JOB and all of its child processes (the rsync processes).<br />
<br />
'''affa --debug'''<br />
Enables verbose logging. Overrides the job and global configuration.<br />
<br />
=== Configuration examples ===<br />
==== Dedicated backup server ====<br />
* Configure a job for each server you want to back up<br />
* Send the public keys to each server.<br />
affa --send-key JOBNAME<br />
* Sends the DSA key associated with the job JOBNAME.<br />
affa --check-connections<br />
* Verify that the password-less connections work.<br />
affa --show-schedule<br />
* Check whether the scheduled jobs are spread over the day.<br />
affa --make-cronjobs<br />
* Create the cron jobs.<br />
affa --status<br />
* Check the status after 24 hours.<br />
<br />
<br />
==== Backing up individual ibays ====<br />
Suppose we want to back up the ibays 'staff1' and 'staff2' of our production server, whose WAN IP address is 82.123.1.1, to a remote Affa server over the internet, every night at 2:30.<br />
* Log on to the Affa server and install the packages as described above.<br />
* Copy the sample configuration<br />
cp /usr/lib/affa/jobconfig-sample.pl /root/ibay-staff-job.pl<br />
* Edit /root/ibay-staff-job.pl and set:<br />
my $jobname='ibay-staff';<br />
and<br />
'remoteHostName'=>'82.123.1.1',<br />
'TimeSchedule'=>'0230',<br />
'SMEServer'=>'no',<br />
'Include[0]'=>'/home/e-smith/files/ibays/staff1',<br />
'Include[1]'=>'/home/e-smith/files/ibays/staff2',<br />
* Save the configuration<br />
/root/ibay-staff-job.pl<br />
* Send the public key to the production server<br />
affa --send-key ibay-staff<br />
* Check the next morning:<br />
affa --list-archives ibay-staff<br />
affa --status<br />
* Check the status after 24 hours.<br />
ls /var/affa/ibay-staff<br />
<br />
==== Two production servers backing up each other ====<br />
You have two sites connected via a VPN and an SME Server running at each site. In this case you do not need a dedicated Affa backup server: the two production servers can additionally act as Affa servers, each backing up the server at the other site. Simply install Affa and configure a job that backs up the other server. You can use all Affa features except the rise feature. <br />
With the rise feature, the server becomes whichever system it backed up, which is less useful in this scenario, since it would give you a working copy of the other site's server while that server is down.<br />
<br />
For redundancy and faster restores, you can additionally configure a local backup to an external USB disk or a NAS.<br />
<br />
Be careful not to back up the other site's archives in turn. Set the RootDir property to a path that is not included in the default SME backup list. Leaving the RootDir property at its default of '/var/affa' guarantees this.<br />
<br />
==== Backing up a Windows computer ====<br />
Backing up data from a Windows system requires the Cygwin rsyncd daemon to be installed and configured on Windows. The "rsync over ssh" approach does not work, since the Cygwin rsync process consistently crashes after transferring a few files. <br />
<br />
===== Configuring rsyncd on the Windows computer ===== <br />
Install the Cygwin base and the rsync package, then configure the rsyncd service as described in this document: [[Rsyncd setup on a windows computer for use with Affa backup]]. <br />
Installing the sshd service is optional and not required for the backup itself, but having an ssh connection can be very useful for administration and for running scripts on the Windows system. Affa supports sending the public key to a Cygwin Windows computer for password-less login.<br />
<br />
Note: Affa does not back up Windows Access Control List (ACL) information. You may have to fix the ACLs after a restore.<br />
<br />
===== Configuring Affa rsyncd mode (quick-start example) =====<br />
You want to back up the ''My Documents'' folders of the users ''ssorglos'' and ''bsimpson'' from the Windows computer 'ws001' with IP 192.168.1.65 to your Affa server 'affabox' with IP 192.168.1.3.<br />
* Log on to 'affabox' and copy the Cygwin sample configuration script<br />
cp /usr/lib/affa/jobconfig-cygwin-sample.pl /root/ws001-mydocs-job.pl<br />
* Edit /root/ws001-mydocs-job.pl and set<br />
my $jobname='ws001-mydocs';<br />
and<br />
'remoteHostName'=>'192.168.1.65',<br />
'rsyncdPassword'=>'<i>secretword</i>',<br />
'Include[0]'=>'/c/Documents and Settings/ssorglos/My Documents/', ''# Do not use backslashes in paths!''<br />
'Include[1]'=>'/c/Documents and Settings/bsimpson/My Documents/',<br />
where ''secretword'' must be replaced by the password you chose in the rsyncd.secrets file on the Windows computer.<br />
* Save the configuration (this creates the database entries and sets up the cron jobs).<br />
/root/ws001-mydocs-job.pl<br />
* Run the job manually. After completion, check the archive /var/affa/ws001-mydocs/scheduled.0 and the log /var/log/affa/ws001-mydocs.log.<br />
affa --run ws001-mydocs<br />
<br />
===== Configuring Affa rsyncd mode manually =====<br />
In case you want to configure the job manually using db commands, these are the mandatory settings for Cygwin rsyncd mode.<br />
db affa setprop JOB rsyncdMode yes<br />
db affa setprop JOB rsyncdModule AFFA<br />
db affa setprop JOB rsyncdUser affa<br />
db affa setprop JOB rsyncdPassword ''secretword''<br />
db affa setprop JOB SMEServer no<br />
db affa setprop JOB RPMCheck no<br />
db affa setprop JOB Watchdog no<br />
Optional, for an ssh connection:<br />
db affa setprop JOB remoteOS cygwin <br />
<br />
<br />
==== Using Affa to back up to an NFS-mounted NAS or a local USB disk ====<br />
<br />
We want to back up our production SME 7 server, hostname 'prodbox', IP 10.200.48.1, to a mounted filesystem instead of setting up a dedicated Affa server.<br />
<br />
===== Configuring the NAS =====<br />
We have a FREENAS server with IP address 10.200.48.2 up and running with the NFS service enabled. The disk is mounted on /mnt/affashare, and we have allowed the IP address of our production server to access the /mnt/affashare share.<br />
* Log on to 'prodbox' and install the NFS packages:<br />
/usr/bin/yum install --enablerepo=smecontribs smeserver-nfs<br />
* Now enable and start the portmapper service:<br />
config setprop portmap status enabled<br />
service portmap start<br />
<br />
* Mount the NFS share:<br />
mkdir -p /mnt/affadevice<br />
mount 10.200.48.2:/mnt/affashare /mnt/affadevice<br />
<br />
<br />
===== Alternative: configuring the USB disk =====<br />
* Log on to 'prodbox'<br />
* Attach the USB disk to the USB bus. We now need to determine which device the kernel assigned to the drive. Look in /var/log/messages for the corresponding initialization line (Initializing USB Mass Storage driver). A few lines further down you will find the device name. Let's assume it is 'sdh'; if not, replace /dev/sdh in the instructions below with the value you found.<br />
* Create a Linux partition with fdisk: '''Make absolutely sure this really is the attached USB disk before continuing!'''<br />
<br />
fdisk /dev/sdh<br />
You will probably find a vfat DOS partition, which has to be deleted first. Let's assume we created a single partition ''/dev/sdh1''.<br />
* Format it with an ext3 filesystem:<br />
mkfs.ext3 /dev/sdh1<br />
* Create the mount point:<br />
mkdir -p /mnt/affadevice<br />
* Add the USB disk to /etc/fstab by adding a new line:<br />
/dev/sdh1 /mnt/affadevice ext3 defaults<br />
* Mount the disk:<br />
mount /mnt/affadevice<br />
* Cross-check with df:<br />
df<br />
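Before pointing a job at the mounted disk, it can be worth guarding against the device not being mounted at all, since a backup running against an unmounted mount point would silently fill the root filesystem. A minimal sketch of such a guard, where `require_mount` is a hypothetical helper and /mnt/affadevice is the example path from above:

```shell
# Hypothetical guard: refuse to run when the archive directory is not an
# active mount point (mountpoint is part of util-linux).
require_mount() {
  mountpoint -q "$1" || { echo "ERROR: $1 is not mounted" >&2; return 1; }
}

# Only start the backup if the disk is really there:
require_mount /mnt/affadevice && affa --run prodbox
```

Affa's own automount feature (described below) covers the same risk; this guard is only useful when you mount the device by hand.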
<br />
===== Copying an Affa archive from one USB hard disk to a new disk =====<br />
Affa makes heavy use of hard links to deduplicate backups and save disk space. If you want to copy a disk containing an Affa archive to another (larger) disk, you must make sure the hard links are copied correctly, otherwise the copy can become considerably larger than the source archive.<br />
<br />
One way to copy across separate filesystems (e.g. between two different USB disks) while preserving hard links is the following:<br />
* mount the two USB disks on different mount points, for example /media/backup1 and /media/backup2, then run:<br />
<br />
mkdir /media/backup2/archive <br />
cd /media/backup1/archive<br />
tar cpf - . | ( cd /media/backup2/archive && tar xpf - )<br />
<br />
replacing '''archive''' with the actual name of the Affa archive (job) you want to move to the other disk.<br />
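That the tar pipe really preserves hard links can be sanity-checked with a throwaway experiment before trusting it with a real archive. The paths below are temporary directories created for the test, not real Affa archives:

```shell
# Sketch: verify that 'tar cpf - . | tar xpf -' preserves hard links.
src=$(mktemp -d); dst=$(mktemp -d)
echo data > "$src/a"
ln "$src/a" "$src/b"                       # a and b now share one inode
( cd "$src" && tar cpf - . ) | ( cd "$dst" && tar xpf - )
stat -c %h "$dst/a"                        # prints 2: link count survived
```

A plain `cp -r` between the disks would instead expand each hard-linked archive into a full copy.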
<br />
<br />
<br />
===== Configuring Affa =====<br />
We want to run backups at 11:30, 15:30 and 19:30, keeping the last 3 scheduled, 7 daily, 5 weekly, 12 monthly and 1 yearly archives.<br />
* Log on to 'prodbox' and install the Affa packages as described above.<br />
* Copy the sample configuration<br />
cp /usr/lib/affa/jobconfig-sample.pl /root/prodbox-job.pl<br />
* Edit /root/prodbox-job.pl and set:<br />
my $jobname='prodbox';<br />
and<br />
'remoteHostName'=>'localhost',<br />
'TimeSchedule'=>'1130,1530,1930',<br />
'scheduledKeep'=>3,<br />
'dailyKeep'=>7,<br />
'weeklyKeep'=>5,<br />
'monthlyKeep'=>12,<br />
'yearlyKeep'=>1,<br />
'RootDir'=>'/mnt/affadevice',<br />
Check the other properties and change them if needed.<br />
* Save the configuration<br />
/root/prodbox-job.pl<br />
* Run the job manually:<br />
<br />
affa --run prodbox<br />
<br />
<br />
===== Limitations =====<br />
With this kind of setup you cannot use the 'rise' function (converting the backup server into the production server), since it requires the archive to be on the same filesystem as the server. The 'rise' function uses hard links, which do not work across different filesystems.<br />
<br />
===== Automount =====<br />
Having the backup archives on the same filesystem is always a risk, which can be reduced by using the automount feature: the external filesystem is then only mounted while a job is running.<br />
<br />
In the NAS example, configure:<br />
'AutomountDevice'=>'10.200.48.2:/mnt/affashare',<br />
'AutomountPoint'=>'/mnt/affadevice',<br />
and skip step 2.<br />
<br />
In the USB disk example, configure:<br />
'AutomountDevice'=>'/dev/sdc1',<br />
'AutomountPoint'=>'/mnt/affadevice',<br />
and skip steps 5 to 8.<br />
<br />
The mount point is created automatically if it does not exist. <br><br />
To access the archive directory, you have to mount it manually.<br />
<br />
=== Restoring ===<br />
==== Restoring individual files or directories ====<br />
'''Example 1:''' It is Tuesday, January 6, 2009, and user 'briedlin' asks you to restore the messages of his 'orders' mailbox, which he accidentally deleted on Monday.<br />
<br />
* First check which archives are available. The job name for this backup server is 'primmail'. To get a list of all archives, run <br />
affa --list-archives primmail<br />
''(see the sample listing in the chapter [[Affa#Usage_and_command_line_options]])''<br />
* Choose the archive daily.0, which was created in the night after Monday. Now restore the 'orders' mailbox using the rsync command.<br />
* Run the rsync command (note the trailing slash!) on the Affa backup server.<br />
export RDIR=/home/e-smith/files/users/briedlin/Maildir/.orders/ ''# this variable is used to shorten the following command line''<br />
rsync -av /var/affa/primmail/daily.0/$RDIR 10.204.48.1:$RDIR<br />
<br />
If the servers are configured to use a different ssh port, e.g. 2222, run instead:<br />
<br />
export RDIR=/home/e-smith/files/users/briedlin/Maildir/.orders/ ''# this variable is used to shorten the following command line''<br />
rsync -av -e 'ssh -p 2222' /var/affa/primmail/daily.0/$RDIR 10.204.48.1:$RDIR<br />
<br />
<br />
<br />
'''Example 2:''' A user has deleted the file orderform.pdf in the ibay 'docs' on the server 10.204.48.1 and asks you to restore it.<br />
<br />
* You searched and found the latest version of this file in a weekly archive of the job 'prodserv'.<br />
* To copy it back to the server 10.204.48.1, run on the Affa server<br />
export RFILE=/home/e-smith/files/ibays/docs/files/orderform.pdf ''# this variable is used to shorten the following command line''<br />
rsync -av /var/affa/prodserv/weekly.1/$RFILE 10.204.48.1:$RFILE<br />
<br />
If the servers are configured to use a different ssh port, e.g. 2222, run instead:<br />
export RFILE=/home/e-smith/files/ibays/docs/files/orderform.pdf ''# this variable is used to shorten the following command line''<br />
rsync -av -e 'ssh -p 2222' /var/affa/prodserv/weekly.1/$RFILE 10.204.48.1:$RFILE<br />
<br />
<br />
<br />
'''Example 3:''' Restoring a file with special characters in the path<br />
<br />
Do not use shell variables to shorten the command here; that would complicate things more than it helps.<br />
Quote the source and destination paths. In the destination path, escape whitespace with a double backslash and parentheses with a single backslash. On the Affa backup server run:<br />
rsync -av "/var/affa/fileshare/daily.4/home/e-smith/files/ibays/mechfiles/files/Valve Control (Design Gr)/VALVE LIST FOR ISSUED.xls" "192.168.1.7:/home/e-smith/files/ibays/mechfiles/files/Valve\\ Control\\ \(Design\\ Gr\)/VALVE\\ LIST\\ FOR\\ ISSUED.xls"<br />
<br />
If the servers are configured to use a different ssh port, e.g. 2222, run instead:<br />
<br />
rsync -av -e 'ssh -p 2222' "/var/affa/fileshare/daily.4/home/e-smith/files/ibays/mechfiles/files/Valve Control (Design Gr)/VALVE LIST FOR ISSUED.xls" "192.168.1.7:/home/e-smith/files/ibays/mechfiles/files/Valve\\ Control\\ \(Design\\ Gr\)/VALVE\\ LIST\\ FOR\\ ISSUED.xls"<br />
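If you are unsure about the escaping, bash can generate the remote-side quoting for you with `printf %q` (a bash builtin; the path below is the example from above). The result has single backslashes, so either single-quote it on the command line or double the backslashes inside double quotes, as the rsync command above does:

```shell
# Let bash produce a remote-shell-safe form of a path with spaces and
# parentheses, instead of escaping it by hand.
p='Valve Control (Design Gr)/VALVE LIST FOR ISSUED.xls'
printf '%q\n' "$p"
# prints: Valve\ Control\ \(Design\ Gr\)/VALVE\ LIST\ FOR\ ISSUED.xls
```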
<br />
==== Full restore ====<br />
To run a full restore of users and configuration data, run on the Affa server<br />
affa --full-restore <JOB> [<ARCHIVE>]<br />
This synchronizes (via rsync) the data of the backup ARCHIVE to the computer 'remoteHostName' defined in the configuration of the job JOB. <br />
<br />
Example:<br />
You have backed up your production server 'prodsrv' with a job 'prodbox'. To restore the latest backup, run<br />
affa --full-restore prodbox<br />
<br />
To restore the oldest daily archive daily.3, run <br />
affa --full-restore prodbox daily.3<br />
<br />
{{Warning box| A full restore rebuilds the server to the state it was in at the time of the backup. This means that all files created and all server configuration changes made after the backup will be lost. After the restore, the restored server reboots automatically. }}<br />
<br />
==== Migrating an SME7 server to new hardware using the "Affa rise" feature ====<br />
Please see this howto: [[Moving SME to new Hardware]]<br />
<br />
<br />
==== Restoring from a USB disk to a new server ====<br />
<br />
<br />
This tip comes from http://forums.contribs.org/index.php?topic=42412.0<br />
<br />
Q) I have full backups stored on a USB hard disk attached to our Affa backup server. I need to restore an old monthly backup of our "mailbackup" job to a test server instead of the original system. On the backup server I can see the instructions for rebuilding that server from the current backup, but I want to restore a state from about a month ago, before strange things started happening. And I want to do it on a machine that is neither our backup server nor our production server. I have tried to figure out how, but I am lost in the options. My goal is to run some tests.<br />
<br />
A) On your test server, configure a "mailbackup" job identical to the one on your backup server, but set RootDir correctly to /var/affa and set the "status" property to "disabled". Attach the USB drive and copy the archive of the "mailbackup" job to /var/affa. Then run affa --rise mailbackup ARCHIVE<br />
<br />
Remember that Affa places the job setup scripts in the archive directories. In your case this is mailbackup-setup.pl. Copy it to your test server, change the RootDir and status properties, then execute it.<br />
<br />
=== FAQ ===<br />
'''Which files and directories are included by default?'''<br />
<br />
With SMEServer=no: none.<br />
<br />
With SMEServer=yes: the default SME backup list. The following command displays the list:<br />
perl -e 'use esmith::Backup;$b=new esmith::Backup;print join("\n",$b->restore_list)."\n"' <br />
<br />
<br />
'''Can I back up only image files contained in a specific directory?'''<br />
<br />
Yes. Assuming you want to back up all gif and jpg files in the directory /home/e-smith/files/ibays/pictures/files, use this configuration<br />
db affa setprop JOBNAME SMEServer no # Do not include the SME Server default directories and files<br />
db affa setprop JOBNAME Include[0] '/home/e-smith/files/ibays/pictures/files' # Start searching here<br />
db affa setprop JOBNAME Include[1] '*/' # Search the tree recursively<br />
db affa setprop JOBNAME Include[2] '*.gif' # Copy files matching this pattern<br />
db affa setprop JOBNAME Include[3] '*.jpg' <br />
db affa setprop JOBNAME Exclude[0] '*' # Exclude everything else<br />
db affa setprop JOBNAME rsyncOptions '-m' # Do not create empty directories<br />
<br />
<br />
'''How can I move a backup archive from my production server at a remote branch office to the Affa server using a USB disk? I want to avoid downloading the first full backup over the internet.'''<br />
Install Affa on your production server, then attach and mount the USB disk. Configure a job that stores the backup on the USB disk. Once the job has run, attach the USB disk to your Affa server and configure the job there in the same way; use the setup script from the archive directory. You only have to modify the "remoteHostName" property. Now use the --move-archive option to transfer the archive to the local disk. If not already done, send the public key to your production server. Done.<br />
<br />
<br />
'''How do I back up two SME servers behind a firewall?'''<br />
First, configure port forwarding for the ssh service on your firewall. Use a non-standard port, e.g. 2200->22, for your second server. Configure a job on your Affa backup server for each of your production servers, and set the property sshPort=2200 for the second server.<br />
<br />
=== Uninstalling ===<br />
This removes the installed Affa package, all configuration data and all backup archives.<br />
<br />
Run the following commands for every job:<br />
affa --revoke-key JOBNAME<br />
affa --delete-job JOBNAME<br />
Verify that all jobs have been deleted:<br />
affa --status<br />
Remove the Affa package:<br />
rpm -e smeserver-affa<br />
and, if these files are not needed by other packages, do a general cleanup:<br />
rm -f /etc/cron.d/affa /etc/cron.d/affa-status<br />
rm -rf /home/e-smith/db/affa /home/e-smith/db/affa-report<br />
rm -rf /var/affa /var/log/affa<br />
<br />
=== Troubleshooting ===<br />
==== Affa stops working after prodserver has been restored ====<br />
'''Description:'''<br />
* Affa stops working or is broken<br />
* prodserver can no longer open an ssh connection to affaserver<br />
* The message "SSH connection to ''prodserver ip'' failed. Did you send the public key" appears in the file /var/log/affa/''jobname''.log<br />
* Re-sending the ssh keys does not solve the problem<br />
<br />
'''Solution:'''<br />
* This is probably because the key for prodserver stored on affaserver no longer matches prodserver's key.<br />
* The existing key must be deleted so that the new key is accepted from prodserver at the next ssh connection.<br />
* On the command line, edit the file /root/.ssh/known_hosts and remove the line corresponding to prodserver. It starts with: ''prodserver ip'' ssh-rsa<br />
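Instead of hand-editing known_hosts, the stale entry can also be removed with `ssh-keygen -R`, which deletes every entry for the given host and keeps a known_hosts.old backup ('prodserver' below stands for the hostname or IP used in the job):

```shell
# Remove the stale host key for prodserver from root's known_hosts.
# Without -f, ssh-keygen operates on ~/.ssh/known_hosts by default.
ssh-keygen -R prodserver -f /root/.ssh/known_hosts
```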
<br />
==== Cannot send emails after a "rise" ====<br />
'''Description:'''<br />
* emails cannot be sent after a "rise" on the backup server when restoring from SME7 to SME8<br />
* the qpsmtpd log shows:<br />
@400000005125f42409a8cd1c 4280 vvirus::clamav plugin (data_post): clamscan results: /var/spool/qpsmtpd/1361441818:4280:0: Access denied. ERROR<br />
@400000005125f42409aa9624 4280 virus::clamav plugin (data_post): ClamAV error: /usr/bin/clamdscan --stdout --config-file=/etc/clamd.conf --no-summary /var/spool/qpsmtpd/1361441818:4280:0 2>&1: 2<br />
* permission problem on /var/spool<br />
#On the working server:<br />
ls -al /var/spool<br />
drwxr-s--- 2 qpsmtpd clamav 4096 Jun 10 12:31 qpsmtpd<br />
<br />
#On the broken (restored) server:<br />
ls -al /var/spool<br />
drwxr-x--- 2 qpsmtpd clamav 4096 Jun 10 10:46 qpsmtpd<br />
<br />
#On the fresh install before the restore:<br />
ls -al /var/spool<br />
drwxr-s--- 2 qpsmtpd clamav 4096 Apr 17 13:47 qpsmtpd<br />
see [http://bugs.contribs.org/show_bug.cgi?id=4399 SME bug #4399]<br />
<br />
<br />
'''Solution:'''<br />
On the command line on the backup server after the "rise":<br />
chmod g+s /var/spool/qpsmtpd<br />
<br />
==== Users cannot log in to the server – Important ====<br />
After a full restore or a "rise" with Affa on SME8, everything seems to work, but users cannot log in to the server (samba, horde, etc.). The user accounts do appear in the server-manager. See [[bugzilla:6983]] and [[bugzilla:7687]]<br /><br />
{{Warning box|msg=Work in progress: please read the bug report, as there appear to be further problems with this workaround}}<br />
The cause is an incomplete restore/conversion of the files in the /etc directory, in particular the files containing user names and passwords. The first thing to do after an "affa --full-restore prodserv" is to NOT delete the existing Affa backup. If an "affa --rise --all prodserv" was performed, do not continue with "/bin/rm -rf /var/affa" as described in the wiki (and do not delete the archive in any other way either).<br />
* If the rise function was used:<br />
Log in as "root" on the production server and enter on the command line:<br />
<br />
mkdir /etc/backup<br />
<br />
cp -r /etc/{passwd,samba,e-smith,group,gshadow,shadow,ssh,sudoers} /etc/backup<br />
<br />
yes | cp -r /var/affa/"JOBNAME"/"ARCHIVE"/etc/* /etc/<br />
<br />
and finally reconfigure the server<br />
<br />
signal-event post-upgrade; signal-event reboot<br />
<br />
* If the full-restore function was used:<br />
Perform steps 1, 2 and 3 above.<br />
Log in as "root" on the backup server (the Affa server) and enter on the command line:<br />
<br />
export RDIR=/var/affa/"JOBNAME"/"ARCHIVE"/etc/<br />
<br />
rsync -av $RDIR "PRODSERV-IP":/etc/<br />
<br />
On the production server "PRODSERV", enter via ssh in a terminal<br />
signal-event post-upgrade; signal-event reboot<br />
<br />
<br />
<br />
=== Reporting bugs ===<br />
Affa is listed in the [http://bugs.contribs.org/enter_bug.cgi?product=SME%20Contribs&component=smeserver-affa contribs section of the bugtracker].<br />
Please report all bugs, feature requests and documentation issues there. <br />
<br />
=== Additional information ===<br />
For performance details, the list of changes etc., see [[:Affa:Additional information]]<br />
<br />
=== References ===<br />
* Wikipedia article http://en.wikipedia.org/wiki/Rsync<br />
* Rsync website http://rsync.samba.org/<br />
* Wiki article [[Moving SME to new Hardware]]<br />
* Wiki article [[Backup with Affa and FreeDup]]<br />
* Wiki article [[Rsyncd setup on a windows computer for use with Affa backup]]<br />
* Wiki article [[Backup Windows Computers Using Affa]]<br />
* Wiki article [[Backup of ESXi Virtual Machines using Affa]]<br />
<br />
----<br />
[[Category: Contrib/fr]]<br />
[[Category: Backup/fr]]</div>
Arnaud, https://wiki.koozali.org/index.php?title=Affa/fr&diff=23568 Affa/fr, 2014-02-05T19:45:56Z <p>Arnaud: </p>
<hr />
<div>{{Languages|Affa}}<br />
{{Level|type=Skill level: Advanced|Advanced|The instructions on this page require an advanced skill level. A good understanding of Linux and of SME Server is recommended.}}<br />
<br />
{{Warning box| This code is obsolete and no longer maintained by its developer; smeserver-affa has 32 open bugs awaiting fixes. A significant number of bugs and incidents are associated with using Affa after a restore or after using the rise function, such as missing groups, crashed services, or CGI errors in the server-manager}}<br />
see [http://bugs.contribs.org/buglist.cgi?action=wrap&bug_status=UNCONFIRMED&bug_status=CONFIRMED&bug_status=NEEDINFO&bug_status=IN_PROGRESS&bug_status=RESOLVED&bug_status=VERIFIED&classification=Contribs&component=smeserver-affa%20%28unmaintained%29&list_id=23915&product=SME%20Contribs&resolution=---&resolution=FIXED&resolution=NOTABUG&resolution=WONTFIX&resolution=UPSTREAM&resolution=DUPLICATE&resolution=WORKSFORME&saved_report_id=1 bugs]<br />
=== Developer ===<br />
Developer: Michael Weinberger.<br />
Development and maintenance of Affa for SME Server have been discontinued. The new version, Affa 3.X for CentOS, is available on [http://affa.sourceforge.net/ Sourceforge].<br />
<br />
<br />
=== Translators ===<br />
[http://bj-informatique.com/affa.php?lang=fr jibe]<br />
<br />
For the update: [http://www.guedel.eu Arnaud]<br />
<br />
Affa stands for '''A'''utomatische '''F'''estplatten '''F'''ern'''a'''rchivierung, roughly: automatic remote hard-disk archiving.<br />
<br />
<br />
=== Description ===<br />
The main purpose of this Affa package is to turn an SME Server 7 into a dedicated backup server within minutes. Affa backs up as many SME Servers as you like, to any other server that runs sshd and has rsync installed. Once configured, Affa runs unattended and reliably, sending warning messages in case of errors. <br />
<br />
All archives are full backups, since Affa uses the hardlink technique. A new full backup therefore only needs disk space for the differences, plus the filesystem overhead for the hardlinks and directories (typically 2-3%).<br />
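The hardlink technique described above can be illustrated with plain coreutils (a sketch on a scratch directory; the archive names scheduled.0/scheduled.1 merely mimic Affa's naming). A hardlink-copied "full" backup shares the inodes, and thus the data blocks, of the previous one:<br />

```shell
tmp=$(mktemp -d)
mkdir "$tmp/scheduled.1"
echo "big unchanged payload" > "$tmp/scheduled.1/file.dat"
# the next "full" backup is a hardlink copy of the previous archive;
# rsync-based tools typically achieve the same effect with --link-dest
cp -al "$tmp/scheduled.1" "$tmp/scheduled.0"
ino1=$(stat -c '%i' "$tmp/scheduled.1/file.dat")
ino0=$(stat -c '%i' "$tmp/scheduled.0/file.dat")
links=$(stat -c '%h' "$tmp/scheduled.0/file.dat")
echo "same inode: $([ "$ino1" = "$ino0" ] && echo yes), link count: $links"
# prints: same inode: yes, link count: 2
rm -rf "$tmp"
```

Only changed files cost new data blocks; unchanged files are just one extra directory entry each.<br />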
<br />
Affa is based on the rsync program and supports the --compress option. This allows you to run backups over the internet or a VPN. A typical setup consists of one or more Affa backup servers placed at remote locations, which back up the production servers over the VPN.<br />
<br />
A special feature is the "rise" option, which lets you convert your backup server into a production server from a backup archive if your production server is completely lost. This conversion runs in a very short time, even with a huge amount of data. It uses hardlinks and therefore needs no additional disk space.<br />
<br />
Affa is a command-line tool for system administrators and is deliberately designed without a GUI. It can therefore be managed efficiently from the console, even over a slow internet connection.<br />
<br />
'''Note:''' This document covers the version 2 Release Candidate of Affa. The documentation of the latest stable version 1 is [http://wiki.contribs.org/index.php?title=Affa&oldid=11974 here].<br />
<br />
==== Affa features at a glance ====<br />
* Affa is secure: all transfers and logins use the ssh protocol with public/private key authentication.<br />
* Runs full backups periodically without intervention<br />
* Only the differences from the previous full backup are transferred over the network<br />
* Uses rsync with the compression option for slow connections, allowing backups over the internet/VPN<br />
* Uses the hardlink technique to reduce the physical disk space to only what the differences between two full backups require<br />
* Keeps a configurable number of scheduled, daily, weekly, monthly and yearly backups<br />
* Archives can be browsed without unpacking them first.<br />
* Archives can be attached to a Samba share.<br />
* Backup jobs are started by the cron daemon<br />
* Interrupted jobs resume on restart, taking already-transferred data into account<br />
* Backs up the default e-smith directories and files when the SMEServer property is set to 'yes'<br />
* Additional directories and files can be included<br />
* Directories and files can be excluded from the backup<br />
* Non-SME Linux servers can be backed up by setting the SMEServer property to 'no' and using an include list<br />
* In ESXi mode, running virtual machines can be backed up. See [[Sauvegarde de machines virtuelles ESXi avec Affa|Backup of ESXi Virtual Machines using Affa]]<br />
* Configurable priority (nice level) for the rsync processes on the backup and source servers<br />
* Custom programs can be run before and after a job (for example, starting a tape backup)<br />
* Free disk space check after a job run, with strict, normal or risky warning levels<br />
* Extensive checking of failure conditions<br />
* Sends failure messages to a configurable list of email addresses<br />
* Sends a warning message when the backup server runs out of disk space<br />
* Installs an optional watchdog on the source server that checks the availability of the backup server (SME Server only)<br />
* The watchdog sends warnings if a scheduled backup did not run (SME Server only)<br />
* The watchdog sends a daily reminder message until the error is fixed (SME Server only)<br />
* Can display the current status of all jobs, showing the last and next run times, size and disk usage<br />
* The status can be emailed daily, weekly or monthly<br />
* Can list all existing archives of a job, showing the date, number of files and disk usage<br />
* Can send (and revoke) the public key to the source server (SME Server and ESXi only)<br />
* Option to convert the backup server into a production server (SME Server only)<br />
* The rise function does not physically move data, so it is extremely fast and needs (almost) no disk space<br />
* The rise function can be run remotely, since the backup server's ethernet drivers are kept<br />
* Compares the installed RPMs of the source and backup servers; sends a warning if they do not match.<br />
* The rise can be undone to put the backup server back into service<br />
* Configured with e-smith-style db commands, with one record per job and a default record shared by all jobs<br />
* Logs to /var/log/affa/JOB.log and /var/log/affa/affa.log, with a debug option for more detail<br />
* Log files are rotated weekly, keeping 5 files<br />
<br />
=== Installation or update ===<br />
==== Smecontribs for sme8 ====<br />
yum install --enablerepo=smecontribs smeserver-affa<br />
<br />
If Affa is being installed for the first time, run the following command to initialize the Affa database:<br />
affa --make-cronjobs<br />
<br />
then log out of the console and back in so that bash auto-completion (TAB key) takes effect.<br />
==== Old installation method / method for sme7 ====<br />
{{Note box|SME7 has now reached end of life and has been moved to the obsolete repository}}<br />
SME7 version: '''smeserver-affa-2.0.0-rc4.noarch.rpm'''<br />
wget <nowiki>http://mirror.contribs.org/smeserver/contribs/michaelw/sme7/Affa2/smeserver-affa-2.0.0-rc4.noarch.rpm</nowiki><br />
wget <nowiki>http://mirror.contribs.org/smeserver/contribs/michaelw/sme7/Affa2/perl-Compress-Bzip2-2.09-1.2.el4.rf.i386.rpm</nowiki><br />
/usr/bin/yum --enablerepo=smecontribs localinstall \<br />
smeserver-affa-2.0.0-rc4.noarch.rpm \<br />
perl-Compress-Bzip2-2.09-1.2.el4.rf.i386.rpm<br />
<br />
<br />
SME8 version: SME8 is based on CentOS 5, which is why the el5 perl-Compress-Bzip2 rpm must be used. <br />
<br />
For 32-bit SME8:<br />
wget <nowiki>http://apt.sw.be/redhat/el5/en/i386/rpmforge/RPMS/perl-Compress-Bzip2-2.09-1.2.el5.rf.i386.rpm</nowiki><br />
wget <nowiki>http://mirror.pialasse.com/releases/8/smecontribs/i386/RPMS/smeserver-affa-2.0.0-rc5.el5.sme.noarch.rpm</nowiki><br />
/usr/bin/yum --enablerepo=smecontribs localinstall \<br />
smeserver-affa-2.0.0-rc5.el5.sme.noarch.rpm \<br />
perl-Compress-Bzip2-2.09-1.2.el5.rf.i386.rpm<br />
<br />
For 64-bit SME8:<br />
wget http://apt.sw.be/redhat/el5/en/x86_64/rpmforge/RPMS/perl-Compress-Bzip2-2.09-1.2.el5.rf.x86_64.rpm<br />
wget <nowiki>http://mirror.pialasse.com/releases/8/smecontribs/i386/RPMS/smeserver-affa-2.0.0-rc5.el5.sme.noarch.rpm</nowiki><br />
/usr/bin/yum --enablerepo=smecontribs localinstall \<br />
smeserver-affa-2.0.0-rc5.el5.sme.noarch.rpm \<br />
perl-Compress-Bzip2-2.09-1.2.el5.rf.x86_64.rpm<br />
<br />
After the first installation of Affa, run the following command to initialize the Affa database:<br />
affa --make-cronjobs<br />
and log out of the console and back in so that bash auto-completion (TAB key) takes effect<br />
<br />
=== Quick-start example ===<br />
You have an SME Server 7 production server named 'prodbox' with the IP 10.200.48.1. Install a second SME Server 7 as a backup server, named 'affabox' with the IP 10.200.48.2. <br />
<br />
<ol><li>log in to 'affabox' and install the packages mentioned above.<br />
</li><li>copy the sample job configuration script<br />
cp /usr/lib/affa/jobconfig-sample.pl /root/prodbox-job.pl<br />
</li><li>edit /root/prodbox-job.pl and set<br />
my $jobname='prodbox';<br />
and<br />
'remoteHostName'=>'10.200.48.1',<br />
If the ssh port of the remote server is configured to, for example, 2222, also set<br />
'sshPort'=>2222,<br />
</li><li>save and run the 'prodbox-job.pl' file. This registers the configuration (and creates the database entries and the cron jobs)<br />
/root/prodbox-job.pl<br />
</li><li>Generate the DSA keys and send the public key to 'prodbox'<br />
{{Note box|Initially, when you do this step, you must temporarily enable "Allow SSH authentication using standard passwords" in the remote access panel on the production server}}<br />
affa --send-key prodbox<br />
</li><li>Run the job manually<br />
<br />
affa --run prodbox<br />
</li></ol><br />
<br />
=== Configuration ===<br />
The configuration is stored in an e-smith-style database. Use the db command to configure Affa.<br />
The job name is the key of a record of type 'job'.<br>To configure a new job named 'prodbox', enter:<br />
<br />
db affa set prodbox job<br />
<br />
Then set the properties<br />
<br />
db affa setprop prodbox remoteHostName 192.168.1.1<br />
db affa setprop prodbox TimeSchedule '0030,0730,1130,1330,1730,2030'<br />
db affa setprop prodbox Description 'My Production Server'<br />
db affa setprop prodbox status enabled<br />
and so on...<br />
<br />
Alternatively, you can use a script as described above in the 'Quick-start' chapter.<br />
<br />
To check your work, type:<br />
<br />
db affa show prodbox<br />
<br />
Finally, set up the cron jobs:<br />
affa --make-cronjobs<br />
==== Job configuration properties ====<br />
'''Note:''' The default values given in this table are those of the Affa program and must not be confused with the values preset in the helper scripts, e.g. jobconfig-sample.pl.<br />
{| border="1" cellpadding="3" cellspacing=0<br />
|-<br />
| '''Property''' || '''Value''' || '''Default''' || '''Description''' <br />
|-<br />
| remoteHostName <br />
| FQHN or IP || || FQHN or IP of the source host (mandatory)<br />
|-<br />
| TimeSchedule <br />
| HHMM,HHMM,... || || Does not need to be sorted. At least one time is mandatory. '''Important:''' Using the correct HHMM format is essential. Affa does not check it. A badly formatted TimeSchedule will cause strange Perl errors. <br />
|-<br />
| Description<br />
| text || || <br />
|-<br />
| scheduledKeep<br />
| integer >= 1 || 1|| How many scheduled backups should be kept <br />
|-<br />
| dailyKeep<br>weeklyKeep<br>monthlyKeep<br>yearlyKeep<br />
| integer >= 0 || 7<br>4<br>12<br>2<br>|| How many daily, weekly, monthly or yearly backups should be kept <br />
|-<br />
| SMEServer<br />
| yes ''or'' no || yes || When set to 'yes', the default e-smith directories are automatically included and the RPMCheck=yes property can be used<br />
<br />
|-<br />
| Include[0]<br>Include[1]<br>...<br />
| full path || || Additional files or directories to include in the backup <br />
|-<br />
| Exclude[0]<br>Exclude[1]<br>...<br />
| full path || || Additional files or directories to exclude from the backup <br />
|-<br />
| RPMCheck<br />
| yes ''or'' no || no || Only applicable to jobs that back up an SME Server 7.<br>Compares the RPMs of the source host with those of the Affa backup host. Sends a message with a list of the differences if they do not match. This check is useful if you want to be able to convert the backup server into a production server.<br />
|-<br />
| DiskSpaceWarn<br />
| strict ''or'' normal ''or'' risky ''or'' none || strict || Runs a disk space check after a job has finished. With level 'strict', a warning message is sent if the free space is less than the size of the backup that has just completed. With levels 'normal'/'risky', the message is sent if less than 50%/10% of the backup size remains free.<br />
'''Note:''' When RootDir is a symbolic link to another filesystem, the disk usage of the local filesystem is checked instead of that of the linked filesystem. Set the mount point as RootDir to get a correct check.<br />
|-<br />
| localNice <br />
| -19...+19 || 0 || Runs the local rsync process through nice. <br />
|-<br />
| remoteNice<br />
| -19...+19 || 0 || Runs the rsync process on the source through nice. <br />
|-<br />
| Watchdog<br />
| yes ''or'' no || yes || Only applicable to jobs that back up an SME Server 7.<br>When a job runs, Affa installs a watchdog script on the source in /etc/cron.d/, which sends a warning message if the next scheduled job (taken from the TimeSchedule property + 10 minutes) has not run. This guarantees that you are notified even if the Affa server itself goes down. The watchdog script sends a daily reminder message while the error persists. The next job run replaces the watchdog script with a new trigger time. <br />
|-<br />
| sshPort<br />
| service port || 22 || When sshd on the source server, or your firewall, listens on a non-standard port, specify the port here.<br />
|-<br />
| ConnectionCheckTimeout<br />
| seconds || 120 || Before the rsync process starts on the remote source host, Affa checks the ssh connection and aborts with an error after the configured time if the host does not respond.<br />
|- <br />
| rsyncTimeout<br />
| seconds || 900 || Rsync exits if no data is transferred for the configured time. This avoids a permanent hang in case of a network error.<br />
|- <br />
| BandwidthLimit<br />
| integer >= 0 KB/s || 0 || Limits the data transfer rate. A value of zero means no limit.<br />
|- <br />
| rsyncCompress<br />
| yes ''or'' no || no || Compresses the transferred data. Can be useful on slow internet connections. This increases the CPU load on the source and backup hosts.<br />
|-<br />
| EmailAddresses<br />
| name@domain.com,name@domain.com,... || admin|| Comma-separated list of the email addresses messages are sent to.<br>'''Note:''' By default, Affa only sends messages on errors, never on success (see the chattyOnSuccess property).<br />
|-<br />
| RetryAttempts<br />
| integer >= 0 || 3 || When set to a value >0, Affa reruns a failed job RetryAttempts times with a delay of RetryAfter seconds.<br />
|-<br />
| RetryAfter <br />
| seconds >= 3 || 600 || When set to a value >0, waits RetryAfter seconds before rerunning the job after an error. Only useful with RetryAttempts>0<br />
|-<br />
| RetryNotification<br />
| yes ''or'' no || yes || When set to no, Affa does not send an error message when a job has failed and RetryAttempts is configured. An error message is only sent when the last attempt has failed.<br />
|-<br />
| chattyOnSuccess<br />
| integer >= 0 || 0 || When set to a value >0, Affa sends a message for successfully completed jobs and decrements the chattyOnSuccess value. When the value reaches 0, Affa returns to its default behaviour and only sends messages on errors.<br />
|-<br />
| AutomountDevice<br>AutomountPoint<br />
| full path || || Device and mount point of the backup device (e.g. a USB disk). The device is mounted automatically before a job runs and unmounted after the job has finished. Both properties must be set for the automount feature to be active.<br />
|-<br />
| AutomountOptions<br />
| string || || An option string passed to the automount command<br />
|-<br />
| AutoUnmount <br />
| yes ''or'' no || yes || When set to 'no', the automatically mounted device stays mounted after Affa has run.<br />
|-<br />
| preJobCommand<br>postJobCommand<br />
| full path || || Programs (local on the Affa server) to run before/after a job runs. The job name and type (scheduled, daily etc.) are passed to the program as arguments. The exit code is additionally passed to the post-job command. See /usr/lib/affa/ for sample perl scripts.<br />
|-<br />
| RootDir<br />
| full path || /var/affa || Where to store the backup archives. Do not use /home/e-smith or /root, since they are included in the directories to be backed up, which would prevent the rise function from working! Recommended directory: /var/affa<br />
|-<br />
| SambaShare<br />
| yes ''or'' no|| yes || Makes the job's archives accessible via SMB.<br />
|-<br />
| Debug <br />
| yes ''or'' no || no || When set to 'yes', gives more detail in the log file.<br />
|-<br />
| status<br />
| enabled ''or'' disabled || enabled || When set to 'disabled', no cron entries are made. You can still run the jobs manually.<br />
|-<br />
| rsync--inplace<br />
| yes ''or'' no || yes || Set to 'no' if the rsync version on the source does not support this option (such as rsync on SME6)<br />
|-<br />
| rsync--modify-window<br />
| integer >= 0 || 0 || When comparing two timestamps, rsync treats them as equal if they differ by no more than the modify-window value. This is normally 0, for an exact match. A value >= 0 is handy if you cannot keep the clocks of the source and the Affa server synchronized.<br />
|-<br />
| rsyncOptions<br />
| text || || String of additional options to pass to rsync.<br />
|-<br />
| rsyncdMode<br />
| yes ''or'' no || no || Set to yes to connect to the rsync daemon on the remote host (instead of running rsync over ssh).<br />
|-<br />
| rsyncdModule<br />
| text || AFFA || The rsyncd module name (only applicable with rsyncdMode=yes)<br />
|-<br />
| rsyncdUser<br />
| text || affa || The username for authentication to the rsync daemon (only applicable with rsyncdMode=yes)<br />
|-<br />
| rsyncdPassword<br />
| text || || The password for authentication to the rsync daemon (only applicable with rsyncdMode=yes)<br />
|-<br />
| remoteOS<br />
| cygwin || || With remoteOS=cygwin, the --send-key and --revoke-key options use the 'Administrator' account and the correct path for the public key on a remote Windows/Cygwin host. <br />
|- <br />
| ESXi<br />
| yes ''or'' no || no || Enables the VMware ESXi backup mode for virtual machines.<br>See [[Sauvegarde des machines virtuelles ESXi avec Affa|Backup of ESXi Virtual Machines using Affa]]<br />
|-<br />
| ESXiVMName<br />
| text || || The VM name as displayed in the VI client.<br />
|-<br />
| ESXiUsername<br />
| text || || The VI client user with permission to create and delete snapshots.<br />
|-<br />
| ESXiPassword<br />
| text || || The password of the ESXiUsername user<br />
|-<br />
| chunkFiles<br />
| text || || Names of files that Affa should chunk and compress. Multiple names must be separated by a slash (/), e.g. 'mysql.dump/pgsql.dump' chunks both files mysql.dump and pgsql.dump. With ESXi=yes this is implicitly set to '*.vmdk'. See also the command-line options --chunk-archive and --unchunk-archive<br />
|}<br />
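As the TimeSchedule row above warns, Affa does not validate the HHMM format itself, and a malformed value causes strange Perl errors. A hypothetical pre-flight check (not part of Affa) that one might run before setting the property:<br />

```shell
# Hypothetical validator for the TimeSchedule format (HHMM,HHMM,...);
# hours 00-23, minutes 00-59, comma-separated, zero-padded.
valid_schedule() {
  echo "$1" | grep -Eq '^([01][0-9]|2[0-3])[0-5][0-9](,([01][0-9]|2[0-3])[0-5][0-9])*$'
}
valid_schedule "0030,0730,1130,1330,1730,2030" && echo "schedule ok"
valid_schedule "030,730" || echo "malformed: hours and minutes must be zero-padded"
```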
<br />
==== Default configuration properties ====<br />
All properties can be set as defaults in the DefaultAffaConfig record. This is handy when you configure several similar jobs. <br />
<br />
Example: You want to set the 'localNice' property to 19 for all jobs. Type<br />
db affa setprop DefaultAffaConfig localNice 19<br />
and do not set this property on the individual jobs.<br />
Properties set on a job take precedence over the default properties.<br />
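This precedence rule can be sketched in plain shell (the variables stand in for the e-smith db records, and `prop` is a hypothetical lookup helper, not an Affa command):<br />

```shell
# Defaults record (DefaultAffaConfig) and a job record, as shell variables.
default_localNice=19
default_Debug=no
job_Debug=yes
# prop NAME: the job property wins when set, otherwise the default applies
prop() {
  eval "echo \"\${job_$1:-\$default_$1}\""
}
echo "localNice=$(prop localNice) Debug=$(prop Debug)"
# prints: localNice=19 Debug=yes
```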
<br />
The special property 'sendStatus' only applies in the DefaultAffaConfig record. It controls the status report sent by email and can take the values 'none', 'daily', 'weekly' or 'monthly'. <br />
<br />
To configure a weekly report, type:<br />
db affa setprop DefaultAffaConfig sendStatus weekly<br />
then set up the cron job:<br />
affa --make-cronjobs<br />
<br />
==== Global disable ====<br />
All jobs can be disabled by setting the AffaGlobalDisable record to 'yes'.<br />
db affa set AffaGlobalDisable yes<br />
affa --make-cronjobs<br />
<br />
To re-enable:<br />
db affa set AffaGlobalDisable no<br />
affa --make-cronjobs<br />
<br />
=== Usage and command-line options ===<br />
{{Note box|Options can be abbreviated as long as they remain unique, e.g. --mak is identical to --make-cronjobs}}<br />
'''affa --run JOB'''<br />
Runs a job. Normally done by the cron job. <br />
<br />
<br />
'''affa --make-cronjobs'''<br />
Sets up the cron jobs as scheduled in the job records.<br />
<br />
<br />
'''affa --send-key JOB'''<br />
'''affa --send-key --host=TARGETHOST [--port=PORT] [--remoteOS=cygwin]'''<br />
First generates the DSA key for the Affa server, if this has not already been done. Then sends the public key to the host 'remoteHostName' as configured in the job record and creates the job-specific ssh known-hosts entry. <br />
{{Note box|Initially, when you do this step, you will need to temporarily enable "Allow SSH authentication using standard passwords" on the production server.}}<br />
{{Note box|<nowiki>the --send-key option works for SME Server as the remote server, and for systems on which the public keys are stored in /root/.ssh/authorized_keys2 and on which the commands /bin/cat, /bin/touch, /bin/grep and /bin/mv are available. With remoteOS=cygwin it works for remote servers running Cygwin/Windows.</nowiki>}}<br />
<br />
<br />
'''affa --full-restore JOB [ARCHIVE]'''<br />
Performs a full restore from the backup archive [ARCHIVE] onto the remote source server as defined in the JOB record. If [ARCHIVE] is not given, the 'scheduled.0' archive is used by default. The full restore rebuilds the server as it was at the time of the backup. After the restore, the source host is rebooted.<br />
<br />
<br />
'''affa --rise [--all] JOB [ARCHIVE]'''<br />
Runs a full restore from the backup archive [ARCHIVE] onto the Affa server <b>(!)</b> as defined in the JOB record. In other words, once it has finished, the backup server reboots as a clone of the source server. Make sure you have shut down the source server before rebooting the backup server, otherwise the network interface will not be activated. This matters when you run --rise remotely. The --rise function only works with SME Server 7 and should only be used on dedicated backup servers.<br />
<br />
With the --all option, all files and directories of the archive as defined by the Include[] properties are restored. Files and directories with the same name on the Affa server will be overwritten and cannot be restored by an undo-rise. This should not be a problem on a dedicated Affa server that holds no other data. After a possible undo-rise, this additionally restored data must be removed manually.<br />
<br />
Note that the rise process backs up the Affa server itself before restoring the archive. This backup is used by a possible undo-rise to restore the Affa server. Only the standard files and directories are backed up. Data in non-standard locations (such as /opt) is not touched and will still exist after the rise. See also: [[Backup_server_config#Standard_backup_.26_restore_inclusions]]<br />
<br />
<br />
'''affa --undo-rise'''<br />
This function converts an Affa server that was previously turned into a production server back into a backup server. After a reboot, all configured jobs will run again.<br />
<br />
<br />
'''affa --unchunk-archive JOB ARCHIVE'''<br />
Concatenates and decompresses all chunked files in the archive ARCHIVE of job JOB, making the archive ready for a restore. Note that the most recent archive, scheduled.0, is never chunked and can therefore be restored as-is without prior reassembly.<br />
<br />
<br />
'''affa --chunk-archive JOB ARCHIVE'''<br />
Chunks and compresses all files whose names match the chunkFiles property, to save disk space. This should only be used as the inverse of --unchunk-archive. The chunks are stored as bzip2 files in a directory named FILENAME.affa-chunks, 8 levels deep.<br />
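The chunking idea, split into pieces, compress with bzip2, then concatenate back, can be illustrated with standard tools (a sketch only; the chunk size and the .affa-chunks directory name mimic the description above but are not Affa's exact on-disk format):<br />

```shell
# Scratch data standing in for a large dump file
tmp=$(mktemp -d)
seq 1 5000 > "$tmp/mysql.dump"
# chunk: split into fixed-size pieces, then compress each piece
mkdir "$tmp/mysql.dump.affa-chunks"
split -b 4096 "$tmp/mysql.dump" "$tmp/mysql.dump.affa-chunks/chunk."
bzip2 "$tmp/mysql.dump.affa-chunks"/chunk.*
# unchunk: decompress and concatenate the pieces back in order
bzcat "$tmp/mysql.dump.affa-chunks"/chunk.*.bz2 > "$tmp/restored.dump"
result=fail
cmp -s "$tmp/mysql.dump" "$tmp/restored.dump" && result=ok
echo "round trip: $result"  # prints: round trip: ok
rm -rf "$tmp"
```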
<br />
<br />
<br />
'''affa --create-backup-file JOB [ARCHIVE] [--outfile=FILE]'''<br />
Creates a gzipped tar archive FILE from the ARCHIVE of job JOB. During creation the MD5 sum is computed and then compared with that of the written tar file. The default archive is scheduled.0 and the default FILE is ./smeserver.tgz. The checksum is written to FILE.md5sum.<br />
<br />
A smeserver.tgz file stored on a USB disk can be used for a restore during a fresh installation of SME Server.<br />
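The integrity check behind --create-backup-file can be sketched with tar and md5sum on scratch data (the paths are made up; only the tgz-plus-md5sum pattern is taken from the description above):<br />

```shell
# Scratch tree standing in for a backup archive
tmp=$(mktemp -d)
mkdir -p "$tmp/archive/etc"
echo "config" > "$tmp/archive/etc/app.conf"
# write the gzipped tar plus its .md5sum companion file
tar -C "$tmp" -czf "$tmp/smeserver.tgz" archive
( cd "$tmp" && md5sum smeserver.tgz > smeserver.tgz.md5sum )
# verify the archive against the stored checksum
verified=no
( cd "$tmp" && md5sum -c --quiet smeserver.tgz.md5sum ) && verified=yes
echo "verified=$verified"  # prints: verified=yes
rm -rf "$tmp"
```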
<br />
<br />
'''affa --list-archives [--csv] JOB'''<br />
Displays a table of all existing archives of job JOB, with the date, number of files, size and disk usage. <br />
<small><pre><br />
Affa version 2.0.0-rc4 on affa1.mydomain.de (10.204.48.2)<br />
+------------------------------------------------------------------------------+<br />
| Job: primmail |<br />
| Description: Mailserver Albstadt |<br />
| Directory: /var/affa/primmail/ |<br />
| Hostname: 10.204.144.2 |<br />
| Email: admin@mydomain.de |<br />
+-------+-----------------------+-----------+----------------+--------+--------+<br />
| Run | Completion date | Exec Time | Files | Size | Recvd |<br />
+-------+-----------------------+-----------+----------------+--------+--------+<br />
| M 9 | Sun 2008 Mar 02 06:11 | 0h29m47s | 679010 | 80GB | 60MB |<br />
| M 8 | Sun 2008 Mar 30 06:12 | 0h26m59s | 701683 | 83GB | 48MB |<br />
| M 7 | Sun 2008 Apr 27 06:16 | 0h28m01s | 731332 | 87GB | 47MB |<br />
| M 6 | Sun 2008 Jun 01 06:20 | 0h27m37s | 755529 | 90GB | 50MB |<br />
| M 5 | Sun 2008 Jun 29 06:24 | 0h28m39s | 800200 | 94GB | 52MB |<br />
| M 4 | Sun 2008 Jul 27 06:28 | 0h29m10s | 835398 | 99GB | 59MB |<br />
| M 3 | Sun 2008 Aug 31 06:34 | 0h34m53s | 869409 | 101GB | 65MB |<br />
| M 2 | Sun 2008 Sep 28 06:38 | 0h33m19s | 910889 | 106GB | 59MB |<br />
| M 1 | Sun 2008 Oct 26 06:42 | 0h38m41s | 932627 | 110GB | 61MB |<br />
| M 0 | Sun 2008 Nov 30 06:48 | 0h51m44s | 954090 | 116GB | 75MB |<br />
+-------+-----------------------+-----------+----------------+--------+--------+<br />
| W 3 | Sun 2008 Dec 14 06:50 | 0h53m34s | 962267 | 118GB | 68MB |<br />
| W 2 | Sun 2008 Dec 21 06:52 | 0h53m11s | 974914 | 120GB | 69MB |<br />
| W 1 | Wed 2008 Dec 24 06:52 | 0h48m58s | 974536 | 120GB | 64MB |<br />
| W 0 | Sun 2008 Dec 28 06:58 | 0h39m01s | 977003 | 120GB | 61MB |<br />
+-------+-----------------------+-----------+----------------+--------+--------+<br />
| D 6 | Tue 2008 Dec 30 06:52 | 0h54m45s | 976872 | 120GB | 77MB |<br />
| D 5 | Wed 2008 Dec 31 06:52 | 0h49m10s | 977764 | 120GB | 63MB |<br />
| D 4 | Thu 2009 Jan 01 06:28 | 0h28m09s | 977879 | 120GB | 60MB |<br />
| D 3 | Fri 2009 Jan 02 06:53 | 0h53m21s | 978144 | 120GB | 68MB |<br />
| D 2 | Sat 2009 Jan 03 06:53 | 0h53m15s | 978072 | 120GB | 60MB |<br />
| D 1 | Sun 2009 Jan 04 06:53 | 0h53m16s | 976478 | 120GB | 70MB |<br />
| D 0 | Mon 2009 Jan 05 06:52 | 0h52m42s | 977062 | 120GB | 61MB |<br />
+-------+-----------------------+-----------+----------------+--------+--------+<br />
| S 1 | Mon 2009 Jan 05 17:57 | 0h27m48s | 976353 | 120GB | 98MB |<br />
| S 0 | Tue 2009 Jan 06 06:52 | 0h52m55s | 976539 | 120GB | 70MB |<br />
+-------+-----------------------+-----------+----------------+--------+--------+<br />
</pre></small><br />
With --csv, the table is output in a colon-separated (:) machine-readable format.<br />
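A colon-separated record can be consumed field by field with standard tools. A minimal sketch; the record layout (run:files:size) used here is an assumed example for illustration, not Affa's documented --csv format:

```shell
# Split one colon-separated record into fields with cut.
# The layout run:files:size is an assumption, not Affa's actual format.
record="M 0:954090:116GB"
files=$(echo "$record" | cut -d: -f2)
size=$(echo "$record" | cut -d: -f3)
echo "files=$files size=$size"
```
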
<br />
<br />
'''affa --status [--csv]'''<br />
Displays a table of all enabled jobs, with the time of the last and next run, the size and disk space used by the most recent archive, the execution time of the last run, and the number of scheduled (S), daily (D), weekly (W), monthly (M) and yearly (Y) archives. The last run (column 'Last') shows 'failed' if a job has not run within the last 24 hours. For disabled jobs, 'Last' therefore always shows 'failed' after 24 hours. To see the date and time of the last run of such jobs, use the --list-archives option. The 'Next' column shows the time the next run will start, if 'Enabled' is 'yes'.<br />
<small><pre><br />
Affa version 2.0.0-rc4 on backup.mydomain.de (10.204.48.2)<br />
+----------------+-----+-------+-----------+-------+-------+----------------+<br />
| Job | ENA | Last | Exec Time | Next | Size | N of S,D,W,M,Y |<br />
+----------------+-----+-------+-----------+-------+-------+----------------+<br />
| erp-alb-rsyncd | yes | 00:20 | 0h00m14s | 00:20 | 46MB | 1, 7, 4,10, 0 |<br />
| erp-fra-rsyncd | yes | 02:46 | 0h01m03s | 02:45 | 712MB | 1, 7, 4,10, 0 |<br />
| esxi-TS-W2K | yes | 00:37 | 4h22m08s | 20:15 | 60GB | 1, 7, 1, 0, 0 |<br />
| esxi-W2KR2 | yes | 02:22 | 3h52m09s | 22:30 | 40GB | 1, 7, 1, 0, 0 |<br />
| helpdesk | yes | 00:25 | 0h00m47s | 00:25 | 117MB | 1, 7, 4,10, 0 |<br />
| imageserv | yes | running (pid 16477) | 1, 7, 4,10, 0 |<br />
| intraweb | yes | 06:34 | 0h04m50s | 18:00 | 2.1GB | 2, 7, 4,10, 0 |<br />
| pdcalb | yes | 21:19 | 0h14m27s | 21:05 | 89GB | 1, 7, 4,10, 0 |<br />
| pdcfra | yes | 04:18 | 0h03m33s | 04:15 | 33GB | 1, 7, 4,10, 0 |<br />
| primmail | yes | 06:52 | 0h52m55s | 17:30 | 120GB | 2, 7, 4,10, 0 |<br />
| rayofhope | yes | 21:52 | 0h07m40s | 21:45 | 26GB | 1, 7, 4,10, 0 |<br />
| smecrmpg | yes | 22:35 | 0h00m21s | 22:35 | 70MB | 1, 7, 4,10, 0 |<br />
| sozserv | yes | 02:09 | 0h04m03s | 02:05 | 5.9GB | 1, 7, 4,10, 0 |<br />
| wiki | yes | 03:45 | 0h00m27s | 03:45 | 278MB | 1, 7, 4, 8, 0 |<br />
+----------------+-----+-------+-----------+-------+-------+----------------+<br />
| az32share | no | - | 0h00m22s | 03:05 | 1.3GB | 1, 7, 4, 8, 0 |<br />
+----------------+-----+-------+-----------+-------+-------+----------------+<br />
</pre></small><br />
With --csv, the table is output in a colon-separated (:) machine-readable format.<br />
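The 24-hour rule for the 'Last' column can be sketched as follows. This is only an illustration of the rule as stated above, not Affa's actual code:

```shell
# A job counts as 'failed' when its last run is more than
# 24 hours (86400 seconds) old.
age=90000   # example: last run 25 hours ago
if [ "$age" -gt 86400 ]; then
    echo "failed"
else
    echo "ok"
fi
```
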
<br />
<br />
'''affa --disk-usage'''<br />
Displays the current disk usage of the root directories on all filesystems.<br />
<small><pre><br />
Affa version 2.0.0-rc4 on backup2.mydomain.de (10.204.0.52)<br />
+------+--------+--------+----------------------------------------------------+<br />
| Use% | Used | Avail | Root Dir |<br />
+------+--------+--------+----------------------------------------------------+<br />
| 71% | 938GB | 365GB | /var/affa |<br />
| 20% | 194GB | 759GB | /mnt/affadev |<br />
+------+--------+--------+----------------------------------------------------+<br />
</pre></small><br />
With --csv, the table is output in a colon-separated (:) machine-readable format.<br />
<br />
<br />
'''affa --show-schedule [--all]'''<br />
Displays a 'graphical' schedule of all enabled jobs. The resolution is 30 minutes. An 'X' marks the scheduled start times. The duration of the last run is marked with '=' characters.<br />
<small><pre><br />
Affa version 2.0.0-rc4 on backup.mydomain.de (10.204.48.2)<br />
TIME 0:00 4:00 8:00 12:00 16:00 20:00<br />
erp-alb-rsyncd X------- -------- -------- -------- -------- --------<br />
helpdesk X------- -------- -------- -------- -------- --------<br />
esxi-W2KR2 =====--- -------- -------- -------- -------- -----X==<br />
esxi-TS-W2K ==------ -------- -------- -------- -------- X=======<br />
imageserv ---X---- -------- -------- -------- -------- --------<br />
sozserv ----X--- -------- -------- -------- -------- --------<br />
erp-fra-rsyncd -----X-- -------- -------- -------- -------- --------<br />
wiki -------X -------- -------- -------- -------- --------<br />
pdcfra -------- X------- -------- -------- -------- --------<br />
primmail -------- ----X=-- -------- -------- ---X---- --------<br />
intraweb -------- -----X-- -------- -------- ----X--- --------<br />
pdcalb -------- -------- -------- -------- -------- --X-----<br />
rayofhope -------- -------- -------- -------- -------- ---X----<br />
smecrmpg -------- -------- -------- -------- -------- -----X--<br />
1 disabled jobs not listed. Use --all to display.<br />
</pre></small><br />
<br />
<br />
'''affa --send-status'''<br />
Sends the status table, the disk usage and the archive listings of all jobs to the email address configured in the 'DefaultAffaConfig' record. This command is used by the 'affa status' cron job.<br />
<br />
<br />
'''affa --mailtest JOB'''<br />
Sends a test email to the email addresses configured in the job record. With the 'Watchdog' property set to 'yes', a test email is also sent from the remote server. Use this to verify that sending mail works.<br><br />
'''Note:''' By default, Affa only sends messages on errors, never on success (see the 'chattyOnSuccess' property).<br />
<br />
<br />
'''affa --cleanup JOB'''<br />
After you have decreased a keep value (e.g. scheduledKeep), archives with a higher index are no longer deleted automatically and continue to exist. This option finds such archives and deletes them.<br />
<br />
<br />
'''affa --rename-job JOB NEWNAME'''<br />
Renames job JOB to NEWNAME, including all database records and archive directories.<br />
<br />
<br />
'''affa --move-archive JOB NEWROOTDIR'''<br />
Moves the archive directory of JOB to NEWROOTDIR and adjusts the RootDir property. NEWROOTDIR must be a full path starting with a slash. As moving from one filesystem to another (for example from a USB disk to a local disk) is not possible, Affa falls back to a copy command in that case and deletes the source directory afterwards. Depending on the size of the archive, copying from one filesystem to another may take a long time.<br />
<br />
<br />
'''affa --delete-job [--revoke-key] JOB'''<br />
Irreversibly deletes a job, including all archives, the configuration and the attached databases. When the '--revoke-key' option is given, the public key on the remote server is revoked as well.<br />
<br />
<br />
'''affa --revoke-key JOB'''<br />
'''affa --revoke-key --host=TARGETHOST [--port=PORT] [--remoteOS=cygwin]'''<br />
Revokes the public DSA key on the remote server.<br />
{{Note box|<nowiki>The --revoke-key option alone works with SME Server as the remote server and on systems where the keys are stored in /root/.ssh/authorized_keys2. With remoteOS=cygwin it works for remote servers running Cygwin/Windows.</nowiki>}}<br />
<br />
<br />
'''affa --check-connections'''<br />
Checks the ssh login of all configured jobs. For jobs whose public key has not yet been sent, you are prompted for the password and the key is then sent.<br />
<br />
<br />
'''affa --kill JOB'''<br />
Terminates the running job JOB and all child processes (the rsync processes).<br />
<br />
'''affa --debug'''<br />
Enables verbose logging. Overrides the job and global configuration settings.<br />
<br />
=== Configuration examples ===<br />
==== Dedicated backup server ====<br />
* Set up a job for each server you want to back up<br />
* Send the public keys to each server.<br />
affa --send-key JOBNAME<br />
* Sends the DSA key belonging to job JOBNAME.<br />
affa --check-connections<br />
* Check that passwordless connections work.<br />
affa --show-schedule<br />
* Check whether the scheduled jobs are spread over the day.<br />
affa --make-cronjobs<br />
* Create the cron jobs.<br />
affa --status<br />
* Check the status after 24 hours.<br />
<br />
<br />
==== Backing up individual ibays ====<br />
Assume we want to back up the ibays 'staff1' and 'staff2' of our production server with WAN IP address 82.123.1.1 to a remote Affa server over the internet, every night at 2:30 am.<br />
* Log in to the Affa server and install the packages as described above.<br />
* Copy the sample configuration<br />
cp /usr/lib/affa/jobconfig-sample.pl /root/ibay-staff-job.pl<br />
* Edit /root/ibay-staff-job.pl and set:<br />
my $jobname='ibay-staff';<br />
and<br />
'remoteHostName'=>'82.123.1.1',<br />
'TimeSchedule'=>'0230',<br />
'SMEServer'=>'no',<br />
'Include[0]'=>'/home/e-smith/files/ibays/staff1',<br />
'Include[1]'=>'/home/e-smith/files/ibays/staff2',<br />
* Register the configuration<br />
/root/ibay-staff-job.pl<br />
* Send the public key to the production server<br />
affa --send-key ibay-staff<br />
* Check the next morning:<br />
affa --list-archives ibay-staff<br />
affa --status<br />
* Check the status after 24 hours.<br />
ls /var/affa/ibay-staff<br />
<br />
==== Two production servers backing up each other ====<br />
You have two locations connected via a VPN and an SME server running at each location. In this case you do not need a dedicated Affa backup server. The two production servers can additionally act as Affa servers, each backing up the server of the other location. Simply install Affa and configure a job that backs up the other server. You can use all Affa features except the rise feature. <br />
When using the rise feature, the server turns into whichever system it backed up, which is less useful in this scenario, as it would give you a working copy of the other location's server while that server is down.<br />
<br />
For redundancy and faster restores, you can additionally set up a local backup to an external USB disk or a NAS.<br />
<br />
Take care not to back up the archives of the other location again. Set the RootDir property to a path that is not included in the SME default backup list. Keeping the default RootDir of '/var/affa' guarantees this.<br />
<br />
==== Backing up a Windows computer ====<br />
Backing up data of a Windows system requires the Cygwin rsyncd daemon to be installed and configured on Windows. The 'rsync over ssh' method does not work, as the Cygwin rsync process reproducibly crashes after transferring a number of files. <br />
<br />
===== Setting up rsyncd on the Windows computer =====<br />
Install the Cygwin base and the rsync package, then configure the rsyncd service as described in this document: [[Rsyncd setup on a windows computer for use with Affa backup]]. <br />
Installing the sshd service is optional and not required for the backup itself, but an ssh connection can be very useful for administration and for running scripts on the Windows system. Affa supports sending the public key to a Cygwin Windows computer for passwordless login.<br />
<br />
Note: Affa does not back up Windows Access Control List (ACL) information. You may have to fix the ACLs after a restore.<br />
<br />
===== Setting up Affa rsyncd mode (quick start example) =====<br />
You want to back up the ''My Documents'' folders of the users ''ssorglos'' and ''bsimpson'' from the Windows computer 'ws001' with IP 192.168.1.65 to your Affa server 'affabox' with IP 192.168.1.3.<br />
* log in to 'affabox' and copy the Cygwin sample configuration script<br />
cp /usr/lib/affa/jobconfig-cygwin-sample.pl /root/ws001-mydocs-job.pl<br />
* edit /root/ws001-mydocs-job.pl and set<br />
my $jobname='ws001-mydocs';<br />
and<br />
'remoteHostName'=>'192.168.1.65',<br />
'rsyncdPassword'=>'<i>secretword</i>',<br />
'Include[0]'=>'/c/Documents and Settings/ssorglos/My Documents/', ''# Do not use backslashes in paths!''<br />
'Include[1]'=>'/c/Documents and Settings/bsimpson/My Documents/',<br />
where ''secretword'' must be replaced by the password you have set in the rsyncd.secrets file on the Windows computer.<br />
* Register the configuration (this creates the database entries and sets up the cron jobs).<br />
/root/ws001-mydocs-job.pl<br />
* Run the job manually. After completion, check the archive /var/affa/ws001-mydocs/scheduled.0 and the log /var/log/affa/ws001-mydocs.log.<br />
affa --run ws001-mydocs<br />
<br />
===== Setting up Affa rsyncd mode manually =====<br />
In case you want to set it up manually using db commands, these are the mandatory settings for Cygwin rsyncd mode.<br />
db affa setprop JOB rsyncdMode yes<br />
db affa setprop JOB rsyncdModule AFFA<br />
db affa setprop JOB rsyncdUser affa<br />
db affa setprop JOB rsyncdPassword ''secretword''<br />
db affa setprop JOB SMEServer no<br />
db affa setprop JOB RPMCheck no<br />
db affa setprop JOB Watchdog no<br />
Optional, for an ssh connection:<br />
db affa setprop JOB remoteOS cygwin <br />
<br />
<br />
==== Using Affa to back up to an NFS-mounted NAS or a local USB disk ====<br />
<br />
We want to back up our production SME 7 server with hostname 'prodbox' and IP 10.200.48.1 to a mounted filesystem rather than setting up a dedicated Affa server.<br />
<br />
===== Setting up the NAS =====<br />
We have a FreeNAS server with IP address 10.200.48.2 up and running with the NFS service enabled. The disk is mounted on /mnt/affashare. We have allowed the IP address of our production server to access the /mnt/affashare share.<br />
* Log in to 'prodbox' and install the NFS packages:<br />
/usr/bin/yum install --enablerepo=smecontribs smeserver-nfs<br />
* now enable and start the portmapper service:<br />
config setprop portmap status enabled<br />
service portmap start<br />
<br />
* Mount the NFS share:<br />
mkdir -p /mnt/affadevice<br />
mount 10.200.48.2:/mnt/affashare /mnt/affadevice<br />
<br />
<br />
===== Alternative: setting up the USB disk =====<br />
* Log in to 'prodbox'<br />
* Plug the USB disk into the USB bus. We now need to find out which device the kernel has assigned to the drive. Look at /var/log/messages and search for the corresponding initialization line (Initializing USB Mass Storage driver). A few lines further down you will find the device name. Let us assume it is 'sdh'. If it is not, replace /dev/sdh in the instructions below with the value you found.<br />
* Create a Linux partition with fdisk: '''Make absolutely sure it really is the attached USB disk before you continue!'''<br />
<br />
fdisk /dev/sdh<br />
You will probably find a vfat DOS partition, which you have to delete first. Let us assume we created a single partition ''/dev/sdh1''.<br />
* Format it with an ext3 filesystem:<br />
mkfs.ext3 /dev/sdh1<br />
* Create the mount point:<br />
mkdir -p /mnt/affadevice<br />
* Add the USB disk to /etc/fstab by adding a new line:<br />
/dev/sdh1 /mnt/affadevice ext3 defaults<br />
* Mount the disk:<br />
mount /mnt/affadevice<br />
* Cross-check with df:<br />
df<br />
<br />
===== Copying an Affa archive from a USB hard disk to a new disk =====<br />
Affa makes heavy use of hard links to save disk space for the backups. If you want to copy a disk holding an Affa archive to another (larger) disk, you must make sure the hard links are copied correctly, otherwise the copy can become considerably larger than the source archive.<br />
<br />
One way to copy across separate filesystems (e.g. between two different USB disks) while preserving the hard links is the following:<br />
* mount both USB disks on different mount points, for example /media/backup1 and /media/backup2, then run:<br />
<br />
mkdir /media/backup2/archive <br />
cd /media/backup1/archive<br />
tar cpf - . | ( cd /media/backup2/archive && tar xpf - )<br />
<br />
replacing '''archive''' with the actual name of the Affa archive (job) you want to move to the other disk.<br />
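The reason for the tar pipe above is that tar records and recreates hard links when copying. A small self-contained demonstration, using throwaway paths in /tmp rather than a real Affa archive:

```shell
# Build a source tree containing a hard-linked pair, copy it with the
# tar pipe, then check that both names still share one inode.
rm -rf /tmp/affa-demo
mkdir -p /tmp/affa-demo/backup1/archive /tmp/affa-demo/backup2/archive
echo "backup data" > /tmp/affa-demo/backup1/archive/file.a
ln /tmp/affa-demo/backup1/archive/file.a /tmp/affa-demo/backup1/archive/file.b  # hard link, no extra space
cd /tmp/affa-demo/backup1/archive
tar cpf - . | ( cd /tmp/affa-demo/backup2/archive && tar xpf - )
# The same inode number on both copied names means the link survived:
stat -c %i /tmp/affa-demo/backup2/archive/file.a
stat -c %i /tmp/affa-demo/backup2/archive/file.b
```

A plain `cp -r` without link awareness would instead create two independent files and double the space used.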
<br />
<br />
<br />
===== Setting up Affa =====<br />
We want to run backups at 11:30, 15:30 and 19:30, and keep the last 3 scheduled, 7 daily, 5 weekly, 12 monthly and 1 yearly archive.<br />
* Log in to 'prodbox' and install the Affa packages as described above.<br />
* Copy the sample configuration<br />
cp /usr/lib/affa/jobconfig-sample.pl /root/prodbox-job.pl<br />
* Edit /root/prodbox-job.pl and set:<br />
my $jobname='prodbox';<br />
and<br />
'remoteHostName'=>'localhost',<br />
'TimeSchedule'=>'1130,1530,1930',<br />
'scheduledKeep'=>3,<br />
'dailyKeep'=>7,<br />
'weeklyKeep'=>5,<br />
'monthlyKeep'=>12,<br />
'yearlyKeep'=>1,<br />
'RootDir'=>'/mnt/affadevice',<br />
Check the other properties and change them if needed.<br />
* Register the configuration<br />
/root/prodbox-job.pl<br />
* Run the job manually:<br />
<br />
affa --run prodbox<br />
<br />
<br />
===== Limitations =====<br />
With this kind of setup you cannot use the 'rise' feature (turning the backup server into the production server), as it requires the archive to be on the same filesystem as the server. The 'rise' feature uses hard links, which do not work across different filesystems.<br />
<br />
===== Automount =====<br />
Having the backup archives on the same filesystem is always a risk, which can be minimized by using the automount feature. The external filesystem is then only mounted while a job is running.<br />
<br />
In the NAS example, configure:<br />
'AutomountDevice'=>'10.200.48.2:/mnt/affashare',<br />
'AutomountPoint'=>'/mnt/affadevice',<br />
and skip step 2.<br />
<br />
In the USB disk example, configure:<br />
'AutomountDevice'=>'/dev/sdc1',<br />
'AutomountPoint'=>'/mnt/affadevice',<br />
and skip steps 5 to 8.<br />
<br />
The mount point is created automatically if it does not exist. <br><br />
To access the archive directory, you have to mount it manually.<br />
<br />
=== Restore ===<br />
==== Restoring individual files or directories ====<br />
'''Example 1:''' It is Tuesday, January 6th 2009, when user 'briedlin' asks you to restore the messages of his 'orders' mail folder, which he accidentally deleted on Monday.<br />
<br />
* First, check which archives are available. The job name on this backup server is 'primmail'. To get a list of all archives, run <br />
affa --list-archives primmail<br />
''(see the sample listing in the chapter [[Affa#Usage_and_command_line_options]])''<br />
* Choose the archive daily.0, which was created Monday night. We will restore the 'orders' mailbox using the rsync command.<br />
* Now run the rsync command (note the trailing slash!) on the Affa backup server.<br />
export RDIR=/home/e-smith/files/users/briedlin/Maildir/.orders/ ''# this variable is used to shorten the following command line''<br />
rsync -av /var/affa/primmail/daily.0/$RDIR 10.204.48.1:$RDIR<br />
<br />
If the servers are configured to use a different ssh port, e.g. 2222, run instead:<br />
<br />
export RDIR=/home/e-smith/files/users/briedlin/Maildir/.orders/ ''# this variable is used to shorten the following command line''<br />
rsync -av -e 'ssh -p 2222' /var/affa/primmail/daily.0/$RDIR 10.204.48.1:$RDIR<br />
<br />
<br />
<br />
'''Example 2:''' A user has deleted the file orderform.pdf in the ibay 'docs' of server 10.204.48.1 and asks you to restore it.<br />
<br />
* You searched for and found the latest version of this file in the weekly archive of job 'prodserv'.<br />
* To copy it back to server 10.204.48.1, run on the Affa server<br />
export RFILE=/home/e-smith/files/ibays/docs/files/orderform.pdf ''# this variable is used to shorten the following command line''<br />
rsync -av /var/affa/prodserv/weekly.1/$RFILE 10.204.48.1:$RFILE<br />
<br />
If the servers are configured to use a different ssh port, e.g. 2222, run instead:<br />
export RFILE=/home/e-smith/files/ibays/docs/files/orderform.pdf ''# this variable is used to shorten the following command line''<br />
rsync -av -e 'ssh -p 2222' /var/affa/prodserv/weekly.1/$RFILE 10.204.48.1:$RFILE<br />
<br />
<br />
<br />
'''Example 3:''' Restoring a file with special characters in the path<br />
<br />
Do not use shell variables to shorten the command here; that would complicate things more than it helps.<br />
Put the source and destination paths in double quotes. In the destination path, escape whitespace with a double backslash and parentheses with a single backslash. On the Affa backup server run:<br />
rsync -av "/var/affa/fileshare/daily.4/home/e-smith/files/ibays/mechfiles/files/Valve Control (Design Gr)/VALVE LIST FOR ISSUED.xls" "192.168.1.7:/home/e-smith/files/ibays/mechfiles/files/Valve\\ Control\\ \(Design\\ Gr\)/VALVE\\ LIST\\ FOR\\ ISSUED.xls"<br />
<br />
If the servers are configured to use a different ssh port, e.g. 2222, run instead:<br />
<br />
rsync -av -e 'ssh -p 2222' "/var/affa/fileshare/daily.4/home/e-smith/files/ibays/mechfiles/files/Valve Control (Design Gr)/VALVE LIST FOR ISSUED.xls" "192.168.1.7:/home/e-smith/files/ibays/mechfiles/files/Valve\\ Control\\ \(Design\\ Gr\)/VALVE\\ LIST\\ FOR\\ ISSUED.xls"<br />
<br />
==== Full restore ====<br />
To run a full restore of the user and configuration data, run on the Affa server<br />
affa --full-restore <JOB> [<ARCHIVE>]<br />
This syncs (via rsync) the data of the backup ARCHIVE to the computer 'remoteHostName' defined in the configuration of job JOB. <br />
<br />
Example:<br />
You back up your production server 'prodsrv' with a job named 'prodbox'. To restore the most recent backup, run<br />
affa --full-restore prodbox<br />
<br />
To restore the older archive daily.3, run <br />
affa --full-restore prodbox daily.3<br />
<br />
{{Warning box| A full restore rebuilds the server as it was at the time of the backup. This means that all files created and all server configuration changes made after the backup will be lost. After the restore, the restored server reboots automatically. }}<br />
<br />
==== Migrating an SME7 server to new hardware using the "Affa rise" feature ====<br />
Please see this howto: [[Moving SME to new Hardware]]<br />
<br />
<br />
==== Restoring from a USB disk to a new server ====<br />
<br />
<br />
This tip comes from http://forums.contribs.org/index.php?topic=42412.0<br />
<br />
Q) I have full backups stored on a USB hard disk attached to our Affa backup server. I need to restore an older monthly backup of our "mailbackup" job to a test server instead of the original system. If I do it from the backup server, I see the instructions for rebuilding that server from the current backup, but I want to restore a state from about a month ago, before strange things happened. And I want to do it on a machine that is neither our backup server nor our production server. I have tried to figure out how to do it, but I am lost in the options. My goal is to run some tests.<br />
<br />
A) On your test server, set up a "mailbackup" job identical to the one on your backup server, but set RootDir correctly to /var/affa and set the "status" property to "disabled". Attach the USB drive and copy the archive of the "mailbackup" job to /var/affa. Then run affa --rise mailbackup ARCHIVE<br />
<br />
Remember that Affa places the job setup scripts in the archive directories. In your case this is mailbackup-setup.pl. Copy it to your test server, change the RootDir and status properties, then execute it.<br />
<br />
=== FAQ ===<br />
'''Which files and directories are included by default?'''<br />
<br />
With SMEServer=no: none.<br />
<br />
With SMEServer=yes: the SME default backup list. The following command prints the list:<br />
perl -e 'use esmith::Backup;$b=new esmith::Backup;print join("\n",$b->restore_list)."\n"' <br />
<br />
<br />
'''Can I back up only image files contained in a specific directory?'''<br />
<br />
Yes. Assuming you want to back up all gif and jpg files in the directory /home/e-smith/files/ibays/pictures/files, use this configuration:<br />
db affa setprop JOBNAME SMEServer no # Do not include the SME Server default directories and files<br />
db affa setprop JOBNAME Include[0] '/home/e-smith/files/ibays/pictures/files' # Start searching here<br />
db affa setprop JOBNAME Include[1] '*/' # Search the directory tree recursively<br />
db affa setprop JOBNAME Include[2] '*.gif' # Copy files matching this pattern<br />
db affa setprop JOBNAME Include[3] '*.jpg' <br />
db affa setprop JOBNAME Exclude[0] '*' # Exclude everything else<br />
db affa setprop JOBNAME rsyncOptions '-m' # Do not create empty directories<br />
<br />
<br />
'''How can I move a backup archive of my production server in a remote branch office to the Affa server using a USB disk? I want to avoid downloading the first full backup over the internet.'''<br />
Install Affa on your production server, then attach and mount the USB disk. Set up a job that writes the backup to the USB disk. After the job has run, attach the USB disk to your Affa server and set up the job there in the same way. Use the setup script from the archive directory; you only have to modify the "remoteHostName" property. Now use the --move-archive option to transfer the archive to the local disk. If not already done, send the public key to your production server. Done.<br />
<br />
<br />
'''How do I back up two SME servers behind a firewall?'''<br />
First, set up port forwarding for the ssh service on your firewall. Use a non-standard port, e.g. 2200->22, for the second server. Set up a job on your Affa backup server for each of your production servers. Set the property sshPort=2200 for the second server.<br />
<br />
=== Uninstalling ===<br />
This removes the installed Affa package, all configuration data and all backup archives.<br />
<br />
Run the following commands for every job:<br />
affa --revoke-key JOBNAME<br />
affa --delete-job JOBNAME<br />
Verify that all jobs have been deleted.<br />
affa --status<br />
Remove the Affa package<br />
rpm -e smeserver-affa<br />
and, if these packages are not dependencies of other packages:<br />
General cleanup<br />
rm -f /etc/cron.d/affa /etc/cron.d/affa-status<br />
rm -rf /home/e-smith/db/affa /home/e-smith/db/affa-report<br />
rm -rf /var/affa /var/log/affa<br />
<br />
=== Troubleshooting ===<br />
==== Affa stops working after prodserver has been restored ====<br />
'''Description:'''<br />
* Affa stops working or is broken<br />
* prodserver can no longer establish an ssh connection to affaserver<br />
* The message "SSH connection to ''prodserver ip'' failed. Did you send the public key" appears in the file /var/log/affa/''jobname''.log<br />
* Resending the ssh keys does not fix the problem<br />
<br />
'''Solution:'''<br />
* This is most likely because the key for prodserver stored on affaserver no longer matches prodserver's key.<br />
* The existing key must be deleted, so that the key is sent again by prodserver at the next ssh connection.<br />
* On the command line, edit the file /root/.ssh/known_hosts and remove the line corresponding to prodserver. It starts with: ''prodserver ip'' ssh-rsa<br />
<br />
==== Cannot send emails after a "rise" ====<br />
'''Description:'''<br />
* unable to send emails after a "rise" on the backup server when restoring from SME7 to SME8<br />
* the qpsmtpd log shows:<br />
@400000005125f42409a8cd1c 4280 vvirus::clamav plugin (data_post): clamscan results: /var/spool/qpsmtpd/1361441818:4280:0: Access denied. ERROR<br />
@400000005125f42409aa9624 4280 virus::clamav plugin (data_post): ClamAV error: /usr/bin/clamdscan --stdout --config-file=/etc/clamd.conf --no-summary /var/spool/qpsmtpd/1361441818:4280:0 2>&1: 2<br />
* permission problem on /var/spool<br />
#On the working server:<br />
ls -al /var/spool<br />
drwxr-s--- 2 qpsmtpd clamav 4096 Jun 10 12:31 qpsmtpd<br />
<br />
#On the broken (restored) server:<br />
ls -al /var/spool<br />
drwxr-x--- 2 qpsmtpd clamav 4096 Jun 10 10:46 qpsmtpd<br />
<br />
#Sur la nouvelle installation avant la restauration:<br />
ls -al /var/spool<br />
drwxr-s--- 2 qpsmtpd clamav 4096 Apr 17 13:47 qpsmtpd<br />
voir [http://bugs.contribs.org/show_bug.cgi?id=4399 SME bug #4399]<br />
<br />
<br />
'''Solution:'''<br />
en ligne de commande sur le serveur de sauvegardes après le "Rise":<br />
chmod g+s /var/spool/qpsmtpd<br />
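Pour vérifier le résultat, le bit setgid doit apparaître dans les permissions (drwxr-s---). Esquisse reproductible sur un répertoire de démonstration (le chemin /tmp/spool-demo est un exemple ; sur le serveur réel, contrôlez /var/spool/qpsmtpd) :<br />

```shell
# Reproduire les permissions attendues sur un répertoire de test.
mkdir -p /tmp/spool-demo/qpsmtpd
chmod 2750 /tmp/spool-demo/qpsmtpd    # le 2 initial = setgid, équivalent de chmod g+s
stat -c '%A' /tmp/spool-demo/qpsmtpd  # affiche drwxr-s---
```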
<br />
==== Les utilisateurs ne peuvent se connecter au serveur – Important. ====<br />
Après une restauration complète ou un "rise" par Affa sur SME8, tout semble fonctionner, mais les utilisateurs ne peuvent pas se connecter au serveur (samba, horde, etc.). Les comptes utilisateurs apparaissent bien dans le server-manager ; voir [[bugzilla:6983]] et [[bugzilla:7687]]<br /><br />
{{Warning box|msg=Travaux en cours: veuillez lire le contenu du bug car il semble y avoir d'autres problèmes avec cette solution temporaire}}<br />
La cause est une restauration/conversion incomplète des fichiers contenus dans le répertoire /etc, en particulier des fichiers contenant les identifiants et les mots de passe. La première chose à faire après un "affa --full-restore prodserv" est de ne pas effacer la sauvegarde existante d'Affa. Si un "affa --rise --all prodserv" a été effectué, ne pas continuer avec "/bin/rm -rf /var/affa" comme indiqué dans le wiki (ne pas effacer non plus l'archive de quelque manière que ce soit).<br />
* Si la fonction rise a été utilisée:<br />
Se connecter en "root" sur le serveur de production et entrer en ligne de commande :<br />
<br />
mkdir /etc/backup<br />
<br />
cp -r /etc/{passwd,samba,e-smith,group,gshadow,shadow,ssh,sudoers} /etc/backup<br />
<br />
yes | cp -r /var/affa/"JOBNAME"/"ARCHIVE"/etc/* /etc/<br />
<br />
et reconfigurer le serveur pour finir<br />
<br />
signal-event post-upgrade; signal-event reboot<br />
<br />
* Si la fonction « full-restore » a été utilisée:<br />
Effectuer les étapes 1, 2 et 3 ci-dessus<br />
Se connecter en "root" sur le serveur de sauvegardes (serveur Affa) et entrer en ligne de commande:<br />
<br />
export RDIR=/var/affa/"NOM_DU_JOB"/"ARCHIVE"/etc/<br />
<br />
rsync -av $RDIR "PRODSERV-IP":/etc/<br />
<br />
Sur le serveur de production "PRODSERV", entrer par ssh dans un terminal<br />
signal-event post-upgrade; signal-event reboot<br />
<br />
<br />
<br />
=== Rapport de bugs ===<br />
Affa est répertorié dans [http://bugs.contribs.org/enter_bug.cgi?product=SME%20Contribs&component=smeserver-affa bugtracker contribs section].<br />
SVP, rapportez-y tous les bugs, demandes de nouvelles fonctionnalités et problèmes de documentation.<br />
<br />
=== Informations complémentaires ===<br />
Pour les détails de performance, la liste des modifications etc. voyez [[:Affa:Additional information]]<br />
<br />
=== Références ===<br />
* Article Wikipedia http://en.wikipedia.org/wiki/Rsync<br />
* Site Rsync http://rsync.samba.org/<br />
* Article Wiki [[Moving SME to new Hardware]]<br />
* Article Wiki [[Backup with Affa and FreeDup]]<br />
* Article Wiki [[Rsyncd setup on a windows computer for use with Affa backup]]<br />
* Article Wiki [[Backup Windows Computers Using Affa]]<br />
* Article Wiki [[Backup of ESXi Virtual Machines using Affa]]<br />
<br />
----<br />
[[Category: Contrib/fr]]<br />
[[Category: Backup/fr]]</div>
<hr />
<div>{{Languages|Affa}}<br />
{{Level|type=Niveau de compétence: Avancé|Advanced|Les instructions dans cette page requièrent un niveau avancé. Une bonne compréhension de Linux et de SME est recommandée.}}<br />
<br />
{{Warning box| Ce code est obsolète et non maintenu par son développeur, smeserver-affa attend la correction de 32 bugs ouverts. Un nombre important de bugs et d’événements sont associés à l'utilisation d'affa après une restauration ou de l'utilisation de la fonction rise, comme des groupes manquants, crash de services, ou erreur de CGI dans le server-manager}}<br />
voir [http://bugs.contribs.org/buglist.cgi?action=wrap&bug_status=UNCONFIRMED&bug_status=CONFIRMED&bug_status=NEEDINFO&bug_status=IN_PROGRESS&bug_status=RESOLVED&bug_status=VERIFIED&classification=Contribs&component=smeserver-affa%20%28unmaintained%29&list_id=23915&product=SME%20Contribs&resolution=---&resolution=FIXED&resolution=NOTABUG&resolution=WONTFIX&resolution=UPSTREAM&resolution=DUPLICATE&resolution=WORKSFORME&saved_report_id=1 bugs]<br />
=== Développeur ===<br />
Développeur: Michael Weinberger.<br />
Le développement et la maintenance d'Affa pour SME Server ont été interrompus. La nouvelle version Affa 3.X pour CentOS est disponible sur [http://affa.sourceforge.net/ Sourceforge].<br />
<br />
<br />
=== Traducteurs ===<br />
[http://bj-informatique.com/affa.php?lang=fr jibe]<br />
<br />
Pour la mise à jour: [http://www.guedel.eu Arnaud]<br />
<br />
Affa signifie '''A'''utomatische '''F'''estplatten '''F'''ern'''a'''rchivierung, soit approximativement en français : Sauvegarde automatique de disque dur à distance.<br />
<br />
<br />
=== Description ===<br />
Le but principal de ce paquetage Affa est de transformer un serveur SME 7 en un serveur dédié de sauvegardes en quelques minutes. Affa sauvegarde autant de serveurs SME que vous le désirez, sur n'importe quel autre serveur ayant sshd en service et rsync installé. Une fois configuré, Affa fonctionne sans surveillance et de manière fiable, envoyant des messages d'avertissement en cas d'erreur. <br />
<br />
Toutes les archives sont des sauvegardes complètes, puisque Affa utilise la technique des hardlinks. Une nouvelle sauvegarde complète ne nécessite donc que l'espace disque pour les différences et les besoins du système de fichiers pour les hardlinks et les répertoires (ce qui représente généralement 2-3%).<br />
<br />
Affa est basé sur le programme rsync et supporte l'option --compress. Cela vous permet de lancer des sauvegardes via internet ou VPN. Une configuration typique consiste en un ou plusieurs serveurs de sauvegarde Affa placés en des lieux déportés, qui sauvegardent les serveurs de production via le VPN.<br />
<br />
Une caractéristique spéciale est l'option "rise", qui vous permet de convertir votre serveur de sauvegarde en serveur de production depuis une archive sauvegardée en cas de perte totale de votre serveur de production. Cette conversion est exécutée en un temps très court, même avec une énorme quantité de données. Cette possibilité de conversion utilise les hardlinks et de ce fait n'utilise pas d'espace disque supplémentaire.<br />
<br />
Affa est un outil en ligne de commande pour les administrateurs système et est conçu intentionnellement sans interface graphique. De ce fait, il peut être géré efficacement depuis la console même via une connexion internet lente.<br />
<br />
'''Note:''' Ce document concerne la version 2 Release Candidate d'Affa. La documentation de la dernière version 1 stable se trouve [http://wiki.contribs.org/index.php?title=Affa&oldid=11974 ici].<br />
<br />
==== Un coup d'oeil aux caractéristiques d'Affa ====<br />
* Affa est sûr : Tous transferts et connexions sont faits en utilisant le protocole ssh avec une authentification par clés publique/privée.<br />
* Lance périodiquement des sauvegardes complètes sans intervention<br />
* Seules les différences avec la sauvegarde complète précédente sont transférées par le réseau<br />
* Utilise rsync avec l'option de compression pour les connexions lentes, ce qui permet des sauvegardes via internet/VPN<br />
* Utilise la technique des hardlink pour réduire l'espace disque physique à seulement ce qui est nécessaire pour les différences entre deux sauvegardes complètes<br />
* Mémorise un nombre configurable de sauvegardes automatisées, journalières, hebdomadaires, mensuelles et annuelles<br />
* On peut naviguer dans les archives directement, sans avoir à les décompresser au préalable.<br />
* Les archives peuvent être attachées à un partage Samba.<br />
* Les tâches de sauvegarde sont démarrées par le démon cron<br />
* Les travaux interrompus continuent au redémarrage en tenant compte des données déjà transférées<br />
* Sauvegarde les répertoires et fichiers e-smith par défaut, lorsque la propriété SMEServer est mise à 'yes'<br />
* Des répertoires et fichiers supplémentaires peuvent être inclus<br />
* Des répertoires et fichiers peuvent être exclus de la sauvegarde<br />
* Les serveurs Linux non SME peuvent être sauvegardés en mettant la propriété SMEServer à 'no' et en utilisant une liste d'inclusion<br />
* En mode ESXi, les machines virtuelles actives peuvent être sauvegardées. Voir [[Sauvegarde de machines virtuelles ESXi avec Affa]]<br />
* Priorité configurable (nice level) pour les process rsync sur les serveurs backup et source<br />
* Possibilité de lancer des programmes personnels avant et après le lancement d'une tâche (par exemple, lancement d'une sauvegarde sur bande)<br />
* Vérification de l'espace disque disponible après le lancement d'une tâche avec niveaux d'avertissement strict, normal ou risqué<br />
* Vérification étendue des conditions d'échec<br />
* Envoi de messages d'échec à une liste configurable d'adresses email<br />
* Envoi d'un message d'avertissement si le serveur de backup n'a plus d'espace disque disponible<br />
* Installe un chien de garde optionnel sur le serveur source vérifiant la disponibilité du serveur de backup (SME server seulement)<br />
* Le chien de garde envoie des avertissements, si une sauvegarde programmée n'a pas été lancée (SME server seulement)<br />
* Le chien de garde envoie un message journalier de rappel tant que l'erreur n'est pas corrigée (SME server seulement)<br />
* Possibilité d'afficher l'état courant de toutes les tâches, montrant les heures de dernier et prochain lancement, la taille et l'espace disque utilisé<br />
* Le status peut être envoyé par email journellement, hebdomadairement ou mensuellement<br />
* Possibilité d'afficher toutes les archives existantes pour une tâche, montrant la date, le nombre de fichiers et l'espace disque<br />
* Possibilité d'envoyer (et de retirer) la clé publique au serveur source (SME server et ESXi seulement)<br />
* Option pour convertir le serveur de sauvegardes en serveur de production (SME server seulement)<br />
* La fonction de conversion ne déplace pas physiquement les données et est donc extrêmement rapide et ne nécessite (presque) pas d'espace disque<br />
* La fonction de conversion peut être lancée à distance puisque les drivers ethernet du serveur de sauvegarde sont conservés<br />
* Comparaison des RPMs installés entre les serveurs source et de sauvegarde. Envoie un avertissement s'il n'y a pas correspondance.<br />
* Annulation de la conversion pour remise en service du serveur de sauvegardes<br />
* Configurable par commandes style e-smith db, avec un enregistrement pour chaque tâche et un enregistrement par défaut commun à toutes les tâches<br />
* Logs dans /var/log/affa/JOB.log et /var/log/affa/affa.log avec option debug pour plus de détails<br />
* Les fichiers de log sont permutés hebdomadairement, avec 5 fichiers conservés<br />
<br />
<br />
=== Installation ou mise à jour ===<br />
==== Smecontribs pour sme8 ====<br />
yum install --enablerepo=smecontribs smeserver-affa<br />
<br />
Si Affa est installé pour la première fois, entrez la commande suivante pour initialiser la base de données d'Affa.<br />
affa --make-cronjobs<br />
<br />
et déconnectez-vous puis reconnectez-vous à la console pour que le complément automatique de bash (touche TAB) prenne effet.<br />
==== Ancienne méthode d'installation ====<br />
{{Note box|SME7 est maintenant en fin de vie et a été transféré dans le répertoire obsolète}}<br />
Version SME7: '''smeserver-affa-2.0.0-rc4.noarch.rpm'''<br />
wget <nowiki>http://mirror.contribs.org/smeserver/contribs/michaelw/sme7/Affa2/smeserver-affa-2.0.0-rc4.noarch.rpm</nowiki><br />
wget <nowiki>http://mirror.contribs.org/smeserver/contribs/michaelw/sme7/Affa2/perl-Compress-Bzip2-2.09-1.2.el4.rf.i386.rpm</nowiki><br />
/usr/bin/yum --enablerepo=smecontribs localinstall \<br />
smeserver-affa-2.0.0-rc4.noarch.rpm \<br />
perl-Compress-Bzip2-2.09-1.2.el4.rf.i386.rpm<br />
<br />
Version SME8: SME8 est basé sur CentOS5, c'est pourquoi perl-Compress-Bzip2 el5 rpm doit être utilisé:<br />
wget <nowiki>http://mirror.contribs.org/releases/8/smecontribs/i386/repoview/smeserver-affa.html</nowiki><br />
<br />
Pour SME8 32 bits:<br />
wget <nowiki>http://apt.sw.be/redhat/el5/en/i386/rpmforge/RPMS/perl-Compress-Bzip2-2.09-1.2.el5.rf.i386.rpm</nowiki><br />
/usr/bin/yum --enablerepo=smecontribs localinstall \<br />
smeserver-affa-2.0.0-rc4.noarch.rpm \<br />
perl-Compress-Bzip2-2.09-1.2.el5.rf.i386.rpm<br />
<br />
Pour SME8 64 bits:<br />
wget http://apt.sw.be/redhat/el5/en/x86_64/rpmforge/RPMS/perl-Compress-Bzip2-2.09-1.2.el5.rf.x86_64.rpm<br />
/usr/bin/yum --enablerepo=smecontribs localinstall \<br />
smeserver-affa-2.0.0-rc4.noarch.rpm \<br />
perl-Compress-Bzip2-2.09-1.2.el5.rf.x86_64.rpm<br />
<br />
Après la première installation d'Affa, lancez la commande suivante pour initialiser la base de données Affa.<br />
affa --make-cronjobs<br />
et déconnectez-vous puis reconnectez-vous à la console pour que le complément automatique de bash (touche TAB) prenne effet<br />
<br />
=== Exemple de démarrage rapide ===<br />
Vous avez un serveur de production SME 7 dont le nom est 'prodbox' et l'IP 10.200.48.1. Installez un second SME 7 comme serveur de sauvegardes avec le nom 'affabox' et l'IP 10.200.48.2. <br />
<br />
<ol><li>loggez-vous dans 'affabox' et installez les paquetages mentionnés ci-dessus.<br />
</li><li>copiez le script d'exemple de configuration<br />
cp /usr/lib/affa/jobconfig-sample.pl /root/prodbox-job.pl<br />
</li><li>editez /root/prodbox-job.pl et mettez<br />
my $jobname='prodbox';<br />
et<br />
'remoteHostName'=>'10.200.48.1',<br />
Eventuellement, si le port du serveur distant est configuré, par exemple, en 2222, mettez<br />
'sshPort'=>2222,<br />
</li><li>sauvegardez et lancez le fichier 'prodbox-job.pl'. Cela enregistrera la configuration (et créera les entrées de base de données et les tâches cron)<br />
/root/prodbox-job.pl<br />
</li><li>Générez les clés DSA et envoyez la clé publique à la 'prodbox'<br />
{{Note box|Initialement, lorsque vous faites cette étape, vous devez temporairement mettre oui à "Autoriser l'authentification par SSH à l'aide de mots de passe standards" dans l'accès à distance sur le serveur de production}}<br />
affa --send-key prodbox<br />
</li><li>Lancez la tâche à la main<br />
<br />
affa --run prodbox<br />
</li></ol><br />
<br />
=== Configuration ===<br />
La configuration est stockée dans une base de données style e-smith. Utilisez la commande db pour configurer Affa.<br />
Le nom de tâche est la clé d'enregistrement de type 'job'.<br>Pour configurer un nouveau travail avec le nom 'prodbox', entrez :<br />
<br />
db affa set prodbox job<br />
<br />
Puis positionnez les propriétés<br />
<br />
db affa setprop prodbox remoteHostName 192.168.1.1<br />
db affa setprop prodbox TimeSchedule '0030,0730,1130,1330,1730,2030'<br />
db affa setprop prodbox Description 'My Production Server'<br />
db affa setprop prodbox status enabled<br />
et ainsi de suite...<br />
<br />
Alternativement, vous pouvez utiliser un script comme décrit ci-dessus dans le chapitre 'Démarrage rapide'.<br />
<br />
Pour vérifier votre travail, tapez :<br />
<br />
db affa show prodbox<br />
<br />
Pour finir, mettez en place les tâches cron :<br />
affa --make-cronjobs<br />
<br />
==== Propriétés de configuration de tâches ====<br />
'''Note:''' Les valeurs par défaut données dans ce tableau sont celles du programme Affa et ne doivent pas être confondues avec les valeurs prépositionnées dans les scripts d'aide, par exemple jobconfig-sample.pl.<br />
{| border="1" cellpadding="3" cellspacing=0<br />
|-<br />
| '''Propriété''' || '''Valeur''' || '''Defaut''' || '''Description''' <br />
|-<br />
| remoteHostName <br />
| FQHN ou IP || || FQHN ou IP de l'hôte source (obligatoire)<br />
|-<br />
| TimeSchedule <br />
| HHMM,HHMM,... || || N'a pas besoin d'être ordonné. Au moins une heure est obligatoire. '''Important:''' Utiliser le bon format HHMM est essentiel. Affa ne le vérifie pas. Un TimeSchedule mal formaté provoquera d'étranges erreurs Perl. <br />
|-<br />
| Description<br />
| texte || || <br />
|-<br />
| scheduledKeep<br />
| entier >= 1 || 1|| Combien de sauvegardes programmées doivent être conservées <br />
|-<br />
| dailyKeep<br>weeklyKeep<br>monthlyKeep<br>yearlyKeep<br />
| entier >= 0 || 7<br>4<br>12<br>2<br>|| Combien de sauvegardes quotidiennes, hebdomadaires, mensuelles ou annuelles doivent être conservées <br />
|-<br />
| SMEServer<br />
| yes ''ou'' no || yes || Positionné à 'yes', les répertoires e-smith par défaut sont automatiquement inclus et la propriété RPMCheck=yes peut être utilisée<br />
<br />
|-<br />
| Include[0]<br>Include[1]<br>...<br />
| full path || || Fichiers ou répertoires supplémentaires à inclure à la sauvegarde <br />
|-<br />
| Exclude[0]<br>Exclude[1]<br>...<br />
| full path || || Fichiers ou répertoires supplémentaires à exclure de la sauvegarde <br />
|-<br />
| RPMCheck<br />
| yes ''ou'' no || no || Seulement applicable aux tâches qui sauvegardent un serveur SME 7.<br>Compare les RPM de l'hôte source avec ceux de l'hôte de sauvegarde Affa. Envoie un message avec une liste des différences s'il n'y a pas correspondance. Cette vérification est utile si vous voulez avoir la possibilité de convertir le serveur de sauvegarde en serveur de production.<br />
|-<br />
| DiskSpaceWarn<br />
| strict ''ou'' normal ''ou'' risky ''ou'' none || strict || Lance une vérification de l'espace disque après qu'une tâche s'est terminée. Avec le niveau 'strict', un message d'avertissement sera envoyé si l'espace disponible est inférieur à la taille de la sauvegarde qui vient de s'achever. Avec les niveaux 'normal'/'risky', le message est envoyé si moins de 50%/10% de la taille de la sauvegarde reste disponible.<br />
'''Note:''' Lorsque RootDir est un lien symbolique vers un autre système de fichiers, l'espace disque utilisé du système de fichiers local est vérifié à la place de celui du système de fichiers lié. Positionnez le point de montage comme RootDir pour avoir une vérification correcte.<br />
|-<br />
| localNice <br />
| -19...+19 || 0 || lance le process local rsync par nice. <br />
|-<br />
| remoteNice<br />
| -19...+19 || 0 || lance le process rsync de la source par nice. <br />
|-<br />
| Watchdog<br />
| yes ''ou'' no || yes || Seulement applicable aux tâches qui sauvegardent un serveur SME 7.<br>Quand une tâche est lancée, affa installe un script chien de garde sur la source dans /etc/cron.d/, qui envoie un message d'avertissement si la prochaine tâche programmée (prise dans la propriété TimeSchedule + 10 minutes) n'a pas été lancée. Cela garantit que vous serez prévenu même en cas de panne du serveur affa. Le script chien de garde envoie un message de rappel quotidien si l'erreur persiste. La tâche lancée suivante remplace le script chien de garde avec un nouveau temps de déclenchement. <br />
|-<br />
| sshPort<br />
| port du service || 22 || Lorsque sshd sur le serveur source ou votre firewall écoute sur un port non standard, précisez le port ici.<br />
|-<br />
| ConnectionCheckTimeout<br />
| secondes || 120 || Avant que le process rsync démarre sur l'hôte source distant, affa vérifie la connexion ssh et se termine avec une erreur après le temps configuré, si l'hôte ne répond pas.<br />
|- <br />
| rsyncTimeout<br />
| secondes || 900 || Rsync se termine si aucune donnée n'est transférée pendant le temps configuré. Cela évite un gelage permanent dans le cas d'une erreur sur le réseau.<br />
|- <br />
| BandwidthLimit<br />
| entier>=0 Ko/s || 0 || Limite la vitesse de transfert des données. Une valeur à zéro spécifie qu'il n'y a pas de limite.<br />
|- <br />
| rsyncCompress<br />
| yes ''ou'' no || no || Compresse les données transférées. Peut être utile avec les connexions internet lentes. Cela augmente la charge CPU sur les hôtes source et de sauvegarde.<br />
|-<br />
| EmailAddresses<br />
| name@domain.com,name@domain.com,... || admin|| Liste séparée par des virgules des adresses email où les messages seront envoyés.<br>'''Note:''' Par défaut, Affa n'envoie de messages que pour les erreurs, jamais en cas de succès (voir la propriété chattyOnSuccess).<br />
|-<br />
| RetryAttempts<br />
| entier >= 0 || 3 || Quand positionné à une valeur >0, Affa relance une tâche échouée RetryAttempts fois avec un délai de RetryAfter secondes.<br />
|-<br />
| RetryAfter <br />
| secondes >= 3 || 600 || Quand positionné à une valeur >0, attend RetryAfter secondes avant de relancer la tâche suite à une erreur. Utile seulement avec RetryAttempts>0<br />
|-<br />
| RetryNotification<br />
| yes ''ou'' no || yes || Quand positionné à no, Affa n'envoie pas de message d'erreur lorsqu'une tâche a échoué et que RetryAttempts est configuré. Un message d'erreur n'est envoyé que lorsque le dernier essai a échoué.<br />
|-<br />
| chattyOnSuccess<br />
| entier >= 0 || 0 || Quand positionné à une valeur >0, Affa envoie un message pour les tâches terminées avec succès et décrémente la valeur chattyOnSuccess. Lorsque la valeur atteint 0, Affa revient à son comportement par défaut et n'envoie plus de messages qu'en cas d'erreur.<br />
|-<br />
| AutomountDevice<br>AutomountPoint<br />
| full path || || Unité et point de montage de l'unité de sauvegarde (ex. disque USB). L'unité est montée automatiquement avant qu'une tâche soit lancée et démontée après la fin de la tâche. Les deux propriétés doivent être remplies pour que la fonction Automount soit active.<br />
|-<br />
| AutomountOptions<br />
| chaîne de caractères || || Une chaîne d'options passées à la commande de montage automatique<br />
|-<br />
| AutoUnmount <br />
| yes ''ou'' no || yes || Quand positionné à 'no' l'unité montée automatiquement reste montée après l'exécution d'Affa.<br />
|-<br />
| preJobCommand<br>postJobCommand<br />
| full path || || Programmes (locaux sur le serveur affa) à exécuter avant/après qu'une tâche soit lancée. Le nom de la tâche et le type (programmée, journalière etc.) sont passés en arguments au programme. Le code de sortie est de plus passé à la commande de programme post-tâche. Voir /usr/lib/affa/ pour des exemples de scripts perl.<br />
|-<br />
| RootDir<br />
| full path || /var/affa || Où stocker les archives de sauvegarde. N'utilisez pas /home/e-smith ou /root puisqu'ils sont inclus dans les répertoires à sauvegarder, ce qui empêcherait la fonction de conversion de fonctionner ! Répertoire conseillé : /var/affa<br />
|-<br />
| SambaShare<br />
| yes ''ou'' no|| yes || Accède aux archives de la tâche via SMB.<br />
|-<br />
| Debug <br />
| yes ''ou'' no || no || Positionné à 'yes', donne plus de détails dans le fichier de log.<br />
|-<br />
| status<br />
| enabled ''ou'' disabled || enabled || Positionné à 'disable', aucune entrée cron ne sera faite. Vous pouvez encore lancer les tâches à la main.<br />
|-<br />
| rsync--inplace<br />
| yes ''ou'' no || yes || Positionné à 'no', si la version rsync de la source ne supporte pas cette option (comme rsync sur SME6)<br />
|-<br />
| rsync--modify-window<br />
| entier >= 0 || 0 || En comparant deux horodatages, rsync les traite comme étant égaux s'ils ne diffèrent pas de plus de la valeur modify-window. C'est normalement 0 pour une correspondance exacte. Une valeur > 0 est pratique si vous ne pouvez pas garder les horloges de la source et du serveur Affa synchronisées.<br />
|-<br />
| rsyncOptions<br />
| texte || || Chaîne d'options additionnelles à passer à rsync.<br />
|-<br />
| rsyncdMode<br />
| yes ''ou'' no || no || Positionné à yes pour se connecter au démon rsync de l'hôte distant (au lieu de lancer rsync over ssh).<br />
|-<br />
| rsyncdModule<br />
| texte || AFFA || Le nom du module rsyncd (seulement applicable avec rsyncdMode=yes)<br />
|-<br />
| rsyncdUser<br />
| texte || affa || Le nom d'utilisateur pour authentification au démon rsync (seulement applicable avec rsyncdMode=yes)<br />
|-<br />
| rsyncdPassword<br />
| texte || || Le mot de passe pour authentification au démon rsync (seulement applicable avec rsyncdMode=yes)<br />
|-<br />
| remoteOS<br />
| cygwin || || avec remoteOS=cygwin les options --send-key et --revoke-key utilisent le compte 'Administrator' et le chemin correct pour la clé publique sur un hôte distant sous Windows/Cygwin. <br />
|- <br />
| ESXi<br />
| yes ''ou'' no || no || active le mode de sauvegarde VMware ESXi pour les machines virtuelles.<br>Voir [[Sauvegarde des machines virtuelles ESXi avec Affa]]<br />
|-<br />
| ESXiVMName<br />
| texte || || Le nom de la VM tel qu'affiché dans le client VI.<br />
|-<br />
| ESXiUsername<br />
| texte || || Le nom du client VI avec la permission de créer et détruire les instantanés (snapshots).<br />
|-<br />
| ESXiPassword<br />
| texte || || Le mot de passe de l'utilisateur ESXiUsername<br />
|-<br />
| chunkFiles<br />
| texte || || Noms de fichiers qu'Affa devrait découper et compresser. Plusieurs noms doivent être séparés par un slash (/), ex. 'mysql.dump/pgsql.dump' découpe les deux fichiers mysql.dump et pgsql.dump. Avec ESXi=yes c'est implicitement positionné à '*.vmdk'. Voir aussi les options de ligne de commande --chunk-archive et --unchunk-archive<br />
|}<br />
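Le découpage et la compression décrits par la propriété chunkFiles peuvent être esquissés avec split et bzip2 (tailles, chemins et noms de fichiers d'exemple ; le format exact des découpages d'Affa peut différer) :<br />

```shell
set -e
mkdir -p /tmp/chunk-demo/mysql.dump.affa-chunks
# Fichier d'exemple de 64 Ko à découper
dd if=/dev/urandom of=/tmp/chunk-demo/mysql.dump bs=1024 count=64 2>/dev/null
# Découper en morceaux de 16 Ko, puis compresser chaque morceau en bzip2
split -b 16k /tmp/chunk-demo/mysql.dump /tmp/chunk-demo/mysql.dump.affa-chunks/chunk.
bzip2 /tmp/chunk-demo/mysql.dump.affa-chunks/chunk.*
# Re-concaténer et décompresser (équivalent manuel de --unchunk-archive)
bzcat /tmp/chunk-demo/mysql.dump.affa-chunks/chunk.*.bz2 > /tmp/chunk-demo/restaure.dump
# Le fichier reconstruit doit être identique à l'original
cmp /tmp/chunk-demo/mysql.dump /tmp/chunk-demo/restaure.dump
```

Le tri lexical du glob garantit que les morceaux sont re-concaténés dans le bon ordre.<br />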
<br />
==== Propriétés de la configuration par défaut ====<br />
Toutes les propriétés peuvent être positionnées par défaut dans l'enregistrement DefaultAffaConfig. C'est pratique lorsque vous configurez plusieurs tâches similaires. <br />
<br />
Exemple : Vous voulez positionner la propriété 'localNice' à 19 pour toutes les tâches. Tapez<br />
db affa setprop DefaultAffaConfig localNice 19<br />
et ne positionnez pas cette propriété pour les tâches.<br />
Les propriétés positionnées dans les tâches sont prioritaires sur les propriétés par défaut.<br />
<br />
La propriété spéciale 'sendStatus' ne s'applique que dans l'enregistrement DefaultAffaConfig. Elle contrôle le rapport d'état (status) envoyé par mail et peut prendre les valeurs 'none', 'daily', 'weekly' ou 'monthly'. <br />
<br />
Pour configurer un rapport hebdomadaire, tapez :<br />
db affa setprop DefaultAffaConfig sendStatus weekly<br />
puis configurez cronjob :<br />
affa --make-cronjobs<br />
<br />
==== Désactivation globale ====<br />
Toutes les tâches peuvent être désactivées en positionnant l'enregistrement AffaGlobalDisable sur 'yes'.<br />
db affa set AffaGlobalDisable yes<br />
affa --make-cronjobs<br />
<br />
Pour réactiver :<br />
db affa set AffaGlobalDisable no<br />
affa --make-cronjobs<br />
<br />
=== Utilisation et options de ligne de commande ===<br />
{{Note box|Les options peuvent être abrégées tant qu'elles restent uniques, ex. --mak est identique à --make-cronjobs}}<br />
'''affa --run JOB'''<br />
Lance une tâche. Normalement effectué par cronjob. <br />
<br />
<br />
'''affa --make-cronjobs'''<br />
Configure les tâches cron comme programmées dans les enregistrements de tâches.<br />
<br />
<br />
'''affa --send-key JOB'''<br />
'''affa --send-key --host=TARGETHOST [--port=PORT] [--remoteOS=cygwin]'''<br />
Génère d'abord la clé DSA pour le serveur Affa, si cela n'a pas déjà été fait. Envoie ensuite la clé publique à l'hôte 'remoteHostName' comme configuré dans l'enregistrement de la tâche et génère l'entrée ssh known host (hôte connu) spécifique à la tâche. <br />
{{Note box|Initialement, lorsque vous faites cette étape, vous aurez besoin d'activer temporairement "Autoriser l'authentification par SSH à l'aide de mots de passe standards" sur le serveur de production.}}<br />
{{Note box|<nowiki>l'option --send-key fonctionne pour SME Server comme serveur distant et pour les systèmes sur lesquels les clés publiques sont stockées dans /root/.ssh/authorized_keys2 et sur lesquels les commandes /bin/cat, /bin/touch, /bin/grep et /bin/mv sont disponibles. Avec remoteOS=cygwin cela fonctionne pour les serveurs distants sous Cygwin/Windows.</nowiki>}}<br />
<br />
<br />
'''affa --full-restore JOB [ARCHIVE]'''<br />
Effectue une restauration totale depuis l'archive de sauvegarde [ARCHIVE] sur le serveur source distant comme défini dans l'enregistrement JOB. Si [ARCHIVE] n'est pas donnée, l'archive 'scheduled.0' est utilisée comme défaut. La restauration totale reconstruit le serveur comme il était au moment de la sauvegarde. Après la restauration l'hôte source est redémarré.<br />
<br />
<br />
'''affa --rise [--all] JOB [ARCHIVE]'''<br />
Lance une restauration totale depuis l'archive de sauvegarde [ARCHIVE] sur le serveur Affa <b>(!)</b> comme défini dans l'enregistrement JOB. Autrement dit, une fois terminée, le serveur de sauvegarde reboote comme un clone du serveur source. Assurez-vous d'avoir arrêté le serveur source avant de rebooter le serveur de sauvegardes, sinon l'interface réseau ne sera pas activée. Cela est important lorsque vous lancez --rise à distance. La fonction --rise ne fonctionne qu'avec les serveurs SME 7 et ne devrait être utilisée que sur les serveurs de sauvegardes dédiés.<br />
<br />
Avec l'option --all, tous les fichiers et répertoires de l'archive tels que définis par les propriétés Include[] sont restaurés. Les fichiers et répertoires portant le même nom sur le serveur Affa seront écrasés et ne pourront pas être restaurés par un undo-rise. Cela ne devrait pas poser de problème sur un serveur Affa dédié qui ne contient aucune autre donnée. Après un éventuel undo-rise, ces données additionnelles restaurées doivent être supprimées manuellement.<br />
<br />
Remarquez que le processus rise sauvegarde le serveur Affa lui-même avant d'effectuer la restauration de l'archive. Cette sauvegarde est utilisée par un éventuel undo-rise pour restaurer le serveur Affa. Seuls les fichiers et répertoires standards sont sauvegardés. Les données dans des endroits non standard (comme /opt) ne sont pas touchées et existeront encore après le rise. Voir aussi : [[Backup_server_config#Standard_backup_.26_restore_inclusions]]<br />
<br />
<br />
'''affa --undo-rise'''<br />
Cette fonction reconvertit en serveur de sauvegarde un serveur Affa préalablement converti en serveur de production. Après un redémarrage, toutes les tâches configurées fonctionneront à nouveau.<br />
<br />
<br />
'''affa --unchunk-archive JOB ARCHIVE'''<br />
Concatène et décompresse tous les fichiers découpés dans l'archive ARCHIVE de la tâche JOB pour rendre l'archive prête pour une restauration. Notez que l'archive la plus récente scheduled.0 n'est jamais découpée et peut donc être restaurée telle quelle, sans re-concaténation préalable.<br />
<br />
<br />
'''affa --chunk-archive JOB ARCHIVE'''<br />
Découpe et compresse tous les fichiers dont les noms correspondent à la propriété chunkFiles pour économiser l'espace disque. Cela ne devrait être utilisé que comme l'inverse de --unchunk-archive. Les découpages sont stockés en fichiers bzip2 dans un répertoire nommé FILENAME.affa-chunks avec 8 niveaux de profondeur.<br />
<br />
<br />
<br />
'''affa --create-backup-file JOB [ARCHIVE] [--outfile=FILE]'''<br />
Crée une archive tar gzippée FILE depuis l'archive ARCHIVE de la tâche JOB. Pendant la création, la somme MD5 est calculée, puis le fichier tar enregistré lui est comparé. L'archive par défaut est scheduled.0 et le fichier FILE par défaut est ./smeserver.tgz. La somme de contrôle est écrite dans FILE.md5sum.<br />
<br />
Un fichier smeserver.tgz stocké sur un disque USB peut être utilisé pour restauration lors d'une nouvelle installation de SME Server.<br />
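La vérification de la somme MD5 écrite dans FILE.md5sum peut se faire avec les outils standards (esquisse avec des fichiers d'exemple sous /tmp ; les noms smeserver-demo.tgz et tar-demo sont hypothétiques) :<br />

```shell
set -e
cd /tmp
mkdir -p tar-demo && echo "contenu" > tar-demo/fichier
# Créer l'archive tar gzippée et enregistrer sa somme de contrôle
tar czf smeserver-demo.tgz tar-demo
md5sum smeserver-demo.tgz > smeserver-demo.tgz.md5sum
# Vérifier l'intégrité de l'archive avant de s'en servir pour une restauration
md5sum -c smeserver-demo.tgz.md5sum
```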
<br />
<br />
'''affa --list-archives [--csv] JOB'''<br />
Affiche un tableau de toutes les archives de la tâche JOB existantes, avec la date, le nombre de fichiers, la taille et l'espace disque utilisé. <br />
<small><pre><br />
Affa version 2.0.0-rc4 on affa1.mydomain.de (10.204.48.2)<br />
+------------------------------------------------------------------------------+<br />
| Job: primmail |<br />
| Description: Mailserver Albstadt |<br />
| Directory: /var/affa/primmail/ |<br />
| Hostname: 10.204.144.2 |<br />
| Email: admin@mydomain.de |<br />
+-------+-----------------------+-----------+----------------+--------+--------+<br />
| Run | Completion date | Exec Time | Files | Size | Recvd |<br />
+-------+-----------------------+-----------+----------------+--------+--------+<br />
| M 9 | Sun 2008 Mar 02 06:11 | 0h29m47s | 679010 | 80GB | 60MB |<br />
| M 8 | Sun 2008 Mar 30 06:12 | 0h26m59s | 701683 | 83GB | 48MB |<br />
| M 7 | Sun 2008 Apr 27 06:16 | 0h28m01s | 731332 | 87GB | 47MB |<br />
| M 6 | Sun 2008 Jun 01 06:20 | 0h27m37s | 755529 | 90GB | 50MB |<br />
| M 5 | Sun 2008 Jun 29 06:24 | 0h28m39s | 800200 | 94GB | 52MB |<br />
| M 4 | Sun 2008 Jul 27 06:28 | 0h29m10s | 835398 | 99GB | 59MB |<br />
| M 3 | Sun 2008 Aug 31 06:34 | 0h34m53s | 869409 | 101GB | 65MB |<br />
| M 2 | Sun 2008 Sep 28 06:38 | 0h33m19s | 910889 | 106GB | 59MB |<br />
| M 1 | Sun 2008 Oct 26 06:42 | 0h38m41s | 932627 | 110GB | 61MB |<br />
| M 0 | Sun 2008 Nov 30 06:48 | 0h51m44s | 954090 | 116GB | 75MB |<br />
+-------+-----------------------+-----------+----------------+--------+--------+<br />
| W 3 | Sun 2008 Dec 14 06:50 | 0h53m34s | 962267 | 118GB | 68MB |<br />
| W 2 | Sun 2008 Dec 21 06:52 | 0h53m11s | 974914 | 120GB | 69MB |<br />
| W 1 | Wed 2008 Dec 24 06:52 | 0h48m58s | 974536 | 120GB | 64MB |<br />
| W 0 | Sun 2008 Dec 28 06:58 | 0h39m01s | 977003 | 120GB | 61MB |<br />
+-------+-----------------------+-----------+----------------+--------+--------+<br />
| D 6 | Tue 2008 Dec 30 06:52 | 0h54m45s | 976872 | 120GB | 77MB |<br />
| D 5 | Wed 2008 Dec 31 06:52 | 0h49m10s | 977764 | 120GB | 63MB |<br />
| D 4 | Thu 2009 Jan 01 06:28 | 0h28m09s | 977879 | 120GB | 60MB |<br />
| D 3 | Fri 2009 Jan 02 06:53 | 0h53m21s | 978144 | 120GB | 68MB |<br />
| D 2 | Sat 2009 Jan 03 06:53 | 0h53m15s | 978072 | 120GB | 60MB |<br />
| D 1 | Sun 2009 Jan 04 06:53 | 0h53m16s | 976478 | 120GB | 70MB |<br />
| D 0 | Mon 2009 Jan 05 06:52 | 0h52m42s | 977062 | 120GB | 61MB |<br />
+-------+-----------------------+-----------+----------------+--------+--------+<br />
| S 1 | Mon 2009 Jan 05 17:57 | 0h27m48s | 976353 | 120GB | 98MB |<br />
| S 0 | Tue 2009 Jan 06 06:52 | 0h52m55s | 976539 | 120GB | 70MB |<br />
+-------+-----------------------+-----------+----------------+--------+--------+<br />
</pre></small><br />
With --csv, the table is generated in a machine-readable colon-separated (:) format.<br />
<br />
<br />
'''affa --status [--csv]'''<br />
Displays a table of all active jobs (status enabled), with the time of the last and next run, the size and disk space used by the most recent archive, the execution time of the last run, and the number of scheduled (S), daily (D), weekly (W), monthly (M) and yearly (Y) archives. The last run (column 'Last') shows 'failed' if a job has not run within the last 24 hours. For disabled jobs, 'Last' therefore always shows 'failed' after 24 hours; to see the date and time of the last run of such jobs, use the --list-archives option. The 'Next' column shows the time the next run will start, if 'Enabled' is 'yes'.<br />
<small><pre><br />
Affa version 2.0.0-rc4 on backup.mydomain.de (10.204.48.2)<br />
+----------------+-----+-------+-----------+-------+-------+----------------+<br />
| Job | ENA | Last | Exec Time | Next | Size | N of S,D,W,M,Y |<br />
+----------------+-----+-------+-----------+-------+-------+----------------+<br />
| erp-alb-rsyncd | yes | 00:20 | 0h00m14s | 00:20 | 46MB | 1, 7, 4,10, 0 |<br />
| erp-fra-rsyncd | yes | 02:46 | 0h01m03s | 02:45 | 712MB | 1, 7, 4,10, 0 |<br />
| esxi-TS-W2K | yes | 00:37 | 4h22m08s | 20:15 | 60GB | 1, 7, 1, 0, 0 |<br />
| esxi-W2KR2 | yes | 02:22 | 3h52m09s | 22:30 | 40GB | 1, 7, 1, 0, 0 |<br />
| helpdesk | yes | 00:25 | 0h00m47s | 00:25 | 117MB | 1, 7, 4,10, 0 |<br />
| imageserv | yes | running (pid 16477) | 1, 7, 4,10, 0 |<br />
| intraweb | yes | 06:34 | 0h04m50s | 18:00 | 2.1GB | 2, 7, 4,10, 0 |<br />
| pdcalb | yes | 21:19 | 0h14m27s | 21:05 | 89GB | 1, 7, 4,10, 0 |<br />
| pdcfra | yes | 04:18 | 0h03m33s | 04:15 | 33GB | 1, 7, 4,10, 0 |<br />
| primmail | yes | 06:52 | 0h52m55s | 17:30 | 120GB | 2, 7, 4,10, 0 |<br />
| rayofhope | yes | 21:52 | 0h07m40s | 21:45 | 26GB | 1, 7, 4,10, 0 |<br />
| smecrmpg | yes | 22:35 | 0h00m21s | 22:35 | 70MB | 1, 7, 4,10, 0 |<br />
| sozserv | yes | 02:09 | 0h04m03s | 02:05 | 5.9GB | 1, 7, 4,10, 0 |<br />
| wiki | yes | 03:45 | 0h00m27s | 03:45 | 278MB | 1, 7, 4, 8, 0 |<br />
+----------------+-----+-------+-----------+-------+-------+----------------+<br />
| az32share | no | - | 0h00m22s | 03:05 | 1.3GB | 1, 7, 4, 8, 0 |<br />
+----------------+-----+-------+-----------+-------+-------+----------------+<br />
</pre></small><br />
With --csv, the table is generated in a machine-readable colon-separated (:) format.<br />
<br />
<br />
'''affa --disk-usage'''<br />
Displays the current disk usage of the root directories of all file systems.<br />
<small><pre><br />
Affa version 2.0.0-rc4 on backup2.mydomain.de (10.204.0.52)<br />
+------+--------+--------+----------------------------------------------------+<br />
| Use% | Used | Avail | Root Dir |<br />
+------+--------+--------+----------------------------------------------------+<br />
| 71% | 938GB | 365GB | /var/affa |<br />
| 20% | 194GB | 759GB | /mnt/affadev |<br />
+------+--------+--------+----------------------------------------------------+<br />
</pre></small><br />
With --csv, the table is generated in a machine-readable colon-separated (:) format.<br />
<br />
<br />
'''affa --show-schedule [--all]'''<br />
Displays a 'graphical' schedule of all active jobs. The resolution is 30 minutes. An 'X' marks the scheduled run times. The duration of the last run is marked with '='.<br />
<small><pre><br />
Affa version 2.0.0-rc4 on backup.mydomain.de (10.204.48.2)<br />
TIME 0:00 4:00 8:00 12:00 16:00 20:00<br />
erp-alb-rsyncd X------- -------- -------- -------- -------- --------<br />
helpdesk X------- -------- -------- -------- -------- --------<br />
esxi-W2KR2 =====--- -------- -------- -------- -------- -----X==<br />
esxi-TS-W2K ==------ -------- -------- -------- -------- X=======<br />
imageserv ---X---- -------- -------- -------- -------- --------<br />
sozserv ----X--- -------- -------- -------- -------- --------<br />
erp-fra-rsyncd -----X-- -------- -------- -------- -------- --------<br />
wiki -------X -------- -------- -------- -------- --------<br />
pdcfra -------- X------- -------- -------- -------- --------<br />
primmail -------- ----X=-- -------- -------- ---X---- --------<br />
intraweb -------- -----X-- -------- -------- ----X--- --------<br />
pdcalb -------- -------- -------- -------- -------- --X-----<br />
rayofhope -------- -------- -------- -------- -------- ---X----<br />
smecrmpg -------- -------- -------- -------- -------- -----X--<br />
1 disabled jobs not listed. Use --all to display.<br />
</pre></small><br />
<br />
<br />
'''affa --send-status'''<br />
Sends the status table, the disk usage and the archive list of all jobs to the email address configured in the 'DefaultAffaConfig' record. This command is used by the 'affa status' cron job.<br />
<br />
<br />
'''affa --mailtest JOB'''<br />
Sends a test email to the email addresses configured in the job record. With the 'Watchdog' property set to 'yes', a test email is also sent from the remote server. Use this to check that sending mail works.<br><br />
'''Note:''' By default, Affa only sends messages on errors, never on success (see the 'chattyOnSuccess' property).<br />
<br />
<br />
'''affa --cleanup JOB'''<br />
After you have reduced a keep count (e.g. scheduledKeep), archives with a higher index are no longer deleted automatically and still exist. This option finds those archives and deletes them.<br />
<br />
<br />
'''affa --rename-job JOB NEWNAME'''<br />
Renames job JOB to NEWNAME, including all database records and archive directories.<br />
<br />
<br />
'''affa --move-archive JOB NEWROOTDIR'''<br />
Moves the archive directory of JOB to NEWROOTDIR and adjusts the RootDir property. NEWROOTDIR must be a full path starting with a slash. Since moving from one file system to another (for example from a USB disk to a local disk) is not possible, Affa uses a copy command in that case and deletes the source directory afterwards. Depending on the size of the archive, copying from one file system to the other can take a long time.<br />
<br />
<br />
'''affa --delete-job [--revoke-key] JOB'''<br />
Irreversibly deletes a job, including all archives, the configuration and the attached databases. When the '--revoke-key' option is passed, the public key on the remote server is destroyed as well.<br />
<br />
<br />
'''affa --revoke-key JOB'''<br />
'''affa --revoke-key --host=TARGETHOST [--port=PORT] [--remoteOS=cygwin]'''<br />
Destroys the public DSA key on the remote server.<br />
{{Note box|<nowiki>The --revoke-key option alone works with SME Server as the remote server and with systems where the keys are stored in /root/.ssh/authorized_keys2. With remoteOS=cygwin it works for remote servers running Cygwin/Windows.</nowiki>}}<br />
<br />
<br />
'''affa --check-connections'''<br />
Checks the ssh login for all configured jobs. For jobs whose public key has not been sent yet, you are prompted for the password and the key is then sent.<br />
<br />
<br />
'''affa --kill JOB'''<br />
Terminates the running job JOB and all child processes (the rsync processes).<br />
<br />
'''affa --debug'''<br />
Enables verbose logging. Overrides the job and global configurations.<br />
<br />
=== Configuration examples ===<br />
==== Dedicated backup server ====<br />
* Configure a job for each server you want to back up<br />
* Send the public keys to each server.<br />
affa --send-key JOBNAME<br />
* Sends the DSA key for job JOBNAME.<br />
affa --check-connections<br />
* Check that the passwordless connections work.<br />
affa --show-schedule<br />
* Check whether the scheduled jobs are spread over the day.<br />
affa --make-cronjobs<br />
* Create the cron jobs.<br />
affa --status<br />
* Check the status after 24 hours.<br />
<br />
<br />
==== Backing up an individual ibay ====<br />
Suppose we want to back up the ibays 'staff1' and 'staff2' of our production server, whose WAN IP address is 82.123.1.1, to a remote Affa server over the internet, every night at 2:30 am.<br />
* Log in to the Affa server and install the packages as described above.<br />
* Copy the sample configuration<br />
cp /usr/lib/affa/jobconfig-sample.pl /root/ibay-staff-job.pl<br />
* Edit /root/ibay-staff-job.pl and set:<br />
my $jobname='ibay-staff';<br />
and<br />
'remoteHostName'=>'82.123.1.1',<br />
'TimeSchedule'=>'0230',<br />
'SMEServer'=>'no',<br />
'Include[0]'=>'/home/e-smith/files/ibays/staff1',<br />
'Include[1]'=>'/home/e-smith/files/ibays/staff2',<br />
* Save the configuration<br />
/root/ibay-staff-job.pl<br />
* Send the public key to the production server<br />
affa --send-key ibay-staff<br />
* Check the next morning:<br />
affa --list-archives ibay-staff<br />
affa --status<br />
* Check the status after 24 hours.<br />
ls /var/affa/ibay-staff<br />
<br />
==== Two production servers backing each other up ====<br />
You have two sites connected via a VPN and an SME Server running at each site. In this case you do not need a dedicated Affa backup server; the two production servers can additionally each act as an Affa server backing up the server at the other site. Simply install Affa and configure a job that backs up the other server. You can use all Affa features except the rise feature. <br />
Using the rise feature, the server would become one of the systems it backs up, which is less useful in this scenario, since it would turn this site's production server into a working copy of the other site's server when that one fails.<br />
<br />
For redundancy and a faster restore, you can additionally configure a local backup to an external USB disk or a NAS.<br />
<br />
Be careful not to back up the other site's archives again in turn. Set the RootDir property to a path that is not included in SME's default backup list. Leaving the RootDir property at its default of '/var/affa' guarantees this.<br />
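As a sketch, assuming a job named 'site2' (a hypothetical name), the property discussed above would be pinned with the same kind of db command used elsewhere on this page:

```shell
# /var/affa is outside SME's default backup inclusion list, so the other
# site's backup job will not re-back-up these archives
db affa setprop site2 RootDir /var/affa
```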
<br />
==== Backing up a Windows computer ====<br />
Backing up data from a Windows system requires the Cygwin rsyncd daemon to be installed and configured on Windows. The "rsync over ssh" approach does not work, since the Cygwin rsync process reproducibly crashes after transferring a few files. <br />
<br />
===== Rsyncd setup on the Windows computer =====<br />
Install the Cygwin base and the rsync package, then configure the rsyncd service as described in this document: [[Rsyncd setup on a windows computer for use with Affa backup]]. <br />
Installing the sshd service is optional and not required for the backup itself, but an ssh connection can be very useful for administration and for running scripts on the Windows system. Affa supports sending the public key to a Cygwin Windows computer for passwordless login.<br />
<br />
Note: Affa does not back up Windows Access Control List (ACL) information. You may have to fix the ACLs after a restore.<br />
<br />
===== Affa rsyncd mode setup (quick start example) =====<br />
You want to back up the ''My Documents'' folders of the users ''ssorglos'' and ''bsimpson'' from the Windows computer 'ws001' with IP 192.168.1.65 to your Affa server 'affabox' with IP 192.168.1.3.<br />
* Log in to 'affabox' and copy the Cygwin sample configuration script<br />
cp /usr/lib/affa/jobconfig-cygwin-sample.pl /root/ws001-mydocs-job.pl<br />
* Edit /root/ws001-mydocs-job.pl and set<br />
my $jobname='ws001-mydocs';<br />
and<br />
'remoteHostName'=>'192.168.1.65',<br />
'rsyncdPassword'=>'<i>secretword</i>',<br />
'Include[0]'=>'/c/Documents and Settings/ssorglos/My Documents/', ''# Do not use backslashes in paths!''<br />
'Include[1]'=>'/c/Documents and Settings/bsimpson/My Documents/',<br />
where ''secretword'' must be replaced by the password you chose in the rsyncd.secrets file on the Windows computer.<br />
* Save the configuration (this creates the database entries and sets up the cron jobs).<br />
/root/ws001-mydocs-job.pl<br />
* Run the job manually. After completion, check the archive /var/affa/ws001-mydocs/scheduled.0 and the log /var/log/affa/ws001-mydocs.log.<br />
affa --run ws001-mydocs<br />
<br />
===== Manual Affa rsyncd mode setup =====<br />
In case you want to configure the job manually using db commands, these are the mandatory settings for the Cygwin rsyncd mode.<br />
db affa setprop JOB rsyncdMode yes<br />
db affa setprop JOB rsyncdModule AFFA<br />
db affa setprop JOB rsyncdUser affa<br />
db affa setprop JOB rsyncdPassword ''secretword''<br />
db affa setprop JOB SMEServer no<br />
db affa setprop JOB RPMCheck no<br />
db affa setprop JOB Watchdog no<br />
Optional, for an ssh connection:<br />
db affa setprop JOB remoteOS cygwin <br />
<br />
<br />
==== Using Affa to back up to an NFS-mounted NAS or a local USB disk ====<br />
<br />
We want to back up our production SME 7 server, hostname 'prodbox' and IP 10.200.48.1, to a mounted file system rather than setting up a dedicated Affa server.<br />
<br />
===== NAS setup =====<br />
We have a FreeNAS server running at IP address 10.200.48.2 with the NFS service enabled. The disk is mounted on /mnt/affashare. We have allowed the IP address of our production server to access the /mnt/affashare share.<br />
* Log in to 'prodbox' and install the NFS packages:<br />
/usr/bin/yum install --enablerepo=smecontribs smeserver-nfs<br />
* Now enable and start the portmapper service.<br />
config setprop portmap status enabled<br />
service portmap start<br />
<br />
* Mount the NFS share:<br />
mkdir -p /mnt/affadevice<br />
mount 10.200.48.2:/mnt/affashare /mnt/affadevice<br />
<br />
<br />
===== Alternative: USB disk setup =====<br />
* Log in to 'prodbox'<br />
* Connect the USB disk to the USB bus. We now need to find out which device the kernel assigned to the drive. Look in /var/log/messages for the corresponding initialization line (Initializing USB Mass Storage driver). A few lines further down you will find the device name. Let us assume it is 'sdh'; if not, replace /dev/sdh in the instructions below with the value you found.<br />
* Create a Linux partition with fdisk: '''Make absolutely sure this really is the attached USB disk before continuing!'''<br />
<br />
fdisk /dev/sdh<br />
You will probably find a vfat DOS partition, which you must delete first. Let us assume we created a single partition ''/dev/sdh1''.<br />
* Format it with an ext3 file system:<br />
mkfs.ext3 /dev/sdh1<br />
* Create the mount point:<br />
mkdir -p /mnt/affadevice<br />
* Add the USB disk to /etc/fstab by creating a new line containing:<br />
/dev/sdh1 /mnt/affadevice ext3 defaults<br />
* Mount the disk:<br />
mount /mnt/affadevice<br />
* Cross-check with df:<br />
df<br />
<br />
===== Copying an Affa archive from a USB hard disk to a new disk =====<br />
Affa uses a large number of hard links to deduplicate backups and save disk space. If you want to copy a disk containing an Affa archive to another (larger) disk, you must make sure the hard links are copied correctly; otherwise the copy can become considerably larger than the source archive.<br />
<br />
One way to copy across separate file systems (e.g. between 2 different USB disks) while preserving the hard links is the following:<br />
* mount the two USB disks on different mount points, for example /media/backup1 and /media/backup2, and then run:<br />
<br />
mkdir /media/backup2/archive <br />
cd /media/backup1/archive<br />
tar cpf - . | ( cd /media/backup2/archive && tar xpf - )<br />
<br />
replacing '''archive''' with the actual name of the Affa archive (job) you want to move to the other disk.<br />
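To convince yourself that this tar pipeline really preserves hard links (a naive per-file copy would duplicate the data instead), here is a self-contained sketch using throw-away directories and GNU stat:

```shell
# Two names for one file, as in an Affa archive, copied with the tar pipeline.
src=$(mktemp -d); dst=$(mktemp -d)
echo data > "$src/scheduled.0"
ln "$src/scheduled.0" "$src/daily.0"              # hard link, not a copy
( cd "$src" && tar cpf - . ) | ( cd "$dst" && tar xpf - )
stat -c %h "$dst/daily.0"                          # prints 2: still one inode
```

A link count of 2 on the destination shows that both names still share a single inode, so no disk space was wasted by the copy.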
<br />
<br />
<br />
===== Affa setup =====<br />
We want to run backups at 11:30 am, 3:30 pm and 7:30 pm, and keep the last 3 scheduled, 7 daily, 5 weekly, 12 monthly and 1 yearly archives.<br />
* Log in to 'prodbox' and install the Affa packages as described above.<br />
* Copy the sample configuration<br />
cp /usr/lib/affa/jobconfig-sample.pl /root/prodbox-job.pl<br />
* Edit /root/prodbox-job.pl and set:<br />
my $jobname='prodbox';<br />
and<br />
'remoteHostName'=>'localhost',<br />
'TimeSchedule'=>'1130,1530,1930',<br />
'scheduledKeep'=>3,<br />
'dailyKeep'=>7,<br />
'weeklyKeep'=>5,<br />
'monthlyKeep'=>12,<br />
'yearlyKeep'=>1,<br />
'RootDir'=>'/mnt/affadevice',<br />
Check the other properties and change them if needed.<br />
* Save the configuration<br />
/root/prodbox-job.pl<br />
* Run the job manually:<br />
<br />
affa --run prodbox<br />
<br />
<br />
===== Limitations =====<br />
With this kind of setup, you cannot use the 'rise' feature (turning the backup server into a production server), since it requires the archive to be on the same file system as the server. The 'rise' feature uses hard links, which do not work across different file systems.<br />
<br />
===== Automount =====<br />
Keeping the backup archives on the same file system is always a risk, which can be reduced by using the automount feature. The external file system is then only mounted while a job is running.<br />
<br />
In the NAS example, configure:<br />
'AutomountDevice'=>'10.200.48.2:/mnt/affashare',<br />
'AutomountPoint'=>'/mnt/affadevice',<br />
and skip step 2<br />
<br />
In the USB disk example, configure:<br />
'AutomountDevice'=>'/dev/sdc1',<br />
'AutomountPoint'=>'/mnt/affadevice',<br />
and skip steps 5 to 8.<br />
<br />
The mount point is created automatically if it does not exist. <br><br />
To access the archive directory, you have to mount it manually.<br />
<br />
=== Restore ===<br />
==== Restoring individual files or directories ====<br />
'''Example 1:''' It is Tuesday, January 6th 2009, when user 'briedlin' asks you to restore the messages of his 'orders' mailbox, which he accidentally deleted on Monday.<br />
<br />
* First, check which archives are available. The job name on this backup server is 'primmail'. To get a list of all archives, run <br />
affa --list-archives primmail<br />
''(see the example listing in the chapter [[Affa#Usage_and_command_line_options]])''<br />
* Choose the archive daily.0, which was created in the night from Monday. Now restore the 'orders' mailbox using the rsync command.<br />
* Run the rsync command (note the trailing slash!) on the Affa backup server.<br />
export RDIR=/home/e-smith/files/users/briedlin/Maildir/.orders/ ''# this variable is used to shorten the next command line''<br />
rsync -av /var/affa/primmail/daily.0/$RDIR 10.204.48.1:$RDIR<br />
<br />
If the servers are configured to use a different ssh port, e.g. 2222, run instead:<br />
<br />
export RDIR=/home/e-smith/files/users/briedlin/Maildir/.orders/ ''# this variable is used to shorten the next command line''<br />
rsync -av -e 'ssh -p 2222' /var/affa/primmail/daily.0/$RDIR 10.204.48.1:$RDIR<br />
<br />
<br />
<br />
'''Example 2:''' A user has deleted the file orderform.pdf in the 'docs' ibay on the server 10.204.48.1 and asks you to restore it.<br />
<br />
* You have searched and found the latest version of this file in the weekly archive of job 'prodserv'.<br />
* To copy it back to the server 10.204.48.1, run on the Affa server<br />
export RFILE=/home/e-smith/files/ibays/docs/files/orderform.pdf ''# this variable is used to shorten the next command line''<br />
rsync -av /var/affa/prodserv/weekly.1/$RFILE 10.204.48.1:$RFILE<br />
<br />
If the servers are configured to use a different ssh port, e.g. 2222, run instead:<br />
export RFILE=/home/e-smith/files/ibays/docs/files/orderform.pdf ''# this variable is used to shorten the next command line''<br />
rsync -av -e 'ssh -p 2222' /var/affa/prodserv/weekly.1/$RFILE 10.204.48.1:$RFILE<br />
<br />
<br />
<br />
'''Example 3:''' Restoring a file with special characters in the path<br />
<br />
Do not use shell variables to shorten the command; that would complicate things more than it helps.<br />
Quote the source and destination paths. In the destination path, escape whitespace with a double backslash and parentheses with a single backslash. On the Affa backup server run:<br />
rsync -av "/var/affa/fileshare/daily.4/home/e-smith/files/ibays/mechfiles/files/Valve Control (Design Gr)/VALVE LIST FOR ISSUED.xls" "192.168.1.7:/home/e-smith/files/ibays/mechfiles/files/Valve\\ Control\\ \(Design\\ Gr\)/VALVE\\ LIST\\ FOR\\ ISSUED.xls"<br />
<br />
If the servers are configured to use a different ssh port, e.g. 2222, run instead:<br />
<br />
rsync -av -e 'ssh -p 2222' "/var/affa/fileshare/daily.4/home/e-smith/files/ibays/mechfiles/files/Valve Control (Design Gr)/VALVE LIST FOR ISSUED.xls" "192.168.1.7:/home/e-smith/files/ibays/mechfiles/files/Valve\\ Control\\ \(Design\\ Gr\)/VALVE\\ LIST\\ FOR\\ ISSUED.xls"<br />
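The double backslashes are needed because the destination path is evaluated twice: once by your local shell and once more by the remote shell that rsync invokes on the target host. A minimal sketch of that second evaluation, in plain shell with no rsync involved (the path fragment is a made-up example):

```shell
# After the local shell strips one level of quoting, the remote side still
# receives "Valve\ Control" and evaluates it once more, yielding a single
# path component containing a space:
remote='Valve\ Control'
eval "printf '%s\n' $remote"   # prints "Valve Control"
```

Without the extra backslash, the remote shell would split the path into two words and rsync would fail to find the file.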
<br />
==== Full restore ====<br />
To run a full restore of the users and the configuration data, run on the Affa server<br />
affa --full-restore <JOB> [<ARCHIVE>]<br />
This syncs (via rsync) the data of the backup ARCHIVE to the computer 'remoteHostName' defined in the configuration of job JOB. <br />
<br />
Example:<br />
You have backed up your production server 'prodsrv' with a job named 'prodbox'. To restore the most recent backup, run<br />
affa --full-restore prodbox<br />
<br />
To restore the oldest archive, daily.3, run <br />
affa --full-restore prodbox daily.3<br />
<br />
{{Warning box| A full restore rebuilds the server as it was at the time of the backup. This means that all files created and all server configuration changes made after the backup will be lost. After the restore, the restored server reboots automatically. }}<br />
<br />
==== Migrating an SME7 server to new hardware using the "Affa rise" feature ====<br />
Please see this Howto: [[Moving SME to new Hardware]]<br />
<br />
<br />
==== Restoring from a USB disk to a new server ====<br />
<br />
<br />
This tip comes from http://forums.contribs.org/index.php?topic=42412.0<br />
<br />
Q) I have full backups stored on a USB hard disk connected to our Affa backup server. I need to restore an old monthly backup of our "mailbackup" job to a test server in place of the original system. From the backup server I can see the instructions for rebuilding that server from the current backup, but I want to restore a state from about a month ago, before strange things started happening. And I want to do it on a machine that is neither our backup server nor our production server. I have tried to work out how, but I am lost in the options. My goal is to run some tests.<br />
<br />
A) On your test server, configure a "mailbackup" job identical to the one on your backup server, but set RootDir correctly to /var/affa and set the "status" property to "disabled". Connect the USB drive and copy the archive of the "mailbackup" job to /var/affa. Then run affa --rise mailbackup ARCHIVE<br />
<br />
Remember that Affa places the job configuration scripts in the archive directories. In your case it is mailbackup-setup.pl. Copy it to your test server, change the RootDir and status properties, then execute it.<br />
<br />
=== FAQ ===<br />
'''Which files and directories are included by default?'''<br />
<br />
With SMEServer=no, none.<br />
<br />
With SMEServer=yes, the default SME backup list. The following command displays the list:<br />
perl -e 'use esmith::Backup;$b=new esmith::Backup;print join("\n",$b->restore_list)."\n"' <br />
<br />
<br />
'''Can I back up only image files contained in a specific directory?'''<br />
<br />
Yes. Assuming you want to back up all gif and jpg files in the directory /home/e-smith/files/ibays/pictures/files, use this configuration:<br />
db affa setprop JOBNAME SMEServer no # Do not include the SME Server default directories and files<br />
db affa setprop JOBNAME Include[0] '/home/e-smith/files/ibays/pictures/files' # Start searching here<br />
db affa setprop JOBNAME Include[1] '*/' # Search the directory tree recursively<br />
db affa setprop JOBNAME Include[2] '*.gif' # Copy the files matching this pattern<br />
db affa setprop JOBNAME Include[3] '*.jpg' <br />
db affa setprop JOBNAME Exclude[0] '*' # Exclude everything else<br />
db affa setprop JOBNAME rsyncOptions '-m' # Do not create empty directories<br />
<br />
<br />
'''How can I move a backup archive of my production server at a remote branch office to the Affa server using a USB disk? I want to avoid downloading the first full backup over the internet.'''<br />
Install Affa on your production server, then connect and mount the USB disk. Configure a job that stores the backup on the USB disk. Once the job has run, connect the USB disk to your Affa server and configure the job there in the same way. Use the setup script from the archive directory; you only have to modify the "remoteHostName" property. Now use the --move-archive option to transfer the archive to the local disk. If not already done, send the public key to your production server. Done.<br />
<br />
<br />
'''How do I back up two SME servers behind a firewall?'''<br />
First, configure port forwarding for the ssh service on your firewall. Use a non-standard port, e.g. 2200->22, for your second server. Configure a job on your Affa backup server for each of your production servers. Set the property sshPort=2200 for your second server.<br />
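As a sketch, assuming the job for the second server is named 'server2' (a hypothetical name), the sshPort property mentioned above would be set with:

```shell
# Affa then reaches the second production server through the forwarded
# non-standard port 2200 instead of the default ssh port 22
db affa setprop server2 sshPort 2200
```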
<br />
=== Uninstalling ===<br />
This removes the installed Affa package, all configuration data and all backup archives.<br />
<br />
Run the following commands for each job:<br />
affa --revoke-key JOBNAME<br />
affa --delete-job JOBNAME<br />
Check that all jobs have been deleted.<br />
affa --status<br />
Remove the Affa package<br />
rpm -e smeserver-affa<br />
and, if these files are not needed by other packages, do a general cleanup:<br />
rm -f /etc/cron.d/affa /etc/cron.d/affa-status<br />
rm -rf /home/e-smith/db/affa /home/e-smith/db/affa-report<br />
rm -rf /var/affa /var/log/affa<br />
<br />
=== Troubleshooting ===<br />
==== Affa stops working after prodserver has been restored ====<br />
'''Description:'''<br />
* Affa stops working or is broken<br />
* prodserver can no longer establish an ssh connection to affaserver<br />
* The message "SSH connection to ''prodserver ip'' failed. Did you send the public key" appears in the file /var/log/affa/''jobname''.log<br />
* Re-sending the ssh keys does not solve the problem<br />
<br />
'''Solution:'''<br />
* Ceci vient probablement du fait que la clé pour prodserver, qui est présente sur affaserver, ne correspond plus avec la clé de prodserver.<br />
* Il faut alors effacer la clé existante afin que la clé soit réenvoyée par prodserver à la prochaine connexion ssh.<br />
* En ligne de commande, éditer le fichier /root/.ssh/known_hosts et enlever la ligne correspondant à prodserver. Cette dernière débute par: ''prodserver ip'' ssh-rsa<br />
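Instead of editing the file by hand, the stale entry can also be removed with ssh-keygen, which ships with OpenSSH. A minimal sketch; the address 192.0.2.10 is a placeholder for your prodserver IP:

```shell
# Remove every known_hosts entry for the given host; ssh-keygen
# keeps a backup of the previous file as known_hosts.old.
ssh-keygen -f /root/.ssh/known_hosts -R 192.0.2.10
```

On the next ssh connection from affaserver, prodserver's current host key is recorded again.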
<br />
==== Unable to send emails after a "rise". ====<br />
'''Description:'''<br />
* unable to send emails after a "rise" on the backup server when restoring from SME7 to SME8<br />
* the qpsmtpd log shows:<br />
@400000005125f42409a8cd1c 4280 virus::clamav plugin (data_post): clamscan results: /var/spool/qpsmtpd/1361441818:4280:0: Access denied. ERROR<br />
@400000005125f42409aa9624 4280 virus::clamav plugin (data_post): ClamAV error: /usr/bin/clamdscan --stdout --config-file=/etc/clamd.conf --no-summary /var/spool/qpsmtpd/1361441818:4280:0 2>&1: 2<br />
* permissions problem on /var/spool<br />
#on the working server:<br />
ls -al /var/spool<br />
drwxr-s--- 2 qpsmtpd clamav 4096 Jun 10 12:31 qpsmtpd<br />
<br />
#On the broken (restored) server:<br />
ls -al /var/spool<br />
drwxr-x--- 2 qpsmtpd clamav 4096 Jun 10 10:46 qpsmtpd<br />
<br />
#On the fresh install before the restore:<br />
ls -al /var/spool<br />
drwxr-s--- 2 qpsmtpd clamav 4096 Apr 17 13:47 qpsmtpd<br />
see [http://bugs.contribs.org/show_bug.cgi?id=4399 SME bug #4399]<br />
<br />
<br />
'''Solution:'''<br />
on the command line, on the backup server after the "rise":<br />
chmod g+s /var/spool/qpsmtpd<br />
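The trailing "s" in drwxr-s--- is the setgid bit: files created in the spool directory then inherit the directory's clamav group, which is what allows clamdscan to read qpsmtpd's temporary files. The effect of the mode can be illustrated on a scratch directory (a sketch; the path is a throw-away temporary directory, not the real spool):

```shell
# Reproduce the expected spool permissions on a throw-away directory.
demo=$(mktemp -d)
mkdir "$demo/qpsmtpd"
chmod 2750 "$demo/qpsmtpd"       # 2 = setgid bit, 750 = rwxr-x---
stat -c '%A' "$demo/qpsmtpd"     # prints drwxr-s---, matching the working server
```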
<br />
==== Users cannot log in to the server – Important. ====<br />
After a full restore or a "rise" by Affa on SME8, everything seems to work, but users cannot log in to the server (samba, horde, etc.). The user accounts do appear in the server-manager; see [[bugzilla:6983]] and [[bugzilla:7687]]<br /><br />
{{Warning box|msg=Work in progress: please read the bug report, as there seem to be other problems with this workaround}}<br />
The cause is an incomplete restore/conversion of the files in the /etc directory, in particular the files containing user names and passwords. The first thing to do after an "affa --full-restore prodserv" is: do not delete the existing Affa backup. If an "affa --rise --all prodserv" has been performed, do not continue with "/bin/rm -rf /var/affa" as described in the wiki (that would irrevocably delete the archive).<br />
* If the rise function was used:<br />
Log in as "root" on the production server and enter on the command line:<br />
<br />
mkdir /etc/backup<br />
<br />
cp -r /etc/{passwd,samba,e-smith,group,gshadow,shadow,ssh,sudoers} /etc/backup<br />
<br />
yes | cp -r /var/affa/"JOBNAME"/"ARCHIVE"/etc/* /etc/<br />
<br />
and finally reconfigure the server:<br />
<br />
signal-event post-upgrade; signal-event reboot<br />
<br />
* If the "full-restore" function was used:<br />
Perform steps 1, 2 and 3 above (the three commands shown for the rise case).<br />
Log in as "root" on the backup server (the Affa server) and enter on the command line:<br />
<br />
export RDIR=/var/affa/"JOBNAME"/"ARCHIVE"/etc/<br />
<br />
rsync -av $RDIR "PRODSERV-IP":/etc/<br />
<br />
On the production server "PRODSERV", enter in a terminal via ssh:<br />
signal-event post-upgrade; signal-event reboot<br />
<br />
<br />
<br />
=== Reporting bugs ===<br />
Affa is listed in the [http://bugs.contribs.org/enter_bug.cgi?product=SME%20Contribs&component=smeserver-affa contribs section of the bug tracker].<br />
Please report all bugs, feature requests and documentation issues there.<br />
<br />
=== Additional information ===<br />
For performance details, the changelog, etc., see [[:Affa:Additional information]]<br />
<br />
=== References ===<br />
* Wikipedia article http://en.wikipedia.org/wiki/Rsync<br />
* Rsync website http://rsync.samba.org/<br />
* Wiki article [[Moving SME to new Hardware]]<br />
* Wiki article [[Backup with Affa and FreeDup]]<br />
* Wiki article [[Rsyncd setup on a windows computer for use with Affa backup]]<br />
* Wiki article [[Backup Windows Computers Using Affa]]<br />
* Wiki article [[Backup of ESXi Virtual Machines using Affa]]<br />
<br />
----<br />
[[Category: Contrib/fr]]<br />
[[Category: Backup/fr]]</div>Arnaud