Stats

  • Category: core
  • License: GNU Affero General Public License (AGPL) version 3
  • Updated: 2012-08-28
  • Downloads: 1199

Releases

3

Recommendations

Summary:

This community package delivers high-performance, threadable backups for Zarafa, including filter-based backup strategies.

Full description:

Zarafa-cluster-backup backs up all users matched by the defined LDAP filters and is intended to be run from cron. Overlapping backups are also possible (for example, one daily backup of all users plus separate VIP backups for selected users).

This project is actively used by customers and delivers the expected performance while retaining full brick-level restore capability. All backups between the first and the last (rotation) are stored incrementally. Please refer to the Zarafa docs for documentation on zarafa-backup and zarafa-restore.

Requirements for running zarafa-cluster-backup:

- perl (or perl-base)

- make

- ldapsearch (often in package ldap-utils)

- zarafa-backup (and the corresponding libraries necessary for zarafa-backup)

Short Howto

###########

1. Install package requirements

2. chmod u+x zarafa-cluster-backup-1.3.0.installer.sh

3. run ./zarafa-cluster-backup-1.3.0.installer.sh

4. <EDIT /etc/zarafa/backup-cluster.cfg>

5. <CREATE /etc/zarafa/backup-<NODE_NAME>.cfg files>

6. Run test, and create cronjobs (example included)
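As a sketch for step 6, two hypothetical /etc/cron.d entries could implement the overlapping strategy from the description (one nightly run for all users, one mid-day VIP run). The install path, log paths, and the idea of passing an alternate config file are illustrative assumptions, not documented options of the script; the bundled cron example file is the authoritative reference:

```shell
# /etc/cron.d/zarafa-cluster-backup (hypothetical example)
# m  h dom mon dow user command
 30  1  *   *   *  root /usr/local/sbin/zarafa-cluster-backup >>/var/log/zarafa-cluster-backup.log 2>&1
  0 13  *   *   *  root /usr/local/sbin/zarafa-cluster-backup /etc/zarafa/backup-vip.cfg >>/var/log/zarafa-cluster-backup-vip.log 2>&1
```

Note the redirections are written without stray spaces before >> and 2>&1, which also matters for the forking issue discussed in the comments below.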

Enjoy!

Warning!

The author recommends using a different release of this plugin!

Release notes:

* 1.3.0 - major changes, added public folder backup

  Added support for public folder backup

  Split config file section from working script (easy updates)

  Fixed inverted multi-server LDAP-Filter setting

  Added simple cron example file

  Added simple install script (install.sh)

The error is at line 23 of the zarafa-backup-cluster shell script:

LDAP_FILTER="${LDAP_PARTIAL_USERFILTER}"

Better:

LDAP_FILTER="(&${LDAP_PARTIAL_USERFILTER})"

This matters because LDAP_PARTIAL_USERFILTER is set to "(zarafaAccount=1)(mail=*)", which is two filter components and needs an enclosing (&...) to form a valid LDAP filter.


Jens Rabe 2944 days ago
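The fix Jens describes can be illustrated with a small shell sketch (the variable value is taken from the comment above; the echo calls only stand in for the real ldapsearch invocation):

```shell
#!/bin/sh
# Partial filter as defined in the script: two components, no boolean operator
LDAP_PARTIAL_USERFILTER="(zarafaAccount=1)(mail=*)"

# Broken: "(zarafaAccount=1)(mail=*)" is not a valid standalone LDAP filter
LDAP_FILTER="${LDAP_PARTIAL_USERFILTER}"
echo "broken: ${LDAP_FILTER}"

# Fixed: wrapping in (&...) ANDs the components into one valid filter
LDAP_FILTER="(&${LDAP_PARTIAL_USERFILTER})"
echo "fixed:  ${LDAP_FILTER}"
# prints: fixed:  (&(zarafaAccount=1)(mail=*))
```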

Even if there is only one node defined in $BACKUP_NODES, there could be more than one Zarafa server in LDAP.

I think it is best not to make any decision based on how many nodes are in BACKUP_NODES, and to leave LDAP_FILTER as "(&(zarafaUserServer=${NODE})${LDAP_PARTIAL_USERFILTER})".

Jens Rabe 2944 days ago

I use LDAP_FILTER="(&(zarafaUserServer=${NODE})${LDAP_PARTIAL_USERFILTER})", but I moved it into the last for-loop ("for NODE in ${BACKUP_NODES}...").

I've got another question:

What if the ldapsearch results are limited (e.g. by a server-side size limit)?

Jens Rabe 2944 days ago
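The placement Jens describes, with the per-node filter built inside the final loop, could look roughly like the sketch below. The variable names are taken from the comments; the loop body and node names are assumptions, and echo stands in for the real ldapsearch/zarafa-backup calls:

```shell
#!/bin/sh
LDAP_PARTIAL_USERFILTER="(zarafaAccount=1)(mail=*)"
BACKUP_NODES="node1 node2"

for NODE in ${BACKUP_NODES}; do
    # Build a per-node filter so each pass only selects that node's users
    LDAP_FILTER="(&(zarafaUserServer=${NODE})${LDAP_PARTIAL_USERFILTER})"
    echo "${NODE}: ${LDAP_FILTER}"
    # The real script would now run ldapsearch with ${LDAP_FILTER}
    # and feed the resulting user list to zarafa-backup.
done
```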

There is also something wrong with the whitespace!

Every space before >> and & is really bad, because it forks every call: instead of sticking to the defined thread count, the script launches many, many zarafa-backup processes at once.

Jens Rabe 2944 days ago
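The forking problem Jens points out (every backup command backgrounded with &, so all jobs start at once regardless of the configured thread count) is usually avoided by handing the job list to a tool that enforces a worker limit. The package's make requirement suggests the real script parallelizes via make's job control; the sketch below uses xargs -P purely as an illustration, with echo standing in for zarafa-backup:

```shell
#!/bin/sh
THREADS=2   # the configured thread count

# Run at most $THREADS "backup" jobs in parallel; xargs blocks until a
# worker slot is free, instead of forking everything immediately with &.
printf '%s\n' user1 user2 user3 user4 |
    xargs -P "${THREADS}" -I USER echo "backing up USER"
```

This prints one "backing up ..." line per user, with at most two jobs running concurrently; the output order may vary between runs.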

Hi, thanks for all the comments and input!

I'll push out a new version ASAP, since I didn't upload the most current state (yet). 

1. Paged LDAP results are supported in the new version, so server-side limits won't be hit.

2. LDAP_FILTER="(&${LDAP_PARTIAL_USERFILTER})" was indeed a "typo", which has already been fixed. Thanks for reporting this as well!

3. (&(zarafaUserServer=${NODE})${LDAP_PARTIAL_USERFILTER}) is wanted by design: if you only want to back up one node, you can. Relying solely on the LDAP result set would force complete backups of all nodes to be made.

4. I will check the last point (the forks) ASAP.

Expect a new version to be online tomorrow.

Thanks a lot for your feedback! 

Mike 2943 days ago