OK to use rsync or NFS share for cluster data

  • teliumcustomer22
    Participant
    Post count: 3

    There are non-Asterisk related files I would like to keep in sync between cluster nodes. Is it ok to use rsync or an NFS share to store these?

    Telium Support Group
    Participant
    Post count: 265

    You are welcome to use rsync, NFS, Samba, etc. in your cluster. However, we generally recommend keeping data on each peer and letting HAAst control all synchronization, and here’s why:

    1. HAAst only synchronizes data between peers if the peers have passed a health check. That means if one node is failing and starts to corrupt data, the corruption will not be copied to the other peer! Tools like rsync, NFS, DRBD, etc. immediately share/mirror all data, including corrupt data.
    2. By allowing HAAst to control synchronization, the HAAst event handler system lets you customize inbound data following a synchronization, e.g. update trunk information, modify the dialplan, or customize TFTP files for the local network (see the sketch below this list).
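    For illustration only, here is a rough Python sketch of what such a post-synchronization customization could do, in this case stamping peer-specific values into a synced TFTP template. The hook mechanism, file paths, and placeholder names are assumptions made for the example, not HAAst's documented event handler interface:

        #!/usr/bin/env python3
        """Hypothetical post-synchronization handler (sketch only).

        All paths and placeholder names below are assumptions for
        illustration; consult the HAAst documentation for the real
        event handler interface.
        """
        import pathlib

        # Assumed locations: a synced template and the live TFTP file.
        TEMPLATE = pathlib.Path("/etc/haast/templates/phone.cfg.tmpl")
        TFTP_OUT = pathlib.Path("/srv/tftp/phone.cfg")

        # Values specific to THIS peer's network (assumed examples).
        LOCAL_VALUES = {
            "{{SIP_SERVER}}": "192.168.10.5",
            "{{NTP_SERVER}}": "192.168.10.1",
        }

        def main() -> None:
            text = TEMPLATE.read_text()
            for placeholder, value in LOCAL_VALUES.items():
                text = text.replace(placeholder, value)
            # Write atomically so phones never fetch a half-written file.
            tmp = TFTP_OUT.with_suffix(".tmp")
            tmp.write_text(text)
            tmp.replace(TFTP_OUT)

        if __name__ == "__main__":
            main()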

    You should not place databases on a network file share (NFS/Samba), or do block-level mirroring (DRBD, iSCSI), as corruption by one peer will destroy the database for the other peer! Even worse, a failure midway through a write will corrupt both peers! Note that HAAst performs SQL transactions (not block-level access) for database synchronization, so even if a peer fails midway through a database write, neither peer is left with an invalid database state.
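    To see why transaction-level writes fail safe, consider this minimal sqlite3 sketch. It is not HAAst code, just an illustration of the principle: a failure midway through a transaction rolls the database back to its last valid state, whereas a block-level mirror would faithfully replicate the half-written pages to the peer.

        """Sketch: why transactional writes fail safe (not HAAst code)."""
        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE trunks (name TEXT PRIMARY KEY, host TEXT)")
        conn.execute("INSERT INTO trunks VALUES ('carrier-a', '10.0.0.1')")
        conn.commit()

        try:
            with conn:  # one transaction: commit on success, rollback on error
                conn.execute("UPDATE trunks SET host = '10.0.0.2' "
                             "WHERE name = 'carrier-a'")
                raise RuntimeError("peer failed midway through the write")
        except RuntimeError:
            pass

        # The failed transaction rolled back; the old, valid row survives.
        assert conn.execute("SELECT host FROM trunks").fetchone()[0] == "10.0.0.1"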

    The one exception to this rule is when you need to archive a high volume of files, or very large files, that are written once and thereafter only read. A perfect example is call center call recordings or logs. A call center can easily generate gigabytes of recordings every minute, to be referenced in the future in case of dispute or for quality assurance. Since these are large files written once and then archived, they are the perfect example of data that should be written to a server share, a common iSCSI device, etc. It would not make sense to generate the high network and disk load required to continually create a second copy of this data.
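    As a sketch of that archive pattern, the script below moves completed recordings from a local spool onto a mounted share. The paths and the "untouched for five minutes means complete" heuristic are assumptions; adapt them to your recorder's actual spool layout:

        """Sketch: archive write-once call recordings to a network share."""
        import pathlib
        import shutil
        import time

        SPOOL = pathlib.Path("/var/spool/asterisk/monitor")  # local recordings (assumed path)
        ARCHIVE = pathlib.Path("/mnt/recordings")            # NFS/iSCSI mount (assumed path)
        QUIET_SECONDS = 300  # treat files untouched this long as complete

        def archive_completed() -> None:
            now = time.time()
            for wav in SPOOL.glob("*.wav"):
                if now - wav.stat().st_mtime > QUIET_SECONDS:
                    # Move, don't copy: the share holds the single archival
                    # copy, so no ongoing mirroring load is generated.
                    shutil.move(str(wav), str(ARCHIVE / wav.name))

        if __name__ == "__main__":
            archive_completed()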
