Can HAAst suffer from split brain, and how does it recover?

  • teliumcustomer26
    Participant
    Post count: 1

    Hi there,

    I am a newbie, just reading and trying to wrap my mind around the HAAst solution. It looks very interesting to me.
    There is, however, one question I couldn’t find the answer to:

    How does HAAst deal with a split-brain scenario, and how is it prevented?

    Kind regards,

    Bert

    Telium Support Group
    Participant
    Post count: 265

    No, it cannot. Split brain usually refers to a mirrored file system (e.g., DRBD) in which the two sides have gone out of sync. Proper recovery from split brain typically involves manually choosing which files to keep, one file at a time (or risking the loss of all data from one side if you blindly accept that side as correct). Because other products use block-level mirroring, an interruption in the mirroring can leave files/databases in an inconsistent state and prevent Asterisk from starting or operating correctly.
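    To make that failure mode concrete, here is a toy Python sketch (our own illustration, not DRBD or any real mirroring code) of how two block-level mirrors diverge during a network partition:

        # Toy model: two peers mirror the same "blocks". During a network
        # partition both sides believe they are active and keep writing,
        # so the mirrors diverge and no automatic merge is possible.
        node_a = {"block_0": "sip.conf v1", "block_1": "voicemail v1"}
        node_b = dict(node_a)  # peers start fully in sync

        # Partition: each side accepts writes independently.
        node_a["block_0"] = "sip.conf v2 (edited on A)"
        node_b["block_1"] = "voicemail v2 (recorded on B)"

        # After the partition heals, neither side is "correct" per block;
        # an operator must pick winners by hand or discard one whole side.
        diverged = [k for k in node_a if node_a[k] != node_b[k]]
        print("diverged blocks:", diverged)  # -> ['block_0', 'block_1']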

    HAAst, on the other hand, does not use a mirrored file system; in fact, HAAst is the only HA system for Asterisk that does not use block-level mirroring. HAAst synchronizes files/directories/databases/tables from the active peer to the standby peer only, and only when both peers are confirmed healthy. File synchronization uses differential analysis and compression to send only the changes, and database synchronization uses SQL-level transactions to ensure the databases are always in a consistent state.
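    As a rough illustration of those two ideas (a minimal Python sketch under our own assumptions, not HAAst’s actual code), differential file sync plus transactional database sync might look like this:

        import hashlib, sqlite3, zlib

        def changed_files(active, standby):
            """Hash both copies and return only files whose content differs."""
            digest = lambda data: hashlib.sha256(data).hexdigest()
            return {path: data for path, data in active.items()
                    if digest(data) != digest(standby.get(path, b""))}

        active  = {"/etc/asterisk/pjsip.conf": b"bind=0.0.0.0:5060\n"}
        standby = {"/etc/asterisk/pjsip.conf": b"bind=0.0.0.0:5061\n"}

        # Only changed files cross the wire, and they are compressed first.
        for path, data in changed_files(active, standby).items():
            payload = zlib.compress(data)
            print(f"sync {path}: {len(payload)} compressed bytes")

        # SQL-level replication: the whole batch commits atomically, so the
        # standby database is never left half-written.
        db = sqlite3.connect(":memory:")
        db.execute("CREATE TABLE extensions (exten TEXT, target TEXT)")
        with db:  # one transaction: commit all rows or none
            db.execute("INSERT INTO extensions VALUES ('100', 'SIP/alice')")
            db.execute("INSERT INTO extensions VALUES ('101', 'SIP/bob')")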

    With HAAst, data is not sent if a node is detected to be in an unhealthy state, so potentially damaged files/databases are never propagated to the other peer. Once a node recovers from a failure, the data from the healthy node is sent to the recovered node to bring it back into sync. You will never encounter a split-brain scenario with HAAst.
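    Here is a hypothetical sketch of that health gate (the state names and function are our own, purely illustrative): data only moves when both peers are healthy, and after a failure the healthy node is always the source:

        from enum import Enum

        class Peer(Enum):
            HEALTHY = "healthy"
            FAILED = "failed"
            RECOVERED = "recovered"  # back online, not yet resynced

        def replicate(active, standby):
            """Decide whether, and in which direction, data may flow."""
            if Peer.FAILED in (active, standby):
                return "hold: a peer is unhealthy, nothing is sent"
            if standby is Peer.RECOVERED:
                return "resync: healthy active pushes its state to the recovered peer"
            return "normal: differential sync, active -> standby"

        print(replicate(Peer.HEALTHY, Peer.FAILED))     # hold
        print(replicate(Peer.HEALTHY, Peer.RECOVERED))  # resync
        print(replicate(Peer.HEALTHY, Peer.HEALTHY))    # normal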

    If that didn’t answer your question, please provide more details.
