Make a Web Application Highly Available with IP Failover, Heartbeat, Pacemaker, and DRBD on Fedora 13

High availability refers to the practice of keeping online resources available through node failure or system maintenance. This guide will demonstrate a method for using two Linodes to keep a website online, even when the node initially hosting it is powered off. IP failover, Heartbeat 3.0, Pacemaker 1.1, DRBD (Distributed Replicated Block Device), MySQL, and Apache 2.2 will be used for this example configuration.

As high availability is a complex topic with many methods available for achieving various goals, it should be noted that the method discussed here may not be appropriate for some use cases. However, it should provide a good foundation for developing a customized HA solution.

This guide enhances the setup described in our previous guide on making a static website highly available, adding DRBD to mirror partitions between two Linodes. MySQL will be used as the backend for a sample WordPress website (other dynamic applications would work as well), with /var/lib/mysql and /srv/www mirrored between both nodes to provide data redundancy and support application failover.

This guide assumes you have two active Linodes on your account, and that both are freshly deployed Fedora 13 instances. If you only have one Linode on your account, you may add another by clicking the “Add a Linode to this Account” link on the “Linodes” tab of the Linode Manager.

Important(1): Both Linodes must reside in the same datacenter for IP failover (a required component of this guide) to work. Future HA guides will address combining the principles demonstrated in this tutorial with cross-datacenter clustering techniques.

Important(2): When you deploy your Linodes, be sure not to allocate all the available disk space to the main disk images. As part of this tutorial, you’ll be creating two additional images on each Linode, so be sure to leave at least 2 GB free when deploying Fedora 13 to each. You may wish to leave more free disk space, depending on your needs; the additional disk images will be used to store web application and database data.

Terminology

Throughout this document, the following terms are used:

  • ha1 – the primary Linode
  • ha2 – the secondary Linode
  • 12.34.56.78 – the static IP address assigned to the primary Linode
  • 98.76.54.32 – the static IP address assigned to the secondary Linode
  • 55.55.55.55 – a “floating” IP address, initially assigned to the primary Linode, which may be brought up on either Linode
  • 192.168.88.88 – a private IP address, assigned to the primary Linode
  • 192.168.99.99 – a private IP address, assigned to the secondary Linode
  • CHANGEME – a password or authentication key
  • blog.bambookites.com – an example site to be made highly available

You should substitute your own values for these terms wherever they are found in this document.

Basic System Configuration

Choose one Linode to serve as the “primary” node. Log into it via SSH as root and edit its /etc/hosts file to resemble the following:

File: /etc/hosts (on primary Linode)

127.0.0.1       localhost.localdomain       localhost
12.34.56.78     ha1.bambookites.com         ha1
98.76.54.32     ha2.bambookites.com         ha2

Remember to substitute your primary and secondary Linode’s IP addresses for 12.34.56.78 and 98.76.54.32, respectively, along with appropriate hostnames for each. You will find the IP addresses for your Linodes on their “Network” tabs in the Linode Manager.

For the sake of simplicity, it is recommended that you keep the short hostnames assigned as ha1 and ha2. Next, issue the following commands to generate SSH keys for the root user on each VPS, synchronize their SSH host keys, set their hostnames, and allow passwordless logins from each node to the other. Synchronizing the SSH host keys prevents host key checking issues later on, which could otherwise occur should you perform an SSH login via a DNS name pointing to a floating IP while the secondary node is serving your content. You will be prompted to assign passphrases to the SSH keys; this is optional, and you may skip it by pressing the “Enter” key.

# Generate an SSH key for root on the primary node and copy it to the secondary
ssh-keygen -t rsa
scp ~/.ssh/id_rsa.pub root@ha2:/root/ha1_key.pub
# Generate a key on the secondary node and authorize the primary's key there
ssh root@ha2 "ssh-keygen -t rsa"
ssh root@ha2 "echo \`cat ~/ha1_key.pub\` >> ~/.ssh/authorized_keys2"
ssh root@ha2 "rm ~/ha1_key.pub"
# Authorize the secondary node's key on the primary
scp root@ha2:/root/.ssh/id_rsa.pub /root
cat ~/id_rsa.pub >> ~/.ssh/authorized_keys2
rm -f ~/id_rsa.pub
# Synchronize the SSH host keys and clear any cached host key entries
scp /etc/ssh/ssh_host* root@ha2:/etc/ssh/
rm -f ~/.ssh/known_hosts
ssh root@ha2 "service sshd restart"
# Distribute /etc/hosts and set each node's hostname
scp /etc/hosts root@ha2:/etc/hosts
echo "HOSTNAME=ha1" >> /etc/sysconfig/network
hostname "ha1"
ssh root@ha2 "echo \"HOSTNAME=ha2\" >> /etc/sysconfig/network"
ssh root@ha2 "hostname \"ha2\""

Assign Static IP Addresses

By default, when Linodes are booted, DHCP is used to assign IP addresses. This works fine when a Linode has only one IP address, as DHCP will always assign that address to it. When a Linode has (or may have) multiple IP addresses assigned, as in this configuration, an explicit static configuration is required.

On the primary Linode, edit the /etc/sysconfig/network-scripts/ifcfg-eth0 file to resemble the following, making sure the values entered match those shown on the “Network” tab for the primary Linode:

File: /etc/sysconfig/network-scripts/ifcfg-eth0 (on primary node)

DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
IPADDR=12.34.56.78
NETMASK=255.255.255.0
GATEWAY=12.34.56.1

Issue the following command to restart networking on the primary Linode:

service network restart

On the primary Linode, edit the /etc/resolv.conf file to resemble the following. Replace 11.11.11.11 and 22.22.22.22 with the DNS servers listed on the Linode’s “Network” tab in the Linode Manager.

File: /etc/resolv.conf (on primary node)

nameserver 11.11.11.11
nameserver 22.22.22.22
options rotate

On the secondary Linode, edit the /etc/sysconfig/network-scripts/ifcfg-eth0 file to resemble the following, making sure the values entered match those shown on the “Network” tab for the secondary Linode:

File: /etc/sysconfig/network-scripts/ifcfg-eth0 (on secondary node)

DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
IPADDR=98.76.54.32
NETMASK=255.255.255.0
GATEWAY=98.76.54.1

On the primary Linode, issue the following commands to copy the /etc/resolv.conf file to the secondary Linode and restart networking:

scp /etc/resolv.conf root@ha2:/etc/
ssh root@ha2 "service network restart"

Set Up IP Failover Linkage

First, add a second IP address to your primary Linode by navigating to the “Extras” tab for this Linode and selecting one additional IP as an extra. After purchasing the additional IP, visit the “Network” tab for the primary Linode and make a note of the newly added IP address. This will serve as your “floating” IP.

Next, navigate to the “Network” tab for the secondary Linode and click the “IP Failover linkage” link in the bottom left of the screen. Check the box next to the newly added IP address and click the “Submit” button. You will see a message above the IP list informing you that IP failover configuration for this Linode has been updated.

Important(1): After configuring IP failover linkage, reboot both Linodes from their “Dashboard” tabs in the Linode Manager. This will allow the new IP address to be routed properly, and will allow you to verify that both Linodes come back up properly.

Important(2): Before proceeding, add a DNS entry for your example site (“blog.bambookites.com” in our case), pointing to the newly assigned “floating” IP address. Please note that DNS may take some time to propagate fully; for testing purposes, you may wish to add an entry to your workstation’s /etc/hosts file (Mac OS X or Linux), pointing the test site to the floating IP, as shown below. Microsoft Windows users may add an entry to their local workstation’s hosts file as well, although its location will vary according to the version of Windows installed. As a third option, you may add an entry to your LAN’s local DNS server to point the site to the floating IP.
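
For example, on a Linux or Mac OS X workstation, the hosts entry might resemble the following (substituting your own floating IP and site name):

File: /etc/hosts (on your local workstation)

55.55.55.55     blog.bambookites.com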

Install Heartbeat, Pacemaker, and Apache 2

On the primary Linode, issue the following commands to install Heartbeat, Pacemaker, and Apache 2. The second set of commands will ensure that the same packages are installed on the secondary Linode as well.

yum update -y
yum install heartbeat pacemaker httpd -y
chkconfig httpd off
ssh root@ha2 "yum update -y"
ssh root@ha2 "yum install heartbeat pacemaker httpd -y"
ssh root@ha2 "chkconfig httpd off"

After issuing the commands listed above, the required packages will be installed. Additionally, the system startup links for Apache will be disabled on both Linodes, as Pacemaker will be responsible for starting and stopping the web server daemon as necessary.
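
If you'd like to confirm that the startup links are disabled, you can check with chkconfig; every runlevel should be reported as “off”:

chkconfig --list httpd
ssh root@ha2 "chkconfig --list httpd"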

Configure Apache 2

On the primary Linode, create a file named /etc/httpd/conf.d/blog.bambookites.com.conf with the following contents. Substitute the “floating” address for 55.55.55.55 in the example shown below. In this example, the site “blog.bambookites.com” is being made highly available.

File: /etc/httpd/conf.d/blog.bambookites.com.conf (on primary node)

NameVirtualHost 55.55.55.55:80
<VirtualHost 55.55.55.55:80>
     ServerAdmin support@bambookites.com
     ServerName blog.bambookites.com
     DocumentRoot /srv/www/bambookites.com/blog/public_html/
     ErrorLog /srv/www/bambookites.com/blog/logs/error.log
     CustomLog /srv/www/bambookites.com/blog/logs/access.log combined
</VirtualHost>

On the primary Linode, issue the following command to copy the Apache configuration file to the secondary Linode:

scp /etc/httpd/conf.d/blog.bambookites.com.conf root@ha2:/etc/httpd/conf.d/
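
You may wish to check the Apache configuration syntax on both nodes before proceeding. Note that httpd -t may warn that the DocumentRoot does not exist yet; that's expected, as the DRBD-backed directories will be created later in this guide:

httpd -t
ssh root@ha2 "httpd -t"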

Configure Heartbeat

On the primary Linode, create a file named /etc/ha.d/ha.cf with the following contents. Replace 98.76.54.32 with the statically assigned IP address of the secondary Linode.

File: /etc/ha.d/ha.cf (on primary node)

logfile /var/log/heartbeat.log
logfacility local0
# Interval, in seconds, between heartbeat packets
keepalive 2
# Declare the peer dead after 15 seconds without a heartbeat
deadtime 15
# Log a warning after 5 seconds without a heartbeat
warntime 5
# Allow extra time for networking to come up at boot
initdead 120
udpport 694
# Send unicast heartbeats to the secondary Linode's static IP
ucast eth0 98.76.54.32
auto_failback on
node ha1
node ha2
use_logd no
# Start the Pacemaker cluster resource manager
crm respawn

On the secondary Linode, create a file named /etc/ha.d/ha.cf with the following contents. Replace 12.34.56.78 with the statically assigned IP address of the primary Linode.

File: /etc/ha.d/ha.cf (on secondary node)

logfile /var/log/heartbeat.log
logfacility local0
keepalive 2
deadtime 15
warntime 5
initdead 120
udpport 694
ucast eth0 12.34.56.78
auto_failback on
node ha1
node ha2
use_logd no
crm respawn

On the primary Linode, create the file /etc/ha.d/authkeys with the following contents. Make sure to change “CHANGEME” to a strong password consisting of letters and numbers.

File: /etc/ha.d/authkeys (on primary node)

auth 1
1 sha1 CHANGEME

Issue the following commands to set proper permissions on this file, copy it to the secondary Linode, and start the Heartbeat service on both nodes:

chmod 600 /etc/ha.d/authkeys
service heartbeat start
scp /etc/ha.d/authkeys root@ha2:/etc/ha.d/
ssh root@ha2 "chmod 600 /etc/ha.d/authkeys"
ssh root@ha2 "/etc/init.d/heartbeat start"

Configure PV-GRUB, Disk Images, and Networking

Configure PV-GRUB

Although DRBD has been included in the vanilla mainline Linux kernel since version 2.6.33, it isn’t compiled into Linode kernels as of this writing. Even if it were, the DRBD version in the kernel must match the version of the DRBD userspace tools, so you will need to run a stock Fedora kernel. Fortunately, this is easy to accomplish via PV-GRUB on your Linodes.

On the primary Linode, issue one of the following sets of commands to install a kernel and the grub bootloader, depending on your chosen architecture:

32-bit:

yum install -y kernel-PAE.i686 grub
ssh root@ha2 "yum install -y kernel-PAE.i686 grub"

64-bit:

yum install -y kernel.x86_64 grub
ssh root@ha2 "yum install -y kernel.x86_64 grub"

On the primary Linode, issue the following command to obtain the name of the kernel image and initrd:

ls /boot/vmlinuz* && ls /boot/initramfs*

On the primary Linode, create a file named /boot/grub/menu.lst with the following contents. Edit the “kernel” and “initrd” lines to reflect the filenames from the output of the previous command (if they differ).

File: /boot/grub/menu.lst (on primary Linode)

# groot=(hd0)
# indomU=false
timeout 5
title Fedora 13
root (hd0)
kernel /boot/vmlinuz-2.6.33.5-124.fc13.i686.PAE root=/dev/xvda ro
initrd /boot/initramfs-2.6.33.5-124.fc13.i686.PAE.img

Save the file and issue the following command to copy it to the secondary Linode.

scp /boot/grub/menu.lst root@ha2:/boot/grub/

Create Disk Images

In the Linode Manager, create two extra disk images for each Linode. The first additional disk image for each Linode should be called “DRBD /var/lib/mysql”, while the second should be called “DRBD /srv/www”. For testing purposes, each may be 1000 MB in size. The disks must be created as “raw” images (not ext3). Please note that each Linode’s corresponding disk images must be exactly the same size, as they will be replicated between the two nodes using DRBD later.

Add Private IPs to the Linodes

Under the “Network” tab for each Linode, click the button entitled “Add a Private IP to this Linode.” Make a note of the private IP address assigned to each node.

Update Configuration Profiles

In each Linode’s configuration (in the Linode Manager), change the “Kernel” dropdown to select “pv-grub-x86_32” (or “pv-grub-x86_64” if you deployed the 64-bit version of Fedora). Assign your newly created disk images to the configuration profile as follows:

  • /dev/xvdc – “DRBD /var/lib/mysql”
  • /dev/xvdd – “DRBD /srv/www”

Save the profiles and reboot both Linodes from their dashboards to make sure everything comes back up properly under pv-grub. Issue the following command on each to make sure the newly created disk devices are available:

ls -al /dev/xvdc /dev/xvdd

You should see output similar to the following:

brw-rw---- 1 root disk 202, 32 Jul  6 13:51 /dev/xvdc
brw-rw---- 1 root disk 202, 48 Jul  6 13:51 /dev/xvdd

Issue the following command on each Linode to make sure you’re running the stock Fedora kernel:

uname -a

You should see output similar to the following:

Linux ha1 2.6.33.5-124.fc13.i686.PAE #1 SMP Fri Jun 11 09:42:24 UTC 2010 i686 i686 i386 GNU/Linux

Configure Private Networking

Create the /etc/sysconfig/network-scripts/ifcfg-eth0:1 file on each Linode, with contents similar to the following. Be sure to replace 192.168.99.99 with the private IP address that corresponds to each Linode (available from the “Network” tab in the Linode Manager), and be sure to specify the subnet mask as 255.255.128.0.

File: /etc/sysconfig/network-scripts/ifcfg-eth0:1 (on each Linode)

# Configuration for eth0:1
DEVICE=eth0:1
BOOTPROTO=none
# This line ensures that the interface will be brought up during boot.
ONBOOT=yes
# Private IP address.
IPADDR=192.168.99.99
NETMASK=255.255.128.0

On both Linodes, issue the following command to restart networking.

service network restart

You should be able to ping the private IP of ha1 from ha2, and vice versa. For example, the following commands, run from the primary Linode (substituting your own private addresses), should both succeed:
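
ping -c 3 192.168.99.99
ssh root@ha2 "ping -c 3 192.168.88.88"

Next, you'll configure DRBD to replicate your additional disk images between your Linodes.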

Install and Configure DRBD

On the primary Linode, issue the following commands to install DRBD and its control utilities on both nodes. The chkconfig commands disable the system startup links for DRBD, as it will be controlled by Pacemaker.

yum install -y drbd drbd-heartbeat drbd-pacemaker drbd-utils
chkconfig drbd off
ssh root@ha2 "yum install -y drbd drbd-heartbeat drbd-pacemaker drbd-utils"
ssh root@ha2 "chkconfig drbd off"

Next, reboot both Linodes from their Linode Manager dashboards. Once they’ve come back online, log back into the primary Linode via SSH and create the file /etc/drbd.d/r0.res. Be sure to replace 192.168.88.88 and 192.168.99.99 with the private addresses of your primary and secondary Linodes, respectively. Change the shared-secret directive to a strong password consisting of letters and numbers.

File: /etc/drbd.d/r0.res (on primary Linode)

resource r0 {
    protocol C;
    syncer {
        rate 4M;
    }
    startup {
        wfc-timeout 15;
        degr-wfc-timeout 60;
    }
    net {
        cram-hmac-alg sha1;
        shared-secret "CHANGEME";
    }
    on ha1 {
        device /dev/drbd0;
        disk /dev/xvdc;
        address 192.168.88.88:7788;
        meta-disk internal;
    }
    on ha2 {
        device /dev/drbd0;
        disk /dev/xvdc;
        address 192.168.99.99:7788;
        meta-disk internal;
    }
}

On the primary Linode, create another file called /etc/drbd.d/r1.res with the following contents. Again, be sure to modify the address and shared-secret lines appropriately.

File: /etc/drbd.d/r1.res (on primary Linode)

resource r1 {
    protocol C;
    syncer {
        rate 4M;
    }
    startup {
        wfc-timeout 15;
        degr-wfc-timeout 60;
    }
    net {
        cram-hmac-alg sha1;
        shared-secret "CHANGEME";
    }
    on ha1 {
        device /dev/drbd1;
        disk /dev/xvdd;
        address 192.168.88.88:7789;
        meta-disk internal;
    }
    on ha2 {
        device /dev/drbd1;
        disk /dev/xvdd;
        address 192.168.99.99:7789;
        meta-disk internal;
    }
}

On the primary Linode, issue the following commands to copy the DRBD configuration files to the secondary Linode, zero out the new disk images on both Linodes, and create DRBD devices.

scp /etc/drbd.d/* root@ha2:/etc/drbd.d/
dd if=/dev/zero of=/dev/xvdc bs=1024k
dd if=/dev/zero of=/dev/xvdd bs=1024k
drbdadm create-md r0
drbdadm create-md r1
ssh root@ha2 "dd if=/dev/zero of=/dev/xvdc bs=1024k"
ssh root@ha2 "dd if=/dev/zero of=/dev/xvdd bs=1024k"
ssh root@ha2 "drbdadm create-md r0"
ssh root@ha2 "drbdadm create-md r1"

On both Linodes, in separate terminal windows, issue the following command to start DRBD. The command must be issued on both nodes in fairly quick succession to avoid hitting the wfc-timeout set earlier.

service drbd start

On the primary Linode, issue the following commands to start synchronizing the r0 and r1 disk resources.

drbdadm -- --overwrite-data-of-peer primary r0
drbdadm -- --overwrite-data-of-peer primary r1

DRBD will begin synchronizing disk block data between your Linodes. You can watch the progress by issuing the following command on the primary Linode:

watch -n3 cat /proc/drbd

You should see output similar to the following:

Every 3.0s: cat /proc/drbd                              Tue Jul  6 14:21:12 2010
version: 8.3.7 (api:88/proto:86-91)
srcversion: EE47D8BF18AC166BE219757
 0: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r----
    ns:145088 nr:0 dw:0 dr:150652 al:0 bm:8 lo:421 pe:288 ua:1761 ap:0 ep:1 wo:b
 oos:879992
        [=>..................] sync'ed: 14.4% (879992/1023932)K
        finish: 0:05:25 speed: 2,660 (3,268) K/sec
 1: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r----
    ns:102268 nr:0 dw:0 dr:107496 al:0 bm:6 lo:523 pe:270 ua:1779 ap:0 ep:1 wo:b
 oos:922740
        [=>..................] sync'ed: 10.0% (922740/1023932)K
        finish: 0:04:39 speed: 3,252 (2,660) K/sec

You don’t have to wait for the disks to be fully synced, although you may if you wish. Next, issue the following commands on the primary Linode to take the primary role for both DRBD resources and create an ext3 filesystem on each DRBD device:

drbdadm primary r0
mkfs.ext3 /dev/drbd0
drbdadm primary r1
mkfs.ext3 /dev/drbd1
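
At this point, /proc/drbd on the primary Linode should show both resources in the “Primary/Secondary” role (and “UpToDate/UpToDate” once the initial sync completes). You can check with:

cat /proc/drbd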

Install and Configure MySQL

On the primary Linode, issue the following commands to install the MySQL database server. The second set of commands will install it on the secondary Linode as well.

yum install -y mysql-server
service mysqld start
mysql_secure_installation
service mysqld stop
chkconfig mysqld off
ssh root@ha2 "yum install -y mysql-server"
ssh root@ha2 "service mysqld start"
ssh root@ha2 "mysql_secure_installation"
ssh root@ha2 "service mysqld stop"
ssh root@ha2 "chkconfig mysqld off"

On the primary Linode, issue the following commands to move MySQL’s data files to the r0 DRBD disk resource and delete the data files on the secondary Linode:

mkdir /root/mysql_bak
cp -Ra /var/lib/mysql/* /root/mysql_bak/
rm -rf /var/lib/mysql/*
drbdadm primary r0
mount /dev/drbd0 /var/lib/mysql
cp -Ra /root/mysql_bak/* /var/lib/mysql/
chown mysql:mysql /var/lib/mysql
umount /var/lib/mysql
drbdadm secondary r0
ssh root@ha2 "rm -rf /var/lib/mysql/*"

Prepare Web Directories and Data

On the primary Linode, issue the following commands to mount the web data DRBD disk to /srv/www, create directories for your site, and put WordPress installation files into place:

mkdir /srv/www
ssh root@ha2 "mkdir /srv/www"
mount /dev/drbd1 /srv/www
mkdir -p /srv/www/bambookites.com/blog/public_html
mkdir /srv/www/bambookites.com/blog/logs
cd /srv/www/bambookites.com/blog/
wget http://wordpress.org/latest.tar.gz
tar -xzf latest.tar.gz
rm -f latest.tar.gz
mv wordpress/* public_html/
rmdir wordpress/
chown -R apache:apache /srv/www/bambookites.com/blog/public_html
cd /root
umount /srv/www
drbdadm secondary r1

On the primary Linode, issue the following commands to install packages required for PHP and PHP-MySQL connectivity on both Linodes:

yum install -y php php-pear php-mysql
ssh root@ha2 "yum install -y php php-pear php-mysql"

Configure Cluster Resources

Note that unless you have a different editor set via the “EDITOR” environment variable, the cluster resource manager will use vim as its editing environment. If you would prefer to use nano instead, you may set this permanently by issuing the following commands on both Linodes:

export EDITOR=/bin/nano
echo "export EDITOR=/bin/nano" >> .bashrc

For the purposes of these instructions, it will be assumed that you are using vim as your editor. On the primary Linode, issue the following command to start the cluster resource manager in “edit” mode:

crm configure edit

You will be presented with information resembling the following. If you don’t see anything, enter “:q” to quit the editor and wait a minute before restarting it.

node $id="d518c2fc-2c46-484c-918a-7178a6c6c41a" ha1
node $id="122319f5-9b43-41a8-a226-e62ed09449d6" ha2
property $id="cib-bootstrap-options" \
        dc-version="1.1.1-972b9a5f68606f632893fceed658efa085062f55" \
        cluster-infrastructure="Heartbeat"

To begin editing your configuration, press the “i” key. To leave edit mode, press “Ctrl+c”. To quit without saving any changes, press “:” and enter “q!”. To save changes and quit, press “:” and enter “wq”.

Insert the following lines in between the second “node” line at the top of the configuration and the “property” line at the bottom. Important: Be sure to replace both instances of 55.55.55.55 with the “floating” IP address used earlier.

primitive apache2 lsb:httpd \
        op start interval="0" timeout="60" \
        op stop interval="0" timeout="120" start-delay="15"
primitive drbd_mysql ocf:linbit:drbd \
        params drbd_resource="r0" \
        op monitor interval="15s"
primitive drbd_webfs ocf:linbit:drbd \
        params drbd_resource="r1" \
        op monitor interval="15s" \
        op start interval="0" timeout="240" \
        op stop interval="0" timeout="100"
primitive fs_mysql ocf:heartbeat:Filesystem \
        params device="/dev/drbd/by-res/r0" directory="/var/lib/mysql" fstype="ext3" \
        op start interval="0" timeout="60" \
        op stop interval="0" timeout="120"
primitive fs_webfs ocf:heartbeat:Filesystem \
        params device="/dev/drbd/by-res/r1" directory="/srv/www" fstype="ext3" \
        op start interval="0" timeout="60" \
        op stop interval="0" timeout="120"
primitive ip1 ocf:heartbeat:IPaddr2 \
        params ip="55.55.55.55" nic="eth0:0" \
        op monitor interval="5s"
primitive ip1arp ocf:heartbeat:SendArp \
        params ip="55.55.55.55" nic="eth0:0"
primitive mysql ocf:heartbeat:mysql \
        params binary="/usr/bin/mysqld_safe" config="/etc/my.cnf" user="mysql" \
        group="mysql" log="/var/log/mysqld.log" pid="/var/run/mysqld/mysqld.pid" \
        datadir="/var/lib/mysql" socket="/var/lib/mysql/mysql.sock" \
        op monitor interval="30s" timeout="30s" \
        op start interval="0" timeout="120" \
        op stop interval="0" timeout="120"
group WebServices ip1 ip1arp fs_webfs fs_mysql apache2 mysql \
        meta target-role="Started"
ms ms_drbd_mysql drbd_mysql \
        meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"
ms ms_drbd_webfs drbd_webfs \
        meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"
colocation apache2_with_ip inf: apache2 ip1
colocation apache2_with_mysql inf: apache2 ms_drbd_mysql:Master
colocation apache2_with_webfs inf: apache2 ms_drbd_webfs:Master
colocation ip_with_ip_arp inf: ip1 ip1arp
colocation mysqlfs_on_drbd inf: fs_mysql ms_drbd_mysql:Master
colocation webfs_on_drbd inf: fs_webfs ms_drbd_webfs:Master
order apache-after-webfs inf: fs_webfs:start apache2:start
order arp-after-ip inf: ip1:start ip1arp:start
order fs-mysql-after-drbd inf: ms_drbd_mysql:promote fs_mysql:start
order fs-webfs-after-drbd inf: ms_drbd_webfs:promote fs_webfs:start
order mysql-after-fs-mysql inf: fs_mysql:start mysql:start

Change the “property” section to resemble the following excerpt. You’ll be adding an “expected-quorum-votes” entry because your cluster has only two nodes, along with “stonith-enabled” and “no-quorum-policy” settings. Don’t forget the trailing “\” after the “cluster-infrastructure” line.

property $id="cib-bootstrap-options" \
        dc-version="1.1.1-972b9a5f68606f632893fceed658efa085062f55" \
        cluster-infrastructure="Heartbeat" \
        expected-quorum-votes="1" \
        stonith-enabled="false" \
        no-quorum-policy="ignore"

Add the following excerpt after the “property” section:

rsc_defaults $id="rsc-options" \
        resource-stickiness="100"

After making these changes, press “Ctrl+c” and enter “:wq” to save the configuration and exit the editor.
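
To confirm that your changes were committed, you can display the active configuration with the following command:

crm configure show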

Monitor Cluster Resources