Monday 8 December 2008

Xen virtualization - part II

Let's continue this exercise.
Step one - hardware:
I managed to buy two IBM x3950 M2 servers, each with:
  • 2 CPUs (4 cores each)
  • 40GB RAM
  • 2 Emulex HBAs
  • 2 dual-port Ethernet adapters + 2 onboard ports
  • RSA card for management and fencing
It is important to check the compatibility matrix for your SAN configuration, but most HBAs are supported. My network configuration requires at least 6 Ethernet ports (described later); you may have different needs:

eth0 - onboard adapter
eth1 - onboard adapter
eth2 - port 0, external adapter #1
eth3 - port 1, external adapter #1
eth4 - port 0, external adapter #2
eth5 - port 1, external adapter #2

As it is going to be a cluster, both servers must (should?) have the same configuration.

Step two - software:
I decided to install Red Hat Enterprise Linux 5.2; previous versions (and other operating systems that implement Xen) do not support more than 3 Ethernet adapters dedicated to a virtual domain. There is still a limit of at most 16 disks assigned to a virtual domain, but there is a workaround - using LVM on dom0.
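
A minimal sketch of that workaround (device and volume names are placeholders, not my actual layout): instead of handing a domU many separate LUNs, aggregate them into a volume group on dom0 and present a single logical volume:

    pvcreate /dev/sdb /dev/sdc /dev/sdd          # aggregate several SAN LUNs on dom0
    vgcreate vg_guest /dev/sdb /dev/sdc /dev/sdd
    lvcreate -L 300G -n vm01_data vg_guest       # one big LV instead of three separate disks
    # in the domU config (/etc/xen/vm01) it then counts as a single disk:
    #   disk = [ 'phy:/dev/vg_guest/vm01_data,xvdb,w' ]
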
Installing RHEL is a pretty easy job, so I am not going to describe it - check www.redhat.com.

Let's focus on planning the connectivity:
  • SAN: we will need at least one disk exported to both servers so the virtual domain images can be shared. I prefer to use an LVM volume group with the clustered attribute turned on instead of GFS with flat files (but that also works quite well) - see the sketch after this list.
  • Ethernet: here comes some tricky stuff, because according to our standard all networks that are going to be used are separated. The configuration could look like this:
eth0 - administration network, basically the one 'hostname' is bound to
eth1 - backups
bond0 (eth2 + eth4) - service network (for httpd, database etc.)
bond1 (eth3 + eth5) - NAS

Bonding will be configured for the service network and the NAS network, because it is most important to keep access to running services (like the database) and to the data exported via NAS; a sample bond0 configuration follows this list.
In virtual domains the default gateway always points through the service network, so it is less important to protect connectivity to the administration network with bonding, although it would be desirable.
  • RSA card: it must be configured to support administration tasks, and it is also very useful for fencing (simply put, this is how the cluster protects resources in case of split brain, by doing STONITH - Shoot The Other Node In The Head) - to be described later, though a short preview follows after this list.
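
For the SAN part, here is a minimal sketch of creating a clustered volume group (the multipath device name is a placeholder, and clvmd must be running on both nodes):

    pvcreate /dev/mapper/shared_lun                 # shared SAN disk (placeholder name)
    vgcreate -c y vg_vms /dev/mapper/shared_lun     # -c y turns the clustered attribute on
    lvcreate -L 10G -n vm01_root vg_vms             # one LV per virtual domain image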
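
For the bonding part, this is roughly what bond0 could look like on RHEL 5.2 (bond1 is analogous; the mode and addresses are placeholders - pick them to match your switch setup):

    # /etc/modprobe.conf
    alias bond0 bonding
    alias bond1 bonding
    options bonding mode=active-backup miimon=100 max_bonds=2

    # /etc/sysconfig/network-scripts/ifcfg-bond0
    DEVICE=bond0
    IPADDR=192.168.10.10        # placeholder service-network address
    NETMASK=255.255.255.0
    BOOTPROTO=none
    ONBOOT=yes

    # /etc/sysconfig/network-scripts/ifcfg-eth2 (ifcfg-eth4 is analogous)
    DEVICE=eth2
    MASTER=bond0
    SLAVE=yes
    BOOTPROTO=none
    ONBOOT=yes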
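
And just to preview the fencing part (full details in a later post), the RSA cards typically end up declared as fence devices in /etc/cluster/cluster.conf; the addresses and credentials below are placeholders:

    <fencedevices>
      <fencedevice agent="fence_rsa" name="rsa-node1" ipaddr="10.0.0.101" login="USERID" passwd="PASSW0RD"/>
      <fencedevice agent="fence_rsa" name="rsa-node2" ipaddr="10.0.0.102" login="USERID" passwd="PASSW0RD"/>
    </fencedevices>
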
After some planning, we can install RHEL 5.2 on both nodes.

To be continued...
