Please note: I did not write this tutorial as an ethical discussion of slabbing, merely as a how-to on slabbing. A similar comparison would be a debate on the ethics of hunting versus a firearms class.
I also wrote this guide so that there are some expertise barriers, so that dum-dums can't just start a business and load up 600-700 VPSes on a single hardware node (let's call that the mistake with slabbing).
First off, you want to install your host node. For this, you'll need an Ubuntu 13.10+ install with 2GB on /boot, 8GB on /, and 2GB on swap. Make the rest of the drive an unformatted partition. You may want to argue about that 8GB on root. Don't. It's a bitch to upgrade if you skimp.
Once you have your base OS installed, make sure to log in as root. You can do this with sudo -i from the user you created during the installation process. From this point, use apt-get to install xen-hypervisor-amd64, openvswitch-datapath-dkms, and openvswitch-switch.
After this step, make SURE you have remote access to the physical node. Sometimes network cards don't play nice, and it's a royal pain in the ass to correct without access to the node (read: reinstall and try again; many DCs charge $25 a reinstall). If you're in ColoCrossing, you can use ipmicfg to shimmy yourself an IPMI user, then use a VPN to a VPS in CC-NET to connect to the IP it gives you (I am unashamed to admit I've done this on MULTIPLE occasions, the VPN part). Beyond that, I don't know other DCs' procedures.
Now that you've verified that you have remote access, edit your network config so that you have a standard bridge in Ubuntu (you don't need the bridge_ports line; OpenVSwitch handles that bit for you).
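For reference, a minimal bridge stanza in /etc/network/interfaces might look like the following. The interface name p4p1 and the 10.0.0.x addressing are assumptions; swap in your own NIC name, address, and gateway:

```
# /etc/network/interfaces (excerpt) -- hypothetical addressing, adjust to your network
auto p4p1
iface p4p1 inet manual

auto br0
iface br0 inet static
    address 10.0.0.2
    netmask 255.255.255.0
    gateway 10.0.0.1
    # note: no bridge_ports line here; the port gets attached with ovs-vsctl below
```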
After that, run the following command (replacing p4p1 with your external network interface, and 10.0.0.1 with your actual network gateway):
ifconfig p4p1 down && ovs-vsctl add-br br0 && ifconfig p4p1 0.0.0.0 up && ovs-vsctl add-port br0 p4p1 && ifup br0 && ip route add default via 10.0.0.1 && ping -c 3 8.8.8.8
(yes, the ping -c 3 8.8.8.8 is necessary, I've had OpenVSwitch fail on me a few times without it)
Once you've got that set up, you need to set up Xen.
mv /etc/grub.d/20_linux_xen /etc/grub.d/09_linux_xen
update-grub
sed 's/DEFAULT=.*/DEFAULT=xl/' -i /etc/default/xen
reboot
Sometimes Ubuntu will interpret that reboot command as a shutdown command. Boot it externally if this happens.
Now that you've rebooted successfully, you're on what I like to call step 2.
First, set your default network script in /etc/xen/xl.conf to vif-openvswitch. After this, you need to pull the netboot images from CentOS. They can be found in $CENTOSMIRROR/6.5/os/x86_64/images/pxeboot/ (you need both vmlinuz and initrd.img).
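For example (with $CENTOSMIRROR set to your mirror of choice; the paths assume the standard CentOS mirror layout, and /root as the landing spot to match the config further down):

```
# fetch the CentOS 6.5 netboot kernel and initrd into /root
cd /root
wget "$CENTOSMIRROR/6.5/os/x86_64/images/pxeboot/vmlinuz"
wget "$CENTOSMIRROR/6.5/os/x86_64/images/pxeboot/initrd.img"
```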
Now, you need to install lvm2 via apt-get. After you've done that, create a volume group on your target volume, and create a logical volume for each slab you intend to load.
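A sketch of that, assuming /dev/sda4 is your unformatted partition and using the hypothetical names VolGroup/LogVol so they line up with the disk line in the config below (size the LV to whatever you're selling per slab):

```
apt-get install lvm2
pvcreate /dev/sda4                    # mark the partition as an LVM physical volume
vgcreate VolGroup /dev/sda4           # create the volume group on top of it
lvcreate -L 100G -n LogVol VolGroup   # one logical volume per slab
```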
The basic installation Xen configuration is as follows.
kernel = "/root/vmlinuz"
ramdisk = "/root/initrd.img"
disk = [ '/dev/mapper/VolGroup-LogVol,raw,xvda,rw' ]
vif = [ 'mac=00:16:3E:01:23:45,bridge=br0' ]
name = "slab1"
memory = 10000
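Save that somewhere like /root/slab1-install.cfg (a hypothetical path) and boot it with a console attached so you can drive the installer:

```
xl create -c /root/slab1-install.cfg
```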
Run through the CentOS installer, then reconfigure Xen for a full boot:
bootloader = "/usr/lib/xen-4.3/bin/pygrub"
disk = [ '/dev/mapper/VolGroup-LogVol,raw,xvda,rw' ]
vif = [ 'mac=00:16:3E:01:23:45,bridge=br0' ]
name = "slab1"
memory = 10000
Install OpenVZ like you normally would, then voila.
A couple things you may want to keep in mind when using Xen slabbing:
Set your Dom0's maximum memory.
Set your CPUs on your DomUs with vcpus = and cpus = (remember, the count starts at 0; reserve one core on each physical CPU for Dom0's networking et al).
elevator=noop is a good kernel option to make disk IO less fucky.
Remove most of your kernel command line options, leaving just the ones you need.
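The Dom0 memory cap and the elevator option both end up in /etc/default/grub on the host; a sketch (the 2048M figure is an assumption, size it to taste, and run update-grub afterwards; elevator=noop is also worth setting inside the guests' kernel lines, since the real scheduling happens in Dom0):

```
# /etc/default/grub (excerpt)
GRUB_CMDLINE_XEN_DEFAULT="dom0_mem=2048M,max:2048M"   # cap Dom0's memory
GRUB_CMDLINE_LINUX_DEFAULT="elevator=noop"            # noop IO scheduler for the Dom0 kernel
```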