Kevin Maguire
2010-Jun-24 12:03 UTC
[Xen-users] IO intensive guests - how to design for best performance
Hi

I am trying to engineer an HA Xen solution for a specific application
workload.

I will use:

*) 2 multicore systems (maybe 32 or 48 cores) with lots of RAM (256 GB)
*) dom0 OS will be RHEL 5.5
*) I would prefer to use Xen as bundled by the distribution, but if
required features are found in later releases then this can be considered
*) the servers are connected to the SAN
*) I have about 10 TB of shared storage, and will use around 20-25
RHEL paravirt guests
*) the HA I will manage with Heartbeat, and probably use clvmd for the
shared storage

My concern is to get the most out of the system in terms of I/O. The
guests will have a range of vCPUs assigned, from 1 to 8 say, and their
workload varies over time. When they are doing some work it is both
I/O and CPU intensive. It is only in unlikely use cases that all or
most guests are very busy at the same time.

The current solution to this workload is a cluster of nodes with
either GFS (using shared SAN storage) or local disks; both approaches
have some merits. However, I am not tied to that architecture at all.

There seem to be a lot of (too many!) options here:

*) create a large LUN / LVM volume on my SAN, pass it to the guests,
and use GFS/GFS2
*) same thing, except use OCFS2
*) split my SAN storage into many LUNs / LVM volumes, and export 1
chunk per VM via phys: or tap:... interfaces (a minimal config sketch
follows at the end of this post)
*) more complex PCI-passthrough configurations giving guests direct (?)
access to storage
*) create a big ext3/xfs/... file system on dom0 and export it using
NFS to the guests (a kind of loopback?)
*) others ...

I ask really for any advice and experiences of list members faced with
similar problems, and what they found best.

Thanks
KM
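For the phys: / tap: option above, a minimal guest config sketch; the VG,
LV, and guest names here are hypothetical, assuming one LVM LV per guest
carved out of a SAN-backed volume group:

    # /etc/xen/vm01 -- one raw LV exported to the PV guest as its disk
    name       = "vm01"
    memory     = 4096
    vcpus      = 4
    bootloader = "/usr/bin/pygrub"
    disk       = [ "phys:/dev/vgsan/vm01-disk,xvda,w" ]
    # file-backed alternative via blktap instead of an LV:
    # disk     = [ "tap:aio:/var/lib/xen/images/vm01.img,xvda,w" ]

The phys: path keeps the I/O entirely in the kernel block layer, which is
why it generally benchmarks ahead of file-backed images for I/O-heavy
guests.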
Fajar A. Nugraha
2010-Jun-24 12:24 UTC
Re: [Xen-users] IO intensive guests - how to design for best performance
On Thu, Jun 24, 2010 at 7:03 PM, Kevin Maguire <k.c.f.maguire@gmail.com> wrote:
> *) split my SAN storage into many LUNs / LVM volumes, and export 1
> chunk per VM via phys: or tap:... interfaces

That's a raw block device, right? It would give you the highest I/O. I
highly recommend that.

--
Fajar
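For the one-LUN-per-guest variant of this, a small sketch (the by-id path
is made up); referencing the LUN by a persistent udev alias avoids trouble
when /dev/sdX ordering changes between reboots:

    # guest config on dom0: pass a whole SAN LUN through as the guest disk,
    # using the stable /dev/disk/by-id alias rather than a bare /dev/sdX
    disk = [ "phys:/dev/disk/by-id/scsi-36006016098765432,xvda,w" ]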
Bart Coninckx
2010-Jun-24 12:24 UTC
Re: [Xen-users] IO intensive guests - how to design for best performance
On Thursday 24 June 2010 14:03:06 Kevin Maguire wrote:
> [...]
> There seem to be a lot of (too many!) options here:
>
> *) create a large LUN / LVM volume on my SAN, pass it to the guests,
> and use GFS/GFS2
> *) same thing, except use OCFS2
> *) split my SAN storage into many LUNs / LVM volumes, and export 1
> chunk per VM via phys: or tap:... interfaces
> *) more complex PCI-passthrough configurations giving guests direct (?)
> access to storage
> *) create a big ext3/xfs/... file system on dom0 and export it using
> NFS to the guests (a kind of loopback?)
> *) others ...
> [...]

Hi Kevin,

I opted for iSCSI, mainly because I need support from my distro supplier
(Novell) and because it is pretty mainstream. It is probably not the
highest-performing option. What happens on top of iSCSI is, to my
understanding, a question of whether you want to use image files (for
live migration you would need a cluster filesystem) or block devices.

I create an LVM LV on one big DRBD partition, which is a LUN on a target.
Each DomU has a separate LUN. I initiate to each LUN from all the Dom0's
and use them as block devices (a command-line sketch follows below). I
don't put any LVM on them from the Dom0's perspective, since that would
involve cLVM, which I don't have on SLES 10. Since there is no
snapshotting with cLVM anyway, I don't see the added value of LVM there.

Hope this helps somewhat.
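A rough sketch of that per-DomU iSCSI flow using open-iscsi on each Dom0;
the portal address and IQN are hypothetical:

    # discover the target and log in to the per-DomU LUN
    iscsiadm -m discovery -t sendtargets -p 192.168.1.10
    iscsiadm -m node -T iqn.2010-06.com.example:vm01 -p 192.168.1.10 --login
    # make the session come back automatically after a reboot
    iscsiadm -m node -T iqn.2010-06.com.example:vm01 -p 192.168.1.10 \
        --op update -n node.startup -v automatic
    # the LUN shows up as a block device with a stable by-path alias, e.g.
    #   /dev/disk/by-path/ip-192.168.1.10:3260-iscsi-iqn.2010-06.com.example:vm01-lun-0
    # which can then go straight into the guest config as a phys: disk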
Kevin Maguire
2010-Jun-24 14:53 UTC
Re: [Xen-users] IO intensive guests - how to design for best performance
Hi Fajar

Thanks for the reply.

On Thu, Jun 24, 2010 at 2:24 PM, Fajar A. Nugraha <fajar@fajar.net> wrote:
> On Thu, Jun 24, 2010 at 7:03 PM, Kevin Maguire <k.c.f.maguire@gmail.com> wrote:
>> *) split my SAN storage into many LUNs / LVM volumes, and export 1
>> chunk per VM via phys: or tap:... interfaces
>
> That's a raw block device, right? It would give you the highest I/O. I
> highly recommend that.

Well, 2 ways occur to me:

a) On the RAID units I create multiple smallish LUNs, and the
corresponding block devices (/dev/sdX) are passed to the guests.

b) One large LUN on the SAN, split into many LVs using LVM, and those
(raw) devices passed to the guests.

The latter is a bit more flexible, as LVM allows growing/shrinking of
LVs more easily (and more quickly) than my RAID arrays can. Note the
guests need a file system, so inside the guests I would need to create
ext3 file systems.

Kevin
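A sketch of option b) including the grow path; the VG/LV/device names are
hypothetical, and it assumes ext3 sits directly on the exported LV with no
partition table in between:

    # dom0: carve a per-guest LV out of the SAN-backed volume group
    lvcreate -L 200G -n vm01-data vgsan
    # guest config then gets:  disk = [ "phys:/dev/vgsan/vm01-data,xvdb,w" ]

    # inside the guest, once:
    mkfs.ext3 /dev/xvdb

    # later, to grow it:
    lvextend -L +50G /dev/vgsan/vm01-data    # on dom0
    resize2fs /dev/xvdb                      # in the guest (online ext3 grow)

One caveat: with an already-attached phys: device, the guest may not see
the new size until the disk is re-attached or the guest is rebooted,
depending on the Xen version.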