kevin
2010-Sep-13 15:44 UTC
[Xen-users] Hardware performance question : Disk RPM speed & Xen Performance
Hello, I am a relatively new user of Xen virtualization, so you'll have to forgive the simplistic nature of my question. I have a Dell PowerEdge R410 server (dual quad-core CPUs + 32GB RAM) that I plan to use with Xen. The dilemma I am having is whether or not to replace the 2x 500GB 7.2K RPM drives that came with the server with faster 300GB 15K RPM drives. Obviously, drives that spin faster are generally a better thing, but I am trying to avoid investing another $1,000 in these drives unless I feel it is absolutely necessary.

From the Xen documentation I couldn't get enough of an idea of how disk writes and disk speed might become a bottleneck when 20-30 VMs are ultimately running on the box. Does anyone have any experience or advice to share? Ultimately I don't mind spending the extra money to replace the drives, but I would love to hear your thoughts on what kind of actual performance increase I might expect.

Thanks!
Kevin
<admin@xenhive.com>
2010-Sep-13 16:34 UTC
RE: [Xen-users] Hardware performance question : Disk RPM speed & Xen Performance
Each 7200 RPM drive is good for about 100 IOPS; each 15K RPM SAS drive can usually handle about 200 IOPS. I would not personally try to run 20-30 VMs from two SATA drives, because it would almost surely lead to poor performance. But I am basing that statement on the type of IO I typically see in our environment. Your VMs might use totally different amounts of disk IO than my VMs do, so you may or may not need to worry about disk IO. It really depends on the type of tasks each VM is doing.

One idea would be to measure the IOPS and graph them using MRTG. Start with a few VMs and measure them for a few weeks to get an idea of how much total disk IO is needed prior to moving all of the VMs into production. Once you have actually measured the disk IO for a while, you can make an informed decision.
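If you want to try the measurement approach described above, the sketch below is one minimal way to do it: it samples /proc/diskstats twice and prints read and write IOPS in the four-line format MRTG expects from an external script target. The device name and sample interval are assumptions to adapt to your own host; iostat or sar would give you the same numbers.

#!/usr/bin/env python
# Rough sketch: report read/write IOPS for one block device in the
# four-line format MRTG expects from an external script target.
# The device name and sample interval are assumptions for this example.
import time

DEVICE = "sda"     # disk to watch in dom0 (placeholder)
INTERVAL = 10      # seconds between the two samples

def read_counters(device):
    """Return (reads completed, writes completed) from /proc/diskstats."""
    with open("/proc/diskstats") as stats:
        for line in stats:
            fields = line.split()
            if fields[2] == device:
                return int(fields[3]), int(fields[7])
    raise RuntimeError("device %s not found in /proc/diskstats" % device)

r1, w1 = read_counters(DEVICE)
time.sleep(INTERVAL)
r2, w2 = read_counters(DEVICE)

# MRTG external-script output: value1, value2, uptime string, target name.
print((r2 - r1) // INTERVAL)   # read IOPS over the interval
print((w2 - w1) // INTERVAL)   # write IOPS over the interval
print("")                      # uptime (left blank here)
print("disk IOPS on %s" % DEVICE)

Since these are rates rather than raw counters, the MRTG target reading this script would normally be given the gauge option.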
kevin
2010-Sep-13 16:49 UTC
RE: [Xen-users] Hardware performance question : Disk RPM speed & Xen Performance
Great response - thank you.
Jeff Sturm
2010-Sep-13 17:21 UTC
RE: [Xen-users] Hardware performance question : Disk RPM speed & Xen Performance
Agreed with what's said below. Traditional (Winchester) disk drives are incredibly slow devices relative to the rest of your computing environment (CPU, memory, network, etc.). You can't do much with a pair of disks: 20-30 VMs won't work very well unless they each do very little I/O.

A cheap way to get more I/O throughput is to buy a big chassis and stuff lots of disks into it, as many as you can get. The size of the disks isn't important; the quantity is. 15K drives often aren't cost effective in such arrangements. Most server chassis are optimized for PCI expansion and air flow, not storage, so an external chassis is often a necessity. If cost is a factor, you can buy a chassis that can be shared across two or more dom0s.

In our environments we tend to run anywhere from 4 up to about 8 domUs per dom0, and no more than 4 dom0s per disk array. So one of our disk arrays (typically 14 disks in RAID 10, plus a spare) may serve from 16 to 32 domUs. Overall performance is good with our workload.

It also helps to tune your Linux domUs to reduce I/O. I've found a few simple tricks that help:

- Mount ext3 partitions with "noatime"
- Configure syslogd not to sync file writes
- Get rid of disk-intensive packages like mlocate
- Use tmpfs for small, volatile file storage (e.g. /tmp)

Other tricks may be possible depending on the types of user applications you operate.

-Jeff
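As a rough back-of-envelope check on the sizing above, the short Python sketch below combines the per-drive IOPS figures quoted earlier in the thread with a simple mirrored-stripe model. The 70/30 read/write mix, the write penalty of 2, and the per-VM IOPS budget are illustrative assumptions, not measurements; measuring your actual workload, as suggested earlier, is the only reliable way to size an array.

# Back-of-envelope sizing sketch using the per-drive figures quoted in
# this thread (~100 IOPS for a 7.2K SATA drive, ~200 IOPS for 15K SAS).
# The read fraction, the RAID 1/10 write penalty of 2, and the per-VM
# budget of 75 IOPS are assumptions for illustration only.

def mirrored_array_iops(drives, iops_per_drive, read_fraction=0.7, write_penalty=2):
    """Rough usable random IOPS for a RAID 1 / RAID 10 set under a mixed workload."""
    raw = drives * iops_per_drive
    # Reads can be served by any mirror member; each logical write costs
    # write_penalty physical writes (one per mirror copy).
    return raw * read_fraction + raw * (1 - read_fraction) / write_penalty

PER_VM_BUDGET = 75  # assumed average random IOPS per busy VM

configs = [
    ("2x 7.2K SATA, RAID 1",  2, 100),
    ("2x 15K SAS, RAID 1",    2, 200),
    ("14x 15K SAS, RAID 10", 14, 200),
]

for label, drives, per_drive in configs:
    usable = mirrored_array_iops(drives, per_drive)
    print("%-22s ~%5d usable IOPS, roughly %3d VMs at %d IOPS each"
          % (label, usable, usable // PER_VM_BUDGET, PER_VM_BUDGET))

Plugging in your own measured per-VM IOPS instead of the assumed budget turns this into a quick sanity check before spending money on drives.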
Jason Showell
2010-Sep-13 17:36 UTC
RE: [Xen-users] Hardware performance question : Disk RPM speed & Xen Performance
I agree about the number of disks. Your idea of getting a bigger chassis is something we are looking at for a customer at the moment.

Our main Xen pool connects to EqualLogic arrays over iSCSI. Something I have been reading about just recently is getting a large chassis, installing 8 or 10 disks in it, installing Xen on a pair of mirrored drives, and then installing Openfiler on the machine too. This means you can present dynamic disks to your Xen dom0, share the disks with another dom0 should you want to, and add more disks in the future as you need more space. There are a few articles out there about doing it, but I haven't personally tried it yet, although I have used Openfiler without issue before.

Obviously, if you already have the machine this might not work for you, but Openfiler is a cheap way of creating a good-sized NAS / SAN with iSCSI capability.

Ja
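For anyone unfamiliar with how a LUN exported this way ends up in front of a guest: once dom0 has discovered and logged into the iSCSI target (with open-iscsi's iscsiadm, for example), the LUN appears as an ordinary local block device, and it can then be handed to a domU like any other physical disk. The classic xm-style domU configuration below is a minimal sketch of that last step; the guest name, kernel paths, and /dev/sdb device path are placeholders, not anything taken from this thread.

# Minimal xm-style domU configuration (these files are parsed as Python).
# /dev/sdb stands in for an iSCSI LUN that dom0 has already logged into
# and that now shows up as a local block device (a placeholder path).

name    = "guest01"
memory  = 1024
vcpus   = 1

kernel  = "/boot/vmlinuz-2.6-xen"        # placeholder guest kernel
ramdisk = "/boot/initrd-2.6-xen.img"     # placeholder initrd

# Present the whole shared LUN to the guest as xvda, read-write.
disk = ["phy:/dev/sdb,xvda,w"]

vif  = ["bridge=xenbr0"]

root = "/dev/xvda1 ro"

The same phy: syntax works whether the backing device is a local partition, an LVM logical volume carved out of an array, or an iSCSI LUN, which is what keeps the shared-storage setups discussed in this thread straightforward on the Xen side.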
Rudi Ahlers
2010-Sep-13 17:58 UTC
Re: [Xen-users] Hardware performance question : Disk RPM speed & Xen Performance
On Mon, Sep 13, 2010 at 7:36 PM, Jason Showell <JShowell@serverspace.co.uk> wrote:
> Openfiler is a cheap way of creating a good-sized NAS / SAN with iSCSI capability.

Openfiler hasn't been updated in 2 years. How do you find its stability?

--
Kind Regards
Rudi Ahlers
SoftDux

Website: http://www.SoftDux.com
Technical Blog: http://Blog.SoftDux.com
Office: 087 805 9573
Cell: 082 554 7532
Jean-Francois Couture
2010-Sep-13 19:55 UTC
RE: [Xen-users] Hardware performance question : Disk RPM speed & Xen Performance
> Openfiler hasn't been updated in 2 years. How do you find its stability?

Would FreeNAS be a good choice then?