Hello all,

I would like your advice/opinion on setting up a redundant Xen infrastructure. We have two identical boxes now running Xen: vs01 and vs02, interconnected with a crossover cable on eth1.

What I would like to achieve is my virtual hosts not being dependent on (a) physical hardware and (b) potential Xen failures due to misconfiguration anywhere in the machine.

All virtual servers are running on vs01, and I plan to rsync the whole /etc/xen directory to vs02 every night. I tried using scp and that worked OK.

Would this be a good setup to realize redundancy? Any opinions or experiences are welcome - thanks.

John
On Thu, May 11, 2006 at 11:13:48PM +0200, John wrote:
> We have two identical boxes now running Xen: vs01 and vs02,
> interconnected with a crossover cable on eth1.
> [...]
> All virtual servers are running on vs01, and I plan to rsync the whole
> /etc/xen directory to vs02 every night.
>
> Would this be a good setup to realize redundancy?

Not particularly.

What you want is DRBD syncing your block devices, with heartbeat maintaining the "services" of your domUs. That'll save you from hardware failures, and really nasty Xen misconfigurations (of the "I b0rk3d my grub" severity). It won't save you from minor stuff-ups, like giving a domU the wrong bridge -- but then again, there's not much that will manage to save yourself from yourself like that.

To ensure that configurations are properly synced across dom0s, I'd highly recommend a structured configuration management system like Puppet. The domUs can be managed using the same tool, as well.

- Matt

-- 
I don't do veggies if I can help it.  -- stevo
If you could see your colon, you'd be horrified.  -- Iain Broadfoot
If he could see his colon, he'd be management.  -- David Scheidt
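P.S. Since people always ask what the DRBD side looks like: one resource per domU block device, roughly like the sketch below. The hostnames, LV names, and addresses are placeholders for whatever your setup uses, so treat it as a shape to copy rather than a config to paste, and check it against the drbd.conf man page for your version.

  resource vm1 {
    protocol C;                     # synchronous replication over the crossover link
    on vs01 {
      device    /dev/drbd0;         # this is what the domU config points at
      disk      /dev/vg0/vm1;       # local backing device (placeholder name)
      address   192.168.1.1:7788;   # IP on the crossover interface
      meta-disk internal;
    }
    on vs02 {
      device    /dev/drbd0;
      disk      /dev/vg0/vm1;
      address   192.168.1.2:7788;
      meta-disk internal;
    }
  }

The domU then uses phy:/dev/drbd0, and heartbeat's job is to make sure the node about to run the domain is DRBD primary first.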
Matthew Palmer wrote:
> What you want is DRBD syncing your block devices, with heartbeat
> maintaining the "services" of your domUs. [...]
> To ensure that configurations are properly synced across dom0s, I'd
> highly recommend a structured configuration management system like
> Puppet. The domUs can be managed using the same tool, as well.

I don't suppose anybody's written a HOWTO on this. I'm looking to do something similar.

Miles Fidelman
On Thu, May 11, 2006 at 09:11:48PM -0400, Miles Fidelman wrote:
> Matthew Palmer wrote:
> > What you want is DRBD syncing your block devices, with heartbeat
> > maintaining the "services" of your domUs. [...]
>
> I don't suppose anybody's written a HOWTO on this. I'm looking to do
> something similar.

Probably not -- it's pretty trivial to do the Xen-specific parts of it. Use the DRBD HOWTO to get the DRBD portion of it working, and the heartbeat docs for the failover. The only Xen-specific bit is the heartbeat resource file, which is about 10 lines of shell.

These days I have it all in a config management program and tell the system "you're a failover dom0 for VM <X>" and the system works it all out for me. Now all I have to do is write a small program that gathers customer requirements, and I can retire to the mountains. <grin>

- Matt

-- 
Sure, it's possible to write C in an object-oriented way. But, in practice,
getting an entire team to do that is like telling them to walk along a
straight line painted on the floor, with the lights off.  -- Tess Snider, slug-chat@slug.org.au
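P.S. Since I said it was about 10 lines of shell, here it is, more or less -- typed from memory rather than pasted, and the config path is whatever your distribution uses:

  #!/bin/sh
  # /etc/ha.d/resource.d/domU -- heartbeat runs this as "domU <name> start|stop|status"
  DOMU="$1"
  case "$2" in
    start)
      xm create "/etc/xen/${DOMU}"
      ;;
    stop)
      xm shutdown -w "${DOMU}"
      ;;
    status)
      if xm list "${DOMU}" >/dev/null 2>&1; then
        echo running
      else
        echo stopped
      fi
      ;;
  esac

That pairs with a domU::<name> entry in haresources, which passes <name> in as the first argument.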
> These days I have it all in a config management program and tell the
> system "you're a failover dom0 for VM <X>" and the system works it all
> out for me. Now all I have to do is write a small program that gathers
> customer requirements, and I can retire to the mountains. <grin>

Is that using Puppet or some proprietary program? Would you be willing to share/donate it? I think it's quite common that people want to have a redundant setup.

Regards,

Matthijs ter Woord
On Friday 12 May 2006 00:15, Matthew Palmer wrote:
> What you want is DRBD syncing your block devices, with heartbeat
> maintaining the "services" of your domUs. That'll save you from hardware
> failures, and really nasty Xen misconfigurations (of the "I b0rk3d my
> grub" severity). [...]

What I've been working on is using a separate box to provide the storage for the domUs through iSCSI. So at the moment I have two machines set up as dom0s and one NAS box using iscsitarget to provide storage. As long as I use the /dev/disk/by-id names for the disks, I can run xm migrate --live and everything switches over imperceptibly.

All we need now is to use heartbeat to check if domUs or a dom0 have failed and start them up on the other as appropriate. It would also be useful to consider some load-balancing, though that's a longer-term thought.

The other thing that I'm thinking hard about is how to decide which domUs come up automatically on which dom0, though heartbeat may be able to help with that.

Matthew

-- 
Matthew Wild   Tel.: +44 (0)1235 445173   M.Wild@rl.ac.uk   URL http://www.ukssdc.ac.uk/
UK Solar System Data Centre and World Data Centre - Solar-Terrestrial Physics, Chilton
Rutherford Appleton Laboratory, Chilton, Didcot, Oxon, OX11 0QX
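P.S. In case it saves someone some digging, the only interesting bits are the disk line in the domU config and the migrate command itself. The ID below is a made-up placeholder -- use whatever /dev/disk/by-id actually shows for your iSCSI LUN:

  # in the domU config file, e.g. /etc/xen/myvm (names are illustrative)
  disk = [ 'phy:/dev/disk/by-id/scsi-EXAMPLE-ID-part1,sda1,w' ]

  # then, from the dom0 currently running the guest:
  xm migrate --live myvm vs02

Both dom0s see the same by-id name because the storage comes from the same iSCSI target, which is what makes the live migration seamless.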
On Fri, May 12, 2006 at 09:38:09AM +0100, Matthew Wild wrote:
> What I've been working on is using a separate box to provide the storage
> for the domUs through iSCSI. So at the moment I have two machines set up
> as dom0s and one NAS box using iscsitarget to provide storage. As long as
> I use the /dev/disk/by-id names for the disks, I can run xm migrate
> --live and everything switches over imperceptibly.

Shared storage is the best if you can swing it. It's not necessarily the cheapest option in the world, though, if you want real reliability -- I'd never just use another box as my NAS, because that's a new and possibly even more dangerous single point of failure. And fully-redundant, channel-bonded everything can get costly and complex quickly. DRBD has its quirks, but it's nicely redundant and *cheap*.

> All we need now is to use heartbeat to check if domUs or a dom0 have
> failed and start them up on the other as appropriate.

That's easy enough to do -- you just specify domU::<hostname> in the haresources line for each of your domUs, and write a quick domU script for resources.d to up/down the domains. Bonus points are available for checking whether we're doing a gentle move (hb_takeover instead of a DR event) and using live migration to minimise downtime.

> It would also be useful to consider some load-balancing, though that's a
> longer-term thought.

You mean automatically distribute domUs across the cluster to maintain a consistent(ish) load average? That's non-trivial to do, but with live migration you can have a fairly minimal amount of downtime while the hosts move around.

> The other thing that I'm thinking hard about is how to decide which domUs
> come up automatically on which dom0, though heartbeat may be able to help
> with that.

With the simple two-node heartbeat, yeah, you just list the domUs in the haresources file and heartbeat manages the start/stop. For the simplest case -- one active dom0, one reserve -- you simply list all your domUs against the "primary" node (a fairly arbitrary distinction, of course). For a slightly more interesting configuration, you can list half your domUs as primarying against each of the two dom0s, so you get maximum performance in the general case. auto_failback is probably a given in this situation -- and here the live migration support in your domU script will really shine.

The problem with both of these configurations, by default, is that you'll only ever be able to allocate half your cluster-available RAM, so that there's enough RAM available to have all the domains on one machine during failover. The coolest configuration is when you have the failover scripts automatically balloon down all of the running domains to half their RAM when you're bringing the other node's domUs across (and then balloon them back up afterwards when all the other domUs go away), so you can normally have all your domains running at full memory, for maximum memory usage. Of course, you want to make sure you give all your domUs a little extra swap for those occasional DR moments.

- Matt
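P.S. To make the haresources bit concrete, it's just lines like these (hostnames and domU names invented, obviously):

  # /etc/ha.d/haresources -- half the domUs "belong" to each dom0
  vs01 domU::web01 domU::mail01
  vs02 domU::db01 domU::build01

and the balloon-down trick is nothing fancier than calling xm mem-set from the takeover script, e.g. "xm mem-set web01 256", before you start bringing the other node's domains across.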
On Friday 12 May 2006 13:03, Matthew Palmer wrote:
> Shared storage is the best if you can swing it. It's not necessarily the
> cheapest option in the world, though, if you want real reliability -- I'd
> never just use another box as my NAS, because that's a new and possibly
> even more dangerous single point of failure. [...] DRBD has its quirks,
> but it's nicely redundant and *cheap*.

But as far as I can see, DRBD only provides twin-machine shared storage. I have a few servers I want to use as dom0s, and since they are generally simple 1U boxes I want them as free of local storage as possible. The NAS box is bigger and generally better protected, with redundant PSUs, hardware RAID with hot-swap spares, etc. If I could pair two machines like that and then offer storage to the dom0s from that redundant pair, that would be better.

Otherwise I'd just put an FC HBA in each box and connect them to our SAN directly, but since these are meant to be cheap, replaceable server boxes I don't really want to do that, as it would also mean buying another SAN switch :-(. Or stay with the AlphaServer cluster, which is nicely redundant, but getting long in the tooth.

> That's easy enough to do -- you just specify domU::<hostname> in the
> haresources line for each of your domUs, and write a quick domU script
> for resources.d to up/down the domains.

I've yet to take a proper look at heartbeat, so these hints are helpful.

Matthew

-- 
Matthew Wild   Tel.: +44 (0)1235 445173   M.Wild@rl.ac.uk   URL http://www.ukssdc.ac.uk/
UK Solar System Data Centre and World Data Centre - Solar-Terrestrial Physics, Chilton
Rutherford Appleton Laboratory, Chilton, Didcot, Oxon, OX11 0QX
On 12. mai. 2006, at 15:31, Matthew Wild wrote:
> But as far as I can see, DRBD only provides twin-machine shared storage.
> I have a few servers I want to use as dom0s, and since they are generally
> simple 1U boxes I want them as free of local storage as possible. The NAS
> box is bigger and generally better protected, with redundant PSUs,
> hardware RAID with hot-swap spares, etc. If I could pair two machines
> like that and then offer storage to the dom0s from that redundant pair,
> that would be better.

Buy two servers, use DRBD between them, and share the storage with GNBD. GNBD seems to be quite stable and easy to manage. Besides, it does not exhibit any of the strange issues people see with iSCSI when under load.

Per.
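P.S. The GNBD part is only a handful of commands -- roughly the following, with the export name and server name being whatever you pick (check the cluster suite documentation for the exact flags in your version):

  # on the storage box that is currently DRBD primary:
  gnbd_serv                          # start the server daemon
  gnbd_export -d /dev/drbd0 -e s1_1  # export the replicated device under a name

  # on each dom0:
  modprobe gnbd
  gnbd_import -i storage1            # import everything that host exports

  # the device then appears as /dev/gnbd/s1_1 for use in the domU configs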
On 5/22/06, Per Andreas Buer <per.buer@linpro.no> wrote:
> Buy two servers, use DRBD between them, and share the storage with GNBD.
> GNBD seems to be quite stable and easy to manage. Besides, it does not
> exhibit any of the strange issues people see with iSCSI when under load.

If you have a few machines to devote to file storage, then a cluster file system is the best way to go. Having dom0 on each of the machines provide a CFS<->NFS bridge for the domUs, and a Nagios server running to monitor services/domUs and restart them on other machines if needed, would give you a system where the network is the only single point of failure. And that could be taken care of too....

-Paul
Have you got any hints for others on setting up the Nagios server? And how about the CFS<->NFS bridge - why is that necessary?

----- Original Message -----
From: "Paul M." <paul@gpmidi.net>
Sent: Tuesday, May 23, 2006 6:21 PM
Subject: Re: [Xen-users] Re: Re: Redundant server setup

> If you have a few machines to devote to file storage, then a cluster
> file system is the best way to go. Having dom0 on each of the machines
> provide a CFS<->NFS bridge for the domUs, and a Nagios server running to
> monitor services/domUs and restart them on other machines if needed,
> would give you a system where the network is the only single point of
> failure. And that could be taken care of too....
> If you have a few machines to devote to file storage, then a cluster
> file system is the best way to go. Having dom0 on each of the machines
> provide a CFS<->NFS bridge for the domUs, and a Nagios server running to
> monitor services/domUs and restart them on other machines if needed,
> would give you a system where the network is the only single point of
> failure. And that could be taken care of too....

Sounds fishy. Please define "few machines to devote to file storage" and what that has to do with a cluster filesystem.

John

-- 
John Madden
Sr. UNIX Systems Engineer
Ivy Tech Community College of Indiana
jmadden@ivytech.edu
Paul M. wrote:
> If you have a few machines to devote to file storage, then a cluster
> file system is the best way to go. Having dom0 on each of the machines
> provide a CFS<->NFS bridge for the domUs, and a Nagios server running to
> monitor services/domUs and restart them on other machines if needed,
> would give you a system where the network is the only single point of
> failure. And that could be taken care of too....
> -Paul

I would only recommend involving NFS for those users who have been delegated the responsibility of throwing performance into the toilet. There's no need to involve an extra lock manager and incur extra communications overhead, and more processing for dom0. It could even be dangerous.

-- 
Christopher G. Stach II
On Friday 12 May 2006 8:31 am, Matthew Wild wrote:
> But as far as I can see, DRBD only provides twin-machine shared storage.
> I have a few servers I want to use as dom0s, and since they are generally
> simple 1U boxes I want them as free of local storage as possible.

This is a setup I'd be willing to use (of course, if money weren't an issue, I'd buy some big-iron storage from EMC or the like):

1) A box with two GigE ports and big SATA disks (there's a nice Supermicro 3U box with 15 bays; I'd do two 7-disk RAID5 arrays plus one hot spare). Call it S1A, and the two volumes S1A1 and S1A2.

2) Build another one identical to S1A; call it S1B, with volumes S1B1 and S1B2.

3) Use a crossover cable on eth1 of S1A and S1B. Run DRBD on both boxes to join S1A1 with S1B1 and S1A2 with S1B2.

4) On each box, export the replicated volumes on eth0 with GNBD, iSCSI, or AoE.

5) Call the whole group S1. There are two volumes, S1.1 and S1.2; each has two 'paths': S1A1 and S1B1 to S1.1, and S1A2 and S1B2 to S1.2.

6) Add new pairs S2, S3, etc.

7) On the Xen boxes, configure dom0 to access all shared volumes, but using only one path for each volume. E.g. on the first Xen box, reach S1.1 via S1A1, S1.2 via S1B2, S2.1 via S2A1 and S2.2 via S2B2. Try to balance how many Xen boxes use the 'A' path for a volume and how many use the 'B' path for the same volume.

8) Set up a cluster-aware volume manager on all dom0s: either CLVM or EVMS with HA.

9) Create one single big volume group with all the shared volumes.

10) Split the volume group into several logical volumes, and 'feed' these to the domUs (a rough sketch of these last steps is below).

One further thing I've just thought of: it would be possible to put all the 'A' members of each storage pair on one switch, and all the 'B' members on another. The Xen boxes would have to have two Ethernet ports each, one to the 'real' LAN and the other to the storage LAN, half on the A side, half on the B side. Now not even the storage LAN is a failure point, but all Xen boxes can see all volumes and do live migration from any box to any other, no matter whether they use the same or different storage switches.

Again, if I had that much money to spend, I'd probably just call EMC, IBM, or someone else to build it for me with FC (or InfiniBand? I can hope...).

-- 
Javier
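P.S. As promised, a rough sketch of steps 8-10 as seen from one dom0, assuming CLVM and assuming the imported shared volumes show up as /dev/gnbd/s1_1 and /dev/gnbd/s1_2 (invented names -- with EVMS the commands would be different):

  # 8) cluster-aware LVM: clvmd must be running on every dom0
  pvcreate /dev/gnbd/s1_1 /dev/gnbd/s1_2

  # 9) one big clustered volume group over all the shared volumes
  vgcreate -c y vg_xen /dev/gnbd/s1_1 /dev/gnbd/s1_2

  # 10) carve out logical volumes and feed them to the domUs
  lvcreate -L 10G -n domu_web01 vg_xen
  # domU config line: disk = [ 'phy:/dev/vg_xen/domu_web01,sda1,w' ]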
On 5/23/06, Christopher G. Stach II <cgs@ldsys.net> wrote:
> I would only recommend involving NFS for those users who have been
> delegated the responsibility of throwing performance into the toilet.
> There's no need to involve an extra lock manager and incur extra
> communications overhead, and more processing for dom0. It could even be
> dangerous.

The NFS would be extra overhead, but it would allow the DFS to work on a per-file basis instead of with an image of the hard disk. Performance and irreparable corruption are possible problems with the disk image. Some DFSs might have problems dealing with frequent, minor changes to a disk image. There may also be problems if the disk image is larger than the local cache. Much of this would depend on which DFS you went with.

With the DFS working on a per-file basis, corruption would be limited to application-level corruption of a few files. With a disk image, related writes (e.g. updating an inode and adding some data) may not get placed in the same transaction (assuming your DFS works that way). Thus the chance of data loss or total failure is higher than when working on a per-file basis.

Of course, if there is a way to use a local directory as a domU root then you can avoid the problem completely.

-Paul
On 5/23/06, John Madden <jmadden@ivytech.edu> wrote:
> Sounds fishy. Please define "few machines to devote to file storage" and
> what that has to do with a cluster filesystem.

The "few machines" would be nodes in the DFS. Distributed file system might have been a better way to put it, but it might confuse Windows admins.

http://en.wikipedia.org/wiki/Distributed_file_system

-Paul
Javier Guerra wrote:
> 1) A box with two GigE ports and big SATA disks (there's a nice Supermicro
> 3U box with 15 bays; I'd do two 7-disk RAID5 arrays plus one hot spare).
> Call it S1A, and the two volumes S1A1 and S1A2.
> [...]

What does anybody think about something like:

- 2 boxes, each with two GigE ports
- multiple SCSI drives per box
- exporting all the drives with iSCSI
- using a volume manager (EVMS perhaps) to build shared and RAIDed space on top of the drives
> - using a volume manager (EVMS perhaps) to build shared and RAIDed space
>   on top of the drives

As long as you're including mirroring in there, I see that as workable. But gosh, mirroring over iSCSI... What a dog that could be.

John

-- 
John Madden
Sr. UNIX Systems Engineer
Ivy Tech Community College of Indiana
jmadden@ivytech.edu
> 7-disk RAID5

Are you serious? Do you know what performance you can hope for from this for write operations?

Unless you have no more than one write op per 1,000,000 reads, forget it.

-- 
Sylvain COUTANT
ADVISEO
http://www.adviseo.fr/
http://www.open-sp.fr/
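P.S. To put a number on it: a small random write on RAID5 is a read-modify-write cycle -- read the old data block, read the old parity, write the new data, write the new parity:

  1 logical write = 2 reads + 2 writes = 4 disk operations
  (versus 2 writes on RAID1/RAID10)

Which is why the arrays that do this well hide it behind large battery-backed write caches.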
On Tue, 2006-05-23 at 23:00 +0200, Sylvain Coutant wrote:
> > 7-disk RAID5
>
> Are you serious? Do you know what performance you can hope for from this
> for write operations?
>
> Unless you have no more than one write op per 1,000,000 reads, forget it.

This config is actually fairly typical in the SAN world.

John

-- 
John Madden
Sr. UNIX Systems Engineer
Ivy Tech Community College of Indiana
jmadden@ivytech.edu
> > > 7-disk RAID5
> [...]
> This config is actually fairly typical in the SAN world.

I know. But it's usually running FC or SCSI disks with half the latency of SATA, 2 GB of battery-backed cache memory, and really good RAID5 processors. Even with all that, performance for write ops is not very good.

BR,

-- 
Sylvain COUTANT
ADVISEO
http://www.adviseo.fr/
http://www.open-sp.fr/
Paul M. wrote:
> The NFS would be extra overhead, but it would allow the DFS to work on a
> per-file basis instead of with an image of the hard disk.

Why on earth would you want to do that? :)

-- 
Christopher G. Stach II
On Tuesday 23 May 2006 4:00 pm, Sylvain Coutant wrote:
> > 7-disk RAID5
>
> Are you serious? Do you know what performance you can hope for from this
> for write operations?

Sure. This part of the setup is actually running, and with very good results. It helps a lot that most files are 80-400 GB, making almost all writes bigger than one stripe row; in those circumstances there's no need to read before writing.

With smaller files I'd be very careful about the stripe size, but even 128 KB (the default for most) x 6 = 768 KB per stripe row, still far smaller than any significant file, and even less than the usual 4 MB LVM extent size.

> Unless you have no more than one write op per 1,000,000 reads, forget it.

I think I get even more writes than reads for each file.

-- 
Javier