hi all,

Forgive me if this question has already been answered (I couldn't find anything).

I am working on a Xen-based cluster where we are thinking of using AoE for storage management, rather than pinning a DomU's data partition to the same host (the usual approach).

Here comes my question: I have read in several places that AoE is sleek and fast under certain conditions, but I couldn't find any real Xen-AoE (or at least Linux AoE) benchmarks on the internet. I managed to stumble upon several Coraid benchmarks, but they aren't useful here, because the Coraid disks are connected via a Myrinet NIC (I guess a 1/10 Gigabit card) and it is their own implementation.

I am looking for benchmarks on a simple 100 Mbps (or 1 Gbps) Ethernet LAN dedicated to AoE, based on the vblade/aoe kernel modules. There are no racks or special storage servers, just a set of commodity machines in the cluster. Each machine has a dedicated NIC for AoE tasks, while the other is used for cluster communication and communication to the internet. Note that these machines also perform computationally intensive tasks at times.

Some things I would definitely like to know:
-- If host A is busy, does that significantly affect host B's AoE data fetches from host A's hard drive?
-- What kind of bandwidth (at least a ballpark number) can I expect, comparing in-host hard disk access against remote AoE-based disk access?

If such benchmarks are available anywhere, would somebody please point me to the links, or to any source I can dig through?

Thanks in advance.

cheers
shriram
Hi shriram,

You can find a few benchmarks that I did in the past. I used a 1 Gbps Ethernet card (the on-board NIC in an IBM ThinkPad T42 and an Intel PCI card), regular Linux distros (Fedora, RHEL, SLES and openSUSE), and the basic Coraid SR1521 (http://coraid.com/products1.html).

My write-up is in Hebrew, but you don't need Hebrew to read the results:
http://www.ofek.biz/WiKi/doku.php?id=%D7%91%D7%93%D7%99%D7%A7%D7%95%D7%AA_%D7%91%D7%99%D7%A6%D7%95%D7%A2%D7%99%D7%9D_%D7%A9%D7%9C_coraid_%D7%A2%D7%9D_%D7%9E%D7%A2%D7%A8%D7%9B%D7%95%D7%AA_%D7%9C%D7%99%D7%A0%D7%95%D7%A7%D7%A1

If you have any questions you can e-mail me directly.

- doron

XenoCrateS wrote:
> [original question about Xen/AoE benchmarks snipped]
For my two bits, I'd recommend using at least Gigabit Ethernet. As cheap as Gigabit infrastructure is getting, you might as well get that instead of 100 megabit. 100 Mbps is roughly 12 MB/s, whereas Gigabit is roughly 120 MB/s. SATA drives burst at 150 or 300 MB/s, and even really old IDE drives did at least 33 MB/s and went up to around 120 MB/s. I'd stick with Gigabit.

--Nick

>>> On 2008/01/01 at 00:12, "XenoCrateS" <shriram@cs.ucsb.edu> wrote:
> [original question about Xen/AoE benchmarks snipped]
The Coraid SR1521 lets you use two Ethernet ports. I would suggest you give that a try with 2 x 1 Gb NICs. They provide a tool called ddt for running tests. Here are some results I got a couple of weeks ago.

- Mike

#
# Xen guest on sm02 with lvm filesystem on 4 disk RAID5
#

root@x4:~/ddt-6# time ddt -t 8g /
Writing to /ddt.3159 ... syncing ... done.
sleeping 10 seconds ... done.
Reading from /ddt.3159 ... done.
        8192 MiB    KiB/s   CPU%
Write              52009      7
Read              105424      1

real    4m26.415s
user    0m0.044s
sys     0m11.017s

On Jan 1, 2008 10:57 PM, Ofek Doron [Ofek BIZ] <doron@ofek.biz> wrote:
> [Coraid benchmark links snipped]
Firstly, thanks to those who responded. I'll take the Gigabit Ethernet suggestion for sure.

The objective here is NOT to use Coraid at all, but to use the PCs already on the LAN for storage and carry on. The HPC cluster we are building (or aiming to build) focuses more on other resources like CPUs and network, and doesn't place much emphasis on RAID. So I am looking for an off-the-shelf, no-cost, unified storage solution without any extra hardware (i.e. no Coraid): just use the vblade and aoe modules over a simple Gigabit LAN. It doesn't have to be as sophisticated as a SAN or offer multiple RAID levels (I could use drbd if such a need arises). Again, the emphasis here is not on enterprise scenarios with high-availability storage and data integrity.

thanks
shriram

On 8:20 am 01/01/08 "Mike Bailey" <mike@bailey.net.au> wrote:
> [Coraid SR1521 ddt results snipped]
XenoCrateS wrote:
> So I am looking for an off-the-shelf, no-cost, unified storage solution
> without any extra hardware (i.e. no Coraid): just use the vblade and aoe
> modules over a simple Gigabit LAN. [...]

Hi,

I use it with vblades and have not benchmarked it. On a cluster pair with a RAID 10 array, drbd, heartbeat and Gb Ethernet it "feels" really fast; in other words, I could not see any difference for regular applications under normal load. AoE is very interesting because it does not use CPU for complex TCP/IP computation the way iSCSI and other routed technologies do. A rough sketch of this kind of vblade export is below.

My 2 cents. Enjoy 2008.

++

--
Steven Dugway
Virtual Space International Inc.
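(For reference, a minimal sketch of the kind of vblade export described above. The shelf/slot numbers, interface name and drbd device path are assumptions for illustration, not the poster's actual configuration.)

    # storage host: export /dev/drbd0 as AoE shelf 0, slot 1 over eth1
    # (vbladed is the daemonizing wrapper shipped with the vblade package)
    vbladed 0 1 eth1 /dev/drbd0

    # Xen host: load the aoe driver and discover the exported device
    modprobe aoe
    aoe-discover
    aoe-stat        # the export shows up as /dev/etherd/e0.1

The resulting /dev/etherd/e0.1 can then be handed to a domU in the usual way, e.g. disk = [ 'phy:/dev/etherd/e0.1,xvda,w' ] in the domU config.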
Hi,

On Tuesday, 1 January 2008, 08:12, XenoCrateS wrote:
> I have read in several places that AoE is sleek and fast under certain
> conditions, but I couldn't find any real Xen-AoE (or at least Linux AoE)
> benchmarks on the internet.

I have read this many times as well. Well, I deployed my first Xen cluster at the beginning of 2006 and I chose gnbd (over aoe, iscsi, enbd) for block-device transport over the network. In the last two weeks I have been re-evaluating. I am comparing solutions that don't need any special hardware; I use GBit Ethernet (sometimes bonded) and a lot of SATA disks.

As far as I can tell, I am very disappointed with AoE! It was a nightmare to set up aoe and vblade to get anywhere near the performance of gnbd. You have to use the latest aoe module (v63) on the client side, you have to use the latest vblade (18) on the server side, otherwise you get poor performance with a high load, and you have to use jumbo frames (I am using MTU 9000 on a direct link between two servers); a sketch of that part is below. With this setup I get the same throughput as with gnbd, but with aoe the CPU load is much higher. I should also note that older vblade versions lost performance dramatically if you exported more than one block device: with 100 LVs exported by an older vblade I couldn't read more than 600 KB/s (yes, KB!) from ONE aoe device at a time, and the load on my vblade server went up over 20 (2 Intel quad-core 2.66 GHz CPUs).

In the real-world test, a running domU, I can read about 130 MB/s with both gnbd and aoe, although with aoe the load on the (vblade) server is much higher. I could live with that. But I can only write about 30 MB/s with aoe, in contrast to gnbd where I can write about 80 MB/s, and if I turn on O_SYNC within vblade, write performance drops below 10 MB/s. With these results I can't switch my setups to aoe; the performance loss would be too much.

Because of what I read, I thought aoe should perform better than iscsi. For now I will test this myself. I get the feeling that people are using whichever technique they get working first!?

--
greetings
eMHa
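(For anyone reproducing the jumbo-frame part of this setup, a minimal sketch. The interface name is an assumption, and the NICs on both ends, plus any switch in between, must support an MTU of 9000.)

    # on both the vblade server and the aoe client
    ip link set dev eth1 mtu 9000
    ip link show dev eth1        # verify the new MTU took effect

Re-running aoe-discover after changing the MTU, then re-checking throughput (for example with dd against the /dev/etherd device), makes it easy to confirm whether the larger frames are actually being used.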
On Wed, Aug 13, 2008 at 10:55 AM, Markus Hochholdinger <Markus@hochholdinger.net> wrote:
> Because of what I read, I thought aoe should perform better than iscsi. For
> now I will test this myself. I get the feeling that people are using
> whichever technique they get working first!?

You're probably right; familiarity breeds contempt.

BTW, did you test the different vblade servers (vblade, kvblade, aoeserver, qaoed), or the optimisations mentioned on xenaoe.org?

--
Javier
On Wednesday, 13 August 2008, 22:51, Javier Guerra wrote:
> BTW, did you test the different vblade servers (vblade, kvblade, aoeserver,
> qaoed), or the optimisations mentioned on xenaoe.org?

kvblade seems dead, qaoed looks promising but is in its early stages in my opinion, and aoeserver I am hearing of for the first time; I will look into it. I have done optimisations like increasing network buffers and so on, but I did not enable any hardware-specific optimisation. My goal is to achieve good performance with standard hardware, so I try to avoid depending on a particular network card or switch.

The only reason I am still looking at aoe is that it handles the disconnection of a malfunctioning storage server better than gnbd. gnbd (in "No Cluster" mode) waits forever if the server dies, and I had to write my own programs to handle that situation; aoe just returns an I/O error, and that's what I want.

--
greetings
eMHa
Markus Hochholdinger wrote:
> The only reason I am still looking at aoe is that it handles the disconnection
> of a malfunctioning storage server better than gnbd. [...] aoe just returns an
> I/O error, and that's what I want.

Is there a reason why you're not looking at iscsi?

Redhat's iscsi-initiator-utils (from open-iscsi) has node.session.timeo.replacement_timeout, which "specify the length of time to wait for session re-establishment before failing SCSI commands back to the application when running the Linux SCSI Layer error handler". I assume this is what you want.

Regards,

Fajar
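(For completeness, this is roughly where that knob lives with open-iscsi; the 15-second value and the target name below are made-up examples, not a recommendation from this thread.)

    # /etc/iscsi/iscsid.conf -- default for newly discovered targets
    node.session.timeo.replacement_timeout = 15

    # or adjust it for an already-discovered target with iscsiadm
    iscsiadm -m node -T iqn.2008-08.example.com:storage.test \
        -o update -n node.session.timeo.replacement_timeout -v 15

Once the timeout expires, outstanding I/O is failed back up the stack, so md or the filesystem sees an error instead of blocking forever.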
Hi,

On Thursday, 14 August 2008, 04:09, Fajar A. Nugraha wrote:
> Is there a reason why you're not looking at iscsi?

Yes, iscsi is on my list to check again. The last time I tested (software) iscsi, it needed a lot more CPU on both sides than gnbd.

--
greetings
eMHa
Hi,

On Wednesday, 13 August 2008, 22:51, Javier Guerra wrote:
> BTW, did you test the different vblade servers (vblade, kvblade, aoeserver,
> qaoed), or the optimisations mentioned on xenaoe.org?

I am searching for this aoeserver. Did you mean http://aoeserver.googlecode.com/svn/trunk/ ? That project seems very dead, and I doubt it will work with the 2.6.18 Xen dom0 kernel.

--
greetings
eMHa
On Thu, Aug 14, 2008 at 7:09 AM, Markus Hochholdinger wrote:
> I am searching for this aoeserver. Did you mean
> http://aoeserver.googlecode.com/svn/trunk/ ? That project seems very dead,
> and I doubt it will work with the 2.6.18 Xen dom0 kernel.

Yep. After writing that email I checked each one a bit more and quickly saw that it's the deadest of them all; sorry about the noise. I'm somewhat surprised (and sorry) to see that kvblade isn't very much alive either.

--
Javier
Hi,

On Thursday, 14 August 2008, 04:09, Fajar A. Nugraha wrote:
> Is there a reason why you're not looking at iscsi?

Yeah, I am very satisfied with iscsi; it seems that is where the development has gone over the last few years. My first tests indicate that it puts a similar load on the server side as gnbd, while the load on the client is barely noticeable. And the performance is at least equal to gnbd: read throughput is slightly better and write throughput is more than 5% better than with gnbd. I'm impressed. I'll run more tests of iscsi against gnbd, and it looks like iscsi will be the choice for the future.

I used http://iscsitarget.sourceforge.net/ for the server and http://www.open-iscsi.org/ for the client. Interestingly, the server part is a kernel module and not userland software, as it is with gnbd and aoe (vblade). A minimal sketch of this setup is below.

I also tried the disconnection behaviour, and it is really what I want. I just turned off the iscsi target and waited until my RAID1 degraded, then turned the target back on and simply resynced the RAID1. I didn't have to do any re-discovery or reset anything. That's what I want.

The only disadvantage I see so far is the complexity of iscsi, but that shouldn't be an excuse.

--
greetings
eMHa
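(A minimal sketch of the target/initiator setup described above, with the iSCSI Enterprise Target on the storage server and open-iscsi on the Xen host. The IQN, portal address and LV path are placeholders, not the poster's configuration.)

    # storage server: add a target to /etc/ietd.conf (iSCSI Enterprise Target),
    # then (re)start ietd -- the init script name varies by distro
    #
    #   Target iqn.2008-08.example.com:storage.domu1
    #       Lun 0 Path=/dev/vg0/domu1,Type=blockio

    # Xen host: discover the target and log in with open-iscsi
    iscsiadm -m discovery -t sendtargets -p 192.168.1.10
    iscsiadm -m node -T iqn.2008-08.example.com:storage.domu1 \
        -p 192.168.1.10 --login
    # the LUN shows up as a new /dev/sd* device, which can then be handed
    # to a domU or used as an md component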
On 14/08/2008 17:31, Markus Hochholdinger wrote:
> I also tried the disconnection behaviour, and it is really what I want. I just
> turned off the iscsi target and waited until my RAID1 degraded, then turned
> the target back on and simply resynced the RAID1. I didn't have to do any
> re-discovery or reset anything.

Just interested: did you use md RAID over iscsi within each domU, or md RAID over iscsi in the dom0, then slice it up via LVM and export it to the domUs?
Hi,

On Thursday, 14 August 2008, 19:27, Andy Burns wrote:
> Just interested: did you use md RAID over iscsi within each domU, or md RAID
> over iscsi in the dom0, then slice it up via LVM and export it to the domUs?

I use md RAID1 inside the domU. With this configuration I can grow the disks, the RAID1 and the filesystem of a domU without interrupting (or restarting) it, and I don't need any heartbeat or cluster machinery for my storage; I only need my (virtual) disks to fail if one storage server fails. The only drawback is the rebuild process after one storage server fails completely. A rough sketch of the in-domU mirror is below.

--
greetings
eMHa
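(To illustrate the layout described above: inside the domU the two virtual disks, one backed by each storage server, are mirrored with md. The device names are assumptions; whatever the domU config actually exports is what goes into the array.)

    # inside the domU: mirror the two virtual disks, one per storage server
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/xvda /dev/xvdb
    mkfs.ext3 /dev/md0

    # if one storage server dies, the mirror simply degrades
    mdadm --detail /dev/md0              # one member shows as failed/removed

    # when that server (and thus /dev/xvdb) comes back, add it back and resync
    mdadm /dev/md0 --remove /dev/xvdb    # clear the failed slot if still listed
    mdadm /dev/md0 --add /dev/xvdb       # md then resyncs the mirror
    cat /proc/mdstat                     # watch the resync progress

Growing works as the poster describes: enlarge the backing device on each storage server first, then run 'mdadm --grow /dev/md0 --size=max' followed by an online 'resize2fs /dev/md0' inside the domU.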