Hi, I have some questions about Xen disk backend performance. Which backend gives the best results?

My thought was that phy: is better than file:, because it doesn't need any "encapsulation" to store the data, so writing should be quicker. But maybe I'm wrong. Can some of you give advice and more information about that? Thanks.

--
Guillaume

_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
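For reference, the backends being compared are selected by the prefix on the domU's disk line. Since Xen domU config files use Python syntax, the stanzas below are valid Python; the device paths and image names are made up for illustration:

```python
# phy: hands the domU a raw block device (partition, LVM volume, ...);
# dom0 adds no extra encapsulation.
disk_phy = ['phy:/dev/vg0/domu1-disk,xvda,w']

# file: backs the virtual disk with an image file via the dom0 loopback driver.
disk_file = ['file:/var/lib/xen/images/domu1.img,xvda,w']

# tap:aio: backs it with an image file via the blktap driver (async I/O),
# generally preferred over file: for file-backed disks.
disk_tap = ['tap:aio:/var/lib/xen/images/domu1.img,xvda,w']

# The backend is whatever comes before the first colon:
for d in (disk_phy, disk_file, disk_tap):
    print(d[0].split(':', 1)[0])
```

This is only a sketch of the syntax under discussion, not a performance claim.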
On Fri, 28 Nov 2008, Guillaume wrote:

> I have some questions about Xen disk backend performance. Which backend
> gives the best results?
>
> My thought was that phy: is better than file:, because it doesn't need
> any "encapsulation" to store the data, so writing should be quicker. But
> maybe I'm wrong. Can some of you give advice and more information about
> that?

iSCSI + pvSCSI seems to be optimal, I guess.

Stefan
On Friday, 28.11.2008 at 10:54 +0100, Stefan de Konink wrote:

> On Fri, 28 Nov 2008, Guillaume wrote:
>
> > I have some questions about Xen disk backend performance. Which
> > backend gives the best results?
> >
> > My thought was that phy: is better than file:, because it doesn't need
> > any "encapsulation" to store the data, so writing should be quicker.
> > But maybe I'm wrong. Can some of you give advice and more information
> > about that?
>
> iSCSI + pvSCSI seems to be optimal, I guess.

iSCSI is pretty slow, because of all the TCP/IP overhead. Try AoE, since it's layer 2 (Ethernet).

Thomas
On Fri, 28 Nov 2008, Thomas Halinka wrote:

> > iSCSI + pvSCSI seems to be optimal, I guess.
>
> iSCSI is pretty slow, because of all the TCP/IP overhead. Try AoE, since
> it's layer 2 (Ethernet).

Please come with benchmarks, and preferably stability comparisons. Nevertheless, AoE would still be processed in dom0, while pvSCSI is handled directly by the domU.

Stefan
Hi Stefan,

On Friday, 28.11.2008 at 15:41 +0100, Stefan de Konink wrote:

> > iSCSI is pretty slow, because of all the TCP/IP overhead. Try AoE,
> > since it's layer 2 (Ethernet).
>
> Please come with benchmarks,

I don't need any benchmarks. I measured that iSCSI could saturate a gigabit link to about 55-60%; AoE reached about 80-85%, at lower CPU usage.

Why is FC faster than iSCSI? Ah, it's because of the protocol. And why? Because FC is layer 2, like AoE, while iSCSI sits at layer 3/4, so much more protocol overhead has to be processed.

> and preferably stability comparisons.

open-iscsi has no stable releases yet; aoetools does. There are also many users complaining about iSCSI kernel issues... just search the net for iSCSI problems on Linux.

> Nevertheless, AoE would still be processed in dom0, while pvSCSI is
> handled directly by the domU.

Yep.

Thomas

[1] http://www.apac.edu.au/apac07/pages/program/presentations/Tuesday%20Harbour%20C/Antony%20Gerdelan.pdf
[2] http://www.linuxdevices.com/news/NS3189760067.html
On Fri, 28 Nov 2008, Thomas Halinka wrote:

> I don't need any benchmarks. I measured that iSCSI could saturate a
> gigabit link to about 55-60%; AoE reached about 80-85%, at lower CPU
> usage.

My benchmarks for iSCSI vs NFS both saturate the links (10GE -> 1GE), with the former performing a bit (< 10%) better.

> Why is FC faster than iSCSI? Ah, it's because of the protocol.

Nonsense.

> > and preferably stability comparisons.
>
> open-iscsi has no stable releases yet; aoetools does. There are also
> many users complaining about iSCSI kernel issues...

...there is more to iSCSI than open-iscsi, in both targets and initiators (and OSes).

Stefan
I would like a step-by-step description of how to set up AoE support between two Xen servers, in my case SUSE SP2. It seems like the ideal protocol for storage.

Yours,
Federico

-----Original Message-----
From: xen-users-bounces@lists.xensource.com [mailto:xen-users-bounces@lists.xensource.com] On Behalf Of Stefan de Konink
Sent: Friday, November 28, 2008 10:23 AM
To: Thomas Halinka
Cc: Guillaume; Xen Users; Stefan de Konink
Subject: Re: [Xen-users] disk backend performance
On Friday, 28.11.2008 at 16:22 +0100, Stefan de Konink wrote:

> My benchmarks for iSCSI vs NFS both saturate the links (10GE -> 1GE),
> with the former performing a bit (< 10%) better.

A gigabit link has a maximum throughput of about 110 MB/s, and you really got about 100 MB/s? ;) I think it was more like 65 MB/s.

> > Why is FC faster than iSCSI? Ah, it's because of the protocol.
>
> Nonsense.

Nope. What is a SAN? It's a bunch of disks, some intelligence (striping, mirroring, caching...) and a connection to servers. It doesn't matter which protocol you use to connect (FC, FCoE, AoE or iSCSI); all of them implement a SAN for you. The difference between those techniques is how the data is transferred, and with FC and AoE you only have two layers to pass. iSCSI has more layers, and every layer that is passed produces overhead. So: fewer layers = less overhead = more performance.

> ...there is more to iSCSI than open-iscsi, in both targets and
> initiators (and OSes).

You're right, but:

    Linux seems to have better support for AoE than for iSCSI, which is
    probably because AoE is simpler and has fewer peculiar bits. (There
    is a certain enterprisey smell about iSCSI.)

    http://utcc.utoronto.ca/~cks/space/blog/tech/FCvsiSCSIvsAOE

Thomas
What I don't understand is whether I can share a directory on the root, or whether it needs to be a raw disk partition.

Federico
On Friday, 28.11.2008 at 10:37 -0500, Venefax wrote:

> What I don't understand is whether I can share a directory on the root,
> or whether it needs to be a raw disk partition.

SAN = exporting raw devices (FC, iSCSI, AoE, ...)
NAS = exporting directories (NFS, CIFS, ...)

Thomas
Hi Federico,

On Friday, 28.11.2008 at 10:36 -0500, Venefax wrote:

> I would like a step-by-step description of how to set up AoE support
> between two Xen servers, in my case SUSE SP2.

Client side: you only need the aoe module loaded (modprobe aoe) and the aoetools installed to connect to an AoE device.

Server side: vblade installed. LVM and bonding are optional, but make sense.

> It seems like the ideal protocol for storage.

Yep ;)

Thomas
I really need to share a directory, but I don't want to use NFS; I want a lower-level, faster protocol over Ethernet that doesn't have the overhead of TCP/IP. My main partition is formatted as the root "/", so I guess I cannot share a directory over AoE? An alternative would be to repartition my drive, break it into two separate partitions and publish one via AoE, but how can I do that without reinstalling the OS?

Federico
On Fri, 28 Nov 2008, Thomas Halinka wrote:

> A gigabit link has a maximum throughput of about 110 MB/s, and you
> really got about 100 MB/s? ;)

NetApp is fast... Solaris too, if it has enough disks ;)

Stefan
Stefan de Konink wrote:

> My benchmarks for iSCSI vs NFS both saturate the links (10GE -> 1GE),
> with the former performing a bit (< 10%) better.

Don't compare apples and oranges. iSCSI is a transport protocol and has nothing to do with application-layer stuff like NFS.

Just my 5 cents.

--
Stefan
Hi Federico,

On Friday, 28.11.2008 at 10:55 -0500, Venefax wrote:

> I really need to share a directory, but I don't want to use NFS; I want
> a lower-level, faster protocol over Ethernet that doesn't have the
> overhead of TCP/IP. My main partition is formatted as the root "/", so
> I guess I cannot share a directory over AoE? An alternative would be to
> repartition my drive, break it into two separate partitions and publish
> one via AoE, but how can I do that without reinstalling the OS?

Ah, you can also use loopback files:

    mkdir /aoe-blades

    # creates a 10 GB file:
    dd if=/dev/zero of=/aoe-blades/blade1 bs=1M count=10000

    # or, preferring sparse files:
    dd if=/dev/zero of=/aoe-blades/blade1 bs=1M seek=10000 count=1

    # exports this file through AoE:
    vblade 0 1 eth0 /aoe-blades/blade1

Thomas
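The sparse-file variant of the dd trick above works by seeking past the end of the file and writing a single byte, which pins the apparent size while allocating almost no blocks on disk. A small Python sketch of the same idea (the 10 MiB size and the file name are arbitrary; the email uses ~10 GB):

```python
import os
import tempfile

size = 10 * 1024 * 1024  # 10 MiB for the sketch

# Equivalent of: dd if=/dev/zero of=blade1 bs=1M seek=... count=1
path = os.path.join(tempfile.mkdtemp(), 'blade1')
with open(path, 'wb') as f:
    f.seek(size - 1)   # jump past the end without writing any data
    f.write(b'\0')     # a single byte sets the apparent size

st = os.stat(path)
print(st.st_size)                 # apparent size: 10485760
print(st.st_blocks * 512 < size)  # actual allocation is far smaller
```

The resulting file would then be handed to vblade exactly as in the dd example above.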
2008/11/28 Stefan de Konink <stefan@konink.de>:

> > A gigabit link has a maximum throughput of about 110 MB/s, and you
> > really got about 100 MB/s? ;)
>
> NetApp is fast... Solaris too, if it has enough disks ;)

Hey, I never thought my thread would bring in so many people!

And about phy: and file:, which one is the best? (That was the original question :], not SAN, FC and the rest, though it's all interesting!)

--
Guillaume
On Friday, 28.11.2008 at 17:01 +0100, Stefan de Konink wrote:

> NetApp is fast... Solaris too, if it has enough disks ;)

Solaris NFS rocks ;) but Linux NFS sucks.

Thomas
On Fri, Nov 28, 2008 at 11:09 AM, Thomas Halinka <lists@thohal.de> wrote:

> Hi Federico,
>
> On Friday, 28.11.2008 at 10:55 -0500, Venefax wrote:
> > I really need to share a directory, but I don't want to use NFS; I
> > want a lower-level, faster protocol over Ethernet that doesn't have
> > the overhead of TCP/IP.

If you want to share a directory, you need a file-sharing protocol: NFS and Samba are the best bets there.

In most (but not all) cases, you can get higher performance with a cluster filesystem on top of a shared block device. The shared-block part is easy: AoE, iSCSI, FC, DRBD, gnbd, etc. But remember that you can't use a "regular" filesystem on top of these if you plan to mount it on more than one machine at a time! For that, you need a cluster filesystem: GFS and OCFS2 are the best known.

--
Javier
On Friday, 28.11.2008 at 17:13 +0100, Guillaume wrote:

> Hey, I never thought my thread would bring in so many people!

:D

> And about phy: and file:, which one is the best?

phy!

> (That was the original question :], not SAN, FC and the rest, though
> it's all interesting!)

;)

Thomas
Thomas Halinka wrote:

> Ah, you can also use loopback files.

Good point. I did some disk I/O measurements and came to the conclusion that, in my case, running AoE on loop devices actually boosts write speed compared to real partitions. The bottleneck is probably in how things get cached/handled before writing.

--
Stefan
> I really need to share a directory, but I don't want to use NFS; I want
> a lower-level, faster protocol over Ethernet that doesn't have the
> overhead of TCP/IP. My main partition is formatted as the root "/", so
> I guess I cannot share a directory over AoE? An alternative would be to
> repartition my drive, break it into two separate partitions and publish
> one via AoE, but how can I do that without reinstalling the OS?

Federico,

If your interest is performance, you're wasting your time nitpicking the differences in how you export storage if you're doing it all off the system disk, whether it's a raw partition, a directory, or a file on your filesystem exported as a raw device... If you have only one disk at your disposal, you're not going to squeeze that last little bit out.
My system disk is a RAID 5 array, 800 GB, so it has plenty of throughput. But you say that any technique, even NFS or Samba, versus creating a file and sharing it via AoE, will be comparable? Which technique would you pick? I have a back-to-back 1 GbE cable between the two servers.

Federico
On Friday, 28.11.2008 at 11:45 -0500, Venefax wrote:

> My system disk is a RAID 5 array, 800 GB, so it has plenty of
> throughput. But you say that any technique, even NFS or Samba, versus
> creating a file and sharing it via AoE, will be comparable?

I guess that NFS (on Linux) or CIFS will reach maybe 40 MB/s, iSCSI maybe 60-70 MB/s, and AoE should reach 85 MB/s.

> Which technique would you pick?

AoE.

> I have a back-to-back 1 GbE cable between the two servers.

Thomas
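As a sanity check on the figures traded in this thread, the ~110 MB/s ceiling and the quoted saturation percentages follow from simple arithmetic. A sketch (the framing byte counts are the standard Ethernet values, not from the thread; the percentages are Thomas's earlier measurements):

```python
# Raw GigE line rate in decimal MB/s, as link rates are quoted.
line_rate = 1_000_000_000 / 8 / 1_000_000   # 125.0 MB/s

# Ethernet framing per 1500-byte payload: 8 B preamble, 14 B header,
# 4 B FCS, 12 B inter-frame gap = 38 extra bytes on the wire.
# Higher-layer headers push the practical ceiling down to roughly 110 MB/s.
ceiling = line_rate * 1500 / (1500 + 38)
print(round(ceiling, 1))   # ~122 MB/s before any higher-layer overhead

# The measured saturation levels, converted to MB/s of a ~110 MB/s link:
print(round(0.60 * 110))   # iSCSI at ~60% saturation
print(round(0.85 * 110))   # AoE at ~85% saturation
```

The converted values land close to the 60-70 MB/s and 85 MB/s estimates above, so the numbers in the thread are at least internally consistent.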
> My system disk is a RAID 5 array, 800 GB, so it has plenty of
> throughput. But you say that any technique, even NFS or Samba, versus
> creating a file and sharing it via AoE, will be comparable? Which
> technique would you pick? I have a back-to-back 1 GbE cable between the
> two servers.

Well, given that you're still using one array for everything, I would just use whatever is most convenient. Although AoE has some apparent advantages, I am not a fan, as it lacks the security that iSCSI has natively. I would probably export an NFS directory, or use iSCSI and export a file using fileio mode.

On the other hand, your network between the two is pretty secure :) Use AoE...

I wouldn't even consider Samba. Why muddy the waters sharing data between Linux hosts by emulating a non-native protocol? That's just silly.
On Friday, 28.11.2008 at 09:56 -0700, Joseph L. Casale wrote:

> On the other hand, your network between the two is pretty secure :) Use
> AoE...

Use a crossover cable for those two Xen boxes ;)

Thomas
It will be AoE, but I find it weird that I cannot write to the virtual disk exported from the container box. There must be a way to mount it on the container without the packets having to make a round trip, something like a cluster filesystem, but how? I could install a cluster filesystem on the client side, but if I then open a file from the server side, the packets would still go over the wire. Am I wrong?

Federico
Stefan Bauer wrote:

> Don't compare apples and oranges. iSCSI is a transport protocol and has
> nothing to do with application-layer stuff like NFS.

It was all measured with bonnie ;) So I had a test with native iSCSI connectors (non-PV) and NFS (tap:aio). Clearly, if both saturate my links and tap:aio takes more memory, iSCSI is my winner.

(The main reason why I prefer layer 3 is that I can use different subnets on the same target.)

Stefan
On Nov 28, 2008, at 5:54 PM, Stefan de Konink <stefan@konink.de> wrote:

> It was all measured with bonnie ;) So I had a test with native iSCSI
> connectors (non-PV) and NFS (tap:aio). Clearly, if both saturate my
> links and tap:aio takes more memory, iSCSI is my winner.
>
> (The main reason why I prefer layer 3 is that I can use different
> subnets on the same target.)

There are many other reasons to pick iSCSI over AoE, such as error recovery, error detection, transmission reliability, disk sharing via reserve/release or persistent reservations, and target types other than random-access disk storage, such as virtual/real tape drives, virtual/real optical drives and other SCSI-based devices.

If you want cheap, simple local storage that emulates SATA, then AoE should fit the bill.

-Ross
> > iSCSI + pvSCSI seems to be optimal, I guess.
>
> iSCSI is pretty slow, because of all the TCP/IP overhead. Try AoE, since
> it's layer 2 (Ethernet).

If you implement iSCSI in software, then the overhead could matter. If it's implemented in hardware though (e.g. an iSCSI HBA), then the processing overhead becomes negligible. With jumbo frames, the IP+TCP header overhead (40 bytes) is also negligible.

James
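James's jumbo-frames point can be made concrete with a little arithmetic: the 40 bytes of IPv4+TCP header per frame shrink to well under one percent of a 9000-byte jumbo frame. A sketch (the header sizes assume no IP or TCP options, and the MTU values are the common defaults rather than figures from the thread):

```python
# IPv4 (20 B) + TCP (20 B) headers per frame, assuming no options.
headers = 40

# Compare the per-frame header overhead at the standard and jumbo MTUs.
for mtu in (1500, 9000):
    overhead = headers / mtu
    print(f"MTU {mtu}: {overhead:.2%} header overhead")
# MTU 1500: 2.67% header overhead
# MTU 9000: 0.44% header overhead
```

Less than half a percent at MTU 9000 is why the TCP/IP framing itself is a weak argument against iSCSI on a jumbo-frame link; the software processing cost is the part that an HBA offloads.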
So: over a dedicated cable with jumbo frames, it is actually smarter to use iSCSI than AoE? Is that your conclusion?

Federico
> So: over a dedicated cable with jumbo frames, it is actually smarter to
> use iSCSI than AoE? Is that your conclusion?

My conclusion was more along the lines of "don't assume that just because iSCSI uses TCP/IP, it is inferior performance-wise to AoE".

I don't believe there currently exist any hardware AoE adapters, but there are hardware iSCSI adapters (HBAs).

With a software implementation, Ethernet headers have to be constructed per packet (plus IP and TCP for iSCSI), error detection has to be implemented for AoE, and packetization has to be done in software (less so for iSCSI, as most hardware has TCP Large Send Offload, which from a software point of view allows sending 64 kB TCP packets). Any network adapter you'd use in a server these days has TX and RX checksum offloading available, so you get checksumming of your TCP packets for free when using software iSCSI, while for AoE you need to calculate checksums in software.

With a hardware iSCSI HBA, the OS just has to tell the card "read x sectors starting at y and put them in memory here"; the card does the rest. Of course, to do this you have to buy an iSCSI HBA, which increases the cost of the solution somewhat.

If you want to use non-disk devices, then iSCSI is probably the better choice: a robotic tape library is more likely to work over iSCSI than over AoE, although my information on that sort of thing may be out of date...

A protocol called HyperSCSI exists, which is basically SCSI over Ethernet, but I don't know how available it is.

James
Hi James,

On Saturday, 29.11.2008 at 14:18 +1100, James Harper wrote:

> I don't believe there currently exist any hardware AoE adapters, but
> there are hardware iSCSI adapters (HBAs).

Yeah, but only gigabit. Intel has actually shied away from TCP offload engines because of the tremendous costs (and resultant pricing) that go along with them, so 10 GbE adapters have no TOE at the moment.

> A protocol called HyperSCSI exists, which is basically SCSI over
> Ethernet, but I don't know how available it is.

HyperSCSI died somewhere around 2002. It seemed to have a good chance of becoming the optimal tradeoff for simple SANs; maybe someone decided to take it commercial ;)

But there are still Ethernet storage techniques like AoE, SCSI over Ethernet (XoE) or the coming FCoE standard. http://www.open-fcoe.org/ is interesting, too. FCoE seems to be intended as the FC camp's response to iSCSI. Besides running over raw Ethernet versus TCP/IP, I'm guessing the main difference would be in how they integrate with the rest of a data center...

Thomas