Hi All,

Just a quick question I could not figure out: is there a way to export a PCI device to multiple VMs (paravirtualized) while keeping it available to dom0? The Xen version is 3.0.4.

Thanks,
Jan
Hi,

On Feb 17, 2008 6:34 PM, Jan Kalcic <jandot@googlemail.com> wrote:
> Just a quick question I could not figure out: is there a way to export
> a PCI device to multiple VMs (paravirtualized) while keeping it
> available to dom0? The Xen version is 3.0.4.

As far as I know you can't. That is what virtual devices are used for, right?

In what scenario would you want to grant direct access to a PCI device both in the VMs and in dom0?

Todd
Todd Deshane wrote:
> As far as I know you can't. That is what virtual devices are used for,
> right?
>
> In what scenario would you want to grant direct access to a PCI device
> both in the VMs and in dom0?

Hi Todd,

The PCI device I would need to "share" is the fibre channel card, which is connected to two different storage arrays: one of these holds the VM repository, which has to be visible to dom0, and the other is the data storage for the VMs, which obviously has to be visible to the VMs. So the solution would be to use two different fibre channel cards, right?

Thanks,
Jan
On Feb 18, 2008 6:02 AM, Jan Kalcic <jandot@googlemail.com> wrote:
> The PCI device I would need to "share" is the fibre channel card,
> which is connected to two different storage arrays. [snip] So the
> solution would be to use two different fibre channel cards, right?

Adding more hardware is usually a nice solution if you can, since it will be easier to configure and you should get better performance. You could then dedicate one card to dom0 and one card to a driver domain.

As far as I know there isn't a way to do what you want yet with configuration tricks in Xen, and I don't know whether such support will be written in the future. I am not an expert on this. Can others comment?

Thanks,
Todd
Todd Deshane wrote:
> Adding more hardware is usually a nice solution if you can, since it
> will be easier to configure and you should get better performance.
> You could then dedicate one card to dom0 and one card to a driver
> domain.

Thanks Todd, this is exactly what I was thinking: more hardware and better performance. I just wanted to know whether there is, as you said, a tricky solution for this that would avoid purchasing new hardware. Of course, any other suggestion is welcome.

Thanks,
Jan
On 2/18/08, Jan Kalcic <jandot@googlemail.com> wrote:
> this is exactly what I was thinking: more hardware and better
> performance. I just wanted to know whether there is, as you said, a
> tricky solution for this that would avoid purchasing new hardware.

Note that assigning real hardware to a domU prevents it from being migrated. In other words, PCI passthrough ties the domU to the hardware.

(Is there any brave soul trying to emulate PCI hotplug to make that doable?)

--
Javier
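For reference, single-domU PCI passthrough on Xen 3.x of this vintage is set up roughly as below. This is a sketch: the PCI address 0000:01:00.0 is a made-up example, and details vary between releases.

    # Hide the device from dom0 at boot via the pciback driver
    # (dom0 kernel command line):
    #   pciback.hide=(0000:01:00.0)

    # Or, with pciback built as a module, bind the device by hand:
    echo -n "0000:01:00.0" > /sys/bus/pci/drivers/pciback/new_slot
    echo -n "0000:01:00.0" > /sys/bus/pci/drivers/pciback/bind

    # domU config: hand the hidden device to exactly one guest.
    pci = [ '0000:01:00.0' ]

Once hidden, the device disappears from dom0 entirely, which is exactly the limitation being discussed here: one device, one owner.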
>-----Original Message-----
>From: Jan Kalcic
>Sent: Monday, 18 February 2008 12:02
>Subject: Re: [Xen-users] Exporting a PCI Device
>
>the PCI device I would need to "share" is the fibre channel card,
>which is connected to two different storage arrays [snip]. So the
>solution would be to use two different fibre channel cards, right?

What I would do is make all storage available to dom0 and use the regular methods to export it to the domU. In other words: treat dom0 as a very fancy piece of hardware that sits between your kernel and the fibre-channel attached storage. For generic workloads the virtual block device should be fast enough; otherwise you should probably consider a separate server dedicated to that single task.

I don't know whether you will lose any fibre-channel advantages, but I figure you also reduce the administrative complexity to dom0 only.

- Joris
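To illustrate Joris's suggestion concretely (a sketch with made-up device names, not his exact setup): dom0 sees the FC LUNs as ordinary block devices and re-exports them to the guests as virtual block devices.

    # domU config: export a LUN (or an LVM volume carved from it) that
    # dom0 sees as /dev/sdc (hypothetical name) as the guest's xvda:
    disk = [ 'phy:/dev/sdc,xvda,w' ]

    # The same can be done at runtime with the Xen 3.x tools:
    #   xm block-attach mydomu phy:/dev/sdc xvda w

The guest then talks to a paravirtual block frontend, while the FC card itself stays exclusively in dom0.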
Javier Guerra wrote:
> note that assigning real hardware to a domU prevents it from being
> migrated. In other words, PCI passthrough ties the domU to the
> hardware.

That's very unfortunate; I did not know that.

Thanks,
Jan
Javier Guerra wrote:
> note that assigning real hardware to a domU prevents it from being
> migrated. In other words, PCI passthrough ties the domU to the
> hardware.

Moreover, passing through PCI devices means you are trusting that guest just as much as dom0: the guest can now perform DMA, which undermines the memory isolation assured by the hypervisor, unless you are using Intel VT-d to pass the devices through to the guests.

--Sadique
Well, Joris's recommendation is valid anyway, and you will have fewer pains with it. But consider also NPIV (you will need a newer kernel, which may not be supported yet, plus Xen 3.1 and of course the right HBA (4 Gb)).

When you do not have HBAs you may use iSCSI for your disks, but here too you must access the storage from dom0.

Happy testing ;-)

Mit freundlichen Grüßen / with kind regards

Gerhard Possler
IT Architect, IBM Enterprise Linux Services
IT-Services and Solutions GmbH, Rathausstr. 7, D-09111 Chemnitz
gerhard.possler@de.ibm.com
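For the curious: NPIV (N_Port ID Virtualization) lets one physical FC HBA register additional virtual ports, each with its own WWPN, so the SAN can zone different LUNs to different ports. On kernels new enough to support it, a virtual port is created through sysfs. A sketch, assuming host5 is the physical HBA; both WWN values below are made up.

    # Create a virtual port ("<wwpn>:<wwnn>") on HBA host5:
    echo "2001001b32a9655d:2000001b32a9655d" > \
        /sys/class/fc_host/host5/vport_create

    # LUNs zoned to the new WWPN show up as ordinary /dev/sd* devices,
    # which can then be dedicated to a single domU.

    # Remove the virtual port again:
    echo "2001001b32a9655d:2000001b32a9655d" > \
        /sys/class/fc_host/host5/vport_delete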
Joris Dobbelsteen wrote:
> What I would do is make all storage available to dom0 and use the
> regular methods to export it to the domU. [snip]
> I don't know whether you will lose any fibre-channel advantages, but I
> figure you also reduce the administrative complexity to dom0 only.

I did some tests, and it is actually quite slow in both reading and writing: roughly 50% of the dom0 speed when attached to the domU as a block device. It reduces complexity, but too much is lost in performance.

Thanks,
Jan
Gerhard Possler wrote:
> well, Joris's recommendation is valid anyway, and you will have fewer
> pains with it. But consider also NPIV (you will need a newer kernel,
> which may not be supported yet, plus Xen 3.1 and of course the right
> HBA (4 Gb)).

A newer kernel and a newer Xen version are unfortunately not an option in my scenario. NPIV??

Thanks,
Jan
>-----Original Message-----
>From: Jan Kalcic
>Subject: Re: [Xen-users] Exporting a PCI Device
>
>NPIV??

Take a tour on Google... First hit this time: http://en.wikipedia.org/wiki/NPIV
Seemed related.

[snip]
>-----Original Message-----
>From: Jan Kalcic
>Subject: Re: [Xen-users] Exporting a PCI Device
>
>I did some tests, and it is actually quite slow in both reading and
>writing: roughly 50% of the dom0 speed when attached to the domU as a
>block device. It reduces complexity, but too much is lost in
>performance.

For some odd reason I am seeing a similar thing on my box with attached RAID-0 & LVM: the domU only reaches 50%. I don't know the cause; I have only heard comments that it had to do with LVM oddities that RAID seemed to aggravate. Nevertheless, dom0 reaches full speed.

Coincidence, or a sign of deeper trouble?

My setup is just some standard (cheap) SATA disks; it is a personal system. It runs Xen 3.1.2, with the 2.6.20 kernel on dom0 and the Debian Etch 2.6.18 kernels on the guests. Tests were done with Bonnie++ (which is part of Debian Etch).

To rule out the virtual block device playing tricks, I would try to see what happens with a slower disk on the SAN, and even with a local disk (in the system itself).

- Joris
Joris Dobbelsteen wrote:
> For some odd reason I am seeing a similar thing on my box with
> attached RAID-0 & LVM: the domU only reaches 50%. [snip]
> To rule out the virtual block device playing tricks, I would try to
> see what happens with a slower disk on the SAN, and even with a local
> disk (in the system itself).

Hi Joris,

I did some tests on the local disk (software RAID) of another system, just to get an idea of the performance. A little is lost: dom0 tends to be slightly quicker, though the difference is not big enough to be problematic. It was different on the other system with the SAN, which I could not test; I'll do so soon and let you know. The output of the tests follows.

What do you mean when you say "deeper trouble"? Do you refer to the Xen code, or to something in the infrastructure configuration?

Thanks,
Jan

domU:

# dd if=/dev/zero of=/mnt/test count=1000 bs=1M   (three times)
1048576000 bytes (1.0 GB) copied, 18.7752 seconds, 55.8 MB/s
1048576000 bytes (1.0 GB) copied, 12.395 seconds, 84.6 MB/s
1048576000 bytes (1.0 GB) copied, 18.1577 seconds, 57.7 MB/s

# hdparm -t /dev/sdd1   (three times)
Timing buffered disk reads: 116 MB in 3.00 seconds = 38.62 MB/sec
Timing buffered disk reads: 156 MB in 3.01 seconds = 51.80 MB/sec
Timing buffered disk reads: 148 MB in 3.03 seconds = 48.85 MB/sec

-------------------------------------------------------------
dom0:

# dd if=/dev/zero of=/data/test count=1000 bs=1M   (three times)
1048576000 bytes (1.0 GB) copied, 13.2577 seconds, 79.1 MB/s
1048576000 bytes (1.0 GB) copied, 18.3124 seconds, 57.3 MB/s
1048576000 bytes (1.0 GB) copied, 12.244 seconds, 85.6 MB/s

# hdparm -t /dev/md3   (three times)
Timing buffered disk reads: 164 MB in 3.02 seconds = 54.37 MB/sec
Timing buffered disk reads: 172 MB in 3.01 seconds = 57.21 MB/sec
Timing buffered disk reads: 180 MB in 3.01 seconds = 59.85 MB/sec
Joris Dobbelsteen wrote:
> What I would do is make all storage available to dom0 and use the
> regular methods to export it to the domU.
> In other words: treat dom0 as a very fancy piece of hardware that sits
> between your kernel and the fibre-channel attached storage.

Exporting the device to the domU as a physical block device, let's suppose /dev/sda: when I migrate the VM to another server, I have the problem that the device has to be re-attached, don't I? Or, if the same device also exists on the other server, does the VM keep that device attached?

Thanks,
Jan
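A sketch of how this usually works out, not an authoritative answer: live migration moves only memory and CPU state, so the backend path named in the domU config must resolve to the same LUN on the destination host. Stable device names help; the by-id path below is a made-up example.

    # domU config: name the LUN by an identifier that is the same on
    # every host that may run this guest (the ID below is hypothetical):
    disk = [ 'phy:/dev/disk/by-id/scsi-3600508b400015000,xvda,w' ]

    # With the LUN visible under the same path on both hosts:
    #   xm migrate --live mydomu otherhost
    # Inside the guest the device stays /dev/xvda throughout.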
On Feb 19, 2008 8:30 AM, Jan Kalcic <jandot@googlemail.com> wrote:
> Exporting the device to the domU as a physical block device, let's
> suppose /dev/sda: when I migrate the VM to another server, I have the
> problem that the device has to be re-attached, don't I? Or, if the
> same device also exists on the other server, does the VM keep that
> device attached?

I've gotten confused about your exact setup and what you are testing, but one word of caution: be sure not to mount the same file system twice, as that is sure to lead to data corruption.

Thanks,
Todd
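Xen's tools enforce this to a degree: by default, xm refuses to attach a device writable to more than one domain at once. A hedged sketch (the device name is made up); the trailing "!" deliberately overrides the safety check and is only sane with a cluster-aware filesystem.

    # Normal, safe attachment; xm refuses if /dev/sdc is already
    # attached writable elsewhere:
    disk = [ 'phy:/dev/sdc,xvda,w' ]

    # Force shared writable access (dangerous with ext3 and friends;
    # acceptable only with cluster filesystems such as OCFS2 or GFS):
    disk = [ 'phy:/dev/sdc,xvda,w!' ]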
>-----Original Message-----
>From: Jan Kalcic
>Subject: Re: [Xen-users] Exporting a PCI Device
>
>I did some tests on the local disk (software RAID) of another system,
>just to get an idea of the performance. [snip]

Jan,

Please try to use bonnie++ or some other specialized benchmarking tool. "dd" is not really meant for benchmarking, and you will see effects from caching and lazy writeback algorithms. Bonnie++ tries to mitigate these effects by making the data set at least twice the memory size (and by performing explicit write barriers). Also, hdparm is intended to be used on real disks, not on virtual devices; you might get strange effects there.

What IS a very good thing is that you did multiple runs, but these show HUGE variations, up to 20% from the mean value, for both dom0 and domU. If everything is well, the variation between runs should be much smaller. You should also use the same areas of the disks for the tests: towards the end of a disk, transfer speeds drop.

>What do you mean when you say "deeper trouble"? Do you refer to the
>Xen code, or to something in the infrastructure configuration?

Yes, it could be that components in your system behave poorly when combined.
On this point, the results for dom0 and domU do seem to be similar for a local disk.

[snip: Jan's dd and hdparm output, quoted in full above]
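For reference, a minimal bonnie++ run along the lines Joris suggests might look like this (a sketch; the directory, size, and user are examples, and the -s value should be at least twice the RAM of the domain under test):

    # -d: directory on the filesystem under test
    # -s: total file size in MB (at least 2x RAM, to defeat the page cache)
    # -u: user to run as when started by root
    bonnie++ -d /mnt/test -s 2048 -u nobody

Running the identical invocation in dom0 and in the domU, against the same region of the same disk, would give a much fairer picture of the virtual block device overhead than the dd and hdparm figures above.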