Hi list,

I've read about recent efforts to push pv-on-hvm drivers to Linux mainline and I'm curious to know the motivation behind this. What's the advantage over using pv_ops directly and booting the kernel paravirtualized? Are there plans to move Linux domUs closer to the KVM way (from an architectural point of view)?

Hope you can help.

Regards,
Markus
On Sun, Sep 05, 2010 at 01:29:19AM +0200, Markus Schuster wrote:
> Hi list,
>
> I've read about recent efforts to push pv-on-hvm drivers to Linux mainline
> and I'm curious to know the motivation behind this. What's the advantage
> over using pv_ops directly and booting the kernel paravirtualized?
> Are there plans to move Linux domUs closer to the KVM way (from an
> architectural point of view)?
>
> Hope you can help.

Some operating systems might be easier to install as Xen HVM guests. The other point is performance: 32-bit PV (paravirtualized) guests perform OK, but 64-bit PV guests take a performance hit if your workload creates a lot of new processes in the guest. HVM helps there; 64-bit Linux guests might be faster as HVM, depending on the workload.

When running Xen HVM guests you obviously need the PV-on-HVM drivers, otherwise disk/net IO will be really slow. That's why Xen developers are upstreaming the PV-on-HVM drivers now: to make it easy for every distro to ship them, since they'll be included automatically in the upstream Linux kernel (starting from Linux 2.6.36).
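If you want to try it, a minimal xm config for an HVM guest looks roughly like this (the name, memory size, disk path and bridge below are just examples, adjust them for your setup):

  # example values only - adjust paths/names for your environment
  kernel  = "/usr/lib/xen/boot/hvmloader"
  builder = "hvm"
  name    = "hvm-guest"
  memory  = 1024
  vif     = [ "type=ioemu, bridge=xenbr0" ]
  disk    = [ "phy:/dev/vg0/hvm-guest,hda,w" ]
  boot    = "c"
  vnc     = 1

With the PV-on-HVM drivers loaded inside the guest, disk and network traffic then goes through blkfront/netfront instead of the slow emulated devices.

-- Pasi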
In general, you definitely want to use PV kernels for Linux or any operating system that supports them (Linux, Solaris, *BSD). However, there are a few scenarios where you may not be able to run a PV kernel and may actually need to install HVM, even for an OS that supports PV. The primary example I can think of is a proprietary kernel module (for a piece of hardware or a software application) that is not compiled for Xen-enabled kernels. In that scenario you're forced to run a kernel that is not Xen-aware, but you still want to accelerate as many devices in the HVM guest as possible, mainly network and disk.
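A quick way to check that the PV drivers are actually in use inside such a guest is something like this (assuming the mainline module names; the drivers may also be built into the kernel, and older out-of-tree driver packages use different names):

  # inside the HVM guest - are the PV frontends loaded?
  lsmod | grep xen
  # PV disks show up as /dev/xvd* rather than the emulated /dev/hd*
  ls -l /dev/xvd*

-Nick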
Pasi Kärkkäinen wrote:
> On Sun, Sep 05, 2010 at 01:29:19AM +0200, Markus Schuster wrote:
>> Hi list,
>>
>> I've read about recent efforts to push pv-on-hvm drivers to Linux
>> mainline and I'm curious to know the motivation behind this. What's the
>> advantage over using pv_ops directly and booting the kernel
>> paravirtualized?
>
> The other point is performance: 32-bit PV (paravirtualized) guests
> perform OK, but 64-bit PV guests take a performance hit if your
> workload creates a lot of new processes in the guest.
>
> HVM helps there; 64-bit Linux guests might be faster as HVM,
> depending on the workload.

Hi Pasi, thanks for your (as usual :)) good answer.
That's the first time I've read about a PV performance hit compared to HVM - maybe you (or someone else) can write a few words about what's causing it? Could be interesting for other people.

Regards,
Markus
On Mon, Sep 06, 2010 at 10:59:26AM +0200, Markus Schuster wrote:
> Pasi Kärkkäinen wrote:
> [...]
> > HVM helps there; 64-bit Linux guests might be faster as HVM,
> > depending on the workload.
>
> Hi Pasi, thanks for your (as usual :)) good answer.
> That's the first time I've read about a PV performance hit compared to
> HVM - maybe you (or someone else) can write a few words about what's
> causing it? Could be interesting for other people.

I think there are some XenSummit presentations about it on the xen.org website. It has to do with 32-bit vs 64-bit architecture differences related to memory management.

Every time a new process is created by the 64-bit PV kernel, the guest process pagetables need to be verified/checked by the hypervisor, and this causes a performance hit if you need to create a lot of new processes in the guest.

It doesn't affect 'long running' processes in a 64-bit PV guest, i.e. the performance hit happens only when new processes are created often (kernel compilation, unixbench).

For an HVM guest that stuff is handled by the CPU/hardware, so there's no performance hit related to it. HVM guests have some other performance hits, though.

That's my understanding of it :)
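If you want to see the effect yourself, a fork-heavy microbenchmark is enough to show the difference - something like this (very rough, just to illustrate the workload type), run once in a 64-bit PV guest and once in an HVM guest:

  # spawns 5000 short-lived processes; process creation is the
  # expensive part for a 64-bit PV guest
  time bash -c 'for i in $(seq 1 5000); do /bin/true; done'

-- Pasi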
On Mon, Sep 06, 2010 at 12:55:05PM +0300, Pasi Kärkkäinen wrote:
> [...]
> For an HVM guest that stuff is handled by the CPU/hardware,
> so there's no performance hit related to it.
> HVM guests have some other performance hits, though.
>
> That's my understanding of it :)

I think this video interview of Keir Fraser covers that stuff:
http://blog.xen.org/index.php/2009/07/02/developer-interview-series/

From around 8 minutes into the video.

-- Pasi
Pasi Kärkkäinen wrote:
> HVM helps there; 64-bit Linux guests might be faster as HVM,
> depending on the workload.

Stupid HVM question... is hvmloader the only way to boot a guest in HVM mode? I'd give HVM a try, but our domU images are stripped to the bone. Some likely don't have grub, as they were intended to boot only as PV guests (i.e. by specifying kernel/ramdisk). If I can enable HVM with a simple configuration setting I'll try it. If I have to make modifications to the guest, there's a bit more to it...
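For reference, our PV configs boot the guests directly from dom0 with something like this (paths illustrative):

  # kernel/ramdisk live in dom0, not in the guest image
  kernel  = "/boot/vmlinuz-2.6-xen"
  ramdisk = "/boot/initrd-2.6-xen.img"
  root    = "/dev/xvda1 ro"
  disk    = [ "phy:/dev/vg0/guest,xvda,w" ]

...so switching to hvmloader would mean each guest has to bring its own bootloader.

-Jeff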
Pasi Kärkkäinen wrote:
> [...]
> I think this video interview of Keir Fraser covers that stuff:
> http://blog.xen.org/index.php/2009/07/02/developer-interview-series/
>
> From around 8 minutes into the video.

Thanks for the link, really interesting stuff. Maybe one should keep that in mind when setting up a domU. But pv-on-hvm finally makes sense to me :)

Regards,
Markus