With GPLPV under 2.6.30, GPLPV gets the following from the ring:

ring slot n (first buffer):
  status (length) = 54 bytes
  offset = 0
  flags = NETRXF_extra_info (possibly csum too but not relevant)
ring slot n + 1 (extra info):
  gso.size (mss) = 1460

Because NETRXF_extra_info is not set, that's all I get for that packet. In the IP header though, the total length is 1544 (which in itself is a little strange), but obviously I'm not getting a full packet, just the ETH+IP+TCP header.

According to Andrew Lyon it works fine in previous versions, so this problem only arises on 2.6.30. I don't know if netfront on Linux suffers from a similar problem.

I can't identify any changes that could cause this, but if the problem is in netback then either the frags count isn't being set correctly, or skb->cb (which appears to be used temporarily to hold nr_frags) is somehow becoming corrupt (set to 0). The window where that could occur is very small, though, and I can't see where it could happen.

Any suggestions as to where to start looking?

(One nice thing is that I have identified a crash that would occur when the IP header lied about its length!)

Thanks

James

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
Andrew Lyon
2009-Jul-18 18:28 UTC
[Xen-devel] Re: network misbehaviour with gplpv and 2.6.30
On Sat, Jul 18, 2009 at 4:42 AM, James Harper <james.harper@bendigoit.com.au> wrote:
> With GPLPV under 2.6.30, GPLPV gets the following from the ring:
> [...]
> Any suggestions as to where to start looking?

James,

I tried using the 2.6.29 netback.c with 2.6.30. I had to change a couple of calls to __mod_timer to use mod_timer instead, but after that it compiles and seems to work normally. It does not get rid of the problem, though.

I will keep trying to find the change that caused this problem.

Andy
Paul Durrant
2009-Jul-21 09:35 UTC
Re: [Xen-devel] network misbehaviour with gplpv and 2.6.30
James Harper wrote:
> ring slot n (first buffer):
>   status (length) = 54 bytes
>   offset = 0
>   flags = NETRXF_extra_info (possibly csum too but not relevant)
> ring slot n + 1 (extra info):
>   gso.size (mss) = 1460
>
> Because NETRXF_extra_info is not set, that's all I get for that packet.

I assume you mean NETRXF_more_data here? Are you saying that ring slot n has only NETRXF_extra_info and *not* NETRXF_more_data?

--
==============================
Paul Durrant, Software Engineer

Citrix Systems (R&D) Ltd.
First Floor, Building 101
Cambridge Science Park
Milton Road
Cambridge CB4 0FY
United Kingdom
TEL: x35957 (+44 1223 225957)
==============================
James Harper
2009-Jul-21 10:05 UTC
RE: [Xen-devel] network misbehaviour with gplpv and 2.6.30
> > Because NETRXF_extra_info is not set, that's all I get for that
> > packet.
>
> I assume you mean NETRXF_more_data here?

Oops. Yes, that's exactly what I mean.

> Are you saying that ring slot n has only NETRXF_extra_info and *not*
> NETRXF_more_data?

Yes. From the debug I have received from Andrew Lyon, NETRXF_more_data is _never_ set.

From what Andrew tells me (and it's not unlikely that I misunderstood), the packets in question come from a physical machine external to the machine running xen. I can't quite understand how that could be, as they are 'large' packets (>1514 bytes total packet length) which should only be locally originated. Unless he's running with jumbo frames (are you, Andrew?).

I've asked for some more debug info, but he's in a different timezone to me and probably isn't awake yet. I'm less and less inclined to think that this is actually a problem with GPLPV, and more a problem with netback (or a physical network driver) in 2.6.30, but a tcpdump in Dom0, in an HVM domain without GPLPV, and maybe in a Linux DomU should tell us more.

Thanks

James
Paul Durrant
2009-Jul-21 10:13 UTC
Re: [Xen-devel] network misbehaviour with gplpv and 2.6.30
James Harper wrote:
> Yes. From the debug I have received from Andrew Lyon, NETRXF_more_data
> is _never_ set.
>
> From what Andrew tells me (and it's not unlikely that I misunderstood),
> the packets in question come from a physical machine external to the
> machine running xen. [...] Unless he's running with jumbo frames (are
> you, Andrew?).

It's not unusual for h/w drivers to support 'LRO', i.e. they re-assemble consecutive in-order TCP segments into a large packet before passing it up the stack. I believe that these would manifest themselves as TSOs coming into the transmit side of netback, just as locally originated large packets would.

> I've asked for some more debug info [...] but a tcpdump in Dom0, in an
> HVM domain without GPLPV, and maybe in a Linux DomU should tell us
> more.

Yes, a tcpdump of what's being passed into netback in dom0 should tell us what's happening.

Paul
Nerijus Narmontas
2009-Jul-21 10:53 UTC
Re: [Xen-devel] network misbehaviour with gplpv and 2.6.30
Hello,

If I set (dom0-cpus 1) in /etc/xen/xend-config.sxp, then after I gracefully shut down a domU, the domain stays in ---s- state.

Is this fixed in 3.4.1-rc8?

Regards,
Nerijus N.
On Tue, Jul 21, 2009 at 01:53:25PM +0300, Nerijus Narmontas wrote:
> If I set (dom0-cpus 1) in /etc/xen/xend-config.sxp, after I gracefully
> shutdown domU, the domain stays in ---s- state.
>
> Is this fixed in 3.4.1-rc8?

Hello.

Please don't hijack threads - you replied to a thread about network problems and gplpv drivers. Always start a new thread for new subjects.

What version are you seeing this behaviour with? Xen 3.4.0? What dom0 kernel version?

-- Pasi
James Harper
2009-Jul-21 11:09 UTC
RE: [Xen-devel] network misbehaviour with gplpv and 2.6.30
> It's not unusual for h/w drivers to support 'LRO', i.e. they
> re-assemble consecutive in-order TCP segments into a large packet
> before passing it up the stack. I believe that these would manifest
> themselves as TSOs coming into the transmit side of netback, just as
> locally originated large packets would.

Interesting. My work with the Windows NDIS framework suggested that this must be very rare, as I couldn't find a way to make Windows accept 'large' packets. GPLPV actually has to break up the packets and checksum them.

Checksum is another thing that Windows is very fussy about. The checksum on rx has to be correct; there is no 'the data is good, don't worry about the checksum' flag. Windows seems to check it anyway and drop the packet if it is incorrect.

James
Fajar A. Nugraha
2009-Jul-22 02:11 UTC
Re: [Xen-users] Re: [Xen-devel] network misbehaviour with gplpv and 2.6.30
On Tue, Jul 21, 2009 at 5:53 PM, Nerijus Narmontas <n.narmontas@gmail.com> wrote:
> If I set (dom0-cpus 1) in /etc/xen/xend-config.sxp, after I gracefully
> shutdown domU, the domain stays in ---s- state.
> Is this fixed in 3.4.1-rc8?

I don't use 3.4.1, but if I remember correctly this is a bug in 3.2 (is that the version you're using?).

I'm using Redhat's 3.1+ and Gitco's 3.3.1 and 3.4.0, with (dom0-cpus 1) and dom0_vcpus_pin on xen.gz's grub.conf line, and it works correctly.

-- Fajar

_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
On Tue, Jul 21, 2009 at 2:01 PM, Pasi Kärkkäinen <pasik@iki.fi> wrote:
> Please don't hijack threads - you replied to a thread about network
> problems and gplpv drivers. Always start a new thread for new subjects.
>
> What version are you seeing this behaviour with? Xen 3.4.0? What dom0
> kernel version?

Sorry for the threads thing.

root@xen1:/# more /etc/xen/xend-config.sxp | grep cpu
# In SMP system, dom0 will use dom0-cpus # of CPUS
# If dom0-cpus = 0, dom0 will take all cpus available
(dom0-cpus 1)

root@xen1:/# xm dmesg | grep Command
(XEN) Command line: console=com2 com2=115200,8n1

root@xen1:/# xm dmesg | grep VCPUs
(XEN) Dom0 has maximum 8 VCPUs

root@xen1:/# xm vcpu-list
Name         ID  VCPU  CPU  State  Time(s)  CPU Affinity
Domain-0      0     0    5  r--        9.2  any cpu
Domain-0      0     1    -  --p        1.8  any cpu
Domain-0      0     2    -  --p        1.7  any cpu
Domain-0      0     3    -  --p        1.6  any cpu
Domain-0      0     4    -  --p        1.4  any cpu
Domain-0      0     5    -  --p        1.4  any cpu
Domain-0      0     6    -  --p        1.5  any cpu
Domain-0      0     7    -  --p        1.3  any cpu

root@xen1:/# xm create /etc/xen/dc3.conf
Using config file "/etc/xen/dc3.conf".
Started domain dc3 (id=1)

root@xen1:/# xm vcpu-list
Name         ID  VCPU  CPU  State  Time(s)  CPU Affinity
Domain-0      0     0    7  r--       36.5  any cpu
Domain-0      0     1    -  --p        1.8  any cpu
Domain-0      0     2    -  --p        1.7  any cpu
Domain-0      0     3    -  --p        1.6  any cpu
Domain-0      0     4    -  --p        1.4  any cpu
Domain-0      0     5    -  --p        1.4  any cpu
Domain-0      0     6    -  --p        1.5  any cpu
Domain-0      0     7    -  --p        1.3  any cpu
dc3           1     0    0  -b-       15.2  0
dc3           1     1    1  -b-        6.8  1
dc3           1     2    2  -b-        7.5  2
dc3           1     3    3  -b-        8.0  3

After HVM Windows domU shutdown, it stays in ---s- state.

root@xen1:/# xm li
Name         ID    Mem  VCPUs  State   Time(s)
Domain-0      0  24106      1  r-----     58.7
dc3           1   8192      4  ---s--     59.0

root@xen1:/# xm vcpu-list
Name         ID  VCPU  CPU  State  Time(s)  CPU Affinity
Domain-0      0     0    4  r--       48.4  any cpu
...
Domain-0      0     7    -  --p        1.3  any cpu
dc3           1     0    0  ---       20.0  0
dc3           1     1    1  ---       10.9  1
dc3           1     2    2  ---       15.2  2
dc3           1     3    3  ---       12.9  3

The problem goes away if I tell Xen to boot with the options dom0_max_vcpus=1 dom0_vcpus_pin.

What's the difference between using the Xen boot options to limit vcpus for dom0 and using /etc/xen/xend-config.sxp?

I am running Xen 3.4.1-rc6.
Pasi Kärkkäinen
2009-Jul-22 15:21 UTC
Re: [Xen-devel] dom0-cpus problem with Xen 3.4.1-rc6
On Wed, Jul 22, 2009 at 06:18:37PM +0300, Nerijus Narmontas wrote:
> Sorry for the threads thing.
> [...]
> The problem goes away if I tell Xen to boot with the options
> dom0_max_vcpus=1 dom0_vcpus_pin.
>
> What's the difference between using the Xen boot options to limit
> vcpus for dom0 and using /etc/xen/xend-config.sxp?
>
> I am running Xen 3.4.1-rc6.

OK.

What dom0 kernel version are you running?

-- Pasi
Nerijus Narmontas
2009-Jul-22 16:34 UTC
Re: [Xen-devel] dom0-cpus problem with Xen 3.4.1-rc6
On Wed, Jul 22, 2009 at 6:21 PM, Pasi Kärkkäinen <pasik@iki.fi> wrote:
> [...]
> What dom0 kernel version are you running?

From the Ubuntu hardy-backports repositories: 2.6.24-24-xen.
Pasi Kärkkäinen
2009-Jul-22 16:39 UTC
[Xen-users] Re: [Xen-devel] dom0-cpus problem with Xen 3.4.1-rc6
On Wed, Jul 22, 2009 at 07:34:16PM +0300, Nerijus Narmontas wrote:
> [...]
> From the Ubuntu hardy-backports repositories: 2.6.24-24-xen.

Maybe the dom0 kernel is your problem.. I remember there was a bug in the kernel that caused that kind of problem.

That hardy dom0 kernel is known to have other bugs as well.

If possible, try running the latest linux-2.6.18-xen from xenbits, or some other dom0 kernel, and see if that fixes the problem.

-- Pasi
Nerijus Narmontas
2009-Jul-22 16:42 UTC
[Xen-users] Re: [Xen-devel] dom0-cpus problem with Xen 3.4.1-rc6
On Wed, Jul 22, 2009 at 7:39 PM, Pasi Kärkkäinen <pasik@iki.fi> wrote:
> [...]
> If possible, try running the latest linux-2.6.18-xen from xenbits, or
> some other dom0 kernel, and see if that fixes the problem.

Ok, I will try to build the latest 2.6.18 kernel.

Can you tell me what's the difference between the Xen boot option dom0_max_vcpus=1 and the (dom0-cpus 1) option in /etc/xen/xend-config.sxp?
Pasi Kärkkäinen
2009-Jul-22 17:01 UTC
Re: [Xen-devel] dom0-cpus problem with Xen 3.4.1-rc6
On Wed, Jul 22, 2009 at 07:42:14PM +0300, Nerijus Narmontas wrote:> On Wed, Jul 22, 2009 at 7:39 PM, Pasi Kärkkäinen <pasik@iki.fi> wrote: > > > On Wed, Jul 22, 2009 at 07:34:16PM +0300, Nerijus Narmontas wrote: > > > On Wed, Jul 22, 2009 at 6:21 PM, Pasi Kärkkäinen <pasik@iki.fi> wrote: > > > > > > > On Wed, Jul 22, 2009 at 06:18:37PM +0300, Nerijus Narmontas wrote: > > > > > On Tue, Jul 21, 2009 at 2:01 PM, Pasi Kärkkäinen <pasik@iki.fi> > > wrote: > > > > > > > > > > > On Tue, Jul 21, 2009 at 01:53:25PM +0300, Nerijus Narmontas wrote: > > > > > > > Hello, > > > > > > > If I set (dom0-cpus 1) in /etc/xen/xend-config.sxp, after I > > > > gracefully > > > > > > > shutdown domU, the domain stays in ---s- state. > > > > > > > > > > > > > > Is this fixed in 3.4.1-rc8? > > > > > > > > > > > > > > > > > > > Hello. > > > > > > > > > > > > Please don''t hijack threads - you replied to a thread about network > > > > > > problems > > > > > > and gplpv drivers. Always start a new thread for new subjects. > > > > > > > > > > > > What version are you seeing this behaviour with? Xen 3.4.0 ? What > > dom0 > > > > > > kernel version? > > > > > > > > > > > > -- Pasi > > > > > > > > > > > > > > > > Sorry for the threads thing. 
> > > > > > > > > > root@xen1:/# more /etc/xen/xend-config.sxp | grep cpu > > > > > # In SMP system, dom0 will use dom0-cpus # of CPUS > > > > > # If dom0-cpus = 0, dom0 will take all cpus available > > > > > (dom0-cpus 1) > > > > > > > > > > root@xen1:/# xm dmesg | grep Command > > > > > (XEN) Command line: console=com2 com2=115200,8n1 > > > > > > > > > > root@xen1:/# xm dmesg | grep VCPUs > > > > > (XEN) Dom0 has maximum 8 VCPUs > > > > > > > > > > root@xen1:/# xm vcpu-list > > > > > Name ID VCPU CPU State Time(s) > > CPU > > > > > Affinity > > > > > Domain-0 0 0 5 r-- 9.2 > > any > > > > cpu > > > > > Domain-0 0 1 - --p 1.8 > > any > > > > cpu > > > > > Domain-0 0 2 - --p 1.7 > > any > > > > cpu > > > > > Domain-0 0 3 - --p 1.6 > > any > > > > cpu > > > > > Domain-0 0 4 - --p 1.4 > > any > > > > cpu > > > > > Domain-0 0 5 - --p 1.4 > > any > > > > cpu > > > > > Domain-0 0 6 - --p 1.5 > > any > > > > cpu > > > > > Domain-0 0 7 - --p 1.3 > > any > > > > cpu > > > > > > > > > > root@xen1:/# xm create /etc/xen/dc3.conf > > > > > Using config file "/etc/xen/dc3.conf". > > > > > Started domain dc3 (id=1) > > > > > > > > > > root@xen1:/# xm vcpu-list > > > > > Name ID VCPU CPU State Time(s) > > CPU > > > > > Affinity > > > > > Domain-0 0 0 7 r-- 36.5 > > any > > > > cpu > > > > > Domain-0 0 1 - --p 1.8 > > any > > > > cpu > > > > > Domain-0 0 2 - --p 1.7 > > any > > > > cpu > > > > > Domain-0 0 3 - --p 1.6 > > any > > > > cpu > > > > > Domain-0 0 4 - --p 1.4 > > any > > > > cpu > > > > > Domain-0 0 5 - --p 1.4 > > any > > > > cpu > > > > > Domain-0 0 6 - --p 1.5 > > any > > > > cpu > > > > > Domain-0 0 7 - --p 1.3 > > any > > > > cpu > > > > > dc3 1 0 0 -b- 15.2 0 > > > > > dc3 1 1 1 -b- 6.8 1 > > > > > dc3 1 2 2 -b- 7.5 2 > > > > > dc3 1 3 3 -b- 8.0 3 > > > > > > > > > > After HVM Windows domU shutdown, it stays in ---s- state. 
> > > > > root@xen1:/# xm li
> > > > > Name       ID    Mem  VCPUs  State   Time(s)
> > > > > Domain-0    0  24106      1  r-----     58.7
> > > > > dc3         1   8192      4  ---s--     59.0
> > > > >
> > > > > root@xen1:/# xm vcpu-list
> > > > > Name       ID  VCPU  CPU  State  Time(s)  CPU Affinity
> > > > > Domain-0    0     0    4  r--       48.4  any cpu
> > > > > ...
> > > > > Domain-0    0     7    -  --p        1.3  any cpu
> > > > > dc3         1     0    0  ---       20.0  0
> > > > > dc3         1     1    1  ---       10.9  1
> > > > > dc3         1     2    2  ---       15.2  2
> > > > > dc3         1     3    3  ---       12.9  3
> > > > >
> > > > > The problem goes away if I tell Xen to boot with options dom0_max_vcpus=1
> > > > > dom0_vcpus_pin.
> > > > >
> > > > > What's the difference between the Xen boot options to limit vcpus for
> > > > > dom0 and /etc/xen/xend-config.sxp?
> > > > >
> > > > > I am running Xen 3.4.1-rc6.
> > > >
> > > > OK.
> > > >
> > > > What dom0 kernel version are you running?
> > > >
> > > > -- Pasi
> > >
> > > From Ubuntu hardy-backports repositories 2.6.24-24-xen.
> >
> > Maybe the dom0 kernel is your problem.. I remember there was a bug in the
> > kernel that caused that kind of problem.
> >
> > That hardy dom0 kernel is known to have other bugs as well.
> >
> > If possible, try running the latest linux-2.6.18-xen from xenbits.
> > Or some other dom0 kernel, and see if that fixes the problem.
> >
> > -- Pasi
>
> Ok I will try to build the latest 2.6.18 kernel.

hg clone http://xenbits.xen.org/linux-2.6.18-xen.hg

> Can you tell me what's the difference between the Xen boot
> option dom0_max_vcpus=1 and the (dom0-cpus 1) option
> in /etc/xen/xend-config.sxp?

If I haven't misunderstood, this dom0-cpus option in xend-config.sxp tells
which physical CPUs/cores the vcpus of dom0 will use..

ie. if you limit dom0_max_vcpus=1, then you can use dom0-cpus to tell which
one of the 8 available cpus/cores dom0's 1 vcpu will run on.
So you can use that option to dedicate a core for dom0, and then use the
cpus= option for other domains to make them use other cores.. and this way
you'll be able to dedicate a core _only_ for dom0.

But yeah, I don't know why you're seeing problems with shutting down HVM
domains.. sounds like a bug, like I said earlier..

-- Pasi

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
Pasi Kärkkäinen
2009-Jul-22 17:08 UTC
[Xen-users] Re: [Xen-devel] dom0-cpus problem with Xen 3.4.1-rc6
On Wed, Jul 22, 2009 at 08:01:02PM +0300, Pasi Kärkkäinen wrote:
>
> So you can use that option to dedicate a core for dom0, and then use the
> cpus= option for other domains to make them use other cores.. and this way
> you'll be able to dedicate a core _only_ for dom0.
>

http://lists.xensource.com/archives/html/xen-users/2009-06/msg00037.html

Explained better there..

-- Pasi

_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
On 22/07/2009 18:01, "Pasi Kärkkäinen" <pasik@iki.fi> wrote:

>> Can you tell me what's the difference between the Xen boot
>> option dom0_max_vcpus=1 and the (dom0-cpus 1) option
>> in /etc/xen/xend-config.sxp?
>
> If I haven't misunderstood, this dom0-cpus option in xend-config.sxp tells
> which physical CPUs/cores the vcpus of dom0 will use..
>
> ie. if you limit dom0_max_vcpus=1, then you can use dom0-cpus to tell which
> one of the 8 available cpus/cores dom0's 1 vcpu will run on.

dom0-cpus does the same as dom0_max_vcpus -- it specifies the number of VCPUs
dom0 should run with. The difference is that with dom0_max_vcpus=1 that is
all the dom0 kernel will detect and boot with: you cannot subsequently
enable more. With (dom0-cpus 1), dom0 will boot with a vcpu for every host
cpu (by default) and then hot-unplug/offline all but one vcpu when xend
starts. The latter is obviously a more complex operation, but it can be
reverted (i.e., you could online some of those vcpus at a later time).

 -- Keir

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
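The hot-unplug step Keir describes can be sketched as a dry-run script: with (dom0-cpus 1), xend offlines every dom0 vcpu above the first by writing "0" to its sysfs online file. The function below only computes and prints the writes; the paths and the `dry_run` flag are illustrative, not xend's actual code.

```python
# Dry-run sketch of xend's (dom0-cpus N) handling: offline every dom0 vcpu
# above N via /sys/devices/system/cpu/cpuX/online. Illustrative only.

def offline_extra_vcpus(keep, total, dry_run=True):
    """Return the sysfs writes needed to go from `total` online vcpus to `keep`."""
    writes = []
    for cpu in range(keep, total):
        path = "/sys/devices/system/cpu/cpu%d/online" % cpu
        writes.append((path, "0"))
        if not dry_run:
            # Requires a real dom0; takes the vcpu offline (shows as --p
            # in `xm vcpu-list`, but it stays registered to dom0 in Xen).
            with open(path, "w") as f:
                f.write("0")
    return writes

for path, value in offline_extra_vcpus(keep=1, total=8):
    print("echo %s > %s" % (value, path))
```

Running the dry run for the 8-vcpu dom0 in this thread prints seven `echo 0 > .../cpuN/online` commands, for cpu1 through cpu7.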
Pasi Kärkkäinen
2009-Jul-22 17:29 UTC
[Xen-users] Re: [Xen-devel] dom0-cpus problem with Xen 3.4.1-rc6
On Wed, Jul 22, 2009 at 06:15:05PM +0100, Keir Fraser wrote:
> dom0-cpus does the same as dom0_max_vcpus -- it specifies the number of VCPUs
> dom0 should run with. The difference is that with dom0_max_vcpus=1 that is
> all the dom0 kernel will detect and boot with: you cannot subsequently
> enable more. With (dom0-cpus 1), dom0 will boot with a vcpu for every host
> cpu (by default) and then hot-unplug/offline all but one vcpu when xend
> starts.

Hmm, so 'dom0-cpus' in xend-config.sxp doesn't limit what physical CPUs
the VCPUs of dom0 can run on?

Then many people have gotten that wrong.. :)

-- Pasi

_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
Pasi Kärkkäinen
2009-Jul-22 17:30 UTC
Re: [Xen-devel] dom0-cpus problem with Xen 3.4.1-rc6 / HVM domains don't die
On Wed, Jul 22, 2009 at 08:01:02PM +0300, Pasi Kärkkäinen wrote:
>
> But yeah, I don't know why you're seeing problems with shutting down HVM
> domains.. sounds like a bug, like I said earlier..
>

And I meant this bug:
http://lists.xensource.com/archives/html/xen-devel/2009-01/msg00004.html

"Domains don't die, they just stay in the 's' state until you 'xm destroy' them"

And a fix/patch to the dom0 kernel here:
http://lists.xensource.com/archives/html/xen-devel/2009-01/msg00050.html

-- Pasi

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
On 22/07/2009 18:29, "Pasi Kärkkäinen" <pasik@iki.fi> wrote:

> Hmm, so 'dom0-cpus' in xend-config.sxp doesn't limit what physical CPUs
> the VCPUs of dom0 can run on?

No, there's no way to configure affinity in the xend config file. You'd have
to issue 'xm vcpu-pin' commands after xend is started.

 -- Keir

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
Keir Fraser
2009-Jul-22 17:55 UTC
[Xen-users] Re: [Xen-devel] dom0-cpus problem with Xen 3.4.1-rc6
On 22/07/2009 18:08, "Pasi Kärkkäinen" <pasik@iki.fi> wrote:

>> So you can use that option to dedicate a core for dom0, and then use the
>> cpus= option for other domains to make them use other cores.. and this way
>> you'll be able to dedicate a core _only_ for dom0.
>
> http://lists.xensource.com/archives/html/xen-users/2009-06/msg00037.html
>
> Explained better there..

The above-cited posting is mostly correct. In particular cpus= in a guest
config does behave as you think, whereas (dom0-cpus 1) will cause dom0 to
enable only one vcpu for itself. However, it is not true that by default
each dom0 vcpu is pinned to its equivalently numbered physical cpu. To get
that behaviour you must either configure it via 'xm vcpu-pin' commands, or
specify dom0_vcpus_pin as a Xen boot parameter.

 -- Keir

_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
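The boot-time alternative Keir mentions goes on the hypervisor line in the bootloader config. A sketch of a grub menu.lst entry under assumptions (the kernel/initrd file names are illustrative; only the xen.gz parameters and the serial console options come from this thread):

```text
# /boot/grub/menu.lst -- illustrative entry; kernel/initrd names are assumptions
title Xen 3.4.1
    root (hd0,0)
    kernel /boot/xen.gz dom0_max_vcpus=1 dom0_vcpus_pin console=com2 com2=115200,8n1
    module /boot/vmlinuz-2.6.18-xen root=/dev/sda1 ro
    module /boot/initrd-2.6.18-xen.img
```

With this in place, a guest config can additionally keep its vcpus off dom0's core with a line such as `cpus = "1-7"`, giving dom0 a dedicated physical cpu.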
Pasi Kärkkäinen
2009-Jul-22 17:59 UTC
Re: [Xen-users] Re: [Xen-devel] network misbehaviour with gplpv and 2.6.30 / HVM domains not shutting down with Xen 3.4.1-rc6 when dom0-cpus is 1
On Wed, Jul 22, 2009 at 09:11:32AM +0700, Fajar A. Nugraha wrote:
> On Tue, Jul 21, 2009 at 5:53 PM, Nerijus Narmontas<n.narmontas@gmail.com> wrote:
> > Hello,
> > If I set (dom0-cpus 1) in /etc/xen/xend-config.sxp, after I gracefully
> > shutdown domU, the domain stays in ---s- state.
> >
> > Is this fixed in 3.4.1-rc8?
>
> I don't use 3.4.1, but if I remember correctly this is a bug in 3.2
> (is that the version you're using?)
>
> I'm using Redhat's 3.1+ and Gitco's 3.3.1 and 3.4.0, with (dom0-cpus 1)
> and dom0_vcpus_pin on xen.gz's grub.conf line, and it works correctly.

I _think_ this is not a Xen hypervisor bug, but a dom0 kernel bug:
http://lists.xensource.com/archives/html/xen-devel/2009-07/msg00871.html

-- Pasi

_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
Pasi Kärkkäinen
2009-Jul-22 18:03 UTC
[Xen-users] Re: [Xen-devel] dom0-cpus problem with Xen 3.4.1-rc6
On Wed, Jul 22, 2009 at 06:46:00PM +0100, Keir Fraser wrote:
> On 22/07/2009 18:29, "Pasi Kärkkäinen" <pasik@iki.fi> wrote:
>
> > Hmm, so 'dom0-cpus' in xend-config.sxp doesn't limit what physical CPUs
> > the VCPUs of dom0 can run on?
>
> No, there's no way to configure affinity in the xend config file. You'd have
> to issue 'xm vcpu-pin' commands after xend is started.

Ok, thanks for clarifying that. I was already checking the xend-config.sxp
docs and figured out it's correctly written/described there.

-- Pasi

_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
I didn't see the original question; but the "problem" seems to be that
when using xend, xm vcpu-list still shows 8 vcpus for dom0?

The number of cpus for a domain is assigned at creation; in domain 0's
case, this is at boot, necessarily before xend runs.

I suspect what (dom0-cpus 1) does is tell xend to unplug all cpus
except one, by writing "0" into /sys/.../cpus/[1-7]/online. This will
tell dom0 to take vcpus 1-7 offline, which will put them in a "paused"
state (as you can see from xm vcpu-list); but they're still registered
to dom0 in Xen, and still available to be brought online at any time.

Setting the boot parameter will change the number of vcpus assigned at
VM creation.

 -George

On Wed, Jul 22, 2009 at 4:18 PM, Nerijus Narmontas<n.narmontas@gmail.com> wrote:
> On Tue, Jul 21, 2009 at 2:01 PM, Pasi Kärkkäinen <pasik@iki.fi> wrote:
> >
> > On Tue, Jul 21, 2009 at 01:53:25PM +0300, Nerijus Narmontas wrote:
> > > Hello,
> > > If I set (dom0-cpus 1) in /etc/xen/xend-config.sxp, after I gracefully
> > > shutdown domU, the domain stays in ---s- state.
> > >
> > > Is this fixed in 3.4.1-rc8?
> >
> > Please don't hijack threads - you replied to a thread about network
> > problems and gplpv drivers. Always start a new thread for new subjects.
> >
> > What version are you seeing this behaviour with? Xen 3.4.0? What dom0
> > kernel version?
> >
> > -- Pasi
>
> Sorry for the threads thing.
_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
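George's point above — that offlined dom0 vcpus still show up in `xm vcpu-list`, just in a paused state — can be checked mechanically. A sketch that parses `xm vcpu-list` output and counts registered versus online vcpus for one domain; the sample data and column layout are taken from the listings quoted earlier in this thread, and the parsing is an assumption about that format, not an xm API.

```python
# Count registered vs. online vcpus for a domain by parsing `xm vcpu-list`
# output. A paused/offlined vcpu shows a 'p' in its State column (e.g. --p).

SAMPLE = """\
Name       ID  VCPU  CPU  State  Time(s)  CPU Affinity
Domain-0    0     0    5  r--        9.2  any cpu
Domain-0    0     1    -  --p        1.8  any cpu
Domain-0    0     2    -  --p        1.7  any cpu
"""

def vcpu_states(listing, domain):
    """Return (registered, online) vcpu counts for `domain`."""
    registered = online = 0
    for line in listing.splitlines()[1:]:      # skip the header row
        fields = line.split()
        if fields and fields[0] == domain:
            registered += 1
            if "p" not in fields[4]:           # State column; 'p' means paused
                online += 1
    return registered, online

print(vcpu_states(SAMPLE, "Domain-0"))         # -> (3, 1)
```

On the 8-vcpu dom0 shown earlier, this would report 8 registered but only 1 online vcpu after xend applies (dom0-cpus 1), which is exactly the `--p` pattern in the quoted output.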
On Thu, Jul 23, 2009 at 10:39:27AM +0100, George Dunlap wrote:
> I suspect what (dom0-cpus 1) does is tell xend to unplug all cpus
> except one, by writing "0" into /sys/.../cpus/[1-7]/online. This will
> tell dom0 to take vcpus 1-7 offline, which will put them in a "paused"
> state (as you can see from xm vcpu-list); but they're still registered
> to dom0 in Xen, and still available to be brought online at any time.
>
> Setting the boot parameter will change the number of vcpus assigned at
> VM creation.

Yep, thanks for explaining that.

Although the original problem was that when you specify (dom0-cpus 1) you
cannot stop HVM domains anymore - they just get stuck and stay in the 's'
state.

I believe it's because the user is running the Ubuntu 2.6.24 kernel in dom0,
which most probably has this bug:
http://lists.xensource.com/archives/html/xen-devel/2009-01/msg00004.html

"Domains don't die, they just stay in the 's' state until you 'xm destroy' them"

And a fix/patch to the dom0 kernel here:
http://lists.xensource.com/archives/html/xen-devel/2009-01/msg00050.html

-- Pasi
_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
Pasi Kärkkäinen
2009-Jul-23 13:56 UTC
Re: [Xen-users] Re: [Xen-devel] dom0-cpus problem with Xen 3.4.1-rc6 / HVM domains don''t die
On Wed, Jul 22, 2009 at 08:30:57PM +0300, Pasi Kärkkäinen wrote:
> On Wed, Jul 22, 2009 at 08:01:02PM +0300, Pasi Kärkkäinen wrote:
> >
> > But yeah, I don't know why you're seeing problems with shutting down HVM
> > domains.. sounds like a bug, like I said earlier..
> >
>
> And I meant this bug:
> http://lists.xensource.com/archives/html/xen-devel/2009-01/msg00004.html
>
> "Domains don't die, they just stay in the 's' state until you 'xm destroy' them"
>
> And a fix/patch to the dom0 kernel here:
> http://lists.xensource.com/archives/html/xen-devel/2009-01/msg00050.html

And here's the fix/patch in linux-2.6.18-xen.hg:
http://xenbits.xen.org/linux-2.6.18-xen.hg?rev/79e82ae1bad0

-- Pasi

_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
Andrew Lyon
2009-Jul-29 09:48 UTC
Re: [Xen-devel] network misbehaviour with gplpv and 2.6.30
On Tue, Jul 21, 2009 at 11:13 AM, Paul Durrant<paul.durrant@citrix.com> wrote:> James Harper wrote: >> >>> Are you saying that ring slot n >>> has only NETRXF_extra_info and *not* NETRXF_more_data? >>> >> >> Yes. From the debug I have received from Andrew Lyon, NETRXF_more_data >> is _never_ set. >> >> From what Andrew tells me (and it''s not unlikely that I misunderstood), >> the packets in question come from a physical machine external to the >> machine running xen. I can''t quite understand how that could be as they >> are ''large'' packets (>1514 byte total packet length) which should only >> be locally originated. Unless he''s running with jumbo frames (are you >> Andrew?). >> > > It''s not unusual for h/w drivers to support ''LRO'', i.e. they re-assemble > consecutive in-order TCP segments into a large packet before passing up the > stack. I believe that these would manifest themselves as TSOs coming into > the transmit side of netback, just as locally originated large packets > would. > >> I''ve asked for some more debug info but he''s in a different timezone to >> me and probably isn''t awake yet. I''m less and less inclined to think >> that this is actually a problem with GPLPV and more a problem with >> netback (or a physical network driver) in 2.6.30, but a tcpdump in Dom0, >> HVM without GPLPV and maybe in a Linux DomU should tell us more. >> > > Yes, a tcpdump of what''s being passed into netback in dom0 should tell us > what''s happening. 
> > Paul >I did more testing including running various wireshark captures which James looked at, the problem is not the gplpv drivers as it also affects the linux pv netfront driver, it seems to be a dom0 problem, packets arrive with frame.len < 72 but ip.len > 72 which of course causes terrible throughput in domU networking, and also crashed the gplpv drivers until James added a check for the condition (see http://xenbits.xensource.com/ext/win-pvdrivers.hg?rev/0436238bcda5), now it triggers a warning message, for example: XenNet XN_HDR_SIZE + ip4_length (2974) > total_length (54) Yesterday I noticed something quite interesting, if I switch off receive checksum offloading on the dom0 nic (ethtool -K peth0 rx off) the network performance in domU is much improved, but something is still wrong because some network performance tests are still very slow, and a different warning message is triggered in the Xennet driver: XenNet Size Mismatch 54 (ip4_length + XN_HDR_SIZE) != 60 (total_length) Now the really strange thing is that if I re-enable rx checksum offload (ethtool -K peth0 rx on) everything works perfectly, networking throughput is the same as with 2.6.29 and no warning messages are triggered in the Xennet driver. The dom0 NIC is a 82575EB, I have tried using both the 1.3.16-k2 driver which is included in 2.6.30, and the 1.3.19.3 which I downloaded from Intel''s support site, I will try another nic if I can find one. I don''t understand how toggling rx offload off and on can fix the problem but it does. Andy _______________________________________________ Xen-devel mailing list Xen-devel@lists.xensource.com http://lists.xensource.com/xen-devel