Ian Pratt
2005-Jun-03 22:06 UTC
RE: [Xen-devel] [PATCH] Yield to VCPU hcall, spinlock yielding
> > I'll take a look at pft. Does it use futexes, or is it just
> > contending for spinlocks in the kernel?
>
> It contends for spinlocks in kernel.

Sounds like this will be a good benchmark. Does it generate a
performance figure as it runs? (e.g. iterations per second or the
like)

> > Thanks, I did look at the graphs at the time. As I recall, the
> > notification mechanism was beginning to look somewhat expensive
> > under high context switch loads induced by IO. We'll have to see
> > what the cost
>
> Yes. One of the tweaks we are looking to do is change the IO
> operation from kernel space (responding to an icmp packet happens
> within the kernel) to something that is more IO realistic, which
> would involve more time per operation, like sending a message over
> tcp (echo server or something like that).

Running a parallel UDP ping-pong test might be good.

> > BTW: it would be really great if you could work up a patch to
> > enable xm/xend to add/remove VCPUs from a domain.
>
> OK. I have an older patch that I'll bring up-to-date.

Great, thanks.

> Here is a list of things that I think we should do with add/remove.
>
> 1. Fix cpu_down() to tell Xen to remove the vcpu from its
>    list of runnable domains. Currently a "down" vcpu only
>    yields its timeslice back.
>
> 2. Fix cpu_up() to have Xen make the target vcpu runnable again.
>
> 3. Add cpu_remove(), which removes the cpu from Linux and
>    removes the vcpu in Xen.
>
> 4. Add cpu_add(), which boots another vcpu and then brings it
>    up as another cpu in Linux.
>
> I expect cpu_up/cpu_down to be more light-weight than
> cpu_add/cpu_remove.
>
> Does that sound reasonable? Do we want all four, or can we
> live with just 1 and 2?

It's been a while since I looked at Xen's boot_vcpu code (which could
do with a bit of refactoring between common and arch anyhow), but I
don't recall there being anything in there that looked particularly
expensive. Having said that, it's only holding down a couple of KB of
memory, so maybe we just need up/down/add.

Thanks,
Ian
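For illustration, items 1 and 2 of the list above might look roughly
like this on the Linux guest side. This is only a sketch: the
HYPERVISOR_vcpu_op hypercall and the VCPUOP_down/VCPUOP_up commands
are assumed names for the purpose of this example, not necessarily the
interface the eventual patch used.

/* Illustrative sketch only: HYPERVISOR_vcpu_op, VCPUOP_down and
 * VCPUOP_up are assumed names, not a confirmed Xen interface. */

/* Item 1: a "down" vcpu should be removed from Xen's runnable set,
 * not merely keep yielding its timeslice back to the hypervisor. */
static int xen_cpu_down(unsigned int cpu)
{
    return HYPERVISOR_vcpu_op(VCPUOP_down, cpu, NULL);
}

/* Item 2: ask Xen to make the target vcpu runnable again. */
static int xen_cpu_up(unsigned int cpu)
{
    return HYPERVISOR_vcpu_op(VCPUOP_up, cpu, NULL);
}

The appeal of this split is that up/down leave the vcpu structure
allocated in Xen, which is why they can be lighter-weight than a full
add/remove.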
Ryan Harper
2005-Jun-03 22:52 UTC
Re: [Xen-devel] [PATCH] Yield to VCPU hcall, spinlock yielding
* Ian Pratt <m+Ian.Pratt@cl.cam.ac.uk> [2005-06-03 17:41]:
> > > I'll take a look at pft. Does it use futexes, or is it just
> > > contending for spinlocks in the kernel?
> >
> > It contends for spinlocks in kernel.
>
> Sounds like this will be a good benchmark. Does it generate a
> performance figure as it runs? (e.g. iterations per second or the
> like)

Yes, here is some sample output:

#Gb  Rep  Thr  CLine  User   System  Wall   flt/cpu/s  fault/wsec
  0    5    8      1  2.30s  0.33s   1.05s  62296.578  104970.599

Gb  = gigabytes of memory (I used 128M)
Rep = repetitions of the test internally
Thr = number of test threads

I generally run this with one thread per VCPU and 128M of memory.

> > > Thanks, I did look at the graphs at the time. As I recall, the
> > > notification mechanism was beginning to look somewhat expensive
> > > under high context switch loads induced by IO. We'll have to see
> > > what the cost
> >
> > Yes. One of the tweaks we are looking to do is change the IO
> > operation from kernel space (responding to an icmp packet happens
> > within the kernel) to something that is more IO realistic, which
> > would involve more time per operation, like sending a message over
> > tcp (echo server or something like that).
>
> Running a parallel UDP ping-pong test might be good.

OK.

> > Here is a list of things that I think we should do with add/remove.
> >
> > 1. Fix cpu_down() to tell Xen to remove the vcpu from its
> >    list of runnable domains. Currently a "down" vcpu only
> >    yields its timeslice back.
> >
> > 2. Fix cpu_up() to have Xen make the target vcpu runnable again.
> >
> > 3. Add cpu_remove(), which removes the cpu from Linux and
> >    removes the vcpu in Xen.
> >
> > 4. Add cpu_add(), which boots another vcpu and then brings it
> >    up as another cpu in Linux.
> >
> > I expect cpu_up/cpu_down to be more light-weight than
> > cpu_add/cpu_remove.
> >
> > Does that sound reasonable? Do we want all four, or can we
> > live with just 1 and 2?
>
> It's been a while since I looked at Xen's boot_vcpu code (which could
> do with a bit of refactoring between common and arch anyhow), but I
> don't recall there being anything in there that looked particularly
> expensive. Having said that, it's only holding down a couple of KB of
> memory, so maybe we just need up/down/add.

Sounds good.

--
Ryan Harper
Software Engineer; Linux Technology Center
IBM Corp., Austin, Tx
(512) 838-9253   T/L: 678-9253
ryanh@us.ibm.com
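As a concrete (and generic) illustration of the UDP ping-pong idea, a
minimal echo loop like the one below, run once per VCPU against a
client that times round trips, would generate the kind of parallel
network IO load being discussed. This is not a tool from the thread,
just a sketch; the port number is arbitrary.

/* Minimal UDP echo server for a ping-pong style test: bounce each
 * datagram straight back to its sender.  Running one client/server
 * pair per vcpu gives a parallel IO load.  Port 9000 is arbitrary. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>

int main(void)
{
    char buf[1500];
    struct sockaddr_in addr = { 0 }, peer;
    socklen_t peerlen;
    int s = socket(AF_INET, SOCK_DGRAM, 0);

    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(9000);
    if (s < 0 || bind(s, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("udp-echo");
        return 1;
    }
    for (;;) {
        peerlen = sizeof(peer);
        ssize_t n = recvfrom(s, buf, sizeof(buf), 0,
                             (struct sockaddr *)&peer, &peerlen);
        if (n > 0)      /* echo it back immediately */
            sendto(s, buf, n, 0, (struct sockaddr *)&peer, peerlen);
    }
}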
Keir Fraser
2005-Jun-04 06:55 UTC
Re: [Xen-devel] [PATCH] Yield to VCPU hcall, spinlock yielding
On 3 Jun 2005, at 23:06, Ian Pratt wrote:

> It's been a while since I looked at Xen's boot_vcpu code (which could
> do with a bit of refactoring between common and arch anyhow), but I
> don't recall there being anything in there that looked particularly
> expensive. Having said that, it's only holding down a couple of KB of
> memory, so maybe we just need up/down/add.

You can't do a full remove without auditing the whole of Xen and
adding a reference count to the vcpu structure. We're probably best
off lazily allocating vcpus but then destroying them all when the
domain is finally destructed.

 -- Keir
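To make the reference-counting point concrete, the shape of the audit
Keir describes is sketched below: every code path that can hold a vcpu
pointer must take and drop a reference, and the structure can only be
freed once the count reaches zero. The types and names here are
hypothetical, not Xen's actual definitions.

/* Hypothetical sketch of refcounting a vcpu structure; not Xen code.
 * A full remove is hard because every path in the hypervisor that
 * stashes a vcpu pointer would have to use get_vcpu()/put_vcpu(). */
#include <stdatomic.h>
#include <stdlib.h>

struct vcpu {
    atomic_int refcnt;
    /* ... scheduler and arch state ... */
};

static struct vcpu *get_vcpu(struct vcpu *v)
{
    atomic_fetch_add(&v->refcnt, 1);
    return v;
}

static void put_vcpu(struct vcpu *v)
{
    /* Free only when the last reference is dropped. */
    if (atomic_fetch_sub(&v->refcnt, 1) == 1)
        free(v);
}

Lazy allocation with teardown at domain destruction, as Keir suggests,
sidesteps that audit entirely: the vcpu's lifetime then simply matches
the domain's.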