I noticed this was gone from libxc. Would there be any objection to
adding xc_domain_get_vcpu_info? I am interested in querying the
cpu_time for each vcpu for a utility that does something like:
vm-stat
cpu[util]     domN-vcpuM[util] ... domY-vcpuZ[util]
-----------   -----------------------------------------
cpu0[075.4]   dom0-vcpu0[000.3] dom1-vcpu1[075.1]
cpu1[083.7]   dom1-vcpu2[083.7]
cpu2[069.2]   dom1-vcpu3[069.2]
cpu3[075.9]   dom1-vcpu0[075.9]

<time interval>

cpu0[100.0]   dom0-vcpu0[000.5] dom1-vcpu1[099.5]
cpu1[099.8]   dom1-vcpu2[099.8]
cpu2[099.8]   dom1-vcpu3[099.8]
cpu3[099.8]   dom1-vcpu0[099.8]

cpu0[100.0]   dom0-vcpu0[000.3] dom1-vcpu1[099.7]
cpu1[099.7]   dom1-vcpu2[099.7]
cpu2[099.7]   dom1-vcpu3[099.7]
cpu3[099.7]   dom1-vcpu0[099.7]

cpu0[100.0]   dom0-vcpu0[000.6] dom1-vcpu1[099.4]
cpu1[099.7]   dom1-vcpu2[099.7]
cpu2[099.7]   dom1-vcpu3[099.7]
cpu3[101.4]   dom1-vcpu0[101.4]
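For concreteness, a minimal sketch of how such a poller might use the
proposed call. The xc_domain_get_vcpu_info() signature here is purely
an assumption (the call does not exist yet, which is the point of this
mail), and the stub fakes the hypercall so the sketch stands alone:

#include <stdio.h>
#include <unistd.h>

typedef unsigned long long u64;

/*
 * Stub standing in for the proposed libxc wrapper.  The real call
 * would return the vcpu's accumulated cpu_time in nanoseconds; this
 * fake pretends the vcpu ran ~75% of each one-second interval.
 */
static int xc_domain_get_vcpu_info(int xc_handle, unsigned int domid,
                                   unsigned int vcpu, u64 *cpu_time)
{
    static u64 fake_ns;
    fake_ns += 750000000ULL;
    *cpu_time = fake_ns;
    return 0;
}

int main(void)
{
    int xc = 0;            /* would come from xc_interface_open() */
    u64 prev = 0, cur;

    for (;;) {
        if (xc_domain_get_vcpu_info(xc, 1 /* dom1 */, 0 /* vcpu0 */, &cur))
            return 1;
        /* utilization = cpu_time delta / wall-clock delta (1s here) */
        printf("dom1-vcpu0[%05.1f]\n", 100.0 * (cur - prev) / 1e9);
        prev = cur;
        sleep(1);
    }
}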
And while we're on this subject: I wanted to track exec_domain context
switches per physical CPU, and store the count as a ctx_switches field
in the schedule_data struct. I believe context switches would be a good
stat to have, for example to expose problems like heavy domU network
traffic on a single-CPU system. Any objection to this, or suggestions?
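A rough sketch of what I have in mind (field placement and the existing
member names are from my reading of xen/include/xen/sched-if.h and may
not match exactly; this is illustrative, not an actual patch):

/*
 * Sketch only -- existing fields abbreviated.
 */
struct schedule_data {
    spinlock_t          schedule_lock;  /* existing: protects per-CPU run state */
    struct exec_domain *curr;           /* existing: currently running ed       */
    /* ... other existing fields ... */
    unsigned long       ctx_switches;   /* proposed: switches on this phys CPU  */
} __cacheline_aligned;

/* and in the scheduler's switch path (e.g. __enter_scheduler()),
 * with schedule_lock already held: */
schedule_data[cpu].ctx_switches++;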
Thanks,
-Andrew
It is already there. It hasn't been exported to Python, as the main
purpose is to query register state. You're welcome to add it.
-Kip
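If someone does pick this up, the export would presumably be a small
wrapper in tools/python/xen/lowlevel/xc/xc.c along these lines (the
method name, object type, and error handling below are guesses for
illustration, not the existing code):

static PyObject *pyxc_vcpu_info(PyObject *self, PyObject *args)
{
    XcObject *xc = (XcObject *)self;     /* assumed binding object type */
    uint32_t dom, vcpu;
    unsigned long long cpu_time;

    if (!PyArg_ParseTuple(args, "ii", &dom, &vcpu))
        return NULL;

    /* hypothetical libxc call under discussion */
    if (xc_domain_get_vcpu_info(xc->xc_handle, dom, vcpu, &cpu_time) != 0)
        return PyErr_SetFromErrno(PyExc_RuntimeError);

    /* hand the accumulated time (ns) back as a dict entry */
    return Py_BuildValue("{s:K}", "cpu_time", cpu_time);
}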
On 5/13/05, Andrew Theurer <habanero@us.ibm.com> wrote:
> I noticed this was gone from libxc. Would there be any objection to
> adding xc_domain_get_vcpu_info? I am interested in querying the
> cpu_time for each vcpu for a utility that does something like:
> [sample output and rest of quoted message trimmed]
Never mind -- I was thinking of get_vcpu_context, but the per-vcpu
cpu_time is already available in get_vcpu_context.
-Kip
On 5/13/05, Andrew Theurer <habanero@us.ibm.com> wrote:
> I noticed this was gone from libxc. Would there be any objection to
> adding xc_domain_get_vcpu_info? I am interested in querying the
> cpu_time for each vcpu for a utility that does something like:
> [sample output and rest of quoted message trimmed]
> And while we're on this subject: I wanted to track exec_domain
> context switches per physical CPU, and store the count as a
> ctx_switches field in the schedule_data struct. I believe context
> switches would be a good stat to have, for example to expose
> problems like heavy domU network traffic on a single-CPU system.
> Any objection to this, or suggestions?

Context switches per CPU would be a useful thing to have -- we already
record this in the s/w perf counters.

Ian
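The counters in question are the perfc ones; roughly the following is
already in the tree (counter name here is from memory -- check
xen/include/xen/perfc_defn.h for the real list):

/* declared once; expands to one counter per physical CPU */
PERFCOUNTER_CPU(sched_ctx, "sched: context switches")

/* bumped from the scheduler's context-switch path: */
perfc_incrc(sched_ctx);    /* increment this CPU's copy */

These are only compiled in when the hypervisor is built with perfc
enabled, and can then be dumped from dom0.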