Hi Ian,

I find that c/s 22292 breaks xenpm. Running "xenpm start", "xenpm
get-cpuidle-states" or other xenpm commands gets a segmentation fault.

After some investigation, I found that calling xc_pm_get_cxstat() frees
cxstat->triggers. For example, here is some code from my test.c:

    struct xc_cx_stat cxstatinfo, *cxstat = &cxstatinfo;

    cxstat->triggers = malloc(max_cx_num * sizeof(uint64_t));
    if ( !cxstat->triggers ) {
        printf("get memory fail");
        return NOMEM;
    }

    ret = xc_pm_get_cxstat(xc_handle, cpu, cxstat);
    printf("triggers=%lx\n", cxstat->triggers[0]);

Run it, and it segfaults when printing cxstat->triggers[0]. It seems
that xc_pm_get_cxstat() frees the cxstat->triggers buffer we allocated
beforehand, so the fault is raised when cxstat->triggers[0] is touched
afterwards. If I remove patch 22292, everything is OK.

best regards
yang
Thanks for the report. I can't reproduce it on a first attempt, but I
will take a look.

On Fri, 2010-10-29 at 09:32 +0100, Zhang, Yang Z wrote:
> Hi Ian,
>
> I find that c/s 22292 breaks xenpm. Running "xenpm start", "xenpm
> get-cpuidle-states" or other xenpm commands gets a segmentation fault.
>
> After some investigation, I found that calling xc_pm_get_cxstat() frees
> cxstat->triggers. For example, here is some code from my test.c:
> [...]
> Run it, and it segfaults when printing cxstat->triggers[0]. It seems
> that xc_pm_get_cxstat() frees the cxstat->triggers buffer we allocated
> beforehand, so the fault is raised when cxstat->triggers[0] is touched
> afterwards. If I remove patch 22292, everything is OK.
>
> best regards
> yang
On Fri, 2010-10-29 at 09:32 +0100, Zhang, Yang Z wrote:
> Hi Ian,
>
> I find that c/s 22292 breaks xenpm. Running "xenpm start", "xenpm
> get-cpuidle-states" or other xenpm commands gets a segmentation fault.
>
> After some investigation, I found that calling xc_pm_get_cxstat() frees
> cxstat->triggers. For example, here is some code from my test.c:
>
>     struct xc_cx_stat cxstatinfo, *cxstat = &cxstatinfo;
>
>     cxstat->triggers = malloc(max_cx_num * sizeof(uint64_t));
>     if ( !cxstat->triggers ) {
>         printf("get memory fail");
>         return NOMEM;
>     }
>
>     ret = xc_pm_get_cxstat(xc_handle, cpu, cxstat);

What is ret at this point?

>     printf("triggers=%lx\n", cxstat->triggers[0]);
>
> Run it, and it segfaults when printing cxstat->triggers[0]. It seems
> that xc_pm_get_cxstat() frees the cxstat->triggers buffer we allocated
> beforehand, so the fault is raised when cxstat->triggers[0] is touched
> afterwards.

I can't see any code which frees cxstat->triggers in xc_pm_get_cxstat;
there is only code which frees the bounce buffer.

Perhaps the issue you are seeing is with get_cxstat_by_cpuid from
xenpm.c rather than with xc_pm_get_cxstat directly? I notice that
get_cxstat_by_cpuid is called on one occasion without checking the
return code, and that it frees the trigger array when xc_pm_get_cxstat
fails, so a new failure introduced by 22292 could cause this.

What hardware is this on? What are max_cx_num and max_cpu_nr for you?

> If I remove patch 22292, everything is OK.
>
> best regards
> yang
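The failure mode described above boils down to the following pattern. The
sketch below is a generic stand-in rather than the real xenpm.c: it only
mimics the shape of a helper that frees and clears the caller-visible array
when the underlying call fails, plus a caller that ignores the return code.

    #include <stdio.h>
    #include <stdlib.h>
    #include <stdint.h>

    struct stat_like {
        uint64_t *triggers;
    };

    /* Allocate the array, then free and clear it again if the underlying
     * call fails, the same shape as the get_cxstat_by_cpuid error path. */
    static int get_stats(struct stat_like *s, int fail)
    {
        s->triggers = malloc(4 * sizeof(uint64_t));
        if ( !s->triggers )
            return -1;
        s->triggers[0] = 42;

        if ( fail )     /* stands in for xc_pm_get_cxstat() returning an error */
        {
            free(s->triggers);
            s->triggers = NULL;
            return -1;
        }
        return 0;
    }

    int main(void)
    {
        struct stat_like s;

        get_stats(&s, 1);                               /* return code ignored */
        printf("%lu\n", (unsigned long)s.triggers[0]);  /* NULL dereference    */
        return 0;
    }

Whether this actually fires would then depend only on whether 22292 can make
xc_pm_get_cxstat fail where it previously succeeded.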
On Fri, 2010-10-29 at 10:26 +0100, Ian Campbell wrote:
> On Fri, 2010-10-29 at 09:32 +0100, Zhang, Yang Z wrote:
> [...]
> I can't see any code which frees cxstat->triggers in xc_pm_get_cxstat;
> there is only code which frees the bounce buffer.
>
> Perhaps the issue you are seeing is with get_cxstat_by_cpuid from
> xenpm.c rather than with xc_pm_get_cxstat directly? I notice that
> get_cxstat_by_cpuid is called on one occasion without checking the
> return code, and that it frees the trigger array when xc_pm_get_cxstat
> fails, so a new failure introduced by 22292 could cause this.
>
> What hardware is this on? What are max_cx_num and max_cpu_nr for you?

Please could you also try this debug patch:

diff -r a1b39d2b9001 tools/misc/xenpm.c
--- a/tools/misc/xenpm.c	Fri Oct 22 15:14:51 2010 +0100
+++ b/tools/misc/xenpm.c	Fri Oct 29 10:41:37 2010 +0100
@@ -121,6 +121,7 @@ static int get_cxstat_by_cpuid(xc_interf
     cxstat->residencies = malloc(max_cx_num * sizeof(uint64_t));
     if ( !cxstat->residencies )
     {
+        fprintf(stderr, "failed to allocate residencies, freeing triggers\n");
         free(cxstat->triggers);
         return -ENOMEM;
     }
@@ -129,6 +130,7 @@ static int get_cxstat_by_cpuid(xc_interf
     if( ret )
     {
         int temp = errno;
+        fprintf(stderr, "xc_pm_get_cx_stat failed %d %d, freeing buffers\n", ret, errno);
         free(cxstat->triggers);
         free(cxstat->residencies);
         cxstat->triggers = NULL;
> -----Original Message-----
> From: Ian Campbell [mailto:Ian.Campbell@eu.citrix.com]
> Sent: Friday, October 29, 2010 5:26 PM
> To: Zhang, Yang Z
> Cc: xen-devel@lists.xensource.com; Ian Jackson
> Subject: Re: xenpm fail
>
> On Fri, 2010-10-29 at 09:32 +0100, Zhang, Yang Z wrote:
> > [...]
> >     ret = xc_pm_get_cxstat(xc_handle, cpu, cxstat);
>
> What is ret at this point?

ret = 0.

> >     printf("triggers=%lx\n", cxstat->triggers[0]);
> > [...]
>
> I can't see any code which frees cxstat->triggers in xc_pm_get_cxstat;
> there is only code which frees the bounce buffer.
>
> Perhaps the issue you are seeing is with get_cxstat_by_cpuid from
> xenpm.c rather than with xc_pm_get_cxstat directly? [...]
>
> What hardware is this on? What are max_cx_num and max_cpu_nr for you?

I used Westmere-EP. max_cx_num is equal to 4, and I didn't get
max_cpu_nr in my test case.
Nothing is printed with your patch.

best regards
yang

> -----Original Message-----
> From: Ian Campbell [mailto:Ian.Campbell@citrix.com]
> Sent: Friday, October 29, 2010 5:42 PM
> To: Zhang, Yang Z
> Cc: xen-devel@lists.xensource.com; Ian Jackson
> Subject: Re: [Xen-devel] Re: xenpm fail
>
> On Fri, 2010-10-29 at 10:26 +0100, Ian Campbell wrote:
> [...]
> > What hardware is this on? What are max_cx_num and max_cpu_nr for you?
>
> Please could you also try this debug patch:
>
> diff -r a1b39d2b9001 tools/misc/xenpm.c
> [...]
On Mon, 2010-11-01 at 02:14 +0000, Zhang, Yang Z wrote:
> > -----Original Message-----
> > From: Ian Campbell [mailto:Ian.Campbell@eu.citrix.com]
> [...]
> > >     ret = xc_pm_get_cxstat(xc_handle, cpu, cxstat);
> >
> > What is ret at this point?
>
> ret = 0.

Are you running the precise code you give above? xc_pm_get_cxstat will
return failure if cxstat->residencies is not initialised, and this did
not change in 22292:a1b39d2b9001. I suspect the error you are seeing
with your test.c may be due to this and unrelated to the problem(s)
with xenpm.

My guess is that 22292:a1b39d2b9001 added a new potential failure case
to xc_pm_get_cxstat (the bounce buffer allocation) which causes an
error to be returned that is incorrectly handled in xenpm, but
unfortunately I can't see it from staring at the code.

Please can you try the attached, increasingly desperate, debugging
patch and send the complete output of running both your test case and
xenpm.

Please could you also run xenpm under gdb and grab a backtrace from the
location of the segfault.

Ian.
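For reference, a corrected version of the test.c fragment along the lines
Ian suggests would also allocate cxstat->residencies and check the return
value before touching the arrays. This is only a sketch reusing the
fragment's own names (max_cx_num, xc_handle, cpu) and assuming the usual
errno/inttypes/xenctrl headers are included higher up, not the actual test
program:

    struct xc_cx_stat cxstatinfo, *cxstat = &cxstatinfo;

    cxstat->triggers    = malloc(max_cx_num * sizeof(uint64_t));
    cxstat->residencies = malloc(max_cx_num * sizeof(uint64_t));
    if ( !cxstat->triggers || !cxstat->residencies )
    {
        fprintf(stderr, "get memory fail\n");
        free(cxstat->triggers);          /* free(NULL) is a no-op */
        free(cxstat->residencies);
        return -ENOMEM;
    }

    ret = xc_pm_get_cxstat(xc_handle, cpu, cxstat);
    if ( ret )
    {
        fprintf(stderr, "xc_pm_get_cxstat: ret=%d errno=%d\n", ret, errno);
        free(cxstat->triggers);
        free(cxstat->residencies);
        return ret;
    }

    printf("triggers=%" PRIu64 "\n", cxstat->triggers[0]);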
Attached is our test source. Use "gcc -lxenctrl residency.c -o residency"
to compile it, then run "./residency -n 1 -c" to get the C-state data.

The following is the backtrace and output with your patch:

[root@vt-nhm7 tools]# gdb xenpm
GNU gdb (GDB) Red Hat Enterprise Linux (7.0.1-23.el5)
Copyright (C) 2009 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-redhat-linux-gnu".
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>...
Reading symbols from /usr/sbin/xenpm...done.
(gdb) set args get-cpuidle-states
(gdb) r
Starting program: /usr/sbin/xenpm get-cpuidle-states
[Thread debugging using libthread_db enabled]
xc__hypercall_bounce_pre bounced 136 bytes from user buf 0x7fffffffe700 into hcall buf 0x607004
xc__hypercall_bounce_post bounced 136 bytes back from hcall buf 0x607004 into user buf 0x7fffffffe700
xc__hypercall_bounce_pre bounced 136 bytes from user buf 0x7fffffffe6d0 into hcall buf 0x607004
xc__hypercall_bounce_post bounced 136 bytes back from hcall buf 0x607004 into user buf 0x7fffffffe6d0
Max C-state: C7
xc__hypercall_bounce_pre bounced 136 bytes from user buf 0x7fffffffe620 into hcall buf 0x607004
xc__hypercall_bounce_post bounced 136 bytes back from hcall buf 0x607004 into user buf 0x7fffffffe620
get_cxstat_by_cpuid: max_cx 4 for cpuid 0
xc__hypercall_bounce_pre bounced 136 bytes from user buf 0x7fffffffe4f0 into hcall buf 0x607004
xc__hypercall_bounce_post bounced 136 bytes back from hcall buf 0x607004 into user buf 0x7fffffffe4f0
xc__hypercall_bounce_pre bounced 32 bytes from user buf 0x7fffffffe720 into hcall buf 0x607004
xc__hypercall_bounce_pre bounced 32 bytes from user buf 0x7fffffffe728 into hcall buf 0x609004
xc__hypercall_bounce_pre bounced 136 bytes from user buf 0x7fffffffe610 into hcall buf 0x60b004
xc__hypercall_bounce_post bounced 136 bytes back from hcall buf 0x60b004 into user buf 0x7fffffffe610
xc__hypercall_bounce_post bounced 32 bytes back from hcall buf 0x609004 into user buf 0x7fffffffe728
xc__hypercall_bounce_post bounced 32 bytes back from hcall buf 0x607004 into user buf 0x7fffffffe720
xc_pm_get_cxstat done returning 0
get_cx_stat_by_cpuid succeeded for cpu 0
cpu id               : 0
total C-states       : 4
idle time(ms)        : 32842665

Program received signal SIGSEGV, Segmentation fault.
0x000000000040255a in print_cxstat (xc_handle=<value optimized out>,
    cpuid=<value optimized out>) at xenpm.c:90
90          printf("C%d : transition [%020"PRIu64"]\n",
(gdb) bt
#0  0x000000000040255a in print_cxstat (xc_handle=<value optimized out>,
    cpuid=<value optimized out>) at xenpm.c:90
#1  show_cxstat_by_cpuid (xc_handle=<value optimized out>,
    cpuid=<value optimized out>) at xenpm.c:167
#2  0x0000000000403a8b in cxstat_func (argc=<value optimized out>,
    argv=<value optimized out>) at xenpm.c:191
#3  0x0000000000401394 in main (argc=2, argv=0x7fffffffe998) at xenpm.c:1177

best regards
yang

> -----Original Message-----
> From: Ian Campbell [mailto:Ian.Campbell@eu.citrix.com]
> Sent: Monday, November 01, 2010 6:27 PM
> To: Zhang, Yang Z
> Cc: xen-devel@lists.xensource.com; Ian Jackson
> Subject: RE: xenpm fail
>
> On Mon, 2010-11-01 at 02:14 +0000, Zhang, Yang Z wrote:
> [...]
> Please can you try the attached, increasingly desperate, debugging
> patch and send the complete output of running both your test case and
> xenpm.
>
> Please could you also run xenpm under gdb and grab a backtrace from the
> location of the segfault.
>
> Ian.
Thanks for this...

On Mon, 2010-11-01 at 10:49 +0000, Zhang, Yang Z wrote:
[...]
> xc__hypercall_bounce_pre bounced 32 bytes from user buf 0x7fffffffe720 into hcall buf 0x607004
> xc__hypercall_bounce_pre bounced 32 bytes from user buf 0x7fffffffe728 into hcall buf 0x609004
> xc__hypercall_bounce_pre bounced 136 bytes from user buf 0x7fffffffe610 into hcall buf 0x60b004
> xc__hypercall_bounce_post bounced 136 bytes back from hcall buf 0x60b004 into user buf 0x7fffffffe610
> xc__hypercall_bounce_post bounced 32 bytes back from hcall buf 0x609004 into user buf 0x7fffffffe728
> xc__hypercall_bounce_post bounced 32 bytes back from hcall buf 0x607004 into user buf 0x7fffffffe720

This is the xc_pm_get_cxstat call; we can see it bounce max_cx(=4) *
sizeof(uint64_t) == 32 bytes for both cxpt->triggers and
cxpt->residencies, as well as 136 bytes for struct xen_sysctl.

However, the ubuf values for triggers and residencies are suspicious:
they are only 8 bytes apart, IOW the buffers apparently overlap.

Can you try this patch, which fixes a stupid thinko?

diff -r c3d7d2729410 tools/libxc/xc_pm.c
--- a/tools/libxc/xc_pm.c	Mon Nov 01 11:12:51 2010 +0000
+++ b/tools/libxc/xc_pm.c	Mon Nov 01 11:19:53 2010 +0000
@@ -124,8 +124,8 @@ int xc_pm_get_cxstat(xc_interface *xch,
 int xc_pm_get_cxstat(xc_interface *xch, int cpuid, struct xc_cx_stat *cxpt)
 {
     DECLARE_SYSCTL;
-    DECLARE_NAMED_HYPERCALL_BOUNCE(triggers, &cxpt->triggers, 0, XC_HYPERCALL_BUFFER_BOUNCE_BOTH);
-    DECLARE_NAMED_HYPERCALL_BOUNCE(residencies, &cxpt->residencies, 0, XC_HYPERCALL_BUFFER_BOUNCE_BOTH);
+    DECLARE_NAMED_HYPERCALL_BOUNCE(triggers, cxpt->triggers, 0, XC_HYPERCALL_BUFFER_BOUNCE_BOTH);
+    DECLARE_NAMED_HYPERCALL_BOUNCE(residencies, cxpt->residencies, 0, XC_HYPERCALL_BUFFER_BOUNCE_BOTH);
     int max_cx, ret;
 
     if( !cxpt || !(cxpt->triggers) || !(cxpt->residencies) )

"xenpm get-cpuidle-states" works well with your patch, but "xenpm start"
still gets a segmentation fault. With the following patch I didn't see
any segmentation fault when running xenpm or my test case, so maybe the
problem is solved by this patch. Please take a look at it.

diff -r a1b39d2b9001 tools/libxc/xc_pm.c
--- a/tools/libxc/xc_pm.c	Fri Oct 22 15:14:51 2010 +0100
+++ b/tools/libxc/xc_pm.c	Tue Nov 02 04:06:10 2010 +0800
@@ -46,8 +46,8 @@ int xc_pm_get_pxstat(xc_interface *xch,
 {
     DECLARE_SYSCTL;
     /* Sizes unknown until xc_pm_get_max_px */
-    DECLARE_NAMED_HYPERCALL_BOUNCE(trans, &pxpt->trans_pt, 0, XC_HYPERCALL_BUFFER_BOUNCE_BOTH);
-    DECLARE_NAMED_HYPERCALL_BOUNCE(pt, &pxpt->pt, 0, XC_HYPERCALL_BUFFER_BOUNCE_BOTH);
+    DECLARE_NAMED_HYPERCALL_BOUNCE(trans, pxpt->trans_pt, 0, XC_HYPERCALL_BUFFER_BOUNCE_BOTH);
+    DECLARE_NAMED_HYPERCALL_BOUNCE(pt, pxpt->pt, 0, XC_HYPERCALL_BUFFER_BOUNCE_BOTH);
 
     int max_px, ret;
 
@@ -124,8 +124,8 @@ int xc_pm_get_cxstat(xc_interface *xch,
 int xc_pm_get_cxstat(xc_interface *xch, int cpuid, struct xc_cx_stat *cxpt)
 {
     DECLARE_SYSCTL;
-    DECLARE_NAMED_HYPERCALL_BOUNCE(triggers, &cxpt->triggers, 0, XC_HYPERCALL_BUFFER_BOUNCE_BOTH);
-    DECLARE_NAMED_HYPERCALL_BOUNCE(residencies, &cxpt->residencies, 0, XC_HYPERCALL_BUFFER_BOUNCE_BOTH);
+    DECLARE_NAMED_HYPERCALL_BOUNCE(triggers, cxpt->triggers, 0, XC_HYPERCALL_BUFFER_BOUNCE_BOTH);
+    DECLARE_NAMED_HYPERCALL_BOUNCE(residencies, cxpt->residencies, 0, XC_HYPERCALL_BUFFER_BOUNCE_BOTH);
     int max_cx, ret;
 
     if( !cxpt || !(cxpt->triggers) || !(cxpt->residencies) )

best regards
yang

> -----Original Message-----
> From: Ian Campbell [mailto:Ian.Campbell@eu.citrix.com]
> Sent: Monday, November 01, 2010 7:24 PM
> To: Zhang, Yang Z
> Cc: xen-devel@lists.xensource.com; Ian Jackson
> Subject: RE: xenpm fail
>
> Thanks for this...
>
> On Mon, 2010-11-01 at 10:49 +0000, Zhang, Yang Z wrote:
> [...]
>
> This is the xc_pm_get_cxstat call; we can see it bounce max_cx(=4) *
> sizeof(uint64_t) == 32 bytes for both cxpt->triggers and
> cxpt->residencies, as well as 136 bytes for struct xen_sysctl.
>
> However, the ubuf values for triggers and residencies are suspicious:
> they are only 8 bytes apart, IOW the buffers apparently overlap.
>
> Can you try this patch, which fixes a stupid thinko?
> [...]
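The 8-byte spacing Ian noticed makes sense once you look at what the &
changes: with the &, the bounce macros are handed the addresses of the
pointer members inside struct xc_cx_stat rather than the malloc'd arrays
they point to, and two adjacent pointer members sit exactly 8 bytes apart
on x86_64. A small stand-alone illustration (using a cut-down stand-in for
the struct, not the real xc_cx_stat layout):

    #include <stdio.h>
    #include <stdlib.h>
    #include <stdint.h>

    /* Cut-down stand-in for the relevant part of struct xc_cx_stat. */
    struct cx_stat_like {
        uint64_t *triggers;
        uint64_t *residencies;
    };

    int main(void)
    {
        struct cx_stat_like s;

        s.triggers    = malloc(4 * sizeof(uint64_t));
        s.residencies = malloc(4 * sizeof(uint64_t));

        /* What the buggy declarations bounce: the pointer members
         * themselves, 8 bytes apart inside the struct... */
        printf("&s.triggers    = %p\n", (void *)&s.triggers);
        printf("&s.residencies = %p\n", (void *)&s.residencies);

        /* ...versus what should be bounced: the arrays they point to. */
        printf("s.triggers     = %p\n", (void *)s.triggers);
        printf("s.residencies  = %p\n", (void *)s.residencies);

        free(s.triggers);
        free(s.residencies);
        return 0;
    }

Bouncing 32 bytes to and from &cxpt->triggers would also copy the hypercall
results back over the pointer fields themselves (and whatever follows them),
which would leave the caller's triggers pointer holding garbage and would
explain the later crash in print_cxstat.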
On Mon, 2010-11-01 at 12:26 +0000, Zhang, Yang Z wrote:
> "xenpm get-cpuidle-states" works well with your patch, but "xenpm start"
> still gets a segmentation fault. With the following patch I didn't see
> any segmentation fault when running xenpm or my test case, so maybe the
> problem is solved by this patch. Please take a look at it.

Thanks, I came up with the same by looking for all the uses of & with
these macros. Will submit a patch shortly.

Ian.

> diff -r a1b39d2b9001 tools/libxc/xc_pm.c
> --- a/tools/libxc/xc_pm.c	Fri Oct 22 15:14:51 2010 +0100
> +++ b/tools/libxc/xc_pm.c	Tue Nov 02 04:06:10 2010 +0800
> [...]