I have a 12GB 64-bit Linux HVM guest (CentOS 5.5). When I look at the
e820 map in the guest, I see the following:

BIOS-provided physical RAM map:
 BIOS-e820: 0000000000010000 - 000000000009e000 (usable)
 BIOS-e820: 000000000009e000 - 00000000000a0000 (reserved)
 BIOS-e820: 00000000000e0000 - 0000000000100000 (reserved)
 BIOS-e820: 0000000000100000 - 00000000f0000000 (usable)
 BIOS-e820: 00000000fc000000 - 0000000100000000 (reserved)
 BIOS-e820: 0000000100000000 - 000000030f000000 (usable)

I see that the highest usable gpa is over 12GB due to the reserved
slots, and the max gfn is 0x30f000. If I use xc_domain_getinfolist()
and look at max_pages, it returns 0x300100, which correctly reflects
12GB. But is there a way to find out, using libxc, the max gfn that is
reflected in the guest?

Thanks,
AP
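For reference, a minimal sketch of the max_pages query described above,
assuming a Xen 4.1-era libxc interface and a hypothetical domid of 1
(neither is stated in the original post):

    #include <inttypes.h>
    #include <stdio.h>
    #include <xenctrl.h>

    int main(void)
    {
        xc_interface *xch = xc_interface_open(NULL, NULL, 0);
        xc_domaininfo_t info;
        uint32_t domid = 1;  /* hypothetical domid */

        /* Ask for info on exactly one domain, starting at domid; the
         * call returns the number of entries filled in. */
        if ( !xch || xc_domain_getinfolist(xch, domid, 1, &info) != 1 ||
             info.domain != domid )
            return 1;

        /* 0x300100 pages * 4KB/page ~= 12GB for the guest above. */
        printf("max_pages = 0x%" PRIx64 "\n", (uint64_t)info.max_pages);
        xc_interface_close(xch);
        return 0;
    }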
On 09/03/2012 05:56, "AP" <apxeng@gmail.com> wrote:

> I have a 12GB 64-bit Linux HVM guest (CentOS 5.5). When I look at the
> e820 map in the guest, I see the following:
> [...]
> I see that the highest usable gpa is over 12GB due to the reserved
> slots, and the max gfn is 0x30f000. If I use xc_domain_getinfolist()
> and look at max_pages, it returns 0x300100, which correctly reflects
> 12GB. But is there a way to find out, using libxc, the max gfn that is
> reflected in the guest?

xc_domain_maximum_gpfn() returns a value guaranteed to be >= the current
maximum gpfn in the guest's physical address space.

 -- Keir
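A sketch of that call, under the same assumptions (Xen 4.1-era libxc,
where xc_domain_maximum_gpfn() returned the gpfn directly rather than
through an out-parameter as in later Xen versions), reusing xch and
domid from the sketch above:

    int max_gpfn = xc_domain_maximum_gpfn(xch, domid);
    if ( max_gpfn < 0 )
        fprintf(stderr, "xc_domain_maximum_gpfn failed: %d\n", max_gpfn);
    else
        printf("max gpfn = 0x%x\n", max_gpfn);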
On Thu, Mar 8, 2012 at 11:25 PM, Keir Fraser <keir@xen.org> wrote:
> On 09/03/2012 05:56, "AP" <apxeng@gmail.com> wrote:
>> [...]
>> But is there a way to find out, using libxc, the max gfn that is
>> reflected in the guest?
>
> xc_domain_maximum_gpfn() returns a value guaranteed to be >= the current
> maximum gpfn in the guest's physical address space.

Thank you, that worked. What I am actually doing is passing the result
as a parameter to xc_hvm_set_mem_access(), to set all of guest memory to
a certain mem_access type. For a 512MB VM, xc_domain_maximum_gpfn()
returns 0xfffff; if I pass that value to xc_hvm_set_mem_access(), it
returns -1. However, with 1GB, 2GB, and 3GB VMs, xc_domain_maximum_gpfn()
also returns 0xfffff, and xc_hvm_set_mem_access() goes through. Any idea
why the discrepancy with 512MB VMs?

Thanks,
AP
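What AP describes is presumably along these lines; a sketch assuming the
Xen 4.1-era xc_hvm_set_mem_access(xch, dom, access, first_pfn, nr)
signature. HVMMEM_access_rw is an arbitrary example access type, and
passing max_gpfn + 1 as the page count is an assumption about how "that
value" was used:

    /* Set gfns 0 .. max_gpfn inclusive to a single access type. */
    int max_gpfn = xc_domain_maximum_gpfn(xch, domid);
    int rc = xc_hvm_set_mem_access(xch, domid, HVMMEM_access_rw,
                                   0, (uint64_t)max_gpfn + 1);
    if ( rc < 0 )
        fprintf(stderr, "xc_hvm_set_mem_access failed: %d\n", rc);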
On Mon, Mar 12, 2012 at 12:31 PM, AP <apxeng@gmail.com> wrote:
> [...]
> For a 512MB VM, xc_domain_maximum_gpfn() returns 0xfffff; if I pass
> that value to xc_hvm_set_mem_access(), it returns -1. However, with
> 1GB, 2GB, and 3GB VMs, [...] xc_hvm_set_mem_access() goes through.
> Any idea why the discrepancy with 512MB VMs?

The error occurs in HVMOP_set_mem_access, on the set_entry call for gfn
0x9dc00. I still don't understand why this is an issue only with 512MB
VMs.

Thanks,
AP
On 12/03/2012 22:03, "AP" <apxeng@gmail.com> wrote:
> [...]
> The error occurs in HVMOP_set_mem_access, on the set_entry call for
> gfn 0x9dc00. I still don't understand why this is an issue only with
> 512MB VMs.

I'm also not sure why 512MB would be special. You'll have to do some
more debugging. If there is a legitimate reason for there to be 'holes'
in the p2m address space, causing set_mem_access to fail, you can always
recursively decompose a failed set_mem_access(s,e) call into
set_mem_access(s,(s+e)/2) and set_mem_access((s+e)/2+1,e).

 -- Keir
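A sketch of that suggestion, recast onto the (first_pfn, nr) arguments
that xc_hvm_set_mem_access() takes; set_range() is a hypothetical helper
operating on an inclusive gfn range:

    /* Apply an access type to gfns [s, e] inclusive, bisecting on
     * failure so isolated p2m holes are skipped rather than aborting
     * the whole range. */
    static int set_range(xc_interface *xch, domid_t dom,
                         hvmmem_access_t access, uint64_t s, uint64_t e)
    {
        if ( xc_hvm_set_mem_access(xch, dom, access, s, e - s + 1) == 0 )
            return 0;

        if ( s == e )
            return -1;  /* a single gfn still fails: treat it as a hole */

        uint64_t m = s + (e - s) / 2;
        /* Recurse into each half; failure in one half does not stop
         * the other half from being set. */
        set_range(xch, dom, access, s, m);
        set_range(xch, dom, access, m + 1, e);
        return 0;
    }

This costs extra hypercalls proportional to the number of holes times
the log of the range size, but needs no prior knowledge of where the
holes in the p2m are.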