Hi,

** I sent this mail originally to xen-users, but after seeing the other subjects posted on that list I felt it was the wrong place to ask **

I'm wondering why the hypervisor itself uses so much memory. Is it normal?

I have several servers with 8GB of RAM, all of them running x86_64 CentOS 5.4 (latest), and when running under Xen, dom0 has

[root@palmae ~]# head -4 /proc/meminfo
MemTotal:      7661568 kB
MemFree:       6550044 kB
Buffers:         37400 kB
Cached:         303480 kB

which is 440MB less than what I get without Xen. It comes down to the amount of memory the kernel gets, either from Xen or from the BIOS. Please note the differences.

When run under Xen:

BIOS-provided physical RAM map:
 Xen: 0000000000000000 - 00000001dc25c000 (usable)
On node 0 totalpages: 1950300

and just bare-metal Linux:

BIOS-provided physical RAM map:
 BIOS-e820: 0000000000010000 - 000000000009ec00 (usable)
 BIOS-e820: 000000000009ec00 - 00000000000a0000 (reserved)
 BIOS-e820: 00000000000e0000 - 0000000000100000 (reserved)
 BIOS-e820: 0000000000100000 - 00000000bf780000 (usable)
 BIOS-e820: 00000000bf780000 - 00000000bf78e000 (ACPI data)
 BIOS-e820: 00000000bf78e000 - 00000000bf7d0000 (ACPI NVS)
 BIOS-e820: 00000000bf7d0000 - 00000000bf7e0000 (reserved)
 BIOS-e820: 00000000bf7ec000 - 00000000c0000000 (reserved)
 BIOS-e820: 00000000fee00000 - 00000000fee01000 (reserved)
 BIOS-e820: 00000000ffc00000 - 0000000100000000 (reserved)
 BIOS-e820: 0000000100000000 - 0000000240000000 (usable)
DMI present.
ACPI: RSDP (v000 ACPIAM ) @ 0x00000000000fa460
ACPI: RSDT (v001 7522MS A7522800 0x20090903 MSFT 0x00000097) @ 0x00000000bf780000
ACPI: FADT (v001 7522MS A7522800 0x20090903 MSFT 0x00000097) @ 0x00000000bf780200
ACPI: MADT (v001 7522MS A7522800 0x20090903 MSFT 0x00000097) @ 0x00000000bf780390
ACPI: MCFG (v001 7522MS OEMMCFG 0x20090903 MSFT 0x00000097) @ 0x00000000bf780440
ACPI: OEMB (v001 7522MS A7522800 0x20090903 MSFT 0x00000097) @ 0x00000000bf78e040
ACPI: HPET (v001 7522MS OEMHPET 0x20090903 MSFT 0x00000097) @ 0x00000000bf78a480
ACPI: SSDT (v001 DpgPmm CpuPm 0x00000012 INTL 0x20051117) @ 0x00000000bf790350
ACPI: DSDT (v001 A7522 A7522800 0x00000800 INTL 0x20051117) @ 0x0000000000000000
No NUMA configuration found
Faking a node at 0000000000000000-0000000240000000
Bootmem setup node 0 0000000000000000-0000000240000000
Memory for crash kernel (0x0 to 0x0) notwithin permissible range
disabling kdump
On node 0 totalpages: 2061313

That is a difference of 111,013 pages, around 440MB (111,013 x 4 kB is roughly 434 MB). It just doesn't seem normal to me. Anything to be tuned, checked or changed?

Kind Regards,
Vladimir

PS: I apologize if this was asked more than 10 times, but I haven't been able to google it out (poor choice of keywords, maybe).
On Wed, Oct 28, 2009 at 10:29:37AM +0100, Vladimir Zidar wrote:
> I'm wondering why the hypervisor itself uses so much memory. Is it normal?
>
> I have several servers with 8GB of RAM, all of them running x86_64
> CentOS 5.4 (latest), and when running under Xen, dom0 has
> [root@palmae ~]# head -4 /proc/meminfo
> MemTotal:      7661568 kB
> MemFree:       6550044 kB
> Buffers:         37400 kB
> Cached:         303480 kB
>
> which is 440MB less than what I get without Xen. It comes down to the
> amount of memory the kernel gets, either from Xen or from the BIOS.

First of all, you should limit the amount of memory visible to dom0 by specifying the dom0_mem=512M (or so) parameter for xen.gz in grub.conf.

After that, reboot; then you can check the Xen hypervisor's free memory with "xm info" and list the guest/domain memory usage with "xm list".

Xen has some memory overhead (just like every virtualization solution). I think the formula for Xen memory overhead is: 8kB per 1MB of guest memory, plus 1MB per guest virtual cpu.

I think this also applies to dom0, since basically it is just a guest (with some more privileges).

And in addition, of course, there is the "global" hypervisor memory usage. Not sure if those will add up to 440MB though.

-- Pasi
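[For a rough feel of what that rule of thumb works out to here, a minimal back-of-the-envelope sketch. The 8 kB/MB and 1 MB/vCPU figures are just the estimates quoted above, not values taken from the Xen sources, and the 8 GB / 2 vCPU inputs simply mirror the box described in this thread.]

#include <stdio.h>

/* Rough per-domain Xen overhead estimate, using the rule of thumb
 * quoted above: ~8 kB per MB of guest memory plus ~1 MB per vCPU. */
static unsigned long overhead_kb(unsigned long guest_mb, unsigned int vcpus)
{
    return guest_mb * 8UL + (unsigned long)vcpus * 1024UL;
}

int main(void)
{
    unsigned long guest_mb = 8192;  /* an 8 GB dom0, as on the box above */
    unsigned int vcpus = 2;         /* nr_cpus reported later in the thread */

    printf("estimated overhead: %lu kB (~%lu MB)\n",
           overhead_kb(guest_mb, vcpus),
           overhead_kb(guest_mb, vcpus) / 1024);
    return 0;
}

[For an 8 GB dom0 with 2 vCPUs this comes to roughly 66 MB, so by itself it is nowhere near the 440 MB gap.]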
What bothers me with this issue is that the memory is 'lost' just after boot, without any of the domUs running.

Also, 8kB per 1MB would add up to 64MB per 8GB of RAM, and then I add 1 more MB for dom0 (when counted as a guest).

The global hypervisor memory usage is the question now: can it really use 384MB straight after boot? The memory loss shows up in the very first lines of the dom0 dmesg.

Is there a way to show more Xen debugging output before dom0 kicks in?

Pasi Kärkkäinen wrote:
> First of all, you should limit the amount of memory visible to dom0 by
> specifying the dom0_mem=512M (or so) parameter for xen.gz in grub.conf.
>
> Xen has some memory overhead (just like every virtualization solution).
> I think the formula for Xen memory overhead is: 8kB per 1MB of guest
> memory, plus 1MB per guest virtual cpu.
>
> And in addition, of course, there is the "global" hypervisor memory
> usage. Not sure if those will add up to 440MB though.
>
> -- Pasi
By default Xen leaves 1/16 of memory free, up to a maximum of 128MB, for things like allocation of DMA buffers and swiotlb, and for other domains, during/after dom0 boot. So you can see that this memory is not actually all used, but sits in Xen's free pools, by looking at the output of 'xm info' after dom0 has booted.

If you want dom0 to be given all available memory, add something like 'dom0_mem=64G' to Xen's command line. This overrides the default policy, and a really large number like 64GB will get clamped down to merely "all memory available".

 -- Keir

On 28/10/2009 09:53, "Vladimir Zidar" <mr_w@mindnever.org> wrote:

> What bothers me with this issue is that the memory is 'lost' just after
> boot, without any of the domUs running.
>
> Also, 8kB per 1MB would add up to 64MB per 8GB of RAM, and then I add 1
> more MB for dom0 (when counted as a guest).
>
> The global hypervisor memory usage is the question now: can it really
> use 384MB straight after boot? The memory loss shows up in the very
> first lines of the dom0 dmesg.
>
> Is there a way to show more Xen debugging output before dom0 kicks in?
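[For reference, on a RHEL/CentOS 5 style grub.conf, Xen's command line is the "kernel" line that loads xen.gz, so the option would go roughly like this. The file names, root device and version strings below are only placeholders for whatever the box actually uses:]

title CentOS (2.6.18-164.el5xen)
        root (hd0,0)
        kernel /xen.gz-2.6.18-164.el5 dom0_mem=64G
        module /vmlinuz-2.6.18-164.el5xen ro root=/dev/VolGroup00/LogVol00
        module /initrd-2.6.18-164.el5xen.img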
I have actually tracked this down to the Xen version that CentOS ships (which could also be what RHEL uses).

Version xen.gz-2.6.18-53.1.4.el5.centos.plus gives:

BIOS-provided physical RAM map:
 Xen: 0000000000000000 - 00000001f00cb000 (usable)
On node 0 totalpages: 2031819
  DMA zone: 2031819 pages, LIFO batch:31

and xen.gz-2.6.18-164.el5 gives:

BIOS-provided physical RAM map:
 Xen: 0000000000000000 - 00000001dc9c8000 (usable)
On node 0 totalpages: 1952200
  DMA zone: 1952200 pages, LIFO batch:31

That is a difference of 79,619 pages, slightly over 300MB (about 311MB).

Both give the same info here:

release : 2.6.18-164.el5xen
version : #1 SMP Thu Sep 3 04:03:03 EDT 2009
machine : x86_64
nr_cpus : 2
nr_nodes : 1
sockets_per_node : 1
cores_per_socket : 2
threads_per_core : 1
cpu_mhz : 3013
hw_caps : 178bfbff:ebd3fbff:00000000:00000010:00002001:00000000:0000001f
total_memory : 8190
free_memory : 2
node_to_cpu : node0:0-1
xen_major : 3
xen_minor : 1
xen_extra : .2-164.el5
xen_caps : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64
xen_pagesize : 4096
platform_params : virt_start=0xffff800000000000
xen_changeset : unavailable
cc_compiler : gcc version 4.1.2 20080704 (Red Hat 4.1.2-46)
cc_compile_by : mockbuild
cc_compile_domain : centos.org
cc_compile_date : Thu Sep 3 03:20:50 EDT 2009
xend_config_format : 2

(except for xen_extra and cc_compiler, which differ between the two).

Now I understand that this could be due to RHEL patches, and maybe doesn't relate to official Xen builds, but I'd like to know whether this issue was known before I jump to Xen 3.4, as that won't be a direct rpm/yum upgrade path.

Kind Regards,
Vladimir

Keir Fraser wrote:
> By default Xen leaves 1/16 of memory free, up to a maximum of 128MB, for
> things like allocation of DMA buffers and swiotlb, and for other domains,
> during/after dom0 boot.
>
> If you want dom0 to be given all available memory, add something like
> 'dom0_mem=64G' to Xen's command line.
>
> -- Keir
On Wed, Oct 28, 2009 at 12:19:01PM +0100, Vladimir Zidar wrote:
> I have actually tracked this down to the Xen version that CentOS ships
> (which could also be what RHEL uses).
>

CentOS is (or aims to be) a 1:1 clone of RHEL.

> Version xen.gz-2.6.18-53.1.4.el5.centos.plus gives:
> BIOS-provided physical RAM map:
>  Xen: 0000000000000000 - 00000001f00cb000 (usable)
> On node 0 totalpages: 2031819
>   DMA zone: 2031819 pages, LIFO batch:31
>
> and xen.gz-2.6.18-164.el5 gives:
>
> BIOS-provided physical RAM map:
>  Xen: 0000000000000000 - 00000001dc9c8000 (usable)
> On node 0 totalpages: 1952200
>   DMA zone: 1952200 pages, LIFO batch:31
>
> That is a difference of 79,619 pages, slightly over 300MB (about 311MB).
>

Well, that explains it.

> Now I understand that this could be due to RHEL patches, and maybe
> doesn't relate to official Xen builds, but I'd like to know whether this
> issue was known before I jump to Xen 3.4, as that won't be a direct
> rpm/yum upgrade path.
>

RHEL 5.4 (the -164 kernel) added more support for VT-d etc., so maybe that's why more DMA memory is reserved. Dunno.

-- Pasi
Sounds possible. However, it would be great if there were a switch to disable that feature when the hardware is not capable of VT-d, as I'd rather use those 300MB than have software support for something I can't actually use.

Thanks for the hints, you were very helpful.

I'll dig more into the RHEL/CentOS Xen sources to look for any reference to VT-d.

Pasi Kärkkäinen wrote:
> RHEL 5.4 (the -164 kernel) added more support for VT-d etc., so maybe
> that's why more DMA memory is reserved. Dunno.
>
> -- Pasi
On Wed, Oct 28, 2009 at 01:02:14PM +0100, Vladimir Zidar wrote:
> Sounds possible. However, it would be great if there were a switch to
> disable that feature when the hardware is not capable of VT-d, as I'd
> rather use those 300MB than have software support for something I
> can't actually use.
>
> I'll dig more into the RHEL/CentOS Xen sources to look for any
> reference to VT-d.
>

It could also be some other feature/change. Maybe this helps:
http://rhn.redhat.com/errata/RHSA-2009-1243.html
(the changelog is at the end).

-- Pasi
Vladimir Zidar wrote:
> Sounds possible. However, it would be great if there were a switch to
> disable that feature when the hardware is not capable of VT-d, as I'd
> rather use those 300MB than have software support for something I
> can't actually use.

In point of fact, VT-d is disabled by default; you need to explicitly enable it for it to use memory. However, it's possible that there's a bug, or some other change caused the memory difference, so it's worthwhile to try and track it down a little better. In particular, you jumped from the 5.2 kernel to the 5.4, so it would be worthwhile to try the 5.3 kernel and see what you get.

-- 
Chris Lalancette
Chris,

Good that you pointed to 5.2 vs 5.3 vs 5.4; the difference in the number of pages shows up between these:

xen.gz-2.6.18-92.1.22.el5 - last 5.2 update   - all pages are OK
xen.gz-2.6.18-128.el5     - first 5.3 release - ~80000 pages missing on an 8GB RAM setup

Chris Lalancette wrote:
> In point of fact, VT-d is disabled by default; you need to explicitly
> enable it for it to use memory. However, it's possible that there's a
> bug, or some other change caused the memory difference, so it's
> worthwhile to try and track it down a little better. In particular, you
> jumped from the 5.2 kernel to the 5.4, so it would be worthwhile to try
> the 5.3 kernel and see what you get.
I have nailed the problem down to the RHEL version of the compute_dom0_nr_pages() function.

Vanilla Xen uses something like this to reserve up to 128MB of RAM for DMA etc. The same algorithm is used in RHEL <= 5.2 and also in official Xen 3.4.1:

    if ( dom0_nrpages == 0 )
    {
        dom0_nrpages = avail;
        dom0_nrpages = min(dom0_nrpages / 16, 128L << (20 - PAGE_SHIFT));
        dom0_nrpages = -dom0_nrpages;
    }

However, RHEL >= 5.3 uses this:

    /*
     * If domain 0 allocation isn't specified, reserve 1/16th of available
     * memory for things like DMA buffers. This reservation is clamped to
     * a maximum of 384MB.
     */
    if ( dom0_nrpages == 0 )
    {
        dom0_nrpages = avail;
        dom0_nrpages = min(dom0_nrpages / 8, 384L << (20 - PAGE_SHIFT));
        dom0_nrpages = -dom0_nrpages;
    } else {
        /* User specified a dom0_size. Do not clamp the maximum. */
        dom0_max_nrpages = LONG_MAX;
    }

I do understand that they like the idea of reserving more RAM, but on top of raising the clamp, the /8 makes it 1/8th of available memory instead of 1/16th (while the comment still says 1/16th).

So this might be intended behavior, just not advertised anywhere, and, as a kind of side effect, specifying dom0_mem skips this funny allocation scheme altogether - at least in theory. I have just put dom0_mem=64G (but I have 8G only) and it is not coming up, and I will not be able to see the console for at least the next couple of hours.

Vladimir Zidar wrote:
> Chris,
>
> Good that you pointed to 5.2 vs 5.3 vs 5.4; the difference in the
> number of pages shows up between these:
>
> xen.gz-2.6.18-92.1.22.el5 - last 5.2 update   - all pages are OK
> xen.gz-2.6.18-128.el5     - first 5.3 release - ~80000 pages missing
>                             on an 8GB RAM setup
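[Just to put numbers on that change, a minimal sketch of what the two rules work out to on an 8 GB box, treating available memory as a full 8 GB of 4 kB pages; the real figure is slightly lower, and the remainder of the ~80,000-page gap observed above presumably comes from other differences between the two builds.]

#include <stdio.h>

#define PAGE_SHIFT 12  /* 4 kB pages; (20 - PAGE_SHIFT) converts MB to pages */

/* Mirror the reservation logic quoted above: 1/divisor of available
 * memory, clamped to clamp_mb megabytes. */
static long reserved_pages(long avail, long divisor, long clamp_mb)
{
    long share = avail / divisor;
    long clamp = clamp_mb << (20 - PAGE_SHIFT);
    return share < clamp ? share : clamp;
}

int main(void)
{
    long avail = 8L << (30 - PAGE_SHIFT);            /* 8 GB in 4 kB pages */
    long old_r = reserved_pages(avail, 16, 128);     /* rule in <= RHEL 5.2 */
    long new_r = reserved_pages(avail, 8, 384);      /* rule in >= RHEL 5.3 */

    printf("old rule: %ld pages (%ld MB)\n", old_r, old_r >> (20 - PAGE_SHIFT));
    printf("new rule: %ld pages (%ld MB)\n", new_r, new_r >> (20 - PAGE_SHIFT));
    printf("difference: %ld pages (%ld MB)\n",
           new_r - old_r, (new_r - old_r) >> (20 - PAGE_SHIFT));
    return 0;
}

[That is 128 MB under the old rule versus 384 MB under the new one, so 65,536 pages (256 MB) of the observed difference comes straight from this change to compute_dom0_nr_pages().]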
And this is the RHEL patch that caused it.

Now, does it really solve anything in the long term? What if the onboard graphics uses 512MB? What are your thoughts about it?

Kind Regards,
Vladimir

-- patch follows --

From: Rik van Riel <riel@redhat.com>
Date: Fri, 21 Nov 2008 14:32:20 -0500
Subject: [xen] increase maximum DMA buffer size
Message-id: 20081121143220.08a94702@cuia.bos.redhat.com
O-Subject: [RHEL5.3 PATCH 3/3] xen: increase maximum DMA buffer size
Bugzilla: 412691
RH-Acked-by: Don Dutile <ddutile@redhat.com>
RH-Acked-by: Bill Burns <bburns@redhat.com>
RH-Acked-by: Glauber Costa <glommer@redhat.com>

After more investigation, we have got the reason of the panic. Currently
xen reserve 128M DMA buffer at most, while the on-board graphic card requires
256M memory. With following patch + xen patch + your patch in comments 30+31,
everything works quite well.

Fixes bug 412691

Signed-off-by: Jiang, Yunhong <yunhong.jiang@intel.com>
Signed-off-by: Rik van Riel <riel@redhat.com>

diff --git a/arch/x86/domain_build.c b/arch/x86/domain_build.c
index c72c300..8dcf816 100644
--- a/arch/x86/domain_build.c
+++ b/arch/x86/domain_build.c
@@ -138,12 +138,12 @@ static unsigned long __init compute_dom0_nr_pages(void)
     /*
      * If domain 0 allocation isn't specified, reserve 1/16th of available
      * memory for things like DMA buffers. This reservation is clamped to
-     * a maximum of 128MB.
+     * a maximum of 384MB.
      */
     if ( dom0_nrpages == 0 )
     {
         dom0_nrpages = avail;
-        dom0_nrpages = min(dom0_nrpages / 16, 128L << (20 - PAGE_SHIFT));
+        dom0_nrpages = min(dom0_nrpages / 8, 384L << (20 - PAGE_SHIFT));
         dom0_nrpages = -dom0_nrpages;
     } else {
         /* User specified a dom0_size. Do not clamp the maximum. */

Vladimir Zidar wrote:
> I have nailed the problem down to the RHEL version of the
> compute_dom0_nr_pages() function.
>
> I do understand that they like the idea of reserving more RAM, but on
> top of raising the clamp, the /8 makes it 1/8th of available memory
> instead of 1/16th (while the comment still says 1/16th).
>
> So this might be intended behavior, just not advertised anywhere, and,
> as a kind of side effect, specifying dom0_mem skips this funny
> allocation scheme altogether - at least in theory. I have just put
> dom0_mem=64G (but I have 8G only) and it is not coming up, and I will
> not be able to see the console for at least the next couple of hours.
Well, indeed. Your best bet is to give dom0 only the memory it needs, via dom0_mem. If you want to give it all memory then you need to specify something like dom0_mem=64G -- if that's failing to boot for you then you may need swiotlb=off on dom0's command line (otherwise it will fail to allocate memory for the swiotlb, and hence crash, since it was already all allocated to dom0!).

 -- Keir

On 29/10/2009 11:19, "Vladimir Zidar" <mr_w@mindnever.org> wrote:

> And this is the RHEL patch that caused it.
>
> Now, does it really solve anything in the long term? What if the
> onboard graphics uses 512MB? What are your thoughts about it?
>
> Kind Regards,
> Vladimir
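[In grub.conf terms, again with placeholder file names and root device, dom0_mem goes on Xen's "kernel" line and swiotlb=off on dom0's "module" (kernel) line, roughly like this:]

        kernel /xen.gz-2.6.18-164.el5 dom0_mem=64G
        module /vmlinuz-2.6.18-164.el5xen ro root=/dev/VolGroup00/LogVol00 swiotlb=off
        module /initrd-2.6.18-164.el5xen.img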