search for: anonhugepages

Displaying 19 results from an estimated 19 matches for "anonhugepages".

2020 Jul 16
0
[RFC for Linux v4 0/2] virtio_balloon: Add VIRTIO_BALLOON_F_CONT_PAGES to report continuous pages
...Following is an example in a VM with 1G of memory and 1 CPU. This test sets up an > environment that has a lot of fragmented pages. Then inflating the balloon will > split the THPs. > // This is the THP number before VM execution in the host. > // None use THP. > cat /proc/meminfo | grep AnonHugePages: > AnonHugePages: 0 kB > // After VM start, use usemem > // (https://git.kernel.org/pub/scm/linux/kernel/git/wfg/vm-scalability.git) > // punch-holes function generates 400m of fragmented pages in the guest > // kernel. > usemem --punch-holes -s -1 800m & > // This...
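
Reassembled from the commands quoted in this result (a sketch; the guest-start step is not shown in the mail and is only indicated by a comment):

  # Host: THP usage before the 1G/1-vCPU guest is started (expect 0 kB).
  grep AnonHugePages: /proc/meminfo

  # ...start the guest here (the QEMU command line is not given in the mail)...

  # Guest: usemem comes from the vm-scalability suite linked above; the
  # --punch-holes run leaves roughly 400m of fragmented pages in the guest.
  usemem --punch-holes -s -1 800m &

  # Host: THP usage again, once the guest has faulted its memory in.
  grep AnonHugePages: /proc/meminfo
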
2020 Jul 16
0
[virtio-dev] [RFC for Linux v4 0/2] virtio_balloon: Add VIRTIO_BALLOON_F_CONT_PAGES to report continuous pages
...his test sets up an > >> environment that has a lot of fragmented pages. Then inflating the balloon will > >> split the THPs. > > > >> // This is the THP number before VM execution in the host. > >> // None use THP. > >> cat /proc/meminfo | grep AnonHugePages: > >> AnonHugePages: 0 kB > These lines are from the host. > > >> // After VM start, use usemem > >> // (https://git.kernel.org/pub/scm/linux/kernel/git/wfg/vm-scalability.git) > >> // punch-holes function generates 400m of fragmented pages in the gues...
2020 Mar 12
0
[RFC for Linux] virtio_balloon: Add VIRTIO_BALLOON_F_THP_ORDER to handle THP split issue
...2, 2020 at 03:49:54PM +0800, Hui Zhu wrote: > If the guest kernel has many fragmented pages, using virtio_balloon > will split QEMU's THPs when it calls the MADV_DONTNEED madvise to release > the balloon pages. > This is an example in a VM with 1G of memory and 1 CPU: > cat /proc/meminfo | grep AnonHugePages: > AnonHugePages: 0 kB > > usemem --punch-holes -s -1 800m & > > cat /proc/meminfo | grep AnonHugePages: > AnonHugePages: 976896 kB > > (qemu) device_add virtio-balloon-pci,id=balloon1 > (qemu) info balloon > balloon: actual=1024 > (qemu) balloon 6...
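
To watch the splitting this thread is about, the host-side check can be combined with the monitor commands quoted above (a sketch; the balloon target is truncated in the snippet, so a placeholder is used):

  # Host: watch the THPs backing the guest drop while the balloon inflates.
  watch -n1 'grep AnonHugePages: /proc/meminfo'

  # QEMU monitor, as quoted above:
  #   (qemu) device_add virtio-balloon-pci,id=balloon1
  #   (qemu) info balloon          -> balloon: actual=1024
  #   (qemu) balloon <target-MiB>  -> the target value is cut off in the snippet
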
2020 Mar 12
2
[RFC for Linux] virtio_balloon: Add VIRTIO_BALLOON_F_THP_ORDER to handle THP split issue
On 12.03.20 08:49, Hui Zhu wrote: > If the guest kernel has many fragmented pages, using virtio_balloon > will split QEMU's THPs when it calls the MADV_DONTNEED madvise to release > the balloon pages. > This is an example in a VM with 1G of memory and 1 CPU: > cat /proc/meminfo | grep AnonHugePages: > AnonHugePages: 0 kB > > usemem --punch-holes -s -1 800m & > > cat /proc/meminfo | grep AnonHugePages: > AnonHugePages: 976896 kB > > (qemu) device_add virtio-balloon-pci,id=balloon1 > (qemu) info balloon > balloon: actual=1024 > (qemu) balloon 6...
2020 Mar 12
2
[RFC for Linux] virtio_balloon: Add VIRTIO_BALLOON_F_THP_ORDER to handle THP split issue
On 12.03.20 08:49, Hui Zhu wrote: > If the guest kernel has many fragmented pages, using virtio_balloon > will split QEMU's THPs when it calls the MADV_DONTNEED madvise to release > the balloon pages. > This is an example in a VM with 1G of memory and 1 CPU: > cat /proc/meminfo | grep AnonHugePages: > AnonHugePages: 0 kB > > usemem --punch-holes -s -1 800m & > > cat /proc/meminfo | grep AnonHugePages: > AnonHugePages: 976896 kB > > (qemu) device_add virtio-balloon-pci,id=balloon1 > (qemu) info balloon > balloon: actual=1024 > (qemu) balloon 6...
2012 Jul 26
2
kernel parameters for improving gluster writes on millions of small writes (long)
...5928 kB PageTables: 27312 kB NFS_Unstable: 0 kB Bounce: 0 kB WritebackTmp: 0 kB CommitLimit: 827911692 kB Committed_AS: 536852 kB VmallocTotal: 34359738367 kB VmallocUsed: 1227732 kB VmallocChunk: 33888774404 kB HardwareCorrupted: 0 kB AnonHugePages: 376832 kB HugePages_Total: 0 HugePages_Free: 0 HugePages_Rsvd: 0 HugePages_Surp: 0 Hugepagesize: 2048 kB DirectMap4k: 201088 kB DirectMap2M: 15509504 kB DirectMap1G: 521142272 kB and the server's meminfo is: $ cat /proc/meminfo MemTotal:...
2016 Aug 24
2
Transparent HugePages question
...h 2 CPUs, 10 cores each and HT enabled, 128 GB RAM. The system has transparent huge pages enabled. cat /sys/kernel/mm/transparent_hugepage/enabled [always] madvise never The system reports anonymous hugepage usage and a hugepage size of 2048 kB cat /proc/meminfo |grep -i hugepages|grep AnonHugePages AnonHugePages: 35491840 kB cat /proc/meminfo |grep -i hugepages|grep Hugepagesize Hugepagesize: 2048 kB Anyway, checking /proc/<pid>/smaps for each and every process on the system, KernelPageSize reports 4K pages only. for i in `ls /proc/|egrep '[0-9]+'` ; do grep &...
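
A completed version of the loop the poster describes (a sketch): summarise the KernelPageSize values in every process's smaps. Note that KernelPageSize reflects the VMA's base page size, so THP-backed ranges still report 4 kB there; per-mapping THP usage appears in the AnonHugePages field of smaps instead.

  # Summarise the KernelPageSize values seen in every process's smaps
  # (needs root to read other users' processes).
  for pid in /proc/[0-9]*; do
      printf '%s: ' "$pid"
      grep KernelPageSize "$pid/smaps" 2>/dev/null | sort | uniq -c | tr '\n' ' '
      echo
  done
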
2015 Jan 17
1
Re: Guests using more ram than specified
.... Because if you use the ordinary system pages, the translation > table for ~200G is gonna be gigantic. Remember that the table is > counted in for memory usage. According to the system information the qemu process uses transparent hugepages and most of the memory for a VM is reported under AnonHugePages, so that looks OK. > Then, qemu itself consumes some memory besides guest memory. How much? > Nobody is able to tell. Yes, and that worries me. The recommendation is not to over-commit memory, but if the system uses the specified RAM + X, and X is unknown and can be tens of gigabytes, then I don'...
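
One way to check how much of a single qemu process is THP-backed (an illustration, not from the thread; the binary name qemu-kvm is an assumption) is to sum the AnonHugePages fields in its smaps:

  pid=$(pgrep -o qemu-kvm)   # assumed binary name; adjust to qemu-system-x86_64 etc.
  awk -v pid="$pid" '/^AnonHugePages:/ {kb += $2}
      END {printf "AnonHugePages for PID %s: %d kB\n", pid, kb}' "/proc/$pid/smaps"

If that number tracks the guest's assigned RAM, the extra usage seen in top comes from qemu's own allocations on top of guest memory.
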
2014 Jan 09
1
Bug#734761: xen-system-amd64: "XEN kernel detects 3GB RAM instead of 4GB"
...792 kB PageTables: 2440 kB NFS_Unstable: 0 kB Bounce: 0 kB WritebackTmp: 0 kB CommitLimit: 5549584 kB Committed_AS: 153956 kB VmallocTotal: 34359738367 kB VmallocUsed: 38532 kB VmallocChunk: 34359697148 kB HardwareCorrupted: 0 kB AnonHugePages: 0 kB HugePages_Total: 0 HugePages_Free: 0 HugePages_Rsvd: 0 HugePages_Surp: 0 Hugepagesize: 2048 kB DirectMap4k: 4193024 kB DirectMap2M: 0 kB -- System Information: Debian Release: 7.3 APT prefers stable-updates APT policy: (500, 's...
2015 Jan 16
2
Guests using more ram than specified
Hi, today I noticed that one of my HVs started swapping aggressively and that the two guests running on it use quite a bit more RAM than I assigned to them. They were assigned 124G and 60G respectively, with the idea that the 192G system then has 8G left for other purposes. In top, I see the VMs using about 128G and 64G, which means there is nothing left for the system. This is on a CentOS 7
2015 Feb 04
2
Re: HugePages - can't start guest that requires them
...0.00 0.00 0.00 WritebackTmp 0.00 0.00 0.00 Slab 192.31 160.72 353.03 SReclaimable 137.67 118.47 256.14 SUnreclaim 54.64 42.25 96.89 AnonHugePages 0.00 0.00 0.00 HugePages_Total 17408.00 17408.00 34816.00 HugePages_Free 2048.00 0.00 2048.00 HugePages_Surp 0.00 0.00 0.00 2015-02-03 16:53:47 root@eanna i ~ # numastat -p qemu...
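
The per-node columns above suggest node1 has no free hugepages left, which is one common reason a guest that requires them fails to start. A quick per-node check (a sketch, assuming 2 MB hugepages as in the output above):

  for n in /sys/devices/system/node/node*; do
      echo "$n: $(cat $n/hugepages/hugepages-2048kB/free_hugepages) free 2M hugepages"
  done
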
2015 Jan 31
2
Re: HugePages - can't start guest that requires them
Yeah, Dominique, your wiki was one of the many docs I read through before/during/after starting down this primrose path... thanks for writing it. I'm an Arch user, and I couldn't find anything to indicate qemu, as its compiled for Arch, will look in /etc/default/qemu-kvm. And now that I've got the right page size, the instances are starting... The reason I want to use the page element
2012 Nov 03
0
mtrr_gran_size and mtrr_chunk_size
...4344 kB PageTables: 35824 kB NFS_Unstable: 0 kB Bounce: 0 kB WritebackTmp: 0 kB CommitLimit: 16506240 kB Committed_AS: 3714116 kB VmallocTotal: 34359738367 kB VmallocUsed: 393260 kB VmallocChunk: 34359332704 kB HardwareCorrupted: 0 kB AnonHugePages: 165888 kB HugePages_Total: 0 HugePages_Free: 0 HugePages_Rsvd: 0 HugePages_Surp: 0 Hugepagesize: 2048 kB DirectMap4k: 10228 kB DirectMap2M: 16676864 kB $ cat /proc/swaps Filename Type Size Used Priority /dev/dm-0...
2011 Nov 29
4
[RFC] virtio: use mandatory barriers for remote processor vdevs
Virtio is using memory barriers to control the ordering of references to the vrings on SMP systems. When the guest is compiled with SMP support, virtio is only using SMP barriers in order to avoid incurring the overhead involved with mandatory barriers. Lately, though, virtio is being increasingly used with inter-processor communication scenarios too, which involve running two (separate)
2011 Nov 29
4
[RFC] virtio: use mandatory barriers for remote processor vdevs
Virtio is using memory barriers to control the ordering of references to the vrings on SMP systems. When the guest is compiled with SMP support, virtio is only using SMP barriers in order to avoid incurring the overhead involved with mandatory barriers. Lately, though, virtio is being increasingly used with inter-processor communication scenarios too, which involve running two (separate)
2011 Dec 16
6
java installation failure
Readers, OpenJDK and IBM Java versions have failed to install, all reporting a bad ELF, e.g. ./ibm-java-i386-sdk-7.0-0.0.bin Preparing to install... Extracting the JRE from the installer archive... Unpacking the JRE... Extracting the installation resources from the installer archive... Configuring the installer for this system's environment... strings: '/lib/libc.so.6': No such file
2013 Aug 27
4
Is: Xen 4.2 and using 'xl' to save/restore is buggy with PVHVM Linux guests (v3.10 and v3.11 and presumably earlier as well). Works with Xen 4.3 and Xen 4.4. Was: Re: FAILURE 3.11.0-rc7upstream(x86_64) 3.11.0-rc7upstream(i386): 2013-08-26 (tst001)
...im: 21280 kB KernelStack: 504 kB PageTables: 196 kB NFS_Unstable: 0 kB Bounce: 0 kB WritebackTmp: 0 kB CommitLimit: 510760 kB Committed_AS: 70924 kB VmallocTotal: 122880 kB VmallocUsed: 5340 kB VmallocChunk: 117152 kB AnonHugePages: 0 kB HugePages_Total: 0 HugePages_Free: 0 HugePages_Rsvd: 0 HugePages_Surp: 0 Hugepagesize: 2048 kB DirectMap4k: 49144 kB DirectMap2M: 864256 kB Waiting for init.late [ OK ] [ 12.489749] device-mapper: ioctl: 4.25.0-ioctl (2013-06-26) initial...
2012 Dec 13
7
HVM bug: system crashes after offline online a vcpu
Hi Konrad, I encountered a bug when trying to take a CPU offline and then bring it online again in HVM. As I'm not very familiar with HVM stuff I cannot come up with a quick fix. The HVM DomU is configured with 4 vcpus. After booting to a command prompt, I do the following operations. # echo 0 > /sys/devices/system/cpu/cpu3/online # echo 1 > /sys/devices/system/cpu/cpu3/online With
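
The two operations from the report, wrapped in a loop to make the crash easier to trigger (the loop itself is an addition; the report performs a single offline/online cycle of cpu3 in a 4-vCPU HVM guest):

  # Repeatedly offline and online vCPU 3 inside the guest.
  for i in $(seq 1 100); do
      echo 0 > /sys/devices/system/cpu/cpu3/online
      echo 1 > /sys/devices/system/cpu/cpu3/online
  done
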
2013 Jun 19
4
e008:[<ffff82c480122353>] check_lock+0x1b/0x45 [konrad.wilk@oracle.com: FAILURE 3.10.0-rc6upstream-00061-g752bf7d(x86_64) 3.10.0-rc6upstream-00061-g752bf7d(i386): 2013-06-19 (tst007)]
...39760 kB KernelStack: 552 kB PageTables: 756 kB NFS_Unstable: 0 kB Bounce: 0 kB WritebackTmp: 0 kB CommitLimit: 4053264 kB Committed_AS: 106016 kB VmallocTotal: 34359738367 kB VmallocUsed: 547268 kB VmallocChunk: 34359189499 kB AnonHugePages: 0 kB HugePages_Total: 0 HugePages_Free: 0 HugePages_Rsvd: 0 HugePages_Surp: 0 Hugepagesize: 2048 kB DirectMap4k: 58380 kB DirectMap2M: 8241152 kB Waiting for init.late [ OK ] PING build.dumpdata.com (192.168.101.3) 56(84) bytes of data. --- bui...