similar to: dramatic performance slowdown due to THP allocation failure with full pagecache

Displaying 20 results from an estimated 8000 matches similar to: "dramatic performance slowdown due to THP allocation failure with full pagecache"

2017 Nov 14
2
Re: dramatic performance slowdown due to THP allocation failure with full pagecache
Thanks for the reply Daniel. However, I think you slightly misunderstood the scenario... On 14 November 2017 at 10:32, Daniel P. Berrange <berrange@redhat.com> wrote: > IOW, if your application has a certain expectation of performance that can only > be satisfied by having the KVM guest backed by huge pages, then you should > really change to explicitly reserve huge pages for the
2017 Nov 14
1
Re: dramatic performance slowdown due to THP allocation failure with full pagecache
On 14 November 2017 at 10:56, Daniel P. Berrange <berrange@redhat.com> wrote: > Oh well THP usage inside the guest is then not really anything todo with > virt, just a regular Linux questions, so not sure libvirt is the best > place to ask. True, I just hoped you or one of the other devs might have some insight on reclaim behaviour that would provide a clue. I guess I'll try a
2017 Nov 14
0
Re: dramatic performance slowdown due to THP allocation failure with full pagecache
On Tue, Nov 14, 2017 at 10:23:56AM -0700, Blair Bethwaite wrote: > Hi all, > > This is not really a libvirt issue but I'm hoping some of the smart folks > here will know more about this problem... > > We have noticed when running some HPC applications on our OpenStack > (libvirt+KVM) cloud that the same application occasionally performs much > worse (4-5x slowdown)
2017 Nov 14
0
Re: dramatic performance slowdown due to THP allocation failure with full pagecache
On Tue, Nov 14, 2017 at 10:52:03AM -0700, Blair Bethwaite wrote: > Thanks for the reply Daniel, > > However I think you slightly misunderstood the scenario... > > On 14 November 2017 at 10:32, Daniel P. Berrange <berrange@redhat.com> wrote: > > IOW, if your application has a certain expectation of performance that can only > > be satisfied by having the KVM guest
2020 Mar 20
4
[PATCH 0/2] mm/thp: Rename pmd_mknotpresent() as pmd_mknotvalid()
This series renames pmd_mknotpresent() as pmd_mknotvalid(). Before that it drops an existing pmd_mknotpresent() definition from the powerpc platform which was never required as it defines its pmdp_invalidate() through subscribing __HAVE_ARCH_PMDP_INVALIDATE. This does not create any functional change. This rename was suggested by Catalin during a previous discussion while we were trying to
2020 Sep 03
1
[PATCH v3] mm/thp: fix __split_huge_pmd_locked() for migration PMD
A migrating transparent huge page has to already be unmapped. Otherwise, the page could be modified while it is being copied to a new page and data could be lost. The function __split_huge_pmd() checks for a PMD migration entry before calling __split_huge_pmd_locked() leading one to think that __split_huge_pmd_locked() can handle splitting a migrating PMD. However, the code always increments the
2020 Apr 22
1
[PATCH V2 0/2] mm/thp: Rename pmd_mknotpresent() as pmd_mkinvalid()
This series renames pmd_mknotpresent() as pmd_mkinvalid(). Before that it drops an existing pmd_mknotpresent() definition from the powerpc platform which was never required as it defines its pmdp_invalidate() through subscribing __HAVE_ARCH_PMDP_INVALIDATE. This does not create any functional change. This rename was suggested by Catalin during a previous discussion while we were trying to change
2020 Jun 22
1
[PATCH 14/16] mm/thp: add THP allocation helper
On 19 Jun 2020, at 17:56, Ralph Campbell wrote: > Transparent huge page allocation policy is controlled by several sysfs > variables. Rather than expose these to each device driver that needs to > allocate THPs, provide a helper function. > > Signed-off-by: Ralph Campbell <rcampbell at nvidia.com> > --- > include/linux/gfp.h | 10 ++++++++++ > mm/huge_memory.c |
2020 Mar 12
2
[RFC for Linux] virtio_balloon: Add VIRTIO_BALLOON_F_THP_ORDER to handle THP spilt issue
On 12.03.20 08:49, Hui Zhu wrote: > If the guest kernel has many fragmentation pages, use virtio_balloon > will split THP of QEMU when it calls MADV_DONTNEED madvise to release > the balloon pages. > This is an example in a VM with 1G memory 1CPU: > cat /proc/meminfo | grep AnonHugePages: > AnonHugePages: 0 kB > > usemem --punch-holes -s -1 800m & > >
2008 Aug 26
1
Dramatic slowdown of R 2.7.2?
Dear R users/developers, simple comparison of code execution time of R 2.7.1 and R 2.7.2 shows a dramatic slowdown of the newer version. Rprof() identifies .Call function as a main cause (see the code below). What happened with R 2.7.2? Kind regards Marek Wielgosz Bayes Consulting ######### Probably useful info ############### ### CPU: Core2Duo T 7300, 2 GB RAM ### WIN XP ### both standard
2020 Mar 20
0
[PATCH 2/2] mm/thp: Rename pmd_mknotpresent() as pmd_mknotvalid()
pmd_present() is expected to test positive after pmdp_mknotpresent() as the PMD entry still points to a valid huge page in memory. pmdp_mknotpresent() implies that the given PMD entry has just been invalidated from the MMU's perspective while still holding on to the valid huge page referred to by pmd_page(), thus also clearing the pmd_present() test. This creates the following situation, which is counter-intuitive.
2007 Mar 01
4
pagecache corruption on Tyan S3870
A couple of months ago I reported some problems with a batch of Tyan K8SSA (S3870) based machines. We are continuing to have an odd problem with these boxes, and if anyone has seen something similar elsewhere, I'd appreciate hearing about it. These boxes are running CentOS 4.4 x86_64 with kernel 2.6.9-42.0.3.ELsmp. They are dual Opteron 265s (dual core) with 4x2GB DIMMs. The
2019 Mar 11
4
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
On Mon, Mar 11, 2019 at 03:40:31PM +0800, Jason Wang wrote: > > On 2019/3/9 ??3:48, Andrea Arcangeli wrote: > > Hello Jeson, > > > > On Fri, Mar 08, 2019 at 04:50:36PM +0800, Jason Wang wrote: > > > Just to make sure I understand here. For boosting through huge TLB, do > > > you mean we can do that in the future (e.g by mapping more userspace > >
2017 Feb 14
3
high memory guest issues - virsh start and QEMU_JOB_WAIT_TIME
Hi all, In IRC last night Dan helpfully confirmed my analysis of an issue we are seeing attempting to launch high memory KVM guests backed by hugepages... In this case the guests have 240GB of memory allocated from two host NUMA nodes to two guest NUMA nodes. The trouble is that allocating the hugepage backed qemu process seems to take longer than the 30s QEMU_JOB_WAIT_TIME and so libvirt then
2007 Apr 27
2
ARC, mmap, pagecache...
Hi, I was wondering about the ARC and its interaction with the VM pagecache... When a file on a ZFS filesystem is mmaped, does the ARC cache get mapped to the process's virtual memory? Or is there another copy? -Manoj
2020 Jun 22
2
[PATCH 13/16] mm: support THP migration to device private memory
On Mon, Jun 22, 2020 at 4:02 PM John Hubbard <jhubbard at nvidia.com> wrote: > > On 2020-06-22 15:33, Yang Shi wrote: > > On Mon, Jun 22, 2020 at 3:30 PM Yang Shi <shy828301 at gmail.com> wrote: > >> On Mon, Jun 22, 2020 at 2:53 PM Zi Yan <ziy at nvidia.com> wrote: > >>> On 22 Jun 2020, at 17:31, Ralph Campbell wrote: > >>>> On
2020 Jun 22
2
[PATCH 13/16] mm: support THP migration to device private memory
On Mon, Jun 22, 2020 at 3:30 PM Yang Shi <shy828301 at gmail.com> wrote: > > On Mon, Jun 22, 2020 at 2:53 PM Zi Yan <ziy at nvidia.com> wrote: > > > > On 22 Jun 2020, at 17:31, Ralph Campbell wrote: > > > > > On 6/22/20 1:10 PM, Zi Yan wrote: > > >> On 22 Jun 2020, at 15:36, Ralph Campbell wrote: > > >> > > >>> On
2018 Dec 25
2
[PATCH net-next 3/3] vhost: access vq metadata through kernel virtual address
On Tue, Dec 25, 2018 at 06:05:25PM +0800, Jason Wang wrote: > > On 2018/12/25 ??2:10, Michael S. Tsirkin wrote: > > On Mon, Dec 24, 2018 at 03:53:16PM +0800, Jason Wang wrote: > > > On 2018/12/14 ??8:36, Michael S. Tsirkin wrote: > > > > On Fri, Dec 14, 2018 at 11:57:35AM +0800, Jason Wang wrote: > > > > > On 2018/12/13 ??11:44, Michael S. Tsirkin