similar to: [RFC for Linux] virtio_balloon: Add VIRTIO_BALLOON_F_THP_ORDER to handle THP split issue

Displaying 20 results from an estimated 1000 matches similar to: "[RFC for Linux] virtio_balloon: Add VIRTIO_BALLOON_F_THP_ORDER to handle THP split issue"

2020 Mar 12
0
[RFC for Linux] virtio_balloon: Add VIRTIO_BALLOON_F_THP_ORDER to handle THP split issue
On Thu, Mar 12, 2020 at 03:49:54PM +0800, Hui Zhu wrote: > If the guest kernel has many fragmented pages, using virtio_balloon > will split QEMU's THPs when it calls madvise(MADV_DONTNEED) to release > the balloon pages. > This is an example in a VM with 1G memory 1CPU: > cat /proc/meminfo | grep AnonHugePages: > AnonHugePages: 0 kB > > usemem --punch-holes -s -1
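
To make the reported behaviour concrete, here is a minimal userspace sketch (not from the thread; the 2 MiB THP size is an x86-64 assumption): releasing even a single 4 KiB page of a THP-backed mapping with madvise(MADV_DONTNEED) forces the host kernel to split the huge page.

    #include <string.h>
    #include <sys/mman.h>

    #define THP_SIZE (2UL * 1024 * 1024)   /* assumed PMD-sized THP on x86-64 */

    int main(void)
    {
        /* Map an anonymous region and ask for THP backing. */
        void *buf = mmap(NULL, THP_SIZE, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (buf == MAP_FAILED)
            return 1;
        madvise(buf, THP_SIZE, MADV_HUGEPAGE);
        memset(buf, 0, THP_SIZE);          /* fault in, ideally as one THP */

        /* Freeing one 4 KiB page forces the kernel to split the THP,
         * which is the fragmentation effect the patch targets. */
        madvise(buf, 4096, MADV_DONTNEED);
        return 0;
    }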
2020 Apr 02
0
[RFC for Linux] virtio_balloon: Add VIRTIO_BALLOON_F_THP_ORDER to handle THP split issue
On Thu, Apr 02, 2020 at 04:00:05PM +0800, teawater wrote: > > > > On Mar 31, 2020, at 22:07, Michael S. Tsirkin <mst at redhat.com> wrote: > > > > On Tue, Mar 31, 2020 at 04:03:18PM +0200, David Hildenbrand wrote: > >> On 31.03.20 15:37, Michael S. Tsirkin wrote: > >>> On Tue, Mar 31, 2020 at 03:32:05PM +0200, David Hildenbrand wrote: > >>>>
2020 Mar 12
0
[RFC for QEMU] virtio-balloon: Add option thp-order to set VIRTIO_BALLOON_F_THP_ORDER
On Thu, Mar 12, 2020 at 03:49:55PM +0800, Hui Zhu wrote: > If the guest kernel has many fragmented pages, using virtio_balloon > will split QEMU's THPs when it calls madvise(MADV_DONTNEED) to release > the balloon pages. > Setting the thp-order option on will enable the VIRTIO_BALLOON_F_THP_ORDER flag. > It will set the balloon page size to the THP size to handle the THP split issue. > >
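
As a rough illustration of what a THP-order balloon unit means (the order value is an x86-64 assumption, not taken from the patch):

    #include <stdio.h>

    int main(void)
    {
        unsigned int thp_order = 9;            /* assumed: 2^9 * 4 KiB = 2 MiB */
        unsigned long balloon_unit = 4096UL << thp_order;

        /* With VIRTIO_BALLOON_F_THP_ORDER negotiated, the balloon would
         * inflate/deflate in units of this size instead of 4 KiB pages. */
        printf("balloon unit: %lu KiB\n", balloon_unit / 1024);
        return 0;
    }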
2020 Apr 01
0
[RFC for Linux] virtio_balloon: Add VIRTIO_BALLOON_F_THP_ORDER to handle THP split issue
On 31.03.20 18:27, Nadav Amit wrote: >> On Mar 31, 2020, at 6:32 AM, David Hildenbrand <david at redhat.com> wrote: >> >> On 31.03.20 15:24, Michael S. Tsirkin wrote: >>> On Tue, Mar 31, 2020 at 12:35:24PM +0200, David Hildenbrand wrote: >>>> On 26.03.20 10:49, Michael S. Tsirkin wrote: >>>>> On Thu, Mar 26, 2020 at 08:54:04AM +0100,
2020 Jul 16
0
[RFC for Linux v4 0/2] virtio_balloon: Add VIRTIO_BALLOON_F_CONT_PAGES to report continuous pages
On Thu, Jul 16, 2020 at 10:41:50AM +0800, Hui Zhu wrote: > The first, second and third versions are in [1], [2] and [3]. > Code of the current version for Linux and qemu is available in [4] and [5]. > Updates in this version: > 1. Reporting continuous pages increases the speed, so deflation of > continuous pages was added. > 2. According to the comments from David in [6], added 2 new vqs
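
The continuous-page reporting mentioned above suggests a run-length style wire format. A hedged sketch of the obvious encoding, with all names hypothetical (the actual vq layout is in the trees referred to as [4] and [5]):

    #include <stdint.h>

    /* Hypothetical descriptor: one entry per contiguous run, instead of
     * one entry per 4 KiB page as in the classic inflate/deflate vqs. */
    struct cont_pages_entry {
        uint64_t base_pfn;   /* first page frame number of the run */
        uint64_t npages;     /* number of contiguous 4 KiB pages */
    };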
2020 Jul 16
0
[virtio-dev] [RFC for Linux v4 0/2] virtio_balloon: Add VIRTIO_BALLOON_F_CONT_PAGES to report continuous pages
On Thu, Jul 16, 2020 at 03:01:18PM +0800, teawater wrote: > > > > On Jul 16, 2020, at 14:38, Michael S. Tsirkin <mst at redhat.com> wrote: > > > > On Thu, Jul 16, 2020 at 10:41:50AM +0800, Hui Zhu wrote: > >> The first, second and third versions are in [1], [2] and [3]. > >> Code of the current version for Linux and qemu is available in [4] and [5]. > >>
2020 Mar 12
0
[RFC for Linux] virtio_balloon: Add VIRTIO_BALLOON_F_THP_ORDER to handle THP split issue
On Thu, Mar 12, 2020 at 09:37:32AM +0100, David Hildenbrand wrote: > 2. You are essentially stealing THPs in the guest. So the fastest > mapping (THP in guest and host) is gone. The guest won't be able to make > use of THP where it previously was able to. I can imagine this implies a > performance degradation for some workloads. This needs a proper > performance evaluation. I
2020 Apr 01
0
[RFC for Linux] virtio_balloon: Add VIRTIO_BALLOON_F_THP_ORDER to handle THP split issue
On 31.03.20 18:37, Nadav Amit wrote: >> On Mar 31, 2020, at 7:09 AM, David Hildenbrand <david at redhat.com> wrote: >> >> On 31.03.20 16:07, Michael S. Tsirkin wrote: >>> On Tue, Mar 31, 2020 at 04:03:18PM +0200, David Hildenbrand wrote: >>>> On 31.03.20 15:37, Michael S. Tsirkin wrote: >>>>> On Tue, Mar 31, 2020 at 03:32:05PM +0200,
2020 Jun 22
1
[PATCH 14/16] mm/thp: add THP allocation helper
On 19 Jun 2020, at 17:56, Ralph Campbell wrote: > Transparent huge page allocation policy is controlled by several sysfs > variables. Rather than expose these to each device driver that needs to > allocate THPs, provide a helper function. > > Signed-off-by: Ralph Campbell <rcampbell at nvidia.com> > --- > include/linux/gfp.h | 10 ++++++++++ > mm/huge_memory.c |
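
Based on the quoted description (the patch body is truncated here), the helper would look roughly like this sketch against the ~5.8-era THP APIs:

    #include <linux/gfp.h>
    #include <linux/huge_mm.h>

    /* Sketch only: allocate a compound page at PMD order and prepare it
     * as a THP, so drivers need not open-code the gfp/order details. */
    struct page *alloc_transhuge_page(gfp_t gfp_mask)
    {
        struct page *page;

        page = alloc_pages(gfp_mask | __GFP_COMP, HPAGE_PMD_ORDER);
        if (page)
            prep_transhuge_page(page);   /* set up split/deferred-free state */
        return page;
    }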
2016 Aug 24
2
Transparent HugePages question
Hello, I have a CentOS 7 installation on baremetal with 2 CPUs, 10 cores each and HT enabled, 128 GB RAM. The system has transparent hugepages enabled. cat /sys/kernel/mm/transparent_hugepage/enabled [always] madvise never The system reports anonymous hugepage usage and a hugepage size of 2048 kB cat /proc/meminfo | grep -i hugepages | grep AnonHugePages AnonHugePages: 35491840 kB
2017 Nov 14
0
Re: dramatic performance slowdown due to THP allocation failure with full pagecache
On Tue, Nov 14, 2017 at 10:23:56AM -0700, Blair Bethwaite wrote: > Hi all, > > This is not really a libvirt issue but I'm hoping some of the smart folks > here will know more about this problem... > > We have noticed when running some HPC applications on our OpenStack > (libvirt+KVM) cloud that the same application occasionally performs much > worse (4-5x slowdown)
2017 Nov 14
2
dramatic performance slowdown due to THP allocation failure with full pagecache
Hi all, This is not really a libvirt issue but I'm hoping some of the smart folks here will know more about this problem... We have noticed when running some HPC applications on our OpenStack (libvirt+KVM) cloud that the same application occasionally performs much worse (4-5x slowdown) than normal. We can reproduce this quite easily by filling pagecache (i.e. dd-ing a single large file to
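
A minimal sketch of the anonymous-memory side of such a reproducer (sizes are example values; fill the pagecache first as described, then compare AnonHugePages in /proc/meminfo while this runs):

    #include <string.h>
    #include <unistd.h>
    #include <sys/mman.h>

    #define SZ (512UL * 1024 * 1024)   /* 512 MiB test region, example value */

    int main(void)
    {
        void *buf = mmap(NULL, SZ, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (buf == MAP_FAILED)
            return 1;
        madvise(buf, SZ, MADV_HUGEPAGE);   /* request THP backing */
        memset(buf, 1, SZ);                /* fault it in; slow if compaction fails */
        pause();   /* keep the mapping alive while inspecting meminfo */
        return 0;
    }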
2020 Jun 23
0
[PATCH 13/16] mm: support THP migration to device private memory
On 6/22/20 4:54 PM, Yang Shi wrote: > On Mon, Jun 22, 2020 at 4:02 PM John Hubbard <jhubbard at nvidia.com> wrote: >> >> On 2020-06-22 15:33, Yang Shi wrote: >>> On Mon, Jun 22, 2020 at 3:30 PM Yang Shi <shy828301 at gmail.com> wrote: >>>> On Mon, Jun 22, 2020 at 2:53 PM Zi Yan <ziy at nvidia.com> wrote: >>>>> On 22 Jun 2020, at
2020 Jun 22
0
[PATCH 13/16] mm: support THP migration to device private memory
On 2020-06-22 15:33, Yang Shi wrote: > On Mon, Jun 22, 2020 at 3:30 PM Yang Shi <shy828301 at gmail.com> wrote: >> On Mon, Jun 22, 2020 at 2:53 PM Zi Yan <ziy at nvidia.com> wrote: >>> On 22 Jun 2020, at 17:31, Ralph Campbell wrote: >>>> On 6/22/20 1:10 PM, Zi Yan wrote: >>>>> On 22 Jun 2020, at 15:36, Ralph Campbell wrote:
2020 Jun 22
0
[PATCH 00/16] mm/hmm/nouveau: THP mapping and migration
On Fri, Jun 19, 2020 at 02:56:33PM -0700, Ralph Campbell wrote: > These patches apply to linux-5.8.0-rc1. Patches 1-3 should probably go > into 5.8, the others can be queued for 5.9. Patches 4-6 improve the HMM > self tests. Patch 7-8 prepare nouveau for the meat of this series which > adds support and testing for compound page mapping of system memory > (patches 9-11) and compound
2020 Jun 22
0
[PATCH 13/16] mm: support THP migration to device private memory
On Mon, Jun 22, 2020 at 2:53 PM Zi Yan <ziy at nvidia.com> wrote: > > On 22 Jun 2020, at 17:31, Ralph Campbell wrote: > > > On 6/22/20 1:10 PM, Zi Yan wrote: > >> On 22 Jun 2020, at 15:36, Ralph Campbell wrote: > >> > >>> On 6/21/20 4:20 PM, Zi Yan wrote: > >>>> On 19 Jun 2020, at 17:56, Ralph Campbell wrote: > >>>>
2020 Jun 22
2
[PATCH 13/16] mm: support THP migration to device private memory
On Mon, Jun 22, 2020 at 4:02 PM John Hubbard <jhubbard at nvidia.com> wrote: > > On 2020-06-22 15:33, Yang Shi wrote: > > On Mon, Jun 22, 2020 at 3:30 PM Yang Shi <shy828301 at gmail.com> wrote: > >> On Mon, Jun 22, 2020 at 2:53 PM Zi Yan <ziy at nvidia.com> wrote: > >>> On 22 Jun 2020, at 17:31, Ralph Campbell wrote: > >>>> On
2017 Nov 14
0
Re: dramatic performance slowdown due to THP allocation failure with full pagecache
On Tue, Nov 14, 2017 at 10:52:03AM -0700, Blair Bethwaite wrote: > Thanks for the reply Daniel, > > However I think you slightly misunderstood the scenario... > > On 14 November 2017 at 10:32, Daniel P. Berrange <berrange@redhat.com> wrote: > > IOW, if your application has a certain expectation of performance that can only > > be satisfied by having the KVM guest
2020 Jun 22
0
[PATCH 13/16] mm: support THP migration to device private memory
On 6/22/20 1:10 PM, Zi Yan wrote: > On 22 Jun 2020, at 15:36, Ralph Campbell wrote: > >> On 6/21/20 4:20 PM, Zi Yan wrote: >>> On 19 Jun 2020, at 17:56, Ralph Campbell wrote: >>> >>>> Support transparent huge page migration to ZONE_DEVICE private memory. >>>> A new flag (MIGRATE_PFN_COMPOUND) is added to the input PFN array to >>>>
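
The quoted patch extends the migrate_vma PFN-array encoding. A sketch of how a source entry might be flagged, where MIGRATE_PFN_COMPOUND comes from the patch (its bit value is assumed here) and the rest is the ~5.8-era encoding from include/linux/migrate.h:

    #include <linux/migrate.h>

    /* Assumed bit value; the real definition is in the patch itself. */
    #define MIGRATE_PFN_COMPOUND (1UL << 4)

    /* Mark an entry so the whole compound (THP) unit is migrated at
     * once rather than being split into base pages first. */
    static unsigned long encode_thp_entry(unsigned long pfn)
    {
        return migrate_pfn(pfn) | MIGRATE_PFN_MIGRATE | MIGRATE_PFN_COMPOUND;
    }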
2017 Nov 14
2
Re: dramatic performance slowdown due to THP allocation failure with full pagecache
Thanks for the reply Daniel, However I think you slightly misunderstood the scenario... On 14 November 2017 at 10:32, Daniel P. Berrange <berrange@redhat.com> wrote: > IOW, if your application has a certain expectation of performance that can only > be satisfied by having the KVM guest backed by huge pages, then you should > really change to explicitly reserve huge pages for the
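
The explicit reservation Daniel refers to is hugetlb preallocation on the host, e.g. via /proc/sys/vm/nr_hugepages. A trivial sketch (the count is an example value; needs root):

    #include <stdio.h>

    int main(void)
    {
        /* Reserve 512 x 2 MiB hugetlb pages (1 GiB) up front, so guest
         * memory does not depend on compaction succeeding at runtime. */
        FILE *f = fopen("/proc/sys/vm/nr_hugepages", "w");
        if (!f)
            return 1;
        fprintf(f, "512\n");
        return fclose(f) == 0 ? 0 : 1;
    }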