Displaying 9 results from an estimated 9 matches for "free_pcppages_bulk".
2015 Mar 30 | 2 | [PATCH 0/9] qspinlock stuff -v15
...|--2.93%-- __pte_alloc
|--2.68%-- __drain_alien_cache
|--2.56%-- ext4_do_update_inode
|--2.54%-- try_to_wake_up
|--2.46%-- pgd_free
|--2.32%-- cache_alloc_refill
|--2.32%-- pgd_alloc
|--2.32%-- free_pcppages_bulk
|--1.88%-- do_wp_page
|--1.77%-- handle_pte_fault
|--1.58%-- do_anonymous_page
|--1.56%-- rmqueue_bulk.clone.0
|--1.35%-- copy_pte_range
|--1.25%-- zap_pte_range
|--1.13%-- cache_flusharray...
2015 Mar 16 | 19 | [PATCH 0/9] qspinlock stuff -v15
Hi Waiman,
As promised, here is the paravirt stuff I did during the trip to BOS last week.
All the !paravirt patches are more or less the same as before (the only real
change is the copyright lines in the first patch).
The paravirt stuff is 'simple' and KVM only -- the Xen code was a little more
convoluted and I've no real way to test that, but it should be straightforward
to make work.
2015 Apr 07 | 18 | [PATCH v15 00/15] qspinlock: a 4-byte queue spinlock with PV support
...|--2.93%-- __pte_alloc
|--2.68%-- __drain_alien_cache
|--2.56%-- ext4_do_update_inode
|--2.54%-- try_to_wake_up
|--2.46%-- pgd_free
|--2.32%-- cache_alloc_refill
|--2.32%-- pgd_alloc
|--2.32%-- free_pcppages_bulk
|--1.88%-- do_wp_page
|--1.77%-- handle_pte_fault
|--1.58%-- do_anonymous_page
|--1.56%-- rmqueue_bulk.clone.0
|--1.35%-- copy_pte_range
|--1.25%-- zap_pte_range
|--1.13%-- cache_flusharray...
2015 Apr 24 | 16 | [PATCH v16 00/14] qspinlock: a 4-byte queue spinlock with PV support
...|--2.93%-- __pte_alloc
|--2.68%-- __drain_alien_cache
|--2.56%-- ext4_do_update_inode
|--2.54%-- try_to_wake_up
|--2.46%-- pgd_free
|--2.32%-- cache_alloc_refill
|--2.32%-- pgd_alloc
|--2.32%-- free_pcppages_bulk
|--1.88%-- do_wp_page
|--1.77%-- handle_pte_fault
|--1.58%-- do_anonymous_page
|--1.56%-- rmqueue_bulk.clone.0
|--1.35%-- copy_pte_range
|--1.25%-- zap_pte_range
|--1.13%-- cache_flusharray...
2013 Aug 22 | 13 | Lustre buffer cache causes large system overhead.
We have just discovered that a large buffer cache generated by traversing a
Lustre file system causes significant system overhead for applications with
high memory demands. We have seen a 50% slowdown or worse for such
applications. Even High Performance Linpack, which has no file I/O whatsoever,
is affected. The only remedy seems to be to empty the buffer cache from memory
by running