Displaying 10 results from an estimated 10 matches for "wake_up_new_task".
2015 Mar 30
2
[PATCH 0/9] qspinlock stuff -v15
...dle_pte_fault
|--1.58%-- do_anonymous_page
|--1.56%-- rmqueue_bulk.clone.0
|--1.35%-- copy_pte_range
|--1.25%-- zap_pte_range
|--1.13%-- cache_flusharray
|--0.88%-- __pmd_alloc
|--0.70%-- wake_up_new_task
|--0.66%-- __pud_alloc
|--0.59%-- ext4_discard_preallocations
--6.53%-- [...]
With the qspinlock patch, the perf profile at 1000 users was:
3.25% reaim [kernel.kallsyms] [k] queue_spin_lock_slowpath
|--62.00%-- _raw_spin_lo...
2013 Feb 18
2
Kernel Error with Debian Squeeze, DRBD, 3.2.0-0.bpo.4-amd64 and Xen4.0
...57318.444140] 0000000000040001 0000000000004eae ffff880000000000
0000000000000000
[257318.444237] Call Trace:
[257318.444274] [<ffffffff81015f54>] ? save_i387_xstate+0x118/0x1e2
[257318.444321] [<ffffffff8100e132>] ? do_signal+0x21f/0x635
[257318.444365] [<ffffffff810462d5>] ? wake_up_new_task+0x96/0xc2
[257318.444410] [<ffffffff8100e56d>] ? do_notify_resume+0x25/0x67
[257318.444455] [<ffffffff811070c0>] ? sys_read+0x45/0x6e
[257318.444499] [<ffffffff8136d920>] ? int_signal+0x12/0x17
[257318.444541] Code: c1 ea 20 0f 01 d1 c3 48 8b 97 58 04 00 00 48 85
d2 0f 84 d0 00...
2015 Mar 16
19
[PATCH 0/9] qspinlock stuff -v15
Hi Waiman,
As promised, here is the paravirt stuff I did during the trip to BOS last week.
All the !paravirt patches are more or less the same as before (the only real
change is the copyright lines in the first patch).
The paravirt stuff is 'simple' and KVM only -- the Xen code was a little more
convoluted and I've no real way to test that, but it should be straightforward
to make work.
2013 Apr 18
39
Xen blktap driver for Ceph RBD : Anybody wants to test ? :p
Hi,
I've been working on getting a working blktap driver that allows access to
Ceph RBD block devices without relying on the RBD kernel driver,
and it has finally got to a point where it works and is testable.
Some of the advantages are:
- Easier to update to newer RBD version
- Allows functionality only available in the userspace RBD library
(write cache, layering, ...)
- Less issue when
2015 Apr 07
18
[PATCH v15 00/15] qspinlock: a 4-byte queue spinlock with PV support
...%-- handle_pte_fault
|--1.58%-- do_anonymous_page
|--1.56%-- rmqueue_bulk.clone.0
|--1.35%-- copy_pte_range
|--1.25%-- zap_pte_range
|--1.13%-- cache_flusharray
|--0.88%-- __pmd_alloc
|--0.70%-- wake_up_new_task
|--0.66%-- __pud_alloc
|--0.59%-- ext4_discard_preallocations
--6.53%-- [...]
With the qspinlock patch, the perf profile at 1000 users was:
3.25% reaim [kernel.kallsyms] [k] queue_spin_lock_slowpath
|--62.00%-- _raw_spin_lock_ir...
2015 Apr 24
16
[PATCH v16 00/14] qspinlock: a 4-byte queue spinlock with PV support
...%-- handle_pte_fault
|--1.58%-- do_anonymous_page
|--1.56%-- rmqueue_bulk.clone.0
|--1.35%-- copy_pte_range
|--1.25%-- zap_pte_range
|--1.13%-- cache_flusharray
|--0.88%-- __pmd_alloc
|--0.70%-- wake_up_new_task
|--0.66%-- __pud_alloc
|--0.59%-- ext4_discard_preallocations
--6.53%-- [...]
With the qspinlock patch, the perf profile at 1000 users was:
3.25% reaim [kernel.kallsyms] [k] queue_spin_lock_slowpath
|--62.00%-- _raw_spin_lock_ir...