Displaying 20 results from an estimated 100 matches similar to: "[PATCH][RFC][12+2][v3] A expanded CFQ scheduler for cgroups"

2008 Nov 07
0
[PATCH][cfq-cgroups] Introduce ioprio class for top layer.
This patch introduces an ioprio class for the cfq data control layer. By applying this patch, the controller can also handle the RT/IDLE properties among groups. Signed-off-by: Satoshi UCHIDA <s-uchida at ap.jp.nec.com> --- block/cfq-cgroup.c | 344 +++++++++++++++++++++++++------------------ include/linux/cfq-iosched.h | 1 + 2 files changed, 203 insertions(+), 142 deletions(-)
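For context, the RT/BE/IDLE classes the patch propagates to groups are the same classes a single task can already request for itself via the ioprio_set() syscall. The following is an illustrative userspace sketch, not part of the patch; the constants follow include/uapi/linux/ioprio.h, and glibc provides no wrapper, so syscall() is used directly.

/*
 * Illustrative sketch (not part of the patch): demote the current process
 * to the IDLE I/O priority class, the same class semantics the cfq-cgroups
 * patch extends to whole groups.
 */
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

#define IOPRIO_CLASS_SHIFT	13
#define IOPRIO_PRIO_VALUE(class, data)	(((class) << IOPRIO_CLASS_SHIFT) | (data))

#define IOPRIO_CLASS_RT		1	/* realtime: served before everything else */
#define IOPRIO_CLASS_BE		2	/* best effort: the default class */
#define IOPRIO_CLASS_IDLE	3	/* idle: only runs when the disk is otherwise unused */

#define IOPRIO_WHO_PROCESS	1

int main(void)
{
	int prio = IOPRIO_PRIO_VALUE(IOPRIO_CLASS_IDLE, 0);

	if (syscall(SYS_ioprio_set, IOPRIO_WHO_PROCESS, 0, prio) < 0) {
		perror("ioprio_set");
		return 1;
	}
	printf("now running with IDLE I/O priority\n");
	return 0;
}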
2008 Oct 31
0
[PATCH][cfq-cgroups] Interface for parameter of cfq driver data
This patch adds an interface for the parameters of the cfq driver data. Signed-off-by: Satoshi UCHIDA <s-uchida at ap.jp.nec.com> --- block/cfq-cgroup.c | 59 +++++++++++++++++++++++++++++++++++++++++++++++++++- 1 files changed, 58 insertions(+), 1 deletions(-) diff --git a/block/cfq-cgroup.c b/block/cfq-cgroup.c index 4938fa0..776874d 100644 --- a/block/cfq-cgroup.c +++
2008 Oct 29
0
[PATCH][cfq-cgroups] Introduce cgroups structure with ioprio entry.
This patch introduces the cfq_cgroup structure, which is the type used for group control within the expanded CFQ scheduler. In addition, the cfq_cgroup structure has an "ioprio" entry, which is the group's I/O preference. Signed-off-by: Satoshi UCHIDA <s-uchida at ap.jp.nec.com> --- block/cfq-cgroup.c | 148 +++++++++++++++++++++++++++++++++++++++++
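For orientation only, a hypothetical sketch of what such a per-group structure typically looks like; the names and layout below are illustrative and are not the patch's actual code.

/*
 * Hypothetical sketch -- not the code from this patch. A per-group
 * structure in this style embeds the generic cgroup state and carries the
 * group's "ioprio" preference next to it.
 */
#include <linux/cgroup.h>
#include <linux/kernel.h>

struct cfq_cgroup_sketch {
	struct cgroup_subsys_state css;	/* generic per-cgroup state */
	unsigned int ioprio;		/* group I/O preference (e.g. 0..7, lower = higher priority) */
};

/* Recover the containing structure from the generic cgroup state. */
static inline struct cfq_cgroup_sketch *css_to_cfqcg(struct cgroup_subsys_state *css)
{
	return container_of(css, struct cfq_cgroup_sketch, css);
}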
2007 May 14
3
zaptel huge irq problem
Hello, I had noticed a strange crackling sound on my phone calls going through my zaptel device (TDM400P), so I decided to check for a possible timer issue, and found lots of reports on forums concerning zaptel's sensitivity to IRQs. I tried about everything: moving PCI slots, the noapic and acpi=off boot options, playing with different kernel options (iosched/preemption/timer/...), playing with
2018 May 23
3
[PATCH] block drivers/block: Use octal not symbolic permissions
Convert the S_<FOO> symbolic permissions to their octal equivalents, as using octal rather than symbolic permissions is preferred by many as more readable. See: https://lkml.org/lkml/2016/8/2/1945 Done with automated conversion via: $ ./scripts/checkpatch.pl -f --types=SYMBOLIC_PERMS --fix-inplace <files...> Miscellanea: o Wrapped modified multi-line calls to a single line where
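The conversion itself is mechanical; a before/after sketch of the kind of change the checkpatch run produces is shown below (the parameter name is made up for the example).

/*
 * Illustrative before/after of the symbolic-to-octal permission change;
 * "example_threshold" is a made-up parameter used only for the sketch.
 */
#include <linux/module.h>
#include <linux/moduleparam.h>
#include <linux/stat.h>

static int example_threshold = 16;

/* Before: symbolic permission macros. */
/* module_param(example_threshold, int, S_IRUSR | S_IWUSR | S_IRGRP | S_IROTH); */

/* After: the octal equivalent, 0644 (owner read/write, group and other read). */
module_param(example_threshold, int, 0644);
MODULE_PARM_DESC(example_threshold, "example tunable used only for illustration");

MODULE_LICENSE("GPL");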
2013 Nov 19
5
xenwatch: page allocation failure: order:4, mode:0x10c0d0 xen_netback:xenvif_alloc: Could not allocate netdev for vif16.0
Hi Wei, I ran into the following problem when trying to boot another guest after less than a day of uptime (the system already started 15 guests at boot, which went fine). dom0 is allocated a fixed 1536M. Both the host and the PV guests run the same kernel; some HVMs run a slightly older kernel (3.9, for example). There are quite a few grant-table messages in xl dmesg; I also included these and a
2010 Apr 19
20
Lustre Client - Memory Issue
Hi Guys, My users are reporting some issues with memory on our Lustre 1.8.1 clients. It looks like when they submitted a single job at a time, the run time was about 4.5 minutes. However, when they ran multiple jobs (10 or fewer) on a client with 192GB of memory on a single node, the run time for each job exceeded 3-4X the run time of the single process. They also noticed that the swap space
2010 Jul 06
0
[PATCH 0/6 v6][RFC] jbd[2]: enhance fsync performance when using CFQ
Hi Jeff, On 07/03/2010 03:58 AM, Jeff Moyer wrote: > Hi, > > Running iozone or fs_mark with fsync enabled, the performance of CFQ is > far worse than that of deadline for enterprise class storage when dealing > with file sizes of 8MB or less. I used the following command line as a > representative test case: > > fs_mark -S 1 -D 10000 -N 100000 -d /mnt/test/fs_mark -s
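The fs_mark invocation above is cut off; the workload it generates amounts to creating many small files and fsync()ing each one. The userspace sketch below approximates that pattern; the path, file size and count are placeholders, not the values from the quoted test.

/*
 * Rough approximation of the fsync-heavy fs_mark workload discussed above:
 * create a file, write a small buffer, fsync, repeat. Paths, sizes and
 * counts are placeholders.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	char buf[4096];
	memset(buf, 'x', sizeof(buf));

	for (int i = 0; i < 1000; i++) {
		char path[64];
		snprintf(path, sizeof(path), "/mnt/test/file.%d", i);

		int fd = open(path, O_CREAT | O_WRONLY | O_TRUNC, 0644);
		if (fd < 0) {
			perror("open");
			return 1;
		}
		if (write(fd, buf, sizeof(buf)) != sizeof(buf)) {
			perror("write");
			close(fd);
			return 1;
		}
		/* The fsync after every file is what exposes the CFQ vs. deadline gap. */
		if (fsync(fd) < 0) {
			perror("fsync");
			close(fd);
			return 1;
		}
		close(fd);
	}
	return 0;
}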
2013 Apr 19
14
[GIT PULL] (xen) stable/for-jens-3.10
Hey Jens, Please in your spare time (if there is such a thing at a conference) pull this branch: git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git stable/for-jens-3.10 for your v3.10 branch. Sorry for being so late with this. <blurb> It has the 'feature-max-indirect-segments' implemented in both backend and frontend. The current problem with the backend and
2009 Sep 10
24
[Bug 23847] New: kernel BUG when using nouveau
http://bugs.freedesktop.org/show_bug.cgi?id=23847 Summary: kernel BUG when using nouveau Product: xorg Version: 7.4 Platform: Other OS/Version: All Status: NEW Severity: normal Priority: medium Component: Driver/nouveau AssignedTo: nouveau at lists.freedesktop.org ReportedBy: shiningxc at
2012 Apr 20
1
[PATCH] multiqueue: a hodge podge of things
Not really interesting yet, this just gets us to the state where single queue boots on a current kernel. Signed-off-by: Jens Axboe <axboe at kernel.dk> --- block/Kconfig | 5 + block/Kconfig.iosched | 2 + block/blk-core.c | 427 ++++++++++++++++++-------------------- block/blk-exec.c | 14 +- block/blk-flush.c
2016 Nov 17
13
automatic IRQ affinity for virtio
Hi Michael, this series contains a couple of cleanups for the virtio_pci interrupt handling code, including a switch to the new pci_alloc_irq_vectors helper, and support for automatic affinity by the PCI layer if the consumers ask for it. It then converts virtio_blk over to use this functionality so that its blk-mq queues are aligned to the MSI-X vector routing. I have a similar patch in the
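The PCI-layer helper referred to above is pci_alloc_irq_vectors(); the sketch below shows how a driver opts in to automatic affinity spreading with the PCI_IRQ_AFFINITY flag. The function name and vector counts are placeholders, not code from the virtio patches themselves.

/*
 * Sketch: ask the PCI core for MSI-X vectors and let it spread their
 * affinity across CPUs. 'example_setup_irqs' and 'want' are placeholders.
 */
#include <linux/pci.h>

static int example_setup_irqs(struct pci_dev *pdev, unsigned int want)
{
	int nvec;

	/* Ask for up to 'want' vectors, MSI-X only, with automatic affinity. */
	nvec = pci_alloc_irq_vectors(pdev, 1, want,
				     PCI_IRQ_MSIX | PCI_IRQ_AFFINITY);
	if (nvec < 0)
		return nvec;

	/*
	 * A block driver would then map its blk-mq hardware queues onto these
	 * vectors, one queue per vector, using the affinity the PCI core
	 * assigned (see pci_irq_get_affinity()).
	 */
	return nvec;
}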
2017 Feb 05
13
automatic IRQ affinity for virtio V3
Hi Michael, hi Jason, This patch series applies a few cleanups to the virtio PCI interrupt handling code, and then converts the virtio PCI code to use automatic MSI-X vector spreading, as well as using the information in virtio-blk and virtio-scsi to automatically align the blk-mq queues to the MSI-X vectors. Changes since V2: - remove a redundant callback check - calculate ->msix_vectors
2017 Jan 27
15
automatic IRQ affinity for virtio V2
Hi Michael, hi Jason, This patch series applies a few cleanups to the virtio PCI interrupt handling code, and then converts the virtio PCI code to use automatic MSI-X vector spreading, as well as using the information in virtio-blk and virtio-scsi to automatically align the blk-mq queues to the MSI-X vectors. Changes since V1: - dropped the patches already merged for 4.10-rc - new patch to