Displaying 20 results from an estimated 43 matches for "hard_nr_sectors".
2007 May 09
3
[patch 8/9] lguest: the block driver
...e only have a single request outstanding at a time. */
+ struct lguest_dma dma;
+ struct request *req;
+};
+
+/* Jens gave me this nice helper to end all chunks of a request. */
+static void end_entire_request(struct request *req, int uptodate)
+{
+ if (end_that_request_first(req, uptodate, req->hard_nr_sectors))
+ BUG();
+ add_disk_randomness(req->rq_disk);
+ blkdev_dequeue_request(req);
+ end_that_request_last(req, uptodate);
+}
+
+static irqreturn_t lgb_irq(int irq, void *_bd)
+{
+ struct blockdev *bd = _bd;
+ unsigned long flags;
+
+ if (!bd->req) {
+ pr_debug("No work!\n");
+ retur...
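For context, a completion helper like the one above is normally driven from the driver's request function on the pre-2.6.25 block API. The sketch below is hypothetical (do_sketch_request() is a made-up name, and this is not the actual lguest code, which completes requests from lgb_irq()); it only illustrates the usual shape: fetch a request with elv_next_request() and finish it as a whole with end_entire_request().

#include <linux/blkdev.h>

/* Hypothetical request function, for illustration only. */
static void do_sketch_request(struct request_queue *q)
{
	struct request *req;

	while ((req = elv_next_request(q)) != NULL) {
		if (!blk_fs_request(req)) {
			/* Not a filesystem request: fail it as a whole. */
			end_entire_request(req, 0);
			continue;
		}
		/*
		 * A real driver would start the I/O here and call
		 * end_entire_request(req, 1) later, once the host has
		 * signalled completion (as lgb_irq() above would).
		 */
		end_entire_request(req, 1);
	}
}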
2008 Sep 10
0
[RFC][PATCH -mm] blktrace: adds ioprio to blktrace
..._add_trace(bt, 0, rq->data_len, rw, what, rq->errors, sizeof(rq->cmd), rq->cmd);
+ __blk_add_trace(bt, 0, rq->data_len, rw, what, rq->errors, ioprio, sizeof(rq->cmd), rq->cmd);
} else {
what |= BLK_TC_ACT(BLK_TC_FS);
- __blk_add_trace(bt, rq->hard_sector, rq->hard_nr_sectors << 9, rw, what, rq->errors, 0, NULL);
+ __blk_add_trace(bt, rq->hard_sector, rq->hard_nr_sectors << 9, rw, what, rq->errors, ioprio, 0, NULL);
}
}
@@ -224,11 +226,12 @@ static inline void blk_add_trace_bio(str
u32 what)
{
struct blk_trace *bt = q->b...
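Read together, the two hunks imply that __blk_add_trace() itself gains an ioprio argument between the error and PDU-length parameters. A sketch of the presumed prototype change, inferred only from the call sites above (the exact type of the new parameter is a guess):

/* Before (mainline blktrace at the time): */
static void __blk_add_trace(struct blk_trace *bt, sector_t sector, int bytes,
			    int rw, u32 what, int error,
			    int pdu_len, void *pdu_data);

/* After, as the call sites above suggest (ioprio type guessed): */
static void __blk_add_trace(struct blk_trace *bt, sector_t sector, int bytes,
			    int rw, u32 what, int error, unsigned short ioprio,
			    int pdu_len, void *pdu_data);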
2007 Jun 07
4
[PATCH RFC 0/3] Virtio draft II
Hi again all,
It turns out that networking really wants ordered requests, which the
previous patches didn't allow. This patch changes it to a callback
mechanism; kudos to Avi.
The downside is that locking is more complicated, and after a few dead
ends I implemented the simplest solution: the struct virtio_device
contains the spinlock to use, and it's held when your callbacks get
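A purely illustrative sketch of the locking scheme described here (this is not the actual draft II interface; the structure layout and callback pointer are invented for illustration):

#include <linux/spinlock.h>

struct virtio_device {
	spinlock_t lock;				/* held across callbacks */
	void (*callback)(struct virtio_device *vdev);	/* hypothetical */
};

/* Transport side: invoke the driver's callback under the device lock. */
static void transport_notify(struct virtio_device *vdev)
{
	unsigned long flags;

	spin_lock_irqsave(&vdev->lock, flags);
	if (vdev->callback)
		vdev->callback(vdev);
	spin_unlock_irqrestore(&vdev->lock, flags);
}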
2007 Jul 03
6
[PATCH 1/3] Virtio draft IV
In response to Avi's excellent analysis, I've updated virtio as promised
(apologies for the delay, travel got in the way).
===
This attempts to implement a "virtual I/O" layer which should allow
common drivers to be efficiently used across most virtual I/O
mechanisms. It will no doubt need further enhancement.
The details of probing the device are left to hypervisor-specific
2008 Nov 12
15
[PATCH][RFC][12+2][v3] A expanded CFQ scheduler for cgroups
This patchset expands the traditional CFQ scheduler to support cgroups,
and improves on the previous version.
The improvements are as follows.
* Modularizing our new CFQ scheduler.
The expanded CFQ scheduler is registered/unregistered as a new I/O
elevator scheduler called "cfq-cgroups". This way, the traditional CFQ
scheduler, which does not handle cgroups, and our new CFQ
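For reference, a hypothetical sketch (not taken from the patchset) of what registering a separate elevator named "cfq-cgroups" would look like with the elevator API of that era; the actual ops and attributes are elided:

#include <linux/module.h>
#include <linux/elevator.h>

static struct elevator_type iosched_cfq_cgroups = {
	/* .ops and .elevator_attrs omitted in this sketch */
	.elevator_name	= "cfq-cgroups",
	.elevator_owner	= THIS_MODULE,
};

static int __init cfq_cgroups_init(void)
{
	elv_register(&iosched_cfq_cgroups);
	return 0;
}
module_init(cfq_cgroups_init);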
2007 Jan 02
0
[PATCH 1/4] add scsi-target and IO_CMD_EPOLL_WAIT patches
...gt;bio)
++ blk_rq_bio_prep(q, rq, bio);
++ else if (!q->back_merge_fn(q, rq, bio)) {
++ ret = -EINVAL;
++ spin_unlock_irq(q->queue_lock);
++ goto unmap_bio;
++ } else {
++ rq->biotail->bi_next = bio;
++ rq->biotail = bio;
++
++ rq->nr_sectors += bio_sectors(bio);
++ rq->hard_nr_sectors = rq->nr_sectors;
++ rq->data_len += bio->bi_size;
++ }
++ spin_unlock_irq(q->queue_lock);
++
++ return bio->bi_size;
++
++unmap_bio:
++ /* if it was bounced we must call the end io function */
++ bio_endio(bio, bio->bi_size, 0);
++ __blk_rq_unmap_user(orig_bio);
++ bio_put(bio);...
2007 Sep 25
50
[patch 00/43] lguest: Patches for 2.6.24 (and patchbomb test)
Hi all,
These are the patches I'm planning to submit for 2.6.24. Comments
gratefully accepted. Along with the usual cleanups and improvements are Jes'
de-i386-ification patches, and a new "virtio" mechanism designed to be shared
with KVM (and hopefully other hypervisors).
Cheers,
Rusty.
Documentation/lguest/Makefile | 30
Documentation/lguest/lguest.c
2007 Apr 18
20
[patch 00/20] XEN-paravirt: Xen guest implementation for paravirt_ops interface
This patch series implements the Linux Xen guest in terms of the
paravirt-ops interface. The features implemented in this patch series
are:
* domU only
* UP only (most code is SMP-safe, but there's no way to create a new vcpu)
* writable pagetables, with late pinning/early unpinning
(no shadow pagetable support)
* supports both PAE and non-PAE modes
* xen console
* virtual block
2007 Apr 18
24
[patch 00/24] Xen-paravirt_ops: Xen guest implementation for paravirt_ops interface
Hi Andi,
This patch series implements the Linux Xen guest as a paravirt_ops
backend. The features implemented in this patch series are:
* domU only
* UP only (most code is SMP-safe, but there's no way to create a new vcpu)
* writable pagetables, with late pinning/early unpinning
(no shadow pagetable support)
* supports both PAE and non-PAE modes
* xen hvc console (console=hvc0)
*
2007 Apr 18
25
[patch 00/21] Xen-paravirt: Xen guest implementation for paravirt_ops interface
Hi Andi,
This patch series implements the Linux Xen guest in terms of the
paravirt-ops interface. The features implemented in this patch series
are:
* domU only
* UP only (most code is SMP-safe, but there's no way to create a new vcpu)
* writable pagetables, with late pinning/early unpinning
(no shadow pagetable support)
* supports both PAE and non-PAE modes
* xen console
* virtual