search for: i_bdev

Displaying 20 results from an estimated 49 matches for "i_bdev".

2012 Oct 09
2
[PATCH] vhost-blk: Add vhost-blk support v2
...= iov_base & ~PAGE_MASK; + len = PAGE_SIZE - off; + if (len > iov_len) + len = iov_len; + + while (!bio || bio_add_page(bio, page, len, off) <= 0) { + bio = bio_alloc(GFP_KERNEL, pages_nr); + if (!bio) + goto fail; + bio->bi_sector = req->sector; + bio->bi_bdev = bdev; + bio->bi_private = req; + bio->bi_end_io = vhost_blk_req_done; + req->bio[bio_nr++] = bio; + } + req->sector += len >> 9; + iov_base += len; + iov_len -= len; + } + + pages += pages_nr; + } + atomic_set(&req->bio_nr, bio_nr); + + return 0; +...
2007 Jul 03
6
[PATCH 1/3] Virtio draft IV
In response to Avi's excellent analysis, I've updated virtio as promised (apologies for the delay, travel got in the way). === This attempts to implement a "virtual I/O" layer which should allow common drivers to be efficiently used across most virtual I/O mechanisms. It will no doubt need further enhancement. The details of probing the device are left to hypervisor-specific
2012 Nov 19
1
[PATCH] vhost-blk: Add vhost-blk support v5
...es(&iov[i]); + + if (unlikely(req->write == WRITE_FLUSH)) { + req->pl = NULL; + req->bio = kmalloc(sizeof(struct bio *), GFP_KERNEL); + bio = bio_alloc(GFP_KERNEL, 1); + if (!bio) { + kfree(req->bio); + return -ENOMEM; + } + bio->bi_sector = req->sector; + bio->bi_bdev = bdev; + bio->bi_private = req; + bio->bi_end_io = vhost_blk_req_done; + req->bio[bio_nr++] = bio; + + goto out; + } + + pl_len = iov_nr * sizeof(req->pl[0]); + page_len = pages_nr_total * sizeof(struct page *); + bio_len = pages_nr_total * sizeof(struct bio *); + + buf = kmall...
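The `pl_len + page_len + bio_len` arithmetic in the snippet is a single-allocation pattern: three arrays (page lists, page pointers, bio pointers) are requested as one buffer and carved up by offset. A userspace sketch under assumed names (`struct carve`, `carve_alloc` are illustrative, not from the patch):

```c
#include <stdlib.h>
#include <stddef.h>

/* One malloc carved into three arrays, as in the v4/v5 snippets:
 * the second and third arrays live at fixed offsets inside the
 * single backing buffer, so one free() releases everything. */
struct carve {
    void  *pl;    /* req_page_list array         */
    void **pages; /* page pointer array          */
    void **bios;  /* bio pointer array           */
    void  *raw;   /* the single backing buffer   */
};

static int carve_alloc(struct carve *c, size_t iov_nr, size_t pages_nr,
                       size_t pl_entry_sz)
{
    size_t pl_len   = iov_nr   * pl_entry_sz;
    size_t page_len = pages_nr * sizeof(void *);
    size_t bio_len  = pages_nr * sizeof(void *);
    char *buf = malloc(pl_len + page_len + bio_len);

    if (!buf)
        return -1;
    c->raw   = buf;
    c->pl    = buf;
    c->pages = (void **)(buf + pl_len);
    c->bios  = (void **)(buf + pl_len + page_len);
    return 0;
}
```

The upside is one allocation (and one failure path) per request instead of three; the cost is that the sub-arrays cannot be resized independently.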
2012 Oct 15
2
[PATCH 1/1] vhost-blk: Add vhost-blk support v4
...&iov[i]); + } + + if (unlikely(req->write == WRITE_FLUSH)) { + req->pl = NULL; + req->bio = kmalloc(sizeof(struct bio *), GFP_KERNEL); + bio = bio_alloc(GFP_KERNEL, 1); + if (!bio) { + kfree(req->bio); + return -ENOMEM; + } + bio->bi_sector = req->sector; + bio->bi_bdev = bdev; + bio->bi_private = req; + bio->bi_end_io = vhost_blk_req_done; + req->bio[bio_nr++] = bio; + + goto out; + } + + req->pl = kmalloc((iov_nr * sizeof(struct req_page_list)) + + (pages_nr_total * sizeof(struct page *)) + + (pages_nr_total * sizeof(struct bio *)), +...
2012 Oct 10
0
[PATCH] vhost-blk: Add vhost-blk support v3
...&iov[i]); + } + + if (unlikely(req->write == WRITE_FLUSH)) { + req->pl = NULL; + req->bio = kmalloc(sizeof(struct bio *), GFP_KERNEL); + bio = bio_alloc(GFP_KERNEL, 1); + if (!bio) { + kfree(req->bio); + return -ENOMEM; + } + bio->bi_sector = req->sector; + bio->bi_bdev = bdev; + bio->bi_private = req; + bio->bi_end_io = vhost_blk_req_done; + req->bio[bio_nr++] = bio; + + goto out; + } + + req->pl = kmalloc((iov_nr * sizeof(struct req_page_list)) + + (pages_nr_total * sizeof(struct page *)) + + (pages_nr_total * sizeof(struct bio *)), +...
2010 May 07
6
[PATCH 1/5] fs: allow short direct-io reads to be completed via buffered IO V2
V1->V2: Check to see if our current ppos is >= i_size after a short DIO read, in case it was genuinely a short read and we just need to return. This is similar to what already happens in the write case. If we have a short read while doing O_DIRECT, instead of just returning, fall through and try to read the rest via buffered IO. BTRFS needs this because if we encounter a compressed or
2012 Dec 02
3
[PATCH] vhost-blk: Add vhost-blk support v6
...total += iov_num_pages(&iov[i]); + + if (unlikely(req->write == WRITE_FLUSH)) { + req->use_inline = true; + req->pl = NULL; + req->bio = req->inline_bio; + + bio = bio_alloc(GFP_KERNEL, 1); + if (!bio) + return -ENOMEM; + + bio->bi_sector = req->sector; + bio->bi_bdev = bdev; + bio->bi_private = req; + bio->bi_end_io = vhost_blk_req_done; + req->bio[bio_nr++] = bio; + + goto out; + } + + if (pages_nr_total > NR_INLINE) { + int pl_len, page_len, bio_len; + + req->use_inline = false; + pl_len = iov_nr * sizeof(req->pl[0]); + page_len...
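The `use_inline`/`NR_INLINE` logic in the v6 snippet is a small-request optimization: requests needing at most `NR_INLINE` entries reuse a fixed array embedded in the request and skip the allocation entirely. A userspace sketch with assumed names and an assumed threshold value (the snippet does not show what `NR_INLINE` is):

```c
#include <stdlib.h>
#include <stddef.h>

#define NR_INLINE 16 /* assumed threshold; the real value is elided */

struct req {
    int    use_inline;
    void  *inline_buf[NR_INLINE];
    void **buf; /* points at inline_buf or at a heap array */
};

/* Small requests take the embedded array for free; only larger ones
 * pay for a separate allocation, mirroring the v6 use_inline logic. */
static int req_setup_buf(struct req *r, size_t n)
{
    if (n <= NR_INLINE) {
        r->use_inline = 1;
        r->buf = r->inline_buf;
        return 0;
    }
    r->use_inline = 0;
    r->buf = malloc(n * sizeof(*r->buf));
    return r->buf ? 0 : -1;
}

static void req_free_buf(struct req *r)
{
    if (!r->use_inline)
        free(r->buf);
}
```

This is why the v6 flush path no longer needs the `kfree(req->bio)` cleanup seen in v3-v5: a flush uses a single bio, which always fits inline.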
2020 Sep 01
10
remove revalidate_disk()
Hi Jens, this series removes the revalidate_disk() function, which has been a real odd duck in recent years. The prime reason most people use it is that it propagates a size change from the gendisk to the block_device structure. But it also calls into the rather ill-defined ->revalidate_disk method, which is useless for the callers. So this adds a new helper to just
2007 Jun 07
4
[PATCH RFC 0/3] Virtio draft II
Hi again all, It turns out that networking really wants ordered requests, which the previous patches didn't allow. This patch changes it to a callback mechanism; kudos to Avi. The downside is that locking is more complicated, and after a few dead ends I implemented the simplest solution: the struct virtio_device contains the spinlock to use, and it's held when your callbacks get
2007 Apr 18
33
[RFC PATCH 00/33] Xen i386 paravirtualization support
Unlike full virtualization in which the virtual machine provides the same platform interface as running natively on the hardware, paravirtualization requires modification to the guest operating system to work with the platform interface provided by the hypervisor. Xen was designed with performance in mind. Calls to the hypervisor are minimized, batched if necessary, and non-critical codepaths
2007 Apr 18
20
[patch 00/20] XEN-paravirt: Xen guest implementation for paravirt_ops interface
This patch series implements the Linux Xen guest in terms of the paravirt-ops interface. The features implemented in this patch series are: * domU only * UP only (most code is SMP-safe, but there's no way to create a new vcpu) * writable pagetables, with late pinning/early unpinning (no shadow pagetable support) * supports both PAE and non-PAE modes * xen console * virtual block