search for: write_flush

Displaying 17 results from an estimated 17 matches for "write_flush".

2011 Oct 08
9
xentop reporting zero written sectors
Just moving a chunk of files from one filesystem on xvda to another on xvdb, and was monitoring with xentop as it was taking longer than expected. The VBD_RD and VBD_WR counters were both clocking up as expected, as was the VBD_RSECT counter, but the VBD_WSECT counter was stuck on zero. I toggled on the individual VBD device counters and these showed the same (with the RD and WR counters
2011 Sep 09
7
[PATCH] xen-blk[front|back] FUA additions.
I am proposing these two patches for 3.2. They allow the backend to process REQ_FUA requests as well; prior to these patches it only handled REQ_FLUSH. There is also a bug-fix for the logic of how barriers/flushes were handled. The patches are based on a branch which also has 'feature-discard' patches, so they won't apply natively on top of 3.1-rc5. Please review and
2011 Dec 09
10
[PATCH 0/3] Btrfs: add IO error device stats
The goal is to detect when drives start to show an increased error rate, i.e. when they should be replaced soon. Therefore, statistic counters are added that count IO errors (read, write and flush). Additionally, software-detected errors such as checksum errors and corrupted blocks are counted. An ioctl interface is added to read the device statistic counters. A second ioctl is added to atomically get
2011 Sep 01
9
[PATCH V4 0/3] xen-blkfront/blkback discard support
Dear list, This is V4 of the trim support for xen-blkfront/blkback. We have now moved BLKIF_OP_TRIM to BLKIF_OP_DISCARD and dropped all "trim" naming in the patches, using "discard" instead. We also updated the blkif_x86_{32|64}_request helpers, or we would hit problems when using a non-native protocol. This patch has been tested with both an SSD and a raw file; with the SSD we will
2012 Nov 19
1
[PATCH] vhost-blk: Add vhost-blk support v5
...ret; + struct iovec *iov = req->iov; + int iov_nr = req->iov_nr; + struct page **pages, *page; + struct bio *bio = NULL; + int bio_nr = 0; + void *buf; + + pages_nr_total = 0; + for (i = 0; i < iov_nr; i++) + pages_nr_total += iov_num_pages(&iov[i]); + + if (unlikely(req->write == WRITE_FLUSH)) { + req->pl = NULL; + req->bio = kmalloc(sizeof(struct bio *), GFP_KERNEL); + bio = bio_alloc(GFP_KERNEL, 1); + if (!bio) { + kfree(req->bio); + return -ENOMEM; + } + bio->bi_sector = req->sector; + bio->bi_bdev = bdev; + bio->bi_private = req; + bio->bi_e...
2012 Oct 15
2
[PATCH 1/1] vhost-blk: Add vhost-blk support v4
...iov_nr = req->iov_nr; + struct page **pages, *page; + struct bio *bio = NULL; + int bio_nr = 0; + + req->len = 0; + pages_nr_total = 0; + for (i = 0; i < iov_nr; i++) { + req->len += iov[i].iov_len; + pages_nr_total += iov_num_pages(&iov[i]); + } + + if (unlikely(req->write == WRITE_FLUSH)) { + req->pl = NULL; + req->bio = kmalloc(sizeof(struct bio *), GFP_KERNEL); + bio = bio_alloc(GFP_KERNEL, 1); + if (!bio) { + kfree(req->bio); + return -ENOMEM; + } + bio->bi_sector = req->sector; + bio->bi_bdev = bdev; + bio->bi_private = req; + bio->bi_e...
2012 Oct 10
0
[PATCH] vhost-blk: Add vhost-blk support v3
...iov_nr = req->iov_nr; + struct page **pages, *page; + struct bio *bio = NULL; + int bio_nr = 0; + + req->len = 0; + pages_nr_total = 0; + for (i = 0; i < iov_nr; i++) { + req->len += iov[i].iov_len; + pages_nr_total += iov_num_pages(&iov[i]); + } + + if (unlikely(req->write == WRITE_FLUSH)) { + req->pl = NULL; + req->bio = kmalloc(sizeof(struct bio *), GFP_KERNEL); + bio = bio_alloc(GFP_KERNEL, 1); + if (!bio) { + kfree(req->bio); + return -ENOMEM; + } + bio->bi_sector = req->sector; + bio->bi_bdev = bdev; + bio->bi_private = req; + bio->bi_e...
2012 Dec 02
3
[PATCH] vhost-blk: Add vhost-blk support v6
...ret; + struct iovec *iov = req->iov; + int iov_nr = req->iov_nr; + struct page **pages, *page; + struct bio *bio = NULL; + int bio_nr = 0; + void *buf; + + pages_nr_total = 0; + for (i = 0; i < iov_nr; i++) + pages_nr_total += iov_num_pages(&iov[i]); + + if (unlikely(req->write == WRITE_FLUSH)) { + req->use_inline = true; + req->pl = NULL; + req->bio = req->inline_bio; + + bio = bio_alloc(GFP_KERNEL, 1); + if (!bio) + return -ENOMEM; + + bio->bi_sector = req->sector; + bio->bi_bdev = bdev; + bio->bi_private = req; + bio->bi_end_io = vhost_blk_r...
2012 Aug 16
0
[RFC v1 3/5] VBD: enlarge max segment per request in blkfront
..._PER_REQUEST]; + struct bio **biolist = pending_req->biolist; int i, nbio = 0; int operation; struct blk_plug plug; @@ -616,7 +670,7 @@ static int dispatch_rw_block_io(struct xen_blkif *blkif, nseg = req->u.rw.nr_segments; if (unlikely(nseg == 0 && operation != WRITE_FLUSH) || - unlikely(nseg > BLKIF_MAX_SEGMENTS_PER_REQUEST)) { + unlikely(nseg > blkif->ops->max_seg)) { pr_debug(DRV_PFX "Bad number of segments in request (%d)\n", nseg); /* Haven't submitted any bios yet. */ @@...
2012 May 25
6
[PATCH v5 0/3] Btrfs: add IO error device stats
Changes v1-v2: - Remove restriction that BTRFS_IOC_GET_DEVICE_STATS is a privileged operation - Cast u64 to unsigned long long for printf() Changes v2-v3: - Rebased on Chris' current master Changes v3-v4: - Add padding at end of ioctl structure Changes v4-v5: - The statistic members in the ioctl are now organized as an array of 64 bit values. Symbolic names for the array indexes
2012 Apr 20
1
[PATCH] multiqueue: a hodge podge of things
...false; /* * Issue flush and toggle pending_idx. This makes pending_idx * different from running_idx, which means flush is in flight. */ - blk_rq_init(q, &q->flush_rq); + blk_rq_init(ctx, &q->flush_rq); q->flush_rq.cmd_type = REQ_TYPE_FS; q->flush_rq.cmd_flags = WRITE_FLUSH | REQ_FLUSH_SEQ; q->flush_rq.rq_disk = first_rq->rq_disk; q->flush_rq.end_io = flush_end_io; q->flush_pending_idx ^= 1; list_add_tail(&q->flush_rq.queuelist, &q->queue_head); return true; } static void flush_data_end_io(struct request *rq, int error) { - s...