search for: bio_flagged

Displaying 20 results from an estimated 23 matches for "bio_flagged".

2008 Sep 10
0
[RFC][PATCH -mm] blktrace: adds ioprio to blktrace
...} } @@ -224,11 +226,12 @@ static inline void blk_add_trace_bio(str u32 what) { struct blk_trace *bt = q->blk_trace; + unsigned short ioprio = bio_get_ioprio(bio); if (likely(!bt)) return; - __blk_add_trace(bt, bio->bi_sector, bio->bi_size, bio->bi_rw, what, !bio_flagged(bio, BIO_UPTODATE), 0, NULL); + __blk_add_trace(bt, bio->bi_sector, bio->bi_size, bio->bi_rw, what, !bio_flagged(bio, BIO_UPTODATE), ioprio, 0, NULL); } /** @@ -253,7 +256,7 @@ static inline void blk_add_trace_generic if (bio) blk_add_trace_bio(q, bio, what); else - __blk_a...
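For reference, bio_flagged() -- the helper every result below matches -- simply tests one bit in the bio's flag word. A minimal sketch of its shape (the exact prototype has shifted across kernel versions, so treat this as illustrative rather than a quote from any particular tree):

	/*
	 * Minimal sketch of bio_flagged(): test a single flag bit
	 * (BIO_UPTODATE, BIO_CLONED, BIO_USER_MAPPED, ...) in the bio's
	 * flag word.
	 */
	static inline bool bio_flagged(struct bio *bio, unsigned int bit)
	{
		return (bio->bi_flags & (1U << bit)) != 0;
	}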
2012 Dec 18
0
[PATCH] [RFC] Btrfs: Subpagesize blocksize (WIP).
From: Wade Cline <clinew@linux.vnet.ibm.com> This patch is only an RFC. My internship is ending and I was hoping to get some feedback and incorporate any suggestions people may have before my internship ends along with life as we know it (this Friday). The filesystem should mount/umount properly but tends towards the explosive side when writes start happening. My current focus is on
2012 Jul 12
3
[PATCH v2] Btrfs: improve multi-thread buffer read
While testing with my buffer read fio jobs[1], I find that btrfs does not perform well enough. Here is a scenario in the fio jobs: we have 4 threads, "t1 t2 t3 t4", starting to buffer read the same file, and all of them will race on add_to_page_cache_lru(); if one thread successfully puts its page into the page cache, it takes the responsibility to read the page's data. And
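A rough sketch of the pattern being described -- whichever thread wins the page-cache insertion owns the read. This uses page-cache APIs of roughly that era; the helper name is mine, not btrfs code:

	/*
	 * Illustration of the race described above: several readers allocate
	 * a page for the same file index, only one add_to_page_cache_lru()
	 * call succeeds, and that winning thread is the one that must issue
	 * the actual read.
	 */
	static int read_one_page(struct file *file, struct address_space *mapping,
				 pgoff_t index)
	{
		struct page *page = page_cache_alloc(mapping);

		if (!page)
			return -ENOMEM;

		if (!add_to_page_cache_lru(page, mapping, index, GFP_KERNEL))
			return mapping->a_ops->readpage(file, page);	/* we won */

		/* Lost the race: someone else's page is in the cache, drop ours. */
		put_page(page);
		return 0;
	}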
2013 Aug 06
6
[PATCH 0/4] btrfs: out-of-band (aka offline) dedupe v4
Hi, The following series of patches implements in btrfs an ioctl to do out-of-band deduplication of file extents. To be clear, this means that the file system is mounted and running, but the dedupe is not done during file writes, but after the fact when some userspace software initiates a dedupe. The primary patch is loosely based off of one sent by Josef Bacik back in January, 2011.
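For a sense of how userspace drives this kind of out-of-band dedupe, here is a hedged sketch using the generic FIDEDUPERANGE ioctl that later descended from the btrfs extent-same interface -- i.e. the modern form of the idea, not the exact ioctl added by this 2013 series:

	/*
	 * Userspace sketch of offline dedupe via the generic FIDEDUPERANGE
	 * ioctl (the later, VFS-level descendant of the btrfs extent-same
	 * ioctl this series proposes). Not the interface from the patch
	 * itself.
	 */
	#include <stdlib.h>
	#include <sys/ioctl.h>
	#include <sys/types.h>
	#include <linux/fs.h>

	static int dedupe_range(int src_fd, __u64 src_off, __u64 len,
				int dst_fd, __u64 dst_off)
	{
		struct file_dedupe_range *arg;
		int ret;

		arg = calloc(1, sizeof(*arg) + sizeof(struct file_dedupe_range_info));
		if (!arg)
			return -1;

		arg->src_offset = src_off;
		arg->src_length = len;
		arg->dest_count = 1;
		arg->info[0].dest_fd = dst_fd;
		arg->info[0].dest_offset = dst_off;

		ret = ioctl(src_fd, FIDEDUPERANGE, arg);
		if (!ret && arg->info[0].status != FILE_DEDUPE_RANGE_SAME)
			ret = -1;	/* ranges differed or the kernel refused */

		free(arg);
		return ret;
	}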
2012 Jul 10
6
[PATCH RFC] Btrfs: improve multi-thread buffer read
While testing with my buffer read fio jobs[1], I find that btrfs does not perform well enough. Here is a scenario in the fio jobs: we have 4 threads, "t1 t2 t3 t4", starting to buffer read the same file, and all of them will race on add_to_page_cache_lru(); if one thread successfully puts its page into the page cache, it takes the responsibility to read the page's data. And
2014 Nov 11
2
kernel BUG at drivers/block/virtio_blk.c:172!
...1 file changed, 6 insertions(+), 3 deletions(-) diff --git a/block/blk-merge.c b/block/blk-merge.c index b3ac40a..d808601 100644 --- a/block/blk-merge.c +++ b/block/blk-merge.c @@ -103,13 +103,16 @@ void blk_recount_segments(struct request_queue *q, struct bio *bio) if (no_sg_merge && !bio_flagged(bio, BIO_CLONED) && merge_not_need) - bio->bi_phys_segments = bio->bi_vcnt; + bio->bi_phys_segments = min_t(unsigned int, bio->bi_vcnt, + queue_max_segments(q)); else { struct bio *nxt = bio->bi_next; bio->bi_next = NULL; - bio->bi_phys_segments = __...
2014 Nov 11
2
kernel BUG at drivers/block/virtio_blk.c:172!
...1 file changed, 6 insertions(+), 3 deletions(-) diff --git a/block/blk-merge.c b/block/blk-merge.c index b3ac40a..d808601 100644 --- a/block/blk-merge.c +++ b/block/blk-merge.c @@ -103,13 +103,16 @@ void blk_recount_segments(struct request_queue *q, struct bio *bio) if (no_sg_merge && !bio_flagged(bio, BIO_CLONED) && merge_not_need) - bio->bi_phys_segments = bio->bi_vcnt; + bio->bi_phys_segments = min_t(unsigned int, bio->bi_vcnt, + queue_max_segments(q)); else { struct bio *nxt = bio->bi_next; bio->bi_next = NULL; - bio->bi_phys_segments = __...
2023 Jan 30
1
[PATCH 01/23] block: factor out a bvec_set_page helper
...+; bio->bi_iter.bi_size += len; return len; @@ -1108,15 +1105,10 @@ EXPORT_SYMBOL_GPL(bio_add_zone_append_page); void __bio_add_page(struct bio *bio, struct page *page, unsigned int len, unsigned int off) { - struct bio_vec *bv = &bio->bi_io_vec[bio->bi_vcnt]; - WARN_ON_ONCE(bio_flagged(bio, BIO_CLONED)); WARN_ON_ONCE(bio_full(bio, len)); - bv->bv_page = page; - bv->bv_offset = off; - bv->bv_len = len; - + bvec_set_page(&bio->bi_io_vec[bio->bi_vcnt], page, len, off); bio->bi_iter.bi_size += len; bio->bi_vcnt++; } diff --git a/include/linux/bvec.h...
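Judging from the fields the removed open-coded assignments touched, the factored-out helper presumably looks like this (a sketch inferred from the hunk, not copied from the patch):

	/*
	 * Presumed shape of the helper the patch factors out: initialize one
	 * bio_vec from a page, length and offset, replacing the open-coded
	 * bv_page/bv_offset/bv_len assignments seen in the hunk above.
	 */
	static inline void bvec_set_page(struct bio_vec *bv, struct page *page,
					 unsigned int len, unsigned int offset)
	{
		bv->bv_page = page;
		bv->bv_len = len;
		bv->bv_offset = offset;
	}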
2014 Nov 11
0
kernel BUG at drivers/block/virtio_blk.c:172!
...s(-) > > diff --git a/block/blk-merge.c b/block/blk-merge.c > index b3ac40a..d808601 100644 > --- a/block/blk-merge.c > +++ b/block/blk-merge.c > @@ -103,13 +103,16 @@ void blk_recount_segments(struct request_queue *q, struct bio *bio) > > if (no_sg_merge && !bio_flagged(bio, BIO_CLONED) && > merge_not_need) > - bio->bi_phys_segments = bio->bi_vcnt; > + bio->bi_phys_segments = min_t(unsigned int, bio->bi_vcnt, > + queue_max_segments(q)); > el...
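The hunk quoted in the three reports above addresses the same problem: on the no-SG-merge fast path, bi_vcnt was reported verbatim as the physical segment count, which can exceed what the driver sized its scatterlist for. A hedged sketch of the invariant the fix enforces (the helper name is mine, for illustration):

	/*
	 * Sketch of what the quoted fix enforces: the fast path must never
	 * report more physical segments than the queue advertises, because
	 * drivers such as virtio-blk size their scatterlists from
	 * queue_max_segments().
	 */
	static unsigned int capped_phys_segments(struct request_queue *q,
						 struct bio *bio)
	{
		return min_t(unsigned int, bio->bi_vcnt, queue_max_segments(q));
	}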
2014 Nov 10
2
kernel BUG at drivers/block/virtio_blk.c:172!
On 2014-11-10 02:59, Rusty Russell wrote: > Jeff Layton <jlayton at poochiereds.net> writes: > >> In the latest Fedora rawhide kernel in the repos, I'm seeing the >> following oops when mounting xfs. rc2-ish kernels seem to be fine: >> >> [ 64.669633] ------------[ cut here ]------------ >> [ 64.670008] kernel BUG at drivers/block/virtio_blk.c:172!
2014 Nov 10
2
kernel BUG at drivers/block/virtio_blk.c:172!
On 2014-11-10 02:59, Rusty Russell wrote: > Jeff Layton <jlayton at poochiereds.net> writes: > >> In the latest Fedora rawhide kernel in the repos, I'm seeing the >> following oops when mounting xfs. rc2-ish kernels seem to be fine: >> >> [ 64.669633] ------------[ cut here ]------------ >> [ 64.670008] kernel BUG at drivers/block/virtio_blk.c:172!
2012 May 25
6
[PATCH v5 0/3] Btrfs: add IO error device stats
Changes v1-v2: - Remove restriction that BTRFS_IOC_GET_DEVICE_STATS is a privileged operation - Cast u64 to unsigned long long for printf() Changes v2-v3: - Rebased on Chris' current master Changes v3-v4: - Add padding at end of ioctl structure Changes v4-v5: - The statistic members in the ioctl are now organized as an array of 64 bit values. Symbolic names for the array indexes
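Those symbolic index names landed upstream roughly as the following enum; listed here from memory as an illustration of the layout, not quoted from v5 of the series:

	/*
	 * Illustrative layout of the per-device counters the series
	 * describes: an array of 64-bit values indexed by error type
	 * (names as I recall them from the upstream UAPI).
	 */
	enum btrfs_dev_stat_values {
		BTRFS_DEV_STAT_WRITE_ERRS,	/* write I/O errors */
		BTRFS_DEV_STAT_READ_ERRS,	/* read I/O errors */
		BTRFS_DEV_STAT_FLUSH_ERRS,	/* flush/barrier failures */
		BTRFS_DEV_STAT_CORRUPTION_ERRS,	/* checksum mismatches */
		BTRFS_DEV_STAT_GENERATION_ERRS,	/* unexpected generation/transid */
		BTRFS_DEV_STAT_VALUES_MAX
	};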
2011 Oct 04
68
[patch 00/65] Error handling patchset v3
Hi all - Here's my current error handling patchset, against 3.1-rc8. Almost all of this patchset is preparing for actual error handling. Before we start in on that work, I'm trying to reduce the surface we need to worry about. It turns out that there is a ton of code that returns an error code but never actually reports an error. The patchset has grown to 65 patches. 46 of them
2011 Jul 21
10
[PATCH v5 0/8] Btrfs scrub: print path to corrupted files and trigger nodatasum fixup
While testing raid-auto-repair patches I'm going to send out later, I just found the very last bug in my current scrub patch series: Changelog v4->v5: - fixed a deadlock when fixup is taking longer while scrub is about to end Original message follows: ------------------------ This patch set introduces two new features for scrub. They share the backref iteration code which is the
2007 Jan 02
0
[PATCH 1/4] add scsi-target and IO_CMD_EPOLL_WAIT patches
...lock/ll_rw_blk.c +index 1ce88cf..c631d5a 100644 +--- a/block/ll_rw_blk.c ++++ b/block/ll_rw_blk.c +@@ -2265,6 +2265,84 @@ void blk_insert_request(request_queue_t + + EXPORT_SYMBOL(blk_insert_request); + ++static int __blk_rq_unmap_user(struct bio *bio) ++{ ++ int ret = 0; ++ ++ if (bio) { ++ if (bio_flagged(bio, BIO_USER_MAPPED)) ++ bio_unmap_user(bio); ++ else ++ ret = bio_uncopy_user(bio); ++ } ++ ++ return ret; ++} ++ ++static int __blk_rq_map_user(request_queue_t *q, struct request *rq, ++ void __user *ubuf, unsigned int len) ++{ ++ unsigned long uaddr; ++ struct bio *bio, *orig_bio; +...
2011 Dec 09
10
[PATCH 0/3] Btrfs: add IO error device stats
The goal is to detect when drives start to show an increased error rate and should be replaced soon. Therefore statistic counters are added that count IO errors (read, write and flush). Additionally, software-detected errors such as checksum errors and corrupted blocks are counted. An ioctl interface is added to get the device statistic counters. A second ioctl is added to atomically get
2011 May 11
8
[PATCH 1/4] Btrfs: map the node block when looking for readahead targets
If we have particularly full nodes, we could call btrfs_node_blockptr up to 32 times, which is 32 pairs of kmap/kunmap, which _sucks_. So go ahead and map the extent buffer while we look for readahead targets. Thanks, Signed-off-by: Josef Bacik <josef@redhat.com> --- fs/btrfs/ctree.c | 23 +++++++++++++++++++++-- 1 files changed, 21 insertions(+), 2 deletions(-) diff --git
2011 Jun 10
6
[PATCH v2 0/6] btrfs: generic readeahead interface
This series introduces a generic readahead interface for btrfs trees. The intention is to use it to speed up scrub in a first run, but balance is another hot candidate. In general, every tree walk could be accompanied by a readahead. Deletion of large files comes to mind, where the fetching of the csums takes most of the time. Also the initial build-ups of free-space-caches and
2011 Aug 15
9
[patch v2 0/9] btrfs: More error handling patches
Hi all - The following 9 patches add more error handling to the btrfs code: - Add btrfs_panic - Catch locking failures in {set,clear}_extent_bit - Push up set_extent_bit errors to callers - Push up lock_extent errors to callers - Push up clear_extent_bit errors to callers - Push up unlock_extent errors to callers - Make pin_down_extent return void - Push up btrfs_pin_extent errors to
2010 Sep 03
0
[PATCH 1/2] btrfs: document where we use BUG_ON instead of error handling
Document those places in the btrfs code which are BUGing on non-fatal error conditions that should be handled by proper error paths. This makes it easier to distinguish between what needs fixing versus which BUG_ON's we might want to keep (to trap code bugs, unexpected inconsistencies, etc). Do this with a trivial macro, 'btrfs_fixable_bug_on', which just defines to
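The snippet is cut off before the definition, but the obvious reading is that the macro is a thin, greppable alias for BUG_ON -- something like the following (an assumption, not quoted from the patch):

	/*
	 * Assumed shape of the documentation macro: behaves exactly like
	 * BUG_ON(), but is greppable as a marker for call sites that still
	 * need a real error path.
	 */
	#define btrfs_fixable_bug_on(cond)	BUG_ON(cond)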