search for: bio_map_user_iov

Displaying 3 distinct results from an estimated 4 matches for "bio_map_user_iov".

2019 Jul 24
1
[PATCH 03/12] block: bio_release_pages: use flags arg instead of bool
...or_each_segment_all(bvec, bio, iter_all) {
-		if (mark_dirty && !PageCompound(bvec->bv_page))
+		if ((flags & BIO_RP_MARK_DIRTY) && !PageCompound(bvec->bv_page))
 			set_page_dirty_lock(bvec->bv_page);
 		put_page(bvec->bv_page);
 	}
@@ -1421,7 +1421,7 @@ struct bio *bio_map_user_iov(struct request_queue *q,
 	return bio;
 out_unmap:
-	bio_release_pages(bio, false);
+	bio_release_pages(bio, BIO_RP_NORMAL);
 	bio_put(bio);
 	return ERR_PTR(ret);
 }
@@ -1437,7 +1437,7 @@ struct bio *bio_map_user_iov(struct request_queue *q,
  */
 void bio_unmap_user(struct bio *bio)
 {
-	bio_r...
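The hunks above replace a bare bool argument to bio_release_pages() with a flags word. As a rough, self-contained illustration of why that shape is preferable, here is a small userspace C sketch; the struct, the helper names, and the page-dirtying logic are stand-ins invented for the example, and only the flag names BIO_RP_NORMAL and BIO_RP_MARK_DIRTY come from the patch text, so this is not the kernel implementation.

#include <stdio.h>

/* Flag names taken from the patch; values here are illustrative. */
enum bio_rp_flags {
	BIO_RP_NORMAL		= 0,
	BIO_RP_MARK_DIRTY	= 1 << 0,	/* dirty the pages before release */
};

/* Stand-in types so the example compiles outside the kernel. */
struct fake_page { int dirty; };
struct fake_bio { struct fake_page pages[2]; int nr_pages; };

/* Old style: a bare bool whose meaning is invisible at call sites. */
static void release_pages_bool(struct fake_bio *bio, int mark_dirty)
{
	for (int i = 0; i < bio->nr_pages; i++)
		if (mark_dirty)
			bio->pages[i].dirty = 1;
}

/* New style: a flags word, so the call site names its intent. */
static void release_pages_flags(struct fake_bio *bio, enum bio_rp_flags flags)
{
	for (int i = 0; i < bio->nr_pages; i++)
		if (flags & BIO_RP_MARK_DIRTY)
			bio->pages[i].dirty = 1;
}

int main(void)
{
	struct fake_bio bio = { .nr_pages = 2 };

	release_pages_bool(&bio, 1);			/* what does "1" mean here? */
	release_pages_flags(&bio, BIO_RP_MARK_DIRTY);	/* the flag name documents it */
	printf("page 0 dirty: %d\n", bio.pages[0].dirty);
	return 0;
}

The point is readability at the call sites: bio_release_pages(bio, BIO_RP_MARK_DIRTY) says what it does, while passing a bare true or false does not, and a flags word leaves room for further bits later.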
2019 Jul 24
20
[PATCH 00/12] block/bio, fs: convert put_page() to put_user_page*()
From: John Hubbard <jhubbard at nvidia.com>

Hi,

This is mostly Jerome's work, converting the block/bio and related areas to call put_user_page*() instead of put_page(). Because I've changed Jerome's patches, in some cases significantly, I'd like to get his feedback before we actually leave him listed as the author (he might want to disown some or all of these). I added a
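The cover letter describes a mechanical conversion: pages that were pinned with get_user_pages*() should be released through put_user_page*() rather than plain put_page(), so that GUP pins can be accounted for separately. The userspace C sketch below only models that pairing rule; every type and function in it (fake_page, fake_get_user_page, fake_put_page, fake_put_user_page) is a stand-in rather than the kernel API, and the "pinned" counter is purely illustrative of why a distinct release call matters.

#include <stdio.h>

struct fake_page { int refcount; int pinned; };

/* Stand-in for get_user_pages(): take a reference and record the pin. */
static void fake_get_user_page(struct fake_page *p)
{
	p->refcount++;
	p->pinned++;
}

/* Old release path: drops the reference, but the pin is never accounted for. */
static void fake_put_page(struct fake_page *p)
{
	p->refcount--;
}

/* New release path: balances the pin as well as the reference. */
static void fake_put_user_page(struct fake_page *p)
{
	p->pinned--;
	p->refcount--;
}

int main(void)
{
	struct fake_page a = { 0, 0 };
	struct fake_page b = { 0, 0 };

	fake_get_user_page(&a);
	fake_put_user_page(&a);	/* new pattern: pairs with the GUP-style pin */

	fake_get_user_page(&b);
	fake_put_page(&b);	/* old pattern: refcount balanced, pin left dangling */

	printf("new pattern: refcount=%d pinned=%d\n", a.refcount, a.pinned);
	printf("old pattern: refcount=%d pinned=%d\n", b.refcount, b.pinned);
	return 0;
}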
2007 Jan 02
0
[PATCH 1/4] add scsi-target and IO_CMD_EPOLL_WAIT patches
...558,8 @@ struct bio *bio_copy_user(request_queue_
+ 			break;
+ 		}
+ 
+-		if (bio_add_pc_page(q, bio, page, bytes, 0) < bytes) {
+-			ret = -EINVAL;
++		if (bio_add_pc_page(q, bio, page, bytes, 0) < bytes)
+ 			break;
+-		}
+ 
+ 		len -= bytes;
+ 	}
+@@ -620,10 +618,9 @@ static struct bio *__bio_map_user_iov(re
+ 
+ 	nr_pages += end - start;
+ 	/*
+-	 * transfer and buffer must be aligned to at least hardsector
+-	 * size for now, in the future we can relax this restriction
++	 * buffer must be aligned to at least hardsector size for now
+ 	 */
+-	if ((uaddr & queue_dma_alignment(q)) || (len...
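This 2007 excerpt is a patch carried inside another patch, and the visible change in __bio_map_user_iov() is a comment and its check: the requirement that both the transfer length and the buffer be hardsector-aligned is relaxed so that, per the new comment, only the buffer alignment is enforced for now. Since the condition is truncated at "(len...", the userspace sketch below paraphrases the old and new rules rather than reproducing the literal kernel code; fake_queue_dma_alignment() and both predicates are stand-ins for illustration only.

#include <stdint.h>
#include <stdio.h>

/* Stand-in for queue_dma_alignment(q): the alignment requirement minus one. */
static unsigned long fake_queue_dma_alignment(void)
{
	return 512 - 1;
}

/* Old, stricter rule paraphrased from the removed comment: both the buffer
 * address and the transfer length had to be hardsector aligned. */
static int map_ok_old(uint64_t uaddr, unsigned long len)
{
	unsigned long mask = fake_queue_dma_alignment();

	return !(uaddr & mask) && !(len & mask);
}

/* Relaxed rule suggested by the new comment: only the buffer start address
 * is checked for now. */
static int map_ok_new(uint64_t uaddr, unsigned long len)
{
	unsigned long mask = fake_queue_dma_alignment();

	(void)len;
	return !(uaddr & mask);
}

int main(void)
{
	/* An aligned buffer with an odd-sized transfer: rejected by the old
	 * rule, accepted by the relaxed one. */
	uint64_t uaddr = 0x1000;
	unsigned long len = 100;

	printf("old=%d new=%d\n", map_ok_old(uaddr, len), map_ok_new(uaddr, len));
	return 0;
}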