search for: lockstart

Displaying 11 results from an estimated 11 matches for "lockstart".

2010 Mar 22
0
[PATCH] Btrfs: change direct I/O read to not use i_mutex.
...->io_tree; + struct btrfs_ordered_extent *ordered; + u64 stop; + + /* must ensure the whole compressed extent is valid on each loop + * as we don't know the final extent size until we look it up + */ + if (test_bit(EXTENT_FLAG_COMPRESSED, &em->flags) && + (diocb->lockstart > em->start || *lockend <= em->start + em->len)) { + unlock_extent(io_tree, diocb->lockstart, *lockend, GFP_NOFS); + diocb->lockstart = em->start; + *lockend = min(*lockend, em->start + em->len - 1); + *safe_to_read = 0; + return; + } + + /* one test on first loop...
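The check in this hunk enforces that a compressed extent is locked in full on every loop, because the final extent size is only known after the extent lookup. A minimal userspace model of the range logic (`covers_extent` and `adjust_range` are hypothetical helpers, not kernel functions):

```c
#include <assert.h>
#include <stdint.h>

/* Model of the patch's invariant: the locked byte range must cover the
 * whole compressed extent [em_start, em_start + em_len - 1]. */
static int covers_extent(uint64_t lockstart, uint64_t lockend,
                         uint64_t em_start, uint64_t em_len)
{
    return lockstart <= em_start && lockend >= em_start + em_len - 1;
}

/* Mirrors the patch's adjustment when the range falls short: restart the
 * lock at the extent start and clamp the end to the extent's last byte. */
static void adjust_range(uint64_t *lockstart, uint64_t *lockend,
                         uint64_t em_start, uint64_t em_len)
{
    uint64_t extent_end = em_start + em_len - 1;

    *lockstart = em_start;
    if (*lockend > extent_end)
        *lockend = extent_end;
}
```

In the real patch the caller then unlocks, relocks the adjusted range, and retries the lookup, since the extent map can change between the two.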
2013 Oct 25
0
[PATCH] Btrfs: return an error from btrfs_wait_ordered_range
...rdered_range(inode, offset, len); + if (ret) + return ret; mutex_lock(&inode->i_mutex); /* @@ -2139,8 +2142,12 @@ static int btrfs_punch_hole(struct inode *inode, loff_t offset, loff_t len) btrfs_put_ordered_extent(ordered); unlock_extent_cached(&BTRFS_I(inode)->io_tree, lockstart, lockend, &cached_state, GFP_NOFS); - btrfs_wait_ordered_range(inode, lockstart, - lockend - lockstart + 1); + ret = btrfs_wait_ordered_range(inode, lockstart, + lockend - lockstart + 1); + if (ret) { + mutex_unlock(&inode->i_mutex); + return ret; + }...
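The pattern this hunk introduces, forwarding a `btrfs_wait_ordered_range` failure while making sure `i_mutex` is dropped on the error path, can be sketched in userspace terms (the stub names and the pthread mutex are illustrative stand-ins for the kernel primitives):

```c
#include <pthread.h>

/* Stand-in for btrfs_wait_ordered_range: returns 0 or a negative errno. */
static int wait_ordered_range_stub(int fail)
{
    return fail ? -5 /* -EIO */ : 0;
}

static pthread_mutex_t i_mutex = PTHREAD_MUTEX_INITIALIZER;

/* Sketch of the patched btrfs_punch_hole flow: once the lock is held,
 * any error from the fallible call must unlock before returning. */
static int punch_hole_sketch(int fail)
{
    int ret;

    pthread_mutex_lock(&i_mutex);
    ret = wait_ordered_range_stub(fail);
    if (ret) {
        pthread_mutex_unlock(&i_mutex);  /* drop the lock on error */
        return ret;
    }
    /* ... punch the hole ... */
    pthread_mutex_unlock(&i_mutex);
    return 0;
}
```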
2011 Aug 15
9
[patch v2 0/9] btrfs: More error handling patches
Hi all - The following 9 patches add more error handling to the btrfs code: - Add btrfs_panic - Catch locking failures in {set,clear}_extent_bit - Push up set_extent_bit errors to callers - Push up lock_extent errors to callers - Push up clear_extent_bit errors to callers - Push up unlock_extent errors to callers - Make pin_down_extent return void - Push up btrfs_pin_extent errors to
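The "push up errors to callers" refactor the series describes follows one shape: a helper that used to swallow failures now returns a negative errno, and each caller checks and forwards it. A hedged sketch with illustrative names (not the real btrfs signatures):

```c
#include <errno.h>

/* Before the refactor, a failure here was silently ignored; after it,
 * the helper reports a negative errno to its caller. */
static int set_extent_bit_sketch(int should_fail)
{
    if (should_fail)
        return -ENOMEM;
    return 0;
}

/* The caller, formerly void, now propagates instead of discarding. */
static int lock_extent_sketch(int should_fail)
{
    int ret = set_extent_bit_sketch(should_fail);

    if (ret)
        return ret;
    return 0;
}
```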
2017 Nov 21
0
Re: [nbdkit PATCH v2 0/4] enable parallel nbd forwarding
...tlenecks are. > FAIL: test-parallel-nbd.sh > ========================== > > nbdkit: file[1]: debug: release unload prevention lock > nbdkit: file[1]: debug: handshake complete, processing requests with 16 threads > nbdkit: nbd[1]: nbdkit: debug: debug: acquire unload prevention lockstarting worker thread file.0 > > nbdkit: nbd[1]: nbdkit: debug: debug: get_sizestarting worker thread file.2 > > nbdkit: nbd[1]: debug: can_write Here, we have a nice demonstration that commit d02d9c9d works for messages from one process (my debugging was worse without the mutex in errors...
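The interleaved lines above ("debug: debug: ... lockstarting worker thread") are what you get when two processes or threads each emit one log line as several write calls. A common fix, sketched here with illustrative names rather than nbdkit's actual logging code, is to hold a mutex across all the pieces of one line:

```c
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t log_lock = PTHREAD_MUTEX_INITIALIZER;

/* Emit one "nbdkit: <who>: debug: <msg>" line as two pieces, but under
 * a single lock so the pieces from different threads cannot interleave. */
static void debug_line(FILE *out, const char *who, const char *msg)
{
    pthread_mutex_lock(&log_lock);
    fprintf(out, "nbdkit: %s: ", who);   /* piece 1: prefix */
    fprintf(out, "debug: %s\n", msg);    /* piece 2: message */
    pthread_mutex_unlock(&log_lock);
}
```

Note this only serializes writers inside one process; messages from the separate `nbd` and `file` processes sharing a pipe can still interleave, which is what the log above shows.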
2012 Sep 17
13
[PATCH 1/2 v3] Btrfs: use flag EXTENT_DEFRAG for snapshot-aware defrag
...t, page_end, - EXTENT_DIRTY | EXTENT_DELALLOC | EXTENT_DO_ACCOUNTING, + EXTENT_DIRTY | EXTENT_DELALLOC | + EXTENT_DO_ACCOUNTING | EXTENT_DEFRAG, 0, 0, &cached_state, GFP_NOFS); ret = btrfs_set_extent_delalloc(inode, page_start, page_end, @@ -5998,7 +5999,8 @@ unlock: if (lockstart < lockend) { if (create && len < lockend - lockstart) { clear_extent_bit(&BTRFS_I(inode)->io_tree, lockstart, - lockstart + len - 1, unlock_bits, 1, 0, + lockstart + len - 1, + unlock_bits | EXTENT_DEFRAG, 1, 0, &cached_state, GFP_NOFS); /*...
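The change in this hunk is a mask update: defragmented ranges carry an extra state bit, and every path that clears the delalloc-style bits must clear it too or the flag leaks. A toy model with illustrative bit values (the real btrfs flags differ):

```c
#include <stdint.h>

/* Illustrative bit values; the kernel's EXTENT_* constants differ. */
#define EXTENT_DIRTY         (1u << 0)
#define EXTENT_DELALLOC      (1u << 1)
#define EXTENT_DO_ACCOUNTING (1u << 2)
#define EXTENT_DEFRAG        (1u << 3)  /* the bit the patch adds to the clear mask */

/* Clear the requested state bits from an extent's flag word. */
static uint32_t clear_extent_bits(uint32_t state, uint32_t bits)
{
    return state & ~bits;
}
```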
2017 Nov 21
3
Re: [nbdkit PATCH v2 0/4] enable parallel nbd forwarding
This works OK on x86_64, but fails on our fast new Amberwing (aarch64) machine. I've attached the test-suite.log file, but I'm not very sure what's going wrong from that. Rich. -- Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones Read my programming and virtualization blog: http://rwmj.wordpress.com virt-df lists disk usage of guests without needing to
2010 May 07
6
[PATCH 1/5] fs: allow short direct-io reads to be completed via buffered IO V2
V1->V2: Check to see if our current ppos is >= i_size after a short DIO read, just in case it was actually a short read and we need to just return. This is similar to what already happens in the write case. If we have a short read while doing O_DIRECT, instead of just returning, fallthrough and try to read the rest via buffered IO. BTRFS needs this because if we encounter a compressed or
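The fallback described above can be sketched as: if the O_DIRECT read comes up short and the new position is still below `i_size`, retry the remainder through buffered I/O rather than returning the short count. The stub names are illustrative, not the patch's actual functions:

```c
/* Stand-in for the direct path: reads at most direct_ok bytes. */
static long direct_read_stub(long want, long direct_ok)
{
    return want < direct_ok ? want : direct_ok;
}

/* Stand-in for the buffered path: assumed to satisfy the request. */
static long buffered_read_stub(long want)
{
    return want;
}

/* Sketch of the fallthrough: short direct read + not at EOF means the
 * remainder is completed via buffered I/O. */
static long read_with_fallback(long pos, long len, long i_size, long direct_ok)
{
    long done = direct_read_stub(len, direct_ok);

    if (done < len && pos + done < i_size)   /* short read, not EOF */
        done += buffered_read_stub(len - done);
    return done;
}
```

The V2 `i_size` check is the second half of the condition: a "short" read that simply hit end of file must return as-is instead of falling through.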
2012 Oct 01
1
[RFC] [PATCH] Btrfs: rework can_nocow_odirect
...t_in_range(root, disk_bytenr, num_bytes)) - goto out; - /* - * all of the above have passed, it is safe to overwrite this extent - * without cow - */ - ret = 1; -out: - btrfs_free_path(path); - return ret; + return 0; + + return 1; } static int lock_extent_direct(struct inode *inode, u64 lockstart, u64 lockend, @@ -6663,7 +6620,7 @@ static int btrfs_get_blocks_direct(struct inode *inode, sector_t iblock, if (IS_ERR(trans)) goto must_cow; - if (can_nocow_odirect(trans, inode, start, len) == 1) { + if (can_nocow_odirect(trans, inode, em, start, len) == 1) { u64 orig_start = em-&...
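The visible shape of this rework is the control flow: the `goto out` / `ret` bookkeeping is replaced with an early `return 0` on any disqualifying condition and a final `return 1` when the extent is safe to overwrite without CoW. A sketch with illustrative checks (the real function tests things like shared extents and checksums):

```c
/* Sketch of the refactored can_nocow_odirect shape: each disqualifying
 * condition returns 0 immediately; reaching the end means it is safe to
 * overwrite the extent without copy-on-write. */
static int can_nocow_sketch(int extent_shared, int csums_present)
{
    if (extent_shared)
        return 0;   /* was: ret = 0; goto out; */
    if (csums_present)
        return 0;
    /* all of the above have passed: safe to overwrite without cow */
    return 1;
}
```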
2011 Oct 04
68
[patch 00/65] Error handling patchset v3
Hi all - Here's my current error handling patchset, against 3.1-rc8. Almost all of this patchset is preparing for actual error handling. Before we start in on that work, I'm trying to reduce the surface we need to worry about. It turns out that there is a ton of code that returns an error code but never actually reports an error. The patchset has grown to 65 patches. 46 of them
2003 Dec 01
0
No subject
...e happening is that there is a 'hierarchy' - if the > lock start (offset) exceeds NFSv2 file limits, then you get ENOLCK, before > the count is checked. If the lock start falls within NFSv2 file limits, > you get a EFBIG only if the count is 65535 or greater... > > Since the lockstart observed by reinout exceeds the max file size over > NFSv2, then we (samba) are not falling back to a max lock size for nfs, > because we fail because of the lock START (offset), not the lock count > size... > I don't know how this fits into the summit paper you are looking into, bu...
2003 Dec 01
0
No subject
...G. So what appears to be happening is that there is a 'hierarchy' - if the lock start (offset) exceeds NFSv2 file limits, then you get ENOLCK, before the count is checked. If the lock start falls within NFSv2 file limits, you get a EFBIG only if the count is 65535 or greater... Since the lockstart observed by reinout exceeds the max file size over NFSv2, then we (samba) are not falling back to a max lock size for nfs, because we fail because of the lock START (offset), not the lock count size... I don't know how this fits into the summit paper you are looking into, but we might work arou...
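The hierarchy both messages describe, with the lock start (offset) checked before the count, can be sketched as follows. The offset limit and the 65535 count threshold are taken from the text above as observed behavior; treat the exact constants as illustrative assumptions, not a quote of the NFSv2 code:

```c
#include <errno.h>

/* Assumed NFSv2 maximum file offset (31-bit), per the behavior described. */
#define NFSV2_MAX_OFFSET 0x7fffffffUL

/* Sketch of the observed check order: an out-of-range lock start yields
 * ENOLCK before the count is ever looked at; only an in-range start with
 * a count of 65535 or greater yields EFBIG. */
static int check_nfsv2_lock(unsigned long start, unsigned long count)
{
    if (start > NFSV2_MAX_OFFSET)
        return -ENOLCK;          /* start (offset) checked first */
    if (count >= 65535)
        return -EFBIG;           /* then the count */
    return 0;
}
```

This ordering is why Samba's fallback to a smaller max lock size never triggers here: the failure comes from the start, not the count.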