search for: __lock_page

Displaying 20 results from an estimated 26 matches for "__lock_page".

2012 Jul 30
4
balance disables nodatacow
I have a 3-disk raid1 filesystem mounted with nodatacow. I have a folder in said filesystem with the 'C' NOCOW & 'Z' Not_Compressed flags set for good measure. I then copy in a large file and proceed to make random modifications. Filefrag shows no additional extents created, good so far. A big thank you to those devs who got that working. However, after
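For readers who want to reproduce the setup described in that post, a rough sketch follows (the paths are placeholders, and `chattr +C` only has its NOCOW effect on filesystems that support it, such as btrfs, so the commands tolerate failure elsewhere):

```shell
# Sketch of the NOCOW workflow described above (hypothetical paths).
# The 'C' attribute must be set while the file is still empty.
d=$(mktemp -d)
f="$d/bigfile"
touch "$f"
chattr +C "$f" 2>/dev/null || echo "NOCOW not supported on this filesystem"
lsattr "$f" 2>/dev/null || true    # 'C' appears in the flags on btrfs
# Write data, then overwrite in place; with NOCOW in effect the extent
# count reported by filefrag should stay stable.
dd if=/dev/zero of="$f" bs=1M count=8 conv=notrunc status=none
filefrag "$f" 2>/dev/null || true
rm -rf "$d"
```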
2014 Nov 03
1
dmesg error
...eofday+0x40/0x90 [<ffffffff8005a412>] getnstimeofday+0x10/0x29 [<ffffffff80028bb2>] sync_page+0x0/0x43 [<ffffffff800637de>] io_schedule+0x3f/0x67 [<ffffffff80028bf0>] sync_page+0x3e/0x43 [<ffffffff80063922>] __wait_on_bit_lock+0x36/0x66 [<ffffffff8003f980>] __lock_page+0x5e/0x64 [<ffffffff800a34d5>] wake_bit_function+0x0/0x23 [<ffffffff8000c425>] do_generic_mapping_read+0x1df/0x359 [<ffffffff8000d251>] file_read_actor+0x0/0x159 [<ffffffff8000c6eb>] __generic_file_aio_read+0x14c/0x198 [<ffffffff80016eb7>] generic_file_aio_read+0x...
2011 Jul 08
5
btrfs hang in flush-btrfs-5
...;] io_schedule+0x47/0x62 Jul 8 11:49:40 xback2 kernel: [74920.681032] [<ffffffff810d8112>] sync_page+0x4b/0x4f Jul 8 11:49:40 xback2 kernel: [74920.681032] [<ffffffff8147482f>] __wait_on_bit_lock+0x46/0x8f Jul 8 11:49:40 xback2 kernel: [74920.681032] [<ffffffff810d8075>] __lock_page+0x66/0x68 Jul 8 11:49:40 xback2 kernel: [74920.681032] [<ffffffff8106f2ab>] ? wake_bit_function+0x0/0x31 Jul 8 11:49:40 xback2 kernel: [74920.681032] [<ffffffffa0430cf9>] lock_page+0x3a/0x3e [btrfs] Jul 8 11:49:40 xback2 kernel: [74920.681032] [<ffffffffa04310a6>] lock_de...
2005 Nov 01
2
xen, lvm, drbd, bad kernel messages
...0x46/0x50 Nov 1 13:52:13 localhost kernel: [__wait_on_bit_lock+94/112] __wait_on_bit_lock+0x5e/0x70 Nov 1 13:52:13 localhost kernel: [sync_page+0/80] sync_page+0x0/0x50 Nov 1 13:52:13 localhost kernel: [wake_bit_function+0/96] wake_bit_function+0x0/0x60 Nov 1 13:52:13 localhost kernel: [__lock_page+145/160] __lock_page+0x91/0xa0 Nov 1 13:52:13 localhost kernel: [wake_bit_function+0/96] wake_bit_function+0x0/0x60 Nov 1 13:52:13 localhost kernel: [page_cache_readahead+218/720] page_cache_readahead+0xda/0x2d0 Nov 1 13:52:13 localhost kernel: [wake_bit_function+0/96] wake_bit_functi...
2012 Jan 25
3
[PATCH] Btrfs: Check for NULL page in extent_range_uptodate
A user has encountered a NULL pointer kernel oops in btrfs when encountering media errors. The problem has been identified as an unhandled NULL pointer returned from find_get_page(). This modification simply checks for a NULL page, and returns with an error if found (the extent_range_uptodate() function returns 1 on errors). After testing this patch, the user reported that the error with the
2011 Aug 09
17
Re: Applications using fsync cause hangs for several seconds every few minutes
On 06/21/2011 01:15 PM, Jan Stilow wrote: > Hello, > > Nirbheek Chauhan <nirbheek <at> gentoo.org> writes: >> [...] >> >> Every few minutes, (I guess) when applications do fsync (firefox, >> xchat, vim, etc), all applications that use fsync() hang for several >> seconds, and applications that use general IO suffer extreme >> slowdowns.
2007 Oct 08
1
Xen crash
...etc. We did not see this when running the 2.6.18-8.1.8 Xen kernel; instead the Xen zones crashed less frequently with an out-of-memory problem as follows: Call Trace: [<ffffffff802aeefc>] out_of_memory+0x4e/0x1d3 [<ffffffff8020efe8>] __alloc_pages+0x229/0x2b2 [<ffffffff8023fd5b>] __lock_page+0x5e/0x64 [<ffffffff80232637>] read_swap_cache_async+0x42/0xd1 [<ffffffff802b32a2>] swapin_readahead+0x4e/0x77 [<ffffffff8020929d>] __handle_mm_fault+0xae3/0xf46 [<ffffffff80260709>] _spin_lock_irqsave+0x9/0x14 [<ffffffff80262fe8>] do_page_fault+0xe48/0x11dc [&lt...
2007 Oct 08
0
Xen crash
...etc. We did not see this when running the 2.6.18-8.1.8 Xen kernel; instead the Xen zones crashed less frequently with an out-of-memory problem as follows: Call Trace: [<ffffffff802aeefc>] out_of_memory+0x4e/0x1d3 [<ffffffff8020efe8>] __alloc_pages+0x229/0x2b2 [<ffffffff8023fd5b>] __lock_page+0x5e/0x64 [<ffffffff80232637>] read_swap_cache_async+0x42/0xd1 [<ffffffff802b32a2>] swapin_readahead+0x4e/0x77 [<ffffffff8020929d>] __handle_mm_fault+0xae3/0xf46 [<ffffffff80260709>] _spin_lock_irqsave+0x9/0x14 [<ffffffff80262fe8>] do_page_fault+0xe48/0x11dc [&lt...
2012 Jul 31
2
Btrfs Intermittent ENOSPC Issues
I've been working on running down intermittent ENOSPC issues. I can only seem to replicate ENOSPC errors when running zlib compression. However, I have been seeing similar ENOSPC errors to a lesser extent when playing with the LZ4HC patches. I apologize for not following up on this sooner, but I had drifted away from using zlib, and didn't notice there was still an issue. My
2012 Sep 17
2
'umount' of multi-device volume hangs until the device is physically un-plugged
...90ecf818 0000000000000082 ffff8800ccc3c530 ffff880090ecffd8 [ 469.037796] ffff880090ecffd8 ffff880090ecffd8 ffff8801172d9710 ffff8800ccc3c530 [ 469.037801] ffff880090ecf818 ffff8800ccc3c530 ffff88011e254560 0000000000000002 [ 469.037807] Call Trace: [ 469.037814] [<ffffffff8112a830>] ? __lock_page+0x70/0x70 [ 469.037819] [<ffffffff8161e899>] schedule+0x29/0x70 [ 469.037824] [<ffffffff8161e96f>] io_schedule+0x8f/0xd0 [ 469.037831] [<ffffffff8112a83e>] sleep_on_page+0xe/0x20 [ 469.037838] [<ffffffff8161d130>] __wait_on_bit+0x60/0x90 [ 469.037845] [<ffffffff...
2011 Jan 19
0
Bug#603727: xen-hypervisor-4.0-amd64: i386 Dom0 crashes after doing some I/O on local storage (software Raid1 on SAS-drives with mpt2sas driver)
...0b5f1>] ? io_schedule+0x73/0xb7 [163440.615584] [<ffffffff810b4f26>] ? sync_page+0x41/0x46 [163440.615590] [<ffffffff8130c8b2>] ? _spin_unlock_irqrestore+0xd/0xe [163440.615596] [<ffffffff8130ba01>] ? __wait_on_bit_lock+0x3f/0x84 [163440.615604] [<ffffffff810b4eb2>] ? __lock_page+0x5d/0x63 [163440.615609] [<ffffffff81065d38>] ? wake_bit_function+0x0/0x23 [163440.615616] [<ffffffff810bcdde>] ? pagevec_lookup_tag+0x1a/0x21 [163440.615622] [<ffffffff810bb9cb>] ? write_cache_pages+0x1ad/0x327 [163440.615627] [<ffffffff810bb398>] ? __writepage+0x0/0x2...
2010 Nov 18
9
Interesting problem with write data.
Hi, Recently I made a btrfs filesystem to use, and I ran into a slowness problem. Trying to diagnose it, I found this: 1. dd if=/dev/zero of=test count=1024 bs=1MB This is fast, at about 25MB/s, with reasonable iowait. 2. dd if=/dev/zero of=test count=1 bs=1GB This is pretty slow, at about 1.5MB/s, with 90%+ iowait, constantly. May I know why it works like this? Thanks. -- To unsubscribe from this list: send the
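One hedged way to make the two dd runs in that post comparable is to force data to disk before dd reports its rate; otherwise the small-bs run may largely be measuring page-cache writes. The sizes below are reduced examples, not the poster's exact commands:

```shell
# conv=fdatasync makes dd call fdatasync() before printing throughput,
# so both runs report the on-disk rate rather than the page-cache rate.
dd if=/dev/zero of=test_small bs=1M count=64 conv=fdatasync
dd if=/dev/zero of=test_big bs=64M count=1 conv=fdatasync
rm -f test_small test_big
```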
2013 Aug 12
6
3TB External USB Drive isn't recognized
...timeofday+0x40/0x90 [<ffffffff80028b44>] sync_page+0x0/0x43 [<ffffffff800e4df4>] blkdev_readpage+0x0/0xf [<ffffffff800637ea>] io_schedule+0x3f/0x67 [<ffffffff80028b82>] sync_page+0x3e/0x43 [<ffffffff8006392e>] __wait_on_bit_lock+0x36/0x66 [<ffffffff8003fce0>] __lock_page+0x5e/0x64 [<ffffffff800a0b8d>] wake_bit_function+0x0/0x23 [<ffffffff800c67ba>] read_cache_page+0xba/0x110 [<ffffffff8010a685>] read_dev_sector+0x28/0xcf [<ffffffff8010c061>] read_lba+0x49/0xac [<ffffffff800dbba2>] alternate_node_alloc+0x70/0x8c [<ffffffff8010c2f...
2011 May 05
12
Having parent transid verify failed
Hello, I have a 5.5TB Btrfs filesystem on top of a md-raid 5 device. Now if I run some file operations like find, I get these messages. Kernel is 2.6.38.5-1 on Arch Linux. May 5 14:15:12 mail kernel: [13559.089713] parent transid verify failed on 3062073683968 wanted 5181 found 5188 May 5 14:15:12 mail kernel: [13559.089834] parent transid verify failed on 3062073683968 wanted 5181 found 5188
2018 May 30
1
[ovirt-users] Re: Gluster problems, cluster performance issues
...t;] io_schedule_timeout+0xad/0x130 > [11159.499101] [<ffffffffb9913528>] io_schedule+0x18/0x20 > [11159.499104] [<ffffffffb9911f11>] bit_wait_io+0x11/0x50 > [11159.499107] [<ffffffffb9911ac1>] __wait_on_bit_lock+0x61/0xc0 > [11159.499113] [<ffffffffb9393634>] __lock_page+0x74/0x90 > [11159.499118] [<ffffffffb92bc210>] ? wake_bit_function+0x40/0x40 > [11159.499121] [<ffffffffb9394154>] __find_lock_page+0x54/0x70 > [11159.499125] [<ffffffffb9394e85>] grab_cache_page_write_begin+0x55/0xc0 > [11159.499130] [<ffffffffb9484b76>] io...
2007 Apr 18
4
[patch 3/9] Guest page hinting: volatile page cache.
...)) { + /* + * The page has been discarded by the host. Run the + * discard handler and return NULL. + */ + read_unlock_irq(&mapping->tree_lock); + page_discard(page); + return NULL; + } else if (TestSetPageLocked(page)) { read_unlock_irq(&mapping->tree_lock); __lock_page(page); read_lock_irq(&mapping->tree_lock); @@ -800,11 +839,24 @@ unsigned find_get_pages(struct address_s unsigned int i; unsigned int ret; +repeat: read_lock_irq(&mapping->tree_lock); ret = radix_tree_gang_lookup(&mapping->page_tree, (void **)pages, start, n...
2018 May 30
2
[ovirt-users] Re: Gluster problems, cluster performance issues
...30 >>> [11159.499101] [<ffffffffb9913528>] io_schedule+0x18/0x20 >>> [11159.499104] [<ffffffffb9911f11>] bit_wait_io+0x11/0x50 >>> [11159.499107] [<ffffffffb9911ac1>] __wait_on_bit_lock+0x61/0xc0 >>> [11159.499113] [<ffffffffb9393634>] __lock_page+0x74/0x90 >>> [11159.499118] [<ffffffffb92bc210>] ? wake_bit_function+0x40/0x40 >>> [11159.499121] [<ffffffffb9394154>] __find_lock_page+0x54/0x70 >>> [11159.499125] [<ffffffffb9394e85>] grab_cache_page_write_begin+0x55/0xc0 >>>...
2018 May 30
0
[ovirt-users] Re: Gluster problems, cluster performance issues
...99101] [<ffffffffb9913528>] io_schedule+0x18/0x20 >>>>> [11159.499104] [<ffffffffb9911f11>] bit_wait_io+0x11/0x50 >>>>> [11159.499107] [<ffffffffb9911ac1>] __wait_on_bit_lock+0x61/0xc0 >>>>> [11159.499113] [<ffffffffb9393634>] __lock_page+0x74/0x90 >>>>> [11159.499118] [<ffffffffb92bc210>] ? wake_bit_function+0x40/0x40 >>>>> [11159.499121] [<ffffffffb9394154>] __find_lock_page+0x54/0x70 >>>>> [11159.499125] [<ffffffffb9394e85>] grab_cache_page_write_begin+0x >>&...
2018 May 30
0
[ovirt-users] Re: Gluster problems, cluster performance issues
...gt; [11159.499101] [<ffffffffb9913528>] io_schedule+0x18/0x20 >>>> [11159.499104] [<ffffffffb9911f11>] bit_wait_io+0x11/0x50 >>>> [11159.499107] [<ffffffffb9911ac1>] __wait_on_bit_lock+0x61/0xc0 >>>> [11159.499113] [<ffffffffb9393634>] __lock_page+0x74/0x90 >>>> [11159.499118] [<ffffffffb92bc210>] ? wake_bit_function+0x40/0x40 >>>> [11159.499121] [<ffffffffb9394154>] __find_lock_page+0x54/0x70 >>>> [11159.499125] [<ffffffffb9394e85>] grab_cache_page_write_begin+0x55/0...