search for: __wait_on_bit_lock

Displaying 18 results from an estimated 18 matches for "__wait_on_bit_lock".

2011 Jul 08
5
btrfs hang in flush-btrfs-5
...d80c7>] ? sync_page+0x0/0x4f Jul 8 11:49:40 xback2 kernel: [74920.681032] [<ffffffff8147439c>] io_schedule+0x47/0x62 Jul 8 11:49:40 xback2 kernel: [74920.681032] [<ffffffff810d8112>] sync_page+0x4b/0x4f Jul 8 11:49:40 xback2 kernel: [74920.681032] [<ffffffff8147482f>] __wait_on_bit_lock+0x46/0x8f Jul 8 11:49:40 xback2 kernel: [74920.681032] [<ffffffff810d8075>] __lock_page+0x66/0x68 Jul 8 11:49:40 xback2 kernel: [74920.681032] [<ffffffff8106f2ab>] ? wake_bit_function+0x0/0x31 Jul 8 11:49:40 xback2 kernel: [74920.681032] [<ffffffffa0430cf9>] lock_page+0x3...
2014 Nov 03
1
dmesg error
...8055b77 Call Trace: [<ffffffff8006ecd9>] do_gettimeofday+0x40/0x90 [<ffffffff8005a412>] getnstimeofday+0x10/0x29 [<ffffffff80028bb2>] sync_page+0x0/0x43 [<ffffffff800637de>] io_schedule+0x3f/0x67 [<ffffffff80028bf0>] sync_page+0x3e/0x43 [<ffffffff80063922>] __wait_on_bit_lock+0x36/0x66 [<ffffffff8003f980>] __lock_page+0x5e/0x64 [<ffffffff800a34d5>] wake_bit_function+0x0/0x23 [<ffffffff8000c425>] do_generic_mapping_read+0x1df/0x359 [<ffffffff8000d251>] file_read_actor+0x0/0x159 [<ffffffff8000c6eb>] __generic_file_aio_read+0x14c/0x198 [...
2009 Apr 03
1
Memory Leak with stock Squirrelmail, PHP, mysql, apache since 5.3
...out_of_memory+0x8b/0x203 Apr 2 17:18:28 s_local at webmail kernel:[<ffffffff8020f657>] __alloc_pages+0x245/0x2ce Apr 2 17:18:28 s_local at webmail kernel:[<ffffffff8021336e>] __do_page_cache_readahead+0xd0/0x21c Apr 2 17:18:28 s_local at webmail kernel:[<ffffffff80262824>] __wait_on_bit_lock+0x5b/0x66 Apr 2 17:18:28 s_local at webmail kernel:[<ffffffff88081d4d>] :dm_mod:dm_any_congested+0x38/0x3f Apr 2 17:18:28 s_local at webmail kernel:[<ffffffff80213c47>] filemap_nopage+0x148/0x322 Apr 2 17:18:28 s_local at webmail kernel:[<ffffffff80208db9>] __handle_mm_fault...
2005 Nov 01
2
xen, lvm, drbd, bad kernel messages
...d] Nov 1 13:52:13 localhost kernel: [blk_backing_dev_unplug+25/32] blk_backing_dev_unplug+0x19/0x20 Nov 1 13:52:13 localhost kernel: [block_sync_page+60/80] block_sync_page+0x3c/0x50 Nov 1 13:52:13 localhost kernel: [sync_page+70/80] sync_page+0x46/0x50 Nov 1 13:52:13 localhost kernel: [__wait_on_bit_lock+94/112] __wait_on_bit_lock+0x5e/0x70 Nov 1 13:52:13 localhost kernel: [sync_page+0/80] sync_page+0x0/0x50 Nov 1 13:52:13 localhost kernel: [wake_bit_function+0/96] wake_bit_function+0x0/0x60 Nov 1 13:52:13 localhost kernel: [__lock_page+145/160] __lock_page +0x91/0xa0 Nov 1 13:52:13 loc...
2011 Jan 19
0
Bug#603727: xen-hypervisor-4.0-amd64: i386 Dom0 crashes after doing some I/O on local storage (software Raid1 on SAS-drives with mpt2sas driver)
...fff8110ed8a>] ? sync_buffer+0x0/0x40 [163440.614937] [<ffffffff8130b5f1>] ? io_schedule+0x73/0xb7 [163440.614943] [<ffffffff8110edc5>] ? sync_buffer+0x3b/0x40 [163440.614949] [<ffffffff8130c8b2>] ? _spin_unlock_irqrestore+0xd/0xe [163440.614955] [<ffffffff8130ba01>] ? __wait_on_bit_lock+0x3f/0x84 [163440.614960] [<ffffffff8110ed8a>] ? sync_buffer+0x0/0x40 [163440.614966] [<ffffffff8130bab1>] ? out_of_line_wait_on_bit_lock+0x6b/0x77 [163440.614972] [<ffffffff81065d38>] ? wake_bit_function+0x0/0x23 [163440.614978] [<ffffffff81110157>] ? __block_write_full...
2012 Jul 30
4
balance disables nodatacow
I have a 3-disk raid1 filesystem mounted with nodatacow. I have a folder in said filesystem with the 'C' NOCOW & 'Z' Not_Compressed flags set for good measure. I then copy in a large file and proceed to make random modifications. Filefrag shows no additional extents created, good so far. A big thank you to those devs who got that working. However, after
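The test described in that post can be reproduced along these lines. This is only a minimal sketch: the mount point, file names, sizes and offsets are made up for illustration; only the 'C' (NOCOW) attribute, the nodatacow mount, and the filefrag check come from the post itself.

    # mark a directory NOCOW so files created inside it inherit the attribute
    chattr +C /mnt/btrfs/nocow
    lsattr -d /mnt/btrfs/nocow

    # copy in a large file; being created inside the NOCOW directory, it inherits +C
    cp /path/to/largefile /mnt/btrfs/nocow/testfile
    filefrag /mnt/btrfs/nocow/testfile      # note the extent count

    # overwrite a region in place; conv=notrunc keeps the existing file contents and size
    dd if=/dev/urandom of=/mnt/btrfs/nocow/testfile bs=4K seek=1000 count=100 conv=notrunc
    filefrag /mnt/btrfs/nocow/testfile      # with NOCOW honoured, the extent count should not grow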
2010 Mar 29
0
Interesting lockdep message coming out of blktap
...[<ffffffff811fb5dc>] blk_unplug+0x71/0x76 [<ffffffff811fb5ee>] blk_backing_dev_unplug+0xd/0xf [<ffffffff8111a1ad>] block_sync_page+0x42/0x44 [<ffffffff810bf0fb>] sync_page+0x3f/0x48 [<ffffffff810bf10d>] sync_page_killable+0x9/0x30 [<ffffffff814e7a2f>] __wait_on_bit_lock+0x41/0x8a [<ffffffff810bf040>] __lock_page_killable+0x61/0x68 [<ffffffff8106486b>] ? wake_bit_function+0x0/0x2e [<ffffffff8103e0af>] ? __might_sleep+0x3d/0x127 [<ffffffff810c0b1f>] generic_file_aio_read+0x3db/0x594 [<ffffffff810763f0>] ? __lock_acquire+0x9a5/...
2012 May 03
0
Strange situation with openssl and kernel
...9 May 2 22:48:20 vmail kernel: [<ffffffff8001558e>] sync_buffer+0x0/0x3f May 2 22:48:20 vmail kernel: [<ffffffff800637de>] io_schedule+0x3f/0x67 May 2 22:48:20 vmail kernel: [<ffffffff800155c9>] sync_buffer+0x3b/0x3f May 2 22:48:20 vmail kernel: [<ffffffff80063922>] __wait_on_bit_lock+0x36/0x66 May 2 22:48:20 vmail kernel: [<ffffffff8001558e>] sync_buffer+0x0/0x3f May 2 22:48:20 vmail kernel: [<ffffffff800639be>] out_of_line_wait_on_bit_lock+0x6c/0x78 May 2 22:48:20 vmail kernel: [<ffffffff800a34d5>] wake_bit_function+0x0/0x23 May 2 22:48:20 vmail kern...
2010 Nov 18
9
Interesting problem with write data.
Hi, I recently created a btrfs filesystem and ran into a slowness problem. While trying to diagnose it, I found this: 1. dd if=/dev/zero of=test count=1024 bs=1MB This is fast, at about 25MB/s, with reasonable iowait. 2. dd if=/dev/zero of=test count=1 bs=1GB This is pretty slow, at about 1.5MB/s, with 90%+ iowait, constantly. May I know why it works like this? Thanks.
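For reference, the two writes being compared, side by side. The commands are taken verbatim from the post; the figures in the comments are the throughputs reported there.

    # 1024 sequential writes of 1 MB each -> reported ~25 MB/s, reasonable iowait
    dd if=/dev/zero of=test count=1024 bs=1MB

    # one single 1 GB write (dd allocates a single 1 GB buffer) -> reported ~1.5 MB/s, 90%+ iowait
    dd if=/dev/zero of=test count=1 bs=1GB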
2012 Jul 31
2
Btrfs Intermittent ENOSPC Issues
I've been working on running down intermittent ENOSPC issues. I can only seem to replicate ENOSPC errors when running zlib compression. However, I have been seeing similar ENOSPC errors to a lesser extent when playing with the LZ4HC patches. I apologize for not following up on this sooner, but I had drifted away from using zlib, and didn't notice there was still an issue. My
2013 Aug 12
6
3TB External USB Drive isn't recognized
...lkdev_readpage+0x0/0xf [<ffffffff8006e1d7>] do_gettimeofday+0x40/0x90 [<ffffffff80028b44>] sync_page+0x0/0x43 [<ffffffff800e4df4>] blkdev_readpage+0x0/0xf [<ffffffff800637ea>] io_schedule+0x3f/0x67 [<ffffffff80028b82>] sync_page+0x3e/0x43 [<ffffffff8006392e>] __wait_on_bit_lock+0x36/0x66 [<ffffffff8003fce0>] __lock_page+0x5e/0x64 [<ffffffff800a0b8d>] wake_bit_function+0x0/0x23 [<ffffffff800c67ba>] read_cache_page+0xba/0x110 [<ffffffff8010a685>] read_dev_sector+0x28/0xcf [<ffffffff8010c061>] read_lba+0x49/0xac [<ffffffff800dbba2>] al...
2018 May 30
1
[ovirt-users] Re: Gluster problems, cluster performance issues
...911f00>] ? bit_wait+0x50/0x50 > [11159.499097] [<ffffffffb991348d>] io_schedule_timeout+0xad/0x130 > [11159.499101] [<ffffffffb9913528>] io_schedule+0x18/0x20 > [11159.499104] [<ffffffffb9911f11>] bit_wait_io+0x11/0x50 > [11159.499107] [<ffffffffb9911ac1>] __wait_on_bit_lock+0x61/0xc0 > [11159.499113] [<ffffffffb9393634>] __lock_page+0x74/0x90 > [11159.499118] [<ffffffffb92bc210>] ? wake_bit_function+0x40/0x40 > [11159.499121] [<ffffffffb9394154>] __find_lock_page+0x54/0x70 > [11159.499125] [<ffffffffb9394e85>] grab_cache_page_wr...
2018 May 30
2
[ovirt-users] Re: Gluster problems, cluster performance issues
...>>> [11159.499097] [<ffffffffb991348d>] io_schedule_timeout+0xad/0x130 >>> [11159.499101] [<ffffffffb9913528>] io_schedule+0x18/0x20 >>> [11159.499104] [<ffffffffb9911f11>] bit_wait_io+0x11/0x50 >>> [11159.499107] [<ffffffffb9911ac1>] __wait_on_bit_lock+0x61/0xc0 >>> [11159.499113] [<ffffffffb9393634>] __lock_page+0x74/0x90 >>> [11159.499118] [<ffffffffb92bc210>] ? wake_bit_function+0x40/0x40 >>> [11159.499121] [<ffffffffb9394154>] __find_lock_page+0x54/0x70 >>> [11159.499125] [<ffffffff...
2018 May 30
0
[ovirt-users] Re: Gluster problems, cluster performance issues
...097] [<ffffffffb991348d>] io_schedule_timeout+0xad/0x130 >>>>> [11159.499101] [<ffffffffb9913528>] io_schedule+0x18/0x20 >>>>> [11159.499104] [<ffffffffb9911f11>] bit_wait_io+0x11/0x50 >>>>> [11159.499107] [<ffffffffb9911ac1>] __wait_on_bit_lock+0x61/0xc0 >>>>> [11159.499113] [<ffffffffb9393634>] __lock_page+0x74/0x90 >>>>> [11159.499118] [<ffffffffb92bc210>] ? wake_bit_function+0x40/0x40 >>>>> [11159.499121] [<ffffffffb9394154>] __find_lock_page+0x54/0x70 >>>>&g...
2018 May 30
0
[ovirt-users] Re: Gluster problems, cluster performance issues
...; [11159.499097] [<ffffffffb991348d>] io_schedule_timeout+0xad/0x130 >>>> [11159.499101] [<ffffffffb9913528>] io_schedule+0x18/0x20 >>>> [11159.499104] [<ffffffffb9911f11>] bit_wait_io+0x11/0x50 >>>> [11159.499107] [<ffffffffb9911ac1>] __wait_on_bit_lock+0x61/0xc0 >>>> [11159.499113] [<ffffffffb9393634>] __lock_page+0x74/0x90 >>>> [11159.499118] [<ffffffffb92bc210>] ? wake_bit_function+0x40/0x40 >>>> [11159.499121] [<ffffffffb9394154>] __find_lock_page+0x54/0x70 >>>> [11159.499125...
2018 May 30
1
[ovirt-users] Re: Gluster problems, cluster performance issues
...097] [<ffffffffb991348d>] io_schedule_timeout+0xad/0x130 >>>>> [11159.499101] [<ffffffffb9913528>] io_schedule+0x18/0x20 >>>>> [11159.499104] [<ffffffffb9911f11>] bit_wait_io+0x11/0x50 >>>>> [11159.499107] [<ffffffffb9911ac1>] __wait_on_bit_lock+0x61/0xc0 >>>>> [11159.499113] [<ffffffffb9393634>] __lock_page+0x74/0x90 >>>>> [11159.499118] [<ffffffffb92bc210>] ? wake_bit_function+0x40/0x40 >>>>> [11159.499121] [<ffffffffb9394154>] __find_lock_page+0x54/0x70 >>>>&g...
2018 Jun 01
0
[ovirt-users] Re: Gluster problems, cluster performance issues
...fffffffb991348d>] io_schedule_timeout+0xad/0x130 >>>>>> [11159.499101] [<ffffffffb9913528>] io_schedule+0x18/0x20 >>>>>> [11159.499104] [<ffffffffb9911f11>] bit_wait_io+0x11/0x50 >>>>>> [11159.499107] [<ffffffffb9911ac1>] __wait_on_bit_lock+0x61/0xc0 >>>>>> [11159.499113] [<ffffffffb9393634>] __lock_page+0x74/0x90 >>>>>> [11159.499118] [<ffffffffb92bc210>] ? wake_bit_function+0x40/0x40 >>>>>> [11159.499121] [<ffffffffb9394154>] __find_lock_page+0x54/0x70 >&g...
2010 Jan 28
31
[PATCH 0 of 4] aio event fd support to blktap2
Get blktap2 running on pvops. This mainly adds eventfd support to the userland code. Based on some prior cleanup to tapdisk-queue and the server object. We had most of that in XenServer for a while, so I kept it stacked. 1. Clean up IPC and AIO init in tapdisk-server. [I think tapdisk-ipc in blktap2 is basically obsolete. Pending a later patch to remove it?] 2. Split tapdisk-queue into