Displaying 20 results from an estimated 21 matches for "sync_pag".
Did you mean: sync_page?
2014 Nov 03
1
dmesg error
...ff8101714f4d80
0000000000000000 0000000000000007 ffff81016471b820 ffff81021ac87040
0000e805f8d72d5b 0000000000002d53 ffff81016471ba08 0000000388055b77
Call Trace:
[<ffffffff8006ecd9>] do_gettimeofday+0x40/0x90
[<ffffffff8005a412>] getnstimeofday+0x10/0x29
[<ffffffff80028bb2>] sync_page+0x0/0x43
[<ffffffff800637de>] io_schedule+0x3f/0x67
[<ffffffff80028bf0>] sync_page+0x3e/0x43
[<ffffffff80063922>] __wait_on_bit_lock+0x36/0x66
[<ffffffff8003f980>] __lock_page+0x5e/0x64
[<ffffffff800a34d5>] wake_bit_function+0x0/0x23
[<ffffffff8000c425>] d...
2011 Jul 08
5
btrfs hang in flush-btrfs-5
...ff88001842ffd8 0000000000013840 0000000000013840
Jul 8 11:49:40 xback2 kernel: [74920.681032] ffff88005b819730
ffff88003c7bae60 ffff88005fd140c8 ffff88005feb2188
Jul 8 11:49:40 xback2 kernel: [74920.681032] Call Trace:
Jul 8 11:49:40 xback2 kernel: [74920.681032] [<ffffffff810d80c7>] ?
sync_page+0x0/0x4f
Jul 8 11:49:40 xback2 kernel: [74920.681032] [<ffffffff8147439c>]
io_schedule+0x47/0x62
Jul 8 11:49:40 xback2 kernel: [74920.681032] [<ffffffff810d8112>]
sync_page+0x4b/0x4f
Jul 8 11:49:40 xback2 kernel: [74920.681032] [<ffffffff8147482f>]
__wait_on_bit_lock+0x4...
2010 Nov 18
9
Interesting problem with write data.
Hi,
Recently I created a btrfs filesystem and started using it, and I ran into
a slowness problem. While trying to diagnose it, I found the following:
1. dd if=/dev/zero of=test count=1024 bs=1MB
This is fast, at about 25MB/s, and reasonable iowait.
2. dd if=/dev/zero of=test count=1 bs=1GB
This is pretty slow, at about 1.5MB/s, and 90%+ iowait, constantly.
May I know why it works like this? Thanks.
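The two dd invocations above can be reproduced at a smaller scale. Note that in GNU dd, bs=1MB means 1,000,000 bytes while bs=1M means 1,048,576 bytes; this sketch uses placeholder paths under /tmp and only varies the write pattern, not the total size:

```shell
# Smaller-scale version of the comparison above (paths are placeholders).
# In GNU dd, bs=1MB is 10^6 bytes; bs=1M is 2^20 bytes.
dd if=/dev/zero of=/tmp/dd_many bs=1M count=8 2>/dev/null   # many 1 MiB writes
dd if=/dev/zero of=/tmp/dd_one  bs=8M count=1 2>/dev/null   # one 8 MiB write
# Both files end up the same size; only the per-write block size differs.
ls -l /tmp/dd_many /tmp/dd_one
```

On a real disk, adding oflag=direct would bypass the page cache so the throughput numbers reflect the device rather than cached writes.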
2009 Mar 05
1
[PATCH] OCFS2: Pagecache usage optimization on OCFS2
...+1953,16 @@ static int ocfs2_write_end(struct file *
}
const struct address_space_operations ocfs2_aops = {
- .readpage = ocfs2_readpage,
- .readpages = ocfs2_readpages,
- .writepage = ocfs2_writepage,
- .write_begin = ocfs2_write_begin,
- .write_end = ocfs2_write_end,
- .bmap = ocfs2_bmap,
- .sync_page = block_sync_page,
- .direct_IO = ocfs2_direct_IO,
- .invalidatepage = ocfs2_invalidatepage,
- .releasepage = ocfs2_releasepage,
- .migratepage = buffer_migrate_page,
+ .readpage = ocfs2_readpage,
+ .readpages = ocfs2_readpages,
+ .writepage = ocfs2_writepage,
+ .write_begin = ocfs2_write_begi...
2005 Nov 01
2
xen, lvm, drbd, bad kernel messages
...nel: [blk_remove_plug+110/112]
blk_remove_plug+0x6e/0x70
Nov 1 13:52:13 localhost kernel: [pg0+140840834/1002423296]
drbd_unplug_fn+0x22/0x220 [drbd]
Nov 1 13:52:13 localhost kernel: [blk_backing_dev_unplug+25/32]
blk_backing_dev_unplug+0x19/0x20
Nov 1 13:52:13 localhost kernel: [block_sync_page+60/80]
block_sync_page+0x3c/0x50
Nov 1 13:52:13 localhost kernel: [sync_page+70/80] sync_page+0x46/0x50
Nov 1 13:52:13 localhost kernel: [__wait_on_bit_lock+94/112]
__wait_on_bit_lock+0x5e/0x70
Nov 1 13:52:13 localhost kernel: [sync_page+0/80] sync_page+0x0/0x50
Nov 1 13:52:13 localhost...
2010 Sep 27
2
BUG - qdev - partial loss of network connectivity
...0000000000015780 0000000000015780 ffff88011c7ce2e0 ffff88011c7ce5d8
[ 240.580502] Call Trace:
[ 240.581132] [<ffffffff8102cdac>] ? pvclock_clocksource_read+0x3a/0x8b
[ 240.582427] [<ffffffff8102cdac>] ? pvclock_clocksource_read+0x3a/0x8b
[ 240.583869] [<ffffffff810b3bdd>] ? sync_page+0x0/0x46
[ 240.585034] [<ffffffff810b3bdd>] ? sync_page+0x0/0x46
[ 240.586087] [<ffffffff812f9939>] ? io_schedule+0x73/0xb7
[ 240.587287] [<ffffffff810b3c1e>] ? sync_page+0x41/0x46
[ 240.588202] [<ffffffff812f9e46>] ? __wait_on_bit+0x41/0x70
[ 240.589314] [<fff...
2011 May 05
12
Having parent transid verify failed
Hello, I have a 5.5TB Btrfs filesystem on top of an md-raid 5 device. Now
if I run some file operations like find, I get these messages.
The kernel is 2.6.38.5-1 on Arch Linux.
May 5 14:15:12 mail kernel: [13559.089713] parent transid verify failed
on 3062073683968 wanted 5181 found 5188
May 5 14:15:12 mail kernel: [13559.089834] parent transid verify failed
on 3062073683968 wanted 5181 found 5188
2011 Jan 19
0
Bug#603727: xen-hypervisor-4.0-amd64: i386 Dom0 crashes after doing some I/O on local storage (software Raid1 on SAS-drives with mpt2sas driver)
...42] 0000000000015780 0000000000015780 ffff88007ec254c0
ffff88007ec257b8
[163440.615551] Call Trace:
[163440.615557] [<ffffffff811982cb>] ? __bitmap_weight+0x3a/0x7e
[163440.615563] [<ffffffff8102ddc0>] ? pvclock_clocksource_read+0x3a/0x8b
[163440.615569] [<ffffffff810b4ee5>] ? sync_page+0x0/0x46
[163440.615574] [<ffffffff810b4ee5>] ? sync_page+0x0/0x46
[163440.615579] [<ffffffff8130b5f1>] ? io_schedule+0x73/0xb7
[163440.615584] [<ffffffff810b4f26>] ? sync_page+0x41/0x46
[163440.615590] [<ffffffff8130c8b2>] ? _spin_unlock_irqrestore+0xd/0xe
[163440.6155...
2010 Oct 11
4
Horrible btrfs performance on cold cache
...Google Chrome:
encrypted ext4: ~20s
btrfs: ~2:11s
I have tried different things to find out exactly what the issue is,
but haven't quite found it yet.
Here's some stuff I got from latencytop, not sure if it would be helpful:
4969.1ms
sys_mmap_pgoff
syscall_call
(chrome)
1139.9ms
sync_page
sync_page_killable
__lock_page_killable
generic_file_aio_read
do_sync_read
vfs_read
sys_read
sysenter_do_call
(chrome)
431.9ms
sync_page
wait_on_page_bit
read_extent_buffer_pages
btree_read_extent_buffer_pages
read_tree_block
read_block_for_search
btrfs_search_slot
lookup_inline_extent_backref
__...
2020 Apr 22
3
slow performance on company production server I need help
...ffff88006654bd70 ffff88006654bc88 ffffea00016ab7c0
Apr 22 09:11:49 daisy kernel: [142441.721125] ffff88011a707000
ffff880028321168 000000000001b7ea 0000816be9b3faa2
Apr 22 09:11:49 daisy kernel: [142441.721130] Call Trace:
Apr 22 09:11:49 daisy kernel: [142441.721139] [<ffffffff8114f130>] ?
sync_page+0x0/0x50
Apr 22 09:11:49 daisy kernel: [142441.721144] [<ffffffff8107c851>] ?
update_curr+0xe1/0x1f0
Apr 22 09:11:49 daisy kernel: [142441.721149] [<ffffffff81566c55>]
schedule_timeout+0x215/0x2f0
Apr 22 09:11:49 daisy kernel: [142441.721155] [<ffffffff81067432>] ?
check_preem...
2020 Apr 22
0
slow performance on company production server I need help
...6654bc88 ffffea00016ab7c0
> Apr 22 09:11:49 daisy kernel: [142441.721125] ffff88011a707000
> ffff880028321168 000000000001b7ea 0000816be9b3faa2
> Apr 22 09:11:49 daisy kernel: [142441.721130] Call Trace:
> Apr 22 09:11:49 daisy kernel: [142441.721139] [<ffffffff8114f130>] ?
> sync_page+0x0/0x50
> Apr 22 09:11:49 daisy kernel: [142441.721144] [<ffffffff8107c851>] ?
> update_curr+0xe1/0x1f0
> Apr 22 09:11:49 daisy kernel: [142441.721149] [<ffffffff81566c55>]
> schedule_timeout+0x215/0x2f0
> Apr 22 09:11:49 daisy kernel: [142441.721155] [<ffffffff81...
2020 Apr 22
3
slow performance on company production server I need help
...ab7c0
>> Apr 22 09:11:49 daisy kernel: [142441.721125] ffff88011a707000
>> ffff880028321168 000000000001b7ea 0000816be9b3faa2
>> Apr 22 09:11:49 daisy kernel: [142441.721130] Call Trace:
>> Apr 22 09:11:49 daisy kernel: [142441.721139] [<ffffffff8114f130>] ?
>> sync_page+0x0/0x50
>> Apr 22 09:11:49 daisy kernel: [142441.721144] [<ffffffff8107c851>] ?
>> update_curr+0xe1/0x1f0
>> Apr 22 09:11:49 daisy kernel: [142441.721149] [<ffffffff81566c55>]
>> schedule_timeout+0x215/0x2f0
>> Apr 22 09:11:49 daisy kernel: [142441.7211...
2020 Apr 22
0
slow performance on company production server I need help
...pr 22 09:11:49 daisy kernel: [142441.721125] ffff88011a707000
>>> ffff880028321168 000000000001b7ea 0000816be9b3faa2
>>> Apr 22 09:11:49 daisy kernel: [142441.721130] Call Trace:
>>> Apr 22 09:11:49 daisy kernel: [142441.721139] [<ffffffff8114f130>] ?
>>> sync_page+0x0/0x50
>>> Apr 22 09:11:49 daisy kernel: [142441.721144] [<ffffffff8107c851>] ?
>>> update_curr+0xe1/0x1f0
>>> Apr 22 09:11:49 daisy kernel: [142441.721149] [<ffffffff81566c55>]
>>> schedule_timeout+0x215/0x2f0
>>> Apr 22 09:11:49 daisy...
2010 Mar 29
0
Interesting lockdep message coming out of blktap
...t;] ? __down_read+0x38/0xad
[<ffffffff812802d0>] ? evtchn_interrupt+0xaa/0x112
[<ffffffff8128a0de>] blktap_device_do_request+0x1dc/0x298
[<ffffffff814e9bac>] ? _spin_unlock_irqrestore+0x56/0x74
[<ffffffff8105848b>] ? del_timer+0xd7/0xe5
[<ffffffff810bf104>] ? sync_page_killable+0x0/0x30
[<ffffffff81202143>] __generic_unplug_device+0x30/0x35
[<ffffffff81202171>] generic_unplug_device+0x29/0x3a
[<ffffffff811fb5dc>] blk_unplug+0x71/0x76
[<ffffffff811fb5ee>] blk_backing_dev_unplug+0xd/0xf
[<ffffffff8111a1ad>] block_sync_page+0...
2002 Sep 22
2
Assertion failure in ext3_get_block() at inode.c:853: "handle != 0"
Hi,
Got the following on Linux 2.5.37 trying to run apt-get update.
MikaL
Sep 21 23:10:05 devil kernel: Assertion failure in ext3_get_block() at inode.c:853: "handle != 0"
Sep 21 23:10:05 devil kernel: kernel BUG at inode.c:853!
Sep 21 23:10:05 devil kernel: invalid operand: 0000
Sep 21 23:10:05 devil kernel: CPU: 1
Sep 21 23:10:05 devil kernel: EIP:
2015 Jun 02
2
GlusterFS 3.7 - slow/poor performances
Hi Geoffrey,
Since you say this happens on all volume types,
let's do the following:
1) Create a dist-repl volume
2) Set the options etc you need.
3) enable gluster volume profile using "gluster volume profile <volname>
start"
4) run the work load
5) give output of "gluster volume profile <volname> info"
Repeat the steps above on new and old
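Spelled out, the steps above correspond to gluster CLI invocations roughly as follows. The volume name "testvol" and the node/brick paths are assumptions for illustration; since these commands need a live glusterd to run, this sketch only builds and prints the command sequence for review:

```shell
VOL=testvol   # hypothetical volume name; bricks below are placeholders
CMDS="gluster volume create $VOL replica 2 node1:/bricks/b1 node2:/bricks/b1 node1:/bricks/b2 node2:/bricks/b2
gluster volume start $VOL
gluster volume profile $VOL start
# ... run the workload against the mounted volume ...
gluster volume profile $VOL info"
printf '%s\n' "$CMDS"   # review, then run each line on a cluster node
```

Four bricks with replica 2 give the distributed-replicated (dist-repl) layout mentioned in step 1.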
2013 Aug 12
6
3TB External USB Drive isn't recognized
...ffffffff800e4df4 0000000000000086 000000000000000a ffff81000ccfd7a0
ffff810037c1b100 0001039304f5b0b1 000000000000a05c ffff81000ccfd988
0000000300000000 Call Trace: [<ffffffff800e4df4>] blkdev_readpage+0x0/0xf
[<ffffffff8006e1d7>] do_gettimeofday+0x40/0x90 [<ffffffff80028b44>]
sync_page+0x0/0x43 [<ffffffff800e4df4>] blkdev_readpage+0x0/0xf
[<ffffffff800637ea>] io_schedule+0x3f/0x67 [<ffffffff80028b82>]
sync_page+0x3e/0x43 [<ffffffff8006392e>] __wait_on_bit_lock+0x36/0x66
[<ffffffff8003fce0>] __lock_page+0x5e/0x64 [<ffffffff800a0b8d>]
wake_b...
2010 Oct 08
5
Slow link/Capacity changed + Kernel OOPS... possible hardware issues, ideas?
...7140 f066af70 00000001
f066af70 c4a08140 c3059b7c c3059b44
Oct 8 02:40:43 (none) kernel: Call Trace:
Oct 8 02:40:43 (none) kernel: [<c10682af>] ? ktime_get_ts+0xff/0x130
Oct 8 02:40:43 (none) kernel: [<c12e6f1c>] io_schedule+0x5c/0xa0
Oct 8 02:40:43 (none) kernel: [<c10bd825>] sync_page+0x35/0x40
Oct 8 02:40:43 (none) kernel: [<c12e75d5>] __wait_on_bit+0x45/0x70
Oct 8 02:40:43 (none) kernel: [<c10bd7f0>] ? sync_page+0x0/0x40
Oct 8 02:40:43 (none) kernel: [<c10bda76>] wait_on_page_bit+0x86/0x90
Oct 8 02:40:43 (none) kernel: [<c105eb70>] ? wake_bit_funct...
2010 Jan 28
31
[PATCH 0 of 4] aio event fd support to blktap2
Get blktap2 running on pvops.
This mainly adds eventfd support to the userland code. Based on some
prior cleanup to tapdisk-queue and the server object. We had most of
that in XenServer for a while, so I kept it stacked.
1. Clean up IPC and AIO init in tapdisk-server.
[I think tapdisk-ipc in blktap2 is basically obsolete.
Pending a later patch to remove it?]
2. Split tapdisk-queue into