search for: xfs_log_forc

Displaying 14 results from an estimated 14 matches for "xfs_log_forc".

Did you mean: xfs_log_force
2017 Nov 16
2
xfs_rename error and brick offline
...heck_thread_proc] 0-data-posix: health-check failed, going down Nov 16 11:15:30 node10 disks-FAvUzxiL-brick[29742]: [2017-11-16 11:15:30.206538] M [MSGID: 113075] [posix-helpers.c:1908:posix_health_check_thread_proc] 0-data-posix: still alive! -> SIGTERM Nov 16 11:15:37 node10 kernel: XFS (sdm): xfs_log_force: error 5 returned. Nov 16 11:16:07 node10 kernel: XFS (sdm): xfs_log_force: error 5 returned. I think probably it's not related to the hard disk because it can be reproduced and it occurs for different bricks. All the hard disks are new and I don't see any low level IO error. Is it a bu...
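For reference, the "error 5" in these xfs_log_force messages is the raw kernel errno value; on Linux, errno 5 is EIO ("Input/output error"), which is what XFS keeps reporting once the log has hit an I/O error and the filesystem has shut down. A minimal userspace sketch (not part of XFS itself) confirming the number-to-name mapping:

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        /* XFS prints the raw errno; on Linux, 5 is EIO. */
        printf("errno 5 = %s (EIO == %d)\n", strerror(5), EIO);
        return 0;
    }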
2017 Nov 16
0
xfs_rename error and brick offline
...data-posix: health-check failed, going down > Nov 16 11:15:30 node10 disks-FAvUzxiL-brick[29742]: [2017-11-16 > 11:15:30.206538] M [MSGID: 113075] [posix-helpers.c:1908:posix_health_check_thread_proc] > 0-data-posix: still alive! -> SIGTERM > Nov 16 11:15:37 node10 kernel: XFS (sdm): xfs_log_force: error 5 returned. > Nov 16 11:16:07 node10 kernel: XFS (sdm): xfs_log_force: error 5 returned. > > > As the logs indicate, xfs shut down and the posix health check feature in Gluster rendered the brick offline. You would be better off checking with the xfs community about this proble...
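The posix health check mentioned here is a periodic liveness test that the brick's posix translator runs against its backing filesystem; when it fails (in this case because XFS had already shut down), the brick process takes itself offline rather than keep serving a dead disk. The interval is a per-volume tunable; a hedged example, assuming the volume is named data as the 0-data-posix log prefix suggests:

    # default interval is 30 seconds; 0 disables the health check
    gluster volume set data storage.health-check-interval 30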
2015 Sep 21
2
Centos 6.6, apparent xfs corruption
...fs/xfs_trans.c. Return address = 0xffffffffa01f2e6e Sep 18 20:35:15 gries kernel: XFS (dm-2): Corruption of in-memory data detected. Shutting down filesystem Sep 18 20:35:15 gries kernel: XFS (dm-2): Please umount the filesystem and rectify the problem(s) Sep 18 20:35:27 gries kernel: XFS (dm-2): xfs_log_force: error 5 returned.
2005 Aug 15
3
[-mm PATCH 2/32] fs: fix-up schedule_timeout() usage
...de( igrab(inode); xfs_syncd_queue_work(vfs, inode, xfs_flush_inode_work); - delay(HZ/2); + delay(msecs_to_jiffies(500)); } /* @@ -441,7 +441,7 @@ xfs_flush_device( igrab(inode); xfs_syncd_queue_work(vfs, inode, xfs_flush_device_work); - delay(HZ/2); + delay(msecs_to_jiffies(500)); xfs_log_force(ip->i_mount, (xfs_lsn_t)0, XFS_LOG_FORCE|XFS_LOG_SYNC); } @@ -478,10 +478,9 @@ xfssyncd( wake_up(&vfsp->vfs_wait_sync_task); INIT_LIST_HEAD(&tmp); - timeleft = (xfs_syncd_centisecs * HZ) / 100; + timeleft = xfs_syncd_centisecs * msecs_to_jiffies(10); for (;;) { - set_curr...
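The hunk above swaps raw HZ arithmetic for msecs_to_jiffies() so the delays are written in real time rather than in timer ticks. A rough userspace sketch of the idea (a simplified stand-in, not the kernel helper, which also clamps oversized values): HZ/2 and a 500 ms conversion name the same half-second interval whatever tick rate the kernel was built with.

    #include <stdio.h>

    /* Simplified stand-in for msecs_to_jiffies(): convert milliseconds
     * to timer ticks at a given tick rate, rounding up. */
    static unsigned long msecs_to_ticks(unsigned int msecs, unsigned int hz)
    {
        return ((unsigned long)msecs * hz + 999) / 1000;
    }

    int main(void)
    {
        const unsigned int rates[] = { 100, 250, 1000 };

        for (int i = 0; i < 3; i++) {
            unsigned int hz = rates[i];
            printf("HZ=%-4u  HZ/2=%-3u  500 ms in ticks=%lu\n",
                   hz, hz / 2, msecs_to_ticks(500, hz));
        }
        return 0;
    }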
2017 Oct 22
0
Areca RAID controller on latest CentOS 7 (1708 i.e. RHEL 7.4) kernel 3.10.0-693.2.2.el7.x86_64
...2f7c4f10 ffff88103ada5300 Oct 19 23:06:57 radon kernel: 0000000000000000 ffff88102f7c4f10 ffff88103afe4528 ffff88103eaa2000 Oct 19 23:06:57 radon kernel: Call Trace: Oct 19 23:06:57 radon kernel: [<ffffffff816a94e9>] schedule+0x29/0x70 Oct 19 23:06:57 radon kernel: [<ffffffffc04d1d16>] _xfs_log_force+0x1c6/0x2c0 [xfs] Oct 19 23:06:57 radon kernel: [<ffffffff810c4810>] ? wake_up_state+0x20/0x20 Oct 19 23:06:57 radon kernel: [<ffffffffc04ddb9c>] ? xfsaild+0x16c/0x6f0 [xfs] Oct 19 23:06:57 radon kernel: [<ffffffffc04d1e3c>] xfs_log_force+0x2c/0x70 [xfs] Oct 19 23:06:57 radon ker...
2015 Sep 21
0
Centos 6.6, apparent xfs corruption
...dress > = 0xffffffffa01f2e6e Sep 18 20:35:15 gries kernel: XFS (dm-2): > Corruption of in-memory data detected. Shutting down filesystem > Sep 18 20:35:15 gries kernel: XFS (dm-2): Please umount the > filesystem and rectify the problem(s) Sep 18 20:35:27 gries kernel: > XFS (dm-2): xfs_log_force: error 5 returned.
2017 Sep 28
2
mounting an nfs4 file system as v4.0 in CentOS 7.4?
CentOS 7.4 client mounting a CentOS 7.4 server filesystem over nfs4. nfs seems to be much slower since the upgrade to 7.4, so I thought it might be nice to mount the directory as v4.0 rather than the new default of v4.1 to see if it makes a difference. The release notes state, without an example: "You can retain the original behavior by specifying 0 as the minor version" nfs(5)
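For what it's worth, the example missing from the release notes would look something like the lines below (hypothetical server and export names; nfs(5) accepts the minor version either folded into vers= or as a separate minorversion= option):

    mount -t nfs -o vers=4.0 server:/export /mnt
    # or, equivalently, pin the minor version with a separate option:
    mount -t nfs -o vers=4,minorversion=0 server:/export /mnt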
2018 May 30
0
[ovirt-users] Re: Gluster problems, cluster performance issues
...message. [ 8280.188837] xfsaild/dm-10 D ffff93203a2eeeb0 0 1061 2 0x00000000 [ 8280.188843] Call Trace: [ 8280.188857] [<ffffffff960a3a2e>] ? try_to_del_timer_sync+0x5e/0x90 [ 8280.188864] [<ffffffff96713f79>] schedule+0x29/0x70 [ 8280.188932] [<ffffffffc049fe36>] _xfs_log_force+0x1c6/0x2c0 [xfs] [ 8280.188939] [<ffffffff960cf1b0>] ? wake_up_state+0x20/0x20 [ 8280.188972] [<ffffffffc04abfec>] ? xfsaild+0x16c/0x6f0 [xfs] [ 8280.189003] [<ffffffffc049ff5c>] xfs_log_force+0x2c/0x70 [xfs] [ 8280.189035] [<ffffffffc04abe80>] ? xfs_trans_ail_cursor_f...
2018 May 30
1
[ovirt-users] Re: Gluster problems, cluster performance issues
...xfsaild/dm-10 D ffff93203a2eeeb0 0 1061 2 > 0x00000000 > [ 8280.188843] Call Trace: > [ 8280.188857] [<ffffffff960a3a2e>] ? try_to_del_timer_sync+0x5e/0x90 > [ 8280.188864] [<ffffffff96713f79>] schedule+0x29/0x70 > [ 8280.188932] [<ffffffffc049fe36>] _xfs_log_force+0x1c6/0x2c0 [xfs] > [ 8280.188939] [<ffffffff960cf1b0>] ? wake_up_state+0x20/0x20 > [ 8280.188972] [<ffffffffc04abfec>] ? xfsaild+0x16c/0x6f0 [xfs] > [ 8280.189003] [<ffffffffc049ff5c>] xfs_log_force+0x2c/0x70 [xfs] > [ 8280.189035] [<ffffffffc04abe80>] ? xf...
2018 Jun 01
0
[ovirt-users] Re: Gluster problems, cluster performance issues
...ff93203a2eeeb0 0 1061 2 >> 0x00000000 >> [ 8280.188843] Call Trace: >> [ 8280.188857] [<ffffffff960a3a2e>] ? try_to_del_timer_sync+0x5e/0x90 >> [ 8280.188864] [<ffffffff96713f79>] schedule+0x29/0x70 >> [ 8280.188932] [<ffffffffc049fe36>] _xfs_log_force+0x1c6/0x2c0 [xfs] >> [ 8280.188939] [<ffffffff960cf1b0>] ? wake_up_state+0x20/0x20 >> [ 8280.188972] [<ffffffffc04abfec>] ? xfsaild+0x16c/0x6f0 [xfs] >> [ 8280.189003] [<ffffffffc049ff5c>] xfs_log_force+0x2c/0x70 [xfs] >> [ 8280.189035] [<ffffffffc0...
2018 May 30
2
[ovirt-users] Re: Gluster problems, cluster performance issues
...isables this message. >>> [10679.527144] glusterclogro D ffff97209832bf40 0 14933 1 >>> 0x00000080 >>> [10679.527150] Call Trace: >>> [10679.527161] [<ffffffffb9913f79>] schedule+0x29/0x70 >>> [10679.527218] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs] >>> [10679.527225] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 >>> [10679.527254] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs] >>> [10679.527260] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 >>> [10679.527268] [...
2013 Apr 18
39
Xen blktap driver for Ceph RBD : Anybody wants to test ? :p
Hi, I've been working on a blktap driver that allows access to Ceph RBD block devices without relying on the RBD kernel driver, and it has finally got to a point where it works and is testable. Some of the advantages are: - Easier to update to newer RBD versions - Allows functionality only available in the userspace RBD library (write cache, layering, ...) - Less issue when
2018 May 30
1
[ovirt-users] Re: Gluster problems, cluster performance issues
...rnel/hung_task_timeout_secs" > disables this message. > [10679.527144] glusterclogro D ffff97209832bf40 0 14933 1 > 0x00000080 > [10679.527150] Call Trace: > [10679.527161] [<ffffffffb9913f79>] schedule+0x29/0x70 > [10679.527218] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs] > [10679.527225] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 > [10679.527254] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs] > [10679.527260] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 > [10679.527268] [<ffffffffb992076f>] ? syst...
2018 May 30
0
[ovirt-users] Re: Gluster problems, cluster performance issues
...t; [10679.527144] glusterclogro D ffff97209832bf40 0 14933 1 >>>>> 0x00000080 >>>>> [10679.527150] Call Trace: >>>>> [10679.527161] [<ffffffffb9913f79>] schedule+0x29/0x70 >>>>> [10679.527218] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 >>>>> [xfs] >>>>> [10679.527225] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 >>>>> [10679.527254] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs] >>>>> [10679.527260] [<ffffffffb944f0e7>] do_f...