similar to: poor write performance or locking issues with ocfs2

Displaying 20 results from an estimated 70 matches similar to: "poor write performance or locking issues with ocfs2"

2013 Apr 17
1
Bug#701744: We see the same with Debian wheezy.
Hello, we see the same with Debian Wheezy.
Apr 16 16:02:25 hypervisor3 kernel: [2441115.664216] vif vif-17-0: vif17.0: Frag is bigger than frame.
Apr 16 16:02:25 hypervisor3 kernel: [2441115.664267] vif vif-17-0: vif17.0: fatal error; disabling device
Apr 16 16:02:25 hypervisor3 kernel: [2441115.675667] BUG: unable to handle kernel NULL pointer dereference at 00000000000008b8
Apr 16 16:02:25
2013 Oct 04
0
ANNOUNCE: cifs-utils release 6.2 ready for download
-----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1
Again, nothing earth-shattering in this release. Mostly some minor bugfixes and cleanups. Some highlights:
- setcifsacl can now work without a plugin
- systemd-ask-password is found using $PATH now
- cifs.upcall now works with KEYRING: credcaches
Go forth and download!
webpage: https://wiki.samba.org/index.php/LinuxCIFS_utils
tarball:
2006 Apr 02
1
Zeroing freed blocks
A couple of years ago there was a discussion on lkml under the thread 'PATCH - ext2fs privacy (i.e. secure deletion) patch' about zapping deleted data in the filesystem as a security mechanism. The discussion wandered off into how 'chattr +s' could be implemented and whether encrypting filesystems wouldn't be a better solution to the problem. I've been maintaining a
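For context, the flag under discussion is set with chattr; a minimal illustration (hypothetical file name; note that mainline ext2/3/4 never actually implemented secure deletion for this attribute, which is part of what the thread was debating):
$ touch secret.txt
$ chattr +s secret.txt    # request secure deletion (zeroing) when the file is unlinked
$ lsattr secret.txt       # the 's' attribute should now be listed
$ chattr -s secret.txt    # clear it again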
2011 Sep 01
2
CentOS 6.0 and 3ware 9650SE series RAID Performance
Hello, Does anyone have experience using a 3ware 9650SE series RAID controller on CentOS 6.0? I am getting very sporadic throughput with moderately sized files (0.5-2GB) on ext3. I have tried most of the mount-time tuning options:
* noatime
* trying different journal types
* setting commit=120 - helped a little
Even after these optimizations it doesn't seem like the RAID array is working
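For reference, a hedged sketch of the kind of mount line being tried above (device and mount point are placeholders; data=writeback relaxes ordering guarantees, so treat it as a benchmarking experiment rather than a recommendation):
$ mount -t ext3 -o noatime,data=writeback,commit=120 /dev/sdb1 /mnt/raid
$ grep /mnt/raid /proc/mounts    # confirm the options actually took effect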
2008 Sep 11
4
Some more debug stuff
Added two debugfs entries... one to dump o2hb livenodes and the other to dump osb.
$ cat /sys/kernel/debug/ocfs2/BC4F4550BEA74F92BDCC746AAD2EC0BF/fs_state
Device => Id: 8,65  Uuid: BC4F4550BEA74F92BDCC746AAD2EC0BF  Gen: 0xA02024F2  Label: sunil-xattr
Volume => State: 1  Flags: 0x0
Sizes => Block: 4096  Cluster: 4096
Features => Compat: 0x1  Incompat: 0x350  ROcompat: 0x1
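For anyone reproducing this, the entries live under debugfs; a minimal sketch, assuming debugfs is not already mounted and reusing the UUID from the dump above:
$ mount -t debugfs none /sys/kernel/debug    # usually already mounted by the init scripts
$ ls /sys/kernel/debug/ocfs2/BC4F4550BEA74F92BDCC746AAD2EC0BF/
$ cat /sys/kernel/debug/ocfs2/BC4F4550BEA74F92BDCC746AAD2EC0BF/fs_state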
2010 Oct 09
2
[PATCH 1/2] Ocfs2: Add a mount option "coherency=*" for O_DIRECT writes.
Currently the default behavior of O_DIRECT writes is to allow concurrent writes among nodes with no cluster coherency guaranteed (no EX locks are taken); this hurts buffered reads on other nodes, which can read stale data from their caches. The new mount option introduces a choice between two behaviors for O_DIRECT writes:
* coherency=full, as the default value, will disallow concurrent
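A hedged sketch of how the option would be used once the patch is in (device and mount point are placeholders; the name of the relaxed mode, coherency=buffered, is taken from the final version of this series and may differ in this early posting):
$ mount -t ocfs2 -o coherency=full /dev/sdb1 /u01      # default: take EX cluster locks for O_DIRECT writes
$ mount -t ocfs2 -o coherency=buffered /dev/sdb1 /u01  # prior behavior: no cluster lock, application must serialize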
2015 Dec 11
0
debian ocfs2 debug fs_locks
2005 Feb 22
2
ext3 compatibility between 2.4 and 2.6 kernels
Hello-- We have a system where a central server formats removable hard disks, which are then booted in an embedded system running a highly modified RH9. The removable disks themselves contain boot, root, and data filesystems. The problem we've encountered after upgrading to FC3 / kernel 2.6 on the central server is that the 2.4 kernel in the embedded system cannot read the root filesystem,
2014 Sep 10
1
How to unlock a blocked resource? Thanks
Hi All: As we test with two nodes in one OCFS2 cluster, the cluster hangs, possibly because of a deadlock. Using the debugfs.ocfs2 tool, we found that one resource has been held by one node for a long time while another node is still waiting for it, so the cluster hangs.
debugfs.ocfs2 -R "fs_locks -B" /dev/dm-0
debugfs.ocfs2 -R "dlm_locks LOCKID_XXX" /dev/dm-0
How
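For readers hitting the same hang, the flow with these commands looks roughly like this (the lock name and device are placeholders; the busy lock resource reported by fs_locks -B is the one to feed into dlm_locks):
$ debugfs.ocfs2 -R "fs_locks -B" /dev/dm-0            # list only busy lock resources on this node
$ debugfs.ocfs2 -R "dlm_locks LOCKID_XXX" /dev/dm-0   # show which node holds the lock and who is queued
$ debugfs.ocfs2 -R "fs_locks LOCKID_XXX" /dev/dm-0    # local state for that one lock resource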
2009 Jan 14
15
Backport patches to ocfs2 1.4 tree from mainline
Found 15 patches (out of 162) that appeared relevant to ocfs2 1.4. Please review. Sunil
2010 Oct 08
23
O2CB global heartbeat - hopefully final drop!
All, This is hopefully the final drop of the patches for adding global heartbeat to the o2cb stack. The diff from the previous set is here: http://oss.oracle.com/~smushran/global-hb-diff-2010-10-07 Implemented most of the suggestions provided by Joel and Wengang. The most important one was to activate the feature only at the end. Also, got a mostly clean run with checkpatch.pl. Sunil
2008 Sep 01
1
(no subject)
Hello, We just experienced a hang that looks superficially very similar to http://www.mail-archive.com/ocfs2-users at oss.oracle.com/msg02359.html There are 3 nodes in the cluster, running ocfs2-1.4.1 on RHEL 5.2. Versions and uname output are in the attached text file, which also includes fs_locks dumps and various other diagnostics. The lockup happened when we were restarting a java application that was
2018 Dec 28
0
[PATCH nbdkit 9/9] cache: Implement cache-max-size and method of reclaiming space from the cache.
The original plan was to have a background thread doing the reclaim. However, that cannot work given the design of filters, because a background thread cannot access the next_ops struct, which is only available during requests. Therefore we spread the work over the request threads. Each blk_* function checks whether there is work to do and, if so, reclaims up to two blocks from the cache
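For context, a hedged example of turning on the feature this patch adds (image path and sizes are placeholders; parameter spellings follow this series and the related cache-on-read option, so double-check against the nbdkit-cache-filter man page for your version):
$ nbdkit --filter=cache file file=disk.img cache-max-size=1G cache-on-read=true
$ qemu-img info nbd://localhost    # any NBD client then reads through the size-bounded cache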
2014 Sep 26
2
One node hangs up issue requiring good idea, thanks
Hi, all, As we use OCFS2, the network is not good. When the converting request message can't be sent to the other node, a node hangs up, still waiting for the DLM.
CAS2/logdir/var/log/syslog.1-6778-Sep 16 20:57:16 CAS2 kernel: [516366.623623] o2net: Connection to node CAS1 (num 1) at 10.172.254.1:7100 has been idle for 30.87 secs, shutting it down.
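The 30-second figure in that log is the O2CB network idle timeout; a hedged sketch of where it is tuned (file locations and exact variable names can vary by distribution and o2cb version):
$ grep -E 'TIMEOUT|THRESHOLD' /etc/default/o2cb    # Debian/Ubuntu; RHEL-style systems use /etc/sysconfig/o2cb
$ service o2cb configure    # interactive prompts for O2CB_IDLE_TIMEOUT_MS, keepalive and reconnect delays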
2019 Jan 01
0
[PATCH nbdkit v2 4/4] cache: Implement cache-max-size and method of reclaiming space from the cache.
The original plan was to have a background thread doing the reclaim. However, that cannot work given the design of filters, because a background thread cannot access the next_ops struct, which is only available during requests. Therefore we spread the work over the request threads. Each blk_* function checks whether there is work to do and, if so, reclaims up to two blocks from the cache
2019 Jan 03
0
[PATCH nbdkit v3 2/2] cache: Implement cache-max-size and method of reclaiming space from the cache.
The original plan was to have a background thread doing the reclaim. However, that cannot work given the design of filters, because a background thread cannot access the next_ops struct, which is only available during requests. Therefore we spread the work over the request threads. Each blk_* function checks whether there is work to do and, if so, reclaims up to two blocks from the cache
2009 Apr 17
26
OCFS2 1.4: Patches backported from mainline
Please review the list of patches being applied to the ocfs2 1.4 tree. All patches list the mainline commit hash. Thanks Sunil
2007 Oct 17
0
28 commits - configure.ac debian/changelog debian/control debian/copyright debian/.gitignore debian/libswfdec0.dirs debian/libswfdec0.files debian/libswfdec0.shlibs debian/libswfdec-dev.dirs debian/libswfdec-dev.files debian/rules debian/swf-player.dirs
Makefile.am                |  1
configure.ac               |  1
debian/.gitignore          |  1
debian/changelog           | 54 -----------
debian/control             | 36 -------
debian/copyright           | 10 --
debian/libswfdec-dev.dirs  |  2
debian/libswfdec-dev.files |  5 --
2009 Mar 17
33
[git patches] Ocfs2 updates for 2.6.30
Hi, The following patches comprise the bulk of Ocfs2 updates for the 2.6.30 merge window. Aside from larger, more involved fixes, we're adding the following features, which I will describe in the order their patches are mailed. Sunil's exported some more state to our debugfs files, and consolidated some other aspects of our debugfs infrastructure. This will further aid us in debugging