similar to: ocfs2 bug?

Displaying 20 results from an estimated 30000 matches similar to: "ocfs2 bug?"

2023 May 04
1
[PATCH 2/2] ocfs2: add error handling path when jbd2 enter ABORT status
On Thu, May 04, 2023 at 05:41:29PM +0800, Joseph Qi wrote, quoting the earlier same-day exchange with Heming Zhao (replies at 4:02 PM, 3:34 PM, 2:21 PM, and 10:27 AM):
2009 Apr 15
1
hang with fsdlm
Using fsdlm/ocfs2_controld.cman, I've rerun the test I've been having problems with on 2.6.30-rc1. After running for several minutes in the same directory on three nodes, the test hangs, and I collect the following information:

```
bull-01
-------
3053 S< [ocfs2dc]    ocfs2_downconvert_thread
3054 S< [dlm_astd]   dlm_astd
3055 S< [dlm_scand]
```
2023 May 05
1
[PATCH 2/2] ocfs2: add error handling path when jbd2 enter ABORT status
On 5/5/23 12:20 AM, Heming Zhao wrote, quoting the May 4 exchange with Joseph Qi (replies at 5:41 PM, 4:02 PM, 3:34 PM, 2:21 PM, and 10:27 AM):
2023 May 08
1
[PATCH 2/2] ocfs2: add error handling path when jbd2 enter ABORT status
Sorry for the late reply; I have been a bit busy recently. On Fri, May 05, 2023 at 11:42:51AM +0800, Joseph Qi wrote, quoting the May 4-5 exchange with Heming Zhao:
2023 May 09
1
[PATCH 2/2] ocfs2: add error handling path when jbd2 enter ABORT status
On 5/9/23 12:40 AM, Heming Zhao wrote: > Sorry for the late reply; I have been a bit busy recently. > On Fri, May 05, 2023 at 11:42:51AM +0800, Joseph Qi wrote, quoting the earlier May 4-5 exchange:
2014 May 06
0
poor write performance or locking issues with ocfs2
Hello all, I've got heavy trouble with my ocfs2 environment. The cluster filesystem worked fine for about 3-6 weeks after the initial setup, but for the past week performance issues have been occurring. I've already searched for a long time on Google and on this mailing list, but I wasn't able to find any solution. I've found a lot of posts with the "same" problems but without the magic answer :-)
2008 Jul 21
5
OCFS processes active after a umount [SEC=UNOFFICIAL]
Hello, I have two OCFS file systems mounted at /ocfs_1 and /ocfs_2. I have unmounted both OCFS file systems and was then trying to offline and unload OCFS. The offline command failed with:

```
# ./o2cb offline
Stopping O2CB cluster ocfs2: Failed
Unable to stop cluster as heartbeat region still active
```

Looking at the processes on this box shows a number of OCFS processes are still active -
2004 Jun 03
0
[BUG] lockres already get by self
When running iozone on ocfs2, after about half an hour, the call trace prints and iozone hangs. From the call trace, the reason is the "BUG()" in ocfs_acquire_lockres:

```
int ocfs_acquire_lockres (ocfs_lock_res * lockres, __u32 timeout)
{
	if (lockres->thread_id != mypid) {
		...
	} else {
		printk("lockres in_use=%d, pid=%d, mypid=%d\n",
		       lockres->in_use, lockres->thread_id,
```
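The fragment above shows the shape of the check: the lock resource records the owning thread id, and acquiring a lockres the current thread already holds trips the BUG(). Here is a self-contained C model of that self-deadlock detection; it only loosely follows the old ocfs code, and the names (lock_res, acquire_lockres) are invented for illustration:

```c
#include <stdio.h>
#include <assert.h>

/* Toy lock resource tracking its current owner, loosely modeled on the
 * ocfs_lock_res fields shown above (thread_id, in_use); names invented. */
typedef struct {
    int in_use;
    long thread_id;   /* pid of the current holder, 0 if free */
} lock_res;

static int acquire_lockres(lock_res *res, long mypid)
{
    if (res->thread_id != mypid) {
        /* normal path: take (or wait for) the resource */
        res->thread_id = mypid;
        res->in_use++;
        return 0;
    }
    /* The thread already holds the lockres: the "already get by self"
     * condition in the report, which the real code turned into BUG(). */
    printf("lockres in_use=%d, pid=%ld, mypid=%ld\n",
           res->in_use, res->thread_id, mypid);
    assert(0 && "lockres already acquired by self");
    return -1;
}

int main(void)
{
    lock_res res = { 0, 0 };
    acquire_lockres(&res, 1234);  /* first acquire succeeds */
    acquire_lockres(&res, 1234);  /* second acquire from same pid trips it */
    return 0;
}
```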
2023 Apr 30
3
[PATCH 2/2] ocfs2: add error handling path when jbd2 enter ABORT status
fstests generic cases 347, 361, 628, and 629 trigger the same issue: when jbd2 enters the ABORT state, ocfs2 ignores it and keeps committing the journal. This commit gives ocfs2 the ability to handle the jbd2 ABORT case.

Signed-off-by: Heming Zhao <heming.zhao at suse.com>
---
 fs/ocfs2/alloc.c      | 10 ++++++----
 fs/ocfs2/journal.c    |  5 +++++
 fs/ocfs2/localalloc.c |  3 +++
 3 files changed, 14
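The pattern the patch description implies is checking the journal's abort state before starting or committing a transaction and surfacing an error instead of pressing on; in jbd2 that check is is_journal_aborted() and the conventional error is -EROFS. Below is a minimal self-contained C model of that pattern, not the actual ocfs2 change; journal_t here, the JOURNAL_ABORT flag, and start_transaction() are stand-ins invented for illustration:

```c
#include <stdio.h>
#include <errno.h>

/* Invented stand-in for the journal abort state; in the kernel, jbd2
 * tracks this in the journal flags and is_journal_aborted() tests it. */
#define JOURNAL_ABORT 0x1

typedef struct {
    unsigned int flags;
} journal_t;

/* Refuse to open a new transaction on an aborted journal instead of
 * silently continuing to commit (the behavior the patch removes). */
static int start_transaction(const journal_t *journal)
{
    if (journal->flags & JOURNAL_ABORT)
        return -EROFS;  /* propagate the error to the caller */
    /* ... reserve credits and hand out a handle ... */
    return 0;
}

int main(void)
{
    journal_t aborted = { .flags = JOURNAL_ABORT };
    printf("start_transaction on aborted journal -> %d (-EROFS is %d)\n",
           start_transaction(&aborted), -EROFS);
    return 0;
}
```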
2010 Aug 19
0
[GIT PULL] ocfs2 changes for 2.6.36, part 2.
Linus et al, Here is the second batch of ocfs2 changes for 2.6.36. We've ironed out all of the ordering with the extN/jbd2 folks, and the changes have stewed for a little while as well. There's nothing large in here. ocfs2 has long supported devices larger than 2^32 sectors in the code; we now toggle that capability on. Tao has added readahead to our CoW operations. We also have one more ECC fix
2010 Oct 22
0
[GIT PULL] ocfs2 changes for 2.6.37
Linus, et al, Here are the ocfs2 changes for 2.6.37. There are three major additions. Tao Ma has added readahead to our CoW operations. Sunil Mushran has added a global heartbeat mode, allowing one device heartbeat to support multiple ocfs2 mounts. Finally, Patrick J. LoPresti has done the final work to enable ocfs2 mounts on devices larger than 16TB. The ocfs2 disk format has always
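For context on the sizes mentioned in these two pull requests: 2^32 sectors of 512 bytes is 2 TiB, and 16 TiB corresponds to 2^32 blocks of 4 KiB; the 4 KiB-block reading of the 16 TiB ceiling is an assumption here, not something the pull text states. A quick check of the arithmetic:

```c
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* 2^32 sectors of 512 bytes: the boundary toggled on for 2.6.36. */
    uint64_t bytes_2tib = ((uint64_t)1 << 32) * 512;
    printf("2^32 * 512 B = %llu bytes = %llu TiB\n",
           (unsigned long long)bytes_2tib,
           (unsigned long long)(bytes_2tib >> 40));

    /* 2^32 blocks of 4 KiB = 16 TiB: assumed source of the old ceiling. */
    uint64_t bytes_16tib = ((uint64_t)1 << 32) * 4096;
    printf("2^32 * 4 KiB = %llu bytes = %llu TiB\n",
           (unsigned long long)bytes_16tib,
           (unsigned long long)(bytes_16tib >> 40));
    return 0;
}
```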
2012 Jun 14
0
[ocfs2-announce] OCFS2 1.4.10-1 released
All, We are pleased to announce the release of OCFS2 1.4.10-1 and OCFS2 tools 1.6.3-2 for Oracle Linux 5 Update 7 and higher and Red Hat Enterprise Linux 5 Update 7 and higher. Oracle's Unbreakable Linux Network users who subscribe to the "OCFS2 1.4 packages for Enterprise Linux 5" channel can upgrade to this release by running up2date. Red Hat's Enterprise Linux 5
2009 Mar 06
0
[PATCH 1/1] ocfs2: recover orphans in offline slots during recovery and mount
During recovery, a node recovers orphans in its slot and the dead node(s). But if the dead nodes were holding orphans in offline slots, those will be left unrecovered. If the dead node is the last one to die, is holding orphans in other slots, and is the first one to mount, then it only recovers its own slot, which leaves orphans in offline slots. This patch queues complete_recovery
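As a rough sketch of the behavior this patch description calls for, and not the actual ocfs2 code: recovery must consider every slot, including offline ones, rather than only the recovering node's own slot and the dead node's, and queue orphan replay for each. Everything below (struct slot, queue_offline_slot_recovery) is invented for illustration; the real work item the text names is complete_recovery:

```c
#include <stdio.h>
#include <stdbool.h>

#define NUM_SLOTS 4

struct slot {
    bool online;        /* a node currently owns this slot */
    int  orphan_count;  /* unlinked-but-open inodes parked here */
};

/* Queue orphan replay for every slot that is not actively owned,
 * mirroring the idea of queuing recovery-completion work for offline
 * slots during recovery/mount (illustrative model only). */
static void queue_offline_slot_recovery(struct slot slots[], int n)
{
    for (int i = 0; i < n; i++) {
        if (!slots[i].online && slots[i].orphan_count > 0) {
            printf("queue recovery for offline slot %d (%d orphans)\n",
                   i, slots[i].orphan_count);
            slots[i].orphan_count = 0;  /* replayed */
        }
    }
}

int main(void)
{
    struct slot slots[NUM_SLOTS] = {
        { .online = true,  .orphan_count = 0 },
        { .online = false, .orphan_count = 3 },  /* dead node's leftovers */
        { .online = false, .orphan_count = 1 },
        { .online = true,  .orphan_count = 0 },
    };
    queue_offline_slot_recovery(slots, NUM_SLOTS);
    return 0;
}
```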
2009 Apr 07
1
Backport to 1.4 of patch that recovers orphans from offline slots
The following patch is a backport, from mainline to 1.4, of the patch that recovers orphans from offline slots. mainline patch: 0001-Patch-to-recover-orphans-in-offline-slots-during-rec.patch Thanks, --Srini
2009 Mar 06
1
[PATCH 1/1] Patch to recover orphans in offline slots during recovery and mount (revised)
During recovery, a node recovers orphans in its slot and the dead node(s). But if the dead nodes were holding orphans in offline slots, those will be left unrecovered. If the dead node is the last one to die, is holding orphans in other slots, and is the first one to mount, then it only recovers its own slot, which leaves orphans in offline slots. This patch queues complete_recovery
2009 Mar 04
2
[PATCH 1/1] Patch to recover orphans in offline slots during recovery and mount
During recovery, a node recovers orphans in its slot and the dead node(s). But if the dead nodes were holding orphans in offline slots, those will be left unrecovered. If the dead node is the last one to die, is holding orphans in other slots, and is the first one to mount, then it only recovers its own slot, which leaves orphans in offline slots. This patch queues complete_recovery
2023 Apr 30
2
[PATCH 1/2] ocfs2: fix missing reset j_num_trans for sync
fstests generic cases 266, 272, and 281 trigger a hang at umount. I use 266 to describe the root cause.

```
49 _dmerror_unmount
50 _dmerror_mount
51
52 echo "Compare files"
53 md5sum $testdir/file1 | _filter_scratch
54 md5sum $testdir/file2 | _filter_scratch
55
56 echo "CoW and unmount"
57 sync
58 _dmerror_load_error_table
59 urk=$($XFS_IO_PROG -f -c "pwrite
```
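The patch title points at a transaction counter that sync fails to reset; j_num_trans is the field named in the title, but the model below is otherwise invented and only illustrates why a stale count would make umount hang. It is not the ocfs2 implementation:

```c
#include <stdio.h>

/* Toy model of a journal with a running count of uncommitted
 * transactions; in ocfs2 this role is played by j_num_trans. */
struct journal {
    int j_num_trans;
};

static void sync_fs(struct journal *j)
{
    /* ... commit everything ... */
    j->j_num_trans = 0;   /* the missing reset the patch title describes */
}

static void umount_fs(const struct journal *j)
{
    /* A non-zero count makes umount think work is still pending and
     * wait for a commit that will never come: the reported hang. */
    if (j->j_num_trans != 0)
        printf("umount would hang: %d transactions look pending\n",
               j->j_num_trans);
    else
        printf("umount proceeds cleanly\n");
}

int main(void)
{
    struct journal j = { .j_num_trans = 5 };
    sync_fs(&j);   /* without the reset in sync_fs, umount_fs would stall */
    umount_fs(&j);
    return 0;
}
```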
2013 Oct 21
1
Kernel BUG in ocfs2_get_clusters_nocache
Hi, we ran into a BUG() in ocfs2_get_clusters_nocache:

```
[Fri Oct 18 10:52:28 2013] ------------[ cut here ]------------
[Fri Oct 18 10:52:28 2013] Kernel BUG at ffffffffa028ad5a [verbose debug info unavailable]
[Fri Oct 18 10:52:28 2013] invalid opcode: 0000 [#1] SMP
[Fri Oct 18 10:52:28 2013] Modules linked in: vhost_net vhost macvtap macvlan drbd ip6table_filter ip6_tables iptable_filter
```
2010 Oct 23
1
Reg: ocfs2 two node cluster crashed, node2 crashed, when I rebooted node1 for maintenance.
Hi All, We have an ocfs2 two-node cluster with Oracle 11g RAC running. Node2 crashed when I rebooted node1 for maintenance. Please check the log from node2 before it crashed:

```
Oct 23 15:42:25 node2 kernel: ocfs2_dlm: Nodes in domain ("029C02C993E44E90879922E268FB161A"): 2
Oct 23 15:42:29 node2 kernel: ocfs2_dlm: Node 1 leaves domain
```
2015 Oct 27
0
Bind DNS Issues
On 27/10/15 03:57, David Minard wrote:
> G'day All,
>
> I'm running up Samba 4.2.3 with 4 DCs on CentOS 7. There are no
> changes to the default smb.conf file that gets created at provision/DC
> join. "samba-tool drs showrepl" shows all DCs replicating in and out.
> "samba-tool dbcheck" shows no errors.
>
> See below for named.conf.