similar to: Strange dmesg messages

Displaying 20 results from an estimated 200 matches similar to: "Strange dmesg messages"

2011 Jan 12
1
Problems with fsck
Hi List, I'd like to share with you what happened yesterday. Kernel 2.6.36.1, ocfs2-tools 1.6.3 (latest). I had an old OCFS2 partition created with a 2.6.32 kernel and ocfs2-tools 1.4.5. I unmounted all partitions on all nodes in order to enable discontig-bg. I then used tunefs to add discontig-bg, inline-data and indexed-dirs. During indexed-dirs tunefs segfaulted, and since then, fsck
2008 Oct 23
2
[PATCH 1/1] OCFS2: fix for nfs getting stale inode.
Ocfs2 supports exporting. PROBLEM: There are two problems. (1) The current version of ocfs2_get_dentry() may read the inode from disk WITHOUT any cross-cluster lock. This may load a stale inode. (2) When deleting an inode, ocfs2_remove_inode() doesn't sync/checkpoint to disk. This may also lead ocfs2_get_dentry() on another node to read out a stale inode. PROBLEM DETAIL: for problem (1), For
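[Editor's note: a loose single-host analogue of problem (2), the missing sync/checkpoint. This is a sketch in plain C, assuming an ordinary file stands in for the inode's on-disk state; update_record and the record layout are invented for illustration, not taken from the patch.]

    #include <sys/types.h>
    #include <unistd.h>

    /* Loose analogue of problem (2): without the fsync() "checkpoint",
     * the new record may exist only in the page cache, and a reader that
     * goes straight to the backing store (here: another cluster node)
     * can still see the stale contents. */
    int update_record(int fd, const char *rec, size_t len)
    {
        if (pwrite(fd, rec, len, 0) != (ssize_t)len)
            return -1;
        /* Make the update durable before anyone else is allowed to
         * trust what is on disk. */
        return fsync(fd);
    }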
2010 Jun 14
3
Diagnosing some OCFS2 error messages
Hello. I am experimenting with OCFS2 on SUSE Linux Enterprise Server 11 Service Pack 1. I am performing various stress tests. My current exercise involves writing to files using a shared-writable mmap() from two nodes. (Each node mmaps and writes to different files; I am not trying to access the same file from multiple nodes.) Both nodes are logging messages like these: [94355.116255]
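[Editor's note: for context, a minimal sketch in C of the kind of shared-writable mmap() writer described here. The mount point, file name, and file size are assumptions for illustration, not taken from the report.]

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        const size_t len = 1 << 20;               /* 1 MiB test file (assumed size) */
        int fd = open("/mnt/ocfs2/testfile", O_RDWR | O_CREAT, 0644);
        if (fd < 0 || ftruncate(fd, len) != 0) {
            perror("open/ftruncate");
            return 1;
        }

        /* MAP_SHARED makes stores visible through the page cache, which
         * is what exercises the filesystem's mmap write path. */
        char *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (p == MAP_FAILED) {
            perror("mmap");
            return 1;
        }

        for (size_t i = 0; i < len; i++)
            p[i] = (char)(i & 0xff);              /* dirty every page */

        msync(p, len, MS_SYNC);                   /* force writeback */
        munmap(p, len);
        close(fd);
        return 0;
    }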
2011 Dec 20
8
ocfs2 - Kernel panic on many write/read from both
Sorry, I'm not copying everything:

TEST-MAIL1# echo "ls //orphan_dir:0000" | debugfs.ocfs2 /dev/dm-0 | wc
debugfs.ocfs2 1.6.4
5239722 26198604 246266859
TEST-MAIL1# echo "ls //orphan_dir:0001" | debugfs.ocfs2 /dev/dm-0 | wc
debugfs.ocfs2 1.6.4
6074335 30371669 285493670
TEST-MAIL2 ~ # echo "ls //orphan_dir:0000" | debugfs.ocfs2 /dev/dm-0 | wc
debugfs.ocfs2 1.6.4
5239722 26198604
2007 Nov 29
1
Troubles with two node
Hi all, I'm running OCFS2 on two systems with OpenSUSE 10.2, connected over Fibre Channel to shared storage (HP MSA1500 + HP ProLiant MSA20). The cluster has two nodes (web-ha1 and web-ha2); sometimes (1 or 2 times a month) OCFS2 stops working on both systems. On the first node I get no errors in the log files, and after a forced shutdown of the first node, on the second I can see
2009 Jan 12
1
Bug in inode deletion code leading to stale inodes
Hello, I've hit a bug in the OCFS2 delete code which results in inodes being left on disk without any links to them. The workload triggering this creates directories on one node and deletes them on another node in the cluster. The inode is not deleted because both nodes bail out from ocfs2_delete_inode() with: "Skipping delete of 100405 because it is in use on other nodes". The scenario which I
2007 Mar 08
4
ocfs2 cluster becomes unresponsive
We are running OCFS2 on SLES9 machines using an FC SAN. Without warning, both nodes become unresponsive. We cannot access either machine via ssh or a terminal (it hangs after typing in the username). However, the machines still respond to pings. This continues until one node is rebooted, at which time the second node resumes normal operations. I am not entirely sure that this is an OCFS2 problem at all
2010 Jan 14
1
another fencing question
Hi, periodically one of the nodes in my two-node cluster gets fenced; here are the logs:

Jan 14 07:01:44 nvr1-rc kernel: o2net: no longer connected to node nvr2-rc.minint.it (num 0) at 1.1.1.6:7777
Jan 14 07:01:44 nvr1-rc kernel: (21534,1):dlm_do_master_request:1334 ERROR: link to 0 went down!
Jan 14 07:01:44 nvr1-rc kernel: (4007,4):dlm_send_proxy_ast_msg:458 ERROR: status = -112
Jan 14 07:01:44
2023 Jun 13
1
[BUG] ocfs2/dlm: possible data races in dlm_drop_lockres_ref_done() and dlm_get_lock_resource()
Hello, Our static analysis tool finds some possible data races in the OCFS2 file system in Linux 6.4.0-rc6. In most calling contexts, variables such as res->lockname.name and res->owner are accessed while holding the lock res->spinlock. Here is an example:

lockres_seq_start() --> Line 539 in dlmdebug.c
  spin_lock(&res->spinlock); --> Line 574 in dlmdebug.c (Lock
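[Editor's note: to illustrate the pattern at issue, a small userspace analogue in C, with a pthread spinlock standing in for the kernel's res->spinlock. The struct and function names here are hypothetical, not from the OCFS2 source.]

    #include <pthread.h>
    #include <stdio.h>
    #include <string.h>

    /* Hypothetical stand-in for a dlm lock resource; by convention the
     * fields below must only be touched while 'lock' is held. */
    struct lockres {
        pthread_spinlock_t lock;
        char name[32];
        int owner;
    };

    /* Correct pattern, as in the cited lockres_seq_start(): take the
     * spinlock around every access to the shared fields. */
    static void print_owner(struct lockres *res)
    {
        pthread_spin_lock(&res->lock);
        printf("%s owned by node %d\n", res->name, res->owner);
        pthread_spin_unlock(&res->lock);
    }

    /* The racy pattern the report flags: reading owner with no lock
     * held races with a concurrent update made under the lock. */
    static int racy_read_owner(struct lockres *res)
    {
        return res->owner;   /* data race: res->lock not taken */
    }

    int main(void)
    {
        struct lockres res;
        pthread_spin_init(&res.lock, PTHREAD_PROCESS_PRIVATE);
        strcpy(res.name, "example");
        res.owner = 1;
        print_owner(&res);
        (void)racy_read_owner(&res);  /* single-threaded here, so harmless */
        pthread_spin_destroy(&res.lock);
        return 0;
    }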
2023 Jun 13
0
[BUG] ocfs2/dlm: possible data races in dlm_drop_lockres_ref_done() and dlm_get_lock_resource()
Hello, Our static analysis tool finds some possible data races in the OCFS2 file system in Linux 6.4.0-rc6. In most calling contexts, variables such as res->lockname.name and res->owner are accessed while holding the lock res->spinlock. Here is an example:

lockres_seq_start() --> Line 539 in dlmdebug.c
  spin_lock(&res->spinlock); --> Line 574 in dlmdebug.c (Lock
2023 Jun 16
1
[BUG] ocfs2/dlm: possible data races in dlm_drop_lockres_ref_done() and dlm_get_lock_resource()
Hi,

On 6/13/23 4:23 PM, Tuo Li wrote:
> Hello,
>
> Our static analysis tool finds some possible data races in the OCFS2 file
> system in Linux 6.4.0-rc6.
>
> In most calling contexts, variables such as res->lockname.name and
> res->owner are accessed while holding the lock res->spinlock. Here is an
> example:
>
> lockres_seq_start() --> Line 539
2009 Mar 18
2
shutdown by o2net_idle_timer causes Xen to hang
Hello, we've had some serious trouble with a two-node Xen-based OCFS2 cluster. In brief: we had two incidents where one node detected an idle timeout and shut the other node down, which caused the other node and the Dom0 to hang. Both times this could only be resolved by rebooting the whole machine using the built-in IPMI card. All machines (including the other DomUs) run CentOS 5.2
2009 May 12
2
add error check for ocfs2_read_locked_inode() call
After upgrading from 2.6.28.10 to 2.6.29.3 I saw the following new errors in the kernel log:

May 12 14:46:41 falcon-cl5
May 12 14:46:41 falcon-cl5 (6757,7):ocfs2_read_locked_inode:466 ERROR: status = -22

Only one node in the cluster has the volumes mounted:

/dev/sde on /home/apache/users/D1 type ocfs2 (rw,_netdev,noatime,heartbeat=local)
/dev/sdd on /home/apache/users/D2 type ocfs2
2009 Mar 03
3
[PATCH 1/1] OCFS2: anti stale inode for nfs (V6)
For NFS exporting, ocfs2_get_dentry() returns the dentry for an fh. ocfs2_get_dentry() may read from disk (when the inode is not in memory) without any cross-cluster lock, which can load a stale inode. This patch fixes the above problem. The solution is that, in case the inode is not in memory, we take the cluster lock (PR) of the alloc inode from which the inode in question is allocated (this causes the node on which
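[Editor's note: as a rough sketch of the "lock before you read" idea behind the patch, a single-host analogue in C that uses flock() where the patch uses a PR (protected-read) cluster lock. The function name and usage are invented for illustration, not from the patch itself.]

    #include <fcntl.h>
    #include <sys/file.h>
    #include <unistd.h>

    /* Single-host analogue of the fix: take a shared (read) lock before
     * reading state that a writer may be rewriting, so the reader cannot
     * observe a stale or half-updated view. flock(LOCK_SH) stands in for
     * the cross-node PR cluster lock taken on the alloc inode. */
    int read_metadata_safely(const char *path, char *buf, size_t len)
    {
        int fd = open(path, O_RDONLY);
        if (fd < 0)
            return -1;

        if (flock(fd, LOCK_SH) != 0) {   /* shared lock, like a PR lock */
            close(fd);
            return -1;
        }
        ssize_t n = read(fd, buf, len);  /* read only while the lock is held */
        flock(fd, LOCK_UN);

        close(fd);
        return n < 0 ? -1 : (int)n;
    }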
2010 Dec 09
2
servers blocked on ocfs2
Hi, we have recently started to use OCFS2 on some RHEL 5.5 servers (ocfs2-1.4.7). Some days ago, two servers sharing an OCFS2 filesystem, and hosting quite a few virtual services, stalled in what seems to be an OCFS2 issue. These are the lines in their messages files:

=====node heraclito (0)========================================
Dec 4 09:15:06 heraclito kernel: o2net: connection to node parmenides
2009 Mar 06
2
[PATCH 1/1] OCFS2: anti stale inode for nfs (for 1.4git)
Back-porting from mainline. For NFS exporting, ocfs2_get_dentry() returns the dentry for an fh. ocfs2_get_dentry() may read from disk (when the inode is not in memory) without any cross-cluster lock, which can load a stale inode. This patch fixes the above problem. The solution is that, in case the inode is not in memory, we take the cluster lock (PR) of the alloc inode from which the inode in question is allocated
2009 Mar 17
33
[git patches] Ocfs2 updates for 2.6.30
Hi, The following patches comprise the bulk of Ocfs2 updates for the 2.6.30 merge window. Aside from larger, more involved fixes, we're adding the following features, which I will describe in the order their patches are mailed. Sunil's exported some more state to our debugfs files, and consolidated some other aspects of our debugfs infrastructure. This will further aid us in debugging
2008 Jul 14
1
Node fence on RHEL4 machine running 1.2.8-2
Hello, We have a four-node RHEL4 RAC cluster running OCFS2 version 1.2.8-2 and the 2.6.9-67.0.4hugemem kernel. The cluster has been really stable since we upgraded to 1.2.8-2 early this year, but this morning, one of the nodes fenced and rebooted itself, and I wonder if anyone could glance at the below remote syslogs and offer an opinion as to why. First, here's the output of
2009 Jul 29
3
Error message while booting system
Hi, when the system boots we get the error message "modprobe: FATAL: Module ocfs2_stackglue not found" in the messages log. Some nodes reboot without any error message.

-------------------------------------------------
Jul 27 10:02:19 alf3 kernel: ip_tables: (C) 2000-2006 Netfilter Core Team
Jul 27 10:02:19 alf3 kernel: Netfilter messages via NETLINK v0.30.
Jul 27 10:02:19 alf3 kernel:
2011 Mar 04
1
node eviction
Hello... I wonder if someone has had a problem like this... a node gets evicted almost on a weekly basis and I have not found the root cause yet....

Mar 2 10:20:57 xirisoas3 kernel: ocfs2_dlm: Node 1 joins domain 129859624F7042EAB9829B18CA65FC88
Mar 2 10:20:57 xirisoas3 kernel: ocfs2_dlm: Nodes in domain ("129859624F7042EAB9829B18CA65FC88"): 1 2 3 4
Mar 3 16:18:02 xirisoas3 kernel: