Dear all,
today I noticed the following messages in my logs:
(2502,0):__dlm_print_nodes:377 Nodes in my domain
("41AE1AA4C5534E50A93784D2AD94A94D"):
(2502,0):__dlm_print_nodes:381 node 1
(2502,0):__dlm_print_nodes:381 node 2
(2502,0):__dlm_print_nodes:381 node 3
(2502,0):__dlm_print_nodes:381 node 4
(2502,0):__dlm_print_nodes:381 node 5
(2502,0):__dlm_print_nodes:381 node 6
(2502,0):__dlm_print_nodes:381 node 7
(2502,0):__dlm_print_nodes:381 node 8
(2502,0):__dlm_print_nodes:381 node 9
(2502,0):__dlm_print_nodes:381 node 10
fh_update: RETAU150U0/rundata.out already up-to-date!
(27084,0):ocfs2_follow_link:159 ERROR: status = -2
(27344,0):ocfs2_follow_link:159 ERROR: status = -40
(27344,0):ocfs2_follow_link:159 ERROR: status = -40
(27344,0):ocfs2_follow_link:159 ERROR: status = -40
(27344,0):ocfs2_follow_link:159 ERROR: status = -40
(27344,0):ocfs2_follow_link:159 ERROR: status = -40
(27344,0):ocfs2_follow_link:159 ERROR: status = -40
(27344,0):ocfs2_follow_link:159 ERROR: status = -40
(27344,0):ocfs2_follow_link:159 ERROR: status = -40
Is this a problem, or just a dangling soft link?

- 10-node cluster
- the error appears on a couple of hosts
- mixed Fedora Core 4 & 5 environment, fairly up to date
- storage is an Infortrend A16F-R1211
regards
--
davide.rossetti@gmail.com ICQ:290677265 SKYPE:d.rossetti
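As context for the question: the negative status values printed by ocfs2_follow_link are negated Linux errno codes, so they can be decoded on any Linux box. A minimal sketch (the status values -2 and -40 are taken from the log above):

```shell
# Decode negated errno values as seen in ocfs2 log lines.
# On Linux, errno 2 is ENOENT and errno 40 is ELOOP.
for status in -2 -40; do
    code=$(( -status ))
    python3 -c "import errno, os; print($code, errno.errorcode[$code], os.strerror($code))"
done
# On Linux this prints:
#   2 ENOENT No such file or directory
#   40 ELOOP Too many levels of symbolic links
```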
The errors are harmless. The dangling symlink error (ENOENT, -2) has been silenced in mainline and in ocfs2 1.2.3. Maybe we should silence ELOOP (-40) too.

_______________________________________________
Ocfs2-users mailing list
Ocfs2-users@oss.oracle.com
http://oss.oracle.com/mailman/listinfo/ocfs2-users
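Both conditions named in the reply can be reproduced with ordinary symlinks on any Linux filesystem; a minimal sketch (the file names are hypothetical, run in a scratch directory):

```shell
cd "$(mktemp -d)"

# ENOENT (errno 2): a dangling symlink whose target does not exist
ln -s /no/such/target dangling
cat dangling || true    # fails: "No such file or directory"

# ELOOP (errno 40): a symlink loop, hit after too many levels of resolution
ln -s loop_a loop_b
ln -s loop_b loop_a
cat loop_a || true      # fails: "Too many levels of symbolic links"
```

Following either link through the filesystem is what makes the kernel's follow_link path return the corresponding -errno, which is what shows up in the log.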