Hi,

I've been setting ocfs2 up on a two node 'cluster', using gnbd (network block device) to talk to a shared disk on a third node.

Build/install was very straightforward and I have the file system mounted on both nodes just fine. However, I've been rather disappointed by the performance: with just one node actively using the fs (but with the other mounted) I'm finding that file system operations run *very* slowly -- a linux kernel build takes 5 times longer, with the system blocked on IO most of the time.

Looking at the node that is supposedly quiescent, I see that the kernel thread "events/0" is burning 100% of the CPU. Sure enough, syslog is filling up rapidly with messages of the form:

Oct 17 22:01:13 breakout-0 kernel: process_vote: type: MODIFY, lockid: 2751426560, action: (11) <NULL>, num_ident: 1, alive: 1, write: 0, change2719924224, node=1, seqnum=551559, response=1

I guess I could disable the logging, but I'm still rather surprised by the amount of communication between the nodes. Is this normal? I was hoping that ocfs2 used some kind of hierarchical read/write/existence locking on fs subtrees.

Performance with just a single node mounted is perfectly respectable.

Any suggestions?

Thanks,
Ian
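P.S. For what it's worth, the quickest workaround I can see for the log flood is to filter kernel messages out of the main syslog file rather than touch the ocfs2 code. A rough sketch, assuming a stock sysklogd setup where kern.* currently lands in /var/log/messages -- the exact selector line and file names will differ per distro:

  # /etc/syslog.conf (sysklogd) -- keep kernel spam out of the main log.
  # Add "kern.none" to whatever selector currently feeds /var/log/messages ...
  *.info;mail.none;kern.none        -/var/log/messages
  # ... and, if kernel messages are still wanted, send them somewhere separate.
  # The leading "-" means no fsync after each line, which also cuts the IO
  # that the logging itself generates.
  kern.*                            -/var/log/kern.log

Then reload syslogd with "killall -HUP syslogd". That should at least stop the disk from filling up, though obviously it does nothing about the vote traffic itself.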
Hi Ian,

Most of the multinode stuff is very much in flux right now (Kurt is basically rewriting it). Killing syslogd would definitely help ;) but it won't be the silver bullet. Anyway, yeah, multinode right now just sucks, but the new dlm/net/nm stuff will be plugged in soon.

Wim

On Sun, Oct 17, 2004 at 10:11:43PM +0100, Ian Pratt wrote:
> I've been rather disappointed by the performance: with just one node
> actively using the fs (but with the other mounted) I'm finding that
> file system operations run *very* slowly -- a linux kernel build takes
> 5 times longer, with the system blocked on IO most of the time.
On Sun, Oct 17, 2004 at 10:11:43PM +0100, Ian Pratt wrote:
> I guess I could disable the logging, but I'm still rather
> surprised by the amount of communication between the nodes.
> Is this normal? I was hoping that ocfs2 used some kind
> of hierarchical read/write/existence locking on fs subtrees.

This is normal. There is no fancy hierarchicalness.

Joel

--
"Not everything that can be counted counts, and not everything
 that counts can be counted."
        - Albert Einstein

Joel Becker
Senior Member of Technical Staff
Oracle Corporation
E-mail: joel.becker@oracle.com
Phone: (650) 506-8127