search for: kroth

Displaying 9 results from an estimated 9 matches for "kroth".

2010 Nov 24 · 2 · maildir maintenance?
Hi, I'm running version 1.2.15 (so no doveadm) with around 6000 maildir users, some of which are very large. For completeness, the details of the setup are as follows:
- The maildirs are stored via NFS.
- The indexes are on a volume local to the dovecot server.
- Only one IMAP server currently.
- A separate sendmail/procmail server delivers via NFS.
I recently wrote the attached script
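The attached script itself is not shown in the excerpt. As a hypothetical sketch of one common maildir maintenance task under this kind of layout (the `/var/mail/<user>/Maildir` structure and the Trash-pruning policy are assumptions, not the poster's actual script):

```shell
#!/bin/sh
# Hypothetical sketch only -- the poster's real script is attached to the
# original message and not reproduced here. This reports (not yet deletes)
# Trash messages older than a cutoff for every user under an assumed
# /var/mail/<user>/Maildir layout.
prune_trash() {
    mailroot=$1
    days=$2
    for user in "$mailroot"/*; do
        # Message files live under cur/ and new/; dovecot index files sit
        # in the maildir root, so they are never touched.
        for sub in cur new; do
            dir="$user/Maildir/.Trash/$sub"
            if [ -d "$dir" ]; then
                find "$dir" -type f -mtime +"$days" -print
            fi
        done
    done
}

# Example: prune_trash /var/mail 90
```

Swapping `-print` for `-delete` only after verifying the output keeps a dry-run step between the sketch and anything destructive.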
2010 Apr 26 · 1 · slowdown - fragmentation?
[This email is either empty or too large to be displayed at this time]
2015 Mar 31 · 2 · couple of ceph/rbd questions
Hi, I've recently been working on setting up a set of libvirt compute nodes that will be using a ceph rbd pool for storing vm disk image files. I've run into a couple of issues. First, per the standard ceph documentation examples [1], the way to add a disk is to create a block in the VM definition XML that looks something like this: <disk type='network'
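The excerpt cuts the XML off at the opening tag. For reference, a complete rbd-backed `<disk>` element per the libvirt domain XML format looks roughly like this (pool/image name, monitor host, and secret UUID are placeholders):

```xml
<disk type='network' device='disk'>
  <driver name='qemu' type='raw'/>
  <source protocol='rbd' name='libvirt-pool/vm01-disk0'>
    <host name='ceph-mon1.example.com' port='6789'/>
  </source>
  <auth username='libvirt'>
    <secret type='ceph' uuid='00000000-0000-0000-0000-000000000000'/>
  </auth>
  <target dev='vda' bus='virtio'/>
</disk>
```

The `<auth>` block is only needed when cephx authentication is enabled; the secret must be registered with libvirt separately.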
2009 Aug 25 · 1 · Clear Node
I am trying to make a mysql standby setup with 2 machines, one primary and one hot standby, which both share disk for the data directory. I used tunefs.ocfs2 to change the number of open slots to 1, since only one machine should be accessing it at a time. This way it is fairly safe to assume one shouldn't clobber the other's data. The only problem is that if one node dies, the mount lock still
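For reference, the slot change described is done with tunefs.ocfs2 against an unmounted filesystem; a sketch with a placeholder device (flags per the tunefs.ocfs2 man page):

```shell
# /dev/sdb1 is a placeholder; the filesystem must be unmounted on all
# nodes before changing the number of node slots.
tunefs.ocfs2 -N 1 /dev/sdb1
```

With a single slot, a second node attempting to mount the filesystem is refused outright rather than racing for the data, which is the safety property the poster is after.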
2009 May 20 · 1 · [Fwd: Re: Unable to fix corrupt directories with fsck.ocfs2]
Robin, To me, "anyone else" includes the kernel of the current node. Well, if it is unclear, the man page should be revised. Also, a big warning message in fsck.ocfs2 would be nice; after all, we all make mistakes. But this is only my two cents. Running fsck on any journaled filesystem will replay the journal. This will cause corruption if the filesystem is mounted read/write, even if the
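A minimal guard sketch for the warning above: refuse to fsck a device that is still mounted. The second argument (a mounts-table path, defaulting to /proc/mounts) is an assumption added so the check is testable; note this only sees the local node, and on a cluster filesystem every node must be checked before running fsck.

```shell
#!/bin/sh
# Guard sketch: never run fsck.ocfs2 against a mounted device. Checks only
# the LOCAL node's mount table -- other cluster nodes must be checked too.
safe_to_fsck() {
    dev=$1
    mounts=${2:-/proc/mounts}
    if grep -q "^$dev " "$mounts"; then
        echo "refusing: $dev is mounted" >&2
        return 1
    fi
}

# Example: safe_to_fsck /dev/sdb1 && fsck.ocfs2 -y /dev/sdb1
```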
2015 Mar 31 · 0 · Re: couple of ceph/rbd questions
On 03/31/2015 11:47 AM, Brian Kroth wrote: > Hi, I've recently been working on setting up a set of libvirt compute > nodes that will be using a ceph rbd pool for storing vm disk image > files. I've got a couple of issues I've run into. > > First, per the standard ceph documentation examples [1], the way to...
2009 Aug 21 · 1 · Ghost files in OCFS2 filesystem
Hi, I have encountered an issue on an Oracle RAC cluster using ocfs2; the OS is RH Linux 5.3. One of the ocfs2 filesystems appears to be 97% full, yet when I look at the files in there they only equal about 13 gig (the filesystem is 40 gig in size). I have seen this sort of thing in HP-UX, but that involved a process whose output file was deleted but the process hadn't been stopped properly; once
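The HP-UX behavior described holds on Linux too: space from a deleted file is not reclaimed until the last process holding it open closes it. On the affected node, `lsof +L1 <mountpoint>` lists open files whose link count is zero; the sketch below demonstrates the effect self-contained via /proc:

```shell
#!/bin/sh
# Demonstrate space held by a deleted-but-open file on the local node.
f=$(mktemp)
exec 3>"$f"           # hold the file open on descriptor 3
rm "$f"               # unlink it; the blocks remain allocated
ls -l /proc/$$/fd/3   # the symlink target now ends in "(deleted)"
exec 3>&-             # closing the descriptor finally frees the space
```

On a cluster filesystem the holder can be a process on any node, so each node needs checking before concluding the space is truly ghosted.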
2010 Jun 14 · 3 · Diagnosing some OCFS2 error messages
Hello. I am experimenting with OCFS2 on SUSE Linux Enterprise Server 11 Service Pack 1. I am performing various stress tests. My current exercise involves writing to files using a shared-writable mmap() from two nodes. (Each node mmaps and writes to different files; I am not trying to access the same file from multiple nodes.) Both nodes are logging messages like these: [94355.116255]
2008 Oct 22 · 2 · Another node is heartbeating in our slot! errors with LUN removal/addition
Greetings, Last night I manually unpresented and deleted a LUN (a SAN snapshot) that was presented to one node in a four node RAC environment running OCFS2 v1.4.1-1. The system then rebooted with the following error: Oct 21 16:45:34 ausracdb03 kernel: (27,1):o2hb_write_timeout:166 ERROR: Heartbeat write timeout to device dm-24 after 120000 milliseconds Oct 21 16:45:34 ausracdb03 kernel:
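One frequent cause of this symptom is unpresenting a LUN before flushing and deleting its paths on the host: I/O retries against the stale paths can stall other devices, including the o2hb heartbeat device, long enough to trip the timeout. A hedged sketch of the usual removal order (device and map names are placeholders):

```shell
# Placeholder names; run on the host BEFORE unpresenting the LUN on the
# array side.
multipath -f mpath24                    # flush the multipath map, if any
blockdev --flushbufs /dev/sdx           # flush buffers on each path device
echo 1 > /sys/block/sdx/device/delete   # remove each SCSI path from the kernel
# Only after this should the LUN be unpresented/deleted on the SAN.
```

Whether this was the trigger in the poster's case cannot be confirmed from the excerpt alone.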