Search for: masterchieflian

Displaying 5 results from an estimated 5 matches for "masterchieflian".

2017 Oct 27 · 3 · ctdb vacuum timeouts and record locks
...nd the container is on the SSD with the OS. Running mount from within the container shows: /dev/sda1 on / type ext4 (rw,relatime,errors=remount-ro,data=ordered) However, the gluster native client is a FUSE-based system, so the data is stored on a FUSE filesystem which is mounted in the container: masterchieflian:ctfngluster on /CTFN type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,allow_other,max_read=131072) Since this is where the files that become inaccessible are, perhaps this is really where the problem is, and not with the locking.tdb file? I will investigate file locks on the gluster...
2017 Nov 02 · 0 · ctdb vacuum timeouts and record locks
...h the OS.  running mount from within the container shows: > > /dev/sda1 on / type ext4 (rw,relatime,errors=remount-ro,data=ordered) > > However, the gluster native client is a fuse-based system, so the data > is stored on a fuse system which is mounted in the container: > > masterchieflian:ctfngluster on /CTFN type fuse.glusterfs > (rw,relatime,user_id=0,group_id=0,allow_other,max_read=131072) > > Since this is where the files that become inaccessible are, perhaps this > is really where the problem is, and not with the locking.tdb file?  I > will investigate about...
2017 Nov 02 · 2 · ctdb vacuum timeouts and record locks
...from within the container shows: >> >> /dev/sda1 on / type ext4 (rw,relatime,errors=remount-ro,data=ordered) >> >> However, the gluster native client is a fuse-based system, so the data >> is stored on a fuse system which is mounted in the container: >> >> masterchieflian:ctfngluster on /CTFN type fuse.glusterfs >> (rw,relatime,user_id=0,group_id=0,allow_other,max_read=131072) >> >> Since this is where the files that become inaccessible are, perhaps >> this is really where the problem is, and not with the locking.tdb >> file?  I will...
2017 Oct 27 · 2 · ctdb vacuum timeouts and record locks
Hi List, I set up a ctdb cluster a couple of months back. Things seemed pretty solid for the first 2-3 weeks, but then I started getting reports of people not being able to access files, or sometimes directories. It has taken me a while to figure some things out, but it seems the common denominator to this happening is vacuuming timeouts for locking.tdb in the ctdb log, which might go on...
2017 Sep 28 · 2 · imapc and masteruser
...i dovecotserver -n # 2.2.31 (65cde28): /usr/local/etc/dovecot/dovecot.conf # Pigeonhole version 0.4.19 (e5c7051) # OS: Linux 4.9.0-3-amd64 x86_64 Debian 9.1 auth_debug = yes auth_debug_passwords = yes auth_master_user_separator = * auth_verbose_passwords = plain hostname = imap.ctfn.ca imapc_host = masterchieflian.ctfn.ca imapc_master_user = %u imapc_password = # hidden, use -P to show it imapc_port = 9993 imapc_ssl = imaps instance_name = dovecotserver lda_mailbox_autocreate = yes lda_mailbox_autosubscribe = yes listen = 192.168.120.70 log_path = /dev/stderr login_greeting = CTFN IMAP server mail_debug = y...