Displaying 20 results from an estimated 200 matches similar to: "Gluster - replica - Unable to self-heal contents of '/' (possible split-brain)"
2013 May 23
0
Error: Initial status notification not received
Hi ,
We are getting the below error on our Dovecot POP/IMAP server every alternate day. Please help us understand what these errors are related to:
May 23 02:40:05 blade7 dovecot: master: Error: service(pop3-login): Initial status notification not received in 30 seconds, killing the process
May 23 02:40:05 blade7 dovecot: master: Error: service(log): child 8697 killed with signal 9
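For context: this error means a pop3-login process never signalled readiness within the 30-second window, which on a loaded host is often resource starvation rather than a Dovecot bug. A minimal dovecot.conf sketch of the knobs usually involved (Dovecot 2.x setting names; the values are assumptions to tune per host):

```
service pop3-login {
  # Keep a few login processes pre-forked so a loaded host can still
  # answer within the 30 s readiness window:
  process_min_avail = 4
  # Reuse processes instead of forking one per connection:
  service_count = 0
}
```

Whether this helps depends on why the process stalled (I/O wait, ulimits, slow DNS); checking load at the logged timestamps is worth doing first.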
May 23
2013 Aug 02
2
Maildir Synchronization warnings
Hi,
We are repeatedly getting the warnings below for some of our users. Although we have no complaints from them yet, we need to know why these warnings occur.
It would be helpful if someone could explain these warning messages in detail.
-----------------------------------------------------------------------------------------------------------------------------------------------------
Aug 2
2012 Nov 04
3
Problem with CLVM (really openais)
I'm desperately looking for more ideas on how to debug what's going on
with our CLVM cluster.
Background:
4 node "cluster"-- machines are Dell blades with Dell M6220/M6348 switches.
Sole purpose of Cluster Suite tools is to use CLVM against an iSCSI storage
array.
Machines are running CentOS 5.8 with the Xen kernels. These blades host
various VMs for a project. The iSCSI
2012 Dec 18
1
Problem with srptools
Hello,
I have a problem with the srptools when connecting my Dom0 to the SCST
resources over IB.
*When I'm on the Debian kernel (without Dom0)*
root@blade1:/# ibsrpdm -c
id_ext=003048ffff9dd3b4,ioc_guid=003048ffff9dd3b4,dgid=fe80000000000000003048ffff9dd3b5,pkey=ffff,service_id=003048ffff9dd3b4
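The usual next step after `ibsrpdm -c` is to write each printed descriptor into the SRP initiator's `add_target` sysfs file. A sketch of the descriptor format, reusing the line above (the HCA device name `srp-mlx4_0-1` in the comment is an assumption for this host):

```shell
# One comma-separated target descriptor per line, as printed by ibsrpdm -c:
TGT='id_ext=003048ffff9dd3b4,ioc_guid=003048ffff9dd3b4,dgid=fe80000000000000003048ffff9dd3b5,pkey=ffff,service_id=003048ffff9dd3b4'

# Show the field identifying the I/O controller:
printf '%s\n' "$TGT" | tr ',' '\n' | sed -n 's/^ioc_guid=//p'

# On a real host each line is written verbatim to the initiator, e.g.:
#   ibsrpdm -c | while read -r t; do
#       echo "$t" > /sys/class/infiniband_srp/srp-mlx4_0-1/add_target
#   done
```

The kernel then logs SRP login success or failure in dmesg.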
2018 Mar 28
1
virt-install --connect lxc:///
After a reboot of the host I get a different error message:
root@blade1:~# virt-install --connect lxc:/// --name test_LXC --memory 128 --filesystem /home/lxcuser/LXC,/ --init /bin/sh
WARNING No operating system detected, VM performance may suffer. Specify an OS with --os-variant for optimal results.
Starting install...
ERROR internal error: guest failed to start: Failure in libvirt_lxc
2018 Mar 23
2
Attempt to define unprivileged LXC by libvirt
Hi,
I converted the LXC conf to XML with:
lxcuser@blade1:~/.local/share/lxc/test_deb$ virsh -c lxc:/// domxml-from-
native lxc-tools /home/lxcuser/.local/share/lxc/test_deb/config
<domain type='lxc'>
<name>test_deb</name>
<uuid>cce77799-89fd-41fd-99c1-101e00844e23</uuid>
<memory unit='KiB'>65536</memory>
<currentMemory
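For comparison, a complete libvirt LXC definition is only slightly longer than the fragment above. A minimal sketch (the rootfs path and init binary are assumptions; match them to the lxc-tools config):

```xml
<domain type='lxc'>
  <name>test_deb</name>
  <memory unit='KiB'>65536</memory>
  <os>
    <type>exe</type>
    <init>/sbin/init</init>
  </os>
  <devices>
    <!-- rootfs dir is an assumption; take it from the original lxc config -->
    <filesystem type='mount'>
      <source dir='/home/lxcuser/.local/share/lxc/test_deb/rootfs'/>
      <target dir='/'/>
    </filesystem>
    <console type='pty'/>
  </devices>
</domain>
```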
2013 Dec 11
0
wide symlinks across partitions
Hi samba list,
I am seeing a peculiar issue when sharing out a directory containing a soft
symlink that points to a directory outside of the share, on a different
filesystem/block device.
i have 2 shares:
[share1]
read only = No
follow symlinks = yes
wide links = yes
/media/stor0/user
[share2]
/media/stor1/user
/media/stor0 is an xfs filesystem
/media/stor1 is an xfs filesystem
there is a folder
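For the record, whether such a wide symlink resolves also depends on global settings: on current Samba, "wide links" is ignored unless UNIX extensions are disabled (or insecure wide links are explicitly allowed). A minimal smb.conf sketch combining the shares above with those globals (paths taken from the post):

```ini
[global]
    ; "wide links" only takes effect when unix extensions are off,
    ; or with "allow insecure wide links = yes" (a security trade-off):
    unix extensions = no

[share1]
    path = /media/stor0/user
    read only = no
    follow symlinks = yes
    wide links = yes

[share2]
    path = /media/stor1/user
    follow symlinks = yes
    wide links = yes
```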
2013 Dec 12
0
wide softlink to different partition copy fails
Hi All,
(I've also posted this to the CentOS forum - so copy and pasted for
simplicity)
I'm having an unusual issue with the Samba server package 3.6.9-164.el6
when sharing out a directory containing a soft symlink that points to a
directory outside of the share, on a different filesystem/block device.
When a client mounts the partition (Linux and Mac OSX Clients tested) and
attempts to copy a
2018 Feb 28
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Jose,
There is a known issue with gluster 3.12.x builds (see [1]) so you may be
running into this.
The "shared-brick-count" values seem fine on stor1. Please send us "grep -n
"share" /var/lib/glusterd/vols/volumedisk1/*" results for the other nodes
so we can check if they are the cause.
Regards,
Nithya
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1517260
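To make the shape of that check concrete, here is a simulated run (the real files live in /var/lib/glusterd/vols/volumedisk1/ on each node; the filenames below are invented for illustration). A shared-brick-count greater than 1 for bricks that sit on their own disks is the symptom of bug 1517260, which deflates the capacity df reports:

```shell
# Simulate two brick volfiles locally, then run the same grep the real
# check uses. Filenames and values are illustrative only.
dir=$(mktemp -d)
printf 'option shared-brick-count 1\n' > "$dir/volumedisk1.stor1data.vol"
printf 'option shared-brick-count 2\n' > "$dir/volumedisk1.stor2data.vol"

# On a healthy setup every brick on its own disk reports count 1:
grep -h 'shared-brick-count' "$dir"/*.vol
```

On the real cluster, substitute `/var/lib/glusterd/vols/volumedisk1/*` for the temp directory and run it on every node.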
2018 Mar 01
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
I'm sorry for my last incomplete message.
Below the output of both volumes:
[root at stor1t ~]# gluster volume rebalance volumedisk1 status
Node   Rebalanced-files   size   scanned   failures   skipped   status   run time in h:m:s
------------------------------------------------------------------------------------------
2018 Feb 28
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Jose,
On 28 February 2018 at 18:28, Jose V. Carrión <jocarbur at gmail.com> wrote:
> Hi Nithya,
>
> I applied the workaround for this bug and now df shows the right size:
>
That is good to hear.
> [root at stor1 ~]# df -h
> Filesystem Size Used Avail Use% Mounted on
> /dev/sdb1 26T 1,1T 25T 4% /mnt/glusterfs/vol0
> /dev/sdc1
2018 Mar 01
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Jose,
On 28 February 2018 at 22:31, Jose V. Carrión <jocarbur at gmail.com> wrote:
> Hi Nithya,
>
> My initial setup was composed of 2 similar nodes: stor1data and stor2data.
> A month ago I expanded both volumes with a new node: stor3data (2 bricks
> per volume).
> Of course, after adding the new peer with the bricks I ran the 'rebalance
> force' operation.
2003 Jul 29
1
no charset ver. 3.0.0beta3 on solaris
Hi all
This question came up several times in this list in the last couple of
weeks, but nobody seems to have found a solution... so I am posting this
again, hoping that somebody out there has the answer.
I compiled Samba 3.0.0beta3 on Solaris 9 with the following options:
--with-winbind --with-acl-support --with-included-popt --with-pam
Starting the daemon or running testparm always gives me:
Error
2012 Aug 21
1
Error: Couldn't open INBOX: Timeout while waiting for lock
Hi ,
My users are frequently getting the below error:
Aug 21 00:04:18 blade6 dovecot: pop3-login: proxy(hgl_dipak): Login failed to 192.168.1.43:110: [IN-USE] Couldn't open INBOX: Timeout while waiting for lock.
We are proxying POP connections from one POP server (Dovecot version 1.1.20), i.e. 192.168.1.39, to the other POP server (Dovecot version 2.0.16), i.e. 192.168.1.4.
We have set
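For context: an [IN-USE] failure with a lock timeout means another session already holds the INBOX lock, often a stale or concurrent POP session for the same user. A dovecot.conf sketch of the knobs usually involved on the 2.0.16 backend (setting names are Dovecot's; whether each applies depends on mbox vs maildir and is an assumption here):

```
# Shorten how long a second session waits on a locked mbox INBOX:
mbox_lock_timeout = 30 secs
# Cap concurrent sessions per user+IP so a hung client cannot pile up:
mail_max_userip_connections = 3
protocol pop3 {
  # If session locking is on, a lingering session blocks new logins:
  pop3_lock_session = no
}
```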
2018 Mar 23
0
Re: Attempt to define unprivileged LXC by libvirt
On Fri, Mar 23, 2018 at 02:09:39PM +0100, ales drtik wrote:
> Hi,
> I converted the LXC conf to XML with:
>
> lxcuser@blade1:~/.local/share/lxc/test_deb$ virsh -c lxc:/// domxml-from-
> native lxc-tools /home/lxcuser/.local/share/lxc/test_deb/config
>
> <domain type='lxc'>
> <name>test_deb</name>
>
2018 Feb 28
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Nithya,
I applied the workaround for this bug and now df shows the right size:
[root at stor1 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sdb1 26T 1,1T 25T 4% /mnt/glusterfs/vol0
/dev/sdc1 50T 16T 34T 33% /mnt/glusterfs/vol1
stor1data:/volumedisk0
101T 3,3T 97T 4% /volumedisk0
stor1data:/volumedisk1
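As a sanity check on such df output, a distributed volume's reported size should be close to the sum of its brick filesystems. A toy sketch of the arithmetic (the sizes below are illustrative, not the cluster's real numbers):

```shell
# Sum per-brick sizes (in TB) and compare the result against the df line
# for the mounted volume. Brick names and sizes are made up.
printf '26 stor1:/vol0\n26 stor2:/vol0\n' |
awk '{ sum += $1 } END { print sum "T expected for volumedisk0" }'
```

If df reports noticeably less than the sum, the volfile issue discussed in this thread (shared-brick-count) is a likely cause.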
2018 Feb 27
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi,
Some days ago my whole glusterfs configuration was working fine. Today I
realized that the total size reported by the df command had changed and is
smaller than the aggregated capacity of all the bricks in the volume.
I checked that all the volume statuses are fine, all the glusterd daemons
are running, and there are no errors in the logs; however, df shows a wrong total size.
My configuration for one volume:
2018 Feb 28
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Nithya,
My initial setup was composed of 2 similar nodes: stor1data and stor2data.
A month ago I expanded both volumes with a new node: stor3data (2 bricks
per volume).
Of course, after adding the new peer with the bricks I ran the 'rebalance
force' operation. This task finished successfully (you can see the info below)
and the number of files on the 3 nodes was very similar.
For volumedisk1 I
2018 Mar 01
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Nithya,
Below the output of both volumes:
[root at stor1t ~]# gluster volume rebalance volumedisk1 status
Node   Rebalanced-files   size   scanned   failures   skipped   status   run time in h:m:s
------------------------------------------------------------------------------------------
2006 Apr 17
2
making Dovecot match Courier POP3 UIDs
Hi, guys.
I've read the information in the Wiki about making Dovecot's UIDs match
Courier's UIDs in the POP3 server, so that a UIDL command on either
server will return the same UIDs for the same group of messages.
However, what I've read doesn't match what I'm seeing in Courier.
Courier appears to keep the UID numbers for messages that it uses for
the POP3
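For anyone comparing: the wiki's Courier-compatible setting makes Dovecot derive the POP3 UIDL from the maildir base filename, as Courier does. A minimal sketch (a real Dovecot setting name; migrating existing IMAP UIDs is a separate step, normally done with courier-dovecot-migrate.pl):

```
# Use the maildir base filename as the UIDL, matching Courier's behaviour,
# so clients do not re-download mail after the switch:
pop3_uidl_format = %f
```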