Displaying 11 results from an estimated 11 matches for "4a31".
2017 May 10
2
Kcc connection
Hello Christian,
I've had a quick look at your output; that looks pretty normal.
I'll have a good look Friday, tomorrow is a day off.
So my suggestion is to post the output to the list, but -d10 is not needed.
The regular output should be sufficient.
I'm also wondering why there is no dc3 in the KCC output, and why I see over 15 RPC sessions.
Maybe normal, maybe not; this I don't know.
Greetz,
Louis
2017 May 10
0
Kcc connection
...SA Options: 0x00000001
DSA object GUID: 1781607c-77e8-405d-8c3a-b7e142dd30c4
DSA invocationId: dc096a8a-773d-4a63-8e2a-288ab747557b
==== INBOUND NEIGHBORS ====
DC=DomainDnsZones,DC=hq,DC=brain-biotech,DC=de
Default-First-Site-Name\DC2 via RPC
DSA object GUID: 27ea875c-f283-4a31-b2ab-70db62cd530d
Last attempt @ Wed May 10 19:18:28 2017 CEST was successful
0 consecutive failure(s).
Last success @ Wed May 10 19:18:28 2017 CEST
DC=DomainDnsZones,DC=hq,DC=brain-biotech,DC=de
Default-First-Site-Name\DC4 via RPC...
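The replication summary above looks like output from samba-tool drs showrepl. A minimal sketch for reproducing it on a Samba DC (per the advice in the earlier message, plain output without -d10 is enough):
# Dump the KCC connection and replication-neighbor status:
samba-tool drs showrepl
# Quick check of just the inbound neighbors and their last results:
samba-tool drs showrepl | grep -A4 'INBOUND NEIGHBORS'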
2017 Jul 31
0
Possible stale .glusterfs/indices/xattrop file?
On 07/31/2017 02:33 PM, mabi wrote:
> Now I understand what you mean by the "-samefile" parameter of
> "find". As requested I have now run the following command on all 3
> nodes, with the output of all 3 nodes below:
>
> sudo find /data/myvolume/brick -samefile
> /data/myvolume/brick/.glusterfs/29/e0/29e0d13e-1217-41cc-9bda-1fbbf781c397
> -ls
2017 Jul 31
2
Possible stale .glusterfs/indices/xattrop file?
Now I understand what you mean by the "-samefile" parameter of "find". As requested I have now run the following command on all 3 nodes, with the output of all 3 nodes below:
sudo find /data/myvolume/brick -samefile /data/myvolume/brick/.glusterfs/29/e0/29e0d13e-1217-41cc-9bda-1fbbf781c397 -ls
node1:
8404683 0 lrwxrwxrwx 1 root root 66 Jul 27 15:43
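For context: on a Gluster brick, every regular file is hard-linked from .glusterfs/<aa>/<bb>/<gfid>, so find -samefile resolves a GFID entry back to its real path by matching inodes. A minimal sketch of the check run above, with paths taken from the thread:
# List every path on this brick sharing the GFID file's inode:
sudo find /data/myvolume/brick \
    -samefile /data/myvolume/brick/.glusterfs/29/e0/29e0d13e-1217-41cc-9bda-1fbbf781c397 \
    -ls
# A healthy regular file shows at least two links (the .glusterfs entry
# plus the real path); a GFID entry with no second link may be stale.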
2016 Nov 02
0
doveadm index: can't index a different namespace?
...EHLO or HELO ylmf-pc: CHECK_HELO: ylmf-pc
Nov  1 13:26:09 thebighonker exim[6613]: H=(ylmf-pc) [69.64.78.83]:55031 I=[192.147.25.65]:25 rejected EHLO or HELO ylmf-pc: CHECK_HELO: ylmf-pc
Nov  1 13:26:15 thebighonker dovecot: imap-login: Login: user=<ler>, method=PLAIN, rip=2600:1:d613:48fb:b5da:4a31:b4b6:7ff8, lip=2001:470:1f0f:3ad:223:7dff:fe9e:6e8a, mpid=6645, TLS, session=<PsGzdUFA9Y8mAAAB1hNI+7XaSjG0tn/4>
Nov  1 13:26:15 thebighonker dovecot: imap(ler): Debug: Loading modules from directory: /usr/local/lib/dovecot
Nov  1 13:26:15 thebighonker dovecot: imap(ler): Debug: Module loaded: /...
2016 Nov 01
4
doveadm index: can't index a different namespace?
doveadm -D -vvvvvv index \#ARCHIVE/\* garners the below for ALL mailboxes in the namespace:
doveadm(ler): Error: Mailbox #ARCHIVE/2013/04/clamav-rules: Status lookup failed: Internal error occurred. Refer to server log for more information. [2016-11-01 13:25:21]
doveadm(ler): Error: lucene: Failed to sync mailbox INBOX: Mailbox isn't selectable
doveadm(ler): Error: Mailbox
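For reference, doveadm index takes a mailbox mask and, with -u, a user to act on. A minimal sketch of indexing the whole #ARCHIVE namespace for the user seen in the log above (single quotes keep the shell from treating # as a comment):
# Rebuild search indexes for every mailbox under the #ARCHIVE namespace:
doveadm -D index -u ler '#ARCHIVE/*'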
2014 Sep 09
2
Re: CoreOS support
The options -x -v gave me a "no such option" error, so I ran it with the --debug option.
root@ny2proxd03:/var/lib/vz/images/100# virt-resize --expand /dev/sda3 vm-100-disk-1.qcow2 vm-100-disk-1.qcow2.resized --debug
command line: virt-resize --expand /dev/sda3 vm-100-disk-1.qcow2 vm-100-disk-1.qcow2.resized --debug
Examining vm-100-disk-1.qcow2 ...
libguestfs: trace: add_drive
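Note that virt-resize copies into an existing target disk, so the output image has to be created at the desired (larger) size before the command above can run. A minimal sketch; the 50G target size is an assumption:
# Create the larger target image first (size is an assumption):
qemu-img create -f qcow2 vm-100-disk-1.qcow2.resized 50G
# Then expand /dev/sda3 into the extra space while copying:
virt-resize --expand /dev/sda3 vm-100-disk-1.qcow2 vm-100-disk-1.qcow2.resized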
2014 Sep 09
2
Re: CoreOS support
...art7 -> ../../sdb7
lrwxrwxrwx 1 0 0 10 Sep 9 17:19 pci-0000:00:02.0-virtio-pci-virtio0-scsi-0:0:1:0-part9 -> ../../sdb9
lrwxrwxrwx 1 0 0 9 Sep 9 17:19 pci-0000:00:02.0-virtio-pci-virtio0-scsi-0:0:2:0 -> ../../sdc
/dev/disk/by-uuid:
total 0
lrwxrwxrwx 1 0 0 10 Sep 9 17:19 1a0faf5d-08e6-4a31-bc1c-c91e81b77820 -> ../../sdb6
lrwxrwxrwx 1 0 0 9 Sep 9 17:19 2eda0818-e4fc-4314-858d-f0412fe425d4 -> ../../sdc
lrwxrwxrwx 1 0 0 10 Sep 9 17:19 7a4d648f-2b73-4628-adec-1ea911668b31 -> ../../sdb3
lrwxrwxrwx 1 0 0 10 Sep 9 17:19 BE23-53DC -> ../../sdb1
lrwxrwxrwx 1 0 0 10 Sep 9 17:1...
2017 Oct 26
0
not healing one file
Hey Richard,
Could you share the following information, please? (A sketch of the commands follows below this message.)
1. gluster volume info <volname>
2. getfattr output of that file from all the bricks
getfattr -d -e hex -m . <brickpath/filepath>
3. glustershd & glfsheal logs
Regards,
Karthik
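The three items requested above might be collected roughly like this; the volume name, brick path, and log file names are assumptions based on common defaults:
# 1. Volume layout and options:
gluster volume info myvolume
# 2. Extended attributes of the unhealed file, repeated on every brick:
getfattr -d -e hex -m . /data/myvolume/brick/path/to/file
# 3. Self-heal daemon and heal logs (default locations):
less /var/log/glusterfs/glustershd.log /var/log/glusterfs/glfsheal-myvolume.log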
On Thu, Oct 26, 2017 at 10:21 AM, Amar Tumballi <atumball at redhat.com> wrote:
> On a side note, try recently released health
2017 Oct 26
3
not healing one file
On a side note, try the recently released health report tool and see if it
diagnoses any issues in the setup. Currently you may have to run it on all
three machines.
On 26-Oct-2017 6:50 AM, "Amar Tumballi" <atumball at redhat.com> wrote:
> Thanks for this report. This week many of the developers are at Gluster
> Summit in Prague, will be checking this and respond next
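If the tool referenced above is gluster-health-report (an assumption; the excerpt is truncated and does not name it), running it on each node might look like this; the package and command names are assumptions based on the 2017 release:
# Install and run the health report on each of the three nodes:
pip install gluster-health-report
gluster-health-report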
2017 Oct 26
2
not healing one file
...common.c:1327:afr_log_selfheal] 0-home-replicate-0: Completed metadata selfheal on 131bc572-2808-493a-9470-7199f15612c1. sources=0 [2] sinks=1
[2017-10-25 10:40:33.844533] I [MSGID: 108026] [afr-self-heal-common.c:1327:afr_log_selfheal] 0-home-replicate-0: Completed data selfheal on 7fc416a4-4892-4a31-a60f-1b33d2ee69b6. sources=0 [2] sinks=1
[2017-10-25 10:40:33.845811] I [MSGID: 108026] [afr-self-heal-metadata.c:52:__afr_selfheal_metadata_do] 0-home-replicate-0: performing metadata selfheal on 7fc416a4-4892-4a31-a60f-1b33d2ee69b6
[2017-10-25 10:40:33.848935] I [MSGID: 108026] [afr-self-heal-c...
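To see whether entries like the GFID above are still pending after these selfheal messages, the standard heal-status commands apply; the volume name "home" is inferred from the "0-home-replicate-0" prefix in the log:
# Pending heal entries per brick:
gluster volume heal home info
# Entries the daemon cannot reconcile (possible split-brain):
gluster volume heal home info split-brain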