similar to: after hard reboot, split-brain happened, but nothing showed in gluster volume heal info command!

Displaying 20 results from an estimated 1000 matches similar to: "after hard reboot, split-brain happened, but nothing showed in gluster volume heal info command!"

2017 Sep 28
0
after hard reboot, split-brain happened, but nothing showed in gluster volume heal info command!
Hi, To resolve the gfid split-brain you can follow the steps at [1]. Since the pending markers are not set on the files, they do not show up in heal info. To debug this issue, we need some more data from you. Could you provide these things? 1. volume info 2. mount log 3. brick logs 4. shd log May I also know which version of gluster you are running? From the info you have provided it
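As a rough sketch of how that data could be gathered (the volume name is a placeholder and the default /var/log/glusterfs locations are assumed):

    # volume layout and current heal status
    gluster volume info <volname>
    gluster volume heal <volname> info
    # client (mount) log; the file name is derived from the mount point
    less /var/log/glusterfs/<mount-point>.log
    # brick logs, one per brick, collected on every server
    ls /var/log/glusterfs/bricks/
    # self-heal daemon log, also per server
    less /var/log/glusterfs/glustershd.log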
2017 Sep 28
2
after hard reboot, split-brain happened, but nothing showed in gluster volume heal info command!
Hi, Thanks for the reply! I've checked [1]. But the problem is that there is nothing shown by the command "gluster volume heal <volume-name> info". So these split-brain entries can only be detected when an application tries to access them. I can find the gfid mismatch for those in-split-brain entries in the mount log; however, nothing shows up in the shd log, and the shd log does not know about those split-brain entries. Because there
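One way to confirm such a gfid mismatch by hand is to compare the trusted.gfid xattr of the same file on every brick; a minimal sketch, with the brick and file paths as placeholders:

    # run on each replica server against the same relative path inside the brick
    getfattr -d -m . -e hex /path/to/brick/some/file
    # trusted.gfid should be identical on all bricks; differing values mean
    # gfid split-brain, and trusted.afr.* holds the pending markers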
2017 Sep 28
0
after hard reboot, split-brain happened, but nothing showed in gluster volume heal info command!
On Thu, Sep 28, 2017 at 11:41 AM, Zhou, Cynthia (NSB - CN/Hangzhou) < cynthia.zhou at nokia-sbell.com> wrote: > Hi, > > Thanks for reply! > > I've checked [1]. But the problem is that there is nothing shown in > command "gluster volume heal <volume-name> info". So these split-entry > files could only be detected when app try to visit them. > > I can find
2017 Sep 28
1
after hard reboot, split-brain happened, but nothing showed in gluster volume heal info command!
The version I am using is glusterfs 3.6.9 Best regards, Cynthia MBB SM HETRAN SW3 MATRIX Storage Mobile: +86 (0)18657188311 From: Karthik Subrahmanya [mailto:ksubrahm at redhat.com] Sent: Thursday, September 28, 2017 2:37 PM To: Zhou, Cynthia (NSB - CN/Hangzhou) <cynthia.zhou at nokia-sbell.com> Cc: Gluster-users at gluster.org; gluster-devel at gluster.org Subject: Re: [Gluster-users]
2017 Sep 28
0
after hard reboot, split-brain happened, but nothing showed in gluster volume heal info command!
On Thu, Sep 28, 2017 at 12:11 PM, Zhou, Cynthia (NSB - CN/Hangzhou) < cynthia.zhou at nokia-sbell.com> wrote: > > > The version I am using is glusterfs 3.6.9 > This is a very old version which is EOL. If you can upgrade to any of the supported versions (3.10 or 3.12), that would be great. They have many new features, bug fixes & performance improvements. If you can try to reproduce
2017 Jul 29
2
Possible stale .glusterfs/indices/xattrop file?
Hi, Sorry for mailing again but, as mentioned in my previous mail, I have added an arbiter node to my replica 2 volume and it seems to have gone fine except for the fact that there is one single file which needs healing and does not get healed, as you can see here from the output of a "heal info": Brick node1.domain.tld:/data/myvolume/brick Status: Connected Number of entries: 0 Brick
2017 Jul 31
2
Possible stale .glusterfs/indices/xattrop file?
To quickly resume my current situation: on node2 I have found the following xattrop/indices file which matches the GFID shown by the "heal info" command (below is the output of "ls -lai"): 2798404 ---------- 2 root root 0 Apr 28 22:51 /data/myvolume/brick/.glusterfs/indices/xattrop/29e0d13e-1217-41cc-9bda-1fbbf781c397 As you can see this file has inode number 2798404, so I ran
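For reference, a sketch of the inode lookup being described, using the inode number and brick path quoted above:

    # list every name (hard link) on this brick that shares inode 2798404
    find /data/myvolume/brick -inum 2798404 -ls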
2017 Jul 31
2
Possible stale .glusterfs/indices/xattrop file?
I did a find on this inode number and I could find the file, but only on node1 (nothing on node2 and the new arbiter node). Here is an ls -lai of the file itself on node1: -rw-r--r-- 1 www-data www-data 32 Jun 19 17:42 fileKey As you can see it is a 32-byte file, and as you suggested I ran a "stat" on this very same file through a glusterfs mount (using fuse), but unfortunately nothing
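A sketch of that stat-through-the-mount step, assuming the volume is fuse-mounted at /mnt/myvolume (the mount point and file path are placeholders):

    # a lookup from a client should make the replicate translator notice the
    # mismatch and queue the file for self-heal
    stat /mnt/myvolume/some/dir/fileKey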
2017 Jul 30
2
Possible stale .glusterfs/indices/xattrop file?
Hi Ravi, Thanks for your hints. Below you will find the answers to your questions. First I tried to start the healing process by running: gluster volume heal myvolume and then, as you suggested, watched the output of the glustershd.log file, but nothing appeared in that log file after running the above command. I checked the files which need healing using the "heal <volume> info"
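Roughly, the sequence being described, assuming the default glustershd.log location:

    # trigger a heal and re-check which entries are still pending
    gluster volume heal myvolume
    gluster volume heal myvolume info
    # watch the self-heal daemon log on each node while the heal runs
    tail -f /var/log/glusterfs/glustershd.log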
2017 Jul 31
2
Possible stale .glusterfs/indices/xattrop file?
Now I understand what you mean with the "-samefile" parameter of "find". As requested I have now run the following command on all 3 nodes, with the output of all 3 nodes below: sudo find /data/myvolume/brick -samefile /data/myvolume/brick/.glusterfs/29/e0/29e0d13e-1217-41cc-9bda-1fbbf781c397 -ls node1: 8404683 0 lrwxrwxrwx 1 root root 66 Jul 27 15:43
2017 Jul 31
0
Possible stale .glusterfs/indices/xattrop file?
On 07/31/2017 12:20 PM, mabi wrote: > I did a find on this inode number and I could find the file but only > on node1 (nothing on node2 and the new arbiternode). Here is an ls > -lai of the file itself on node1: Sorry I don't understand, isn't that (XFS) inode number specific to node2's brick? If you want to use the same command, maybe you should try `find
2017 Jun 28
3
afr-self-heald.c:479:afr_shd_index_sweep
Hi list, yesterday I noticed the following lines in the glustershd.log log file: [2017-06-28 11:53:05.000890] W [MSGID: 108034] [afr-self-heald.c:479:afr_shd_index_sweep] 0-iso-images-repo-replicate-0: unable to get index-dir on iso-images-repo-client-0 [2017-06-28 11:53:05.001146] W [MSGID: 108034] [afr-self-heald.c:479:afr_shd_index_sweep] 0-vm-images-repo-replicate-0: unable to get index-dir
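Those warnings indicate the self-heal daemon could not read the index directory on a brick; one possible check, with the brick path as a placeholder:

    # run on each brick server; the directory should exist inside the brick
    ls -ld /path/to/brick/.glusterfs/indices/xattrop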
2017 Jul 30
0
Possible stale .glusterfs/indices/xattrop file?
On 07/29/2017 04:36 PM, mabi wrote: > Hi, > > Sorry for mailing again but as mentioned in my previous mail, I have > added an arbiter node to my replica 2 volume and it seem to have gone > fine except for the fact that there is one single file which needs > healing and does not get healed as you can see here from the output of > a "heal info": > > Brick
2017 Jul 31
0
Possible stale .glusterfs/indices/xattrop file?
On 07/31/2017 02:00 PM, mabi wrote: > To quickly resume my current situation: > > on node2 I have found the following file xattrop/indices file which > matches the GFID of the "heal info" command (below is there output of > "ls -lai": > > 2798404 ---------- 2 root root 0 Apr 28 22:51 >
2017 Jul 31
0
Possible stale .glusterfs/indices/xattrop file?
On 07/30/2017 02:24 PM, mabi wrote: > Hi Ravi, > > Thanks for your hints. Below you will find the answer to your questions. > > First I tried to start the healing process by running: > > gluster volume heal myvolume > > and then as you suggested watch the output of the glustershd.log file > but nothing appeared in that log file after running the above command. >
2017 Jul 31
0
Possible stale .glusterfs/indices/xattrop file?
On 07/31/2017 02:33 PM, mabi wrote: > Now I understand what you mean the the "-samefile" parameter of > "find". As requested I have now run the following command on all 3 > nodes with the ouput of all 3 nodes below: > > sudo find /data/myvolume/brick -samefile > /data/myvolume/brick/.glusterfs/29/e0/29e0d13e-1217-41cc-9bda-1fbbf781c397 > -ls > >
2006 Mar 20
7
strange characters after redcloth usage
I'm using redcloth on my blog to transform my input into html. A lot of times if I type "I've" I'll wind up with "I,ve" except that it's not a comma but a very similar character. This is really killing my rss feeds. What's causing this? How do I fix it? -- Posted via http://www.ruby-forum.com/.
2002 Jul 09
9
Samba authentication to Windows Active Directory
Has any version of Samba been tested for compatibility with the new Windows Active Directory? We will be upgrading our NT domain to 2000 and creating Active Directory. Currently our Solaris Samba 2.0.7 server is configured to authenticate users from our NT domain controllers. Will we encounter problems with the 2000 upgrade? Thanks in advance, Cynthia --------------------------------- Do You
2017 Nov 22
2
error "Not able to add to index" in brick logs
in my /var/log/gluster/bricks/mybrick-path.log I get thousands of those errors: ------ [2017-11-22 21:06:23.768354] E [MSGID: 138003] [index.c:624:index_link_to_base] 0-sharedvol-index: /home/sharedvol/.glusterfs/indices/xattrop/0b852dad-b332-4bfe-a38b-976729ee46a2: Not able to add to index [Troppi collegamenti] The message "E [MSGID: 138003] [index.c:624:index_link_to_base]
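"Troppi collegamenti" is the Italian locale's rendering of EMLINK ("Too many links"), which suggests the hard-link limit of the brick filesystem was hit on the xattrop base file. A possible check (the xattrop-* base file name is an assumption about this version's index layout):

    # count the pending index entries and show each file's hard-link count
    ls /home/sharedvol/.glusterfs/indices/xattrop | wc -l
    stat -c '%h %n' /home/sharedvol/.glusterfs/indices/xattrop/xattrop-*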
2017 Jun 28
2
afr-self-heald.c:479:afr_shd_index_sweep
On Wed, Jun 28, 2017 at 9:45 PM, Ravishankar N <ravishankar at redhat.com> wrote: > On 06/28/2017 06:52 PM, Paolo Margara wrote: > >> Hi list, >> >> yesterday I noted the following lines into the glustershd.log log file: >> >> [2017-06-28 11:53:05.000890] W [MSGID: 108034] >> [afr-self-heald.c:479:afr_shd_index_sweep] >>