similar to: Is glusterfs replication intended for hard drive failure

Displaying 20 results from an estimated 20000 matches similar to: "Is glusterfs replication intended for hard drive failure"

2017 Jul 31
0
Possible stale .glusterfs/indices/xattrop file?
On 07/30/2017 02:24 PM, mabi wrote: > Hi Ravi, > > Thanks for your hints. Below you will find the answers to your questions. > > First I tried to start the healing process by running: > > gluster volume heal myvolume > > and then, as you suggested, watched the output of the glustershd.log file > but nothing appeared in that log file after running the above command. >
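For context, a minimal sketch of the two steps being discussed, assuming the volume name from this thread (myvolume) and the default glustershd log location:

  # ask the self-heal daemon to process pending heal entries
  gluster volume heal myvolume

  # watch the self-heal daemon log for resulting activity
  # (default path on most distributions)
  tail -f /var/log/glusterfs/glustershd.log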
2017 Jul 31
0
Possible stale .glusterfs/indices/xattrop file?
On 07/31/2017 12:20 PM, mabi wrote: > I did a find on this inode number and I could find the file but only > on node1 (nothing on node2 and the new arbiternode). Here is an ls > -lai of the file itself on node1: Sorry I don't understand, isn't that (XFS) inode number specific to node2's brick? If you want to use the same command, maybe you should try `find
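A sketch of the per-brick inode lookup being suggested here; since an (XFS) inode number is only meaningful on the filesystem that issued it, the find must run on the node whose brick reported it (brick path and inode number taken from later messages in this thread):

  # on node2, search its brick for the file with that inode number
  find /data/myvolume/brick -inum 2798404 -ls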
2017 Jul 30
2
Possible stale .glusterfs/indices/xattrop file?
Hi Ravi, Thanks for your hints. Below you will find the answers to your questions. First I tried to start the healing process by running: gluster volume heal myvolume and then, as you suggested, watched the output of the glustershd.log file, but nothing appeared in that log file after running the above command. I checked the files which need healing using the "heal <volume> info"
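The "heal <volume> info" check mentioned above, sketched with this thread's volume name:

  # list entries that still need healing, per brick
  gluster volume heal myvolume info

  # per-brick count of pending heal entries
  gluster volume heal myvolume statistics heal-count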
2017 Jul 31
0
Possible stale .glusterfs/indices/xattrop file?
On 07/31/2017 02:00 PM, mabi wrote: > To quickly summarize my current situation: > > on node2 I have found the following .glusterfs/indices/xattrop file which > matches the GFID of the "heal info" command (below is the output of > "ls -lai"): > > 2798404 ---------- 2 root root 0 Apr 28 22:51 >
2017 Jul 31
2
Possible stale .glusterfs/indices/xattrop file?
I did a find on this inode number and I could find the file but only on node1 (nothing on node2 and the new arbiternode). Here is an ls -lai of the file itself on node1: -rw-r--r-- 1 www-data www-data 32 Jun 19 17:42 fileKey As you can see it is a 32 bytes file and as you suggested I ran a "stat" on this very same file through a glusterfs mount (using fuse) but unfortunately nothing
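A sketch of the lookup-triggered heal attempt described above; the mount point and file path are illustrative placeholders:

  # mount the volume through the FUSE client (hypothetical mount point)
  mount -t glusterfs node1.domain.tld:/myvolume /mnt/myvolume

  # a stat/lookup through the client should make the heal machinery
  # re-examine the file
  stat /mnt/myvolume/path/to/fileKey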
2017 Jul 30
0
Possible stale .glusterfs/indices/xattrop file?
On 07/29/2017 04:36 PM, mabi wrote: > Hi, > > Sorry for mailing again but as mentioned in my previous mail, I have > added an arbiter node to my replica 2 volume and it seems to have gone > fine except for the fact that there is one single file which needs > healing and does not get healed as you can see here from the output of > a "heal info": > > Brick
2017 Jul 31
0
Possible stale .glusterfs/indices/xattrop file?
On 07/31/2017 02:33 PM, mabi wrote: > Now I understand what you mean by the "-samefile" parameter of > "find". As requested I have now run the following command on all 3 > nodes with the output of all 3 nodes below: > > sudo find /data/myvolume/brick -samefile > /data/myvolume/brick/.glusterfs/29/e0/29e0d13e-1217-41cc-9bda-1fbbf781c397 > -ls > >
2017 Jul 31
2
Possible stale .glusterfs/indices/xattrop file?
To quickly summarize my current situation: on node2 I have found the following .glusterfs/indices/xattrop file which matches the GFID of the "heal info" command (below is the output of "ls -lai"): 2798404 ---------- 2 root root 0 Apr 28 22:51 /data/myvolume/brick/.glusterfs/indices/xattrop/29e0d13e-1217-41cc-9bda-1fbbf781c397 As you can see this file has inode number 2798404, so I ran
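For readers following along, a sketch of how the GFID in the index file's name maps onto the brick's .glusterfs tree (paths taken from this thread; the data file's path is a placeholder):

  # the xattrop index entry is named after the file's GFID
  ls -lai /data/myvolume/brick/.glusterfs/indices/xattrop/

  # the same GFID also appears as .glusterfs/<aa>/<bb>/<gfid>
  ls -li /data/myvolume/brick/.glusterfs/29/e0/29e0d13e-1217-41cc-9bda-1fbbf781c397

  # read the GFID xattr directly from a file on the brick (not the mount)
  getfattr -n trusted.gfid -e hex /data/myvolume/brick/path/to/fileKey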
2017 Jul 29
2
Possible stale .glusterfs/indices/xattrop file?
Hi, Sorry for mailing again but as mentioned in my previous mail, I have added an arbiter node to my replica 2 volume and it seems to have gone fine except for the fact that there is one single file which needs healing and does not get healed as you can see here from the output of a "heal info": Brick node1.domain.tld:/data/myvolume/brick Status: Connected Number of entries: 0 Brick
2013 Dec 09
1
[CentOS 6] Upgrade to the glusterfs version in base or in glusterfs-epel
Hi, I'm using glusterfs version 3.4.0 from gluster-epel[1]. Recently, I found out that there's a glusterfs version in the base repo (3.4.0.36rhs). So, is it recommended to use that version instead of the gluster-epel version? If yes, is there a guide for making the switch with no downtime? When running yum update glusterfs, I got the following error[2]. I found a guide[3]: > If you have replicated or
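A rough per-node sequence for the rolling upgrade such a guide describes, assuming replicated volumes and that each node is fully healed before the next one is touched (package names may differ between the base repo and glusterfs-epel):

  # on one node at a time:
  service glusterd stop                  # stop the management daemon
  pkill glusterfsd; pkill glusterfs      # stop brick and client processes
  yum update glusterfs glusterfs-server glusterfs-fuse
  service glusterd start

  # wait until heal info shows zero entries before upgrading the next node
  gluster volume heal <volume> info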
2013 Jul 24
2
Healing in glusterfs 3.3.1
Hi, I have a glusterfs 3.3.1 setup with 2 servers and a replicated volume used by 4 clients. Sometimes some clients can't access some of the files. After I force a full heal on the brick I see several files healed. Is this behavior normal? Thanks -- Paulo Silva <paulojjs at gmail.com>
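The "full heal" mentioned above, sketched for a 3.3.x replicated volume (volume name is a placeholder):

  # crawl the entire volume and heal anything that differs between replicas
  gluster volume heal <volume> full

  # afterwards, review what was healed and what still fails
  gluster volume heal <volume> info healed
  gluster volume heal <volume> info heal-failed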
2017 Aug 23
0
GlusterFS as virtual machine storage
Hi, after many VM crashes during upgrades of Gluster, losing network connectivity on one node, etc., I would advise running replica 2 with arbiter. I once even managed to break this setup (with arbiter) due to network partitioning - one data node never healed and I had to restore from backups (it was easier and kind of non-production). Be extremely careful and plan for failure. -ps On Mon, Aug
2017 Aug 23
0
GlusterFS as virtual machine storage
What he is saying is that, on a two-node volume, upgrading a node will cause the volume to go down. That's nothing weird; you really should use 3 nodes. On Wed, Aug 23, 2017 at 06:51:55PM +0200, Gionatan Danti wrote: > On 23-08-2017 18:14 Pavel Szalbot wrote: > > Hi, after many VM crashes during upgrades of Gluster, losing network > > connectivity on one node etc. I would
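A sketch of the quorum settings usually paired with this "use 3 nodes" advice, so that one node going down for an upgrade does not leave the remaining node accepting writes that will later conflict; both are standard volume options:

  # clients need a majority of replica bricks up before allowing writes
  gluster volume set <volume> cluster.quorum-type auto

  # optionally enforce quorum on the server side as well
  gluster volume set <volume> cluster.server-quorum-type server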
2017 Jul 31
2
Possible stale .glusterfs/indices/xattrop file?
Now I understand what you mean by the "-samefile" parameter of "find". As requested I have now run the following command on all 3 nodes with the output of all 3 nodes below: sudo find /data/myvolume/brick -samefile /data/myvolume/brick/.glusterfs/29/e0/29e0d13e-1217-41cc-9bda-1fbbf781c397 -ls node1: 8404683 0 lrwxrwxrwx 1 root root 66 Jul 27 15:43
2017 Nov 09
0
GlusterFS healing questions
Hi Rolf, answers follow inline... On Thu, Nov 9, 2017 at 3:20 PM, Rolf Larsen <rolf at jotta.no> wrote: > Hi, > > We ran a test on GlusterFS 3.12.1 with erasure-coded volumes 8+2 with 10 > bricks (default config, tested with 100 GB, 200 GB, 400 GB brick sizes, > 10 Gbit NICs) > > 1. > Tests show that healing takes about double the time for 200 GB vs > 100 GB, and
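For reference, a sketch of creating the 8+2 erasure-coded layout used in the test; hostnames and brick paths are placeholders:

  # 10 bricks: 8 data + 2 redundancy (survives the loss of any 2 bricks)
  gluster volume create testvol disperse-data 8 redundancy 2 \
      host{1..10}:/bricks/brick1/testvol
  gluster volume start testvol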
2017 Aug 25
0
GlusterFS as virtual machine storage
On 25-08-2017 21:48 WK wrote: > On 8/25/2017 12:56 AM, Gionatan Danti wrote: > > We ran Rep2 for years on 3.4. It does work if you are really, really > careful. But in a crash on one side, you might have lost some bits > that were on the fly. The VM would then try to heal. > Without sharding, big VMs take a while because the WHOLE VM file has > to be copied over.
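The sharding feature referenced here splits large VM images into fixed-size pieces so a heal copies only the shards that changed rather than the whole image; a sketch, with the block size shown only as an illustrative value:

  # enable sharding on a volume holding VM images
  gluster volume set <volume> features.shard on

  # shard size (64MB is the usual default; tune for the workload)
  gluster volume set <volume> features.shard-block-size 64MB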
2017 Aug 23
4
GlusterFS as virtual machine storage
On 23-08-2017 18:14 Pavel Szalbot wrote: > Hi, after many VM crashes during upgrades of Gluster, losing network > connectivity on one node etc. I would advise running replica 2 with > arbiter. Hi Pavel, this is bad news :( So, in your case at least, Gluster was not stable? Something as simple as an update would let it crash? > I once even managed to break this setup (with
2017 Aug 25
2
GlusterFS as virtual machine storage
On 23-08-2017 18:51 Gionatan Danti wrote: > On 23-08-2017 18:14 Pavel Szalbot wrote: >> Hi, after many VM crashes during upgrades of Gluster, losing network >> connectivity on one node etc. I would advise running replica 2 with >> arbiter. > > Hi Pavel, this is bad news :( > So, in your case at least, Gluster was not stable? Something as simple > as an
2017 Aug 25
2
GlusterFS as virtual machine storage
On 8/25/2017 12:56 AM, Gionatan Danti wrote: > > >> WK wrote: >> 2 node plus Arbiter. You NEED the arbiter or a third node. Do NOT try 2 >> node with a VM > > This is true even if I manage locking at application level (via > virlock or sanlock)? We ran Rep2 for years on 3.4. It does work if you are really, really careful. But in a crash on one side, you might
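The "2 node plus Arbiter" layout being recommended, sketched with placeholder hosts; the arbiter brick stores only metadata, so it can live on a much smaller third machine:

  # replica 3 where the third brick is a metadata-only arbiter
  gluster volume create vmvol replica 3 arbiter 1 \
      node1:/bricks/vmvol node2:/bricks/vmvol arbiternode:/bricks/vmvol
  gluster volume start vmvol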
2017 Nov 09
0
GlusterFS healing questions
Someone on the #gluster-users irc channel said the following: "Decreasing features.locks-revocation-max-blocked to an absurdly low number is letting our distributed-disperse set heal again." Is this something to consider? Does anyone else have experience with tweaking this to speed up healing? Sent from my iPhone > On 9 Nov 2017, at 18:00, Serkan Çoban <cobanserkan at
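The IRC tweak quoted above, as a hedged sketch; features.locks-revocation-max-blocked caps how many lock requests may queue behind a held lock before that lock is revoked. The value below is purely illustrative ("absurdly low" is not quantified in the quote), and revoking locks can trade safety for progress, so treat it as a last resort:

  # revoke a lock once more than N requests are blocked behind it
  gluster volume set <volume> features.locks-revocation-max-blocked 4

  # related knob: revoke locks held longer than N seconds
  gluster volume set <volume> features.locks-revocation-secs 60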