search for: healing

Displaying 20 results from an estimated 1042 matches for "healing".

2013 Feb 19
1
Problems running dbench on 3.3
To test gluster's behavior under heavy load, I'm currently doing this on two machines sharing a common /mnt/gfs gluster mount: ssh bal-6.example.com apt-get install dbench && dbench 6 -t 60 -D /mnt/gfs ssh bal-7.example.com apt-get install dbench && dbench 6 -t 60 -D /mnt/gfs One of the processes usually dies pretty quickly like this: [608] open
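
For reference, the load test described above amounts to running dbench in parallel on two clients against the same mount. A minimal sketch, assuming the /mnt/gfs mount point and the bal-6/bal-7 hostnames from the post:

    # start the same 6-process, 60-second dbench run from both clients
    for host in bal-6.example.com bal-7.example.com; do
        ssh "$host" 'apt-get install -y dbench && dbench 6 -t 60 -D /mnt/gfs' &
    done
    wait    # the two runs must overlap to exercise the mount under load
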
2008 Dec 10
3
AFR healing problem after bringing one node back.
I've got a configuration which, simply put, combines AFR and unify - the servers export n[1-3]-brick[12] and n[1-3]-ns, and the client has this cluster configuration: volume afr-ns type cluster/afr subvolumes n1-ns n2-ns n3-ns option data-self-heal on option metadata-self-heal on option entry-self-heal on end-volume volume afr1 type cluster/afr subvolumes n1-brick2
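
Laid out as an actual volfile, the client-side configuration quoted in this snippet would look roughly as follows. This is only a reconstruction of what the search result shows; the second AFR volume is cut off, so its remaining subvolumes are left as a placeholder:

    volume afr-ns
        type cluster/afr
        subvolumes n1-ns n2-ns n3-ns
        option data-self-heal on
        option metadata-self-heal on
        option entry-self-heal on
    end-volume

    volume afr1
        type cluster/afr
        subvolumes n1-brick2 ...        # truncated in the snippet
    end-volume
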
2017 Oct 26
0
not healing one file
...? ? ? recovery.baklz4 >>> >>> Out of curiosity I checked all the bricks for this file. It's >>> present there. Making a checksum shows that the file is different on >>> one of the three replica servers. >>> >>> Querying healing information shows that the file should be healed: >>> # gluster volume heal home info >>> Brick sphere-six:/srv/gluster_home/brick >>> /romanoch/.mozilla/firefox/vzzqqxrm.default-1396429081309/sessionstore-backups/recovery.baklz4 >>> >>> St...
2017 Oct 26
3
not healing one file
...> -?????????? ? ? ? ? ? recovery.baklz4 >> >> Out of curiosity I checked all the bricks for this file. It's >> present there. Making a checksum shows that the file is different on >> one of the three replica servers. >> >> Querying healing information shows that the file should be healed: >> # gluster volume heal home info >> Brick sphere-six:/srv/gluster_home/brick >> /romanoch/.mozilla/firefox/vzzqqxrm.default-1396429081309/sessionstore-backups/recovery.baklz4 >> >> Status: Connected >>...
2017 Oct 26
2
not healing one file
...? recovery.baklz4 > > Out of curiosity I checked all the bricks for this file. It's > present there. Making a checksum shows that the file is > different on > one of the three replica servers. > > Querying healing information shows that the file should be > healed: > # gluster volume heal home info > Brick sphere-six:/srv/gluster_home/brick > /romanoch/.mozilla/firefox/vzzqqxrm.default-1396429081309/sessionstore-backups/recovery.ba >...
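
The checks described in this thread can be reproduced with standard tools: compare the file's checksum on each brick directly, then ask gluster which entries it still considers pending. A sketch, assuming the volume name home and the brick path quoted above; sphere-six is the only replica hostname given in the snippets, the other two are placeholders:

    # compare the copy stored on each replica's brick
    for host in sphere-six replica-two replica-three; do
        ssh "$host" md5sum \
            /srv/gluster_home/brick/romanoch/.mozilla/firefox/vzzqqxrm.default-1396429081309/sessionstore-backups/recovery.baklz4
    done

    # list entries the self-heal daemon still has queued
    gluster volume heal home info
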
2013 Jul 24
2
Healing in glusterfs 3.3.1
Hi, I have a glusterfs 3.3.1 setup with 2 servers and a replicated volume used by 4 clients. Sometimes from some clients I can't access some of the files. After I force a full heal on the brick I see several files healed. Is this behavior normal? Thanks -- Paulo Silva <paulojjs at gmail.com>
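
For context, the "force a full heal" step mentioned above corresponds to the standard heal commands; a minimal sketch, with <volname> standing in for the replicated volume:

    gluster volume heal <volname> full    # crawl the bricks and heal anything that differs
    gluster volume heal <volname> info    # afterwards, check what is still pending
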
2017 Jun 01
3
Heal operation detail of EC volumes
...such kind of traffic on servers. In my > configuration (16+4 EC) I see 20 servers are all have 7-8MB outbound > traffic and none of them has more than 10MB incoming traffic. > Only heal operation is happening on cluster right now, no client/other > traffic. I see constant 7-8MB write to healing brick disk. So where is > the missing traffic? Not sure about your configuration, but probably you are seeing the result of having the SHD of each server doing heals. That would explain the network traffic you have. Suppose that all SHD but the one on the damaged brick are working. In this...
2017 Jun 01
0
Heal operation detail of EC volumes
...ffic on servers. In my >> configuration (16+4 EC) I see 20 servers are all have 7-8MB outbound >> traffic and none of them has more than 10MB incoming traffic. >> Only heal operation is happening on cluster right now, no client/other >> traffic. I see constant 7-8MB write to healing brick disk. So where is >> the missing traffic? > > > Not sure about your configuration, but probably you are seeing the result of > having the SHD of each server doing heals. That would explain the network > traffic you have. > > Suppose that all SHD but the one on the d...
2012 Feb 05
2
Would a difference in size (and content) of a file on replicated bricks be healed?
...1 root root 29 Feb 5 21:40 data.txt /b3: total 8 -rw-r--r-- 1 root root 29 Feb 5 21:40 data.txt /b4: total 8 -rw-r--r-- 1 root root 29 Feb 5 21:40 data.txt 8) Brick /b1 still has a bad copy of the file. In the logs it looks like the difference in size is detected. It also starts some self-healing actions. But then for some reason finds out no healing is needed. Marked a few lines with <<<<< ???? which looks suspect. ===============CUT Client Log [2012-02-05 21:41:01.206102] D [client-handshake.c:179:client_start_ping] 0-d1-client-0: returning as transport is already discon...
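
In cases like this, AFR records whether a copy still needs healing in the trusted.afr.* extended attributes on each brick; non-zero pending counters mean a heal is due. A hedged sketch of inspecting them directly on the bricks (/b1, /b3 and /b4 appear in the post, /b2 is assumed):

    # dump the AFR changelog xattrs for the file on every brick copy
    for b in /b1 /b2 /b3 /b4; do
        echo "== $b =="
        getfattr -d -m trusted.afr -e hex "$b/data.txt"
    done
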
2018 Feb 08
5
self-heal trouble after changing arbiter brick
...ata/glusterfs Number of entries: 64999 Brick gv1:/data/gv23-arbiter Number of entries: 0 Brick gv4:/data/glusterfs Number of entries: 0 Brick gv5:/data/glusterfs Number of entries: 0 Brick pluto:/var/gv45-arbiter Number of entries: 0 According to /var/log/glusterfs/glustershd.log, self-healing is still in progress, so it might be worth just sitting and waiting, but I'm wondering why this 64999 heal-count persists (a limitation of the counter? In fact, the gv2 and gv3 bricks contain roughly 30 million files), and I feel bothered by the following output: # gluster volume heal myvol info heal-failed...
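
To see whether that 64999 figure is actually moving, the per-brick heal counters can be polled repeatedly; a sketch using the myvol volume name from the post:

    # pending-heal count per brick; rerun to see whether it decreases
    gluster volume heal myvol statistics heal-count

    # entries whose heal attempts failed (the query used in the post)
    gluster volume heal myvol info heal-failed
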
2017 Jun 02
1
Heal operation detail of EC volumes
...ffic on servers. In my >> configuration (16+4 EC) I see 20 servers are all have 7-8MB outbound >> traffic and none of them has more than 10MB incoming traffic. >> Only heal operation is happening on cluster right now, no client/other >> traffic. I see constant 7-8MB write to healing brick disk. So where is >> the missing traffic? > > > Not sure about your configuration, but probably you are seeing the result of > having the SHD of each server doing heals. That would explain the network > traffic you have. > > Suppose that all SHD but the one on the d...
2017 Jun 08
1
Heal operation detail of EC volumes
...my > >> configuration (16+4 EC) I see 20 servers are all have 7-8MB outbound > >> traffic and none of them has more than 10MB incoming traffic. > >> Only heal operation is happening on cluster right now, no client/other > >> traffic. I see constant 7-8MB write to healing brick disk. So where is > >> the missing traffic? > > > > > > Not sure about your configuration, but probably you are seeing the > result of > > having the SHD of each server doing heals. That would explain the network > > traffic you have. > > > &...
2018 Feb 09
0
self-heal trouble after changing arbiter brick
...ata/glusterfs Number of entries: 64999 Brick gv1:/data/gv23-arbiter Number of entries: 0 Brick gv4:/data/glusterfs Number of entries: 0 Brick gv5:/data/glusterfs Number of entries: 0 Brick pluto:/var/gv45-arbiter Number of entries: 0 According to /var/log/glusterfs/glustershd.log, self-healing is still in progress, so it might be worth just sitting and waiting, but I'm wondering why this 64999 heal-count persists (a limitation of the counter? In fact, the gv2 and gv3 bricks contain roughly 30 million files), and I feel bothered by the following output: # gluster volume heal myvol info heal-failed...
2012 May 03
2
[3.3 beta3] When should the self-heal daemon be triggered?
Hi, I eventually installed three Debian unstable machines, so I could install the GlusterFS 3.3 beta3. I have a question about the self-heal daemon. I'm trying to get a volume which is replicated, with two bricks. I started up the volume, wrote some data, then killed one machine, and then wrote more data to a few folders from the client machine. Then I restarted the second brick server.
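
For manually kicking off a heal after the restarted brick is back, the usual commands are sketched below; <volname> is a placeholder for the replicated volume:

    gluster volume heal <volname>         # heal the entries the daemon already has indexed
    gluster volume heal <volname> full    # or force a full crawl of the bricks
    gluster volume heal <volname> info    # show what is still pending
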
2017 Nov 09
2
GlusterFS healing questions
...Rolf Larsen <rolf at jotta.no> wrote: >> >> Hi, >> >> We ran a test on GlusterFS 3.12.1 with erasurecoded volumes 8+2 with 10 >> bricks (default config,tested with 100gb, 200gb, 400gb bricksizes,10gbit >> nics) >> >> 1. >> Tests show that healing takes about double the time on healing 200gb vs >> 100, and abit under the double on 400gb vs 200gb bricksizes. Is this >> expected behaviour? In light of this would make 6,4 tb bricksizes use ~ 377 >> hours to heal. >> >> 100gb brick heal: 18 hours (8+2) >> 200g...
2017 Nov 09
2
GlusterFS healing questions
Hi, We ran a test on GlusterFS 3.12.1 with erasure-coded 8+2 volumes on 10 bricks (default config, tested with 100GB, 200GB and 400GB brick sizes, 10Gbit NICs) 1. Tests show that healing takes about double the time for 200GB vs 100GB brick sizes, and a bit under double for 400GB vs 200GB. Is this expected behaviour? Extrapolated, 6.4TB brick sizes would take ~377 hours to heal. 100GB brick heal: 18 hours (8+2) 200GB brick heal: 37 hours (8+2) +205% 400GB brick heal:...
2017 Nov 09
0
GlusterFS healing questions
...line... On Thu, Nov 9, 2017 at 3:20 PM, Rolf Larsen <rolf at jotta.no> wrote: > Hi, > > We ran a test on GlusterFS 3.12.1 with erasurecoded volumes 8+2 with 10 > bricks (default config,tested with 100gb, 200gb, 400gb bricksizes,10gbit > nics) > > 1. > Tests show that healing takes about double the time on healing 200gb vs > 100, and abit under the double on 400gb vs 200gb bricksizes. Is this > expected behaviour? In light of this would make 6,4 tb bricksizes use ~ 377 > hours to heal. > > 100gb brick heal: 18 hours (8+2) > 200gb brick heal: 37 hours (...
2017 Nov 09
0
GlusterFS healing questions
Someone on the #gluster-users irc channel said the following: "Decreasing features.locks-revocation-max-blocked to an absurdly low number is letting our distributed-disperse set heal again." Is this something to consider? Does anyone else have experience with tweaking this to speed up healing? Sent from my iPhone > On 9 Nov 2017, at 18:00, Serkan Çoban <cobanserkan at gmail.com> wrote: > > Hi, > > You can set disperse.shd-max-threads to 2 or 4 in order to make heal > faster. This makes my heal times 2-3x faster. > Also you can play with disperse.self-heal-...
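
Both knobs mentioned in this thread are ordinary volume options; a hedged sketch of setting them (the thread only gives "2 or 4" for the first and "an absurdly low number" for the second, so treat the values as examples, not recommendations):

    # allow more parallel heals per self-heal daemon on disperse volumes
    gluster volume set <volname> disperse.shd-max-threads 4

    # the lock-revocation setting referred to above; <n> is left open here
    gluster volume set <volname> features.locks-revocation-max-blocked <n>
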
2018 Feb 09
1
self-heal trouble after changing arbiter brick
...ata/glusterfs Number of entries: 64999 Brick gv1:/data/gv23-arbiter Number of entries: 0 Brick gv4:/data/glusterfs Number of entries: 0 Brick gv5:/data/glusterfs Number of entries: 0 Brick pluto:/var/gv45-arbiter Number of entries: 0 According to /var/log/glusterfs/glustershd.log, self-healing is still in progress, so it might be worth just sitting and waiting, but I'm wondering why this 64999 heal-count persists (a limitation of the counter? In fact, the gv2 and gv3 bricks contain roughly 30 million files), and I feel bothered by the following output: # gluster volume heal myvol info heal-failed...
2018 Mar 16
2
Disperse volume recovery and healing
...g gluster volume heal [volname] when all bricks are back online? ________________________________ From: Xavi Hernandez <jahernan at redhat.com> Sent: Thursday, March 15, 2018 12:09:05 AM To: Victor T Cc: gluster-users at gluster.org Subject: Re: [Gluster-users] Disperse volume recovery and healing Hi Victor, On Wed, Mar 14, 2018 at 12:30 AM, Victor T <hero_of_nothing_1 at hotmail.com> wrote: I have a question about how disperse volumes handle brick failure. I'm running version 3.10.10 on all systems. If I have a disperse volume in a...
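
The command being asked about is the standard heal trigger; a minimal sketch of running it once every brick is back online, with the volume name as a placeholder:

    gluster volume status <volname>       # confirm all brick processes are online
    gluster volume heal <volname> full    # then trigger a full heal
    gluster volume heal <volname> info    # and watch the pending-entry counts drain
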