search for: healed

Displaying 20 results from an estimated 1042 matches for "healed".

2013 Feb 19
1
Problems running dbench on 3.3
To test gluster's behavior under heavy load, I'm currently doing this on two machines sharing a common /mnt/gfs gluster mount: ssh bal-6.example.com apt-get install dbench && dbench 6 -t 60 -D /mnt/gfs ssh bal-7.example.com apt-get install dbench && dbench 6 -t 60 -D /mnt/gfs One of the processes usually dies pretty quickly like this: [608] open
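A minimal sketch of the load test described above, assuming the hostnames and mount point from the excerpt; quoting the remote command keeps both the install and the benchmark on the remote host:

    # Run dbench (6 clients, 60 seconds) against the shared gluster mount
    # on each node in parallel.
    for host in bal-6.example.com bal-7.example.com; do
        ssh "$host" 'apt-get install -y dbench && dbench 6 -t 60 -D /mnt/gfs' &
    done
    wait   # collect both remote runs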
2008 Dec 10
3
AFR healing problem after returning one node.
I've got a configuration which, put simply, combines afrs and unify - the servers export n[1-3]-brick[12] and n[1-3]-ns, and the client has this cluster configuration: volume afr-ns type cluster/afr subvolumes n1-ns n2-ns n3-ns option data-self-heal on option metadata-self-heal on option entry-self-heal on end-volume volume afr1 type cluster/afr subvolumes n1-brick2
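For readability, here is the namespace AFR volume from that flattened excerpt laid out in volfile form (reconstructed from the snippet only; the afr1 volume is cut off in the original, so it is not shown):

    volume afr-ns
      type cluster/afr
      subvolumes n1-ns n2-ns n3-ns
      option data-self-heal on
      option metadata-self-heal on
      option entry-self-heal on
    end-volume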
2017 Oct 26
0
not healing one file
...>>> Out of curiosity I checked all the bricks for this file. It's >>> present there. Making a checksum shows that the file is different on >>> one of the three replica servers. >>> >>> Querying healing information shows that the file should be healed: >>> # gluster volume heal home info >>> Brick sphere-six:/srv/gluster_home/brick >>> /romanoch/.mozilla/firefox/vzzqqxrm.default-1396429081309/sessionstore-backups/recovery.baklz4 >>> >>> Status: Connected >>> Number of entries: 1...
2017 Oct 26
3
not healing one file
...? recovery.baklz4 >> >> Out of curiosity I checked all the bricks for this file. It's >> present there. Making a checksum shows that the file is different on >> one of the three replica servers. >> >> Querying healing information shows that the file should be healed: >> # gluster volume heal home info >> Brick sphere-six:/srv/gluster_home/brick >> /romanoch/.mozilla/firefox/vzzqqxrm.default-1396429081309/sessionstore-backups/recovery.baklz4 >> >> Status: Connected >> Number of entries: 1 >> >> Brick sph...
2017 Oct 26
2
not healing one file
...checked all the bricks for this file. It's > present there. Making a checksum shows that the file is > different on > one of the three replica servers. > > Querying healing information shows that the file should be > healed: > # gluster volume heal home info > Brick sphere-six:/srv/gluster_home/brick > /romanoch/.mozilla/firefox/vzzqqxrm.default-1396429081309/sessionstore-backups/recovery.baklz4 > > Status: Conn...
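A quick way to confirm which replica diverges, along the lines described in this thread (a sketch: the brick path comes from the heal-info output above, but the hostnames other than sphere-six are placeholders):

    # Checksum the file on each replica brick; the odd one out is the
    # copy that still needs healing.
    f=/srv/gluster_home/brick/romanoch/.mozilla/firefox/vzzqqxrm.default-1396429081309/sessionstore-backups/recovery.baklz4
    for host in sphere-six sphere-five sphere-four; do   # replace with your replica hosts
        ssh "$host" md5sum "$f"
    done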
2013 Jul 24
2
Healing in glusterfs 3.3.1
Hi, I have a glusterfs 3.3.1 setup with 2 servers and a replicated volume used by 4 clients. Sometimes from some clients I can't access some of the files. After I force a full heal on the brick I see several files healed. Is this behavior normal? Thanks -- Paulo Silva <paulojjs at gmail.com>
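Forcing a full heal as described here is normally done from the gluster CLI (a sketch; the volume name is a placeholder):

    # Trigger a full self-heal crawl, then list entries still pending heal.
    gluster volume heal myvolume full
    gluster volume heal myvolume info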
2017 Jun 01
3
Heal operation detail of EC volumes
...the operations? >>> >>> Healing could be triggered by the client side (access of a file) or the server side >>> (shd). >>> However, in both cases the actual heal starts from the "ec_heal_do" function. >>> >>> >>> Assume a 2GB file is being healed in a 16+4 EC configuration. I was >>> thinking that the SHD daemon on the failed brick host will read 2GB from the >>> network, reconstruct its 100MB chunk, and write it onto the brick. Is >>> this right? >>> >>> You are correct about read/write. >>> The only...
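For a rough sense of the fragment arithmetic in that 16+4 example (a sketch; it is an assumption here whether the poster's ~100MB figure divides the file across the 16 data bricks or across all 20):

    # Per-brick fragment size for a 2GB (2048MB) file on a 16+4 disperse volume.
    echo $(( 2048 / 16 ))   # 128 (MB) if divided across the 16 data bricks
    echo $(( 2048 / 20 ))   # 102 (MB) if divided across all 20 bricks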
2017 Jun 01
0
Heal operation detail of EC volumes
>Is it possible that this matches your observations ? Yes that matches what I see. So 19 files are being healed in parallel by 19 SHD processes. I thought only one file was being healed at a time. Then what is the meaning of the disperse.shd-max-threads parameter? If I set it to 2 then each SHD thread will heal two files at the same time? >How many IOPS can your bricks handle ? Bricks are 7200RPM NL-SAS disks. 70-80 random IOPS max. But the write pattern seems sequential, 30-40MB bulk...
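The parameter being asked about is set with the usual volume-option commands (a sketch; the volume name is a placeholder):

    # Let each self-heal daemon work on two files concurrently.
    gluster volume set myvolume disperse.shd-max-threads 2
    # Check the value currently in effect.
    gluster volume get myvolume disperse.shd-max-threads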
2012 Feb 05
2
Would difference in size (and content) of a file on replicated bricks be healed?
Hi... Started playing with gluster, and the heal function is my "target" for testing. Short description of my test ---------------------------- * 4 replicas on a single machine * glusterfs mounted locally * Create a file in the glusterfs-mounted directory: date >data.txt * Append to the file on one of the bricks: hostname >>data.txt * Trigger a self-heal with: stat data.txt =>
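That test procedure as a runnable sketch (the mount point and brick path are placeholders; the commands themselves come from the excerpt):

    # Create the file through the gluster mount, then modify one brick directly.
    date > /mnt/gluster/data.txt             # write via the mounted volume
    hostname >> /bricks/brick1/data.txt      # append behind gluster's back
    # Accessing the file through the mount should trigger a self-heal.
    stat /mnt/gluster/data.txt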
2018 Feb 08
5
self-heal trouble after changing arbiter brick
...hange, so I didn't expect much trouble then. What was probably wrong is that I then forced chronos out of the cluster with the gluster peer detach command. Ever since then, over the course of the last 3 days, I see this: # gluster volume heal myvol statistics heal-count Gathering count of entries to be healed on volume myvol has been successful Brick gv0:/data/glusterfs Number of entries: 0 Brick gv1:/data/glusterfs Number of entries: 0 Brick gv4:/data/gv01-arbiter Number of entries: 0 Brick gv2:/data/glusterfs Number of entries: 64999 Brick gv3:/data/glusterfs Number of entries: 64999 Brick gv1:/...
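To see whether those 64999 pending entries ever shrink, the same command from the excerpt can simply be polled (a sketch using the volume name shown above):

    # Re-run the heal counter every minute to watch the backlog move.
    watch -n 60 gluster volume heal myvol statistics heal-count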
2017 Jun 02
1
Heal operation detail of EC volumes
Hi Serkan, On Thursday, June 01, 2017 21:31 CEST, Serkan Çoban <cobanserkan at gmail.com> wrote: >Is it possible that this matches your observations ? Yes that matches what I see. So 19 files are being healed in parallel by 19 SHD processes. I thought only one file was being healed at a time. Then what is the meaning of the disperse.shd-max-threads parameter? If I set it to 2 then each SHD thread will heal two files at the same time? Each SHD normally heals a single file at a time. However, there's an SHD on each node, so all of them are trying to process dirty files. If one pee...
2017 Jun 08
1
Heal operation detail of EC volumes
On Fri, Jun 2, 2017 at 1:01 AM, Serkan Çoban <cobanserkan at gmail.com> wrote: > >Is it possible that this matches your observations ? > Yes that matches what I see. So 19 files are being healed in parallel by 19 > SHD processes. I thought only one file was being healed at a time. > Then what is the meaning of the disperse.shd-max-threads parameter? If I > set it to 2 then each SHD thread will heal two files at the same time? > Yes that is the idea. > > >How many IOPS can your bricks handle ? > Bricks are 7200RPM NL-SAS disks. 70-80 random IOPS...
2018 Feb 09
0
self-heal trouble after changing arbiter brick
...hange, so I didn't expect much trouble then. What was probably wrong is that I then forced chronos out of the cluster with the gluster peer detach command. Ever since then, over the course of the last 3 days, I see this: # gluster volume heal myvol statistics heal-count Gathering count of entries to be healed on volume myvol has been successful Brick gv0:/data/glusterfs Number of entries: 0 Brick gv1:/data/glusterfs Number of entries: 0 Brick gv4:/data/gv01-arbiter Number of entries: 0 Brick gv2:/data/glusterfs Number of entries: 64999 Brick gv3:/data/glusterfs Number of entries: 64999 Brick gv1:/...
2012 May 03
2
[3.3 beta3] When should the self-heal daemon be triggered?
Hi, I eventually installed three Debian unstable machines, so I could install the GlusterFS 3.3 beta3. I have a question about the self-heal daemon. I'm trying to get a volume which is replicated, with two bricks. I started up the volume, wrote some data, then killed one machine, and then wrote more data to a few folders from the client machine. Then I restarted the second brick server.
2017 Nov 09
2
GlusterFS healing questions
...nd when we run heal info again the command continues showing gfids >> until the brick is done again. This gives quite a bad picture of the status >> of a heal. > > > The output of 'gluster volume heal <volname> info' shows the list of files > pending to be healed on each brick. The heal is complete when the list is > empty. A faster alternative if you don't want to see the whole list of files > is to use 'gluster volume heal <volname> statistics heal-count'. This will > only show the number of pending files on each brick. > ...
2017 Nov 09
2
GlusterFS healing questions
Hi, We ran a test on GlusterFS 3.12.1 with erasure-coded volumes 8+2 with 10 bricks (default config, tested with 100GB, 200GB, and 400GB brick sizes, 10Gbit NICs) 1. Tests show that healing takes about double the time on healing 200GB vs 100GB, and a bit under double on 400GB vs 200GB brick sizes. Is this expected behaviour? In light of this, 6.4TB brick sizes would take ~377 hours to heal. 100GB
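The ~377 hour figure is consistent with assuming heal time scales linearly with brick size (a sketch of that extrapolation; the per-100GB heal time is inferred, not stated in the visible excerpt):

    # If heal time is roughly linear in brick size, a 6.4TB brick scales
    # a 100GB measurement by 6400/100 = 64x.
    echo "scale=2; 377 / (6400/100)" | bc   # 5.89 hours per 100GB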
2017 Nov 09
0
GlusterFS healing questions
...is done > healing and when we run heal info again the command continues showing > gfids until the brick is done again. This gives quite a bad picture of the > status of a heal. > The output of 'gluster volume heal <volname> info' shows the list of files pending to be healed on each brick. The heal is complete when the list is empty. A faster alternative if you don't want to see the whole list of files is to use 'gluster volume heal <volname> statistics heal-count'. This will only show the number of pending files on each brick. I don't know any o...
2017 Nov 09
0
GlusterFS healing questions
...gain the command continues showing gfids >>> until the brick is done again. This gives quite a bad picture of the status >>> of a heal. >> >> >> The output of 'gluster volume heal <volname> info' shows the list of files >> pending to be healed on each brick. The heal is complete when the list is >> empty. A faster alternative if you don't want to see the whole list of files >> is to use 'gluster volume heal <volname> statistics heal-count'. This will >> only show the number of pending files on each bri...
2018 Feb 09
1
self-heal trouble after changing arbiter brick
...hange, so I didn't expect much trouble then. What was probably wrong is that I then forced chronos out of the cluster with the gluster peer detach command. Ever since then, over the course of the last 3 days, I see this: # gluster volume heal myvol statistics heal-count Gathering count of entries to be healed on volume myvol has been successful Brick gv0:/data/glusterfs Number of entries: 0 Brick gv1:/data/glusterfs Number of entries: 0 Brick gv4:/data/gv01-arbiter Number of entries: 0 Brick gv2:/data/glusterfs Number of entries: 64999 Brick gv3:/data/glusterfs Number of entries: 64999 Brick gv1:/...
2018 Mar 16
2
Disperse volume recovery and healing
...ppens to one of the remaining 4 bricks, the volume would stop working. So in this case I would recommend not having more than one server down for maintenance at the same time unless the downtime is very, very small. Once the stopped servers come back up again, you need to wait until all files are healed before proceeding with the next server. Failing to do so means that some files could have more than 2 non-healthy versions, which will make the file inaccessible until enough healthy versions are available again. Self-heal should be automatically triggered once the bricks come online; however, there...
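A rolling-maintenance guard along those lines could poll the heal counter until every brick reports zero pending entries (a sketch; the volume name is a placeholder, and the parsing assumes the 'Number of entries: N' output format shown earlier in these results):

    # Block until no brick reports pending heal entries; only then is it
    # safe to take the next server down.
    vol=myvolume
    while gluster volume heal "$vol" statistics heal-count \
            | grep '^Number of entries:' | grep -qv ': 0$'; do
        sleep 60
    done
    echo "All bricks healed; safe to proceed to the next server."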