search for: heals

Displaying 20 results from an estimated 1042 matches for "heals".

2013 Feb 19
1
Problems running dbench on 3.3
To test gluster's behavior under heavy load, I'm currently doing this on two machines sharing a common /mnt/gfs gluster mount:
    ssh bal-6.example.com apt-get install dbench && dbench 6 -t 60 -D /mnt/gfs
    ssh bal-7.example.com apt-get install dbench && dbench 6 -t 60 -D /mnt/gfs
One of the processes usually dies pretty quickly like this: [608] open
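For anyone reproducing this, the two runs need to execute concurrently to generate contention on the shared mount; a minimal sketch of driving the same test from one shell (hostnames and mount point are the poster's, the loop is illustrative):

    # launch dbench on both clients at once and wait for both to finish
    for host in bal-6.example.com bal-7.example.com; do
        ssh "$host" "apt-get install -y dbench && dbench 6 -t 60 -D /mnt/gfs" &
    done
    wait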
2008 Dec 10
3
AFR healing problem after one node returns.
I've got a configuration which, put simply, combines AFRs and unify: the servers export n[1-3]-brick[12] and n[1-3]-ns, and the client has this cluster configuration:
    volume afr-ns
      type cluster/afr
      subvolumes n1-ns n2-ns n3-ns
      option data-self-heal on
      option metadata-self-heal on
      option entry-self-heal on
    end-volume
    volume afr1
      type cluster/afr
      subvolumes n1-brick2
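On current Gluster releases the same self-heal switches are plain volume options rather than hand-written volfile entries; a sketch of the CLI equivalents, with a hypothetical volume name:

    # enable the three AFR self-heal types (afr-vol is a placeholder name)
    gluster volume set afr-vol cluster.data-self-heal on
    gluster volume set afr-vol cluster.metadata-self-heal on
    gluster volume set afr-vol cluster.entry-self-heal on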
2017 Oct 26
0
not healing one file
Hey Richard, could you share the following information please?
    1. gluster volume info <volname>
    2. getfattr output of that file from all the bricks: getfattr -d -e hex -m . <brickpath/filepath>
    3. glustershd & glfsheal logs
Regards, Karthik On Thu, Oct 26, 2017 at 10:21 AM, Amar Tumballi <atumball at redhat.com> wrote: > On a side note, try the recently released health
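A minimal sketch of collecting everything Karthik asks for, assuming the volume is called home (as in the follow-up below) and a placeholder brick path; log file names can vary by version:

    gluster volume info home
    gluster volume heal home info                 # which entries are still pending
    # on every brick host, dump the replication xattrs of the affected file
    getfattr -d -e hex -m . /path/to/brick/path/to/file
    # self-heal daemon log to attach
    tail -n 200 /var/log/glusterfs/glustershd.log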
2017 Oct 26
3
not healing one file
On a side note, try the recently released health report tool and see if it diagnoses any issues in the setup. Currently you may have to run it on all three machines. On 26-Oct-2017 6:50 AM, "Amar Tumballi" <atumball at redhat.com> wrote: > Thanks for this report. This week many of the developers are at Gluster > Summit in Prague; will be checking this and respond next
2017 Oct 26
2
not healing one file
Hi Karthik, thanks for taking a look at this. I haven't been working with gluster long enough to make heads or tails of the logs. The logs are attached to this mail, and here is the other information:
    # gluster volume info home
    Volume Name: home
    Type: Replicate
    Volume ID: fe6218ae-f46b-42b3-a467-5fc6a36ad48a
    Status: Started
    Snapshot Count: 1
    Number of Bricks: 1 x 3 = 3
    Transport-type: tcp
2013 Jul 24
2
Healing in glusterfs 3.3.1
Hi, I have a glusterfs 3.3.1 setup with 2 servers and a replicated volume used by 4 clients. Sometimes I can't access some of the files from some clients. After I force a full heal on the brick I see several files healed. Is this behavior normal? Thanks -- Paulo Silva <paulojjs at gmail.com>
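Seeing entries reported as healed after a forced full crawl is expected when replicas had silently drifted; for reference, a sketch of the commands involved (volume name is a placeholder):

    gluster volume heal myvol full     # crawl all bricks and queue anything out of sync
    gluster volume heal myvol info     # entries still pending heal, per brick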
2017 Jun 01
3
Heal operation detail of EC volumes
...affic. > Only heal operation is happening on cluster right now, no client/other > traffic. I see constant 7-8MB write to healing brick disk. So where is > the missing traffic? Not sure about your configuration, but probably you are seeing the result of having the SHD of each server doing heals. That would explain the network traffic you have. Suppose that all SHDs but the one on the damaged brick are working. In this case 19 servers will pick 16 fragments each. This gives 19 * 16 = 304 fragments to be requested. EC balances the reads among all available servers, and there's a cha...
2017 Jun 01
0
Heal operation detail of EC volumes
...ation is happening on cluster right now, no client/other >> traffic. I see constant 7-8MB write to healing brick disk. So where is >> the missing traffic? > > > Not sure about your configuration, but probably you are seeing the result of > having the SHD of each server doing heals. That would explain the network > traffic you have. > > Suppose that all SHDs but the one on the damaged brick are working. In this > case 19 servers will pick 16 fragments each. This gives 19 * 16 = 304 > fragments to be requested. EC balances the reads among all available > serve...
2012 Feb 05
2
Would a difference in size (and content) of a file on replicated bricks be healed?
Hi... Started playing with gluster, and the heal function is my "target" for testing. Short description of my test
----------------------------
* 4 replicas on a single machine
* glusterfs mounted locally
* Create a file in the glusterfs-mounted directory: date > data.txt
* Append to the file on one of the bricks: hostname >> data.txt
* Trigger a self-heal with: stat data.txt =>
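The whole test collapses to three commands; a sketch assuming the volume is mounted at /mnt/gluster and one brick lives under /bricks/b1 (both paths hypothetical):

    date > /mnt/gluster/data.txt       # create the file through the mount
    hostname >> /bricks/b1/data.txt    # diverge one replica by writing to the brick directly
    stat /mnt/gluster/data.txt         # a lookup through the mount should trigger self-heal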
2018 Feb 08
5
self-heal trouble after changing arbiter brick
Hi folks, I'm having trouble moving an arbiter brick to another server because of I/O load issues. My setup is as follows:
    # gluster volume info
    Volume Name: myvol
    Type: Distributed-Replicate
    Volume ID: 43ba517a-ac09-461e-99da-a197759a7dc8
    Status: Started
    Snapshot Count: 0
    Number of Bricks: 3 x (2 + 1) = 9
    Transport-type: tcp
    Bricks:
    Brick1: gv0:/data/glusterfs
    Brick2: gv1:/data/glusterfs
    Brick3:
2017 Jun 02
1
Heal operation detail of EC volumes
...ations? Yes, that matches what I see. So 19 files are being healed in parallel by 19 SHD processes. I thought only one file was being healed at a time. Then what is the meaning of the disperse.shd-max-threads parameter? If I set it to 2, will each SHD thread heal two files at the same time? Each SHD normally heals a single file at a time. However, there's an SHD on each node, so all of them are trying to process dirty files. If one picks a file to heal, the other SHDs will skip that one and try another. disperse.shd-max-threads indicates how many heals each SHD can do simultaneously. Setting a valu...
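So the parallelism observed comes from one heal per SHD across the 19 healthy nodes; disperse.shd-max-threads raises the per-daemon count on top of that. A sketch (volume name is a placeholder):

    # let each self-heal daemon work on two files concurrently
    gluster volume set ec-vol disperse.shd-max-threads 2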
2017 Jun 08
1
Heal operation detail of EC volumes
...now, no client/other > >> traffic. I see constant 7-8MB write to healing brick disk. So where is > >> the missing traffic? > > > > > > Not sure about your configuration, but probably you are seeing the > result of > > having the SHD of each server doing heals. That would explain the network > > traffic you have. > > > > Suppose that all SHDs but the one on the damaged brick are working. In > this > > case 19 servers will pick 16 fragments each. This gives 19 * 16 = 304 > > fragments to be requested. EC balances the reads...
2018 Feb 09
0
self-heal trouble after changing arbiter brick
Hi Karthik, Thank you for your reply. The heal is still ongoing, as /var/log/glusterfs/glustershd.log keeps growing, and there are a lot of pending entries in the heal info. The gluster version is 3.10.9 and 3.10.10 (the version update is in progress). It doesn't have info summary [yet?], and the heal info is way too long to attach here. (It takes more than 20 minutes just to collect
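For reference, releases newer than the 3.10 series mentioned here added a condensed view that sidesteps the 20-minute listing; a sketch, with a placeholder volume name:

    gluster volume heal myvol info summary   # per-brick pending counts instead of every entry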
2012 May 03
2
[3.3 beta3] When should the self-heal daemon be triggered?
Hi, I eventually installed three Debian unstable machines, so I could install the GlusterFS 3.3 beta3. I have a question about the self-heal daemon. I'm trying to get a volume which is replicated, with two bricks. I started up the volume, wrote some data, then killed one machine, and then wrote more data to a few folders from the client machine. Then I restarted the second brick server.
2017 Nov 09
2
GlusterFS healing questions
Hi, You can set disperse.shd-max-threads to 2 or 4 in order to make heal faster. This makes my heal times 2-3x faster. You can also play with disperse.self-heal-window-size to read more bytes at a time, but I did not test it. On Thu, Nov 9, 2017 at 4:47 PM, Xavi Hernandez <jahernan at redhat.com> wrote: > Hi Rolf, > > answers follow inline... > > On Thu, Nov 9, 2017 at
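A sketch of setting the two tunables mentioned (volume name is a placeholder; the values are only the ones suggested above, not recommendations):

    gluster volume set ec-vol disperse.shd-max-threads 4
    gluster volume set ec-vol disperse.self-heal-window-size 2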
2017 Nov 09
2
GlusterFS healing questions
Hi, We ran a test on GlusterFS 3.12.1 with erasure-coded 8+2 volumes on 10 bricks (default config, tested with 100GB, 200GB, and 400GB brick sizes, 10Gbit NICs). 1. Tests show that healing takes about double the time on 200GB vs 100GB, and a bit under double on 400GB vs 200GB brick sizes. Is this expected behaviour? In light of this, 6.4TB brick sizes would take ~377 hours to heal. 100GB
2017 Nov 09
0
GlusterFS healing questions
Hi Rolf, answers follow inline... On Thu, Nov 9, 2017 at 3:20 PM, Rolf Larsen <rolf at jotta.no> wrote: > Hi, > > We ran a test on GlusterFS 3.12.1 with erasure-coded 8+2 volumes on 10 > bricks (default config, tested with 100GB, 200GB, 400GB brick sizes, 10Gbit > NICs) > > 1. > Tests show that healing takes about double the time on healing 200GB vs > 100, and
2017 Nov 09
0
GlusterFS healing questions
Someone on the #gluster-users irc channel said the following: "Decreasing features.locks-revocation-max-blocked to an absurdly low number is letting our distributed-disperse set heal again." Is this something to consider? Does anyone else have experience with tweaking this to speed up healing? Sent from my iPhone > On 9 Nov 2017, at 18:00, Serkan Çoban <cobanserkan at
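If someone wants to try the quoted workaround, it maps to a single volume option; the value below is only a placeholder for the "absurdly low number", not a recommendation:

    gluster volume set myvol features.locks-revocation-max-blocked 1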
2018 Feb 09
1
self-heal trouble after changing arbiter brick
Hi Karthik, Thank you very much, you made me much more relaxed. Below is getfattr output for a file from all the bricks:
    root at gv2 ~ # getfattr -d -e hex -m . /data/glusterfs/testset/306/30677af808ad578916f54783904e6342.pack
    getfattr: Removing leading '/' from absolute path names
    # file: data/glusterfs/testset/306/30677af808ad578916f54783904e6342.pack
2018 Mar 16
2
Disperse volume recovery and healing
Xavi, does that mean that even if every node was rebooted one at a time, without issuing a heal, the volume would have no issues after running gluster volume heal [volname] once all bricks are back online?
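For reference, the command sequence being asked about, with a placeholder volume name:

    gluster volume heal myvol          # queue pending heals once all bricks are back online
    gluster volume heal myvol info     # verify nothing remains pending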