Displaying 20 results from an estimated 1069 matches for "heal".
2013 Feb 19
1
Problems running dbench on 3.3
...ient5/~dmtmp/PWRPNT/PCBENCHM.PPT failed for handle 10003 (No such file or directory)
(610) ERROR: handle 10003 was not found,
Child failed with status 1
And the logs are full of things like this (ignore the initial timestamp, that's from our logging):
[2013-02-19 14:38:38.714493] E [afr-self-heal-common.c:2160:afr_self_heal_completion_cbk] 0-replicate0: background data missing-entry gfid self-heal failed on /clients/client5/~dmtmp/PM/MOVED.DOC,
[2013-02-19 14:38:38.724494] E [afr-self-heal-common.c:2160:afr_self_heal_completion_cbk] 0-replicate0: background entry self-heal failed on /cli...
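Not part of the original report, but a minimal sketch of how one might inspect such self-heal failures, assuming a brick path of /bricks/brick1 (an illustrative path, not from the logs):
# Run on each replica's brick server; non-zero trusted.afr.* values mean pending heals.
getfattr -d -m . -e hex /bricks/brick1/clients/client5/~dmtmp/PM/MOVED.DOC
# List what the self-heal daemon still considers pending (GlusterFS 3.3+):
gluster volume heal <volname> info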
2008 Dec 10
3
AFR healing problem after returning one node.
I've got a configuration which, in short, combines afr and
unify - the servers export n[1-3]-brick[12] and n[1-3]-ns, and the client has the
cluster configuration:
volume afr-ns
type cluster/afr
subvolumes n1-ns n2-ns n3-ns
option data-self-heal on
option metadata-self-heal on
option entry-self-heal on
end-volume
volume afr1
type cluster/afr
subvolumes n1-brick2 n2-brick1
option data-self-heal on
option metadata-self-heal on
option entry-self-heal on
end-volume
volume afr2
type cluster/afr
subvolumes n...
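Not from the original mail, but for a pre-3.3 setup like this there is no self-heal daemon; healing is triggered by accessing files through the client mount. A rough sketch, assuming the volume is mounted at /mnt/glusterfs (an illustrative path):
# Stat and read the first byte of every file so AFR looks up each entry,
# compares the replicas and heals stale copies on the returned node.
find /mnt/glusterfs -type f -exec head -c1 '{}' \; > /dev/null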
2017 Oct 26
0
not healing one file
Hey Richard,
Could you share the following information, please?
1. gluster volume info <volname>
2. getfattr output of that file from all the bricks
getfattr -d -e hex -m . <brickpath/filepath>
3. glustershd & glfsheal logs
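Not spelled out in the mail, but a sketch of how that data could be collected, assuming a volume called myvol and a brick path of /data/brick (both illustrative):
gluster volume info myvol
# Run on every brick server, against the file's path inside the brick:
getfattr -d -e hex -m . /data/brick/path/to/file
# glustershd and glfsheal logs are usually found here:
ls -l /var/log/glusterfs/glustershd.log /var/log/glusterfs/glfsheal-myvol.log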
Regards,
Karthik
On Thu, Oct 26, 2017 at 10:21 AM, Amar Tumballi <atumball at redhat.com> wrote:
> On a side note, try the recently released health report tool and see if it
> diagnoses any issues in the setup. Currently you may have to run it on all
> three machines.
>
> ...
2017 Oct 26
3
not healing one file
On a side note, try the recently released health report tool and see if it
diagnoses any issues in the setup. Currently you may have to run it on all
three machines.
On 26-Oct-2017 6:50 AM, "Amar Tumballi" <atumball at redhat.com> wrote:
> Thanks for this report. This week many of the developers are at Gluster
> S...
2017 Oct 26
2
not healing one file
...41, Karthik Subrahmanya wrote:
> Hey Richard,
>
> Could you share the following information, please?
> 1. gluster volume info <volname>
> 2. getfattr output of that file from all the bricks
> getfattr -d -e hex -m . <brickpath/filepath>
> 3. glustershd & glfsheal logs
>
> Regards,
> Karthik
>
> On Thu, Oct 26, 2017 at 10:21 AM, Amar Tumballi <atumball at redhat.com> wrote:
>
> On a side note, try the recently released health report tool and see if
> it diagnoses any issues...
2013 Jul 24
2
Healing in glusterfs 3.3.1
Hi,
I have a glusterfs 3.3.1 setup with 2 servers and a replicated volume used
by 4 clients.
Sometimes from some clients I can't access some of the files. After I force
a full heal on the brick I see several files healed. Is this behavior
normal?
Thanks
--
Paulo Silva <paulojjs at gmail.com>
2017 Jun 01
3
Heal operation detail of EC volumes
Hi Serkan,
On 30/05/17 10:22, Serkan Çoban wrote:
> Ok, I understand that the heal operation takes place on the server side.
> In this case I should see X KB of outgoing network traffic from each of the
> 16 servers and 16X KB of incoming traffic to the failed brick server, right?
> So that process will get 16 chunks, recalculate our chunk, and write it to disk.
That should be the normal operatio...
2017 Jun 01
0
Heal operation detail of EC volumes
>Is it possible that this matches your observations ?
Yes, that matches what I see. So 19 files are being healed in parallel by 19
SHD processes. I thought only one file was being healed at a time.
Then what is the meaning of the disperse.shd-max-threads parameter? If I
set it to 2, will each SHD thread heal two files at the same time?
>How many IOPS can handle your bricks ?
Bricks are 7200 RPM NL-SAS disks, 70-80 random IOPS max. But the write
pattern seems sequential, 30-40MB bu...
2012 Feb 05
2
Would difference in size (and content) of a file on replicated bricks be healed?
Hi...
Started playing with Gluster, and the heal function is my "target" for
testing.
Short description of my test
----------------------------
* 4 replicas on single machine
* glusterfs mounted locally
* Create file on glusterfs-mounted directory: date >data.txt
* Append to file on one of the bricks: hostname >>data.txt
* T...
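Not part of the original mail, but the same experiment with the heal commands written out, assuming a volume named testvol mounted at /mnt/gluster with a brick at /bricks/b1 (all illustrative names):
date > /mnt/gluster/data.txt        # write through the glusterfs mount
hostname >> /bricks/b1/data.txt     # append directly on one brick, behind gluster's back
gluster volume heal testvol full    # force a full self-heal crawl
gluster volume heal testvol info    # list entries that are (still) pending heal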
2018 Feb 08
5
self-heal trouble after changing arbiter brick
...fs
Brick5: gv3:/data/glusterfs
Brick6: gv1:/data/gv23-arbiter (arbiter)
Brick7: gv4:/data/glusterfs
Brick8: gv5:/data/glusterfs
Brick9: pluto:/var/gv45-arbiter (arbiter)
Options Reconfigured:
nfs.disable: on
transport.address-family: inet
storage.owner-gid: 1000
storage.owner-uid: 1000
cluster.self-heal-daemon: enable
The gv23-arbiter is the brick that was recently moved from another server (chronos) using the following command:
# gluster volume replace-brick myvol chronos:/mnt/gv23-arbiter gv1:/data/gv23-arbiter commit force
volume replace-brick: success: replace-brick commit force operation succ...
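Not in the original mail, but a sketch of how the rebuild of the new arbiter is typically watched after such a replace-brick (volume name as in the command above):
# Confirm the new brick and the self-heal daemon are online:
gluster volume status myvol
# Watch the pending-heal backlog shrink as the arbiter is populated:
gluster volume heal myvol info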
2017 Jun 02
1
Heal operation detail of EC volumes
Hi Serkan,
On Thursday, June 01, 2017 21:31 CEST, Serkan Çoban <cobanserkan at gmail.com> wrote:
>Is it possible that this matches your observations?
Yes, that matches what I see. So 19 files are being healed in parallel by 19
SHD processes. I thought only one file was being healed at a time.
Then what is the meaning of the disperse.shd-max-threads parameter? If I
set it to 2, will each SHD thread heal two files at the same time?
Each SHD normally heals a single file at a time. However, there's an SHD on each node, so all of them are trying to process dirty files. If one p...
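Not part of the reply, but a quick sketch of how to see this in practice (the volume name myvol is illustrative):
# One glustershd process runs per server; each handles its local bricks:
ps -ef | grep '[g]lustershd'
# Current per-daemon parallelism for disperse heals:
gluster volume get myvol disperse.shd-max-threads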
2017 Jun 08
1
Heal operation detail of EC volumes
On Fri, Jun 2, 2017 at 1:01 AM, Serkan Çoban <cobanserkan at gmail.com> wrote:
> >Is it possible that this matches your observations ?
> Yes, that matches what I see. So 19 files are being healed in parallel by 19
> SHD processes. I thought only one file was being healed at a time.
> Then what is the meaning of the disperse.shd-max-threads parameter? If I
> set it to 2, will each SHD thread heal two files at the same time?
>
Yes that is the idea.
>
> >How many IOPS can handle your bricks ?
> Bricks are 7200RPM NL-SAS disks. 70-80 random IO...
2018 Feb 09
0
self-heal trouble after changing arbiter brick
Hi Karthik,
Thank you for your reply. The heal is still ongoing: /var/log/glusterfs/glustershd.log keeps growing, and there are a lot of pending entries in the heal info.
The gluster versions are 3.10.9 and 3.10.10 (a version update is in progress). They don't have heal info summary [yet?], and the heal info is way too long to attach h...
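Not in the original mail, but when heal info is too long to share, the per-brick counters give a compact picture; a sketch, assuming the volume is called myvol:
# Number of entries pending heal on each brick, without listing them:
gluster volume heal myvol statistics heal-count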
2012 May 03
2
[3.3 beta3] When should the self-heal daemon be triggered?
Hi,
I eventually installed three Debian unstable machines, so I could
install the GlusterFS 3.3 beta3.
I have a question about the self-heal daemon.
I'm trying to get a volume which is replicated, with two bricks.
I started up the volume, wrote some data, then killed one machine, and
then wrote more data to a few folders from the client machine.
Then I restarted the second brick server.
At this point, the second server seemed to...
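Not part of the original mail, but 3.3 added CLI commands for exactly this situation; a sketch, assuming the volume is named testvol:
gluster volume heal testvol          # ask the self-heal daemon to heal indexed entries now
gluster volume heal testvol full     # crawl the whole volume and heal everything it finds
gluster volume heal testvol info     # show which entries are still pending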
2017 Nov 09
2
GlusterFS healing questions
Hi,
You can set disperse.shd-max-threads to 2 or 4 in order to make healing
faster. This makes my heal times 2-3x faster.
You can also play with disperse.self-heal-window-size to read more
bytes at a time, but I did not test it.
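As a sketch of how those two tunables would be applied (the volume name myvol and the values are assumptions, not from the thread):
# Let each self-heal daemon work on up to 4 files in parallel:
gluster volume set myvol disperse.shd-max-threads 4
# Heal more blocks of a file at a time (the "read more bytes" tunable above):
gluster volume set myvol disperse.self-heal-window-size 2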
On Thu, Nov 9, 2017 at 4:47 PM, Xavi Hernandez <jahernan at redhat.com> wrote:
> Hi Rolf,
>
> answers follow inline...
>
>...
2017 Nov 09
2
GlusterFS healing questions
Hi,
We ran a test on GlusterFS 3.12.1 with erasure-coded 8+2 volumes on 10
bricks (default config, tested with 100 GB, 200 GB and 400 GB brick sizes, 10 Gbit
NICs).
1.
Tests show that healing takes about double the time on 200 GB vs
100 GB bricks, and a bit under double on 400 GB vs 200 GB bricks. Is this
expected behaviour? In light of this, 6.4 TB bricks would take ~377
hours to heal.
100 GB brick heal: 18 hours (8+2)
200 GB brick heal: 37 hours (8+2) +205%
400 GB brick hea...
2017 Nov 09
0
GlusterFS healing questions
...line...
On Thu, Nov 9, 2017 at 3:20 PM, Rolf Larsen <rolf at jotta.no> wrote:
> Hi,
>
> We ran a test on GlusterFS 3.12.1 with erasure-coded 8+2 volumes on 10
> bricks (default config, tested with 100 GB, 200 GB and 400 GB brick sizes, 10 Gbit
> NICs).
>
> 1.
> Tests show that healing takes about double the time on 200 GB vs
> 100 GB bricks, and a bit under double on 400 GB vs 200 GB bricks. Is this
> expected behaviour? In light of this, 6.4 TB bricks would take ~377
> hours to heal.
>
> 100 GB brick heal: 18 hours (8+2)
> 200 GB brick heal: 37 hour...
2017 Nov 09
0
GlusterFS healing questions
Someone on the #gluster-users IRC channel said the following:
"Decreasing features.locks-revocation-max-blocked to an absurdly low number is letting our distributed-disperse set heal again."
Is this something to consider? Does anyone else have experience with tweaking this to speed up healing?
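Not from the thread, but a sketch of what that IRC suggestion amounts to; the volume name myvol and the value are assumptions, and since this trades lock fairness for heal progress it should be treated as experimental:
# Revoke a held lock once this many requests are queued up behind it:
gluster volume set myvol features.locks-revocation-max-blocked 8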
Sent from my iPhone
> On 9 Nov 2017, at 18:00, Serkan Çoban <cobanserkan at gmail.com> wrote:
>
> Hi,
>
> You can set disperse.shd-max-threads to 2 or 4 in...
2018 Feb 09
1
self-heal trouble after changing arbiter brick
...M, "Karthik Subrahmanya" <ksubrahm at redhat.com (mailto:%22Karthik%20Subrahmanya%22%20<ksubrahm at redhat.com>)> wrote:
On Fri, Feb 9, 2018 at 3:23 PM, Seva Gluschenko <gvs at webkontrol.ru (mailto:gvs at webkontrol.ru)> wrote:
Hi Karthik,
Thank you for your reply. The heal is still ongoing: /var/log/glusterfs/glustershd.log keeps growing, and there are a lot of pending entries in the heal info.
The gluster versions are 3.10.9 and 3.10.10 (a version update is in progress). They don't have heal info summary [yet?], and the heal info is way too long to attach h...
2018 Mar 16
2
Disperse volume recovery and healing
Xavi, does that mean that even if every node was rebooted one at a time, without issuing a heal, the volume would have no issues after running gluster volume heal [volname] once all bricks are back online?
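Not part of the question, but the conservative per-node cycle it contrasts with looks roughly like this as a sketch (the volume name myvol is an assumption):
# After each node comes back, confirm its bricks are online:
gluster volume status myvol
# Trigger an index heal and wait until nothing is left pending before rebooting the next node:
gluster volume heal myvol
gluster volume heal myvol info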
________________________________
From: Xavi Hernandez <jahernan at redhat.com>
Sent: Thursday, March 15, 2018 12:09:05 AM
To: Victor T
Cc: gluster-users at gluster.org
Subject:...