Displaying 20 results from an estimated 10000 matches similar to: "Disperse volume recovery and healing"
2018 Mar 16
2
Disperse volume recovery and healing
Xavi, does that mean that even if every node was rebooted one at a time, even without issuing a heal, the volume would have no issues after running gluster volume heal [volname] when all bricks are back online?
________________________________
From: Xavi Hernandez <jahernan at redhat.com>
Sent: Thursday, March 15, 2018 12:09:05 AM
To: Victor T
Cc: gluster-users at gluster.org
Subject:
2018 Mar 15
0
Disperse volume recovery and healing
Hi Victor,
On Wed, Mar 14, 2018 at 12:30 AM, Victor T <hero_of_nothing_1 at hotmail.com>
wrote:
> I have a question about how disperse volumes handle brick failure. I'm
> running version 3.10.10 on all systems. If I have a disperse volume in a
> 4+2 configuration with 6 servers each serving 1 brick, and maintenance
> needs to be performed on all systems, are there any
2018 Mar 16
0
Disperse volume recovery and healing
On Fri, Mar 16, 2018 at 4:57 AM, Victor T <hero_of_nothing_1 at hotmail.com>
wrote:
> Xavi, does that mean that even if every node was rebooted one at a time,
> even without issuing a heal, the volume would have no issues after
> running gluster volume heal [volname] when all bricks are back online?
>
No. After bringing up one brick and before stopping the next one, you need
2018 Mar 18
1
Disperse volume recovery and healing
No. After bringing up one brick and before stopping the next one, you need to be sure that there are no damaged files. You shouldn't reboot a node if "gluster volume heal <volname> info" shows damaged files.
What happens in this case then? I'm thinking about a situation where the servers are kept in an environment that we don't control - i.e. the cloud. If the VMs are
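A minimal sketch of that check (wait until "gluster volume heal <volname> info" shows nothing pending before stopping the next node), assuming a bash shell on a node with the gluster CLI and that heal info prints a "Number of entries:" counter per brick; VOLNAME is a placeholder:

  #!/bin/bash
  # Hedged sketch: block until no heals are pending before taking down
  # the next node. Assumes all bricks are currently up.
  VOLNAME=myvol   # placeholder volume name
  while true; do
      # Sum the per-brick "Number of entries:" counters from heal info.
      pending=$(gluster volume heal "$VOLNAME" info \
                | awk '/Number of entries:/ {sum += $NF} END {print sum + 0}')
      if [ "$pending" -eq 0 ]; then
          echo "No pending heals on $VOLNAME; safe to stop the next node."
          break
      fi
      echo "$pending entries still pending heal on $VOLNAME; waiting..."
      sleep 30
  done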
2018 Mar 20
0
Disperse volume recovery and healing
On Tue, Mar 20, 2018 at 5:26 AM, Victor T <hero_of_nothing_1 at hotmail.com>
wrote:
> That makes sense. In the case of "file damage," it would show up as files
> that could not be healed in logfiles or gluster volume heal [volume] info?
>
If the damage affects more bricks than the volume redundancy, then probably
yes. These files or directories will appear in
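A rough way to see those entries, assuming the usual "gluster volume heal <volname> info" layout of a brick header followed by the pending entries (VOLNAME is a placeholder):

  # Full listing of entries that still need (or cannot get) heal:
  gluster volume heal VOLNAME info
  # Quick per-brick overview of how many entries are pending:
  gluster volume heal VOLNAME info | grep -E 'Brick|Number of entries'
  # Entries that stay listed even when all bricks are up and the
  # self-heal daemon has had time to run are the candidates for
  # "damaged beyond the volume redundancy".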
2017 Nov 09
2
GlusterFS healing questions
Hi,
You can set disperse.shd-max-threads to 2 or 4 in order to make heal
faster. This makes my heal times 2-3x faster.
Also you can play with disperse.self-heal-window-size to read more
bytes at one time, but I did not test it.
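A hedged sketch of those two settings (the option names are taken from this post; the values are examples rather than tested recommendations, and VOLNAME is a placeholder):

  # Allow each self-heal daemon to heal more files in parallel:
  gluster volume set VOLNAME disperse.shd-max-threads 4
  # Read larger chunks per heal iteration (untested, per the post):
  gluster volume set VOLNAME disperse.self-heal-window-size 2
  # Check what is currently configured:
  gluster volume get VOLNAME disperse.shd-max-threads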
On Thu, Nov 9, 2017 at 4:47 PM, Xavi Hernandez <jahernan at redhat.com> wrote:
> Hi Rolf,
>
> answers follow inline...
>
> On Thu, Nov 9, 2017 at
2017 Nov 09
2
GlusterFS healing questions
Hi,
We ran a test on GlusterFS 3.12.1 with erasure-coded volumes 8+2 with 10
bricks (default config, tested with 100gb, 200gb, 400gb brick sizes, 10gbit
nics)
1.
Tests show that healing takes about double the time for 200gb vs
100gb, and a bit under double for 400gb vs 200gb brick sizes. Is this
expected behaviour? In light of this, 6.4 tb brick sizes would take ~377
hours to heal.
100gb
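As a back-of-the-envelope check of that extrapolation (assuming, as the 100/200/400gb numbers above suggest, that heal time scales roughly linearly with brick size): a 6.4 TB brick is 64 times a 100 GB brick, so ~377 hours corresponds to roughly 5.9 hours per 100 GB of brick data.

  # 6.4 TB / 100 GB = 64 ; 377 h / 64 ~= 5.9 h per 100 GB
  echo "scale=1; 377 / 64" | bc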
2017 Nov 09
0
GlusterFS healing questions
Someone on the #gluster-users IRC channel said the following:
"Decreasing features.locks-revocation-max-blocked to an absurdly low number is letting our distributed-disperse set heal again."
Is this something to consider? Does anyone else have experience with tweaking this to speed up healing?
Sent from my iPhone
> On 9 Nov 2017, at 18:00, Serkan Çoban <cobanserkan at
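For reference, a hedged sketch of the tweak quoted above (the option name is from the quote; the value shown is only an example of a low setting, not a recommendation, and VOLNAME is a placeholder):

  # Per the quote: lower the blocked-locks threshold at which locks
  # get revoked, so heals are not stuck behind long lock queues.
  gluster volume set VOLNAME features.locks-revocation-max-blocked 4
  # Put the option back to its default if it causes problems:
  gluster volume reset VOLNAME features.locks-revocation-max-blocked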
2017 Nov 09
0
GlusterFS healing questions
Hi Rolf,
answers follow inline...
On Thu, Nov 9, 2017 at 3:20 PM, Rolf Larsen <rolf at jotta.no> wrote:
> Hi,
>
> We ran a test on GlusterFS 3.12.1 with erasure-coded volumes 8+2 with 10
> bricks (default config, tested with 100gb, 200gb, 400gb brick sizes, 10gbit
> nics)
>
> 1.
> Tests show that healing takes about double the time on healing 200gb vs
> 100, and
2018 Mar 13
4
Can't heal a volume: "Please check if all brick processes are running."
Hi Anatoliy,
The heal command is basically used to heal any mismatching contents between
replica copies of the files.
For the command "gluster volume heal <volname>" to succeed, you should have
the self-heal-daemon running,
which is true only if your volume is of type replicate/disperse.
In your case you have a plain distribute volume where you do not store the
replica of any
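A quick way to confirm the volume type before reaching for the heal command, assuming the standard gluster CLI (VOLNAME is a placeholder):

  # Heal only applies to replicate/disperse volumes, so check the type first:
  gluster volume info VOLNAME | grep '^Type:'
  # A plain Distribute volume has no replicas and no self-heal daemon;
  # check brick health with volume status instead:
  gluster volume status VOLNAME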
2018 Mar 14
2
Can't heal a volume: "Please check if all brick processes are running."
On Wed, Mar 14, 2018 at 3:36 PM, Anatoliy Dmytriyev <tolid at tolid.eu.org>
wrote:
> Hi Karthik,
>
>
> Thanks a lot for the explanation.
>
> Does it mean a distributed volume's health can be checked only by the "gluster
> volume status" command?
>
Yes. I am not aware of any other command which can give the status of plain
distribute volume which is similar to
2018 Feb 02
1
How to trigger a resync of a newly replaced empty brick in replicate config ?
Hi,
I simplified the config in my first email, but I actually have 2x4 servers in replicate-distribute, with 4 bricks each for 6 of them and 2 bricks for the remaining 2. Full healing will just take ages... for just a single brick to resync!
> gluster v status home
volume status home
Status of volume: home
Gluster process TCP Port RDMA Port Online Pid
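For context, the usual way to kick off that resync on a replicate(-distribute) volume, assuming the standard CLI (the volume name "home" is taken from the status output above):

  # Trigger a full self-heal crawl so the newly replaced empty brick
  # gets repopulated from its replica partner(s):
  gluster volume heal home full
  # Watch what is still pending:
  gluster volume heal home info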
2019 Dec 24
1
GFS performance under heavy traffic
Hi David,
On Dec 24, 2019 02:47, David Cunningham <dcunningham at voisonics.com> wrote:
>
> Hello,
>
> In testing we found that actually the GFS client having access to all 3 nodes made no difference to performance. Perhaps that's because the 3rd node that wasn't accessible from the client before was the arbiter node?
It makes sense, as no data is being generated towards
2018 Mar 19
3
Gluster very poor performance when copying small files (1x (2+1) = 3, SSD)
Hi,
On 03/19/2018 03:42 PM, TomK wrote:
> On 3/19/2018 5:42 AM, Ondrej Valousek wrote:
> Removing NFS or NFS Ganesha from the equation, not very impressed on my
> own setup either. For the writes it's doing, that's a lot of CPU usage
> in top. Seems bottlenecked via a single execution core somewhere trying
> to facilitate reads/writes to the other bricks.
>
>
2018 Mar 14
0
Can't heal a volume: "Please check if all brick processes are running."
Hi Karthik,
Thanks a lot for the explanation.
Does it mean a distributed volume's health can be checked only by the "gluster
volume status" command?
And one more question: cluster.min-free-disk is 10% by default. What
kind of "side effects" can we face if this option is reduced to,
for example, 5%? Could you point to any best practice document(s)?
Regards,
Anatoliy
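A hedged sketch of the change being asked about (cluster.min-free-disk is the option named above, 5% is the example value from the question, and VOLNAME is a placeholder; broadly, lowering the reserve lets DHT keep placing new files on bricks with less free space, so nearly-full bricks can hit out-of-space errors sooner):

  # Lower the per-brick free-space reserve from the 10% default to 5%:
  gluster volume set VOLNAME cluster.min-free-disk 5%
  # Confirm the setting:
  gluster volume get VOLNAME cluster.min-free-disk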
2018 Mar 14
0
Can't heal a volume: "Please check if all brick processes are running."
On Wed, Mar 14, 2018 at 5:42 PM, Karthik Subrahmanya <ksubrahm at redhat.com>
wrote:
>
>
> On Wed, Mar 14, 2018 at 3:36 PM, Anatoliy Dmytriyev <tolid at tolid.eu.org>
> wrote:
>
>> Hi Karthik,
>>
>>
>> Thanks a lot for the explanation.
>>
>> Does it mean a distributed volume health can be checked only by "gluster
>> volume
2018 Mar 13
0
Can't heal a volume: "Please check if all brick processes are running."
Can we add a smarter error message for this situation by checking volume
type first?
Cheers,
Laura B
On Wednesday, March 14, 2018, Karthik Subrahmanya <ksubrahm at redhat.com>
wrote:
> Hi Anatoliy,
>
> The heal command is basically used to heal any mismatching contents
> between replica copies of the files.
> For the command "gluster volume heal <volname>"
2017 Jun 01
0
Heal operation detail of EC volumes
>Is it possible that this matches your observations?
Yes, that matches what I see. So 19 files are being healed in parallel by 19
SHD processes. I thought only one file was being healed at a time.
Then what is the meaning of the disperse.shd-max-threads parameter? If I
set it to 2, will each SHD thread heal two files at the same time?
>How many IOPS can your bricks handle?
Bricks are 7200 RPM NL-SAS
2017 Jun 01
3
Heal operation detail of EC volumes
Hi Serkan,
On 30/05/17 10:22, Serkan Çoban wrote:
> Ok, I understand that the heal operation takes place on the server side. In
> this case I should see X KB of outbound network traffic from 16 servers and
> 16X KB of input traffic to the failed brick server, right? So that process
> will get 16 chunks, recalculate the missing chunk and write it to disk.
That should be the normal operation for a single
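A rough arithmetic sketch of that traffic pattern, assuming a 16+4 disperse layout (the thread mentions 16 servers feeding the rebuild; the exact configuration is an assumption on my part):

  # To rebuild the fragment of a file of size S on the failed brick:
  #   reads:  16 fragments of S/16 each -> about S total pulled over the
  #           network from the healthy bricks,
  #   write:  1 fragment of S/16        -> written to the healed brick.
  # Example for a 1 GiB file (sizes in MiB):
  echo "read total: $((16 * (1024 / 16))) MiB, written: $((1024 / 16)) MiB"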
2018 Mar 13
0
Can't heal a volume: "Please check if all brick processes are running."
Hi,
Maybe someone can point me to documentation or explain this? I can't
find it myself.
Do we have any other useful resources besides doc.gluster.org? As far as I
can see, many gluster options are not described there, or there is no
explanation of what they do...
On 2018-03-12 15:58, Anatoliy Dmytriyev wrote:
> Hello,
>
> We have a very fresh gluster 3.10.10 installation.
> Our volume