Results similar to: "Impossible to add new brick"

Displaying 20 results from an estimated 8000 matches similar to: "Impossible to add new brick"

2018 Feb 09
0
self-heal trouble after changing arbiter brick
Hey, did the heal complete while you still have some entries pending heal? If yes, can you provide the following information to debug the issue: 1. Which version of gluster you are running 2. gluster volume heal <volname> info summary or gluster volume heal <volname> info 3. getfattr -d -e hex -m . <filepath-on-brick> output of any one of the files which is pending heal, from all
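A minimal sketch of the requested diagnostics, assuming a volume named testvol and a brick mounted at /data/brick1 (both placeholder names):

# gluster --version
# gluster volume heal testvol info summary
# gluster volume heal testvol info
# getfattr -d -e hex -m . /data/brick1/path/to/pending-file

The getfattr command must be run against the file's path on each brick's local filesystem, not through the client mount, since the trusted.* xattrs are not visible from the client side.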
2018 Mar 13
0
Can't heal a volume: "Please check if all brick processes are running."
Can we add a smarter error message for this situation by checking volume type first? Cheers, Laura B On Wednesday, March 14, 2018, Karthik Subrahmanya <ksubrahm at redhat.com> wrote: > Hi Anatoliy, > > The heal command is basically used to heal any mismatching contents > between replica copies of the files. > For the command "gluster volume heal <volname>"
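For reference, the two heal variants being discussed, assuming a replica volume named testvol (a placeholder):

# gluster volume heal testvol
# gluster volume heal testvol full

The first triggers an index heal of only the entries already marked as needing heal; the second crawls the entire volume and is the heavier operation.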
2018 Mar 14
0
Can't heal a volume: "Please check if all brick processes are running."
Hi Karthik, Thanks a lot for the explanation. Does it mean that the health of a distributed volume can be checked only by the "gluster volume status" command? And one more question: cluster.min-free-disk is 10% by default. What kind of "side effects" can we face if this option is reduced to, for example, 5%? Could you point to any best practice document(s)? Regards, Anatoliy
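A sketch of inspecting and changing that option, assuming a volume named testvol (a placeholder):

# gluster volume get testvol cluster.min-free-disk
# gluster volume set testvol cluster.min-free-disk 5%

The main side effect of lowering it is that DHT keeps placing new files on nearly full bricks for longer, so a burst of writes can fill a brick completely and fail with ENOSPC before a rebalance has a chance to spread the data.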
2018 Mar 14
0
Can't heal a volume: "Please check if all brick processes are running."
On Wed, Mar 14, 2018 at 5:42 PM, Karthik Subrahmanya <ksubrahm at redhat.com> wrote: > > > On Wed, Mar 14, 2018 at 3:36 PM, Anatoliy Dmytriyev <tolid at tolid.eu.org> > wrote: > >> Hi Karthik, >> >> >> Thanks a lot for the explanation. >> >> Does it mean a distributed volume health can be checked only by "gluster >> volume
2018 Mar 13
0
Can't heal a volume: "Please check if all brick processes are running."
Hi, Maybe someone can point me to documentation or explain this? I can't find it myself. Do we have any other useful resources besides doc.gluster.org? As far as I can see, many gluster options are not described there, or there is no explanation of what they do... On 2018-03-12 15:58, Anatoliy Dmytriyev wrote: > Hello, > > We have a very fresh gluster 3.10.10 installation. > Our volume
2018 Feb 01
0
How to trigger a resync of a newly replaced empty brick in replicate config ?
You do not need to reset the brick if the brick path does not change. Format and mount the replacement brick, then run "gluster v start volname force". To start self heal, just run "gluster v heal volname full". On Thu, Feb 1, 2018 at 6:39 PM, Alessandro Ipe <Alessandro.Ipe at meteo.be> wrote: > Hi, > > > My volume home is configured in replicate mode (version 3.12.4) with the bricks >
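A sketch of that sequence, assuming a volume named home and a replacement brick already reformatted and mounted at its original path (placeholder names):

# gluster volume start home force
# gluster volume heal home full
# gluster volume heal home info

The force start brings the brick process back up on the empty directory, and the full heal then repopulates it from the surviving replicas.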
2017 Dec 02
1
BUG: After stop and start wrong port is advertised
On Sat, 2 Dec 2017 at 19:29, Jo Goossens <jo.goossens at hosted-power.com> wrote: > Hello Atin, > > > > > > Could you confirm this should have been fixed in 3.10.8? If so we'll test > it for sure! > Fix should be part of 3.10.8 which is awaiting release announcement. > > Regards > > Jo > > > > > > > -----Original
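One way to confirm this symptom is to compare the port glusterd advertises with the one the brick process actually listens on; a sketch, assuming a volume named testvol (a placeholder):

# gluster volume status testvol
# ss -tlnp | grep glusterfsd

If the TCP Port column in the status output differs from the port shown by ss, clients are being directed to a stale port.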
2017 Jun 19
0
different brick using the same port?
On Sun, Jun 18, 2017 at 1:40 PM, Yong Zhang <hiscal at outlook.com> wrote: > Hi, all > > > > I found two of my bricks from different volumes are using the same port > 49154 on the same glusterfs server node, is this normal? > No it's not. Can you please help me with the following information: 1. gluster --version 2. glusterd log & cmd_history logs from both
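On a default installation the requested logs live under /var/log/glusterfs; a sketch of collecting them (the management daemon log is named glusterd.log on recent releases and etc-glusterfs-glusterd.vol.log on older ones):

# gluster --version
# tar czf gluster-logs.tar.gz /var/log/glusterfs/glusterd.log /var/log/glusterfs/cmd_history.log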
2018 Feb 02
1
How to trigger a resync of a newly replaced empty brick in replicate config ?
Hi, I simplified the config in my first email, but I actually have 2x4 servers in distribute-replicate, with 4 bricks each on 6 of them and 2 bricks on the remaining 2. Full healing will just take ages... for just a single brick to resync! > gluster v status home Status of volume: home Gluster process TCP Port RDMA Port Online Pid
2018 Feb 15
2
Failover problems with gluster 3.8.8-1 (latest Debian stable)
Hi, Have you checked for any file system errors on the brick mount point? I was once facing weird I/O errors and xfs_repair fixed the issue. What about the heal? Does it report any pending heals? On Feb 15, 2018 14:20, "Dave Sherohman" <dave at sherohman.org> wrote: > Well, it looks like I've stumped the list, so I did a bit of additional > digging myself: > >
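A sketch of that check, assuming the brick sits on an XFS device /dev/sdb1 mounted at /data/brick (placeholder names); the filesystem must be unmounted, so stop or kill the brick process first:

# umount /data/brick
# xfs_repair -n /dev/sdb1
# xfs_repair /dev/sdb1
# mount /data/brick

The -n pass only reports problems without touching the disk; rerun without -n to actually repair.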
2017 Jun 19
1
different brick using the same port?
Isn't this just brick multiplexing? On June 19, 2017 5:55:54 AM PDT, Atin Mukherjee <amukherj at redhat.com> wrote: >On Sun, Jun 18, 2017 at 1:40 PM, Yong Zhang <hiscal at outlook.com> wrote: > >> Hi, all >> >> >> >> I found two of my bricks from different volumes are using the same >port >> 49154 on the same glusterfs server node, is
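Brick multiplexing (a single glusterfsd process serving several bricks over one port) is off by default and only exists from 3.10 onwards; a sketch of checking it:

# gluster volume get all cluster.brick-multiplex

If it reports off, or the option does not exist on the installed release, two bricks sharing a port is indeed a bug rather than multiplexing.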
2018 Feb 09
1
self-heal trouble after changing arbiter brick
Hi Karthik, Thank you very much, you made me much more relaxed. Below is getfattr output for a file from all the bricks: root at gv2 ~ # getfattr -d -e hex -m . /data/glusterfs/testset/306/30677af808ad578916f54783904e6342.pack getfattr: Removing leading '/' from absolute path names # file: data/glusterfs/testset/306/30677af808ad578916f54783904e6342.pack
2018 Mar 12
2
Can't heal a volume: "Please check if all brick processes are running."
Hello, We have a very fresh gluster 3.10.10 installation. Our volume is created as a distributed volume, 9 bricks, 96TB in total (87TB after the 10% gluster disk space reservation). For some reason I can't "heal" the volume: # gluster volume heal gv0 Launching heal operation to perform index self heal on volume gv0 has been unsuccessful on bricks that are down. Please check if all brick processes
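The brick-health check the message asks for; a sketch for this volume:

# gluster volume status gv0

Every brick should show Y in the Online column. Note that a pure distribute volume has no replicas to heal, so the heal command itself does not apply regardless of brick state, which is the conclusion the replies in this thread reach.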
2017 Jun 18
2
different brick using the same port?
Hi, all I found two of my bricks from different volumes are using the same port 49154 on the same glusterfs server node, is this normal? Status of volume: home-rabbitmq-qa Gluster process TCP Port RDMA Port Online Pid ------------------------------------------------------------------------------ Brick 10.10.1.100:/glusterfsvolumes/home/home-rabbitmq-qa/brick
2018 Mar 14
2
Can't heal a volume: "Please check if all brick processes are running."
On Wed, Mar 14, 2018 at 3:36 PM, Anatoliy Dmytriyev <tolid at tolid.eu.org> wrote: > Hi Karthik, > > > Thanks a lot for the explanation. > > Does it mean a distributed volume health can be checked only by "gluster > volume status " command? > Yes. I am not aware of any other command which can give the status of a plain distribute volume which is similar to
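For per-brick capacity on a plain distribute volume, the detail variant is also useful; a sketch, assuming volume gv0:

# gluster volume status gv0 detail

This reports the Online state plus Disk Space Free, Inode Count, and the backing File System for each brick.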
2018 Jan 22
0
BUG: After stop and start wrong port is advertised
The patch was definitely there in 3.12.3. Do you have the glusterd and brick logs handy from when this happened? On Sun, Jan 21, 2018 at 10:21 PM, Alan Orth <alan.orth at gmail.com> wrote: > For what it's worth, I just updated some CentOS 7 servers from GlusterFS > 3.12.1 to 3.12.4 and hit this bug. Did the patch make it into 3.12.4? I had > to use Mike Hulsman's
2017 Dec 02
0
BUG: After stop and start wrong port is advertised
Hello Atin, Could you confirm this should have been fixed in 3.10.8? If so we'll test it for sure! Regards Jo -----Original message----- From:Atin Mukherjee <amukherj at redhat.com> Sent:Mon 30-10-2017 17:40 Subject:Re: [Gluster-users] BUG: After stop and start wrong port is advertised To:Jo Goossens <jo.goossens at hosted-power.com>; CC:gluster-users at
2017 Nov 17
0
Help with reconnecting a faulty brick
On 11/17/2017 03:41 PM, Daniel Berteaud wrote: > On Thursday, November 16, 2017, 13:07 CET, Ravishankar N <ravishankar at redhat.com> wrote: > >> On 11/16/2017 12:54 PM, Daniel Berteaud wrote: >>> Any way in this situation to check which file will be healed from >>> which brick before reconnecting ? Using some getfattr tricks ? >> Yes, there are afr
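The getfattr trick in question: each brick stores trusted.afr.<volname>-client-<N> xattrs recording pending operations it holds against the other replicas; a sketch, assuming volume testvol and a file path on the brick (both placeholders):

# getfattr -d -e hex -m trusted.afr /data/brick/path/to/file

Each value packs three 32-bit counters (data, metadata, entry). A brick whose copy carries non-zero counters "blames" the replica named in the xattr, and self-heal uses the unblamed copy as the source.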
2018 Feb 09
0
self-heal trouble after changing arbiter brick
Hi Karthik, Thank you for your reply. The heal is still undergoing, as the /var/log/glusterfs/glustershd.log keeps growing, and there are a lot of pending entries in the heal info. The gluster version is 3.10.9 and 3.10.10 (the version update is in progress). It doesn't have info summary [yet?], and the heal info is way too long to attach here. (It takes more than 20 minutes just to collect
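When heal info output is that long, filtering it down to the per-brick totals makes progress easier to watch; a sketch, assuming volume testvol (a placeholder):

# gluster volume heal testvol info | grep 'Number of entries:'

This prints only the pending count for each brick; rerunning it periodically shows whether the backlog is shrinking. The underlying command is still slow, but the output becomes comparable between runs.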
2017 Sep 28
1
one brick one volume process dies?
On 13/09/17 20:47, Ben Werthmann wrote: > These symptoms appear to be the same as I've recorded in > this post: > > http://lists.gluster.org/pipermail/gluster-users/2017-September/032435.html > > On Wed, Sep 13, 2017 at 7:01 AM, Atin Mukherjee > <atin.mukherjee83 at gmail.com > <mailto:atin.mukherjee83 at gmail.com>> wrote: > > Additionally the