similar to: subnets other than bricks' - volume availability ?

Displaying 20 results from an estimated 5000 matches similar to: "subnets other than bricks' - volume availability ?"

2017 Dec 19
0
Upgrading from Gluster 3.8 to 3.12
On Tue, Dec 19, 2017 at 1:10 AM, Ziemowit Pierzycki <ziemowit at pierzycki.com> wrote: > Hi, > > I have a cluster of 10 servers all running Fedora 24 along with > Gluster 3.8. I'm planning on doing rolling upgrades to Fedora 27 with > Gluster 3.12. I saw the documentation and did some testing but I > would like to run my plan through some (more?) educated minds. >
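One hedged sketch of a per-node rolling-upgrade sequence, roughly following the upstream upgrade guide (the volume name myvol and the use of systemd are assumptions here, and the actual OS/package upgrade step is elided):
  $ gluster volume heal myvol info    # confirm there are no pending heals before touching this node
  $ systemctl stop glusterd           # stop the management daemon on this node
  $ killall glusterfs glusterfsd      # stop brick and auxiliary processes
  # ... upgrade the OS and Gluster packages on this node ...
  $ systemctl start glusterd
  $ gluster volume heal myvol info    # wait for heal counts to drop to zero before moving to the next node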
2017 Aug 04
1
Quotas not working after adding arbiter brick to replica 2
Thank you very much Sanoj, I ran your script once and it worked. I now have quotas again... Question: do you know in which release this issue will be fixed? > -------- Original Message -------- > Subject: Re: [Gluster-users] Quotas not working after adding arbiter brick to replica 2 > Local Time: August 4, 2017 3:28 PM > UTC Time: August 4, 2017 1:28 PM > From: sunnikri at
2017 Aug 04
0
Quotas not working after adding arbiter brick to replica 2
Hi mabi, This is likely an issue where the last gfid entry in the quota.conf file is stale (because the directory was deleted with the quota limit on it being removed) (https://review.gluster.org/#/c/16507/). To fix the issue, we need to remove the last entry (the last 17 bytes / 16 bytes, depending on the quota version) in the file. Please use the workaround below until the next upgrade. You only need to
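A minimal sketch of that workaround, assuming the affected volume is named myvolume and a quota.conf version with 17-byte entries (take a backup first):
  $ cp /var/lib/glusterd/vols/myvolume/quota.conf /root/quota.conf.bak
  # drop the last, stale entry: 17 bytes for newer quota versions, 16 for older ones
  $ truncate -s $(( $(stat -c %s /var/lib/glusterd/vols/myvolume/quota.conf) - 17 )) /var/lib/glusterd/vols/myvolume/quota.conf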
2017 Sep 13
0
one brick one volume process dies?
Please send me the logs as well, i.e. glusterd logs and cmd_history.log. On Wed, Sep 13, 2017 at 1:45 PM, lejeczek <peljasz at yahoo.co.uk> wrote: > > > On 13/09/17 06:21, Gaurav Yadav wrote: > >> Please provide the output of gluster volume info, gluster volume status >> and gluster peer status. >> >> Apart from above info, please provide glusterd logs,
2017 Sep 13
0
one brick one volume process dies?
These symptoms appear to be the same as I've recorded in this post: http://lists.gluster.org/pipermail/gluster-users/2017-September/032435.html On Wed, Sep 13, 2017 at 7:01 AM, Atin Mukherjee <atin.mukherjee83 at gmail.com> wrote: > Additionally the brick log file of the same brick would be required. > Please look for if brick process went down or crashed. Doing a volume start
2017 Sep 28
1
one brick one volume process dies?
On 13/09/17 20:47, Ben Werthmann wrote: > These symptoms appear to be the same as I've recorded in > this post: > > http://lists.gluster.org/pipermail/gluster-users/2017-September/032435.html > > On Wed, Sep 13, 2017 at 7:01 AM, Atin Mukherjee > <atin.mukherjee83 at gmail.com > <mailto:atin.mukherjee83 at gmail.com>> wrote: > > Additionally the
2017 Sep 13
3
one brick one volume process dies?
On 13/09/17 06:21, Gaurav Yadav wrote: > Please provide the output of gluster volume info, gluster > volume status and gluster peer status. > > Apart from above info, please provide glusterd logs, > cmd_history.log. > > Thanks > Gaurav > > On Tue, Sep 12, 2017 at 2:22 PM, lejeczek > <peljasz at yahoo.co.uk <mailto:peljasz at yahoo.co.uk>> wrote:
2017 Sep 13
2
one brick one volume process dies?
Additionally the brick log file of the same brick would be required. Please look for if brick process went down or crashed. Doing a volume start force should resolve the issue. On Wed, 13 Sep 2017 at 16:28, Gaurav Yadav <gyadav at redhat.com> wrote: > Please send me the logs as well i.e glusterd.logs and cmd_history.log. > > > On Wed, Sep 13, 2017 at 1:45 PM, lejeczek
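For reference, the force start and the follow-up check look like this (volume name is a placeholder):
  $ gluster volume start myvol force   # respawns only the brick processes that are down, without touching running bricks
  $ gluster volume status myvol        # the affected brick should now show Online = Y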
2017 Aug 02
0
Quotas not working after adding arbiter brick to replica 2
Hi Sanoj, I copied over the quota.conf file from the affected volume (node 1) and opened it up with a hex editor, but cannot really recognize anything except for the first few header/version bytes. I have attached it to this mail (compressed with bzip2) as requested. Should I recreate them manually? There were around 10 of them. Or is there hope of recovering these quotas? Regards, M. >
2017 Aug 03
2
Quotas not working after adding arbiter brick to replica 2
I tried to manually re-create my quotas but not even that works now. Running the "limit-usage" command as shown below returns success: $ sudo gluster volume quota myvolume limit-usage /userdirectory 50GB volume quota : success but when I list the quotas using "list", nothing appears. What can I do to fix that issue with the quotas? > -------- Original Message -------- >
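For reference, the list step being described is (volume and directory names as in the example above):
  $ gluster volume quota myvolume list                  # list all configured limits on the volume
  $ gluster volume quota myvolume list /userdirectory   # or query one path explicitly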
2018 Feb 27
0
Quorum in distributed-replicate volume
On Tue, Feb 27, 2018 at 5:35 PM, Dave Sherohman <dave at sherohman.org> wrote: > On Tue, Feb 27, 2018 at 04:59:36PM +0530, Karthik Subrahmanya wrote: > > > > Since arbiter bricks need not be of same size as the data bricks, if > you > > > > can configure three more arbiter bricks > > > > based on the guidelines in the doc [1], you can do it live and
2023 Feb 20
2
Gluster 11.0 upgrade
Hi again Xavi, I did some more testing on my virt machines with the same setup: Number of Bricks: 1 x (2 + 1) = 3 If I do it the same way and upgrade the arbiter first, I get the same behavior: the bricks do not start and the other nodes do not "see" the upgraded node. If I upgrade one of the other nodes (non-arbiter) and restart glusterd on both the arbiter and the other the arbiter
2017 Aug 02
2
Quotas not working after adding arbiter brick to replica 2
Mabi, We have fixed a couple of issues in the quota list path. Could you also please attach the quota.conf file (/var/lib/glusterd/vols/patchy/quota.conf)? (Ideally, the first few bytes would be ASCII characters followed by 17 bytes per directory on which a quota limit is set.) Regards, Sanoj On Tue, Aug 1, 2017 at 1:36 PM, mabi <mabi at protonmail.ch> wrote: > I also just noticed quite
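A quick way to inspect that layout from the shell, assuming the standard path mentioned above, is:
  $ xxd /var/lib/glusterd/vols/patchy/quota.conf | head   # ASCII version header, then one raw 17-byte record per limited directory
  $ stat -c %s /var/lib/glusterd/vols/patchy/quota.conf   # file size, to sanity-check the per-entry length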
2023 Feb 20
1
Gluster 11.0 upgrade
I made a recursive diff on the upgraded arbiter. /var/lib/glusterd/vols/gds-common is the upgraded arbiter, /home/marcus/gds-common is one of the other nodes still on gluster 10: diff -r /var/lib/glusterd/vols/gds-common/bricks/urd-gds-030:-urd-gds-gds-common /home/marcus/gds-common/bricks/urd-gds-030:-urd-gds-gds-common 5c5 < listen-port=60419 --- > listen-port=0 11c11 <
2018 Feb 27
2
Quorum in distributed-replicate volume
On Tue, Feb 27, 2018 at 04:59:36PM +0530, Karthik Subrahmanya wrote: > > > Since arbiter bricks need not be of same size as the data bricks, if you > > > can configure three more arbiter bricks > > > based on the guidelines in the doc [1], you can do it live and you will > > > have the distribution count also unchanged. > > > > I can probably find
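Assuming the three new arbiter bricks are in place, the live conversion discussed here is normally a single add-brick call; a sketch with placeholder host and path names:
  # one arbiter brick per existing replica pair; hosts and brick paths are placeholders
  $ gluster volume add-brick myvol replica 3 arbiter 1 arb1:/bricks/a1 arb2:/bricks/a2 arb3:/bricks/a3
  $ gluster volume info myvol   # volume info should then show an N x (2 + 1) brick layout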
2023 Feb 21
2
Gluster 11.0 upgrade
Hi Xavi, Copying the same info file worked well and the gluster 11 arbiter is now up and running, and all the nodes are communicating the way they should. Just another note on something I discovered on my virt machines. All three nodes have been upgraded to 11.0 and are working. If I run: gluster volume get all cluster.op-version I get: Option Value ------
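If the follow-up question is whether the cluster op-version can now be raised, the generic check-and-bump pattern (no specific version value assumed) is:
  $ gluster volume get all cluster.op-version        # op-version the cluster is currently running at
  $ gluster volume get all cluster.max-op-version    # highest op-version every node can support
  $ gluster volume set all cluster.op-version <value reported by cluster.max-op-version>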
2017 Sep 12
2
one brick one volume process dies?
hi everyone I have a 3-peer cluster with all vols in replica mode, 9 vols. What I see, unfortunately, is that one brick fails in one vol; when it happens, it's always the same vol on the same brick. Command: gluster vol status $vol - would show the brick not online. Restarting glusterd with systemctl does not help; only a system reboot seems to help, until it happens the next time. How to troubleshoot this
2017 Sep 13
0
one brick one volume process dies?
Please provide the output of gluster volume info, gluster volume status and gluster peer status. Apart from above info, please provide glusterd logs, cmd_history.log. Thanks Gaurav On Tue, Sep 12, 2017 at 2:22 PM, lejeczek <peljasz at yahoo.co.uk> wrote: > hi everyone > > I have 3-peer cluster with all vols in replica mode, 9 vols. > What I see, unfortunately, is one brick
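For anyone collecting the same information, the commands and default log locations are roughly as follows (paths may differ per distribution):
  $ gluster volume info
  $ gluster volume status
  $ gluster peer status
  $ ls -l /var/log/glusterfs/glusterd.log /var/log/glusterfs/cmd_history.log   # the logs being requested, on each node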
2018 Feb 27
0
Quorum in distributed-replicate volume
On Tue, Feb 27, 2018 at 4:18 PM, Dave Sherohman <dave at sherohman.org> wrote: > On Tue, Feb 27, 2018 at 03:20:25PM +0530, Karthik Subrahmanya wrote: > > If you want to use the first two bricks as arbiter, then you need to be > > aware of the following things: > > - Your distribution count will be decreased to 2. > > What's the significance of this? I'm
2018 Feb 27
0
Quorum in distributed-replicate volume
On Tue, Feb 27, 2018 at 1:40 PM, Dave Sherohman <dave at sherohman.org> wrote: > On Tue, Feb 27, 2018 at 12:00:29PM +0530, Karthik Subrahmanya wrote: > > I will try to explain how you can end up in split-brain even with cluster > > wide quorum: > > Yep, the explanation made sense. I hadn't considered the possibility of > alternating outages. Thanks! > >