Displaying 11 results from an estimated 11 matches for "mordiggian".
2018 Feb 13
2
Failover problems with gluster 3.8.8-1 (latest Debian stable)
...0
Brick gandalf:/var/local/brick0/data       49155   0     Y   18732
Brick azathoth:/var/local/brick0/data      49155   0     Y   9507
Brick yog-sothoth:/var/local/brick0/data   49153   0     Y   39559
Brick cthulhu:/var/local/brick0/data       49152   0     Y   2682
Brick mordiggian:/var/local/brick0/data    49152   0     Y   39479
Self-heal Daemon on localhost              N/A     N/A   Y   9614
Self-heal Daemon on saruman.lub.lu.se      N/A     N/A   Y   15016
Self-heal Daemon on cthulhu.lub.lu.se      N/A     N/A   Y   9756
Self-heal Daemon on gand...
2018 Feb 15
0
Failover problems with gluster 3.8.8-1 (latest Debian stable)
...49155 0 Y 18732
> Brick azathoth:/var/local/brick0/data      49155   0     Y   9507
> Brick yog-sothoth:/var/local/brick0/data   49153   0     Y   39559
> Brick cthulhu:/var/local/brick0/data       49152   0     Y   2682
> Brick mordiggian:/var/local/brick0/data    49152   0     Y   39479
> Self-heal Daemon on localhost              N/A     N/A   Y   9614
> Self-heal Daemon on saruman.lub.lu.se      N/A     N/A   Y   15016
> Self-heal Daemon on cthulhu.lub.lu.se      N/A     N/A   Y...
2018 Feb 27
2
Quorum in distributed-replicate volume
...nodes.
OK, great. So basically just install the gluster server on the new
node(s), do a peer probe to add them to the cluster, and then
gluster volume create palantir replica 3 arbiter 1 [saruman brick] [gandalf brick] [arbiter 1] [azathoth brick] [yog-sothoth brick] [arbiter 2] [cthulhu brick] [mordiggian brick] [arbiter 3]
Or is there more to it than that?
--
Dave Sherohman
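The bracketed placeholders in the proposed command above stand for real brick paths. A hypothetical expansion, purely illustrative — the arbiter hostnames (arb1/arb2/arb3) and arbiter brick paths are assumptions, not from the thread; the data-brick paths are taken from the volume info quoted elsewhere in these results:

```shell
# Sketch only: arbiter hosts and /var/local/arbiter0/data paths are invented
# for illustration. With "replica 3 arbiter 1", every third brick listed
# becomes the arbiter of its replica set.
gluster volume create palantir replica 3 arbiter 1 \
  saruman:/var/local/brick0/data  gandalf:/var/local/brick0/data     arb1:/var/local/arbiter0/data \
  azathoth:/var/local/brick0/data yog-sothoth:/var/local/brick0/data arb2:/var/local/arbiter0/data \
  cthulhu:/var/local/brick0/data  mordiggian:/var/local/brick0/data  arb3:/var/local/arbiter0/data
```

As the later replies note, for an existing volume the right tool is `add-brick`, not a fresh `create`; this sketch only shows how the placeholder syntax would expand.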
2018 Feb 15
2
Failover problems with gluster 3.8.8-1 (latest Debian stable)
...18732
> > Brick azathoth:/var/local/brick0/data      49155   0     Y   9507
> > Brick yog-sothoth:/var/local/brick0/data   49153   0     Y   39559
> > Brick cthulhu:/var/local/brick0/data       49152   0     Y   2682
> > Brick mordiggian:/var/local/brick0/data    49152   0     Y   39479
> > Self-heal Daemon on localhost              N/A     N/A   Y   9614
> > Self-heal Daemon on saruman.lub.lu.se      N/A     N/A   Y   15016
> > Self-heal Daemon on cthulhu.lub.lu.se...
2018 Feb 27
2
Quorum in distributed-replicate volume
...0
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x 2 = 6
Transport-type: tcp
Bricks:
Brick1: saruman:/var/local/brick0/data
Brick2: gandalf:/var/local/brick0/data
Brick3: azathoth:/var/local/brick0/data
Brick4: yog-sothoth:/var/local/brick0/data
Brick5: cthulhu:/var/local/brick0/data
Brick6: mordiggian:/var/local/brick0/data
Options Reconfigured:
features.scrub: Inactive
features.bitrot: off
transport.address-family: inet
performance.readdir-ahead: on
nfs.disable: on
network.ping-timeout: 1013
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefe...
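The configuration dump above and the brick/PID tables in the earlier results come from the standard gluster inspection commands (volume name taken from the thread):

```shell
# Show volume type, brick layout, and reconfigured options (as quoted above).
gluster volume info palantir

# Show per-brick port, online state, and PID, plus self-heal daemons
# (the table layout seen in the failover thread above).
gluster volume status palantir
```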
2018 Feb 27
0
Quorum in distributed-replicate volume
...asically just install the gluster server on the new
> node(s), do a peer probe to add them to the cluster, and then
>
> gluster volume create palantir replica 3 arbiter 1 [saruman brick]
> [gandalf brick] [arbiter 1] [azathoth brick] [yog-sothoth brick] [arbiter
> 2] [cthulhu brick] [mordiggian brick] [arbiter 3]
>
gluster volume add-brick <volname> replica 3 arbiter 1 <arbiter 1> <arbiter 2> <arbiter 3>
is the command. It will convert the existing volume to an arbiter volume and
add the specified bricks as arbiter bricks to the existing subvols.
Once they are succ...
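Filled in with this thread's volume name, that would look roughly as follows — the arbiter hostnames and brick paths are assumptions for illustration, and with three distribute subvolumes exactly three arbiter bricks are needed, one per subvolume:

```shell
# Convert the existing 3 x 2 volume to 3 x (2 + 1) arbiter; one assumed
# arbiter brick is appended to each of the three replica subvolumes in order.
gluster volume add-brick palantir replica 3 arbiter 1 \
  arb1:/var/local/arbiter0/data \
  arb2:/var/local/arbiter0/data \
  arb3:/var/local/arbiter0/data

# Watch the arbiter bricks populate (metadata only) as self-heal runs.
gluster volume heal palantir info
```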
2018 Feb 27
0
Quorum in distributed-replicate volume
On Tue, Feb 27, 2018 at 4:18 PM, Dave Sherohman <dave at sherohman.org> wrote:
> On Tue, Feb 27, 2018 at 03:20:25PM +0530, Karthik Subrahmanya wrote:
> > If you want to use the first two bricks as arbiter, then you need to be
> > aware of the following things:
> > - Your distribution count will be decreased to 2.
>
> What's the significance of this? I'm
2018 Feb 27
2
Quorum in distributed-replicate volume
On Tue, Feb 27, 2018 at 03:20:25PM +0530, Karthik Subrahmanya wrote:
> If you want to use the first two bricks as arbiter, then you need to be
> aware of the following things:
> - Your distribution count will be decreased to 2.
What's the significance of this? I'm trying to find documentation on
distribution counts in gluster, but my google-fu is failing me.
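For what it's worth, the arithmetic behind that statement is the standard distributed-replicate layout rule: the distribute (subvolume) count is the number of data bricks divided by the replica count. A small sketch using the brick counts from the volume info quoted in this thread:

```shell
# Current volume: 6 bricks, replica 2  ->  "Number of Bricks: 3 x 2 = 6".
BRICKS=6
REPLICA=2
echo "current distribute count: $((BRICKS / REPLICA))"

# Reusing the first two bricks as arbiters removes them from the data pool,
# leaving 4 data bricks -> the distribute count drops to 2.
echo "new distribute count: $(((BRICKS - 2) / REPLICA))"
```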
> - Your data on
2018 Feb 27
0
Quorum in distributed-replicate volume
...er of Bricks: 3 x 2 = 6
> Transport-type: tcp
> Bricks:
> Brick1: saruman:/var/local/brick0/data
> Brick2: gandalf:/var/local/brick0/data
> Brick3: azathoth:/var/local/brick0/data
> Brick4: yog-sothoth:/var/local/brick0/data
> Brick5: cthulhu:/var/local/brick0/data
> Brick6: mordiggian:/var/local/brick0/data
> Options Reconfigured:
> features.scrub: Inactive
> features.bitrot: off
> transport.address-family: inet
> performance.readdir-ahead: on
> nfs.disable: on
> network.ping-timeout: 1013
> performance.quick-read: off
> performance.read-ahead: off
>...
2018 Feb 27
0
Quorum in distributed-replicate volume
On Mon, Feb 26, 2018 at 6:14 PM, Dave Sherohman <dave at sherohman.org> wrote:
> On Mon, Feb 26, 2018 at 05:45:27PM +0530, Karthik Subrahmanya wrote:
> > > "In a replica 2 volume... If we set the client-quorum option to
> > > auto, then the first brick must always be up, irrespective of the
> > > status of the second brick. If only the second brick is up,
2018 Feb 26
2
Quorum in distributed-replicate volume
On Mon, Feb 26, 2018 at 05:45:27PM +0530, Karthik Subrahmanya wrote:
> > "In a replica 2 volume... If we set the client-quorum option to
> > auto, then the first brick must always be up, irrespective of the
> > status of the second brick. If only the second brick is up, the
> > subvolume becomes read-only."
> >
> By default client-quorum is
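The read-only behaviour quoted from the docs above is governed by the client-quorum option; a hedged sketch of inspecting and setting it on this thread's volume (behaviour described is per the gluster documentation, not verified on this cluster):

```shell
# Inspect the current client-quorum setting.
gluster volume get palantir cluster.quorum-type

# "auto" requires a majority of each replica set to be up; in a replica 2
# set that majority must include the first brick, which is why the
# subvolume goes read-only when only the second brick survives.
gluster volume set palantir cluster.quorum-type auto
```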