
Displaying 11 results from an estimated 11 matches for "sothoth".

2018 Feb 15
0
Failover problems with gluster 3.8.8-1 (latest Debian stable)
Well, it looks like I've stumped the list, so I did a bit of additional digging myself: azathoth replicates with yog-sothoth, so I compared their brick directories. `ls -R /var/local/brick0/data | md5sum` gives the same result on both servers, so the filenames are identical in both bricks. However, `du -s /var/local/brick0/data` shows that azathoth has about 3G more data (445G vs 442G) than yog. This seems consistent w...
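As a side note, the comparison described above boils down to two commands run on both azathoth and yog-sothoth; a minimal sketch, assuming the brick path given in this thread:

  # Hash the recursive file listing of the brick; matching output on both
  # replicas means the filenames are identical
  ls -R /var/local/brick0/data | md5sum

  # Total on-disk usage of the brick (add -h for human-readable sizes
  # like the 445G vs 442G figures reported above)
  du -s /var/local/brick0/data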
2018 Feb 13
2
Failover problems with gluster 3.8.8-1 (latest Debian stable)
...----------------------------------------------------------------------
Brick saruman:/var/local/brick0/data        49154   0     Y   10690
Brick gandalf:/var/local/brick0/data        49155   0     Y   18732
Brick azathoth:/var/local/brick0/data       49155   0     Y   9507
Brick yog-sothoth:/var/local/brick0/data    49153   0     Y   39559
Brick cthulhu:/var/local/brick0/data        49152   0     Y   2682
Brick mordiggian:/var/local/brick0/data     49152   0     Y   39479
Self-heal Daemon on localhost               N/A     N/A   Y   9614
Self-heal Daemon on sarum...
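For reference, per-brick port/online/PID output of this shape normally comes from the volume status command; a minimal sketch, assuming the volume name palantir used elsewhere in these results:

  # Show TCP port, online state and PID for each brick and self-heal daemon
  gluster volume status palantir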
2018 Feb 15
2
Failover problems with gluster 3.8.8-1 (latest Debian stable)
...ed the issue. What about the heal? Does it report any pending heals? On Feb 15, 2018 14:20, "Dave Sherohman" <dave at sherohman.org> wrote: > Well, it looks like I've stumped the list, so I did a bit of additional > digging myself: > > azathoth replicates with yog-sothoth, so I compared their brick > directories. `ls -R /var/local/brick0/data | md5sum` gives the same > result on both servers, so the filenames are identical in both bricks. > However, `du -s /var/local/brick0/data` shows that azathoth has about 3G > more data (445G vs 442G) than yog. >...
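A minimal sketch of answering the pending-heal question above, assuming the volume name palantir used elsewhere in these results:

  # List entries still pending heal on each brick
  gluster volume heal palantir info

  # Per-brick count of entries pending heal
  gluster volume heal palantir statistics heal-count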
2018 Feb 27
2
Quorum in distributed-replicate volume
...ame replica subvol are not > on the same nodes. OK, great. So basically just install the gluster server on the new node(s), do a peer probe to add them to the cluster, and then gluster volume create palantir replica 3 arbiter 1 [saruman brick] [gandalf brick] [arbiter 1] [azathoth brick] [yog-sothoth brick] [arbiter 2] [cthulhu brick] [mordiggian brick] [arbiter 3] Or is there more to it than that? -- Dave Sherohman
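A rough sketch of the "peer probe, then extend" flow proposed above; the arbiter hostnames here are placeholders, not taken from the thread:

  # Add each new arbiter node to the trusted storage pool (placeholder names)
  gluster peer probe arbiter1
  gluster peer probe arbiter2
  gluster peer probe arbiter3

A reply later in these results notes that, for an existing volume, the conversion is done with gluster volume add-brick rather than a fresh volume create.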
2018 Feb 27
2
Quorum in distributed-replicate volume
...tir
Type: Distributed-Replicate
Volume ID: 48379a50-3210-41b4-9a77-ae143c8bcac0
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x 2 = 6
Transport-type: tcp
Bricks:
Brick1: saruman:/var/local/brick0/data
Brick2: gandalf:/var/local/brick0/data
Brick3: azathoth:/var/local/brick0/data
Brick4: yog-sothoth:/var/local/brick0/data
Brick5: cthulhu:/var/local/brick0/data
Brick6: mordiggian:/var/local/brick0/data
Options Reconfigured:
features.scrub: Inactive
features.bitrot: off
transport.address-family: inet
performance.readdir-ahead: on
nfs.disable: on
network.ping-timeout: 1013
performance.quick-read:...
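Layout output of this shape normally comes from the volume info command; a minimal sketch, assuming the volume name palantir from this thread:

  # Show volume type, brick list and reconfigured options
  gluster volume info palantir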
2018 Feb 27
0
Quorum in distributed-replicate volume
...> on the same nodes. > > OK, great. So basically just install the gluster server on the new > node(s), do a peer probe to add them to the cluster, and then > > gluster volume create palantir replica 3 arbiter 1 [saruman brick] > [gandalf brick] [arbiter 1] [azathoth brick] [yog-sothoth brick] [arbiter > 2] [cthulhu brick] [mordiggian brick] [arbiter 3] > gluster volume add-brick <volname> replica 3 arbiter 1 <arbiter 1> <arbiter 2> <arbiter 3> is the command. It will convert the existing volume to arbiter volume and add the specified bricks as arbite...
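Spelled out against the 3 x 2 volume described in this thread, the add-brick conversion quoted above might look like the following sketch; the arbiter hostnames and brick paths are placeholders:

  # Convert the existing 3 x 2 distributed-replicate volume to arbiter by
  # adding one arbiter brick per replica pair (hosts/paths are placeholders)
  gluster volume add-brick palantir replica 3 arbiter 1 \
      arbiter1:/var/local/arbiter/data \
      arbiter2:/var/local/arbiter/data \
      arbiter3:/var/local/arbiter/data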
2018 Feb 27
0
Quorum in distributed-replicate volume
On Tue, Feb 27, 2018 at 4:18 PM, Dave Sherohman <dave at sherohman.org> wrote: > On Tue, Feb 27, 2018 at 03:20:25PM +0530, Karthik Subrahmanya wrote: > > If you want to use the first two bricks as arbiter, then you need to be > > aware of the following things: > > - Your distribution count will be decreased to 2. > > What's the significance of this? I'm
2018 Feb 27
2
Quorum in distributed-replicate volume
On Tue, Feb 27, 2018 at 03:20:25PM +0530, Karthik Subrahmanya wrote: > If you want to use the first two bricks as arbiter, then you need to be > aware of the following things: > - Your distribution count will be decreased to 2. What's the significance of this? I'm trying to find documentation on distribution counts in gluster, but my google-fu is failing me. > - Your data on
2018 Feb 27
0
Quorum in distributed-replicate volume
...379a50-3210-41b4-9a77-ae143c8bcac0
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 3 x 2 = 6
> Transport-type: tcp
> Bricks:
> Brick1: saruman:/var/local/brick0/data
> Brick2: gandalf:/var/local/brick0/data
> Brick3: azathoth:/var/local/brick0/data
> Brick4: yog-sothoth:/var/local/brick0/data
> Brick5: cthulhu:/var/local/brick0/data
> Brick6: mordiggian:/var/local/brick0/data
> Options Reconfigured:
> features.scrub: Inactive
> features.bitrot: off
> transport.address-family: inet
> performance.readdir-ahead: on
> nfs.disable: on
> netwo...
2018 Feb 27
0
Quorum in distributed-replicate volume
On Mon, Feb 26, 2018 at 6:14 PM, Dave Sherohman <dave at sherohman.org> wrote: > On Mon, Feb 26, 2018 at 05:45:27PM +0530, Karthik Subrahmanya wrote: > > > "In a replica 2 volume... If we set the client-quorum option to > > > auto, then the first brick must always be up, irrespective of the > > > status of the second brick. If only the second brick is up,
2018 Feb 26
2
Quorum in distributed-replicate volume
On Mon, Feb 26, 2018 at 05:45:27PM +0530, Karthik Subrahmanya wrote: > > "In a replica 2 volume... If we set the client-quorum option to > > auto, then the first brick must always be up, irrespective of the > > status of the second brick. If only the second brick is up, the > > subvolume becomes read-only." > > > By default client-quorum is
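For context, the client-quorum option being quoted here is cluster.quorum-type; a minimal sketch of inspecting and setting it, assuming the volume name palantir used elsewhere in these results:

  # Show the current client-quorum setting for the volume
  gluster volume get palantir cluster.quorum-type

  # Set client quorum to auto (in a replica 2 subvolume the first brick
  # must then be up for the subvolume to stay writable)
  gluster volume set palantir cluster.quorum-type auto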