
Displaying 13 results from an estimated 13 matches for "azathoth".

2018 Feb 13
2
Failover problems with gluster 3.8.8-1 (latest Debian stable)
.../libvirt virtual machines using image files stored in gluster and accessed via libgfapi. Eight of these disk images are standalone, while the other eight are qcow2 images which all share a single backing file. For the most part, this is all working very well. However, one of the gluster servers (azathoth) causes three of the standalone VMs and all 8 of the shared-backing-image VMs to fail if it goes down. Any of the other gluster servers can go down with no problems; only azathoth causes issues. In addition, the kvm hosts have the gluster volume fuse mounted and one of them (out of five) detects...
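The failure pattern described here (everything breaks only when one specific server goes down) is a classic symptom of clients depending on a single volfile server. A minimal sketch of the usual mitigation for the FUSE mounts, assuming a volume named palantir mounted at /mnt/palantir (the mount point is an assumption; the hostnames are from the thread):

    # /etc/fstab: let the client fetch the volume configuration from other
    # nodes if azathoth is down at mount time
    azathoth:/palantir  /mnt/palantir  glusterfs  defaults,_netdev,backup-volfile-servers=yog-sothoth:cthulhu  0 0

For the libgfapi guests, the rough equivalent is listing more than one <host> element in the libvirt disk definition, where the libvirt version supports it.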
2018 Feb 15
0
Failover problems with gluster 3.8.8-1 (latest Debian stable)
Well, it looks like I've stumped the list, so I did a bit of additional digging myself: azathoth replicates with yog-sothoth, so I compared their brick directories. `ls -R /var/local/brick0/data | md5sum` gives the same result on both servers, so the filenames are identical in both bricks. However, `du -s /var/local/brick0/data` shows that azathoth has about 3G more data (445G vs 442G) than y...
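One way to pin down where the extra ~3G lives is to diff per-file sizes between the two bricks, skipping gluster's internal .glusterfs metadata (which holds hardlinks and can legitimately differ). A sketch, assuming ssh access from azathoth to yog-sothoth and the brick path quoted above:

    find /var/local/brick0/data -path '*/.glusterfs' -prune -o -type f -printf '%s %p\n' \
        | sort -k2 > /tmp/brick.azathoth
    ssh yog-sothoth "find /var/local/brick0/data -path '*/.glusterfs' -prune -o -type f -printf '%s %p\n' | sort -k2" \
        > /tmp/brick.yog-sothoth
    diff /tmp/brick.azathoth /tmp/brick.yog-sothoth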
2018 Feb 15
2
Failover problems with gluster 3.8.8-1 (latest Debian stable)
...io errors and xfs_repair fixed the issue. What about the heal? Does it report any pending heals? On Feb 15, 2018 14:20, "Dave Sherohman" <dave at sherohman.org> wrote: > Well, it looks like I've stumped the list, so I did a bit of additional > digging myself: > > azathoth replicates with yog-sothoth, so I compared their brick > directories. `ls -R /var/local/brick0/data | md5sum` gives the same > result on both servers, so the filenames are identical in both bricks. > However, `du -s /var/local/brick0/data` shows that azathoth has about 3G > more data (...
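The heal question above corresponds to standard commands; a sketch using the volume name that appears later in these results (palantir):

    gluster volume heal palantir info              # entries pending heal, per brick
    gluster volume heal palantir info split-brain  # entries gluster cannot heal on its own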
2018 Feb 27
2
Quorum in distributed-replicate volume
...ng as the bricks of the same replica subvol are not > on the same nodes. OK, great. So basically just install the gluster server on the new node(s), do a peer probe to add them to the cluster, and then gluster volume create palantir replica 3 arbiter 1 [saruman brick] [gandalf brick] [arbiter 1] [azathoth brick] [yog-sothoth brick] [arbiter 2] [cthulhu brick] [mordiggian brick] [arbiter 3] Or is there more to it than that? -- Dave Sherohman
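Spelled out with the brick paths from this thread, and with arb1/arb2/arb3 as hypothetical hosts for the arbiter bricks, the proposed create command would look like this (bricks are consumed in groups of three: two data bricks, then the arbiter for that replica set):

    gluster volume create palantir replica 3 arbiter 1 \
        saruman:/var/local/brick0/data  gandalf:/var/local/brick0/data      arb1:/var/local/arbiter/data \
        azathoth:/var/local/brick0/data yog-sothoth:/var/local/brick0/data  arb2:/var/local/arbiter/data \
        cthulhu:/var/local/brick0/data  mordiggian:/var/local/brick0/data   arb3:/var/local/arbiter/data

(As the reply further down in these results notes, for an existing volume the right tool is add-brick, not create.)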
2018 Feb 27
2
Quorum in distributed-replicate volume
...h brick is of what size. Volume Name: palantir Type: Distributed-Replicate Volume ID: 48379a50-3210-41b4-9a77-ae143c8bcac0 Status: Started Snapshot Count: 0 Number of Bricks: 3 x 2 = 6 Transport-type: tcp Bricks: Brick1: saruman:/var/local/brick0/data Brick2: gandalf:/var/local/brick0/data Brick3: azathoth:/var/local/brick0/data Brick4: yog-sothoth:/var/local/brick0/data Brick5: cthulhu:/var/local/brick0/data Brick6: mordiggian:/var/local/brick0/data Options Reconfigured: features.scrub: Inactive features.bitrot: off transport.address-family: inet performance.readdir-ahead: on nfs.disable: on network...
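In a 3 x 2 layout, consecutive bricks in the list above form the replica pairs, which is why azathoth mirrors yog-sothoth:

    subvol 0: saruman  <-> gandalf
    subvol 1: azathoth <-> yog-sothoth
    subvol 2: cthulhu  <-> mordiggian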
2001 Dec 01
3
DirectX 8.0
Hi, I'm trying to install the game "Aquanox" under Wine which needs DirectX 8.0. I've found a mail in the wine mailing list archive stating that wine already includes DirectX, but Aquanox keeps complaining about a missing d3d8.dll. Trying to install DirectX 8.0, I end up with "DirectX couldn't be installed on this computer". Any hints on this one? Sebastian --
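The usual workaround for a missing d3d8.dll is to force Wine's builtin implementation. The modern override syntax is shown below as a sketch (aquanox.exe is a placeholder for the game's launcher); Wine of this 2001 vintage used a [DllOverrides] section in ~/.wine/config rather than the environment variable:

    WINEDLLOVERRIDES="d3d8=b" wine aquanox.exe   # b = prefer Wine's builtin d3d8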
2018 Feb 16
0
Failover problems with gluster 3.8.8-1 (latest Debian stable)
On Thu, Feb 15, 2018 at 09:34:02PM +0200, Alex K wrote: > Have you checked for any file system errors on the brick mount point? I hadn't. fsck reports no errors. > What about the heal? Does it report any pending heals? There are now. It looks like taking the brick offline to fsck it was enough to trigger gluster to recheck everything. I'll check after it finishes to see whether
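The offline check described here can be made explicit; a sketch assuming an XFS brick on a device called /dev/vg0/brick0 (the device name is an assumption) and the volume/brick names from this thread:

    # with the brick process for this node stopped:
    umount /var/local/brick0
    xfs_repair -n /dev/vg0/brick0        # -n: report problems only, change nothing
    mount /var/local/brick0
    gluster volume start palantir force  # restart the downed brick process
    gluster volume heal palantir         # kick off an index heal of pending entries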
2018 Feb 27
0
Quorum in distributed-replicate volume
...l are > not > > on the same nodes. > > OK, great. So basically just install the gluster server on the new > node(s), do a peer probe to add them to the cluster, and then > > gluster volume create palantir replica 3 arbiter 1 [saruman brick] > [gandalf brick] [arbiter 1] [azathoth brick] [yog-sothoth brick] [arbiter > 2] [cthulhu brick] [mordiggian brick] [arbiter 3] > gluster volume add-brick <volname> replica 3 arbiter 1 <arbiter 1> <arbiter 2> <arbiter 3> is the command. It will convert the existing volume to an arbiter volume and add the specif...
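Filled in with the same hypothetical arbiter hosts as above, the conversion command would be (one arbiter brick is appended per existing replica pair, in brick order):

    gluster volume add-brick palantir replica 3 arbiter 1 \
        arb1:/var/local/arbiter/data \
        arb2:/var/local/arbiter/data \
        arb3:/var/local/arbiter/data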
2018 Feb 27
0
Quorum in distributed-replicate volume
On Tue, Feb 27, 2018 at 4:18 PM, Dave Sherohman <dave at sherohman.org> wrote: > On Tue, Feb 27, 2018 at 03:20:25PM +0530, Karthik Subrahmanya wrote: > > If you want to use the first two bricks as arbiter, then you need to be > > aware of the following things: > > - Your distribution count will be decreased to 2. > > What's the significance of this? I'm
2018 Feb 27
2
Quorum in distributed-replicate volume
On Tue, Feb 27, 2018 at 03:20:25PM +0530, Karthik Subrahmanya wrote: > If you want to use the first two bricks as arbiter, then you need to be > aware of the following things: > - Your distribution count will be decreased to 2. What's the significance of this? I'm trying to find documentation on distribution counts in gluster, but my google-fu is failing me. > - Your data on
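A worked reading of that number: the volume is currently 3 x 2, i.e. three distribution subvolumes of two bricks each, and DHT hashes each file onto one of the three pairs. Repurposing the first pair as arbiters leaves 2 x (2 + 1):

    before: 3 x 2       = 6 data bricks, files spread across 3 replica pairs
    after:  2 x (2 + 1) = 4 data bricks + 2 arbiters, files spread across 2 pairs

so aggregate capacity and distribution width both drop from three pairs' worth to two.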
2018 Feb 27
0
Quorum in distributed-replicate volume
...t; Type: Distributed-Replicate > Volume ID: 48379a50-3210-41b4-9a77-ae143c8bcac0 > Status: Started > Snapshot Count: 0 > Number of Bricks: 3 x 2 = 6 > Transport-type: tcp > Bricks: > Brick1: saruman:/var/local/brick0/data > Brick2: gandalf:/var/local/brick0/data > Brick3: azathoth:/var/local/brick0/data > Brick4: yog-sothoth:/var/local/brick0/data > Brick5: cthulhu:/var/local/brick0/data > Brick6: mordiggian:/var/local/brick0/data > Options Reconfigured: > features.scrub: Inactive > features.bitrot: off > transport.address-family: inet > performance.r...
2018 Feb 27
0
Quorum in distributed-replicate volume
On Mon, Feb 26, 2018 at 6:14 PM, Dave Sherohman <dave at sherohman.org> wrote: > On Mon, Feb 26, 2018 at 05:45:27PM +0530, Karthik Subrahmanya wrote: > > > "In a replica 2 volume... If we set the client-quorum option to > > > auto, then the first brick must always be up, irrespective of the > > > status of the second brick. If only the second brick is up,
2018 Feb 26
2
Quorum in distributed-replicate volume
On Mon, Feb 26, 2018 at 05:45:27PM +0530, Karthik Subrahmanya wrote: > > "In a replica 2 volume... If we set the client-quorum option to > > auto, then the first brick must always be up, irrespective of the > > status of the second brick. If only the second brick is up, the > > subvolume becomes read-only." > > > By default client-quorum is
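The behaviour quoted here is governed by the client-quorum volume option; a sketch of checking and setting it (palantir as before):

    gluster volume get palantir cluster.quorum-type     # 'none', 'auto' or 'fixed'
    gluster volume set palantir cluster.quorum-type auto
    gluster volume set palantir cluster.quorum-count 2  # only consulted when quorum-type is fixed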