Displaying 5 results from an estimated 5 matches for "445g".
2006 Sep 07 · 5 · Performance problem of ZFS (Sol 10U2)
    ...224B0001559BB471d0     -      -     12     29    835K  1.02M
  mirror                    257G   439G    32     52    571K  1.04M
    c8t22480001552D7AF8d0     -      -     14     28   1003K  1.04M
    c4t1d0                    -      -     14     32   1002K  1.04M
  mirror                    251G   445G    28     53    543K  1.02M
    c8t227F0001552CB892d0     -      -     13     28    897K  1.02M
    c8t22250001559830A5d0     -      -     13     30    897K  1.02M
  mirror                   17.4G   427G    22     38    339K   393K
    c8t22FA00015529F784d0     -      -      9     19    648K   393K...
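The columns in this excerpt follow the layout of `zpool iostat -v` output: space used, space available, then read/write operations and read/write bandwidth for each vdev and disk. As a sketch only, output of this shape would come from a command like the following, where the pool name and interval are placeholders, not taken from the post:

    # Per-vdev I/O statistics, refreshed every 5 seconds
    # ("tank" and the 5-second interval are placeholders)
    zpool iostat -v tank 5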
2018 Feb 15 · 0 · Failover problems with gluster 3.8.8-1 (latest Debian stable)
...ng myself:
azathoth replicates with yog-sothoth, so I compared their brick
directories. `ls -R /var/local/brick0/data | md5sum` gives the same
result on both servers, so the filenames are identical in both bricks.
However, `du -s /var/local/brick0/data` shows that azathoth has about 3G
more data (445G vs 442G) than yog.
This seems consistent with my assumption that the problem is on
yog-sothoth (everything is fine with only azathoth; there are problems
with only yog-sothoth) and I am reminded that a few weeks ago,
yog-sothoth was offline for 4-5 days, although it should have been
brought back u...
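The comparison described in this post can be scripted directly; a minimal sketch, assuming ssh access to both servers (the hostnames and brick path are taken from the excerpt):

    #!/bin/sh
    # Compare the two replicas of brick0: identical md5 sums mean the
    # filename listings match; a differing du total then points at
    # file contents or sizes rather than missing files.
    for host in azathoth yog-sothoth; do
        echo "== $host =="
        ssh "$host" 'ls -R /var/local/brick0/data | md5sum'
        ssh "$host" 'du -s /var/local/brick0/data'
    done

Note that `ls -R | md5sum` only proves the names match; comparing per-file sizes or checksums would be needed to find which files account for the ~3G gap.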
2006 Sep 29 · 0 · Re: compiz Digest, Vol 7, Issue 20
...veman for even proceeding with this venture in
the first place, because it has developed into something very
exciting.
-David (Not Reveman ;P)
[ 2.6.16-gentoo-r12, AMD Opteron 165@1.8GHz Dual Core, 2x1G G.Skill
DDR533 184-pin, e-VGA nVidia 7900GT KO, Silverstone Strider 600W, Asus
A8N-SLI-Premium, 445G disk space]
2018 Feb 13 · 2 · Failover problems with gluster 3.8.8-1 (latest Debian stable)
I'm using gluster for a virt-store with 3x2 distributed/replicated
servers for 16 qemu/kvm/libvirt virtual machines using image files
stored in gluster and accessed via libgfapi. Eight of these disk images
are standalone, while the other eight are qcow2 images which all share a
single backing file.
For the most part, this is all working very well. However, one of the
gluster servers
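For context on the layout described above: a set of qcow2 images sharing one backing file is normally built with `qemu-img`; the file names below are hypothetical, not from the post:

    # One read-only base image plus a per-VM copy-on-write overlay.
    # base.qcow2 and vm1.qcow2 are placeholder names.
    qemu-img create -f qcow2 -b base.qcow2 -F qcow2 vm1.qcow2

Each overlay stores only the blocks its VM has changed, which is why eight guests can share a single base image.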
2018 Feb 15 · 2 · Failover problems with gluster 3.8.8-1 (latest Debian stable)
...replicates with yog-sothoth, so I compared their brick
> directories. `ls -R /var/local/brick0/data | md5sum` gives the same
> result on both servers, so the filenames are identical in both bricks.
> However, `du -s /var/local/brick0/data` shows that azathoth has about 3G
> more data (445G vs 442G) than yog.
>
> This seems consistent with my assumption that the problem is on
> yog-sothoth (everything is fine with only azathoth; there are problems
> with only yog-sothoth) and I am reminded that a few weeks ago,
> yog-sothoth was offline for 4-5 days, although it should...
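Since the thread centers on a replica that was offline for several days, the standard GlusterFS check for pending self-heal work would apply here; a sketch, with the volume name as a placeholder:

    # Files still awaiting heal on each brick ("virtstore" is hypothetical)
    gluster volume heal virtstore info
    # Per-brick counts of entries needing heal
    gluster volume heal virtstore statistics heal-count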