Displaying 6 results from an estimated 6 matches for "786g".
2018 Feb 27 (2 replies): Quorum in distributed-replicate volume
...r-quorum-type: server
features.shard: on
cluster.data-self-heal-algorithm: full
storage.owner-uid: 64055
storage.owner-gid: 64055
For brick sizes, saruman/gandalf have
$ df -h /var/local/brick0
Filesystem                   Size  Used Avail Use% Mounted on
/dev/mapper/gandalf-gluster  885G   55G  786G   7% /var/local/brick0
and the other four have
$ df -h /var/local/brick0
Filesystem                   Size  Used Avail Use% Mounted on
/dev/sdb1                     11T  254G   11T   3% /var/local/brick0
--
Dave Sherohman
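For reference, volume options like those in the snippet above are inspected and changed with the gluster CLI. A minimal sketch (the volume name "myvol" is hypothetical; the option names are the ones shown in the snippet):

$ gluster volume info myvol                                    # list bricks and reconfigured options
$ gluster volume set myvol cluster.server-quorum-type server   # server-side quorum, as configured above
$ gluster volume set myvol features.shard on                   # sharding, as configured above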
2018 Feb 27 (0 replies): Quorum in distributed-replicate volume
...luster.data-self-heal-algorithm: full
> storage.owner-uid: 64055
> storage.owner-gid: 64055
>
>
> For brick sizes, saruman/gandalf have
>
> $ df -h /var/local/brick0
> Filesystem                   Size  Used Avail Use% Mounted on
> /dev/mapper/gandalf-gluster  885G   55G  786G   7% /var/local/brick0
>
> and the other four have
>
> $ df -h /var/local/brick0
> Filesystem                   Size  Used Avail Use% Mounted on
> /dev/sdb1                     11T  254G   11T   3% /var/local/brick0
>
If you want to use the first two bricks as arbiter, then you need to be
aware of the...
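A minimal sketch of the arbiter layout being suggested, assuming the smaller saruman/gandalf bricks become the arbiters (the volume name "myvol", the other hostnames, and the arbiter paths are hypothetical):

$ # with replica 3 arbiter 1, every third brick in each set holds metadata only
$ gluster volume create myvol replica 3 arbiter 1 \
    host1:/var/local/brick0 host2:/var/local/brick0 saruman:/var/local/arbiter0 \
    host3:/var/local/brick0 host4:/var/local/brick0 gandalf:/var/local/arbiter0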
2018 Feb 27 (0 replies): Quorum in distributed-replicate volume
On Mon, Feb 26, 2018 at 6:14 PM, Dave Sherohman <dave at sherohman.org> wrote:
> On Mon, Feb 26, 2018 at 05:45:27PM +0530, Karthik Subrahmanya wrote:
> > > "In a replica 2 volume... If we set the client-quorum option to
> > > auto, then the first brick must always be up, irrespective of the
> > > status of the second brick. If only the second brick is up,
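The client-quorum behavior quoted above corresponds to a single volume option; a minimal sketch (the volume name "myvol" is hypothetical):

$ # in a replica 2 volume this makes the subvolume read-only when only the second brick is up
$ gluster volume set myvol cluster.quorum-type auto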
2018 Feb 26 (2 replies): Quorum in distributed-replicate volume
On Mon, Feb 26, 2018 at 05:45:27PM +0530, Karthik Subrahmanya wrote:
> > "In a replica 2 volume... If we set the client-quorum option to
> > auto, then the first brick must always be up, irrespective of the
> > status of the second brick. If only the second brick is up, the
> > subvolume becomes read-only."
> >
> By default client-quorum is
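To check what client-quorum is currently in effect, the option can be queried directly (a sketch; "myvol" is hypothetical, and `gluster volume get` assumes a reasonably recent GlusterFS release):

$ gluster volume get myvol cluster.quorum-type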
2011 May 13 (27 replies): Extremely slow zpool scrub performance
Running a zpool scrub on our production pool shows a scrub rate
of about 400K/s. (When this pool was first set up, we saw rates in the
MB/s range during a scrub.)
Both zpool iostat and iostat -Xn show lots of idle disk time, no
above-average service times, and no abnormally high busy percentages.
Load on the box is 0.59.
8 x 3GHz CPUs, 32GB RAM, 96 spindles arranged into raidz vdevs on OI 147.
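For comparison, scrub rate and per-disk activity of the kind described above can be checked along these lines (the pool name "tank" is hypothetical):

$ zpool scrub tank         # start (or restart) a scrub
$ zpool status tank        # the scan line reports the scrub rate and time to go
$ zpool iostat -v tank 5   # per-vdev I/O statistics, sampled every 5 seconds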
2008 Jun 30 (4 replies): Rebuild of kernel 2.6.9-67.0.20.EL failure
Hello list.
I'm trying to rebuild the 2.6.9-67.0.20.EL kernel, but it fails even without
modifications.
How did I try it?
Created a (non-root) build environment (not a mock).
Installed the kernel .src.rpm and ran
rpmbuild -ba --target=`uname -m` kernel-2.6.spec 2> prep-err.log | tee prep-out.log
The build failed at the end:
Processing files: kernel-xenU-devel-2.6.9-67.0.20.EL
Checking
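For reference, the rebuild procedure described above amounts to roughly the following sketch (the .src.rpm filename is inferred from the kernel version, and the paths assume %_topdir points at ~/rpmbuild via ~/.rpmmacros):

$ rpm -ivh kernel-2.6.9-67.0.20.EL.src.rpm   # unpack the spec and sources into the build tree
$ cd ~/rpmbuild/SPECS
$ rpmbuild -ba --target=`uname -m` kernel-2.6.spec 2> prep-err.log | tee prep-out.log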