Displaying 10 results from an estimated 10 matches for "55g".
2017 Nov 13 | 1 | Shared storage showing 100% used
...                                 32G     0   32G   0% /dev
tmpfs                               32G     0   32G   0% /dev/shm
tmpfs                               32G   17M   32G   1% /run
tmpfs                               32G     0   32G   0% /sys/fs/cgroup
/dev/md124                          59G  1.6G   55G   3% /usr
/dev/md153p2                        13T   34M   13T   1% /glusterfs/a4/b2
/dev/md151p1                        13T   34M   13T   1% /glusterfs/a2/b1
/dev/md151p2                        13T   34M   13T   1% /glusterfs/a2/b2
/dev/md152p1                        26T  4.4T   22T  17% /glusterfs...
2018 Feb 27 | 2 | Quorum in distributed-replicate volume
...server-quorum-type: server
features.shard: on
cluster.data-self-heal-algorithm: full
storage.owner-uid: 64055
storage.owner-gid: 64055
For brick sizes, saruman/gandalf have
$ df -h /var/local/brick0
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/gandalf-gluster 885G 55G 786G 7% /var/local/brick0
and the other four have
$ df -h /var/local/brick0
Filesystem Size Used Avail Use% Mounted on
/dev/sdb1 11T 254G 11T 3% /var/local/brick0
--
Dave Sherohman
2002 Jun 02 | 1 | General speed question
We have some speed/performance issues:
We have a 100M full-duplex private network set up to handle rsync transfers
to our "mirror" server with a command like:
time rsync -e ssh -avzl --delete --rsync-path=/usr/local/bin/rsync \
--exclude ".netscape/cache/" --delete-excluded \
bigserver:/staff1 /mirror/bigserver
It takes about 20 minutes to check/transfer files from
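On a dedicated 100M full-duplex link, the -z compression and the ssh cipher can easily be the bottleneck rather than the wire itself. A minimal sketch of the same transfer with compression dropped and whole-file copies forced (these option changes are an assumption for illustration, not the fix adopted on the list):

# Same hosts and paths as above; -W/--whole-file skips the delta algorithm
# and dropping -z avoids compressing data on an already fast private link.
time rsync -e ssh -avl -W --delete --rsync-path=/usr/local/bin/rsync \
    --exclude ".netscape/cache/" --delete-excluded \
    bigserver:/staff1 /mirror/bigserver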
2012 Mar 24 | 3 | FreeBSD 9.0 - GPT boot problems?
Hi,
I just installed FreeBSD 9.0-RELEASE / amd64 on a new machine (Acer Aspire X1470).
I installed from a USB memory stick (the default amd64 image), which I booted by pressing "F12" and selecting it from the machine's boot menu.
I installed on an SSD (which replaced the hard drive originally in the machine), using the default partitioning scheme for 9.0 (GPT).
The installation was painless (many
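When a fresh 9.0 GPT install refuses to boot, the usual first checks are the partition table and the boot code. A minimal sketch, assuming the SSD shows up as ada0 (the device name and partition index are assumptions, not details from the post):

# Show the GPT layout; the default 9.0 scheme puts a small freebsd-boot
# partition first.
gpart show ada0
# Rewrite the protective MBR and the gptboot loader onto that partition.
gpart bootcode -b /boot/pmbr -p /boot/gptboot -i 1 ada0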
2018 Feb 27 | 0 | Quorum in distributed-replicate volume
...> cluster.data-self-heal-algorithm: full
> storage.owner-uid: 64055
> storage.owner-gid: 64055
>
>
> For brick sizes, saruman/gandalf have
>
> $ df -h /var/local/brick0
> Filesystem Size Used Avail Use% Mounted on
> /dev/mapper/gandalf-gluster 885G 55G 786G 7% /var/local/brick0
>
> and the other four have
>
> $ df -h /var/local/brick0
> Filesystem Size Used Avail Use% Mounted on
> /dev/sdb1 11T 254G 11T 3% /var/local/brick0
>
If you want to use the first two bricks as arbiter, then you need to be
aware o...
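For reference, an arbiter configuration of the kind being discussed is declared when the volume is created, with every third brick acting as arbiter for the preceding two. A minimal sketch with placeholder volume, host, and brick names (not the poster's actual layout):

# Sketch only: two data bricks plus one arbiter brick per subvolume.
gluster volume create myvol replica 3 arbiter 1 \
    hostA:/bricks/b0 hostB:/bricks/b0 arb1:/bricks/arb0 \
    hostC:/bricks/b0 hostD:/bricks/b0 arb2:/bricks/arb0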
2018 Feb 27 | 0 | Quorum in distributed-replicate volume
On Mon, Feb 26, 2018 at 6:14 PM, Dave Sherohman <dave at sherohman.org> wrote:
> On Mon, Feb 26, 2018 at 05:45:27PM +0530, Karthik Subrahmanya wrote:
> > > "In a replica 2 volume... If we set the client-quorum option to
> > > auto, then the first brick must always be up, irrespective of the
> > > status of the second brick. If only the second brick is up,
2018 Feb 26 | 2 | Quorum in distributed-replicate volume
On Mon, Feb 26, 2018 at 05:45:27PM +0530, Karthik Subrahmanya wrote:
> > "In a replica 2 volume... If we set the client-quorum option to
> > auto, then the first brick must always be up, irrespective of the
> > status of the second brick. If only the second brick is up, the
> > subvolume becomes read-only."
> >
> By default client-quorum is
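The behaviour quoted from the documentation is governed by the cluster.quorum-type volume option. A minimal sketch of setting and reading it back (the volume name is a placeholder, not one from the thread):

# Enable client-side quorum in "auto" mode and check the resulting value.
gluster volume set myvol cluster.quorum-type auto
gluster volume get myvol cluster.quorum-type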
2004 Nov 25 | 6 | Logfile entry query
Hi,
I get frequent logfile entries from Shorewall similar to the following:
Nov 25 11:22:51 10.0.0.248 kernel: Shorewall:net2mill:DROP:IN=eth2
OUT=eth0 SRC=202.96.117.50 DST=10.0.0.10 LEN=56 TOS=0x00 PREC=0x00
TTL=241 ID=0 PROTO=ICMP TYPE=11 CODE=0 [SRC=10.0.0.10
DST=202.101.167.133 LEN=48 TOS=0x00 PREC=0x00 TTL=1
ID=13591 DF PROTO=TCP INCOMPLETE [8 bytes] ]
Could someone explain what the
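The logged packet is ICMP type 11, code 0, i.e. "time exceeded in transit" (a TTL expired along the path), and the bracketed section is the header of the original packet that triggered it. If the aim were simply to stop these from being logged, a hedged sketch of an /etc/shorewall/rules entry (the zone names are read from the net2mill log prefix; dropping them silently is an assumption, not advice given on the list):

#ACTION   SOURCE   DEST   PROTO   DEST PORT(S)
DROP      net      mill   icmp    11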
2008 Jul 06 | 14 | confusion and frustration with zpool
I have a zpool which has grown "organically". I had a 60GB disk, I added a 120, I added a 500, I got a 750 and sliced it and mirrored the other pieces.
The 60 and the 120 are internal PATA drives, the 500 and 750 are Maxtor OneTouch USB drives.
The original system I created the 60+120+500 pool on was Solaris 10 update 3, patched to use ZFS sometime last fall (November I believe). In
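The "sliced it and mirrored the other pieces" step corresponds to attaching each slice of the 750 as a mirror of an existing single-disk vdev. A minimal sketch with placeholder pool and device names (none of these appear in the snippet):

# Attach one slice of the 750GB drive to each existing vdev, turning each
# single disk into a two-way mirror, then check the layout.
zpool attach tank c1t0d0 c5t0d0s0
zpool attach tank c1t1d0 c5t0d0s1
zpool attach tank c4t0d0 c5t0d0s3
zpool status tank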
2009 Jul 23 | 1 | [PATCH server] changes required for fedora rawhide inclusion.