Displaying 9 results from an estimated 9 matches for "254g".
2008 Jun 17
6
mirroring zfs slice
...ll,
I have a slice with a zfs file system which I want to mirror. I
followed the procedure mentioned in the admin guide, but I am getting
this error. Can you tell me what I did wrong?
root # zpool list
NAME     SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
export   254G   230K   254G   0%  ONLINE  -
root # echo | format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c2t0d0 <DEFAULT cyl 35497 alt 2 hd 255 sec 63>
/pci@7b,0/pci1022,7458@11/pci1000,3060@2/sd@0,0
1. c2t2d0 <DEFAULT cyl 35497 alt...
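For reference, the usual way to convert a single-device pool into a mirror is zpool attach; a minimal sketch, assuming the pool named export sits on whole-disk c2t0d0 and the second disk is c2t2d0 (device names taken from the format output above; if the pool is on a slice, use the matching sN slice names instead, and note that attach fails if the new device is smaller than the existing one):
root # zpool attach export c2t0d0 c2t2d0
root # zpool status export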
2008 Dec 10
1
df returns weird values
Hi,
I'm starting to play with glusterfs, and I'm having a problem with the df
output.
The value seems to be wrong.
(on the client)
/var/mule-client$ du -sh
584K .
/var/mule-client$ df -h /var/mule-client/
Filesystem  Size  Used  Avail  Use%  Mounted on
glusterfs   254G  209G    32G   88%  /var/mule-client
(on the server)
/var/mule$ du -sh
584K .
Is it a known issue?
I've mounted /var/mule-client using glusterfs -f mycnffile /var/mule-client
My client runs:
glusterfs --version
glusterfs 1.3.12 built on Dec 10 2008 11:32:16
Repository revision: glu...
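For what it's worth, df on a GlusterFS mount reports the free/used space of the brick's backing filesystem (via statfs), not the size of the exported directory, so it need not match du of the directory itself. A quick cross-check, assuming the brick is exported from /var/mule on the server:
(on the server)
$ df -h /var/mule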
2018 Feb 27
2
Quorum in distributed-replicate volume
...ruman/gandalf have
$ df -h /var/local/brick0
Filesystem                   Size  Used  Avail  Use%  Mounted on
/dev/mapper/gandalf-gluster  885G   55G   786G    7%  /var/local/brick0
and the other four have
$ df -h /var/local/brick0
Filesystem  Size  Used  Avail  Use%  Mounted on
/dev/sdb1    11T  254G    11T    3%  /var/local/brick0
--
Dave Sherohman
2010 Apr 14
1
Checksum errors on and after resilver
...lver completed after 4h9m with 0 errors on Mon Apr 12 18:12:26 2010
config:
NAME         STATE   READ WRITE CKSUM
sasuc8i      ONLINE     0     0     0
  raidz2     ONLINE     0     0     0
    c12t4d0  ONLINE     0     0     5  108K resilvered
    c12t8d0  ONLINE     0     0     0  254G resilvered
    c12t6d0  ONLINE     0     0     0
    c12t7d0  ONLINE     0     0     0
    c12t0d0  ONLINE     0     0     1  21.5K resilvered
    c12t1d0  ONLINE     0     0     2  43K resilvered
    c12t2d0  ONLINE     0     0     4  86K resilvered
    c12t3d0  ONLINE     0...
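After a resilver that leaves nonzero checksum counts, the usual follow-up is to clear the counters and scrub the pool to verify the data; a minimal sketch using the pool name from the output above:
root # zpool clear sasuc8i
root # zpool scrub sasuc8i
root # zpool status -v sasuc8i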
2018 Feb 27
0
Quorum in distributed-replicate volume
.../brick0
> Filesystem                   Size  Used  Avail  Use%  Mounted on
> /dev/mapper/gandalf-gluster  885G   55G   786G    7%  /var/local/brick0
>
> and the other four have
>
> $ df -h /var/local/brick0
> Filesystem  Size  Used  Avail  Use%  Mounted on
> /dev/sdb1    11T  254G    11T    3%  /var/local/brick0
>
If you want to use the first two bricks as arbiter, then you need to be
aware of the following things (see the sketch below):
- Your distribution count will be decreased to 2.
- Your data on the first subvol, i.e. replica subvol-1, will be
unavailable until it is copied to the other sub...
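For context, arbiter bricks are declared when a replica-3 volume is created (or when bricks are added later); a minimal sketch with hypothetical host and brick names, not the poster's actual layout:
$ gluster volume create myvol replica 3 arbiter 1 \
      host1:/var/local/brick0 host2:/var/local/brick0 host3:/var/local/arbiter0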
2018 Feb 27
0
Quorum in distributed-replicate volume
On Mon, Feb 26, 2018 at 6:14 PM, Dave Sherohman <dave at sherohman.org> wrote:
> On Mon, Feb 26, 2018 at 05:45:27PM +0530, Karthik Subrahmanya wrote:
> > > "In a replica 2 volume... If we set the client-quorum option to
> > > auto, then the first brick must always be up, irrespective of the
> > > status of the second brick. If only the second brick is up,
2018 Feb 26
2
Quorum in distributed-replicate volume
On Mon, Feb 26, 2018 at 05:45:27PM +0530, Karthik Subrahmanya wrote:
> > "In a replica 2 volume... If we set the client-quorum option to
> > auto, then the first brick must always be up, irrespective of the
> > status of the second brick. If only the second brick is up, the
> > subvolume becomes read-only."
> >
> By default client-quorum is
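The client-quorum behaviour quoted above is controlled per volume by the cluster.quorum-type option; a minimal sketch, with a hypothetical volume name:
$ gluster volume set myvol cluster.quorum-type auto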
2007 Feb 27
16
understanding zfs/thumper "bottlenecks"?
Currently I'm trying to figure out the best zfs layout for a thumper wrt. read AND write performance.
I did some simple mkfile 512G tests and found that on average ~500 MB/s seems to be the maximum one can reach (tried the initial default setup, all 46 HDDs as R0, etc.).
According to http://www.amd.com/us-en/assets/content_type/DownloadableAssets/ArchitectureWP_062806.pdf I would
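For reference, a sequential-write test of this kind is typically just a timed mkfile (or dd) against the pool's mountpoint; a sketch, assuming a pool mounted at /tank:
root # ptime mkfile 512g /tank/testfile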
2010 Nov 11
8
zpool import panics
...metaslab  53   offset d4000000000   spacemap  268   free  10.4G
             segments 1603   maxsize 6.90G   freepct  4%
metaslab  54   offset d8000000000   spacemap  253   free   256G
             segments 1071   maxsize  254G   freepct 99%
metaslab  55   offset dc000000000   spacemap    0   free   256G
metaslab  56   offset e0000000000   spacemap 3077   free   256G
             segments  313   maxsize  255G   freepct 99%
metaslab  57   offset e...
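Output of this shape comes from zdb's metaslab dump; a minimal sketch for inspecting a pool that panics on import, with a hypothetical pool name (-e reads the pool in its exported state, -m dumps the metaslabs and their space maps):
# zdb -e -m mypool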