Displaying 19 results from an estimated 19 matches for "16t".
2018 Feb 28
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Nithya,
I applied the workaround for this bug and now df shows the right size:
[root@stor1 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sdb1 26T 1,1T 25T 4% /mnt/glusterfs/vol0
/dev/sdc1 50T 16T 34T 33% /mnt/glusterfs/vol1
stor1data:/volumedisk0
101T 3,3T 97T 4% /volumedisk0
stor1data:/volumedisk1
197T 61T 136T 31% /volumedisk1
[root@stor2 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sdb1 26T...
2018 Feb 28
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
...I applied the workaround for this bug and now df shows the right size:
>
> That is good to hear.
> [root@stor1 ~]# df -h
> Filesystem Size Used Avail Use% Mounted on
> /dev/sdb1 26T 1,1T 25T 4% /mnt/glusterfs/vol0
> /dev/sdc1 50T 16T 34T 33% /mnt/glusterfs/vol1
> stor1data:/volumedisk0
> 101T 3,3T 97T 4% /volumedisk0
> stor1data:/volumedisk1
> 197T 61T 136T 31% /volumedisk1
>
>
> [root@stor2 ~]# df -h
> Filesystem Size Used Avail Use%...
2007 Jan 30
3
Export ZFS over NFS ?
...deployment with ZFS.
Consolidating a pair of other file servers that used to have
a dozen or so NFS exports in /etc/dfs/dfstab similar to:
/export/solaris/images
/export/tools
/export/ws
..... and so on....
For the new server, I have one large zfs pool:
-bash-3.00# df -hl
bigpool 16T 1.5T 15T 10% /export
that I am starting to populate. Should I simply share /export,
or should I separately share the individual dirs in /export
like the old dfstab did?
I am assuming that one single command:
# zfs set sharenfs=ro bigpool
would share /export as a read-only NFS point?...
2018 Feb 28
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
...ug and now df shows the right size:
>>
>> That is good to hear.
>
>
>
>> [root@stor1 ~]# df -h
>> Filesystem Size Used Avail Use% Mounted on
>> /dev/sdb1 26T 1,1T 25T 4% /mnt/glusterfs/vol0
>> /dev/sdc1 50T 16T 34T 33% /mnt/glusterfs/vol1
>> stor1data:/volumedisk0
>> 101T 3,3T 97T 4% /volumedisk0
>> stor1data:/volumedisk1
>> 197T 61T 136T 31% /volumedisk1
>>
>>
>> [root@stor2 ~]# df -h
>> Filesystem...
2018 Mar 01
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
...>>>
>>> That is good to hear.
>>
>>
>>
>>> [root@stor1 ~]# df -h
>>> Filesystem Size Used Avail Use% Mounted on
>>> /dev/sdb1 26T 1,1T 25T 4% /mnt/glusterfs/vol0
>>> /dev/sdc1 50T 16T 34T 33% /mnt/glusterfs/vol1
>>> stor1data:/volumedisk0
>>> 101T 3,3T 97T 4% /volumedisk0
>>> stor1data:/volumedisk1
>>> 197T 61T 136T 31% /volumedisk1
>>>
>>>
>>> [root@stor2 ~]...
2018 Mar 01
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
...at is good to hear.
>>>
>>>
>>>
>>>> [root@stor1 ~]# df -h
>>>> Filesystem Size Used Avail Use% Mounted on
>>>> /dev/sdb1 26T 1,1T 25T 4% /mnt/glusterfs/vol0
>>>> /dev/sdc1 50T 16T 34T 33% /mnt/glusterfs/vol1
>>>> stor1data:/volumedisk0
>>>> 101T 3,3T 97T 4% /volumedisk0
>>>> stor1data:/volumedisk1
>>>> 197T 61T 136T 31% /volumedisk1
>>>>
>>>>
>>...
2023 Mar 18
1
hardware issues and new server advice
...to skip RAID altogether and rely on gluster replication
instead (by compensating with three replicas per brick instead of two)
our options are:
six of these:
AMD Ryzen 5 Pro 3600 - 6c/12t - 3.6GHz/4.2GHz
32GB - 128GB RAM
4 or 6 x 6TB HDD SATA 6Gbit/s
or three of these:
AMD Ryzen 7 Pro 3700 - 8c/16t - 3.6GHz/4.4GHz
32GB - 128GB RAM
6 x 14TB HDD SAS 6Gbit/s
i would configure 5 bricks on each server (leaving one disk as a hot
spare)
the engineers prefer the second option due to the architecture and SAS
disks. it is also cheaper.
i am concerned that 14TB disks will take too long to heal if one e...
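The heal-time concern above can be put in rough numbers. A back-of-envelope sketch, assuming a sustained self-heal throughput of 100 MB/s (an assumed figure; real heal speed depends on file sizes, network, and disk contention):

```python
# Rough time to re-populate a replaced 14 TB brick from its replicas.
# The 100 MB/s figure is an assumption, not a measured gluster number.
disk_bytes = 14 * 10**12          # 14 TB, decimal, as drives are sold
heal_rate = 100 * 10**6           # assumed sustained heal rate, bytes/s
hours = disk_bytes / heal_rate / 3600
print(f"about {hours:.0f} hours")   # -> about 39 hours
```

Under that assumption a full 14 TB heal takes well over a day, versus roughly 17 hours for a 6 TB disk at the same rate.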
2018 Mar 01
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
...>>
>>>>
>>>>
>>>>> [root@stor1 ~]# df -h
>>>>> Filesystem Size Used Avail Use% Mounted on
>>>>> /dev/sdb1 26T 1,1T 25T 4% /mnt/glusterfs/vol0
>>>>> /dev/sdc1 50T 16T 34T 33% /mnt/glusterfs/vol1
>>>>> stor1data:/volumedisk0
>>>>> 101T 3,3T 97T 4% /volumedisk0
>>>>> stor1data:/volumedisk1
>>>>> 197T 61T 136T 31% /volumedisk1
>>>>>
>...
2018 Feb 28
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Jose,
There is a known issue with gluster 3.12.x builds (see [1]) so you may be
running into this.
The "shared-brick-count" values seem fine on stor1. Please send us the
output of grep -n "share" /var/lib/glusterd/vols/volumedisk1/* from the
other nodes so we can check whether they are the cause.
Regards,
Nithya
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1517260
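For background on why that value matters: gluster divides each brick's backing-filesystem size by its shared-brick-count when answering statfs, so a wrongly inflated count shrinks the total that df reports. A toy model of that calculation (illustrative only, not gluster's actual code):

```python
# Toy model of how shared-brick-count feeds the df total of a distributed
# volume: each brick contributes its filesystem size divided by the number
# of bricks believed to share that filesystem, and the client sums them.
def volume_size(bricks):
    """bricks: list of (fs_size_bytes, shared_brick_count) tuples."""
    return sum(size // count for size, count in bricks)

TIB = 2**40
# Correct case: two bricks on separate filesystems, shared-brick-count = 1.
print(volume_size([(50 * TIB, 1), (50 * TIB, 1)]) // TIB)   # -> 100
# The 3.12.x bug: the count is wrongly 2, so df under-reports the volume.
print(volume_size([(50 * TIB, 2), (50 * TIB, 2)]) // TIB)   # -> 50
```

This matches the symptom in the thread: the bricks are healthy, but the aggregate size df shows is a fraction of their sum.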
2011 Jun 25
1
Quota (and disk usage) is incorrectly reported on nfs client mounting XFS filesystem
...y 31 13:22:04 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux
NFS client:
Linux nx8.priv 2.6.18-238.12.1.el5 #1 SMP Tue May 31 13:22:04 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux
The NFS server is exporting an XFS filesystem:
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vgXX-lvXX 24T 16T 8.2T 66% /export
User foo (for anonymity) added ~3TB of data in the last 2-3 days.
On the NFS server, her quota is reported as ~5.8 TB:
Disk quotas for user foo (uid 1314):
Filesystem blocks quota limit grace files quota limit grace
/dev/mapper/vgXXX-lvXXX...
2018 Feb 27
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi,
A few days ago all my glusterfs configuration was working fine. Today I
realized that the total size reported by the df command had changed and is
smaller than the aggregate capacity of all the bricks in the volume.
I checked that all the volumes' status is fine, all the glusterd daemons
are running, and there are no errors in the logs; nevertheless, df shows a wrong total size.
My configuration for one volume:
2010 Jul 20
2
LVM issue
Hi, we use AoE disks for some of our systems. Currently a 15.65Tb filesystem we have is full. I extended the LVM by a further 4Tb, but resize4fs could not handle a filesystem over 16Tb (CentOS 5.5). I then reduced the LV by the same amount and attempted to create a new LV, but get this error message in the process:
lvcreate -v -ndata2 -L2T -t aoe
Test mode: Metadata will NOT be updated.
Setting logging type to disk
Finding volume group "aoe"
Test mode:...
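The 16Tb ceiling hit above is simple arithmetic: ext3 addresses blocks with 32-bit numbers, so the largest filesystem is 2^32 blocks. A quick check (the common 4 KiB block size is the only assumption):

```python
# ext3 (and pre-1.42 e2fsprogs tools) use 32-bit block numbers, so the
# largest filesystem is 2**32 blocks of 4 KiB each.
block_size = 4096           # bytes; the common ext3/ext4 block size
max_blocks = 2**32          # 32-bit block numbers
limit = block_size * max_blocks
print(limit // 2**40, "TiB")    # -> 16 TiB
```

Growing past that requires either a filesystem without the 32-bit limit or, later, ext4 with the 64-bit feature and e2fsprogs 1.42+.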
2012 May 23
5
biggest disk partition on 5.8?
Hey folks,
I have a Sun J4400 SAS1 disk array with 24 x 1T drives in it connected
to a Sunfire x2250 running 5.8 (64 bit).
I used 'arcconf' to create a big RAID60 out of them (see below),
but when I mount it, it is way too small.
This should be about 20TB:
[root@solexa1 StorMan]# df -h /dev/sdb1
Filesystem Size Used Avail Use% Mounted on
/dev/sdb1 186G 60M
2009 Nov 26
5
rquota did not show userquota (Solaris 10)
Hi,
we have a new fileserver running on X4275 hardware with Solaris 10U8.
On this fileserver we created one test dir with a quota and mounted it
on another Solaris 10 system. There the quota command did not show the
used quota. Does this feature only work with OpenSolaris, or is it
intended to work on Solaris 10?
Here what we did on the server:
# zfs create -o mountpoint=/export/home2
2010 Jul 22
4
[PATCH 1/3] ext3/ext4: Factor out disk addressability check
As part of adding support for OCFS2 to mount huge volumes, we need to
check that the sector_t and page cache of the system are capable of
addressing the entire volume.
An identical check already appears in ext3 and ext4. This patch moves
the addressability check into its own function in fs/libfs.c and
modifies ext3 and ext4 to invoke it.
Signed-off-by: Patrick LoPresti <lopresti at
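The check the patch factors out can be modelled in a few lines (a simplified sketch with illustrative names, not the kernel's actual helper): a volume is mountable only if every 512-byte sector index fits in sector_t and every page-cache index fits in pgoff_t.

```python
def is_addressable(volume_bytes, sector_t_bits=64, pgoff_t_bits=32,
                   page_size=4096):
    """Simplified model of the addressability check (illustrative,
    not the kernel API). sector_t_bits and pgoff_t_bits model the
    widths of the kernel's sector_t and pgoff_t types; the defaults
    model a 32-bit page cache index with 64-bit sector numbers."""
    sectors = volume_bytes // 512        # block layer counts 512 B sectors
    pages = volume_bytes // page_size    # page cache counts whole pages
    return sectors < 2**sector_t_bits and pages < 2**pgoff_t_bits

# With a 32-bit page index and 4 KiB pages the cap is 2**32 * 4096 = 16 TiB:
print(is_addressable(16 * 2**40))          # False: one page index too many
print(is_addressable(16 * 2**40 - 4096))   # True
```

This is why the same limit keeps surfacing for ext3, ext4, and OCFS2 on 32-bit systems: the constraint lives in the shared VFS/page-cache layer, not in any one filesystem.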
2003 Mar 30
1
[RFC][patch] dynamic rolling block and sum sizes II
...K
file length block_len block_count s2length xmit sums array_size
       261G      1022K        262K         5     2358K      9434K
      1033G      2047K        516K         5     4650K        18M
      2092G      2047K       1046K         6       10M        36M
      4239G      4095K       1060K         6       10M        37M
        16T      8191K       2091K         6       20M        73M
        64T        15M       4126K         6       40M       145M
       130T        15M       8359K         7       89M       293M
file length block_len block_count s2length xmit sums array_size
50...
2011 Nov 29
1
E2fsprogs 1.42 is released!
...d, and of course to all of the users of e2fsprogs.
Many thanks for your support, bug reports, code contributions, and
translations over the years.
Regards,
- Ted
E2fsprogs 1.42 (November 29, 2011)
==================================
This release of e2fsprogs has support for file systems > 16TB. Online
resize requires kernel support which will hopefully be in Linux
version 3.2. Offline support is not yet available for > 16TB file
systems, but will be coming.
This release of e2fsprogs has support for clustered allocation. This
reduces the number of block (now cluster) bitmaps by al...
2011 Sep 01
6
CentOS 6 Partitioning Map/Schema
Good Evening All,
I have a question regarding CentOS 6 server partitioning. Now I know
there are a lot of different ways to partition the system and different
opinions depending on the use of the server. I currently have a quad
core intel system running 8GB of RAM with 1 TB hard drive (single). In
the past as a FreeBSD user, I have always made a physical volume of the
root filesystem (/),
2006 Oct 03
1
HP Toolbox kills Samba
...