Displaying 15 results from an estimated 15 matches for "25t".
2018 Feb 28
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Nithya,
I applied the workaround for this bug and now df shows the right size:
[root at stor1 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sdb1 26T 1,1T 25T 4% /mnt/glusterfs/vol0
/dev/sdc1 50T 16T 34T 33% /mnt/glusterfs/vol1
stor1data:/volumedisk0
101T 3,3T 97T 4% /volumedisk0
stor1data:/volumedisk1
197T 61T 136T 31% /volumedisk1
[root at stor2 ~]# df -h
Filesystem S...
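The workaround itself is not quoted in these snippets. As context, a hypothetical sketch of the checks usually involved in the 3.12.x df size bug (the volume name volumedisk0 comes from the post; the shared-brick-count value and the volfile-regeneration trick are assumptions, not the exact fix used in this thread):
# Sketch only; verify against the actual bug report before using.
grep -r "shared-brick-count" /var/lib/glusterd/vols/volumedisk0/
# With every brick on its own filesystem, each value should be 1.
# Setting any volume option makes glusterd regenerate the volfiles, e.g.:
gluster volume set volumedisk0 cluster.min-free-disk 1%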
2018 Feb 28
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
...<jocarbur at gmail.com> wrote:
> Hi Nithya,
>
> I applied the workaround for this bug and now df shows the right size:
>
> That is good to hear.
> [root at stor1 ~]# df -h
> Filesystem Size Used Avail Use% Mounted on
> /dev/sdb1 26T 1,1T 25T 4% /mnt/glusterfs/vol0
> /dev/sdc1 50T 16T 34T 33% /mnt/glusterfs/vol1
> stor1data:/volumedisk0
> 101T 3,3T 97T 4% /volumedisk0
> stor1data:/volumedisk1
> 197T 61T 136T 31% /volumedisk1
>
>
> [root at s...
2018 Feb 28
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
...; Hi Nithya,
>>
>> I applied the workaround for this bug and now df shows the right size:
>>
>> That is good to hear.
>
>
>
>> [root at stor1 ~]# df -h
>> Filesystem Size Used Avail Use% Mounted on
>> /dev/sdb1 26T 1,1T 25T 4% /mnt/glusterfs/vol0
>> /dev/sdc1 50T 16T 34T 33% /mnt/glusterfs/vol1
>> stor1data:/volumedisk0
>> 101T 3,3T 97T 4% /volumedisk0
>> stor1data:/volumedisk1
>> 197T 61T 136T 31% /volumedisk1
>&...
2018 Mar 01
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
...applied the workaround for this bug and now df shows the right size:
>>>
>>> That is good to hear.
>>
>>
>>
>>> [root at stor1 ~]# df -h
>>> Filesystem Size Used Avail Use% Mounted on
>>> /dev/sdb1 26T 1,1T 25T 4% /mnt/glusterfs/vol0
>>> /dev/sdc1 50T 16T 34T 33% /mnt/glusterfs/vol1
>>> stor1data:/volumedisk0
>>> 101T 3,3T 97T 4% /volumedisk0
>>> stor1data:/volumedisk1
>>> 197T 61T 136T 31...
2018 Feb 28
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
...Size Used Avail Use% Mounted on
> /dev/sda2 48G 21G 25G 46% /
> tmpfs 32G 80K 32G 1% /dev/shm
> /dev/sda1 190M 62M 119M 35% /boot
> /dev/sda4 395G 251G 124G 68% /data
> /dev/sdb1 26T 601G 25T 3% /mnt/glusterfs/vol0
> /dev/sdc1 50T 15T 36T 29% /mnt/glusterfs/vol1
> stor1data:/volumedisk0
> 76T 1,6T 74T 3% /volumedisk0
> stor1data:/volumedisk1
> *148T* 42T 106T 29% /volumedisk1
>
> Exactly 1 bri...
2018 Mar 01
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
...s bug and now df shows the right size:
>>>>
>>>> That is good to hear.
>>>
>>>
>>>
>>>> [root at stor1 ~]# df -h
>>>> Filesystem Size Used Avail Use% Mounted on
>>>> /dev/sdb1 26T 1,1T 25T 4% /mnt/glusterfs/vol0
>>>> /dev/sdc1 50T 16T 34T 33% /mnt/glusterfs/vol1
>>>> stor1data:/volumedisk0
>>>> 101T 3,3T 97T 4% /volumedisk0
>>>> stor1data:/volumedisk1
>>>>...
2018 Mar 01
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
...size:
>>>>>
>>>>> That is good to hear.
>>>>
>>>>
>>>>
>>>>> [root at stor1 ~]# df -h
>>>>> Filesystem Size Used Avail Use% Mounted on
>>>>> /dev/sdb1 26T 1,1T 25T 4% /mnt/glusterfs/vol0
>>>>> /dev/sdc1 50T 16T 34T 33% /mnt/glusterfs/vol1
>>>>> stor1data:/volumedisk0
>>>>> 101T 3,3T 97T 4% /volumedisk0
>>>>> stor1data:/volumedisk1
>>>>>...
2018 Feb 27
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
...~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 48G 21G 25G 46% /
tmpfs 32G 80K 32G 1% /dev/shm
/dev/sda1 190M 62M 119M 35% /boot
/dev/sda4 395G 251G 124G 68% /data
/dev/sdb1 26T 601G 25T 3% /mnt/glusterfs/vol0
/dev/sdc1 50T 15T 36T 29% /mnt/glusterfs/vol1
stor1data:/volumedisk0
76T 1,6T 74T 3% /volumedisk0
stor1data:/volumedisk1
*148T* 42T 106T 29% /volumedisk1
Exactly one brick short: 196,4 TB - 49,1 TB = 148T...
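The arithmetic in that last line is consistent with the df output above: 196,4 TB would be four bricks of about 49,1 TB, and the broken 148T total is the same sum with one brick left out. A quick check (decimal points instead of the post's commas):
echo "scale=1; 4 * 49.1" | bc      # 196.4, the expected volumedisk1 total
echo "scale=1; 196.4 - 49.1" | bc  # 147.3, roughly the 148T that df reports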
2013 Mar 16
1
different size of nodes
...n
performance.write-behind-window-size: 4MB
performance.cache-refresh-timeout: 1
performance.cache-size: 4GB
network.frame-timeout: 60
performance.cache-max-file-size: 1GB
As you can see 2 of the bricks are smaller and they're full.
The gluster volume is not full of course:
gl0:/w-vol 25T 21T 4.0T 84% /W/Projects
I'm not able to write to the volume. Why? Is it an issue? If so, is it known?
How can I stop writing to full nodes?
Thanks,
tamas
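For the question about keeping writes off the full bricks, a commonly used knob is cluster.min-free-disk, which tells DHT not to place new files on bricks below a free-space threshold; whether it fits this particular setup is an assumption. A sketch with the volume name from the post:
gluster volume set w-vol cluster.min-free-disk 5%   # avoid bricks under 5% free for new files
gluster volume rebalance w-vol start                # spread existing files onto the larger bricks
gluster volume rebalance w-vol status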
2018 Jan 31
1
df does not show full volume capacity after update to 3.12.4
...45T 14T 77% /bricks/data_A2
/dev/sdd1 59T 39M 59T 1% /bricks/data_A4
/dev/sdc1 59T 1.9T 57T 4% /bricks/data_A3
server-A:/dataeng 350T 83T 268T 24% /dataeng
And on server-B:
Filesystem Size Used Avail Use% Mounted on
/dev/sdb1 59T 34T 25T 58% /bricks/data_B2
/dev/sdc1 59T 2.0T 57T 4% /bricks/data_B3
/dev/sdd1 59T 39M 59T 1% /bricks/data_B4
/dev/sda1 59T 38T 22T 64% /bricks/data_B1
server-B:/dataeng 350T 83T 268T 24% /dataeng
Eva Freer
From: Sam McLeod <mailinglists at smcleod....
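A quick way to see whether the 350T figure matches what the servers actually contribute is to compare the brick filesystems with the mounted volume; a sketch using the paths and volume name from the post:
df -h /bricks/data_*          # run on each server; the brick mount points shown above
df -h /dataeng                # the mounted volume
gluster volume info dataeng   # lists every brick the volume is built from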
2010 Sep 29
4
XFS on a 25 TB device
Hello all,
I have just configured a 64-bit CentOS 5.5 machine to support an XFS
filesystem as specified in the subject line. The filesystem will be used to
store an extremely large number of files (in the tens of millions). Due to
its extremely large size, would there be any non-standard XFS
build/configuration options I should consider?
Thanks.
Boris.
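One point that usually comes up for an XFS filesystem this large holding tens of millions of files is the inode64 mount option, so inodes are not confined to the first 1 TiB of the device. A sketch only; the device and mount point are placeholders, and mkfs.xfs defaults are otherwise generally reasonable:
mkfs.xfs /dev/sdX1                               # placeholder device
mount -o inode64,noatime /dev/sdX1 /mnt/bigfs    # placeholder mount point
xfs_info /mnt/bigfs                              # confirm the geometry mkfs chose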
2004 Jun 09
1
Samba client filesize problems
...and W2k machine.
The original smbclient did this so I upgraded to the latest, it didn't
help. The server is W2k with an NTFS volume, these are video files - the
windows software breaks the files at the 2G limit, but for some
reason some are reported as 2G others as huge !!
One day I might own 25T of disk, but not today.
Anyone any ideas ?
Thanks,
Jon
[root@jonspc bin]# smbmount
Usage: mount.smbfs service mountpoint [-o options,...]
Version 2.2.9
[root@jonspc bin]# uname -a
Linux jonspc 2.4.20-8 #1 Thu Mar 13 17:18:24 EST 2003 i686 athlon i386
GNU/Linux
[root@jonspc record]# ls -lt
tota...
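The 2G-versus-huge sizes are the classic large-file limitation of the old smbfs client. One possible direction, assuming the kernel in use has the CIFS client available (the share and user names are placeholders, not from the post):
mount -t cifs //w2kbox/video /mnt/video -o username=jon   # CIFS handles files over 2 GB
ls -l /mnt/video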
2018 Jan 31
0
df does not show full volume capacity after update to 3.12.4
We noticed something similar.
Out of interest, does du -sh . show the same size?
--
Sam McLeod (protoporpoise on IRC)
https://smcleod.net
https://twitter.com/s_mcleod
Words are my own opinions and do not necessarily represent those of my employer or partners.
> On 31 Jan 2018, at 12:47 pm, Freer, Eva B. <freereb at ornl.gov> wrote:
>
> After OS update to CentOS 7.4 or RedHat
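The du-versus-df comparison suggested here is quick to run against the mount point from Eva's post above, though du will take a long time on a volume this size:
df -h /dataeng     # capacity and usage as the Gluster client reports it
du -sh /dataeng    # usage summed file by file, for comparison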
2018 Jan 31
4
df does not show full volume capacity after update to 3.12.4
After OS update to CentOS 7.4 or RedHat 6.9 and update to Gluster 3.12.4, the 'df' command shows only part of the available space on the mount point for multi-brick volumes. All nodes are at 3.12.4. This occurs on both servers and clients.
We have 2 different server configurations.
Configuration 1: A distributed volume of 8 bricks with 4 on each server. The initial configuration had 4 bricks of
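For a volume showing only part of its space, one low-risk check is the per-brick sizes as gluster itself sees them; comparing these with df on the brick filesystems shows which bricks are being undercounted in the mount's total. A sketch, reusing the volume name from the earlier post in this thread:
gluster volume status dataeng detail   # per-brick Total / Free Disk Space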
2008 Jun 30
4
Rebuild of kernel 2.6.9-67.0.20.EL failure
Hello list.
I'm trying to rebuild the 2.6.9-67.0.20.EL kernel, but it fails even without
modifications.
How did I try it?
Created a (non-root) build environment (not a mock)
Installed the kernel .src.rpm and did a
rpmbuild -ba --target=`uname -m` kernel-2.6.spec 2> prep-err.log | tee
prep-out.log
The build failed at the end:
Processing files: kernel-xenU-devel-2.6.9-67.0.20.EL
Checking
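The failure happens while packaging the kernel-xenU-devel subpackage, so the build logs around rpmbuild's unpackaged-files check are the first place to look; a sketch using the log files from the rpmbuild command quoted above:
tail -n 40 prep-out.log prep-err.log       # the lines after "Checking for unpackaged file(s)" usually name the problem
grep -n '^%define build' kernel-2.6.spec   # which kernel variants (xenU, smp, ...) the spec builds
If the spec exposes the xenU variant as a build conditional, rpmbuild's --with/--without switches could disable it for a local rebuild, but whether this particular spec wires those up needs checking in the spec file itself.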