Displaying 9 results from an estimated 9 matches for "638g".
2009 Jan 12
1
ZFS size is different?
Hi all,
I have 2 questions about ZFS.
1. I created a snapshot in my pool1/data1 and used zfs send/recv to copy it to pool2/data2, but the USED shown by zfs list is different:
NAME         USED  AVAIL  REFER  MOUNTPOINT
pool2/data2  160G  1.44T   159G  /pool2/data2
pool1/data   176G   638G   175G  /pool1/data1
It holds about 30,000,000 files.
The content of pool1/data1 and pool2/data2 is almost the same, so why is the size different?
2. /pool2/data2 is on a RAID5 disk array with 8 disks, and /pool1/data1 is a RAIDZ2 with 5 disks.
The configuration looks like this:
NAME STAT...
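A USED difference like this after send/recv usually traces to per-dataset properties or pool geometry rather than missing data: compression or copies settings that differ between source and target, snapshot space counted on one side only, and raidz2 parity plus allocation padding versus a hardware RAID5 LUN that hides its parity from ZFS. A minimal way to compare the two sides, assuming both pools are imported on one host (a sketch, not from the original thread; dataset names as in the post):

# Space-accounting properties that commonly explain a USED mismatch
# between a send/recv source and its target.
zfs get -r used,referenced,usedbysnapshots,compressratio,copies,recordsize pool1/data1
zfs get -r used,referenced,usedbysnapshots,compressratio,copies,recordsize pool2/data2

# Pool geometry: raidz2 stores parity and padding inside the pool, while a
# hardware RAID5 LUN presents as a plain disk, so identical data can show
# different USED values.
zpool status pool1
zpool status pool2

With ~30,000,000 mostly small files, raidz allocation padding and differing recordsize in particular can account for a gap of this size.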
2018 Feb 28
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
...34T  33% /mnt/glusterfs/vol1
stor2data:/volumedisk0
                101T  3,3T    97T   4% /volumedisk0
stor2data:/volumedisk1
                197T   61T   136T  31% /volumedisk1
[root@stor3 ~]# df -h
Filesystem      Size  Used  Avail Use% Mounted on
/dev/sdb1        25T  638G    24T   3% /mnt/disk_b1/glusterfs/vol0
/dev/sdb2        25T  654G    24T   3% /mnt/disk_b2/glusterfs/vol0
/dev/sdc1        50T   15T    35T  30% /mnt/disk_c/glusterfs/vol1
/dev/sdd1        50T   15T    35T  30% /mnt/disk_d/glusterfs/vol1
stor3data:/volumedisk0
1...
2018 Feb 28
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
...a:/volumedisk0
>                 101T  3,3T    97T   4% /volumedisk0
> stor2data:/volumedisk1
>                 197T   61T   136T  31% /volumedisk1
>
>
> [root@stor3 ~]# df -h
> Filesystem      Size  Used  Avail Use% Mounted on
> /dev/sdb1        25T  638G    24T   3% /mnt/disk_b1/glusterfs/vol0
> /dev/sdb2        25T  654G    24T   3% /mnt/disk_b2/glusterfs/vol0
> /dev/sdc1        50T   15T    35T  30% /mnt/disk_c/glusterfs/vol1
> /dev/sdd1        50T   15T    35T  30% /mnt/disk_d/glusterfs/vol1
> stor3data:/volumedisk0
>...
2018 Feb 28
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
...101T  3,3T   97T   4% /volumedisk0
>> stor2data:/volumedisk1
>>                 197T   61T   136T  31% /volumedisk1
>>
>>
>> [root@stor3 ~]# df -h
>> Filesystem      Size  Used  Avail Use% Mounted on
>> /dev/sdb1        25T  638G    24T   3% /mnt/disk_b1/glusterfs/vol0
>> /dev/sdb2        25T  654G    24T   3% /mnt/disk_b2/glusterfs/vol0
>> /dev/sdc1        50T   15T    35T  30% /mnt/disk_c/glusterfs/vol1
>> /dev/sdd1        50T   15T    35T  30% /mnt/disk_d/glusterfs/vol1
>> stor3data:...
2018 Mar 01
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
...7T   4% /volumedisk0
>>> stor2data:/volumedisk1
>>>                 197T   61T   136T  31% /volumedisk1
>>>
>>>
>>> [root@stor3 ~]# df -h
>>> Filesystem      Size  Used  Avail Use% Mounted on
>>> /dev/sdb1        25T  638G    24T   3% /mnt/disk_b1/glusterfs/vol0
>>> /dev/sdb2        25T  654G    24T   3% /mnt/disk_b2/glusterfs/vol0
>>> /dev/sdc1        50T   15T    35T  30% /mnt/disk_c/glusterfs/vol1
>>> /dev/sdd1        50T   15T    35T  30% /mnt/disk_d/glusterfs/vol1
>>...
2018 Mar 01
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
...;>> stor2data:/volumedisk1
>>>> 197T 61T 136T 31% /volumedisk1
>>>>
>>>>
>>>> [root at stor3 ~]# df -h
>>>> Filesystem Size Used Avail Use% Mounted on
>>>> /dev/sdb1 25T 638G 24T 3% /mnt/disk_b1/glusterfs/vol0
>>>> /dev/sdb2 25T 654G 24T 3% /mnt/disk_b2/glusterfs/vol0
>>>> /dev/sdc1 50T 15T 35T 30% /mnt/disk_c/glusterfs/vol1
>>>> /dev/sdd1 50T 15T 35T 30% /mnt/disk_d/glusterfs/...
2018 Mar 01
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
...umedisk1
>>>>>                 197T   61T   136T  31% /volumedisk1
>>>>>
>>>>>
>>>>> [root@stor3 ~]# df -h
>>>>> Filesystem      Size  Used  Avail Use% Mounted on
>>>>> /dev/sdb1        25T  638G    24T   3% /mnt/disk_b1/glusterfs/vol0
>>>>> /dev/sdb2        25T  654G    24T   3% /mnt/disk_b2/glusterfs/vol0
>>>>> /dev/sdc1        50T   15T    35T  30% /mnt/disk_c/glusterfs/vol1
>>>>> /dev/sdd1        50T   15T    35T  30% /mnt/disk_...
2018 Feb 28
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Jose,
There is a known issue with gluster 3.12.x builds (see [1]), so you may be
running into it.
The "shared-brick-count" values look fine on stor1. Please send us the output of
grep -n "share" /var/lib/glusterd/vols/volumedisk1/*
from the other nodes so we can check whether those values are the cause.
Regards,
Nithya
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1517260
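For reference, the option in question lives in the brick volfiles that glusterd generates under /var/lib/glusterd/vols/<volname>/; per the bug report in [1], a wrong shared-brick-count makes glusterd divide each brick's reported capacity by that count when answering df. A hedged sketch of the requested check (the interpretation in the comment follows the bug report, not this thread):

# Run on each node. With every brick on its own filesystem, each brick
# volfile should carry "option shared-brick-count 1"; a value N > 1 means
# that brick's capacity is divided by N, shrinking the volume's df total.
grep -n "share" /var/lib/glusterd/vols/volumedisk1/*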
2018 Feb 27
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi,
Some days ago all my glusterfs configuration was working fine. Today I
realized that the total size reported by the df command has changed and is
smaller than the aggregated capacity of all the bricks in the volume.
I checked that the status of all the volumes is fine, all the glusterd
daemons are running, and there are no errors in the logs; nevertheless df
shows a wrong total size.
My configuration for one volume: ...
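Whatever the volume layout turns out to be, the mismatch can be quantified by summing the brick filesystems and comparing the result with what the fuse mount reports. A sketch using stor3's two vol1 bricks from the df output above (GNU df options assumed; a complete check would sum the bricks on every node):

# Sum the brick filesystems in bytes and print the total in TiB.
df -B1 --output=size /mnt/disk_c/glusterfs/vol1 /mnt/disk_d/glusterfs/vol1 \
  | tail -n +2 | awk '{ sum += $1 } END { printf "%.1f TiB in these bricks\n", sum / 2^40 }'

# A plain distributed volume should report roughly the sum over all nodes'
# bricks; a much smaller Size column points at the shared-brick-count issue.
df -h /volumedisk1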