Displaying 8 results from an estimated 8 matches for "volumedisk0".
2018 Mar 01 (0 replies): df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
                      ...0PB    1475199   0   0   completed   64:31:30
stor3data    703964   16384.0PB   1475983   0   0   completed   64:37:55
volume rebalance: volumedisk1: success
[root@stor1 ~]# gluster volume rebalance volumedisk0 status
Node   Rebalanced-files   size   scanned   failures   skipped   status   run time in h:m:s
---------   -----------   -----------   -----------   -----------   -----------   ------------   ---------...
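A side note on the 16384.0PB size above: that figure is exactly 2^64 bytes expressed in pebibytes, which suggests a wrapped 64-bit byte counter rather than a real size. That reading is my inference, not something stated in the thread; it checks out arithmetically:

# 2^64 bytes rendered in PiB; matches the suspicious 16384.0PB column above.
echo '2^64 / 1024^5' | bc    # prints 16384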
2018 Mar 01 (2 replies): df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
                      ...0PB    1475199   0   0   completed   64:31:30
stor3data    703964   16384.0PB   1475983   0   0   completed   64:37:55
volume rebalance: volumedisk1: success
[root@stor1 ~]# gluster volume rebalance volumedisk0 status
Node   Rebalanced-files   size   scanned   failures   skipped   status   run time in h:m:s
---------   -----------   -----------   -----------   -----------   -----------   ------------   ---------...
2018 Feb 28 (2 replies): df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
...> stor3data
851053 (stor1) - 845250 (stor3) = 5803 files of difference!
In addition, correct me if I'm wrong, stor3data should have a 50%
probability of storing a new file (even taking into account the DHT
algorithm and its filename patterns; a toy illustration follows this excerpt).
Thanks,
Greetings.
Jose V.
Status of volume: volumedisk0
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick stor1data:/mnt/glusterfs/vol0/brick1  49152     0          Y       13533
Brick stor2data:/mnt/glusterfs...
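For context on the 50% question in that message: Gluster's DHT assigns each brick a slice of a 32-bit hash space and places a file by hashing its name into that space, so the observed split depends on how the actual filenames hash. Below is a toy sh illustration of range-based placement; it uses cksum as a stand-in hash (not Gluster's real Davies-Meyer hash), and the brick names are simply borrowed from the thread:

#!/bin/sh
# Toy hash-range placement (NOT Gluster's real DHT hash): hash a filename
# into the 32-bit space and pick the brick that owns that half of the range.
name="$1"
hash=$(printf '%s' "$name" | cksum | cut -d' ' -f1)   # 32-bit CRC stand-in
if [ "$hash" -lt 2147483648 ]; then                   # first half of 2^32
    echo "$name -> stor1data"                         # brick owning [0, 2^31)
else
    echo "$name -> stor3data"                         # brick owning [2^31, 2^32)
fi

With many real filenames the two halves only converge toward 50/50 if the name hashes spread evenly, which is exactly the caveat raised later in the thread.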
2018 Feb 28 (2 replies): df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
...ithya,
I applied the workaround for this bug and now df shows the right size:
[root@stor1 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sdb1 26T 1,1T 25T 4% /mnt/glusterfs/vol0
/dev/sdc1 50T 16T 34T 33% /mnt/glusterfs/vol1
stor1data:/volumedisk0
101T 3,3T 97T 4% /volumedisk0
stor1data:/volumedisk1
197T 61T 136T 31% /volumedisk1
[root@stor2 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sdb1 26T 1,1T 25T 4% /mnt/glusterfs/vol0
/dev/sdc1...
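The excerpt cuts off before showing the workaround itself. Public discussion of this class of 3.12.6 df bug centered on the shared-brick-count value glusterd writes into the generated brick volfiles; attributing the thread's workaround to that setting is an assumption on my part. On a default install the value can at least be inspected:

# Inspect shared-brick-count in the generated brick volfiles (hedged sketch;
# the exact workaround applied in the thread is not shown in this excerpt).
grep -r "shared-brick-count" /var/lib/glusterd/vols/volumedisk0/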
2018 Mar 01 (0 replies): df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
...patterns)
>
> Theoretically yes, but again, it depends on the filenames and their hash
> distribution.
Please send us the output of:
gluster volume rebalance <volname> status
for the volume.
Regards,
Nithya
> Thanks,
> Greetings.
>
> Jose V.
>
> Status of volume: volumedisk0
> Gluster process                             TCP Port  RDMA Port  Online  Pid
> ------------------------------------------------------------------------------
> Brick stor1data:/mnt/glusterfs/vol0/brick1  49152     0          Y
>...
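For anyone reproducing the request above, the status for both volumes comes from the same CLI invocation shown earlier in the thread, with the columns Node, Rebalanced-files, size, scanned, failures, skipped, status, and run time in h:m:s:

# Collect rebalance status for both volumes named in this thread.
gluster volume rebalance volumedisk0 status
gluster volume rebalance volumedisk1 status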
2018 Feb 28 (0 replies): df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
...hows the right size:
>
> That is good to hear.
> [root@stor1 ~]# df -h
> Filesystem Size Used Avail Use% Mounted on
> /dev/sdb1 26T 1,1T 25T 4% /mnt/glusterfs/vol0
> /dev/sdc1 50T 16T 34T 33% /mnt/glusterfs/vol1
> stor1data:/volumedisk0
> 101T 3,3T 97T 4% /volumedisk0
> stor1data:/volumedisk1
> 197T 61T 136T 31% /volumedisk1
>
>
> [root@stor2 ~]# df -h
> Filesystem Size Used Avail Use% Mounted on
> /dev/sdb1 26T 1,1T 25T 4...
2018 Feb 27 (2 replies): df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
...46% /
tmpfs 32G 80K 32G 1% /dev/shm
/dev/sda1 190M 62M 119M 35% /boot
/dev/sda4 395G 251G 124G 68% /data
/dev/sdb1 26T 601G 25T 3% /mnt/glusterfs/vol0
/dev/sdc1 50T 15T 36T 29% /mnt/glusterfs/vol1
stor1data:/volumedisk0
76T 1,6T 74T 3% /volumedisk0
stor1data:/volumedisk1
*148T* 42T 106T 29% /volumedisk1
That is exactly one brick short: 196.4 TB - 49.1 TB ≈ 148 TB
It's a production system so I hope you can help me.
Thanks in advance.
Jose V.
Below some other data...
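A quick check of the arithmetic in that report, assuming four data bricks of about 49.1 TB each (the brick count is inferred from the totals, not listed in this excerpt):

# Expected aggregate vs. what df reports: exactly one brick's capacity short.
echo '4 * 49.1' | bc      # expected: 196.4 TB
echo '196.4 - 49.1' | bc  # reported: 147.3 TB, which df rounds to 148T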
2018 Feb 28 (0 replies): df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
...32G 80K 32G 1% /dev/shm
> /dev/sda1 190M 62M 119M 35% /boot
> /dev/sda4 395G 251G 124G 68% /data
> /dev/sdb1 26T 601G 25T 3% /mnt/glusterfs/vol0
> /dev/sdc1 50T 15T 36T 29% /mnt/glusterfs/vol1
> stor1data:/volumedisk0
> 76T 1,6T 74T 3% /volumedisk0
> stor1data:/volumedisk1
> *148T* 42T 106T 29% /volumedisk1
>
> That is exactly one brick short: 196.4 TB - 49.1 TB ≈ 148 TB
>
> It's a production system so I hope you can help me.
>
> Thanks in...