Displaying 8 results from an estimated 8 matches for "stor1data".
2018 Feb 28 · 2 · df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Nithya,
I applied the workaround for this bug and now df shows the right size:
[root@stor1 ~]# df -h
Filesystem              Size  Used  Avail  Use%  Mounted on
/dev/sdb1                26T  1,1T    25T    4%  /mnt/glusterfs/vol0
/dev/sdc1                50T   16T    34T   33%  /mnt/glusterfs/vol1
stor1data:/volumedisk0  101T  3,3T    97T    4%  /volumedisk0
stor1data:/volumedisk1  197T   61T   136T   31%  /volumedisk1
[root@stor2 ~]# df -h
Filesystem              Size  Used  Avail  Use%  Mounted on
/dev/sdb1                26T  1,1T    25T    4%  /mnt/glusterfs/vol0
/de...
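The workaround itself is not quoted in this snippet. Assuming it is the shared-brick-count fix usually associated with this df capacity bug on 3.12.x (an assumption, not confirmed by the text above), a minimal way to verify it on each server is:

# assumption: the applied workaround is the shared-brick-count fix for the 3.12.x df bug;
# with each brick on its own filesystem, every value reported here should be 1
grep -n shared-brick-count /var/lib/glusterd/vols/volumedisk0/*
grep -n shared-brick-count /var/lib/glusterd/vols/volumedisk1/*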
2018 Feb 28 · 2 · df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Nithya,
My initial setup was composed of 2 similar nodes: stor1data and stor2data.
A month ago I expanded both volumes with a new node: stor3data (2 bricks
per volume).
Of course, after adding the new peer with its bricks I ran the 'rebalance
force' operation. This task finished successfully (you can see the info below)
and the number of files on the 3 nodes were very...
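The snippet above says the rebalance finished successfully; a quick way to confirm that from any peer (a minimal sketch using the volume names from this thread) is:

# per-node file counts, sizes and completion status of the last rebalance run
gluster volume rebalance volumedisk0 status
gluster volume rebalance volumedisk1 status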
2018 Feb 28 · 0 · df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
...nd now df shows the right size:
>
> That is good to hear.
> [root@stor1 ~]# df -h
> Filesystem              Size  Used  Avail  Use%  Mounted on
> /dev/sdb1                26T  1,1T    25T    4%  /mnt/glusterfs/vol0
> /dev/sdc1                50T   16T    34T   33%  /mnt/glusterfs/vol1
> stor1data:/volumedisk0  101T  3,3T    97T    4%  /volumedisk0
> stor1data:/volumedisk1  197T   61T   136T   31%  /volumedisk1
>
>
> [root@stor2 ~]# df -h
> Filesystem              Size  Used  Avail  Use%  Mounted on
> /dev/sdb1                26T  1...
2018 Mar 01 · 0 · df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Jose,
On 28 February 2018 at 22:31, Jose V. Carrión <jocarbur at gmail.com> wrote:
> Hi Nithya,
>
> My initial setup was composed of 2 similar nodes: stor1data and stor2data.
> A month ago I expanded both volumes with a new node: stor3data (2 bricks
> per volume).
> Of course, after adding the new peer with its bricks I ran the 'rebalance
> force' operation. This task finished successfully (you can see info below)
> and number of files...
2018 Mar 01 · 2 · df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
.../glusterfs/vol0
2018-03-01 6:32 GMT+01:00 Nithya Balachandran <nbalacha at redhat.com>:
> Hi Jose,
>
> On 28 February 2018 at 22:31, Jose V. Carrión <jocarbur at gmail.com> wrote:
>
>> Hi Nithya,
>>
>> My initial setup was composed of 2 similar nodes: stor1data and
>> stor2data. A month ago I expanded both volumes with a new node: stor3data
>> (2 bricks per volume).
>> Of course, after adding the new peer with its bricks I ran the 'rebalance
>> force' operation. This task finished successfully (you can see info below)
>>...
2018 Feb 27 · 2 · df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
...no error in logs; however, df shows a wrong total size.
My configuration for one volume: volumedisk1
[root@stor1 ~]# gluster volume status volumedisk1 detail
Status of volume: volumedisk1
------------------------------------------------------------------------------
Brick                : Brick stor1data:/mnt/glusterfs/vol1/brick1
TCP Port             : 49153
RDMA Port            : 0
Online               : Y
Pid                  : 13579
File System          : xfs
Device               : /dev/sdc1
Mount Options        : rw,noatime
Inode Size           : 512
Disk Space Free      : 35.0TB
Total Disk Sp...
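To cross-check the brick-level numbers above against what the clients see, one option (a minimal sketch; volume name and mount points taken from this thread) is to compare the per-brick totals with df on the backing filesystem and on the fuse mount:

# per-brick capacity as reported by glusterd
gluster volume status volumedisk1 detail | grep 'Total Disk Space'
# capacity of the local backing filesystem for the same brick
df -h /mnt/glusterfs/vol1
# the aggregated client-side view that the bug affects
df -h /volumedisk1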
2018 Mar 01 · 0 · df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
...c/glusterfs/vol1
gluster volume add-brick volumedisk1 stor3data:/mnt/disk_d/glusterfs/vol1
gluster volume rebalance volumedisk0 start force
gluster volume rebalance volumedisk1 start force
For some reason, could the DHT range assigned to the stor3data bricks be
unbalanced? Could it be smaller than the ranges of stor1data and stor2data?
Is there any way to verify it?
Is there any way to modify/rebalance the DHT range between bricks so that
each brick covers an equal range? (See the sketch below.)
Thanks a lot,
Greetings.
Jose V.
2018-03-01 10:39 GMT+01:00 Jose V. Carrión <jocarbur at gmail.com>:
> Hi Nithya,
> Below the output of...
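The DHT-range questions above can be checked directly on the bricks: each brick's root directory carries a trusted.glusterfs.dht xattr that encodes the hash range assigned to that brick, and a fix-layout rebalance recalculates the ranges without moving file data. A minimal sketch, assuming the brick paths quoted earlier in the thread:

# run on each node: dump the DHT layout range assigned to the local brick
# (the last two 8-hex-digit groups of the value are the range start and end)
getfattr -n trusted.glusterfs.dht -e hex /mnt/glusterfs/vol1/brick1
# recompute and spread the layout across all bricks without migrating files
gluster volume rebalance volumedisk1 fix-layout start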
2018 Feb 28 · 0 · df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
...ad total size.
>
> My configuration for one volume: volumedisk1
> [root@stor1 ~]# gluster volume status volumedisk1 detail
>
> Status of volume: volumedisk1
> ------------------------------------------------------------------------------
> Brick                : Brick stor1data:/mnt/glusterfs/vol1/brick1
> TCP Port             : 49153
> RDMA Port            : 0
> Online               : Y
> Pid                  : 13579
> File System          : xfs
> Device               : /dev/sdc1
> Mount Options        : rw,noatime
> Inode Size           : 512
>...