2018 Mar 01
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
...et] 0-volumedisk1-dht: Rebalance is completed. Time taken is 232675.00 secs
[2018-02-13 03:47:48.703351] I [MSGID: 109028] [dht-rebalance.c:5057:gf_defrag_status_get] 0-volumedisk1-dht: Files migrated: 703964, size: 14046969178073, lookups: 1475983, failures: 0, skipped: 0
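The same completion summary can be read from the CLI at any time; a minimal check, assuming the volume name volumedisk1 taken from the log above:

gluster volume rebalance volumedisk1 status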
Checking my logs, the new stor3node was added and the rebalance task was
executed on 2018-02-10. Since that date I have been storing new files.
The exact sequence of commands to add the new node was:
gluster peer probe stor3data
gluster volume add-brick volumedisk0 stor3data:/mnt/disk_b1/glusterfs/vol0
gluster volume add-brick volum...
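The snippet above is truncated, so the sequence below is only a sketch of the full expansion: the second add-brick and the vol1 brick path are assumptions, and the rebalance lines correspond to the 'rebalance force' run described in the later messages.

gluster peer probe stor3data
gluster volume add-brick volumedisk0 stor3data:/mnt/disk_b1/glusterfs/vol0
# assumed counterpart for the second volume; the real brick path may differ
gluster volume add-brick volumedisk1 stor3data:/mnt/disk_b1/glusterfs/vol1
# redistribute existing data across the new bricks
gluster volume rebalance volumedisk0 start force
gluster volume rebalance volumedisk1 start force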
2018 Mar 01
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Jose,
On 28 February 2018 at 22:31, Jose V. Carrión <jocarbur at gmail.com> wrote:
> Hi Nithya,
>
> My initial setup was composed of 2 similar nodes: stor1data and stor2data.
> A month ago I expanded both volumes with a new node: stor3data (2 bricks
> per volume).
> Of course, after adding the new peer with its bricks I ran the 'rebalance
> force' operation.
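For context, the 'rebalance force' step maps to the CLI invocation below; the mount point /mnt/glusterfs is a hypothetical example for checking the aggregate capacity that df reports afterwards:

gluster volume rebalance volumedisk1 start force
# after completion, compare the client-side capacity with the per-brick view
df -h /mnt/glusterfs
gluster volume status volumedisk1 detail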
2018 Feb 28
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Nithya,
My initial setup was composed of 2 similar nodes: stor1data and stor2data.
A month ago I expanded both volumes with a new node: stor3data (2 bricks
per volume).
Of course, after adding the new peer with its bricks I ran the 'rebalance
force' operation. This task finished successfully (you can see the info below)
and the number of files on the 3 nodes was very similar.
For volumedisk1 I
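One rough way to compare the file counts per node, assuming the brick paths shown earlier (run on each of stor1data, stor2data and stor3data; the totals include DHT link files on the bricks, so treat them as approximate):

find /mnt/disk_b1/glusterfs/vol0 -type f | wc -l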