Hi All,
We are adding a new brick (91TB) to our existing gluster volume (328
TB). The new brick is on a new physical server and we want to make sure
that we are doing this correctly (the existing volume had 2 bricks on a
single physical server). Both are running glusterfs 3.7.11. The steps
we've followed are below (a quick sketch of the commands follows the list):
* Set up glusterfs and created the brick on the new server mseas-data3
* Ran peer probe commands to make sure that mseas-data2 and
mseas-data3 are in a Trusted Storage Pool
* Added brick3 (91TB, lives on mseas-data3) to data-volume on
mseas-data2 (328TB)
* Ran the "gluster volume rebalance data-volume fix-layout start"
command (fix-layout so that data is NOT migrated during the process;
we did not want to do that yet). Still running as of this email.
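(Roughly, the commands were along these lines; the volume name, hostnames,
and brick path are the ones shown in the volume info further down:)

# on mseas-data2: bring the new server into the trusted storage pool
gluster peer probe mseas-data3
gluster peer status

# add the new 91TB brick to the existing distribute volume
gluster volume add-brick data-volume mseas-data3:/export/sda/brick3

# fix the layout only, without migrating any existing data yet
gluster volume rebalance data-volume fix-layout start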
The first question we have is: at what point should the additional 91 TB
be visible to the client servers? Currently when we run "df -h" on any
client we still only see the original 328 TB. Does the additional space
only appear to the clients after the rebalance using fix-layout finishes?
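(For reference, the client-side check is just df against the gluster
mount; /gdata below is only a placeholder for our actual client mount
point:)

# /gdata stands in for the client-side mount of data-volume
df -h /gdata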
The follow-up question is: how long should the rebalance using fix-layout
take?
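(We assume the standard rebalance status command also reports fix-layout
progress, e.g.:)

# check progress of the fix-layout on each node
gluster volume rebalance data-volume status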
Some additional information
# gluster volume info
Volume Name: data-volume
Type: Distribute
Volume ID: c162161e-2a2d-4dac-b015-f31fd89ceb18
Status: Started
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: mseas-data2:/mnt/brick1
Brick2: mseas-data2:/mnt/brick2
Brick3: mseas-data3:/export/sda/brick3
Options Reconfigured:
diagnostics.client-log-level: ERROR
network.inode-lru-limit: 50000
performance.md-cache-timeout: 60
performance.open-behind: off
disperse.eager-lock: off
auth.allow: *
server.allow-insecure: on
nfs.exports-auth-enable: on
diagnostics.brick-sys-log-level: WARNING
performance.readdir-ahead: on
nfs.disable: on
nfs.export-volumes: off
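(On the servers, we assume the volume status commands below are the way to
double-check that brick3 is online and what capacity it reports:)

# confirm the brick processes, including the new brick3, are online
gluster volume status data-volume

# per-brick disk space as seen by the servers
gluster volume status data-volume detail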
Thanks
--
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Pat Haley Email: phaley at mit.edu
Center for Ocean Engineering Phone: (617) 253-6824
Dept. of Mechanical Engineering Fax: (617) 253-8125
MIT, Room 5-213 http://web.mit.edu/phaley/www/
77 Massachusetts Avenue
Cambridge, MA 02139-4301
Nithya Balachandran
2018-Jul-30 16:43 UTC
[Gluster-users] Questions on adding brick to gluster
Hi,

On 30 July 2018 at 00:45, Pat Haley <phaley at mit.edu> wrote:

> The first question we have is: at what point should the additional 91 TB
> be visible to the client servers? Currently when we run "df -h" on any
> client we still only see the original 328 TB. Does the additional space
> only appear to the clients after the rebalance using fix-layout finishes?

No, this should be available immediately. Do you see any error messages in
the client log files when running df? Does running the command from a
fresh mount succeed?

> The follow-up question is: how long should the rebalance using fix-layout
> take?

This is dependent on the number of directories on the volume.

Regards,
Nithya