On April 3, 2020 10:12:19 PM GMT+03:00, Valerio Luccio <valerio.luccio at nyu.edu> wrote:
> Hello,
>
> I have a live gluster setup like this:
>
>     Volume Name: MRIData
>     Type: Distributed-Replicate
>     Volume ID: e051ac20-ead1-4648-9ac6-a29b531515ca
>     Status: Started
>     Snapshot Count: 0
>     Number of Bricks: 6 x 2 = 12
>     Transport-type: tcp
>     Bricks:
>     Brick1: hydra1:/gluster1/data
>     Brick2: hydra1:/gluster2/data
>     Brick3: hydra1:/gluster3/data
>     Brick4: hydra2:/gluster1/data
>     Brick5: hydra2:/gluster2/data
>     Brick6: hydra2:/gluster3/data
>     Brick7: hydra3:/gluster1/data
>     Brick8: hydra3:/gluster2/data
>     Brick9: hydra3:/gluster3/data
>     Brick10: hydra4:/gluster1/data
>     Brick11: hydra4:/gluster2/data
>     Brick12: hydra4:/gluster3/data
>     Options Reconfigured:
>     transport.address-family: inet
>     nfs.disable: on
>     nfs.exports-auth-enable: on
>     features.cache-invalidation: off
>
> This was set up as replica 2 and I would now like to turn it into
> replica 3. I believe it's possible without loss of data (a combination
> of "replace-brick" and other commands), but I cannot find any docs
> describing the process.
>
> Am I right that it is possible? Where can I find documentation?
>
> Thanks,

Yeah, it's possible. You can add an arbiter or a full replica, depending on your needs.

Actually, the info described here is quite useful:
https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.4/html/administration_guide/creating_arbitrated_replicated_volumes

Instead of an arbiter, you can place a full brick, but you need to follow the warning:
1. Remove geo-replication
2. Disable self-healing

Of course, it is better to create a test volume (even with a few GB for the bricks) and test on it before executing on production.

Best Regards,
Strahil Nikolov
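To make the two options concrete, here is a rough, untested sketch of the commands involved. The arbiter host "arb1" and the hydra5/hydra6 bricks are hypothetical, and the exact set of options to disable should be taken from the guide above, not from this sketch. A 6 x 2 volume needs one new brick per replica pair, listed in pair order:

    # Option 1: replica 2 -> replica 2 + arbiter
    # (one arbiter brick per replica pair; 6 pairs, so 6 bricks)
    gluster volume add-brick MRIData replica 3 arbiter 1 \
        arb1:/arbiter1/data arb1:/arbiter2/data arb1:/arbiter3/data \
        arb1:/arbiter4/data arb1:/arbiter5/data arb1:/arbiter6/data

    # Option 2: replica 2 -> full replica 3
    # Per the warning: stop geo-replication first (if configured) and
    # disable self-healing for the duration of the conversion, e.g.:
    gluster volume set MRIData cluster.self-heal-daemon off

    # one full-sized brick per replica pair, in pair order
    gluster volume add-brick MRIData replica 3 \
        hydra5:/gluster1/data hydra5:/gluster2/data hydra5:/gluster3/data \
        hydra6:/gluster1/data hydra6:/gluster2/data hydra6:/gluster3/data

    # re-enable self-healing and let the new bricks populate
    gluster volume set MRIData cluster.self-heal-daemon on
    gluster volume heal MRIData full
    gluster volume heal MRIData info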
On 4/4/20 2:46 AM, Strahil Nikolov wrote:
> Yeah, it's possible. You can add an arbiter or a full replica,
> depending on your needs. [...]

Thanks for the prompt reply.

--
Valerio
Hi,

I'm afraid I still need some help.

When I originally set up my gluster (about 3 years ago), I set it up as
Distributed-Replicated without specifying a replica count, and I believe
that defaulted to replica 2. I have 4 servers with 3 RAIDs attached to
each server. This was my result:

    Number of Bricks: 6 x 2 = 12
    Transport-type: tcp
    Bricks:
    Brick1: hydra1:/gluster1/data
    Brick2: hydra1:/gluster2/data
    Brick3: hydra1:/gluster3/data
    Brick4: hydra2:/gluster1/data
    Brick5: hydra2:/gluster2/data
    Brick6: hydra2:/gluster3/data
    Brick7: hydra3:/gluster1/data
    Brick8: hydra3:/gluster2/data
    Brick9: hydra3:/gluster3/data
    Brick10: hydra4:/gluster1/data
    Brick11: hydra4:/gluster2/data
    Brick12: hydra4:/gluster3/data

If I understand this correctly, I have 6 sub-volumes, with Brick2 a
replica of Brick1, Brick4 of Brick3, etc. Correct? I realize now that it
would probably have been better to specify a different order, but now I
cannot change it.

Now I want to store oVirt images on the Gluster, and oVirt requires
either replica 1 or replica 3. I need to be able to reuse the bricks I
have, and was planning to remove some bricks, initialize them, and add
them back to form a replica 3.

Am I supposed to remove 6 bricks, one from each sub-volume? Will that
work? Will I lose storage space? Can I just remove a brick from each
server and use those for the replica 3?

Thanks for all the help.

--
Valerio Luccio                  (212) 998-8736
Center for Brain Imaging        4 Washington Place, Room 158
New York University             New York, NY 10003
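For context on the layout question: in a distributed-replicate volume, consecutive bricks in the `gluster volume info` output form each replica set, so the pairs here are (Brick1, Brick2), (Brick3, Brick4), up to (Brick11, Brick12). That is why, in this layout, hydra1:/gluster1 and hydra1:/gluster2 replicate each other. One arithmetic constraint worth noting: 12 equal-sized bricks divide evenly under replica 3 only as 4 x 3, so reusing the same hardware means shrinking from 6 sub-volumes to 4, and usable capacity drops from 6 bricks' worth to 4. A rough, untested sketch of that kind of reshuffle follows (it assumes all data fits on 4 sub-volumes during the shrink, and it is illustrative, not a vetted procedure):

    # 1. Remove the last two replica pairs; 'start' migrates their data off
    gluster volume remove-brick MRIData \
        hydra3:/gluster3/data hydra4:/gluster1/data \
        hydra4:/gluster2/data hydra4:/gluster3/data start

    # 2. Poll until the migration shows 'completed', then commit
    gluster volume remove-brick MRIData \
        hydra3:/gluster3/data hydra4:/gluster1/data \
        hydra4:/gluster2/data hydra4:/gluster3/data status
    gluster volume remove-brick MRIData \
        hydra3:/gluster3/data hydra4:/gluster1/data \
        hydra4:/gluster2/data hydra4:/gluster3/data commit

    # 3. Wipe the freed bricks (they still carry old gluster metadata),
    #    then re-add them as the third replica of the 4 remaining pairs,
    #    one brick per remaining sub-volume, in sub-volume order
    gluster volume add-brick MRIData replica 3 \
        hydra3:/gluster3/data hydra4:/gluster1/data \
        hydra4:/gluster2/data hydra4:/gluster3/data

    # 4. Let self-heal populate the new replicas before putting oVirt on it
    gluster volume heal MRIData full
    gluster volume heal MRIData info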