Strahil Nikolov
2019-May-26 13:38 UTC
[Gluster-users] [ovirt-users] Re: Single instance scaleup.
Yeah, it seems different from the docs. I'm adding the gluster-users list, as
they are more experienced in that.
@Gluster-users,
can you provide some hints on how to add additional replicas to the volumes
below, so they become 'replica 2 arbiter 1' or 'replica 3' type volumes?
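If it helps, my own guess is that it boils down to a single add-brick call that raises the replica count - something like this (untested, hostnames and brick paths are just placeholders):

```shell
# Guess: convert a single-brick volume to an arbitrated replica set
# in one add-brick call. The last brick listed becomes the arbiter.
# Hostnames and brick paths below are placeholders.
gluster volume add-brick VOLNAME replica 3 arbiter 1 \
    host2:/gluster_bricks/VOLNAME/brick \
    host3:/gluster_bricks/VOLNAME/arbiter
```

but the docs seem to describe something else, hence my question.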
Best Regards,
Strahil Nikolov
On Sunday, May 26, 2019 at 15:16:18 GMT+3, Leo David <leoalex at
gmail.com> wrote:
Thank you Strahil,
The engine and ssd-samsung volumes are distributed... So these are the ones
that I need to have replicated across the new nodes. I am not very sure about
the procedure to accomplish this.
Thanks,
Leo
On Sun, May 26, 2019, 13:04 Strahil <hunter86_bg at yahoo.com> wrote:
Hi Leo,
As your volumes are not distributed across multiple bricks (each has a single
brick), you can easily switch to replica 2 arbiter 1 or replica 3 volumes.
You can use the following for adding the bricks:
https://access.redhat.com/documentation/en-US/Red_Hat_Storage/2.1/html/Administration_Guide/Expanding_Volumes.html
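The general pattern from that page, adapted to raising the replica count instead of just adding distribute bricks, would look roughly like this (brick paths are examples, adjust to your layout):

```shell
# Probe the new nodes first so they join the trusted pool.
gluster peer probe host2
gluster peer probe host3

# Add one brick per new node while raising the replica count to 3.
gluster volume add-brick VOLNAME replica 3 \
    host2:/gluster_bricks/VOLNAME/brick \
    host3:/gluster_bricks/VOLNAME/brick

# Self-heal then copies the data onto the new bricks; watch it with:
gluster volume heal VOLNAME info
```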
Best Regards,
Strahil Nikolov
On May 26, 2019 10:54, Leo David <leoalex at gmail.com> wrote:
Hi Strahil,
Thank you so much for your input!
gluster volume info
Volume Name: engine
Type: Distribute
Volume ID: d7449fc2-cc35-4f80-a776-68e4a3dbd7e1
Status: Started
Snapshot Count: 0
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: 192.168.80.191:/gluster_bricks/engine/engine
Options Reconfigured:
nfs.disable: on
transport.address-family: inet
storage.owner-uid: 36
storage.owner-gid: 36
features.shard: on
performance.low-prio-threads: 32
performance.strict-o-direct: off
network.remote-dio: off
network.ping-timeout: 30
user.cifs: off
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
cluster.eager-lock: enable

Volume Name: ssd-samsung
Type: Distribute
Volume ID: 76576cc6-220b-4651-952d-99846178a19e
Status: Started
Snapshot Count: 0
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: 192.168.80.191:/gluster_bricks/sdc/data
Options Reconfigured:
cluster.eager-lock: enable
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
user.cifs: off
network.ping-timeout: 30
network.remote-dio: off
performance.strict-o-direct: on
performance.low-prio-threads: 32
features.shard: on
storage.owner-gid: 36
storage.owner-uid: 36
transport.address-family: inet
nfs.disable: on
The other two hosts will be 192.168.80.192/193 - this is a gluster-dedicated
network over a 10GB SFP+ switch.
- host 2 will have an identical hardware configuration with host 1 (each disk
is actually a raid0 array)
- host 3 has:
  - 1 ssd for OS
  - 1 ssd for adding to the engine volume in a full replica 3
  - 2 ssd's in a raid 1 array to be added as arbiter for the data volume
    (ssd-samsung)
So the plan is to have "engine" scaled in a full replica 3, and
"ssd-samsung" scaled in a replica 3 arbitrated.
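If I understand the syntax correctly, that would translate to something like the following - the brick paths on the new hosts are just my assumption, please correct me if I am wrong:

```shell
# engine -> full replica 3 (one data brick per new host; paths assumed)
gluster volume add-brick engine replica 3 \
    192.168.80.192:/gluster_bricks/engine/engine \
    192.168.80.193:/gluster_bricks/engine/engine

# ssd-samsung -> replica 3 arbitrated (last brick listed becomes the
# arbiter, which would live on host 3's raid 1 array)
gluster volume add-brick ssd-samsung replica 3 arbiter 1 \
    192.168.80.192:/gluster_bricks/sdc/data \
    192.168.80.193:/gluster_bricks/arbiter/data
```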
On Sun, May 26, 2019 at 10:34 AM Strahil <hunter86_bg at yahoo.com> wrote:
Hi Leo,
Gluster is quite smart, but in order to provide any hints, can you provide the
output of 'gluster volume info <glustervol>'?
If you have 2 more systems, keep in mind that it is best to mirror the storage
on the second replica (2 disks on 1 machine -> 2 disks on the new machine),
while for the arbiter this is not necessary.
What are your network and NICs? Based on my experience, I can recommend at
least 10 Gbit/s interface(s).
Best Regards,
Strahil Nikolov
On May 26, 2019 07:52, Leo David <leoalex at gmail.com> wrote:
Hello Everyone,
Can someone help me to clarify this? I have a single-node 4.2.8 installation
(only two gluster storage domains - distributed single-drive volumes). Now I
just got two identical servers and I would like to go for a 3-node bundle. Is
it possible (after joining the new nodes to the cluster) to expand the
existing volumes across the new nodes and change them to replica 3 arbitrated?
If so, could you share with me what the procedure would be? Thank you very
much!
Leo
--
Best regards, Leo David
Leo David
2019-May-27 10:54 UTC
[Gluster-users] [ovirt-users] Re: Single instance scaleup.
Hi,
Any suggestions?
Thank you very much!

Leo

On Sun, May 26, 2019 at 4:38 PM Strahil Nikolov <hunter86_bg at yahoo.com> wrote:
> [...]

--
Best regards, Leo David