similar to: Gluster Geo replication

Displaying 20 results from an estimated 700 matches similar to: "Gluster Geo replication"

2023 Nov 03
1
Gluster Geo replication
While creating the Geo-replication session, it mounts the secondary volume to check the available size. To mount the secondary volume on the primary, port 24007 and ports 49152-49664 of the secondary volume need to be accessible from the primary (only on the node from which the geo-rep create command is executed). This needs to be changed to use SSH (bug). Alternatively, use the georep setup tool.
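Until that changes, the ports mentioned above have to be opened on the secondary. A minimal sketch, assuming firewalld is in use; the commands are printed rather than executed so they can be reviewed first:

```shell
# Ports named in the post: 24007 (glusterd) and 49152-49664 (brick range),
# opened on the secondary so they are reachable from the primary node
# that runs the geo-rep create command.
ports="24007/tcp 49152-49664/tcp"
for p in $ports; do
  echo "firewall-cmd --permanent --add-port=$p"
done
echo "firewall-cmd --reload"
```

On systems without firewalld, equivalent iptables ACCEPT rules for the same ports would be needed instead.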
2023 Nov 03
0
Gluster Geo replication
Hi, You simply need to enable port 22 on the geo-replication slave side. This will allow the master node to establish an SSH connection with the slave server and transfer data securely over SSH. Thanks, Anant ________________________________ From: Gluster-users <gluster-users-bounces at gluster.org> on behalf of dev devops <dev.devops12 at gmail.com> Sent: 31 October 2023 3:10 AM
2017 Sep 07
1
Firewalls and ports and protocols
Reading the documentation, there is conflicting information. According to https://wiki.centos.org/HowTos/GlusterFSonCentOS we only need TCP ports open between two GlusterFS servers: ports TCP 24007-24008 are required for communication between GlusterFS nodes, and each brick requires another TCP port starting at 24009. According to
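The conflict between the documents largely reflects a version change: GlusterFS releases before 3.4 allocated brick ports from 24009 upward, while 3.4 and later allocate them from 49152. A hedged sketch of picking the right base (the version string below is a hypothetical example):

```shell
# Brick port base depends on the GlusterFS version, which explains the
# conflicting documentation: < 3.4 starts bricks at 24009, >= 3.4 at 49152.
gluster_version="3.7"   # hypothetical example version
case "$gluster_version" in
  3.[0-3]|3.[0-3].*) brick_base=24009 ;;
  *)                 brick_base=49152 ;;
esac
echo "management: 24007-24008/tcp, bricks start at: $brick_base/tcp"
```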
2023 Mar 29
1
gluster csi driver
Looking at this code, it's way more than I was looking for, too. I just need a replacement for the in-tree driver. I have a volume. I have about a half dozen pods that use that volume. I just need the same capabilities as the in-tree driver to satisfy that need. I want to use kadalu to replace the hacky thing I'm still doing using hostpath_pv, but last time I checked, it didn't build
2023 Mar 29
1
gluster csi driver
Hi Joe, On Wed, Mar 29, 2023, 12:55 PM Joe Julian <me at joejulian.name> wrote: > I was chatting with Humble about the removed gluster support for > Kubernetes 1.26 and the long deprecated CSI driver. > > I'd like to bring it back from archive and maintain it. If anybody would > like to participate, that'd be great! If I'm just maintaining it for my > own use,
2023 Oct 27
1
State of the gluster project
It is very unfortunate that Gluster is not maintained. From Kadalu Technologies, we are trying to set up a small team dedicated to maintaining GlusterFS for the next three years. This will only be possible if we get funding from the community and companies. The details about the proposal are here: https://kadalu.tech/gluster/ About Kadalu Technologies: Kadalu Technologies was started in 2019 by a few
2023 Oct 27
1
State of the gluster project
Maybe a bit OT... I'm no expert on either, but the concepts are quite similar. Both require "extra" nodes (metadata and monitor), but those can be virtual machines or you can host the services on OSD machines. We don't use snapshots, so I can't comment on that. My experience with Ceph is limited to having it working on Proxmox. No experience yet with CephFS. BeeGFS is
2018 Jun 13
4
Samba 4.8 RODC not working
On Wed, 13 Jun 2018 10:05:23 +0200 (CEST) Gaetan SLONGO <gslongo at it-optics.com> wrote: > Hi Rowland, > > > Same, as said; winbind isn't started :-) > > > > [root at dmzrodc ~]# ps ax | egrep "ntp|bind|named|samba|?mbd" > 650 ? Ss 0:00 /usr/sbin/ntpd -u ntp:ntp -g > 1205 ? Ss 0:00 /usr/sbin/samba -D > 1225 ? S 0:00 /usr/sbin/samba
2009 Feb 03
3
Videoconference one-to-many
Dear all, I've implemented an Asterisk 1.4 with SIP service for VoIP and video, so I can establish a voice + video connection *one-to-one* only... it works OK. But I'd like to implement a videoconference *one-to-many* in order to intercommunicate many clients. Is it possible with Asterisk 1.4? (Multicast is better than broadcast in this situation, of course.) Thanks a lot, Alejandro
2019 Apr 24
2
Iptables blocks outgoing connections sometimes
Hi guys. There is a weird problem with iptables recently; I hope somebody can help me. I have installed CentOS 7.2.1511 on a bare-metal Dell server these days, disabled firewalld and enabled iptables.service, and set up a group of very simple rules, as the following: # iptables-save # Generated by iptables-save v1.4.21 on Tue Apr 23 09:15:14 2019 *filter :INPUT ACCEPT [0:0] :FORWARD ACCEPT
2024 Jan 27
1
Geo-replication status is getting Faulty after few seconds
Don't forget to test with the georep key. I think it was /var/lib/glusterd/geo-replication/secret.pem Best Regards, Strahil Nikolov On Saturday, 27 January 2024 at 07:24:07 GMT+2, Strahil Nikolov <hunter86_bg at yahoo.com> wrote: Hi Anant, I would first start by checking whether you can do ssh from all masters to the slave node. If you haven't set up a dedicated user for the
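The suggested check can be sketched as an SSH probe from each master using the geo-replication key. The slave host name below is a placeholder, and the command is printed for review rather than run:

```shell
# Geo-replication pushes data over SSH using this key (path per the post).
key="/var/lib/glusterd/geo-replication/secret.pem"
slave="slave.example.com"   # placeholder host name
# -oPasswordAuthentication=no makes the test fail fast if the key
# is not accepted, instead of falling back to a password prompt.
echo "ssh -oPasswordAuthentication=no -i $key root@$slave gluster --version"
```

If the probe fails, the geo-rep session will show Faulty regardless of what else is configured correctly.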
2013 Oct 14
1
centos 6.x glusterfs 3.2.7 firewall blocking
centos 6.x, gluster --version: glusterfs 3.2.7 built on Jun 11 2012 13:22:29. The problem is that when I'm trying to probe like this: gluster peer probe [hostname] It never probes because the firewall is blocking (when I turn it off on both sides everything works). But I want to keep the firewall running. A google search gives me several possible ports to open, so I
2023 Mar 29
1
gluster csi driver
I was chatting with Humble about the removed gluster support for Kubernetes 1.26 and the long deprecated CSI driver. I'd like to bring it back from archive and maintain it. If anybody would like to participate, that'd be great! If I'm just maintaining it for my own use, then so be it. It has been archived for 4 years, so it's going to need a bit of work. I also don't
2018 Apr 04
2
glusterd2 problem
Hello! Installed packages from SIG on CentOS 7; at first start it works, but after restart it does not: glusterd2 --config /etc/glusterd2/glusterd2.toml DEBU[2018-04-04 09:28:16.945267] Starting GlusterD pid=221581 source="[main.go:55:main.main]" version=v4.0.0-0 INFO[2018-04-04 09:28:16.945824] loaded configuration from file
2018 Apr 06
0
glusterd2 problem
Hi Dmitry, How many nodes does the cluster have ? If the quorum is lost (majority of nodes are down), additional recovery steps are necessary to bring it back up: https://github.com/gluster/glusterd2/wiki/Recovery On Wed, Apr 4, 2018 at 11:08 AM, Dmitry Melekhov <dm at belkam.com> wrote: > Hello! > > Installed packages from SIG on centos7 , > > at first start it works,
2019 Apr 24
2
Reply: Iptables blocks outgoing connections sometimes
Hello Stephen, thank you for the input. Yes, these servers have the same firewall rules, and both of them have the same problem from time to time; most of the time they are fine. Actually, these servers are newly installed to be used as GlusterFS storage servers, so there is not much data flowing at this time. From the sysctl output, I suppose it can't be a conntrack table overflow :
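The overflow check referred to here can be sketched by comparing the current conntrack entry count against the configured maximum (a hedged sketch; the keys only exist on kernels with nf_conntrack loaded, hence the fallbacks):

```shell
# Intermittent drops of outgoing connections can occur when the conntrack
# table fills up; compare usage against the limit.
max=$(sysctl -n net.netfilter.nf_conntrack_max 2>/dev/null || echo 0)
count=$(sysctl -n net.netfilter.nf_conntrack_count 2>/dev/null || echo 0)
echo "conntrack: $count of $max entries in use"
# Overflow would also leave "nf_conntrack: table full" messages in dmesg.
```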
2018 Mar 28
3
Announcing Gluster release 4.0.1 (Short Term Maintenance)
On Wed, Mar 28, 2018 at 02:57:55PM +1300, Thing wrote: > Hi, > > Thanks, yes, not very familiar with Centos and hence googling took a while > to find a 4.0 version at, > > https://wiki.centos.org/SpecialInterestGroup/Storage The announcement for Gluster 4.0 in CentOS should contain all the details that you need as well:
2024 Feb 18
1
Graceful shutdown doesn't stop all Gluster processes
Well, you prepare the host for shutdown, right? So why don't you set up systemd to start the container and shut it down before the bricks? Best Regards, Strahil Nikolov On Friday, 16 February 2024 at 18:48:36 GMT+2, Anant Saraswat <anant.saraswat at techblue.co.uk> wrote: Hi Strahil, Yes, we mount the fuse to the physical host and then use bind mount to
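The ordering suggested here can be expressed as a systemd drop-in. A sketch with a hypothetical container unit name; systemd stops units in the reverse of their start order, so the container stops before glusterd on shutdown:

```ini
# /etc/systemd/system/my-container.service.d/order.conf
# "my-container.service" is a hypothetical unit name; substitute yours.
[Unit]
After=glusterd.service
Requires=glusterd.service
```

After adding the drop-in, `systemctl daemon-reload` picks it up; from then on a host shutdown stops the container unit before glusterd and its bricks.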
2001 Jul 12
1
Prioritizing streams
I have two servers on a network that need to intercommunicate a lot (file sharing and authentication information). I'd like to prioritise that traffic over their other network traffic, but I don't want to think in terms of necessarily fixing bandwidths; I just want the inter-server communication to go out first if there's a backlog. I could decide, I guess,
2024 Feb 16
2
Graceful shutdown doesn't stop all Gluster processes
Okay, I understand. Yes, it would be beneficial to include an option for skipping the client processes. This way, we could utilize the 'stop-all-gluster-processes.sh' script with that option to stop the gluster server process while retaining the fuse mounts. ________________________________ From: Aravinda <aravinda at kadalu.tech> Sent: 16 February 2024 12:36 PM To: Anant Saraswat