Displaying 20 results from an estimated 1000 matches similar to: "Firewalls and ports and protocols"
2023 Oct 31
2
Gluster Geo replication
Hi All,
Which ports need to be opened for Gluster Geo-replication? We have a very
locked-down setup. I could gather the info below; do all of these ports need
to be open on master and slave for inter-communication, or would 22 alone
work, since it uses rsync over SSH for the actual data push?
* *Port 22 (TCP):* Used by SSH for secure data communication in
Geo-replication.
* *Port 24007
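As a point of reference, a minimal firewalld sketch for the ports discussed in
this thread (22 for SSH, 24007 for glusterd, and the 49152-49664 brick range);
the exact brick range and zone are assumptions that depend on the deployment:

# Assumed example: open SSH plus the glusterd and brick ports
# (adjust the ranges and zone to your own setup).
firewall-cmd --permanent --add-port=22/tcp
firewall-cmd --permanent --add-port=24007/tcp
firewall-cmd --permanent --add-port=49152-49664/tcp
firewall-cmd --reload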
2018 Mar 28
3
Announcing Gluster release 4.0.1 (Short Term Maintenance)
On Wed, Mar 28, 2018 at 02:57:55PM +1300, Thing wrote:
> Hi,
>
> Thanks, yes, not very familiar with CentOS and hence googling took a while
> to find a 4.0 version at,
>
> https://wiki.centos.org/SpecialInterestGroup/Storage
The announcement for Gluster 4.0 in CentOS should contain all the
details that you need as well:
2023 Nov 03
1
Gluster Geo replication
While creating the Geo-replication session, Gluster mounts the secondary volume to check the available size. To mount the secondary volume on the primary, ports 24007 and 49152-49664 of the secondary volume need to be accessible from the primary (only on the node from which the Geo-rep create command is executed). This needs to be changed to use SSH (bug). Alternatively, use the georep setup tool.
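For context, a sketch of the session-creation step being described, using
hypothetical volume and host names (primaryvol, secondaryvol,
secondary.example.com); the create command is the point at which the secondary
volume gets mounted on the primary:

gluster system:: execute gsec_create   # generate the common pem keys on a primary node
gluster volume geo-replication primaryvol secondary.example.com::secondaryvol create push-pem
gluster volume geo-replication primaryvol secondary.example.com::secondaryvol start
gluster volume geo-replication primaryvol secondary.example.com::secondaryvol status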
2018 Apr 06
0
glusterd2 problem
Hi Dmitry,
How many nodes does the cluster have?
If the quorum is lost (majority of nodes are down), additional recovery
steps are necessary to bring it back up:
https://github.com/gluster/glusterd2/wiki/Recovery
On Wed, Apr 4, 2018 at 11:08 AM, Dmitry Melekhov <dm at belkam.com> wrote:
> Hello!
>
> Installed packages from the SIG on CentOS 7;
>
> at first start it works,
2018 Apr 04
2
glusterd2 problem
Hello!
Installed packages from the SIG on CentOS 7;
at first start it works, but after a restart it does not:
glusterd2 --config /etc/glusterd2/glusterd2.toml
DEBU[2018-04-04 09:28:16.945267] Starting GlusterD    pid=221581 source="[main.go:55:main.main]" version=v4.0.0-0
INFO[2018-04-04 09:28:16.945824] loaded configuration from file
2018 Jun 13
4
Samba 4.8 RODC not working
On Wed, 13 Jun 2018 10:05:23 +0200 (CEST)
Gaetan SLONGO <gslongo at it-optics.com> wrote:
> Hi Rowland,
>
>
> Same, as said; winbind isn't started :-)
>
>
>
> [root@dmzrodc ~]# ps ax | egrep "ntp|bind|named|samba|?mbd"
> 650 ? Ss 0:00 /usr/sbin/ntpd -u ntp:ntp -g
> 1205 ? Ss 0:00 /usr/sbin/samba -D
> 1225 ? S 0:00 /usr/sbin/samba
2016 Dec 31
2
HowTos/GlusterFSonCentOS
Hello,
I'd like to add some information at the end of the last section about
extending glusterfs volumes.
7. Extend GlusterFS Volumes without downtime
https://wiki.centos.org/HowTos/GlusterFSonCentOS
I plan on adding notes regarding re-balancing the data.
My wiki user name is MichaelBear
Thanks,
--
---~~.~~---
Mike
// SilverTip257 //
2013 Oct 14
1
centos 6.x glusterfs 3.2.7 firewall blocking
centos 6.x
gluster --version
glusterfs 3.2.7 built on Jun 11 2012 13:22:29
The problem is that when I'm trying to probe like this:
gluster peer probe [hostname]
it never probes, because the firewall is blocking it (when I turn it off on
both sides everything works).
But I want to keep the firewall running.
A Google search gives me several possible ports to open, so I
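For what it's worth, a hedged iptables sketch for this era of GlusterFS
(3.2.x): 24007-24008 are used for management, and brick processes typically
listen from 24009 upward, one port per brick, so the upper bound of the range
below is an assumption that depends on how many bricks the server hosts:

# CentOS 6 style iptables rules (assumed sketch; adjust the ranges to your bricks)
iptables -I INPUT -p tcp --dport 24007:24008 -j ACCEPT
iptables -I INPUT -p tcp --dport 24009:24020 -j ACCEPT
service iptables save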
2016 Dec 31
2
HowTos/GlusterFSonCentOS
On Sat, Dec 31, 2016 at 11:32 AM, Mike - st257 <silvertip257 at gmail.com> wrote:
>
>
> On Sat, Dec 31, 2016 at 2:20 PM, Mike - st257 <silvertip257 at gmail.com>
> wrote:
>>
>> Hello,
>>
>> I'd like to add some information at the end of the last section about
>> extending glusterfs volumes.
>>
>> 7. Extend GlusterFS Volumes
2023 Nov 03
0
Gluster Geo replication
Hi,
You simply need to enable port 22 on the geo-replication slave side. This will allow the master node to establish an SSH connection with the slave server and transfer data securely over SSH.
Thanks,
Anant
________________________________
From: Gluster-users <gluster-users-bounces at gluster.org> on behalf of dev devops <dev.devops12 at gmail.com>
Sent: 31 October 2023 3:10 AM
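Following up on the advice above to allow port 22 on the slave, a quick hedged
connectivity check from the primary side before creating the session (the
hostname is hypothetical):

# Verifies that SSH to the secondary is reachable at all.
ssh -o ConnectTimeout=5 root@secondary.example.com true && echo "ssh to the secondary works"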
2017 Sep 04
2
heal info OK but statistics not working
1) One peer, out of four, got separated from the network, from the rest of
the cluster.
2) That peer (while it was unavailable) got detached with the "gluster peer
detach" command, which succeeded, so the cluster now comprises three peers.
3) The self-heal daemon (for some reason) does not start (even after an
attempt to restart glusterd) on the peer which probed that fourth peer.
4) fourth
2018 Jun 13
0
Samba 4.8 RODC not working
Hi Louis, Hi Rowland,
I will respond to both in this mail.
Yes, winbind is installed:
[root@dmzrodc ~]# which winbindd
/usr/sbin/winbindd
[root@dmzrodc ~]# rpm -qa | grep winbind
sernet-samba-winbind-4.8.2-10.el7.x86_64
I know about the *mbd processes. So strange... This is why I'm posting here :-)
I joined the RODC following the procedure available on the wiki page
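A small hedged check that can tell whether winbindd is actually present and
answering (wbinfo ships with Samba); on an AD DC, including an RODC, the samba
process normally starts winbindd itself, so it may not show up as a separately
started service:

pgrep -a winbindd    # is any winbindd process running (possibly forked by samba)?
wbinfo -p            # ping winbindd over its local pipe
wbinfo --ping-dc     # check winbindd's connection to a domain controller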
2017 Sep 04
0
heal info OK but statistics not working
Ravi/Karthick,
If one of the self-heal processes is down, will the statistics heal-count
command work?
On Mon, Sep 4, 2017 at 7:24 PM, lejeczek <peljasz at yahoo.co.uk> wrote:
> 1) one peer, out of four, got separated from the network, from the rest of
> the cluster.
> 2) that peer (while it was unavailable) got detached with the
> "gluster peer detach" command
2017 Jun 29
0
Change of listening ports - is it possible?
On 28-Jun-2017 10:02 PM, "Rafał Radecki" <radecki.rafal at gmail.com> wrote:
Hi All.
I am spawning Gluster clusters in Docker containers, and in the current
architecture we want to spawn a different set of containers for every volume,
possibly on the same group of servers. Ports used by
glusterd/glusterfsd/glusterfs are:
24007
24008
49152 and higher
There is a problem once I try to
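One knob worth mentioning here, as a hedged sketch: the range that brick
processes listen on can be shifted per installation by editing the "volume
management" block in /etc/glusterfs/glusterd.vol and restarting glusterd. The
values below are assumptions, max-port only exists in newer releases, and the
24007/24008 management ports are not moved by this:

option base-port 49252   # first port handed out to brick processes
option max-port  49452   # upper bound of the brick port range (newer releases)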
2018 Jun 13
2
Samba 4.8 RODC not working
On Wed, 13 Jun 2018 09:46:03 +0200 (CEST)
Gaetan SLONGO <gslongo at it-optics.com> wrote:
> Hi,
>
> Here is the current process list. We can see the winbind and *mbd
> processes are missing:
>
>
>
> [root@dmzrodc ~]# netstat -plaunt | egrep "ntp|bind|named|samba|?mbd"
I wouldn't worry about 'winbind' not being in the output of the above
2018 Jun 13
0
Samba 4.8 RODC not working
If it's really urgent then I would really suggest investing in Samba a bit and paying them to get this working.
That's what SerNet can do for you: get commercial support.
I'm pretty much out of options, except to upgrade to 4.8 and try it again.
Greetz,
Louis
From: Gaetan SLONGO [mailto:gslongo at it-optics.com]
Sent: Wednesday, 13 June 2018 10:40
To: Rowland Penny; L.P.H. van Belle
2017 Jun 28
0
Change of listening ports - is it possible?
Hi All.
I am spawning Gluster clusters in Docker containers, and in the current
architecture we want to spawn a different set of containers for every volume,
possibly on the same group of servers. Ports used by
glusterd/glusterfsd/glusterfs are:
24007
24008
49152 and higher
There is a problem once I try to spawn a second gluster container on the
same server, since by default it tries to allocate the same
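A hedged alternative to remapping ports: because glusterd advertises the
actual brick ports to peers and clients, plain Docker port remapping tends to
break connections, so giving each container its own routable address (for
example via a macvlan network) is often cleaner. The network name, image, and
addresses below are purely illustrative:

docker network create -d macvlan --subnet=192.168.10.0/24 --gateway=192.168.10.1 -o parent=eth0 glusternet
docker run -d --name gluster-vol1 --network glusternet --ip 192.168.10.11 gluster/gluster-centos
docker run -d --name gluster-vol2 --network glusternet --ip 192.168.10.12 gluster/gluster-centos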
2003 Dec 10
0
dyn.load for c code
I am learning how to load C code into R-1.8.0 on Windows 98. To this end I wrote
a small C program, downloaded the tools, Perl, and MinGW from the "building
R for Windows" page, and proceeded to create libR.a & libRblas.a as explained
in the readme.packages. I started with a simple C program called mysum.c that can
be found on:
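For anyone following along, the usual compile-and-load cycle looks roughly
like the sketch below (shown with a modern R; the file names are hypothetical,
and the .C() call assumes a C function of the form
void mysum(double *x, int *n, double *sum)):

R CMD SHLIB mysum.c   # builds mysum.dll on Windows, mysum.so on unix-alikes
Rscript -e 'dyn.load("mysum.dll"); .C("mysum", as.double(1:10), as.integer(10), sum = double(1))$sum'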
2017 Oct 24
0
brick is down but gluster volume status says it's fine
On Tue, Oct 24, 2017 at 11:13 PM, Alastair Neil <ajneil.tech at gmail.com>
wrote:
> gluster version 3.10.6, replica 3 volume, daemon is present but does not
> appear to be functioning
>
> Peculiar behaviour: if I kill the glusterfs brick daemon and restart
> glusterd, then the brick becomes available - but one of my other volumes'
> bricks on the same server goes down in
2017 Oct 24
2
brick is down but gluster volume status says it's fine
gluster version 3.10.6, replica 3 volume, daemon is present but does not
appear to be functioning
Peculiar behaviour: if I kill the glusterfs brick daemon and restart
glusterd, then the brick becomes available - but one of my other volumes'
bricks on the same server goes down in the same way; it's like whack-a-mole.
any ideas?
[root@gluster-2 bricks]# glv status digitalcorpora
> Status
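To round this thread off, a hedged sketch of the usual way to diagnose and
revive a single dead brick on this volume without bouncing glusterd ("glv"
above looks like an alias for "gluster volume"):

gluster volume status digitalcorpora            # a dead brick shows N/A for its port and PID
ps ax | grep glusterfsd | grep digitalcorpora   # is the brick daemon actually running?
gluster volume start digitalcorpora force       # respawns only the missing brick processes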