similar to: tinc with ha firewall

Displaying 20 results from an estimated 3000 matches similar to: "tinc with ha firewall"

2016 Jan 27
0
HA firewall with tinc
Hi Saverio, I found a conflict:
172.16.1.10 00:50:56:1b:ba:5e VMware, Inc.
172.16.1.10 00:50:56:2b:12:e6 VMware, Inc. (DUP: 2)
172.16.1.10 00:50:56:2b:12:e6 VMware, Inc. (DUP: 3)
172.16.1.10 00:50:56:2b:12:e6 VMware, Inc. (DUP: 4)
172.16.1.10 00:50:56:2b:12:e6 VMware, Inc. (DUP: 5)
So my assumptions were wrong! :D Probably Virtual
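Output in that format typically comes from a LAN sweep with arp-scan; a minimal sketch of such a check (the tool, interface name and subnet are assumptions, they are not named in the excerpt):

    # scan the local subnet; the same IP answering from more than one MAC
    # is reported with a "(DUP: n)" marker
    arp-scan --interface=eth0 172.16.0.0/19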
2016 Jan 27
6
HA firewall with tinc
I have 2 firewalls in HA with keepalived. Can I keep the same tinc configuration active on both firewalls? Is using a tun interface with the same IP on both nodes a problem? Does the tun device advertise itself on the network with an IP/MAC pair (ARP), or is the IP only used internally by the system for routing, so that using the same configuration is fine? One firewall would be active, the other passive. With this
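A minimal keepalived sketch of the active/passive pair described above (interface name, router id and priorities are assumptions, not taken from the thread; firewall2 would use state BACKUP and a lower priority):

    # /etc/keepalived/keepalived.conf on firewall1
    vrrp_instance VI_FW {
        state MASTER
        interface eth0
        virtual_router_id 51
        priority 100
        advert_int 1
        virtual_ipaddress {
            172.16.1.10/19
        }
    }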
2016 Jan 27
0
HA firewall with tinc
I think it should work, at least for a TUN virtual interface, since TUN works at the IP level. This is a sample configuration:
firewall1 lan = 172.16.1.11/19 (ALWAYS ACTIVE) - "Physical Network Interface" - system config as ifcfg-...
172.16.1.10/19 (VIP, made active by Keepalived) - Active/Passive configuration with firewall2
firewall1 vpndr1
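Rendered as commands, the addressing above would look roughly like this on firewall1 (a sketch; the interface name is an assumption):

    # permanent address on the physical LAN interface
    ip addr add 172.16.1.11/19 dev eth0
    # 172.16.1.10/19 is the VIP: it is added/removed by keepalived on failover,
    # so it is not configured statically here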
2016 Jan 27
0
HA firewall with tinc
This is what I want to avoid :D I want the tinc virtual interface active with an IP identical to the one on the other firewall, without an IP conflict on the same network. Do you know whether a tun-type virtual interface on one host can have the same IP address as another host on the same network without an IP conflict? i.e. whether a tun virtual interface can be active without transmitting on the real network? Or whether such a
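For reference, a tun device is a layer-3 interface with no MAC address of its own; whether the host still answers ARP for that address on the LAN side depends on the arp_ignore sysctl. A sketch (device name assumed):

    ip tuntap add dev tun0 mode tun
    ip addr add 172.16.1.10/32 dev tun0
    ip link show tun0        # reports "link/none": the tun device itself has no MAC
    # Note: by default Linux may still reply to ARP for any locally configured
    # address on other interfaces; restricting replies to the owning interface:
    sysctl -w net.ipv4.conf.all.arp_ignore=1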
2016 Jan 27
0
HA firewall with tinc
This is a VPN for Disaster Recovery sites, so strictly speaking a seamless failover is not necessary. Encryption, however, is mandatory. In testing we found that on a Keepalived failover the remote tinc takes a few seconds to reset the connection and correctly re-connect to the new active firewall (probably the new firewall resetting the connection + PingTimeout + some seconds to reconnect). This is
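The reconnect delay mentioned above is largely governed by tinc's keepalive settings; a hedged example of tightening them in tinc.conf (the values are illustrative, not taken from the thread):

    # /etc/tinc/vpndr/tinc.conf on the remote node
    PingInterval = 10   # seconds between keepalive pings (default 60)
    PingTimeout = 5     # seconds to wait for a reply before declaring the peer dead (default 5)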
2016 Jan 22
0
tinc with ha firewall
Ok, I think syncing the 2 firewalls with keepalived active/passive HA is the best solution, too. I'll try this solution to see whether failover/failback and tinc communications all go smoothly. Thank you Guus. Best regards Roberto -----Original Message----- From: tinc [mailto:tinc-bounces at tinc-vpn.org] On Behalf Of Guus Sliepen Sent: Friday, 22 January 2016 10:24 To: tinc at
2020 Apr 08
2
alternatives for imapproxy
Hi. System: debian 8.11 and dovecot-2.2.36.4. My webmail is roundcube with imapproxy. I have one problem. My dovecot servers are in a cluster with keepalived, like: dovecot1----VIP-IP--------dovecot2. All works fine. I have a problem with imapproxy when server dovecot1 had a problem (kernel panic, sic!). Keepalived worked perfectly and moved the VIP to dovecot2 - all works fine for normal users but
2019 Jan 07
1
doveadm + HA
Hi, I have two director servers in a ring and 5 dovecot servers (2.2.36). The IP for IMAP and POP3 is a VIP (keepalived). What is the best solution to get real HA for the 5 dovecot servers? Maybe corosync+pacemaker? But that solution is too problematic and hardcore. Why do I need HA? doveadm director is too lazy and does not know that one machine broke down and still sends traffic
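One way the director's view of a dead backend can be corrected by hand (or from a monitoring script) is with doveadm; a sketch, where the backend address is a placeholder:

    # see which backends the director ring knows about
    doveadm director status
    # drop a failed backend so new sessions stop being assigned to it
    doveadm director remove 10.0.0.15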
2019 Apr 11
8
High availability of Dovecot
Hi, list, I'm going to deploy postfix + dovecot + CephFS (as mail storage). Basically I want to use two servers for them, which is a kind of HA. My idea is to use keepalived or Pacemaker to host a VIP, which could fail over to the other server once one is down. And I'll use HAProxy or Nginx to schedule connections to one of those servers based on source IP (session stickiness),
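Source-IP stickiness in HAProxy is usually done with "balance source"; a minimal sketch of such a setup (names and addresses are placeholders, not from the post):

    # haproxy.cfg (fragment)
    frontend imap_in
        mode tcp
        bind 192.0.2.10:143
        default_backend dovecot_pool
    backend dovecot_pool
        mode tcp
        balance source          # hash on client IP, so a client keeps hitting the same server
        server mail1 10.0.0.11:143 check
        server mail2 10.0.0.12:143 check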
2016 Jan 22
1
Error starting tinc
I get this error starting tincd:
tincd -n vpndr -d5 -D
tincd 1.0.26 (Jan 22 2016 19:28:17) starting, debug level 5
/dev/net/tun is a Linux tun/tap device (tun mode)
Executing script tinc-up
System call `getaddrinfo' failed: Name or service not known
Terminating
Also keepalived returns an error when tincd starts. When starting as a daemon, journalctl shows this: Jan 22 23:14:49 systemd[1]:
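A getaddrinfo failure like that usually means a hostname used by the tinc configuration (for example in BindToAddress or a host's Address line) could not be resolved at startup; the actual offending option is not visible in the excerpt. A hedged sketch of a configuration that avoids name lookups by using literal addresses (names and IPs are placeholders):

    # /etc/tinc/vpndr/tinc.conf
    Name = firewall1
    BindToAddress = 172.16.1.10    # literal IP, not a name that may fail to resolve at boot
    ConnectTo = hqsite

    # /etc/tinc/vpndr/hosts/hqsite would then contain e.g.
    # Address = 203.0.113.5        # again a literal IP rather than an unresolvable name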
2014 Sep 17
2
Active/Passive Samba Cluster for Shared NFS Backend
Hello, I am working on setting up an Active/Passive Samba cluster on Ubuntu 14.04 using Samba 4.1.6. Samba will be sharing an NFS mount so that it can be accessible to CIFS clients. Thus, the server setup is as follows:
              -- cifs0 --
             /           \
            /             \
  NFS_Server               VIP --- CIFS clients
            \             /
             \           /
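A minimal sketch of the share side of such a setup, re-exporting an NFS mount over CIFS (the share name and mount path are assumptions):

    # /etc/samba/smb.conf (fragment)
    [shared]
        path = /mnt/nfs/export     ; the NFS mount from NFS_Server
        read only = no
        browseable = yes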
2015 Sep 29
1
Keepalived vrrp problem
On 29-09-2015 15:03, Gordon Messmer wrote: > On 09/29/2015 09:14 AM, Tim Dunphy wrote: >> And if I do an ifconfig command I see no evidence of an eth1 existing. > > "ifconfig -a" will show you all of your interfaces. Maybe there is some confusion here. It sounds like Tim thought keepalived would create that eth1, like a tunnel interface, but it won't. You have to
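A quick way to confirm which interfaces actually exist on the box, including ones that are down (a sketch):

    ifconfig -a        # lists all interfaces, not only those that are up
    ip link show       # same information with iproute2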
2016 Feb 08
0
tinc ha
I need a second tinc vpn server (a physical machine) to be up and running, so that if the first tinc vpn server (a virtual machine) goes down we can still connect to the remote site to do management (so the remote site is not isolated). If I use 2 different tinc vpn servers at the remote sites, both connected to the primary tinc site (HQ Site), can I have a robust solution using some route prioritization with ip route? so I
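Route prioritization of that kind can be expressed with route metrics; a sketch assuming two tinc interfaces and an illustrative remote prefix (none of these names come from the thread):

    # prefer the route via the primary tinc server; the lower metric wins
    # while its interface is up, the backup route takes over if it disappears
    ip route add 10.10.0.0/16 dev vpn_primary   metric 50
    ip route add 10.10.0.0/16 dev vpn_secondary metric 100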
2016 Jan 22
1
Tun/Tap IP Configuration
I read this part of the tinc documentation: "For Branch A, BranchA would be configured like this:
In /etc/tinc/company/tinc-up:
# Real interface of internal network:
# ifconfig eth0 10.1.54.1 netmask 255.255.0.0
ifconfig $INTERFACE 10.1.54.1 netmask 255.0.0.0
...
Note that the IP addresses of eth0 and tap0 are the same. This is quite possible, if you make sure that the netmasks of the
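The same Branch A example expressed with iproute2, to make the netmask trick explicit (a sketch of the documented addressing, not of this poster's setup):

    ip addr add 10.1.54.1/16 dev eth0          # real LAN: 10.1.0.0/16 stays local
    ip addr add 10.1.54.1/8  dev $INTERFACE    # tinc: the wider 10.0.0.0/8 is routed via the VPN
    # the more specific /16 route wins for local traffic, so only the other
    # branches (10.x addresses outside 10.1.0.0/16) are sent through tinc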
2015 Sep 29
3
Keepalived vrrp problem
Hey guys, I'm trying to install keepalived 1.2.19 on a CentOS 6.5 machine. I did an install from source, and when I start keepalived this is what I'm seeing in the logs. It's reporting that VRRP_Instance(VI_1) is now in FAULT state. Here's more of that log entry:
Sep 29 12:06:58 USECLSNDMNRDBA Keepalived_vrrp[44943]: VRRP Instance = VI_1
Sep 29 12:06:58 USECLSNDMNRDBA
2013 Dec 17
1
Project pre planning
Hello GlusterFS users, can anybody please give me their opinion on the following facts and questions: 4 storage servers with 16 SATA bays each, connected by GigE. Q1: The volume will be set up as distributed-replicated. Maildir, FTP dir, htdocs, file store directory => as sub-dirs in one big Gluster volume, or each dir in its own Gluster volume? Q2: Set up the bricks as a collection of
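For Q1, a distributed-replicated volume across the 4 servers would be created roughly like this (volume name, server names and brick paths are assumptions):

    # four bricks with replica 2 => two replica pairs, distributed across them
    gluster volume create mailvol replica 2 \
        server1:/bricks/b1 server2:/bricks/b1 \
        server3:/bricks/b1 server4:/bricks/b1
    gluster volume start mailvol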
2018 Feb 26
1
NFS Ganesha HA w/ GlusterFS
On 02/25/2018 08:29 PM, TomK wrote: > Hey Guys, > > A success story instead of a question. > > With your help, I managed to get the HA component working with HAPROXY and > keepalived to build a fairly resilient NFS v4 VM cluster. (Used > Gluster, NFS Ganesha v2.60, HAPROXY, keepalived w/ selinux enabled) > > If someone needs it or it could help your work, please PM
2018 Feb 26
0
NFS Ganesha HA w/ GlusterFS
Hey Guys, A success story instead of a question. With your help, I managed to get the HA component working with HAPROXY and keepalived to build a fairly resilient NFS v4 VM cluster. (Used Gluster, NFS Ganesha v2.60, HAPROXY, keepalived w/ selinux enabled.) If someone needs it or it could help your work, please PM me for the written-up post, or I could just post it here if the list allows it.
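The HAProxy piece of such a setup is typically a plain TCP listener for NFSv4 in front of the Ganesha nodes, with the bind address being the keepalived VIP; a hedged sketch (addresses and names are placeholders, not taken from the post):

    # haproxy.cfg (fragment), 192.0.2.20 held by keepalived
    listen nfs4
        bind 192.0.2.20:2049
        mode tcp
        balance roundrobin
        server ganesha1 10.0.0.21:2049 check
        server ganesha2 10.0.0.22:2049 check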
2018 Feb 26
1
NFS Ganesha HA w/ GlusterFS
I would like to see the steps for reference; can you provide a link or just post them on the mailing list? On Mon, Feb 26, 2018 at 4:29 AM, TomK <tomkcpr at mdevsys.com> wrote: > Hey Guys, > > A success story instead of a question. > > With your help, I managed to get the HA component working with HAPROXY and > keepalived to build a fairly resilient NFS v4 VM cluster. (Used
2015 Aug 16
2
wordpress can't connect to DB but mediawiki can
> > You were doing this (looking at the mysql.db table) on your > "db.example.com" machine, correct? db.example.com is a load-balanced VIP. The VIP is being handled by keepalived and HAProxy. There are two DBs set up in master/master replication. The two databases and two load balancers are on AWS. The web server and varnish servers are on DigitalOcean. I set up a grant
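When connections arrive through a keepalived/HAProxy VIP in TCP mode, MySQL sees the proxy's address as the client host, so the grant has to match that; a hedged sketch (user, database, password and subnet are placeholders):

    # run on both masters (or let replication copy it): allow the app user
    # from the address range the load balancers connect from
    mysql -u root -p -e "GRANT ALL PRIVILEGES ON wordpress.* TO 'wp_user'@'10.0.0.%' IDENTIFIED BY 'secret'; FLUSH PRIVILEGES;"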