Displaying 20 results from an estimated 3000 matches similar to: "tinc 1.1pre17 on fedora 30"

2019 May 02
4
Aw: Re: very high traffic without any load
2019 Aug 26
0
tinc 1.1pre17 on fedora 30
On Sun, Aug 25, 2019 at 02:41:03PM +0200, Christopher Klinge wrote: > I am trying to run tinc version 1.1pre17 on fedora 30 hosts and I am running > into a problem. Building and starting tinc works just fine. [...] > However, the hosts cannot connect to each other. When checking the logs, the > following appears over and over again, for any combination of hosts: > > Error
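
When tinc fails with truncated errors like the one above, raising the log level usually reveals the underlying cause; a minimal sketch, with "netname" as a placeholder:

    # Run the daemon in the foreground with verbose logging; level 5
    # includes the metadata and authentication exchanges (-D = no detach).
    tincd -n netname -d5 -D

    # With the tinc 1.1 CLI, a running daemon's log can also be followed:
    tinc -n netname log 5
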
2019 May 03
3
Aw: Re: very high traffic without any load
2019 May 01
4
very high traffic without any load
Hi everyone, I am new to using tinc and currently trying to set up a full IPv6 mesh between 4 servers of mine. Setting it up went smoothly and all of the tinc clients do connect properly. Routing through the network works fine as well. There is, however, a large amount of management traffic, which I assume should not be the case. Here is a quick snapshot using "tinc -n netname top"
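
Besides "top", the tinc 1.1 CLI offers dump commands that help judge whether such traffic is metadata or tunneled payload; a quick sketch, again with "netname" as a placeholder:

    # Per-node traffic counters, updated interactively:
    tinc -n netname top

    # The nodes and meta-connections the daemon currently knows about:
    tinc -n netname dump nodes
    tinc -n netname dump edges
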
2019 May 06
4
very high traffic without any load
Lars, interesting - do you have an example of what that might look like in the config file? Thanks! On Sun, May 5, 2019 at 6:00 PM Lars Kruse <lists at sumpfralle.de> wrote: > Hello Christoph, > > I am glad that you discovered the source of the problem! > > > On Sat, 4 May 2019 08:30:28 +0200, > "Christopher Klinge" <Christ.Klinge at web.de> wrote:
2017 Jul 20
2
[ovirt-users] ovirt 4.1 hosted engine hyperconverged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements
Hi, Thank you for the answer and sorry for the delay: 2017-07-19 16:55 GMT+02:00 Ravishankar N <ravishankar at redhat.com>: 1. What does the glustershd.log say on all 3 nodes when you run the > command? Does it complain about these files? > No, glustershd.log is clean, no extra log entries after the command on all 3 nodes > 2. Are these 12 files also present in the 3rd data brick?
2017 Jul 20
3
[ovirt-users] ovirt 4.1 hosted engine hyperconverged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements
2017-07-20 11:34 GMT+02:00 Ravishankar N <ravishankar at redhat.com>: > > Could you check if the self-heal daemon on all nodes is connected to the 3 > bricks? You will need to check the glustershd.log for that. > If it is not connected, try restarting the shd using `gluster volume start > engine force`, then launch the heal command like you did earlier and see if > heals
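
For reference, a rough sketch of the checks suggested above, using the "engine" volume name from the thread:

    # Each node's Self-heal Daemon should be listed with Online "Y":
    gluster volume status engine

    # Restart the shd without touching the bricks, then re-trigger heals
    # and list any entries still pending:
    gluster volume start engine force
    gluster volume heal engine
    gluster volume heal engine info
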
2019 May 04
0
very high traffic without any load
Hello Christopher, On Fri, 3 May 2019 20:06:54 +0200, "Christopher Klinge" <Christ.Klinge at web.de> wrote: > I did some digging, and thus far I could not find any culprit other > than tinc itself. The packets that are being sent are addressed directly to > the other tinc hosts on their vpn addresses. During my latest tests, within > about 12 seconds 100MB of
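
One way to confirm where such traffic originates is to capture on the VPN interface and on the underlying interface separately; a sketch with placeholder interface names (655 is tinc's default port):

    # On the tinc interface: payload the kernel routes into the tunnel.
    tcpdump -ni tun0 -c 100

    # On the physical interface: tinc's own encrypted UDP/TCP traffic.
    tcpdump -ni eth0 -c 100 port 655
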
2017 Jul 20
0
[ovirt-users] ovirt 4.1 hosted engine hyperconverged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements
On 07/20/2017 02:20 PM, yayo (j) wrote: > Hi, > > Thank you for the answer and sorry for the delay: > > 2017-07-19 16:55 GMT+02:00 Ravishankar N <ravishankar at redhat.com>: > > 1. What does the glustershd.log say on all 3 nodes when you run > the command? Does it complain about these files?
2019 May 02
2
Aw: Re: Re: very high traffic without any load
2017 Jul 19
0
[ovirt-users] ovirt 4.1 hosted engine hyperconverged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements
On 07/19/2017 08:02 PM, Sahina Bose wrote: > [Adding gluster-users] > > On Wed, Jul 19, 2017 at 2:52 PM, yayo (j) <jaganz at gmail.com> wrote: > > Hi all, > > We have an ovirt cluster, hyperconverged with hosted engine on 3 fully > replicated nodes. This cluster has 2 gluster volumes: > > - data: volume for
2017 Jul 19
2
[ovirt-users] ovirt 4.1 hosted engine hyperconverged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements
[Adding gluster-users] On Wed, Jul 19, 2017 at 2:52 PM, yayo (j) <jaganz at gmail.com> wrote: > Hi all, > > We have an ovirt cluster, hyperconverged with hosted engine on 3 fully > replicated nodes. This cluster has 2 gluster volumes: > > - data: volume for the Data (Master) Domain (for VMs) > - engine: volume for the hosted_storage Domain (for the hosted engine)
2019 May 01
0
very high traffic without any load
Sounds like your tinc daemons are talking to each other over and over. Can we get some info from tinc.conf and the tinc host files? We need to see some IP address configs for tinc. On Wed, May 1, 2019, 10:46 AM Christopher Klinge <Christ.Klinge at web.de> wrote: > Hi everyone, > > I am new to using tinc and currently trying to set up a full IPv6 mesh > between 4 servers of mine. Setting it
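
For reference, a minimal sketch of what those files typically look like; all names, addresses and the netname are placeholders, not taken from the thread:

    # /etc/tinc/netname/tinc.conf on node_a
    Name = node_a
    Mode = router
    ConnectTo = node_b
    ConnectTo = node_c

    # /etc/tinc/netname/hosts/node_b (public key block omitted)
    Address = node-b.example.org
    Subnet = fd00:db8::2/128
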
2000 Jun 14
2
TCP connection forwarding troubles
For some time I have routinely websurfed across a forwarded TCP connection using SSH. The other end of the TCP tunnel connects to a Squid proxy cache on the same machine. This usually works, but I see lots of error messages on each end, of the form: Jun 13 13:22:02 tunnel sshd[32378]: error: channel 0: chan_shutdown_read: shutdown() failed for fd5 [i1 o128]: Transport endpoint is not connected
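
For context, that kind of forwarding is typically set up as below; 3128 is Squid's default port and the hostname is a placeholder:

    # Forward local port 3128 over SSH to the Squid proxy on the far end,
    # then point the browser's proxy setting at localhost:3128.
    ssh -L 3128:localhost:3128 user@tunnel.example.org
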
2017 Jul 20
0
[ovirt-users] ovirt 4.1 hosted engine hyperconverged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements
On 07/20/2017 03:42 PM, yayo (j) wrote: > > 2017-07-20 11:34 GMT+02:00 Ravishankar N <ravishankar at redhat.com>: > > > Could you check if the self-heal daemon on all nodes is connected > to the 3 bricks? You will need to check the glustershd.log for that. > If it is not connected, try restarting the shd using
2019 May 04
2
Aw: Re: very high traffic without any load
2011 Sep 09
1
Slow performance - 4 hosts, 10 gigabit ethernet, Gluster 3.2.3
Hi everyone, I am seeing slower-than-expected performance in Gluster 3.2.3 between 4 hosts, all connected by 10 gigabit Ethernet. Each host has 4x 300GB SAS 15K drives in RAID10, a 6-core Xeon E5645 @ 2.40GHz and 24GB RAM, running Ubuntu 10.04 64-bit (I have also tested with Scientific Linux 6.1 and Debian Squeeze - same results on those as well). All of the hosts mount the volume using the FUSE
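
When chasing numbers like these, it helps to measure the network and the bricks separately from the FUSE mount; a rough sketch with placeholder hosts and paths:

    # Raw network throughput between two of the hosts (iperf2 syntax):
    iperf -s              # on host1
    iperf -c host1        # on host2

    # Local brick write speed, bypassing gluster entirely:
    dd if=/dev/zero of=/brick/testfile bs=1M count=4096 oflag=direct

    # The same write through the FUSE mount, for comparison:
    dd if=/dev/zero of=/mnt/gluster/testfile bs=1M count=4096
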
2019 May 06
1
very high traffic without any load
Hello Christopher, On Mon, 6 May 2019 21:57:09 +0200, "Christopher Klinge" <Christ.Klinge at web.de> wrote: > shouldn't these two rules work as well? > > ip route add <remote public ipv6>/64 via 1111:1::1 > ip route add <remote public ipv6>/0 dev <own internet interface> > > According to my knowledge thus far, Linux should pick routes
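
On the route-selection question: Linux does prefer the most specific matching prefix, so a /64 wins over the /0 default; which route actually matches a given destination can be checked directly (2001:db8:: is a placeholder documentation address):

    # Show the route the kernel would pick for one destination:
    ip -6 route get 2001:db8::1

    # List all configured IPv6 routes with their prefix lengths:
    ip -6 route show
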
2017 Jul 21
2
[ovirt-users] ovirt 4.1 hosted engine hyperconverged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements
2017-07-20 14:48 GMT+02:00 Ravishankar N <ravishankar at redhat.com>: > > But it does say something. All these gfids of completed heals in the log > below are for the ones that you have given the getfattr output of. So > what is likely happening is that there is an intermittent connection problem > between your mount and the brick process, leading to pending heals again >
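
The getfattr output referred to is normally collected on each brick like this (the path is a placeholder); the trusted.afr.* attributes encode the pending-heal counts the shd works through:

    getfattr -d -m . -e hex /gluster/engine/brick/path/to/file
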
2020 Jan 09
2
Upgrade 2.2.27 to 2.3.9.2: master(imap): net_connect_unix(imap) failed: Resource temporarily unavailable
As a workaround for the titular issue, I have tried enabling the "imap-hibernate" service on a couple of servers. Since ~50-60% of clients are in IDLE at any one time, this cuts the number of running imap processes to less than half of what it was. Since then I have yet to see the "net_connect_unix(imap)
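
For reference, hibernation is enabled by giving idle sessions a timeout after which they are handed over to the lightweight imap-hibernate process; a minimal sketch for Dovecot 2.3 (the 30s value is an assumption, not from the thread):

    # conf.d/20-imap.conf
    imap_hibernate_timeout = 30s
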