Displaying 20 results from an estimated 2000 matches similar to: "Problems with packets being dropped between nodes in the VPN"
2014 Nov 12
2
Connection failing between 2 nodes with dropped packets error
Hi,
I'm sometimes seeing a connection failure between 2 nodes when Tinc is started
and configured on a LAN. The logs show unexpected dropped packets with very
high or negative sequence numbers. I can reproduce this issue ~2% of the time.
When this happens, the 2 nodes can no longer ping or ssh each other through
the tunnel interface, but eth0 works fine. The connection can recover
after at
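To make the symptom concrete, here is a minimal sketch of a sliding-window
sequence check, on the assumption that tinc's data channel does something
similar; this is illustrative Python, not tinc's actual code. Once the
counters desynchronize, incoming seqnos look far ahead of the window (or
wrap negative if the field is read as signed), and later legitimate packets
get dropped:

REPLAY_WINDOW = 32  # accept packets at most this far behind the latest

class SeqnoValidator:
    def __init__(self):
        self.highest_seen = -1

    def accept(self, seqno):
        # New highest seqno: always accepted, the window slides forward.
        if seqno > self.highest_seen:
            self.highest_seen = seqno
            return True
        # Behind the window: a replay or a badly desynchronized peer.
        return seqno > self.highest_seen - REPLAY_WINDOW

v = SeqnoValidator()
print(v.accept(1))        # True: normal traffic
print(v.accept(2**31))    # True, but the window jumps far ahead...
print(v.accept(3))        # False: later legitimate packets are dropped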
2015 Aug 19
0
Seeing: "Got REQ_KEY from XXX while we already started a SPTPS session!"
I'm running tinc 1.1pre11 with AutoConnect set to 'yes'. I recently
started seeing lots of these messages on my VPN, and I cannot connect to
various hosts from other hosts:
(I have obscured the hostnames and vpn name, but otherwise this is a direct
paste from syslog)
Aug 19 14:51:51 AAA tinc.nnn[2217]: Got REQ_KEY from XXX while we already
started a SPTPS session!
Aug 19 14:51:54 AAA
2008 Feb 22
0
lustre error
Dear All,
Yesterday evening our cluster stopped.
Two of our nodes tried to take the resource from each other; as far as I
could tell, neither saw the other side.
I stopped heartbeat and the resources, started them again, and everything
came back online and worked fine.
This morning I saw this in logs:
Feb 22 03:25:07 node4 kernel: Lustre:
7:0:(linux-debug.c:98:libcfs_run_upcall()) Invoked LNET upcall
2018 May 10
0
Tinc 1.1pre15 double-crash
Hello,
this morning I apparently had tinc crash on me.
In 2 independent tinc clusters of 3 nodes each (located in the same datacenter), one tinc process crashed in each cluster.
One process apparently with `status=6/ABRT`, the other with `status=11/SEGV`.
Interestingly, they crashed with only 5 minutes difference.
The only thing I can come up with that might explain this correlation
2014 Jul 16
2
Some questions about SPTPS
I've been using SPTPS (a.k.a. ExperimentalProtocol) for a while now, but
I've only recently started looking into the details of the protocol
itself. I have some questions about the design:
- I am not sure what the threat model for SPTPS is compared with
the legacy protocol. SPTPS is vastly more complex than the legacy
protocol (it adds a whole new handshake mechanism), and
2015 May 16
0
"Invalid KEX record length" during SPTPS key regeneration and related issues
On Sat, May 16, 2015 at 04:53:33PM +0100, Etienne Dechamps wrote:
> I believe there is a design flaw in the way SPTPS key regeneration
> works, because upon reception of the KEX message the other nodes will
> send both KEX and SIG messages at the same time. However, the node
> expects SIG to arrive after KEX. Therefore, there is an implicit
> assumption that messages won't
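A minimal sketch of the ordering assumption described above: a receiver that
hard-codes KEX-then-SIG fails as soon as the transport delivers the peer's
simultaneous KEX and SIG out of order. Hypothetical Python, not tinc's SPTPS
code:

class HandshakeError(Exception):
    pass

class StrictHandshake:
    def __init__(self):
        self.state = "WAIT_KEX"

    def receive(self, record):
        # Hard-coded ordering: KEX must arrive before SIG.
        if self.state == "WAIT_KEX" and record == "KEX":
            self.state = "WAIT_SIG"
        elif self.state == "WAIT_SIG" and record == "SIG":
            self.state = "ESTABLISHED"
        else:
            raise HandshakeError("got %s in state %s" % (record, self.state))

ok = StrictHandshake()
ok.receive("KEX")
ok.receive("SIG")          # in-order delivery: fine

bad = StrictHandshake()
try:
    bad.receive("SIG")     # reordered delivery
except HandshakeError as e:
    print("handshake failed:", e)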
2010 Feb 01
0
[LLVMdev] Crash in PBQP register allocator
On Sun, 2010-01-31 at 13:28 +1100, Lang Hames wrote:
> Hi Sebastian,
>
> It boils down to this: The previous heuristic solver could return
> infinite cost solutions in some rare cases (despite finite-cost
> solutions existing). The new solver is still heuristic, but it should
> always return a finite cost solution if one exists. It does this by
> avoiding early reduction of
2005 Dec 09
0
RE: nodebytes and leafwords
Hi Kuhlen,
What you said is correct. I am talking about how
you are going to arrange these codewords into an
array, i.e. in the function _make_decode_table.
There, node bytes and leaf words are used for memory
management. I have a 24-bit platform. So if I assume
that the maximum possible codeword length is
24 bits, can I allocate memory of (2 * used entries - 2)
to arrange the whole tree in
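The (2 * used entries - 2) figure matches full-binary-tree arithmetic: a
codeword tree with n leaves has n - 1 internal nodes, i.e. 2n - 1 nodes in
total, or 2n - 2 slots if the root needs no entry. A quick illustrative check
in Python (not the Vorbis code):

import heapq

def huffman_node_count(weights):
    # Merge the two lightest subtrees until one tree remains,
    # counting the internal nodes created along the way.
    heap = list(weights)
    heapq.heapify(heap)
    internal = 0
    while len(heap) > 1:
        a = heapq.heappop(heap)
        b = heapq.heappop(heap)
        heapq.heappush(heap, a + b)
        internal += 1
    return len(weights) + internal   # leaves + internal nodes

n = 7
total = huffman_node_count([1, 2, 3, 4, 5, 6, 7])
assert total == 2 * n - 1            # 13 nodes in total
print(total, 2 * n - 2)              # 12 slots if the root is implicit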
2010 Jan 31
2
[LLVMdev] Crash in PBQP register allocator
Hi Sebastian,
It boils down to this: The previous heuristic solver could return
infinite cost solutions in some rare cases (despite finite-cost
solutions existing). The new solver is still heuristic, but it should
always return a finite cost solution if one exists. It does this by
avoiding early reduction of infinite spill cost nodes via R1 or R2.
To illustrate why the early reductions can be a
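For context, a simplified sketch of the degree-one (R1) reduction with
infinite costs, under one reading of the description above; option costs are
vectors, edge costs are matrices, and the eliminated node's best compatible
choice is folded into its neighbour, so infinite entries propagate rather
than disappear. All values here are made up for illustration:

INF = float("inf")

def r1_reduce(cu, cv, C):
    # Fold degree-1 node u into neighbour v:
    #   cv'[j] = cv[j] + min_i (cu[i] + C[i][j])
    return [cv[j] + min(cu[i] + C[i][j] for i in range(len(cu)))
            for j in range(len(cv))]

cu = [INF, 4.0]          # option 0 of u is forbidden (infinite cost)
cv = [1.0, 2.0]
C = [[0.0, INF],         # u=0 conflicts with v=1
     [INF, 0.0]]         # u=1 conflicts with v=0

print(r1_reduce(cu, cv, C))   # [inf, 6.0]: only (u=1, v=1) stays finite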
2012 Feb 22
3
Error 400 on SERVER: Cannot append, variable node_data is defined in this scope at
Hi,
I have a problem that I can't get resolved. I have a hash like the one at
www.krzywanski.net/archives/703.
With this hash I would like to add some extra hashes before passing it to
the module; I have tried the code below.
node testnode {
  class { 'testclass':
    nodes_data => {
      'node1' => { 'server' =>
2015 May 16
2
"Invalid KEX record length" during SPTPS key regeneration and related issues
Hi,
I'm currently trying to troubleshoot what appears to be a very subtle
bug (most likely a race condition) in SPTPS that causes state to
become corrupted during SPTPS key regeneration.
The tinc version currently deployed to my production nodes is git
7ac5263, which is somewhat old (2014-09-06), but I think this is still
relevant because the affected code paths haven't really changed
2012 Sep 29
1
quota severe performace issue help
Dear gluster experts,
We have encountered a severe performance issue related to the quota feature of
gluster.
My underlying fs is LVM with XFS format.
The problem is that with quota enabled the IO performance is about 26MB/s, but
with quota disabled it is 216MB/s.
Does anyone know what the problem is? BTW I have reproduced it several times
and it is indeed related to quota.
Here's the
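A generic way to make such a comparison reproducible is a fixed-size
sequential-write test run once with quota enabled and once with it disabled.
The sketch below is not gluster-specific, and the path and size are
placeholders:

import os
import time

PATH = "/mnt/glustervol/throughput_test.bin"   # placeholder path
SIZE_MB = 512                                  # placeholder size
block = b"\0" * (1024 * 1024)

start = time.time()
with open(PATH, "wb") as f:
    for _ in range(SIZE_MB):
        f.write(block)
    f.flush()
    os.fsync(f.fileno())                       # include flush-to-disk time
elapsed = time.time() - start
print("%.1f MB/s" % (SIZE_MB / elapsed))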
2018 May 14
0
Node to Node UDP Tunnels HOWTO?
Here are a few facts that should make things clearer.
Regarding keys:
- The key used for the metaconnections (routing protocol over TCP) - i.e.
the one you configure in your host files - is NOT the same as the key used
for UDP data tunnels.
- The key for data tunnels is negotiated over the metaconnections, by
sending REQ_KEY and ANS_KEY messages over the metagraph (i.e. the graph of
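A toy model of what "negotiated over the metaconnections" means: REQ_KEY
travels hop by hop along the metagraph to the destination, which answers with
ANS_KEY, and only then does the direct UDP data tunnel come up. The graph,
names, and routing below are hypothetical, not tinc's code:

from collections import deque

metagraph = {"A": ["B"], "B": ["A", "C"], "C": ["B"]}   # adjacency list

def meta_path(src, dst):
    # BFS over metaconnections: the route REQ_KEY/ANS_KEY would take.
    prev, queue = {src: None}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            path = []
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for nxt in metagraph[node]:
            if nxt not in prev:
                prev[nxt] = node
                queue.append(nxt)
    return None

# A and C share no direct metaconnection, yet can negotiate a key
# through B and then exchange UDP data directly.
print(meta_path("A", "C"))   # ['A', 'B', 'C']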
2016 May 18
0
Upgrade to 1.1pre14
Hello,
After upgrading to 1.1pre14 and enabling ExperimentalProtocol,
I receive a lot of messages like these:
Received short packet from nodename (ip port 655)
Handshake phase not finished yet from nodename (ip port 21785)
Got REQ_KEY from node while we already started a SPTPS session!
Invalid packet seqno: 0 != 1 from node (ip port 21785)
Failed to verify SIG record from node (ip port 21785)
No
2012 Aug 24
1
RJSONIO/rjson maximum depth?
Hi All,
has anyone run into a maximum depth for nested JSON arrays in either rjson or
RJSONIO?
I seem to be able to get up to 10 levels of depth without problems, but
crossing over to 11 either causes an error or fails to load the nodes
properly.
with RJSONIO I tried:
a = fromJSON('data/myJSON.json', depth=1000)
but I still get this error:
Error in fromJSON(content, handler, default.size,
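One way to pin down the limit is to generate fixtures of known nesting depth
and bisect where parsing starts to fail. A small generator sketch (Python
here, just to produce the test files; the file names are examples):

import json

def nested_array(depth):
    node = "leaf"
    for _ in range(depth):
        node = [node]       # wrap one more array level
    return node

for depth in (10, 11, 1000):
    with open("nested_%d.json" % depth, "w") as f:
        json.dump(nested_array(depth), f)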
2017 Aug 24
1
using both ConnectTo and AutoConnect to avoid network partitions
Thanks Guus
I have one more question.
- We see several log messages that we don't currently understand. Can you
comment on what they mean and whether they are concerning? I've obfuscated IPs
and node names, so please ignore those. Our tinc daemon command is: tincd -n
<vpn name>
-- Received short packet
-- Got REQ_KEY from node003 while we already started a SPTPS session!
-- Invalid
2008 Jul 14
1
Node fence on RHEL4 machine running 1.2.8-2
Hello,
We have a four-node RHEL4 RAC cluster running OCFS2 version 1.2.8-2 and
the 2.6.9-67.0.4hugemem kernel. The cluster has been really stable since
we upgraded to 1.2.8-2 early this year, but this morning, one of the
nodes fenced and rebooted itself, and I wonder if anyone could glance at
the below remote syslogs and offer an opinion as to why.
First, here's the output of
2018 Mar 13
0
trashcan on dist. repl. volume with geo-replication
Hi Dietmar,
I am trying to understand the problem and have a few questions.
1. Is trashcan enabled only on the master volume?
2. Is the 'rm -rf' done on the master volume synced to the slave?
3. Does the issue go away if trashcan is disabled?
The geo-rep error just says that it failed to create the directory
"Oracle_VM_VirtualBox_Extension" on slave.
Usually this would be because of gfid
2018 Mar 13
1
trashcan on dist. repl. volume with geo-replication
Hi Kotresh,
thanks for your response...
answers inside...
best regards
Dietmar
Am 13.03.2018 um 06:38 schrieb Kotresh Hiremath Ravishankar:
> Hi Dietmar,
>
> I am trying to understand the problem and have a few questions.
>
> 1. Is trashcan enabled only on the master volume?
No, trashcan is also enabled on the slave. Settings are the same as on the
master, but trashcan on the slave is complete
2018 Mar 12
2
trashcan on dist. repl. volume with geo-replication
Hello,
in regard to
https://bugzilla.redhat.com/show_bug.cgi?id=1434066
I have run into another issue when using the trashcan feature on a
dist. repl. volume running geo-replication (gfs 3.12.6 on Ubuntu 16.04.4).
For example, removing an entire directory with subfolders:
tron@gl-node1:/myvol-1/test1/b1$ rm -rf *
afterwards, listing the files in the trashcan:
tron@gl-node1:/myvol-1/test1$