Displaying 20 results from an estimated 1000 matches similar to: "Disconnected gluster node things it is still connected..."
2017 Jun 01
0
"Another Transaction is in progres..."
Thanks for the suggestion, this solved it for us, and we probably found the
cause as well. We had performance co-pilot running and it was continously
enabling profiling on volumes...
We found the reference to the node that had the lock, and restarted
glusterd on that node, and all went well from there on.
Krist
On 31 May 2017 at 15:56, Vijay Bellur <vbellur at redhat.com> wrote:
>
2017 Aug 24
3
NFS versus Fuse file locking problem (NFS works, fuse doesn't...)
Hi
This is gluster 3.8.4. Volume options are out of the box. Sharding is off
(and I don't think enabling it would matter)
I haven't done much performance tuning. For one thing, using a simple
script that just creates files I can easily flood the network, so I don't
expect a performance issue.
The problem we see is that after a certain time the fuse clients completely
stop accepting
2017 May 31
2
"Another Transaction is in progres..."
Hi all,
I am trying to do trivial things, like setting quota, or just querying the
status and keep getting
"Another transaction is in progres for <some volume>"
These messages pop up, then disappear for a while, then pop up again...
What do these messages mean? How do I figure out which "transaction" is
meant here, and what do I do about it?
Krist
--
Vriendelijke
2017 Jun 02
1
File locking...
Hi all,
A few questions.
- Is POSIX locking enabled when using the native client? I would assume yes.
- What other settings/tuneables exist when it comes to file locking?
Krist
--
Vriendelijke Groet | Best Regards | Freundliche Gr??e | Cordialement
------------------------------
Krist van Besien | Senior Architect | Red Hat EMEA Cloud Practice | RHCE |
RHCSA Open Stack
@: krist at
2017 Aug 24
0
NFS versus Fuse file locking problem (NFS works, fuse doesn't...)
Hi Krist,
What are your volume options on that setup? Have you tried tuning it for
the kind of workload and files size you have?
I would definitely do some tests with feature.shard=on/off first. If shard
is on, try playing with features.shard-block-size.
Do you have jumbo frames (MTU=9000) enabled across the switch and nodes? if
you have concurrent clients writing/reading, it could be beneficial
2017 Aug 22
0
Performance testing with sysbench...
Hi all,
I'm doing some performance test...
If I test a simple sequential write using dd I get a thorughput of about
550 Mb/s. When I do a sequential write test using sysbench this drops to
about 200. Is this due to the way sysbench tests? Or has in this case the
performance of sysbench itself become the bottleneck?
Krist
--
Vriendelijke Groet | Best Regards | Freundliche Gr??e |
2017 Jul 26
0
Heketi and Geo Replication.
Hello,
Is it possible to set up a Heketi Managed gluster cluster in one
datacenter, and then have geo replication for all volumes to a second
cluster in another datacenter?
I've been looking at that, but haven't really found a recipe/solution for
this.
Ideally what I want is that when a volume is created in cluster1, that a
slave volume is automatically created in cluster2, and
2017 Oct 19
0
Trying to remove a brick (with heketi) fails...
Hello,
I have a gluster cluster with 4 nodes, that is managed using heketi. I want
to test the removeal of one node.
We have several volumes on it, some with rep=2, others with rep=3.
I get the following error:
[root at CTYI1458 .ssh]# heketi-cli --user admin --secret "******" node remove
749850f8e5fd23cf6a224b7490499659
Error: Failed to remove device, error: Cannot replace brick
2017 Aug 24
2
NFS versus Fuse file locking problem (NFS works, fuse doesn't...)
Hi all,
I usualy advise clients to use the native client if at all possible, as it
is very robust. But I am running in to problems here.
In this case the gluster system is used to store video streams. Basicaly
the setup is the following:
- A gluster cluster of 3 nodes, with ample storage. They export several
volumes.
- The network is 10GB, switched.
- A "recording server" which
2017 Aug 25
0
NFS versus Fuse file locking problem (NFS works, fuse doesn't...)
On Thu, Aug 24, 2017 at 9:01 AM, Krist van Besien <krist at redhat.com> wrote:
> Hi
> This is gluster 3.8.4. Volume options are out of the box. Sharding is off
> (and I don't think enabling it would matter)
>
> I haven't done much performance tuning. For one thing, using a simple
> script that just creates files I can easily flood the network, so I don't
>
2006 Oct 20
4
tcpsnoop problem
Hello,
I have the following problem:
On a solaris 10 machine, with 5 "zones" there is a process that is
talking to the wrong db server. I need to find out which process this
is, so I can analize this further. I have tried to doing this using
tcpsnoop from the DTrace toolkit, but without success.
This is what I''ve done.
First I started tcpsnoop, dumping it''s output
2017 Jul 04
2
I need a sanity check.
2017 Dec 10
0
Problems with packages being dropped between nodes in the vpn
Hi
I have some problems with my vpn. Im running version 1.1pre15 on all nodes.
I have four nodes in my network.
Node1 -> connects to Node2
Node2 -> connects to Node1
Node3 -> connects to Node1 and Node2
Node4 -> connects to Node1 and Node2
The problem is the connection between Node3 and Node4. The traffic is going via Node1 and Node2. Its unstable. package drops almost all the time
2008 Feb 22
0
lustre error
Dear All,
Yesterday evening or cluster has stopped.
Two of our nodes tried to take the resource from each other, they
haven''t seen the other side, if I saw well.
I stopped heartbeat, resources, start it again, and back to online,
worked fine.
This morning I saw this in logs:
Feb 22 03:25:07 node4 kernel: Lustre:
7:0:(linux-debug.c:98:libcfs_run_upcall()) Invoked LNET upcall
2014 Jan 16
2
Your opinion about RHCSA certification
Hello to all,
I'm currently studying (and collecting notes here
https://github.com/fdicarlo/RHCSA_cs) for RHCSA. My plan is to RHCSA
-> RHCE and then RHCSS.
What I want to ask you is:
- What do you think about it?
- Did you find it useful?
- Do you have any advices?
Best regards,
Fabrizio
--
"The intuitive mind is a sacred gift and the rational mind is a
faithful servant. We have
2008 Jul 14
1
Node fence on RHEL4 machine running 1.2.8-2
Hello,
We have a four-node RHEL4 RAC cluster running OCFS2 version 1.2.8-2 and
the 2.6.9-67.0.4hugemem kernel. The cluster has been really stable since
we upgraded to 1.2.8-2 early this year, but this morning, one of the
nodes fenced and rebooted itself, and I wonder if anyone could glance at
the below remote syslogs and offer an opinion as to why.
First, here's the output of
2014 Nov 12
2
Connection failing between 2 nodes with dropped packets error
Hi,
I'm sometimes getting a failure of connecting 2 nodes when Tinc is started
and configured in a LAN. In the logs, there are some unexpected dropped
packets with very high or negative seq. I can reproduce this issue ~2% of
the time.
When this happens, the 2 nodes can no longer ping or ssh each other through
the tunnel interface but using eth0 works fine. The connection can recover
after at
2011 Mar 03
1
OCFS2 1.4 + DRBD + iSCSI problem with DLM
An HTML attachment was scrubbed...
URL: http://oss.oracle.com/pipermail/ocfs2-users/attachments/20110303/0fbefee6/attachment.html
2010 May 06
10
No connection between nodes on same LAN
Hi all,
I am currently deploying tinc as an alternative to OpenVPN.
My setup includes a lot of nodes and some of them are sitting together
behind the same router on the same network segment.
(E.g. connected to the same switch.)
I noticed, that those nodes do never talk directly to each other via their
private ip-addresses, but instead use the NATed address they got from the
router.
2004 Aug 06
0
Re: [Flac-dev] Unified codec interface
That's what UCI is trying to do. I'm hoping for just a simple unified Ogg
interface for the audio codecs that Ogg supports.
-dwh-
On 31 Jan 2003, Csillag Krist?f wrote:
> Here is what I imagined (just vague thoughts, nothing polished):
>
> Let's suppose we have a hypotetical library called
> "Free Universal Codec Kit" - ..um...well.. Frunick for short :)