Displaying 20 results from an estimated 2000 matches similar to: "GFS2 hangs after one node going down"
2013 Mar 21
0
GFS2 hangs after one node going down
Hi guys,
my goal is to create a reliable virtualization environment using CentOS
6.4 and KVM. I have three nodes and a clustered GFS2.
The environment is up and working, but I'm worried about reliability: if
I turn the network interface down on one node to simulate a crash (for
example on the node "node6.blade"):
1) GFS2 hangs (processes go into D state) until node6.blade get
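A minimal Python sketch (not from the original post) for spotting that symptom: it lists processes stuck in uninterruptible sleep ('D' state) by reading /proc on a Linux node.

import glob

def d_state_processes():
    """Return (pid, command) pairs for processes in uninterruptible
    sleep ('D' state), which is how hung GFS2 I/O shows up in ps."""
    hung = []
    for path in glob.glob("/proc/[0-9]*/stat"):
        try:
            with open(path) as f:
                data = f.read()
        except OSError:
            continue  # process exited while we were scanning
        pid = int(data.split(" ", 1)[0])
        comm = data[data.index("(") + 1:data.rindex(")")]
        state = data[data.rindex(")") + 1:].split()[0]
        if state == "D":
            hung.append((pid, comm))
    return hung

if __name__ == "__main__":
    for pid, comm in d_state_processes():
        print(pid, comm)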
2010 Jan 31
2
[LLVMdev] Crash in PBQP register allocator
Hi Sebastian,
It boils down to this: The previous heuristic solver could return
infinite cost solutions in some rare cases (despite finite-cost
solutions existing). The new solver is still heuristic, but it should
always return a finite cost solution if one exists. It does this by
avoiding early reduction of infinite spill cost nodes via R1 or R2.
To illustrate why the early reductions can be a
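To make the R1 terminology concrete, here is a generic Python sketch of a degree-1 (R1) reduction; this is an illustration, not LLVM's PBQP code. With infinite spill costs in node_costs or edge_matrix, folding such a node early is the step the new solver avoids.

def r1_reduce(node_costs, neighbor_costs, edge_matrix):
    """Fold a degree-1 node into its sole neighbor (an R1 reduction).

    node_costs[i]     -- cost of option i for the degree-1 node
    neighbor_costs[j] -- cost of option j for its neighbor
    edge_matrix[i][j] -- joint cost of picking options (i, j)

    Returns the neighbor's reduced cost vector plus, for each neighbor
    option, the node option to restore when back-propagating the solution.
    """
    reduced, choice = [], []
    for j, nc in enumerate(neighbor_costs):
        best_i = min(range(len(node_costs)),
                     key=lambda i: node_costs[i] + edge_matrix[i][j])
        choice.append(best_i)
        reduced.append(nc + node_costs[best_i] + edge_matrix[best_i][j])
    return reduced, choice

# Tiny example: reduced == [5.0, 3.0], choice == [1, 0]
print(r1_reduce([2.0, 5.0], [0.0, 1.0], [[4.0, 0.0], [0.0, 3.0]]))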
2014 Nov 12
2
Connection failing between 2 nodes with dropped packets error
Hi,
I'm sometimes getting a failure to connect 2 nodes when Tinc is started
and configured in a LAN. In the logs, there are some unexpected dropped
packets with very high or negative seq. I can reproduce this issue ~2% of
the time.
When this happens, the 2 nodes can no longer ping or ssh each other through
the tunnel interface but using eth0 works fine. The connection can recover
after at
2012 Sep 29
1
quota severe performance issue help
Dear gluster experts,
We have encountered a severe performance issue related to the quota feature
of gluster.
My underlying fs is LVM with an XFS format.
The problem is that with quota enabled the I/O performance is about 26 MB/s,
but with quota disabled it is 216 MB/s.
Does anyone know what the problem is? BTW, I have reproduced it several times and
it is indeed related to quota.
Here's the
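A minimal sketch of the kind of sequential-write test that produces numbers like 26 MB/s vs 216 MB/s; the poster's actual benchmark is not shown, and the mount path below is hypothetical.

import os
import time

def write_throughput_mb_s(path, size_mb=1024, block_kb=1024):
    """Sequential write test: write size_mb of zeros in block_kb chunks,
    fsync, and report MB/s (roughly what a large dd run measures)."""
    block = b"\0" * (block_kb * 1024)
    start = time.time()
    with open(path, "wb") as f:
        for _ in range(size_mb * 1024 // block_kb):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())
    return size_mb / (time.time() - start)

# "/mnt/gluster/testfile" is a hypothetical mount point, not from the post.
print(write_throughput_mb_s("/mnt/gluster/testfile"))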
2012 Mar 07
1
[HELP!] GFS2 in Xen 4.1.2 does not work!
[This email is either empty or too large to be displayed at this time]
2009 Aug 03
1
CTDB+GFS2+CMAN. clean_start="0" or clean_start="1"?
Hi everybody,
I have tested CTDB+GFS2+CMAN under Debian. It works well, but I do not
understand some points.
It is possible to run CTDB by defining it under the services section in
cluster.conf, but running it on the second node shuts down the process on the
first one. My CTDB configuration involves 2 active-active nodes.
Does CTDB care if the node starts with clean_start="0" or
2005 Aug 08
3
Reg. getting codewords from codelengths
Hi,
I am a bit confused about how codewords are derived from the codeword
lengths. I would appreciate it if someone could point me in the right direction.
I will take the example of an actual codebook that I found in a valid
Vorbis-encoded file, as shown below.
[SK] +------Codebook [0] --------
[SK] Codebook Dimensions = 1
[SK] Codebook Entries = 8
[SK] Unordered
[SK] 1, 6, 3, 7, 2, 5, 4, 7,
[SK] NO
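A brute-force Python sketch of the assignment rule as I read the spec (each entry, taken in order, gets the lowest-valued codeword of its length that keeps the code prefix-free); this is an illustration, not code from the reference decoder.

def prefix_clash(c1, l1, c2, l2):
    """True if either codeword (value, length) is a prefix of the other."""
    m = min(l1, l2)
    return (c1 >> (l1 - m)) == (c2 >> (l2 - m))

def assign_codewords(lengths):
    """Each entry, taken in order, gets the lowest-valued codeword of its
    length that keeps the code prefix-free. Brute force, fine for small
    codebooks like the 8-entry one above."""
    assigned, result = [], []
    for length in lengths:
        code = 0
        while any(prefix_clash(code, length, c, l) for c, l in assigned):
            code += 1
            if code >= (1 << length):
                raise ValueError("over-specified codebook")
        assigned.append((code, length))
        result.append(format(code, "0{}b".format(length)))
    return result

# Codeword lengths from the [SK] dump above.
print(assign_codewords([1, 6, 3, 7, 2, 5, 4, 7]))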
2010 Feb 01
0
[LLVMdev] Crash in PBQP register allocator
On Sun, 2010-01-31 at 13:28 +1100, Lang Hames wrote:
> Hi Sebastian,
>
> It boils down to this: The previous heuristic solver could return
> infinite cost solutions in some rare cases (despite finite-cost
> solutions existing). The new solver is still heuristic, but it should
> always return a finite cost solution if one exists. It does this by
> avoiding early reduction of
2005 Nov 28
1
nodebytes,leafwords
hello all,
we are developing and porting a Vorbis I decoder on a 24-bit
platform. In the process we came across some doubts about
nodebytes and leafwords.
From the specification we understand that we are arranging
the Huffman codeword tree into an array. The nodebytes are the
number of bytes required to represent a node and
leafwords are the number of bytes required to represent a leaf,
i.e. the
2005 Dec 09
0
RE: nodebytes and leafwords
hi Kuhlen,
what you said is correct. I am talking about how
you are going to arrange these codewords into an
array, i.e. in the function _make_decode_table.
There, nodebytes and leafwords are used for memory
management. I have a 24-bit platform, so if I assume
that the maximum possible codeword length is
24 bits, can I allocate memory of (2 * used entries - 2)
to arrange the whole tree in
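A short worked sketch of where a (2 * used entries - 2) figure can come from, assuming a fully populated codeword tree; whether _make_decode_table sizes its array exactly this way should be checked against the actual source.

def decode_table_slots(used_entries):
    """A fully populated prefix code with used_entries leaves forms a full
    binary tree with used_entries - 1 internal nodes: 2*used_entries - 1
    nodes in total, or 2*used_entries - 2 with the root kept out of the
    array."""
    internal = used_entries - 1
    return {"leaves": used_entries,
            "internal": internal,
            "total": used_entries + internal,             # 2N - 1
            "without_root": used_entries + internal - 1}  # 2N - 2

print(decode_table_slots(8))  # e.g. an 8-entry codebook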
2010 Jan 28
0
[LLVMdev] Crash in PBQP register allocator
Hi Lang,
I'm surprised that you omit R1/R2 reductions in some cases.
Can you give a more detailed description of the bug (e.g. a PBQP dump)?
Best regards,
Sebastian
Lang Hames wrote:
> Hi Sachin, llvm-dev,
>
> I've just committed a new PBQP solver which, among other things,
> should take care of this bug.
>
> Please let me know how it works out for you.
2018 Sep 14
3
NUMA issues on virtualized hosts
Hello,
I have a cluster with AMD EPYC 7351 CPUs, two CPUs per node, in a performance
8-NUMA configuration:
This is from the hypervisor:
[root@hde10 ~]# lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 2
NUMA
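A minimal Python sketch (nothing hypervisor-specific assumed) that reads the same NUMA topology from sysfs, which makes it easy to compare the hypervisor's layout with what a guest sees.

import glob
import os

def numa_layout():
    """Read the NUMA topology from sysfs -- roughly what lscpu prints in
    its 'NUMA nodeN CPU(s)' lines."""
    layout = {}
    for node_dir in sorted(glob.glob("/sys/devices/system/node/node[0-9]*")):
        with open(os.path.join(node_dir, "cpulist")) as f:
            layout[os.path.basename(node_dir)] = f.read().strip()
    return layout

for node, cpus in sorted(numa_layout().items()):
    print(node, cpus)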
2018 Mar 13
1
trashcan on dist. repl. volume with geo-replication
Hi Kotresh,
thanks for your response...
answers inside...
best regards
Dietmar
On 13.03.2018 at 06:38, Kotresh Hiremath Ravishankar wrote:
> Hi Dietmar,
>
> I am trying to understand the problem and have few questions.
>
> 1. Is trashcan enabled only on master volume?
no, trashcan is also enabled on the slave. The settings are the same as on the
master, but the trashcan on the slave is complete
2018 Sep 14
1
Re: NUMA issues on virtualized hosts
Hello again,
when the iozone writes are slow, this is what slabtop looks like:
OBJS ACTIVE USE OBJ SIZE SLABS OBJ/SLAB CACHE SIZE NAME
62476752 62476728 0% 0.10K 1601968 39 6407872K buffer_head
1000678 999168 0% 0.56K 142954 7 571816K radix_tree_node
132184 125911 0% 0.03K 1066 124 4264K kmalloc-32
118496 118224 0% 0.12K 3703 32 14812K kmalloc-node
73206 56467 0% 0.19K 3486 21
2010 Jan 26
3
[LLVMdev] Crash in PBQP register allocator
Hi Sachin, llvm-dev,
I've just committed a new PBQP solver which, among other things,
should take care of this bug.
Please let me know how it works out for you.
Cheers,
Lang.
On Tue, Dec 15, 2009 at 5:54 PM, Lang Hames <lhames at gmail.com> wrote:
> Hi Sachin,
>
> Yes. Bernhard Scholz and I have just discussed a fix for this. I hope to
> commit it in the next few days. I
2011 Jun 08
2
Looking for gfs2-kmod SRPM
I'm searching for the SRPM corresponding to this installed RPM.
% yum list | grep gfs2
gfs2-kmod-debuginfo.x86_64 1.92-1.1.el5_2.2
It is missing from:
http://msync.centos.org/centos-5/5/os/SRPMS/
What I need from the SRPM are the patches. I'm working through
some issues using the source code, and the patches in the RedHat
SRPM
2011 Jun 11
2
mmap in GFS2 on rhel 6.1
Hello list, we are continuing our tests using Dovecot on a RHEL 6.1 Cluster
Backend with GFS2; we are also using Dovecot as a Director for user-node
persistence. Everything was OK until we started stress testing the solution
with imaptest: we had many deadlocks, cluster filesystem corruptions and
hangs, especially in the index filesystem. We have configured the backend as if
it were on an NFS-like setup
2010 Mar 27
1
DRBD,GFS2 and GNBD without all clustered cman stuff
Hi all,
Where I want to arrive:
1) have two storage servers replicating a partition with DRBD
2) export the DRBD device with GFS2 via GNBD from the primary server
3) import the GNBD on some nodes and mount it with GFS2
Assuming no logical error in the points above, this is the
situation:
Server 1: LogVol09, DRBD configured as /dev/drbd0, replicated to Server 2.
DRBD seems to work
2009 Mar 20
1
Centos 5.2 ,5.3 and GFS2
Hello,
I will create a Xen cluster using GFS2 (with Conga, ...).
I know that GFS2 has been production-ready since RHEL 5.3.
Do you know when CentOS 5.3 will be ready?
Can I install my GFS2 FS with CentOS 5.2 and then "simply" upgrade to
5.3 without reinstallation?
Tx
2014 Mar 10
1
gfs2 and quotas - system crash
I have tried sending this before, but it did not appear to get through.
Hello,
When using gfs2 with quotas on a SAN that is providing storage to two
clustered systems running CentOS 6.5, one of the systems
can crash. This crash appears to happen when a user tries
to add something to a SAN disk when they have exceeded their
quota on that disk. Sometimes a stack trace is produced in