Displaying 20 results from an estimated 3000 matches similar to: "GFS2 hangs after one node going down"
2013 Mar 21
1
GFS2 hangs after one node going down
Hi guys,
my goal is to create a reliable virtualization environment using CentOS
6.4 and KVM. I have three nodes and a clustered GFS2.
The environment is up and working, but I'm worried about reliability: if
I turn the network interface down on one node to simulate a crash (for
example on the node "node6.blade"):
1) GFS2 hangs (processes go in D state) until node6.blade get
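For what it's worth, GFS2 is designed to block I/O until the failed node has been fenced, so a hang like this usually means fencing is not completing. A minimal sketch, assuming cman/fenced on CentOS 6 and the node name from the post, of checking and retrying the fence by hand:
  cman_tool nodes            # confirm node6.blade is seen as down by the cluster
  fence_tool ls              # see whether fenced is still waiting on a victim
  fence_node node6.blade     # retry the fence manually if the agent failed
  # only if no working fence device exists and the node is known to be powered off:
  # fence_ack_manual node6.blade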
2014 Nov 12
2
Connection failing between 2 nodes with dropped packets error
Hi,
I'm sometimes seeing a failure connecting 2 nodes when Tinc is started
and configured on a LAN. In the logs there are some unexpected dropped
packets with very high or negative seq values. I can reproduce this issue ~2% of
the time.
When this happens, the 2 nodes can no longer ping or ssh each other through
the tunnel interface but using eth0 works fine. The connection can recover
after at
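A hedged debugging sketch for when this reproduces, assuming tinc 1.0.x and a hypothetical network name "myvpn": run one side in the foreground at a higher debug level so the dropped-packet messages carry their context.
  tincd -n myvpn -D -d5              # stay in the foreground, debug level 5
  ping -c 3 <peer-tunnel-address>    # while wedged, compare against the eth0 address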
2010 Feb 01
0
[LLVMdev] Crash in PBQP register allocator
On Sun, 2010-01-31 at 13:28 +1100, Lang Hames wrote:
> Hi Sebastian,
>
> It boils down to this: The previous heuristic solver could return
> infinite cost solutions in some rare cases (despite finite-cost
> solutions existing). The new solver is still heuristic, but it should
> always return a finite cost solution if one exists. It does this by
> avoiding early reduction of
2010 Jan 31
2
[LLVMdev] Crash in PBQP register allocator
Hi Sebastian,
It boils down to this: The previous heuristic solver could return
infinite cost solutions in some rare cases (despite finite-cost
solutions existing). The new solver is still heuristic, but it should
always return a finite cost solution if one exists. It does this by
avoiding early reduction of infinite spill cost nodes via R1 or R2.
To illustrate why the early reductions can be a
2005 Dec 09
0
RE: nodebytes and leafwords
Hi Kuhlen,
what you said is correct. I am talking about how
you are going to arrange these codewords into an
array, i.e. in the function _make_decode_table.
There he uses node bytes and leaf words for memory
management. I have a 24-bit platform, so if I assume
that the maximum codeword length possible is
24 bits, can I allocate memory of (2 * used entries - 2)
to arrange the whole tree in
2012 Sep 29
1
quota severe performance issue help
Dear gluster experts,
We have encountered a severe performance issue related to the quota feature of
gluster.
My underlying fs is LVM with XFS format.
The problem is that with quota enabled the I/O performance is about 26 MB/s, but
with quota disabled it is 216 MB/s.
Anyone know what the problem is? BTW, I have reproduced it several times and
it is indeed related to quota.
Here's the
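A hedged sketch of isolating the quota effect, assuming gluster's standard quota commands and a hypothetical volume "testvol" mounted at /mnt/testvol:
  gluster volume quota testvol enable
  dd if=/dev/zero of=/mnt/testvol/bigfile bs=1M count=1024 conv=fdatasync   # ~26 MB/s in the report
  gluster volume quota testvol disable
  dd if=/dev/zero of=/mnt/testvol/bigfile bs=1M count=1024 conv=fdatasync   # ~216 MB/s in the report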
2009 Aug 03
1
CTDB+GFS2+CMAN. clean_start="0" or clean_start="1"?
Hi everybody,
I have tested CTDB+GFS2+CMAN under Debian. It works well, but I do not
understand some points.
It is possible to run CTDB by defining it under the services section in
cluster.conf, but running it on the second node shuts down the process on the
first one. My CTDB configuration uses 2 active-active nodes.
Does CTDB care if the node starts with clean_start="0" or
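One hedged alternative to defining CTDB as a cluster.conf service is to start it as an ordinary init service on every node, so stopping it on one node does not take it down on the other; a sketch assuming the stock Debian init script and ctdb tools:
  /etc/init.d/ctdb start     # on each node
  ctdb status                # both nodes should end up in OK state
  onnode all ctdb ip         # check how the public addresses are distributed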
2011 Jan 27
7
[PATCH]: xl: fix broken cpupool-numa-split
Hi,
the implementation of xl cpupool-numa-split is broken. It basically
deals with only one poolid, but there are two to consider: the one from
the original root CPUpool, the other from the newly created one.
On my machine the current output looks like:
root@dosorca:/data/images# xl cpupool-numa-split
libxl: error: libxl.c:2803:libxl_create_cpupool Could not create cpupool
error on creating
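For context, a hedged sketch of what a working run is supposed to produce: one cpupool per NUMA node carved out of Pool-0 (pool names and CPU counts depend on the machine):
  xl cpupool-numa-split      # split Pool-0 into one pool per NUMA node
  xl cpupool-list            # expect one pool per node, each holding that node's CPUs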
2012 Jul 19
1
duplicate domain ids!?
Somehow I've ended up with duplicate domids. domid=15 name=node14
(names sanitized)
# virsh list --all
Id Name State
----------------------------------------------------
1 node1 running
2 node2 running
3 node3 running
5 node4 running
6
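A hedged one-liner for spotting duplicated IDs in that listing, purely illustrative:
  # print any domain ID that appears more than once
  virsh list --all | awk 'NR>2 && $1 ~ /^[0-9]+$/ {print $1}' | sort -n | uniq -d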
2018 Sep 14
0
Re: NUMA issues on virtualized hosts
Hello,
ok, I found that the CPU pinning was wrong, so I corrected it to be 1:1. The issue
with iozone remains the same.
The SPEC run is working; however, it runs slower than in the 1-NUMA case.
The corrected XML looks as follows:
<cpu mode='host-passthrough'><topology sockets='8' cores='4' threads='1'/><numa><cell cpus='0-3'
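A hedged way to double-check the 1:1 pinning from the host side, with a hypothetical guest name "specguest":
  numactl --hardware         # host NUMA topology
  virsh vcpupin specguest    # current vCPU -> pCPU pinning
  virsh numatune specguest   # memory placement for the guest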
2013 May 03
1
sanlockd, virtlock and GFS2
Hi,
I'm trying to put in place a KVM cluster (using CLVM and GFS2), but I'm
running into some issues with either sanlock or virtlockd. All virtual
machines are handled via the cluster (in /etc/cluster/cluster.conf), but I
want some kind of locking to be in place as an extra safety measure.
Sanlock
=======
At first I tried sanlock, but it seems if one node goes down
unexpectedly,
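For the virtlockd side, a hedged sketch of the usual wiring, with the lockspace directory pointed at a GFS2 mount shared by all nodes (the directory path is illustrative):
  # /etc/libvirt/qemu.conf:        lock_manager = "lockd"
  # /etc/libvirt/qemu-lockd.conf:  file_lockspace_dir = "/gfs2/virtlockd"
  service virtlockd start
  service libvirtd restart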
2011 Jun 11
2
mmap in GFS2 on rhel 6.1
Hello list, we are continuing our tests using Dovecot on a RHEL 6.1 cluster
backend with GFS2. We are also using Dovecot as a director for user-to-node
persistence. Everything was fine until we started stress testing the solution
with imaptest: we hit many deadlocks, cluster filesystem corruptions and
hangs, especially in the index filesystem. We have configured the backend as if
it were on an NFS-like setup
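Given the subject, a hedged sketch of the settings usually recommended for Dovecot on shared or cluster filesystems; the target include file is made up, and the values should be checked against the Dovecot NFS/cluster documentation:
  printf 'mmap_disable = yes\nmail_fsync = always\nlock_method = fcntl\n' >> /etc/dovecot/conf.d/99-cluster.conf
  doveadm reload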
2009 Feb 21
1
GFS2/OCFS2 scalability
Andreas Dilger wrote:
> On Feb 20, 2009 20:23 +0300, Kirill Kuvaldin wrote:
>> I'm evaluating different cluster file systems that can work with large
>> clustered environment, e.g. hundreds of nodes connected to a SAN over
>> FC.
>>
>> So far I looked at OCFS2 and GFS2, they both worked nearly the same
>> in terms of performance, but since I ran my
2010 Mar 27
1
DRBD,GFS2 and GNBD without all clustered cman stuff
Hi all,
Where i want to arrive:
1) having two storage server replicating partition with DRBD
2) exporting via GNBD from the primary server the drbd with GFS2
3) inporting the GNBD on some nodes and mount it with GFS2
Assuming no logical error are done in the last points logic this is the
situation:
Server 1: LogVol09, DRDB configured as /dev/drbd0 replicated to Server 2.
DRBD seems to work
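A hedged end-to-end sketch of those three steps; device, cluster and export names are illustrative, and note that GFS2 still needs a DLM lock manager even in this stripped-down layout:
  # on the primary storage server
  drbdadm primary r0                        # /dev/drbd0 backed by LogVol09
  mkfs.gfs2 -p lock_dlm -t mycluster:gnbdfs -j 4 /dev/drbd0
  gnbd_export -e gnbdfs -d /dev/drbd0       # export the DRBD device over GNBD
  # on each importing node
  gnbd_import -i server1                    # devices appear under /dev/gnbd/
  mount -t gfs2 /dev/gnbd/gnbdfs /mnt/gfs2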
2010 Dec 14
1
Samba slowness serving SAN-based GFS2 filesystems
Ok,
I'm experiencing slowness serving SAN-based GFS2 filesystems (of a specific
SAN configuration).
Here's my layout:
I have a server cluster.
OS= RHEL 5.4 (both nodes...)
kernel= 2.6.18-194.11.3.el5
Samba= samba-3.0.33-3.14.el5
* On this cluster there are 6 GFS2 clustered filesystems.
* 4 of these volumes belong to one huge LUN (1.8 TB) spanning 8 disks. The
other 2 remaining volumes are 1
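One hedged thing to rule out on a layout like this is atime traffic, since every SMB read then turns into a GFS2 write; the mount point below is illustrative:
  mount -o remount,noatime,nodiratime /gfs2/volume1
  grep gfs2 /proc/mounts        # confirm the options are active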
2008 Jun 26
0
CEBA-2008:0501 CentOS 5 i386 gfs2-kmod Update
CentOS Errata and Bugfix Advisory 2008:0501
Upstream details at : https://rhn.redhat.com/errata/RHBA-2008-0501.html
The following updated files have been uploaded and are currently
syncing to the mirrors: ( md5sum Filename )
i386:
629f45a15a6cef05327f23d73524358d kmod-gfs2-1.92-1.1.el5_2.2.i686.rpm
dcc5d2905e9c0cf4d424000ad24c6a5b kmod-gfs2-PAE-1.92-1.1.el5_2.2.i686.rpm
2014 Mar 10
1
gfs2 and quotas - system crash
I have tried sending this before, but it did not appear to get through.
Hello,
When using GFS2 with quotas on a SAN that provides storage to two
clustered systems running CentOS 6.5, one of the systems
can crash. The crash appears to happen when a user tries
to write something to a SAN disk after they have exceeded their
quota on that disk. Sometimes a stack trace is produced in
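A hedged sketch for reproducing this deliberately on a test cluster, with user and paths purely illustrative, while watching both nodes for the trace:
  su - testuser -c 'dd if=/dev/zero of=/san/gfs2vol/home/testuser/overflow bs=1M count=100'
  tail -f /var/log/messages     # on both nodes, watch for the GFS2/quota stack trace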
2009 Mar 20
1
Centos 5.2 ,5.3 and GFS2
Hello,
I am going to build a Xen cluster and use GFS2 (with Conga, ...)
for it.
I know that GFS2 has been production-ready since RHEL 5.3.
Do you know when CentOS 5.3 will be ready?
Can I install my GFS2 FS with CentOS 5.2 and then "simply" upgrade to
5.3 without reinstalling?
Tx
2011 Jun 08
2
Looking for gfs2-kmod SRPM
I'm searching for the SRPM corresponding to this installed RPM.
% yum list | grep gfs2
gfs2-kmod-debuginfo.x86_64 1.92-1.1.el5_2.2
It is missing from:
http://msync.centos.org/centos-5/5/os/SRPMS/
What I need from the SRPM are the patches. I'm working through
some issues using the source code, and the patches in the RedHat
SRPM
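In the meantime, a hedged sketch of pulling the patches out of a source RPM once located; the filename is a guess based on the installed version:
  mkdir gfs2-kmod-src && cd gfs2-kmod-src
  rpm2cpio /path/to/gfs2-kmod-1.92-1.1.el5_2.2.src.rpm | cpio -idmv
  ls *.patch                    # the patches applied by the spec file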
2008 Jun 26
0
CEBA-2008:0501 CentOS 5 x86_64 gfs2-kmod Update
CentOS Errata and Bugfix Advisory 2008:0501
Upstream details at : https://rhn.redhat.com/errata/RHBA-2008-0501.html
The following updated files have been uploaded and are currently
syncing to the mirrors: ( md5sum Filename )
x86_64:
b934e068a7daf76e15080d43dd3101ca kmod-gfs2-1.92-1.1.el5_2.2.x86_64.rpm
109ffdfbb849dda14a89a6964310c254 kmod-gfs2-xen-1.92-1.1.el5_2.2.x86_64.rpm
Source: