Displaying 20 results from an estimated 10000 matches similar to: "Sanlock disk leases on drbd/gfs2 volume"
2010 Mar 27 (1 reply): DRBD, GFS2 and GNBD without all clustered cman stuff
Hi all,
Where I want to arrive:
1) two storage servers replicating a partition with DRBD
2) exporting the DRBD device, formatted with GFS2, via GNBD from the primary server
3) importing the GNBD on some nodes and mounting it with GFS2
Assuming no logical errors in the points above, this is the
situation:
Server 1: LogVol09, DRBD configured as /dev/drbd0, replicated to Server 2.
DRBD seems to work.
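A minimal sketch of steps 1 and 2, assuming a DRBD resource named r0 backed by LogVol09 and an export named gnbd_gfs (all names and addresses are illustrative, not from the original post):

  # /etc/drbd.conf (excerpt) -- identical on both storage servers
  resource r0 {
    protocol C;                              # synchronous replication
    on server1 {
      device    /dev/drbd0;
      disk      /dev/VolGroup00/LogVol09;    # backing logical volume
      address   10.0.0.1:7788;
      meta-disk internal;
    }
    on server2 {
      device    /dev/drbd0;
      disk      /dev/VolGroup00/LogVol09;
      address   10.0.0.2:7788;
      meta-disk internal;
    }
  }

  # on the primary server: export the DRBD device over GNBD
  gnbd_export -v -e gnbd_gfs -d /dev/drbd0

  # on each client node: import it, then mount with GFS2
  gnbd_import -v -i server1
  mount -t gfs2 /dev/gnbd/gnbd_gfs /mnt/shared

Note that a GFS2 mount with the default lock_dlm locking still requires the cluster infrastructure (cman) on the importing nodes, which is exactly what the poster hopes to avoid.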
2011 Feb 27 (1 reply): Recover botched drbd gfs2 setup.
Hi.
The short story... Rush job, never done clustered file systems before, and
the VLAN didn't support multicast. Thus I ended up with drbd working OK
between the two servers but cman/gfs2 not working, so what was meant to be
a drbd primary/primary cluster ran as primary/secondary, with gfs mounted
on only the one server, until the VLAN could be fixed. I got the single server
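For reference, once the VLAN passes multicast and cman/gfs2 come up, converting the running primary/secondary pair to dual-primary is roughly this (a sketch; r0 is a hypothetical resource name):

  # /etc/drbd.conf (excerpt) -- required for primary/primary operation
  resource r0 {
    net     { allow-two-primaries; }
    startup { become-primary-on both; }
  }

  # apply the changed config, then promote the node still in secondary
  drbdadm adjust r0
  drbdadm primary r0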
2009 Jun 05 (2 replies): Dovecot + DRBD/GFS mailstore
Hi guys,
I'm looking at the possibility of running a pair of servers with
Dovecot LDA/imap/pop3 using internal drives with DRBD and GFS (or
other clustered FS) for the mail storage and ext3 for the root drive.
I'm currently using maildrop for delivery and Dovecot imap/pop3 with
the stores over NFS. I'm looking for better performance but still
keeping the HA element I have now with
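For a setup like this, the Dovecot documentation recommends NFS/cluster-filesystem settings along these lines (a sketch using Dovecot v2 option names; v1 equivalents differ slightly):

  # dovecot.conf (excerpt) -- mail store on a shared/clustered filesystem
  mmap_disable = yes       # do not mmap index files on shared storage
  mail_fsync = always      # flush writes before another node may read
  lock_method = fcntl      # fcntl locks work across cluster nodes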
2009 Jun 05 (1 reply): DRBD+GFS - Logical Volume problem
Hi list.
I am dealing with DRBD (with GFS on top, which uses the DLM). The GFS
configuration needs clvmd. So, after synchronizing my (two) /dev/drbd0 block
devices, I start the clvmd service and try to create a clustered
logical volume. I get this:
On "alice":
[root@alice ~]# pvcreate /dev/drbd0
Physical volume "/dev/drbd0" successfully created
[root@alice ~]# vgcreate
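The full sequence being attempted looks roughly like this; vg_cluster, lv_gfs and the cluster name mycluster are illustrative, and the usual stumbling block is that clvmd must be running (and the cluster quorate) on all nodes before a clustered VG can be created and activated:

  # with clvmd running on both nodes, on one node only:
  pvcreate /dev/drbd0
  vgcreate -cy vg_cluster /dev/drbd0        # -cy marks the VG as clustered
  lvcreate -L 10G -n lv_gfs vg_cluster
  mkfs.gfs2 -p lock_dlm -t mycluster:gfs0 -j 2 /dev/vg_cluster/lv_gfs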
2010 Apr 30 (5 replies): Mount drbd/gfs logical volume from domU
Hi list,
I set up a drbd/gfs logical volume on two Xen dom0s; it runs primary/primary, so both domUs will be able to write to it at the same time. But I don't know how to mount it from my domUs; I can see it with fdisk -l. The partition is /dev/xvdb1.
Should I install gfs on the domUs and mount it on each as a gfs partition?
[root@p3x0501 ~]# fdisk -l
Disk /dev/xvda: 5368 MB, 5368709120
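In short, yes: the domU needs the gfs2 userland and, for the default lock_dlm locking, cluster membership of its own. A sketch (package names are from RHEL/CentOS; the mountpoint is illustrative):

  # inside each domU
  yum install gfs2-utils cman     # gfs2 tools plus cluster stack
  service cman start              # the domU must join the cluster for lock_dlm
  mount -t gfs2 /dev/xvdb1 /mnt/shared

Without cluster membership the only option is mounting with lockproto=lock_nolock, which is safe on a single node only and defeats the primary/primary setup.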
2012 Mar 13 (2 replies): libvirt with sanlock
Hello,
I configured libvirtd with the sanlock lock manager plugin:
# rpm -qa | egrep "libvirt-0|sanlock-[01]"
libvirt-lock-sanlock-0.9.4-23.el6_2.4.x86_64
sanlock-1.8-2.el6.x86_64
libvirt-0.9.4-23.el6_2.4.x86_64
# egrep -v "^#|^$" /etc/libvirt/qemu-sanlock.conf
auto_disk_leases = 1
disk_lease_dir = "/var/lib/libvirt/sanlock"
host_id = 4
# mount | grep sanlock
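The other half of this configuration lives in /etc/libvirt/qemu.conf, which selects the lock manager plugin; with auto_disk_leases = 1 the lease directory must sit on shared storage visible to every host, each host using a distinct host_id:

  # /etc/libvirt/qemu.conf (excerpt)
  lock_manager = "sanlock"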
2010 Oct 05 (0 replies): WG: HOWTO samba4 centos5.5 named dnsupdate drbd simple failover
CentOS 5.5/samba4/named: here is a short guide to setting it up to work.
I added TSIG for bind-master and bind-slave. An update to samba4 alpha13 is included (installing git on CentOS 5.5).
If you follow this howto right now you will start with samba4 alpha13, so you do not need the update section. But you do need
git for your installation, because the rsync method is broken!
First of all do not install the bind
2013 May 03 (1 reply): sanlockd, virtlockd and GFS2
Hi,
I'm trying to put a KVM cluster in place (using clvm and gfs2), but I'm
running into some issues with either sanlock or virtlockd. All virtual
machines are handled via the cluster (in /etc/cluster/cluster.conf), but I
want some kind of locking to be in place as an extra security measure.
Sanlock
=======
At first I tried sanlock, but it seems that if one node goes down
unexpectedly,
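For comparison, the virtlockd alternative needs only two settings, with the lockspace directory placed on the shared GFS2 so that all hosts contend on the same locks (the path below is illustrative):

  # /etc/libvirt/qemu.conf (excerpt)
  lock_manager = "lockd"

  # /etc/libvirt/qemu-lockd.conf (excerpt)
  file_lockspace_dir = "/var/lib/libvirt/lockd/files"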
2011 Nov 25 (0 replies): Failed to start a "virtual machine" service on RHCS in CentOS 6
Hi All:
I have two physical machines as KVM hosts (clusterA.RHCS and clusterB.RHCS) and an iSCSI target formatted with GFS.
All I want is an HA cluster that migrates all the virtual machines on a node to another node when the first node fails into an error status.
So I created a cluster "cluster" using RHCS, added the two hosts into the cluster, and created a fence device.
for every virtual
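For an rgmanager-based setup like this, the VM services are declared in cluster.conf roughly as follows (the guest name and failover domain are illustrative, not from the post):

  <!-- cluster.conf (excerpt) -->
  <rm>
    <failoverdomains>
      <failoverdomain name="kvmhosts" ordered="0" restricted="0">
        <failoverdomainnode name="clusterA.RHCS"/>
        <failoverdomainnode name="clusterB.RHCS"/>
      </failoverdomain>
    </failoverdomains>
    <vm name="guest1" use_virsh="1" migrate="live"
        recovery="relocate" domain="kvmhosts"/>
  </rm>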
2010 Aug 09 (2 replies): HOWTO samba4 centos5.5 named dnsupdate drbd simple failover
CentOS 5.5/samba4/named: here is a short guide to setting it up to work.
First of all, do not install the bind package that comes with CentOS 5.5!
Install the prerequisites for samba:
yum install libacl* gnutls* readline* python* gdb* autoconf*
Named installation:
Here is a description of what to do:
http://jason.roysdon.net/2009/10/16/building-bind-9-6-on-rhel5-centos5-for-dnssec-nsec3-support/
The steps:
yum
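The truncated steps presumably continue with fetching and building samba4 from git; for the alpha-era tree that was roughly this (a sketch, with prerequisites abbreviated; alpha releases built from the source4 subtree):

  yum install git gcc make
  git clone git://git.samba.org/samba.git samba4
  cd samba4/source4
  ./autogen.sh && ./configure && make && make install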
2011 Jun 30 (1 reply): Xen with DRBD, mount DRBD device / Filesystem type?
Hi everyone,
I'm using Citrix XenServer with DRBD. But something went really wrong, so I
need a fresh install. My only question is: how can I mount the DRBD
partition? It is /dev/drbd1, but if I try to mount it, I need to provide
the filesystem type. Does anyone know what I have to do? I only need to copy
some data from it; it doesn't matter if it gets destroyed. I tried GFS as the
filesystem
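Since the data just needs to be copied off, the filesystem type can be probed non-destructively, and a GFS2 volume can be mounted on a single node without a cluster stack via lock_nolock (the rescue mountpoint is illustrative; the DRBD resource must be promoted to primary on this node first):

  blkid /dev/drbd1        # prints TYPE="gfs2", TYPE="ext3", etc.
  file -s /dev/drbd1      # alternative probe

  # single-node, cluster-less GFS2 mount for data recovery
  mount -t gfs2 -o lockproto=lock_nolock /dev/drbd1 /mnt/rescue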
2008 Jan 02 (4 replies): Xen, GFS, GNBD and DRBD?
Hi all,
We're looking at deploying a small Xen cluster to run some of our
smaller applications. I'm curious to get the list's opinions and advice
on what's needed.
The plan at the moment is to have two or three servers running as the
Xen dom0 hosts and two servers running as storage servers. As we're
trying to do this on a small scale, there is no means to hook the
2010 Aug 16 (1 reply): WG: HOWTO samba4 centos5.5 named dnsupdate drbd simple failover
CentOS 5.5/samba4/named: here is a short guide to setting it up to work.
First of all, do not install the bind package that comes with CentOS 5.5!
Install the prerequisites for samba:
yum install libacl* gnutls* readline* python* gdb* autoconf*
Named installation:
Here is a description of what to do:
http://jason.roysdon.net/2009/10/16/building-bind-9-6-on-rhel5-centos5-for-dnssec-nsec3-support/
The steps:
yum
2009 Jan 24 (0 replies): Best practices for httpd & MySQL under Xen w/DRBD & iSCSI?
Hi All,
I apologize in advance if this strays too far from the etiquette of the
Xen user list, however the amount of help, brainpower and experience I've
received from this list with Xen and "peripheral" related issues (i.e.
DRBD) has been worth more than its weight in gold, and I'm hoping
someone will be kind enough to give me a "best practices"
2011 Jun 11 (2 replies): mmap in GFS2 on RHEL 6.1
Hello list, we are continuing our tests using Dovecot on a RHEL 6.1 cluster
backend with GFS2; we are also using Dovecot as a director for user-to-node
persistence. Everything was OK until we started stress-testing the solution
with imaptest: we hit many deadlocks, cluster filesystem corruptions and
hangs, especially in the index filesystem. We have configured the backend as
if it were on an NFS-like setup
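For context, an imaptest stress run against such a backend looks roughly like this (the host, credentials and duration are illustrative):

  # 50 concurrent clients hammering the backend for 5 minutes
  imaptest host=backend1 user=testuser%d pass=secret clients=50 secs=300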
2013 Aug 21 (2 replies): Dovecot tuning for GFS2
Hello,
I'm deploying a new email cluster using Dovecot over GFS2. Currently I'm
using Courier over GFS.
I'm testing Dovecot with these parameters:
mmap_disable = yes
mail_fsync = always
mail_nfs_storage = yes
mail_nfs_index = yes
lock_method = fcntl
Are they correct?
Red Hat GFS supports mmap, so is it better to enable it or leave it disabled?
The documentation suggests the
2006 Nov 07 (4 replies): gnbd vs drbd
Up until now, I have been using drbd for file clusters with great success.
Yes, it is a PITA, and sometimes you can get annoying synchronization
issues (mostly in lab situations).
Now I have been considering giving gnbd (with cs/gfs) a try.
Have any of you ever crossed this path? Any comparisons or comments?
TIA,
Rodrigo Barbosa
2006 Oct 12 (5 replies): AoE LVM2 DRBD Xen Setup
Hello everybody,
I am in the process of setting up a really cool Xen server farm. The backend
storage will be an LVM'ed AoE device on top of DRBD.
The goal is to have the backend storage completely redundant.
Picture:
  |RAID|               |RAID|
  |DRBD1| <--------> |DRBD2|
        \             /
         |   VMAC    |
         |   AoE     |
      |global LVM VG|
        /     |      \
  |Dom0a|  |Dom0b|  |Dom0c|
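For the AoE layer of this picture, the export on the active storage head and the import on the dom0s would look roughly like this (shelf 0, slot 1, eth0 and the VG name are illustrative):

  # on the active DRBD node: export the replicated device over AoE
  vblade 0 1 eth0 /dev/drbd0

  # on each dom0 (aoetools): discover it and build the global VG
  aoe-discover
  pvcreate /dev/etherd/e0.1
  vgcreate -cy vg_xen /dev/etherd/e0.1    # -cy: clustered VG shared by the dom0s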
2009 Aug 03 (1 reply): CTDB+GFS2+CMAN. clean_start="0" or clean_start="1"?
Hi everybody,
I have tested CTDB+GFS2+CMAN under Debian. It works well, but I do not
understand some points.
It is possible to run CTDB by defining it under the services section in
cluster.conf, but starting it on the second node shuts down the process on
the first one. My CTDB configuration assumes 2 active-active nodes.
Does CTDB care if the node starts with clean_start="0" or
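Two reference points for this setup: clean_start is an attribute of the fence_daemon tag in cluster.conf (roughly, with "0" the daemon may fence nodes it cannot see at startup, while "1" skips startup fencing), and the essential CTDB settings for a 2-node active-active pair look like this sketch (paths and addresses are illustrative):

  <!-- cluster.conf (excerpt) -->
  <fence_daemon clean_start="0" post_join_delay="30"/>

  # /etc/sysconfig/ctdb (Debian: /etc/default/ctdb)
  CTDB_RECOVERY_LOCK=/mnt/gfs2/.ctdb_recovery_lock   # must live on the shared GFS2
  CTDB_PUBLIC_ADDRESSES=/etc/ctdb/public_addresses
  CTDB_MANAGES_SAMBA=yes

  # /etc/ctdb/nodes -- one private address per node
  10.0.0.1
  10.0.0.2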
2007 Jun 17 (0 replies): CentOS-announce Digest, Vol 28, Issue 14