similar to: gnbd vs drbd

Displaying 20 results from an estimated 300 matches similar to: "gnbd vs drbd"

2007 Aug 15
5
GNBD and DRBD kernel mods
Hi all, I am using heartbeat and drbd on CentOS 5 (see http://www.centos.org/modules/newbb/viewtopic.php?topic_id=7816&forum=41 for details). DRBD and heartbeat are working, and now I want to put GNBD on top of this. However, I installed the latest CentOS-plus kernel (2.6.18-8.1.8.el5.centos.plus) but there doesn't appear to be a kmod-gnbd for this kernel. Looks like the latest
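A quick way to check this from the box itself; a minimal sketch, where the centosplus repo id is an assumption about how the repositories are named locally:

    # compare the running kernel with the gnbd kmod builds the repos actually carry
    uname -r
    yum --enablerepo=centosplus list available 'kmod-gnbd*'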
2010 Mar 27
1
DRBD,GFS2 and GNBD without all clustered cman stuff
Hi all, where I want to arrive: 1) two storage servers replicating a partition with DRBD; 2) exporting the DRBD device, with GFS2 on it, via GNBD from the primary server; 3) importing the GNBD on some nodes and mounting it with GFS2. Assuming there is no logical error in the plan above, this is the situation: Server 1: LogVol09, DRBD configured as /dev/drbd0, replicated to Server 2. DRBD seems to work
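For reference, a minimal sketch of the export/import path described above; the export name "drbd_export", the server hostname "server1" and the mount point are placeholders, and it assumes the GFS2 filesystem and its locking setup already exist:

    # on the primary storage server: start the GNBD server and export the DRBD device
    gnbd_serv
    gnbd_export -d /dev/drbd0 -e drbd_export

    # on each client node: load the module, import from the server, mount GFS2
    modprobe gnbd
    gnbd_import -i server1
    mount -t gfs2 /dev/gnbd/drbd_export /mnt/shared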
2008 Jan 02
4
Xen, GFS, GNBD and DRBD?
Hi all, We're looking at deploying a small Xen cluster to run some of our smaller applications. I'm curious to get the list's opinions and advice on what's needed. The plan at the moment is to have two or three servers running as the Xen dom0 hosts and two servers running as storage servers. As we're trying to do this on a small scale, there is no means to hook the
2010 Jul 01
0
GNBD/LVM problem
Hello all: I'm having a strange problem with GNBD and LVM on two fully updated CentOS 5.5 x86_64 systems. On node1, I have exported a gnbd volume:
    lvcreate -L 500M -n mirrortest_lv01 mirrorvg
    gnbd_serv
    gnbd_export -d /dev/mirrorvg/mirrortest_lv01 -e node1_lv01
On node2 I have imported the volume:
    gnbd_import -i node1
Next, on node2 I attempt to create a mirrored LV with the
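The preview cuts off at the mirrored lvcreate; a sketch of the kind of command node2 would be running, assuming node2 has a local PV /dev/sdb1 and a volume group also named mirrorvg (both names are hypothetical):

    # on node2: add the imported gnbd device to LVM, then attempt the mirror
    pvcreate /dev/gnbd/node1_lv01
    vgextend mirrorvg /dev/gnbd/node1_lv01
    lvcreate -L 500M -m 1 --mirrorlog core -n mirrortest_lv02 mirrorvg /dev/sdb1 /dev/gnbd/node1_lv01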
2007 Oct 03
0
CEBA-2007:0953 CentOS 5 i386 gnbd-kmod Update
CentOS Errata and Bugfix Advisory 2007:0953 Upstream details at : https://rhn.redhat.com/errata/RHBA-2007-0953.html The following updated files have been uploaded and are currently syncing to the mirrors: ( md5sum Filename ) i386: 4e3a0ef4b2ed1019df88196f418c86b7 kmod-gnbd-0.1.3-4.2.6.18_8.1.14.el5.i686.rpm 7eb54b0cee9bd23cf4eb3e0b98c10205 kmod-gnbd-PAE-0.1.3-4.2.6.18_8.1.14.el5.i686.rpm
2007 Oct 26
0
CEBA-2007:0977 CentOS 5 i386 gnbd-kmod Update
CentOS Errata and Bugfix Advisory 2007:0977 Upstream details at : https://rhn.redhat.com/errata/RHBA-2007-0977.html The following updated files have been uploaded and are currently syncing to the mirrors: ( md5sum Filename ) i386: 3308267c6757d7219f6c6b0df781c78c kmod-gnbd-0.1.3-4.2.6.18_8.1.15.el5.i686.rpm 936ca942c0bcf7e7a20e2a489e8d9d49 kmod-gnbd-PAE-0.1.3-4.2.6.18_8.1.15.el5.i686.rpm
2007 Sep 21
0
CEBA-2007:0885 CentOS 5 i386 gnbd-kmod Update
CentOS Errata and Bugfix Advisory 2007:0885 Upstream details at : https://rhn.redhat.com/errata/RHBA-2007-0885.html The following updated files have been uploaded and are currently syncing to the mirrors: ( md5sum Filename ) i386: c8c9a11857ac481dbcb1a9a1304f1833 kmod-gnbd-0.1.3-4.2.6.18_8.1.10.el5.i686.rpm 926604b20cf494d3e6c0a30d0cd7368b kmod-gnbd-PAE-0.1.3-4.2.6.18_8.1.10.el5.i686.rpm
2008 Mar 04
0
Device-mapper-multipath not working correctly with GNBD devices
Hi all, I am trying to configure a failover multipath between 2 GNBD devices. I have a 4-node Red Hat Cluster Suite (RCS) cluster: 3 of the nodes are used for running services, 1 of them for central storage. In the future I am going to introduce another machine for central storage. The 2 storage machines are going to share/export the same disk. The idea is not to have a single point of failure
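A minimal sketch of the checking side of such a setup, with "storage1"/"storage2" as placeholder hostnames; it only shows how to see what device-mapper-multipath makes of the two imported paths, not a fix:

    # import the same exported disk from both storage servers, then inspect the maps
    gnbd_import -i storage1
    gnbd_import -i storage2
    multipath -v2    # (re)build the multipath maps
    multipath -ll    # list the maps and the state of each path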
2007 Oct 03
0
CEBA-2007:0953 CentOS 5 x86_64 gnbd-kmod Update
CentOS Errata and Bugfix Advisory 2007:0953 Upstream details at : https://rhn.redhat.com/errata/RHBA-2007-0953.html The following updated files have been uploaded and are currently syncing to the mirrors: ( md5sum Filename ) x86_64: 33ba3b44c574c0749ada598ba3ff74d0 kmod-gnbd-0.1.3-4.2.6.18_8.1.14.el5.x86_64.rpm 0c15de12384cc4b4130576fe2a3b3ffc
2007 Oct 26
0
CEBA-2007:0977 CentOS 5 x86_64 gnbd-kmod Update
CentOS Errata and Bugfix Advisory 2007:0977 Upstream details at : https://rhn.redhat.com/errata/RHBA-2007-0977.html The following updated files have been uploaded and are currently syncing to the mirrors: ( md5sum Filename ) x86_64: a063f498832ca79bf139ad3e05b3c374 kmod-gnbd-0.1.3-4.2.6.18_8.1.15.el5.x86_64.rpm 9b1d8cdb61fcef11aa72a4dd4b75dc1e
2007 Sep 21
0
CEBA-2007:0885 CentOS 5 x86_64 gnbd-kmod Update
CentOS Errata and Bugfix Advisory 2007:0885 Upstream details at : https://rhn.redhat.com/errata/RHBA-2007-0885.html The following updated files have been uploaded and are currently syncing to the mirrors: ( md5sum Filename ) x86_64: 0a4e96f62cb787b49ebde18a41f9d4ea kmod-gnbd-0.1.3-4.2.6.18_8.1.10.el5.x86_64.rpm 7f4022b8524043e4e6d2cc8d8d827c55
2008 Dec 12
0
gnbd and xen
hi all.. Is it recommended to use GNBD in a production cluster? The main purpose of this cluster would be to provide high availability of Xen virtual machines. I don't have a fencing device, hence I am considering using GNBD since it can be used for fencing without any fencing hardware. Also, any pointers to a tutorial on using gnbd with RHCS and xen would be great. Thanks Paras.
2006 Feb 08
0
GNBD vs NFS
Anyone have any experience with GNBD? What kind of performance and security differences are there in running GNBD vs NFS? We're planning an implementation with an external storage array (DAS) and want to use a shared filesystem. We have 10 or so clients that will be connecting to the shared filesystem. We have duplicate systems and arrays and were planning on using keepalived for
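On the performance question, a crude way to get first numbers for a specific workload; a sketch that assumes both back ends are already mounted at the hypothetical paths below:

    # rough sequential-write comparison; bypass the page cache so the transport dominates
    dd if=/dev/zero of=/mnt/gnbd_gfs/testfile bs=1M count=1024 oflag=direct
    dd if=/dev/zero of=/mnt/nfs/testfile bs=1M count=1024 oflag=direct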
2006 Aug 16
1
gnbd help on centos
I've googled for this, but everything I find tends to talk about clusters and doesn't give an example close enough that I can figure this out. I have read the Red Hat Cluster Suite Configuring and Managing a Cluster <http://www.redhat.com/docs/manuals/csgfs/browse/rh-cs-en/> links from http://www.redhat.com/docs/manuals/csgfs/. (I think these are mirrored on centos.org, but I
2007 Jun 22
0
automate gnbd process
Wondering if anyone can point me in the right direction for automating the whole gnbd process, i.e. gnbd_serv, gnbd_export, gnbd_import, modprobe gnbd. Thanks in advance.
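One common answer is a small init-style wrapper around those four steps; a minimal sketch, where the exported device, export name and server hostname are placeholders:

    #!/bin/sh
    # gnbd wrapper: "server" exports a device, "client" imports from the server
    case "$1" in
      server)
        modprobe gnbd
        gnbd_serv
        gnbd_export -d /dev/vg0/xen_disk -e xen_disk   # hypothetical device and name
        ;;
      client)
        modprobe gnbd
        gnbd_import -i storage1                        # hypothetical server hostname
        ;;
    esac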
2011 Dec 29
0
ocfs2 with RHCS and GNBD on RHEL?
Does anyone have OCFS2 running with the "Red Hat Cluster Suite" on RHEL? I'm trying to create a more or less completely fault tolerant solution with two storage servers syncing storage with dual-primary DRBD and offering it up via multipath to nodes for OCFS2. I was able to successfully multipath a dual-primary DRBD based GFS2 volume in this manner using RHCS and GNBD. But switched
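For the OCFS2 side, the equivalent of the GFS2 steps would look roughly like this; the multipath device path and slot count are assumptions, and it presumes the o2cb cluster stack is already configured and running on every node:

    # format the multipathed device once, then mount it on each node
    mkfs.ocfs2 -N 4 -L shared01 /dev/mapper/mpath0
    mount -t ocfs2 /dev/mapper/mpath0 /mnt/shared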
2006 Jan 23
1
GFS and gnbd for 2.6.9-22.0.2 kernel
hi all, Does anybody know when GFS and GNBD will be released for the 2.6.9-22.0.2 kernel? Thanks. -- CL Martinez carlopmart {at} gmail {d0t} com
2005 May 11
5
Xen reboots on dom-U disk stress...
Hi all, I tried to run the bonnie++ disk stresser in dom-U, whose disk is backed with a non-local (on nfs) loop-back file. The machine rebooted pretty quickly. So, how do I tell what's barfing? Is it Xen? Is it dom-0 (nfs or loop-back)? I looked in dom-0's /var/log/messages and didn't see any obvious record of a dom-0 whoopsie (but is that the right place to look
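Besides dom-0's /var/log/messages, the hypervisor keeps its own message buffer; a couple of places worth looking (the xend log path varies between Xen versions):

    # hypervisor ring buffer -- crashes often leave traces here rather than in dom-0 syslog
    xm dmesg
    # xend's own log
    tail -n 100 /var/log/xend.log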
2006 Jan 17
8
2.0.7 -> 3.0.0 upgrade
Hey all, I'm thinking about moving my 2.0.7 install over to 3.0.0 for eventual use in a production environment. Two questions: 1) Do people think 3.0.0 is ready for production - or is it just a testing/unstable playpen? 2) What do I have to change - just the xen-2.0.7.gz, or do I have to recompile the dom0/domU kernels too? Cheers, Matthew Walster
2008 Nov 14
5
[RFC] Splitting cluster.git into separate projects/trees
Hi everybody, as discussed and agreed at the Cluster Summit, we need to split our tree to make life easier in the long run (etc. etc.). We need to decide how we want to do it, and there are different approaches to that; I was able to think of 3. There might be more, and I might not have taken everything into consideration, so comments and ideas are welcome. At this point we haven't really
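Whichever split is chosen, each component's history can be carried into its own tree; a sketch of the usual approach, where the repository and subdirectory names are purely illustrative:

    # clone the monolithic tree, then rewrite it down to one component's history
    git clone cluster.git gfs2-utils.git
    cd gfs2-utils.git
    git filter-branch --subdirectory-filter gfs2 -- --all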