similar to: SAN Block Device locking

Displaying 20 results from an estimated 7000 matches similar to: "SAN Block Device locking"

2006 May 19
3
relocation time about 20sec? and yours?
Hi list, I am running xen-3.0.2 and my migration time for a DomU from Server A to Server B is about 20 sec. Servers A and B are connected over 1 Gbit/s links and the storage is attached via iSCSI. The iSCSI volumes perform at about 20-40 MByte/s. Server A is a 2x2 GHz Xeon with 4 GB RAM and Server B is a 1.8 GHz AMD with 2 GB RAM. The documentation states about some 60-300 msec, so my
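The 60-300 msec figure in the Xen docs refers to the final stop-and-copy pause of a live migration, not the total relocation time; the guest's entire RAM still has to cross the wire at least once, and pre-copy rounds resend dirty pages on top of that. A rough back-of-the-envelope sketch (the 70% payload-efficiency factor is an assumption, not a measured value):

```python
# Lower bound on live-migration time: guest RAM must be copied at
# least once over the relocation link; dirty-page rounds only add to it.
def min_migration_seconds(ram_mb, link_gbits=1.0, efficiency=0.7):
    """Seconds to push ram_mb of memory over a link_gbits link,
    assuming ~30% is lost to TCP/encapsulation overhead."""
    bytes_total = ram_mb * 1024 * 1024
    bytes_per_sec = link_gbits * 1e9 / 8 * efficiency
    return bytes_total / bytes_per_sec

# A 1 GiB DomU over a 1 Gbit/s link:
print(round(min_migration_seconds(1024), 1))  # prints 12.3
```

So ~20 sec for a DomU with a couple of GB of RAM on a 1 Gbit/s link is entirely plausible, even before dirty-page iterations.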
2014 Feb 12
0
Re: Right way to do SAN-based shared storage?
On Wed, 12 Feb 2014 21:51:53 +0100 urgrue <urgrue@bulbous.org> wrote: > I'm trying to set up SAN-based shared storage in KVM, key word being > "shared" across multiple KVM servers for a) live migration and b) > clustering purposes. But it's surprisingly sparsely documented. For > starters, what type of pool should I be using? It's indeed not documented
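A commonly suggested answer to this question is to put clustered LVM on the shared LUN and expose it to libvirt as a `logical` pool, with an identical pool definition on every KVM host pointing at the same VG. A minimal sketch of such a definition; the VG name `san_vg` and the multipath device path are illustrative assumptions, and note libvirt itself does no cross-host locking (clvmd or sanlock is still required):

```xml
<!-- define on each KVM host with: virsh pool-define san-pool.xml -->
<pool type='logical'>
  <name>san_vg</name>
  <source>
    <!-- the shared LUN, typically reached via multipath -->
    <device path='/dev/mapper/mpatha'/>
  </source>
  <target>
    <!-- where the activated LVs appear -->
    <path>/dev/san_vg</path>
  </target>
</pool>
```

After `virsh pool-start san_vg` on each host, each LV in the VG is a volume that can be handed to a guest as a disk, which is what makes live migration between the hosts possible.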
2007 Jul 09
0
Cannot Start CLVMD on second node of cluster
Hi, I'm trying to configure Red Hat GFS but have problems starting CLVMD on the second node of a 3-node cluster. I can start ccsd, cman, and fenced successfully, and clvmd on any other node the first time. ------------------------- CentOS 4.4 Kernel: 2.6.9-42.0.3.ELsmp [root at server2 ~]# cman_tool nodes Node Votes Exp Sts Name 1 1 3 M server3 2 1 3 M server4 3
2010 Mar 08
4
Error with clvm
Hi, I get this error when I try to start clvm (Debian Lenny); this is a clvm build with openais: # /etc/init.d/clvm restart Deactivating VG ::. Stopping Cluster LVM Daemon: clvm. Starting Cluster LVM Daemon: clvmCLVMD[86475770]: Mar 8 11:25:27 CLVMD started CLVMD[86475770]: Mar 8 11:25:27 Our local node id is -1062730132 CLVMD[86475770]: Mar 8 11:25:27 Add_internal_client, fd = 7
2019 Jan 11
1
CentOS 7 as a Fibre Channel SAN Target
For quite some time I've been using FreeNAS to provide services as a NAS over ethernet and SAN over Fibre Channel to CentOS 7 servers each using their own export, not sharing the same one. It's time for me to replace my hardware and I have a new R720XD that I'd like to use in the same capacity but configure CentOS 7 as a Fibre Channel target rather than use FreeNAS any further. I'm doing
2019 Jan 11
0
CentOS 7 as a Fibre Channel SAN Target
For quite some time I've been using FreeNAS to provide services as a NAS over ethernet and SAN over Fibre Channel to CentOS 7 servers each using their own export, not sharing the same one. It's time for me to replace my hardware and I have a new R720XD that I'd like to use in the same capacity but configure CentOS 7 as a Fibre Channel target rather than use FreeNAS any further. I'm doing
2020 Jan 04
0
CentOS 7 as a Fibre Channel SAN Target
In waiting, I tried CentOS 8 which was an even bigger bust. I wiped that clean and tried again with Fedora 31. Same darn error "Could not create Target in configFS". Anyone?? Thank you, Steffan Cline steffan at hldns.com 602-793-0014 On 1/2/20, 2:00 AM, "CentOS on behalf of Steffan Cline via CentOS" <centos-bounces at centos.org on behalf of centos at centos.org>
2009 Jun 05
1
DRBD+GFS - Logical Volume problem
Hi list. I am dealing with DRBD (+GFS as its DLM). GFS configuration needs a CLVMD configuration. So, after synchronizing my two /dev/drbd0 block devices, I start the clvmd service and try to create a clustered logical volume. I get this: On "alice": [root at alice ~]# pvcreate /dev/drbd0 Physical volume "/dev/drbd0" successfully created [root at alice ~]# vgcreate
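For `vgcreate` to produce a clustered VG on top of /dev/drbd0, LVM first has to be switched from its default local file locking to clvmd's cluster locking on every node; a sketch of the relevant lvm.conf knobs (option names are the stock ones, values are the usual clustered-setup choices):

```
# /etc/lvm/lvm.conf (excerpt)
global {
    # 1 = local file-based locking (the default);
    # 3 = clustered locking via clvmd
    locking_type = 3
    # never silently fall back to local locking on shared storage,
    # or two nodes could activate the same LV without coordination
    fallback_to_local_locking = 0
}
```

With clvmd restarted on both nodes and the cluster quorate, `vgcreate -cy <vg> /dev/drbd0` should then succeed.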
2009 Mar 12
5
Alternatives to cman+clvmd ?
I currently have a few CentOS 5.2 based Xen clusters at different sites. These are built around a group of 3 or more Xen nodes (blades) and some sort of shared storage (FC or iSCSI) carved up by LVM and allocated to the domUs. I am "managing" the shared storage (from the dom0 perspective) using cman+clvmd, so that changes to the LVs (rename/resize/create/delete/etc) are
2014 Oct 29
2
CentOS 6.5 RHCS fence loops
Hi guys, I'm using CentOS 6.5 as a guest on RHEV, with RHCS, for a clustered web environment. The environment: web1.example.com web2.example.com When the cluster gains quorum, web1 is rebooted by web2. When web2 comes back up, it is rebooted by web1. Does anybody know how to solve this "fence loop"? master_wins="1" is not working properly, qdisk also. Below the cluster.conf, I
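Fence loops in two-node RHCS clusters usually come down to both nodes racing to fence each other before membership settles after a reboot. Independent of master_wins/qdisk tuning, a generous `post_join_delay` gives the rebooting node time to rejoin instead of being shot again; a hedged cluster.conf excerpt (this shows the plain two-node variant without a quorum disk; with qdiskd, `two_node` stays 0 and the extra vote comes from the quorum disk instead):

```xml
<!-- cluster.conf excerpt -->
<cman two_node="1" expected_votes="1"/>
<!-- wait 60s after a node joins before fencing anything, so a
     freshly rebooted peer can finish joining the cluster -->
<fence_daemon post_join_delay="60" post_fail_delay="0"/>
```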
2004 Aug 06
7
Problem with Streams > 160KBit/s
Hi, has anybody set up an icecast server delivering streams > 160 kbit/s? I tried that with liveice and shout, but in both cases the stream stutters very much. I have eliminated the net (streaming to localhost), the CPU (a P3/800 should be fast enough), and lame (it can encode 320 kbit/s three times faster than the wav's duration). So the problem should be icecast. Can anybody please enlighten me?
2014 Feb 12
3
Right way to do SAN-based shared storage?
I'm trying to set up SAN-based shared storage in KVM, key word being "shared" across multiple KVM servers for a) live migration and b) clustering purposes. But it's surprisingly sparsely documented. For starters, what type of pool should I be using?
2009 Feb 05
2
How to implement HA and Live Migration with a SAN?
Hello, I've configured two Xen hosts (dom0) sharing a LUN on a SAN. My first objective is to run different domUs on the two hosts, and implement live migration between them. Subsequently, I'd like to implement HA, so if a Xen host goes down, its domUs will be restarted on the other one. What's the best way to obtain this? I think I need a cluster file system to
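For the live-migration half of this, each dom0's xend must be told to accept relocation requests from the other; a minimal xend-config.sxp sketch (the subnet in the allow list is an assumption for illustration):

```
# /etc/xen/xend-config.sxp (excerpt)
(xend-relocation-server yes)
(xend-relocation-port 8002)
# space-separated regexes matched against the connecting peer
(xend-relocation-hosts-allow '^localhost$ ^192\\.168\\.1\\.[0-9]+$')
```

After restarting xend on both hosts, `xm migrate --live <domU> <other-host>` moves a running guest; automatic restart on host failure (the HA half) still needs a cluster manager on top.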
2007 Jan 19
0
Re: [osol-discuss] Possibility to change GUID zfs pool at import
Hello Nico, Friday, January 19, 2007, 7:53:06 PM, you wrote: NVdM> Hi, NVdM> Looking if someone already found a solution, or workaround, to change the GUID of a zfs pool. NVdM> Explain me some more in depth, by use of tools like ShadowImage NVdM> on storage Arrays like the Sun Storagetek 99xx (but also for SUN NVdM> related storage Arrays or IBM, EMC and other storage vendors)
2006 Apr 12
2
bootup error - undefined symbol: lvm_snprintf after yum update
This is an x86_64 system that I just updated from 4.2 to 4.3 (including the csgfs stuff). When I watch the bootup on the console, I see an error: lvm.static: symbol lookup error: /usr/lib64/liblvm2clusterlock.so: undefined symbol: lvm_snprintf This error comes immediately after the "Activating VGs" line, so it appears to be triggered by the vgchange command in the clvmd startup
2006 Nov 06
1
Segmentation fault on LVM
Hi all, I have installed RHCS on a CentOS 4.4 server with clvmd. When the server reboots, it displays a segmentation fault at line 504 in the /etc/rc.d/rc.sysinit file, here: if [ -x /sbin/lvm.static ]; then 500 if /sbin/lvm.static vgscan --mknodes --ignorelockingfailure > /dev/null 2>&1 ; then 501 action $"Setting up Logical Volume Management:"
2004 Feb 02
1
4 samba domains/one ldap backend/2 methods/which to use?
In both methods tried, we can't successfully add XP machines to the domain at the remote locations. The main Samba is on our main campus, behind a 10.10 internal LAN; the remote Sambas are on remote campuses, behind 10.xx networks (10.11, 10.12), all connected with our internal LAN via VPN. ###################################################################### Method 1) ALL PDCs, using same ldap
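Whichever method is used, every PDC ends up pointing at the same LDAP tree; a hedged smb.conf excerpt for one remote-campus PDC (the workgroup name, suffixes, and LDAP host below are illustrative, not taken from the original post):

```
# smb.conf (excerpt) on a remote-campus PDC against the central LDAP
[global]
    workgroup = CAMPUS11
    domain logons = yes
    domain master = yes
    passdb backend = ldapsam:ldap://10.10.0.5
    ldap suffix = dc=example,dc=edu
    ldap admin dn = cn=samba,dc=example,dc=edu
    ldap user suffix = ou=People
    ldap machine suffix = ou=Computers
```

The usual failure mode for "can't join XP machines" in this layout is the machine-account suffix or the domain SID differing between the PDCs, so that's worth checking with `net getlocalsid` on each.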
2005 Oct 14
1
Does anyone Know if tha avaya 4621 IP phone work wiht asteisk?
Does anyone know if the Avaya 4621 IP phone works with Asterisk? If it works, which features work? Thanks Ignacio
2005 Feb 15
6
xen-testing and redhat-cluster devel
Hi, I'm using Xen on a two-node Red Hat cluster (CVS devel version), using LVM as the storage backend. The Red Hat cluster is used to synchronize LVM metadata (using clvmd) and as storage for domain configs and dom-U kernels (with GFS). The latest version of redhat-cluster works with xen-2.0.4, but not with xen-2.0-testing: ccsd fails to start on 2.0-testing. Anyone know what the problem is? I
2009 Feb 13
5
GFS + Restarting iptables
Dear list, I have one last little problem with setting up a cluster: my GFS mount hangs as soon as I do an iptables restart on one of the nodes. First, let me describe my setup: - 4 nodes, all running an updated CentOS 5.2 installation - 1 Dell MD3000i iSCSI SAN - All nodes are connected via Dell's supplied RDAC driver Everything runs stable when the cluster is started (tested