Displaying 20 results from an estimated 8000 matches similar to: "fencing timeout- increasing min time"
2012 Aug 02
1
XEN HA Cluster with LVM fencing and live migration? The right way?
Hi,
I am trying to build a rock-solid XEN high-availability cluster. The
platform is SLES 11 SP1 running on 2 HP DL585s, both connected through
fiber channel HBAs to the SAN (HP EVA).
XEN is running smoothly and I'm even amazed by the live migration
performance (this is the first time I have had the chance to try it in such a
nice environment).
XEN apart, the SLES heartbeat cluster is
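For reference, on that generation of SLES/Xen a live migration is typically driven with the xm toolstack; a minimal sketch, assuming hypothetical host names node1/node2 and a hypothetical domain vm01, with relocation enabled in xend-config.sxp:

# /etc/xen/xend-config.sxp on each host must accept relocation from its peer
# (node1/node2 are hypothetical):
#   (xend-relocation-server yes)
#   (xend-relocation-hosts-allow '^node1$|^node2$')
# Live-migrate the running domain vm01 (also hypothetical) to the other node:
xm migrate --live vm01 node2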
2010 Jan 18
1
Getting Closer (was: Fencing options)
One more follow on,
The combination of kernel.panic=60 and kernel.printk=7 4 1 7 seems to
have netted the culprit:
E01-netconsole.log:Jan 18 09:45:10 E01 (10,0):o2hb_write_timeout:137
ERROR: Heartbeat write timeout to device dm-12 after 60000
milliseconds
E01-netconsole.log:Jan 18 09:45:10 E01
(10,0):o2hb_stop_all_regions:1517 ERROR: stopping heartbeat on all
active regions.
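For reference, the two settings named above are plain sysctls; a minimal sketch of applying them (nothing here beyond what the post already names):

# Reboot 60 seconds after a panic, and raise console log verbosity so
# netconsole captures everything:
sysctl -w kernel.panic=60
sysctl -w kernel.printk="7 4 1 7"
# To persist across reboots, the same keys go in /etc/sysctl.conf:
#   kernel.panic = 60
#   kernel.printk = 7 4 1 7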
2007 Apr 27
1
has anyone experienced problems with ocfs2 1.2.5-1 using Emulex LP10000 HBA cards and EMC CX700 SAN's?
Does anyone have any experience with Emulex HBA cards (LP10000) using OCFS2,
Linux AS4 U4 x86_64 AMD? I'm trying to find out whether this is a verified
combination and whether anyone has successfully used it. I have that
hardware/software combination, and am experiencing
stability/performance/panic/hang issues with OCFS2.
2007 Aug 08
0
pcifront (CONFIG_XEN_PCIDEV_FRONTEND=m) support in RHEL 4.5 x86 Dom U
Dear All,
The production server supports Intel Virtualization Technology. The processor is
an Intel Xeon 1.86 GHz quad core, with 8 GB of DDR2 memory.
There is also an Emulex LightPulse Fiber Channel HBA.
The host operating system (Dom 0) is RHEL 5 x86 with Xen Virtualization
technology. Dom 0 kernel is 2.6.18-8.el5xen. I have recompiled the Dom 0
kernel so that pciback
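The snippet cuts off, but the usual next step with a recompiled pciback is to hide the HBA from Dom 0 so it can be assigned to the Dom U; a minimal sketch, with a hypothetical PCI address 0000:0a:00.0:

# Built-in pciback: hide the device on the Dom 0 kernel command line:
#   pciback.hide=(0000:0a:00.0)
# pciback as a module: unbind the device and hand it to pciback instead:
modprobe pciback
echo 0000:0a:00.0 > /sys/bus/pci/devices/0000:0a:00.0/driver/unbind
echo 0000:0a:00.0 > /sys/bus/pci/drivers/pciback/new_slot
echo 0000:0a:00.0 > /sys/bus/pci/drivers/pciback/bind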
2005 Feb 18
2
CentOS-4 RC1 (i386) Bugfixes
All,
There are 3 bugfix updates for CentOS-4 (RC1).
1. mod_perl needed to be recompiled after the RH errata for perl was
incorporated, but it was not flagged by the perl update.
( https://bugzilla.caosity.org/show_bug.cgi?id=803 ) (thanks Joshua
Hirsh)
2. httpd identified itself as Apache/2.0.52 (Red Hat) Server instead of
CentOS ( https://bugzilla.caosity.org/show_bug.cgi?id=806 ) (thanks
2014 Apr 20
1
Ext4 mess ... and EXT4-fs error (device sdc): ext4_mb_generate_buddy
Hi,
I'm faced with very strange behaviour:
a CentOS 6.5 server with a hardware iSCSI HBA from Emulex (OneConnect), most
recent drivers and firmware from Emulex installed.
Directly attached is a 10G iSCSI storage array from QSan. Two RAID volumes, RAID
5, 8 disks each at 2 TB, so 14 TB per logical RAID volume.
Both volumes are logged in and usable as sdb and sdc on the server.
Formatted with ext4 -m 0 -v
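The format command is truncated above; spelled out it would presumably be something like this (device names taken from the post):

# Verbose mkfs with 0% of blocks reserved for root, as described above:
mkfs.ext4 -m 0 -v /dev/sdb
mkfs.ext4 -m 0 -v /dev/sdc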
2008 Aug 29
7
FC-HBA assigned to guest domain does not work.
I assigned FC-HBA to guest domain, but it did not work.
The FC-HBA seems to write to its internal memory, which is mapped into host
memory space via PCI transactions. But there is no mapping in the IOMMU's
page table, so a page fault occurs in the IOMMU.
I think that an MMIO resource mapped via the p2m table should be mapped via the
IOMMU's page table too. In other words, XEN_DOMCTL_memory_mapping
2009 Nov 20
1
Using local disk for cache on an iSCSI zvol...
I'm just wondering if anyone has tried this, and what the performance
has been like.
Scenario:
I've got a bunch of v20z machines, with 2 disks. One has the OS on it,
and the other is free. As these are disposable client machines, I'm not
going to mirror the OS disk.
I have a disk server with a striped mirror zpool, carved into a bunch of
zvols, each exported via
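One way to do what the post describes is to attach the free local disk as an L2ARC cache device to the pool built on the iSCSI zvol; a minimal sketch, where the pool and disk names are hypothetical:

# clientpool sits on the iSCSI LUN; c1t1d0 is the spare local disk.
zpool add clientpool cache c1t1d0
# The disk should now show up under a "cache" heading:
zpool status clientpool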
2013 Oct 22
0
HP proliant bl460c G7 on CentOS
Hi list,
I have installed CentOS on an HP ProLiant blade server, a BL460c G7; after
installing I've seen that the network is not working properly. It recognises
around 8 interfaces (?); mii-tool says that there is no link present
on any of them, and there are only 2 NICs per bay.
I've searched the HP support page but there's only one driver, for storage.
Has anybody had the same
2004 Sep 14
6
initrd / initramfs future
Hello,
I would like to know if initrd is here to stay, now that klibc and
initramfs are ready.
As the multipath-tools maintainer, I'm facing the choice to
1) put the multipath configuration tool in the initrd
* dynamic binary is possible
* storage hba drivers as modules loaded
* no klibc limitations (no mntent for libsysfs ...)
2) put the multipath configuration tool in the initramfs
*
2015 Apr 14
1
HBA enumeration and multipath configuration
# cat /etc/redhat-release
CentOS Linux release 7.1.1503 (Core)
# uname -r
3.10.0-123.20.1.el7.x86_64
Hi,
We use iSCSI over a 10G Ethernet adapter and SRP over an InfiniBand adapter to provide multipathing
to our storage:
# lspci | grep 10-Gigabit
81:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
81:00.1 Ethernet controller: Intel Corporation
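With both transports logged in, the usual check is that dm-multipath has grouped one path per transport under each LUN; a minimal sketch (no output shown, since device naming is site-specific):

# Each multipath map should list one path via iSCSI and one via SRP:
multipath -ll
# Show which transport each SCSI device arrived on:
lsscsi -t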
2009 Sep 25
1
OCFS2 Upgrade
I have inherited an Oracle system which we have installed on a client
site with the specs below. There was a kernel upgrade done by the patch
management team at their site which broke the cluster. Once the kernel
was reverted, the system came back up. I am somewhat of a newbie to ocfs,
etc., and am trying to verify something. To do a kernel upgrade, the ocfs2
(CRS) and ASM would have to be upgraded
2009 Sep 27
0
SUMMARY : multipath using defaults rather than multipath.conf contents for some devices (?) - why ?
The reason for the behaviour observed below turned out to be that the
device entry in /etc/multipath.conf was inadvertently appended *after* the devices
section, rather than inside it - so that we had
#devices {
#       device {
#               blah blah
#       }    (file has a bunch of defaults commented out)
#       etc
#}
#
#
device {
        our settings
}
*rather than*
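(The continuation is cut off, but from the explanation above the intended layout is clearly the device block nested inside the devices section, along these lines:)

devices {
        device {
                our settings
        }
}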
2013 Jan 07
5
mpt_sas multipath problem?
Greetings,
We're trying out a new JBOD here. Multipath (mpxio) is not working,
and we could use some feedback and/or troubleshooting advice.
The OS is oi151a7, running on an existing server with a 54TB pool
of internal drives. I believe the server hardware is not relevant
to the JBOD issue, although the internal drives do appear to the
OS with multipath device names (despite the fact
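For context, on that platform mpxio is normally toggled per driver with stmsboot; a minimal sketch, assuming the mpt_sas driver named in the subject:

# Enable Solaris I/O multipathing for mpt_sas-attached devices, then reboot:
stmsboot -D mpt_sas -e
# After the reboot, list the old-name/new-name mapping of migrated devices:
stmsboot -L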
2006 May 18
0
Node crashed after remove a path
Hi,
I have a 2-node cluster on 2 Dell PowerEdge 2650.
When a device path was removed, both nodes crashed.
Any help would be appreciated.
Thanks!
Roger
---
Configuration:
Oracle: 10.2.0.1.0 x86
Oracle home: on OCFS2 shared with multipath
Oracle datafiles: OCFS2 shared with multipath
cat redhat-release
Red Hat Enterprise Linux ES release 4 (Nahant Update 2)
uname -a
Linux sqa-pe2650-40
2007 Oct 23
0
multipath using 2 NICs (or HBA?)
I'm told that we cannot do Multipath I/O on our iSCSI SAN on RHEL
with 2 network cards; I could use 1 network card, but would need an HBA.
Is this true? Do I need an HBA, or can I do Multipath using 2 NICs?
We're running RHEL 4 and CentOS 4 and 5 servers on this network.
I have been reading through the device-mapper documentation, but I
have not found anything (unless I'm not clear on
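For what it's worth, two plain NICs can present two iSCSI paths if each gets its own open-iscsi iface, after which dm-multipath sees two paths to the same LUN; a minimal sketch, assuming a reasonably recent open-iscsi and hypothetical names (eth0/eth1, portal 192.168.0.10):

# Bind one iSCSI iface to each NIC:
iscsiadm -m iface -I iface0 --op=new
iscsiadm -m iface -I iface0 --op=update -n iface.net_ifacename -v eth0
iscsiadm -m iface -I iface1 --op=new
iscsiadm -m iface -I iface1 --op=update -n iface.net_ifacename -v eth1
# Discover through both ifaces and log in; each session is one path:
iscsiadm -m discovery -t st -p 192.168.0.10 -I iface0 -I iface1
iscsiadm -m node -L all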
2007 Apr 15
1
Multipath-root (mpath) problems with CentOS 5
Hi list!
I have a server with dual port Qlogic iSCSI HBA. I set up the same LUN for
both ports, and boot the CentOS installer with "linux mpath".
The installer detects multipathing fine, and creates an mpath0 device for the root disk.
Installation goes fine, and the system boots up and works fine after the
install from the multipath root device.
After install the setup is like this:
LUN 0 on
2017 Jan 03
2
Result: hostbyte=DID_ERROR driverbyte=DRIVER_OK
I am trying to copy ~7TB of data using rsync between two servers in the same
data center; on the backend it's using EMC VMAX3.
After copying ~30-40GB of data, multipath starts failing:
Dec 15 01:57:53 test.example.com multipathd:
360000970000196801239533037303434: Recovered to normal mode
Dec 15 01:57:53 test.example.com multipathd:
360000970000196801239533037303434: remaining active paths: 1
Dec 15
2009 Sep 17
1
multipath using defaults rather than multipath.conf contents for some devices (?) - why ?
Hi all,
We have a RH Linux server connected to two HP SAN controllers, one an HSV200 (on the way out),
the other an HSV400 (on the way in), via a QLogic HBA.
/etc/multipath.conf contains this :
device {
        vendor                  "(COMPAQ|HP)"
        product                 "HSV1[01]1|HSV2[01]0|HSV300|HSV4[05]0"
        getuid_callout          "/sbin/scsi_id -g -u -s /block/%n"
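A quick way to see whether those settings were actually picked up is to dump the configuration multipath resolved; a minimal sketch (the device path is hypothetical):

# Print the merged configuration the daemon is really using:
multipathd -k"show config"
# Or trace, very verbosely, how one device's settings were chosen:
multipath -v3 /dev/sdb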
2005 Feb 01
1
Updates to CentOS-4Beta
1. There are updates to CentOS-4Beta for the i386 and x86_64 arches.
The following RPMS have been changed:
a. createrepo-0.4.2-1.noarch.rpm - This is an update from the upstream
maintainer.
b. yum-2.1.13-1.c4.noarch.rpm - This is an update from the upstream
maintainer.
c. firefox-1.0-6.centos4.3.i386.rpm - The original build did not strip
the library files of unnecessary symbols, causing