2007 Apr 15
1
Multipath-root (mpath) problems with CentOS 5
Hi list!
I have a server with a dual-port QLogic iSCSI HBA. I set up the same LUN for
both ports, and boot the CentOS installer with "linux mpath".
The installer detects multipathing fine, and creates an mpath0 device for the root disk.
Installation goes fine, and the system boots up and works fine after the
install from the multipath root device.
After install the setup is like this:
LUN 0 on
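A quick way to verify such a setup after boot is to check that both HBA ports show up as paths of the same map; a minimal check (output details omitted):
# multipath -ll
(both iSCSI ports should appear as paths under the one mpath0 map)
# dmsetup ls --tree
(shows mpath0 stacked on top of the two underlying SCSI path devices)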
2008 Jan 31
2
ISCSI help
I am fairly new to iSCSI and SAN technology, but having recently invested
in the technology I am trying to find out exactly what can and cannot
be manipulated, filesystem-wise, without requiring a reboot. I am using
the built-in software iSCSI initiator and multipathing in CentOS 5.1.
My steps so far.
Create 10GB volume on SAN
# iscsiadm -m session -R
# fdisk /dev/mapper/mpath0
# kpartx -a
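For what it's worth, the truncated kpartx step and the rest of the no-reboot sequence would look roughly like this sketch (the filesystem type and mount point are my own assumptions):
# kpartx -a /dev/mapper/mpath0
(creates /dev/mapper/mpath0p1 for the newly written partition)
# mkfs.ext3 /dev/mapper/mpath0p1
# mount /dev/mapper/mpath0p1 /srv/newvol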
2009 Oct 27
1
/etc/rc.local and /etc/fstab
Upon system boot, is it OK to mount OCFS2 filesystems from /etc/rc.local
rather than /etc/fstab?
Are there any downsides to using rc.local that you are aware of?
Example /etc/rc.local script:
#!/bin/sh
#
# This script will be executed *after* all the other init scripts.
# You can put your own initialization stuff in here if you don't
# want to do the full Sys V style init stuff.
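If rc.local is used, the line that would follow the stock header is just an ordinary mount call, e.g. (device and mount point invented):
mount -t ocfs2 /dev/sdb1 /u01
The usual fstab alternative is to tag the entry _netdev, so netfs mounts it only after networking and the o2cb stack are up:
/dev/sdb1  /u01  ocfs2  _netdev,defaults  0 0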
2008 Mar 04
0
Device-mapper-multipath not working correctly with GNBD devices
Hi all,
I am trying to configure a failover multipath between 2 GNBD devices.
I have a 4-node Red Hat Cluster Suite (RCS) cluster. Three of the nodes are used for
running services, one of them for central storage. In the future I am going to
introduce another machine for central storage. The two storage machines are
going to share/export the same disk. The idea is not to have a single point
of failure
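In case it helps, a failover-policy stanza in /etc/multipath.conf for the two GNBD paths might look roughly like this sketch (the wwid and alias are placeholders, and whether GNBD devices expose a usable wwid at all is something to verify first):
multipaths {
        multipath {
                wwid                 <wwid-of-the-shared-gnbd-disk>
                alias                gnbd-store
                path_grouping_policy failover
        }
}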
2010 May 27
2
Multipathing with Sun 7310
Dear list,
we have a relatively new Sun Storage 7310, to which we connect CentOS 5.5
servers (IBM LS21/LS41 blades) via Brocade switches over 4 Gbit FC. The blades
boot from SAN via qla2xxx, and have no hard disks at all. We want them to
use multipathing from the very beginning, so /boot and / are already seen
by multipathd. The problem is that the Sun 7310 has two storage heads which
run in
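For dual-head active/passive arrays of that era, the usual approach on CentOS 5 was to group paths by priority; a sketch of a device stanza (the vendor/product strings are guesses to be checked against the actual "multipath -ll" output):
devices {
        device {
                vendor               "SUN"
                product              "Sun Storage 7310"
                path_grouping_policy group_by_prio
                prio_callout         "/sbin/mpath_prio_alua /dev/%n"
                failback             immediate
        }
}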
2011 Jan 20
2
useless tools - tunefs.ocfs2, fsck.ocfs2
One of my ocfs2 filesystems has some errors.
1. fsck.ocfs2 informs me that: "I/O error on channel while reading
.." This was NOT TRUE - I was able to read and write the entire storage over
the network multiple times.
2. Because of CRC errors and a suggestion to disable metaecc, I ran
tunefs.ocfs2 --fs-features=nometaecc /dev/xxx
tunefs allocated 9.89 GB of virtual memory and 95% of
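Before rerunning a conversion like that, it can be worth confirming which features are actually enabled on the volume; one way, as a sketch:
# debugfs.ocfs2 -R "stats" /dev/xxx | grep -i feature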
2013 Nov 07
1
IBM Storwize V3700 storage - device names
Hello,
I have an IBM Storwize V3700 storage system, connected to 2 IBM x3550 M4 servers
via fibre channel. The servers have QLogic ISP2532-based 8Gb Fibre Channel
to PCI Express HBA cards and run CentOS 5.10.
When I export a volume to the servers, each of them sees the volume
twice, i.e. /dev/sdb and /dev/sdc, with the same size.
Previously I have installed many systems with the IBM DS3500 series of
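Seeing the same LUN twice as separate sd devices is the classic sign that dm-multipath is not yet set up; a rough sketch of enabling it on CentOS 5:
# yum install device-mapper-multipath
(edit /etc/multipath.conf - at minimum, make sure the device is not blacklisted)
# chkconfig multipathd on
# service multipathd start
# multipath -ll
(sdb and sdc should now appear as two paths of a single mpath device)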
2011 Sep 02
5
Linux kernel crash due to ocfs2
Hello,
we have a pair of IBM P570 servers running RHEL5.2
kernel 2.6.18-92.el5.ppc64
We have Oracle RAC on ocfs2 storage
ocfs2 is 1.4.7-1 for the above kernel (downloaded from the Oracle OSS site)
Recently both servers have been crashing with the following error:
Assertion failure in journal_dirty_metadata() at
fs/jbd/transaction.c:1130: "handle->h_buffer_credits > 0"
kernel BUG in
2008 Sep 02
1
[PATCH] ocfs2: Fix a bug in direct IO read.
ocfs2 will become read-only if we try to read bytes past
the end of i_size. This can be easily reproduced by the following steps:
1. mkfs an ocfs2 volume with bs=4k, cs=4k and nosparse.
2. Create a small file (say, less than 100 bytes); the file will be
allocated 1 cluster.
3. Read 8196 bytes from the kernel using O_DIRECT, which exceeds the limit.
4. The ocfs2 volume
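Roughly the same reproduction can be driven from userspace with dd (a sketch; dd's iflag=direct opens the file O_DIRECT, and an aligned 8192-byte read likewise passes i_size on a sub-100-byte file):
# mkfs.ocfs2 -b 4K -C 4K --fs-features=nosparse /dev/sdb1
# mount -t ocfs2 /dev/sdb1 /mnt
# echo tiny > /mnt/smallfile
# dd if=/mnt/smallfile of=/dev/null iflag=direct bs=8192 count=1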
2009 Jan 06
1
[PATCH] ocfs2: Add statistics for the checksum and ecc operations.
It would be nice to know how often we get checksum failures. Even
better, how many of them we can fix with the single bit ecc. So, we add
a statistics structure. The structure can be installed into debugfs
wherever the user wants.
For ocfs2, we'll put it in the superblock-specific debugfs directory and
pass it down from our higher-level functions. The stats are only
registered with
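Assuming the stats land in the usual per-superblock directory, inspecting them would look something like this (the blockcheck file name matches what mainline ocfs2 later exposed, but treat the exact path as an assumption):
# mount -t debugfs none /sys/kernel/debug
# cat /sys/kernel/debug/ocfs2/<UUID>/blockcheck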
2013 Feb 27
2
ocfs2 bug reports, any advices? thanks
Hi,
I set up two nodes, 192.168.20.20 and 192.168.20.21.
The OS is Ubuntu 12.04 with kernel 3.2:
root at Server21:~# uname -a
Linux Server21 3.2.0-23-generic #36-Ubuntu SMP Tue Apr 10 20:39:51 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux
Server20 rebooted because of a disconnection from the iSCSI SAN, so Server20 recovered resource locks for Server21.
Server20:
Feb 27 09:29:31 Server20 kernel:
2007 Apr 17
1
mount.ocfs2 blah
Hi,
In the ongoing drama surrounding this upgrade, I have encountered
another issue that I am currently unable to resolve.
mount.ocfs2 /dev/sdb1 /mnt
mount.ocfs2: Stale NFS file handle while mounting /dev/sdb1 on /mnt.
Check 'dmesg' for more information on this error.
dmesg:
(3701,1):ocfs2_populate_inode:240 ERROR: file entry generation does not
match superblock!
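A generation mismatch like this usually means the on-disk inode no longer matches the superblock's generation; the standard first step (my suggestion, not something from the thread) would be a forced check with the volume unmounted on every node:
# fsck.ocfs2 -f /dev/sdb1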
2009 Apr 30
0
[PATCH] ocfs2: Add statistics for the checksum and ecc operations.
It would be nice to know how often we get checksum failures. Even
better, how many of them we can fix with the single bit ecc. So, we add
a statistics structure. The structure can be installed into debugfs
wherever the user wants.
For ocfs2, we'll put it in the superblock-specific debugfs directory and
pass it down from our higher-level functions. The stats are only
registered with
2017 Dec 18
2
Upgrading from Gluster 3.8 to 3.12
Hi,
I have a cluster of 10 servers all running Fedora 24 along with
Gluster 3.8. I'm planning on doing rolling upgrades to Fedora 27 with
Gluster 3.12. I saw the documentation and did some testing but I
would like to run my plan through some (more?) educated minds.
The current setup is:
Volume Name: vol0
Distributed-Replicate
Number of Bricks: 2 x (2 + 1) = 6
Bricks:
Brick1:
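For a replicated volume like this, the usual per-node rolling sequence looks roughly like the following sketch (the heal check between nodes is the critical part; the exact upgrade command depends on how you move between Fedora releases):
# systemctl stop glusterd
# killall glusterfsd glusterfs
(upgrade the OS and gluster packages, e.g. via the dnf system-upgrade path)
# systemctl start glusterd
# gluster volume heal vol0 info
(wait until pending heal entries drop to 0 before touching the next node)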
2013 Mar 20
2
Writing to the data brick path instead of fuse mount?
So I noticed that if I create files in the data brick path, the files travel to
the other hosts too. Can I use the data brick path directly instead of a fuse
mount? I'm running two machines with two replicas. What happens
if I do stripes? Some machines are clients as well as servers. Thanks!
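For reference, going through a proper client mount (rather than the brick directory, which bypasses glusterfs entirely) looks like this; the server name is a placeholder:
# mount -t glusterfs server1:/vol0 /mnt/gluster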
2014 Aug 21
1
Cluster blocked, so we have to reboot all nodes to recover. Are there any patches for it? Thanks.
Hi, everyone
We have hit this blocked cluster several times, and the log is always the same; we have to reboot all the nodes of the cluster to recover.
Is there any patch that fixes this bug?
[<ffffffff817539a5>] schedule_timeout+0x1e5/0x250
[<ffffffff81755a77>] wait_for_completion+0xa7/0x160
[<ffffffff8109c9b0>] ? try_to_wake_up+0x2c0/0x2c0
[<ffffffffa0564063>]
2008 Feb 26
0
mapper device perms on reboot
How can I get the mapper device permissions set on reboot?
I attempted this...
# cat /etc/udev/permissions.d/50-udev.permissions | grep mapper
mapper/mpath*:oracle:dba:0660
But it did not seem to work...
# ls -l /dev/mapper/mpath*
brw-rw---- 1 root disk 253, 2 Feb 26 20:56 /dev/mapper/mpath0
brw-rw---- 1 root disk 253, 5 Feb 26 20:56 /dev/mapper/mpath0p1
brw-rw---- 1 root disk
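On CentOS 5 the /dev/mapper nodes are created by dmsetup/device-mapper rather than straight from kernel uevents, which is likely why the permissions file is ignored. A udev-rule workaround that matches the map name via dmsetup is sketched below (the file name and rule are assumptions to verify, not a known-good recipe):
# /etc/udev/rules.d/99-oracle-mpath.rules
KERNEL=="dm-*", PROGRAM="/sbin/dmsetup info -c --noheadings -o name -j %M -m %m", RESULT=="mpath0p1", OWNER="oracle", GROUP="dba", MODE="0660"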
2008 Nov 14
1
kickstart install on SAN with multipath
Using CentOS 5.1, though with a few hours' work I could update
to 5.2. I can install to SAN with a single path no problem,
but I'd like to be able to use dm-multipath. From the kickstart
docs it seems this is supported, but there is no information
as to what the various options mean:
http://www.centos.org/docs/5/html/5.1/Installation_Guide/s1-kickstart2-options.html
--
multipath (optional)
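For what it's worth, the option as listed in that guide takes a name, a device and a rule; a sketch of how it appears to be meant to be used (the semantics of --rule are exactly what the docs leave unexplained, so treat this as a guess):
multipath --name=mpath0 --device=/dev/sda --rule=failover
part / --fstype=ext3 --ondisk=mpath0 --size=8192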