Displaying 20 results from an estimated 1000 matches similar to: "OCFS2 and Cloning"
2007 Jul 07
2
Adding new nodes to OCFS2?
I looked around and found an older post which seems not applicable anymore. I
have a cluster of 2 nodes right now, which has 3 OCFS2 file systems. All
the file systems were formatted with 4 node slots. I added the two new
nodes (by hand, by ocfs2console and o2cb_ctl), so my
/etc/ocfs2/cluster.conf looks right:
node:
ip_port = 7777
ip_address = 192.168.201.1
number = 0
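For reference, registering an extra node means another stanza of the same shape
in /etc/ocfs2/cluster.conf on every node (the name, number and IP below are
illustrative, not taken from the post), while the number of slots on the
filesystem itself is a separate setting changed with tunefs.ocfs2 -N, which on
older releases needs the filesystem unmounted:

node:
ip_port = 7777
ip_address = 192.168.201.3
number = 2
name = node3
cluster = mycluster

# grow the on-disk slot count only if more than 4 nodes will ever mount it
tunefs.ocfs2 -N 6 /dev/sdX1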
2008 Jan 23
1
OCFS2 DLM problems
Hello everyone, once again.
We are running into a problem which has now shown up 2 times, possibly 3
(once the systems looked different). The environment is 6 HP DL360/DL380 G5
servers with eth0 being the public interface, eth1 and bond0 (eth2 and eth3)
used for Clusterware, and bond0 also used for OCFS2. The bond0 interface is
in active/passive mode. There are no network error counters showing and
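When chasing a problem like this, the bonding and slave link state is usually
worth capturing first; a couple of standard checks, assuming the interface
names described above:

cat /proc/net/bonding/bond0   # active/backup status, failover count, per-slave state
ethtool eth2                  # link state and negotiated speed of each slave
ethtool eth3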
2007 Jul 14
1
Kernel panic in ext3:dx_probe, help needed
This may or may not be ext3 related, but I am trying to find any pointers
which might help me. I have a number of HP ProLiant DL380 G5 servers with a
P400 controller and also two qla2400 cards. The OS is RedHat EL4 U5 x86_64.
Every time during reboot these systems panic after the last umount, I believe
before the cciss driver is unloaded. The last messages I am able to see are:
md: stopping
2008 Oct 22
2
Another node is heartbeating in our slot! errors with LUN removal/addition
Greetings,
Last night I manually unpresented and deleted a LUN (a SAN snapshot)
that was presented to one node in a four node RAC environment running
OCFS2 v1.4.1-1. The system then rebooted with the following error:
Oct 21 16:45:34 ausracdb03 kernel: (27,1):o2hb_write_timeout:166 ERROR:
Heartbeat write timeout to device dm-24 after 120000 milliseconds
Oct 21 16:45:34 ausracdb03 kernel:
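The 120000 milliseconds in that message is the o2cb disk heartbeat timeout,
which is derived from O2CB_HEARTBEAT_THRESHOLD as (threshold - 1) * 2 seconds;
a sketch of the relevant /etc/sysconfig/o2cb values (numbers illustrative,
check your own configuration):

O2CB_HEARTBEAT_THRESHOLD=61   # (61 - 1) * 2 s = 120 s, matching the timeout above
O2CB_IDLE_TIMEOUT_MS=30000    # network idle timeout, separate from the disk heartbeat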
2010 May 21
2
fsck.ocfs2 using huge amount of memory?
We are setting up 2 new EL5 U4 machines to replace our current database servers running our demo environment. We use 3Par SANs and their snap clone options. The current production system we snap clone from is EL4 U5 with ocfs2 1.2.9; the new servers have ocfs2 1.4.3 installed. Part of the refresh process is to run fsck.ocfs2 on the volume to recover, but right now as I am trying to run it on our
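For context, the refresh step in question is normally just a forced check of
the cloned volume, something along these lines (the device path is
illustrative):

fsck.ocfs2 -fy /dev/mapper/demo_db   # -f forces a full check, -y answers yes to any repairs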
2007 Jul 29
1
6 node cluster with unexplained reboots
We just installed a new cluster with 6 HP DL380g5, dual single port Qlogic 24xx HBAs connected via two HP 4/16 Storageworks switches to a 3Par S400. We are using the 3Par recommended config for the Qlogic driver and device-mapper-multipath, giving us 4 paths to the SAN. We do see some SCSI errors where DM-MP is failing a path after getting a 0x2000 error from the SAN controller, but the path gets put
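A quick way to see whether those path failures line up with the reboots is to
watch the multipath state on each node; a hedged example (map names will be
site-specific):

multipath -ll   # lists each LUN's 4 paths and flags any that are failed or faulty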
2008 Oct 03
2
OCFS2 with Loop device
Hi there,
I am trying to set up OCFS2 with loop device /dev/loop0.
I have 4 servers running SLES10 SP2.
Internal IPs: 192.168.55.1, .2, .3 and .6
my cluster.conf:
--------------------------------------------
node:
ip_port = 7777
ip_address = 192.168.55.1
number = 0
name = www
cluster = cawww
node:
ip_port = 7777
ip_address = 192.168.55.2
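For the loop-device part, the usual sequence looks roughly like this sketch
(file path, size and mount point are assumptions, not from the post); note that
a loop device only gives a shared filesystem if the backing file itself lives
on storage all 4 nodes can see:

dd if=/dev/zero of=/srv/ocfs2.img bs=1M count=2048   # 2 GB backing file
losetup /dev/loop0 /srv/ocfs2.img                    # attach it to /dev/loop0
mkfs.ocfs2 -N 4 -L cawww /dev/loop0                  # 4 node slots and a volume label (label is arbitrary)
mount -t ocfs2 /dev/loop0 /mnt/shared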
2008 Sep 25
1
ocfs2 filesystem seems out of sync
Hi there
I recently installed an OCFS2 filesystem on our FC-SAN. Everything
seemed to work fine and I could read & write the filesystem from both
servers that are mounting it. After a while though, writes coming from
one node do not appear on the other node and vice versa.
I am not sure what's causing this, and I am not very experienced at debugging
filesystems. If anybody has any
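The usual first checks in this situation are whether the cluster stack is
online on both servers and whether each node actually sees the other's mount;
a hedged sketch (device name illustrative):

/etc/init.d/o2cb status        # cluster stack and heartbeat state on each node
mounted.ocfs2 -f /dev/sdb1     # which nodes the filesystem thinks have it mounted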
2006 Jul 20
4
Problems under Redhat EL3 and ext3
I am running into performance issues with ext3. Historically we had our
image files (pictures of cars, currently 5.3 million) subdivided into a
directory structure [0-9]/[0-9]/[0-9]/[0-9], where we would take the
first 4 letters/numbers of the file name and use that to put it into
this structure. Letters [a-cA-C] would become a 0, [d-fD-F] a 1, etc. As
the file names used to be based on VIN
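A minimal bash sketch of the bucketing scheme described there (the grouping
beyond the letters actually mentioned is an assumption: three letters per
digit, digits kept as-is):

f="ABC1234567.jpg"                               # illustrative file name
key=$(printf '%s' "$f" | cut -c1-4 | \
      tr 'a-zA-Z' '0001112223334445556667778800011122233344455566677788')
dir="${key:0:1}/${key:1:1}/${key:2:1}/${key:3:1}"   # here: 0/0/0/1
echo "/images/$dir/$f"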
2008 Aug 30
1
Input and suggestions: Mail mirror setup
Hello, everyone.
To make life for our operations team easier I am looking at setting up a
mail mirror of any email
which gets sent from certain machines. My current ideas go along the
following lines:
Each machine will use the Postfix "always_bcc" option and set it to an
address like mailmirror at mailmirror.openlane.com.
On mailmirror I am looking at setting up Dovecot with public
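The Postfix side of that plan is a single parameter; roughly (the address is
the one mentioned above, everything else is an assumption):

postconf -e 'always_bcc = mailmirror@mailmirror.openlane.com'   # copy of every message sent from this box
postfix reload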
2008 Apr 16
0
EXT3 and SAN Snap Shot, Best practice?
As RedHat has a limited choice of file systems it supports, I have a
need to use EXT3 together with Oracle and a SAN SnapShot (3Par
Snapclone). I was wondering if anyone could give me some feedback as to
the "best" method to do that.
So far I am thinking:
Put Oracle into Backup mode
Run sync (or multiple times)
Execute Snapshot command on SAN (takes less than 1 second).
Take Oracle
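A rough sketch of that sequence, assuming a 10g-or-later database where
ALTER DATABASE BEGIN BACKUP is available (older releases need per-tablespace
BEGIN/END BACKUP), with the array command left abstract:

sqlplus -s "/ as sysdba" <<'EOF'
ALTER DATABASE BEGIN BACKUP;
EOF
sync; sync                      # flush dirty ext3 buffers down to the LUN
# run the 3Par snapclone command here (array-specific, reportedly sub-second)
sqlplus -s "/ as sysdba" <<'EOF'
ALTER DATABASE END BACKUP;
EOF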
2010 Jul 14
2
tunefs.lustre --print fails on mounted mdt/ost with mmp
Just checking to be sure this isn't a known bug or problem. I couldn't
find a bz for this, but it would appear that tunefs.lustre --print fails
on a Lustre MDT or OST device if mounted with MMP.
Is this expected behavior?
TIA
mds1-gps:~ # tunefs.lustre --print /dev/mapper/mdt1
checking for existing Lustre data: not found
tunefs.lustre FATAL: Device /dev/mapper/mdt1 has not
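Whether MMP is really enabled on the target can be double-checked from the
ldiskfs side; a hedged example using the same device path as above:

dumpe2fs -h /dev/mapper/mdt1 2>/dev/null | grep -i -e 'features' -e 'mmp'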
2009 Jul 01
1
mounting a snapshot for backup.
Hi
I have an ocfs2 volume (1.4.1) on lvm2, which is made available on 2 nodes via the dual-primary DRBD running on top of it. I wanted to back up my volume, so naturally I made an LVM2 snapshot of it and tried to mount it for the backup. However, I got the following error in dmesg:
[319981.478168] (23483,0):ocfs2_fill_super:700 ERROR: Unable to create per-mount debugfs root.
I then searched
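That particular error is commonly hit when the snapshot still carries the same
UUID as the live volume, so the per-mount debugfs directory (named after the
UUID) already exists; a hedged sketch of the usual workaround, assuming your
ocfs2-tools version has the -U/--uuid-reset option (LV names illustrative):

tunefs.ocfs2 -U /dev/vg0/dbvol_snap                   # give the clone a fresh UUID
mount -t ocfs2 -o ro /dev/vg0/dbvol_snap /mnt/backup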
2009 Sep 17
2
stop tunefs.ocfs2
Hi all,
I upgraded OCFS2 from 1.2 to 1.4. After the update I launched tunefs.ocfs2 to enable the new ocfs2 features (sparse files and unwritten extents). tunefs.ocfs2 has now been running for 2 days (12T partition) and I need my system back in production; can I safely abort tunefs.ocfs2?
thanks
Nicola
2011 Jan 20
2
useless tools - tunefs.ocfs2,fsck.ocfs2
One of our ocfs2 filesystems has some errors.
1. fsck.ocfs2 informs me that: "I/O error on channel while reading
.." It was NOT TRUE - I was able to read and write the entire storage over
the network multiple times.
2. Because of CRC errors and the suggestion to disable metaecc I ran
tunefs.ocfs2 --fs-features=nometaecc /dev/xxx
tunefs allocated 9.89 GB of virtual memory and 95% of
2008 Jun 27
0
OCFS2 1.2.9-1 kernel modules for RedHat EL4 kernel 2.6.9-67.0.20?
I am looking at upgrading from the RHEL4U5 kernel set to U6 or beyond.
Updates has 2.6.9-67.0.20; the oss site only has RPMs for up to
2.6.9-67.0.15. Any ETA for .20 modules?
Ulf Zimmermann | Senior System Architect
OPENLANE
4600 Bohannon Drive, Suite 100
Menlo Park, CA 94025
O: 650-532-6382 M: (510) 396-1764 F: (510) 580-0929
Email: ulf at openlane.com | Web: www.openlane.com
2009 Jul 28
2
[PATCH 9-10/10] Quota support for disabling sparse feature
Hi,
I'm sending a patch for proper quota support when disabling the sparse feature.
The second patch fixes a minor problem in tunefs.ocfs2 when disabling the
sparse feature. In a few days I plan to resend the whole "quota support" series
with all the changes people requested included...
Honza
2010 Jun 01
1
debugfs.ocfs2 and Feature Incompat
Hi!
When I'm running debugfs.ocfs2 I get the following result:
debugfs.ocfs2 -R "stats" /dev/sdb
Revision: 0.90
Mount Count: 0 Max Mount Count: 20
State: 0 Errors: 0
Check Interval: 0 Last Check: Mon May 10 12:17:37 2010
Creator OS: 0
Feature Compat: 3 backup-super strict-journal-super
Feature Incompat: 8016 sparse
2007 Sep 30
4
Question about increasing node slots
We have a test 10gR2 RAC cluster using ocfs2 filesystems for the
Clusterware files and the Database files.
We need to increase the node slots to accommodate new RAC nodes. Is it
true that we will need to umount these filesystems for the upgrade (i.e.
Database and Clusterware also)?
We are planning to use the following command format to perform the node
slot increase:
# tunefs.ocfs2 -N 3
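A hedged sketch of the usual sequence around that command (whether the
unmount is actually required depends on the ocfs2-tools/kernel version, so
treat it as an assumption; device path illustrative):

umount /u02                               # on every node, if online slot addition is not supported
tunefs.ocfs2 -N 5 /dev/sdc1               # raise the slot count to cover the new RAC nodes
debugfs.ocfs2 -R "stats" /dev/sdc1 | grep -i slot   # confirm the new slot count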
2009 Aug 25
1
Clear Node
I am trying to build a MySQL standby setup with 2 machines, one primary
and one hot standby, which both share a disk for the data directory. I
used tunefs.ocfs2 to change the number of open slots to 1 since only
one machine should be accessing it at a time. This way it is fairly
safe to assume one shouldn't clobber the other's data. The only problem
is, if one node dies, the mount lock still
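In cases like this the on-disk slot map is usually the thing to look at to see
whether the dead node still appears to own a slot; a hedged example (device
name illustrative):

debugfs.ocfs2 -R "slotmap" /dev/sdb1   # shows which node number, if any, currently holds each slot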