similar to: OCFS and split mirror backups


2006 May 17
1
OCFS and backups to tape
Hi, we have installed OCFS2 on RHEL 4. It is being used as a general clustered file system. There are no Oracle binaries or datafiles on the OCFS volume, so there is no need for RMAN or Oracle agents, etc. The customer would like to back up the volume using ARCserve; however, CA says they cannot back up from an OCFS volume. Has anyone out there got any words of wisdom about backing up
2004 Nov 22
3
Mode context extremely poor performance.
Hi all, I currently have a big problem. One request (listed above) using CONTEXT takes up to 1000x more time than on a RAW or ext2 database. I have run this request on a single IA32 machine with Red Hat and dbf on ext2. The average response time is less than a second. The same request on a 4-node RAC cluster on RAW takes the same average time; on ext2 likewise. But on OCFS it took up to 15 sec, randomly
2004 Sep 03
2
From OCFS to tape via tar (and back again)
We're using RMAN to back up our 9.2 RAC database to an OCFS v1 volume. We have an existing shell script that we use for copying files from disk to tape via tar, one file at a time. (Don't ask why. It's a legacy script. Long story.) We're tweaking this script to use --o_direct when tarring the file to tape and that seems to be working fine: # tape device is /dev/nst0 $ tar
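A minimal sketch of the one-file-at-a-time copy described above. The file names, `TAPE` variable and `copy_one` helper are illustrative (not from the original script); the sketch writes to a scratch file so it can run anywhere, with the real tape invocation shown in a comment. The `--o_direct` flag belongs to the Oracle-patched tar mentioned in the message, not to stock GNU tar.

```shell
#!/bin/sh
# One-file-at-a-time copy to tape, in the spirit of the legacy script above.
# TAPE defaults to a scratch file here; on the real system it would be
# /dev/nst0, and the Oracle-patched tar would be run with --o_direct so
# reads from the OCFS volume bypass the page cache.
TAPE=${TAPE:-$(mktemp)}

copy_one() {
    # real invocation (Oracle-patched tar):  tar --o_direct -cf /dev/nst0 "$1"
    tar -cf "$TAPE" "$1" && echo "archived $1"
}
```

Each call overwrites the archive at `$TAPE`, which matches the per-file use of a non-rewinding tape device.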
2004 Sep 01
2
ocfs doesn't free space?
An OCFS volume was nearly full (only 800MB free). I deleted some datafiles to free space: $ df -h . Filesystem Size Used Avail Use% Mounted on /dev/sdp1 10G 5.3G 4.8G 53% /db/DPS So there are more than 4GB available. $ sqlplus /nolog SQL*Plus: Release 9.2.0.4.0 - Production on Wed Sep 1 12:57:48 2004 Copyright (c) 1982, 2002, Oracle Corporation. All rights
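One common reason deleted datafiles do not show up as freed space (on any Linux filesystem, not just OCFS) is that some process, e.g. the Oracle instance, still holds the deleted file open, so its blocks are not reclaimed until the descriptor closes. A quick way to check, sketched here as a standard `/proc` scan (the `list_deleted_open` helper is mine, not from the thread):

```shell
# Scan /proc for file descriptors whose target file has been deleted;
# such files still consume disk space until the owning process closes them.
list_deleted_open() {
    for fd in /proc/[0-9]*/fd/*; do
        target=$(readlink "$fd" 2>/dev/null) || continue
        case "$target" in
            *"(deleted)") echo "$fd -> $target" ;;
        esac
    done
}
list_deleted_open
```

If a line points at one of the removed datafiles, restarting or closing that process should return the space.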
2005 Apr 17
0
ORA-00600: [3020], async enabled
We are running 2-node RAC on Red Hat Linux x86 (32-bit). The following was done on 4/10/2005: 1. upgrade OS to update 4 (kernel-smp-2.4.21-27.0.2.EL) 2. upgrade OCFS 1.0.12 to ocfs-2.4.21-EL-smp-1.0.14-1 3. upgrade from 9.2.0.5 to 9.2.0.6 (in 2-node RAC) 4. change default temporary tablespace to a regular one (RAC bug fix) 5. enable async mode by applying patch 3208258_9206 ** Patch 4153303 applied on
2005 Feb 11
3
OCFS file system used as archived redo destination is corrupted
We started using an OCFS file system about 4 months ago as the shared archived redo destination for the 4-node RAC instances (HP DL380, MSA1000, RH AS 2.1). Last night we started seeing some weird behavior, and my guess is that the inode directory in the file system is getting corrupted. I've always had a bad feeling about OCFS not being very robust at handling constant file creation and deletion
2004 Jun 18
1
OCFS Performance vs. Raw
We are seeing some performance issues running on 64-bit Linux. Running against a raw partition we can get 100MB/s write speed when creating a datafile. Using OCFS we get 30-40MB/s, and at times it is so slow it is hard to measure. Anyone else see these issues? We are running the latest versions of both Oracle and OCFS. I've opened a ticket with Oracle but am interested in others'
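A cheap way to reproduce this kind of raw-vs-OCFS comparison is a direct-I/O sequential write with dd. A sketch, with `TESTFILE` as a placeholder: `oflag=direct` is a GNU coreutils dd option (the 2004-era systems in this thread would instead have used Oracle's O_DIRECT-patched coreutils), and direct I/O matters because Oracle writes datafiles bypassing the page cache, so buffered dd numbers are not comparable.

```shell
# Sequential-write probe. Point TESTFILE at a file on the OCFS volume
# (or at a raw device) to measure that path; defaults to a scratch file
# so the sketch runs anywhere.
TESTFILE=${TESTFILE:-$(mktemp)}
dd if=/dev/zero of="$TESTFILE" bs=1M count=64 oflag=direct 2>&1 ||
dd if=/dev/zero of="$TESTFILE" bs=1M count=64 2>&1   # some filesystems reject O_DIRECT
rm -f "$TESTFILE"
```

Running the same command against the raw partition and the OCFS mount gives directly comparable MB/s figures from dd's summary line.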
2004 Sep 20
1
(28552) ERROR: err=-14, Linux/ocfsmain.c, 1887 ; error in mapping iobuf; need to fail out
We are running OCFS on 2.4.21-15.0.4.ELsmp with ocfs-2.4.21-EL-smp-1.0.12-1, ocfs-support-1.0.10-1 and ocfs-tools-1.0.10-1. I have successfully deleted datafiles, more than 3 times, for RMAN duplication from the mount point /data1 (total 191G). Last week when I tried to delete datafiles from the same directory, it did delete the datafiles but did not release (reclaim) all the space. It still showed that 20G of
2003 Nov 11
5
ocfs issues with 9.2.0.3
I am starting to see a few issues with OCFS and 9.2.0.3 RAC on Red Hat Linux AS 2.1. Wondering if anyone out there is experiencing similar issues... a few pointers to the issues: 1. OCFS read/write performance is way lower than a read/write to a raw device; I can give you some comparison numbers. 2. Writes to shared disk with OCFS would get locked up by one server; it doesn't have to
2003 Nov 13
2
Disappointing Performance Using 9i RAC with OCFS on Linux
Wim, Thanks for your prompt response on this. The tpmC figures look very impressive, and tpmC is read intensive. I had already read note 236679.1 "Comparing Performance Between RAW IO vs OCFS vs EXT2/3" which I guess is the article to which you are referring; it made me suspect that the poor performance was due to the lack of an OS IO cache but I wasn't sure. The database is
2004 Nov 29
0
Re: Mode context extremely poor performance and varyio
Stephen, all, Thank you very much for your answer, and I wish you a happy Thanksgiving. We have already tried migrating our database to RAW devices, using e41smp, SecurePath (HP) 3C and QLogic 7.00. The average response time of our SQL request on IA32 is 0.32 sec, very consistently. On our Itanium on OCFS it varies randomly between 1 and 15 sec, and on RAW between <1 sec and 3 sec. David is an
2009 Mar 12
1
Unable to delete a file...
Hello all, I'm using OCFS version 1.0.15-PROD5 on RHAS3 Update 4. My RMAN "delete obsolete" script fails. Debugging the problem, I've found that it is not able to remove one file. It seems that this file is in use and no operation on it is possible. Reading Metalink I've found some docs about problems with cp/mv/dd/tar on files under OCFS, and the suggestion is always to use the --o_direct option. In my
2004 Nov 29
2
"Linux Error: 28: No space left on device"
# uname -a Linux sgl122 2.4.9-e.35enterprise #1 SMP Wed Jan 7 15:11:27 EST 2004 i686 unknown # rpm -qa|grep ocfs ocfs-2.4.9-e-enterprise-1.0.12-1 ocfs-support-1.0.10-1 ocfs-tools-1.0.10-1 Oracle 10.1.0.3 RAC on egenera blades. "df" shows large amounts of free space (15GB, approx 50%), yet I keep getting "Linux Error: 28: No space left on device" when doing RMAN
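ENOSPC with plenty of free blocks usually means some other resource on the filesystem ran out. A cheap first check, sketched here with `MNT` as a placeholder for the OCFS mount point, is to compare block usage with inode/file-entry usage, since an exhausted inode table also produces "No space left on device":

```shell
# Compare block-level and inode-level usage on the affected mount.
MNT=${MNT:-/}
df -h "$MNT"   # block usage (what "df" alone shows)
df -i "$MNT"   # inode/file-entry usage; 100% IUse% also yields ENOSPC
```

If `df -i` also shows free entries, the next suspects are deleted-but-still-open files and filesystem-specific allocation limits, which is where Oracle support would come in.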
2006 Jan 12
1
ocfs2 questions
We are in the process of upgrading to OCFS2. We have recently used RMAN to restore our Production database to a Development platform configured with OCFS2. No problems. As for the Production migration, we understand that you cannot mount an OCFS volume (our current configuration) as OCFS2. We are interested in mounting an EXT3 file system, performing a cold RMAN backup, copying the datafiles
2005 Jun 06
1
FW: RMAN backup error
2005 May 17
1
[Linux OCFS] About using dump command for backup
Hi all, this is Takahiro from Japan. I want to confirm whether the dump command can be used (is supported) on Linux OCFS or not. I mean, I want to know whether we can specify something other than '0' in the 5th field in /etc/fstab: [/etc/fstab] /dev/sdb1 /data ocfs _netdev 0 0 === Any information would be helpful. Thanks, Takahiro -- --------------------------------------------------------- TAKAHIRO YOSHIMURA
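For reference, the field being asked about is the fifth one in fstab(5), which marks the filesystem for dump(8). A sketch of the change (device and mount point copied from the message above; the column headers are mine):

```
# <device>   <mount>  <type>  <options>  <dump>  <pass>
/dev/sdb1    /data    ocfs    _netdev    1       0
```

Note, though, that as far as the dump(8) documentation describes, Linux dump reads the ext2/ext3 on-disk format directly; so (an assumption worth verifying with Oracle support) setting the field to 1 would only flag the volume for dump without making dump able to traverse OCFS, and a file-level tool with --o_direct support, as suggested elsewhere in this archive, is the safer route.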