Sheridan, Matt
2004-Sep-03 17:06 UTC
[Ocfs-users] From OCFS to tape via tar (and back again)
We're using RMAN to back up our 9.2 RAC database to an OCFS v1 volume. We have an existing shell script that we use for copying files from disk to tape via tar, one file at a time. (Don't ask why. It's a legacy script. Long story.) We're tweaking this script to use --o_direct when tarring the file to tape, and that seems to be working fine:

# tape device is /dev/nst0
$ tar --o_direct -cvf /dev/nst0 /ocfs/RMAN_test_file
./RMAN_test_file

But to pull that same file back off tape and save it to the original OCFS location, it seems that we cannot use --o_direct:

# skipping commands to reset the tape pointer to the proper location
$ tar --o_direct -xvf /dev/nst0
tar: /dev/nst0: Cannot open: Bad file descriptor
tar: Error is not recoverable: exiting now

But without --o_direct, it works fine:

# skipping commands to reset the tape pointer to the proper location
$ tar -xvf /dev/nst0
./RMAN_test_file

We're assuming that the above method (--o_direct when reading from OCFS and writing to tape, no --o_direct when reading from tape and writing to OCFS) provides us with a proper, working file that RMAN could use for restores.

Is this assumption correct? Or is there some issue that may bite us later? Like something related to direct I/O, stale cache buffers, incorrect byte alignment, sunspots, whatever.

P.S. Our RMAN backups are happening purely on one node of the cluster, though the OCFS volume we're writing to will be mounted by the other nodes. We may decide at some point in the future to try backing up from multiple nodes simultaneously, but for now we're keeping it simple. Would backing up from multiple nodes simultaneously change the answer to the above question?

For the record, we're using the following:
* Red Hat Enterprise Linux Advanced Server 3.0 Taroon Update 2 (kernel 2.4.21-15.ELsmp)
* Oracle Enterprise Edition 9.2.0.5
* ocfs-2.4.21-EL-smp-1.0.13-1
* ocfs-support-1.1.2-1
* ocfs-tools-1.1.2-1
* coreutils-4.5.3-35
* tar-1.13.25-16

Thanks in advance,
Matt
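One way to sanity-check that the round trip produces an identical file is to checksum it before it goes to tape and verify the restored copy against that checksum. A minimal sketch, assuming the same test file as above and a hypothetical scratch location of /tmp/RMAN_test_file.md5:

# record a checksum before the backup
$ md5sum /ocfs/RMAN_test_file > /tmp/RMAN_test_file.md5
# ... tar -c to tape, then tar -x from tape, as shown above ...
# verify the restored copy matches the original
$ md5sum -c /tmp/RMAN_test_file.md5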
Sunil Mushran
2004-Sep-03 18:13 UTC
[Ocfs-users] From OCFS to tape via tar (and back again)
Not sure about sunspots, but as long as you do sync; sync; sync after the tar -x, you should be fine. Bottom line: you have to flush the cache to disk.

Considering O_DIRECT works during backups and only fails during the restore, I see no problem with parallel backups, at least from the filesystem point of view.

On Fri, 2004-09-03 at 15:06, Sheridan, Matt wrote:
> [snip]
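Putting that suggestion together, the restore step would look something like the sketch below (buffered extract, then an explicit flush); where exactly the sync calls land in the legacy script is an assumption:

# restore from tape without --o_direct, then flush dirty buffers to disk
$ tar -xvf /dev/nst0
$ sync; sync; sync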
WIM.COEKAERTS@ORACLE.COM
2004-Sep-04 03:40 UTC
[Ocfs-users] From OCFS to tape via tar (and back again)
This is probably because the coreutils tar you're using tries to open /dev/nst0 with O_DIRECT and fails. Philip should look at this; can you file a bug in bugzilla under the coreutils component and assign it to philip.copeland@oracle.com? Pretty sure it's just that.

And yes, you should be able to back up from / to multiple nodes at the same time, etc. Make sure you use large blocksizes.
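On the "large blocksizes" point, GNU tar takes a blocking factor in 512-byte records via -b. A sketch, with 256 (128 KB blocks) chosen purely as an illustrative value rather than a recommendation from this thread:

# write to tape with a 128 KB blocking factor (256 x 512-byte records)
$ tar --o_direct -b 256 -cvf /dev/nst0 /ocfs/RMAN_test_file
# use the same blocking factor when reading the tape back
$ tar -b 256 -xvf /dev/nst0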