Silviu Marin-Caea
2020-Feb-26 14:05 UTC
[Ocfs2-users] OCFS2 best practices and usage options.
NFS. There's no reason for OCFS2 just for backup/restore. I would save to an NFS export over a 10 Gb network rather than a mounted LUN, so you don't have to worry about filesystem integrity in case node1 fails. To restore, you would not need any additional steps such as mounting the LUN on node2; you would just restore from the mounted NFS export. And I would not use RMAN compression, so the restore is faster. But even so, restoring 40 TB over 10 Gb would still take 1-2 days.

On Tue, Feb 4, 2020, at 21:49, Jim Andrews wrote:
> Hi All,
> I am new to OCFS2 and this group. I am not using Oracle RAC. I am trying to decide whether OCFS2 is the correct tool for the job, and I was hoping to get a little feedback on the following scenario.
> Thanks in advance!
> -Jim A
>
> We plan to keep an image copy of the database files current with an incrementally updated RMAN backup, and to have that copy available on demand to node1 and node2 for restores. Node1 is where the primary DB runs and where RMAN runs for backups. The backup files will go on the OCFS2 file system. If node1 goes down, we still want the RMAN backup available for restore on either node1 or node2 if necessary. A shared file system seemed like an appropriate option. Our OCFS2 file system requirement is about 40 TB, which suggests a cluster size of 16k.
>
> Oracle Support had this to say:
> My experts are not recommending OCFS2 for your scenario, for several reasons:
> 1. Since multiple nodes use the file system, there will be locking (DLM) overhead.
> 2. Using OCFS2 in this case introduces a cluster stack on the DB nodes. With the cluster stack comes node monitoring and, potentially, eviction in case of issues, which puts your databases at risk. That seems unnecessary here, because the machines typically don't access the data concurrently yet keep it mounted.
> 3. Cluster file systems are not needed for storing backups.
>
> Their recommendations for your scenario:
> 1. Use a file system such as XFS or ext4 and mount it on the primary node to take the backup. When you want to restore on node2, export this file system over NFS and mount it on node2 for the restore. If node1 is completely powered off, you cannot use that export, but you can still ask the storage team to attach the LUN to node2 and mount it there, then restore the DB.
> 2. If the storage server offers NFS, export the LUN as an NFS share from the storage server and mount it as an NFS file system on both DB nodes. That way node2 can access the data at all times, even when the primary DB node is down.
>
> I was leaning toward option 2. Option 1 does not make the storage readily available on node2.
>
> Please provide comments, opinions, and suggestions. Is OCFS2 the right tool for the job, or NFS? Will the performance be OK in either case?
> Thank you
> -Jim A
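
As a rough sanity check on the 1-2 day restore estimate above: the effective-throughput figures below are assumptions, and the real rate will depend on the NAS, the network path, and how many RMAN channels the restore can keep busy.

    # Transfer time for 40 TB (decimal) at a few assumed effective rates.
    awk 'BEGIN {
        tb = 40
        n = split("10 5 2.5", gbit)      # line rate, ~50%, ~25% effective
        for (i = 1; i <= n; i++) {
            hours = tb * 1e12 * 8 / (gbit[i] * 1e9) / 3600
            printf "%.0f TB at %4.1f Gbit/s -> %5.1f h (~%.1f days)\n", tb, gbit[i], hours, hours / 24
        }
    }'
    # -> about 9 h at full line rate, ~18 h at 5 Gbit/s, ~36 h at 2.5 Gbit/s,
    #    i.e. "1-2 days" corresponds to roughly 2-4 Gbit/s sustained end to end.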
Thank you for your reply. Much appreciated!

My DBA has concerns. He mentioned that he has read case studies saying I/O to OCFS2 is considerably faster than I/O to NFS, and that I should explain our environment further: we are writing massive files. We're running a 30 TB database that we back up incrementally to the storage location on a daily basis, using 15+ threads, with the incrementals amounting to around 10 TB each month. Currently our incremental takes about 4-5 hours to complete, and the full RMAN backup about 18-21 hours. He feels OCFS2 is more of a high-availability solution than NFS, but I feel NFS presented from the NAS to both hosts makes it always available on both hosts anyway. The problem is that I may not get my global SAN team to present NAS NFS, and I am also unsure whether my network team can provide NAS NFS over a 10 Gb pipe.

Hopefully you're OK with another reply. Thank you again!
-Jim A
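
For a similar back-of-the-envelope check on the windows quoted above, assuming the full backup moves roughly the 30 TB database size and that the window is dominated by data transfer (both assumptions):

    # Average throughput implied by a 30 TB full backup finishing in 18-21 h.
    awk 'BEGIN {
        tb = 30
        n = split("18 21", hrs)
        for (i = 1; i <= n; i++)
            printf "%.0f TB in %2d h -> %3.1f Gbit/s average\n", tb, hrs[i], tb * 1e12 * 8 / (hrs[i] * 3600) / 1e9
    }'
    # -> roughly 3-4 Gbit/s sustained, which a well-tuned 10 GbE NFS path can
    #    plausibly carry, though it leaves limited headroom if the same share
    #    is also serving incrementals or a restore at the same time.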
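
If option 2 from the quoted recommendations is the direction taken, the node-side setup is just an ordinary NFS mount of the same share on both DB nodes. A minimal sketch follows, assuming a Linux-based NFS server, an export called /export/rman, and hosts named nas, node1 and node2; all of these names, and the mount options, are placeholders, so check the NAS vendor's and Oracle's current NFS recommendations for RMAN destinations before adopting them.

    # On the storage server (Linux NFS server syntax; a filer appliance would
    # be configured through its own management interface):
    echo '/export/rman  node1(rw,sync,no_root_squash)  node2(rw,sync,no_root_squash)' >> /etc/exports
    exportfs -ra

    # On each DB node: mount the same share at the same path.
    mkdir -p /backup/rman
    mount -t nfs -o rw,bg,hard,tcp,vers=3,rsize=1048576,wsize=1048576,timeo=600 \
        nas:/export/rman /backup/rman

    # RMAN on node1 writes its image copies and incrementals under /backup/rman;
    # after a node1 failure, node2 already has the same path mounted and can run
    # the restore without any LUN re-mapping.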