Hi,

New to all of this OCFS2 stuff... planning on using it for a production 10gR2, EBS 11.5.10.2, RH 4 system... 4 nodes sharing disks, all connected to an EMC storage array. Each node is a 2 x Xeon 3GHz(+) (with HT = 4) CPU box with 8 GB RAM.

Looking at using OCFS2 for shared OH, shared APPL_TOP, datafiles, etc.

Doing some very basic performance testing, as I found OCFS2 very slow when applying the 11.5.10.2 Maintenance Patch.

On 'normal' ext3 file systems, copying 18 GB (APPL_TOP - 100,000 files) from one file system to another took 15 minutes.

On another box, with NOTHING else running, copying the same filesystem structure from one OCFS2 filesystem to another OCFS2 filesystem took > 1.5 hours - I killed it eventually. kswapd was going mad, consuming LOTS of CPU - at the top of 'top' the majority of the time while the copy was in progress.

Tried reformatting the 'target' file system with a 64K cluster size - block size was/is 4K. Again, same result: slow copying of files, kswapd going mad...

Am I missing something, or is OCFS2 not supposed to be a 'general purpose' clustered file system? I know it is supposed to be certified for shared OH?

Project Manager is getting scared - not happy with the performance - says OCFS2 is not ready for prime time...? Wants to start using GFS from RH, but that costs a small fortune... not sure we can justify the cost...

Comments & advice please.

Peter
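P.S. For reference, the format-and-copy test was roughly along the lines below. Device names, the label and the mount points are placeholders rather than the real ones, and the 4K/64K sizes are just the values mentioned above:

    # format the target volume: 4K block size, 64K cluster size, 4 node slots
    mkfs.ocfs2 -b 4K -C 64K -N 4 -L appltop_test /dev/sdX1

    # mount it next to the ext3 source tree
    mount -t ocfs2 /dev/sdX1 /mnt/ocfs2_target

    # timed copy of the APPL_TOP tree (~18 GB, ~100,000 files)
    time cp -a /u01/appl_top /mnt/ocfs2_target/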
Peter McMahon wrote:

> Hi,
> New to all of this OCFS2 stuff... planning on using it
> for a production 10gR2, EBS 11.5.10.2, RH 4 system... 4
> nodes sharing disks, all connected to an EMC storage
> array. Each node is a 2 x Xeon 3GHz(+) (with HT = 4) CPU
> box with 8 GB RAM.
>
> Looking at using OCFS2 for shared OH, shared APPL_TOP,
> datafiles, etc.
>
> Doing some very basic performance testing, as I found
> OCFS2 very slow when applying the 11.5.10.2 Maintenance
> Patch.
>
> On 'normal' ext3 file systems, copying 18 GB (APPL_TOP
> - 100,000 files) from one file system to another took
> 15 minutes.
>
> On another box, with NOTHING else running, copying the
> same filesystem structure from one OCFS2 filesystem to
> another OCFS2 filesystem took > 1.5 hours - I killed
> it eventually. kswapd was going mad, consuming LOTS
> of CPU - at the top of 'top' the majority of the time
> while the copy was in progress.
>
> Tried reformatting the 'target' file system with a 64K
> cluster size - block size was/is 4K. Again, same
> result: slow copying of files, kswapd going mad...
>
> Am I missing something, or is OCFS2 not supposed to
> be a 'general purpose' clustered file system? I know
> it is supposed to be certified for shared OH?

Not really. Even if Oracle says it's supposed to be a general purpose filesystem, its primary purpose is to enable the deployment of a clustered database. You are faulting a TL for not being able to haul all of your belongings when you're not really buying it for that purpose.

> Project Manager is getting scared - not happy with
> the performance - says OCFS2 is not ready for prime
> time...? Wants to start using GFS from RH, but that
> costs a small fortune... not sure we can justify the
> cost...
>
> Comments & advice please.

Test your configuration for what it's meant for: namely RAC. You may need to reconsider the use of ocfs2 for shared Oracle homes, but that shouldn't be a showstopper.
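Before you benchmark again, it may also be worth double-checking that the cluster stack is up and that datafile volumes are mounted the way a RAC install would mount them. Something along these lines - the device name and mount point are placeholders, and the exact mount options may differ with your ocfs2-tools version:

    # is the o2cb cluster stack up?
    /etc/init.d/o2cb status

    # which ocfs2 volumes exist, and which nodes have them mounted?
    mounted.ocfs2 -f

    # datafile volumes are typically mounted with datavolume,nointr
    mount -t ocfs2 -o datavolume,nointr /dev/emcpowera1 /u02/oradata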
Peter McMahon wrote:

> Am I missing something, or is OCFS2 not supposed to
> be a 'general purpose' clustered file system? I know
> it is supposed to be certified for shared OH?
>
> Project Manager is getting scared - not happy with
> the performance - says OCFS2 is not ready for prime
> time...? Wants to start using GFS from RH, but that
> costs a small fortune... not sure we can justify the
> cost...

Peter,

Unless your infrastructure is such that ASM does not make sense, why not try it?

The problem I have with OCFS2 currently is that you have to have one OCFS2 filesystem per LUN; there is no support for LVM, etc. This means that if your EMC storage only gives you 200GB meta-LUNs, that's your maximum OCFS2 filesystem size.

One of the reasons we're not pursuing ASM is backups; our IT groups have shunned RMAN for years, and ASM requires the use of RMAN for backups. This means our internal backup tools (including the EMC-specific commands) would have to be rewritten to use RMAN.

/Brian/
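P.S. To be clear, the RMAN piece itself is small - roughly the sketch below, where the connection and destination details are left at defaults as placeholders. It's everything wrapped around it (the EMC-specific steps, scheduling, cataloguing) that would have to be rewritten:

    # minimal RMAN full backup plus archived logs, using default channels/destination
    rman target / <<EOF
    RUN {
      BACKUP DATABASE PLUS ARCHIVELOG;
    }
    EOF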