Dear All,

I need some information regarding OCFS performance on my Linux boxes. Here are my environment details:

1. OS: RHAS 2.1 with kernel 2.4.9-e.27 Enterprise
2. OCFS version: 2.4.9-e-enterprise-1.0.9-6
3. Oracle RDBMS: 9.2.0.4 RAC with 5 nodes
4. Storage: EVA 6000, 8 TB
5. One disk group with 51 LUNs configured on the EVA 6000

My questions are:

1. It takes around 15 minutes to mount the 51 OCFS file systems. Is this normal?
2. Monitoring the OS with vmstat while the RAC instances are down, the IO columns (bi and bo) continuously show three-digit values. After I unmount all the OCFS file systems, the same columns show single-digit values. Any idea why this happens?

I have raised this issue with the HP engineers who provided the hardware, but have not received an answer yet.

Thanks in advance.
Rgds/Jeram
Wim Coekaerts
2004-Jun-01 20:21 UTC
[Ocfs-users] OCFS 1.0.9-6 performance with EVA 6000 Storage
Yeah, it takes a long time, but you should be able to mount in parallel; right now you are doing them one by one. You know how long it takes to mount one volume, so just multiply that by 51.

Also, iowait being high with 51 volumes and nothing running means you have poor IO throughput. We have a customer with even more volumes and zero problems, but they have tons of HBAs and tons of physical arrays. You are probably exhausting the device queue if you only have one.

In 1.0.12 there will be a way to reduce the heartbeat to once every few seconds instead of twice per second.

Wim
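To make the parallel-mount suggestion concrete, here is a minimal sketch (not from the thread) that issues several mounts at once instead of serially. The volume list file, mount points, and worker count are placeholder assumptions, and it presumes the "mount -t ocfs" invocation used with OCFS 1.x plus a reasonably modern Python on the node.

import subprocess
from concurrent.futures import ThreadPoolExecutor

# Hypothetical input file: one "device mountpoint" pair per line, e.g.
#   /dev/sdb1 /u01
#   /dev/sdc1 /u02
with open("ocfs_volumes.txt") as f:
    volumes = [line.split() for line in f if line.strip()]

def mount(pair):
    dev, mnt = pair
    # OCFS 1.x volumes are mounted with "-t ocfs"; return the mount exit code.
    return subprocess.call(["mount", "-t", "ocfs", dev, mnt])

# Overlap the per-volume stabilization delay instead of paying it 51 times
# in a row; 8 concurrent mounts is an arbitrary, conservative choice.
with ThreadPoolExecutor(max_workers=8) as pool:
    failures = sum(1 for rc in pool.map(mount, volumes) if rc != 0)

print("mounts that failed: %d" % failures)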
Sunil Mushran
2004-Jun-01 20:29 UTC
[Ocfs-users] OCFS 1.0.9-6 performance with EVA 6000 Storage
Heartbeating in OCFS is currently per volume. The nm thread reads 36 sectors and writes 1 sector every second or so; the IO you see in vmstat is due to this heartbeat.

As far as the mount is concerned, the mount thread waits for the nm thread to stabilize, 10 seconds or so.

We are working on making the heartbeat configurable. 1.0.12 will have some support for that (heartbeat and timeout values). It will not be activated by default; we are still working out the details. That will reduce the heartbeat-related IO.

If you want to use 51 mounts, make sure your hardware can handle the IO. For example, if you see OCFS messages like "Removing nodes" and "Adding nodes" without any node performing a mount or umount, you have a problem. In any case, you should use at least 1.0.11; in 1.0.10 we doubled the timeout from 1.0.9.

Hope this helps.
Sunil
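As a rough back-of-envelope sketch (my own, not from the thread), the per-volume heartbeat figures above are enough to account for steady three-digit bi values on an otherwise idle system with 51 mounts; the assumption that vmstat reports in roughly 1 KiB blocks is mine.

# Estimate idle OCFS heartbeat IO from the figures in Sunil's reply:
# per mounted volume the nm thread reads 36 sectors and writes 1 sector
# roughly every second, at 512 bytes per sector.
SECTOR_BYTES = 512
VOLUMES = 51              # mounted OCFS file systems
READ_SECTORS = 36         # per volume per heartbeat
WRITE_SECTORS = 1         # per volume per heartbeat

read_kib_per_sec = VOLUMES * READ_SECTORS * SECTOR_BYTES / 1024.0
write_kib_per_sec = VOLUMES * WRITE_SECTORS * SECTOR_BYTES / 1024.0

print("heartbeat reads : ~%.0f KiB/s" % read_kib_per_sec)   # ~918
print("heartbeat writes: ~%.0f KiB/s" % write_kib_per_sec)  # ~26

# With nothing else running, that read traffic alone is a continuous
# three-digit bi value, which drops to near zero once the volumes are
# unmounted.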
Hi Wim,

Thanks for your quick response. What is the best way to reduce the IO contention in my environment? I start the Linux boxes one by one; if I start all 5 nodes in parallel I get IO errors, so mounting the 51 LUNs takes around 15 minutes to complete.

Please advise.

Rgds/Jeram
Hi Sunil,

Thanks for your response. I will try 1.0.11 and observe the performance.

Rgds/Jeram
Hi Wim,

OK then, I will try .11 first and wait for .12. Meanwhile I am waiting to hear whether the HP engineers have any good ideas from the EVA 6000 point of view.

Thanks a lot for the information.

Rgds/Jeram

-----Original Message-----
From: Wim Coekaerts [mailto:wim.coekaerts@oracle.com]
Sent: Wednesday, June 02, 2004 8:41 AM
To: Jeram
Cc: Sunil Mushran; ocfs-users@oss.oracle.com
Subject: Re: [Ocfs-users] OCFS 1.0.9-6 performance with EVA 6000 Storage

1.0.11 won't change the amount of IO. If you already have IO problems you have to use .12, which should be out any day.