similar to: Major RAC slowdown

Displaying 20 results from an estimated 8000 matches similar to: "Major RAC slowdown"

2004 Mar 02
3
Odd errors while mounting an OCFS filesystem
Hello again. I am setting up a new pair of servers to run RAC. They're connected via fibre-channel to a hardware RAID array, and both are able to see the exposed LUNs. When I create an OCFS filesystem on one node with mkfs.ocfs, I can mount it. When I try to mount from the other node, however, it fails. After that, the filesystem is left in a state where neither node can mount it. The
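For reference, a minimal sketch of the usual OCFS v1 two-node bring-up (the device /dev/sdb1, mount point /u02, and the oracle/dba ownership are placeholders; both nodes need a valid /etc/ocfs.conf before any of this):

    # Both nodes: load the ocfs kernel module (reads /etc/ocfs.conf)
    load_ocfs

    # Node 1 ONLY: format the shared LUN
    mkfs.ocfs -b 128 -L /u02 -m /u02 -u oracle -g dba -p 0775 /dev/sdb1

    # Node 1 first, then node 2 after the first mount completes:
    mount -t ocfs /dev/sdb1 /u02

When the second node's mount fails like this, mismatched /etc/ocfs.conf node entries (or the two nodes not seeing the same device) are worth ruling out before reformatting.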
2003 Nov 21
5
Disappointing IMPORT Performance Using 9i RAC with OCFS on Linux
Skipped content of type multipart/alternative
2004 Feb 11
4
Multiple interconnects
(Yep, it's me again) We've worked around some minor glitches and now have a pair of nodes happily sharing an OCFS volume. I was wondering, though, if it was possible to configure a second private IP address so that the nodes could communicate over more than one Gigabit Ethernet connection. Our RAC books and online docs make some vague references to multiple interconnects, but I have yet
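No answer is excerpted here, but the era-typical approach was Linux channel bonding rather than a second interconnect address: bond the two GigE ports into one logical interface and give the interconnect address to the bond. A rough sketch for a 2.4-era Red Hat system (interface names and addresses are placeholders):

    # /etc/modules.conf
    alias bond0 bonding
    options bond0 mode=0 miimon=100

    # /etc/sysconfig/network-scripts/ifcfg-bond0
    DEVICE=bond0
    IPADDR=192.168.10.1
    NETMASK=255.255.255.0
    ONBOOT=yes

    # /etc/sysconfig/network-scripts/ifcfg-eth1 (repeat for eth2)
    DEVICE=eth1
    MASTER=bond0
    SLAVE=yes
    ONBOOT=yes

Oracle 9i also has a CLUSTER_INTERCONNECTS init parameter that takes a colon-separated list of addresses, but platform support varies and it spreads load without providing failover, so check the docs for your release before relying on it.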
2003 Nov 13
2
Disappointing Performance Using 9i RAC with OCFS on Linux
Wim, Thanks for your prompt response on this. The tpmC figures look very impressive, and tpmC is read intensive. I had already read note 236679.1 "Comparing Performance Between RAW IO vs OCFS vs EXT2/3" which I guess is the article to which you are referring; it made me suspect that the poor performance was due to the lack of an OS IO cache but I wasn't sure. The database is
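The cache effect suspected here is easy to demonstrate: OCFS does direct IO for datafiles, so re-reads never come from the page cache the way ext3 re-reads do. Assuming a coreutils new enough to support iflag (not a given on a 2003 box; file paths are placeholders):

    # Page-cached read from ext3 -- the second run is near memory speed
    dd if=/data/ext3/testfile of=/dev/null bs=1M count=512
    dd if=/data/ext3/testfile of=/dev/null bs=1M count=512

    # Direct read bypassing the cache -- every run pays the full disk cost
    dd if=/data/ext3/testfile of=/dev/null bs=1M count=512 iflag=direct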
2004 Nov 24
4
ORA-01207 after SAN maintenance
We had a situation over the weekend with our production database that we can't figure out; hoping someone can shed some light. Specifics: Oracle 9.2.0.4; OS is Redhat AS2.1; ocfs-2.4.9-e-summit-1.0.12-1, ocfs-tools-1.0.10-1, ocfs-support-1.0.10-1, ocfs-2.4.9-e-enterprise-1.0.12-1. All database, redo, undo, and control files are on ocfs; archived logs are on ext3. We shut down the database for SAN
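ORA-01207 ("file is more recent than control file") means a datafile header carries a higher checkpoint SCN than the control file, which fits a control file that went stale or got swapped during the SAN work. A read-only diagnostic against standard 9i views, as a starting point:

    sqlplus -S "/ as sysdba" <<'EOF'
    -- The control file's idea of the checkpoint
    select checkpoint_change# from v$database;
    -- What the datafile headers actually contain
    select file#, checkpoint_change#, fuzzy from v$datafile_header;
    EOF

If the headers are ahead of the control file, the usual paths are restoring a current control file or recreating one and recovering with USING BACKUP CONTROLFILE; which applies depends on what the SAN maintenance actually did, so treat this as a sketch, not a recipe.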
2004 Feb 09
2
Driver versions for RHEL 3 kernels
I'm preparing a pair of Red Hat Enterprise Linux 3 servers on which to test RAC. Currently they are running the 2.4.21-4.0.2 kernel. I saw that the 1.0.9-9 OCFS binaries are tied to the 4.0.1 kernel, and confirmed that myself. The 1.0.9-12 packages are dated a few days later and don't refer to a specific kernel version, so I assumed they would work with 4.0.2. However, when I try to
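The OCFS v1 modules were compiled per kernel, so the quickest sanity check is comparing the running kernel string against the kernel directory the package's module installs into (the package filename below is a placeholder):

    uname -r                                  # e.g. 2.4.21-4.0.2.EL
    rpm -qa | grep '^ocfs'                    # what is installed
    rpm -qlp ocfs-*.rpm | grep '/lib/modules' # which kernel tree the module targets

If the /lib/modules/<version> path inside the package does not match `uname -r` exactly, the module will typically fail to load with unresolved symbols.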
2003 Nov 13
1
E-Business 11i.9 and RAC 9.2.0.3
Hi, I am currently involved in a project with E-Business 11i.9. For the production environment, the client wants to implement load balancing and failover - both middle tier and database tier, the latter with RAC. As E11i creates lots of tablespaces, the best way seems to be OCFS. We installed & configured OCFS partitions. The next step was to install E11i multi tier, single instance - already
2004 Jun 30
1
RAC on RedHat or SuSE
I know this isn't entirely OCFS related, but out of curiosity, what experience have you had running RAC on RedHat or SuSE? I know Oracle supports both, but is one a better choice than the other? It seems to me that Oracle has been working more with RedHat. Does anyone know what Oracle mostly uses internally? What distro do you guys develop on? Would there be any drawbacks for choosing to
2005 Dec 16
3
Server crashed with Common/ocfsgencreate.c, Common/ocfsgenvote.c
Hi Experts, We have a 4-node RAC running and recently one node went down due to a hardware (fibre optic card) failure. Since then, running as a 3-node RAC, the surviving servers just keep crashing. We cannot figure out why this is happening, but checking /var/log/messages we see these errors (notice the msg before crashing at 8:32): Dec 12 08:30:45 x335-142 kernel: (2) ERROR: file entry name did not match inode,
2004 Sep 28
8
OCFS and BCM5700
Hi... I have a strange problem with my private NIC channel. Here are my environment details: 1. RHAS 2.1 with kernel 2.4.9-e.27 Enterprise 2. OCFS version: 2.4.9-e-enterprise-1.0.9-6 3. Oracle RDBMS: 9.2.0.4 RAC with 5 nodes 4. Private NIC channel: Broadcom BCM5700 My private NIC channel goes down intermittently; we are tracing the root of the problem by identifying each product installed the
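A reasonable first pass for an intermittently dropping interconnect NIC, assuming ethtool is installed on the RHAS 2.1 box (the interface name is a placeholder):

    ethtool eth1                                    # link state, negotiated speed/duplex
    ifconfig eth1 | grep -E 'errors|dropped|overruns'
    grep -i 'bcm5700\|eth1' /var/log/messages | tail -50

Duplex mismatches with the switch and flaky driver/firmware combinations were commonly reported culprits for the bcm5700 of that era, so the driver messages logged around each outage are the first thing worth correlating.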
2004 Mar 06
1
OCFS and multipathing
I've got my RAC cluster running pretty smoothly now (thanks again for all the help so far), but I only have single connections between the servers and the RAID array. The servers each have two Qlogic HBAs, and I'd like to find out if there's any reasonable way to implement multipathing. The platform is RHEL 3, and Red Hat's knowledgebase indicates that they strongly recommend
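On RHEL 3, before device-mapper multipath, one commonly used option was the md multipath personality: build an md device over the two paths and put the filesystem on it. A sketch (device names are placeholders, and whether this interacts safely with OCFS's shared-disk assumptions is exactly what to verify with the vendor first):

    # /dev/sda1 and /dev/sdb1 are two paths to the same LUN
    mdadm --create /dev/md0 --level=multipath --raid-devices=2 /dev/sda1 /dev/sdb1
    mkfs.ocfs ... /dev/md0    # then mount /dev/md0 instead of a single-path device

The Qlogic failover driver was the other era-typical answer; note that both give path failover only, not load balancing across the two HBAs.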
2003 Jul 10
1
Using RAC
Hello, I've successfully configured RAC on Redhat AS, with external firewire + ocfs. The whole process completed without any errors and I have created the database. (Thanks to Wim Coekaerts's useful article.) But now, how do I actually use it? If I have a client making a connection, to which node do I connect? Are there also any recommended procedures to test RAC's capabilities? Thanks.
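The usual 9i answer is to list both nodes in a single tnsnames.ora alias, so clients load-balance across instances and fail over at connect time; Transparent Application Failover adds in-flight failover for selects. A sketch (hosts and service name are placeholders):

    RACDB =
      (DESCRIPTION =
        (ADDRESS_LIST =
          (LOAD_BALANCE = on)
          (FAILOVER = on)
          (ADDRESS = (PROTOCOL = TCP)(HOST = node1)(PORT = 1521))
          (ADDRESS = (PROTOCOL = TCP)(HOST = node2)(PORT = 1521))
        )
        (CONNECT_DATA =
          (SERVICE_NAME = racdb)
          (FAILOVER_MODE = (TYPE = select)(METHOD = basic))
        )
      )

For a basic test: connect through the alias, check which instance you landed on (select instance_name from v$instance), then shut down that instance and reconnect or rerun a query.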
2005 Nov 10
1
Can OCFS and GFS co-exist on the same RHEL 3 RAC node??
Dear Experts, One of our clients wants to use OCFS for their 9i RAC (2 node) on RHEL3 and also use GFS on the nodes. Would there be any issues using OCFS for Oracle 9i RAC and using GFS for failing over things like print services on the same node? Has this been tested? I know these are competing products and OCFS 2.0 is not an option for the client right now. They need GFS for their custom apps. Any ideas
2005 Jan 31
3
Clarification on 1.0.14-1
I'm pleased to see that the new release has full support for asynchronous I/O. I was surprised, though, since I thought that the aio bug in the 2.4 kernels couldn't be fixed without breaking binary compatibility. Was there a special fix for this in the RHEL3 update 4 kernel? Anyway, thanks for all your efforts. We're looking forward to testing aio with the new version next week.
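For anyone planning the same test, the 9.2-era switches for async IO live in the init parameters; the values below are the commonly cited ones, to be verified against the release notes for the specific kernel and OCFS build:

    # init.ora
    disk_asynch_io = true
    filesystemio_options = asynch   # or setall for async + direct IO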
2004 Jul 16
12
OCFS Database too slow
Hi All, we are using Red Hat 2.1 with kernel e38 along with an MSA 1000. The ocfs version being used is: $ rpm -qa | grep ocfs ocfs-tools-1.0.10-1 ocfs-2.4.9-e-enterprise-1.0.12-1 ocfs-support-1.0.10-1 Database version is 9.2.0.5. However, we find that the performance of the database on OCFS is too slow; even a select count(1) from all_tables takes a while to complete. We initially assumed RAC is
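A quick way to separate "OCFS IO is slow" from "something else is slow" is to time the statement and see where the waits go (standard 9i views):

    sqlplus -S "/ as sysdba" <<'EOF'
    set timing on
    select count(*) from all_tables;
    -- If IO is the problem, the db file waits will dominate:
    select event, total_waits, time_waited
      from v$system_event
     where event like 'db file%'
     order by time_waited desc;
    EOF

Note that all_tables is dictionary-heavy, so a slow count there can also point at an undersized shared pool or buffer cache rather than filesystem trouble.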
2004 Mar 30
1
OCFS install trouble w/ new kernel - unresolved symbols
I successfully have OCFS running in a two-node RAC environment using RHAS 3 (2.4.21-4.EL) with Qlogic 2310 cards. IO performance, however, was terrible with the combination of OCFS and the Qlogic cards. After extensive testing, OCFS did fine on its own, as did the fibre cards. Put the two together, and everything literally ran twice as slowly. Support at Qlogic pointed me to
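Unresolved symbols at module load almost always mean the OCFS module was built against a different kernel than the one running; a quick check:

    uname -r                            # the running kernel
    find /lib/modules -name 'ocfs*.o'   # where the installed module actually lives

If the module sits under a /lib/modules/<version> directory that differs from `uname -r`, install the ocfs package matching the new kernel (or boot the matching kernel) rather than forcing the load.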
2004 Apr 22
1
A couple more minor questions about OCFS and RHEL 3
Sort of a followup... We've been running OCFS in sync mode for a little over a month now, and it has worked reasonably well. Performance is still a bit spotty, but we're told that the next kernel update for RHEL3 should improve the situation. We might eventually move to Polyserve's cluster filesystem for its multipathing capability and potentially better performance, but at least we
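If "sync mode" here means the generic VFS sync mount option (an assumption; the excerpt doesn't say), the setup is an ordinary mount flag (device and mount point are placeholders):

    # every write goes to disk synchronously -- safe, at a throughput cost
    mount -t ocfs -o sync /dev/sdb1 /u02

Synchronous writes serialize on the array, which would also be consistent with the spotty performance described.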
2003 Aug 25
1
ocfs and dbca
Hi all, hoping someone has some insight on this issue. I am working on a first install of RAC. I have installed OCFS 1.0.9.4 (upgraded from 1.0.8 before installing the rest of the Oracle software) on a 3-node RedHat AS 2.1 cluster. The kernel is 2.4.9-e.25.enterprise. Oracle software installed is 9.2.0.4. I fdisk'd the storage devices, ran mkfs.ocfs on the partitions, and mounted successfully on
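For comparison, the cluster-side piece that has to exist on every node before mounting is /etc/ocfs.conf; a sketch of the usual fields (all values are placeholders, and the guid comes from ocfs_uid_gen rather than being hand-typed):

    # /etc/ocfs.conf -- one per node, with node-specific values
    node_name = node1.example.com
    ip_address = 192.168.1.101
    ip_port = 7000
    comm_voting = 1
    guid = (generated by ocfs_uid_gen)

A mismatch here, or a guid reused across nodes, is a common source of mount problems on OCFS v1 and worth ruling out before digging into dbca itself.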