Displaying 7 results from an estimated 7 matches for "dataguard".
2007 Jan 31
2
Patch to fix the 255 status code problem
Hi,
Currently using openssh-4.5p1 on Solaris 8 in conjunction with Oracle 8i
DataGuard. Is there a patch available to prevent ssh from returning status
code 255 for a successful execution of a remote connection/command?
Many Thanks,
Tim Mann
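Per the OpenSSH manual, ssh exits with the exit status of the remote command, or with 255 if an error occurred in ssh itself, so a first step is to check which of the two is actually coming back. A minimal Python sketch (the host and command names below are placeholders, not anything from the original report):

import subprocess

HOST = "dbhost"         # hypothetical remote host
REMOTE_CMD = "ls /tmp"  # hypothetical remote command

result = subprocess.run(["ssh", HOST, REMOTE_CMD],
                        capture_output=True, text=True)

if result.returncode == 255:
    # 255 is reserved by ssh for its own failures (connection, auth, ...),
    # and stderr normally says why.
    print("ssh itself failed:", result.stderr.strip())
else:
    # Anything else is the exit status relayed from the remote command.
    print("remote command exited with", result.returncode)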
2010 Nov 10
4
Why can't I use WEB CEO?
Hi,
I'm trying to install WEB CEO on Ubuntu 10.10 but it doesn't work
2010 Aug 20
3
Load at 5, no CPU I/O or swap in use
...4 on a Dell R910 server, 16
cores/32 hyperthreaded, with 64GB of memory. It is our main Oracle 11g
DB server for one of our customers and is attached to an MD 3000
storage array. We are seeing a load average of around 5 but no swap
in use, the CPUs are pretty much idle and there is no I/O wait. We have Oracle
DataGuard turned on in transactional mode. I've checked everything
that I can think of, and there are no Oracle processes running which would
cause a spike. Anyone have any ideas as to what to check next?
I have another R910 configured the same way and do not see any issues
with the 3 databases running on t...
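On Linux the load average counts tasks in uninterruptible sleep (state D) as well as runnable ones, so a load of 5 with idle CPUs, no I/O wait and no swap often means a few processes are stuck in the kernel, typically on storage. A minimal Python sketch to check for that (assumes a Linux /proc layout, nothing specific to this setup):

import glob

# Report processes in uninterruptible sleep ("D"); these contribute to
# the load average even though they use no CPU.
for stat_path in glob.glob("/proc/[0-9]*/stat"):
    try:
        with open(stat_path) as f:
            data = f.read()
    except OSError:
        continue  # the process exited while we were scanning
    # /proc/<pid>/stat is "pid (comm) state ..."; comm may contain
    # spaces, so split on the last ") ".
    head, _, rest = data.rpartition(") ")
    state = rest.split()[0]
    if state == "D":
        pid = head.split()[0]
        comm = head.split("(", 1)[1]
        print(pid, comm)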
2001 May 04
0
Exit status strangeness
Hello,
Trying to get Oracle DataGuard running, which basically does a lot
of work between two replicating databases via rsh/ssh. It is breaking
because it pays very close attention to the exit status of ssh commands.
We are using OpenSSH 2.5.2p2 (also tried 2.9p1, same result) on Solaris 7
and 8. This seems to be Solaris specific...
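When the relayed status cannot be trusted on a given platform, one common workaround is to have the remote side print its own exit status and parse that locally, rather than relying on the code ssh hands back. A rough sketch of the idea in Python (host and command names are hypothetical, and this is not something DataGuard does natively):

import subprocess

HOST = "standby-db"           # hypothetical host
REMOTE_CMD = "/path/to/step"  # hypothetical remote command

# Ask the remote shell to append its own exit status to stdout.
wrapped = REMOTE_CMD + "; echo RC=$?"
result = subprocess.run(["ssh", HOST, wrapped],
                        capture_output=True, text=True)

remote_rc = None
for line in result.stdout.splitlines():
    if line.startswith("RC="):
        remote_rc = int(line[3:])

if remote_rc is None:
    print("could not determine remote status; ssh returned", result.returncode)
else:
    print("remote command exited with", remote_rc)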
2011 Sep 02
5
Linux kernel crash due to ocfs2
Hello,
We have a pair of IBM P570 servers running RHEL 5.2,
kernel 2.6.18-92.el5.ppc64.
We have Oracle RAC on ocfs2 storage;
ocfs2 is 1.4.7-1 for the above kernel (downloaded from the Oracle OSS site).
Recently both servers have been crashing with the following error:
Assertion failure in journal_dirty_metadata() at
fs/jbd/transaction.c:1130: "handle->h_buffer_credits > 0"
kernel BUG in
2013 Oct 07
2
GlusterFS as underlying Replicated Disk for App Server
Hi All,
We have a requirement for a common replicated filesystem between our two
datacentres, mostly for DR and patching purposes when running Weblogic
clusters.
For those that are not acquainted, Weblogic has a persistent store that it
uses for global transaction logs amongst other things. This store can be
hosted on shared disk (usually NFS), or in recent versions within an Oracle
DB.