similar to: Oracle DB giving zpools a wedgie

Displaying 19 results from an estimated 3000 matches similar to: "Oracle DB giving zpools a wedgie"

2007 Sep 17
1
Strange behavior with zfs and Solaris cluster
Hi All, Two- and three-node clusters with SC3.2 and S10u3 (120011-14). If a node is rebooted when using SCSI3-PGR, the node is not able to take the zpool via HAStoragePlus due to a reservation conflict. SCSI2-PGRE is okay. Using the same SAN LUNs in a metaset (SVM) with HAStoragePlus works okay with both PGR and PGRE (both SMI- and EFI-labeled disks). If using scshutdown and restarting all nodes, then it will
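For anyone hitting the same conflict, one hedged direction to investigate (assuming your Sun Cluster 3.2 release exposes the global_fencing property; check your cluster docs before relying on it) is to fall back from SCSI-3 fencing for the shared disks and retry taking the pool:
  # cluster set -p global_fencing=pathcount   # assumed property: revert to SCSI-2 style fencing
  # zpool import -f <pool>                    # then let HAStoragePlus retake the pool
This is a sketch, not a verified fix.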
2002 Jul 09
1
Oracle on ext3?
Hi All, we're currently running an Oracle DB on SuSE 7.2 with kernel 2.2.19 and the 0.0.7b ext3 patch. Do you have any recommendations for maximum performance with the database? Tablespace sizes are about 15 GB user, 10 GB temp. The main operation of the DB is reporting (index loads, table scans, arithmetic operations). Which mount options would be best for performance? Hints welcome. -- Christian
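For a scan-dominated reporting load on ext3, a commonly suggested starting point is metadata-only journaling plus noatime; a sketch of an /etc/fstab line (device and mount point are hypothetical):
  /dev/sdb1  /u02  ext3  defaults,noatime,data=writeback  1 2
data=writeback journals only metadata and tends to help scan throughput, while the default data=ordered is the safer choice for the datafiles themselves; the trade-off should be tested on this kernel.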
2007 Sep 17
2
zpool create -f not applicable to hot spares
Hello zfs-discuss, If you do 'zpool create -f test A B C spare D E' and D or E contains a UFS filesystem, then despite -f the zpool command will complain that there is a UFS file system on D. Workaround: create a test pool with -f on D and E, destroy it, and then create the first pool with D and E as hot spares. I've tested it on s10u3 + patches - can someone confirm
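The workaround spelled out as commands (A-E are the placeholder disk names from the post):
  # zpool create -f scratch D E            # -f works here, overwriting the UFS labels
  # zpool destroy scratch
  # zpool create -f test A B C spare D E   # now succeeds: D and E no longer carry UFS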
2007 Aug 01
0
ZFS and Oracle asynchronous I/O
We have an Oracle 10.2.0.3 installation on a Sun T2000 logical domain. The virtual disk has a ZFS file system. When we try to create a tablespace we get these errors: WARNING: aiowait timed out 1 times WARNING: aiowait timed out 2 times WARNING: aiowait timed out 3 times ... Does disk_asynch_io = false need to be set with ZFS? This page:
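A minimal init.ora sketch of the setting being asked about (the parameter is standard Oracle 10g, but whether disabling async I/O is the right fix on ZFS should be confirmed against Oracle's platform notes rather than taken from this sketch):
  # init.ora
  disk_asynch_io = false   # fall back to synchronous I/O, as the aiowait warnings suggest trying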
2007 Sep 08
1
zpool degraded status after resilver completed
I am curious why zpool status reports a pool to be in the DEGRADED state after a drive in a raidz2 vdev has been successfully replaced. In this particular case drive c0t6d0 was failing, so I ran: zpool offline home c0t6d0 zpool replace home c0t6d0 c8t1d0 and after the resilvering finished the pool reports a degraded state. Hopefully this is incorrect. At this point the vdev in question now has
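If the offlined disk was left attached to the vdev, a hedged cleanup sequence (pool and device names from the post) would be:
  # zpool status -v home       # check whether c0t6d0 still shows as OFFLINE
  # zpool detach home c0t6d0   # drop the old disk if the replace left it attached
  # zpool clear home           # clear stale error counts so the pool can return to ONLINE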
2006 May 19
3
Oracle on ZFS vs. UFS
Hi, I'm preparing a personal TPC-H benchmark. The goal is not to measure or optimize the database performance, but to compare ZFS to UFS in similar configurations. At the moment I'm preparing the tests at home. The test setup is as follows: . Solaris snv_37 . 2 x AMD Opteron 252 . 4 GB RAM . 2 x 80 GB ST380817AS . Oracle 10gR2 (small SGA (320m)) The disks also contain the OS
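One setting that usually matters in such a comparison is matching the ZFS recordsize to the Oracle block size; a sketch, with hypothetical pool and dataset names:
  # zfs create tank/oradata
  # zfs set recordsize=8k tank/oradata   # match db_block_size (8 KB is the 10gR2 default)
  # zfs set atime=off tank/oradata       # skip access-time updates on datafile reads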
2005 Aug 29
1
Data corruption
We used rsync 2.6.3 on a couple of Solaris 8 machines to update an Oracle database from one machine to another. Here is the procedure I used: The source database was up and running so this operation was similar to doing a hot backup. I queried the source database for a list of tablespace names, and for each tablespace, I queried the list of datafiles. I put the tablespace in hot backup mode,
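The per-tablespace loop described above would look roughly like this (tablespace, path, and host names are hypothetical; hot backup mode is what makes the open datafiles safe to copy):
  SQL> ALTER TABLESPACE users BEGIN BACKUP;
  $ rsync -av /u02/oradata/PROD/users01.dbf target:/u02/oradata/PROD/
  SQL> ALTER TABLESPACE users END BACKUP;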
2010 Dec 09
3
ZFS Prefetch Tuning
Hi All, Is there a way to tune the ZFS prefetch on a per-pool basis? I have a customer that is seeing slow performance on a pool that contains multiple tablespaces from an Oracle database; looking at the LUNs associated with that pool, they are constantly at 80% - 100% busy. Looking at the output from arcstat for the miss % on data, prefetch, and metadata, we are getting around 5 - 10% on data,
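In that vintage of ZFS there is no per-pool prefetch knob; the tunable is system-wide and lives in /etc/system (a sketch; verify the tunable against your kernel version before use, and note it takes a reboot):
  set zfs:zfs_prefetch_disable = 1   # disables file-level prefetch for all pools
Because it is global, the usual advice is to trial it on a non-production host first.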
2007 Apr 12
10
How to bind Oracle 9i data files to zfs volumes
Experts, I'm installing Oracle 9i on Solaris 10 11/06 (update 3). I created some zfs volumes which will be used for the Oracle data files, as: # zfs create -V 200m ora_pool/controlfile01_200m # zfs create -V 800m ora_pool/system_800m ... # ls -l /dev/zvol/rdsk/ora_pool lrwxrwxrwx 1 root root 39 Apr 11 12:23 controlfile01_200m -> ../../../../devices/pseudo/zfs@0:1c,raw
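Once the zvols exist, binding the datafiles is a matter of handing Oracle the raw device paths; a sketch with a hypothetical volume name, sized slightly below the zvol:
  SQL> CREATE TABLESPACE data01
         DATAFILE '/dev/zvol/rdsk/ora_pool/data01_800m' SIZE 790M REUSE;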
2004 Sep 01
2
ocfs doesn't free space?
An ocfs volume was nearly full (only 800 MB free). I deleted some datafiles to free space: $ df -h . Filesystem Size Used Avail Use% Mounted on /dev/sdp1 10G 5.3G 4.8G 53% /db/DPS so there is now more than 4 GB available. $ sqlplus /nolog SQL*Plus: Release 9.2.0.4.0 - Production on Wed Sep 1 12:57:48 2004 Copyright (c) 1982, 2002, Oracle Corporation. All rights
2007 Sep 25
2
ZFS speed degraded in S10U4 ?
Hi Guys, I'm playing with a Blade 6300 to check the performance of compressed ZFS with an Oracle database. After some really simple tests I noticed that a default installation of S10U3 (well, not really default, some patches applied, but definitely no one bothered to tweak the disk subsystem or anything else) is actually faster than S10U4, and a lot faster. Actually it's even faster on
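A fair comparison needs identical dataset settings on both builds; a sketch (dataset name hypothetical, and a file copy is only a crude stand-in for a database workload):
  # zfs get compression,compressratio tank/ora   # confirm both installs match
  # ptime cp /path/to/datafile /tank/ora/        # compare wall time on S10U3 vs S10U4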
2004 Nov 22
3
Mode context extremely poor performance.
Hi all, I currently have a big problem. One query (listed above) using CONTEXT takes up to 1000 times longer than on a RAW or ext2 database. I ran this query on a single IA32 machine with Redhat and the dbf on ext2; the average response time is less than a second. The same query on a 4-node RAC cluster on RAW takes the same average time. Likewise on ext2. But on OCFS it took up to 15 seconds, randomly
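For reference, an Oracle Text CONTEXT query of the kind described looks like this (table and column names hypothetical):
  SQL> SELECT id FROM docs WHERE CONTAINS(body, 'keyword') > 0;
If only OCFS is slow, the suspect is how OCFS serves the DR$ index tables behind the CONTEXT index rather than the query itself.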
2007 Jan 09
2
ZFS Hot Spare Behavior
I physically removed a disk (c3t8d0, used by ZFS 'pool01') from a 3310 JBOD connected to a V210 running s10u3 (11/06) and 'zpool status' reported this: # zpool status pool: pool01 state: DEGRADED status: One or more devices could not be opened. Sufficient replicas exist for the pool to continue functioning in a degraded state. action: Attach the
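For comparison, a hedged sketch of the hot-spare flow being tested (c3t9d0 is a hypothetical spare; automatic kick-in requires the spare to be configured beforehand):
  # zpool add pool01 spare c3t9d0        # attach a hot spare to the pool
  # zpool replace pool01 c3t8d0 c3t9d0   # manual replacement if the spare does not kick in
  # zpool detach pool01 c3t8d0           # drop the failed disk once resilvering completes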
2009 Jan 06
2
cluster member hangs during reboot
Hi All, I inherited a 4-node ocfs2 cluster and recently 2 ocfs2 filesystems were added to be used as temp tablespace. One of the four nodes rebooted during the creation of the tablespace and hung at the message below... and it just sits there. If I put the server into rescue mode and comment out all the filesystems, it boots up fine and then I can mount the ocfs2 filesystems manually, but it cannot
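A common cause of this hang is the ocfs2 mounts being attempted before the cluster stack is up; marking them as network filesystems in /etc/fstab defers them until o2cb and networking are running (device and mount point hypothetical):
  /dev/mapper/temp01  /u03/temp01  ocfs2  _netdev,defaults  0 0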
2012 Mar 02
1
OCFS2 1.2/1.6
We are in the process of migrating to new database servers. Our current RAC clusters are running OCFS2 1.2.9 on RHEL 4. Our new servers are running OCFS2 1.6 on OEL 5. If possible, we would like to minimize the amount of data that needs to move as we migrate to the new servers. We have the following questions: 1. Can we mount an existing OCFS2 1.2 volume on a server running OCFS2 1.6?
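Before attempting the cross-version mount it is worth checking which feature flags are enabled on the existing volume, since newer features block older mounts; a sketch (device hypothetical):
  # debugfs.ocfs2 -R "stats" /dev/sdb1 | grep -i feature   # inspect the superblock feature flags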
2013 Feb 20
1
Announce: Module puppetlabs/postgresql 2.1.0 Available
A new release of the puppetlabs/postgresql module is now available on the Forge: https://forge.puppetlabs.com/puppetlabs/postgresql/2.1.0 Changelog ======== This release is primarily a feature release, introducing some new helpful constructs to the module. For starters, we've added the line `include 'postgresql_conf_extras.conf'` by default so extra parameters not
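To pull exactly this release from the Forge (assuming the puppet module tool is available on your master):
  $ puppet module install puppetlabs/postgresql --version 2.1.0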
2008 Oct 03
0
Destroying old zpools
I have a couple of zpools that are corrupted and I need to destroy them. However, I can't seem to get this done because zfs will not let me import the pools. Is there a way to forcefully destroy a leftover zpool regardless of the state it's in? Thanks. Brian. -- This message posted from opensolaris.org
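A hedged sequence for a leftover pool (pool and device names hypothetical; the dd step irrecoverably destroys data, and ZFS keeps labels at both ends of each device, so a front-only wipe may not be enough):
  # zpool import                                 # list the leftover pool and its member devices
  # zpool import -f tank && zpool destroy tank   # forced import then destroy, if the import succeeds
  # dd if=/dev/zero of=/dev/rdsk/c1t0d0s0 bs=512 count=8192   # last resort: wipe the front labels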
2006 Aug 02
0
OCFS and split mirror backups
Are there any known issues with using a split mirror of the OCFS partitions for backups? One option we are looking at is as follows: 1. Put the tablespace in backup mode. 2. Split the mirror at the SAN level. 3. End tablespace backup mode. 4. Mount the OCFS mirror under a different mount point. 5. Back up the files on the OCFS mirror using RMAN. The question is how feasible it is for step #4 to
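Steps 1-4 above, sketched as commands (tablespace and device names hypothetical; step 5's handling of the copies by RMAN is the part worth testing carefully):
  SQL> ALTER TABLESPACE data01 BEGIN BACKUP;
  (split the mirror on the SAN controller)
  SQL> ALTER TABLESPACE data01 END BACKUP;
  # mount -t ocfs /dev/sdq1 /backup/DPS   # mount the split copy under a different mount point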
2005 Apr 17
0
ORA-00600: [3020], async enabled
We are running 2-node RAC on RedHat Linux x86 (32-bit). The following was done on 4/10/2005: 1. Upgraded the OS to Update 4 (kernel-smp-2.4.21-27.0.2.EL) 2. Upgraded OCFS 1.0.12 to ocfs-2.4.21-EL-smp-1.0.14-1 3. Upgraded from 9.2.0.5 to 9.2.0.6 (in the 2-node RAC) 4. Changed the default temporary tablespace to a regular one (RAC bug fix) 5. Enabled async mode by applying patch 3208258_9206 ** Patch 4153303 applied on