similar to: Question about filesystem ownership

Displaying 20 results from an estimated 5000 matches similar to: "Question about filesystem ownership"

2004 Feb 12
3
Ocfs mount issues
I am trying to mount the ocfs partitions using the following command: mount -t ocfs -o uid=oracle,gid=dba /dev/sda /ocfs01, as user oracle and group dba. However, it mounts the volume as root. But if I use ocfstool for the first mount and mount it as oracle:dba, subsequent mounts using the above command line mount the volume as oracle:dba. Is there something that I am missing or I will have
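The invocation quoted in this excerpt can be written out as a small shell sketch; the device, mountpoint, and oracle/dba names are the poster's examples, not a recommendation:

```shell
# Sketch of the OCFS mount command from the excerpt above.
# uid=/gid= ask the ocfs driver to present the volume's files as
# oracle:dba; /dev/sda and /ocfs01 are the poster's example paths.
DEVICE=/dev/sda
MOUNTPOINT=/ocfs01
CMD="mount -t ocfs -o uid=oracle,gid=dba $DEVICE $MOUNTPOINT"
echo "$CMD"   # would be run as root on each node
```

Note the poster's observation: the options only seem to take effect after an initial ocfstool mount, which is the behavior being asked about.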
2004 Mar 02
3
Odd errors while mounting an OCFS filesystem
Hello again. I am setting up a new pair of servers to run RAC. They're connected via fibre-channel to a hardware RAID array, and both are able to see the exposed LUNs. When I create an OCFS filesystem on one node with mkfs.ocfs, I can mount it. When I try to mount from the other node, however, it fails. After that, the filesystem is left in a state where neither node can mount it. The
2004 Apr 19
5
OCFS Hang
Greetings, Having read about the previous OCFS hangs, I think this one that we are seeing is different, but I'm not sure if this is caused by OCFS or the Linux OS. We are running OCFS Version 1.09 with Linux AS 3.0/9i RAC. We have a 2 node Intel Cluster (Node 1 and Node 2). This morning the DBA tried to do an "ls" command on /u06/oradata/database
2004 Apr 21
1
Fwd: RE: OCFS Hang
Oh yeah - easy way to check, Randy: Next time your node hangs, get on the OTHER NODE and go into each directory where files are being opened (datafiles, archivelogs, controlfiles, redo logs, etc) and delete a file (you can create one first then delete it). If this causes the hung node to recover then you're having the same problem I was having. Jeremy >>> "Jeremy
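The create-then-delete probe described in this reply can be sketched as a shell helper; the function name and the example directory paths are hypothetical, not from the original mail:

```shell
# Per the excerpt: from the healthy node, create and then delete a
# scratch file in each directory holding open Oracle files (datafiles,
# archivelogs, controlfiles, redo logs). If the hung node recovers,
# you are seeing the same lock problem the poster describes.
probe_dirs() {
    for dir in "$@"; do
        probe="$dir/.ocfs_probe_$$"   # scratch file name, removed immediately
        touch "$probe" && rm -f "$probe" && echo "probed $dir"
    done
}
# Example (placeholder paths for the poster's Oracle directories):
# probe_dirs /u06/oradata/database /u06/oradata/archivelogs
```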
2004 Aug 30
2
FW: Observations
Hi Sunil, I'm looking into this thread now. Does this mean we cannot use the FTP option to copy OCFS files to ext3? If so, is there any ftp version available for OCFS, similar to cp --o_direct? Also, is there any version of sync available for OCFS (in a normal FS, sync flushes the kernel cache to disk so that the FS is consistent). By this can we say that the FS shared by both nodes is
2004 Apr 22
1
A couple more minor questions about OCFS and RHEL3
Sort of a followup... We've been running OCFS in sync mode for a little over a month now, and it has worked reasonably well. Performance is still a bit spotty, but we're told that the next kernel update for RHEL3 should improve the situation. We might eventually move to Polyserve's cluster filesystem for its multipathing capability and potentially better performance, but at least we
2004 Jun 29
1
seg fault using ocfstool
Hi all, I have the following configuration: RHAS 2.1, 2.4.9-e.40enterprise #1 SMP (2 node cluster), ocfs-support-1.1.2-1, ocfs-tools-1.1.2-1, ocfs-2.4.9-e-enterprise-1.0.12-1. I followed the users guide to install and configure ocfs. During definition and formatting of ocfs partitions the ocfstool crashed. Now restarting ocfstool fails with the following error: ocfstool Abnormal termination!
2004 Aug 23
2
Changing a node's hostname.
What steps do I need to take to change a host's name? I modified the /etc/ocfs.conf file but ocfstool still reports the old name under the "Configured Nodes" tab. I'm also wondering where ocfstool stores the partition information that it shows under the device list. I dumped a couple ocfs partitions but they still show up in ocfstool. Thanks, Don
2004 Jun 18
2
Problems with OCFSTOOL after LUN Maintenance
In installing 10g RAC we hit an issue that has been identified as a bug with CRS where the cluster locking files cannot be implemented under OCFS. Supposedly they work fine when implemented as raw devices. We needed to reclaim some space from existing LUNs to create the raw devices, as we had expected to be able to put all Oracle related files under OCFS (Yes, we believed the hype). This
2004 Oct 20
1
i-node showing 100% used whereas the partitions are empty
Hi Sunil, I had filed a bug and saw your response stating that it would be fixed in version 14. In the meanwhile, what we want to know is whether this bug is a minor bug that can be ignored for now. Does reporting 100% inodes cause any problem for the OCFS file system, or can we ignore this bug and go into production? Also, can you tell us by when version 14 would be released? R'gds
2004 Jul 16
12
OCFS Database too slow
Hi All, we are using Red Hat 2.1 kernel e38 along with an MSA 1000. The ocfs version being used is: $ rpm -qa | grep ocfs ocfs-tools-1.0.10-1 ocfs-2.4.9-e-enterprise-1.0.12-1 ocfs-support-1.0.10-1. Database version is 9.2.0.5. However we find that the performance of the database on OCFS is too slow; even a select count(1) from all_tables takes a while to complete. We initially assumed RAC is
2004 Mar 24
4
Follow up on async I/O question
A few weeks back we opened a TAR with Oracle support to determine whether an OCFS (1.0.9-12) + async configuration was considered supportable. At first a technician said yes, but I followed up with Wim's explanation of how that combination is potentially troublesome and inadequately tested. The technician double-checked, then confirmed that OCFS + async is considered risky. He did say that a
2004 Feb 11
4
Multiple interconnects
(Yep, it's me again) We've worked around some minor glitches and now have a pair of nodes happily sharing an OCFS volume. I was wondering, though, if it was possible to configure a second private IP address so that the nodes could communicate over more than one Gigabit Ethernet connection. Our RAC books and online docs make some vague references to multiple interconnects, but I have yet
2004 Aug 30
3
Observations
Hi, we have a 2 node / 3 node RAC installation with OCFS. We have the following observations. 1. TIME STAMP issue: We have noticed that the timestamp shown on the datafiles doesn't remain the same even after a shutdown normal / shutdown immediate, i.e. if I shut down all RAC instances (A, B, C) using shutdown normal / immediate, the timestamps on the datafiles are not the same. Even
2005 Apr 17
2
Quorum error
Had a problem starting Oracle after expanding an EMC Metalun. We get the following errors: >WARNING: OemInit2: Opened file(/oradata/dbf/quorum.dbf 8), tid = main:1024 file = oem.c, line = 491 {Sun Apr 17 10:33:41 2005 } >ERROR: ReadOthersDskInfo(): ReadFile(/oradata/dbf/quorum.dbf) failed(5) - (0) bytes read, tid = main:1024 file = oem.c, line = 1396 {Sun Apr 17 10:33:41 2005 }
2004 Jun 08
3
Major RAC slowdown
Hello again. Our production cluster has begun experiencing some vicious slowdowns that may (or may not) be related to the filesystems. When the problem occurs, the load average on the servers jumps up to 30 or higher. Usually one node will climb while the other drops, then they will switch places a few minutes later. At one point, we had one node's load average up over 300. Our site
2005 Nov 17
1
Startup error- new install
Looking for any ideas where I need to look to fix this: I'm installing RHEL3 AS (update 4) on Dell PowerEdge 6850's. I've installed the hugemem kernels on these boxes and need to install and run ocfs. Kernel: ------- 2.4.21-27.0.4.ELhugemem Loaded the ocfs rpm's --------------------- # rpm -qa | grep ocfs ocfs-2.4.21-EL-smp-1.0.14-1 ocfs-support-1.1.5-1 ocfs-2.4.21-EL-1.0.14-1
2003 Aug 11
1
Strange "feature"
Hi all! Doing my first steps with OCFS (1.0.9), I ran across a nifty little "feature"... We've been testing whether DMP with QLogic 2300 HBAs works without having DMP activated (ok, blame me...). ocfstool allows you to mount different partitions on the same mountpoint, but after we tried that, everything went to state "D" and we chose to reboot the whole cluster (4
2005 Aug 23
3
Not mounting on boot
Specs: Oracle 9.2.0.4 OS is Redhat AS2.1 ocfs-2.4.9-e-summit-1.0.12-1 ocfs-tools-1.0.10-1 ocfs-support-1.0.10-1 ocfs-2.4.9-e-enterprise-1.0.12-1 Shared Storage: Dell/EMC CX600 naviagentcli-6.19.0.4.14-1.noarch.rpm PowerPath 4.4 My system was originally installed by Dell. Since then I've upgraded the OCFS and a few other pkgs. But ever since the beginning the ocfs drives mounted on boot.
2004 Oct 13
1
i-node showing 100 % used whereas the partitions are empty
Output df -i
------------------
Filesystem    Inodes   IUsed  IFree IUse% Mounted on
/dev/sde      348548  348195    353  100% /ocfsa01
/dev/sdf      348548  348195    353  100% /ocfsa02
/dev/sdg      348548  348195    353  100% /ocfsa03
/dev/sdk      139410  138073   1337  100% /ocfsq01
Output df -kP
-----------------------
Filesystem
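The df -i sample above can be checked mechanically. A sketch, fed the same rows as literal input (the real check would run plain df -i on the affected hosts), that prints every filesystem whose inode table reports 100% used:

```shell
# Flag filesystems whose inode table is reported 100% used.
# The sample rows mimic `df -i` output from the excerpt above.
df_i_sample='Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/sde 348548 348195 353 100% /ocfsa01
/dev/sdf 348548 348195 353 100% /ocfsa02
/dev/sdg 348548 348195 353 100% /ocfsa03
/dev/sdk 139410 138073 1337 100% /ocfsq01'

# Skip the header (NR > 1) and print column 1 where IUse% is 100%.
full=$(printf '%s\n' "$df_i_sample" | awk 'NR > 1 && $5 == "100%" { print $1 }')
echo "$full"
```

On a live system the pipeline would be `df -i | awk '...'`; as the thread notes, all four OCFS partitions report full inode tables despite being empty, which is the reported bug.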