similar to: Really high io wait times with OCFS2 in a mail server

Displaying 20 results from an estimated 10000 matches similar to: "Really high io wait times with OCFS2 in a mail server"

2007 May 22
1
Re: Ocfs2-users Digest, Vol 41, Issue 21
Dear all,
> Caveats
> =======
> Features which OCFS2 does not support yet:
> - extended attributes
> - readonly mount
> - shared writeable mmap
> - loopback is supported, but data written will not be cluster coherent.
> - quotas
> - cluster aware flock <----------------------------------------
2007 May 21
1
slow file creation
Hi all, I'm troubleshooting an ocfs2 performance problem where creating files in a directory containing ~180k files is taking significant time to complete. Sometimes creating an empty file will take >100 seconds to complete. This is a three node cluster. I'm currently running OCFS2 1.2.3-1. Are there any changes in a recent version that may address this issue? What should I look
2007 Jul 05
1
ZFS on CLARiiON SAN Hardware?
Does anyone on the list have definitive information on whether ZFS works with CLARiiON devices?
bash-3.00# uname -a
SunOS XXXXXXX 5.10 Generic_118833-33 sun4u sparc SUNW,Sun-Fire-V245
bash-3.00# powermt display dev=all
Pseudo name=emcpower0a
CLARiiON ID=APM00033500540 [XXXXXXX]
Logical device ID=600601607C550E00F25F4629AFBEDB11 [LUN 61]
state=alive; policy=BasicFailover; priority=0; queued-IOs=0
2015 Jun 25
0
LVM hatred, was Re: /boot on a separate partition?
On 06/25/2015 01:20 PM, Chris Adams wrote: > ...It's basically a way to assemble one arbitrary set of block devices > and then divide them into another arbitrary set of block devices, but > now separate from the underlying physical structure. > Regular partitions have various limitations (one big one on Linux > being that modifying the partition table of a disk with in-use
2004 Mar 02
0
Fail to create an OCFS volume on the hard disk via a QLA2312 Fibre Channel card.
/dev/rawctl is part of the "raw block device" driver interface. You need to turn on the raw device driver option (CONFIG_RAW_DRIVER) in your kernel build. I have always created a new file system from my 2.4 kernel (that has all possible modules built and properly configured to load when needed), so I don't know for sure if mkfs.ocfs will work correctly after turning on the raw
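For reference, the option mentioned above lives under the Character devices menu; a minimal fragment of a 2.4-era kernel .config might look like the following (the exact option set varies by kernel version, so treat this as illustrative):

```
# Character devices
CONFIG_RAW_DRIVER=y
```

Once the driver is built in, raw bindings are created at runtime with raw(8), e.g. `raw /dev/raw/raw1 /dev/sdb1`.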
2005 Aug 04
3
Ocfs and EMC Powerpath
A couple years ago, we moved to Oracle RAC on Linux using ocfs that is SAN attached to an EMC CLARiiON. At the time, there was a reason that we did NOT use EMC's Powerpath (I just can't recall what that reason was). What I'd like to know is if there are any issues with introducing Powerpath now.
* RHEL 2.1 AS
* 2.4.9-e.38enterprise
* ocfs-2.4.9-e-enterprise-1.0.13-1
* EMC
2005 Aug 04
0
[Fwd: Re: Ocfs and EMC Powerpath]
That should say below, we are using Powerpath on the CX300's AND the CX600's... ----------------------------------------------------------------------- We are using Powerpath with CX300's and Emulex HBA's, also with OCFS on Red Hat 2.1 AS. Kernel upgrades are the biggest issue that we have, as some of the drivers are dependent on modules built for specific kernels, and if your
2013 Apr 24
0
EMC Clariion standby power supply
Hello All. I'm working on building support for the Clariion range of standby power supplies. I've found you can get hold of these easily on eBay, e.g.: http://www.ebay.co.uk/itm/EMC-CLARiiON-Standby-Power-Supply-118031985-SPS-1000W-/160373295258 There is a problem that I haven't been able to work around. After 90 seconds without power, the device shuts down. This is because
2020 Jun 30
1
what's the advantage of NetworkManager for server?
On 6/29/20 11:20 AM, Gordon Messmer wrote: > ... > In the event of a power loss, many servers will boot faster than the > managed Ethernet switch they are attached to. Systems managed by > network-scripts may not set up their network because there is no > carrier at the time that network-scripts start up. > > NetworkManager, on the other hand, will set up networking
2009 Oct 27
4
EMC CX4 Clariion
Hi, We received a new EMC storage array (Clariion CX4), and the EMC analyst told us that CentOS isn't on their support list, but RHEL is :) Does anybody on the list have a CentOS host talking (iSCSI) with an EMC storage array, preferably using the EMC Powerpath software, who can talk about their experience? Or will I need to buy some RHEL licenses? I'll put my hands on it only next
2009 Dec 16
0
FW: Import a SAN cloned disk
-----Original Message-----
From: Bone, Nick
Sent: 16 December 2009 16:33
To: oab
Subject: RE: [zfs-discuss] Import a SAN cloned disk
Hi, I know that EMC don't recommend a SnapView snapshot being added to the original host's Storage Group, although it is not prevented. I tried this just now, assigning the Clariion snapshot of the pool LUN to the same host. Although the snapshot LUN
2007 Jul 09
0
Kernel panic with OCFS2 1.2.6 for EL5 (answer to multipath question)
Hello This is a bit off-topic, but here goes: Try installing the "device-mapper-multipath" package and follow the documentation below, residing at the powerlink.emc.com site: "Native Multipath Failover Based on DM-MPIO for v2.6.x Linux Kernel and EMC® Storage Arrays" When configured, this works just as well as PowerPath. This works great on RHEL / CentOS 5 Daniel On
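As a sketch of what the EMC DM-MPIO document typically ends up configuring, a minimal /etc/multipath.conf stanza for a CLARiiON (SCSI vendor ID "DGC") might look like this. The exact keyword set (e.g. prio vs. prio_callout) depends on the multipath-tools version, so treat the values as illustrative and follow the EMC paper named above for a supported configuration:

```
defaults {
        user_friendly_names yes
}
devices {
        device {
                vendor                  "DGC"
                product                 ".*"
                path_grouping_policy    group_by_prio
                path_checker            emc_clariion
                hardware_handler        "1 emc"
                failback                immediate
                no_path_retry           60
        }
}
```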
2009 Feb 13
1
[PATCH 1/1] OCFS2: add IO error check in ocfs2_get_sector() -v3
Checks for IO errors in ocfs2_get_sector(). This patch is based on Linus' git tree. Signed-off-by: Wengang Wang <wen.gang.wang at oracle.com>
--
diff -up ./fs/ocfs2/super.c.orig ./fs/ocfs2/super.c
--- ./fs/ocfs2/super.c.orig 2009-02-12 18:05:19.023685000 -0800
+++ ./fs/ocfs2/super.c 2009-02-12 18:07:13.995623000 -0800
@@ -1537,6 +1537,13 @@ static int ocfs2_get_sector(struct super
2008 Sep 02
1
[PATCH] ocfs2: Fix a bug in direct IO read.
ocfs2 will become read-only if we try to read bytes past the end of i_size. This can be easily reproduced with the following steps:
1. mkfs an ocfs2 volume with bs=4k, cs=4k and nosparse.
2. Create a small file (say, less than 100 bytes); the file will be allocated 1 cluster.
3. Read 8196 bytes from the kernel using O_DIRECT, which exceeds the limit.
4. The ocfs2 volume
2009 Feb 12
0
[PATCH 1/1] OCFS2: add IO error check in ocfs2_get_sector()
Checks for IO errors in ocfs2_get_sector(). This patch is based on the 1.4 git tree. Signed-off-by: Wengang Wang <wen.gang.wang at oracle.com>
--
Index: fs/ocfs2/super.c
===================================================================
--- fs/ocfs2/super.c (revision 128)
+++ fs/ocfs2/super.c (working copy)
@@ -1203,6 +1203,11 @@ static int ocfs2_get_sector(struct super
 unlock_buffer(*bh);
2006 Oct 11
2
out of memory... doing heavy IO on ocfs2 is wasting (low) memory?!
What's the status on this? I've researched Bugzilla, SVN, and the lists and haven't seen any mention of it being fixed as of yet. Kurt or Sunil, do you have a patch available that I could try? Otherwise, what's the Bugzilla ID so I can follow its progress? Any help you can give would be appreciated. Thanks! -Jonah
2009 Feb 12
2
[PATCH 1/1] OCFS2: add IO error check in ocfs2_get_sector() -v2
Checks for IO errors in ocfs2_get_sector(). This patch is based on the 1.4 git tree. Signed-off-by: Wengang Wang <wen.gang.wang at oracle.com>
--
Index: fs/ocfs2/super.c
===================================================================
--- fs/ocfs2/super.c (revision 128)
+++ fs/ocfs2/super.c (working copy)
@@ -1203,6 +1203,12 @@ static int ocfs2_get_sector(struct super
 unlock_buffer(*bh);
2004 Mar 16
0
Re: Revision 559 of ocfs2/src/inc/io.h
John, This makes a little more sense if you take a look at the related changes over in hash.c and the change to the prototype of ocfs_bh_sem_lock_modify in proto.h. This is one of those instances where you absolutely require a macro in order to get at the __FUNCTION__, __FILE__ and __LINE__ preprocessor magic. The problem we were trying to solve here was that we were hitting a lot of
2011 Oct 09
1
Btrfs High IO-Wait
Hi, I have high IO-wait on the OSDs (Ceph); the OSDs are running a v3.1-rc9 kernel. I also see high IO rates, around 500 IO/s reported via iostat.
Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sda 0.00 0.00 0.00 6.80 0.00 62.40 18.35 0.04 5.29 0.00 5.29 5.29 3.60
sdb
2008 Jun 04
1
OCFS2 and direct-io writes
I am looking at the possibility of using OCFS2 with an existing application that requires very high throughput for read and write file access. Files are created by a single writer (process) and can be read by multiple readers, possibly while the file is being written. 100+ different files may be written simultaneously, and can be read by 1000+ readers. I am currently using XFS on a local filesystem,