Displaying 20 results from an estimated 8000 matches similar to: "slowdown - fragmentation?"

2009 Aug 25 (1 reply): Clear Node
I am trying to make a MySQL standby setup with two machines, one primary and one hot standby, which share a disk for the data directory. I used tunefs.ocfs2 to change the number of node slots to 1, since only one machine should be accessing it at a time; this way it is fairly safe to assume one node won't clobber the other's data. The only problem is that if one node dies, the mount lock still ...
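
A minimal sketch of the slot commands involved, assuming a hypothetical shared device /dev/sdb1 and that the volume is unmounted on every node:

    # Show the current node-slot count from the superblock (debugfs.ocfs2
    # opens the device read-only by default)
    debugfs.ocfs2 -n -R "stats" /dev/sdb1 | grep -i slot
    # Reduce the volume to a single node slot
    tunefs.ocfs2 -N 1 /dev/sdb1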

2009 Dec 16 (4 replies): No space left on device
On Sat, Feb 28, 2009 at 18:13:12 PST, Joel Becker wrote: > On Sat, Feb 28, 2009 at 12:09:37PM +0000, Nuno Fernandes wrote: > > > That's rather odd. What is the blocksize and cluster size of this filesystem? Can you send me the output of 'debugfs.ocfs2 -R "stat /.zbr" /dev/hda3'? > > > Joel ...
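
For reference, the block and cluster sizes Joel asks about can be read from the superblock without mounting; a sketch using the device name from the thread:

    # Superblock statistics; the output includes "Block Size Bits" and
    # "Cluster Size Bits" on a single line
    debugfs.ocfs2 -n -R "stats" /dev/hda3 | grep -i "size bits"
    # Inspect the inode of the problem path, as requested above
    debugfs.ocfs2 -R "stat /.zbr" /dev/hda3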

2010 Jan 26 (2 replies): No space left on device in one node
Hi! We operate a 2-node cluster running OCFS2 on top of DRBD. df shows about 4.3 GB of free space on the OCFS2 filesystem on both nodes, but one node can't even write 10 MB. df (output identical on both nodes):
$ df -k /cluster
Filesystem     1K-blocks      Used  Available Use% Mounted on
/dev/drbd0      83883484  80071096    3812388  96% /cluster
$ df -i /cluster ...
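
One explanation that comes up on this list for "free space visible, but one node can't write" is the per-slot local allocator running out of contiguous clusters. A diagnostic sketch, assuming the failing node uses slot 0 (the slot number is a guess):

    # Global free-space accounting; system files use the double-slash prefix
    debugfs.ocfs2 -R "stat //global_bitmap" /dev/drbd0
    # The local allocation window reserved for slot 0
    debugfs.ocfs2 -R "stat //local_alloc:0000" /dev/drbd0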

2010 Jun 14 (3 replies): Diagnosing some OCFS2 error messages
Hello. I am experimenting with OCFS2 on SUSE Linux Enterprise Server 11 Service Pack 1, performing various stress tests. My current exercise involves writing to files through a shared-writable mmap() from two nodes. (Each node mmaps and writes to different files; I am not trying to access the same file from multiple nodes.) Both nodes are logging messages like these: [94355.116255] ...

2006 Aug 12 (1 reply): OCFS - EMC Issue
I have an issue related to Oracle 10g RAC. I have a 2-node cluster, each node a Dell 2850 server running RHEL 4.0, and an EMC CX300 SAN with the following partitions: /orasoft, 10 GB, OCFS2 filesystem; /oracrs, 2 GB, OCFS2 filesystem; /orabackup, 100 GB, OCFS2 filesystem. The datafiles are on ASM, which is not directly visible in the OS.

2011 Dec 06 (2 replies): OCFS2 showing "No space left on device" on a device with free space
Hi, I am getting the error "No space left on device" on an OCFS2 filesystem that still has free space. Additional information is below:
[root@sai93 staging]# debugfs.ocfs2 -n -R "stats" /dev/sdb1 | grep -i "Cluster Size"
Block Size Bits: 12 Cluster Size Bits: 15
[root@sai93 ~]# cat /etc/redhat-release
Red Hat Enterprise Linux Server release ...
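
Those two values are log2 sizes, so this volume uses 4 KB blocks and 32 KB clusters; a one-line check of the arithmetic:

    echo $((1 << 12)) $((1 << 15))   # prints: 4096 32768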

2012 May 30 (4 replies): Reproducing fragmentation and out-of-space errors
Recently I ran into a situation where an OCFS2 (1.4) volume reported it was out of space when it was not. Deleting some files helped in the short term, but the problem quickly came back. I believe this is due to the fragmentation bug referenced in the mailing list archive. I am trying to reproduce the problem on a test system so that I can validate that upgrading to 1.6 ...
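
A rough sketch of one way to provoke free-space fragmentation on a test volume: fill it with small files, then delete every other one so the free space ends up as scattered 4 KB holes. The mount point and file count are assumptions; size the loop so it actually fills your test volume.

    #!/bin/bash
    MNT=/mnt/ocfs2-test   # hypothetical test mount
    # Fill the volume with 4 KB files until a write fails
    for i in $(seq 1 100000); do
        dd if=/dev/zero of="$MNT/f$i" bs=4K count=1 2>/dev/null || break
    done
    # Delete every other file, leaving non-contiguous free clusters
    for i in $(seq 1 2 100000); do
        rm -f "$MNT/f$i"
    done
    df -k "$MNT"   # reports plenty of free space...
    # ...yet a large allocation may now fail with ENOSPC
    dd if=/dev/zero of="$MNT/big" bs=1M count=512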

2009 Jan 26 (1 reply): ocfs2 + drbd primary/primary "No space left on device"
Hello. I'm having issues using OCFS2 and DRBD in dual-primary mode. After running some filesystem tests that create a lot of small files, I very quickly run into "No space left on device". The non-failing node is able to read from and write to the filesystem, and the failing node is still able to read and delete. Ubuntu, custom kernel 2.6.27.2, o2cb_ctl version 1.3.9, drbd ...

2008 Feb 27 (6 replies): "no space left on device" related to directory limit
Hello. We have a 3-node cluster setup with OCFS2. On Friday one of the nodes went down, and after a reboot it would not rejoin the cluster because it was unable to write to the OCFS2 filesystem, failing with "no space left on device". There is plenty of disk space, though, and there is no problem whatsoever creating a file or directory on the filesystem from one of the other nodes. Today one of the remaining ...

2009 May 07 (1 reply): df & du - that old chestnut
Afternoon. We have an OCFS2 release 1.4 filesystem shared between two nodes (RHEL5), used exclusively for Oracle RMAN backups. df -h shows the following:
[root@imsthdb07 ~]# df -h /data/orabackup
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/eva_mpio_myserver07_08_oracle_bkup0 250G ...
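
When df and du disagree on a backup volume, the usual first suspect is files that were deleted while a process still holds them open; a sketch of that check:

    # What the filesystem thinks vs. what the directory tree contains
    df -h /data/orabackup
    du -sh /data/orabackup
    # Open files with a link count of zero (deleted but still held open)
    lsof +L1 | grep /data/orabackup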

2013 Nov 01 (1 reply): How do I check fragmentation amount?
How can I check the amount of fragmentation on an OCFS2 volume? Thanks, Andy
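
A sketch of what debugfs.ocfs2 offers, assuming a hypothetical device /dev/sdb1 and sample file; newer versions of ocfs2-tools ship a "frag" command that reports an inode's cluster and extent counts:

    # Per-file fragmentation (clusters vs. extents)
    debugfs.ocfs2 -R "frag /some/file" /dev/sdb1
    # Overall free-space picture from the global bitmap
    debugfs.ocfs2 -R "stat //global_bitmap" /dev/sdb1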

2006 Oct 19 (1 reply): Fragmentation problem: Archive logs on ocfs1 and ocfs2
Hello all. I have a few questions about our use of ocfs1/2 for archive logs on 10g RAC. Is there an article out there describing why fragmentation is a special concern for ocfs1/2? Are there ways to remove fragmentation short of rebuilding the fs? Is there a way to estimate how often we will need to rebuild the fs? Are any special tools/packages available to handle this issue? Regards, Pradeep.

2009 Aug 21 (1 reply): Ghost files in OCFS2 filesystem
Hi, I have encountered an issue on an Oracle RAC cluster using OCFS2; the OS is RH Linux 5.3. One of the OCFS2 filesystems appears to be 97% full, yet the files in it total only about 13 GB (the filesystem is 40 GB in size). I have seen this sort of thing on HP-UX, but that involved a process whose output file was deleted while the process hadn't been stopped properly; once ...

2007 Sep 06 (1 reply): 60% full and writes fail
I have a setup with lots of small files (Maildir) in 4 different volumes, and for some reason the volumes are full when they reach 60% usage (as reported by df). This was of course a bit of a surprise for me: lots of failed writes, bounced messages, and very angry customers. Has anybody on this list seen this before (not the angry customers ;-))? Regards, =paulv # echo "ls ...

2010 Dec 09 (1 reply): Extremely poor write performance, but read appears to be okay
Hello. I'm writing from the other side of the world from where my systems are, so details are coming in slowly. We have a 6 TB OCFS2 volume shared across 20 or so nodes, all running OEL 5.4 with ocfs2-1.4.4. The system has worked fairly well for the last 6-8 months, but something has happened over the last few weeks that has driven write performance nearly to a halt. I'm not sure how to proceed, and ...

2011 Mar 22 (6 replies): Bug resolved yet for exporting OCFS2 volume to NFS client?
I found this in the OCFS2 1.4 documentation: "g) NFS: OCFS2 volumes can be exported as NFS volumes. This support is limited to NFS version 3, which translates to Linux kernel version 2.4 or later. Users must mount the NFS volumes on the clients using the nordirplus mount option. This disables the READDIRPLUS RPC call to work around a bug in NFSD, detailed in the following link:" ...
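
On the client side the quoted advice translates to an NFSv3 mount with READDIRPLUS disabled; a sketch with hypothetical server and paths:

    mount -t nfs -o vers=3,nordirplus nfsserver:/export/ocfs2vol /mnt/ocfs2nfs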

2011 Mar 11 (3 replies): What could cause slowdown between OCFS2 1.2.9 and 1.4.4
We upgraded our production database cluster (6 nodes) from EL4 Update 5 to EL5 Update 5, including upgrading OCFS2 from 1.2.9 to 1.4.4. We are now noticing a slowdown of batch jobs in Oracle, while hot backups run faster. One thing we saw is that the journal mode changed from write-back to ordered, as we don't specify a journal mode during mount. Oracle sees this as a slowdown based on higher I/O latency, ...
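
OCFS2 accepts the same data-journaling mount options as ext3, so the old behaviour can be requested explicitly; a sketch with an assumed device name:

    # Mount with write-back data journaling instead of the default ordered mode
    mount -t ocfs2 -o data=writeback /dev/mapper/oradata /mnt/oradata
    # Confirm which mode is active
    grep ocfs2 /proc/mounts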

2009 May 20 (1 reply): [Fwd: Re: Unable to fix corrupt directories with fsck.ocfs2]
Robin, to me "anyone else" includes the kernel of the current node; if that is unclear, the man page should be revised. A big warning message in fsck.ocfs2 would also be nice; after all, we all make mistakes. But that is only my two cents. Running fsck on any journaled filesystem will replay the journal, which will cause corruption if the filesystem is mounted read/write, even if the ...
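
The safe sequence implied here is to verify that no node in the cluster has the volume mounted before checking it; a sketch with an assumed device name:

    # Ask the cluster which nodes have the volume mounted
    mounted.ocfs2 -f /dev/sdb1
    # Only when it is mounted nowhere: check it (the journal is replayed first)
    fsck.ocfs2 -f /dev/sdb1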

2007 Feb 21 (1 reply): Performance problems while reading
Hi all. We are using a 2-node cluster with DRBD 8 (primary/primary) and OCFS2. Reading a file on one node while it is being written on the other node is very slow; reading a file on a node while it is being written on that same node is fast. In the first case, the node that wants to read the file has to ask the other node to downgrade its lock level. In my opinion this is a bottleneck if the files are ...

2010 Mar 12 (1 reply): [PATCH] ocfs2: Always try for maximum bits with new local alloc windows
What we were doing before was to ask for the current window size as the maximum allocation. This had the effect of limiting the amount of allocation we could get for the local alloc at times when the window size had shrunk due to fragmentation. In some cases, that could actually *increase* fragmentation by artificially limiting the number of bits we can accept. So while we still want to ask ...