
Displaying 20 results from an estimated 100 matches similar to: "60% full and writes fail.."

2006 Sep 20
6
ocfs2 - disk usage inconsistencies
Hi all. I have a 50 GB OCFS2 file system. I'm currently using ~26 GB of space, but df is reporting 43 GB used. Any ideas how to find out where the missing 17 GB went? The file system was formatted with a 16K cluster and 4K block size. Thanks, Matt
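One way to narrow down this kind of df/du gap is to compare what the files add up to against what the superblock and allocators report. The sketch below is a minimal example, assuming a placeholder mount point /mnt/ocfs2 and device /dev/sdb1, and using the same pipe-into-debugfs.ocfs2 style seen elsewhere in these threads; the grep pattern is only a guess at the field labels.

MNT=/mnt/ocfs2        # placeholder mount point
DEV=/dev/sdb1         # placeholder device

# What the files themselves add up to
du -sh "$MNT"

# What statfs (df) reports for the whole volume
df -h "$MNT"

# Superblock view: block/cluster sizes and cluster counts
# ("stats" is a standard debugfs.ocfs2 command; labels vary by tools version)
echo "stats" | debugfs.ocfs2 "$DEV" | grep -i -E 'cluster|block'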
2008 Feb 27
6
"no space left on device" related to directory limit
Hello, We have a 3-node cluster setup with ocfs2. Since Friday one of the nodes went down and would not become a cluster member after a reboot because it was unable to write to the ocfs2 filesystem. Message: no space left on device. There is plenty of disk space though. No problem whatsoever creating a file or directory on the filesystem from one of the other nodes. Today one of the remaining
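When only one node hits "no space left on device" while the others can still create files, one thing worth inspecting is that node's slot-local allocators. A hedged sketch, assuming the failing node uses slot 0 and a placeholder device /dev/sdb1; per-slot system files follow the //inode_alloc:NNNN naming, and the exact output labels depend on the tools version.

DEV=/dev/sdb1    # placeholder device

for f in //inode_alloc:0000 //extent_alloc:0000 //local_alloc:0000; do
    echo "== $f =="
    echo "stat $f" | debugfs.ocfs2 "$DEV" | grep -i -E 'free|total|bits'
done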
2011 Feb 17
0
Fwd: Re: Determining which version of ocfs2 tools a filesystem was created with.
Sorry all, forgot to hit reply-all. ---------- Forwarded Message ---------- Subject: Re: [Ocfs2-users] Determining which version of ocfs2 tools a filesystem was created with. Date: Thursday 17 February 2011, 12:33:36 From: Mikey Austin <mikey at mikeyaustin.com> To: Sunil Mushran <sunil.mushran at oracle.com> On Wednesday 09 February 2011 11:40:01 you wrote: > On 02/07/2011
2004 Aug 02
6
Calculating volume size from superblock
Another simple question. How do I calculate the size of the volume from the superblock? Do I just use the two fields:
u_int32_t s_blocksize_bits;    /* Blocksize for this fs */
u_int32_t s_clustersize_bits;  /* Clustersize for this fs */
What is the formula to use? Thanks, John
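For reference, those two bit fields only give the unit sizes (1 << bits); to get a volume size you also need the total cluster count, typically read from i_clusters on the superblock's dinode. A minimal sketch of the arithmetic with made-up numbers:

CLUSTERSIZE_BITS=17      # s_clustersize_bits -> 128K clusters
BLOCKSIZE_BITS=12        # s_blocksize_bits   -> 4K blocks
TOTAL_CLUSTERS=409600    # i_clusters, read from the superblock's dinode

# volume size in bytes = clusters * cluster size
echo $(( TOTAL_CLUSTERS << CLUSTERSIZE_BITS ))                      # 53687091200 (~50 GB)

# the same size expressed in filesystem blocks
echo $(( TOTAL_CLUSTERS << (CLUSTERSIZE_BITS - BLOCKSIZE_BITS) ))   # 13107200 blocks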
2012 May 30
4
Reproducing fragmentation and out of space error
Recently I ran into a situation where an ocfs2 (1.4) volume was reporting it was out of space when it was not. Deleting some files helped in the short term, but the problem quickly comes back. I believe this is due to the fragmentation bug that I have seen referenced in the mailing list archive. I am trying to reproduce the problem on a test system so that I can validate that upgrading to 1.6
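A common way to fragment the free space on a throwaway test volume is to fill it with many small files and then delete every other one, so that only scattered clusters remain free. The sketch below is a hypothetical reproducer along those lines, not the procedure from the original thread; the mount point and sizes are made up.

MNT=/mnt/ocfs2-test    # placeholder mount point

# fill the volume with small files until the first write fails
i=0
while dd if=/dev/zero of="$MNT/frag.$i" bs=64k count=1 2>/dev/null; do
    i=$((i + 1))
done

# delete every other file, leaving the free space in small, scattered chunks
for f in "$MNT"/frag.*; do
    n=${f##*.}
    [ $((n % 2)) -eq 0 ] && rm -f "$f"
done

# a large allocation may now fail with ENOSPC even though df shows free space
dd if=/dev/zero of="$MNT/big" bs=1M count=1024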
2010 Dec 09
2
servers blocked on ocfs2
Hi, we have recently started to use ocfs2 on some RHEL 5.5 servers (ocfs2-1.4.7). Some days ago, two servers sharing an ocfs2 filesystem, and running quite a few virtual services, stalled, in what seems to be an ocfs2 issue. These are the lines in their messages files: =====node heraclito (0)======================================== Dec 4 09:15:06 heraclito kernel: o2net: connection to node parmenides
2010 Nov 23
1
Understanding debugfs.ocfs2 output
This is related to the "No space on OCFS2 volume" error discussed here this past Sep/Oct. Our Oracle support rep pointed us to Metalink note #1232702.1 and suggested we should script something up to periodically check the free contiguous blocks in the group chains for the volume in question. Reading the note, I understand how to get Clusters per Group x Bits per Cluster from the "stat
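A periodic check along those lines can be scripted around debugfs.ocfs2. The sketch below is a rough, hypothetical version that dumps the global bitmap's chain records and flags groups with few free bits; the field layout matched by awk is an assumption, so the pattern will likely need adjusting to your tools version's exact output.

DEV=/dev/sdb1     # placeholder device
THRESHOLD=2048    # assumed minimum free bits per group

echo "stat //global_bitmap" | debugfs.ocfs2 "$DEV" |
    awk -v min="$THRESHOLD" '/Free:/ {
        for (i = 1; i <= NF; i++)
            if ($i == "Free:" && $(i + 1) + 0 < min)
                print "low free bits: " $0
    }'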
2015 Jul 25
0
extent alloc gd abnormally cleared
Hi All, We have encountered a case where an extent alloc group descriptor has been abnormally cleared. In our environment, the volume was formatted with 128 slots but actually has 64 slots in use. Since the extent allocators are created at format time, and extent_alloc:0107 (inode 259) hasn't been used in our environment, I have no idea how it can happen. Does anyone have an idea? The fsck log is attached below:
2009 Jan 26
1
ocfs2 + drbd primary/primary "No space left on device"
Hello. I'm having issues using ocfs2 and drbd in dual-primary mode. After running some filesystem tests that create a lot of small files, I very quickly run into "No space left on device". The non-failing node is able to read from and write to the filesystem, and the failing node is still able to read from and delete on the filesystem. Ubuntu custom kernel 2.6.27.2 o2cb_ctl version 1.3.9 drbd
2009 Jul 27
11
[PATCH 0/8] Quota support for ocfs2-tools
Hi, I'm sending a series of patches implementing quota support in ocfs2-tools. It's the same as the original huge patch I sent, but now it's split up as Joel asked. I've also realized that when disabling the SPARSE feature, we should update quota information. That piece of code is missing; I'll implement it soon. Comments welcome. Honza
2011 Dec 20
8
ocfs2 - Kernel panic on many write/read from both
Sorry, I didn't copy everything:
TEST-MAIL1# echo "ls //orphan_dir:0000"|debugfs.ocfs2 /dev/dm-0|wc
debugfs.ocfs2 1.6.4
5239722 26198604 246266859
TEST-MAIL1# echo "ls //orphan_dir:0001"|debugfs.ocfs2 /dev/dm-0|wc
debugfs.ocfs2 1.6.4
6074335 30371669 285493670
TEST-MAIL2 ~ # echo "ls //orphan_dir:0000"|debugfs.ocfs2 /dev/dm-0|wc
debugfs.ocfs2 1.6.4
5239722 26198604
2008 Jun 24
1
[RFC][PATCH] btrfs orphan code
Hello, I want to throw this out here now that I've got most of the heavy lifting done for this code, to make sure what I'm doing is ok for now. I've added an ORPHAN_DIR item key to have a hidden dir per root. Right now it just does it for whatever the default root is on mount, but I'm going to fix that to do the orphan dir check/creation on lookup of a
2010 Nov 03
2
[PATCH 1/2] Ocfs2: Add a new code 'OCFS2_INFO_FREEINODE' for o2info ioctl.
The new code is dedicated to calculating the number of free inodes in all inode_allocs and returning that info to userspace as an array. Specifically, the flag 'OCFS2_INFO_FL_NON_COHERENT', manipulated by '--cluster-coherent' from userspace, is now involved. Setting the flag on means no cluster coherency is considered; usually, userspace tools choose the non-coherent strategy by
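For context, this information is exposed through the o2info utility in later ocfs2-tools releases; a minimal usage sketch, with the caveat that the exact option spelling and the coherency default may differ between versions:

# ask for the free inode counts per slot (cheaper, possibly stale answer)
o2info --freeinode /dev/sdb1

# same query, but requesting cluster-coherent numbers
o2info --cluster-coherent --freeinode /dev/sdb1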
2009 Oct 19
1
About DISK space of OCFS2.
Hi ALL I have a question about disk space on OCFS2. I copied a file with the "cp" command after checking the disk space with "df -k". There was no change when I checked the disk space with "df -k" again. I show the procedure below. ------------------------------------------------------------------------------- root at CPU_N:/fm/bbb> ls -l total 3 -rwxr-xr-x 1
2008 Jun 09
0
OCFS2 1.2.9-1 for RHEL4 and RHEL5 released
All, We are pleased to announce the release of OCFS2 1.2.9-1 for RHEL4 and RHEL5 on x86, x86_64, ppc64 and ia64 architectures. This release includes bug fixes, most of which have been backported from the mainline kernel. Some of the more interesting ones have been described in detail. For the full list of changes, please refer to the news.
2009 Feb 28
1
[PATCH 1/1] Patch to recover orphans from the slot during mount
Currently we only queue recovery during mount if the journal is dirty. If the last node holding orphans in other nodes' orphan directories dies and is the first one to mount, then it only recovers its own orphan directory, which leaves the orphans in the other nodes' slots. Since the other nodes' journals are clean, they will not queue recovery of their orphan directories. This patch queues to recover orphans
2009 Feb 19
2
Patch to recover orphans in offline slots
This patch is against ocfs2-1.4 and also applies to ocfs2-1.2. ocfs2 mainline requires only the first portion of the patch, so I will make a separate patch for that.
2009 Jul 30
11
[PATCH 0/9] Quota support for ocfs2-tools (version 2)
Hi, this is the next version of quota support for ocfs2-tools. I've addressed all the comments from Tao, Joel and others. Sparse feature disabling also correctly updates quota information now, and that patch is merged into the tunefs support patch. Honza
2009 Mar 04
2
[PATCH 1/1] Patch to recover orphans in offline slots during recovery and mount
During recovery, a node recovers orphans in its own slot and in the dead node(s)' slots. But if the dead nodes were holding orphans in offline slots, those will be left unrecovered. If the dead node is the last one to die, is holding orphans in other slots, and is the first one to mount, then it only recovers its own slot, which leaves orphans in the offline slots. This patch queues complete_recovery