Magnus Månsson
2006-Oct-13 14:44 UTC
FW: e2defrag - Unable to allocate buffer for inode priorities
I have done some more research and found the following:
thor:~# df -i
Filesystem Inodes IUsed IFree IUse% Mounted on
-[cut]-
/dev/mapper/vgraid-data
475987968 227652 475760316 1% /data
thor:~# strace e2defrag -r /dev/vgraid/data
-[cut]-
mmap2(NULL, 1903955968, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0)
= 0x46512000
(delay 15 seconds while allocating memory)
mmap2(NULL, 475992064, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0)
= -1 ENOMEM (Cannot allocate memory)
-[cut]-
The first allocation appears to be 4 bytes per available inode on my filesystem. I
now wish I had created the FS with fewer inodes, which raises another question:
what is the gain of having fewer available inodes? If I recreated my filesystem,
would it make sense to allocate one inode per hundred blocks or so, since that
is still far more than I need? Would I gain speed from it?
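A quick sanity check of that guess (a sketch; the 4-bytes-per-inode figure is inferred from the strace output, not from e2defrag's documentation). The fsck numbers below also suggest this filesystem has roughly one inode per two 4 KiB blocks, which is mke2fs's default ratio; mke2fs's -i flag sets that bytes-per-inode ratio at creation time:

```shell
# Inferred: e2defrag's first buffer is about 4 bytes per inode.
inodes=475987968
echo $((inodes * 4))    # 1903951872 -- within one page of the 1903955968 mmap2 request

# One inode per hundred 4 KiB blocks would be a bytes-per-inode ratio of:
echo $((100 * 4096))    # 409600
# e.g. (DESTROYS DATA -- shown only as a sketch, not a recommendation):
# mke2fs -i 409600 /dev/vgraid/data
```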
-----Original Message-----
From: Magnus Månsson
Sent: 13 October 2006 14:14
To: 'ext3-users at redhat.com'
Subject: e2defrag - Unable to allocate buffer for inode priorities
Hi, first of all, apologies if this isn't the right mailing list but it was
the best I could find. If you know a better mailing list, please tell me.
Today I tried to defrag one of my filesystems. It's a 3.5 TB filesystem built
on six software RAID arrays merged together with LVM. I was running ext3 but
removed the journal flag with:
thor:~# tune2fs -O ^has_journal /dev/vgraid/data
After that I ran fsck just to be sure I wouldn't hit any unexpected problems.
Now it was time to defrag; I used this command:
thor:~# e2defrag -r /dev/vgraid/data
After about 15 seconds (after it had eaten all 1.5 GB of my RAM) I got this answer:
e2defrag (/dev/vgraid/data): Unable to allocate buffer for inode priorities
I am using Debian unstable and here is the version information from e2defrag:
thor:~# e2defrag -V
e2defrag 0.73pjm1
RCS version $Id: defrag.c,v 1.4 1997/08/17 14:23:57 linux Exp $
I also tried -p 256, -p 128 and -p 64 to see if it would use less memory, but it
didn't seem to; the program took the same time to abort.
Is there any way to get around this problem? The answer might be to get 10 GB of
RAM, but that's not very realistic; 2 GB, sure, but I think that's the limit on
my motherboard. A huge amount of swap files might solve it, and that's probably
doable, but I guess it would be enormously slow?
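For what it's worth, a sketch of the swap-file route (example path and size, not from the original mail; the system-modifying commands are commented out because they must run as root and alter the machine):

```shell
# Each swap file in this sketch is 2 GiB:
echo $((2048 * 1024 * 1024))   # 2147483648 bytes per swap file
# As root (commented out -- these modify the system):
# dd if=/dev/zero of=/swap0 bs=1M count=2048
# chmod 600 /swap0
# mkswap /swap0
# swapon /swap0
```

Several such files could be enabled one after another; swap-backed anonymous memory of this size would make the defrag crawl, as the mail already suspects.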
Why do I want to defrag? Well, fsck gives this nice info to me:
/dev/vgraid/data: 227652/475987968 files (41.2% non-contiguous),
847539147/951975936 blocks
41% sounds like a lot to my ears, and I have constant reads of files on the
drives; it's too slow already.
I'd be very thankful for ideas or others' experiences. Maybe it's just not
possible with such a large partition with today's tools; after all, ext[23]
only supports 4 TB. Let's hope ext4 arrives in the mainstream kernels within a year.
PS: Please CC me since I am not on the list, so I don't have to wait for
marc's archive to get the mails.
--
Magnus Månsson
Systems administrator
Massive Entertainment AB
Malmö, Sweden
Office: +46-40-6001000
Magnus Månsson
2006-Oct-13 16:55 UTC
FW: e2defrag - Unable to allocate buffer for inode priorities
I have now upgraded my server from 1.5 GB of RAM to 4 GB. It gets a bit
further; it now looks like this with strace:
mmap2(NULL, 1903955968, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0)
= 0x464a7000
(15 second delay)
mmap2(NULL, 475992064, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) =
0x29eb6000
(this one I didn't have enough memory for before)
mmap2(NULL, 1903955968, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0)
= -1 ENOMEM (Cannot allocate memory)
(here it wants another ~2 GB of RAM; sorry, I don't have 2 GB modules)
So unless anyone has an idea, I am stuck until I can find four 2 GB DDR400
modules. :(
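Adding up the anonymous mappings above gives a rough lower bound on what e2defrag wants at this point. Note that mmap2 suggests a 32-bit kernel, where a process normally gets only about 3 GiB of user address space, so the last allocation may fail with ENOMEM no matter how much physical RAM is installed:

```shell
# Sum of the three mmap2 requests shown in the strace:
echo $((1903955968 + 475992064 + 1903955968))   # 4283904000 bytes, just under 4 GiB
```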
--
Magnus Månsson