similar to: ext3 + quota + 2.4.19 + load

Displaying 20 results from an estimated 4000 matches similar to: "ext3 + quota + 2.4.19 + load"

2003 Jan 16
1
ext3 + quota + rh7.3
Hi, can I use quota with ext3 on a loaded system without experiencing deadlocks nowadays? I'm using rh7.3 kernel 2.4.18-19.7.x. Thanks -- Juan Pablo Abuyeres <jpabuyer@tecnoera.com>
2002 Jun 21
1
ext3+quota+load: deadlock
Hi, I'm having what looks like a deadlock using kernel 2.4.18 + ext3 + quota on a pretty loaded system. Meanwhile, I moved back to ext2 again. I'm using mount-2.11g and Quota utilities version 3.05 (I also tried 3.06). I also tried kernel 2.4.16 just in case, but the problem exists anyway. Anything I can do? JP
2002 May 16
2
Ext3-0.9.18 available
Hi, ext3-0.9.18 is now available for 2.4.19-pre8. Some of the fixes in this release are already in 2.4.19-pre8, but there are some important new fixes in the patch and users are encouraged to upgrade. This release fixes all known outstanding bug reports. The full patch against linux-2.4.19-pre8, and a tarball of the individual fixes in this patch set, are now propagating to
2003 Mar 27
2
So, what about stable quota support in ext3fs?
Good evening. We have some heavily loaded servers on ext2, and we want to migrate to ext3fs. But we need full and stable quota support. I have heard that there are some problems with quota usage under ext3fs. Is it true? Should we rule out ext3 as unusable on our servers? We need high-level stability on our servers (hosting). Thanks in advance. -- Best regards,
2006 Jan 13
1
Calls through Mediatrix with incorrect disposition
Hi guys, I have an Asterisk server and a Mediatrix 1204 gateway. I make calls through the Mediatrix unit (only outgoing calls). The problem is, every call I make through the Mediatrix unit is logged in the CDR as 'ANSWERED', even if the call was 'NO ANSWER' in practice. Any ideas how to make the CDR records accurate? Thanks!
2003 Mar 05
1
Re: problems with ext3, well I think it is
Simon May wrote: > Hi, I'm hoping you may be able to help. > Late last year I converted all my machines to ext3 with no problems; > now I have one machine crashing once every 4/5 days. > I have used a crash dump and see the following: > > <0>Assertion failure in do_get_write_access() at transaction.c:589: > "handle->h_buffer_credits > 0" > >
2002 Aug 21
1
Ext3 indexed directory extension.
Hi, searching the ext3 filesystem mailing list I have seen that there is an indexed directory extension for it. Is this extension stable code? Has anyone tested it? How may I obtain and install it? Is it available in any of the latest kernel releases? Greetings. --- Carles Xavier Munyoz Baldó carles@descom.es Descom Consulting Telf: +34
2004 Dec 07
3
Increase size of ext3 filesystem WHILE MOUNTED
Hi, We will have to go and use SuSE (30 servers) just because an ext3 filesystem cannot be increased while it is mounted. I don't understand why RedHat does not support filesystems that can do things that ext3 cannot do. When is RedHat going to understand that in a production environment (30 servers that, it seems, will be SuSE) we need to extend filesystems online? IBM JS,
2001 Dec 19
3
ext3 inode error 28
Hello: I have been reviewing my messages log and have found the following message: Dec 19 06:27:28 server02 kernel: EXT3-fs error (device sd(8,7)) in ext3_new_inode: error 28 What is error 28 and should I be worried about it? Ray Turcotte
2002 Oct 09
1
Periodic lockup problem with ext3
On several machines with ext3 we have a periodic "unresponsiveness" problem. Take for example our mailserver: when it handles a lot of email (lots of deliveries to Maildirs), it shovels the data into the Maildirs. But every now and then (the interval being >> 5s, the commit interval) the machine becomes unresponsive, you hear a lot of disk activity, and after about 12-18s the
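For reference, the 5s mentioned there is ext3's default journal commit interval, and it can be set per mount via the commit= option. A hedged illustration with a made-up device and mount point:

    /dev/sda3   /var/spool/mail   ext3   defaults,commit=15   1 2

A longer interval batches more work per journal commit, at the price of a larger window of recent data lost after a crash.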
2005 Mar 26
7
Shrinking a ext3 filesystem ?
I installed CentOS on my home-server with 2 IDE 160GB MAXTOR HDDs / RAID-1, LVM and ext3 partitions. The previous OS on this machine was FC2. I often "play" with LVM and sometimes have to extend or reduce the size of some volumes. I was surprised to see that resize2fs isn't included anymore! The replacement tool is ext2online, but this one seems to only be able to grow a filesystem (not
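As a sketch of the two paths under discussion (the LVM names and sizes below are made up): ext2online can grow an ext3 filesystem while it is mounted, whereas shrinking still means unmounting and using resize2fs from a full e2fsprogs install, shrinking the filesystem before the logical volume.

    # Online grow: extend the LV first, then the mounted filesystem.
    lvextend -L +20G /dev/VolGroup00/home
    ext2online /dev/VolGroup00/home

    # Offline shrink: filesystem first, LV second, never the other way round.
    umount /home
    e2fsck -f /dev/VolGroup00/home
    resize2fs /dev/VolGroup00/home 40G
    lvreduce -L 40G /dev/VolGroup00/home
    mount /home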
2002 Apr 30
2
RAID-5/LVM/ext3
Hello: We are trying to configure one machine (Compaq Proliant ML760) with 8 disks (72GB each) with RAID. We are thinking of using ext3 and LVM on RedHat 7.2 to manage one filesystem of 500GB. This filesystem has to store nearly 5,000,000 files. Is this possible? Could I resize the filesystem/volume to 1TB? What are the ext3 and LVM limits? Has anyone tested a similar environment? I
2002 Jul 19
1
lilo causes an "Unexpected dirty buffer encountered at do_get_write_access:597 (03:02 blocknr 0)"
On my Debian box: Package: lilo Version: 1:22.2-5 Severity: normal lilo seems to cause a kernel warning (see subject) when / is an ext3 partition. Maybe a kernel problem? Who knows. I'm running 2.4.19-rc1-ac7 -- System Information: Debian Release: testing/unstable Architecture: i386 Kernel: Linux hummus 2.4.19-rc1-ac7 #1 Wed Jul 17 22:14:20 CEST 2002 i686 Locale: LANG=C, LC_CTYPE=C Versions
2002 Aug 04
2
Kernel 2.4.19 ext3 problem
My system is RH7.2 with custom 2.4.18 or 2.4.19 kernel(s), downloaded from kernel.org. My root filesystem is ext3 on ataraid. I'm using initrd. My system comes up fine with the 2.4.18 kernel, but when trying to boot a new 2.4.19 kernel I get the following sequence:
2002 Jul 23
4
ext3 device reported to be 100% full, but we do not know where?
Hello everybody, We have a strange problem with ext3. df reports 28 of 30 GB to be used (the rest may be slack), which it calls 100% used. But with du we can only find 13 GB, most of it actually in pretty large files (archives). Where have the other 17 GB gone? Thanks Michael -- Hostsharing eG / Boytinstr. 10 / D-22143 Hamburg phone+fax:+49/700/HOSTSHARI(ing) (= +49/700/46787427)
2003 Jun 18
3
ext3 2.4.21 htree tests
Hi, Just thought I'd share some test results of mine in case anyone is interested. Basically the tests are simulating what our product does with files - although the tests do it a lot quicker (not as many files though). The test is to create 1 million files (each containing the text of the file number) spread over a number of directories. The files are then removed in the same manner as
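A rough sketch of the kind of test described (the 1000-files-per-directory spread is an assumption, not taken from the post): one million numbered files, each containing its own number, spread across directories and later removed the same way.

    i=0
    while [ "$i" -lt 1000000 ]; do
        d=$((i / 1000))                  # 1000 files per directory (assumed)
        mkdir -p "testdir/$d"
        echo "$i" > "testdir/$d/$i"      # each file holds its own number
        i=$((i + 1))
    done
    # removal then walks the same layout, e.g. rm -r testdir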
2002 Aug 20
5
unmountable ext3 root recovery
After a (hardware) crash yesterday, I was unable to boot up due to unrecoverable ide errors (according to the printk()s) when accessing the root filesystem's journal for recovery. Unable to recover, I tried deleting the has_journal option, but that was disallowed given that the needs_recovery flag was set. I saw no way to unset that flag. Unable to access the backups (they were on a fw
2001 Jul 20
3
ext2resize for Ext3
Hi. What is the state of ext2resize for Ext3? How about the online-ext3-patch? Regards, Christian -- * Christian A. Lademann, ZLS Software GmbH mailto:lademann@zls.de * ZLS Software GmbH * Frankfurter Strasse 59 Postfach 1628 mailto:zls@zls.de * D-65779 Kelkheim D-65766 Kelkheim http://www.zls.de * Telefon +49-6195-9902-0 Telefax
2002 May 21
4
Bad directories appearing in ext3 after upgrade 2.4.16 -> 2.4.18+cvs
Hi, I recently upgraded one of my fileservers from 2.4.16 to 2.4.18 plus the ext3-cvs.patch that Andrew Morton pointed me to for addressing an assertion failure. Since then I have been getting lots of errors like: May 21 14:07:03 glass kernel: EXT3-fs error (device md(9,0)): ext3_add_entry: bad entry in directory #2945366: rec_len % 4 != 0 - offset=0, inode=1886221359, rec_len=24927,
2004 Feb 05
3
increasing ext3 or io responsiveness
Our invoice posting routine (intensive hard-drive I/O) freezes every few seconds to flush the cache. Reading this: https://listman.redhat.com/archives/ext3-users/2002-November/msg00070.html I decided to try: # elvtune -r 2048 -w 131072 /dev/sda # echo "90 500 0 0 600000 600000 95 20 0" >/proc/sys/vm/bdflush # run_post_routine # elvtune -r 128 -w 512 /dev/sda # echo "30 500 0 0
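Restating that excerpt as a sketch (the aggressive values, device and run_post_routine come from the post itself; saving and restoring /proc/sys/vm/bdflush instead of hard-coding the defaults is an assumption): tune for throughput, run the I/O-heavy job, then put the old settings back.

    DEFAULTS=$(cat /proc/sys/vm/bdflush)       # remember the current settings
    elvtune -r 2048 -w 131072 /dev/sda         # allow much larger request latencies
    echo "90 500 0 0 600000 600000 95 20 0" > /proc/sys/vm/bdflush
    run_post_routine                           # the I/O-heavy posting job
    echo "$DEFAULTS" > /proc/sys/vm/bdflush    # restore the saved settings
    elvtune -r 128 -w 512 /dev/sda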