similar to: Problems under Redhat EL3 and ext3

Displaying 20 results from an estimated 10000 matches similar to: "Problems under Redhat EL3 and ext3"

2007 Jul 14
1
Kernel panic in ext3:dx_probe, help needed
This may or may not be ext3 related, but I am trying to find any pointers which might help me. I have a number of HP Proliant DL380 g5 servers with a P400 controller and also two qla2400 cards. The OS is RedHat EL4 U5 x86_64. Every time during reboot these systems panic after the last umount, and I believe before the cciss driver is unloaded. The last messages I am able to see are: md: stopping
2002 Oct 21
3
htree questions
I decided that I would try out 2.5.44, and I noticed that htree was merged. If I don't do the tune2fs -O dir_index and e2fsck -D, the (existing) fs won't use htree, right? Once I do the tune2fs and e2fsck, will I still be able to go back to a non-htree kernel if needed? (Will an htree-ized fs work on a non-htree kernel?) I'm guessing that it won't. I've seen a 2.4 htree
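For context, the conversion being asked about is normally a two-step job run against the unmounted filesystem; a minimal sketch, with the device name as a placeholder:
    # set the dir_index feature flag in the superblock
    tune2fs -O dir_index /dev/sdXN
    # rebuild existing directories so they actually get hashed indexes
    e2fsck -fD /dev/sdXN
As far as I recall, dir_index is a COMPAT feature, so a non-htree kernel can still mount the filesystem and will simply ignore the indexes.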
2005 Feb 04
2
Failures that e2fsck doesn't find
Hi, I've run e2fsck many times, but in one particular directory ls tells me: ls: rücksendung-wlan.dvi: No such file or directory ls: bafög_rückmeldung.latex: No such file or directory ls: finprüf.pdf: No such file or directory $ cat finprüf.pdf cat: finprüf.pdf: No such file or directory I don't know what to do. How can I find the failure? If I cat the files with debugfs, I see the
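When ls chokes on names like these but debugfs can still see the files, comparing the raw directory entries is a reasonable next step; a rough sketch, with device and paths as placeholders:
    # list the raw directory entries with inode numbers, read-only
    debugfs -R 'ls -l /path/to/dir' /dev/sdXN
    # copy one of the files out through debugfs for inspection
    debugfs -R 'dump /path/to/dir/somefile /tmp/somefile' /dev/sdXN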
2006 Jan 23
2
Ext3 filesystem access after downgrade from v4.2 to v3.6
I need to downgrade a system from CentOS v4.2 to v3.6 (x86) due to performance problems with Arkeia Network Backup and AIT-4 tape drives. The backup database is stored on a v4.2-created ext3 partition. When accessing this partition after the downgrade, CentOS complains on boot that fsck.ext3: Filesystem has unsupported feature(s) (/dev/sda5) *fsck: Get a newer version of e2fsck! [fail] If I
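The usual way to see which feature flags the older e2fsprogs is rejecting is to dump the superblock header; a sketch using the device named in the message:
    # show the feature list the old fsck.ext3 cannot cope with
    dumpe2fs -h /dev/sda5 | grep -i features
    # the same information via tune2fs
    tune2fs -l /dev/sda5 | grep -i features
The likely culprit is a feature such as resize_inode, which the v4.2-era mkfs enables by default but the e2fsprogs shipped with v3.6 does not know about.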
2006 Oct 04
2
EXT3 and large directories
I have an ext3 filesystem that has several directories and each directory gets a large number of files inserted and then deleted over time. The filesystem is basically used as a temp store before files are processed. The issue is over time the directory scans get extremely slow even if the directories are empty. I have noticed the directories can range in size from 4k - 100M even when they are
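One quick check for the situation described is the directory inode itself, whose size never shrinks when files are deleted; with dir_index enabled, e2fsck can also compact and reindex directories offline. A sketch, with paths and device as placeholders:
    # a directory's own size stays large even after its contents are removed
    ls -ld /data/tempstore/incoming
    # offline: compact and reindex all directories (filesystem must be unmounted)
    e2fsck -fD /dev/sdXN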
2006 Mar 28
2
FC5: "ext_attr" and "large_file" features for ext3 file systems ???
Hi, Fedora Core ext3 file systems newbie questions: Just interested in the Linux ext3 features but got confused with "large_file" and "ext_attr". First, what does the "large_file" feature REALLY mean? For file systems created with the same commands and options, some file systems have it on while some do not. It is said that the feature is automatic -- If there is a
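Both flags are visible per filesystem, and if I remember right large_file is set automatically the first time a file over 2GiB is created (older e2fsck would even clear it again when no such file remains), which would explain why identically-created filesystems differ. A sketch with a placeholder device:
    # list the feature flags, including large_file and ext_attr if present
    tune2fs -l /dev/sdXN | grep -i 'filesystem features'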
2006 Mar 17
1
[RFC] mke2fs with DIR_INDEX, RESIZE_INODE by default
I've been thinking recently that we should re-enable DIR_INDEX in mke2fs by default. When it first came out, we had done this and were bitten by a few bugs in the code. However, this code has been in heavy use for several thousand filesystem years in Lustre, if not elsewhere, and I'm inclined to think it is pretty safe these days. Likewise, RHEL/FC have had RESIZE_INODE as a standard
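Those defaults can already be requested explicitly at mkfs time (and, on newer e2fsprogs, set once in /etc/mke2fs.conf); a minimal sketch with a placeholder device:
    # create an ext3 filesystem with directory indexing and the resize inode
    mke2fs -j -O dir_index,resize_inode /dev/sdXN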
2007 Mar 28
1
ext3 usage guidance
Is there a document anywhere offering guidance on the optimum use of ext3 filesystems? Googling shows nothing useful and the Linux ext3 FAQ is not very forthcoming. I'm particularly interested in: 1. The effect on performance of large numbers of (generally) small files One of my ext3 filesystems has 750K files on a 36GB disk, and backup with tar takes forever. Even 'find /fs -type
2005 Jun 08
1
clone RHEL 4 ext3 partition
Hi, I'm about to roll out a whole bunch of Redhat Enterprise 4 workstations and have run into problems cloning from the original. Normally I would use Ghost (v7.5) because it does a nice job when cloning to a different sized disk. Unfortunately it comes up with read error 29004. Looking around it seems that Symantec don't support Fedora Core 3 (with Ghost v.8 - don't know if v.9 works
2005 Oct 19
1
EXT3 journalling issue
Hello, I have 2 boxes with 1.5TB of storage with ext3 fs, and the kernel is 2.6.11.8. I'm using e2fsprogs 1.37 for filesystem creation, and Filesystem revision #: 1 (dynamic). There are 2 scenarios: 1. All SATA drives, RAID5 2. All PATA drives, RAID5 and wrapped in log volumes. I'm having lots of issues with fsck. I did search, but somehow I am not getting the right information. needs_recovery
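needs_recovery only means the journal has not been replayed yet; it is normally cleared by a successful mount or by letting e2fsck replay the journal before checking. A rough sketch, with /dev/md0 as a placeholder for the RAID device:
    # check whether the filesystem is flagged as needing journal recovery
    dumpe2fs -h /dev/md0 | grep -iE 'features|state'
    # replay the journal and run a full check
    e2fsck -fy /dev/md0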
2010 May 21
2
fsck.ocfs2 using huge amount of memory?
We are setting up 2 new EL5 U4 machines to replace our current database servers running our demo environment. We use 3Par SANs and their snap clone options. The current production system we snap clone from is EL4 U5 with ocfs2 1.2.9, the new servers have ocfs2 1.4.3 installed. Part of the refresh process is to run fsck.ocfs2 on the volume to recover, but right now as I am trying to run it on our
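For reference, the check on a snapped clone is normally run forced, with the volume unmounted on every node; a sketch with a made-up device-mapper path:
    # force a full check and answer yes to repairs (volume unmounted cluster-wide)
    fsck.ocfs2 -fy /dev/mapper/demo_snap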
2011 Mar 11
3
What could cause a slowdown between OCFS2 1.2.9 and 1.4.4
We upgraded our production database cluster (6 node) from EL4 Update 5 to EL5 Update 5, including upgrading OCFS2 from 1.2.9 to 1.4.4. We are now noticing a slowdown of batch jobs in Oracle, while hotbackup runs faster. One thing we saw is that the journal mode changed from write-back to ordered, as we don't specify a journal mode during mount. Oracle sees this as a slowdown based on higher IO latency,
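If the old behaviour is preferred, the data journaling mode can be pinned explicitly at mount time instead of relying on the new default; a sketch, with device and mountpoint as placeholders:
    # mount the ocfs2 volume with writeback data journaling, as under 1.2.9
    mount -t ocfs2 -o data=writeback /dev/mapper/oradata /u02/oradata
    # or make it permanent in /etc/fstab
    # /dev/mapper/oradata  /u02/oradata  ocfs2  _netdev,data=writeback  0 0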
2002 Aug 21
1
Ext3 indexed directory extension.
Hi, Searching the ext3 filesystem mailing list I have seen that there is an indexed directory extension for it. Is this extension stable code? Has anyone tested it? How may I obtain and install it? Is it available in any of the latest kernel releases? Greetings. --- Carles Xavier Munyoz Baldó carles@descom.es Descom Consulting Telf: +34
2008 Feb 17
2
Anyone have an idea how to find file i/o throughput?
We have a remote Oracle 10g R2 standby running on OCFS2. Initially, when we started the standby, read I/O was < 5MB/sec on average. Since then it has grown to over 40MB/sec (longer average; it peaks much higher). Here is a graph showing this: http://www.alameda.net/~ulf/dbphx01.png We also have a local standby running (on EXT3) which is not showing the same symptom. I am trying to find where all
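To narrow down where read traffic like that comes from, the per-device numbers from sysstat are usually the first stop; a sketch:
    # extended per-device statistics every 5 seconds (sizes in kB)
    iostat -xk 5
    # coarse system-wide view of blocks in/out
    vmstat 5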
2007 Jul 29
1
6 node cluster with unexplained reboots
We just installed a new cluster with 6 HP DL380g5 servers, dual single-port Qlogic 24xx HBAs connected via two HP 4/16 Storageworks switches to a 3Par S400. We are using the 3Par-recommended config for the Qlogic driver and device-mapper-multipath, giving us 4 paths to the SAN. We do see some SCSI errors where DM-MP is failing a path after getting a 0x2000 error from the SAN controller, but the path gets put
2014 Aug 25
2
filesystem
I hope this is the right list. I have created an ext2 filesystem and removed the dir_index feature. I don't know if this kind of experimentation is going to help me learn something about filesystems or not. Well, what is dir_index? Then I ran e2fsck -f -v -pD on the /dev file. Now what did I remove? Htree. I guess it can always be put back, and it's on an experimental filesystem.
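For reference, what was done here (and its reverse) looks roughly like this with e2fsprogs; the device name is a placeholder:
    # remove the dir_index feature, as described in the message
    tune2fs -O ^dir_index /dev/sdXN
    # to put htree back later: set the flag again and rebuild the indexes offline
    tune2fs -O dir_index /dev/sdXN
    e2fsck -fD /dev/sdXN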
2003 Apr 07
1
2.4.20 and htree
Apologies for the newbie question: I have a (stock) 2.4.20 build (*not* -ac), and I'm trying to work with large ext3 directories. By large, I mean 160,000 files per directory. (Yes, I know it would be better in nested directories but such is life). I feel htree would benefit me. Close reading of the 2.4 changelog suggests that htree isn't in there - only a patch to prevent non-htree
2008 Jan 23
1
OCFS2 DLM problems
Hello everyone, once again. We are running into a problem which has shown up 2 times now, possibly 3 (once the systems looked different). The environment is 6 HP DL360/380 g5 servers with eth0 being the public interface, eth1 and bond0 (eth2 and eth3) used for clusterware, and bond0 also used for OCFS2. The bond0 interface is in active/passive mode. There are no network error counters showing and
2007 Mar 19
1
rebooting more often to stop fsck problems and total disk loss
Hi, I run several hundred servers that are used heavily (webhosting, etc.) all day long. Quite often we'll have a server that either needs a really long fsck (10 hours - 200 gig drive) or an fsck that eventually results in everything going to lost+found (pretty much a total loss). Would rebooting these servers monthly (or some other frequency) stop this? Is it correct to visualize this as
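The boot-time checks are driven by each filesystem's mount-count and interval settings, which can be tuned so that full checks happen at a chosen frequency instead of surprising a busy server; a sketch with placeholder values:
    # show the current triggers for a periodic fsck
    tune2fs -l /dev/sdXN | grep -iE 'mount count|check'
    # force a full check every 30 mounts or once a month, whichever comes first
    tune2fs -c 30 -i 1m /dev/sdXN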
2006 May 07
1
Fedora Core 4 and FC5's NEW EXT3 file system: "Reserved GDT blocks" ???
Hi, I've installed a few Fedora Core 4 and Fedora Core 5 systems recently, and found that the new ext3 file systems created with the new mkfs.ext3 (1.38+) have one more field than ext3 file systems created with the old mkfs.ext3 (1.34-), even when the latter's dir_index feature was turned on and the file systems were upgraded with the "e2fsck -y -f -D" command. I have three questions thereafter: 1) what does the
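The extra field being noticed is almost certainly "Reserved GDT blocks", which comes from the resize_inode feature that mkfs.ext3 1.38 enables by default (to allow online resizing) and 1.34 did not; it can be checked directly in the superblock dump. A sketch with a placeholder device:
    # show the reserved GDT block count and the feature flags behind it
    dumpe2fs -h /dev/sdXN | grep -i 'reserved gdt'
    dumpe2fs -h /dev/sdXN | grep -i 'features'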