Maybe I should have started a new thread with this question (it was in the /proc/sys/vm/bdflush thread), so I am now :)

According to tests performed for this article:

http://www-106.ibm.com/developerworks/linux/library/l-fs8/

"ext3's data=journal mode is incredibly well-suited to situations where data needs to be read from and written to disk at the same time." This is the situation that exists on my box.

My question is: has anyone had any other real-world experience to back up the claims of the above statement? I would love to get better performance AND better data integrity, but that seems too good to be true :)

Also, if I do use data=journal, would I need to retune the journal/filesystem? It is a ~400GB file system with some fairly large files (none over 2GB, due to limitations of UniVerse, the RDBMS we are using).

Thanks,
Andy.

PS Here are the specs for my CPU and disk subsystem. Let me know if you need any more info:

Quad Xeon 1.4GHz w/HT enabled
8GB RAM
Dell PERC (LSI/AMI MegaRAID) HW RAID card
24 x 36GB 15K SCSI drives; /dev/md0 is an ext3 file system on a software RAID0 array striped across 12 HW RAID1 arrays

Andrew Rechenberg
Infrastructure Team, Sherman Financial Group
arechenberg@shermanfinancialgroup.com
Phone: 513.707.3809
Fax: 513.707.3838
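A minimal sketch of the retuning step, assuming the /dev/md0 device from the specs above (the /data mount point and the 128 MB journal size are made-up examples; tune2fs -J and -O are standard e2fsprogs options):

    # The filesystem must be unmounted to rebuild the journal.
    umount /data
    tune2fs -O ^has_journal /dev/md0   # remove the existing journal
    tune2fs -J size=128 /dev/md0       # recreate it; size is in megabytes
    mount -t ext3 -o data=journal /dev/md0 /data

    # To make data=journal permanent, the /etc/fstab line would be:
    # /dev/md0  /data  ext3  defaults,data=journal  1 2

A larger journal mainly helps absorb bursts of synchronous writes before the commit thread has to stall; whether 128 MB is right for a ~400GB filesystem is workload-dependent.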
Hi,

On Mon, Nov 25, 2002 at 09:48:22AM -0500, Rechenberg, Andrew wrote:

> "ext3's data=journal mode is incredibly well-suited to situations where
> data needs to be read from and written to disk at the same time." This
> is the situation that exists on my box.

It's particularly well suited when you have applications performing on-disk transactions, i.e. when they are using O_SYNC or fsync() to flush data to disk. Data journaling allows that data to be written sequentially in the log. However, the data will _eventually_ still need to be written elsewhere on disk (unless it rapidly gets deleted), so the effect is really to mitigate the cost of the application's synchronisation, rather than anything else.

So whether or not this makes much of a difference depends very much on what the application is doing.

Cheers,
Stephen
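One rough way to see whether a workload falls into the category Stephen describes is to time O_SYNC writes under the two data modes. This is only a sketch: the device and mount point are placeholders, and oflag=sync is a GNU dd flag that opens the output file O_SYNC, so every write must reach stable storage before the next one starts.

    mount -t ext3 -o data=journal /dev/md0 /mnt/test
    time dd if=/dev/zero of=/mnt/test/sync-test bs=4k count=5000 oflag=sync

    # For comparison, repeat under data=ordered.  ext3 will not switch
    # data modes on a live remount, so unmount first.
    umount /mnt/test
    mount -t ext3 -o data=ordered /dev/md0 /mnt/test
    time dd if=/dev/zero of=/mnt/test/sync-test bs=4k count=5000 oflag=sync

If the data=journal run is noticeably faster, the workload is sync-bound in the way that data journaling helps; if the two are close, the application probably is not flushing aggressively enough for the mode to matter.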
Dear All,

I have recently bought a PC with a 120 GB hard drive. Windows XP was the operating system; it saw all 120 GB and worked fine.

I deleted all the partitions already on the hard drive and tried to install Red Hat 6.2 on it. No matter what I tried, I could only use about a quarter of the drive (about 30 GB). If I tried to create a larger partition, it always ended with "inode out of bounds" and the installation failed.

I also tried creating four primary partitions of 30 GB each. I did manage to install the system, but the happiness only lasted a short while: after a reboot, the file system seemed corrupted (it needed fsck).

Reading through Google newsgroups, I learned that Red Hat 6.2 with kernel version 2.2.14 can handle large disks (> 34 GB). My own experience tells another story: it seems that Red Hat 6.2 could NOT handle hard drives above 34 GB!

Would you gurus shed some light on this? Thank you very much in advance.

James Wang
On Mon, 25 Nov 2002, james wang wrote:

> Reading through Google newsgroups, I learned that Red Hat 6.2 with
> kernel version 2.2.14 can handle large disks (> 34 GB). My own
> experience tells another story: it seems that Red Hat 6.2 could NOT
> handle hard drives above 34 GB!
>
> Would you gurus shed some light on this?

I am no guru, but 6.2 is ancient; how about trying something more current? I know 8.0 can handle 120 GB drives.

Besides, what does this have to do with ext3? Ext3 did not ship with 6.2; I do not think it was stable yet, and AFAIK 2.2.x kernels cannot do ext3 without patches.

HTH,

-- 
.............Tom        "Nothing would please me more than being able to
tdiehl@rogueind.com      hire ten programmers and deluge the hobby market
                         with good software." -- Bill Gates 1976

We are still waiting ....
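For anyone unsure whether their running kernel can use ext3 at all, two quick checks (standard commands and paths, nothing distribution-specific):

    uname -r                      # stock 2.2.x kernels need patches for ext3
    grep ext3 /proc/filesystems   # no output means this kernel has no ext3 support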