similar to: Re: Please go elsewhere, I don't have time

Displaying 19 results from an estimated 1000 matches similar to: "Re: Please go elsewhere, I don't have time"

2002 Feb 04
0
Re: 2GB of Waste? How can it be? -- some JFS are worse than non-JFS
"IT3 Stuart Blake Tener, USNR-R" wrote: > In terms of enterprise reliability, I understand, however, having an > "office recommended journaling filesystem", Some journaling filesystems can be _worse_ than non-journaled. If the recovery mechanism of the JFS is to "aggressively" go to the journal, journal mis-reads can _toast_ a filesystem. I'll take a full
2002 Jan 15
0
Stopping to say "thanx" ...
I'll try to keep this short (yeah right! ;-). Being primarily a user, I find myself bitching, analyzing and complaining about things I don't stop to understand half the time. I've done more than my share in this regard with the various filesystems over the years. I've done a few LUG and tradeshow presentations over the past year, trying to inform different peer admins what Linux JFS
2009 Dec 24
6
benchmark results
I've had the chance to use a test system here and couldn't resist running a few benchmark programs on it: bonnie++, tiobench, dbench and a few generic ones (cp/rm/tar/etc...) on ext{234}, btrfs, jfs, ufs, xfs, zfs. All with standard mkfs/mount options and +noatime for all of them. Here are the results, no graphs - sorry: http://nerdbynature.de/benchmarks/v40z/2009-12-22/ Reiserfs
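To make the setup concrete, the per-filesystem steps would look roughly like the following; a minimal sketch, assuming /dev/sdb1 as the scratch device and /mnt/test as the mount point (both names are illustrative, not taken from the post):

    # repeat for each filesystem under test; device and mount point are hypothetical
    mkfs.xfs /dev/sdb1                      # standard mkfs options, as in the post
    mount -o noatime /dev/sdb1 /mnt/test    # the "+noatime" applied to every filesystem
    bonnie++ -d /mnt/test -u nobody         # bonnie++ requires a non-root user via -u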
2006 Aug 31
3
debian unstable & ext3
I'm running Linux travis 2.6.15-1-686 #2 Mon Mar 6 15:27:08 UTC 2006 i686 GNU/Linux on a laptop with ext3 on /. Some time ago things started getting weird in the following way: I do a fairly normal hack, ^Z, make, test loop when developing, and it seems that vim is calling fsync or sync, which then flushes everything to disk. My tests create maybe 10 dozen files in ~30MB and for some
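Whether vim is really the one calling fsync()/sync() is easy to verify from the shell; a quick sketch (the traced syscalls are the obvious candidates, and the file name is hypothetical):

    # log every fsync/fdatasync/sync issued by vim and its children while editing
    strace -f -e trace=fsync,fdatasync,sync -o vim.trace vim somefile.c
    grep -c fsync vim.trace    # a non-zero count confirms the suspicion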
2002 Feb 13
4
[Off-topic] Battery Backed NVRAM for journals ...
I've seen this come up on occasion, but every NAS OEM that uses NVRAM I've ever talked to over the last 18 months won't tell me anything about their equipment or their suppliers. Recently, Micro Memory contacted me. Is anyone here using their products? FYI, their 64-bit PCI 128MB-1GB NVRAM board is here:
2002 Feb 04
0
Re: 2GB of Waste? How can it be? -- missing the point
"IT3 Stuart Blake Tener, USNR-R" wrote: > With regard to ReiserFS, I don't see why allowing it to be > installed that way, means it must be a supported configuration. > RH could easily simply allow you to install in that manner, > but support it via their technical support. Mandrake may operate that way, but RedHat does _not_ -- at least that has been my experience.
2001 Jun 08
1
VALinux's 2.4.5 beta kernel with Ext3
Anyone try this yet? ftp://ftp.valinux.com/pub/software/kernel/beta/2.4.5-beta2va3.11/ List of SRPM contents follows. -- TheBS atomic-lookup.patch atomicalloc.patch byteprofiling.patch comtrol-1.23.patch configs-2.4.5.tar.gz copy-user-reschedule.patch dac960-enclosure-quiet.patch dma-livelock-fix.patch e100-1.5.5.tar.gz e1000-3.0.7.tar.gz eepro100-speedo-1.patch emu10k1-tone.patch
2017 Oct 27
2
kmod-jfs on Centos 6
On 10/26/2017 08:01 PM, Akemi Yagi wrote: > On Thu, Oct 26, 2017 at 4:17 PM, H <agents at meddatainc.com> wrote: > >> On October 26, 2017 6:31:04 PM EDT, Akemi Yagi <amyagi at gmail.com> wrote: >>> On Thu, Oct 26, 2017 at 3:11 PM, H <agents at meddatainc.com> wrote: >>> >>>> On 04/18/2017 12:54 PM, H wrote: >>>>> A couple
2017 Oct 27
0
kmod-jfs on Centos 6
On Thu, Oct 26, 2017 at 5:22 PM, H <agents at meddatainc.com> wrote: > On 10/26/2017 08:01 PM, Akemi Yagi wrote: > > On Thu, Oct 26, 2017 at 4:17 PM, H <agents at meddatainc.com> wrote: > > > >> On October 26, 2017 6:31:04 PM EDT, Akemi Yagi <amyagi at gmail.com> > wrote: > >>> On Thu, Oct 26, 2017 at 3:11 PM, H <agents at
2001 Mar 20
3
Interesting interaction between journal recovery and slow boots
For some time now I have been puzzled as to why certain portions of my system boot were quite slow -- but only after journal recoveries. I was fearing that there was some ugly interaction between the recovery and the use of the journal shortly afterward but alas that is not the case. So just in case anybody else is seeing this problem and decides to try to hunt it down, let me save you some
2017 Oct 26
2
kmod-jfs on Centos 6
On 04/18/2017 12:54 PM, H wrote: > A couple of days ago I submitted a request to ElRepo and kmod-jfs is now available for CentOS 7 as well. > > On 04/12/2017 12:58 AM, H wrote: >> Thank you, installed it and it worked fine. Now I am looking for the same for CentOS 7... It did not look like you have that in your repository? >> >> >> On 3/13/2017 1:09 PM, Nux!
2017 Oct 27
0
kmod-jfs on Centos 6
On Thu, Oct 26, 2017 at 4:17 PM, H <agents at meddatainc.com> wrote: > On October 26, 2017 6:31:04 PM EDT, Akemi Yagi <amyagi at gmail.com> wrote: > >On Thu, Oct 26, 2017 at 3:11 PM, H <agents at meddatainc.com> wrote: > > > >> On 04/18/2017 12:54 PM, H wrote: > >> > A couple of days ago I submitted a request to ElRepo and kmod-jfs >
2001 Feb 28
1
Crash-report; 2.2.19pre14-ext3-0.0.6b
Our main NFS-server, running Debian Potato, died this morning whilst under quite heavy load - due to a runaway perl-script forking off some 200 instances of "/bin/cp" (the load was ~ 50). The server runs 2.2.19pre14 with ext3-0.0.6b and kernel-nfs. Assertion failure in journal_dirty_metadata() at transaction.c line 796: "bh->b_next_transaction ==
2001 Oct 16
0
2.2.19 hang
This is a 2.2.19 machine with ext3-0.0.7a and quota support running. The symptoms are a particular NFS export hangs (for linux clients but not Solaris clients?) the local filesystem gives the following: EXT3-fs warning (device sd(8,49)): ext3_free_blocks: bit already cleared for block 2213 EXT3-fs error (device sd(8,49)): ext3_free_blocks: Freeing blocks not in datazone - block = 1563120916,
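Errors like these point at on-disk corruption the driver can't repair while mounted; device (8,49) maps to /dev/sdd1. A minimal recovery sketch, assuming the export can be taken offline and an e2fsprogs recent enough to understand ext3:

    # unmount (or drop to single-user mode), then force a full consistency check
    umount /dev/sdd1
    e2fsck -f /dev/sdd1    # -f forces the check even if the filesystem looks clean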
2006 Dec 10
1
Help with Samba+JFS
I have a network server running FC5, with a hardware RAID 3 card using 5 drives as one large (1.2TB) partition in JFS. I chose JFS because of a recommendation for performance from a MythTV tutorial, but I don't really know much about file systems and suspect JFS to be causing my problems. I run samba, apache and MythTV on this machine, and there is essentially only one problem as far
2017 Oct 26
0
kmod-jfs on Centos 6
On Thu, Oct 26, 2017 at 3:11 PM, H <agents at meddatainc.com> wrote: > On 04/18/2017 12:54 PM, H wrote: > > A couple of days ago I submitted a request to ElRepo and kmod-jfs is > now available for CentOS 7 as well. > > > > Did not have a need to mount a JFS disk on my CentOS 7 system until today > and it does not want to be mounted, instead complaining
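For anyone hitting the same wall, the usual sequence once kmod-jfs is available from ELRepo is roughly this; a sketch, assuming the ELRepo repository is already enabled and using an illustrative device name:

    yum install kmod-jfs             # ELRepo package providing jfs.ko for the distro kernel
    modprobe jfs                     # load the filesystem module
    mount -t jfs /dev/sdb1 /mnt      # the JFS volume should now mount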
2017 Oct 26
2
kmod-jfs on Centos 6
On October 26, 2017 6:31:04 PM EDT, Akemi Yagi <amyagi at gmail.com> wrote: >On Thu, Oct 26, 2017 at 3:11 PM, H <agents at meddatainc.com> wrote: > >> On 04/18/2017 12:54 PM, H wrote: >> > A couple of days ago I submitted a request to ElRepo and kmod-jfs >is >> now available for CentOS 7 as well. >> > >> >> Did not have a need to
2008 Nov 20
1
XFS or JFS on CentOS 5?
Hi folks... trying to pick between jfs and xfs for a filesystem. In the past we've used jfs with CentOS + centosplus; however, an older post indicated that this may not be the best choice, as the version of jfs included with the centosplus kernel would only be as new as the version included in the 2.6.18 kernel, since RH doesn't backport fixes... It looks like xfs isn't part of
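Whichever way the choice goes, it's worth confirming what the running kernel actually ships before formatting anything; a quick check (nothing here is specific to this thread):

    # does the booted (e.g. centosplus) kernel provide jfs/xfs modules?
    uname -r                    # confirm which kernel is running
    modinfo jfs | head -n1      # prints the module path if present
    modinfo xfs | head -n1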
2008 Oct 01
1
JFS in CentOS
Hello all, I'm relatively new to CentOS, but I've been using Linux as my main operating system on both the desktop and server ends for the past 4 years. I currently have a PIII server with two 160GB IDE hard drives in a virtual RAID 1 array. At the time of installation, the only FS choices for the largest partition, 120GB, were ext2 and ext3. I chose ext3, but now am wishing