Displaying 20 results from an estimated 5000 matches similar to: "Desktop Filesystem Benchmarks in 2.6.3"
2004 Mar 03
0
Desktop Filesystem Benchmarks in 2.6.3
Unfortunately it is a bit more complex, and the truth is less
complimentary to us than what you write. Reiser4's CPU usage has come
down a lot, but it still consumes more CPU than V3. It should consume
less, and Zam is currently working on making writes more CPU efficient.
As soon as I get funding from somewhere and can stop worrying about
money, I will do a complete code review, and
2004 Mar 03
2
Desktop Filesystem Benchmarks in 2.6.3
XFS is the best filesystem.
David Weinehall wrote:
>On Tue, Mar 02, 2004 at 03:33:13PM -0700, Dax Kelson wrote:
>
>
>>On Tue, 2004-03-02 at 09:34, Peter Nelson wrote:
>>
>>
>>>Hans Reiser wrote:
>>>
>>>I'm confused as to why performing a benchmark out of cache as opposed to
>>>on disk would hurt performance?
>>>
2004 Mar 06
1
Desktop Filesystem Benchmarks in 2.6.3
I don't think that XFS is a desktop filesystem at all.
This is from the XFS FAQ:
quote
------------
Q: Why do I see binary NULLS in some files after recovery when I
unplugged the power?
If it hurts don't do that!
* NOTE: XFS 1.1 and kernels >= 2.4.18 have the asynchronous delete path,
which means that you will see far fewer of these problems. If you still
have not updated to the 1.1
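The usual explanation for those NULLs is that XFS journals metadata but not file data, so after a crash a file's size can be restored while its last blocks never reached the platter. A minimal sketch of the standard remedy (paths hypothetical), forcing data to stable storage before trusting it:

  # copy data and physically write it out before dd reports success
  dd if=important.dat of=/mnt/xfs/important.dat bs=1M conv=fsync
  # or flush all dirty buffers system-wide
  sync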
2015 Jan 07
5
reboot - is there a timeout on filesystem flush?
> On Jan 6, 2015, at 5:50 PM, Les Mikesell <lesmikesell at gmail.com> wrote:
>
> On Tue, Jan 6, 2015 at 6:37 PM, Gary Greene <ggreene at minervanetworks.com> wrote:
>>
>>
>> Almost every controller and drive out there now lies about what is and isn't flushed to disk, making it nigh on impossible for the Kernel to reliably know 100% of the time that the
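One common mitigation for lying write caches, sketched here with a hypothetical SATA device name, is to turn off the drive's volatile write cache so a completed write really is on stable media, at some cost in throughput:

  # query the current write-cache setting
  hdparm -W /dev/sda
  # disable the drive's volatile write cache
  hdparm -W0 /dev/sda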
2015 Jan 07
5
reboot - is there a timeout on filesystem flush?
On Jan 6, 2015, at 4:28 PM, Fran Garcia <franchu.garcia at gmail.com> wrote:
>
> On Tue, Jan 6, 2015 at 6:12 PM, Les Mikesell <> wrote:
>> I've had a few systems with a lot of RAM and very busy filesystems
>> come up with filesystem errors that took a manual 'fsck -y' after what
>> should have been a clean reboot. This is particularly annoying on
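When a box repeatedly comes up needing a manual fsck, one way to take the console step out of the loop, at least on sysvinit-era CentOS (a sketch, device name hypothetical), is to force a full check at the next boot:

  # request a full fsck of all filesystems on the next boot (SysV init)
  touch /forcefsck
  # or repair an unmounted filesystem by hand, answering yes to all prompts
  fsck -y /dev/sda1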
2007 Aug 19
0
HTB qdisc within HTB root qdisc
Hello...
I'm trying to set up HTB to allow me to shape traffic from two upstreams
that meet on a single lan0 interface. I prefer to use an HTB qdisc
within the HTB root qdisc for a cleaner rules design.
It seems that it doesn't work at all. tc -s class show doesn't
show any traffic on the other classes attached to the HTB qdisc.
Linux 2.6.20.7
iproute-2.6.20-070313
The weird thing is that tc -s class show shows that 1: and 2:
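For reference, a minimal working HTB hierarchy on one interface usually looks like the sketch below (lan0 is from the post; the rates and classids are hypothetical). Classes only accumulate counters once a filter actually steers packets into them, which is the most common reason tc -s class show reports nothing:

  tc qdisc add dev lan0 root handle 1: htb default 20
  tc class add dev lan0 parent 1: classid 1:1 htb rate 10mbit
  tc class add dev lan0 parent 1:1 classid 1:10 htb rate 6mbit ceil 10mbit
  tc class add dev lan0 parent 1:1 classid 1:20 htb rate 4mbit ceil 10mbit
  # without a filter (or a matching default), traffic never reaches 1:10
  tc filter add dev lan0 parent 1: protocol ip prio 1 u32 \
      match ip src 192.168.1.0/24 flowid 1:10
  tc -s class show dev lan0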
2008 Jan 24
2
btrfs benchmarks
Hi,
I found out about Btrfs just this week, so I've not tested it so far. I'll do it as soon as I get a spare disk to experiment with. But I have two questions regarding Btrfs. First, do you plan inclusion of Btrfs into the mainline kernel, and if so, when do you expect this to happen? Second, I would like to see some more benchmarks of Btrfs; so far you have provided a comparison to Ext3 and XFS, which is
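For anyone who wants to produce rough numbers of their own while waiting for official benchmarks, a minimal sketch using plain dd (mount point and sizes hypothetical):

  # sequential write, forced to disk so the page cache does not flatter the result
  dd if=/dev/zero of=/mnt/btrfs/testfile bs=1M count=1024 conv=fsync
  # drop caches so the read pass comes from disk, not RAM
  echo 3 > /proc/sys/vm/drop_caches
  dd if=/mnt/btrfs/testfile of=/dev/null bs=1M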
2015 Jan 07
2
reboot - is there a timeout on filesystem flush?
> On Jan 6, 2015, at 9:23 PM, Gordon Messmer <gordon.messmer at gmail.com> wrote:
>
> On 01/06/2015 04:37 PM, Gary Greene wrote:
>> This has been discussed to death on various lists, including the
>> LKML...
>>
>> Almost every controller and drive out there now lies about what is
>> and isn't flushed to disk, making it nigh on impossible for the
2009 Jul 20
3
Digium TDM400P in Soekris net5501-70?
Hello -
I've been running Asterisk (quite happily!) for several years now
using a Digium TDM400P card in an old Linux box (P4 1.6 w/ 256MB RAM).
I'm also running another old PC running m0n0wall as a firewall.
Between these two boxes, which run 24x7, I'm drawing a lot more power
than needed and hoping to make a dent in my monthly electric bill by
consolidating the two into a single box
2020 Sep 30
3
External harddisk
On 09/30/2020 05:40 AM, John Pierce wrote:
> On Tue, Sep 29, 2020, 8:33 AM H <agents at meddatainc.com> wrote:
>
>> I have an old external harddisk, Toshiba 320 Gb, with a USB connector that
>> I wanted to check for contents. It did not start up when connected and I
>> could not hear the motor spinning. After leaving it in the freezer
>> overnight the motor
2010 May 26
14
creating a fast ZIL device for $200
Recently, I've been reading through the ZIL/slog discussion and
have the impression that a lot of folks here are (like me)
interested in getting a viable solution for a cheap, fast and
reliable ZIL device.
I think I can provide such a solution for about $200, but it
involves a lot of development work.
The basic idea: the main problem when using an HDD as a ZIL device
is the cache flushes
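For context, attaching a dedicated log device to an existing pool is a one-liner in ZFS; a sketch assuming a pool named tank and a hypothetical device path:

  # add a separate intent-log (slog) device to the pool
  zpool add tank log /dev/disk/by-id/ata-FAST_SLOG_DEVICE
  # confirm it appears under the "logs" section
  zpool status tank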
2020 Sep 30
2
External harddisk
> Since you have taken the disk apart it will now be useless as within the
> enclosure there could have been a vacuum or an inert gas.
From what I know, gas-filled disks didn't exist back when 3X0 GB was
on a 2" drive.
>
> You will never be able to recover any data on the disk unless you go and
> pay
> for a professional data recovery organisation to read the
2016 Feb 09
3
Utility to zero unused blocks on disk
On Mon, 2016-02-08 at 14:22 -0800, John R Pierce wrote:
> the only truly safe way to destroy data on magnetic media is to grind
> the media up into filings or melt it down in a furnace.
I unscrew the casing, extract the disk platter(s), slide a very strong
magnet over both sides of the platter surface, then bend the platter in
half.
How secure is that?
I can't afford a machine that
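The thread's original subject, zeroing unused blocks rather than destroying the drive, is usually handled by filling free space with zeros and deleting the fill file; a sketch, assuming the target filesystem is mounted at /mnt/disk:

  # fill all free space with zeros (dd stops at "no space left"), then clean up
  dd if=/dev/zero of=/mnt/disk/zerofill bs=1M conv=fsync
  rm /mnt/disk/zerofill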
2009 May 15
1
Filesystem experience question was Migration questions
Doing a cursory Google scan on journaled Linux filesystems, it seems
that the three ground-up journaled filesystems (XFS, ReiserFS, and JFS) all
have their separate strong points but all compare favorably. ReiserFS does a
better job with many small files...which would seem to be the reality of
maildir-formatted inboxes.
Any comments on that? Any war stories, that is, any comments on
reliability,
2017 Sep 08
2
cyrus spool on btrfs?
On Fri, September 8, 2017 3:06 pm, John R Pierce wrote:
> On 9/8/2017 12:52 PM, Valeri Galtsev wrote:
>> Thanks. That seems to clear fog a little bit. I still would like to hear
>> manufacturers/models here. My choices would be: Areca or LSI (bought out
>> by Intel, so former LSI chipset and microcode/firmware) and as SSD
>> Samsung
>> Evo SATA III. Does anyone who
2006 Apr 09
10
Trying to do some very simple ingress limiting, no success
Hi,
I am trying to do some simple ingress limiting based on fwmark. I know
the ability and sense to do INGRESS limiting is ehm... limited ;-) but
still I want to try it.
I tried several things.
=== 1 ===
tcq ingress handle ffff:
tcf parent ffff: protocol ip prio 1 handle 1 fw police rate 12mbit burst 10k drop
tcf parent ffff: protocol ip prio 1 handle 2 fw police rate 10mbit burst 10k drop
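tcq and tcf look like shell aliases for tc qdisc add dev ... and tc filter add dev ...; a hypothetical expansion for an eth0 upstream, using only stock tc syntax, would be:

  tc qdisc add dev eth0 handle ffff: ingress
  tc filter add dev eth0 parent ffff: protocol ip prio 1 \
      handle 1 fw police rate 12mbit burst 10k drop flowid :1
  tc filter add dev eth0 parent ffff: protocol ip prio 1 \
      handle 2 fw police rate 10mbit burst 10k drop flowid :2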
2020 Sep 30
1
External harddisk
On 09/30/2020 12:03 PM, Simon Matter wrote:
>> Since you have taken the disk apart it will now be useless as within the
>> enclosure there could have been a vacuum or an inert gas.
> From what I know, gas-filled disks didn't exist back when 3X0 GB was
> on a 2" drive.
>
>> You will never be able to recover any data on the disk unless you go and
>>
2012 Oct 07
0
rsync patch
I've made a small patch to rsync that adds three options that are
useful in data-recovery situations. I don't know whether the
maintainer will want to add this to the official distribution, but he
is free to do so if he wishes. At present, I don't have anywhere to
host the patch but I wanted to make it available so it may be tested
more thoroughly.
To be honest, I haven't
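For anyone willing to test it, the usual way to try such a patch against a source tree is sketched below (the tarball version and patch filename are hypothetical):

  # apply the patch inside the rsync source tree and rebuild
  cd rsync-3.0.9
  patch -p1 < ../rsync-recovery-options.patch
  ./configure && make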
2016 Feb 09
1
Utility to zero unused blocks on disk
> -----Original Message-----
> From: centos-bounces at centos.org [mailto:centos-bounces at centos.org] On
> Behalf Of EGO-II.1
> Sent: 9 February 2016 09:00
> To: CentOS mailing list
> Subject: Re: [CentOS] Utility to zero unused blocks on disk
>
>
>
> >> the only truly safe way to destroy data on magnetic media is to grind
> >> the media up into
2002 Sep 20
2
RAID1 + Ext3 + Automatic Power Resets
I am testing EXT3 as a filesystem for a server whose
power supply is failure-prone.
In order to do the test, I have a lever that I can
control from PC1 that can press the reset button on
PC2. PC2's reset button is automatically pressed once
every 120 seconds (the boot sequence on PC2 takes 80
seconds).
While PC2 is booted, PC1 directs email and web
requests at PC2, so that the PC2 disks are
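A minimal sketch of the kind of setup under test, with hypothetical partition names: two disks mirrored with MD RAID1 and a journalled ext3 filesystem on top:

  # mirror two partitions, then format and mount the array
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
  mkfs.ext3 /dev/md0
  mount /dev/md0 /srv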