Displaying 10 results from an estimated 10 matches for "420mb".
2019 Jan 05
0
Re: [PATCH nbdkit 0/7] server: Implement NBD_FLAG_CAN_MULTI_CONN.
...6MiB/s (216MB/s)(24.2GiB/120002msec)
write: IOPS=52.8k, BW=206MiB/s (216MB/s)(24.2GiB/120002msec)
file:
read: IOPS=48.3k, BW=189MiB/s (198MB/s)(22.1GiB/120001msec)
write: IOPS=48.3k, BW=189MiB/s (198MB/s)(22.1GiB/120001msec)
With multi-conn (-C 8):
memory:
read: IOPS=103k, BW=401MiB/s (420MB/s)(46.0GiB/120002msec)
write: IOPS=103k, BW=401MiB/s (420MB/s)(46.0GiB/120002msec)
file:
read: IOPS=49.2k, BW=192MiB/s (202MB/s)(22.5GiB/120001msec)
write: IOPS=49.2k, BW=192MiB/s (202MB/s)(22.5GiB/120001msec)
So you can see that the file plugin doesn't move at all, which is not
too su...
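(fio prints each bandwidth figure twice, in binary and decimal units, which is where the 216MB/s, 198MB/s and 420MB/s values alongside the MiB/s numbers come from. A quick Python check of that arithmetic, mine rather than the thread's:

    # fio reports BW in MiB/s (binary) with the MB/s (decimal) value in
    # parentheses: 1 MiB = 1024**2 bytes, 1 MB = 1000**2 bytes.
    def mib_to_mb(mib_per_s):
        return mib_per_s * 1024**2 / 1000**2

    for mib in (206, 189, 401):
        print(f"{mib} MiB/s = {mib_to_mb(mib):.0f} MB/s")
    # 206 MiB/s = 216 MB/s
    # 189 MiB/s = 198 MB/s
    # 401 MiB/s = 420 MB/s

fio rounds both figures from the unrounded rate, so displayed pairs can differ by 1 in the last digit.)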
2019 Jan 05
1
Re: [PATCH nbdkit 0/7] server: Implement NBD_FLAG_CAN_MULTI_CONN.
...resting numbers, concentrating on the memory
plugin and RAM disks. All are done using 8 threads and multi-conn, on
a single unloaded machine with 16 cores, using a Unix domain socket.
(1) The memory plugin using the sparse array, as implemented upstream
in 1.9.8:
read: IOPS=103k, BW=401MiB/s (420MB/s)(46.0GiB/120002msec)
write: IOPS=103k, BW=401MiB/s (420MB/s)(46.0GiB/120002msec)
(2) I moved the locking to around calls to the sparse array code, and
changed the thread model to parallel:
read: IOPS=112k, BW=437MiB/s (458MB/s)(51.2GiB/120001msec)
write: IOPS=112k, BW=437MiB/s (458MB/s)(...
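(The change measured in (2) - a parallel thread model with the lock pushed down to just the sparse-array calls - can be sketched in Python; names and structure here are illustrative, not the actual nbdkit C code:

    import threading

    PAGE = 4096

    class SparseRAMDisk:
        # Illustrative sparse array: pages are allocated lazily on first write.
        def __init__(self, size):
            self.size = size
            self.pages = {}                # page number -> bytearray(PAGE)
            self.lock = threading.Lock()   # guards self.pages only

        def pwrite(self, buf, offset):
            pos = 0
            while pos < len(buf):
                page, off = divmod(offset + pos, PAGE)
                n = min(PAGE - off, len(buf) - pos)
                with self.lock:            # lock held per sparse-array call,
                    dst = self.pages.setdefault(page, bytearray(PAGE))
                    dst[off:off + n] = buf[pos:pos + n]
                pos += n                   # ...not for the whole request

        def pread(self, count, offset):
            out = bytearray(count)         # unallocated pages read as zeroes
            pos = 0
            while pos < count:
                page, off = divmod(offset + pos, PAGE)
                n = min(PAGE - off, count - pos)
                with self.lock:
                    src = self.pages.get(page)
                if src is not None:
                    out[pos:pos + n] = src[off:off + n]
                pos += n
            return bytes(out)

Holding the lock only around each page lookup, instead of serializing entire requests, is what lets requests from multiple connections make progress in parallel.)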
2007 Jul 05
1
Request for enlightenment..
I figure I'm probably trying to do something stupid, but here goes.
I'm trying to boot a hard disk image (with one 420MB partition, lilo in
the MBR) containing debian etch (kernel 2.6.18) using memdisk (onto a
dell 2950 with 8G of ram).
Lilo (22.6.1) fails on memdisk 3.35, 3.36, 3.50, 3.51, and 3.52-pre,
with L 99 99 99 99 et al, but works on 2.x, 3.00 and 3.31.
However, the kernel fails to snag the emulated har...
2001 Sep 20
1
OT: Ogg Vorbis and Bitrate
>>>> I've heard lots of discussion about it,
>>>> but what I was taught in school was kilo
>>>> was Greek for 1000. In most usages "kilo XXX"
>>>> means "1000 of XXX".
This is correct, but...
>>>> In electronics terms
>>>> I understand we use it collectively wrong
>>>> from a linguistic
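(The distinction being debated is SI kilo, exactly 1000, versus the binary 1024, and the gap compounds at each magnitude; a quick Python illustration:

    # SI prefixes are powers of 1000; the binary ("kibi/mebi/gibi")
    # prefixes are powers of 1024, and the discrepancy grows.
    for name, power in (("kilo vs kibi", 1), ("mega vs mebi", 2), ("giga vs gibi", 3)):
        si, binary = 1000 ** power, 1024 ** power
        print(f"{name}: {si} vs {binary} (+{binary / si - 1:.1%})")
    # kilo vs kibi: 1000 vs 1024 (+2.4%)
    # mega vs mebi: 1000000 vs 1048576 (+4.9%)
    # giga vs gibi: 1000000000 vs 1073741824 (+7.4%)
)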
2019 Jan 05
4
Re: [PATCH nbdkit 0/7] server: Implement NBD_FLAG_CAN_MULTI_CONN.
On Fri, Jan 04, 2019 at 05:26:07PM -0600, Eric Blake wrote:
> On 1/4/19 4:08 PM, Richard W.M. Jones wrote:
> > First thing to say is that I need to do a *lot* more testing on this,
> > so this is just an early peek. In particular, although it passed
> > ‘make check && make check-valgrind’ I have *not* tested it against a
> > multi-conn-aware client such as the
2002 Dec 04
0
[Fwd: [RESEND] 2.4.20: ext3: Assertion failure in journal_forget()/Oops on another system]
...mpletely different system
using ext3 on software raid 0 and raid 1 with data=ordered that again
points to a problem with ext3. The ksymoops output is attached. I'm
really beginning to get worried.
Below is my previous post.
--------------------------
This started to happen during larger (10MB-420MB) rsync-based writes to
a striped ext3 partition (/dev/md11) residing on 4 scsi disks which is
mounted with defaults, i.e. data=ordered (rsync over 100Mbps link):
Dec 1 12:25:43 pollux kernel: EXT3-fs error (device md(9,11)): ext3_new_block: Allocating block in system zone - block = 114696
Dec 1...
2003 Dec 18
3
long startup times for large file systems
Howdy,
Rsync has been churning away for 45 mins, presumably building an in-core
list of files to be copied to the destination. This is a very very large
filesystem we are copying locally - approximately 4.2 million files
(WebCT). The resident process size for rsync has grown to 72MB - is
this normal behaviour for a file system this size, and does rsync have
the ability to handle such a large
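(A rough sanity check of those figures - my arithmetic, not from the thread:

    # 72 MB resident for ~4.2 million file-list entries:
    files = 4_200_000
    resident = 72 * 1024 ** 2
    print(f"{resident / files:.0f} bytes per entry")   # -> 18 bytes per entry

On those numbers the per-file memory cost is modest; the 45-minute stall is the up-front enumeration of 4.2 million files.)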
2019 Jan 05
15
[PATCH nbdkit v2 01/11] server: Implement NBD_FLAG_CAN_MULTI_CONN.
For existing commits, this is almost identical to v1, except that I
updated some commit messages and reordered the commits in a somewhat
more logical sequence.
The main changes are the extra commits:
[06/11] plugins: Return NBD_FLAG_CAN_MULTI_CONN from some readonly plugins.
- Readonly plugins that can set the flag unconditionally.
[09/11] partitioning: Return NBD_FLAG_CAN_MULTI_CONN.
[10/11]
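(For the read-only case in [06/11], the whole change amounts to advertising the flag unconditionally: a read-only plugin never writes, so every connection trivially sees a consistent view. A sketch in the style of nbdkit's Python plugin interface - assuming the binding exposes a can_multi_conn callback mirroring the C API:

    # Minimal read-only plugin that sets the flag unconditionally.
    DATA = b"\0" * (1024 * 1024)        # 1 MiB of zeroes to serve

    def open(readonly):
        return 1                        # no per-connection state needed

    def get_size(h):
        return len(DATA)

    def can_write(h):
        return False                    # read-only plugin

    def can_multi_conn(h):
        return True                     # advertise NBD_FLAG_CAN_MULTI_CONN

    def pread(h, count, offset):
        return DATA[offset:offset + count]

Loaded with something like "nbdkit python ./ro.py"; invocation details vary between nbdkit versions.)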
2007 Jan 11
4
Help understanding some benchmark results
...e is relatively good most of the time (with "striping", "mirroring" and raidz[2]'s with fewer disks).
Examples:
* 8-disk RAID0 on Linux returns about 190MB/s write and 245MB/s read, while a ZFS raidz using the same disks returns about 120MB/s write, but 420MB/s read.
* 16-disk RAID10 on Linux returns 165MB/s write and 440MB/s read, while a ZFS pool with 8 mirrored disks returns 140MB/s write and 410MB/s read.
* 16-disk RAID6 on Linux returns 126MB/s write and 162MB/s read, while a 16-disk raidz2 returns 80MB/s write and 142MB/s read....
2008 Oct 03
4
fxp performance with POLLING
Hello again :)
With POLLING enabled I experience about a 10%-25% performance drop when
copying files over the network. Tested with both Samba and NFS. Is this normal?
FreeBSD 7.1-PRERELEASE #0: Sat Sep 6 01:52:12 CEST 2008
fxp0: <Intel 82801DB (ICH4) Pro/100 Ethernet> port 0xc800-0xc83f mem
0xe1021000-0xe1021fff irq 20 at device 8.0 on pci1
# ifconfig fxp0
fxp0: