Displaying 8 results from an estimated 8 matches for "310mb".
2017 Oct 27
5
Poor gluster performance on large files.
...e mount:
1000MB/s write
325MB/s read
Distributed only servers 1+2:
Fuse mount on server 1:
900MB/s write iozone 4 streams
320MB/s read iozone 4 streams
single stream read 91MB/s @64K, 141MB/s @1M
simultaneous iozone 4 stream 5G files
Server 1: 1200MB/s write, 200MB/s read
Server 2: 950MB/s write, 310MB/s read
I did some earlier single brick tests with samba VFS and 3 workstations and
got up to 750MB/s write and 800MB/s read aggregate but that's still not
good.
These are the only volume settings tweaks I have made (after much single box
testing to find what actually made a difference):
per...
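The post does not include the exact iozone command line, but a plausible reconstruction of the 4-stream, 5G-file runs quoted above (the mount point and file names are assumptions, not from the thread) would be:

    # hypothetical reconstruction: 4 parallel streams, 5 GB files, 1 MB records,
    # sequential write (-i 0) then sequential read (-i 1) against the fuse mount
    iozone -i 0 -i 1 -t 4 -s 5g -r 1m \
        -F /mnt/gluster/f1 /mnt/gluster/f2 /mnt/gluster/f3 /mnt/gluster/f4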
2009 Jul 02
2
rsync transfer rates over ssh
...r central office is connected to the internet via T1 and the
other office is connected to the internet via DSL. rsync is being run at
the remote office to pull data down from the central office so I expect
to see transfer speeds of at most ~150KB/s. Instead I'm
seeing this on a ~310mb file (just one example):
0 0% 0.00kB/s 0:00:00
13372944 4% 12.75MB/s 0:00:23
22294152 7% 10.63MB/s 0:00:26
32173104 10% 10.23MB/s 0:00:26
....
313022664 99% 2.63MB/s 0:00:00
314732544 100% 6.05MB/s 0:00:49 (xfer#223,...
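One likely explanation, not stated in the excerpt, is that rsync's progress counter measures how fast the destination file is being reconstructed: blocks matched against an existing local copy count toward the rate at local disk speed, so the displayed MB/s can far exceed the link. A way to check actual wire traffic (host and path here are illustrative) is --stats, which reports literal bytes sent and received separately from the total file size:

    # --stats separates bytes actually received over the network
    # from data matched against an existing local copy
    rsync -av --progress --stats centraloffice:/data/bigfile.dat /local/data/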
2017 Oct 30
0
Poor gluster performance on large files.
...> Distributed only servers 1+2:
> Fuse mount on server 1:
> 900MB/s write iozone 4 streams
> 320MB/s read iozone 4 streams
> single stream read 91MB/s @64K, 141MB/s @1M
> simultaneous iozone 4 stream 5G files
> Server 1: 1200MB/s write, 200MB/s read
> Server 2: 950MB/s write, 310MB/s read
>
> I did some earlier single brick tests with samba VFS and 3 workstations
> and got up to 750MB/s write and 800MB/s read aggregate but that's still not
> good.
>
> These are the only volume settings tweaks I have made (after much single
> box testing to find what a...
2017 Oct 27
0
Poor gluster performance on large files.
...> Distributed only servers 1+2:
> Fuse mount on server 1:
> 900MB/s write iozone 4 streams
> 320MB/s read iozone 4 streams
> single stream read 91MB/s @64K, 141MB/s @1M
> simultaneous iozone 4 stream 5G files
> Server 1: 1200MB/s write, 200MB/s read
> Server 2: 950MB/s write, 310MB/s read
>
> I did some earlier single brick tests with samba VFS and 3 workstations and got up to 750MB/s write and 800MB/s read aggregate but that's still not good.
>
> These are the only volume settings tweaks I have made (after much single box testing to find what actually mad...
2015 Feb 27
0
[LLVMdev] SVN dump seed file (was: svnsync of llvm tree)
...te history of the repository, then a git clone of the git-svn mirror will give you this very cheaply and with the added bonus that you can then commit to the local copy and still push things upstream (and merge changes from upstream). A fresh clone of the llvm and clang git mirrors transfers about 310MB for LLVM and about 190MB for Clang.
What do you want to do with the svnsync copy?
David
> On 27 Feb 2015, at 10:27, Oliver Schneider <llvm at assarbad.net> wrote:
>
> Hi folks,
>
> in a rather old thread on this list titled "svnsync of llvm tree"
> <http://...
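For reference, cloning the two git-svn mirrors mentioned above looked roughly like the following; the URLs are the llvm.org mirror addresses as used around the time of this 2015 thread and may have changed since:

    # clone the git-svn mirrors of llvm and clang (~310MB and ~190MB transfers)
    git clone http://llvm.org/git/llvm.git
    git clone http://llvm.org/git/clang.git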
2015 Feb 27
2
[LLVMdev] SVN dump seed file (was: svnsync of llvm tree)
Hi folks,
in a rather old thread on this list titled "svnsync of llvm tree"
<http://comments.gmane.org/gmane.comp.compilers.llvm.devel/42523> we
noticed that an svnsync would fail due to a few particularly big commits
that apparently caused OOM conditions on the server. The error and the
revision number were consistent for different people.
That seems to be fixed now. I succeeded
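For context, a minimal svnsync mirror setup of the kind this thread discusses would be the following; the local repository name and the hook stub are standard svnsync boilerplate, not taken from the thread:

    # create an empty local repo and allow revprop changes, as svnsync requires
    svnadmin create llvm-mirror
    printf '#!/bin/sh\nexit 0\n' > llvm-mirror/hooks/pre-revprop-change
    chmod +x llvm-mirror/hooks/pre-revprop-change
    # point the mirror at upstream and pull every revision
    svnsync init file://$PWD/llvm-mirror https://llvm.org/svn/llvm-project
    svnsync sync file://$PWD/llvm-mirror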
2003 Mar 21
2
not enough virtual memory
Greetings,
I have been trying to install Cool Edit Pro 1.2. I'm running
Mandrake 8.1, which came with Wine release 20010731. This is a
dual-boot system with MS-DOS 6.22 in addition to Mandrake 8.1.
There is NO version of Windoze of any kind installed. I have 512
MB of RAM; the DOS partition has 941 MB free and 777 permissions.
I mount the CD-ROM drive, cd to it, and execute wine setup.exe
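Spelled out, the sequence the poster describes is the following; /mnt/cdrom is the usual Mandrake-era default mount point, assumed here rather than stated in the post:

    # mount the install CD and run the installer under wine
    mount /mnt/cdrom
    cd /mnt/cdrom
    wine setup.exe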
2007 Jan 11
4
Help understanding some benchmark results
G'day, all,
So, I've decided to migrate my home server from Linux+swRAID+LVM to Solaris+ZFS, because it seems to hold much better promise for data integrity, which is my primary concern.
However, naturally, I decided to do some benchmarks in the process, and I don't understand why the results are what they are. I thought I had a reasonable understanding of ZFS, but now