search for: 57mb

Displaying 8 results from an estimated 8 matches for "57mb".

2006 Nov 09
7
Weird Samba upload performance on Gigabit network
Here's a weird one that may have less to do with Samba and more to do with network frame sizes. I have recently upgraded the network infrastructure to support gigabit speeds.
- OSX Tiger to OSX Tiger file transfers operate at gigabit speeds.
- Samba 3.0.23C (Suse 10) to OSX Tiger file transfers operate at gigabit speeds.
- BUT OSX Tiger to Samba 3.0.23C (Suse 10) file transfers
2012 Apr 17
2
[LLVMdev] InstCombine adds bit masks, confuses self, others
...ay/povray                                      2.015   2.246  +11.5%  +47mB
External/SPEC/CINT2000/255_vortex/255_vortex      1.814   2.044  +12.7%  +52mB
SingleSource/Benchmarks/Shootout-C++/heapsort     1.871   2.132  +13.9%  +57mB
SingleSource/Benchmarks/Shootout-C++/ary3         1.087   1.264  +16.3%  +65mB
MultiSource/Benchmarks/SciMark2-C/scimark2       27.491  23.596  -14.2%  -66mB
MultiSource/Benchmarks/Olden/bisort/bisort        0.360   0....
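
The mB column appears to be millibels, i.e. 1000 * log10(after/before); under that assumption the quoted deltas follow directly from the two timing columns. A quick Python check (the function name is just illustrative):

    import math

    def delta_millibels(before, after):
        """Log-scale runtime change: 1000 * log10(after / before)."""
        return 1000.0 * math.log10(after / before)

    print(round(delta_millibels(1.871, 2.132)))    # +57  (heapsort,  +13.9%)
    print(round(delta_millibels(27.491, 23.596)))  # -66  (scimark2,  -14.2%)
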
2012 Apr 16
0
[LLVMdev] InstCombine adds bit masks, confuses self, others
On Tue, Apr 17, 2012 at 12:23 AM, Jakob Stoklund Olesen <stoklund at 2pi.dk> wrote:
> I am not sure how best to fix this. If possible, InstCombine's
> canonicalization shouldn't hide arithmetic progressions behind bit masks.
The entire concept of cleverly converting arithmetic to bit masks seems like the perfect domain for DAGCombine instead of InstCombine: 1) We know the
2008 Nov 06
45
'zfs recv' is very slow
Hi, I have two systems, A (Solaris 10 update 5) and B (Solaris 10 update 6). I'm using 'zfs send -i' to replicate changes on A to B. However, the 'zfs recv' on B is running extremely slowly. If I run the zfs send on A and redirect output to a file, it sends at 2MB/sec. But when I use 'zfs send
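
Not from the thread, but a sketch of the kind of isolation test the poster describes: timing the raw 'zfs send -i' stream locally so the send side can be ruled out before blaming 'zfs recv' or the link. The dataset and snapshot names below are placeholders.

    import subprocess, time

    # Placeholder dataset/snapshot names; substitute the pair being replicated.
    SEND_CMD = ["zfs", "send", "-i", "tank/data@snap1", "tank/data@snap2"]

    def send_throughput(cmd, chunk=1 << 20):
        """Consume the send stream and report MB/s without involving recv."""
        total, start = 0, time.time()
        with subprocess.Popen(cmd, stdout=subprocess.PIPE) as proc:
            while True:
                data = proc.stdout.read(chunk)
                if not data:
                    break
                total += len(data)
        elapsed = time.time() - start
        print(f"{total / elapsed / 2**20:.1f} MB/s over {elapsed:.1f}s")

    send_throughput(SEND_CMD)
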
2012 Apr 16
5
[LLVMdev] InstCombine adds bit masks, confuses self, others
Look at this silly function:

  $ cat small.c
  unsigned f(unsigned a, unsigned *p) {
    unsigned x = a/4;
    p[0] = x;
    p[1] = x+x;
    return p[1] - 2*p[0];
  }

GCC turns this into straightforward code and figures out the 0 return value:

  shrl  $2, %edi
  movl  %edi, (%rsi)
  addl  %edi, %edi
  movl  %edi, 4(%rsi)
  movl  $0, %eax
  ret

LLVM optimizes the code:

  $ clang -O -S -o- small.c -emit-llvm
  define i32
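
For the record, the zero return value is pure algebra: p[0] = a/4 and p[1] = 2*(a/4), so p[1] - 2*p[0] = 0. The bit-mask form the thread objects to comes from rewriting the doubled shift as a shift plus mask; a small Python check of that identity (this is only the arithmetic, not the actual LLVM IR):

    # 2*(a >> 2) can be rewritten as (a >> 1) & ~1: doubling cancels one bit
    # of the shift and leaves a mask, which is the kind of canonicalization
    # being discussed.
    for a in range(1 << 16):
        assert 2 * (a >> 2) == ((a >> 1) & ~1)
        x = a >> 2
        assert (x + x) - 2 * x == 0   # the function's return value
    print("identities hold for all 16-bit inputs")
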
2004 Aug 02
4
reducing memmoves
Attached is a patch that makes window strides constant when files are walked with a constant block size. In these cases, it completely avoids all memmoves. In my simple local test of rsyncing 57MB of 10 local files, memmoved bytes went from 18MB to zero. I haven't tested this for a big variety of file cases. I think that this will always reduce the memmoves involved with walking a large file, but perhaps there's some case I'm not seeing. Also, the memmove cost is obviously neg...
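
A rough sketch of the idea (not the actual rsync patch): when the window advances by exactly one block each step, the buffer can be used as a ring and refilled in place, so there is never a need to shift the remaining bytes down with memmove. All names here are illustrative.

    def walk_file(f, block_size, window_blocks=4):
        """Walk a binary file in fixed-size blocks using a ring buffer.

        Because the stride equals the block size, each read lands on a
        block-aligned slot and simply overwrites the oldest data; the
        window never has to be compacted with a memmove-style copy.
        """
        buf = bytearray(block_size * window_blocks)
        view = memoryview(buf)
        index = 0
        while True:
            slot = (index % window_blocks) * block_size
            n = f.readinto(view[slot:slot + block_size])
            if not n:
                break
            yield view[slot:slot + n]
            index += 1

    # Example with an in-memory "file":
    import io
    total = sum(len(b) for b in walk_file(io.BytesIO(b"x" * 2**20), 64 * 1024))
    print(total)  # 1048576
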
2008 Aug 05
31
Btrfs v0.16 released
Hello everyone, Btrfs v0.16 is available for download, please see http://btrfs.wiki.kernel.org/ for download links and project information. v0.16 has a shiny new disk format, and is not compatible with filesystems created by older Btrfs releases. But, it should be the fastest Btrfs yet, with a wide variety of scalability fixes and new features. There were quite a few contributors this time
2007 May 14
37
Lots of overhead with ZFS - what am I doing wrong?
I was trying to simply test the bandwidth that Solaris/ZFS (Nevada b63) can deliver from a drive, and doing this: dd if=(raw disk) of=/dev/null gives me around 80MB/s, while dd if=(file on ZFS) of=/dev/null gives me only 35MB/s!? I am getting basically the same result whether it is a single zfs drive, a mirror, or a stripe (I am testing with two Seagate 7200.10 320G drives hanging off the same interface
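
Not from the thread, but roughly what the dd comparison above measures: sequential read bandwidth from a raw device versus a file in the pool. A sketch with placeholder paths (caching effects would need to be handled separately):

    import time

    def sequential_read_mb_s(path, chunk=1 << 20, limit_mb=1024):
        """Time sequential reads, like `dd if=<path> of=/dev/null bs=1M`."""
        read, start = 0, time.time()
        with open(path, "rb", buffering=0) as f:
            while read < limit_mb * 2**20:
                data = f.read(chunk)
                if not data:
                    break
                read += len(data)
        return read / 2**20 / (time.time() - start)

    # Placeholder paths: compare the raw device against a file on ZFS.
    # print(sequential_read_mb_s("/dev/rdsk/c0t1d0s0"))  # raw disk
    # print(sequential_read_mb_s("/tank/testfile"))       # file on the pool
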