search for: 23mb

Displaying 20 results from an estimated 24 matches for "23mb".

2008 May 08
1
Restoring a DomU HVM-Domain is "slow" (Bandwidth 23MB/sec from a ramdisk, xen3.2.1)
Hi, I am doing some tests with restoring HVM winxp domUs. Even if I restore a saved DomU from a ramdisk, the restore process only reaches a bandwidth of about 23MB/sec. Here is an example restoring a 512MB HVM openSUSE 10.3 DomU ...... [2008-05-07 22:40:12 3314] DEBUG (XendCheckpoint:218) restore:shadow=0x5, _static_max=0x20000000, _static_min=0x0, [2008-05-07 22:40:12 3314] DEBUG (balloon:132) Balloon: 12803896 KiB free; need 537600; done. [2008-05-07 22:40:12...
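A minimal sketch of how such a ramdisk save/restore test can be timed with the classic xm toolstack; the domain name and the tmpfs mount point below are assumptions for illustration, not taken from the post.

# Sketch only: the domain name "winxp" and the mount point are hypothetical.
mount -t tmpfs -o size=1G tmpfs /mnt/ramdisk    # keep the save file in RAM
xm save winxp /mnt/ramdisk/winxp.chkpt          # suspend and save the HVM domU
time xm restore /mnt/ramdisk/winxp.chkpt        # time the restore to estimate MB/sec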
2013 Jul 30
3
SMB throughput inquiry, Jeremy, and James' bow tie
...my home network to end-to-end GbE. My clients are Windows XP SP3 w/hot fixes, and my Samba server is 3.5.6 atop vanilla kernel.org Linux 3.2.6 and Debian 6.0.6. With FDX fast ethernet steady SMB throughput was ~8.5MB/s. FTP and HTTP throughput were ~11.5MB/s. With GbE steady SMB throughput is ~23MB/s, nearly a 3x improvement, making large file copies such as ISOs much speedier. However ProFTPd and Lighttpd throughput are both a steady ~48MB/s, just over double the SMB throughput. I've tweaked the various Windows TCP stack registry settings, WindowScaling ON, Timestamps OFF, 256KB TcpWin...
2006 Nov 14
2
Problem with file size
...iors: list of 4 - (matrix 6x6, 2 vectors of length 6, vector of length 2) - all num params: list of 4: centers [238304 x 3 x 2]: num scales [238304 x 3 x 2]: num N [238304 x 3]: num f0 [scalar]: num If I save this environment to a file, I get a file of 23MB. Great. Session 2: Analogous to "Session 1", but replace 238304 by 262264. If I save the environment on Session 2, I get a file of 8.4GB. I applied object.size on each of the objects in each environment, and this is what I got: For Session 1: index1: 16204864 index2: 16204864 p...
2017 Oct 27
5
Poor gluster performance on large files.
...brick tests with samba VFS and 3 workstations and got up to 750MB/s write and 800MB/s read aggregate but that's still not good. These are the only volume settings tweaks I have made (after much single box testing to find what actually made a difference): performance.cache-size 1GB (Default 23MB) performance.client-io-threads on performance.io-thread-count 64 performance.read-ahead-page-count 16 performance.stat-prefetch on server.event-threads 8 (default?) client.event-threads 8 Any help given is appreciated! ...
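For reference, volume options like the ones quoted above are applied with gluster volume set; the sketch below simply replays those settings and assumes a hypothetical volume name gv0.

# Hedged sketch: the volume name "gv0" is an assumption for illustration.
gluster volume set gv0 performance.cache-size 1GB
gluster volume set gv0 performance.client-io-threads on
gluster volume set gv0 performance.io-thread-count 64
gluster volume set gv0 performance.read-ahead-page-count 16
gluster volume set gv0 performance.stat-prefetch on
gluster volume set gv0 server.event-threads 8
gluster volume set gv0 client.event-threads 8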
2005 Jul 09
5
pxelinux and 2.6.9 kernel parameters
I am using RHEL4 with kernel 2.6.9. I can load this on my hardware by passing the following kernel parameter: memmap=496M@16M, and it loads up and runs OK. If I do not use this parameter, I get a kernel panic. When I try to load the same software onto the same hardware using PXE, I add the same parameter on the server's tftpboot/pxelinux/pxelinux.cfg/default file: append
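A sketch of what the corresponding pxelinux.cfg/default entry might look like; only the memmap=496M@16M parameter comes from the post, while the label, kernel, and initrd names are assumptions.

# Hypothetical entry; label, kernel and initrd file names are assumed,
# only memmap=496M@16M is taken from the post.
cat > /tftpboot/pxelinux/pxelinux.cfg/default <<'EOF'
DEFAULT rhel4
LABEL rhel4
  KERNEL vmlinuz
  APPEND initrd=initrd.img memmap=496M@16M
EOF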
2017 Oct 30
0
Poor gluster performance on large files.
...and 3 workstations > and got up to 750MB/s write and 800MB/s read aggregate but that's still not > good. > > These are the only volume settings tweaks I have made (after much single > box testing to find what actually made a difference): > performance.cache-size 1GB (Default 23MB) > performance.client-io-threads on > performance.io-thread-count 64 > performance.read-ahead-page-count 16 > performance.stat-prefetch on > server.event-threads 8 (default?) > client.event-threads 8 > > Any help given is appreciated! ...
2017 Oct 27
0
Poor gluster performance on large files.
...th samba VFS and 3 workstations and got up to 750MB/s write and 800MB/s read aggregate but that's still not good. > > These are the only volume settings tweaks I have made (after much single box testing to find what actually made a difference): > performance.cache-size 1GB (Default 23MB) > performance.client-io-threads on > performance.io-thread-count 64 > performance.read-ahead-page-count 16 > performance.stat-prefetch on > server.event-threads 8 (default?) > client.event-threads 8 > > Any help given is appreciated! ...
2003 Oct 20
1
v2.5.6 on AIX and large files
...for it, seem to imply either LAN/IP-stack problems or problems with the source data. I've got a problem believing either of these since: a. I managed to send the failing file systems quite successfully using standard "rcp" (remote copy) command. In fact rcp was twice as fast as rsync (23MB/s+ vs 10-11MB/s) which I'm worried about. b. No other problems noted with intersystem comms or these filesystems. c. Other file systems which worked fine with rsync contained more data - and were therefore larger - than some of the failing ones, so it's not a file system size issue. A...
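One way to make the rsync-versus-rcp comparison above more like-for-like is to disable rsync's delta-transfer algorithm for the initial copy; a hedged sketch, with host and path names as placeholders:

# Sketch only: host and paths are placeholders, not taken from the post.
rcp -r /data/fs1 remotehost:/data/fs1
rsync -a --whole-file /data/fs1/ remotehost:/data/fs1/   # --whole-file (-W) skips the delta algorithm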
2006 Jul 01
1
The ZFS Read / Write roundabout
Hey all - Was playing a little with zfs today and noticed that when untarring a 2.5GB archive both from and onto the same spindle in my laptop, the bytes read and written over time were seesawing between approximately 23MB/s and 0MB/s. It seemed like we read and read and read till we were all full up, then wrote until we were empty, and so the cycle went. Now: as it happens, 31MB/s is about as fast as it gets on this disk at that part of the platter (using dd and large block size on the rdev). (iirc, it actually st...
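The seesaw pattern described above is easy to watch directly while the untar runs; a minimal sketch, assuming a pool named tank (the pool name is not from the post):

# Report per-second read/write bandwidth for the pool; "tank" is an assumed name.
zpool iostat tank 1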
2008 Jul 04
3
[Wine 0.9.60] Can't install Command & Conquer Kane
Hello guys, When I try to install Command & Conquer Kane's edition, I get this error: "The wizard was interrupted before C&C Kane's edition could be completely installed" I googled "The wizard was interrupted before could be completely installed" and got a Microsoft tutorial about adding a registry key that was a binary value... it was a permissions issue,
2010 Oct 09
4
[LLVMdev] LTO, plugins and binary sizes
...on linux x86-64. Gcc is the 4.4.4 included with Fedora 13. The results are: gcc -O3: 32MB gcc -Os: 25MB clang lto -Os: 22MB I then decided to try to link without export-dynamic, since it produces some fairly large tables and blocks many optimizations. The new results were gcc -O3: 30MB gcc -Os: 23MB clang lto -Os: 18MB The full patches I used are attached. I hope to get the non-hackish bits reviewed, starting with the fix to 8313. Cheers, -- Rafael Ávila de Espíndola
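For context, an LTO link of this kind is normally driven from the compiler driver; a hedged sketch with placeholder file names, assuming a clang and linker with LTO plugin support (exact flags vary by toolchain version):

# Sketch only: file names are placeholders; flags depend on the clang/linker versions in use.
clang -Os -flto -c a.c b.c
clang -Os -flto -fuse-ld=gold a.o b.o -o app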
2019 Apr 10
4
Feasibility of cling/llvm interpreter for JIT replacement
Dear Sir/Madam Our company, 4Js software, has developed SQL database software that runs under different operating systems: Windows, Linux, Mac OS X. This software compiles each SQL statement into a C program that is compiled "on the fly" and executed by our JIT, Just In Time compiler. We wanted to port it to Apple's iOS, and spent a lot of time retargeting the JIT for
2008 Feb 06
1
Re-map Portion of Ruby Heap Holding Optree to Save Child Server Memory?
I wonder if it is possible to reduce the memory footprint of a pack of 6 mongrels by about 70 meg. In reading Hongli's blog about his revisions to Ruby GC (last Monday Matz asked if he could use Hongli's work), I was wondering at the size (in RSS) of all the mongrels in a cluster. It has always seemed just too big for my C-binary intuition. Why is so much of the memory of the parent
2007 Aug 30
15
ZFS, XFS, and EXT4 compared
I have a lot of people whispering "zfs" in my virtual ear these days, and at the same time I have an irrational attachment to xfs based entirely on its lack of the 32000 subdirectory limit. I'm not afraid of ext4's newness, since really a lot of that stuff has been in Lustre for years. So a-benchmarking I went. Results at the bottom:
2003 Oct 26
0
rsync Digest, Vol 10, Issue 14
...y either LAN/IP-stack problems or problems with the source > data. I've got a problem believing either of these since: > a. I managed to send the failing file systems quite successfully using > standard "rcp" (remote copy) command. In fact rcp was twice as fast as > rsync (23MB/s+ vs 10-11MB/s) which I'm worried about. > b. No other problems noted with intersystem comms or these filesystems. > c. Other file systems which worked fine with rsync contained more data - > and were therefore larger - than some of the failing ones, so it's not a > file s...
2004 Jul 02
0
1.0-test24 and some mbox benchmarking
...s I should shrink it from Dovecot too. Rewriting works by reading the file forward into a buffer and writing the changes as needed. This is quite fast, but it means the buffer can grow large, and if the rewrite gets interrupted everything in the buffer gets lost. In my test mbox this would have been 23MB of lost data, but normally much less. Benchmarks ---------- I simply rewrote a 1.4GB mbox containing 361052 mails, Linux kernel mailing list archives from years 96-02. Computer is Athlon XP 2700+ with 1GB of memory. Reads/writes were counted using Linux's iostat command. Nothing else was be...
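The iostat accounting mentioned above can be reproduced while the rewrite runs; a minimal sketch (the one-second interval is an assumption, not from the post):

# Report per-device transfer statistics once per second during the rewrite.
iostat -d 1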
2006 Apr 05
0
Slow performance to samba server with OSX client
...3. the Realtek NIC is in full duplex mode (see ethtool dump) 4. the iMac Samba version is 3.0.10 5. performance using scp is acceptable (11.5 MB/s) 6. setting the delayed_ack on the iMac to 0 makes hardly any difference (still 2 hours) Tweaking the samba config helps a little, but nowhere near the 23MB/s I get when running tcpdump. I also thought it might have something to do with DNS or lmhosts lookups or something, but that doesn't explain this. I'm lost. Cheers, Pim --------- delayed ack --------- sudo sysctl -w net.inet.tcp.delayed_ack=0 ----------- SCP COPY ---------------- VT...
2008 Mar 03
7
DO NOT REPLY [Bug 5299] New: 2.6.9 client cannot receive files from 3.0.0 server
https://bugzilla.samba.org/show_bug.cgi?id=5299 Summary: 2.6.9 client cannot receive files from 3.0.0 server Product: rsync Version: 3.0.0 Platform: x86 OS/Version: Windows XP Status: NEW Severity: major Priority: P3 Component: core AssignedTo: wayned@samba.org ReportedBy:
2008 Jun 08
8
Windows GPLPV under OpenSolaris not working
...inder, the wiki page is > http://wiki.xensource.com/xenwiki/XenWindowsGplPv > > This release fixes a bug which could be hit under high disk load (high > number of outstanding requests with fragmented sg lists) and would > reduce performance dramatically - my iometer testing went from 23MB/s to > 0.5MB/s! > > Also, xenvbd will now dump some stats to the kernel debug logs (use > debugview from sysinternals to view these) every 60 seconds. I'm not > sure how useful these might be yet. > > > James > Hello! Now to do an exhaustive description of my st...