search for: 39mb

Displaying 10 results from an estimated 10 matches for "39mb".

2010 Dec 10
4
qemu VS tapdisk2 VS blkback benchmarks
...ty. SMART support is: Enabled TEST RESULTS ------------ The test is simple: write 1GB of data to disk and measure bandwidth and cpu usage. - tapdisk2 on raw file bandwidth: 32MB/s average cpu usage: 22% - qemu on raw file bandwidth: 33MB/s average cpu usage: 12% - blkback on LVM bandwidth: 39MB/s - qemu on LVM bandwidth: 38MB/s CONCLUSIONS ----------- Qemu beats tapdisk2 on raw files (the bandwidth is the same but the cpu usage is lower). Qemu has similar performance to blkback on LVM from the bandwidth perspective, but I didn't measure the cpu usage in that case. Cheers,...
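The test described in this thread (write 1GB and watch bandwidth plus cpu usage) can be approximated with a plain dd run; this is only a hedged sketch, not the poster's exact commands, and the target path is an assumption:

    # write 1 GiB to the disk under test, bypassing the page cache,
    # and let dd report the resulting bandwidth
    dd if=/dev/zero of=/mnt/test/1g.bin bs=1M count=1024 oflag=direct
    # watch backend cpu usage (qemu, tapdisk2 or blkback) in a second
    # terminal, e.g.: top -b -d 1 | grep -E 'qemu|tapdisk|blkback'
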
2010 Jan 15
0
Win7 can and cannot join domain; speed issues? (tests to /dev/zero & /dev/null?)
...figure out why it's so slow. Writes are faster than reads. My tests are a bit weird. To test out write, I write to /dev/null on the target sys, and to test read, I'm reading from /dev/zero. Locally, these copies return instantaneously. But over the network I get about 34MB/s read, and 39MB/s write. But oddly smbd is nearly 100% cpu bound. I was using 'dd' with a 1GB block size. So shouldn't 'smbd' usually have been asleep awaiting I/O completion (which is near instantaneous). I'd expect to be getting more along the lines of 60-70MB/s R+W (Gigabit network w...
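A minimal way to reproduce this kind of read/write test against an SMB mount is sketched below; the mount point and sizes are assumptions, and unlike the poster's variant (which used /dev/zero and /dev/null as endpoints to keep disks out of the picture) it uses a regular file on the share:

    # write test: stream zeros onto the share and let dd report throughput
    dd if=/dev/zero of=/mnt/smbshare/test.bin bs=1M count=1024
    # read test: stream the file back and discard it
    dd if=/mnt/smbshare/test.bin of=/dev/null bs=1M
    # note: bs=1G makes dd buffer a single 1 GiB block in memory before the
    # write call; bs=1M count=1024 moves the same amount in smaller chunks
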
2007 Oct 25
7
TC (HTB) doesn''t work well when network is congested?
...atch ip dst 192.168.5.141/32 classid 1:10 I ran a test in which all 10 clients send/receive packets to/from the server simultaneously. But Client 1 only got 20mbps bandwidth for sending, and 38mbps for receiving. If I limit the rate of both classes 1:1 to 60mbps instead of 125mbps, Client 1 got 39mbps for sending, and 40mbps for receiving. I am not sure what might cause this. Is it because TC doesn't work well when network is congested? Or my script is not right? Thanks a lot, william
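For reference, a minimal HTB setup of the kind the script excerpt suggests would look roughly like this; the device name, rates and class ids are assumptions pieced together from the snippet, not the poster's actual script:

    # root HTB qdisc with one parent class at the 125mbit link rate
    tc qdisc add dev eth0 root handle 1: htb
    tc class add dev eth0 parent 1: classid 1:1 htb rate 125mbit
    # per-client class, capped at 60mbit as in the second test
    tc class add dev eth0 parent 1:1 classid 1:10 htb rate 60mbit ceil 60mbit
    # steer traffic for Client 1 into class 1:10
    tc filter add dev eth0 protocol ip parent 1: prio 1 u32 \
        match ip dst 192.168.5.141/32 flowid 1:10
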
2010 May 05
0
Migration problem
...eckpoint:423) xc_save: failed to get the suspend evtchn port [2010-05-05 11:32:15 3165] INFO (XendCheckpoint:423) 1: sent 136192, skipped 538, delta 10803ms, dom0 42%, target 0%, sent 413Mb/s, dirtied 5Mb/s 1711 pages 2: sent 1309, skipped 9, delta 48ms, dom0 56%, target 0%, sent 893Mb/s, dirtied 39Mb/s 58 pages 3: sent 58, skipped 0, delta 10ms, dom0 100%, target 0%, sent 190Mb/s, dirtied 32Mb/s 10 pages 4: sent 10, skipped 0, Start last iterationint:423) Saving memory pages: iter 4 0% [2010-05-05 11:32:26 3165] DEBUG (XendCheckpoint:394) suspend [2010-05-05 11:32:26 3165] DEBUG (XendCheckp...
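For what it's worth, the per-iteration rates in this log are internally consistent if you assume 4 KiB pages: iteration 1 sends 136192 pages x 4096 bytes (about 532 MiB) in 10.803 s, roughly 51.6 MB/s or the reported 413 Mb/s, while the 1711 pages dirtied over the same window work out to about 5 Mb/s; that much lower dirty rate is why the migration converges after only a few iterations.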
2003 Jul 23
10
malloc does not return null when out of memory
We have a little soekris box running freebsd that uses racoon for key management. It's used for setting up an ipsec tunnel. I noticed that one of these devices lost the tunnel this morning. I looked in the log and saw this Jul 23 01:37:57 m0n0wall /kernel: pid 80 (racoon), uid 0, was killed: out of swap space I reproduced this problem using this code. #include <stdlib.h> int
2015 Jun 24
6
LVM hatred, was Re: /boot on a separate partition?
On 06/23/2015 09:15 AM, Jason Warr wrote: >> That said, I prefer virtual machines over multiboot environments, and I >> absolutely despise LVM --- that cursed thing is never getting on my >> drives. Never again, that is... > > I'm curious what has made some people hate LVM so much. I wondered the same thing, especially in the context of someone who prefers virtual
2015 Jun 24
6
LVM hatred, was Re: /boot on a separate partition?
...write bandwidth at about 12.5% (1/8) native disk write performance. Yesterday I moved a bunch of VMs from a file-backed virt server (set up by someone else) to one that used logical volumes. Block write speed on the old server, measured with bonnie++, was about 21.6MB/s in the guest and about 39MB/s on the host. So, less bad than a few years prior, but still bad. (And yes, all of those numbers are bad. It's a 3ware controller, what do you expect?) LVM backed guests measure very nearly the same as bare metal performance. After migration, bonnie++ reports about 180MB/s block write...
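For anyone wanting to repeat the comparison, a typical bonnie++ block-write measurement looks roughly like the sketch below (path, size and user are assumptions; the test file should be at least twice the machine's RAM so the page cache doesn't mask the disk):

    # -d: directory on the filesystem under test, -s: file size in MiB,
    # -n 0: skip the small-file tests so only the block I/O numbers are reported
    bonnie++ -d /mnt/test -s 16384 -n 0 -u root

Run once in the guest and once on the host to get the two numbers being compared above.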
2006 May 10
7
mongrel vs. scgi
I've liked the scgi runner ever since it came out. I like the way it's controlled, the way it's clusterable, and the fact that it runs on win32 platforms so easily. Now it's May 2006, and the scgi runner hasn't changed since October, and now we have mongrel. I keep seeing hints of clustering in mongrel, as well. I just downloaded the win32 installer for
2011 Feb 22
6
how to optimize CentOS XEN dom0?
Hi, I have a problematic CentOS XEN server and hope someone could point me in the right direction to optimize it a bit. The server runs on a Core2Quad 9300, with 8GB RAM (max motherboard can take, 1U chassis) on an Intel motherboard with a 1TB SATA HDD. dom0 is set to 512MB limit with a few small XEN VMs running: root at zaxen01:[~]$ xm list Name ID
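A common first step for a box like this is to fix dom0's memory at boot instead of letting it balloon; a hedged sketch, with the grub path and exact kernel line assumed rather than taken from this server:

    # /boot/grub/grub.conf: pin dom0 memory on the Xen command line
    kernel /xen.gz dom0_mem=512M
    # after a reboot, confirm with:
    xm list
    xm info | grep free_memory
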
2013 Oct 10
97
[Bug 70354] New: Failed to initialise context object: 2D_NVC0 (0) (for my GeForce GT 750M)
https://bugs.freedesktop.org/show_bug.cgi?id=70354 Priority: medium Bug ID: 70354 Assignee: nouveau at lists.freedesktop.org Summary: Failed to initialise context object: 2D_NVC0 (0) (for my GeForce GT 750M) QA Contact: xorg-team at lists.x.org Severity: normal Classification: Unclassified OS: