search for: 230mb

Displaying 13 results from an estimated 13 matches for "230mb".

2016 Feb 29
1
Network speed between two guests on same host.
...1) From a physical PC to a guest (it doesn't matter on which host), I get almost 1Gb/s. They are connected through a 1Gb/s switch => very good! 2) From a guest on one host to a guest on the other host => roughly 1Gb/s => Okay! 3) Between two guests on the same host => roughly 230Mb/s ??? The guests have network over a bridged interface, so I tried the same test over a NAT interface => the same 230Mb/s... Is there a way to tweak connection speeds between two guests running on the same host? Thanks in advance... A piece of the XML dump of one of the guests: <domain ty...
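A rough way to reproduce figures like the ones above, independent of any particular tool, is a single-connection TCP probe. The sketch below is hypothetical (not from this thread) and assumes plain POSIX/Linux guests with gcc: run it with -r <port> in one guest and -s <ip> <port> in the other; the receiver prints the achieved rate in Mbit/s once the sender disconnects.

    /* Minimal single-connection TCP throughput probe (hypothetical sketch,
     * not taken from the thread above).
     *   receiver (in guest A):  ./tput -r 5001
     *   sender   (in guest B):  ./tput -s <guest-A-ip> 5001
     * The sender pushes zero-filled buffers for ~10 s; the receiver prints
     * the achieved rate in Mbit/s when the sender disconnects. */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <time.h>
    #include <unistd.h>

    static double now(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec + ts.tv_nsec / 1e9;
    }

    int main(int argc, char **argv)
    {
        static char buf[1 << 16];                              /* zero-filled payload */

        if (argc >= 3 && strcmp(argv[1], "-r") == 0) {         /* receiver side */
            struct sockaddr_in a = { 0 };
            int one = 1, srv = socket(AF_INET, SOCK_STREAM, 0);
            a.sin_family = AF_INET;
            a.sin_addr.s_addr = INADDR_ANY;
            a.sin_port = htons(atoi(argv[2]));
            setsockopt(srv, SOL_SOCKET, SO_REUSEADDR, &one, sizeof one);
            if (bind(srv, (struct sockaddr *)&a, sizeof a) || listen(srv, 1)) {
                perror("bind/listen");
                return 1;
            }
            int c = accept(srv, NULL, NULL);
            double t0 = now(), total = 0;
            ssize_t n;
            while ((n = read(c, buf, sizeof buf)) > 0)         /* count bytes until EOF */
                total += n;
            printf("%.0f Mbit/s\n", total * 8 / (now() - t0) / 1e6);
        } else if (argc >= 4 && strcmp(argv[1], "-s") == 0) {  /* sender side */
            struct sockaddr_in a = { 0 };
            int s = socket(AF_INET, SOCK_STREAM, 0);
            a.sin_family = AF_INET;
            a.sin_port = htons(atoi(argv[3]));
            inet_pton(AF_INET, argv[2], &a.sin_addr);
            if (connect(s, (struct sockaddr *)&a, sizeof a)) {
                perror("connect");
                return 1;
            }
            for (double t0 = now(); now() - t0 < 10.0; )       /* push data for ~10 s */
                if (write(s, buf, sizeof buf) < 0)
                    break;
            close(s);
        } else {
            fprintf(stderr, "usage: %s -r <port> | -s <host> <port>\n", argv[0]);
            return 1;
        }
        return 0;
    }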
2007 Sep 03
2
Mongrel and Memory usage.
Hi, I have been using mongrel for a long time... but just now I see a problem. I am running 3 instances using mongrel_cluster and nginx as a reverse proxy. When I start the cluster I have a memory usage of 20MB per instance... By the end of the day this usage is about 230MB for each instance. Could it be a memory leak? Thanks for the help. -- Fernando Lujan
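One way to confirm that kind of growth is to sample each mongrel process's resident set size over time. The helper below is hypothetical (not part of mongrel or mongrel_cluster) and assumes Linux, reading VmRSS from /proc/<pid>/status once a minute; a value that climbs steadily from ~20MB toward ~230MB without ever plateauing is what a leak (or unbounded caching) looks like.

    /* Hypothetical helper (not part of mongrel or mongrel_cluster): log a
     * process's resident set size once a minute.
     * Usage: ./rsslog <pid>   (Linux only; reads /proc/<pid>/status) */
    #include <stdio.h>
    #include <string.h>
    #include <time.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        if (argc != 2) {
            fprintf(stderr, "usage: %s <pid>\n", argv[0]);
            return 1;
        }
        char path[64];
        snprintf(path, sizeof path, "/proc/%s/status", argv[1]);
        for (;;) {
            FILE *f = fopen(path, "r");
            if (!f) {
                perror(path);
                return 1;
            }
            char line[256];
            while (fgets(line, sizeof line, f))
                if (strncmp(line, "VmRSS:", 6) == 0)
                    printf("%ld%s", (long)time(NULL), line + 6);  /* "<epoch>  <n> kB" */
            fclose(f);
            fflush(stdout);
            sleep(60);
        }
    }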
2007 Feb 25
3
R/C++/memory leaks
...m concerned about the following. In square brackets you see R's total virtual memory use (VIRT in `top`): 1) Load library and data [178MB] (if I run gc(), then [122MB]) 2) Just before .C [223MB] 3) Just before freeing memory [325MB] 4) Just after freeing memory [288MB] 5) After running gc() [230MB] So although the freeMemory function works (frees 37MB), R ends up using 100MB more after the function call than before it. ls() only returns the data object so no new objects have been added to the workspace. Do any of you have any idea what could be eating this memory? Many thanks, Erne...
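For readers unfamiliar with the .C interface mentioned above: every argument arrives in C as a pointer, and memory obtained with malloc inside compiled code is invisible to R's gc(). The sketch below uses hypothetical names (the thread's own freeMemory() code is not shown in the excerpt); note that free() returns memory to the C allocator, which does not necessarily hand it back to the operating system, so VIRT can stay above its pre-call level even after a successful free.

    /* Sketch of a .C-style allocate/free pair (hypothetical names; the
     * thread's own freeMemory() is not shown in the excerpt).
     * Build with `R CMD SHLIB leak.c`, load with dyn.load("leak.so"),
     * then call .C("alloc_buf", as.integer(n)) and .C("free_buf"). */
    #include <stdlib.h>

    static double *buf = NULL;

    void alloc_buf(int *n)      /* .C passes every argument as a pointer */
    {
        free(buf);              /* drop any previous buffer first */
        buf = malloc((size_t)*n * sizeof(double));  /* not tracked by R's gc() */
    }

    void free_buf(void)
    {
        free(buf);              /* returned to the C allocator, not necessarily the OS */
        buf = NULL;
    }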
2015 Dec 01
1
[RFC PATCH 0/9] vhost-nvme: new qemu nvme backend using nvme target
On 01/12/2015 00:20, Ming Lin wrote: > qemu-nvme: 148MB/s > vhost-nvme + google-ext: 230MB/s > qemu-nvme + google-ext + eventfd: 294MB/s > virtio-scsi: 296MB/s > virtio-blk: 344MB/s > > "vhost-nvme + google-ext" didn't get good enough performance. I'd expect it to be on par with qemu-nvme with ioeventfd, but the question is: why should it be better? For v...
2015 Dec 01
0
[RFC PATCH 0/9] vhost-nvme: new qemu nvme backend using nvme target
On Tue, 2015-12-01 at 17:02 +0100, Paolo Bonzini wrote: > > On 01/12/2015 00:20, Ming Lin wrote: > > qemu-nvme: 148MB/s > > vhost-nvme + google-ext: 230MB/s > > qemu-nvme + google-ext + eventfd: 294MB/s > > virtio-scsi: 296MB/s > > virtio-blk: 344MB/s > > > > "vhost-nvme + google-ext" didn't get good enough performance. > > I'd expect it to be on par with qemu-nvme with ioeventfd but the question...
2007 Jan 19
18
Cheap ZFS homeserver.
So after toying around with some stuff a few months back I got bogged down and set this project aside for a while. Time to revisit. Looking around there still is not a good "these cards/motherboards work" list. The HCL is hardly ever updated, and it's far more geared towards business use than hobbyist/home use. So bearing all of that in mind I will need the
2009 Nov 10
3
Error: cannot allocate vector of size...
I'm trying to import a table into R; the file is about 700MB. Here's my first try: > DD<-read.table("01uklicsam-20070301.dat",header=TRUE) Error: cannot allocate vector of size 15.6 Mb In addition: Warning messages: 1: In scan(file, what, nmax, sep, dec, quote, skip, nlines, na.strings, : Reached total allocation of 1535Mb: see help(memory.size) 2: In scan(file, what,
2015 Nov 20
15
[RFC PATCH 0/9] vhost-nvme: new qemu nvme backend using nvme target
Hi, This is the first attempt to add a new qemu nvme backend using the in-kernel nvme target. Most of the code is ported from qemu-nvme, and some is borrowed from Hannes Reinecke's rts-megasas. It's similar to vhost-scsi, but doesn't use virtio. The advantage is that the guest can run an unmodified NVMe driver, so the guest can be any OS that has an NVMe driver. The goal is to get as good performance as
2008 Jan 03
23
deployment survey
Hello Mongrels, Building on the last messages about Fastthread, can we get a detailed survey of the different ways people are deploying their applications? It will help with near-future Mongrel development. Please include the following things: * Framework, if any (Camping, Merb, Rails, Nitro, Ramaze, IOWA, Rack...) * Mongrel version * Mongrel handlers used (rails, dirhandler, camping,
2010 Jun 11
24
[Xen-API] [XCP]: RC1 of XCP 0.5 available for testing
Hi everyone, The first release candidate of the Xen Cloud Platform (XCP) version 0.5 is now available for testing from: http://www.xen.org/products/cloud_source_0.5.html XCP-0.5 is intended to be a *stable* release, suitable for long-term production use. Please download this release candidate and give it a thorough workout! Cheers, Dave