similar to: Limit i/o capacity?

Displaying 20 results from an estimated 900 matches similar to: "Limit i/o capacity?"

2008 Apr 26
1
Xen and Torque
Dear Xen users, has anyone tried to integrate Xen with the Torque resource management system? Could you please advise me on a system I'm developing that relies on Torque? Let me describe the system first. The part of the system that talks to Torque should request a certain number of nodes on a cluster and launch a virtual machine instance there (one VM instance per host).
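One way to wire this together is to let the Torque job script itself boot a guest on every node the scheduler hands out. A minimal sketch, assuming Xen 3.x's xm toolstack, passwordless ssh to the nodes, and a hypothetical guest config at /etc/xen/worker-vm.cfg (all names illustrative, not from the original post):

    #!/bin/sh
    #PBS -N xen-vm-job
    #PBS -l nodes=4        # ask Torque for four cluster nodes
    #PBS -j oe

    # $PBS_NODEFILE lists the hosts Torque allocated, one per line.
    # Boot exactly one VM instance per allocated host.
    for node in $(sort -u "$PBS_NODEFILE"); do
        ssh "$node" "xm create /etc/xen/worker-vm.cfg" &
    done
    wait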
2006 May 10
6
how many mongrels to start
Is there a way to determine the best number of mongrel processes to start? Right now I am running 2 in production, but I see some people run about 8 or so. What is the cutoff and determining factor for this? Thanks, Adam
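A common approach is to step the cluster size up and measure each configuration until throughput stops improving. A rough sketch, assuming mongrel_cluster's cluster::configure options, httperf, and a balancer already listening on port 80 (paths and numbers invented):

    # Try clusters of 2, 4, 6 and 8 mongrels and record the reply rate.
    for n in 2 4 6 8; do
        mongrel_rails cluster::configure -e production -p 8000 -N $n \
            -c /var/www/myapp -a 127.0.0.1
        mongrel_rails cluster::start
        # Hit the load balancer, not an individual mongrel port.
        httperf --server 127.0.0.1 --port 80 --uri / \
            --num-conns 500 --rate 50 | grep 'Reply rate'
        mongrel_rails cluster::stop
    done

The sweet spot is usually where adding processes no longer raises the reply rate; past that, extra mongrels just burn memory.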
2006 Sep 07
1
httperf
Hi, has anyone run httperf in Xen? If so, can you share the performance results? Also, can someone please send me the source for it? The HP site that hosts it does not respond, and I don't see any other sites that have the source for httperf. Thanks, PKrishna
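For reference once you do track down the source, a typical run looks like this (server name and load figures are arbitrary, not from the original post):

    # 1000 connections total, opened at 50 connections/second,
    # one GET of / per connection.
    httperf --server guest-vm.example.com --port 80 --uri / \
        --num-conns 1000 --rate 50 --timeout 5

The 'Reply rate' and 'Reply time' lines in the output are the figures most people compare.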
2007 Feb 27
11
Mongrel performing only half as fast as Apache?
I'm trying to do some initial benchmarking of our setup, mainly just to establish baselines. I'm essentially using the process Zed outlines in a previous message: http://rubyforge.org/pipermail/mongrel-users/2006-May/000200.html What I'm running into is that Mongrel appears only half as fast as Apache when serving a small static HTML file. If I then add in Apache with
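A quick sanity check is to request the exact same file from both servers under identical load; a sketch with ApacheBench (ports assumed: 80 for Apache, 3000 for Mongrel; file name invented):

    # Same static file, same concurrency, against each server in turn.
    ab -n 1000 -c 10 http://localhost:80/test.html
    ab -n 1000 -c 10 http://localhost:3000/test.html
    # Repeat with -k: keep-alive support often explains large gaps.

Note that Mongrel was never meant to compete with Apache on static files, so a 2x gap there is not by itself alarming.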
2007 Jun 29
3
mongrel tuning with httperf - suspicious results
Hello all, I'm attempting to test/tune a mongrel cluster according to the tuning instructions on the mongrel site (using httperf). Anecdotally, the site itself 'feels' snappy, but testing it with httperf reveals what appears to be terrible throughput. I'm kind of at a loss to describe the results, and was hoping someone could verify that I'm testing
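One way to tell whether numbers like that are real or a test artifact is to sweep the offered rate and watch where the reply rate stops tracking it; a sketch (host, port, and rates invented):

    # When 'Reply rate' falls below the offered --rate, something is
    # saturated: the app, the proxy, or the benchmarking box itself.
    for rate in 10 25 50 100 200; do
        echo "== offered rate: $rate =="
        httperf --server myapp.example.com --port 8000 --uri / \
            --rate $rate --num-conns $((rate * 30)) --timeout 5 \
            | egrep 'Reply rate|Errors: total'
    done

Also worth ruling out: running httperf on the same machine as the cluster, which makes the two fight for CPU.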
2006 Jun 29
8
Is This a Performance Concern?
I'm running on a brand new MacBook Pro with a relatively clean working set, using Mongrel in production mode on port 3000. The home page does not hit the database, and I'm getting:

Processing HomeController#index (for 127.0.0.1 at 2006-06-29 14:59:02) [GET]
  Session ID: e11f7df52bffff304ca7c88e672ef71a
  Parameters: {"action"=>"index",
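To get an aggregate rather than a single request, the completion lines in the log can be averaged; a sketch that assumes the Rails 1.x 'Completed in <seconds> ...' log format:

    # Average wall-clock time across all completed requests.
    grep 'Completed in' log/production.log | \
        awk '{ sum += $3; n++ }
             END { if (n) printf "%.4f s avg over %d requests\n", sum/n, n }'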
2018 Apr 04
13
[Bug 105884] New: Firefox causes a crash in the nouveau driver on GTX 1060
https://bugs.freedesktop.org/show_bug.cgi?id=105884

Bug ID: 105884
Summary: Firefox causes a crash in the nouveau driver on GTX 1060
Product: xorg
Version: git
Hardware: x86-64 (AMD64)
OS: Linux (All)
Status: NEW
Severity: major
Priority: medium
Component:
2006 Mar 15
6
Mongrel Web Server 0.3.11 -- Edge Rails and Win32 Compliant
Hello Folks, This is the big release of Mongrel that's been in the works for a while now (well, like a week). It is chock full of changes and features, but mostly it syncs up the Win32 side of things and validates that Edge Rails works without problems. It also features a more extensive and useful example of the GemPlugins called mongrel_config. First the usual stuff for people
2007 Mar 06
4
[PIMP] Topfunky's httperf PeepCode screencast (Zed A. Shaw)
Hi, thanks Zed - this is very interesting. One item in particular caught my eye: does anyone on this list have any comments or validation that Rails 1.2.1 is 2-4 times as slow as Rails 1.1.6? Topfunky provided a link that purports to show really horrible performance and memory characteristics for Rails 1.2.1, even vs. 1.1.6:
2017 Nov 15
11
[Bug 103753] New: Visual glitches on GTX 1060 6GB/4.13.x
https://bugs.freedesktop.org/show_bug.cgi?id=103753

Bug ID: 103753
Summary: Visual glitches on GTX 1060 6GB/4.13.x
Product: xorg
Version: git
Hardware: x86-64 (AMD64)
OS: Linux (All)
Status: NEW
Severity: major
Priority: medium
Component: Driver/nouveau
Assignee: nouveau at
2018 Apr 17
5
Getting glusterfs to expand volume size to brick size
pylon:/var/lib/glusterd/vols/dev_apkmirror_data # ack shared-brick-count

dev_apkmirror_data.pylon.mnt-pylon_block3-dev_apkmirror_data.vol
3:    option shared-brick-count 3

dev_apkmirror_data.pylon.mnt-pylon_block2-dev_apkmirror_data.vol
3:    option shared-brick-count 3

dev_apkmirror_data.pylon.mnt-pylon_block1-dev_apkmirror_data.vol
3:    option shared-brick-count 3

Sincerely, Artem
2006 Aug 28
6
Why the render speed is still so slow under apache?
<% for demand in @demands %>
  <% cache(:action => 'list', :part => demand.id) do -%>
    <%= render :partial => 'demand' %>
  <% end %>
<% end %>

Under WEBrick, the list renders very quickly, but under apache2.2 + mongrel_cluster the rendering still takes a long time, occupying about 95% of the
2018 Jan 16
10
[Bug 104652] New: None of the video outputs are usable for GTX 1060 - jerky video every few seconds
https://bugs.freedesktop.org/show_bug.cgi?id=104652

Bug ID: 104652
Summary: None of the video outputs are usable for GTX 1060 - jerky video every few seconds
Product: xorg
Version: git
Hardware: x86-64 (AMD64)
OS: Linux (All)
Status: NEW
Severity: major
Priority: medium
2007 Apr 03
8
FastCGI performing better than Mongrel - what am I doing wrong?
I tried benchmarking the same site behind an NGINX proxy with both FastCGI and Mongrel, and for some reason Mongrel is performing pretty poorly in comparison. Any idea what I might be doing wrong? Here are my benchmarks for 1 fcgi:

Server Software: nginx/0.4.0
Server Hostname: eship.com.br
Server Port: 80
Document Path: /
Document Length: 95
2016 Apr 09
2
[GPUCC] how to remove _ZL21__nvvm_reflect_anchorv() automatically?
David's change makes nvvm_reflect_anchor unnecessary. The issue with dots in names generated by LLVM still needs to be fixed. On Apr 9, 2016 8:32 AM, "Jingyue Wu" <jingyue at google.com> wrote:

> Artem,
>
> With David's http://reviews.llvm.org/rL265060, do you think
> __nvvm_reflect_anchor is still necessary?
>
> On Fri, Apr 8, 2016 at 9:37 AM, Yuanfeng
2014 Oct 24
4
[LLVMdev] Cross-Block Dead Store Elimination
Hi, it looks like the DeadStoreElimination optimization doesn't work across BasicBlock boundaries. The project I'm working on (https://github.com/trailofbits/mcsema) would tremendously benefit from even simple cross-block DSE. There was a patch for non-local DSE a few years ago (http://lists.cs.uiuc.edu/pipermail/llvmdev/2010-January/028751.html), but it seems the patch was never
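For anyone who wants to check the current behavior quickly, the pass can be run in isolation over a hand-written test case (file names invented):

    # Run only dead-store elimination and see which stores survive.
    opt -dse -S cross_block_store.ll -o after_dse.ll
    diff cross_block_store.ll after_dse.ll

A store in one block that is unconditionally overwritten in a successor block is the simplest case that, per the above, currently survives.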
2018 Apr 17
0
Getting glusterfs to expand volume size to brick size
Ok, it looks like the same problem. @Amar, this fix is supposed to be in 4.0.1. Is it possible to regenerate the volfiles to fix this? Regards, Nithya

On 17 April 2018 at 09:57, Artem Russakovskii <archon810 at gmail.com> wrote:

> pylon:/var/lib/glusterd/vols/dev_apkmirror_data # ack shared-brick-count
> dev_apkmirror_data.pylon.mnt-pylon_block3-dev_apkmirror_data.vol
> 3:
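If it helps while waiting for confirmation: glusterd rewrites the volfiles whenever any volume option is set, so re-setting a harmless option is one way to trigger regeneration. This is an assumption to verify before touching production, not an official procedure:

    # Any 'volume set' makes glusterd regenerate the volfiles
    # under /var/lib/glusterd/vols/<volume>/.
    gluster volume set dev_apkmirror_data cluster.min-free-disk 10%

    # Then confirm the regenerated files carry the fixed value.
    ack shared-brick-count /var/lib/glusterd/vols/dev_apkmirror_data/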
2018 Apr 18
2
performance.cache-size for high-RAM clients/servers, other tweaks for performance, and improvements to Gluster docs
Thanks for the link. Looking at the status of that doc, it isn't quite ready yet, and there's no mention of the option. Does it mean that whatever is ready now in 4.0.1 is incomplete but can be enabled via granular-entry-heal=on, and when it is complete, it'll become the default and the flag will simply go away? Is there any risk enabling the option now in 4.0.1? Sincerely, Artem
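For reference, the toggle itself looks like this (volume name reused from the related thread; whether it is safe on 4.0.1 is exactly the open question above):

    # Enable granular entry self-heal on the volume...
    gluster volume heal dev_apkmirror_data granular-entry-heal enable

    # ...and confirm the option took effect.
    gluster volume get dev_apkmirror_data cluster.granular-entry-heal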
2006 Jun 20
1
Performance tweak when local files are not served by mongrel
Hello Zed! I've experimented with a simple but limited performance tweak in the mongrel rails loader. With the following Apache 2.2 mod_proxy load balancer setup:

# Redirect all non-static requests to cluster
RewriteCond %{DOCUMENT_ROOT}/%{REQUEST_FILENAME} !-f
RewriteRule ^/(.*)$ balancer://mongrel_cluster%{REQUEST_URI} [P,QSA,L]

We can assume that mongrel is called only when the
2018 Apr 18
3
performance.cache-size for high-RAM clients/servers, other tweaks for performance, and improvements to Gluster docs
Following up here on a related and, for us, very serious issue. I took down one of the 4 replicated gluster servers for maintenance today. There are 2 gluster volumes totaling about 600GB. Not that much data. After the server comes back online, it starts auto-healing, and pretty much all operations on gluster freeze for many minutes. For example, I was trying to run an ls -alrt in a folder with 7300