2007 Sep 12
Mongrel instance dies unexpectedly, but cleanly...
...you startup the second
> mongrel more than likely the VPS system is sending the second Mongrel
> a kill signal because you've gone over your resource allocation.
>
> ~Wayne
Thanks - are there resources besides memory that could be contended? I
believe I had about 60 MB still available (even w/ 2 mongrels already
running), and each mongrel only takes about 30 MB w/ my app.
I ended up reducing the number of Apache MPM processes, and then on a lark
started w/ 3 mongrel servers to see if 1 or 2 would die. They're all still up, but
after any one dies, I ne...
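A quick back-of-the-envelope check of the figures quoted above (both are the poster's estimates, not measured values) shows why a third Mongrel sits right at the edge of the allocation:

```python
# Memory budget sketch using the numbers from the post above:
# ~60 MB reported free, ~30 MB resident per Mongrel instance.
free_mb = 60          # memory reported as still available
per_mongrel_mb = 30   # approximate footprint of one Mongrel w/ the app

# How many additional Mongrels fit in the remaining headroom?
extra_instances = free_mb // per_mongrel_mb
print(extra_instances)  # 2 -- no headroom left for growth or spikes
```

With exactly 2 instances' worth of free memory, any per-process growth (or other allocations on the VPS) would push past the limit, which is consistent with the VPS host killing a Mongrel.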
2006 Nov 03
question about exact behaviour with data=ordered.
...memory, and I write a 2
gig file. This is below 10%, so background_dirty_threshold won't
cause any writeout.
Suppose that a regular journal commit is then triggered.
Am I correct in thinking this will flush out the full 2Gig, causing
the commit to take about 30 seconds if the drive sustains
60Meg/second?
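The throughput arithmetic in the question can be checked quickly (figures are the ones given above, 2 GiB of dirty data and a 60 MB/s drive, not measurements):

```python
# Rough estimate of commit duration if a data=ordered journal commit
# must flush the entire dirty file before committing metadata.
dirty_mib = 2 * 1024      # 2 GiB dirty file data, in MiB
throughput_mib_s = 60     # sustained sequential write speed, MiB/s

commit_seconds = dirty_mib / throughput_mib_s
print(round(commit_seconds))  # 34 -- i.e. "about 30 seconds"
```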
If so, what other operations will be blocked while the commit happens?
I assume sync updates (rm, chmod, mkdir etc) will block?
Is it safe to assume that normal async writes won't block?
What about if they extend the file and so change the file size?
What about atime updates? Could th...