Displaying 6 results from an estimated 6 matches for "392m".

2013 May 24
0
Problem After adding Bricks
...712k total, 16310088k used, 95624k free, 12540824k buffers
Swap: 1999868k total, 9928k used, 1989940k free, 656604k cached

  PID USER  PR NI VIRT RES  SHR S %CPU %MEM   TIME+  COMMAND
 2460 root  20  0 391m 38m 1616 S  250  0.2 4160:51 glusterfsd
 2436 root  20  0 392m 40m 1624 S  243  0.3 4280:26 glusterfsd
 2442 root  20  0 391m 39m 1620 S  187  0.2 3933:46 glusterfsd
 2454 root  20  0 391m 36m 1620 S  118  0.2 3870:23 glusterfsd
 2448 root  20  0 391m 38m 1624 S  110  0.2 3720:50 glusterfsd
 2472 root  20  0 393m 42m 1624 S...
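For scale: the brick daemons in that snippet each sit well above 100% CPU because top reports per-process usage summed across threads and cores. The small arithmetic sketch below is purely illustrative (the values are copied from the excerpt; the list and print statement are mine) and shows the visible glusterfsd processes alone account for roughly nine busy cores:

# Illustrative only: sum the %CPU column for the glusterfsd rows visible in
# the excerpt above. top reports per-process CPU summed across threads, so
# values above 100% simply mean more than one busy core.
cpu_percent = [250, 243, 187, 118, 110]   # visible glusterfsd rows (one row is cut off)
total = sum(cpu_percent)
print("visible glusterfsd CPU: %d%% (~%.1f cores busy)" % (total, total / 100.0))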
2013 Mar 13
1
glusterfs cpu parallelism option?
...'re setting up a glusterfs server pair with replication and am testing performance. While doing that we stumbled over this in top:

  PID USER  PR NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 5151 root  20  0 6184m 5.9g 2540 R  200  9.3 28:27.99 glusterfs
 5618 root  20  0  392m  34m 2116 S   38  0.1 24:53.10 glusterfsd

Under heavy load, CPU usage will not exceed 200%, hence I'm assuming it only spawns two threads. Can that be tweaked? My hope is that if that is increased and glusterfs can use more CPUs, the overall performance will increase drastically. Th...
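The 200% ceiling suggests two busy threads, but %CPU in top is an indirect measure. A more direct check is to count the entries under /proc/<pid>/task for the gluster processes; the sketch below is a minimal illustration of that idea (the thread_counts function and the "gluster" match string are my own, not from the thread):

#!/usr/bin/env python
# Minimal sketch: count kernel threads per gluster process by reading
# /proc/<pid>/task, to check whether glusterfs really runs only two threads.
import os

def thread_counts(name_substring="gluster"):
    counts = {}
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            with open("/proc/%s/comm" % pid) as f:
                comm = f.read().strip()
            if name_substring in comm:
                # one entry per thread under /proc/<pid>/task
                counts[(pid, comm)] = len(os.listdir("/proc/%s/task" % pid))
        except (IOError, OSError):
            pass  # process exited while we were scanning
    return counts

if __name__ == "__main__":
    for (pid, comm), n in sorted(thread_counts().items()):
        print("%s %-12s %d threads" % (pid, comm, n))

If the daemon does turn out to be thread-starved, the knob the poster is asking about is most likely GlusterFS's performance.io-thread-count volume option (set via gluster volume set), though whether raising it helps depends on where the real bottleneck sits.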
1998 Aug 08
0
Apache bug, eats memory...
...0.0% nice, 6.1% system, 0.4% interrupt, 0.0% idle
| Mem: 82M Active, 5692K Inact, 31M Wired, 4572K Cache, 8349K Buf, 616K Free
| Swap: 512M Total, 402M Used, 110M Free, 79% Inuse, 5412K In, 748K Out
| PID   USERNAME PRI NICE  SIZE    RES STATE   TIME   WCPU   CPU  COMMAND
| 29176 www      -18    0  392M 85612K swread  0:57  6.83%  6.83% httpd
|---cut---
Ben Laurie (team Apache) <ben@ALGROUP.CO.UK> responded swiftly:
| And here's a band-aid for 1.3.1 - I'm sure we'll come up with something
| better soon. This (untested) patch should prevent the worst effects. A
| si...
2011 May 13
27
Extremely slow zpool scrub performance
Running a zpool scrub on our production pool is showing a scrub rate of about 400K/s. (When this pool was first set up we saw rates in the MB/s range during a scrub.) Both zpool iostat and iostat -Xn show lots of idle disk time, no above-average service times, and no abnormally high busy percentages. Load on the box is 0.59. 8 x 3GHz CPUs, 32GB RAM, 96 spindles arranged into raidz vdevs on OI 147.
2007 Feb 27
16
understanding zfs/thumper "bottlenecks"?
Currently I'm trying to figure out the best zfs layout for a thumper wrt. read AND write performance. I did some simple mkfile 512G tests and found that on average ~500 MB/s seems to be the maximum one can reach (tried the initial default setup, all 46 HDDs as R0, etc.). According to http://www.amd.com/us-en/assets/content_type/DownloadableAssets/ArchitectureWP_062806.pdf I would
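A ~500 MB/s ceiling looks low for a fully populated thumper. The back-of-the-envelope sketch below shows why the poster treats it as a bottleneck; the ~60 MB/s per-drive sequential figure is my own assumption for 2007-era SATA disks, not a number from the thread:

# Rough arithmetic sketch; the per-disk rate is an assumption for illustration.
disks = 46                 # drives used in the mkfile test
per_disk_mb_s = 60.0       # assumed sequential write rate per drive
raw_aggregate = disks * per_disk_mb_s
observed = 500.0           # MB/s reported in the post
print("raw aggregate: %.0f MB/s, observed: %.0f MB/s (%.0f%% of raw)"
      % (raw_aggregate, observed, 100.0 * observed / raw_aggregate))
# 46 * 60 = 2760 MB/s raw, so ~500 MB/s is under 20% of the drives' nominal bandwidth.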
1998 Aug 02
0
ipportfw - security
...0.0% nice, 6.1% system, 0.4% interrupt, 0.0% idle
| Mem: 82M Active, 5692K Inact, 31M Wired, 4572K Cache, 8349K Buf, 616K Free
| Swap: 512M Total, 402M Used, 110M Free, 79% Inuse, 5412K In, 748K Out
| PID   USERNAME PRI NICE  SIZE    RES STATE   TIME   WCPU   CPU  COMMAND
| 29176 www      -18    0  392M 85612K swread  0:57  6.83%  6.83% httpd
|---cut---
Ben Laurie (team Apache) <ben@ALGROUP.CO.UK> responded swiftly:
| And here's a band-aid for 1.3.1 - I'm sure we'll come up with something
| better soon. This (untested) patch should prevent the worst effects. A
| similar patch sho...