Displaying 8 results from an estimated 8 matches for "sgpool".
2007 Feb 15
2
Re: [Linux-HA] OCFS2 - Memory hog?
...journal_handle     74    169    20   169   1
journal_head         583   1224    52    72   1
revoke_table           6    254    12   254   1
revoke_record          0      0    16   203   1
qla2xxx_srbs         244    360   128    30   1
scsi_cmd_cache       106    130   384    10   1
sgpool-256            32     32  4096     1   1
sgpool-128            42     42  2048     2   1
sgpool-64             44     44  1024     4   1
sgpool-32             48     48   512     8   1
sgpool-16             75     75   256    15   1
sgpool-8             153    210   128    30   1
scsi_i...
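For reference, the columns in this older slabinfo layout are cache name, active objects, total objects, object size (bytes), objects per slab, and pages per slab, so each cache pins roughly total objects x object size; sgpool-256 above, for example, holds 32 x 4096 bytes = 128 KiB. A minimal sketch of that arithmetic (not from the thread; it assumes these column positions, which vary across kernel versions):

    #!/usr/bin/env python3
    # Sketch: estimate memory held by each sgpool-* cache from /proc/slabinfo,
    # assuming the column order shown above:
    #   name  active_objs  total_objs  objsize  objs_per_slab  pages_per_slab
    def sgpool_usage(path="/proc/slabinfo"):
        usage = {}
        with open(path) as f:
            for line in f:
                if not line.startswith("sgpool-"):
                    continue
                fields = line.split()
                total_objs, objsize = int(fields[2]), int(fields[3])
                usage[fields[0]] = total_objs * objsize  # bytes pinned by this cache
        return usage

    if __name__ == "__main__":
        for name, nbytes in sorted(sgpool_usage().items()):
            print(f"{name:12s} {nbytes / 1024:8.1f} KiB")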
2011 Sep 01
1
No buffer space available - loses network connectivity
...48        100%   0.16K    2   24    8K  sigqueue
   48     8   16%   0.08K    1   48    4K  crq_pool
   45    27   60%   0.25K    3   15   12K  mnt_cache
   45    29   64%   0.25K    3   15   12K  dquot
   45    32   71%   0.25K    3   15   12K  sgpool-8
   40    19   47%   0.19K    2   20    8K  key_jar
   32    32  100%   0.50K    4    8   16K  sgpool-16
   32    32  100%   1.00K    8    4   32K  sgpool-32
   32    32  100%   2.00K   16    2   64K  sgpool-64
   32    32  100%   4.00K   32...
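These are slabtop rows; the columns are OBJS, ACTIVE, USE, OBJ SIZE, SLABS, OBJ/SLAB, CACHE SIZE and NAME, so sgpool-8 above has 45 objects of 0.25K, 32 of them in use (71%), spread over 3 slabs totalling 12K. The doubling of OBJ SIZE with the pool size is what you would expect if each entry is a 32-byte struct scatterlist (typical on x86_64): 16 x 32 B = 0.50K for sgpool-16, 64 x 32 B = 2.00K for sgpool-64. A small sketch, assuming those column meanings, that redoes the arithmetic for the sgpool rows quoted above:

    # Sketch: recompute object memory for the sgpool rows quoted above.
    # Columns assumed: OBJS ACTIVE USE OBJ-SIZE SLABS OBJ/SLAB CACHE-SIZE NAME
    rows = [
        (45, 32, "0.25K",  3, 15, "12K", "sgpool-8"),
        (32, 32, "0.50K",  4,  8, "16K", "sgpool-16"),
        (32, 32, "1.00K",  8,  4, "32K", "sgpool-32"),
        (32, 32, "2.00K", 16,  2, "64K", "sgpool-64"),
    ]

    for objs, active, objsize, slabs, per_slab, cache_size, name in rows:
        object_kib = objs * float(objsize.rstrip("K"))  # memory occupied by the objects
        print(f"{name:10s} {object_kib:6.2f} KiB of objects "
              f"in a {cache_size} cache, {active}/{objs} in use")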
2011 Sep 01
0
No buffer space available - loses network connectivity
...8     48  100%   0.16K    2   24    8K  sigqueue
   48     8   16%   0.08K    1   48    4K  crq_pool
   45    27   60%   0.25K    3   15   12K  mnt_cache
   45    29   64%   0.25K    3   15   12K  dquot
   45    32   71%   0.25K    3   15   12K  sgpool-8
   40    19   47%   0.19K    2   20    8K  key_jar
   32    32  100%   0.50K    4    8   16K  sgpool-16
   32    32  100%   1.00K    8    4   32K  sgpool-32
   32    32  100%   2.00K   16    2   64K  sgpool-64
   32    32  100%   4.00K   32    1...
2006 Apr 09
0
Slab memory usage on dom0 increases by 128MB/day
...9  416   9  1 : tunables  54  27  8 : slabdata   1   1   0
posix_timers_cache     0    0   104  38  1 : tunables 120  60  8 : slabdata   0   0   0
uid_cache              3   61    64  61  1 : tunables 120  60  8 : slabdata   1   1   0
sgpool-128            32   33  2560   3  2 : tunables  24  12  8 : slabdata  11  11   0
sgpool-64             34   36  1280   3  1 : tunables  24  12  8 : slabdata  12  12   0
sgpool-32             32   36   640   6  1 : tunables  54  27  8 : slabdata...
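In this slabinfo 2.x layout the trailing "slabdata <active_slabs> <num_slabs> <sharedavail>" fields are the ones to watch for footprint: a cache pins roughly num_slabs x pages-per-slab x page size, e.g. sgpool-128 above is 11 slabs x 2 pages x 4 KiB, about 88 KiB. A sketch, assuming 4 KiB pages and this column layout, that totals that across all caches, which is the figure one would watch if slab usage were growing by 128MB/day:

    #!/usr/bin/env python3
    # Sketch: total memory pinned by slab caches, from the slabinfo 2.x layout
    # quoted above:
    #   name active total objsize objs_per_slab pages_per_slab
    #     : tunables limit batch shared : slabdata active_slabs num_slabs shared
    PAGE_SIZE = 4096  # assumption: 4 KiB pages

    def slab_total_bytes(path="/proc/slabinfo"):
        total = 0
        with open(path) as f:
            for line in f:
                if line.startswith("#") or "slabdata" not in line:
                    continue  # skip the version and header lines
                fields = line.split()
                pages_per_slab = int(fields[5])
                num_slabs = int(fields[fields.index("slabdata") + 2])
                total += num_slabs * pages_per_slab * PAGE_SIZE
        return total

    if __name__ == "__main__":
        print(f"slab total: {slab_total_bytes() / (1 << 20):.1f} MiB")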
2007 Aug 05
3
OOM killer observed during heavy I/O from VMs (XEN 3.0.4 and XEN 3.1)
...6                 254      12    254
revoke_record          0       0     16    203
dm_tio             11142   11165     16    203
dm_io              11105   11154     20    169
scsi_cmd_cache        10      10    384     10
Cache                Num   Total   Size  Pages
sgpool-128            34      34   3072      2
sgpool-64             35      35   1536      5
sgpool-32             35      35    768      5
sgpool-16             36      40    384     10
sgpool-8              40      40    192     20
scsi_io_context        0       0...
2013 Nov 19
5
xenwatch: page allocation failure: order:4, mode:0x10c0d0 xen_netback:xenvif_alloc: Could not allocate netdev for vif16.0
...438                438     56     73
PING                    0      0   1216     26
UDP                   150    150   1280     25
tw_sock_TCP           399    399    192     21
TCP                   104    104   2368     13
fscache_cookie_jar      0      0    192     21
sgpool-128             54     78   5120      6
sgpool-64              72     72   2560     12
sgpool-32             325    325   1280     25
sgpool-16             277    350    640     25
blkdev_integrity        0      0    112     36
blkdev_queue          165    165...
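One detail worth noting in an "order:4 page allocation failure" report: the larger sgpool caches need multi-page slabs, i.e. the same kind of high-order contiguous allocations that become scarce when memory is fragmented. A rough sketch of that arithmetic, assuming 4 KiB pages and power-of-two slab sizes, using the object size and objects-per-slab columns quoted above:

    # Sketch: the allocation order a slab needs, given object size and
    # objects-per-slab (4 KiB pages assumed).
    import math

    PAGE_SIZE = 4096

    def slab_order(objsize, objs_per_slab):
        pages = math.ceil(objsize * objs_per_slab / PAGE_SIZE)
        return max(0, math.ceil(math.log2(pages)))

    # sgpool-128 above: 5120-byte objects, 6 per slab -> 30720 bytes -> order 3
    print(slab_order(5120, 6))    # 3
    # sgpool-32 above: 1280-byte objects, 25 per slab -> 32000 bytes -> order 3
    print(slab_order(1280, 25))   # 3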
2010 Apr 19
20
Lustre Client - Memory Issue
Hi Guys,
My users are reporting some memory issues on our Lustre 1.8.1 clients.
When they submit a single job at a time, the run time is about 4.5 minutes;
however, when they run multiple jobs (10 or fewer) on a single client node
with 192GB of memory, the run time of each job exceeds 3-4x that of the
single job. They also noticed that
the swap space
2013 Apr 19
14
[GIT PULL] (xen) stable/for-jens-3.10
Hey Jens,
Please in your spare time (if there is such a thing at a conference)
pull this branch:
git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git stable/for-jens-3.10
for your v3.10 branch. Sorry for being so late with this.
<blurb>
It has the ''feature-max-indirect-segments'' implemented in both backend
and frontend. The current problem with the backend and