Hello,

last night I had a dom0 that was reported down by our monitoring; it became available again after 50 minutes. I also had messages that some domUs on that machine were not online.

Can it be that one domU has so much load that it can take down all the others, incl. dom0? Right after the dom0 became available again I could see that it had a high load: "CRITICAL - load average: 0.18, 5.97, 81.42". I remember watching something similar on another dom0; I could see there that some python processes were going wild. In the logfile of the problem machine I find the following from today. Is this Xen related? There is nothing else running on the machine. My machines are CentOS 5.2, only with packages from the official repositories (xen 3.1.0).

Jan 28 06:07:32 x1blade1 kernel: python invoked oom-killer: gfp_mask=0x201d2, order=0, oomkilladj=0
Jan 28 06:07:32 x1blade1 kernel:
Jan 28 06:07:32 x1blade1 kernel: Call Trace:
Jan 28 06:07:32 x1blade1 kernel: [<ffffffff802b4896>] out_of_memory+0x8b/0x203
Jan 28 06:07:32 x1blade1 kernel: [<ffffffff8020f05e>] __alloc_pages+0x22b/0x2b4
Jan 28 06:07:32 x1blade1 kernel: [<ffffffff802129fb>] __do_page_cache_readahead+0xd0/0x21c
Jan 28 06:07:32 x1blade1 kernel: [<ffffffff802606a8>] __wait_on_bit_lock+0x5b/0x66
Jan 28 06:07:33 x1blade1 nrpe[345]: Error: Could not complete SSL handshake. 5
Jan 28 06:07:33 x1blade1 nrpe[342]: Error: Could not complete SSL handshake. 5
Jan 28 06:07:33 x1blade1 kernel: [<ffffffff8023fd18>] __lock_page+0x5e/0x64
Jan 28 06:07:36 x1blade1 snmpd[4492]: Connection from UDP: [172.17.4.161]:53536
Jan 28 06:08:35 x1blade1 kernel: [<ffffffff802132c0>] filemap_nopage+0x148/0x322
Jan 28 06:09:10 x1blade1 kernel: [<ffffffff80208ba1>] __handle_mm_fault+0x3d9/0xf4d
Jan 28 06:09:25 x1blade1 kernel: [<ffffffff80261869>] _spin_lock_irqsave+0x9/0x14
Jan 28 06:10:28 x1blade1 kernel: [<ffffffff802641bf>] do_page_fault+0xe4c/0x11e0
Jan 28 06:13:01 x1blade1 kernel: [<ffffffff8025d823>] error_exit+0x0/0x6e
Jan 28 06:13:54 x1blade1 snmpd[4492]: Connection from UDP: [172.17.4.161]:53536
Jan 28 06:15:16 x1blade1 kernel:
Jan 28 06:16:23 x1blade1 snmpd[4492]: Connection from UDP: [172.17.4.161]:53536
Jan 28 06:18:38 x1blade1 kernel: Mem-info:
Jan 28 06:18:50 x1blade1 snmpd[4492]: Connection from UDP: [172.17.3.161]:50744
Jan 28 06:18:50 x1blade1 kernel: DMA per-cpu:
Jan 28 06:18:50 x1blade1 snmpd[4492]: Received SNMP packet(s) from UDP: [172.17.3.161]:50744
Jan 28 06:18:50 x1blade1 kernel: cpu 0 hot: high 186, batch 31 used:73
...repeats a lot
Jan 28 06:18:54 x1blade1 snmpd[4492]: Connection from UDP: [172.17.4.161]:53542
Jan 28 06:18:54 x1blade1 kernel: HighMem per-cpu: empty
Jan 28 06:18:54 x1blade1 snmpd[4492]: Connection from UDP: [172.17.4.161]:53542
Jan 28 06:18:54 x1blade1 kernel: Free pages: 16204kB (0kB HighMem)
Jan 28 06:18:54 x1blade1 snmpd[4492]: Connection from UDP: [172.17.4.161]:53542
Jan 28 06:18:54 x1blade1 kernel: Active:970160 inactive:1278826 dirty:2 writeback:0 unstable:0 free:4051 slab:39078 mapped-file:1131 mapped-anon:2248568 pagetables:167956
Jan 28 06:18:55 x1blade1 snmpd[4492]: Connection from UDP: [172.17.4.161]:53542
Jan 28 06:18:55 x1blade1 kernel: DMA free:16204kB min:16204kB low:20252kB high:24304kB active:3883200kB inactive:5112872kB present:16411728kB pages_scanned:22590050 all_unreclaimable? yes
Jan 28 06:18:55 x1blade1 snmpd[4492]: Connection from UDP: [172.17.4.161]:53542
Jan 28 06:18:55 x1blade1 kernel: lowmem_reserve[]: 0 0 0 0
Jan 28 06:18:55 x1blade1 snmpd[4492]: Connection from UDP: [172.17.4.161]:53542
Jan 28 06:18:55 x1blade1 kernel: DMA32 free:0kB min:0kB low:0kB high:0kB active:0kB inactive:0kB present:0kB pages_scanned:0 all_unreclaimable? no
Jan 28 06:18:55 x1blade1 snmpd[4492]: Connection from UDP: [172.17.4.161]:53542
Jan 28 06:18:55 x1blade1 kernel: lowmem_reserve[]: 0 0 0 0
Jan 28 06:18:55 x1blade1 snmpd[4492]: Connection from UDP: [172.17.4.161]:53542
Jan 28 06:18:55 x1blade1 kernel: Normal free:0kB min:0kB low:0kB high:0kB active:0kB inactive:0kB present:0kB pages_scanned:0 all_unreclaimable? no
Jan 28 06:18:56 x1blade1 snmpd[4492]: Connection from UDP: [172.17.4.161]:53542
Jan 28 06:18:56 x1blade1 kernel: lowmem_reserve[]: 0 0 0 0
Jan 28 06:18:56 x1blade1 snmpd[4492]: Connection from UDP: [172.17.4.161]:53542
Jan 28 06:18:56 x1blade1 kernel: HighMem free:0kB min:128kB low:128kB high:128kB active:0kB inactive:0kB present:0kB pages_scanned:0 all_unreclaimable? no
Jan 28 06:18:56 x1blade1 kernel: lowmem_reserve[]: 0 0 0 0
Jan 28 06:18:56 x1blade1 kernel: DMA: 25*4kB 7*8kB 5*16kB 1*32kB 1*64kB 0*128kB 0*256kB 1*512kB 1*1024kB 1*2048kB 3*4096kB = 16204kB
Jan 28 06:18:56 x1blade1 kernel: DMA32: empty
Jan 28 06:18:56 x1blade1 kernel: Normal: empty
Jan 28 06:18:57 x1blade1 kernel: HighMem: empty
Jan 28 06:18:57 x1blade1 kernel: Swap cache: add 513883, delete 513883, find 28240/28470, race 0+0
Jan 28 06:18:57 x1blade1 kernel: Free swap  = 0kB
Jan 28 06:18:57 x1blade1 kernel: Total swap = 2048276kB
Jan 28 06:18:57 x1blade1 kernel: Free swap:  0kB
Jan 28 06:18:57 x1blade1 kernel: 4102932 pages of RAM
Jan 28 06:18:57 x1blade1 kernel: 97982 reserved pages
Jan 28 06:18:57 x1blade1 kernel: 753073 pages shared
Jan 28 06:18:58 x1blade1 kernel: 0 pages swap cached
Jan 28 06:18:58 x1blade1 kernel: Out of memory: Killed process 6991 (python).

greetings

.r
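P.S. To catch the culprit next time before the OOM killer does, I plan to log the biggest memory consumers from cron. Something like this might do it (untested sketch, written as an /etc/cron.d entry; assumes the stock procps ps, and the log path is just an example):

    # every minute, append the 10 fattest processes by resident memory
    * * * * * root ps axo pid,rss,vsz,cmd --sort=-rss | head -n 11 >> /var/log/mem-top.log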
On Wed, Jan 28, 2009 at 2:49 PM, Heiko <rupertt@gmail.com> wrote:
> Can it be that one domU has so much load that it can take down all
> the others, incl. dom0?

It should not. However, it depends on how you set it up. Can you share how you set up dom0 and domU, in particular CPU and memory allocation?

A "good" dom0 setup would typically:
- have its own dedicated CPU (usually CPU0), not used by domUs
- have enough memory (512MB would do)
- not run any unneeded services (e.g. no NFS server, HTTP server, etc.)

> Jan 28 06:07:32 x1blade1 kernel: python invoked oom-killer: gfp_mask=0x201d2, order=0, oomkilladj=0
> Jan 28 06:07:32 x1blade1 kernel:
> Jan 28 06:07:32 x1blade1 kernel: Call Trace:
> Jan 28 06:07:32 x1blade1 kernel: [<ffffffff802b4896>] out_of_memory+0x8b/0x203
> Jan 28 06:07:32 x1blade1 kernel: [<ffffffff8020f05e>] __alloc_pages+0x22b/0x2b4

This is bad. Are you perhaps running lots of services on dom0?

Regards,

Fajar
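P.S. If you're not sure what the current allocation looks like, something along these lines should show it (the grepped field names are from xen 3.x's "xm info"; adjust if yours differ):

    # overall CPU and memory picture of the host
    xm info | grep -E 'nr_cpus|total_memory|free_memory'
    # per-domain memory and vcpu counts, and where the vcpus are pinned
    xm list
    xm vcpu-list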
On Wed, Jan 28, 2009 at 9:15 AM, Fajar A. Nugraha <fajar@fajar.net> wrote:
> On Wed, Jan 28, 2009 at 2:49 PM, Heiko <rupertt@gmail.com> wrote:
>> Can it be that one domU has so much load that it can take down all
>> the others, incl. dom0?
>
> It should not. However, it depends on how you set it up. Can you share
> how you set up dom0 and domU, in particular CPU and memory allocation?

Hello,

I haven't set any option to give dom0 a separate CPU or RAM. Do I have to change (dom0-cpus 0) to (dom0-cpus 1) for that? For the memory I saw an option that goes on the grub command line.

These are my domUs on that host:

[root@x1blade1:~]# xm list
Name               ID Mem(MiB) VCPUs State   Time(s)
Domain-0            0    10147     8 r-----  388610.5
auto-input-vm1      1      999     2 -b----   10847.7
distribution-vm1    5      999     2 -b----  145234.6
monitoring-1        2     1999     2 -b---- 1060094.1
translator-vm1      4      999     2 -b----   63999.2
uat-vm1             3      999     2 -b----    9666.5

> A "good" dom0 setup would typically:
> - have its own dedicated CPU (usually CPU0), not used by domUs
> - have enough memory (512MB would do)
> - not run any unneeded services (e.g. no NFS server, HTTP server, etc.)
>
>> Jan 28 06:07:32 x1blade1 kernel: python invoked oom-killer: gfp_mask=0x201d2, order=0, oomkilladj=0
>
> This is bad. Are you perhaps running lots of services on dom0?

Just Xen and the Nagios NRPE daemon, which checks some system info and the domUs for their status.

In this "top" output you can see that python does something:

top - 08:25:00 up 11 days, 11:51,  1 user,  load average: 0.27, 0.28, 0.37
Tasks: 196 total,   1 running, 195 sleeping,   0 stopped,   0 zombie
Cpu(s):  5.5%us,  7.1%sy,  0.0%ni, 84.7%id,  0.5%wa,  0.0%hi,  1.0%si,  1.3%st
Mem:  10390528k total,   937536k used,  9452992k free,   267344k buffers
Swap:  2048276k total,    10380k used,  2037896k free,    77120k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 6976 root      16   0  8756 1116  424 S   50  0.0 884:31.08 xenstored
  368 root      15   0  368m 7000 1296 S   17  0.1   1:12.43 python
 7329 root      18   0  143m 7316 2656 S    6  0.1   0:00.06 python
 7342 root      18   0  143m 7312 2656 S    5  0.1   0:00.05 python
 7351 root      18   0  143m 7312 2656 S    5  0.1   0:00.05 python

greetings

Heiko
On Wed, Jan 28, 2009 at 3:30 PM, Heiko <rupertt@gmail.com> wrote:
> I haven't set any option to give dom0 a separate CPU or RAM.
> Do I have to change (dom0-cpus 0) to (dom0-cpus 1) for that?

It would be best, yes. Here's what I use in xend-config.sxp:

(dom0-cpus 1)
(dom0-min-mem 256)

and in grub's menu.lst:

kernel /xen.gz-2.6.18-128.el5 dom0_mem=512M dom0_vcpus_pin

> These are my domUs on that host:
>
> [root@x1blade1:~]# xm list
> Name               ID Mem(MiB) VCPUs State   Time(s)
> Domain-0            0    10147     8 r-----  388610.5

This shows dom0 has 10G mem, which is good.

>>> Jan 28 06:07:32 x1blade1 kernel: python invoked oom-killer: gfp_mask=0x201d2, order=0, oomkilladj=0

This one shows python eats the memory. Which is veeeeery bad. Now python is used for lots of things (xend among them), but in my setup (where dom0 can only use 256-512M, and it only runs the xend service) it never acted up like that.

So the next question is what other programs on your dom0 use python; have a look at them. You can disable them to see if it has any effect.

Regards,

Fajar
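P.S. For reference, the full menu.lst entry ends up looking something like this on a CentOS 5 box (kernel/initrd file names and the root= device here are just examples; use whatever your existing entry has). The hypervisor options go on the "kernel /xen.gz" line, the dom0 kernel options stay on the first "module" line:

    title CentOS (2.6.18-128.el5xen)
        root (hd0,0)
        kernel /xen.gz-2.6.18-128.el5 dom0_mem=512M dom0_vcpus_pin
        module /vmlinuz-2.6.18-128.el5xen ro root=/dev/VolGroup00/LogVol00
        module /initrd-2.6.18-128.el5xen.img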
On Wed, Jan 28, 2009 at 9:47 AM, Fajar A. Nugraha <fajar@fajar.net> wrote:
> On Wed, Jan 28, 2009 at 3:30 PM, Heiko <rupertt@gmail.com> wrote:
>> I haven't set any option to give dom0 a separate CPU or RAM.
>> Do I have to change (dom0-cpus 0) to (dom0-cpus 1) for that?
>
> It would be best, yes. Here's what I use in xend-config.sxp:
>
> (dom0-cpus 1)
> (dom0-min-mem 256)

Hello,

I have set these options and will restart xend later. Can I restart xend without any influence on the domUs?

> and in grub's menu.lst:
>
> kernel /xen.gz-2.6.18-128.el5 dom0_mem=512M dom0_vcpus_pin

Did that too.

>> These are my domUs on that host:
>>
>> [root@x1blade1:~]# xm list
>> Name               ID Mem(MiB) VCPUs State   Time(s)
>> Domain-0            0    10147     8 r-----  388610.5
>
> This shows dom0 has 10G mem, which is good.
>
>>>> Jan 28 06:07:32 x1blade1 kernel: python invoked oom-killer: gfp_mask=0x201d2, order=0, oomkilladj=0
>
> This one shows python eats the memory. Which is veeeeery bad.
> Now python is used for lots of things (xend among them), but in my
> setup (where dom0 can only use 256-512M, and it only runs the xend
> service) it never acted up like that.
>
> So the next question is what other programs on your dom0 use python;
> have a look at them. You can disable them to see if it has any effect.

Hmm, in the listing below I can see a lot of python stuff. My Xen domU check uses "xm list"; can that be a problem? This check gets executed for each domU, so it can happen that as many instances run at once as there are domUs.
http://www.nagiosexchange.org/cgi-bin/page.cgi?g=Detailed%2F2272.html;d=1

21259 ?        S      0:00  \_ nrpe -c /etc/nagios/nrpe.cfg -d
21260 ?        S      0:00      \_ /bin/bash /usr/lib64/nagios/plugins/check_xen domU monitoring-1
21275 ?        S      0:00          \_ /bin/bash /usr/lib64/nagios/plugins/check_xen domU monitoring-1
21276 ?        R      0:00              \_ python /usr/sbin/xm list
21277 ?        S      0:00              \_ grep monitoring-1
21278 ?        S      0:00              \_ awk { print $6 }

greetings

.r

Here is the rest of the process listing:

  PID TTY      STAT   TIME COMMAND
    1 ?        Ss     1:47 init [3]
    2 ?        S      0:08 [migration/0]
    3 ?        SN     0:54 [ksoftirqd/0]
    4 ?        S      0:00 [watchdog/0]
    5 ?        S      0:04 [migration/1]
    6 ?        SN     0:08 [ksoftirqd/1]
    7 ?        S      0:00 [watchdog/1]
    8 ?        S      0:07 [migration/2]
    9 ?        SN     0:17 [ksoftirqd/2]
   10 ?        S      0:00 [watchdog/2]
   11 ?        S      0:05 [migration/3]
   12 ?        SN     0:16 [ksoftirqd/3]
   13 ?        S      0:00 [watchdog/3]
   14 ?        S      0:04 [migration/4]
   15 ?        SN     0:10 [ksoftirqd/4]
   16 ?        S      0:00 [watchdog/4]
   17 ?        S      0:05 [migration/5]
   18 ?        SN     0:45 [ksoftirqd/5]
   19 ?        S      0:00 [watchdog/5]
   20 ?        S      0:05 [migration/6]
   21 ?        SN     0:40 [ksoftirqd/6]
   22 ?        S      0:00 [watchdog/6]
   23 ?        S      0:03 [migration/7]
   24 ?        SN     0:13 [ksoftirqd/7]
   25 ?        S      0:00 [watchdog/7]
   26 ?        S<     0:03 [events/0]
   27 ?        S<     0:00 [events/1]
   28 ?        S<     0:00 [events/2]
   29 ?        S<     0:00 [events/3]
   30 ?        S<     0:00 [events/4]
   31 ?        S<     0:00 [events/5]
   32 ?        S<     0:00 [events/6]
   33 ?        S<     0:02 [events/7]
   34 ?        S<     0:00 [khelper]
   35 ?        S<     0:00 [kthread]
   37 ?        S<     0:00  \_ [xenwatch]
   38 ?        S<     0:00  \_ [xenbus]
   47 ?        S<     0:00  \_ [kblockd/0]
   48 ?        S<     0:00  \_ [kblockd/1]
   49 ?        S<     0:00  \_ [kblockd/2]
   50 ?        S<     0:00  \_ [kblockd/3]
   51 ?        S<     0:00  \_ [kblockd/4]
   52 ?        S<     0:00  \_ [kblockd/5]
   53 ?        S<     0:00  \_ [kblockd/6]
   54 ?        S<     0:00  \_ [kblockd/7]
   55 ?        S<     0:00  \_ [kacpid]
  159 ?        S<     0:00  \_ [cqueue/0]
  160 ?        S<     0:00  \_ [cqueue/1]
  161 ?        S<     0:00  \_ [cqueue/2]
  162 ?        S<     0:00  \_ [cqueue/3]
  163 ?        S<     0:00  \_ [cqueue/4]
  164 ?        S<     0:00  \_ [cqueue/5]
  165 ?        S<     0:00  \_ [cqueue/6]
  166 ?        S<     0:00  \_ [cqueue/7]
  170 ?        S<     0:00  \_ [khubd]
  172 ?        S<     0:00  \_ [kseriod]
  271 ?        S<    15:03  \_ [kswapd0]
  272 ?        S<     0:00  \_ [aio/0]
  273 ?        S<     0:00  \_ [aio/1]
  274 ?        S<     0:00  \_ [aio/2]
  275 ?        S<     0:00  \_ [aio/3]
  276 ?        S<     0:00  \_ [aio/4]
  277 ?        S<     0:00  \_ [aio/5]
  278 ?        S<     0:00  \_ [aio/6]
  279 ?        S<     0:00  \_ [aio/7]
  421 ?        S<     0:00  \_ [kpsmoused]
  526 ?        S<     0:00  \_ [scsi_eh_0]
  536 ?        S<     0:00  \_ [scsi_eh_1]
  537 ?        S<     0:07  \_ [usb-storage]
  539 ?        S<     0:00  \_ [scsi_eh_2]
  540 ?        S<     0:01  \_ [usb-storage]
  542 ?        S<     2:19  \_ [kjournald]
  569 ?        S<     0:02  \_ [kauditd]
 3436 ?        S<     0:00  \_ [kmpathd/0]
 3437 ?        S<     0:00  \_ [kmpathd/1]
 3438 ?        S<     0:00  \_ [kmpathd/2]
 3439 ?        S<     0:00  \_ [kmpathd/3]
 3440 ?        S<     0:00  \_ [kmpathd/4]
 3441 ?        S<     0:00  \_ [kmpathd/5]
 3442 ?        S<     0:00  \_ [kmpathd/6]
 3443 ?        S<     0:00  \_ [kmpathd/7]
 3488 ?        S<     0:00  \_ [kjournald]
 4993 ?        SN     4:26  \_ [kipmi0]
 7690 ?        S<     0:01  \_ [xvd 1]
 7691 ?        S<     0:00  \_ [xvd 1 07:00]
 8102 ?        S<     0:32  \_ [xvd 2]
 8103 ?        S<     0:00  \_ [xvd 2 07:01]
 8365 ?        S<     0:01  \_ [xvd 3]
 8366 ?        S<     0:00  \_ [xvd 3 07:02]
 9887 ?        S<     0:01  \_ [xvd 4]
 9888 ?        S<     0:00  \_ [xvd 4 07:03]
10387 ?        S<     0:00  \_ [xvd 5]
10389 ?        S<     0:00  \_ [xvd 5 07:04]
  356 ?        S      0:00  \_ [pdflush]
  358 ?        S      0:00  \_ [pdflush]
  603 ?        S<s    0:00 /sbin/udevd -d
 4090 ?        S<sl   2:12 auditd
 4092 ?        S<s    0:23  \_ python /sbin/audispd
 4122 ?        Ss     1:01 syslogd -m 0
 4125 ?        Ss     0:33 klogd -x
 4141 ?        Ss     2:35 irqbalance
 4176 ?        Ss     0:00 portmap
 4201 ?        Ss     0:00 rpc.statd
 4252 ?        Ss     0:28 rpc.idmapd
 4279 ?        Ss     0:00 dbus-daemon --system
 4294 ?        Ss     0:00 /usr/sbin/hcid
 4298 ?        Ss     0:00 /usr/sbin/sdpd
 4343 ?        S<     0:00 [krfcommd]
 4391 ?        Ssl    0:33 pcscd
 4423 ?        Ss     0:00 /usr/bin/hidd --server
 4453 ?        Ssl    1:48 automount
 4476 ?        Ss     0:00 /usr/sbin/acpid
 4492 ?        Sl     6:33 /usr/sbin/snmpd -Lsd -Lf /dev/null -p /var/run/snmpd.pid -a
 4525 ?        Ss     2:08 /usr/sbin/sshd
25055 ?        Ss     0:00  \_ sshd: root@notty
25068 ?        Ss     0:00  |   \_ /usr/libexec/openssh/sftp-server
16943 ?        Ss     0:00  \_ sshd: root@pts/3
16945 pts/3    Ss     0:00      \_ -bash
17955 pts/3    R+     0:00          \_ ps afx
17956 pts/3    D+     0:00          \_ -bash
 4569 ?        Ss     0:00 cupsd
 4587 ?        SLs    0:00 ntpd -u ntp:ntp -p /var/run/ntpd.pid -g
 4602 ?        Ss     0:41 nrpe -c /etc/nagios/nrpe.cfg -d
 4626 ?        Ss     3:58 sendmail: accepting connections
 4634 ?        Ss     0:00 sendmail: Queue runner@01:00:00 for /var/spool/clientmqueue
 4650 ?        Ss     0:00 gpm -m /dev/input/mice -t exps2
 4665 ?        Ss     1:06 crond
 4694 ?        Ss     0:07 /usr/sbin/atd
 4711 ?        Ssl    0:14 /etc/delloma.d/oma/bin/dsm_om_shrsvc32d
 5306 ?        Ssl    6:52 /opt/dell/srvadmin/dataeng/bin/dsm_sa_datamgr32d
 5307 ?        Ss     0:00  \_ /opt/dell/srvadmin/dataeng/bin/dsm_sa_datamgr32d
 6285 ?        Ssl    1:06 /opt/dell/srvadmin/dataeng/bin/dsm_sa_eventmgr32d
 6297 ?        Ssl    2:32 /opt/dell/srvadmin/dataeng/bin/dsm_sa_snmp32d
 6337 ?        Ss     0:00 /etc/delloma.d/iws/bin/linux/dsm_om_connsvc32d -run
 6338 ?        Sl     7:47  \_ /etc/delloma.d/iws/bin/linux/dsm_om_connsvc32d -run
 6365 ?        S      0:00 libvirt_qemud --system --daemon
 6573 ?        S      0:00  \_ dnsmasq --keep-in-foreground --strict-order --bind-interfaces --pid-file --conf-file --listen-address 192.168.122.1 --except-interface lo --dhcp-leasefile=/var/lib/libvirt/dhcp-default.leases --dhcp-range 192.168.122.2,192.168.122.254
 6553 ?        S     18:37 /usr/bin/python /usr/sbin/yum-updatesd
 6578 ?        Ss     3:02 hald
 6579 ?        S      0:00  \_ hald-runner
 6591 ?        S      0:00      \_ hald-addon-acpi: listening on acpid socket /var/run/acpid.socket
 6594 ?        S      0:00      \_ hald-addon-keyboard: listening on /dev/input/event3
 6598 ?        S      0:00      \_ hald-addon-keyboard: listening on /dev/input/event2
 6605 ?        S      0:00      \_ hald-addon-keyboard: listening on /dev/input/event0
 6617 ?        S      1:03      \_ hald-addon-storage: polling /dev/scd0
 6619 ?        S      0:38      \_ hald-addon-storage: polling /dev/sdb
 6976 ?        S    885:27 xenstored --pid-file /var/run/xenstore.pid
 6990 ?        S      0:00 python /usr/sbin/xend start
  368 ?        Sl     1:27  \_ python /usr/sbin/xend start
 6992 ?        Ssl    0:00 blktapctrl
 6995 ?        Sl     0:00 xenconsoled --log none --log-dir /var/log/xen/console
 7304 ?        S      0:00 /usr/sbin/smartd -q never
 7365 ?        Sl     3:17 tapdisk /dev/xen/tapctrlwrite1 /dev/xen/tapctrlread1
 7569 ?        S<     0:00 [loop0]
 7760 ?        Sl    21:18 tapdisk /dev/xen/tapctrlwrite2 /dev/xen/tapctrlread2
 7927 ?        S<     0:00 [loop1]
 8166 ?        Sl     3:49 tapdisk /dev/xen/tapctrlwrite3 /dev/xen/tapctrlread3
 8353 ?        S<     0:00 [loop2]
 8378 tty1     Ss+    0:00 /sbin/mingetty tty1
 8379 tty2     Ss+    0:00 /sbin/mingetty tty2
 8380 tty3     Ss+    0:00 /sbin/mingetty tty3
 8381 tty4     Ss+    0:00 /sbin/mingetty tty4
 8382 tty5     Ss+    0:00 /sbin/mingetty tty5
 8383 tty6     Ss+    0:00 /sbin/mingetty tty6
 9667 ?        Sl     3:18 tapdisk /dev/xen/tapctrlwrite4 /dev/xen/tapctrlread4
 9843 ?        S<     0:00 [loop3]
 9970 ?        Sl     2:30 tapdisk /dev/xen/tapctrlwrite5 /dev/xen/tapctrlread5
10189 ?        S<     0:00 [loop4]
  348 ?        S      0:00 nrpe -c /etc/nagios/nrpe.cfg -d
  350 ?        S      0:00 nrpe -c /etc/nagios/nrpe.cfg -d

> Regards,
>
> Fajar
On Wed, Jan 28, 2009 at 4:05 PM, Heiko <rupertt@gmail.com> wrote:
> On Wed, Jan 28, 2009 at 9:47 AM, Fajar A. Nugraha <fajar@fajar.net> wrote:
>> It would be best, yes. Here's what I use in xend-config.sxp:
>>
>> (dom0-cpus 1)
>> (dom0-min-mem 256)
>
> Hello,
>
> I have set these options and will restart xend later.

You don't need to restart, actually. Just run:

xm vcpu-set 0 1
xm vcpu-pin 0 0 0

This will set dom0 to use only one CPU, CPU0. To have the domUs NOT use that CPU, you need to add this option to every domU config:

cpus="^0"

If you don't want to restart the domUs, then simply look for the ones that are using CPU0 and relocate them:

xm vcpu-list
xm vcpu-pin ...

Note that this isn't directly related to your problem, but IMHO it's a best practice.

> Can I restart xend without any influence on the domUs?

You should be able to. But again, it's not necessary in this scenario.

> Hmm, in the listing below I can see a lot of python stuff.
> My Xen domU check uses "xm list"; can that be a problem?
> This check gets executed for each domU, so it can happen that as many
> instances run at once as there are domUs.

Probably. You should limit it to run only one at a time (I don't know how though).

> 6553 ?        S     18:37 /usr/bin/python /usr/sbin/yum-updatesd

You MIGHT want to try disabling this as well:

service yum-updatesd stop
chkconfig yum-updatesd off

It uses python, and it is not very useful anyway if you don't use a GUI (gnome, kde) on that server.

On a side note, you might want to add some memory monitoring on that server. That should give you some info on when, how, and why you run out of memory.

Regards,

Fajar
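P.S. One possible way to limit it to one at a time (untested sketch; assumes flock(1) from util-linux is installed, and uses the plugin path from your ps output -- lock file location is just an example): wrap check_xen so that every invocation goes through a single lock, and point the NRPE command definition at the wrapper instead of the plugin.

    #!/bin/bash
    # check_xen_serialized -- run the real check_xen under an exclusive
    # lock so only one "xm list" runs at a time, no matter how many
    # NRPE checks fire in parallel.
    LOCK=/var/lock/check_xen.lock
    # -w 10: wait at most 10 seconds for the lock; on timeout flock exits
    # non-zero, so NRPE will simply report that single check as failed.
    flock -w 10 "$LOCK" /usr/lib64/nagios/plugins/check_xen "$@"
    exit $?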
On Wed, Jan 28, 2009 at 10:31 AM, Fajar A. Nugraha <fajar@fajar.net> wrote:
> You don't need to restart, actually. Just run:
>
> xm vcpu-set 0 1
> xm vcpu-pin 0 0 0
>
> This will set dom0 to use only one CPU, CPU0. To have the domUs NOT use
> that CPU, you need to add this option to every domU config:
>
> cpus="^0"
>
> [...]
>
> On a side note, you might want to add some memory monitoring on that
> server. That should give you some info on when, how, and why you run
> out of memory.

Hello Fajar,

thanks for all the tips. I will make these changes on all my dom0s. Does the following output show me that no domU is using CPU0?

[root@x1blade1:/etc/xen]# xm vcpu-list
Name               ID VCPUs CPU State  Time(s)  CPU Affinity
Domain-0            0     0   0 r--   123346.0  0
Domain-0            0     1   - --p    33745.7  any cpu
Domain-0            0     2   - --p    36581.9  any cpu
Domain-0            0     3   - --p    34744.3  any cpu
Domain-0            0     4   - --p    33350.9  any cpu
Domain-0            0     5   - --p    46965.1  any cpu
Domain-0            0     6   - --p    43458.5  any cpu
Domain-0            0     7   - --p    38519.7  any cpu
auto-input-vm1      1     0   2 -b-     7267.0  any cpu
auto-input-vm1      1     1   2 -b-     3639.3  any cpu
distribution-vm1    5     0   2 -b-    98132.9  any cpu
distribution-vm1    5     1   4 -b-    48432.8  any cpu
monitoring-1        2     0   6 r--   528173.3  any cpu
monitoring-1        2     1   7 r--   538368.0  any cpu
translator-vm1      4     0   4 -b-    47545.9  any cpu
translator-vm1      4     1   2 -b-    17323.2  any cpu
uat-vm1             3     0   1 -b-     6479.1  any cpu
uat-vm1             3     1   5 -b-     3240.7  any cpu

greetings

.r
On Wed, Jan 28, 2009 at 10:31 AM, Fajar A. Nugraha <fajar@fajar.net> wrote:
> You don't need to restart, actually. Just run:
>
> xm vcpu-set 0 1
> xm vcpu-pin 0 0 0
>
> This will set dom0 to use only one CPU, CPU0. To have the domUs NOT use
> that CPU, you need to add this option to every domU config:
>
> cpus="^0"

Hello again,

it seems Xen is ignoring this parameter. I did set it, but the domUs still take CPU0. I did:

#xm vcpu-set 0 1
#xm vcpu-pin 0 0 0

and put cpus="^0" into my domU configs.

cheers

.r
> On Wed, Jan 28, 2009 at 3:30 PM, Heiko <rupertt@gmail.com> wrote:
>> I haven't set any option to give dom0 a separate CPU or RAM.
>> Do I have to change (dom0-cpus 0) to (dom0-cpus 1) for that?
>
> It would be best, yes. Here's what I use in xend-config.sxp:
>
> (dom0-cpus 1)
> (dom0-min-mem 256)
>
> and in grub's menu.lst:
>
> kernel /xen.gz-2.6.18-128.el5 dom0_mem=512M dom0_vcpus_pin

Sorry for the thread jack, this will be short...

This is the first I've heard of the "dom0_vcpus_pin" option. So is that the same as

xm vcpu-set 0 1
xm vcpu-pin 0 0 0

where Dom0 will be pinned to CPU0 at boot? In what version of Xen did that option make its appearance?

If so, that would be nice, since I've been pinning Dom0 to CPU0 in rc.local, and it always seemed weird to pin after it had booted.

Thanks,
Ryan
On Wed, Jan 28, 2009 at 9:05 PM, Ryan Burke <burke@tailorhosting.com> wrote:
>> Here's what I use in xend-config.sxp:
>>
>> (dom0-cpus 1)
>> (dom0-min-mem 256)
>>
>> and in grub's menu.lst:
>>
>> kernel /xen.gz-2.6.18-128.el5 dom0_mem=512M dom0_vcpus_pin
>
> Sorry for the thread jack, this will be short...
>
> This is the first I've heard of the "dom0_vcpus_pin" option. So is that
> the same as
>
> xm vcpu-set 0 1
> xm vcpu-pin 0 0 0
>
> where Dom0 will be pinned to CPU0 at boot?

It's the same as the vcpu-pin command line if you only assign one CPU to dom0. The vcpu-set line is the equivalent of (dom0-cpus 1) in xend-config.sxp.

> In what version of Xen did that option make its appearance?

Not sure. According to http://wiki.xensource.com/xenwiki/XenBooting , it's available since 3.0. I've been using it on RHEL 5's stock xen (which is based on xen 3.1.2), and it works.

> If so, that would be nice, since I've been pinning Dom0 to CPU0 in
> rc.local, and it always seemed weird to pin after it had booted.

Yeah, that's what I did as well before I read the documentation :D

Regards,

Fajar
On Wed, Jan 28, 2009 at 4:57 PM, Heiko <rupertt@gmail.com> wrote:
> thanks for all the tips. I will make these changes on all my dom0s.
> Does the following output show me that no domU is using CPU0?

Yes.
On Wed, Jan 28, 2009 at 7:20 PM, Heiko <rupertt@gmail.com> wrote:
> Hello again,
>
> it seems Xen is ignoring this parameter.
> I did set it, but the domUs still take CPU0. I did:
>
> #xm vcpu-set 0 1
> #xm vcpu-pin 0 0 0

This one only affects dom0; it doesn't affect any domU.

> and put cpus="^0" into my domU configs.

Did you start/restart the domUs after changing those configs? If not, and you don't want to restart them, then you need to manually pin each domU that's using CPU0 to other CPUs. See "xm vcpu-pin" for command line help.

If yes: what OS/xen version are you using? What does "xm create --help_config | grep cpu" say? What does "man xmdomain.cfg" say about cpus?

Regards,

Fajar
On Wed, Jan 28, 2009 at 3:33 PM, Fajar A. Nugraha <fajar@fajar.net> wrote:
> Did you start/restart the domUs after changing those configs? If not,
> and you don't want to restart them, then you need to manually pin each
> domU that's using CPU0 to other CPUs. See "xm vcpu-pin" for command
> line help.
>
> If yes: what OS/xen version are you using?
> What does "xm create --help_config | grep cpu" say?
> What does "man xmdomain.cfg" say about cpus?

Hello,

I did stop the domUs and started them manually; they still used CPU0. When I put cpus="1,2,3" there instead, it works. The manpage says I can use the negation. The grep gives me this:

cpus=CPUS  CPUS to run the domain on.

Manually pinning the cores also works fine:

prod_rd_vpn    2    0    2    -b-    155.4    1-3
prod_rd_vpn    2    1    1    -b-    123.4    1-3

I am using CentOS 5.2.

greetings

.r
> Hello,
>
> I did stop the domUs and started them manually; they still used CPU0.
> When I put cpus="1,2,3" there instead, it works.

I've seen the problem before. What I needed to do was:

cpus="0-n,^0"

where n = (number of cores in the system - 1).

Ex. on a 4-core system:

cpus="0-3,^0"

will allow the domUs to share CPUs 1, 2 and 3 and not use CPU0 (reserved for Dom0).
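On Heiko's 8-core blade that would presumably look something like this (a sketch; the domU name is just one from his xm list, and the pin commands are only needed for domUs you don't want to restart):

    # in the domU config file, e.g. /etc/xen/monitoring-1:
    vcpus = 2
    cpus  = "0-7,^0"    # float over CPUs 1-7, never CPU0

    # for an already-running domU, pin each vcpu away from CPU0 by hand:
    xm vcpu-pin monitoring-1 0 1-7
    xm vcpu-pin monitoring-1 1 1-7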
On Wed, Jan 28, 2009 at 10:14 PM, Ryan Burke <burke@tailorhosting.com> wrote:
>> I did stop the domUs and started them manually; they still used CPU0.
>> When I put cpus="1,2,3" there instead, it works.
>
> I've seen the problem before. What I needed to do was:
>
> cpus="0-n,^0"
>
> where n = (number of cores in the system - 1).
>
> Ex. on a 4-core system:
>
> cpus="0-3,^0"
>
> will allow the domUs to share CPUs 1, 2 and 3 and not use CPU0
> (reserved for Dom0).

That's weird. I've been using both RHEL 5.2 and 5.3 with RH's stock Xen, and both behave properly, i.e. cpus="^0" works fine. Good hint on the workaround though; I'll keep that in mind if I see a similar problem.

On a side note, my Xen version (RH's xen-3.0.3-80.el5) seems to be capable of automatic vcpu rebalancing, i.e. starting a new domain can cause other domains' CPU allocations to change automatically. I've tried shutting down and starting a domU several times, resulting in the other domUs having their CPU allocations changed, but none of them uses CPU0.

So all I can say is that perhaps this Xen version properly recognizes "cpus=^0" while Heiko's doesn't. Updating might help.

Regards,

Fajar
There is one other possibility that could be going on here. I saw a very similar failure mode to what Heiko saw once, when I had a domU mistakenly try to do hardware RAID maintenance and scanning from inside the domU. Not pretty. That sent the dom0 load very high and eventually led to a crash of everything.

Steve Timm

On Wed, 28 Jan 2009, Heiko wrote:
> Hello,
>
> last night I had a dom0 that was reported down by our monitoring; it
> became available again after 50 minutes. I also had messages that some
> domUs on that machine were not online.
>
> [rest of the original message and OOM log snipped]
--
------------------------------------------------------------------
Steven C. Timm, Ph.D  (630) 840-8525
timm@fnal.gov  http://home.fnal.gov/~timm/
Fermilab Computing Division, Scientific Computing Facilities,
Grid Facilities Department, FermiGrid Services Group, Assistant Group Leader.