Hi,

I have a strange problem here.

My Java Virtual Machine crashes on a DomU when mem-set is 1792 MB, but is
stable when mem-set has been called with 1791 MB.

This happens running JBoss, and it is an actual libjvm.so segfault. It
doesn't report anything about memory.

Running:
very recent quad-core Xeon 2 GHz
Xen 3.1 from OpenSolaris 09/06
OpenSolaris 09/06 Dom0
Ubuntu 8.04 with 2.6.24-25-xen kernel image
JVMs (any... 1.5.0 - 1.7, either OpenJDK or Sun)

I finally tracked down the crash to that single 1 MB adjustment of RAM,
although the JVM is only configured to use 512 MB of RAM max (-Xmx512m).

This is very bizarre. Has anybody seen this behaviour before, or have any
comments about why it may be happening?

One thought... Java requires contiguous memory, which I believe Xen does
not provide to the guest. I would have thought, however, that the
addressable memory space that an application sees can be mapped to appear
contiguous?

Thanks for any input or suggestions,

Rob

--
Rob Shepherd BEng PhD - Director / Senior Engineer - DataCymru Ltd

_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
On Tue, Nov 10, 2009 at 01:52:34PM +0000, Rob Shepherd wrote:
> Hi,
>
> I have a strange problem here.
>
> My Java Virtual Machine crashes on a DomU when mem-set is 1792 MB but is
> stable when mem-set has been called with 1791 MB.
>
> This happens running JBoss, and is an actual libjvm.so segfault. It
> doesn't report anything about memory.
>
> Running:
> very recent quad-core Xeon 2 GHz
> Xen 3.1 from OpenSolaris 09/06
> OpenSolaris 09/06 Dom0
> Ubuntu 8.04 with 2.6.24-25-xen kernel image
> JVMs (any... 1.5.0 - 1.7, either OpenJDK or Sun)
>
> I finally tracked down the crash to that single 1 MB adjustment of RAM,
> although the JVM is only configured to use 512 MB of RAM max (-Xmx512m).
>
> This is very bizarre. Has anybody seen this behaviour before, or have any
> comments about why it may be happening?
>
> One thought... Java requires contiguous memory, which I believe Xen does
> not provide to the guest. I would have thought, however, that the
> addressable memory space that an application sees can be mapped to appear
> contiguous?
>
> Thanks for any input, or suggestions

Does the domU kernel dmesg have errors?

How does the domU kernel memory layout change when you switch between
1791 and 1792 MB of RAM? (It's in the beginning of dmesg / the domU
kernel boot messages.)

Have you tried any other domU kernels? The Ubuntu 2.6.24 kernel is known
to be buggy.

-- Pasi
On Tue, Nov 10, 2009 at 8:52 PM, Rob Shepherd <rs@datacymru.net> wrote:
> I finally tracked down the crash to that single 1MB adjustment of RAM,
> although the JVM is only configured to use 512MB of RAM MAX (-Xmx512m).
>
> This is very bizarre. Has anybody seen this behaviour before, or have any
> comments about why it may be happening?

Java uses MORE memory than what you set with -Xmx. For example, I set
-Xmx512m on an app and "top" shows 693 MB RES, 1392 MB VIRT.

Perhaps you disabled swap, so that all of VIRT needs to be in memory, and
thus the oom-killer killed it?

> One thought... Java requires contiguous memory, which I believe Xen does
> not provide to the guest. I would have thought, however, that the
> addressable memory space that an application sees can be mapped to appear
> contiguous?

It should be that way.

-- Fajar
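Fajar's point that a JVM maps far more than its -Xmx cap can be checked
directly from /proc. A minimal sketch (assumes Linux; `$$` here is just this
shell standing in for the running JVM's pid):

```shell
# Sketch: compare the -Xmx cap against what a process actually maps.
# Replace $$ (this shell) with the JVM's pid. VmSize (virtual) typically
# sits well above -Xmx because it also covers permgen, the JIT code cache,
# thread stacks, and mmap'd jars; VmRSS is what is actually resident.
pid=$$
grep -E '^Vm(Size|RSS)' /proc/$pid/status
```

For a JVM started with -Xmx512m, expect VmSize well above 512 MB, which is
what Fajar's "top" numbers (693 MB RES, 1392 MB VIRT) show.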
Fajar A. Nugraha wrote:
> Java uses MORE memory than what you set with -Xmx. For example, I set
> -Xmx512m on an app and "top" shows 693 MB RES, 1392 MB VIRT.
>
> Perhaps you disabled swap, so that all of VIRT needs to be in memory,
> and thus the oom-killer killed it?

I thought this earlier, but no. swap total == swap free == 4194296 kB

One would expect an OOM error to come through as a JVM stack trace. This
is a JVM segfault.

Nevertheless, thank you for your input. I'd be glad of any further
suggestions.

Kind regards
Rob

--
Rob Shepherd BEng PhD - Director / Senior Engineer - DataCymru Ltd
> One would expect an OOM error to come through as a JVM stack trace.
> This is a JVM segfault.

I wouldn't expect that, or anything else logical, to come from a JVM.
The OOM killer may very well just segfault the clients it's killing, so
check dmesg or syslog to see what happened.

--
John Madden
Sr UNIX Systems Engineer
Ivy Tech Community College of Indiana
jmadden@ivytech.edu
John Madden wrote:
>> One would expect an OOM error to come through as a JVM stack trace.
>> This is a JVM segfault.
>
> I wouldn't expect that, or anything else logical, to come from a JVM.
> The OOM killer may very well just segfault the clients it's killing, so
> check dmesg or syslog to see what happened.

Nothing in dmesg/syslog etc.

By the way, logical out-of-memory errors are supposed to come from the
JVM; see:
http://java.sun.com/j2se/1.5.0/docs/api/java/lang/OutOfMemoryError.html

One can simulate this by growing the heap size. Thus, seeing a segfault
indicates a problem with the JVM (and/or its supporting/underlying OS
subsystems).

Thank you though
Rob

--
Rob Shepherd BEng PhD - Director / Senior Engineer - DataCymru Ltd
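For anyone reproducing this, the two failure modes John and Rob are
discussing leave different traces. A sketch of the checks (assumes a Linux
domU with a readable kernel ring buffer):

```shell
# Sketch: distinguish an OOM kill from a genuine crash. The kernel's OOM
# killer logs to the ring buffer; a real libjvm.so crash leaves a
# "segfault at ..." line instead (logged by default on x86 kernels).
dmesg 2>/dev/null | grep -iE 'out of memory|oom' || echo "no OOM-killer activity logged"
dmesg 2>/dev/null | grep -i 'segfault' || echo "no segfaults logged"
```

An empty result for both, as Rob reports, points away from the OOM killer.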
Pasi Kärkkäinen wrote:
> Does the domU kernel dmesg have errors?
>
> How does the domU kernel memory layout change when you switch between
> 1791 and 1792 MB of RAM? (It's in the beginning of dmesg / the domU
> kernel boot messages.)
>
> Have you tried any other domU kernels? The Ubuntu 2.6.24 kernel is
> known to be buggy.
>
> -- Pasi

Thank you for your input, Pasi.

Below is a unified diff of the appropriate parts of kern.log when booted
with 1791 and 1792 MB respectively. I see no non-linear or unrealistic
changes in the scenario. What do you think?

Also, this was taken on a Debian kernel (2.6.26-2-xen-amd64), which gives
the same result as the Ubuntu 2.6.24 kernel. I am about to try a Xen HVM
image.
Any further input is greatly appreciated.

A great many thanks
Rob

Here's the kern.log diff:

> --- kern.log-1791MB-a	2009-11-13 21:47:51.944593412 +0000
> +++ kern.log-1792MB-a	2009-11-13 21:49:36.885698062 +0000
> @@ -3,11 +3,11 @@
>  Nov 13 22:00:00 dev-perftest kernel: [ 0.000000] Linux version 2.6.26-2-xen-amd64 (Debian 2.6.26-20) (dannf@debian.org) (gcc version 4.1.3 20080704 (prerelease) (Debian 4.1.2-25)) #1 SMP Mon Oct 26 12:43:40 UTC 2009
>  Nov 13 22:00:00 dev-perftest kernel: [ 0.000000] Command line: root=UUID=1141efa3-ef46-4265-81b2-8b7c7332b26c ro console=hvc0 xencons=tty
>  Nov 13 22:00:00 dev-perftest kernel: [ 0.000000] BIOS-provided physical RAM map:
> -Nov 13 22:00:00 dev-perftest kernel: [ 0.000000] Xen: 0000000000000000 - 0000000070700000 (usable)
> -Nov 13 22:00:00 dev-perftest kernel: [ 0.000000] Entering add_active_range(0, 0, 460544) 0 entries of 256 used
> -Nov 13 22:00:00 dev-perftest kernel: [ 0.000000] max_pfn_mapped = 460544
> +Nov 13 22:00:00 dev-perftest kernel: [ 0.000000] Xen: 0000000000000000 - 0000000070800000 (usable)
> +Nov 13 22:00:00 dev-perftest kernel: [ 0.000000] Entering add_active_range(0, 0, 460800) 0 entries of 256 used
> +Nov 13 22:00:00 dev-perftest kernel: [ 0.000000] max_pfn_mapped = 460800
>  Nov 13 22:00:00 dev-perftest kernel: [ 0.000000] init_memory_mapping
> -Nov 13 22:00:00 dev-perftest kernel: [ 0.000000] Entering add_active_range(0, 0, 460544) 0 entries of 256 used
> +Nov 13 22:00:00 dev-perftest kernel: [ 0.000000] Entering add_active_range(0, 0, 460800) 0 entries of 256 used
>  Nov 13 22:00:00 dev-perftest kernel: [ 0.000000] early res: 0 [200000-631917] TEXT DATA BSS
>  Nov 13 22:00:00 dev-perftest kernel: [ 0.000000] early res: 1 [632000-1cb4fff] Xen provided
>  Nov 13 22:00:00 dev-perftest kernel: [ 0.000000] early res: 2 [1cb5000-1cb5fff] INITMAP
> @@ -18,18 +18,18 @@
>  Nov 13 22:00:00 dev-perftest kernel: [ 0.000000]     Normal   1048576 -> 1048576
>  Nov 13 22:00:00 dev-perftest kernel: [ 0.000000] Movable zone start PFN for each node
>  Nov 13 22:00:00 dev-perftest kernel: [ 0.000000] early_node_map[1] active PFN ranges
> -Nov 13 22:00:00 dev-perftest kernel: [ 0.000000]     0:        0 ->   460544
> -Nov 13 22:00:00 dev-perftest kernel: [ 0.000000] On node 0 totalpages: 460544
> +Nov 13 22:00:00 dev-perftest kernel: [ 0.000000]     0:        0 ->   460800
> +Nov 13 22:00:00 dev-perftest kernel: [ 0.000000] On node 0 totalpages: 460800
>  Nov 13 22:00:00 dev-perftest kernel: [ 0.000000]   DMA zone: 56 pages used for memmap
>  Nov 13 22:00:00 dev-perftest kernel: [ 0.000000]   DMA zone: 0 pages reserved
>  Nov 13 22:00:00 dev-perftest kernel: [ 0.000000]   DMA zone: 4040 pages, LIFO batch:0
> -Nov 13 22:00:00 dev-perftest kernel: [ 0.000000]   DMA32 zone: 6241 pages used for memmap
> -Nov 13 22:00:00 dev-perftest kernel: [ 0.000000]   DMA32 zone: 450207 pages, LIFO batch:31
> +Nov 13 22:00:00 dev-perftest kernel: [ 0.000000]   DMA32 zone: 6244 pages used for memmap
> +Nov 13 22:00:00 dev-perftest kernel: [ 0.000000]   DMA32 zone: 450460 pages, LIFO batch:31
>  Nov 13 22:00:00 dev-perftest kernel: [ 0.000000]   Normal zone: 0 pages used for memmap
>  Nov 13 22:00:00 dev-perftest kernel: [ 0.000000]   Movable zone: 0 pages used for memmap
>  Nov 13 22:00:00 dev-perftest kernel: [ 0.000000] PERCPU: Allocating 22192 bytes of per cpu data
>  Nov 13 22:00:00 dev-perftest kernel: [ 0.000000] NR_CPUS: 32, nr_cpu_ids: 4
> -Nov 13 22:00:00 dev-perftest kernel: [ 0.000000] Built 1 zonelists in Zone order, mobility grouping on. Total pages: 454247
> +Nov 13 22:00:00 dev-perftest kernel: [ 0.000000] Built 1 zonelists in Zone order, mobility grouping on. Total pages: 454500
>  Nov 13 22:00:00 dev-perftest kernel: [ 0.000000] Kernel command line: root=UUID=1141efa3-ef46-4265-81b2-8b7c7332b26c ro console=hvc0 xencons=tty
>  Nov 13 22:00:00 dev-perftest kernel: [ 0.000000] Initializing CPU#0
>  Nov 13 22:00:00 dev-perftest kernel: [ 0.000000] PID hash table entries: 4096 (order: 12, 32768 bytes)
> @@ -39,5 +39,5 @@
>  Nov 13 22:00:00 dev-perftest kernel: [ 0.004000] Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes)
>  Nov 13 22:00:00 dev-perftest kernel: [ 0.004000] Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes)
>  Nov 13 22:00:00 dev-perftest kernel: [ 0.004000] Software IO TLB disabled
> -Nov 13 22:00:00 dev-perftest kernel: [ 0.004000] Memory: 1774016k/1842176k available (2279k kernel code, 59540k reserved, 1023k data, 216k init)
> +Nov 13 22:00:00 dev-perftest kernel: [ 0.004000] Memory: 1775040k/1843200k available (2279k kernel code, 59540k reserved, 1023k data, 216k init)
>  Nov 13 22:00:00 dev-perftest kernel: [ 0.004000] CPA: page pool initialized 1 of 1 pages preallocated

--
Rob Shepherd BEng PhD - Director / Senior Engineer - DataCymru Ltd
Reg. England and Wales - 06731289 - TechniumCAST, LL57 4HJ
rs@datacymru.net - 08452575006 - 07596154845 - www.datacymru.net
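The diff does look exactly linear: every changed figure differs by the same
amount. A quick sanity check of the pfn counts (page frame numbers, 4 KB
pages on x86-64):

```shell
# Sanity check on the diff above: max_pfn_mapped grows from 460544 to
# 460800, i.e. 256 pages * 4 KB = 1 MB -- exactly the mem-set change,
# so the memory layout itself shifts linearly as Rob observes.
echo $(( (460800 - 460544) * 4 / 1024 ))   # prints 1 (MB added)
```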
Update: It works fine with Oracle/BEA's JRockit VM, which according to a
Google search can operate with a non-contiguous heap.

Is heap (memory allocation) contiguity a problem for Xen?

Many thanks
Rob

Rob Shepherd wrote:
> My Java Virtual Machine crashes on a DomU when mem-set is 1792 MB but is
> stable when mem-set has been called with 1791 MB.
>
> [...]
>
> One thought... Java requires contiguous memory, which I believe Xen does
> not provide to the guest. I would have thought, however, that the
> addressable memory space that an application sees can be mapped to
> appear contiguous?

--
Rob Shepherd BEng PhD - Director / Senior Engineer - DataCymru Ltd
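On the contiguity question: HotSpot reserves the whole -Xmx heap as a single
contiguous range of *virtual* address space at startup, whereas JRockit
reportedly can run with a split heap; Xen's discontiguous machine memory
should not matter here, since the guest kernel presents the process with a
contiguous virtual view. A sketch for inspecting this on a running process
(`$$`, this shell, is a placeholder for the JVM's pid):

```shell
# Sketch: list a process's anonymous mappings from /proc/<pid>/maps.
# Anonymous regions carry no pathname (fewer than 6 fields per line); on
# a HotSpot JVM the reserved heap appears among the largest of these, as
# one contiguous virtual range. $$ stands in for the JVM pid.
pid=$$
awk 'NF < 6 { print $1 }' /proc/$pid/maps | head
```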