Check the xen log files for more info:
/var/log/xend-debug.log
/var/log/xend.log
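If you just want to pull the interesting bits out of those logs, a quick
scan like the sketch below does the job (plain Python, nothing xen-specific;
the paths are the stock FC5 ones and the keywords are just my guess at
what's worth flagging):

  #!/usr/bin/env python
  # Rough scan of the xend logs for recent errors -- adjust paths and
  # keywords to taste; this is just a convenience, not part of the xen tools.
  import re

  LOGS = ["/var/log/xend.log", "/var/log/xend-debug.log"]
  PATTERN = re.compile(r"error|fail|traceback|crash", re.IGNORECASE)

  for path in LOGS:
      try:
          lines = open(path).readlines()
      except IOError:
          print("cannot read %s" % path)
          continue
      # Only the tail matters -- the most recent domU start attempt.
      for line in lines[-200:]:
          if PATTERN.search(line):
              print("%s: %s" % (path, line.rstrip()))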
You don't say which architecture you're running... if it's i386, it could
be a PAE issue. More recent FC5 xen kernels have PAE turned on by
default, but the older ones don't, causing it to barf when trying to
start a non-PAE kernel inside a PAE dom0.
Although I think from memory this only affects the kernel-xen package,
and not the kernel-xen0/kernel-xenU packages (unsure now... I've moved to
FC6 since test2).
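If you want to check quickly, /proc/cpuinfo tells you whether the box can
do PAE at all, and the /boot/config-* file Fedora ships tells you whether
a particular kernel was built with it. Something like the sketch below
works (config path and CONFIG name are from memory -- substitute your own
kernel version):

  #!/usr/bin/env python
  # Rough PAE check for the mismatch described above.
  import os

  # 'pae' appears in the flags line of /proc/cpuinfo on PAE-capable CPUs.
  cpuinfo = open("/proc/cpuinfo").read()
  print("cpu has pae flag: %s" % ("pae" in cpuinfo.split()))

  # Fedora installs the build config as /boot/config-<version>; this is
  # the xenU version from this thread -- substitute whatever you run.
  config = "/boot/config-2.6.17-1.2187_FC5xenU"
  if os.path.exists(config):
      pae = "CONFIG_X86_PAE=y" in open(config).read()
      print("%s built with PAE: %s" % (config, pae))
  else:
      print("no %s here -- run this inside the domU instead" % config)

That should make any PAE/non-PAE mismatch between the dom0 and the domU
kernel obvious either way.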
Either way, the xen log files will be a good source to start figuring
out what's up.
Adrian Chadd wrote:
> I've recently upgraded a xen-3 FC5 dom0 host to the latest Xen packages
> (2187_FC5xen0 and xenU), and this happened in a domU shortly after boot:
>
> ------------[ cut here ]------------
> kernel BUG at net/core/dev.c:1206!
> invalid opcode: 0000 [#1]
> SMP
> Modules linked in: xt_tcpudp iptable_mangle iptable_nat ip_nat ip_conntrack nfnetlink iptable_filter ip_tables ipv6 x_tables xennet dm_snapshot dm_zero dm_mirror dm_mod raid1
> CPU: 0
> EIP: 0061:[<c055821a>] Not tainted VLI
> EFLAGS: 00010297 (2.6.17-1.2187_FC5xenU #1)
> EIP is at skb_gso_segment+0x29/0xc9
> eax: 00000000 ebx: c7849ec4 ecx: 00050003 edx: c05f7700
> esi: c7849ec4 edi: 00000008 ebp: c7eb8000 esp: c0651b84
> ds: 007b es: 007b ss: 0069
> Process swapper (pid: 0, threadinfo=c0650000 task=c05f2800)
> Stack: <0>00000001 c7849ec4 c69ed300 c055938b c7849ec4 00050003 00000001 c7eb8000
>        c7849ec4 c7eb8180 00000000 c0564e1e c7849ec4 c7eb8000 c12a7400 00000000
>        c7eb8000 c0650000 c7849ec4 c055af7e c7eb8000 c7d8bcf4 c7d8bd14 c69ed2cc
> Call Trace:
>  <c055938b> dev_hard_start_xmit+0x174/0x203  <c0564e1e> __qdisc_run+0xe0/0x19a
>  <c055af7e> dev_queue_xmit+0x1ce/0x2cc  <c0575232> ip_output+0x1b6/0x1ec
>  <c0574ace> ip_queue_xmit+0x374/0x3b3  <c05addbf> _spin_lock_irqsave+0x22/0x27
>  <c05adebd> _spin_unlock_irqrestore+0x9/0x31  <c042146a> __mod_timer+0x96/0x9e
>  <c05821f3> tcp_transmit_skb+0x5d2/0x602  <c0583b84> __tcp_push_pending_frames+0x6b7/0x789
>  <c057f9b0> tcp_data_queue+0x518/0x97f  <c05814f4> tcp_rcv_established+0x60d/0x695
>  <c058653c> tcp_v4_do_rcv+0x23/0x2ce  <c0588a86> tcp_v4_rcv+0x8ee/0x964
>  <c05708ff> ip_local_deliver+0x58/0x1fd  <c05709fe> ip_local_deliver+0x157/0x1fd
>  <c057086d> ip_rcv+0x3e9/0x423  <c055901f> netif_receive_skb+0x21a/0x298
>  <c908fca9> netif_poll+0x8b0/0xae1 [xennet]  <c055ac7f> net_rx_action+0xcd/0x1fe
>  <c041d6db> __do_softirq+0x70/0xef  <c041d79a> do_softirq+0x40/0x67
>  <c040665f> do_IRQ+0x1f/0x25  <c051a159> evtchn_do_upcall+0x66/0x9f
>  <c0404d79> hypervisor_callback+0x3d/0x48  <c0407aad> safe_halt+0x84/0xa7
>  <c0402bde> xen_idle+0x46/0x4e  <c0402cfd> cpu_idle+0x94/0xad
>  <c0655772> start_kernel+0x346/0x34c
>
> Has anyone seen this at all? Any ideas where to start digging for clues?
>
>
>
> Adrian
>
>
>
> --
> Fedora-xen mailing list
> Fedora-xen@redhat.com
> https://www.redhat.com/mailman/listinfo/fedora-xen
>