Attached is a patch to enable Xen VMDq (AKA Netchannel2 vmq) for the ixgbe driver. This is intended for testing and development purposes and should not be considered production-quality code.

Please note that this does NOT apply to the Xen Linux kernel; it applies to the ixgbe 1.3.47 release, available from http://sourceforge.net/projects/e1000/. You'll obviously also need an Intel 82598-based 10 Gigabit network card and some sort of link partner. This will build against the current Netchannel2 source available from http://xenbits.xen.org/ext/netchannel2. You'll need to enable "Net channel 2 support for multi-queue devices" in your kernel config.

To enable VMDq functionality, load the driver with the command-line parameter VMDQ=<num queues>, as in:

$ insmod ~/ixgbe-1.3.47/src/ixgbe.ko VMDQ=8

You can then set up PV domains to use the device by modifying your VM configuration file from

  vif = [ '<whatever>' ]

to

  vif2 = [ 'pdev=<netdev>' ]

where <netdev> is the interface name for your 82598 board, e.g. peth0 in dom0.

Known issues (at least, known by me):
1) Must manually attach the backend device to the bridge after starting the domU VM. Netchannel2 backend devices show up as ethNN, not vifN.M, so the scripts don't automatically attach the interface. Once your VM starts, run "ifconfig -a" to see which new interface got added, then use "brctl addif" to add this new interface to the bridge (a worked example follows this message).
2) No broadcast replication. This is a big one. Incoming broadcasts will ONLY go to dom0. This means that your VMs can send ARP requests and initiate IP sessions to outside machines, but outside machines cannot initiate connections, because the ARP requests don't reach the domU VMs.
3) No loopback. VMs cannot communicate with other VMs (including dom0) on the same machine.

Once I get this out, I'll start working on a proper backport of the driver into the Xen kernel (2.6.18.8) tree. I'll remove as much of the compatibility cruft as is prudent and properly integrate it into the Kbuild machinery. When that's done, I'll send a complete patchset to this list, including Signed-off-by lines, which can then be checked in to Mercurial.

Please review and comment, and if possible test.

Thanks,
Mitch
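For convenience, the dom0 workflow described above (driver load, the vif2 guest config line, and the manual bridge attach from known issue 1) amounts to roughly the following sketch. The bridge name xenbr0 and the backend interface name eth2 are placeholders only; check "ifconfig -a" and "brctl show" on your own system for the real names.

  # load the driver with 8 VMDq queues
  $ insmod ~/ixgbe-1.3.47/src/ixgbe.ko VMDQ=8

  # in the guest configuration file, replace the vif line with a vif2 line, e.g.:
  #   vif2 = [ 'pdev=peth0' ]

  # after the domU starts, find the newly added ethNN backend interface
  $ ifconfig -a

  # and attach it to the bridge by hand (names are examples only)
  $ brctl addif xenbr0 eth2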
Whoops! Attached the wrong patch file. Please use this one.

Sorry for the confusion.

-Mitch
Mitch,

It seems this last patch is not working properly: I get a kernel panic when the driver module loads (see the panic message below). Apparently the driver is trying to allocate guest memory (vmq_alloc_skb()) before the queue has been allocated to a guest (there is no guest running when the ixgbe module is loaded). This is probably something easy to fix.

I am able to use the previous version of the patch (version 1.3.31.3, attached) with no problem. I am not sure what changed from version 1.3.31.3 to this new version 1.3.47. I am also attaching the buggy patch for version 1.3.47 that I am using, so you can verify whether I am using the right patch ...

Could you please take a look at this?

Thanks

Renato

====================================

Unable to handle kernel paging request at 0000000000007800
RIP: [<ffffffff804600a4>] vmq_alloc_skb+0x64/0x1f0
PGD 79807067 PUD 79710067 PMD 0
Oops: 0000 [1] SMP
CPU 0
Modules linked in: video thermal fan button battery asus_acpi ac ixgbe
Pid: 0, comm: swapper Not tainted 2.6.18.8-xen0 #73
RIP: e030:[<ffffffff804600a4>] [<ffffffff804600a4>] vmq_alloc_skb+0x64/0x1f0
RSP: e02b:ffffffff80793ca0 EFLAGS: 00010206
RAX: ffff88007acc5dd8 RBX: 0000000000000000 RCX: 0000000000080000
RDX: ffff88007b5d7740 RSI: 0000000000000001 RDI: ffff88007a450000
RBP: ffffffff80793cd0 R08: 0000000000000001 R09: ffff88007a450c80
R10: 000000000000003f R11: 000000000000012c R12: 0000000000007800
R13: 0000000000000000 R14: 0000000000000500 R15: 00000000000005f4
FS: 00002ba4a6b1eda0(0000) GS:ffffffff80737000(0000) knlGS:0000000000000000
CS: e033 DS: 0000 ES: 0000
Process swapper (pid: 0, threadinfo ffffffff8074c000, task ffffffff8064d440)
Stack: ffff88007a450000 ffffc200118f0000 0000000000000000 0000000000000000
 ffff88007b279070 ffff88007a450500 ffffffff80793d30 ffffffff88000d2f
 000003ff80793d50 ffff880075df0000 ffff88007a450000 ffff88007fe90800
Call Trace:
 <IRQ> [<ffffffff88000d2f>] :ixgbe:ixgbe_alloc_rx_buffers+0x15f/0x2e0
 [<ffffffff8800320b>] :ixgbe:ixgbe_clean_rx_irq+0x9eb/0xaa0
 [<ffffffff8800798e>] :ixgbe:ixgbe_clean_rxonly_many+0xbe/0x210
 [<ffffffff8800de3e>] :ixgbe:__kc_adapter_clean+0x2e/0x50
 [<ffffffff8052eba4>] net_rx_action+0xc4/0x1c0
 [<ffffffff80239eec>] __do_softirq+0x9c/0x140
 [<ffffffff8020b604>] call_softirq+0x1c/0x28
 [<ffffffff8020d7cc>] do_softirq+0x6c/0x100
 [<ffffffff80239d48>] irq_exit+0x48/0x50
 [<ffffffff80430b92>] evtchn_do_upcall+0x232/0x250
 [<ffffffff8020b13a>] do_hypervisor_callback+0x1e/0x2c
 <EOI> [<ffffffff802063aa>] hypercall_page+0x3aa/0x1000
 [<ffffffff802063aa>] hypercall_page+0x3aa/0x1000
 [<ffffffff8020eed2>] raw_safe_halt+0xc2/0xf0
 [<ffffffff80209b15>] xen_idle+0x75/0x90
 [<ffffffff8020926a>] cpu_idle+0xba/0xe0
 [<ffffffff802073b6>] rest_init+0x26/0x30
 [<ffffffff807568f5>] start_kernel+0x265/0x270
 [<ffffffff8075623d>] _sinittext+0x23d/0x250

Code: 4d 39 a6 00 73 00 00 4d 8d ae e8 72 00 00 0f 84 58 01 00 00
RIP [<ffffffff804600a4>] vmq_alloc_skb+0x64/0x1f0
 RSP <ffffffff80793ca0>
CR2: 0000000000007800
<0>Kernel panic - not syncing: Aiee, killing interrupt handler!
(XEN) Domain 0 crashed: 'noreboot' set - not rebooting.
I'll take a look. I've been running this code, so I (obviously) didn't see any failure.

They've just updated ixgbe on SourceForge, so I'm spinning a new patch that will work with that driver. I'll probably get that out tomorrow. I'll double-check to make sure we're not allocating memory before the queue is activated.

-Mitch
Maybe you are not running in VMDq mode. Do you see the following message when you run "dmesg" in dom0?

  "Netchannel2 using vmq mode for guest n"

I just noticed that the tip of the current netchannel2 tree does not include the modifications from a previous changeset. I suspect this may have been lost in a merge with the latest Xen code. Without this missing code, MSI is disabled by Xen, which causes VMDq to be disabled. I needed to apply this patch (attached) manually to enable MSI and be able to run in VMDq mode.

Please apply the patch and check whether your code still runs without problems.

Thanks

Renato
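For reference, a quick way to perform the check Renato describes is sketched below. The grep patterns and the PCI bus address are illustrative; the dmesg line to look for is the one quoted above.

  # dom0: confirm the backend is actually using vmq mode for the guest
  $ dmesg | grep -i "vmq mode"

  # optionally confirm that MSI/MSI-X ended up enabled on the 82598
  # (replace 07:00.0 with your card's bus address from lspci)
  $ lspci -vv -s 07:00.0 | grep -i msi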
> I just noticed the tip of the current netchannel2 does not include the modifications of a previous changeset. I suspect this may have been lost in a merge with latest xen code.

Oops, sorry about that. I've (re-)applied the patch to tip and pushed it out.

Steven.
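To pick up the re-applied changeset, refreshing an existing clone of the netchannel2 tree should be enough. A minimal sketch follows, assuming a local clone made from http://xenbits.xen.org/ext/netchannel2; the directory name is only an example.

  # update the local netchannel2 clone and look for the re-applied MSI changeset
  $ cd netchannel2
  $ hg pull -u
  $ hg log -l 3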