Philippe Combes
2010-Dec-17 10:57 UTC
[Xen-users] Yet another question about multiple NICs
Dear Xen users,

I have tried for weeks to have a domU connected to both NICs of the dom0, each in a different LAN. Google gave me plenty of tutorials and HowTos on the subject, including the Xen and Debian Xen wikis, of course. It seems so simple! Some advise using a simple wrapper around /etc/xen/scripts/network-bridge, others advise leaving it aside and setting up the bridges by hand. But there must be something obvious that I am missing, something so obvious that no manual needs to explain it, because I have tried every solution and variant I found on the Internet without success.

My dom0 first ran CentOS 5.5 with Xen 3.0.3. I tried to have eth1 up and configured both in dom0 and in a domU. I never succeeded (details below), so I followed the advice of colleagues who suggested my issues might come from running a Debian lenny domU on a CentOS dom0 (because the domU used the CentOS kernel instead of the more recent Debian lenny one). So now my dom0 runs an up-to-date Debian lenny with Xen 3.2.1, but I see the same behaviour when trying to get two interfaces into a domU.

As I said, I have tried several configurations, but let's stick for now to one based on the network-bridge script.

In /etc/network/interfaces:

    auto eth0
    iface eth0 inet dhcp
    auto eth1
    iface eth1 inet dhcp

In /etc/xen/xend-config.sxp:

    (network-script network-bridge-wrapper)

/etc/xen/scripts/network-bridge-wrapper:

    #!/bin/bash
    dir=$(dirname "$0")
    "$dir/network-bridge" "$@" vifnum=0 netdev=eth0 bridge=eth0
    "$dir/network-bridge" "$@" vifnum=1 netdev=eth1 bridge=eth1

In the domU configuration file:

    vif = [ 'mac=00:16:3E:55:AF:C2,bridge=eth0',
            'mac=00:16:3E:55:AF:C3,bridge=eth1' ]

With this configuration, both eth<i> bridges are configured and usable: I can ping a machine on each LAN through the corresponding interface.

When I start a domU, however, the dom0 and the domU are alternately connected to the LAN of eth1, but mutually exclusively.
In other words, the dom0 is connected to the LAN on eth1 for a couple of minutes, but not the domU; then, with no apparent cause other than inactivity on the interface, the situation reverses: the domU is connected, but not the dom0. After another couple of minutes of inactivity it switches back, and so on. I noticed that the 'switch' does not occur if the side that is currently connected keeps a continuous ping running to another machine on the LAN.

This happened with CentOS too, but I did not try anything else under that distro. Under Debian, I tried bringing dom0's eth1 down (no IP), but then the domU's eth1 does not work at all, not even periodically.

I was pretty sure the issue came from the way my bridges were configured, that there was something different about the dom0 primary interface, etc. Hence I tried all the solutions I could find on the Internet, without success.

I then made a simple test. Instead of binding domU's eth<i> to dom0's eth<i>, I bound it to dom0's eth<1-i>: I changed

    vif = [ 'mac=00:16:3E:55:AF:C2,bridge=eth0',
            'mac=00:16:3E:55:AF:C3,bridge=eth1' ]

to

    vif = [ 'mac=00:16:3E:55:AF:C3,bridge=eth1',
            'mac=00:16:3E:55:AF:C2,bridge=eth0' ]

I was very surprised to see that dom0's eth0, domU's eth0 and dom0's eth1 all worked normally, but not domU's eth1. There was no alternation between dom0's eth0 and domU's eth1 there, probably because there is always some kind of activity on dom0's eth0 (NFS, monitoring).

So it seems that my issue is NOT related to the dom0 bridges, but to the order of the vifs in the domU description. However, in the xend.log file, there is no difference in the way the two vifs are processed.
[2010-12-16 14:51:27 3241] INFO (XendDomainInfo:1514) createDevice: vif : {'bridge': 'eth1', 'mac': '00:16:3E:55:AF:C2', 'uuid': '9dbf60c7-d785-96e2-b036-dc21b669735c'}
[2010-12-16 14:51:27 3241] DEBUG (DevController:118) DevController: writing {'mac': '00:16:3E:55:AF:C2', 'handle': '0', 'protocol': 'x86_64-abi', 'backend-id': '0', 'state': '1', 'backend': '/local/domain/0/backend/vif/2/0'} to /local/domain/2/device/vif/0.
[2010-12-16 14:51:27 3241] DEBUG (DevController:120) DevController: writing {'bridge': 'eth1', 'domain': 'inpiftest', 'handle': '0', 'uuid': '9dbf60c7-d785-96e2-b036-dc21b669735c', 'script': '/etc/xen/scripts/vif-bridge', 'mac': '00:16:3E:55:AF:C2', 'frontend-id': '2', 'state': '1', 'online': '1', 'frontend': '/local/domain/2/device/vif/0'} to /local/domain/0/backend/vif/2/0.
[2010-12-16 14:51:27 3241] INFO (XendDomainInfo:1514) createDevice: vif : {'bridge': 'eth0', 'mac': '00:16:3E:55:AF:C3', 'uuid': '1619a9f8-8113-2e3c-e566-9ca9552a3a93'}
[2010-12-16 14:51:27 3241] DEBUG (DevController:118) DevController: writing {'mac': '00:16:3E:55:AF:C3', 'handle': '1', 'protocol': 'x86_64-abi', 'backend-id': '0', 'state': '1', 'backend': '/local/domain/0/backend/vif/2/1'} to /local/domain/2/device/vif/1.
[2010-12-16 14:51:27 3241] DEBUG (DevController:120) DevController: writing {'bridge': 'eth0', 'domain': 'inpiftest', 'handle': '1', 'uuid': '1619a9f8-8113-2e3c-e566-9ca9552a3a93', 'script': '/etc/xen/scripts/vif-bridge', 'mac': '00:16:3E:55:AF:C3', 'frontend-id': '2', 'state': '1', 'online': '1', 'frontend': '/local/domain/2/device/vif/1'} to /local/domain/0/backend/vif/2/1.

There I am stuck, and it is very frustrating. It looks so simple in the tutorials that I have clearly missed something obvious, but what? Any clue, any track to follow, will be truly welcome.
Please do not hesitate to ask me for relevant logs, or for any experiment you think would be useful.

Thanks for your help,
Philippe.

_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
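The bridge layout described above can be sanity-checked from dom0: after the wrapper has run, each bridge should enslave exactly its physical port (peth0 or peth1) plus one vifD.N per running domU. A quick sketch (it assumes the bridge names eth0/eth1 used above, and reads the kernel's bridge-port directory, which reports the same data as `brctl show`):

```shell
#!/bin/sh
# List the ports enslaved to each Xen bridge via sysfs (same information
# as `brctl show`). Expected here: peth0 + vifD.0 on bridge eth0, and
# peth1 + vifD.1 on bridge eth1. Prints a note when a name is not a bridge.
for br in eth0 eth1; do
    echo "Ports on bridge $br:"
    ls "/sys/class/net/$br/brif" 2>/dev/null || echo "  ($br is not an existing bridge here)"
done
```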
Felix Kuperjans
2010-Dec-17 11:11 UTC
Re: [Xen-users] Yet another question about multiple NICs
Hello Philippe,

Just some questions:
- Do you use a firewall in dom0 or domU?
- Are those two physical interfaces perhaps connected to the same physical network?
- Can you post the output of the following commands in both dom0 and domU when your setup has just started?

    # ip addr show
    # ip route show
    # iptables -nvL

And in dom0 only:

    # brctl show

Note that those commands require iproute2, bridge-utils (dom0 only) and iptables to be installed on the machine.

I can make some guesses right now, too:
- If the second question is answered with yes, you must use two physical networks, or you are creating a loop.
- Is your bridge really named the same as your network interface (i.e. both eth0), or is the network interface renamed? Perhaps something got confused there (ip addr will show it anyway).

Regards,
Felix
Simon Hobson
2010-Dec-17 12:03 UTC
Re: [Xen-users] Yet another question about multiple NICs
Felix Kuperjans wrote:
> - Is your bridge really named the same as your network interface (i.e.
> both eth0), or is the network interface renamed? Perhaps something got
> confused there (ip addr will show it anyway).

I don't know if it's a "Debian thing" or not, but my setup is the same: eth0 gets renamed to peth0, and a bridge called eth0 is created.

Philippe Combes:
> In /etc/xen/xend-config.sxp:
> (network-script network-bridge-wrapper)
> /etc/xen/scripts/network-bridge-wrapper:
> #!/bin/bash
> dir=$(dirname "$0")
> "$dir/network-bridge" "$@" vifnum=0 netdev=eth0 bridge=eth0
> "$dir/network-bridge" "$@" vifnum=1 netdev=eth1 bridge=eth1
> In domU configuration file:
> vif = [ 'mac=00:16:3E:55:AF:C2,bridge=eth0',
> 'mac=00:16:3E:55:AF:C3,bridge=eth1' ]

That looks just like mine.

This is just a bit of a hunch, but if you are still having the problem, can you do this: at the other network device, run "arp -an" and see what MAC address is reported for the device (dom0 or domU) that is pinging it. Do this twice, once when each of the two is the one working; the entries you are interested in are the ones for the IPs of your dom0 and domU.

Also, I assume you have checked that the dom0 and domU are getting different IPs, haven't you?

-- 
Simon Hobson

Visit http://www.magpiesnestpublishing.co.uk/ for books by acclaimed
author Gladys Hobson. Novels - poetry - short stories - ideal as
Christmas stocking fillers. Some available as e-books.
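Simon's `arp -an` check lends itself to a small helper. This is only a sketch: `mac_for()` parses `arp -an`-style text for a single IP, and the sample table below is illustrative, not captured from the hosts in this thread.

```shell
#!/bin/sh
# Pull the MAC that a neighbour's ARP table reports for one IP.
# `arp -an` lines look like:
#   ? (192.168.24.123) at 00:14:4f:40:ca:75 [ether] on eth0
mac_for() {
    ip="$1"
    arp_output="$2"
    printf '%s\n' "$arp_output" | awk -v ip="($ip)" '$2 == ip { print $4 }'
}

# Illustrative sample (not real data from the thread):
sample='? (192.168.24.123) at 00:14:4f:40:ca:75 [ether] on eth0
? (192.168.24.81) at 00:16:3e:55:af:c3 [ether] on eth0'

# Run once while dom0 works and once while the domU works, and compare:
mac_for 192.168.24.81 "$sample"
```

Comparing the two snapshots shows which MAC the remote host currently believes owns each IP, which is exactly the flapping Simon is probing for.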
Philippe Combes
2010-Dec-17 12:32 UTC
Re: [Xen-users] Yet another question about multiple NICs
Hi Felix,

After so long fighting alone with this, it gives some comfort to get so quick an answer. Thanks.

Felix Kuperjans wrote:
> just some questions:
> - Do you use a firewall in dom0 or domU?

No. Unless there is some hidden firewall in the default installation of Debian lenny :)

> - Are those two physical interfaces probably connected to the same
> physical network?

No. I wrote "each in a different LAN", and that is what I meant. To connect the two networks to one another, I would need a routing machine.

> - Can you post the outputs of the following commands in both dom0 and
> domU when your setup has just started:

In dom0...

$ ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: peth0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:14:4f:40:ca:74 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::214:4fff:fe40:ca74/64 scope link
       valid_lft forever preferred_lft forever
3: peth1: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 100
    link/ether 00:14:4f:40:ca:75 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::214:4fff:fe40:ca75/64 scope link
       valid_lft forever preferred_lft forever
4: eth2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
    link/ether 00:14:4f:40:ca:76 brd ff:ff:ff:ff:ff:ff
5: eth3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
    link/ether 00:14:4f:40:ca:77 brd ff:ff:ff:ff:ff:ff
6: vif0.0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff
7: veth0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
8: vif0.1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff
9: veth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
10: vif0.2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff
11: veth2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
12: vif0.3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff
13: veth3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
14: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 00:14:4f:40:ca:74 brd ff:ff:ff:ff:ff:ff
    inet 172.16.113.121/25 brd 172.16.113.127 scope global eth0
    inet6 fe80::214:4fff:fe40:ca74/64 scope link
       valid_lft forever preferred_lft forever
15: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 00:14:4f:40:ca:75 brd ff:ff:ff:ff:ff:ff
    inet 192.168.24.123/25 brd 192.168.24.127 scope global eth1
    inet6 fe80::214:4fff:fe40:ca75/64 scope link
       valid_lft forever preferred_lft forever
16: vif1.0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 32
    link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fcff:ffff:feff:ffff/64 scope link
       valid_lft forever preferred_lft forever
17: vif1.1: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 32
    link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fcff:ffff:feff:ffff/64 scope link
       valid_lft forever preferred_lft forever

$ ip route show
172.16.113.0/25 dev eth0 proto kernel scope link src 172.16.113.121
192.168.24.0/25 dev eth1 proto kernel scope link src 192.168.24.123
default via 192.168.24.125 dev eth1
default via 172.16.113.126 dev eth0

I tried to remove the first 'default' route, with route del default..., but nothing changed.

$ iptables -nvL
Chain INPUT (policy ACCEPT 744 packets, 50919 bytes)
 pkts bytes target  prot opt in  out  source     destination

Chain FORWARD (policy ACCEPT 22 packets, 1188 bytes)
 pkts bytes target  prot opt in  out  source     destination
    3   219 ACCEPT  all  --  *   *   0.0.0.0/0  0.0.0.0/0   PHYSDEV match --physdev-in vif1.0

Chain OUTPUT (policy ACCEPT 582 packets, 76139 bytes)
 pkts bytes target  prot opt in  out  source     destination

$ brctl show
bridge name  bridge id          STP enabled  interfaces
eth0         8000.00144f40ca74  no           peth0
                                             vif1.0
eth1         8000.00144f40ca75  no           peth1
                                             vif1.1

In the dom1...

# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 1000
    link/ether 00:16:3e:55:af:c2 brd ff:ff:ff:ff:ff:ff
    inet 172.16.113.81/25 brd 172.16.113.127 scope global eth0
    inet6 fe80::216:3eff:fe55:afc2/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 1000
    link/ether 00:16:3e:55:af:c3 brd ff:ff:ff:ff:ff:ff
    inet 192.168.24.81/25 brd 192.168.24.127 scope global eth1
    inet6 fe80::216:3eff:fe55:afc3/64 scope link
       valid_lft forever preferred_lft forever

# ip route show
172.16.113.0/25 dev eth0 proto kernel scope link src 172.16.113.81
192.168.24.0/25 dev eth1 proto kernel scope link src 192.168.24.81

# iptables -nvL
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target  prot opt in  out  source     destination

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target  prot opt in  out  source     destination

Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target  prot opt in  out  source     destination

I could not see anything weird in these outputs.
Can you?

> - Is your bridge really named the same as your network interface (i.e.
> both eth0), or is the network interface renamed? Perhaps something got
> confused there (ip addr will show it anyway).

In Xen 3.2.1, the network-bridge script renames eth<i> to peth<i>, brings it down, and sets up a bridge with the name eth<i>.

Regards,
Philippe
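One detail in the dom0 outputs above: `ip route show` lists two "default via" entries, one installed by each DHCP-configured interface. Philippe notes that deleting one made no difference, so it is probably not the root cause, but duplicate default routes are worth cleaning up regardless. A sketch that emits an `ip route del` command for every default route after the first, fed with the routing table quoted above:

```shell
#!/bin/sh
# Print the commands needed to drop every default route except the first.
# The sample text is the dom0 routing table shown earlier in the thread.
extra_defaults() {
    printf '%s\n' "$1" | awk '/^default via/ { if (seen++) print "ip route del " $0 }'
}

routes='172.16.113.0/25 dev eth0 proto kernel scope link src 172.16.113.121
192.168.24.0/25 dev eth1 proto kernel scope link src 192.168.24.123
default via 192.168.24.125 dev eth1
default via 172.16.113.126 dev eth0'

extra_defaults "$routes"   # prints: ip route del default via 172.16.113.126 dev eth0
```

A longer-term fix would be to stop one of the two DHCP clients from installing a gateway at all (e.g. by configuring that interface statically without a gateway line), so only one default route ever exists.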
Philippe Combes
2010-Dec-17 12:45 UTC
Re: [Xen-users] Yet another question about multiple NICs
Simon Hobson wrote:
> That looks just like mine.

That's exactly why I'm banging my head against the wall :)

> This is just a bit of a hunch, but if you are still having the problem,
> can you do this :
>
> At the other network device, run "arp -an" and see what MAC address is
> reported for the device (Dom0 or DomU) that's pinging it. Do this twice,
> when one is working, and when the other is working - the entries you are
> interested in are the ones for the IPs of your Dom0 and DomU

I pinged both IPs from another machine on the LAN (the one the eth1 interfaces are connected to). As expected, the MAC address stored in the ARP table is that of the working device (physical or virtual).

> Also, I assume you have checked that the Dom0 and DomU are getting
> different IPs haven't you ?

Sure. And there is no other machine with the same IPs on the LAN. And I did not use the same IPs when the dom0 ran CentOS.

Thanks, and any idea is still welcome!
Philippe
Simon Hobson
2010-Dec-17 13:08 UTC
Re: [Xen-users] Yet another question about multiple NICs
I think the next thing I'd be doing is firing up wireshark (or rather its text-only brother, tshark).

On dom0, get the network working and ping another machine on the LAN. Fire up tshark on peth<n> and watch the traffic - you should see both the ping request and the reply.

Fire up a domU, and do the same ping - which I gather doesn't work. Keep the ping going from dom0. Keep watching the packet trace in dom0 - of interest here are things like: did the domU send an ARP request for the remote device? Did the remote device reply? Are the ping requests going out? Are the replies coming back? To the right MAC?

If you see requests going out but no reply, try firing up a packet sniffer on the remote machine and see whether the requests are reaching it.

Also, apart from the initial messages* when you fire up the domU, are there any other bridge-related messages in the logs?

* From memory, it should log:
  Interface added
  Interface going into learning mode
  Interface going into active mode

-- 
Simon Hobson
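Simon's capture procedure needs root and live traffic, so the sketch below only assembles and prints the tshark invocation rather than running it. The interface name peth1 matches the bridge port in this thread; 192.168.24.90 is a hypothetical stand-in for the remote test host.

```shell
#!/bin/sh
# Build a tshark command that watches ARP plus the test pings on the
# physical port behind the eth1 bridge. Run the printed command as root.
DEV=peth1                 # bridge port carrying the problem LAN
REMOTE=192.168.24.90      # hypothetical remote host being pinged
FILTER="arp or (icmp and host $REMOTE)"
echo "tshark -i $DEV -f \"$FILTER\""
# In the trace, check: does the domU's ARP who-has go out, does the
# remote host reply, and do the echo replies come back to the right MAC?
```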
Felix Kuperjans
2010-Dec-17 13:21 UTC
Re: [Xen-users] Yet another question about multiple NICs
Hi Philippe,

I forgot about Xen's renaming... The firewall rules do nothing special; they won't hurt anything. The IP addresses are also correct (on both sides), but the routes are probably not OK:
- The dom1 does not have a default route, so it will not be able to reach anything outside the two subnets (but it should reach anything inside them).
- It's interesting that the dom1's firewall output shows that no packets were processed, so maybe you didn't ping anything from the dom1 since the last reboot, or the firewall was reloaded, resetting its statistics...

Still no reason why you can't ping local machines from the dom1 (and sometimes not even from dom0). Have you tried pinging each other, i.e. dom0 -> dom1 and vice versa?

The only remaining thing that could deny communication would be ARP, so the output of

    # ip neigh show

on both machines *directly after* a ping would be nice (within a few seconds - use && and a time-terminated ping).

Regards,
Felix
> -- > $ ip addr show > 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN > link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 > inet 127.0.0.1/8 scope host lo > inet6 ::1/128 scope host > valid_lft forever preferred_lft forever > 2: peth0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc > pfifo_fast state UP qlen 1000 > link/ether 00:14:4f:40:ca:74 brd ff:ff:ff:ff:ff:ff > inet6 fe80::214:4fff:fe40:ca74/64 scope link > valid_lft forever preferred_lft forever > 3: peth1: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc > pfifo_fast state UP qlen 100 > link/ether 00:14:4f:40:ca:75 brd ff:ff:ff:ff:ff:ff > inet6 fe80::214:4fff:fe40:ca75/64 scope link > valid_lft forever preferred_lft forever > 4: eth2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000 > link/ether 00:14:4f:40:ca:76 brd ff:ff:ff:ff:ff:ff > 5: eth3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000 > link/ether 00:14:4f:40:ca:77 brd ff:ff:ff:ff:ff:ff > 6: vif0.0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN > link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff > 7: veth0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN > link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff > 8: vif0.1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN > link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff > 9: veth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN > link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff > 10: vif0.2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN > link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff > 11: veth2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN > link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff > 12: vif0.3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN > link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff > 13: veth3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN > link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff > 14: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 
qdisc noqueue > state UNKNOWN > link/ether 00:14:4f:40:ca:74 brd ff:ff:ff:ff:ff:ff > inet 172.16.113.121/25 brd 172.16.113.127 scope global eth0 > inet6 fe80::214:4fff:fe40:ca74/64 scope link > valid_lft forever preferred_lft forever > 15: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue > state UNKNOWN > link/ether 00:14:4f:40:ca:75 brd ff:ff:ff:ff:ff:ff > inet 192.168.24.123/25 brd 192.168.24.127 scope global eth1 > inet6 fe80::214:4fff:fe40:ca75/64 scope link > valid_lft forever preferred_lft forever > 16: vif1.0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc > pfifo_fast state UNKNOWN qlen 32 > link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff > inet6 fe80::fcff:ffff:feff:ffff/64 scope link > valid_lft forever preferred_lft forever > 17: vif1.1: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc > pfifo_fast state UNKNOWN qlen 32 > link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff > inet6 fe80::fcff:ffff:feff:ffff/64 scope link > valid_lft forever preferred_lft forever > -- > > -- > $ ip route show > 172.16.113.0/25 dev eth0 proto kernel scope link src 172.16.113.121 > 192.168.24.0/25 dev eth1 proto kernel scope link src 192.168.24.123 > default via 192.168.24.125 dev eth1 > default via 172.16.113.126 dev eth0 > > I tried to remove the first ''default'' route, with route del > default..., but nothing changed. 
> -- > > -- > $ iptables -nvL > Chain INPUT (policy ACCEPT 744 packets, 50919 bytes) > pkts bytes target prot opt in out source destination > > Chain FORWARD (policy ACCEPT 22 packets, 1188 bytes) > pkts bytes target prot opt in out source destination > 3 219 ACCEPT all -- * * 0.0.0.0/0 > 0.0.0.0/0 PHYSDEV match --physdev-in vif1.0 > > Chain OUTPUT (policy ACCEPT 582 packets, 76139 bytes) > pkts bytes target prot opt in out source destination > -- > > -- > $ brctl show > bridge name bridge id STP enabled interfaces > eth0 8000.00144f40ca74 no peth0 > vif1.0 > eth1 8000.00144f40ca75 no peth1 > vif1.1 > -- > > > In the dom1... > -- > # ip addr show > 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN > link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 > inet 127.0.0.1/8 scope host lo > inet6 ::1/128 scope host > valid_lft forever preferred_lft forever > 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast > state UNKNOWN qlen 1000 > link/ether 00:16:3e:55:af:c2 brd ff:ff:ff:ff:ff:ff > inet 172.16.113.81/25 brd 172.16.113.127 scope global eth0 > inet6 fe80::216:3eff:fe55:afc2/64 scope link > valid_lft forever preferred_lft forever > 3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast > state UNKNOWN qlen 1000 > link/ether 00:16:3e:55:af:c3 brd ff:ff:ff:ff:ff:ff > inet 192.168.24.81/25 brd 192.168.24.127 scope global eth1 > inet6 fe80::216:3eff:fe55:afc3/64 scope link > valid_lft forever preferred_lft forever > -- > > -- > # ip route show > 172.16.113.0/25 dev eth0 proto kernel scope link src 172.16.113.81 > 192.168.24.0/25 dev eth1 proto kernel scope link src 192.168.24.81 > -- > > -- > # iptables -nvL > Chain INPUT (policy ACCEPT 0 packets, 0 bytes) > pkts bytes target prot opt in out source destination > > Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) > pkts bytes target prot opt in out source destination > > Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) > pkts bytes target prot opt in out source destination 
> --
>
> I could not see anything weird in these outputs. Can you?
>
>> - Is your bridge really named equally to your network interface (i.e.
>> both eth0) or is the network interface renamed? Probably something got
>> confused there (ip addr will show it anyway).
>
> In Xen 3.2.1, the network-bridge script renames eth<i> to peth<i>,
> brings it down and sets up a bridge with the name eth<i>.
>
> Regards,
> Philippe
>
>> On 17.12.2010 11:57, Philippe Combes wrote:
>>> Dear Xen users,
>>>
>>> I have tried for weeks to have a domU connected to both NICs of the
>>> dom0, each in a different LAN. Google gave me plenty of tutorials
>>> and HowTos about the subject, including the Xen and the Debian Xen
>>> wikis, of course. It seems so simple!
>>> Some advise to use a simple wrapper around /etc/xen/network-bridge,
>>> others to leave it aside and to set up bridges on my own.
>>> But there must be something obvious that I miss, something so obvious
>>> that no manual needs to explain it, because I tried every solution and
>>> variant I found on the Internet with no success.
>>>
>>> My dom0 first ran CentOS 5.5, Xen 3.0.3. I tried to have eth1 up and
>>> configured both in dom0 and in a domU. I never succeeded (details
>>> below), so I followed the advice of some colleagues who told me my
>>> issues might have come from running a Debian lenny domU on a CentOS
>>> dom0 (because the domU used the CentOS kernel instead of the one of
>>> Debian lenny, which is more recent).
>>>
>>> So now my dom0 runs an up-to-date Debian lenny, with Xen 3.2.1, but I
>>> have the same behaviour when trying to get two interfaces in a domU.
>>> As I said before, I tried several configurations, but let's stick
>>> for now to one based on the network-bridge script.
>>> In /etc/network/interfaces:
>>>   auto eth0
>>>   iface eth0 inet dhcp
>>>   auto eth1
>>>   iface eth1 inet dhcp
>>> In /etc/xen/xend-config.sxp:
>>>   (network-script network-bridge-wrapper)
>>> /etc/xen/scripts/network-bridge-wrapper:
>>>   #!/bin/bash
>>>   dir=$(dirname "$0")
>>>   "$dir/network-bridge" "$@" vifnum=0 netdev=eth0 bridge=eth0
>>>   "$dir/network-bridge" "$@" vifnum=1 netdev=eth1 bridge=eth1
>>> In domU configuration file:
>>>   vif = [ 'mac=00:16:3E:55:AF:C2,bridge=eth0',
>>>           'mac=00:16:3E:55:AF:C3,bridge=eth1' ]
>>>
>>> With this configuration, I get both bridges eth<i> configured and
>>> usable: I mean I can ping a machine of each LAN through the
>>> corresponding interface.
>>>
>>> When I start a domU however, the dom0 and the domU are alternately
>>> connected to the LAN of eth1, but mutually exclusively. In other
>>> words, the dom0 is connected to the LAN on eth1 for a couple of
>>> minutes, but not the domU, and then, with no other reason than
>>> inactivity on the interface, it switches to the reverse situation:
>>> domU connected, not the dom0. After another couple of minutes of
>>> inactivity, back to the first situation, and so on...
>>> I noticed that the 'switch' does not occur if the one that is
>>> currently connected performs a continuous ping on another machine of
>>> the LAN.
>>>
>>> This happened with CentOS too. But I did not try anything else
>>> under that distro. Under Debian, I tried to have dom0's eth1 down (no
>>> IP), but then the domU's eth1 does not work at all, not even
>>> periodically.
>>>
>>> I was pretty sure the issue came from the way my bridges were
>>> configured, that there was something different with the dom0 primary
>>> interface, etc. Hence I tried all solutions I could find on the
>>> Internet with no success.
>>> I then made a simple test.
Instead of binding domU''s eth<i> to dom0''s >>> eth<i>, I bound it to dom0''s eth<1-i>: I changed >>> vif = [ ''mac=00:16:3E:55:AF:C2,bridge=eth0'', >>> ''mac=00:16:3E:55:AF:C3,bridge=eth1'' ] >>> to >>> vif = [ ''mac=00:16:3E:55:AF:C3,bridge=eth1'', >>> ''mac=00:16:3E:55:AF:C2,bridge=eth0'' ] >>> I was very surprised to see that dom0''s eth0, domU''s eth0 and dom0''s >>> eth1 were all working normally, not domU''s eth1. There was no >>> alternance between dom0''s eth0 and domU''s eth1 there, probably because >>> there is always some kind of activity on dom0''s eth0 (NFS, monitoring). >>> >>> So it seems that my issue is NOT related to the dom0 bridges, but to >>> the order of the vifs in the domU description. However, in the >>> xend.log file, there is no difference in the way both vifs are >>> processed. >>> [2010-12-16 14:51:27 3241] INFO (XendDomainInfo:1514) createDevice: >>> vif : {''bridge'': ''eth1'', ''mac'': ''00:16:3E:55:AF:C2 >>> '', ''uuid'': ''9dbf60c7-d785-96e2-b036-dc21b669735c''} >>> [2010-12-16 14:51:27 3241] DEBUG (DevController:118) DevController: >>> writing {''mac'': ''00:16:3E:55:AF:C2'', ''handle'': ''0'' >>> , ''protocol'': ''x86_64-abi'', ''backend-id'': ''0'', ''state'': ''1'', >>> ''backend'': ''/local/domain/0/backend/vif/2/0''} to /local/d >>> omain/2/device/vif/0. >>> [2010-12-16 14:51:27 3241] DEBUG (DevController:120) DevController: >>> writing {''bridge'': ''eth1'', ''domain'': ''inpiftest'', >>> ''handle'': ''0'', ''uuid'': ''9dbf60c7-d785-96e2-b036-dc21b669735c'', >>> ''script'': ''/etc/xen/scripts/vif-bridge'', ''mac'': ''00:16: >>> 3E:55:AF:C2'', ''frontend-id'': ''2'', ''state'': ''1'', ''online'': ''1'', >>> ''frontend'': ''/local/domain/2/device/vif/0''} to /local/d >>> omain/0/backend/vif/2/0. 
>>> [2010-12-16 14:51:27 3241] INFO (XendDomainInfo:1514) createDevice: >>> vif : {''bridge'': ''eth0'', ''mac'': ''00:16:3E:55:AF:C3 >>> '', ''uuid'': ''1619a9f8-8113-2e3c-e566-9ca9552a3a93''} >>> [2010-12-16 14:51:27 3241] DEBUG (DevController:118) DevController: >>> writing {''mac'': ''00:16:3E:55:AF:C3'', ''handle'': ''1'' >>> , ''protocol'': ''x86_64-abi'', ''backend-id'': ''0'', ''state'': ''1'', >>> ''backend'': ''/local/domain/0/backend/vif/2/1''} to /local/d >>> omain/2/device/vif/1. >>> [2010-12-16 14:51:27 3241] DEBUG (DevController:120) DevController: >>> writing {''bridge'': ''eth0'', ''domain'': ''inpiftest'', >>> ''handle'': ''1'', ''uuid'': ''1619a9f8-8113-2e3c-e566-9ca9552a3a93'', >>> ''script'': ''/etc/xen/scripts/vif-bridge'', ''mac'': ''00:16: >>> 3E:55:AF:C3'', ''frontend-id'': ''2'', ''state'': ''1'', ''online'': ''1'', >>> ''frontend'': ''/local/domain/2/device/vif/1''} to /local/d >>> omain/0/backend/vif/2/1. >>> >>> There I am stuck, and it is very frustrating. It looks so simple when >>> reading at tutos, that I clearly missed something obvious, but what ? >>> Any clue, any track to follow down will be welcome, truly. Please do >>> not hesitate to ask me for relevant logs, or for any experiment you >>> would think useful. >>> >>> Thanks for your help, >>> Philippe. >>> >>> _______________________________________________ >>> Xen-users mailing list >>> Xen-users@lists.xensource.com >>> http://lists.xensource.com/xen-users >>> >> >> _______________________________________________ >> Xen-users mailing list >> Xen-users@lists.xensource.com >> http://lists.xensource.com/xen-users > > _______________________________________________ > Xen-users mailing list > Xen-users@lists.xensource.com > http://lists.xensource.com/xen-users >_______________________________________________ Xen-users mailing list Xen-users@lists.xensource.com http://lists.xensource.com/xen-users
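Felix's "&& and a time-limited ping" suggestion earlier in this message can be written as a one-liner; the neighbour dump runs immediately after a successful ping, while the ARP entry is still fresh (address taken from the thread; a sketch only, since it needs the live network):

```shell
# Ping for at most 3 seconds, then dump the neighbour (ARP) table
# right away; && makes the dump run only if the ping succeeded.
ping -c 3 -w 3 192.168.24.125 && ip neigh show
```

If you also want the dump when the ping fails, use `;` instead of `&&` - an INCOMPLETE or FAILED neighbour entry is informative too.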
Fajar A. Nugraha
2010-Dec-17 15:55 UTC
Re: [Xen-users] Yet another question about multiple NICs
On Fri, Dec 17, 2010 at 7:45 PM, Philippe Combes
<Philippe.Combes@enseeiht.fr> wrote:
> Simon Hobson wrote:
>> At the other network device, run "arp -an" and see what MAC address is
>> reported for the device (Dom0 or DomU) that's pinging it. Do this twice,
>> when one is working, and when the other is working - the entries you are
>> interested in are the ones for the IPs of your Dom0 and DomU
>
> I pinged both IPs from another machine on the LAN (the one the eth1's
> are connected to). As expected, the MAC address stored in the ARP table
> is the one of the working device (physical or virtual).

Do you have access to the switch? It's possible that your switch/router (whatever your dom0 is connected to) only allows one MAC per port.

The easiest way to test this is to use a crossover cable from one of dom0's interfaces to a PC/notebook, and see if both dom0 and domU can communicate with it.

-- 
Fajar
_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
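On the notebook end of Fajar's crossover-cable test, you can watch which source MACs actually arrive; seeing both confirms the one-MAC-per-port theory against the switch. A sketch (the notebook interface name eth0 is an assumption; the MAC prefixes come from this thread):

```shell
# On the PC/notebook connected via crossover cable: print the source
# MAC of every incoming ARP/ICMP frame. If the switch was the culprit,
# you should now see BOTH the dom0 MAC (00:14:4f:...) and the domU
# Xen MAC (00:16:3e:...) here.
tshark -i eth0 -f "arp or icmp" -T fields -e eth.src
```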
Thomas Jensen
2010-Dec-17 17:21 UTC
Re: [Xen-users] Yet another question about multiple NICs
Philippe,

I too struggled at first with multiple NICs in a Debian Lenny machine. The problem became a lot easier to fix once I set udev rules on the NICs. Otherwise, I noticed that the same NIC would come online with a different name (eth0, eth1, etc.) after a reboot. Once I set the udev rules, I was able to really get to the root of the networking problem.

I now have a three-NIC firewall virtualized through Xen. One NIC is a physical NIC passed to it through pciback.hide. The other two NICs in the firewall DomU are virtual interfaces.

Finally, I had found an article on the Debian site which provided some guidance on the Xen wrapper script when I was setting up my machine. However, the article had a typo or something in it which wasn't working for me. I remember posting a comment on the site which fixed the issue for me. I could try to find that article again and/or share the wrapper script that ended up working for my setup.

---
Tom Jensen | President
Digital Toolbox
Email | tom.jensen@digitaltoolbox-inc.com

On Fri, 17 Dec 2010 14:21:03 +0100, Felix Kuperjans
<felix@desaster-games.com> wrote:
> [...]
_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
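The udev pinning Thomas describes lives in /etc/udev/rules.d/70-persistent-net.rules on lenny. A minimal sketch, using the dom0 MAC addresses seen earlier in this thread (substitute your own; lenny's auto-generated rules carry extra match keys such as DRIVERS and ATTR{type}, omitted here for brevity):

```text
# /etc/udev/rules.d/70-persistent-net.rules -- pin NIC names to MACs
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="00:14:4f:40:ca:74", KERNEL=="eth*", NAME="eth0"
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="00:14:4f:40:ca:75", KERNEL=="eth*", NAME="eth1"
```

With names pinned, the network-bridge wrapper and the domU vif lines always attach to the same physical port across reboots.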
Bastian Blank
2010-Dec-18 12:52 UTC
Re: [Xen-users] Yet another question about multiple NICs
On Fri, Dec 17, 2010 at 11:57:23AM +0100, Philippe Combes wrote:
> I have tried for weeks to have a domU connected to both NICs of the
> dom0, each in a different LAN. Google gave me plenty of tutorials
> and HowTos about the subject, including the Xen and the Debian Xen
> wikis, of course. It seems so simple!
> Some advise to use a simple wrapper around /etc/xen/network-bridge,
> others to leave it aside and to set up bridges on my own.

Where? The Debian documentation clearly states: read the bridge-utils-interfaces man page.

Bastian

-- 
Where there's no emotion, there's no motive for violence.
		-- Spock, "Dagger of the Mind", stardate 2715.1
_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
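The approach Bastian points at replaces the network-bridge script entirely: declare the bridges in /etc/network/interfaces using the bridge-utils stanzas and let ifupdown build them at boot. A sketch for the two NICs in this thread (the bridge names xenbr0/xenbr1 are my choice, not from the thread; the domU config would then use bridge=xenbr0 and bridge=xenbr1):

```text
# /etc/network/interfaces (dom0) -- bridges owned by ifupdown, not xend
auto xenbr0
iface xenbr0 inet dhcp
    bridge_ports eth0

auto xenbr1
iface xenbr1 inet dhcp
    bridge_ports eth1
```

With this layout, xend's own bridging is typically disabled by pointing it at a no-op script, e.g. (network-script /bin/true) in /etc/xen/xend-config.sxp, so it does not rename interfaces or build competing bridges.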
Philippe Combes
2010-Dec-19 15:16 UTC
Re: [Xen-users] Yet another question about multiple NICs
Hello Simon,

Thanks for your help.

Simon Hobson wrote:
> I think the next thing I'd be doing is firing up wireshark (or rather
> its text-only brother tshark).
>
> On Dom0, get the network working and ping another machine on the lan.
> Fire up tshark on peth<n> and watch the traffic - you should see both
> the ping request and reply.
> Fire up a DomU, and do the same ping - which I gather doesn't work. Keep
> the ping going from Dom0.
> Keep watching the packet trace in Dom0 - of interest here are things like:

I am afraid we are about to reach the (short) limits of my competence in networking. I tried nevertheless, and looking at the trace below, I think I can answer your questions, if I really executed what you meant.

> Did DomU send an ARP request for the remote device?

Yes.

> Did the remote device reply?
> Are the ping requests going out?
> Are the replies coming back? To the right MAC?

No, no, and no.

$ ping 192.168.24.125 & tshark -i peth1
[1] 21099
PING 192.168.24.125 (192.168.24.125) 56(84) bytes of data.
Running as user "root" and group "root". This could be dangerous.
Capturing on peth1
  0.000000 SunMicro_40:ca:75 -> Broadcast ARP Who has 192.168.24.125? Tell 192.168.24.123
64 bytes from 192.168.24.125: icmp_seq=1 ttl=64 time=2004 ms
64 bytes from 192.168.24.125: icmp_seq=2 ttl=64 time=1004 ms
64 bytes from 192.168.24.125: icmp_seq=3 ttl=64 time=4.48 ms
  1.000061 SunMicro_40:ca:75 -> Broadcast ARP Who has 192.168.24.125?
Tell 192.168.24.123 1.000280 QuantaCo_e0:81:2c -> SunMicro_40:ca:75 ARP 192.168.24.125 is at 00:16:36:e0:81:2c 1.000293 192.168.24.123 -> 192.168.24.125 ICMP Echo (ping) request 1.000296 192.168.24.123 -> 192.168.24.125 ICMP Echo (ping) request 1.000299 192.168.24.123 -> 192.168.24.125 ICMP Echo (ping) request 1.000522 192.168.24.125 -> 192.168.24.123 ICMP Echo (ping) reply 1.000541 192.168.24.125 -> 192.168.24.123 ICMP Echo (ping) reply 1.000545 192.168.24.125 -> 192.168.24.123 ICMP Echo (ping) reply 64 bytes from 192.168.24.125: icmp_seq=4 ttl=64 time=0.137 ms 2.000149 192.168.24.123 -> 192.168.24.125 ICMP Echo (ping) request 2.000276 192.168.24.125 -> 192.168.24.123 ICMP Echo (ping) reply 2.208653 Cisco_c8:90:30 -> Cisco_c8:90:30 LOOP Reply 64 bytes from 192.168.24.125: icmp_seq=5 ttl=64 time=0.298 ms 3.000210 192.168.24.123 -> 192.168.24.125 ICMP Echo (ping) request 3.000501 192.168.24.125 -> 192.168.24.123 ICMP Echo (ping) reply 3.034484 Cisco_c8:90:30 -> CDP/VTP/DTP/PAgP/UDLD CDP Device ID: sw_admin-3.gridmip.cict.fr Port ID: FastEthernet0/48 64 bytes from 192.168.24.125: icmp_seq=6 ttl=64 time=0.213 ms 4.000290 192.168.24.123 -> 192.168.24.125 ICMP Echo (ping) request 4.000496 192.168.24.125 -> 192.168.24.123 ICMP Echo (ping) reply 64 bytes from 192.168.24.125: icmp_seq=7 ttl=64 time=0.128 ms 5.000360 192.168.24.123 -> 192.168.24.125 ICMP Echo (ping) request 5.000476 192.168.24.125 -> 192.168.24.123 ICMP Echo (ping) reply 64 bytes from 192.168.24.125: icmp_seq=8 ttl=64 time=0.291 ms 6.000424 192.168.24.123 -> 192.168.24.125 ICMP Echo (ping) request 6.000458 QuantaCo_e0:81:2c -> SunMicro_40:ca:75 ARP Who has 192.168.24.123? 
Tell 192.168.24.125 6.000467 SunMicro_40:ca:75 -> QuantaCo_e0:81:2c ARP 192.168.24.123 is at 00:14:4f:40:ca:75 6.000708 192.168.24.125 -> 192.168.24.123 ICMP Echo (ping) reply 64 bytes from 192.168.24.125: icmp_seq=9 ttl=64 time=0.204 ms 7.000496 192.168.24.123 -> 192.168.24.125 ICMP Echo (ping) request 7.000693 192.168.24.125 -> 192.168.24.123 ICMP Echo (ping) reply 64 bytes from 192.168.24.125: icmp_seq=10 ttl=64 time=0.366 ms -------------->>> Launching the ping from dom1 7.497007 Xensourc_55:af:c3 -> Broadcast ARP Who has 192.168.24.125? Tell 192.168.24.81 8.000575 192.168.24.123 -> 192.168.24.125 ICMP Echo (ping) request 8.000932 192.168.24.125 -> 192.168.24.123 ICMP Echo (ping) reply 64 bytes from 192.168.24.125: icmp_seq=11 ttl=64 time=0.276 ms 8.497069 Xensourc_55:af:c3 -> Broadcast ARP Who has 192.168.24.125? Tell 192.168.24.81 9.000660 192.168.24.123 -> 192.168.24.125 ICMP Echo (ping) request 9.000928 192.168.24.125 -> 192.168.24.123 ICMP Echo (ping) reply 64 bytes from 192.168.24.125: icmp_seq=12 ttl=64 time=0.189 ms 9.497141 Xensourc_55:af:c3 -> Broadcast ARP Who has 192.168.24.125? Tell 192.168.24.81 10.000729 192.168.24.123 -> 192.168.24.125 ICMP Echo (ping) request 10.000912 192.168.24.125 -> 192.168.24.123 ICMP Echo (ping) reply 64 bytes from 192.168.24.125: icmp_seq=13 ttl=64 time=0.355 ms 10.517213 Xensourc_55:af:c3 -> Broadcast ARP Who has 192.168.24.125? Tell 192.168.24.81 11.000792 192.168.24.123 -> 192.168.24.125 ICMP Echo (ping) request 11.001140 192.168.24.125 -> 192.168.24.123 ICMP Echo (ping) reply 64 bytes from 192.168.24.125: icmp_seq=14 ttl=64 time=0.273 ms 11.517283 Xensourc_55:af:c3 -> Broadcast ARP Who has 192.168.24.125? 
Tell 192.168.24.81 12.000869 192.168.24.123 -> 192.168.24.125 ICMP Echo (ping) request 12.001136 192.168.24.125 -> 192.168.24.123 ICMP Echo (ping) reply 12.211749 Cisco_c8:90:30 -> Cisco_c8:90:30 LOOP Reply 64 bytes from 192.168.24.125: icmp_seq=15 ttl=64 time=0.174 ms 12.517356 Xensourc_55:af:c3 -> Broadcast ARP Who has 192.168.24.125? Tell 192.168.24.81 13.000938 192.168.24.123 -> 192.168.24.125 ICMP Echo (ping) request 13.001106 192.168.24.125 -> 192.168.24.123 ICMP Echo (ping) reply -------------->>> Stopping the ping from dom1 64 bytes from 192.168.24.125: icmp_seq=16 ttl=64 time=0.348 ms 14.000996 192.168.24.123 -> 192.168.24.125 ICMP Echo (ping) request 14.001338 192.168.24.125 -> 192.168.24.123 ICMP Echo (ping) reply 64 bytes from 192.168.24.125: icmp_seq=17 ttl=64 time=0.262 ms 15.001079 192.168.24.123 -> 192.168.24.125 ICMP Echo (ping) request 15.001335 192.168.24.125 -> 192.168.24.123 ICMP Echo (ping) reply 64 bytes from 192.168.24.125: icmp_seq=18 ttl=64 time=0.176 ms 16.001153 192.168.24.123 -> 192.168.24.125 ICMP Echo (ping) request 16.001322 192.168.24.125 -> 192.168.24.123 ICMP Echo (ping) reply 64 bytes from 192.168.24.125: icmp_seq=19 ttl=64 time=0.338 ms 17.001222 192.168.24.123 -> 192.168.24.125 ICMP Echo (ping) request 17.001554 192.168.24.125 -> 192.168.24.123 ICMP Echo (ping) reply 64 bytes from 192.168.24.125: icmp_seq=20 ttl=64 time=0.255 ms 18.001291 192.168.24.123 -> 192.168.24.125 ICMP Echo (ping) request 18.001539 192.168.24.125 -> 192.168.24.123 ICMP Echo (ping) reply 64 bytes from 192.168.24.125: icmp_seq=21 ttl=64 time=0.166 ms ^C 19.001359 192.168.24.123 -> 192.168.24.125 ICMP Echo (ping) request 19.001519 192.168.24.125 -> 192.168.24.123 ICMP Echo (ping) reply 56 packets captured> If you see requests going out, but no reply, try firing up a packet > sniffer on the remote machine and see if the requests are reaching it.I used tshark on the target too. 
No packet reaches it.> Also, apart from the initial messages* when you fire up the DomU, are > there any other bridge related messages in the logs ? > * From memory, it should log : > Interface added > Interface going into learning mode > Interface going into active mode >I found no such message in my logs, but I remember I saw them on the console, once when I had an access to it. But looking those messages, I found something I never saw before, because it was in /var/log/syslog, and I only looked in /var/log/xen/* so far: ---- logger: /etc/xen/scripts/vif-bridge: Successful vif-bridge online for vif1.0, bridge eth0 . logger: /etc/xen/scripts/block: Writing backend/vbd/1/51713/hotplug-status connected to x enstore. logger: /etc/xen/scripts/vif-bridge: Writing backend/vif/1/0/hotplug-status connected to xenstore. logger: /etc/xen/scripts/vif-bridge: iptables -A FORWARD -m physdev --physdev-in vif1.1 -j ACCEPT failed.#012If you are using iptables, this may affect networking for guest domains. logger: /etc/xen/scripts/vif-bridge: Successful vif-bridge online for vif1.1, bridge eth1 . logger: /etc/xen/scripts/vif-bridge: Writing backend/vif/1/1/hotplug-status connected to xenstore. ---- When I invert the vifs in the dom1 description, I get the same error about iptables for the second vif. Have anyone any idea how I could follow down this new track ? iptables -nvL seems ok. Anything else to check for ? Regards and thanks, Philippe _______________________________________________ Xen-users mailing list Xen-users@lists.xensource.com http://lists.xensource.com/xen-users
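The failing iptables command from that syslog line can be replayed by hand to see the real error it produced. A diagnostic sketch, assuming a stock Debian kernel where the physdev match is provided by the xt_physdev module (adjust the vif name to whatever your guest actually got):

```
# Is the physdev match available in this kernel?
lsmod | grep physdev || modprobe xt_physdev

# With the guest running, re-run the exact rule the vif-bridge
# script attempted, so iptables prints its actual error message:
iptables -A FORWARD -m physdev --physdev-in vif1.1 -j ACCEPT
```

If modprobe fails or the second command complains that the physdev match cannot be found, the Dom0 kernel lacks physdev support, which would explain the "failed" log line.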
Philippe Combes
2010-Dec-19 15:17 UTC
Re: [Xen-users] Yet another question about multiple NICs
Fajar A. Nugraha wrote:
> Do you have access to the switch?
> It's possible that your switch/router (whatever your dom0 is connected
> to) only allows one MAC per port. The easiest way to test this is to
> use a crossover-cable from one of dom0's interfaces to a PC/notebook,
> and see if both dom0 and domU can communicate with it.

Hi Fajar,

Many thanks for your answer. I did not know such configurations could
exist on the switches of the LAN. Your theory seduced me at first, and
I was ready to ask the person who configured the switches to
investigate, when I realized that it does not fit this simple fact:
when I connect dom1's eth0 to dom0's eth1, it works, on that specific
switched LAN.
I guess your test with the laptop is pointless then, isn't it? Pity.
But thanks again.

Philippe.
Philippe Combes
2010-Dec-19 15:20 UTC
Re: [Xen-users] Yet another question about multiple NICs
Felix Kuperjans wrote:
> Hi Philippe,
>
> I forgot about Xen's renaming... The firewall rules do nothing special,
> they won't hurt anything.
> IP addresses are also correct (on both sides), but the routes are
> probably not OK:
> - The dom1 does not have a default route - so it will not be able to
> reach anything outside the two subnets (but should reach anything inside
> of them).

It does not need one so far.

> - It's interesting that dom1's firewall output shows that no packets
> were processed, so maybe you didn't ping anything since the last reboot
> from dom1, or the firewall was loaded by reading its statistics...

You asked for the outputs "when <my> system has just started". Hence no
packets, I guess. But shouldn't there be at least those exchanged for
the ssh connection to the dom1?
Anyway, after one minute or so, I get on the dom1:

# iptables -nvL
Chain INPUT (policy ACCEPT 23 packets, 884 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain OUTPUT (policy ACCEPT 4 packets, 816 bytes)
 pkts bytes target     prot opt in     out     source               destination

> Still no reason why you can't ping local machines from the dom1 (and
> sometimes even not from dom0). Have you tried pinging each other, so
> dom0 -> dom1 and vice versa?

Yes I tried, and it has always worked while dom0's eth1 was up.

> The only remaining thing that denies communication would be ARP, so the
> output of:
> # ip neigh show
> on both machines *directly after* a ping would be nice (within a few
> seconds - use && and a time-terminated ping).

Nothing on a machine when not connected. But when connected (here the
dom0):

$ ip neigh show
192.168.24.125 dev eth1 lladdr 00:16:36:e0:81:2c REACHABLE
172.16.113.100 dev eth0 lladdr 00:16:38:4c:04:00 DELAY
172.16.113.123 dev eth0 lladdr 00:16:36:e0:81:2e STALE
172.16.113.124 dev eth0 lladdr 00:1b:24:3d:ca:95 REACHABLE
172.16.113.106 dev eth0 lladdr 00:16:38:28:b5:39 REACHABLE

Does that give you any clue for further investigation?

Thanks again,
Philippe
Philippe Combes
2010-Dec-19 15:22 UTC
Re: [Xen-users] Yet another question about multiple NICs
Hi Thomas,

Thomas Jensen wrote:
> Philippe,
>
> I too struggled at first with multiple NICs in a Debian Lenny machine.
> The problem became a lot easier to fix when I set udev rules on the
> NICs. Otherwise, I noticed that the same NIC would come online with a
> different name (eth0, eth1, etc.) after a reboot.

I have no such problem. The ordering of the interfaces is stable across
reboots, on the dom0 as well as on the domUs.

> Finally, I had found an article on the Debian site which provided some
> guidance on the Xen wrapper script when I was setting up my machine.
> However, the article had a typo or something in it which wasn't working
> for me. I remember posting a comment on the site which fixed the issue
> for me. I could try to find that article again and/or share the wrapper
> script that ended up working for my setup.

Oh yes please, that could be very interesting!

Thanks,
Philippe
Fajar A. Nugraha
2010-Dec-19 15:26 UTC
Re: [Xen-users] Yet another question about multiple NICs
On Sun, Dec 19, 2010 at 10:17 PM, Philippe Combes
<Philippe.Combes@enseeiht.fr> wrote:
> when I connect dom1's eth0 to dom0's eth1, it works, on that
> specific switched LAN.

What do you mean, "connect dom1's eth0 to dom0's eth1"?

Didn't you say eth0 and eth1 are on different LANs?

--
Fajar
jpp@jppozzi.dyndns.org
2010-Dec-19 15:34 UTC
Re: [Xen-users] Yet another question about multiple NICs
On Sunday 19 December 2010 at 16:16 +0100, Philippe Combes wrote:
> Hello Simon,
>
> Thanks for your help.
<snip - full tshark trace and vif-bridge log, quoted from the message above>
> When I invert the vifs in the dom1 description, I get the same error
> about iptables for the second vif.
> Does anyone have any idea how I could follow this new track? iptables
> -nvL seems OK. Anything else to check for?
>
> Regards and thanks,
> Philippe

Hello,

Udev rules are mandatory on Debian systems with Xen. I always use them,
and in /etc/network/interfaces:

auto br0
iface br0 inet static
    address 192.168.1.8
    netmask 255.255.255.0
    network 192.168.1.0
    broadcast 192.168.1.255
    mtu 1500
    txqueuelen 4096
    gateway 192.168.1.11
    bridge_ports eth0
    bridge_maxwait 1

Regards

JP P
Philippe Combes
2010-Dec-19 15:41 UTC
Re: [Xen-users] Yet another question about multiple NICs
Fajar A. Nugraha wrote:
> On Sun, Dec 19, 2010 at 10:17 PM, Philippe Combes
> <Philippe.Combes@enseeiht.fr> wrote:
>> when I connect dom1's eth0 to dom0's eth1, it works, on that
>> specific switched LAN.
>
> What do you mean, "connect dom1's eth0 to dom0's eth1"?
>
> Didn't you say eth0 and eth1 are on different LANs?

I mean, as I explained in my first message, that if I exchange the vifs
in the dom1 description, then the interface eth0 of the dom1 is
"connected" to the dom0's eth1. I configure dom1's eth0 just like its
eth1 was before the exchange, and vice versa.
So my problem should not come from the switches of the LAN which dom0's
eth1 is on.
Simon Hobson
2010-Dec-19 16:57 UTC
Re: [Xen-users] Yet another question about multiple NICs
Philippe Combes wrote:
>> Did DomU send an ARP request for the remote device?
> Yes.
>
>> Did the remote device reply?
>> Are the ping requests going out?
>> Are the replies coming back? To the right MAC?
> No, no, and no.
>
> $ ping 192.168.24.125 & tshark -i peth1
<snip>
>> If you see requests going out, but no reply, try firing up a packet
>> sniffer on the remote machine and see if the requests are reaching it.
>
> I used tshark on the target too. No packet reaches it.

Well, I'm stumped now! We can see ARP requests going out via peth1, but
they don't arrive at the other device - so they are either not being
transmitted, or the switch is blocking them.

I'd still suggest changing nothing except to connect the machine
directly* to something (e.g. a laptop) and try again - just to
completely eliminate any potential switch problem. Having said that,
it's not a problem I've personally come across.

* Or use a known "dumb" switch so you can have the rest of the network
connected (so you get DHCP) and then unplug it from the rest of the
network for testing.

> But looking for those messages, I found something I had never seen
> before, because it was in /var/log/syslog, and I had only looked in
> /var/log/xen/* so far:
> ----
> logger: /etc/xen/scripts/vif-bridge: iptables -A FORWARD -m physdev
> --physdev-in vif1.1 -j ACCEPT failed.#012If you are using iptables,
> this may affect networking for guest domains.
> ----

Well, I've no idea what's wrong here. The line that's failing reads:
append a rule to the FORWARD chain, match (-m) using the physdev
module, matching input port (--physdev-in) vif1.1, and jump (-j) to the
ACCEPT target. In other words - for any packets entering via bridge
port vif1.1, forward them.

Now, I've just checked on one of my work servers, and it does indeed
have rules like these:

# iptables -L -vn
...
Chain FORWARD (policy ACCEPT 180M packets, 36G bytes)
 pkts bytes target prot opt in  out  source      destination
  46M   50G ACCEPT all  --  *   *    xx.xx.xx.xx 0.0.0.0/0   PHYSDEV match --physdev-in xxxxx
    0     0 ACCEPT udp  --  *   *    0.0.0.0/0   0.0.0.0/0   PHYSDEV match --physdev-in xxxxx udp spt:68 dpt:67

I see from an earlier message that your iptables is empty. However, it
shouldn't matter, since the default policy on your FORWARD chain is
ACCEPT - i.e. anything not expressly blocked should be passed.

Is it possible that you don't have physdev matching available in your
Dom0 installation? I don't think this is anything to do with your
problem, but it could account for the error message.

As an aside, I can now see one thing that setting the guest IP address
does - it includes the IP address in the iptables rules added for the
guest when it starts.

--
Simon Hobson

Visit http://www.magpiesnestpublishing.co.uk/ for books by acclaimed
author Gladys Hobson. Novels - poetry - short stories - ideal as
Christmas stocking fillers. Some available as e-books.
Thomas Jensen
2010-Dec-19 18:31 UTC
Re: [Xen-users] Yet another question about multiple NICs
On Sun, 19 Dec 2010 16:22:28 +0100, Philippe Combes
<Philippe.Combes@enseeiht.fr> wrote:
> I have no such problem. The ordering of the interfaces is stable
> across reboots, on the dom0 as well as on the domUs.
>
>> Finally, I had found an article on the Debian site which provided some
>> guidance on the Xen wrapper script when I was setting up my machine.
>> However, the article had a typo or something in it which wasn't working
>> for me. I remember posting a comment on the site which fixed the issue
>> for me. I could try to find that article again and/or share the wrapper
>> script that ended up working for my setup.
>
> Oh yes please, that could be very interesting!

Philippe,

This is the article I was referring to:

http://www.debian-administration.org/article/470/Using_multiple_network_cards_in_XEN_3.0

Make sure to read all the comments, as there are some corrections and
updates to the article. The combination of udev rules, this modified
wrapper script, and a few DNS entries on my DomUs were the three steps
I needed to take in order to get multiple NICs configured on my Debian
Lenny setup.
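The wrapper approach in that article has the same shape as the one in the original post: call Xen's network-bridge script once per NIC. A sketch of the general form (bridge names are illustrative; the exact arguments differ between the article's revisions, so check its comments):

```
#!/bin/bash
# /etc/xen/scripts/network-bridge-wrapper
# Referenced from xend-config.sxp as:
#   (network-script network-bridge-wrapper)
dir=$(dirname "$0")

# One network-bridge invocation per physical NIC; "$@" forwards
# the start/stop/status action that xend passes in.
"$dir/network-bridge" "$@" vifnum=0 netdev=eth0 bridge=xenbr0
"$dir/network-bridge" "$@" vifnum=1 netdev=eth1 bridge=xenbr1
```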
Felix Kuperjans
2010-Dec-19 18:50 UTC
Re: [Xen-users] Yet another question about multiple NICs
Answers within quotes:

On 19.12.2010 16:20, Philippe Combes wrote:
> You asked for the outputs "when <my> system has just started". Hence no
> packets, I guess. But shouldn't there be at least those exchanged for
> the ssh connection to the dom1?
> Anyway, after one minute or so, I get on the dom1:
> # iptables -nvL
> Chain INPUT (policy ACCEPT 23 packets, 884 bytes)
<snip - empty FORWARD and OUTPUT chains>

That looks better.

> Yes I tried, and it has always worked while dom0's eth1 was up.

So it's only impossible to ping the domU from other machines on the
network (and vice versa)?
I think Fajar is probably right with his guess that your physical
switches are managed. That means they do traffic filtering on their
ports based on the MAC addresses.
Which switch models do you use on your two networks?

> Nothing on a machine when not connected. But when connected (here the
> dom0):
> $ ip neigh show
> 192.168.24.125 dev eth1 lladdr 00:16:36:e0:81:2c REACHABLE
> 172.16.113.100 dev eth0 lladdr 00:16:38:4c:04:00 DELAY
> 172.16.113.123 dev eth0 lladdr 00:16:36:e0:81:2e STALE
> 172.16.113.124 dev eth0 lladdr 00:1b:24:3d:ca:95 REACHABLE
> 172.16.113.106 dev eth0 lladdr 00:16:38:28:b5:39 REACHABLE

ARP seems to work at least on the Domain-0, *if* one of those IP
addresses is the one of the domU... Can you try doing this on the DomU
when pinging a host in the network?
Philippe Combes
2010-Dec-20 03:12 UTC
Re: [Xen-users] Yet another question about multiple NICs
Felix Kuperjans wrote:
> So it's only impossible to ping the domU from other machines on the
> network (and vice versa)?
> I think Fajar is probably right with his guess that your physical
> switches are managed. That means they do traffic filtering on their
> ports based on the MAC addresses.
> Which switch models do you use on your two networks?

I already answered Fajar in this thread: when the FIRST vif of dom1 is
connected to dom0's eth1, then the behaviour on that switched LAN is
normal, while the traffic on the routed LAN of dom0's eth0 shows the
bug. So my issue is definitely related to the instantiation of the
SECOND interface of dom1, whatever network it is connected to.
Or there is some kind of black magic underneath...

> ARP seems to work at least on the Domain-0, *if* one of those IP
> addresses is the one of the domU...
> Can you try doing this on the DomU when pinging a host in the network?

I did! As requested! And as you know, dom0 and dom1 are alternately
connected. When dom0 is connected (172.16.113.121 on eth0 and
192.168.24.123 on eth1), I get the trace above, but nothing from dom1.
When dom1 is connected, I get a similar trace from dom1, but nothing
from dom0 (I mean nothing on network 192.168.24.0).

Regards,
Philippe
Fajar A. Nugraha
2010-Dec-20 03:25 UTC
Re: [Xen-users] Yet another question about multiple NICs
On Mon, Dec 20, 2010 at 10:12 AM, Philippe Combes
<Philippe.Combes@enseeiht.fr> wrote:
>> So it's only impossible to ping the domU from other machines on the
>> network (and vice versa)?
>> I think Fajar is probably right with his guess that your physical
>> switches are managed. That means they do traffic filtering on their
>> ports based on the mac addresses.
>> Which switch models do you use on your two networks?
>
> I already answered Fajar in this thread: when the FIRST vif of dom1 is
> connected to dom0's eth1, then the behaviour on that switched LAN is
> normal, while the traffic on the routed LAN of dom0's eth0 exhibits
> the bug.
> So my issue is definitely related to the instantiation of the SECOND
> interface of dom1, whatever network it is connected to. Or there is
> some kind of black magic underneath...

"Black magic" is the word I'd use to describe Xen's default
network-bridge script :D

To narrow down the possible cause, can you DISABLE Xen's default
network-bridge script (or in your case network-bridge-wrapper; replace
it with /bin/true) and set up the bridges MANUALLY, like JP's example in
/etc/network/interfaces? You'll then have br0 and br1 bridges where you
set up IP addresses, and you also use those in the domU's config.

Personally I always create my own bridges instead of relying on Xen's
network-bridge script (at the time I needed vlan and bonding support,
which is not possible with the default network-bridge script).

-- 
Fajar
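[Editor's note: a minimal sketch of the manual-bridge setup Fajar
describes, assuming the two NICs are eth0/eth1 with DHCP as in the
original post, and that the bridge-utils package is installed; the
bridge names br0/br1 and MAC addresses are taken from this thread.]

```
# /etc/network/interfaces -- bridges created by the OS, not by Xen
auto br0
iface br0 inet dhcp
    bridge_ports eth0     # enslave the first physical NIC
    bridge_stp off
    bridge_fd 0

auto br1
iface br1 inet dhcp
    bridge_ports eth1     # enslave the second physical NIC
    bridge_stp off
    bridge_fd 0

# /etc/xen/xend-config.sxp -- neutralize Xen's own bridge script
(network-script /bin/true)

# domU configuration -- attach the vifs to the new bridges
vif = [ 'mac=00:16:3E:55:AF:C2,bridge=br0',
        'mac=00:16:3E:55:AF:C3,bridge=br1' ]
```

With this layout the dom0's IP addresses live on br0/br1 and eth0/eth1
carry no address of their own.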
Felix Kuperjans
2010-Dec-20 12:41 UTC
Re: [Xen-users] Yet another question about multiple NICs
I meant to do ip neigh within dom1 - you wrote your output was from dom0.
Is the dom0 able to reach machines on the networks?

Regards,
Felix

On 20.12.2010 04:12, Philippe Combes wrote:
> [...]
> I did! As requested! And as you know, dom0 and dom1 are
> alternatively connected. When dom0 is connected (172.16.113.121 on
> eth0 and 192.168.24.123 on eth1) I get the trace above, but nothing
> from dom1. When dom1 is connected, I get a similar trace from dom1,
> but nothing from dom0 (I mean nothing on network 192.168.24.0)
>
> Regards,
> Philippe
Philippe Combes
2010-Dec-21 01:58 UTC
Re: [Xen-users] Yet another question about multiple NICs
I am a bit confused now. What did I explain so badly? Do I understand
you correctly? I said I ran ip neigh from both machines, as requested.
And from dom1 the output is similar to the output I dumped, but only
when the connection succeeds. Because, once again, dom1 and dom0 are
never "connected" at the same time, that is the issue. When dom0 (resp.
dom1) fails to connect to the network, 'ip neigh' shows nothing.

Regards,
Philippe

Felix Kuperjans wrote:
> I meant to do ip neigh within dom1 - you wrote your output was from
> dom0.
> Is the dom0 able to reach machines on the networks?
>
> Regards,
> Felix
Philippe Combes
2010-Dec-21 01:59 UTC
Re: [Xen-users] Yet another question about multiple NICs
Simon Hobson wrote:
> We can see ARP requests going out via peth1, but they don't arrive at
> the other device - so they are either not being transmitted, or the
> switch is blocking them.
>
> I'd still suggest changing nothing except to connect the machine direct*
> to something (eg a laptop) and try again - just to completely eliminate
> any potential switch problem. Having said that, it's not a problem I've
> personally come across.
>
> * Or use a known "dumb" switch so you can have the rest of the network
> connected (so you get DHCP) and then unplug it from the rest of the
> network for testing.

OK. I still think that it has nothing to do with the switches of
192.168.24.0, because when I set the description of dom1 to have its
FIRST interface on that network, that FIRST interface works great (and
eth1 on dom0 as well), while the SECOND interface, now on the routed
network that used to work great, goes ill.
So OK, I will run the test with the laptop, so that everybody here
(inc. me) is convinced that it does not come from the switches. But
unfortunately, I will get no physical access to the machine before the
beginning of next year.

> Well I've no idea what's wrong here. The line that's failing reads:
> append a rule to the FORWARD chain, match (-m) using the physdev
> module, matching the input port (--physdev-in) vif1.1, and jump (-j)
> to the ACCEPT target.
> In other words - for any packets entering via bridge port vif1.1,
> forward them.
>
> Now, I've just checked on one of my work servers, and it does indeed
> have rules like these.
> # iptables -L -vn
> ...
> Chain FORWARD (policy ACCEPT 180M packets, 36G bytes)
>  pkts bytes target prot opt in out source destination
>  46M  50G   ACCEPT all  --  *  *   xx.xx.xx.xx 0.0.0.0/0 PHYSDEV match --physdev-in xxxxx
>  0    0     ACCEPT udp  --  *  *   0.0.0.0/0   0.0.0.0/0 PHYSDEV match --physdev-in xxxxx udp spt:68 dpt:67
>
> While I see from an earlier message that your iptables is empty.
> However, it shouldn't matter since the default policy on your FORWARD
> chain is accept - ie anything not expressly blocked should be passed.
>
> Is it possible that you don't have physdev matching available in your
> Dom0 installation ?
> I don't think this is anything to do with your problem, but could
> account for the error message.

Hmmm. I hacked the vif-common.sh file to get more information on this
(and retrieve the error message from iptables). I could get two kinds of
errors:
  "iptables: Resource temporarily unavailable"
or
  "iptables: Bad rule (does a matching rule exist in that chain?)"
But it occurs only at the first creation of a dom1 with two vifs, one on
each NIC, after dom0 has just booted, and only for the second vif in the
declaration. In ANY other case (single vif on whatever NIC, subsequent
domU creation, etc.), no error.

> As an aside, I can now see one thing that setting the guest IP address
> does - it includes the IP address in the iptables rules added for the
> guest when it starts.

Whether I specify ip=192.168.24.81 in the description of dom1 or not
does not change anything to the problem. Only the iptables rules on dom0
are more specific with the IP.

Regards,
Philippe
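[Editor's note: Simon's question - whether physdev matching is available
at all on the dom0 - can be tested by hand. The following is a sketch
only, to be run as root on the dom0; the port name vif1.1 comes from the
error discussed above and may differ on another system.]

```
# Check whether this dom0's iptables supports physdev matching,
# by adding and then removing the same kind of rule vif-bridge adds.
if iptables -A FORWARD -m physdev --physdev-in vif1.1 -j ACCEPT; then
    echo "physdev match available"
    # remove the test rule again so the firewall is left unchanged
    iptables -D FORWARD -m physdev --physdev-in vif1.1 -j ACCEPT
else
    echo "physdev match NOT available - check kernel/iptables support"
fi
```

If the -A command itself fails, the error message it prints should match
one of the two errors reported from vif-common.sh above.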
Philippe Combes
2010-Dec-21 01:59 UTC
Re: [Xen-users] Yet another question about multiple NICs
Thomas Jensen wrote:
> Philippe,
>
> This is the article I was referring to:
> http://www.debian-administration.org/article/470/Using_multiple_network_cards_in_XEN_3.0
>
> Make sure to read all the comments as there are some corrections and
> updates to the article.

You're right. When I found this article in my Google searches, I thought
it was not relevant for my specific case. But reading through all the
comments showed me my error. So I checked every point, and the only new
thing I noticed was: comment out the (vif-script vif-bridge) line in
/etc/xen/xend-config.sxp. I did it, but it changes nothing, for
vif-bridge is called by default.

> The combination of udev rules, this modified wrapper script, and a few
> DNS entries on my DomUs were the three steps I needed to take in order
> to get multiple NICs configured on my Debian Lenny setup.

So, last thing to check: the udev rules. The only related issue I found
in my Google searches was that the interfaces were not named properly
across reboots. I have no such issue. Am I wrong, or is there something
else to fix in the udev rules?

Thanks for your time,
Philippe
Simon Hobson
2010-Dec-21 07:50 UTC
Re: [Xen-users] Yet another question about multiple NICs
Philippe Combes wrote:
> So, last thing to check: the udev rules. The only related issue I
> found in my Google searches was that the interfaces were not named
> properly across the reboots. I have no such issue. Am I wrong, or is
> there something else to fix in the udev rules ?

With current versions of Debian the default is that NICs get consistent
names. The first time a NIC is seen, udev will assign it the next
available number and create a persistent rule for it, so it will not
change number in the future. The rules are stored in
/etc/udev/rules.d/70-persistent-net.rules, where you can change them -
personally I like to rename them to more useful names like ethext,
ethint, etc.

-- 
Simon Hobson

Visit http://www.magpiesnestpublishing.co.uk/ for books by acclaimed
author Gladys Hobson. Novels - poetry - short stories - ideal as
Christmas stocking fillers. Some available as e-books.
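[Editor's note: for illustration, a persistent-net rule of the kind
Simon describes looks like this; the MAC address is one of the vif MACs
quoted earlier in this thread, and the name "ethext" follows Simon's
renaming suggestion - both are examples only.]

```
# /etc/udev/rules.d/70-persistent-net.rules (sketch)
# Each line pins the NIC with the given MAC address to a fixed name,
# so the numbering cannot shuffle between reboots.
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="00:16:3e:55:af:c2", NAME="ethext"
```

Editing the NAME= field and rebooting (or reloading udev rules) is
enough to apply the rename.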
Felix Kuperjans
2010-Dec-21 13:20 UTC
Re: [Xen-users] Yet another question about multiple NICs
Ok, now I get the point... When it fails to connect, can you still ping
the domain-0, or is there really nothing then? The domain-0 should
always be at least reachable with your setup and should then show up in
ip neigh on the dom-1 (and if the dom-1 doesn't reach anything else, it
will be the only valid entry).

I think there should be some "FAILED" entries when a machine cannot
connect to the network; that's why I asked explicitly again. If there
are no FAILED entries, I think it did not even ask, which means that
some routes or address settings must be wrong.

Regards,
Felix

On 21.12.2010 02:58, Philippe Combes wrote:
> I am a bit confused now. What did I explain so badly? Do I understand
> you correctly? I said I ran ip neigh from both machines, as requested.
> And from dom1 the output is similar to the output I dumped, but only
> when the connection succeeds. Because, once again, dom1 and dom0 are
> never "connected" at the same time, that is the issue. When dom0
> (resp. dom1) fails to connect to the network, 'ip neigh' shows
> nothing.
>
> Regards,
> Philippe
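[Editor's note: the time-limited ping plus immediate neighbour dump that
Felix asked for earlier in the thread can be done in one line; a sketch,
where 192.168.24.125 is one of the hosts from the trace above. A `;` is
used rather than the suggested `&&` so the cache is dumped even when the
ping fails, which is exactly the case where FAILED/INCOMPLETE entries
are interesting.]

```
# Ping for at most 3 seconds, then dump the ARP cache right away,
# before failed entries expire:
ping -c 3 -w 3 192.168.24.125 ; ip neigh show
```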
Philippe Combes
2011-Feb-06 07:34 UTC
[Xen-users] Re: Yet another question about multiple NICs [FIXED]
Hi all,

This is an old thread already, but I have had higher priorities and I
have not been able to check your last propositions for a few weeks.

I confess that I first considered the suggestions of JP P., Thomas J.
and Simon H. about the order of the NICs in my domU Debian system
irrelevant for my case. I was too confident in my tests... I realized I
was wrong when I tried to connect the first vif of a *fresh* domU to
eth1 of dom0 (and the 2nd vif to eth0 of dom0). Indeed I eventually had
the same symptoms on the problematic subnet. I do not know now what kind
of test made me tell you it worked, and I am sorry for this wrong
statement.

After that, the questions from Fajar A. N. about the configuration of
the switches made still more sense. I had promised I would perform the
test with my laptop and a crossover cable. And indeed, it worked
perfectly with the default setup (wrapping the embedded network-bridge
script in xend-config.sxp).

Then I had a hard time trying to reconfigure the switches, because it
was the first time I had to do such a thing, and I could only use
documentation found on the web. No root passwd, etc. Anyway, I succeeded
at last in deactivating the port security policy, and everything is fine
now.

Thank you very much to all for your very precious help!

Regards,
Philippe

Philippe Combes wrote:
> Dear Xen users,
>
> I have tried for weeks to have a domU connected to both NICs of the
> dom0, each in a different LAN. [...]
>
> With this configuration, I get both bridges eth<i> configured and
> usable: I mean I can ping one machine of every LAN through the
> corresponding interface.
>
> When I start a domU however, the dom0 and the domU are alternatively
> connected to the LAN of eth1, but mutually exclusively. In other
> words, the dom0 is connected to the LAN on eth1 for a couple of
> minutes, but not the domU, and then, with no other reason than
> inactivity on the interface, it switches to the reverse situation:
> domU connected, not the dom0. After another couple of minutes of
> inactivity, back to the first situation, and so on...
> I noticed that the 'switch' does not occur if the one that is
> currently connected performs a continuous ping on another machine of
> the LAN.
>
> This happened with CentOS too. But I did not try anything else under
> that distro. Under Debian, I tried to have dom0's eth1 down (no IP),
> but then the domU's eth1 does not work at all, not even periodically.
>
> I was pretty sure the issue came from the way my bridges were
> configured, that there was something different with the dom0 primary
> interface, etc. Hence I tried all solutions I could find on the
> Internet with no success.
> I then made a simple test. Instead of binding domU's eth<i> to dom0's
> eth<i>, I bound it to dom0's eth<1-i>: I changed
>   vif = [ 'mac=00:16:3E:55:AF:C2,bridge=eth0',
>           'mac=00:16:3E:55:AF:C3,bridge=eth1' ]
> to
>   vif = [ 'mac=00:16:3E:55:AF:C3,bridge=eth1',
>           'mac=00:16:3E:55:AF:C2,bridge=eth0' ]
> I was very surprised to see that dom0's eth0, domU's eth0 and dom0's
> eth1 were all working normally, but not domU's eth1. There was no
> alternance between dom0's eth0 and domU's eth1 there, probably because
> there is always some kind of activity on dom0's eth0 (NFS,
> monitoring).
>
> So it seems that my issue is NOT related to the dom0 bridges, but to
> the order of the vifs in the domU description. However, in the
> xend.log file, there is no difference in the way both vifs are
> processed.
> [2010-12-16 14:51:27 3241] INFO (XendDomainInfo:1514) createDevice: vif
>   : {'bridge': 'eth1', 'mac': '00:16:3E:55:AF:C2',
>   'uuid': '9dbf60c7-d785-96e2-b036-dc21b669735c'}
> [2010-12-16 14:51:27 3241] DEBUG (DevController:118) DevController:
>   writing {'mac': '00:16:3E:55:AF:C2', 'handle': '0',
>   'protocol': 'x86_64-abi', 'backend-id': '0', 'state': '1',
>   'backend': '/local/domain/0/backend/vif/2/0'}
>   to /local/domain/2/device/vif/0.
> [2010-12-16 14:51:27 3241] DEBUG (DevController:120) DevController:
>   writing {'bridge': 'eth1', 'domain': 'inpiftest', 'handle': '0',
>   'uuid': '9dbf60c7-d785-96e2-b036-dc21b669735c',
>   'script': '/etc/xen/scripts/vif-bridge', 'mac': '00:16:3E:55:AF:C2',
>   'frontend-id': '2', 'state': '1', 'online': '1',
>   'frontend': '/local/domain/2/device/vif/0'}
>   to /local/domain/0/backend/vif/2/0.
> [2010-12-16 14:51:27 3241] INFO (XendDomainInfo:1514) createDevice: vif
>   : {'bridge': 'eth0', 'mac': '00:16:3E:55:AF:C3',
>   'uuid': '1619a9f8-8113-2e3c-e566-9ca9552a3a93'}
> [2010-12-16 14:51:27 3241] DEBUG (DevController:118) DevController:
>   writing {'mac': '00:16:3E:55:AF:C3', 'handle': '1',
>   'protocol': 'x86_64-abi', 'backend-id': '0', 'state': '1',
>   'backend': '/local/domain/0/backend/vif/2/1'}
>   to /local/domain/2/device/vif/1.
> [2010-12-16 14:51:27 3241] DEBUG (DevController:120) DevController:
>   writing {'bridge': 'eth0', 'domain': 'inpiftest', 'handle': '1',
>   'uuid': '1619a9f8-8113-2e3c-e566-9ca9552a3a93',
>   'script': '/etc/xen/scripts/vif-bridge', 'mac': '00:16:3E:55:AF:C3',
>   'frontend-id': '2', 'state': '1', 'online': '1',
>   'frontend': '/local/domain/2/device/vif/1'}
>   to /local/domain/0/backend/vif/2/1.
>
> There I am stuck, and it is very frustrating. It looks so simple when
> reading the tutorials that I clearly missed something obvious, but
> what? Any clue, any track to follow will be welcome, truly. Please do
> not hesitate to ask me for relevant logs, or for any experiment you
> would think useful.
>
> Thanks for your help,
> Philippe.