Hello Group,

We have configured 3 servers, each with the following configuration:

2 processors: E5620
32 GB RAM
1x 140 GB SAS drive
3x 1 TB SATA drive

We are planning to create around 25-30 VMs with Xen (the standard version from the CentOS 5.8 repo). There are 20 VMs currently running across the three nodes.
The problem we are facing, on two of the nodes, is that the network on some VMs gets stuck.
We cannot ping or SSH into the VMs from outside the network, but xm console shows the VMs themselves are fine. What confuses us is that only 2 VMs are affected, and on 2 different nodes at that.

Can someone please point me in the right direction? Please also let me know if more information is required.
> From: xen-users-bounces@lists.xen.org [xen-users-bounces@lists.xen.org] On Behalf Of DN Singh [dnsingh.dns@gmail.com]
> Sent: 27 August 2012 11:53
> To: xen-users@lists.xensource.com
> Subject: [Xen-users] Weird issue on heavy server
>
> Hello Group,
>
> We have configured 3 servers with below configuration:
>
> 2 Processors: E5620
> 32GB Ram
> 1x140GB SAS drive
> 3x1TB Sata drive
>
> We are planning to create around 25-30 VMs with xen (Standard version from Centos 5.8 repo). There are 20 VMs currently running on all three nodes.
> The problem that we are facing on two of them is, network on some VMs gets stuck.
> We cannot ping or SSH the VMs from outside the network, but xm console shows the VMs are fine. The only confusion is why only on 2 VMs, and that too on 2 nodes?
>
> Can someone please direct me in the right direction? Also, please let me know if more information is required.

Hello there, just a shot in the dark - do they have unique MAC addresses on their virtual NICs?

Regards
Matej
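For what it's worth, something along these lines on the dom0 should confirm whether the MACs are unique (the guest name, config path and bridge name below are placeholders - adjust them for your setup):

    # MAC declared in the guest's config file
    grep -i mac /etc/xen/<guest>

    # MAC actually in use on the running guest's vif
    xm network-list <guest>

    # MACs the bridge has learned and the port each one sits on;
    # a MAC flapping between ports would hint at a duplicate
    brctl showmacs <bridge>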
On 27/08/12 04:53, DN Singh wrote:
> The problem that we are facing on two of them is, network on some VMs
> gets stuck.
> We cannot ping or SSH the VMs from outside the network, but xm console
> shows the VMs are fine. The only confusion is why only on 2 VMs, and
> that too on 2 nodes?
Local firewall? "iptables -L -v" should answer that.

Are they PV or HVM?

What about the "inside network"? Can a VM (controlled via the console) ping, or do anything else? Does it behave differently while communicating with the outside world versus another VM on the same host?

> Can someone please direct me in the right direction? Also, please let me
> know if more information is required.
Check with tcpdump whether you see traffic on your bridge while trying to communicate with the VMs.

--
Alexandre Kouznetsov
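Roughly, from inside the affected guest (reached via "xm console <guest>"; the IPs below are placeholders):

    # any REJECT/DROP rules, or drop counters that keep increasing?
    iptables -L -v -n

    # reachability on the local bridge/subnet
    ping -c 3 <default-gateway>

    # another VM on the same host
    ping -c 3 <other-VM-on-this-host>

    # the outside world
    ping -c 3 <some-external-IP>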
Alexandre,

I think I've run into this as well, on 4.1.2. As far as I could tell, it was an issue with the bridged Ethernet shared by my domUs (at the time, all HVM). Each domain had its own MAC address. The domains could still communicate amongst themselves internally on the Xen server, but nothing was being forwarded between the bridge and the physical network. A "service network restart" on my Fedora 13 dom0 (2.6.32.40 kernel) fixed the problem, but I still have no idea why it did that in the first place.

I speculate that it might have had something to do with the allocation of vif/tap devices and the fact that they don't seem to be properly recycled when enough VMs are shut down and restarted on the system. Or at least, they always seem to increment as far as I can remember.

Andy
--
Sent from my iPhone appendage

On Aug 27, 2012, at 11:48, Alexandre Kouznetsov <alk@ondore.com> wrote:

> On 27/08/12 04:53, DN Singh wrote:
>> The problem that we are facing on two of them is, network on some VMs
>> gets stuck.
>> We cannot ping or SSH the VMs from outside the network, but xm console
>> shows the VMs are fine. The only confusion is why only on 2 VMs, and
>> that too on 2 nodes?
> Local firewall? "iptables -L -v" should answer that.
>
> Are they PV or HVM?
>
> What about the "inside network"? Can a VM (controlled via the console) ping, or do anything else? Does it behave differently while communicating with the outside world versus another VM on the same host?
>
>> Can someone please direct me in the right direction? Also, please let me
>> know if more information is required.
> Check with tcpdump whether you see traffic on your bridge while trying to communicate with the VMs.
>
> --
> Alexandre Kouznetsov
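If it happens again, it may be worth capturing the bridge state before restarting the network. A rough sketch, assuming a standard Linux bridge setup (the bridge name is whatever your hosts use, e.g. xenbr0 or virbr0):

    # which vif/tap devices are currently attached to the bridge
    brctl show

    # MACs the bridge has learned and the port each one sits on
    brctl showmacs <bridge>

    # vif devices still present in dom0; stale ones left over from
    # guests that were already shut down would show up here
    ip link show | grep vif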
Hello All,

1) They all have unique MAC addresses (generated by the virt-clone utility).
2) The VM can ping the outside world, but it cannot be pinged from outside (checked using xm console).
3) There is no firewall in the VM.
4) Dom0 has the default CentOS firewall, as below:
--------------------------------
Chain INPUT (policy ACCEPT)
target     prot opt source              destination
ACCEPT     udp  --  anywhere            anywhere            udp dpt:domain
ACCEPT     tcp  --  anywhere            anywhere            tcp dpt:domain
ACCEPT     udp  --  anywhere            anywhere            udp dpt:bootps
ACCEPT     tcp  --  anywhere            anywhere            tcp dpt:bootps

Chain FORWARD (policy ACCEPT)
target     prot opt source              destination
ACCEPT     all  --  anywhere            192.168.122.0/24    state RELATED,ESTABLISHED
ACCEPT     all  --  192.168.122.0/24    anywhere
ACCEPT     all  --  anywhere            anywhere
REJECT     all  --  anywhere            anywhere            reject-with icmp-port-unreachable
REJECT     all  --  anywhere            anywhere            reject-with icmp-port-unreachable
ACCEPT     all  --  anywhere            anywhere            PHYSDEV match --physdev-in vif1.0
ACCEPT     all  --  anywhere            anywhere            PHYSDEV match --physdev-in vif2.0
ACCEPT     all  --  anywhere            anywhere            PHYSDEV match --physdev-in vif3.0
ACCEPT     all  --  anywhere            anywhere            PHYSDEV match --physdev-in vif4.0
ACCEPT     all  --  anywhere            anywhere            PHYSDEV match --physdev-in vif5.0
ACCEPT     all  --  anywhere            anywhere            PHYSDEV match --physdev-in vif6.0
ACCEPT     all  --  anywhere            anywhere            PHYSDEV match --physdev-in vif7.0
ACCEPT     all  --  anywhere            anywhere            PHYSDEV match --physdev-in vif8.0
ACCEPT     all  --  anywhere            anywhere            PHYSDEV match --physdev-in vif9.0
ACCEPT     all  --  anywhere            anywhere            PHYSDEV match --physdev-in vif10.0
ACCEPT     all  --  anywhere            anywhere            PHYSDEV match --physdev-in vif16.0
ACCEPT     all  --  anywhere            anywhere            PHYSDEV match --physdev-in vif17.0
ACCEPT     all  --  anywhere            anywhere            PHYSDEV match --physdev-in vif18.0
ACCEPT     all  --  anywhere            anywhere            PHYSDEV match --physdev-in vif19.0
ACCEPT     all  --  anywhere            anywhere            PHYSDEV match --physdev-in vif20.0
ACCEPT     all  --  anywhere            anywhere            PHYSDEV match --physdev-in vif21.0
------------------------------------
5) The current VMs are as below:
-------------------------------------
Name                          ID   Mem(MiB) VCPUs      State   Time(s)
Domain-0                       0      11633    16     r-----   9196.0
vm01                           1       1024     1     -b----   1652.9
vm02                           2       1024     1     -b----   1646.2
vm03                           3       1024     1     -b----   1644.9
vm04                           4       1024     1     -b----   1651.9
vm05                           5       1024     1     -b----   1653.6
vm06                           6       1024     1     -b----   2146.9
vm07                           7       1024     1     -b----   1270.6
vm08                           8       1024     1     -b----   1576.8
vm09                           9       1024     1     -b----   1657.8
vm10                          10       1024     1     -b----    595.5
vm11                          21       1024     1     -b----      6.0
vm16                          16       1024     1     -b----    476.5
vm17                          17       1024     1     -b----    479.7
vm18                          18       1024     1     -b----    479.9
vm19                          19       1024     1     -b----    480.1
vm20                          20       1024     1     -b----    478.1
-----------------------------
The problem is currently on vm11, whereas all other VMs are working fine. (A couple of commands to cross-check vm11's vif against the rules above are sketched after the quoted messages below.)

On Tue, Aug 28, 2012 at 1:00 AM, Andrew Pitman <andrewpitman@comcast.net> wrote:

> Alexandre,
>
> I think I've run into this as well, on 4.1.2. As far as I could tell, it
> was an issue with the bridged Ethernet shared by my domUs (at the time,
> all HVM). Each domain had its own MAC address. The domains could still
> communicate amongst themselves internally on the Xen server, but nothing
> was being forwarded between the bridge and the physical network. A
> "service network restart" on my Fedora 13 dom0 (2.6.32.40 kernel) fixed
> the problem, but I still have no idea why it did that in the first place.
>
> I speculate that it might have had something to do with the allocation of
> vif/tap devices and the fact that they don't seem to be properly recycled
> when enough VMs are shut down and restarted on the system.
> Or at least, they always seem to increment as far as I can remember.
>
> Andy
> --
> Sent from my iPhone appendage
>
> On Aug 27, 2012, at 11:48, Alexandre Kouznetsov <alk@ondore.com> wrote:
>
>> On 27/08/12 04:53, DN Singh wrote:
>>> The problem that we are facing on two of them is, network on some VMs
>>> gets stuck.
>>> We cannot ping or SSH the VMs from outside the network, but xm console
>>> shows the VMs are fine. The only confusion is why only on 2 VMs, and
>>> that too on 2 nodes?
>> Local firewall? "iptables -L -v" should answer that.
>>
>> Are they PV or HVM?
>>
>> What about the "inside network"? Can a VM (controlled via the console) ping, or do anything else? Does it behave differently while communicating with the outside world versus another VM on the same host?
>>
>>> Can someone please direct me in the right direction? Also, please let me
>>> know if more information is required.
>> Check with tcpdump whether you see traffic on your bridge while trying to communicate with the VMs.
>>
>> --
>> Alexandre Kouznetsov
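A couple of commands that might help cross-check the data above (reading the xm list output, vm11 is domain ID 21, so its backend device should be vif21.0, but it is worth confirming):

    # which vif index and MAC actually belong to vm11
    xm network-list vm11

    # per-rule packet/byte counters: is the FORWARD rule for vif21.0 ever hit?
    iptables -L FORWARD -v -n | grep -i vif21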
Hi.

On 28/08/12 06:09, DN Singh wrote:
> 2) The VM can ping the outside world, but it cannot be pinged from
> outside (checked using xm console).
I would assume this means it is OK at the Ethernet level and below.

A silly question, but when pinging from outside, are you using vm11's correct IP?

> 3) There is no firewall in the VM.
What OS does the VM run, anyway? Can you show us its "iptables -L -v"?

> 4) Dom0 has the default CentOS firewall, as below:
> [...]
As far as I can see, vm11 should not behave differently from the others from the firewall's point of view.

> ACCEPT     all  --  anywhere            anywhere
> REJECT     all  --  anywhere            anywhere
This looks a little bit weird, but it should still work.

What about tcpdump, on Dom0's bridge and on vm11? Can it see any incoming traffic for vm11 at all?

--
Alexandre Kouznetsov
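Something along these lines, for example (the bridge name and vm11's IP are placeholders; run the last one inside vm11 via xm console, if tcpdump is installed there):

    # on Dom0: do the echo requests reach the bridge at all?
    tcpdump -n -i <bridge> icmp and host <vm11-ip>

    # on Dom0: do they make it onto vm11's vif?
    tcpdump -n -i vif21.0 icmp

    # inside vm11: does the guest see the request and send a reply?
    tcpdump -n -i eth0 icmp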