Hello,

I'm trying to understand Xen/OVS VLAN offloading. In my testbed:

# ovs-vsctl show
    Bridge vlannet
        Port "vif5.0"
            tag: 1002
            Interface "vif5.0"
        Port vlannet-bond
            Interface "vlannet2"
            Interface "vlannet1"
        Port vlannet
            Interface vlannet
                type: internal
        Port "vif3.0"
            tag: 1002
            Interface "vif3.0"
    ovs_version: "1.10.0"

1) Xen Dom0 HW interface:

# ethtool -k vlannet1
..
rx-vlan-offload: on
tx-vlan-offload: on
rx-vlan-filter: on [fixed]
..

2) OVS system interface:

# ethtool -k ovs-system
..
rx-vlan-offload: off [fixed]
tx-vlan-offload: on
rx-vlan-filter: off [fixed]
..

3) DomU netback interface:

# ethtool -k vif5.0
..
rx-vlan-offload: off [fixed]
tx-vlan-offload: off [fixed]
rx-vlan-filter: off [fixed]
..

As I see it, VLAN offloading is partially implemented in OVS and not
implemented in Xen, which means VLAN-tagged traffic inside a VM will add
latency. Does anyone have info about OVS->VM VLAN offloading
configuration?

Thanks.

--
Best regards,
Eugene Istomin

_______________________________________________
Xen-users mailing list
Xen-users@lists.xen.org
http://lists.xen.org/xen-users
On Fri, 2013-05-31 at 06:53 +0300, Eugene Istomin wrote:
> As i see, VLAN offloading is partially implemented in OVS and didn't
> implemented in Xen means VLAN tagged traffic inside VM will make
> additional latency.

I'm not sure what you mean here, but it looks to me like you have VLAN
tags enabled on the VIF device in dom0, which means that the guest will
see frames without the VLAN headers.

VLAN offload on the VIF would only matter if you were using trunk ports
on the vswitch, which I don't think you are. (Although your
configuration was a bit hard to read due to being posted as HTML and
then mangled somewhere along the line.)

Ian
Ian,

in my testbed, traffic untagged by OVS has ~2 times more bandwidth than
traffic untagged by the VM.

All interfaces have MTU=9000.

1) Untagged by the VM (in OVS: "trunks: [1002]")

# atop from VM
NET | transport    | tcpi 22733 | tcpo 80191 | udpi 0 | udpo 4
NET | eth0    ---- | pcki 22736 | pcko 80243 | si 12 Mbps   | so 5777 Mbps
NET | vlan100 ---- | pcki 22738 | pcko 80245 | si 9495 Kbps | so 5775 Mbps

# atop from Dom0
CPU | sys 57% | irq 39%
cpu | sys 58% | irq 41%
..
NET | vif1.0 ---- | pcki 227727 | pcko 797502 | si 10 Mbps   | so 5743 Mbps
NET | vif2.0 ---- | pcki 797748 | pcko 227717 | si 5736 Mbps | so 12 Mbps

2) Untagged by OVS (in OVS: "tag: 1002")

# atop from VM
NET | transport | tcpi 8495 | tcpo 163131 | udpi 0 | udpo 0
NET | eth1 ---- | pcki 8495 | pcko 24718 | si 4485 Kbps | so 11 Gbps

# atop from Dom0
CPU | sys 96% | irq 4%
cpu | sys 96% | irq 4%
..
NET | vif1.1 ---- | pcki 75974  | pcko 247608 | si 3160 Kbps | so 11 Gbps
NET | vif2.1 ---- | pcki 247616 | pcko 75971  | si 11 Gbps   | so 4011 Kbps

As you can see, the second variant fully loads netback (sys) in Dom0,
while the first has a high irq share and much higher pcki/pcko counts.
Is this behavior correct?

--
Best regards,
Eugene Istomin

On Friday, May 31, 2013 08:41:29 AM Ian Campbell wrote:
> On Fri, 2013-05-31 at 06:53 +0300, Eugene Istomin wrote:
> > As i see, VLAN offloading is partially implemented in OVS and didn't
> > implemented in Xen means VLAN tagged traffic inside VM will make
> > additional latency.
>
> I'm not sure what you mean here, but it looks to me like you have VLAN
> tags enabled on the VIF device in dom0, which means that the guest will
> see frames without the VLAN headers.
>
> VLAN offload on the VIF would only matter if you were using trunk ports
> on the vswitch, which I don't think you are. (Although your
> configuration was a bit hard to read due to being posted as HTML and
> then mangled somewhere along the line.)
>
> Ian
On Fri, 2013-05-31 at 11:18 +0300, Eugene Istomin wrote:
> Ian,
>
> in my testbed untagged by OVS have ~2 times more bandwith than untagged by VM.
>
> All interfaces have MTU=9000
>
> 1) untagged by VM interface (in OVS like "trunks: [1002]")
>
> #atop from VM
> NET | transport    | tcpi 22733 | tcpo 80191 | udpi 0 | udpo 4
> NET | eth0    ---- | pcki 22736 | pcko 80243 | si 12 Mbps   | so 5777 Mbps
> NET | vlan100 ---- | pcki 22738 | pcko 80245 | si 9495 Kbps | so 5775 Mbps
>
> #atop from Dom0
> CPU | sys 57% | irq 39%
> cpu | sys 58% | irq 41%
> ..
> NET | vif1.0 ---- | pcki 227727 | pcko 797502 | si 10 Mbps   | so 5743 Mbps
> NET | vif2.0 ---- | pcki 797748 | pcko 227717 | si 5736 Mbps | so 12 Mbps
>
> 2) untagged by OVS interface (in OVS like "tag: 1002")
>
> #atop from VM
> NET | transport | tcpi 8495 | tcpo 163131 | udpi 0 | udpo 0
> NET | eth1 ---- | pcki 8495 | pcko 24718 | si 4485 Kbps | so 11 Gbps
>
> #atop from Dom0
> CPU | sys 96% | irq 4%
> cpu | sys 96% | irq 4%
> ..
> NET | vif1.1 ---- | pcki 75974  | pcko 247608 | si 3160 Kbps | so 11 Gbps
> NET | vif2.1 ---- | pcki 247616 | pcko 75971  | si 11 Gbps   | so 4011 Kbps
>
> As you can see second variant have full netback sys load in DOM0.
> Is this behavior correct?

I'd have expected the second case to be lower overhead, which it is. I
would expect the first case to be higher overhead, which it is, but it
seems a lot higher than I would have handwavily expected -- I'm not sure
why vlan offload on the vif device should matter to that extent.

Wei, what do you think of implementing vlan offload on the netback vif
devices? I don't necessarily mean over the wire protocol, although that
might be worth investigating separately, just at the netdev interface --
i.e. inserting the VLAN header into the ring as part of
xen_netbk_tx_build_gops() processing or whatever?

Ian.
Jesse from the OVS mailing list said:

"You are seeing the result of TSO not functioning in the presence of
vlans. This is one of the other offloads that I was referring to before,
but it's not directly the result of the features that you showed.
Regardless, this is a limitation of Xen and not something that OVS
affects."

I dug deeper into the testbed VM vlans and found:

# ethtool -k vlan1002
tx-checksum-ip-generic: off
generic-segmentation-offload: off
tx-nocache-copy: off
tx-checksumming: off

# ethtool -K vlan1002 tx on
Could not change any device features

Does Linux have offload on vnet vlan interfaces like VXLAN currently has
(http://lists.openwall.net/netdev/2013/02/16/2)?

--
Best regards,
Eugene Istomin

On Friday, May 31, 2013 09:46:10 AM Ian Campbell wrote:
> On Fri, 2013-05-31 at 11:18 +0300, Eugene Istomin wrote:
> > Ian,
> >
> > in my testbed untagged by OVS have ~2 times more bandwith than
> > untagged by VM.
> >
> > All interfaces have MTU=9000
> >
> > 1) untagged by VM interface (in OVS like "trunks: [1002]")
> >
> > #atop from VM
> > NET | transport    | tcpi 22733 | tcpo 80191 | udpi 0 | udpo 4
> > NET | eth0    ---- | pcki 22736 | pcko 80243 | si 12 Mbps   | so 5777 Mbps
> > NET | vlan100 ---- | pcki 22738 | pcko 80245 | si 9495 Kbps | so 5775 Mbps
> >
> > #atop from Dom0
> > CPU | sys 57% | irq 39%
> > cpu | sys 58% | irq 41%
> > ..
> > NET | vif1.0 ---- | pcki 227727 | pcko 797502 | si 10 Mbps   | so 5743 Mbps
> > NET | vif2.0 ---- | pcki 797748 | pcko 227717 | si 5736 Mbps | so 12 Mbps
> >
> > 2) untagged by OVS interface (in OVS like "tag: 1002")
> >
> > #atop from VM
> > NET | transport | tcpi 8495 | tcpo 163131 | udpi 0 | udpo 0
> > NET | eth1 ---- | pcki 8495 | pcko 24718 | si 4485 Kbps | so 11 Gbps
> >
> > #atop from Dom0
> > CPU | sys 96% | irq 4%
> > cpu | sys 96% | irq 4%
> > ..
> > NET | vif1.1 ---- | pcki 75974  | pcko 247608 | si 3160 Kbps | so 11 Gbps
> > NET | vif2.1 ---- | pcki 247616 | pcko 75971  | si 11 Gbps   | so 4011 Kbps
> >
> > As you can see second variant have full netback sys load in DOM0.
> > Is this behavior correct?
>
> I'd have expected the second case to be lower overhead, which it is. I
> would expect the first case to be higher overhead, which it is, but it
> seems a lot higher than I would have handwavily expected -- I'm not sure
> why vlan offload on the vif device should matter to that extent.
>
> Wei, what do you think of implementing vif offload on the netback vif
> devices? I don't necessarily mean over the wire protocol, although that
> might be worth investigating separately, just at the netdev interface --
> i.e. inserting the VLAN header into the ring as part of
> xen_netbk_tx_build_gops() processing or whatever?
>
> Ian.
Surely, by 'vnet' I meant 'vconfig'.

--
Best regards,
Eugene Istomin

On Friday, May 31, 2013 01:20:06 PM Eugene Istomin wrote:
> Jesse from the OVS mailing list said:
>
> "You are seeing the result of TSO not functioning in the presence of
> vlans. This is one of the other offloads that I was referring to
> before but it's not directly the result of the features that you
> showed. Regardless, this is a limitation of Xen and not something that
> OVS affects"
>
> I go deeper in testbed VM vlans and find:
>
> #ethtool -k vlan1002
> tx-checksum-ip-generic: off
> generic-segmentation-offload: off
> tx-nocache-copy: off
> tx-checksumming: off
>
> #ethtool -K vlan1002 tx on
> Could not change any device features
>
> Is linux have offload on vnet vlan interfaces like VXLAN currently have
> (http://lists.openwall.net/netdev/2013/02/16/2)?
>
> > On Fri, 2013-05-31 at 11:18 +0300, Eugene Istomin wrote:
> > > Ian,
> > >
> > > in my testbed untagged by OVS have ~2 times more bandwith than
> > > untagged by VM.
> > >
> > > All interfaces have MTU=9000
> > >
> > > 1) untagged by VM interface (in OVS like "trunks: [1002]")
> > >
> > > #atop from VM
> > > NET | transport    | tcpi 22733 | tcpo 80191 | udpi 0 | udpo 4
> > > NET | eth0    ---- | pcki 22736 | pcko 80243 | si 12 Mbps   | so 5777 Mbps
> > > NET | vlan100 ---- | pcki 22738 | pcko 80245 | si 9495 Kbps | so 5775 Mbps
> > >
> > > #atop from Dom0
> > > CPU | sys 57% | irq 39%
> > > cpu | sys 58% | irq 41%
> > > ..
> > > NET | vif1.0 ---- | pcki 227727 | pcko 797502 | si 10 Mbps   | so 5743 Mbps
> > > NET | vif2.0 ---- | pcki 797748 | pcko 227717 | si 5736 Mbps | so 12 Mbps
> > >
> > > 2) untagged by OVS interface (in OVS like "tag: 1002")
> > >
> > > #atop from VM
> > > NET | transport | tcpi 8495 | tcpo 163131 | udpi 0 | udpo 0
> > > NET | eth1 ---- | pcki 8495 | pcko 24718 | si 4485 Kbps | so 11 Gbps
> > >
> > > #atop from Dom0
> > > CPU | sys 96% | irq 4%
> > > cpu | sys 96% | irq 4%
> > > ..
> > > NET | vif1.1 ---- | pcki 75974  | pcko 247608 | si 3160 Kbps | so 11 Gbps
> > > NET | vif2.1 ---- | pcki 247616 | pcko 75971  | si 11 Gbps   | so 4011 Kbps
> > >
> > > As you can see second variant have full netback sys load in DOM0.
> > > Is this behavior correct?
> >
> > I'd have expected the second case to be lower overhead, which it is. I
> > would expect the first case to be higher overhead, which it is, but it
> > seems a lot higher than I would have handwavily expected -- I'm not sure
> > why vlan offload on the vif device should matter to that extent.
> >
> > Wei, what do you think of implementing vif offload on the netback vif
> > devices? I don't necessarily mean over the wire protocol, although that
> > might be worth investigating separately, just at the netdev interface --
> > i.e. inserting the VLAN header into the ring as part of
> > xen_netbk_tx_build_gops() processing or whatever?
> >
> > Ian.
On Fri, May 31, 2013 at 09:46:10AM +0100, Ian Campbell wrote:
> On Fri, 2013-05-31 at 11:18 +0300, Eugene Istomin wrote:
> > Ian,
> >
> > in my testbed untagged by OVS have ~2 times more bandwith than untagged by VM.
> >
> > All interfaces have MTU=9000
> >
> > 1) untagged by VM interface (in OVS like "trunks: [1002]")
> >
> > #atop from VM
> > NET | transport    | tcpi 22733 | tcpo 80191 | udpi 0 | udpo 4
> > NET | eth0    ---- | pcki 22736 | pcko 80243 | si 12 Mbps   | so 5777 Mbps
> > NET | vlan100 ---- | pcki 22738 | pcko 80245 | si 9495 Kbps | so 5775 Mbps
> >
> > #atop from Dom0
> > CPU | sys 57% | irq 39%
> > cpu | sys 58% | irq 41%
> > ..
> > NET | vif1.0 ---- | pcki 227727 | pcko 797502 | si 10 Mbps   | so 5743 Mbps
> > NET | vif2.0 ---- | pcki 797748 | pcko 227717 | si 5736 Mbps | so 12 Mbps
> >
> > 2) untagged by OVS interface (in OVS like "tag: 1002")
> >
> > #atop from VM
> > NET | transport | tcpi 8495 | tcpo 163131 | udpi 0 | udpo 0
> > NET | eth1 ---- | pcki 8495 | pcko 24718 | si 4485 Kbps | so 11 Gbps
> >
> > #atop from Dom0
> > CPU | sys 96% | irq 4%
> > cpu | sys 96% | irq 4%
> > ..
> > NET | vif1.1 ---- | pcki 75974  | pcko 247608 | si 3160 Kbps | so 11 Gbps
> > NET | vif2.1 ---- | pcki 247616 | pcko 75971  | si 11 Gbps   | so 4011 Kbps
> >
> > As you can see second variant have full netback sys load in DOM0.
> > Is this behavior correct?
>
> I'd have expected the second case to be lower overhead, which it is. I
> would expect the first case to be higher overhead, which it is, but it
> seems a lot higher than I would have handwavily expected -- I'm not sure
> why vlan offload on the vif device should matter to that extent.
>
> Wei, what do you think of implementing vif offload on the netback vif
> devices? I don't necessarily mean over the wire protocol, although that
> might be worth investigating separately, just at the netdev interface --
> i.e. inserting the VLAN header into the ring as part of
> xen_netbk_tx_build_gops() processing or whatever?

Possibly, by inserting the vlan tag into an extra info slot.

Wei.

> Ian.