Folks,

I'm playing with maxbw on links (as opposed to flows) in Crossbow, and I
have a couple of questions. First, the limits seem only advisory. The first
example has the main host talking to a zone that has 172.16.17.100
configured on znic0. When there is no maxbw, the throughput is as
expected; when maxbw is 55M the throughput only drops to 76 Mbps:

# netperf -H 172.16.17.100
TCP STREAM TEST from ::ffff:0.0.0.0 (0.0.0.0) port 0 AF_INET to ::ffff:172.16.17.100 (172.16.17.100) port 0 AF_INET
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec

49152  49152   49152    10.00    4018.07
#

# dladm set-linkprop -p maxbw=55M znic0
# dladm show-linkprop -p maxbw znic0
LINK         PROPERTY        PERM VALUE          DEFAULT        POSSIBLE
znic0        maxbw           rw   55             --             --
# netperf -H 172.16.17.100
TCP STREAM TEST from ::ffff:0.0.0.0 (0.0.0.0) port 0 AF_INET to ::ffff:172.16.17.100 (172.16.17.100) port 0 AF_INET
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec

49152  49152   49152    10.01      76.11
#

The second issue is that I can only get maxbw to work when a zone
is involved. In this example vphys0 is on a physical interface and
traffic is coming over a 100 Mbit switch. The throughput seems to
be the same regardless of maxbw:

# dladm show-linkprop -p maxbw vphys0
LINK         PROPERTY        PERM VALUE          DEFAULT        POSSIBLE
vphys0       maxbw           rw   --             --             --
# ifconfig vphys0
vphys0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 6
        inet 192.168.17.1 netmask ffffff00 broadcast 192.168.17.255
        ether 2:8:20:80:96:f2
#

client% netperf -H 192.168.17.1
TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.17.1 (192.168.17.1) port 0 AF_INET
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec

49152  16384   16384    10.01      87.74
client%

# dladm set-linkprop -p maxbw=55M vphys0
# dladm show-linkprop -p maxbw vphys0
LINK         PROPERTY        PERM VALUE          DEFAULT        POSSIBLE
vphys0       maxbw           rw   55             --             --
#

client% netperf -H 192.168.17.1
TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.17.1 (192.168.17.1) port 0 AF_INET
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec

49152  16384   16384    10.01      87.73
client%

Likewise, if the host connects to itself using VNICs built on physical
links or etherstubs, the maxbw setting seems to have no effect. Only
when I cross zones do I see it working.

Thanks for any help,
swagman
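For reference, a minimal sketch of how a capped VNIC like znic0 might be
created in the first place (the etherstub name is a placeholder; znic0
could equally well be built over a physical link):

# dladm create-etherstub stub0              # assumed underlying link; a physical NIC works too
# dladm create-vnic -l stub0 znic0          # the VNIC later given to the zone
# dladm set-linkprop -p maxbw=55M znic0     # cap the link at 55 Mbps
# dladm show-linkprop -p maxbw znic0        # verify the property took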
Patrick,

The first issue is the inaccuracy of the bandwidth limit, particularly
at low bandwidth values. I'll double check with Sunay, who mentioned
the other day a fix in his workspace to be pushed out soon.

The second one is the fact that you need to have zones with exclusive
stacks in order to go through the VNICs for communication within the
same machine. This works as designed. If the two communicating
processes are in the same zone (global or not), or part of the shared
IP stack (in the global zone or a non-exclusive zone), then IP does
its best to communicate via the shortest path, which is the loopback
interface. Packets are looped back at the IP level and do not leave the
stack in this case.

Kais.
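For anyone reproducing the exclusive-stack case Kais describes, a minimal
sketch of handing a VNIC to an exclusive-IP zone (the zone name and
zonepath are placeholders):

# zonecfg -z testzone
zonecfg:testzone> create
zonecfg:testzone> set zonepath=/zones/testzone
zonecfg:testzone> set ip-type=exclusive
zonecfg:testzone> add net
zonecfg:testzone:net> set physical=znic0
zonecfg:testzone:net> end
zonecfg:testzone> verify
zonecfg:testzone> commit
zonecfg:testzone> exit

Inside the zone, znic0 is then plumbed with 172.16.17.100. With a
shared-stack zone the same traffic would be looped back at the IP layer
and never hit the VNIC, so maxbw would not apply.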
Kais,

> The first issue is the inaccuracy of the bandwidth limit, particularly
> at low bandwidth values. I'll double check with Sunay, who mentioned
> the other day a fix in his workspace to be pushed out soon.

Thanks, I'll wait to hear.

> The second one is the fact that you need to have zones with exclusive
> stacks in order to go through the VNICs for communication within the
> same machine. This works as designed. If the two communicating
> processes are in the same zone (global or not), or part of the shared
> IP stack (in the global zone or a non-exclusive zone), then IP does
> its best to communicate via the shortest path, which is the loopback
> interface. Packets are looped back at the IP level and do not leave the
> stack in this case.

OK, that makes sense within the same machine, but in the example I gave
I was hitting a VNIC on a physical link from a different physical client
across a 100 Mbit physical switch, and the throughput was about 87 Mbps
regardless of maxbw on the VNIC. Could this also be the low bandwidth
problem? What is considered low bandwidth?

Thanks,
Patrick
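One way to see whether the test traffic actually traverses the VNIC
rather than some shortcut path (a sketch; run it during the netperf
test, and treat the exact statistics option as an assumption if your
build differs):

# snoop -d vphys0                    # packets should show up here while netperf runs
# dladm show-link -s vphys0          # byte/packet counters on the VNIC should be climbing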
Patrick,

On 06/10/09 21:46, Patrick J. McEvoy wrote:
> OK, that makes sense within the same machine, but in the example
> I gave I was hitting a VNIC on a physical link from a different
> physical client across a 100 Mbit physical switch, and the
> throughput was about 87 Mbps regardless of maxbw on the VNIC.
> Could this also be the low bandwidth problem? What is considered
> low bandwidth?

Getting 87 Mbps when specifying 50 or 100 is a bit too high.
I checked with a VNIC over e1000g. When I set 50 Mbps on the VNIC
I get 47.98 Mbps. When I set no maxbw on the VNIC, I get almost the
link speed (96.66 Mbps out of the switch's 100 Mbps).

Just to make sure: your physical link isn't plumbed, up, and running
with an address in the same subnet as the VNIC's, right? IP on the
client could be shooting at the physical NIC's address, in which case
no packets would be going through the VNIC.

Kais.
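A quick check along the lines Kais suggests (a sketch, assuming vphys0
is a VNIC over the physical link and the client is Unix-like): compare
the MAC the client resolved for 192.168.17.1 with the VNIC's MAC.

opensol# dladm show-vnic vphys0           # VNIC's MAC (2:8:20:80:96:f2 in the ifconfig output above)
client%  arp -a | grep 192.168.17.1       # MAC the client actually resolved for that address

If the two differ, the client is talking to the physical NIC and the
VNIC's maxbw never comes into play.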
> Getting 87 Mbps when specifying 50 or 100 is a bit too high.
> I checked with a VNIC over e1000g. When I set 50 Mbps on the VNIC
> I get 47.98 Mbps. When I set no maxbw on the VNIC, I get almost the
> link speed (96.66 Mbps out of the switch's 100 Mbps).

I just upgraded my network to Gigabit, and I'm limiting VNICs to 100 or
500 Megabit, and I'm getting reasonable results. I can go back and try
below 100 Mbit.

> Just to make sure: your physical link isn't plumbed, up, and running
> with an address in the same subnet as the VNIC's, right? IP on the
> client could be shooting at the physical NIC's address, in which case
> no packets would be going through the VNIC.

My physical link was plumbed, up, and configured with an IP address on
the same subnet as the VNIC, but the client was firing at the VNIC's IP
address. I am assuming this means data goes through the VNIC's stack?
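One way to take the physical link out of the picture (a sketch; bge1
here stands in for whatever the underlying physical NIC is actually
called): unplumb it, or at least remove its address on that subnet, so
only the VNIC answers for 192.168.17.0/24.

opensol# ifconfig bge1 unplumb        # assumed physical link name
opensol# ifconfig -a                  # confirm vphys0 is now the only interface on 192.168.17.0/24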
Things are working better now (this is with the physical interface not
plumbed). Before setting maxbw:

opensol# dladm create-vnic -l bge1 vbge10
opensol# ifconfig vbge10 plumb
opensol# ifconfig vbge10 10.10.17.1/24 up
opensol# netserver &

client% ifconfig eth1 10.10.17.2/24 up
client% netperf -H 10.10.17.1
TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 10.10.17.1 (10.10.17.1) port 0 AF_INET
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec

49152   8196    8196    10.00     940.64
client%

After setting maxbw to 50 Mbit:

opensol# dladm set-linkprop -p maxbw=50M vbge10
opensol#

client% netperf -H 10.10.17.1
TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 10.10.17.1 (10.10.17.1) port 0 AF_INET
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec

49152   8196    8196    10.02      47.16
client%

Not sure if moving to better hardware made the difference.
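For completeness, a couple of follow-up commands that fit here (a
sketch): confirming the cap on the VNIC and clearing it again for an
unthrottled re-run.

opensol# dladm show-linkprop -p maxbw vbge10     # should report VALUE 50
opensol# dladm reset-linkprop -p maxbw vbge10    # drop the cap to retest at line rate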