Hi,

Hopefully this won't be too off-topic (I've seen both bonding & vlan mentioned on the list, but not really together).

I've tried to get bonding (2 x 100Mb EEPro, though I will want to try it on 1000BaseT) and vlans to work together, but without luck. I can get them working fine individually (seemingly at least - I didn't try bursting on the bonded port). However, when I bond the ports together and then run vlans on top, it doesn't seem to work (although there are no errors when configuring the interfaces).

FWIW, this is:
* RedHat 7.3 ("customised" RH73 kernel - 2.4.18)
* Intel EEPro 100Mb dual-port NIC
* Extreme Summit4 switch.

Any suggestions gratefully accepted.

Ivan
--
Ivan Beveridge <ivan@dreamtime.org> <ivan@dreamtim.demon.co.uk>
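For reference, the sort of configuration being attempted - bonding two ports, then a vlan on top of the bond - looks roughly like this on a 2.4 kernel (interface names, the vlan id, and addresses here are placeholders, not the poster's actual values):

    modprobe bonding
    modprobe 8021q
    ifconfig bond0 192.168.1.10 netmask 255.255.255.0 up
    ifenslave bond0 eth0 eth1
    vconfig add bond0 10                                      # creates bond0.10
    ifconfig bond0.10 192.168.10.1 netmask 255.255.255.0 up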
With Intel cards, use the e100 driver instead. Intel also has another piece of software called IANS (Intel Advanced Network Services). It will allow you to use up to 8 Intel cards as one, or do load balancing, etc. Up to 8 ethernet interfaces can be grouped together. It's awesome.

I use a dual nic with a Cisco switch, and I am able to send out one interface and receive on the other. Although that config relies on my switch just as much as the IANS driver, load balancing does not.

Here is a link to download both:
http://downloadfinder.intel.com/scripts-df/filter_results.asp?strOSs=39&strTypes=PLU%2CDRV%2CSPH%2CUTL&ProductID=416&OSFullName=Linux*&submit=Go%21

Actually, the latest e100 driver (2.x) I could not compile on RedHat 7.3. I have emailed Intel, but I have not heard back yet, so get the 1.8.x version:
ftp://aiedownload.intel.com/df-support/2896/eng/
ftp://aiedownload.intel.com/df-support/2896/eng/e100-1.8.38.tar.gz
ftp://aiedownload.intel.com/df-support/2896/eng/e100-2.0.30.tar.gz

With IANS the latest version compiles and works fine:
ftp://aiedownload.intel.com/df-support/2895/eng/
ftp://aiedownload.intel.com/df-support/2895/eng/ians-1.7.17.tar.gz

Make sure to read the READMEs. For further clarification, here are some examples of how to configure things after compiling and installing both IANS and the updated e100 driver.

Here is my IANS config in /etc/ians/ians.conf:

    TEAM deth
    TEAMING_MODE FEC
    VLAN_MODE off
    MEMBER eth0
    PRIORITY no_priority
    MEMBER eth1
    PRIORITY no_priority
    VADAPTER vdeth0

I then have the following in my /etc/modules.conf:

    # Intel Pro100+ Dual Port Adapter
    alias eth0 e100
    alias eth1 e100
    options e100 e100_speed_duplex=4,4 IFS=0,0
    alias vdeth0 ians
    post-install ians /usr/sbin/ianscfg -r

I then go to /etc/sysconfig/network-scripts and get rid of all the ifcfg-eth* files. You can save one, as you will want to rename it to ifcfg-vdeth0 (or whatever you named your new dual ethernet interface). Make sure its IP address etc. is what you want for your new dual ethernet interface. If you do not do this, the networking service will try to bring up the interfaces as normal when it starts. This way it starts the virtual dual ethernet interface instead, and loads what it needs.

Now if you look at my IANS config, you will see FEC. That is Fast EtherChannel, which I use with my Cisco switch to achieve bonding of the two interfaces. So something on the other end of the two cables must support this; otherwise load balancing is the only thing you get.

Make sure to read the READMEs, but the above should help out a lot. Do not try to use any driver other than e100. Use e100 - it screams. You can also try other things like the Beowulf channel bonding, but I have not been able to replicate the behavior of IANS with it. Make sure to use fixed speeds on both ends, and use the e100 module args to adjust this; it's covered in the READMEs.

Good luck, hope that helps.
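P.S. A hypothetical ifcfg-vdeth0 along the lines described above (the address is a placeholder):

    DEVICE=vdeth0
    ONBOOT=yes
    BOOTPROTO=none
    IPADDR=192.168.1.10
    NETMASK=255.255.255.0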
--
Sincerely,
William L. Thomson Jr.
Support Group
Obsidian-Studios Inc.
439 Amber Way
Petaluma, Ca. 94952
Phone 707.766.9509
Fax 707.766.8989
http://www.obsidian-studios.com
On Sat, Jun 22, 2002 at 11:20:51AM -0700, William L. Thomson Jr. wrote:
> With Intel cards, use the e100 driver instead. Intel also has another
> piece of software called IANS (Intel Advanced Network Services).
> It will allow you to use up to 8 Intel cards as one, or do load
> balancing, etc. Up to 8 ethernet interfaces can be grouped together.
> It's awesome.
>
> I use a dual nic with a Cisco switch, and I am able to send out one
> interface and receive on the other. Although that config relies on my
> switch just as much as the IANS driver, load balancing does not.

Hrm - wouldn't it be better to bond them both and use full-duplex to get 200Mbps each way? Just curious :)

[further stuff about e100 & IANS]

Thanks for that _very_ comprehensive help - I'll have a look at that lot :^)

However, I'm looking to use a dual gig card (something like a SysKonnect SK-9844), with the ports bonded, and then running vlans over the resulting bonded channel.
http://www.syskonnect.com/syskonnect/products/b0101_ethernet_9844.html

WRT channel bonding, I was using ifenslave (using redhat's ifcfg- scripts and the bonding module), although I see there is an 'ethernet link aggregation' (802.3ad) driver, which the syskonnect help seems to point to:
http://www.st.rim.or.jp/~yumo/#veth
I've no idea what the difference is between this and bonding.

Anyway (hopefully not pushing my luck too much here :) has anyone used the syskonnect with some kind of channel bonding/aggregation and vlans?

Cheers
Ivan
--
Ivan Beveridge <ivan@dreamtime.org> <ivan@dreamtim.demon.co.uk> <iabeveridge@iee.org>
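For reference, the stock 2.4 bonding driver mentioned above is configured through module options rather than a userland tool. A minimal sketch, assuming this kernel's driver version accepts the usual mode and miimon parameters:

    # /etc/modules.conf
    alias bond0 bonding
    options bond0 mode=0 miimon=100    # mode 0 = round-robin; poll link state every 100 ms

As for the difference: the 802.3ad driver negotiates the aggregate with the switch via LACP, whereas plain bonding just stripes frames across the slaves and relies on the switch being statically configured to match.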
On Sat, 2002-06-22 at 15:07, Ivan A. Beveridge wrote:
> On Sat, Jun 22, 2002 at 11:20:51AM -0700, William L. Thomson Jr. wrote:
> > I use a dual nic with a Cisco switch, and I am able to send out one
> > interface and receive on the other. Although that config relies on my
> > switch just as much as the IANS driver, load balancing does not.
>
> Hrm - wouldn't it be better to bond them both and use full-duplex to get
> 200Mbps each way? Just curious :)

Using IANS I have exactly that: one interface in full duplex for receiving and one in full duplex for sending. In theory I should be getting what you mention - 200Mbps each way - but since each line can both send and receive, I should have 200Mbps send and 200Mbps receive, for a total of 400Mbps.

IANS also works with gigabit cards. In fact I remember reading, or discussing with an Intel support guy, that IANS can be used with other ethernet cards as long as one card in each group is an Intel card.

> [further stuff about e100 & IANS]
>
> Thanks for that _very_ comprehensive help - I'll have a look at that lot :^)

No problem, I thought I would include some actual things you could use instead of just a recommendation. If you are using Intel cards, which I highly recommend, then IANS is a must if you want to use more than one interface in a machine as a single interface.

> However, I'm looking to use a dual gig card (something like a SysKonnect
> SK-9844), with the ports bonded, and then running vlans over the resulting
> bonded channel.

Your original post did not mention that; you mentioned a dual Intel nic. Either way, if you have one Intel nic you may still be able to achieve what you want. IANS does support gigabit, but all cards in the group must be the same speed. No mixing 100s with 10s or 1000s: all 100s, all 10s, or all 1000s.

I would stick to Intel nics. They have yet to let me down, and you can get them retail, or dirt cheap used on Ebay - like $10 for a Pro 100 adapter (the mini ones), and gig and dual cards for between $50-$100.

You should be able to use vlans with IANS. Read the README.

> http://www.syskonnect.com/syskonnect/products/b0101_ethernet_9844.html

I will have to look into this. But I am an Intel nic guy. Very biased. :)

> WRT channel bonding, I was using ifenslave (using redhat's ifcfg- scripts
> and the bonding module), although I see there is an 'ethernet link
> aggregation' (802.3ad) driver, which the syskonnect help seems to point to:
> http://www.st.rim.or.jp/~yumo/#veth
> I've no idea what the difference is between this and bonding.

Correct me if I am wrong, but ifenslave is a tool from the Beowulf project. I remember downloading it and trying to get it to work on my RaQ XTR, but I could not. All the info was duplicated - IP, subnet, etc. It all looked good from ifconfig, but it did not work like IANS, or at all; only one interface was working. The Cobalt uses a weird-chipset ethernet card built into the motherboard; it uses the dp83815 driver, and I believe that is the chipset as well. Either way, I played around with ifenslave and did not have any luck. I am not sure if you have to have the bonding driver compiled into or loaded in your kernel; I would imagine that might have been my problem.
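For what it's worth, a quick way to check whether the bonding driver is actually loaded and has claimed the slaves (the /proc path here is from memory for 2.4 kernels, so treat it as an assumption):

    lsmod | grep bonding
    cat /proc/net/bond0/info    # should list the bonding mode and both slave interfaces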
But I am not going to attempt to replace the Cobalt kernel. I will eventually be phasing that machine out for one I can upgrade. :) That machine will have Intel nics or a gigabit Intel nic, so it won't be a problem.

> Anyway (hopefully not pushing my luck too much here :) has anyone used the
> syskonnect with some kind of channel bonding/aggregation and vlans?

Not with the particular card you want to use, maybe. But otherwise it should be something that can be done. Like I said, I am doing it, and have for a while, with a dual nic card in one machine and two ethernet cards in another. Both are basically the same when all is said and done. Reading from and writing to those machines is faster than using just a single line, so something is able to aggregate the bandwidth. I am not sure if it's a full 400Mbps, but it is definitely over 200Mbps (a single card at full duplex).

--
Sincerely,
William L. Thomson Jr.
Support Group
Obsidian-Studios Inc.
439 Amber Way
Petaluma, Ca. 94952
Phone 707.766.9509
Fax 707.766.8989
http://www.obsidian-studios.com
On Sat, Jun 22, 2002 at 05:10:06PM -0700, William L. Thomson Jr. wrote:
> On Sat, 2002-06-22 at 15:07, Ivan A. Beveridge wrote:
> > However, I'm looking to use a dual gig card (something like a SysKonnect
> > SK-9844), with the ports bonded, and then running vlans over the resulting
> > bonded channel.
>
> Your original post did not mention that; you mentioned a dual Intel nic.

Apologies - it was mentioned in passing (without stating the model of the 1000BaseT cards [actually that was wrong - they would be fiber]). I made the assumption (oops ;) that whatever worked with 2+ 100BaseTX cards would work with other cards (eg ifenslave/bonding + vlan).

> Either way, if you have one Intel nic you may still be able to achieve
> what you want. IANS does support gigabit, but all cards in the group must
> be the same speed. No mixing 100s with 10s or 1000s: all 100s, all 10s,
> or all 1000s.

The reason for the syskonnect (prob. SK-9844) is that it is dual gig-e fiber (for higher port density). I've not seen any other dual gig-e cards, and have used these successfully in "normal" use.

> > WRT channel bonding, I was using ifenslave (using redhat's ifcfg- scripts
> > and the bonding module), although I see there is an 'ethernet link
> > aggregation' (802.3ad) driver, which the syskonnect help seems to point to:
> > http://www.st.rim.or.jp/~yumo/#veth
> > I've no idea what the difference is between this and bonding.
>
> Correct me if I am wrong, but ifenslave is a tool from the Beowulf
> project.

It's (now) in the "iputils" package that includes ping, tracepath, rdisc, etc. The RedHat rc scripts will use ifenslave if you have the relevant entries (MASTER / SLAVE etc.) in /etc/sysconfig/network-scripts/ifcfg-*.

> I remember downloading it and trying to get it to work on my
> RaQ XTR, but I could not. All the info was duplicated - IP, subnet, etc.
> It all looked good from ifconfig, but it did not work like IANS, or at
> all; only one interface was working.

As I mentioned, I have not actually tried to burst more than one interface's worth of bandwidth with it, but traffic did go across. As you say, possibly over only one interface.

> > Anyway (hopefully not pushing my luck too much here :) has anyone used the
> > syskonnect with some kind of channel bonding/aggregation and vlans?
>
> Not with the particular card you want to use, maybe. But otherwise it
> should be something that can be done.

Yeah - the page says it can, with the external software mentioned (vlan and the 802.3ad stuff). Perhaps I'll try to give that a go, but I would be interested to hear from anyone who has a working generic solution with any cards (or the SK-9844 in particular). Silly timescales :(

Many thanks for the information William/Bill - I'll have a look at that IANS stuff, as I'm sure I'll need multi-port aggregation on EEPro100s at some point soon :^)

Ivan
--
Ivan Beveridge <ivan@dreamtime.org> <ivan@dreamtim.demon.co.uk>
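For reference, the MASTER/SLAVE ifcfg- entries mentioned above look roughly like this on RedHat (the address is a placeholder):

    # /etc/sysconfig/network-scripts/ifcfg-bond0
    DEVICE=bond0
    IPADDR=192.168.1.10
    NETMASK=255.255.255.0
    ONBOOT=yes
    BOOTPROTO=none

    # /etc/sysconfig/network-scripts/ifcfg-eth0 (likewise ifcfg-eth1)
    DEVICE=eth0
    MASTER=bond0
    SLAVE=yes
    ONBOOT=yes
    BOOTPROTO=none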
Hello,

I've been trying to manage interactive traffic with HTB [v.2]. I prepared a proper class hierarchy with extra bandwidth and high priority for interactive traffic. I tried two variants: a "classic" one with a root qdisc and class hierarchy, and a second one with:

    root_qdisc -> rated_class -> parent_qdisc -> class_hierarchy

as described in the HTB manual in section "6. Priorizing bandwidth share":

    # qdisc for delay simulation
    tc qdisc add dev eth0 root handle 100: htb
    tc class add dev eth0 parent 100: classid 100:1 htb rate 90kbps

    # real measured qdisc
    tc qdisc add dev eth0 parent 100:1 handle 1: htb
    AC="tc class add dev eth0 parent"
    $AC 1: classid 1:1 htb rate 100kbps
    $AC 1:1 classid 1:10 htb rate 50kbps ceil 100kbps prio 1
    $AC 1:1 classid 1:11 htb rate 50kbps ceil 100kbps prio 1

I also attached extra classes on the external outgoing interface, as described in the wondershaper script. I can see that it works somehow, but it is not ideal. To check how it works, I filtered ICMP traffic into the interactive class and pinged my gateway. When the bandwidth wasn't saturated, the Round Trip Time was of course low [~4ms], but when the link became congested, the average RTT rose to 100, sometimes 200ms, and was very jittery [jumping from 6 to 400ms].

1) My question: why doesn't it work that way - why isn't the RTT always as low as ~4ms? Is it due to the reaction time of the shaping - the time it takes for bulk traffic to be downgraded from its initial maximum to its shaped bandwidth, so that there is room for the interactive band? Or is it maybe a performance issue? In my class setup I limited the parent class to 90% of the total available bandwidth [925Kbit for a 1MBit line] to leave some space for bursts, avoid 100% saturation, and improve interactive performance.

2) What about the setup described in "6. Priorizing bandwidth share"? Could somebody explain the purpose of the rated class [line B] and the second qdisc attached to it [line C]? How does this delay simulation work, and why not use the "classic" setup? And if it matters, how does one compute the rate of the "root" class [line B]?

    # qdisc for delay simulation
    line A) tc qdisc add dev eth0 root handle 100: htb
    line B) tc class add dev eth0 parent 100: classid 100:1 htb rate 90kbps

    # real measured qdisc
    line C) tc qdisc add dev eth0 parent 100:1 handle 1: htb
            AC="tc class add dev eth0 parent"
    line D) $AC 1: classid 1:1 htb rate 100kbps

Regards
tw
--
----------------
ck.eter.tym.pl
"Never let schooling disturb Your education"
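For completeness, the filter used to steer ICMP into the interactive class would look something like this (classid 1:10 is assumed from the hierarchy above; ip protocol 1 is ICMP):

    tc filter add dev eth0 parent 1: protocol ip prio 1 \
        u32 match ip protocol 1 0xff flowid 1:10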
Ech, I found something interesting... The lags were probably caused by the SFQ queues! After removing the SFQ queues it works perfectly. Thanks for the info on Stef Coene's www page: http://www.docum.org/ :)

tw
--
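For anyone finding this in the archives: the SFQ leaves in question would have been attached and removed roughly like this (handles assumed from the setup above):

    # attach an SFQ leaf under the interactive class
    tc qdisc add dev eth0 parent 1:10 handle 10: sfq perturb 10
    # remove it again, falling back to HTB's default pfifo leaf
    tc qdisc del dev eth0 parent 1:10 handle 10: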