Pablo Fernandes Yahoo
2007-May-26 09:54 UTC
big problem with HTB/CBQ and CPU for more than 1.700 customers
Marek Kierdelewicz
2007-May-26 14:22 UTC
Re: big problem with HTB/CBQ and CPU for more than 1.700 customers
> Hello,

Hi there!

> iptables -t mangle -A POSTROUTING --dest x.x.x.x -o eth0 -j CLASSIFY --set-class 1:5
> iptables -t mangle -A FORWARD --src x.x.x.x -o eth1 -j CLASSIFY --set-class 1:5

3k iptables rules strike me as something suicidally slow. Try using tc hashing filters for traffic classification, as described here:

http://lartc.org/howto/lartc.adv-filter.hashing.html

If you use private addresses and NAT, then you'll need IFB (http://linux-net.osdl.org/index.php/IFB) to shape upload per client with u32 hashing filters.

Hope that helps.

Regards,
Marek Kierdelewicz
KoBa ISP
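For illustration, a minimal sketch of such a hashing setup along the lines of the LARTC howto (hypothetical addressing: clients in 10.30.0.0/24, hashed on the last octet of the destination address on the download interface; the flowid must point at whatever class you actually created for that client):

# Create a 256-bucket u32 hash table on eth0's root qdisc
tc filter add dev eth0 parent 1:0 prio 5 protocol ip u32
tc filter add dev eth0 parent 1:0 prio 5 handle 2: protocol ip u32 divisor 256

# Hash packets for 10.30.0.0/24 into the table by the last octet of the
# destination address (offset 16 in the IP header)
tc filter add dev eth0 parent 1:0 protocol ip prio 5 u32 \
    ht 800:: match ip dst 10.30.0.0/24 \
    hashkey mask 0x000000ff at 16 link 2:

# One exact-match entry per client in its bucket, e.g. 10.30.0.54 (0x36)
# directed to its class 1:5
tc filter add dev eth0 parent 1:0 protocol ip prio 5 u32 \
    ht 2:36: match ip dst 10.30.0.54 flowid 1:5

On the upload side the same idea applies with "match ip src" and "at 12"; with private addresses and NAT, the IFB approach mentioned above is needed because the private source address is no longer visible on the outbound public interface.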
VladSun
2007-May-26 15:23 UTC
Re: big problem with HTB/CBQ and CPU for more than 1.700 customers
Pablo Fernandes Yahoo wrote:
> Hello,
>
> I have HTB "rules" in 4 different ISPs, and I control each customer this way:
>
> Flush and 1:0 class:
>
> tc qdisc del dev eth0 root
> tc qdisc add dev eth0 root handle 1:0 htb
> tc class add dev eth0 parent 1:0 classid 1:1 htb rate 100mbit
> tc qdisc del dev eth1 root
> tc qdisc add dev eth1 root handle 1:0 htb
> tc class add dev eth1 parent 1:0 classid 1:1 htb rate 100mbit
>
> Upload and download, user1:
>
> tc class add dev eth0 parent 1:1 classid 1:5 htb rate 150kbit ceil 150kbit
> tc qdisc add dev eth0 parent 1:5 handle 5: sfq perturb 10
> tc class add dev eth1 parent 1:1 classid 1:5 htb rate 50kbit ceil 50kbit
> tc qdisc add dev eth1 parent 1:5 handle 5: sfq perturb 10
> iptables -t mangle -A POSTROUTING --dest x.x.x.x -o eth0 -j CLASSIFY --set-class 1:5
> iptables -t mangle -A FORWARD --src x.x.x.x -o eth1 -j CLASSIFY --set-class 1:5
>
> Upload and download, user2:
>
> tc class add dev eth0 parent 1:1 classid 1:8 htb rate 150kbit ceil 150kbit
> tc qdisc add dev eth0 parent 1:8 handle 8: sfq perturb 10
> tc class add dev eth1 parent 1:1 classid 1:8 htb rate 50kbit ceil 50kbit
> tc qdisc add dev eth1 parent 1:8 handle 8: sfq perturb 10
> iptables -t mangle -A POSTROUTING --dest y.y.y.y -o eth0 -j CLASSIFY --set-class 1:8
> iptables -t mangle -A FORWARD --src y.y.y.y -o eth1 -j CLASSIFY --set-class 1:8
>
> (...)
>
> These rules work fine, but only for fewer than 1.700 customers. With more than
> 1.700 customers my load average goes through the roof and the ksoftirqd
> process (per top) sits at 100% full time. I don't know why. I used to use CBQ
> instead of HTB because I had the same problem; Ron (a guy on this list) gave
> me these rules and told me that he uses them for more than 3.000 customers. I
> tested it on more than 7 different computers (with the same hardware
> specifications) and had the same problem with either the CBQ or the HTB
> rules. The computers I have are all Dell PowerEdge 1850s.
>
> I will put some hardware information here:
>
> top
>  PID USER  PR  NI VIRT RES SHR S %CPU %MEM   TIME+   COMMAND
>    3 root  39  19    0   0   0 R  100  0.0  5316:20  ksoftirqd/0
>
> [root@fw ~]# uptime
> 10:38:11 up 161 days, 17:21, 3 users, load average: 1.58, 1.65, 1.51
> (unfortunately when I took this the load average was "pretty good", but
> minutes ago it was more than 11.0)
>
> [root@fw ~]# lspci
> 00:00.0 Host bridge: Intel Corporation E7520 Memory Controller Hub (rev 09)
> 00:02.0 PCI bridge: Intel Corporation E7525/E7520/E7320 PCI Express Port A (rev 09)
> 00:04.0 PCI bridge: Intel Corporation E7525/E7520 PCI Express Port B (rev 09)
> 00:05.0 PCI bridge: Intel Corporation E7520 PCI Express Port B1 (rev 09)
> 00:06.0 PCI bridge: Intel Corporation E7520 PCI Express Port C (rev 09)
> 00:1d.0 USB Controller: Intel Corporation 82801EB/ER (ICH5/ICH5R) USB UHCI Controller #1 (rev 02)
> 00:1d.1 USB Controller: Intel Corporation 82801EB/ER (ICH5/ICH5R) USB UHCI Controller #2 (rev 02)
> 00:1d.2 USB Controller: Intel Corporation 82801EB/ER (ICH5/ICH5R) USB UHCI Controller #3 (rev 02)
> 00:1d.7 USB Controller: Intel Corporation 82801EB/ER (ICH5/ICH5R) USB2 EHCI Controller (rev 02)
> 00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev c2)
> 00:1f.0 ISA bridge: Intel Corporation 82801EB/ER (ICH5/ICH5R) LPC Interface Bridge (rev 02)
> 00:1f.1 IDE interface: Intel Corporation 82801EB/ER (ICH5/ICH5R) IDE Controller (rev 02)
> 01:00.0 PCI bridge: Intel Corporation 80332 [Dobson] I/O processor (A-Segment Bridge) (rev 06)
> 01:00.2 PCI bridge: Intel Corporation 80332 [Dobson] I/O processor (B-Segment Bridge) (rev 06)
> 02:0c.0 Ethernet controller: Intel Corporation 82545GM Gigabit Ethernet Controller (rev 04)
> 02:0e.0 RAID bus controller: Dell PowerEdge Expandable RAID controller 4 (rev 06)
> 03:0b.0 Ethernet controller: Intel Corporation 82545GM Gigabit Ethernet Controller (rev 04)
> 05:00.0 PCI bridge: Intel Corporation 6700PXH PCI Express-to-PCI Bridge A (rev 09)
> 05:00.2 PCI bridge: Intel Corporation 6700PXH PCI Express-to-PCI Bridge B (rev 09)
> 06:07.0 Ethernet controller: Intel Corporation 82541GI/PI Gigabit Ethernet Controller (rev 05)
> 07:08.0 Ethernet controller: Intel Corporation 82541GI/PI Gigabit Ethernet Controller (rev 05)
> 09:0d.0 VGA compatible controller: ATI Technologies Inc Radeon RV100 QY [Radeon 7000/VE]
>
> [root@fw ~]# free -m
>              total   used   free  shared  buffers  cached
> Mem:          2021   1479    542       0      400     654
> -/+ buffers/cache:    424   1597
> Swap:         1027      0   1027
>
> [root@fw ~]# cat /proc/cpuinfo
> processor       : 0
> vendor_id       : GenuineIntel
> cpu family      : 15
> model           : 4
> model name      : Intel(R) Xeon(TM) CPU 3.00GHz
> stepping        : 3
> cpu MHz         : 2992.674
> cache size      : 2048 KB
> physical id     : 0
> siblings        : 2
> core id         : 0
> cpu cores       : 1
> fdiv_bug        : no
> hlt_bug         : no
> f00f_bug        : no
> coma_bug        : no
> fpu             : yes
> fpu_exception   : yes
> cpuid level     : 5
> wp              : yes
> flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx lm constant_tsc pni monitor ds_cpl cid cx16 xtpr
> bogomips        : 5990.78
>
> processor       : 1
> [identical to processor 0 except bogomips : 5985.13]
>
> Any help/tip/hint will be very welcome.
>
> Thanks in advance!
>
> Pablo Fernandes

You may find this useful:

http://openfmi.net/frs/download.php/410/IPCLASSIFY.zip
Acácio Alves dos Santos
2007-May-26 16:22 UTC
Re: big problem with HTB/CBQ and CPU for more than 1.700 customers
Pablo,

Here we have HTB being used for more than 10.000 customers. The difference is that we use tc and u32 filters to classify the packets. I use the same Dell PE 1850, but I have two quad-core Xeons (1.86 GHz) in it :)

# uptime
13:18:08 up 16 days, 12:32, 1 user, load average: 0.02, 0.02, 0.00

mpstat says:

01:19:11 PM  CPU  %user  %nice  %sys  %iowait  %irq  %soft  %steal  %idle    intr/s
01:19:13 PM  all   0.00   0.00  0.00     0.00  0.57  13.81    0.00  85.61  10568.88

And as you can see, the CPU usage is not that big.

On May 26, 2007, at 6:54 AM, Pablo Fernandes Yahoo wrote:

> [full original message and hardware details quoted again; see the copy earlier in the thread]

--
Acácio Alves dos Santos
Network administration
Diginet Brasil
adm.acacio@digi.com.br
(+55) 84 4008-9000
Stoimen Gerenski
2007-May-27 22:46 UTC
big problem with HTB/CBQ and CPU for more than 1.700 customers
Hello everybody,

I have a similar problem: 450 customers with 450 HTB classes and their corresponding 450 filters in a subclass (1:3) of the interface's root qdisc (PRIO). The problem shows up at times: when I try to ping a host behind the router from a host in front of the router, the latency becomes 1-1.5 ms. The machine also runs an iptables firewall with a bunch of rules for dropping/accepting/NATing specific traffic, plus routing about 30 Mbit/s. When I remove the HTB qdisc, the latency is normal, 0.3-0.4 ms.

Does anyone have an idea what could cause this? Any input much appreciated!

Regards,
Stoimen

--------------

> [Acácio's reply and the full original message quoted again; see the earlier copies in the thread]
Pablo Fernandes Yahoo
2007-May-28 00:27 UTC
Re: big problem with HTB/CBQ and CPU for more than 1.700 customers
Pablo Fernandes Yahoo
2007-May-28 10:01 UTC
Re: big problem with HTB/CBQ and CPU for more than 1.700 customers
Hey,

I'm definitely glad to see that someone else knows what is happening here. Thanks for all the help, and I'm also here to help anyone as much as I can.

So, refreshing my current setup, I have these rules for each customer:

tc qdisc del dev eth0 root
tc qdisc add dev eth0 root handle 1:0 htb
tc class add dev eth0 parent 1:0 classid 1:1 htb rate 100mbit
tc qdisc del dev eth1 root
tc qdisc add dev eth1 root handle 1:0 htb
tc class add dev eth1 parent 1:0 classid 1:1 htb rate 100mbit

user 1

tc class add dev eth0 parent 1:1 classid 1:5 htb rate 150kbit ceil 150kbit
tc qdisc add dev eth0 parent 1:5 handle 5: sfq perturb 10
tc class add dev eth1 parent 1:1 classid 1:5 htb rate 50kbit ceil 50kbit
tc qdisc add dev eth1 parent 1:5 handle 5: sfq perturb 10
iptables -t mangle -A POSTROUTING --dest 10.30.0.54 -o eth0 -j CLASSIFY --set-class 1:5
iptables -t mangle -A FORWARD --src 10.30.0.54 -o eth1 -j CLASSIFY --set-class 1:5

user n

tc class add dev eth0 parent 1:1 classid 1:8 htb rate 150kbit ceil 150kbit
tc qdisc add dev eth0 parent 1:8 handle 8: sfq perturb 10
tc class add dev eth1 parent 1:1 classid 1:8 htb rate 50kbit ceil 50kbit
tc qdisc add dev eth1 parent 1:8 handle 8: sfq perturb 10
iptables -t mangle -A POSTROUTING --dest 10.20.0.43 -o eth0 -j CLASSIFY --set-class 1:8
iptables -t mangle -A FORWARD --src 10.20.0.43 -o eth1 -j CLASSIFY --set-class 1:8

What u32 rules could replace these iptables rules? I would like to try u32 filters and see whether they solve the problem; if I have no success, I will try the IPCLASSIFY patch.

Thanks again in advance.

Regards,
Pablo Fernandes

-----Original message-----
From: VladSun [mailto:vladsun@relef.net]
Sent: Monday, 28 May 2007 14:39
To: Alexandru Dragoi
Cc: Pablo Fernandes Yahoo; lartc@mailman.ds9a.nl
Subject: Re: [LARTC] big problem with HTB/CBQ and CPU for more than 1.700 customers

> [Alexandru's chain-splitting suggestion and VladSun's IPCLASSIFY reply quoted in full; see those messages later in the thread]
Acácio Alves dos Santos
2007-May-28 13:15 UTC
Re: big problem with HTB/CBQ and CPU for more than 1.700 customers
I have classes configured by protocol. My NICs use the e1000 driver too (default parameters), but I'm using off-board cards (with scalable I/O support, which is good for multi-core setups).

The main problem I had was the number of interrupts (~10.000/s). With IRQ balancing activated, each NIC ended up tied to a specific processor core, and the CPU usage on those cores was always 100%. I solved this in my setup (two quad-cores) by deactivating IRQ balancing, which caused the interrupts to be processed by all 8 cores.

Are you doing P2P control on this server? That is usually what takes the most CPU.

On May 27, 2007, at 9:27 PM, Pablo Fernandes Yahoo wrote:

> As I told before, I tried to shape my traffic with CBQ and I did use u32
> filters. The results were exactly the same as with HTB using u32 filters
> (which I also tried before) or not.
>
> Do you have a class per customer in your HTB setup? I've seen different
> setups, but all of them shape traffic based on protocols and/or IP ranges
> (that isn't our reality, since we have at least one class per single IP
> within the network).
>
> After some reading, I'm starting to suspect my NIC driver (e1000). Do you
> have interfaces using the e1000 driver in your Dell PE 1850? Is the entire
> traffic passing through this server? I suppose the problem is something
> about hardware or software interrupts. Are you using the default parameters
> for the e1000 kernel module?

--
Acácio Alves dos Santos
Network administration
Diginet Brasil
adm.acacio@digi.com.br
(+55) 84 4008-9000
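As a side note on the e1000 defaults discussed here: a hypothetical sketch of capping the interrupt rate with the driver's InterruptThrottleRate module parameter (whether and how your driver version supports it varies; check modinfo e1000 first, and do the reload from the local console, since unloading the driver takes the interfaces down):

# See whether the loaded e1000 driver exposes interrupt throttling
modinfo e1000 | grep -i throttle

# Hypothetical example: fix the interrupt rate at ~3000/s per port
# (one value per interface). Run from the console, not over the network.
rmmod e1000
modprobe e1000 InterruptThrottleRate=3000,3000

# Watch the interrupt counters afterwards
watch -n1 'grep eth /proc/interrupts'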
Alexandru Dragoi
2007-May-28 13:29 UTC
Re: big problem with HTB/CBQ and CPU for more than 1.700 customers
u32 hash filters are the key, as somebody pointed out. You can also tune your iptables setup, like this:

# 192.168.1.0/24
iptables -t mangle -N 192-168-1-0-24
iptables -t mangle -A FORWARD -s 192.168.1.0/24 -j 192-168-1-0-24
iptables -t mangle -N 192-168-1-0-25
iptables -t mangle -N 192-168-1-128-25
iptables -t mangle -A 192-168-1-0-24 -s 192.168.1.0/25 -j 192-168-1-0-25
iptables -t mangle -A 192-168-1-0-24 -s 192.168.1.128/25 -j 192-168-1-128-25
.
.
and so on, down to the /31 chains (e.g. IP 192.168.1.11 is matched in the chain created for 192.168.1.10/31):

iptables -t mangle -A 192-168-1-10-31 -s 192.168.1.10 -j CLASSIFY --set-class 1:10
iptables -t mangle -A 192-168-1-10-31 -s 192.168.1.11 -j CLASSIFY --set-class 1:11

I guess you get the idea; it requires some RAM, which I believe is not such a big problem. Similar rules should be made for the download direction.

Pablo Fernandes Yahoo wrote:

> [full original message and hardware details quoted again; see the copy earlier in the thread]
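For illustration only (not part of the original message): a sketch of a script that could generate a simplified version of such a chain tree, using one level of /28 sub-chains for a single /24 instead of the full binary split. The addresses and the classid scheme are hypothetical and must match the classes you actually created with tc.

#!/bin/sh
# Hypothetical sketch: split 192.168.1.0/24 into 16 sub-chains of /28, each
# holding at most 16 per-host CLASSIFY rules, so a packet traverses roughly
# 17 rules instead of up to 256.
NET=192.168.1           # client network (example)
PFX=192-168-1           # prefix used for chain names
IPT="iptables -t mangle"

$IPT -N $PFX-0-24
$IPT -A FORWARD -s $NET.0/24 -j $PFX-0-24

for i in $(seq 0 15); do
    base=$(( i * 16 ))
    $IPT -N $PFX-$base-28
    $IPT -A $PFX-0-24 -s $NET.$base/28 -j $PFX-$base-28
    for j in $(seq 0 15); do
        host=$(( base + j ))
        [ $host -eq 0 ] && continue       # skip the network address
        [ $host -eq 255 ] && continue     # skip the broadcast address
        # example numbering only: class minor id = last octet in hex
        $IPT -A $PFX-$base-28 -s $NET.$host -j CLASSIFY \
            --set-class 1:$(printf '%x' $host)
    done
done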
VladSun
2007-May-28 13:39 UTC
Re: big problem with HTB/CBQ and CPU for more than 1.700 customers
Alexandru Dragoi wrote:
> u32 hash filters are the key, as somebody pointed out. You can also tune your
> iptables setup, like this:
>
> # 192.168.1.0/24
> iptables -t mangle -N 192-168-1-0-24
> iptables -t mangle -A FORWARD -s 192.168.1.0/24 -j 192-168-1-0-24
> iptables -t mangle -N 192-168-1-0-25
> iptables -t mangle -N 192-168-1-128-25
> iptables -t mangle -A 192-168-1-0-24 -s 192.168.1.0/25 -j 192-168-1-0-25
> iptables -t mangle -A 192-168-1-0-24 -s 192.168.1.128/25 -j 192-168-1-128-25
> .
> .
> and so on, down to the /31 chains (e.g. IP 192.168.1.11 is matched in the
> chain created for 192.168.1.10/31):
>
> iptables -t mangle -A 192-168-1-10-31 -s 192.168.1.10 -j CLASSIFY --set-class 1:10
> iptables -t mangle -A 192-168-1-10-31 -s 192.168.1.11 -j CLASSIFY --set-class 1:11
>
> I guess you get the idea; it requires some RAM, which I believe is not such a
> big problem. Similar rules should be made for the download direction.

Or you can use my patch, IPCLASSIFY. Then the rules above would be replaced by a single rule per direction:

iptables -t mangle -A FORWARD -s 192.168.1.0/24 -j IPCLASSIFY --addr=src --and-mask=0xff --or-mask=0x11000
iptables -t mangle -A FORWARD -d 192.168.1.0/24 -j IPCLASSIFY --addr=dst --and-mask=0xff --or-mask=0x12000

This is equal to applying the CLASSIFY target to each packet with --set-class (srcIP & 0xFF | 0x1100) and --set-class (dstIP & 0xFF | 0x1200). It is very similar to IPMARK, but it uses the skb->priority field instead of the mark, so no tc filters are needed.
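Purely as an illustration (not from the thread): if the classifier ends up selecting classes numbered 1:<0x1000 + last octet>, the matching HTB leaves could be generated with a loop like the one below. Whatever numbering your classifier actually writes into skb->priority, the tc classes have to be created with exactly those ids; verify with "tc -s class show dev eth0" under traffic.

#!/bin/sh
# Illustrative only: one HTB leaf per host in a /24, with the class minor id
# derived from the last octet (0x1000 | octet). Adjust the scheme to whatever
# your classifier really produces.
DEV=eth0
tc qdisc add dev $DEV root handle 1: htb
tc class add dev $DEV parent 1: classid 1:1 htb rate 100mbit

for octet in $(seq 1 254); do
    minor=$(printf '%x' $(( 0x1000 | octet )))   # tc parses minor ids as hex
    tc class add dev $DEV parent 1:1 classid 1:$minor htb rate 150kbit ceil 150kbit
    tc qdisc add dev $DEV parent 1:$minor handle $minor: sfq perturb 10
done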
Alexandru Dragoi
2007-May-28 13:53 UTC
Re: big problem with HTB/CBQ and CPU for more than 1.700 customers
VladSun wrote:
>> [Alexandru's chain-splitting example snipped]
>
> Or you can use my patch, IPCLASSIFY. Then the rules above would be replaced
> by a single rule per direction:
>
> iptables -t mangle -A FORWARD -s 192.168.1.0/24 -j IPCLASSIFY --addr=src --and-mask=0xff --or-mask=0x11000
> iptables -t mangle -A FORWARD -d 192.168.1.0/24 -j IPCLASSIFY --addr=dst --and-mask=0xff --or-mask=0x12000
>
> This is equal to applying the CLASSIFY target to each packet with --set-class
> (srcIP & 0xFF | 0x1100) and --set-class (dstIP & 0xFF | 0x1200). It is very
> similar to IPMARK, but it uses the skb->priority field instead of the mark,
> so no tc filters are needed.

Cool, I remember I read about this a little while ago. Another thing to tune would be the HTB patches for massive hashing on classid lookup. I must say I haven't used them so far; I hope to do so soon.

http://www.mail-archive.com/lartc@mailman.ds9a.nl/msg16279.html
Pablo Fernandes Yahoo
2007-May-28 23:25 UTC
Re: big problem with HTB/CBQ and CPU for more than 1.700 customers
Marek Kierdelewicz
2007-May-29 06:33 UTC
Re: big problem with HTB/CBQ and CPU for more than 1.700 customers
> So, what do you think I should do with my e1000? What do you think could be
> the best board for sites with 8.000 customers? My problem is exactly these
> lots of interrupts.

Plug in as many network interfaces (e1000) as you have CPU cores. e1000 multiport NICs have a separate IRQ assigned to each "port", so having 2 x quad-core Xeon and 2 x 4-port e1000 would allow you to configure static affinity of each port to one core:

http://bcr2.uwaterloo.ca/~brecht/servers/apic/SMP-affinity.txt

Sometimes getting symmetric usage of the network interfaces (for symmetric core usage) is the problem. I think you can achieve it by plugging all 8 ports into a managed switch and configuring some form of aggregation. The best would be src/dst-IP EtherChannel or something similar. For some deployments (where the router sees all the clients on OSI layer 2), src/dst-MAC EtherChannel would suffice. On the Linux side you would have to configure bonding:

http://linux-net.osdl.org/index.php/Bonding

Regards,
Marek Kierdelewicz
KoBa ISP
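For illustration, a minimal sketch of the static affinity part (the IRQ numbers are hypothetical; look them up in /proc/interrupts, and make sure irqbalance is stopped so it does not overwrite the masks):

# Find the IRQ assigned to each interface (the numbers below are made up)
grep eth /proc/interrupts

# smp_affinity takes a hex CPU bitmask: 1 = CPU0, 2 = CPU1, 4 = CPU2, ...
echo 1 > /proc/irq/48/smp_affinity     # pin eth0's IRQ to core 0
echo 2 > /proc/irq/49/smp_affinity     # pin eth1's IRQ to core 1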
Luciano Ruete
2007-Jun-01 02:43 UTC
Re: big problem with HTB/CBQ and CPU for more than 1.700 customers
On Monday 28 May 2007 10:39:11 VladSun wrote:
> Alexandru Dragoi wrote:
> > [chain-splitting example snipped]
>
> Or you can use my patch, IPCLASSIFY. Then the rules above would be replaced
> by a single rule per direction:
>
> iptables -t mangle -A FORWARD -s 192.168.1.0/24 -j IPCLASSIFY --addr=src --and-mask=0xff --or-mask=0x11000
> iptables -t mangle -A FORWARD -d 192.168.1.0/24 -j IPCLASSIFY --addr=dst --and-mask=0xff --or-mask=0x12000

Wow! Now I get it; this patch is amazing. I have a pending hack to merge this with htb-gen. Any chance that this gets into mainline? Have you mailed the netfilter-dev list?

--
Luciano
VladSun
2007-Jun-01 12:00 UTC
Re: big problem with HTB/CBQ and CPU for more than 1.700 customers
Luciano Ruete wrote:
>> Or you can use my patch, IPCLASSIFY. Then the rules above would be replaced
>> by a single rule per direction:
>>
>> iptables -t mangle -A FORWARD -s 192.168.1.0/24 -j IPCLASSIFY --addr=src --and-mask=0xff --or-mask=0x11000
>> iptables -t mangle -A FORWARD -d 192.168.1.0/24 -j IPCLASSIFY --addr=dst --and-mask=0xff --or-mask=0x12000
>
> Wow! Now I get it; this patch is amazing. I have a pending hack to merge this
> with htb-gen. Any chance that this gets into mainline? Have you mailed the
> netfilter-dev list?

:) Thank you! You should also thank Grzegorz Janoszka - he wrote the original IPMARK patch. My patch is just a slight modification of it. As far as I know, the netfilter team refused to include IPMARK in the official P-o-M, so I don't think IPCLASSIFY would be accepted either.

Regards,
Vladimir Mirchev