Vladislav Prodan
2014-Apr-09 21:14 UTC
Some gruesome moments with performance of FreeBSD at over 20K interfaces
Dear Colleagues!

I had a task, using FreeBSD 10.0-STABLE:
1) Receive 20-30 Q-in-Q VLANs (IEEE 802.1ad), each carrying 2k-4k VLANs (IEEE 802.1Q) inside. Total: ~60K VLANs.
2) Assign IPv4 and IPv6 addresses to every VLAN interface, define routes to the IPv4 and IPv6 addresses on the other side of each VLAN (IP unnumbered), and also route an IPv6 /64 network via the IPv6 address on the other side of each VLAN.
3) Route traffic from the world to all of these IPv4/IPv6 addresses and IPv6 networks inside the ~60K VLANs.

To accomplish the first task I have no alternative to using Netgraph.
I noticed incorrect behavior of ngctl(8) after the addition of the 560th VLAN (bin/187835).
The speed of adding VLANs was damnably slow:
10 minutes for the first 4k VLANs
18 minutes for the first 5k VLANs
28 minutes for the first 6k VLANs
52 minutes for the first 8k VLANs
Then I added 4k more VLANs:
20 minutes - 9500 VLANs
33 minutes - 10500 VLANs
58 minutes - 12k VLANs

In total, the time to reach 4k, 8k, and 12k VLANs was respectively 10m/52m/110m.
It's hard to imagine how much time would be needed to add ~60K VLANs :(
The process was accelerated a little by shutting down the devd, bsnmpd, and ntpd services, but I ran into other problems and limitations.

For example:
a) The ntpd service refuses to start with 12K interfaces:
ntpd[2195]: Too many sockets in use, FD_SETSIZE 16384 exceeded
Note that in /usr/src/sys/sys/select.h and /usr/include/sys/select.h the FD_SETSIZE value is only 1024U.

b) The bsnmpd service started with 12K interfaces, but immediately loaded the CPU at 80-100%:

last pid: 64011;  load averages: 1.00, 0.97, 0.90    up 0+05:25:39  21:26:36
58 processes:  3 running, 54 sleeping, 1 waiting
CPU: 68.2% user,  0.0% nice, 30.6% system,  1.2% interrupt,  0.0% idle
Mem: 125M Active, 66M Inact, 435M Wired, 200K Cache, 525M Free
ARC: 66M Total, 28M MFU, 36M MRU, 16K Anon, 614K Header, 2035K Other
Swap: 1024M Total, 1024M Free

  PID USERNAME  THR PRI NICE   SIZE    RES STATE   TIME   WCPU COMMAND
63863 root        1  96    0   136M   119M RUN    35:31 79.98% bsnmpd
...

c) The field widths in the output of netstat(1) (netstat -inW) are insufficient (bin/188153).

d) When netstat is pointed at a specific interface, it is impossible to tell which IPv4/IPv6 networks are being shown:

# netstat -I ngeth123.223 -nW
Name     Mtu Network        Address            Ipkts Ierrs Idrop Opkts Oerrs Coll
ngeth12 1500 <Link#8187>    08:00:27:cd:9b:8e      0     0     0     1     5    0
ngeth12    - 172.18.206.13  172.18.206.139         0     -     -     0     -    -
ngeth12    - fe80::a00:27f  fe80::a00:27ff:fe      0     -     -     1     -    -
ngeth12    - 2001:570:28:1  2001:570:28:140::      0     -     -     0     -    -

e) Very slow output from the arp command:
# ngctl list | grep ngeth | wc -l
12003
# ifconfig -a | egrep -e 'inet ' | wc -l
12007
# time /usr/sbin/arp -na > /dev/null
150.661u 551.002s 11:53.71 98.3% 20+172k 1+0io 0pf+0w

More info at
http://freebsd.1045724.n5.nabble.com/arp-8-performance-use-if-nameindex-instead-of-if-indextoname-td5898205.html

After applying the patch, the speed became acceptable:

# time /usr/sbin/arp -na > /dev/null
0.114u 0.090s 0:00.14 142.8% 20+170k 0+0io 0pf+0w

I suspect that the throughput of the standard network stack will be too low to accomplish the third task, routing the ~60K VLANs.
I have no idea how to use netmap(4) in this situation :(
Please help me accomplish the assigned task.

P.S.
A Linux colleague is setting up the same task and bragging: on Debian (kernel 3.13), in his test, 80K VLANs came up in 20 minutes, using 3 GB of RAM. Deleting those VLANs also took 20 minutes.

--
Vladislav V. Prodan
System & Network Administrator
http://support.od.ua
+380 67 4584408, +380 99 4060508
VVP88-RIPE
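For readers trying to reproduce the setup: the ngethN names above come from ng_eiface(4), so the plumbing presumably looks something like this sketch for one inner VLAN (em0 and all node/hook names here are illustrative assumptions, not taken from the actual configuration):

# ngctl mkpeer em0: vlan lower downstream
# ngctl name em0:lower vmux
# ngctl mkpeer vmux: eiface v100 ether
# ngctl msg vmux: addfilter \{ vlan=100 hook=\"v100\" \}

The first two commands hang an ng_vlan(4) demultiplexer below the NIC and name it; the last two attach a virtual ngethN interface and bind its hook to VLAN ID 100. A full Q-in-Q tree would repeat this pattern twice (an outer 802.1ad mux, then one inner 802.1Q mux per outer VLAN), multiplied by tens of thousands of inner VLANs. Each ngctl invocation is a separate process and a separate control-message round trip, which is part of why bulk creation is this slow.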
Adrian Chadd
2014-Apr-10 01:09 UTC
Re: Some gruesome moments with performance of FreeBSD at over 20K interfaces
Hi,

There's likely many more places where these aren't O(1) operations. The patch in question should be in -HEAD now.

-a

On 9 April 2014 14:14, Vladislav Prodan <universite@ukr.net> wrote:
> More info at
> http://freebsd.1045724.n5.nabble.com/arp-8-performance-use-if-nameindex-instead-of-if-indextoname-td5898205.html
>
> After applying the patch, the speed became acceptable:
>
> # time /usr/sbin/arp -na > /dev/null
> 0.114u 0.090s 0:00.14 142.8% 20+170k 0+0io 0pf+0w
Ermal Luçi
2014-Apr-10 07:17 UTC
Re: Some gruesome moments with performance of FreeBSD at over 20K interfaces
From experience with a large number of interfaces and configuring them: it's not that the kernel cannot handle it; the problem is that you call generic utilities to do the job.

E.g., to set an IP on an interface, ifconfig first has to fetch the whole list of interfaces to determine whether that interface exists, and do extra checks. This is what slows the whole thing down.

In pfSense, by using custom utilities, the time for configuring 8K interfaces went from around 30 minutes to mere seconds, or about a minute.

It has been a long time since I last tested such scenarios; if you can generate a config (XML format) with all the information for pfSense, I can take a look to see where the bottleneck is.

On Wed, Apr 9, 2014 at 11:14 PM, Vladislav Prodan <universite@ukr.net> wrote:
> I had a task, using FreeBSD 10.0-STABLE:
> 1) Receive 20-30 Q-in-Q VLANs (IEEE 802.1ad), each carrying 2k-4k VLANs (IEEE 802.1Q) inside. Total: ~60K VLANs.
> ...
> In total, the time to reach 4k, 8k, and 12k VLANs was respectively 10m/52m/110m.
> It's hard to imagine how much time would be needed to add ~60K VLANs :(

--
Ermal
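Ermal's observation is easy to measure: each ifconfig invocation fetches the full interface list, so configuring N interfaces with N separate ifconfig runs does O(N^2) work in total. A rough, self-contained timing sketch (the ngethN names and addresses are illustrative, and the interfaces must already exist):

#!/bin/sh
# assign one /32 alias per interface, one ifconfig run each;
# every run re-reads the whole interface table, so the total
# work grows quadratically with the interface count
i=0
while [ $i -lt 1000 ]; do
    ifconfig ngeth$i inet 10.0.$((i / 256)).$((i % 256))/32 alias
    i=$((i + 1))
done

A custom utility that opens a single socket and issues the SIOCAIFADDR ioctl for each interface directly (what ifconfig ultimately does, minus the list walk) avoids the quadratic term, which is presumably where the pfSense speedup comes from.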
Harti Brandt
2014-Apr-10 10:08 UTC
Re: Some gruesome moments with performance of FreeBSD at over 20K interfaces
On Wed, 9 Apr 2014, Vladislav Prodan wrote:

VP> b) Service bsnmpd started at 12K interfaces, but immediately loaded CPU
VP> at 80-100%

I could imagine that this is because of the statistics polling. bsnmp implements 64-bit interface statistics, but we have only 32-bit statistics in the kernel, so it polls the kernel statistics for each interface at a rate that ensures the 32-bit counters don't overflow. If the interfaces are GBit or, worse, 10GBit interfaces, the polling rate is rather high (on the order of seconds).

You should either make sure that the interfaces report sensible bitrates (I doubt that 20k interfaces could all be GBit interfaces) or force a slower polling interval by setting begemotIfForcePoll.0 to some large value.

harti
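For anyone wanting to try this, a sketch of the bsnmpd side, with the caveat that the exact config syntax and units are my assumption; check snmpd.config(5) and snmp_mibII(3) before relying on it. I believe the value is a polling interval in milliseconds:

# in /etc/snmpd.config, after the mibII module is loaded:
# force a fixed, slow interface-counter polling interval instead of
# one derived from the interfaces' reported bitrates
begemotIfForcePoll = 60000

Alternatively, begemotIfForcePoll.0 can be set at runtime with an SNMP SET from any manager that loads the Begemot MIB files.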
Vladislav Prodan
2014-Apr-10 11:05 UTC
Re[2]: Some gruesome moments with performance of FreeBSD at over 20K interfaces
> I could imagine that this is because of the statistics polling. bsnmp
> implements 64-bit interface statistics, but we have only 32-bit statistics
> in the kernel, so it polls the kernel statistics for each interface at a
> rate that ensures the 32-bit counters don't overflow.
>
> You should either make sure that the interfaces report sensible bitrates
> (I doubt that 20k interfaces could all be GBit interfaces) or force a slower
> polling interval by setting begemotIfForcePoll.0 to some large value.
>
> harti

Thanks for the tip.
At least 10 of the interfaces will be 1 Gbit, and the rest no more than 50 Mbit.
The begemotIfForcePoll parameter is of little help in this case: it is a single global value, so you would be forced to set a different begemotIfForcePoll value for the gigabit interfaces ...

--
Vladislav V. Prodan
System & Network Administrator
http://support.od.ua
+380 67 4584408, +380 99 4060508
VVP88-RIPE
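For context, the counter-wrap arithmetic behind this tension (assuming the 32-bit in-kernel octet counters Harti describes): 2^32 bytes is about 4.3 GB, so a fully loaded 1 Gbit/s interface (~125 MB/s) wraps its counter in roughly 34 seconds, while a 50 Mbit/s interface (~6.25 MB/s) takes about 11.5 minutes. A forced interval slow enough to be cheap across 12K slow interfaces would therefore under-sample the handful of gigabit uplinks.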