Hi all,

For all people just testing firewall performance: we are in the process of
publishing some graphs regarding firewall performance (mainly on low-end
hardware). We have compared mainly Linux (2.4.30 and 2.6.11) and FreeBSD
(m0n0wall) on a Geode 266, a Via 533 and a Via 1 GHz, all with Realtek 8139
Ethernet NICs.

You can see the first results at:

http://www.eneotecnologia.com/archivos/Firewall_512.png

Some comments on the experiment:

1) We tested this with a much better box (P4 3 GHz, PCIe, Marvell chipset),
but currently we don't have access to it. It supported more than 10,000
consecutive rules at 1,200 pps, but we had to give this hardware to our
client and can't test any more.

2) We still have to run the same tests with 1500-byte and 64-byte packets.

3) Traffic was generated from a few PCs (sorry, we couldn't create more
traffic at the moment) using hping2. Traffic was UDP to port 80.

4) The "rules" figure means the number of non-matching rules before the
matching rule.

Some comments on the results:

A) We are unable to determine whether we are using NAPI or not on these
boxes. We tested 2.4.23 too, with the same results. After some reading, we
discovered the driver needs to support NAPI too, but after finding what
seems to be a valid one (ftp://ftp.ovh.net/made-in-ovh/kernel/) we don't
get better results (neither for 2.4.30 nor 2.6.11). We need some help to
see if we are really using NAPI on these boxes.

B) Linux 2.6.11 and 2.4.30 show more or less the same behavior (?)

C) All Linux kernels seem to hit a wall around 800 rules. This is a known
limit in the current iptables/netfilter design (see nf-HiPAC and others).
With the better box this "wall" was much further away. Also, this limit is
quite similar with different CPUs (Geode 266, Via 533, Via 1 GHz) and is
shared by all boxes that use Realtek chipsets (we are about to test it with
a P4 2.1 GHz with a Realtek NIC). Maybe a problem of the driver? Maybe the
lack of NAPI even when it is supposed to be used?

D) FreeBSD (actually we don't know which BSD m0n0wall uses) is much more
linear and predictable in its behavior, and stands up to higher loads.

What do you think? Any comments? Any help?

Hope it helps. Regards.

-- 
Jaime Nebrera - jnebrera@eneotecnologia.com
Consultor TI - ENEO Tecnologia SL
Telf.- 95 455 40 62 - 619 04 55 18
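For reference, the UDP flood described in point 3 can be generated with
hping2 roughly as follows; the target address, payload size and rate below
are only illustrative, not the exact values used in the tests.

    # Flood UDP packets at destination port 80 as fast as the sender can
    hping2 --udp -p 80 -d 512 --flood 192.0.2.1

    # Or send at an approximate fixed rate: one packet every 800 microseconds,
    # i.e. roughly 1,250 pps
    hping2 --udp -p 80 -d 512 -i u800 192.0.2.1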
Jaime Nebrera wrote:

> C) All Linux kernels seem to hit a wall around 800 rules. This is a known
> limit in the current iptables/netfilter design (see nf-HiPAC and others).
> With the better box this "wall" was much further away. Also, this limit
> is quite similar with different CPUs (Geode 266, Via 533, Via 1 GHz) and
> is shared by all boxes that use Realtek chipsets (we are about to test it
> with a P4 2.1 GHz with a Realtek NIC). Maybe a problem of the driver?
> Maybe the lack of NAPI even when it is supposed to be used?

NAPI and the number of rules are independent.

I assume that each packet was forced to go through all 800 rules? If so,
that would represent a pretty dumb ruleset...

-Tom
-- 
Tom Eastep    \ Nothing is foolproof to a sufficiently talented fool
Shoreline,     \ http://shorewall.net
Washington USA  \ teastep@shorewall.net
PGP Public Key   \ https://lists.shorewall.net/teastep.pgp.key
Tom Eastep wrote:

> I assume that each packet was forced to go through all 800 rules? If so,
> that would represent a pretty dumb ruleset...

I also note that all of the Linux boxes appear to be running as a bridge
whereas the BSD box was running as a router?

-Tom
-- 
Tom Eastep    \ Nothing is foolproof to a sufficiently talented fool
Shoreline,     \ http://shorewall.net
Washington USA  \ teastep@shorewall.net
PGP Public Key   \ https://lists.shorewall.net/teastep.pgp.key
Hi Tom and others,

> NAPI and the number of rules are independent.

Well, when we reach the given pps/rules, the ksoftirqd process goes up to
99% and freezes the CPU (on this kind of CPU you get load averages higher
than 3 and 4).

Discussing this topic with Luca Deri, from ntop, he pointed out that with
a lot of pps the kernel might be unable to process packets at the needed
speed. Actually not really packets, but interrupts. With non-NAPI drivers
you don't have TX and RX polling and you get an interrupt for each packet.
When the number of rules is high, the kernel can't keep the pace, a
snowball forms as packets arrive faster than the kernel can process them,
and the system goes berserk.

Actually we had a box with a Via C3 800 with Intel chipsets and this
problem didn't appear so soon (well, sadly again we don't have this box
anymore so we can't test further :(

> I assume that each packet was forced to go through all 800 rules? If so,
> that would represent a pretty dumb ruleset...

Sure, it's just a way to measure where the limit is. Of course, you can
make the rules jump into chains and partition the number of rules a packet
must traverse. Actually, the rules parameter should be read as something
like "average number of rules a given packet has to traverse". Of course,
the easiest way to produce this is placing them one after the other
(actually the rules were more or less: traffic to port 20 DROP, traffic to
port 21 DROP, traffic to port 22 DROP, ... and at the end, traffic to port
80 ACCEPT :)

Here is where Shorewall really excels, making what we call internally "a
decision tree" (zones) before the real rules. The problem is, we have
encountered limits even with Shorewall (lots of zones; if not, the tree is
too flat). If you dig into old emails from me you will see that.

That's why we are researching this topic and are considering patching
Shorewall to make the "tree" deeper (more general zones first, zones
inside zones in particular chains), or even jumping directly into ipsets
(very complex rules that match a given traffic with far fewer of them).
Also, it is hard to explain to a client that a single "GUI" rule can
really produce tons of "real" ones.

Hope it clarifies.

PS.- A very interesting paper is "Netfilter Performance Testing" (some
googling needed). It compares standard netfilter, nf-hipac, compact filter
and ipset.

-- 
Jaime Nebrera - jnebrera@eneotecnologia.com
Consultor TI - ENEO Tecnologia SL
Telf.- 95 455 40 62 - 619 04 55 18
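For reference, a flat benchmark ruleset of the kind described here (N
non-matching rules before the final match) can be generated with a simple
loop. This is only a sketch, assuming the test packets are seen in the
FORWARD chain (a bridged setup would additionally need the physdev match
to filter per port); the port numbers are arbitrary.

    N=800
    iptables -F FORWARD
    # N non-matching rules that every test packet has to traverse...
    for ((p=1; p<=N; p++)); do
        iptables -A FORWARD -p udp --dport $((10000 + p)) -j DROP
    done
    # ...before finally hitting the matching rule
    iptables -A FORWARD -p udp --dport 80 -j ACCEPT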
Hi again,

> I also note that all of the Linux boxes appear to be running as a bridge
> whereas the BSD box was running as a router?

Yes, I forgot to comment on that in the original mail. We were unable to
make m0n0wall run as a bridge.

Running Linux as a router instead of as a bridge only gave about a 5%
performance increase (we did some testing on one particular box) and we
were surprised the difference wasn't bigger.

Sorry for that.

-- 
Jaime Nebrera - jnebrera@eneotecnologia.com
Consultor TI - ENEO Tecnologia SL
Telf.- 95 455 40 62 - 619 04 55 18
Jaime Nebrera wrote:

> Hi Tom and others,
>
>> NAPI and the number of rules are independent.
>
> Discussing this topic with Luca Deri, from ntop, he pointed out that with
> a lot of pps the kernel might be unable to process packets at the needed
> speed. Actually not really packets, but interrupts. With non-NAPI drivers
> you don't have TX and RX polling and you get an interrupt for each packet.
> When the number of rules is high, the kernel can't keep the pace, a
> snowball forms as packets arrive faster than the kernel can process them,
> and the system goes berserk.

I see. So NAPI effectively makes more CPU cycles available to thrash away
on absurd rulesets :-)

> Here is where Shorewall really excels, making what we call internally
> "a decision tree" (zones) before the real rules. The problem is, we have
> encountered limits even with Shorewall (lots of zones; if not, the tree
> is too flat). If you dig into old emails from me you will see that.
>
> That's why we are researching this topic and are considering patching
> Shorewall to make the "tree" deeper (more general zones first, zones
> inside zones in particular chains), or even jumping directly into ipsets
> (very complex rules that match a given traffic with far fewer of them).
> Also, it is hard to explain to a client that a single "GUI" rule can
> really produce tons of "real" ones.

I'd be interested to hear more about your ideas.

Thanks,
-Tom
-- 
Tom Eastep    \ Nothing is foolproof to a sufficiently talented fool
Shoreline,     \ http://shorewall.net
Washington USA  \ teastep@shorewall.net
PGP Public Key   \ https://lists.shorewall.net/teastep.pgp.key
Hi again,

> I see. So NAPI effectively makes more CPU cycles available to thrash
> away on absurd rulesets :-)

That's it!

Just a figure: a P4 3 GHz (that's about 3.75 times faster than a Via
1 GHz) with good gigabit chipsets (Marvell, with all kinds of
optimizations) was able to manage 10,000 rules at 1,200 pps (from memory).
The same test on the Via maxed out at 700 rules (14.2x fewer). Of course,
other factors are involved (PCIe vs PCI and others) but most of them only
matter with MUCH more traffic (rather than many more rules). Most
optimizations in that machine are intended for gigabit traffic, and we are
talking about 4 Mbps of traffic.

We think the point here is really NAPI and RX polling. The sad part is,
just when we were digging into this problem, we sold the only box we had
with a similar CPU (800 MHz) and an Intel chipset (RX polling capable).

> I'd be interested to hear more about your ideas.

Well, once our client was happy (we have lent him the BIG box until he
purchases a new one in September :( we jumped into IPS / Snort inline
stuff, as it's something we need more urgently than this. After that, a
decision will be made regarding the firewall generator. Some hints:

1) There are clearly two roads to follow: ipset, or heavy Shorewall
patching. IMHO it might be better to redo Shorewall with ipsets in mind
than the other way around. So we are thinking of starting our own project
on this.

2) Still, some points we discovered:

  a) Connection tracking rules have to be much higher up in the rule
list, unless needed elsewhere (accounting). Move them from the particular
chains to the very beginning. This is an easy patch.

  b) As a bridge, use the same "decision tree" Shorewall currently uses
when running as a router. Currently all rules go into fwd_br instead of
breaking it up based on the in/out Ethernet interface. Of course this
means having the physdev patch.

  c) Make the decision tree deeper. The first branch has already been
mentioned (based on the physical interface), but also, instead of:

    traffic from inside rule -> jump
    traffic from container rule -> jump

use something like:

    traffic from container rule -> jump

and, in the particular container chain:

    traffic from inside rule -> jump

In other words, jumps for zones that are inside other zones are not in
the first level of the tree (fwd_br0 in our case) but directly inside the
chain of the zone that contains them. That deepens the tree.

  d) Sometimes it's very hard to explain to clients the concept of zones,
how the rules are traversed, etc.

3) Still, even if we go for our own preprocessor, we consider most of the
Shorewall concepts a foundation to build on. The difference would be that
we will focus on cutting-edge available features (ipsets, physdev,
time, ...) instead of going the more conservative Shorewall way :) This is
OK for us as our appliances have these features available.

Hope it helps. Awaiting your comments. Regards

-- 
Jaime Nebrera - jnebrera@eneotecnologia.com
Consultor TI - ENEO Tecnologia SL
Telf.- 95 455 40 62 - 619 04 55 18
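A rough sketch of the per-interface breakout described in point (b),
assuming a bridge whose member ports are eth0 and eth1 and a kernel with
the physdev match; the chain names and the example rules are hypothetical.

    iptables -N fwd_eth0
    iptables -N fwd_eth1
    # First branch of the decision tree: split on the ingress bridge port...
    iptables -A FORWARD -m physdev --physdev-in eth0 -j fwd_eth0
    iptables -A FORWARD -m physdev --physdev-in eth1 -j fwd_eth1
    # ...so traffic entering on eth1 never traverses the eth0 rules at all
    iptables -A fwd_eth0 -p tcp --dport 22 -j DROP
    iptables -A fwd_eth0 -p udp --dport 80 -j ACCEPT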
Jaime Nebrera wrote:

> Well, once our client was happy (we have lent him the BIG box until he
> purchases a new one in September :( we jumped into IPS / Snort inline
> stuff, as it's something we need more urgently than this. After that, a
> decision will be made regarding the firewall generator. Some hints:
>
> 1) There are clearly two roads to follow: ipset, or heavy Shorewall
> patching. IMHO it might be better to redo Shorewall with ipsets in mind
> than the other way around. So we are thinking of starting our own
> project on this.

A third option suggests itself. You could extend Shorewall to include an
abstract model for ipsets and integrate that model into rules. That way,
users wouldn't have to have any knowledge of ipsets themselves. In order
to do that right, I think that one needs some experience using ipsets;
that's why I didn't try to do it when I implemented ipset support in
Shorewall originally.

> 2) Still, some points we discovered:
>
>   a) Connection tracking rules have to be much higher up in the rule
> list, unless needed elsewhere (accounting). Move them from the
> particular chains to the very beginning. This is an easy patch.

Which I would make as an option -- my biggest concern with that change is
that it makes problem diagnosis more difficult.

>   b) As a bridge, use the same "decision tree" Shorewall currently uses
> when running as a router. Currently all rules go into fwd_br instead of
> breaking it up based on the in/out Ethernet interface. Of course this
> means having the physdev patch.

And it will be a bit ugly because it breaks the nice clean model for
interfaces and ports.

>   c) Make the decision tree deeper. The first branch has already been
> mentioned (based on the physical interface), but also, instead of:
>
>     traffic from inside rule -> jump
>     traffic from container rule -> jump
>
> use something like:
>
>     traffic from container rule -> jump
>
> and, in the particular container chain:
>
>     traffic from inside rule -> jump
>
> In other words, jumps for zones that are inside other zones are not in
> the first level of the tree (fwd_br0 in our case) but directly inside
> the chain of the zone that contains them. That deepens the tree.

I've thought about the idea of formalizing nested zones in Shorewall but
it is something that I've not got around to doing. I think it would be
fairly easy to leverage the 'complex zone' facility already implemented
in Shorewall.

Note: For those of you not familiar with Shorewall internals,
"complex zones" are zones with more than one subnet on an
interface or that use kernel-2.6 ipsec. The rule structure for
such zones includes an extra level, just like Jaime is
describing.

Note that one of my motivations for implementing actions was to deepen
the tree. Effective use of actions, though, requires that one thinks more
like a programmer than a network admin.

>   d) Sometimes it's very hard to explain to clients the concept of
> zones, how the rules are traversed, etc.
>
> 3) Still, even if we go for our own preprocessor, we consider most of
> the Shorewall concepts a foundation to build on. The difference would be
> that we will focus on cutting-edge available features (ipsets, physdev,
> time, ...) instead of going the more conservative Shorewall way :) This
> is OK for us as our appliances have these features available.

Nod. Shorewall has to maintain backward compatibility and can't assume
the presence of cutting-edge features.

> Hope it helps. Awaiting your comments. Regards

It does, thanks.

-Tom
-- 
Tom Eastep    \ Nothing is foolproof to a sufficiently talented fool
Shoreline,     \ http://shorewall.net
Washington USA  \ teastep@shorewall.net
PGP Public Key   \ https://lists.shorewall.net/teastep.pgp.key
On Monday 11 July 2005 at 10:57 -0700, Tom Eastep wrote:
...
<snip>
Huge mail with big ideas
</snip>

Aren't you retired =:-D

Tony
On 11 Jul 2005 at 20:26, tony wrote:

> On Monday 11 July 2005 at 10:57 -0700, Tom Eastep wrote:
> ...
> <snip>
> Huge mail with big ideas
> </snip>
>
> Aren't you retired =:-D

Retirement != Dead

-- 
______________________________________
John Andersen
NORCOM / Juneau, Alaska
http://www.screenio.com/
(907) 790-3386
Hi,

>> I also note that all of the Linux boxes appear to be running as a bridge
>> whereas the BSD box was running as a router?
>
> Yes, I forgot to comment on that in the original mail. We were unable to
> make m0n0wall run as a bridge.

m0n0wall can be run in bridged mode; you will need to add another NIC
(a 3-NIC box) just to have an optional interface, which can then be
bridged to your LAN or WAN interface.

regards,
kenneth
On Mon, 2005-07-11 at 18:13 +0200, Jaime Nebrera wrote:

> Hi all,
>
> For all people just testing firewall performance: we are in the process
> of publishing some graphs regarding firewall performance (mainly on
> low-end hardware). We have compared mainly Linux (2.4.30 and 2.6.11) and
> FreeBSD (m0n0wall) on a Geode 266, a Via 533 and a Via 1 GHz, all with
> Realtek 8139 Ethernet NICs.
>
> [results and detailed comments snipped]
>
> What do you think? Any comments? Any help?

What version of m0n0wall? It is based on FreeBSD, but the older versions
of m0n0wall use a much older version of FreeBSD.
On Tue, 2005-07-12 at 08:20 +0800, Kenneth Oncinian wrote:

> m0n0wall can be run in bridged mode; you will need to add another NIC
> (a 3-NIC box) just to have an optional interface, which can then be
> bridged to your LAN or WAN interface.

Documentation on m0n0wall as a bridge:
http://m0n0.ch/wall/docbook/examples-filtered-bridge.html
> Documentation on m0n0wall as a bridge:
> http://m0n0.ch/wall/docbook/examples-filtered-bridge.html

Great, we'll test it when we have some time.

Actually, it was pretty hard to write the rules in m0n0wall; we even had
to build our own wget script :) Is there any way to enter m0n0wall at a
kind of "low level" and copy the rules in directly?

Thanks

-- 
Jaime Nebrera - jnebrera@eneotecnologia.com
Consultor TI - ENEO Tecnologia SL
Telf.- 95 455 40 62 - 619 04 55 18
> What version of m0n0wall? It is based on FreeBSD, but the older versions
> of m0n0wall use a much older version of FreeBSD.

The latest stable (1.11)

-- 
Jaime Nebrera - jnebrera@eneotecnologia.com
Consultor TI - ENEO Tecnologia SL
Telf.- 95 455 40 62 - 619 04 55 18
Hi Tom and others,

> A third option suggests itself. You could extend Shorewall to include an
> abstract model for ipsets and integrate that model into rules. That way,
> users wouldn't have to have any knowledge of ipsets themselves. In order
> to do that right, I think that one needs some experience using ipsets;
> that's why I didn't try to do it when I implemented ipset support in
> Shorewall originally.

Actually we have already done that, even with standard Shorewall. In
MarteGUI (our firewall management system) you create objects, and one of
them is used for an IP, a list of IPs, networks, etc. You can also group
them with a group object. Even more, in a single GUI rule you can combine
as many objects as you want and the GUI will take care of creating the
minimum number of rules and zones necessary. The user still needs to
"declare" the zones, but it is the GUI that decides whether or not to
create a zone and breaks the rules into the minimum number of them.

Actually, that part is what has made us reach the limits of our hardware.
In a first version we made the dumb decision of considering zone = object.
This created a huge number of zones and the decision tree of Shorewall
grew far too much. Then we decided that only Network objects would be
created as zones. Still, with this we reached the limit again. So now we
have to make complex decisions about when to declare a network a zone and
when to consider it just part of a bigger zone.

Even worse, all of this has to be done transparently to the user, and
users run into two difficulties: why the zone concept (when you already
have the object concept), and why a single rule can translate into plenty
of them (if the new rule involves a new zone ...).

That's why we are entering the world of ipsets. Objects could easily be
translated into sets, and sets into rules. Also, if the user makes a rule
with a lot of objects involved (lots of different sets) you could just
build a "virtual set" that the user doesn't know about, just to translate
that GUI rule into a single iptables rule. That way you have a very direct
relation between the number of GUI (or Shorewall) rules and the number of
iptables rules.

Even more, the algorithm would be intelligent enough to join different
rules into a single "virtual set" that produces a jump into an iptables
chain (the same concept as a zone, but with much more complex relations)
and then create those particular rules inside that chain. That way, you
can define in the algorithm a limit on the number of rules a particular
chain may have. If that limit is reached, "virtual sets" will be created
so the tree for that particular chain is made deeper.

Of course this involves the many software parts we currently use: a GUI,
business logic, a firewall parser and iptables itself. Currently we use
Shorewall as-is as the firewall parser, and it is the GUI and the
algorithm that make the decisions about how to configure Shorewall to
really get the best of it (without making it hard for the average user).
The GUI and business logic are in a Java app and they just "produce" or
"parse" a bunch of Shorewall files; then we let Shorewall build the
iptables rules.

What we are studying is replacing Shorewall with our own (probably in
Java too) cutting-edge ipsets firewall parser, or staying with Shorewall
and heavily changing it to include all this.

>>   a) Connection tracking rules have to be much higher up in the rule
>> list, unless needed elsewhere (accounting). Move them from the
>> particular chains to the very beginning. This is an easy patch.
> Which I would make as an option -- my biggest concern with that change
> is that it makes problem diagnosis more difficult.

Sure.

>>   b) As a bridge, use the same "decision tree" Shorewall currently uses
>> when running as a router. Currently all rules go into fwd_br instead of
>> breaking it up based on the in/out Ethernet interface. Of course this
>> means having the physdev patch.
>
> And it will be a bit ugly because it breaks the nice clean model for
> interfaces and ports.

I don't understand this. Currently, when running as a router you have the
forward chain that jumps into an ethX_fwd chain that includes all the
"zone decision and jumping", and then the particular zoneX2zoneY chains.
As a bridge, you don't have the ethX_fwd chains, and that is what we are
considering using. Actually, we plan to do this with or without Shorewall,
as it is very effective on boxes with plenty of interfaces.

> I've thought about the idea of formalizing nested zones in Shorewall but
> it is something that I've not got around to doing. I think it would be
> fairly easy to leverage the 'complex zone' facility already implemented
> in Shorewall.
>
> Note: For those of you not familiar with Shorewall internals,
> "complex zones" are zones with more than one subnet on an
> interface or that use kernel-2.6 ipsec. The rule structure for
> such zones includes an extra level, just like Jaime is
> describing.

Huh??? That's why we have not made a decision yet, as it seems Shorewall
has some features we don't know about :) As said, this is something we
need to take slowly so that we make a good decision. It will surely
involve studying the ipset support currently available in Shorewall.

>> 3) Still, even if we go for our own preprocessor, we consider most of
>> the Shorewall concepts a foundation to build on. The difference would
>> be that we will focus on cutting-edge available features (ipsets,
>> physdev, time, ...) instead of going the more conservative Shorewall
>> way :) This is OK for us as our appliances have these features
>> available.
>
> Nod. Shorewall has to maintain backward compatibility and can't assume
> the presence of cutting-edge features.

What I meant was: if we continue with Shorewall, most of the features we
add will involve cutting-edge features available in the kernel / iptables
/ ... Of course, Shorewall would still be compatible for those who don't
have them, but then you can't expect to have those advanced features. At
the same time, if we build our own, it will be pretty much focused on what
we use, and will only work if you have those patches.

>> Hope it helps. Awaiting your comments. Regards
>
> It does, thanks.

Thanks.

-- 
Jaime Nebrera - jnebrera@eneotecnologia.com
Consultor TI - ENEO Tecnologia SL
Telf.- 95 455 40 62 - 619 04 55 18
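To make the "virtual set" idea concrete, here is a minimal sketch of how a
GUI "object" might map to an ipset and how a complex rule then collapses
into a single iptables rule. The set names and addresses are invented, and
the commands use current ipset syntax (the 2005-era tool used "ipset -N /
-A" and "-m set --set" rather than "create / add" and "--match-set").

    # One set per GUI "object"
    ipset create branch_offices hash:net
    ipset add branch_offices 10.1.0.0/24
    ipset add branch_offices 10.2.0.0/24

    ipset create dmz_servers hash:ip
    ipset add dmz_servers 192.0.2.10
    ipset add dmz_servers 192.0.2.11

    # One iptables rule matching "any branch office to any DMZ server",
    # instead of one rule per source/destination pair
    iptables -A FORWARD \
        -m set --match-set branch_offices src \
        -m set --match-set dmz_servers dst \
        -p tcp --dport 443 -j ACCEPT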
Responding to my own email,

> Of course this involves the many software parts we currently use: a GUI,
> business logic, a firewall parser and iptables itself. Currently we use
> Shorewall as-is as the firewall parser, and it is the GUI and the
> algorithm that make the decisions about how to configure Shorewall to
> really get the best of it (without making it hard for the average user).
> The GUI and business logic are in a Java app and they just "produce" or
> "parse" a bunch of Shorewall files; then we let Shorewall build the
> iptables rules.
>
> What we are studying is replacing Shorewall with our own (probably in
> Java too) cutting-edge ipsets firewall parser, or staying with Shorewall
> and heavily changing it to include all this.

What we don't want to end up with is a lot of complexity in the GUI and
business logic just to work around Shorewall's weak points. For example,
the "zone" concept was great before ipsets. Now, a zone is more of an
"object" and there is no real need to define a zone. Actually, in its
simplest version, just using the "virtual ipset" concept (an ipset
consisting of a group of other ipsets) you end up with a linear relation
between the parser's number of rules and the iptables number of rules.
Also, if you are able to introduce further logic, you can deepen the tree.
The main advantage here is that very complex rules are each translated
into just one iptables rule.

So our first idea is quite simple:

1) Break the forward rule based on the IN interface and possibly the OUT
interface. (For this, when you define objects (ipsets), all elements have
to be on the same interface.) This will reduce the number of rules a
packet has to traverse. The problem here would be with the OUT decision,
as it might collide with things like dynamic routing and such. So
probably, at first, only a breakout on the IN interface will be made.

2) For each IN interface, place all the rules flat, with support for
nesting, i.e. deepening the tree.

3) If a complex rule is created involving different sets, create a
virtual set "on the fly" that combines them, to get a single rule.

4) In the GUI, try to merge separate rules into one complex rule where
the user doesn't need them separate (for example, two identical rules
differing only in destination port could be combined into a single
multiport rule).

5) In the GUI, if the number of rules in a particular chain gets too big,
force the creation of "virtual sets" just to combine similar rules into
one jump rule to a new chain, and then multiple rules in that chain.
This, of course, could also be done manually (the zone concept).

So in essence, what we seek is a bit different from Shorewall itself:
some concepts are reused but not presented to the user. That is, the
algorithm will take care of creating virtual sets and jumps as needed so
that no particular chain gets too long.

Some of our ideas come from the experience with our users, and that means
abstracting a lot away from the user. Our situation is: as we already plan
many features in the business logic (all the abstraction stuff), we are
considering doing it all in one place instead of sticking to a third
layer. This would also allow us to create the rules file without actually
running iptables commands and then save it, so rule uploading would be
much improved (more like a load).

As said, we don't know yet. We are involved in IPS stuff and still need
some time to get that done. Then we will return to this realm and make a
decision.
Regards

-- 
Jaime Nebrera - jnebrera@eneotecnologia.com
Consultor TI - ENEO Tecnologia SL
Telf.- 95 455 40 62 - 619 04 55 18
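A small illustration of the rule-merging idea in point 4, assuming the
multiport match is available; the ports are arbitrary examples.

    # Three otherwise identical per-port rules...
    iptables -A FORWARD -p tcp --dport 20 -j DROP
    iptables -A FORWARD -p tcp --dport 21 -j DROP
    iptables -A FORWARD -p tcp --dport 23 -j DROP

    # ...collapse into a single multiport rule
    iptables -A FORWARD -p tcp -m multiport --dports 20,21,23 -j DROP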
On Tue, 2005-07-12 at 10:52 +0200, Jaime Nebrera wrote:

>> What version of m0n0wall? It is based on FreeBSD, but the older versions
>> of m0n0wall use a much older version of FreeBSD.
>
> The latest stable (1.11)

I believe that is FreeBSD 4.5. You should try one of the newer versions
of m0n0wall; it supposedly has better performance.
On Tue, 2005-07-12 at 10:50 +0200, Jaime Nebrera wrote:

>> Documentation on m0n0wall as a bridge:
>> http://m0n0.ch/wall/docbook/examples-filtered-bridge.html
>
> Great, we'll test it when we have some time.
>
> Actually, it was pretty hard to write the rules in m0n0wall; we even had
> to build our own wget script :) Is there any way to enter m0n0wall at a
> kind of "low level" and copy the rules in directly?

I do not believe so. There is always a lot of talk about this subject,
but I suspect it's impossible. :-/
Tom Eastep wrote:

> I've thought about the idea of formalizing nested zones in Shorewall but
> it is something that I've not got around to doing. I think it would be
> fairly easy to leverage the 'complex zone' facility already implemented
> in Shorewall.
>
> Note: For those of you not familiar with Shorewall internals,
> "complex zones" are zones with more than one subnet on an
> interface or that use kernel-2.6 ipsec. The rule structure for
> such zones includes an extra level, just like Jaime is
> describing.

As those of you on the Coding mailing list have noticed, I've played
around with this idea in the EXPERIMENTAL branch over the last couple of
days and didn't like the results. For the kind of things that I thought
this idea might have been useful for, actions together with ipsets do a
better job.

-Tom
-- 
Tom Eastep    \ Nothing is foolproof to a sufficiently talented fool
Shoreline,     \ http://shorewall.net
Washington USA  \ teastep@shorewall.net
PGP Public Key   \ https://lists.shorewall.net/teastep.pgp.key