Hello all,

I have been doing a lot of archive searching over the last week, reading posts on IMQ and its apparent stability / instability. I have also seen a number of posts saying it is not being maintained. Can anyone talk to me about IMQ's stability in a heavy-throughput environment (20 Mbps), and what was causing IMQ to fail, if you know?

Thanks,

Mike
Probably I am going to continue IMQ development, so I know something about it.

IMQ is very unpredictable: you can use it all week, or it may crash at once. And what is most strange, the crashes occur everywhere in the kernel except in the driver itself, so this could be a kernel bug as well. Under high load it crashes quite soon, while under low load it can hold forever; this probably depends on CPU speed. It also seems to tend to crash if you try to shape locally generated traffic; if you use it for ingress only, it won't have many problems.

I have no hope of making it work. I rewrote the code completely a few times, to no use; this approach probably just can't work.

I am going to do the same job a completely different way. IMQ tries to use a userspace queue, which does not like packets being dropped, and there seems to be no way to avoid dropping while doing traffic shaping. So I will take another route: completely removing packets from iptables at some point and transmitting them directly where needed, thus replacing part of the kernel code. This way I will at least be able to track the bug down.

P.S. iptables has another similar module (the ROUTE target). I tried it and it works in some cases (I redirected traffic to the lo interface), but not very well.

----- Original Message -----
From: "Michael S. Kazmier" <mkazmier@sofast.net>
To: <lartc@mailman.ds9a.nl>
Sent: Friday, January 23, 2004 7:29 PM
Subject: [LARTC] IMQ Stability

> Hello all,
>
> I have been doing a lot of archive searching over the last week, reading
> posts on IMQ and its apparent stability / instability. [...]
Thank you for the detailed discussion. There is no doubt that there is a need for IMQ-type device functionality. What would work really well, IMHO, is a "fake" or pseudo Ethernet driver that simply sits as a shim between one or more real drivers. This fake device would allow us to "stack" qdiscs in a way that lets one shape traffic under multiple "policies" - i.e., prioritize traffic AND allocate / rate-shape end users (a sketch of the idea follows below). I have actually thought of utilizing the kernel bonding driver for this - attaching only a single slave to it - but haven't had time as yet. Not sure that this would do anything for ingress shaping, though.

Thanks again...

Mike

> Probably I am going to continue IMQ development, so I know something
> about it.
>
> IMQ is very unpredictable: you can use it all week, or it may crash at
> once. [...]
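On a single egress interface, the stacking Mike describes can already be approximated by nesting qdiscs; a minimal sketch, where the interface name, rates, class IDs and customer addresses are all illustrative assumptions:

---------
# rate-shape end users: one HTB class per customer
tc qdisc add dev eth0 root handle 1: htb default 20
tc class add dev eth0 parent 1: classid 1:1 htb rate 2mbit
tc class add dev eth0 parent 1:1 classid 1:10 htb rate 256kbit ceil 256kbit
tc class add dev eth0 parent 1:1 classid 1:20 htb rate 256kbit ceil 256kbit

# then prioritize traffic inside each customer's allocation
tc qdisc add dev eth0 parent 1:10 handle 10: prio
tc qdisc add dev eth0 parent 1:20 handle 20: prio

# classify customers by destination IP
tc filter add dev eth0 parent 1: protocol ip prio 1 u32 \
    match ip dst 10.0.0.2/32 flowid 1:10
tc filter add dev eth0 parent 1: protocol ip prio 1 u32 \
    match ip dst 10.0.0.3/32 flowid 1:20
---------

What this cannot do is ingress, which is where the shim-device idea comes in.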
On Jan 24, mkazmier@sofast.net wrote:
> Thank you for the detailed discussion. There is no doubt that there is a
> need for IMQ-type device functionality. What would work really well,
> IMHO, is a "fake" or pseudo Ethernet driver that simply sits as a shim
> between one or more real drivers. [...]

I have been working on this with what I call a ppp-pipe. The result is

    Internet (eth0) <-> ppp0 ----- ppp1 <-> LAN (eth1) 10.0.0.0/8

where ppp0----ppp1 is on the local machine (and simulates two NICs in the same machine with a crossover cable between them). What you throw in at ppp0 appears at ppp1 and vice versa. This works fine; it also means you can shape on the ppp0/ppp1 interfaces and leave all the NAT stuff on the real interfaces.

The command to create this ppp-pipe is (as root; so far I am not completely sure whether you need to add "<real ip>:<real ip>" to the first pppd command's parameters, and you might also need 'xonxoff' in both):

---------
mkfifo /tmp/ppp-pipe
pppd noauth nodefaultroute notty < /tmp/ppp-pipe | pppd noauth \
    notty > /tmp/ppp-pipe
---------

However, there is a major problem: connection tracking. In the above setup you do

---------
iptables -t nat -I POSTROUTING -s 10.0.0.0/8 \
    -d ! 10.0.0.0/8 -o eth0 -j MASQUERADE
---------

The '-o eth0' is very important. You also create some advanced-routing bits to make all traffic crossing the router pass through the ppp-pipe; easy enough, but it depends on your needs. Conntrack unfortunately notices that you did not want to NAT the packet straight away when it arrives on eth1 (if you do, you will be unable to shape fairly per IP, for example with ESFQ), but then later, when the packet resurfaces at ppp0, the 'nat' table is skipped.

The only way around this is to use the patch-o-matic RAW patch and instruct it to skip connection tracking for packets on eth1 destined for the Internet (a sketch follows below). As I am now on pure 2.6.x goodness, I am in the middle of porting the patch myself (patch-o-matic-ng does not work for me; could be me being lame, though).

Sure, this is replacing one patch dependency with another, but IMQ really seems to have been left out to rot, whilst the RAW patch will probably stay better maintained - hell, it's in patch-o-matic for starters.
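A sketch of the conntrack bypass Alex describes, assuming the RAW patch's NOTRACK target and the addressing from the diagram above:

---------
# skip connection tracking for LAN traffic bound for the Internet;
# it is tracked (and NATed) normally once it re-emerges from the ppp-pipe
iptables -t raw -A PREROUTING -i eth1 -s 10.0.0.0/8 \
    -d ! 10.0.0.0/8 -j NOTRACK
---------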
> Internet (eth0) <-> ppp0 ----- ppp1 <-> LAN (eth1) 10.0.0.0/8

This way doesn't seem excellent, because it still lacks some functionality - and what about using lo or a dummy-type interface instead of ppp?

The new imq driver that I am developing will have unlimited possibilities. It will be a fake interface which passes all IP traffic without exception, no matter the direction, destination and so on; even locally generated and locally received traffic should pass through it. I removed the iptables module, so there is no need to configure anything - everything is simply caught. So you will be able to shape in + out in one place (a usage sketch follows below).

I am also thinking about chaining functionality. Is there any need to make a chain of imq devices? (They would all get the same traffic.) You would be able to use several shapers then, but it would add latency.

I have almost finished my driver, but unfortunately there is no way to avoid patching the kernel: I need to export the ip_finish_output2 and ip_local_deliver_finish functions, but I don't know how to do that, or where the best place is.
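Since the code is not published yet, usage is presumably something like the following sketch; the device name imq0 and the rates are assumptions:

---------
# bring the fake interface up; no iptables rules are needed, since it
# is supposed to catch all IP traffic by itself
ip link set imq0 up

# attach an ordinary egress qdisc to it: in + out shaped in one place
tc qdisc add dev imq0 root handle 1: htb default 10
tc class add dev imq0 parent 1: classid 1:10 htb rate 10mbit
---------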
Hi Roy,

This is great news! Shaping in+out at once is not always wanted... Usually you want to shape the two directions separately, because each direction has different bandwidth and limits. So I think it should be optional (i.e. you should be able to configure whether you want the ingress and/or the egress side); a sketch of the usual split setup follows below.

Your efforts are highly appreciated!

Aron

------------------------------------------------------------------------
From: "Roy" <roy@xxx.lt>
To: <lartc@mailman.ds9a.nl>
Subject: Re: [LARTC] IMQ Stability
Date: Sun, 25 Jan 2004 05:49:15 +0200

The new imq driver that I am developing will have unlimited possibilities.
[...] So you will be able to shape in + out in one place. [...]
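For comparison, the stock-kernel way to give the two directions different limits - egress shaped, ingress policed - looks roughly like this; the rates are illustrative:

---------
# egress: shape what leaves eth0
tc qdisc add dev eth0 root tbf rate 256kbit latency 50ms burst 1540

# ingress: police what arrives, since received packets cannot be queued
tc qdisc add dev eth0 handle ffff: ingress
tc filter add dev eth0 parent ffff: protocol ip prio 50 u32 \
    match ip src 0.0.0.0/0 police rate 2mbit burst 10k drop flowid :1
---------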
Hi Roy,

Excellent, Roy!!! Good job. Where can we get your IMQ port to test?

Best Regards
Remus

----- Original Message -----
From: "Roy" <roy@xxx.lt>
To: <lartc@mailman.ds9a.nl>
Sent: Sunday, January 25, 2004 3:49 AM
Subject: Re: [LARTC] IMQ Stability

> The new imq driver that I am developing will have unlimited possibilities.
> [...] I have almost finished my driver, but unfortunately there is no way
> to avoid patching the kernel. [...]
Hello Alex,

Perhaps I missed something below which ties eth0 and eth1 to the PPP pipe, or it's just my unfamiliarity with PPP. Regardless, an interesting methodology. Do you think you could do the following?

    <eth0>----<ppp0>----<standard linux bridging / routing>---<ppp1>---<eth1>

The reason I ask is that I would like to apply CBQ or HTB rate shaping to each end user at the PPP level (i.e., limit traffic to 256K or something like that). Then, after each customer has their rate shaping, I would like to prioritize traffic at the ETH level (i.e., all www prio 3, ssh and telnet prio 1, ftp prio 4, everything else prio 7); a sketch of such a setup follows below.

Thoughts?

-----Original Message-----
From: lartc-admin@mailman.ds9a.nl [mailto:lartc-admin@mailman.ds9a.nl] On Behalf Of Alexander Clouter
Sent: Saturday, January 24, 2004 7:05 PM
To: lartc@mailman.ds9a.nl
Subject: Re: [LARTC] IMQ Stability

I have been working on this with what I call a ppp-pipe. The result is

    Internet (eth0) <-> ppp0 ----- ppp1 <-> LAN (eth1) 10.0.0.0/8

[...]
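A rough sketch of the two-layer setup Mike asks about - per-customer shaping on the ppp leg, service priorities on the Ethernet leg. The interface names, addresses, rates and the reduced band count are assumptions for illustration:

---------
# ppp leg: one 256 kbit HTB class per customer
tc qdisc add dev ppp0 root handle 1: htb
tc class add dev ppp0 parent 1: classid 1:10 htb rate 256kbit ceil 256kbit
tc filter add dev ppp0 parent 1: protocol ip prio 1 u32 \
    match ip dst 10.0.0.2/32 flowid 1:10

# eth leg: prioritize by service (band 2:1 is served first)
tc qdisc add dev eth1 root handle 2: prio bands 4 \
    priomap 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3
tc filter add dev eth1 parent 2: protocol ip prio 1 u32 \
    match ip sport 22 0xffff flowid 2:1
tc filter add dev eth1 parent 2: protocol ip prio 1 u32 \
    match ip sport 80 0xffff flowid 2:3
---------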
> The new imq driver that I am developing will have unlimited possibilities.
> It will be a fake interface which passes all IP traffic without exception,
> no matter the direction, destination and so on; even locally generated
> and locally received traffic should pass through it.

May I suggest that, if it's new code with a new approach, it should get a different name?

Rubens
Finally I have made the imq driver stable: it did not crash for a full 5 hours under high load, so it looks stable (the old one was crashing after 1-5 minutes for me). There is no need to patch anything - just compile and insmod. It should work with any kernel, though it probably must be newer than 2.4.20. This is completely different code than the old imq.

You can find it on my server: http://pupa.da.ru

Please tell me how it works for you and how stable it is.
On Jan 26, Michael S. Kazmier wrote:
> Hello Alex,
>
> Perhaps I missed something below which ties eth0 and eth1 to the PPP pipe,
> or it's just my unfamiliarity with PPP.

Sorry, I should have made it cleaner. If you read up on the Advanced Routing HOWTO it is hopefully easy to understand. Let's say:

------------
alex@inskipp:~$ cat /etc/iproute2/rt_tables
#
# reserved values
#
255     local
254     main
253     default
0       unspec
#
# local
#
1       inr.ruhep
# inskipp
32      ppp-upstream
33      ppp-downstream
------------

You then type (something along the lines of):

------------
ip route add default dev ppp1 table ppp-upstream
ip route add default dev ppp0 table ppp-downstream
ip rule add from 10.0.0.0/8 iif eth1 table ppp-upstream
ip rule add to 10.0.0.0/8 iif eth0 table ppp-downstream
ip route flush cache
------------

In summary, this sets Linux up to do exactly what is in the diagram (below). The nice thing is that after the above is set up you treat it as if it were a physical interface - it is a real ppp session. Any traffic that goes into ppp0 appears on ppp1 and vice versa; treat it like a fancy wormhole :)

The advantage here over the IMQ-ng that is being made, from what I understand, is that the only patch you need is one to bypass connection tracking on the Internet-bound traffic from eth1 (for techie reasons); when it 'appears' from ppp1, connection tracking should be allowed to continue. This is where the RAW netfilter patch comes into play. Although you are swapping one kernel patch for another, the RAW one looks like it is going to be around much longer and actually maintained. The other very important fact is that you can now (if you think about it - I will leave it as an exercise for you) use it to simulate those IP-aliasing interfaces and actually shape on that basis, per pipe. The clue is true _source_ based routing ;) (see the sketch at the end of this message)

> Regardless, an interesting methodology. Do you think you could do the
> following:
>
> <eth0>----<ppp0>----<standard linux bridging / routing>---<ppp1>---<eth1>
> [...]

In theory I guess you could set up a Linux bridge over the ppp-pipe, however there is no point (from what I can see): you are NATing, so the box is the default gateway for the other machines. More importantly, if you want a bridge, why not just forget about the ppp-pipe and bridge over eth0<->eth1? This is what my jdg-qos-script[1] has done from more or less day one.

Anyway, feedback on the above idea would be great.

Regards

Alex

[1] http://www.digriz.org.uk/jdg-qos-script/

--
 _________________
/ Genius is pain. \
|                 |
\ -- John Lennon  /
 -----------------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||
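To illustrate that last point, a second pipe for a second customer subnet could be shaped independently via source-based routing; a sketch, where the subnet, table number, pipe interfaces and rate are all assumptions:

---------
# route a second subnet through its own ppp-pipe (ppp2/ppp3); table 34 is
# used numerically here, or it can be named in /etc/iproute2/rt_tables
ip rule add pref 100 from 10.1.0.0/16 iif eth1 table 34
ip route add default dev ppp3 table 34
ip route flush cache

# now shape that subnet on its own interface
tc qdisc add dev ppp3 root tbf rate 512kbit latency 50ms burst 1540
---------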
> Finally I have made the imq driver stable: it did not crash for a full
> 5 hours under high load, so it looks stable (the old one was crashing
> after 1-5 minutes for me).

It seems to capture ingress and egress traffic of all interfaces; wouldn't this count packets twice? If the machine is doing SNAT or DNAT, what IP addresses would be seen by the qdisc?

Rubens
Hi.

Roy wrote:
> Finally I have made the imq driver stable [...]
> This is completely different code than the old imq.

May I then second the proposal to give the driver another name? How about IMQ2, IMQng (next generation), or something like that?

Bye, Mike
Seems I was too fast to declare success: my version is not much more stable than the original one. Everything depends on dropped packets. This is not even IMQ's fault, after all; it can be proved another way:

Attempt to police outgoing traffic. It will be OK as long as you don't touch locally generated packets; if you try to drop those, you will be sorry, because the kernel will resend them together with new ones. Of course the policer will drop them too, but the kernel keeps resending them, thus increasing the rate progressively.

I noticed this with my traffic counter: internal traffic grew to enormous levels, 10x more than it should be able to. In reality there was almost no output at all. So DON'T USE POLICERS ON EGRESS. On low traffic it is harmless, but at 100 Mbit/s it can probably kill the computer (not tested). (A sketch of the safe arrangement follows at the end of this message.)

IMQ seems to have a similar problem: even if the driver itself has no leaks, the kernel consumes all resources resending dropped packets, so the computer stops responding.

For now I have no good idea how to fix it, so I will try to avoid locally generated traffic; it will then be possible to shape ingress and forwarded traffic, and egress will be left to the real device. Maybe later I will find out how to fix this.

> It seems to capture ingress and egress traffic of all interfaces; wouldn't
> this count packets twice?

No - ingress is for local traffic and egress is for everything, so everything should be OK (in theory).

> If the machine is doing SNAT or DNAT, what IP addresses would be seen by
> the qdisc?

I made the driver see the final destination address, because it is more useful.
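Put differently, locally generated traffic should only ever be shaped (queued), while policing (dropping) is safe where the sender is remote; roughly, with illustrative rates:

---------
# egress: shape - packets queue, and the local stack backs off cleanly
tc qdisc add dev eth0 root tbf rate 2mbit latency 50ms burst 10kb

# ingress: policing is acceptable, because drops make the remote sender
# slow down instead of triggering the local retransmit spiral Roy describes
tc qdisc add dev eth0 handle ffff: ingress
tc filter add dev eth0 parent ffff: protocol ip prio 50 u32 \
    match ip src 0.0.0.0/0 police rate 2mbit burst 10k drop flowid :1
---------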
Roy wrote:
> Seems I was too fast to declare success: my version is not much more
> stable than the original one. Everything depends on dropped packets.
> [...]
> For now I have no good idea how to fix it, so I will try to avoid
> locally generated traffic [...]

Which queue do you use to drop the packets?

Andy.
[snip]
> I noticed this with my traffic counter: internal traffic grew to enormous
> levels, 10x more than it should be able to. In reality there was almost
> no output at all. So DON'T USE POLICERS ON EGRESS.
>
> IMQ seems to have a similar problem: even if the driver itself has no
> leaks, the kernel consumes all resources resending dropped packets, so
> the computer stops responding.
[snip]

Just curious: I suppose you had the same setup on the original IMQ? If not, my point is moot. But if so, and if the problem can be verified, it would appear to me that the problem lies less with the old patch (except for features) and more with the inherent nature of what is being attempted - meaning that a set of assumptions made by the network-layer developers is being invalidated, for example local outbound traffic being policed instead of shaped.

Philip Thiem
--
Icequake.net Administrator
Isn't it obvious lumberjacks love traffic lights?
GPG Pub Key Archived at wwwkeys.us.pgp.net
On Fri, Jan 23, 2004 at 10:29:13AM -0700, Michael S. Kazmier wrote:
MSK> Hello all,
MSK> I have been doing a lot of archive searching over the last week reading
MSK> posts on IMQ and its apparent stability / instability. [...]
MSK> Can anyone talk to me about IMQ's stability in a heavy throughput
MSK> environment (20 Mbps) and what was causing IMQ to fail if you know.

I use it and it works OK for me; traffic on one router is up to 30-40 Mbit.

IMQ has one trouble: don't assign an address to the imq interface, because the kernel will crash if you do this. (A sketch of a working setup follows below.)

--
Best regards,
Aleksander Trotsai aka MAGE-RIPE aka MAGE-UANIC
My PGP key at ftp://blackhole.adamant.ua/pgp/trotsai.key[.asc]
Big trouble - ..disk or the processor is on fire.
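For reference, a minimal classic-IMQ setup consistent with that advice - the interface is brought up but never given an IP address; the device count, redirected interface and rate are illustrative:

---------
# load the driver and bring imq0 up WITHOUT assigning an IP address
modprobe imq numdevs=1
ip link set imq0 up

# redirect ingress traffic from eth0 through imq0
iptables -t mangle -A PREROUTING -i eth0 -j IMQ --todev 0

# shape on the imq device
tc qdisc add dev imq0 root handle 1: htb default 10
tc class add dev imq0 parent 1: classid 1:10 htb rate 30mbit
---------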