I know the recommendation is to not run more than 2 T/E100Ps in a system, but what about X100Ps?

Usually there are 5 PCI slots in a system; has anyone tried 5 x X100Ps in a system?

Later..
On Tue, 2003-05-20 at 10:23, WipeOut . wrote:
> I know the recommendation is to not run more than 2 T/E100Ps in a
> system, but what about X100Ps?
>
> Usually there are 5 PCI slots in a system; has anyone tried 5 x
> X100Ps in a system?

At that price, why would you try? You could get a T1 card and a channel bank for not much more and have loads more flexibility. Consider that five X100Ps will cost you $495 US, and that's the cost of a T100P. You can jump on eBay and occasionally find channel banks down around $200; if you wait one out for a decent price with FXO ports, you're good to go. Then there is also the option, if you don't mind not having caller ID, of going with the TDM Devkit, which comes with a channel bank and a T1 card.

--
Steven Critchfield <critch@basesys.com>
Be careful here!

I tried running 4 X100Ps plus a TDM400 and had LOTS'O trouble. The main problem seems to be getting a unique IRQ for each card. I had to drop back to 3 X100Ps plus the TDM400, AND I had to disable the on-board NIC and go with a PCI NIC. (BTW, the CPU was a P4 2.4 GHz box dedicated to *, so plenty of horsepower.)

Calls would drop or deteriorate into a loud buzz, requiring the removal and reload of the wcfxo driver module to correct. Seems the card/driver would die. (Mark, any luck on the restart code? :)

With only 3 X100Ps the system has been mostly flawless for a few weeks now. YMMV.

John
-------------------------------------------------------------------------
NetRom Internet Services          973-208-1339 voice
john@netrom.com                   973-208-0942 fax
http://www.netrom.com
-------------------------------------------------------------------------

On Tue, 20 May 2003, WipeOut . wrote:
> I know the recommendation is to not run more than 2 T/E100Ps in a system, but what about X100Ps?
>
> Usually there are 5 PCI slots in a system; has anyone tried 5 x X100Ps in a system?
Thanks for that, experience always provides the best answers..

> Be careful here!
>
> I tried running 4 X100Ps plus a TDM400 and had LOTS'O trouble. The main
> problem seems to be getting a unique IRQ for each card.
> [snip]
Alas, having only one card generate the interrupts would in fact not work, for a very simple hardware reason. Each card on a hardware interrupt line generates a service request when conditions on that card warrant it. In a "one card generates the interrupts" scheme, the designated card would have no way of knowing when the other cards need service.

In a hardware sense, whenever a group of cards shares an interrupt, some mechanism must be in place for each card to assert the interrupt line. This is usually done with a "wired OR" configuration, typically implemented using open-collector or open-drain pull-downs all wired to the same interrupt line.

One also needs to determine who is generating the interrupt. In a general sense, this means a unique driver is needed for each type of card, and each driver on that interrupt polls its own card to determine whether it is the interrupting hardware. In point of fact, this same strategy was possible on the ISA bus; it just wasn't used.

Also, in general it takes only about 3-4 instructions (an I/O instruction, a test, and a jump) to test for interrupt status and jump to the next interrupt routine, provided the routines are written carefully and driver chaining is done efficiently. The interrupt status for the driver can (and should) be tested before the stack frame is set up, the argument list is managed, or any other processing is done. The kernel's own interrupt processing, before you even reach the driver chain, most likely presents a far longer instruction stream than the test for interrupt service does. Latency should never be an issue.

--
Stephen R. Besch, Ph.D.
SachsLab
320 Cary Hall
SUNY at Buffalo
Buffalo, NY 14214
(716) 829-3289 x106
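To make the "each driver polls its own card" pattern concrete, here is a minimal sketch in the style of a Linux shared-interrupt handler. This is illustrative only, not the actual wcfxo code: the device name, register offset, and private structure are hypothetical, and real hardware needs its own acknowledge sequence. (In the 2.4 kernels of the time the flag was SA_SHIRQ and handlers returned void, but the pattern is the same.) The point is the cheap early status test and the IRQ_NONE bail-out that lets the kernel move on to the next handler chained on the shared line.

    #include <linux/interrupt.h>
    #include <linux/io.h>
    #include <linux/types.h>

    #define MYCARD_REG_INTSTAT 0x08   /* hypothetical interrupt status register */

    struct mycard_priv {
            void __iomem *base;       /* mapped PCI BAR for this card */
    };

    static irqreturn_t mycard_isr(int irq, void *dev_id)
    {
            struct mycard_priv *card = dev_id;
            u32 status = readl(card->base + MYCARD_REG_INTSTAT);

            /* Cheap early test: if our card did not assert the line,
             * bail out so the kernel can try the next handler chained
             * on this shared IRQ. */
            if (!status)
                    return IRQ_NONE;

            /* Acknowledge by writing the status bits back, then
             * service the card (details omitted). */
            writel(status, card->base + MYCARD_REG_INTSTAT);
            return IRQ_HANDLED;
    }

    /* IRQF_SHARED asks the kernel to chain this handler with any
     * other driver registered on the same interrupt line; dev_id
     * must be unique so free_irq() can tell the handlers apart. */
    static int mycard_setup_irq(int irq, struct mycard_priv *card)
    {
            return request_irq(irq, mycard_isr, IRQF_SHARED, "mycard", card);
    }

Every driver sharing the line must pass the shared flag when registering, which is exactly the cooperation Stephen describes: the wired-OR hardware lets any card pull the line, and the chained handlers sort out, a few instructions each, whose card did it.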