Bryan J. Smith <b.j.smith@ieee.org>
2005-Jun-10 16:28 UTC
[CentOS] Tyan K8SE (S2892) / nForce Pro Experiences
From: Kirk Bocek <t004@kbocek.com>
> Has anyone installed Linux (CentOS or other) on a Tyan K8SE (S2892)
> motherboard? I'd really like to hear your experiences -- with this board
> or Nvidia's nForce Pro chipset in general.

I wrote a pre-sale evaluation back in January 2005 here:
http://lists.leap-cf.org/pipermail/leaplist/2005-January/000532.html

And there are a few updates to that evaluation too, like the fact that
nVidia _does_ now _actively_ work on the GPL "forcedeth" driver (the
legal obligations that prevented them from doing so actually expired
months ago), so using the "nvnet" module is probably optional now.

- Personal Experience and Boot/Install Considerations

I have everything from a 12"x12"x5.5" MicroATX case with a Foxconn
NF4K8MC-ERS (nForce4 Standard, with a PCIe GeForce 6800GT ;-) in my
home, to a huge LianLi PC-V1200 EATX with a Tyan S2895 (nForce Pro
2200+2050) in use as a workstation/server at various clients, running
both Fedora Core 2+ and Red Hat Enterprise Linux 4.

You _can_ boot/install on the chipset with the FC2+/RHEL4 installer
without issue. You do _not_ need to load the nForce drivers at all!
They only contain the _legacy_ OSS (pre-ALSA) sound driver and
"nvnet," which really isn't needed given current "forcedeth" quality.
This _includes_ SATA drives on the SATA channels, which use the
"sata_nv" driver that is in the stock kernel now.

On FC1/RHEL3, you might run into some NIC driver issues, and almost
certainly SATA support issues. You'll want to either upgrade to a late
2.4 kernel (2.4.23+), or fall back to the nForce driver package with
its "nvnet" and "nv_sata" drivers, and possibly the "nvsound" OSS
driver if you are not going to use ALSA.

- The Technical Facts of the nForce4/Pro-series ...

The nVidia nForce4/Ultra/SLI, 2200 (Pro) and 2050 (Pro, optional) are
all legacy interconnect compatible. The PCI-Express (PCIe), PCI-X and
PCI peripheral (I/O) interconnects all appear as traditional PCI
logic/busses, and the HyperTransport system interconnect is
transparent thanks to its full Intel APIC/I2O compatibility. Yes,
_all_ 40 PCIe lanes and _both_ PCI-X channels in the 2200+2050
combination. ;->

The peripherals on these chipsets have been well supported, like the
earlier nForce logic, since kernel 2.4.23+ and kernel 2.6.5+. nVidia
has released GPL drivers for _all_ peripherals and components,
including modifications to various i810 components, and the ALSA
drivers work well for the common DSP/audio combinations.

Intel has loosened the IP requirements on nVidia for the AGPgart, so
I've noticed some work has gone into more recent kernels. In fact,
nVidia now uses the kernel's AGPgart by default in its drivers. I
still recommend the nVidia Standardware libGL/GLX drivers for
production use, _but_ stick with the 1.0-6xxx series _until_ the
1.0-7xxx series matures.

And as I mentioned before, the "nvnet" driver is pretty much optional
now: 10/100 works _flawlessly_ with the GPL "forcedeth," and the
nVidia contributions in newer kernels seem to work well for 1000Mbps
too. If you have trouble with 1000Mbps, consider upgrading your
kernel, or switching to the "nvnet" driver on older kernels.

> I'm looking to build a new server using this board but would like to
> find some other experiences first. I've googled myself blue but haven't
> found any reviews or postings regarding this motherboard and Linux.

If you stick with Fedora Core 3 / RHEL 4 (CentOS 4), then you
shouldn't run into any issues at all -- not boot, install or
otherwise.
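
For reference, a stock FC3/RHEL4 (CentOS 4) install on this chipset
usually leaves an /etc/modprobe.conf with entries along these lines.
This is just an illustrative sketch from memory -- the exact aliases
depend on what the installer detects on your particular board:

  # /etc/modprobe.conf -- typical installer-written aliases (example only)
  alias eth0 forcedeth            # onboard nForce GbE
  alias scsi_hostadapter sata_nv  # nForce SATA channels
  alias snd-card-0 snd-intel8x0   # onboard AC'97 audio, if the board has any
  alias usb-controller ehci-hcd
  alias usb-controller1 ohci-hcd
  # a second onboard NIC, if present, may need a different driver
  # depending on which part Tyan used

If a NIC doesn't come up, "lsmod | grep forcedeth" and
"dmesg | grep -i forcedeth" will tell you quickly whether the driver
loaded and initialized the port.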
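
And keeping the kernel current on CentOS 4 is trivial -- a quick
sketch, assuming the stock yum repositories and standard CentOS 4
tools:

  uname -r                  # kernel you are actually running
  yum update kernel         # pull the latest errata kernel, then reboot
  /sbin/modinfo forcedeth   # details on the forcedeth module in use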
Make sure you upgrade to the latest 2.6.x kernel to get the most
stable forcedeth driver.

-- 
Bryan J. Smith <b.j.smith@ieee.org>