similar to: X4540 RIP

Displaying 20 results from an estimated 1000 matches similar to: "X4540 RIP"

2010 Aug 24
7
SCSI write retry errors on ZIL SSD drives...
I posted a thread on this once long ago[1] -- but we're still fighting with this problem and I wanted to throw it out here again. All of our hardware is from Silicon Mechanics (SuperMicro chassis and motherboards). Up until now, all of the hardware has had a single 24-disk expander / backplane -- but we recently got one of the new SC847-based models with 24 disks up front and 12 in the
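A useful first check on Solaris (a hedged sketch; the fields grepped for below are the standard iostat -En counter lines) is whether the retries concentrate on the ZIL SSDs or spread across the backplane:

    # dump per-device soft/hard/transport error counters plus device IDs;
    # errors clustered on the SSD slots point away from the expander/backplane
    iostat -En | egrep 'Errors|Vendor'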
2011 May 30
13
JBOD recommendation for ZFS usage
Dear all Sorry if it's kind of off-topic for the list but after talking to lots of vendors I'm running out of ideas... We are looking for JBOD systems which (1) hold 20+ 3.5" SATA drives (2) are rack mountable (3) have all the nice hot-swap stuff (4) allow 2 hosts to connect via SAS (4+ lines per host) and see all available drives as disks, no RAID volume. In a
2011 Dec 02
14
LSI 3GB HBA SAS Errors (and other misc)
During the diagnostics of my SAN failure last week we thought we had seen a backplane failure due to high error counts with 'lsiutil'. However, even with a new backplane and ruling out failed cards (MPXIO or singular) or bad cables I'm still seeing my error count with LSIUTIL increment. I've got no disks attached to the array right now so I've also
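One way to localize this (a sketch only; lsiutil is menu-driven and its option numbers vary by version, so the piped selections below are assumptions to verify interactively first) is to sample the counters over time with the bays empty:

    # log a timestamped counter snapshot every 10 minutes; counters that keep
    # climbing with no disks attached implicate the HBA, cable, or backplane
    while :; do
        date
        printf '20\n0\n' | lsiutil -p 1   # "20" assumed: diagnostics menu entry
        sleep 600
    done >> lsiutil-counters.log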
2009 Feb 16
1
Ideal (possible) configuration for an exalted R system
Hi All, I am trying to assemble a system that will allow me to work with large datasets (45-50 million rows, 300-400 columns) possibly amounting to 10GB+ in size. I am aware that R 64 bit implementations on Linux boxes are suitable for such an exercise but I am looking for configurations that R users out there may have used in creating a high-end R system. Due to a lot of apprehensions that SAS
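A quick back-of-the-envelope check helps with sizing here (a sketch; R stores numeric columns as 8-byte doubles, so this is a lower bound before R's copy-on-modify overhead is counted):

    # 45-50 million rows x 300-400 columns x 8 bytes per double, high end:
    echo '50000000 * 400 * 8' | bc            # 160000000000 bytes
    echo '50000000 * 400 * 8 / 1024^3' | bc   # ~149 GiB resident in RAM

In other words, a dataset that is "10GB+" on disk can need an order of magnitude more RAM once loaded as doubles, which is what drives the 64-bit and big-memory hardware requirement.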
2015 Aug 30
2
[OFFTOPIC] integrated LSI 3008 :: number of hdd support
On 08/30/2015 12:02 PM, Mike Mohr wrote: > In my experience the mass market HBAs and RAID cards typically do support > only 8 or 16 drives. For the internal variety in a standard rack-mount > server you'll usually see either 2 or 4 iPass cables (each of which support > 4 drives) connected to the backplane. The marketing material you've > referenced has a white lie in it:
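The arithmetic behind the 8-or-16-drive figure (a sketch; the 4-lanes-per-cable layout is the standard SFF-8087 "iPass" wide port):

    # each iPass (SFF-8087) cable carries 4 SAS/SATA lanes = 4 direct drives
    echo '2 * 4' | bc   # 2 cables -> 8 direct-attached drives
    echo '4 * 4' | bc   # 4 cables -> 16 direct-attached drives
    # higher counts come from SAS expanders on the backplane, not extra HBA ports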
2010 Jun 18
6
WD caviar/mpt issues
I know that this has been well-discussed already, but it's been a few months - WD caviars with mpt/mpt_sas generating lots of retryable read errors, spitting out lots of beloved "Log info 31080000 received for target" messages, and just generally not working right. (SM 836EL1 and 836TQ chassis - though I have several variations on theme depending on date of purchase: 836EL2s,
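When triaging this, it helps to count the retryable-error messages per target (a sketch; /var/adm/messages is the Solaris default log path, and the token scan assumes the message format quoted above):

    # tally "Log info 31080000" events per SCSI target from the system log
    grep 'Log info 31080000 received for target' /var/adm/messages |
        awk '{for (i = 1; i <= NF; i++) if ($i == "target") print $(i+1)}' |
        sort | uniq -c | sort -rn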
2009 Nov 17
13
ZFS storage server hardware
Hi, I know (from the zfs-discuss archives and other places [1,2,3,4]) that a lot of people are looking to use zfs as a storage server in the 10-100TB range. I'm in the same boat, but I've found that hardware choice is the biggest issue. I'm struggling to find something which will work nicely under solaris and which meets my expectations in terms of hardware.
2012 Nov 22
19
ZFS Appliance as a general-purpose server question
A customer is looking to replace or augment their Sun Thumper with a ZFS appliance like 7320. However, the Thumper was used not only as a protocol storage server (home dirs, files, backups over NFS/CIFS/Rsync), but also as a general-purpose server with unpredictably-big-data programs running directly on it (such as corporate databases, Alfresco for intellectual document storage, etc.) in order to
2012 May 30
11
Disk failure chokes all the disks attached to the failing disk HBA
Dear All, This may not be the correct mailing list, but I'm having a ZFS issue when a disk is failing. The system is a Supermicro motherboard X8DTH-6F in a 4U chassis (SC847E1-R1400LPB) and an external SAS2 JBOD (SC847E16-RJBOD1). That makes a system with a total of 4 backplanes (2x SAS + 2x SAS2), each of them connected to one of 4 different HBAs (2x LSI 3081E-R (1068 chip) + 2x LSI
2015 Aug 30
2
[OFFTOPIC] integrated LSI 3008 :: number of hdd support
Hi guys! Unfortunately there is no off-topic list, but the subject is somewhat related to CentOS, as the OS is/will be CentOS :) So, under this thin cover, I ask: is it possible that for a SAS controller like the LSI 3008, whose specs say: "This high-performance I/O controller supports T-10 data protection model and optical support, PCIe hot plugging, and up to 1,000 connected devices"
2010 Feb 24
7
Recommended PCIe SATA/SAS Controller?
Greetings all- I need to purchase a PCIe SATA or SAS controller (non-RAID) for a Supermicro 2U system. It should be directly bootable. Any recommendations? The system will be running CentOS 5.4 as an LTSP system. Thanks! --Tim
2006 Aug 31
1
using mulitple sound card inputs
I got 2 cards working, and will need to add another 2 cards shortly. I hadn't thought about the matter; it's something I can try to play with later. What you've done here is awesome. Where did you come across a system that could house 11 PCI cards?
2011 Dec 12
2
Sun X4540 disk replacement
Sorry for the off-topic question. I'm needing to replace a disk in an x4540 ZFS file system. I have replacement ST31000NSSUN disks, but it's not obvious to me how to separate the original disk from its drive sled; it seems to be attached by more than the usual 4 screws. Is it meant to be separated? I've looked at the x4540 user guide but it does not say anything about it.
2011 Apr 09
2
PCI-X/PCIe RAID controller.
Afternoon, I've got an old Dell PERC 4DC (PCI 64bit/33MHz) sitting in my home rig and with a new board on the way that has PCI-X (133) and PCIe (x4) slots, I was wondering what people would recommend for a cheap hardware parallel SCSI RAID controller (no fake RAID please) that is relatively cheap but faster than the old PERC? I'm looking for something cheap that'll do RAID-10 but
2009 Jun 28
2
[storage-discuss] ZFS snapshot send/recv "hangs" X4540 servers
On Fri, Jun 26, 2009 at 10:14 AM, Brent Jones <brent at servuhome.net> wrote: > On Thu, Jun 25, 2009 at 12:00 AM, James Lever <j at jamver.id.au> wrote: >> >> On 25/06/2009, at 4:38 PM, John Ryan wrote: >> >>> Can I ask the same question - does anyone know when the 113 build will >>> show up on pkg.opensolaris.org/dev ? >> >> On
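For readers landing here, the operation the thread is exercising is the usual send/receive pipeline (a sketch with placeholder pool, dataset, and host names):

    # snapshot locally, then stream to a second host; the reported hang shows
    # up as both ends sitting idle while the ssh pipe stays open
    zfs snapshot tank/data@nightly
    zfs send tank/data@nightly | ssh backuphost zfs receive -F backup/data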
2009 Feb 11
8
Write caches on X4540
We're using some X4540s, with OpenSolaris 2008.11. According to my testing, to optimize our systems for our specific workload, I've determined that we get the best performance with the write cache disabled on every disk, and with zfs:zfs_nocacheflush=1 set in /etc/system. The only issue is setting the write cache permanently, or at least quickly. Right now, as it is,
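The /etc/system line is quoted from the post; the loop below is a hypothetical way to script the per-disk half (format(1M) expert mode does expose a cache menu on SCSI disks, but its prompts vary by release, so walk through it interactively once before automating):

    # from the post: stop ZFS from issuing cache flushes via /etc/system
    #   set zfs:zfs_nocacheflush=1
    # then turn off the volatile write cache on every disk with format -e
    for d in $(format </dev/null 2>/dev/null | awk '/^ *[0-9]+\./ {print $2}'); do
        printf 'cache\nwrite_cache\ndisable\nquit\nquit\n' | format -e -d "$d"
    done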
2007 Feb 17
1
Filesystem won't mount because of "unsupported optional features (80)"
I made a filesystem (mke2fs -j) on a logical volume under kernel 2.6.20 on a 64-bit based system, and when I try to mount it, ext3 complains with EXT3-fs: dm-1: couldn't mount because of unsupported optional features (80). I first thought I just forgot to make the filesystem, so I remade it and the error is still present. I ran fsck on this freshly made filesystem, and it completed with
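To identify the offending flag (a sketch; /dev/dm-1 is taken from the error message, and the "(80)" is a hex bitmask of whichever feature bits the running kernel's ext3 driver lacks):

    # list the feature flags mke2fs actually set on the volume, then compare
    # against what this kernel's ext3 supports
    dumpe2fs -h /dev/dm-1 | grep -i features
    # on a freshly made, empty filesystem a flag can sometimes be dropped:
    #   tune2fs -O ^<feature_name> /dev/dm-1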
2010 Jan 12
6
x4500/x4540 does the internal controllers have a bbu?
Has anyone worked with an x4500/x4540 and know if the internal RAID controllers have a BBU? I'm concerned that we won't be able to turn off the write cache on the internal HDs and SSDs to prevent data corruption in case of a power failure.
2023 Jun 03
1
What could cause rsync to kill ssh?
Maurice R Volaski via rsync <maurice.volaski at lists.samba.org> wrote: > I have an rsync script that is copying one computer (over ssh) > to a shared CIFS mount on Gentoo Linux, kernel 6.3.4. The script > runs for a while and then at some point quits, knocking my ssh > session offline on all terminals, and it blocks ssh from being able > to connect again. Even restarting
2007 Jan 13
1
[Q] How can the directory location to dd output affect performance?
I have two Opteron-based Tyan systems being supported by PCI-e Areca cards. There is definitely an issue going on in the two systems that is causing significantly degraded performance of these cards. It appeared, initially, that the SATA backplane on the Tyan chassis was wholly to blame. But then I made an odd discovery. I'm running from the Ubuntu LiveCD for 64-bit. It uses kernel
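A controlled comparison is the quickest way to demonstrate the effect (a sketch; the paths and size are placeholders, and conv=fdatasync makes GNU dd flush to disk before reporting throughput, so cache effects don't mask the difference):

    # write the same 4 GiB file into each target directory and compare rates
    for dir in /mnt/test/dirA /mnt/test/dirB; do
        echo "$dir:"
        dd if=/dev/zero of="$dir/ddtest" bs=1M count=4096 conv=fdatasync 2>&1 | tail -1
        rm -f "$dir/ddtest"
    done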