similar to: RAID card selection - JBOD mode / Linux RAID

Displaying 20 results from an estimated 900 matches similar to: "RAID card selection - JBOD mode / Linux RAID"

2012 May 23
5
biggest disk partition on 5.8?
Hey folks, I have a Sun J4400 SAS1 disk array with 24 x 1T drives in it, connected to a Sunfire x2250 running 5.8 (64 bit). I used 'arcconf' to create a big RAID60 out of them (see below). But then I mount it and it is way too small; this should be about 20TB:
[root@solexa1 StorMan]# df -h /dev/sdb1
Filesystem  Size  Used Avail Use% Mounted on
/dev/sdb1   186G   60M
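When a multi-terabyte logical device mounts far smaller than expected, a useful first step is to compare what the controller, the kernel, and the partition table each report. A minimal sketch, assuming the Adaptec arcconf tool is installed, the array sits on controller 1, and the logical device appears as /dev/sdb (all placeholders based on the post above):

arcconf GETCONFIG 1 LD           # size and state of each logical device as the controller reports it
blockdev --getsize64 /dev/sdb    # raw size of the block device as the kernel sees it
parted /dev/sdb unit GB print    # partition table type (msdos vs gpt) and partition sizes

An msdos (MBR) label cannot describe partitions beyond 2TiB, so a GPT label (parted /dev/sdb mklabel gpt) is usually needed before a ~20TB device can be partitioned at its full size.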
2009 Nov 11
0
[storage-discuss] ZFS on JBOD storage, mpt driver issue - server not responding
miro at cybershade.us said: > So at this point this looks like an issue with the MPT driver or these SAS > cards (I tested two) when under heavy load. I put the latest firmware for the > SAS card from LSI's web site - v1.29.00 - without any changes; the server still > locks. > > Any ideas or suggestions on how to fix or work around this issue? The adapter is > supposed to be
2019 Mar 14
1
howto monitor disks on a serveraid-8k?
On 3/14/19 2:31 PM, isdtor wrote: > >> I'd like to monitor the disks connected to a ServeRaid-8k controller in a >> server running CentOS 7 such that I can know when one fails. >> >> What's the best way to do that? > > It's been a long time since I worked with ServeRaid, and things may have changed in the meantime. > > IBM used to have an iso
2011 May 30
13
JBOD recommendation for ZFS usage
Dear all, sorry if it's kind of off-topic for the list, but after talking to lots of vendors I'm running out of ideas... We are looking for JBOD systems which (1) hold 20+ 3.5" SATA drives, (2) are rack mountable, (3) have all the nice hot-swap stuff, (4) allow 2 hosts to connect via SAS (4+ lines per host) and see all available drives as disks, no RAID volume. In a
2010 Oct 24
3
ZFS with STK raid card w battery
We have Sun STK RAID cards in our x4170 servers. These are battery backed with 256MB cache. What is the recommended ZFS configuration for these cards? Right now, I have created a one-to-one logical volume to disk mapping on the RAID card (one disk == one volume on the RAID card). Then, I mirror them using ZFS. No hardware mirror. What I am a little confused about is whether it is better to not do any
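For the one-volume-per-disk layout described above, the ZFS side is just a plain mirrored pool built from the single-disk volumes the card exports. A minimal sketch, assuming four of those volumes show up as c1t0d0 through c1t3d0 (hypothetical Solaris device names):

zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0   # two 2-way ZFS mirrors, striped together
zpool status tank                                             # verify layout and drive health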
2013 Dec 04
3
Adaptec 5805 as a guest on ESXi 5.5 - problem
Hi, I've installed FreeBSD (stable/9.2) as a guest on ESXi 5.5. I've added the Adaptec controller via passthrough. Unfortunately FreeBSD does not show the hard drives. Any clue?
aac0: <Adaptec RAID 5805> mem 0xfd200000-0xfd3fffff irq 18 at device 0.0 on pci3
aac0: Enabling 64-bit address support
aac0: Enable Raw I/O
aac0: Enable 64-bit array
aac0: New comm. interface enabled
aac0:
2012 Sep 12
1
sysutils/arcconf errors on 9.x versions
Back in July, this error was discussed briefly on the mailing list(s). It appears that a fix (r238182) was submitted for inclusion in 9.1 (early). This problem still appears in 9.1-RC1. Will the fix be included in 9.1-RELEASE (or better yet 9.1-RC2)? Thanks. David Boyd.
----------------------------------------------------------------------------------
1st e-mail from pluknet responding to
2006 Nov 10
3
aaccli on recent controllers?
I have just built a new SunFire X4100 server with an Adaptec 2230SLP RAID card using FreeBSD 6.2-PRE kernel (from September 20). Everything is working extremely well except I cannot run the aaccli utility on this controller. When I try to open the controller, it gives this error: Command Error: <The current AFAAPI.DLL is too old to work with the current controller software.> On
2017 Jan 20
0
CentOS 7 and Areca ARC-1883I SAS controller: JBOD or not to JBOD?
> The disks I am going to use are 6TB Seagate Enterprise ST6000NM0034 > 7200rpm SAS/12Gbit 128 MB Sorry to hear that; my experience is that the Seagate brand has the shortest MTBF of any disk I have ever used... > If hardware RAID is preferred, the controller's cache could be updated > to 4GB and I wonder how much performance gain this would give me? Lots, especially with slower
2017 Jan 20
0
CentOS 7 and Areca ARC-1883I SAS controller: JBOD or not to JBOD?
> This is why before configuring and installing everything you may want to > attach drives one at a time, and upon boot take note of which physical > drive number the controller has for that drive, and definitely label it so > you will know which drive to pull when drive failure is reported. Sorry Valeri, that only works if you're the only guy in the org. In reality, you cannot
2017 Jan 21
0
CentOS 7 and Areca ARC-1883I SAS controller: JBOD or not to JBOD?
On 2017-01-20, Valeri Galtsev <galtsev at kicp.uchicago.edu> wrote: > > Hm, not certain what process you describe. Most of my controllers are > 3ware and LSI, I just pull the failed drive (and I know the failed physical drive > number), put a good one in its place and the rebuild starts right away. I know for sure that LSI's storcli utility supports an identify operation, which (if the
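The identify operation referred to here blinks a slot's locate LED so the physical drive can be confirmed before anything is pulled. A minimal sketch of the storcli form, assuming controller 0, enclosure 32, slot 4 (all placeholder IDs):

storcli /c0/e32/s4 start locate   # blink the locate LED on the suspect drive's slot
storcli /c0/e32/s4 stop locate    # turn the LED off once the drive is found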
2017 Jan 21
1
CentOS 7 and Areca ARC-1883I SAS controller: JBOD or not to JBOD?
On Sat, January 21, 2017 12:16 am, Keith Keller wrote: > On 2017-01-20, Valeri Galtsev <galtsev at kicp.uchicago.edu> wrote: >> >> Hm, not certain what process you describe. Most of my controllers are >> 3ware and LSI, I just pull the failed drive (and I know the failed physical >> drive >> number), put a good one in its place and the rebuild starts right away. > > I
2017 Jan 20
6
CentOS 7 and Areca ARC-1883I SAS controller: JBOD or not to JBOD?
Hi, does anyone have experience with the ARC-1883I SAS controller on CentOS 7? I am planning a RAID1 setup and I am wondering if I should use the controller's RAID functionality, which has a 2GB cache, or should I go with JBOD + Linux software RAID? The disks I am going to use are 6TB Seagate Enterprise ST6000NM0034, 7200rpm, SAS/12Gbit, 128 MB. If hardware RAID is preferred, the
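If the JBOD + Linux software RAID route is taken, the md side of a two-disk RAID1 is short. A minimal sketch, assuming the two 6TB disks are exposed as /dev/sda and /dev/sdb (placeholder device names; partitioning and filesystem choices are left out):

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb   # build the mirror
mdadm --detail /dev/md0                                                # check array state and resync progress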
2017 Jan 21
0
CentOS 7 and Areca ARC-1883I SAS controller: JBOD or not to JBOD?
Hi Valeri, Before you pull a drive you should check to make sure that doing so won't kill the whole array. MegaCli can help you prevent a storage disaster and can give you more insight into your RAID, the status of the virtual disks, and the disks that make up each array. MegaCli will let you see the health and status of each drive: does it have media errors, is it in predictive
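The checks described above map onto a couple of MegaCli queries. A minimal sketch, assuming a single adapter (hence -aALL):

MegaCli -PDList -aALL          # per-drive firmware state, media error count, predictive failure count
MegaCli -LDInfo -Lall -aALL    # virtual disk state (Optimal/Degraded) for each array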
2017 Jan 20
2
CentOS 7 and Areca ARC-1883I SAS controller: JBOD or not to JBOD?
On Fri, January 20, 2017 12:59 pm, Joseph L. Casale wrote: >> The disks I am going to use are 6TB Seagate Enterprise ST6000NM0034 >> 7200rpm SAS/12Gbit 128 MB > > Sorry to hear that; my experience is that the Seagate brand has the shortest > MTBF > of any disk I have ever used... > >> If hardware RAID is preferred, the controller's cache could be updated >> to
2019 Mar 14
4
howto monitor disks on a serveraid-8k?
Hi, I'd like to monitor the disks connected to a ServeRaid-8k controller in a server running CentOS 7 such that I can know when one fails. What's the best way to do that?
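The ServeRAID-8k is an Adaptec-based controller (aacraid driver), so one option, assuming Adaptec's arcconf utility recognizes the card, is to poll it from cron and alert on anything that is not Optimal/Online. A minimal sketch:

arcconf GETCONFIG 1 LD | grep -i status   # logical device status (Optimal / Degraded)
arcconf GETCONFIG 1 PD | grep -i state    # per-drive state (Online / Failed / Missing)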
2018 Apr 09
0
JBOD / ZFS / Flash backed
> > > Is a flash-backed RAID required for JBOD, and should it be 1GB, 2, or 4GB > flash? > Is anyone able to clarify this requirement for me?
2007 May 12
3
zfs and jbod-storage
Hi. I'm managing an HDS storage system which is slightly larger than 100 TB, and we have used approx. 3/4 of it. We use vxfs. The storage system is attached to a Solaris 9 on SPARC host via a fibre switch. The storage is shared via NFS to our webservers. If I were to replace vxfs with zfs I could utilize raidz(2) instead of the built-in hardware RAID controller. Are there any jbod-only storage
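With a JBOD enclosure presenting raw disks, the raidz2 replacement for the built-in hardware RAID is a single pool command. A minimal sketch, assuming six disks visible as c2t0d0 through c2t5d0 (placeholder Solaris device names):

zpool create tank raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0   # 6-disk raidz2, survives any two drive failures
zfs set sharenfs=on tank                                             # share over NFS, matching the existing setup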
2018 Apr 09
0
JBOD / ZFS / Flash backed
Yes, the flash-backed RAID cards use a super-capacitor to back up the flash cache. You have a choice of flash module sizes to include on the card. The card supports RAID modes as well as JBOD. I do not know if Gluster can make use of a battery-backed, flash-based cache when the disks are presented by the RAID card in JBOD. The hardware vendor asked "Do you know if Gluster makes use of
2007 Oct 10
0
Areca 1100 SATA Raid Controller in JBOD mode Hangs on zfs root creation.
Just as I create a ZFS pool and copy the root partition to it... the performance seems to be really good, then suddenly the system hangs all my sessions and displays this on the console:
Oct 10 00:23:28 sunrise arcmsr: WARNING: arcmsr0: dma map got 'no resources'
Oct 10 00:23:28 sunrise arcmsr: WARNING: arcmsr0: dma allocate fail
Oct 10 00:23:28 sunrise arcmsr: WARNING: arcmsr0: