
Displaying 20 results from an estimated 7000 matches similar to: "Areca 1100 SATA Raid Controller in JBOD mode Hangs on zfs root creation"

2007 Oct 10
0
Areca 1100 SATA Raid Controller in JBOD mode Hangs on zfs root creation.
Just as I create a ZFS pool and copy the root partition to it... the performance seems really good, then suddenly the system hangs all my sessions and displays on the console:
Oct 10 00:23:28 sunrise arcmsr: WARNING: arcmsr0: dma map got 'no resources'
Oct 10 00:23:28 sunrise arcmsr: WARNING: arcmsr0: dma allocate fail
Oct 10 00:23:28 sunrise arcmsr: WARNING: arcmsr0:
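A quick way to watch for these arcmsr warnings while reproducing the hang, assuming the standard Solaris log location, is a minimal sketch like:

    # follow the system log and surface only arcmsr driver messages
    tail -f /var/adm/messages | grep arcmsr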
2008 Jul 25
18
zfs, raidz, spare and jbod
Hi. I installed Solaris Express Developer Edition (b79) on a Supermicro quad-core Harpertown E5405 with 8 GB RAM and two internal SATA drives. I installed Solaris onto one of the internal drives. I added an Areca ARC-1680 SAS controller and configured it in JBOD mode. I attached an external SAS cabinet with 16 SAS drives of 1 TB (931 GiB) each. I created a raidz2 pool with ten disks and one spare.
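For reference, a pool of the shape described (ten-disk raidz2 plus one hot spare) would be created roughly as follows; the cXtYd0 device names are placeholders, not the poster's actual devices:

    # ten-disk raidz2 vdev with one hot spare (device names hypothetical)
    zpool create tank raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 \
        c2t5d0 c2t6d0 c2t7d0 c2t8d0 c2t9d0 \
        spare c2t10d0
    zpool status tank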
2009 Jan 13
12
OpenSolaris better than Solaris 10u6 with regards to ARECA Raid Card
Under Solaris 10 u6, no matter how I configured my ARECA 1261ML RAID card, I got errors on all drives that result from SCSI timeout errors. yoda:~ # tail -f /var/adm/messages
Jan 9 11:03:47 yoda.asc.edu scsi: [ID 107833 kern.notice] Requested Block: 239683776 Error Block: 239683776
Jan 9 11:03:47 yoda.asc.edu scsi: [ID 107833 kern.notice] Vendor: Seagate
2017 Oct 22
2
Areca RAID controller on latest CentOS 7 (1708 i.e. RHEL 7.4) kernel 3.10.0-693.2.2.el7.x86_64
Noam Bernstein wrote on Sunday, October 22, 2017: > Is anyone running any Areca RAID controllers with the latest CentOS 7 kernel, >
2017 Oct 22
0
Areca RAID controller on latest CentOS 7 (1708 i.e. RHEL 7.4) kernel 3.10.0-693.2.2.el7.x86_64
Is anyone running any Areca RAID controllers with the latest CentOS 7 kernel, 3.10.0-693.2.2.el7.x86_64? We recently updated (from 3.10.0-514.26.2.el7.x86_64), and we've started having lots of problems. To add to the confusion, there's also a hardware problem (most likely with either the controller or the backplane) that we're in the process of analyzing. Regardless, we have an ARC1883i, and
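When bisecting a regression like this, it helps to confirm which arcmsr module the running kernel actually loaded; a minimal sketch on a stock CentOS 7 box:

    # show the in-tree driver version and the running kernel
    modinfo arcmsr | grep -i '^version'
    uname -r
    # any controller messages since boot
    dmesg | grep -i arcmsr | tail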
2017 Jan 20
0
CentOS 7 and Areca ARC-1883I SAS controller: JBOD or not to JBOD?
> The disks I am going to use are 6TB Seagate Enterprise ST6000NM0034 > 7200rpm SAS/12Gbit 128 MB
Sorry to hear that; my experience is that the Seagate brand has the shortest MTBF of any disk I have ever used...
> If hardware RAID is preferred, the controller's cache could be updated > to 4GB and I wonder how much performance gain this would give me?
Lots, especially with slower
2017 Jan 20
0
CentOS 7 and Areca ARC-1883I SAS controller: JBOD or not to JBOD?
> This is why before configuring and installing everything you may want to > attach drives one at a time, and upon boot take a note which physical > drive number the controller has for that drive, and definitely label it so > you will know which drive to pull when drive failure is reported.
Sorry Valeri, that only works if you're the only guy in the org. In reality, you cannot
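A hedged alternative to attaching drives one at a time is to record each disk's serial number up front, assuming smartmontools is installed and the controller exposes the disks individually (as it does in JBOD mode):

    # print device name and serial number for each disk; adjust the glob as needed
    for d in /dev/sd[a-z]; do
        printf '%s: ' "$d"
        smartctl -i "$d" | grep -i 'serial number'
    done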
2017 Jan 21
0
CentOS 7 and Areca ARC-1883I SAS controller: JBOD or not to JBOD?
On 2017-01-20, Valeri Galtsev <galtsev at kicp.uchicago.edu> wrote: > > Hm, not certain what process you describe. Most of my controllers are > 3ware and LSI, I just pull the failed drive (and I know the failed physical drive > number), put a good one in its place, and the rebuild starts right away.
I know for sure that LSI's storcli utility supports an identify operation, which (if the
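For reference, the storcli identify/locate operation mentioned here looks roughly like this; the controller, enclosure, and slot numbers are placeholders:

    # blink the locate LED on controller 0, enclosure 32, slot 5
    storcli64 /c0/e32/s5 start locate
    # ...replace the drive, then turn the LED off
    storcli64 /c0/e32/s5 stop locate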
2017 Jan 21
1
CentOS 7 and Areca ARC-1883I SAS controller: JBOD or not to JBOD?
On Sat, January 21, 2017 12:16 am, Keith Keller wrote: > On 2017-01-20, Valeri Galtsev <galtsev at kicp.uchicago.edu> wrote: >> >> Hm, not certain what process you describe. Most of my controllers are >> 3ware and LSI, I just pull the failed drive (and I know the failed physical >> drive >> number), put a good one in its place, and the rebuild starts right away. > > I
2017 Jan 21
0
CentOS 7 and Areca ARC-1883I SAS controller: JBOD or not to JBOD?
Hi Valeri, Before you pull a drive you should check to make sure that doing so won't kill the whole array. MegaCli can help you prevent a storage disaster and can give you more insight into your RAID: the status of the virtual disks and of the disks that make up each array. MegaCli will let you see the health and status of each drive. Does it have media errors, is it in predictive
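A sketch of the per-drive MegaCli check being described (the MegaCli64 binary name and install path vary by system):

    # list every physical drive with its error counters and state
    MegaCli64 -PDList -aALL | egrep -i 'slot|media error|predictive|firmware state'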
2017 Jan 20
2
CentOS 7 and Areca ARC-1883I SAS controller: JBOD or not to JBOD?
On Fri, January 20, 2017 12:59 pm, Joseph L. Casale wrote: >> The disks I am going to use are 6TB Seagate Enterprise ST6000NM0034 >> 7200rpm SAS/12Gbit 128 MB > > Sorry to hear that; my experience is that the Seagate brand has the shortest > MTBF > of any disk I have ever used... > >> If hardware RAID is preferred, the controller's cache could be updated >> to
2017 Jan 20
6
CentOS 7 and Areca ARC-1883I SAS controller: JBOD or not to JBOD?
Hi, does anyone have experience with the ARC-1883I SAS controller on CentOS 7? I am planning a RAID1 setup and I am wondering if I should use the controller's RAID functionality, which has a 2GB cache, or go with JBOD + Linux software RAID. The disks I am going to use are 6TB Seagate Enterprise ST6000NM0034 7200rpm SAS/12Gbit 128 MB. If hardware RAID is preferred, the
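If the JBOD + Linux software RAID route is chosen, the md side is short; a sketch with placeholder device names:

    # two-disk RAID1 across the JBOD-exposed drives
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
    cat /proc/mdstat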
2017 Jan 21
1
CentOS 7 and Areca ARC-1883I SAS controller: JBOD or not to JBOD?
On Fri, January 20, 2017 7:00 pm, Cameron Smith wrote: > Hi Valeri, > > > Before you pull a drive you should check to make sure that doing so > won't kill the whole array.
Wow! What did I say to make you treat me as an ultimate idiot!? ;-) All my comments, at least in my own reading, were about things you need to do to make sure that when you hot-unplug a bad drive it is indeed the failed
2005 May 02
0
[ANNOUNCE] Areca SATA RAID drivers for CentOS 4.0
Hi, to follow up on the CentOS 4.0/Areca SATA RAID driver disk I created a month or two ago, I have now created kernel-module-arcmsr RPMs containing just the kernel module, to track the kernel updates. This means:
a) No need to patch, rebuild, and maintain customised kernels for Areca support
b) Everything stays maintained with RPM
I've taken the latest 1.20.00.07 kernel driver source as found in
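With the module packaged this way, kernel updates are tracked through RPM, so checking the driver for the running kernel is just a query; the exact package naming is an assumption based on the announcement above:

    # check that a packaged arcmsr module exists for the running kernel
    rpm -qa | grep kernel-module-arcmsr
    uname -r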
2005 Mar 20
0
[ANNOUNCE] Areca SATA RAID driver disk for CentOS 4.0
Hi, to satisfy my own requirements, I've created a driver disk for the Areca SATA RAID controllers[0]. It currently contains the 1.20.00.06 driver built for the x86_64 SMP and non-SMP kernels, but it should be fairly straightforward to add the driver built for 32-bit x86 kernels as well. You can find the driver disk and instructions here: http://www.bodgit-n-scarper.com/code.html#centos
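For the CentOS 4-era installer, a driver disk like this is loaded by asking anaconda for it at the boot prompt:

    # at the installer boot: prompt, have anaconda prompt for a driver disk
    boot: linux dd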
2017 Jan 20
4
CentOS 7 and Areca ARC-1883I SAS controller: JBOD or not to JBOD?
On Fri, January 20, 2017 5:16 pm, Joseph L. Casale wrote: >> This is why before configuring and installing everything you may want to >> attach drives one at a time, and upon boot take a note which physical >> drive number the controller has for that drive, and definitely label it >> so >> you will know which drive to pull when drive failure is reported. > >
2009 Nov 11
0
[storage-discuss] ZFS on JBOD storage, mpt driver issue - server not responding
miro at cybershade.us said: > So at this point this looks like an issue with the MPT driver or these SAS > cards (I tested two) when under heavy load. I put the latest firmware for the > SAS card from LSI's web site - v1.29.00 - without any changes; the server still > locks. > > Any ideas or suggestions on how to fix or work around this issue? The adapter is > supposed to be
2016 Sep 23
1
OT: Areca ARC-1220 compatible with SATA III (6Gb/s) drives?
Running a C6 fileserver. I want to replace the 7-year-old HDs connected to an Areca ARC-1220 SATA II (3Gb/s) RAID controller. Has anyone used this controller with newer 2TB SATA III (6Gb/s) WD Re drives like the WD2000FYYZ or the WD2004FBYZ?
2018 Apr 09
0
JBOD / ZFS / Flash backed
Thanks, I suppose what I'm trying to gain is some clarity on which choice is best for a given application. How do I know whether it's better for me to use a RAID card or not, to include flash cache on it or not, and to use ZFS or not, when combined with a small number of SSDs in Replica 3? On Mon, Apr 9, 2018 at 10:49 AM, Alex Crow <acrow at integrafin.co.uk> wrote: > On 09/04/18
2018 Apr 09
2
JBOD / ZFS / Flash backed
On 09/04/18 16:49, Vincent Royer wrote: > > > Is a flash-backed RAID required for JBOD, and should it be 1 GB, 2 GB, > or 4 GB flash? >
RAID and JBOD are completely different things. JBODs are just that, bunches of disks, and they don't have any cache above them in hardware. If you're going to use ZFS under Gluster, look at the ZFS docs first. Short answer: no.
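If ZFS does end up under Gluster here, the brick usually sits on a plain mirrored pool rather than behind hardware RAID; a sketch with placeholder devices and mountpoint:

    # mirrored pool of two SSDs, mounted where the Gluster brick will live
    zpool create -m /bricks/brick1 gpool mirror /dev/sdb /dev/sdc
    # commonly recommended for Gluster workloads on ZFS on Linux
    zfs set xattr=sa gpool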