Displaying 20 results from an estimated 1100 matches similar to: "Re: [CentOS-devel] Areca RAID drivers"
2009 Jan 13
12
OpenSolaris better than Solaris 10 u6 with regards to ARECA Raid Card
Under Solaris 10 u6, no matter how I configured my ARECA 1261ML RAID card,
I got errors on all drives resulting from SCSI timeouts.
yoda:~ # tail -f /var/adm/messages
Jan 9 11:03:47 yoda.asc.edu scsi: [ID 107833 kern.notice] Requested Block: 239683776 Error Block: 239683776
Jan 9 11:03:47 yoda.asc.edu scsi: [ID 107833 kern.notice] Vendor: Seagate
2007 Oct 10
0
Areca 1100 SATA Raid Controller in JBOD mode Hangs on zfs root creation.
Just as I create a ZFS pool and copy the root partition to it... the performance seems really good, then suddenly the system hangs all my sessions and displays on the console:
Oct 10 00:23:28 sunrise arcmsr: WARNING: arcmsr0: dma map got 'no resources'
Oct 10 00:23:28 sunrise arcmsr: WARNING: arcmsr0: dma allocate fail
Oct 10 00:23:28 sunrise arcmsr: WARNING: arcmsr0:
2008 Mar 26
1
freebsd 7 and areca controller
Hi.
I'm looking at deploying a freebsd 7-release server with some storage
attached to an areca ARC-1680 controller. But this card is not
mentioned in 'man 4 arcmsr'
(http://www.freebsd.org/cgi/man.cgi?query=arcmsr&sektion=4&manpath=FreeBSD+7.0-RELEASE).
Areca's website does mention FreeBSD as a supported OS
(http://www.areca.com.tw/products/pcietosas1680series.htm).
Has
2005 May 02
0
[ANNOUNCE] Areca SATA RAID drivers for CentOS 4.0
Hi,
To follow up on the CentOS 4.0/Areca SATA RAID driver disk I created a month
or two ago, I have now created kernel-module-arcmsr RPMs containing just
the kernel module to track the kernel updates. This means:
a) No need to patch, rebuild and maintain customised kernels for Areca
support
b) Keep everything maintained with RPM
I've taken the latest 1.20.00.07 kernel driver source as found in
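For reference, using such a kmod package typically looks something like this (the exact RPM filename below is illustrative, not taken from the announcement):
rpm -ivh kernel-module-arcmsr-1.20.00.07-1.x86_64.rpm   # filename illustrative
modinfo arcmsr    # confirm the module matches the running kernel
modprobe arcmsr   # load the driver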
2012 Apr 03
2
CentOS 6.2 + areca raid + xfs problems
Two weeks ago I (clean-)installed CentOS 6.2 on a server which had been running 5.7.
There is a 16-disk, ~11 TB data volume running on an Areca ARC-1280 RAID card with LVM + an xfs filesystem on it. The included arcmsr driver module is loaded.
At first it seemed OK, but within a few hours I started getting I/O error messages on directory listings, and then a bit later when I did a vgdisplay
2010 Jul 26
1
areca 1100 kmod / kernel support
Hi,
Is the Areca 1100 RAID controller supported by CentOS 5? Or is there a
kmod rpm available?
Thanks
Juergen
2008 Jul 25
18
zfs, raidz, spare and jbod
Hi.
I installed Solaris Express Developer Edition (b79) on a Supermicro
quad-core Harpertown E5405 with 8 GB RAM and two internal SATA drives.
I installed Solaris onto one of the internal drives. I added an Areca
ARC-1680 SAS controller and configured it in JBOD mode. I attached an
external SAS cabinet with 16 SAS drives of 1 TB (931 binary GB) each. I
created a raidz2 pool with ten disks and one spare.
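For context, a raidz2 pool with ten disks and one spare is created roughly like this (pool and device names below are illustrative, not taken from the post):
zpool create tank raidz2 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 \
    c3t5d0 c3t6d0 c3t7d0 c3t8d0 c3t9d0 spare c3t10d0   # names illustrative
zpool status tank    # verify the raidz2 vdev and the spare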
2015 Jul 05
1
7.1 install with Areca arc-1224
I must be doing something horribly wrong and I hope somebody can help.
The Areca ARC-1224 is not supported by the driver included in 7.1, so I have to supply one when starting the install. The documentation provided by Areca and the Red Hat install guide say the same thing: put the driver on an accessible medium, then append inst.dd to the boot command, choose the driver, and now the
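For reference, the inst.dd approach described here usually amounts to a boot-line addition along these lines (the label and driver image name are illustrative):
inst.dd=hd:LABEL=DRIVERS:/arcmsr-dd.iso   # appended to the installer boot line; names illustrative
inst.dd                                   # or a bare inst.dd to have the installer prompt for the driver medium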
2005 Mar 20
0
[ANNOUNCE] Areca SATA RAID driver disk for CentOS 4.0
Hi,
To satisfy my own requirements, I've created a driver disk for the Areca
SATA RAID controllers[0]. It currently contains the 1.20.00.06 driver
built for the x86_64 SMP and non-SMP kernels, but should be fairly
straightforward to add the driver built for 32-bit x86 kernels as well.
You can find the driver disk and instructions here:
http://www.bodgit-n-scarper.com/code.html#centos
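For reference, the CentOS 4 installer typically loads a driver disk when started with the dd option at the boot prompt (a sketch, not quoted from the announcement):
boot: linux dd
# the installer then prompts for the driver disk location (floppy, USB, CD)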
2010 Jan 08
0
ZFS partially hangs when removing an rpool mirrored disk while having some IO on another pool on another partition of the same disk
Hello,
Sorry for the (very) long subject but I've pinpointed the problem to this exact situation.
I know about the other threads related to hangs, but in my case there was no < zfs destroy > involved, nor any compression or deduplication.
To make a long story short, when
- a disk contains 2 partitions (p1=32GB, p2=1800 GB) and
- p1 is used as part of a zfs mirror of rpool
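A minimal sketch of the layout being described, with illustrative device names:
# p1 (32 GB) mirrored into rpool, p2 (~1800 GB) backing a separate data pool
zpool attach rpool c0t0d0p1 c0t1d0p1   # device names illustrative
zpool create data c0t1d0p2
zpool iostat -v 5                      # watch IO on both pools while reproducing the hang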
2017 Oct 22
2
Areca RAID controller on latest CentOS 7 (1708 i.e. RHEL 7.4) kernel 3.10.0-693.2.2.el7.x86_64
-----Original Message-----
From: CentOS [mailto:centos-bounces at centos.org] On Behalf Of Noam
Bernstein
Sent: Sunday, October 22, 2017 8:54 AM
To: CentOS mailing list <centos at centos.org>
Subject: [CentOS] Areca RAID controller on latest CentOS 7 (1708 i.e. RHEL
7.4) kernel 3.10.0-693.2.2.el7.x86_64
> Is anyone running any Areca RAID controllers with the latest CentOS 7 kernel,
>
2007 Sep 11
0
irqbalanced on SMP dom0 ?
Hi listmembers,
not a really urgent question, but I'm just curious about it:
Is it advisable to run irqbalanced on dom0 when running
domUs pinned to particular cores?
As an example, I've got a dual quad-core Xen system running
with
domU pinned to cores 1-3 (CPU#0)
domU pinned to cores 4-5 (CPU#1)
domU pinned to cores 6-7 (CPU#1)
so dom0 should have 100% time on
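For context, pinning like that is usually done either in the domU config file or at runtime with xm; a rough sketch with placeholder domain names:
xm vcpu-pin domU1 all 1-3   # domain names are placeholders
xm vcpu-pin domU2 all 4-5
xm vcpu-pin domU3 all 6-7
# or in each domU config file: cpus = "1-3"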
2009 Jan 14
1
Areca 1220 kernel lockups
Has anyone experienced any problems with Areca raid cards specifically
the 1220 causing kernels to lock up?
We are running 2.6.18-92.1.22.el5xen on 64-bit. We have "areca_cli rsf
info" run once an hour from cron to check for RAID issues. Having this
running seems to cause the box to lock up. What's weird is I can't seem
to make it lock up while running that command by
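For reference, the hourly check described would look something like this cron entry (the CLI path and log file are illustrative):
# /etc/cron.d/areca-check -- hourly RAID set check; paths illustrative
0 * * * * root /usr/sbin/areca_cli rsf info >> /var/log/areca_rsf.log 2>&1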
2016 Sep 23
1
OT: Areca ARC-1220 compatible with SATA III (6Gb/s) drives?
Running a C6 fileserver. I want to replace 7-year-old HDs connected to an Areca
ARC-1220 SATA II (3Gb/s) RAID controller. Has anyone used this controller
with newer 2TB SATA III (6Gb/s) WD Re drives like the WD2000FYYZ or the
WD2004FBYZ?
2009 Jan 14
0
Areca 1220 crashing box
Has anyone experienced any problems with Areca raid cards specifically
the 1220 causing kernels to lock up?
We are running 2.6.18-92.1.22.el5xen on 64-bit. We have "areca_cli rsf
info" run once an hour from cron to check for RAID issues. Having this
running seems to cause the box to lock up. What's weird is I can't seem
to make it lock up while running that command by
2005 Oct 11
0
Areca controllers?
Just wondering if anyone has seen performance tests on Areca SATA
controllers? How good are they compared to LSI "X" series and 3ware 9500s?
And last but not least, are the drivers included in the stock kernel?
Regards,
Harald
2017 Oct 22
0
Areca RAID controller on latest CentOS 7 (1708 i.e. RHEL 7.4) kernel 3.10.0-693.2.2.el7.x86_64
Is anyone running any Areca RAID controllers with the latest CentOS 7 kernel, 3.10.0-693.2.2.el7.x86_64? We recently updated (from 3.10.0-514.26.2.el7.x86_64), and we've started having lots of problems. To add to the confusion, there's also a hardware problem (either with the controller or the backplane most likely) that we're in the process of analyzing. Regardless, we have an ARC1883i, and
2009 Dec 18
1
part of active zfs pool error message reports incorrect device
I am seeing this issue posted a lot in the forums:
A zpool add/replace command is run, for example:
zpool add archive spare c2t0d2
invalid vdev specification
use '-f' to override the following errors:
/dev/dsk/c2t1d7s0 is part of active ZFS pool archive. Please see zpool(1M).
(-f just says: the following errors must be manually repaired:)
Also, when running format and
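One way to investigate this kind of mismatch (a suggestion, not from the original post) is to compare the pool's actual vdev list with the label left on the disk being added:
zpool status -v archive       # list the devices the pool really uses
zdb -l /dev/dsk/c2t0d2s0      # inspect any stale ZFS label on the new disk; slice shown is illustrative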
2008 Jun 11
4
Areca ARC-1231 Raid 6 Slow LS Listing Performance on large directory
Hello,
I have a RAID-6 partition with the Areca ARC-1231 card on an Intel
S5000PAL system with 6 disks as part of the RAID volume. The system has
been set up with write-back cache and the RAID card has 2 GB of memory
cache on it. It is installed on FreeBSD 7.0-STABLE with SCHED_ULE enabled.
I have a folder with a lot of small and big files in it, 3009 files in
total. In the user system we
2017 Jan 21
0
CentOS 7 and Areca ARC-1883I SAS controller: JBOD or not to JBOD?
Hi Valeri,
Before you pull a drive you should check to make sure that doing so
won't kill the whole array.
MegaCli can help you prevent a storage disaster and gives you more insight
into your RAID and the status of the virtual disks and the disks that
make up each array.
MegaCli will let you see the health and status of each drive. Does it have
media errors, is it in predictive
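For reference, the checks being described usually come down to commands like these (the install path varies between systems; the binary is often MegaCli64 on 64-bit installs):
/opt/MegaRAID/MegaCli/MegaCli64 -PDList -aALL       # per-drive state, media error and predictive failure counts
/opt/MegaRAID/MegaCli/MegaCli64 -LDInfo -Lall -aALL # virtual disk / array status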