Displaying 20 results from an estimated 12000 matches similar to: "7.1 install with Areca arc-1224"
2015 Jul 05 (1) 7.1 install with Areca arc-1224
On 07/05/2015 09:17 AM, linush at verizon.net wrote:
> Someone please tell me what I did to screw this thing up so badly.
On 07/05/15, Gordon Messmer <gordon.messmer at gmail.com> wrote:
> Have you looked at the log files in /mnt/sysimage/root/?
------------- Quoting broken in this mailer ------------
So I looked in /mnt/sysimage/var/log/anaconda and found this in anaconda.packaging.log:
2015 Jul 05 (2) 7.1 install with Areca arc-1224
On 07/05/15, Gordon Messmer wrote:
anaconda will try to delete an rpm file if it gets an IOError. Your
media may be corrupt. Check that first.
----- Above quoted -----
No such luck. On the system where I'm doing the install, I used dd to read the entire DVD and also copied every .rpm to /dev/null and didn't get any I/O errors.
What next?
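A minimal sketch of that kind of read-check (the device /dev/sr0 and mount point /mnt/dvd are illustrative):

  # read every block of the disc; dd stops and reports on the first I/O error
  dd if=/dev/sr0 of=/dev/null bs=1M
  # stream every rpm on the mounted disc to /dev/null
  find /mnt/dvd -name '*.rpm' -exec cat {} + > /dev/null

Note this only proves the blocks are readable, not that their contents match the original ISO.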
2015 Jul 07 (0) 7.1 install with Areca arc-1224
On 07/06/15 18:06, C Linus Hicks wrote:
> On 07/06/15, g wrote:
>> you might try verifying that system you are getting error message on
>> has a good cd/dvd drive.
>>
>> burn another dvd at at least 4 speeds slower.
>>
>> if runs ok, bad dvd.
>>
>> if still fails, bad drive.
>>
>> another way you can check is to pull iso on system you are
2015 Jul 06 (2) 7.1 install with Areca arc-1224
On 07/05/15, Gordon Messmer wrote:
That's not the same as checking the media for corruption. You may be
able to read all of the files, but if the data is corrupt, rpm may throw
an IOError.
So, the next thing to do is check your media. The DVD should offer to
do that first when you boot from it.
------ Above quoted --------
Booted the DVD again, took the default. It got to 76.2% then
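For reference, the media check the DVD offers at boot can also be run by hand with checkisomd5 from the isomd5sum package (the disc device is assumed to be /dev/sr0):

  # verifies the checksum that was implanted in the ISO when it was built
  checkisomd5 --verbose /dev/sr0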
2015 Jul 06 (2) 7.1 install with Areca arc-1224
On 07/06/15, g wrote:
you might try verifying that system you are getting error message on
has a good cd/dvd drive.
burn another dvd at at least 4 speeds slower.
if runs ok, bad dvd.
if still fails, bad drive.
another way you can check is to pull iso on system you are having
problem with and burn dvd.
if you get error, get a new drive.
-------- Above quoted --------
When I md5sum the DVD
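A common way to do that comparison against the source image (the ISO filename is illustrative; hashing the raw device naively can fail because many drives return padding past the end of the image):

  # hash exactly as many 2048-byte sectors as the ISO contains
  blocks=$(( $(stat -c %s CentOS-7.1.iso) / 2048 ))
  dd if=/dev/sr0 bs=2048 count=$blocks | md5sum
  md5sum CentOS-7.1.iso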
2016 Sep 23 (1) OT: Areca ARC-1220 compatible with SATA III (6Gb/s) drives?
Running a C6 fileserver. Want to replace 7-year-old HDs connected to an Areca
ARC-1220 raid sata II (3Gb/s) controller. Has anyone used this controller
with newer 2TB SATA III (6Gb/s) WD Re drives like the WD2000FYYZ or the
WD2004FBYZ?
2008 Jun 11 (4) Areca ARC-1231 Raid 6 Slow LS Listing Performance on large directory
Hello,
I have a RAID-6 partition on an Areca ARC-1231 card in an Intel S5000PAL
system, with 6 disks in the raid volume. The controller is set up for
write-back caching and has a 2 GB memory cache on it. The system runs
FreeBSD 7.0-STABLE with SCHED_ULE enabled.
I have a folder with a mix of small and big files, 3009 files in total.
In the user system we
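A quick way to tell whether a slow listing is spent reading the directory itself or stat()ing every entry (run inside the affected directory):

  # raw directory read: no sorting, no stat() per entry
  time ls -f > /dev/null
  # forces a stat() of every entry
  time ls -l > /dev/null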
2017 Jan 21 (0) CentOS 7 and Areca ARC-1883I SAS controller: JBOD or not to JBOD?
Hi Valeri,
Before you pull a drive you should check to make sure that doing so
won't kill the whole array.
MegaCli can help you prevent a storage disaster and can give you more
insight into your RAID: the status of the virtual disks and of the disks
that make up each array.
MegaCli will let you see the health and status of each drive. Does it have
media errors, is it in predictive
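The usual first look with MegaCli goes something like this (the binary is often installed as MegaCli64; adapter 0 is assumed):

  # one stanza per physical drive: media error count, predictive failure count, firmware state
  MegaCli64 -PDList -aALL
  # summary of each virtual disk
  MegaCli64 -LDInfo -Lall -aALL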
2017 Jan 21 (1) CentOS 7 and Areca ARC-1883I SAS controller: JBOD or not to JBOD?
On Fri, January 20, 2017 7:00 pm, Cameron Smith wrote:
> Hi Valeri,
>
>
> Before you pull a drive you should check to make sure that doing so
> won't kill the whole array.
Wow! What did I say to make you treat me as an ultimate idiot!? ;-) All my
comments, at least in my own reading, were about things you need to do to
make sure that when you hot-unplug a bad drive it is indeed failed
2017 Jan 20 (0) CentOS 7 and Areca ARC-1883I SAS controller: JBOD or not to JBOD?
> The disks I am going to use are 6TB Seagate Enterprise ST6000NM0034
> 7200rpm SAS/12Gbit 128 MB
Sorry to hear that; in my experience the Seagate brand has the shortest MTBF
of any disk I have ever used...
> If hardware RAID is preferred, the controller's cache could be updated
> to 4GB and I wonder how much performance gain this would give me?
Lots, especially with slower
2017 Jan 20 (0) CentOS 7 and Areca ARC-1883I SAS controller: JBOD or not to JBOD?
> This is why before configuring and installing everything you may want to
> attach drives one at a time, and upon boot take note of which physical
> drive number the controller has for that drive, and definitely label it so
> you will know which drive to pull when drive failure is reported.
Sorry Valeri, that only works if you're the only guy in the org.
In reality, you cannot
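One way to rebuild that slot-to-drive mapping later is to match serial numbers against the labels on the trays; a sketch for drives the kernel sees directly (drives behind some controllers need smartctl's -d option):

  # print model and serial for each disk the OS can see
  for d in /dev/sd?; do
      echo "== $d =="
      smartctl -i "$d" | grep -Ei 'model|serial'
  done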
2017 Jan 21 (0) CentOS 7 and Areca ARC-1883I SAS controller: JBOD or not to JBOD?
On 2017-01-20, Valeri Galtsev <galtsev at kicp.uchicago.edu> wrote:
>
> Hm, not certain what process you describe. Most of my controllers are
> 3ware and LSI, I just pull the failed drive (and I know the failed physical
> drive number), put a good one in its place and the rebuild starts right away.
I know for sure that LSI's storcli utility supports an identify
operation, which (if the
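With storcli that identify operation looks roughly like this (controller, enclosure and slot numbers are illustrative):

  # blink the locate LED on enclosure 32, slot 4 of controller 0
  storcli64 /c0/e32/s4 start locate
  # turn it off again once the drive is found
  storcli64 /c0/e32/s4 stop locate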
2017 Jan 21 (1) CentOS 7 and Areca ARC-1883I SAS controller: JBOD or not to JBOD?
On Sat, January 21, 2017 12:16 am, Keith Keller wrote:
> On 2017-01-20, Valeri Galtsev <galtsev at kicp.uchicago.edu> wrote:
>>
>> Hm, not certain what process you describe. Most of my controllers are
>> 3ware and LSI, I just pull the failed drive (and I know the failed physical
>> drive number), put a good one in its place and the rebuild starts right away.
>
> I
2017 Jan 20 (2) CentOS 7 and Areca ARC-1883I SAS controller: JBOD or not to JBOD?
On Fri, January 20, 2017 12:59 pm, Joseph L. Casale wrote:
>> The disks I am going to use are 6TB Seagate Enterprise ST6000NM0034
>> 7200rpm SAS/12Gbit 128 MB
>
> Sorry to hear that; in my experience the Seagate brand has the shortest
> MTBF of any disk I have ever used...
>
>> If hardware RAID is preferred, the controller's cache could be updated to
2017 Jan 20 (6) CentOS 7 and Areca ARC-1883I SAS controller: JBOD or not to JBOD?
Hi,
Does anyone have experience with the ARC-1883I SAS controller on CentOS 7?
I am planning a RAID1 setup and I am wondering if I should use
the controller's RAID functionality, which has a 2GB cache, or go
with JBOD + Linux software RAID?
The disks I am going to use are 6TB Seagate Enterprise ST6000NM0034
7200rpm SAS/12Gbit 128 MB
If hardware RAID is preferred, the
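For comparison, the software-RAID half of that choice is short with mdadm (device names are illustrative):

  # mirror two whole disks exposed by the controller in JBOD mode
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
  # watch the initial resync progress
  cat /proc/mdstat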
2017 Jan 20 (4) CentOS 7 and Areca ARC-1883I SAS controller: JBOD or not to JBOD?
On Fri, January 20, 2017 5:16 pm, Joseph L. Casale wrote:
>> This is why before configuring and installing everything you may want to
>> attach drives one at a time, and upon boot take note of which physical
>> drive number the controller has for that drive, and definitely label it so
>> you will know which drive to pull when drive failure is reported.
>
>
2007 Oct 10 (0) Areca 1100 SATA Raid Controller in JBOD mode Hangs on zfs root creation.
Just as I create a ZFS pool and copy the root partition to it... the performance seems really good, then suddenly the system hangs all my sessions and displays on the console:
Oct 10 00:23:28 sunrise arcmsr: WARNING: arcmsr0: dma map got 'no resources'
Oct 10 00:23:28 sunrise arcmsr: WARNING: arcmsr0: dma allocate fail
Oct 10 00:23:28 sunrise arcmsr: WARNING: arcmsr0:
2008 Mar 26 (1) freebsd 7 and areca controller
Hi.
I'm looking at deploying a FreeBSD 7-RELEASE server with some storage
attached to an Areca ARC-1680 controller. But this card is not
mentioned in 'man 4 arcmsr'
(http://www.freebsd.org/cgi/man.cgi?query=arcmsr&sektion=4&manpath=FreeBSD+7.0-RELEASE).
Areca's website does mention FreeBSD as a supported OS
(http://www.areca.com.tw/products/pcietosas1680series.htm).
Has
2012 Apr 03 (2) CentOS 6.2 + areca raid + xfs problems
Two weeks ago I (clean-)installed CentOS 6.2 on a server which had been running 5.7.
There is a 16-disk (~11 TB) data volume running on an Areca ARC-1280 raid card with LVM + xfs filesystem on it. The included arcmsr driver module is loaded.
At first it seemed ok, but within a few hours I started getting I/O error messages on directory listings, and then a bit later when I did a vgdisplay
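In that situation the usual first steps are to check the kernel log for driver errors and run a read-only filesystem check (the volume path below is illustrative):

  # look for controller/driver trouble around the time of the failures
  dmesg | grep -iE 'arcmsr|i/o error'
  # no-modify check of the xfs filesystem (it must be unmounted first)
  xfs_repair -n /dev/vg_data/lv_data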
2014 Feb 21 (2) CentOS x86_64 6.4
Hi List,
Strange problem.
I am running off a CentOS x86_64 6.4 USB key that is sdb.
I have an SSD disk that has a CentOS image on it. I have mounted
the partitions: /dev/sda3 as /mnt/sysimage
and /dev/sda1 as /mnt/sysimage/boot.
I am trying to chroot to the /mnt/sysimage dir but get the following error.
[root at localhost root]# /usr/sbin/chroot /mnt/sysimage /bin/bash
/usr/sbin/chroot: failed to run
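A "failed to run" from chroot at that point usually means the target shell, or the loader it needs, is missing or built for the wrong architecture; two quick checks:

  # does the target shell exist, and is it x86-64?
  file /mnt/sysimage/bin/bash
  # is its runtime linker present on the target root?
  ls -l /mnt/sysimage/lib64/ld-linux-x86-64.so.2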