similar to: Areca controllers?

Displaying 20 results from an estimated 10000 matches similar to: "Areca controllers?"

2016 May 09
1
Internal RAID controllers question
On 05/08/2016 06:20 PM, John R Pierce wrote: > there are really only two choices today, Adaptec and Avago (formerly > LSI, they also control the former Areca product line). I don't believe that is correct. LSI acquired 3ware, and Avago acquired LSI. So, Avago owns the 3ware and LSI technology, but Adaptec and Areca are still competitors. > Whoops, Avago is now Broadcom, a
2017 Jan 21
0
CentOS 7 and Areca ARC-1883I SAS controller: JBOD or not to JBOD?
Hi Valeri, Before you pull a drive you should check to make sure that doing so won't kill the whole array. MegaCli can help you prevent a storage disaster and can let you have more insight into your RAID and the status of the virtual disks and the disks that make up each array. MegaCli will let you see the health and status of each drive. Does it have media errors, is it in predictive
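As a rough sketch of the kind of MegaCli checks being described (the binary is usually installed as MegaCli64 under /opt/MegaRAID/MegaCli; the adapter number here is an assumption, not taken from the thread):

    # List every physical drive with its media error count, predictive
    # failure count and firmware state (Online, Failed, Unconfigured, ...)
    /opt/MegaRAID/MegaCli/MegaCli64 -PDList -aALL

    # Show the state of each virtual disk (Optimal, Degraded, Offline)
    /opt/MegaRAID/MegaCli/MegaCli64 -LDInfo -Lall -aALL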
2016 May 06
3
Internal RAID controllers question
Dear Experts, one of the RAID threads today prompted me to ask everybody. Which internal hardware RAID controllers will survive for some time to come, in your estimate? First of all, my beloved 3ware finally seems to have passed away. After multiple acquisitions and becoming part of LSI and getting bought with LSI, it probably became non-operational. Namely, the latest 3ware cards have ancient
2017 Jan 21
0
CentOS 7 and Areca ARC-1883I SAS controller: JBOD or not to JBOD?
On 2017-01-20, Valeri Galtsev <galtsev at kicp.uchicago.edu> wrote: > > Hm, not certain what process you describe. Most of my controllers are > 3ware and LSI, I just pull failed drive (and I know the failed physical drive > number), put good in its place and rebuild starts right away. I know for sure that LSI's storcli utility supports an identify operation, which (if the
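A sketch of how storcli's identify operation is usually invoked; the controller, enclosure, and slot numbers below are placeholders, not values from this thread:

    # Show the controller with its enclosures and drive slots
    storcli64 /c0 show

    # Blink the locate LED on the drive in enclosure 252, slot 3,
    # so the correct disk can be pulled; stop the blinking afterwards
    storcli64 /c0/e252/s3 start locate
    storcli64 /c0/e252/s3 stop locate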
2017 Jan 21
1
CentOS 7 and Areca ARC-1883I SAS controller: JBOD or not to JBOD?
On Sat, January 21, 2017 12:16 am, Keith Keller wrote: > On 2017-01-20, Valeri Galtsev <galtsev at kicp.uchicago.edu> wrote: >> >> Hm, not certain what process you describe. Most of my controllers are >> 3ware and LSI, I just pull failed drive (and I know the failed physical >> drive >> number), put good in its place and rebuild starts right away. > > I
2017 Nov 03
3
low end file server with h/w RAID - recommendations
On Fri, November 3, 2017 3:36 am, hw wrote: > Valeri Galtsev wrote: >> If you have not Dell server hardware my choice of [hardware] RAID cards >> would be: >> >> Areca > > Areca is forbiddingly expensive. Yes, and it is worth every dollar it costs. All good RAID cards will be on the same price level. Those cheaper ones I will not let into our stables (don't
2010 Feb 16
3
SAS raid controllers
Is anyone running either the newish Adaptec 5805 or the new LSI (3ware) 9750 sas raid controllers in a production environment with Centos 5.3/5.4? The low price of these cards makes me suspicious, compared to the more expensive pre-merger 3ware cards and considerably more expensive Areca ARC-1680. I've been 'burned' by the low cost of Promise raid cards (just as this group pointed
2017 Jan 21
1
CentOS 7 and Areca ARC-1883I SAS controller: JBOD or not to JBOD?
On Fri, January 20, 2017 7:00 pm, Cameron Smith wrote: > Hi Valeri, > > > Before you pull a drive you should check to make sure that doing so > won't kill the whole array. Wow! What did I say to make you treat me as an ultimate idiot!? ;-) All my comments, at least in my own reading, were about things you need to do to make sure that when you hot-unplug a bad drive it is indeed failed
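One way to double-check that a drive really is the failed one before hot-unplugging it is to query SMART through the controller; a sketch, with the device names and drive numbers below being assumptions:

    # SMART data for a disk behind an LSI/MegaRAID controller, where 8 is
    # the device ID the controller reports for that physical drive
    smartctl -a -d megaraid,8 /dev/sda

    # The equivalent for a 3ware controller, using the port number
    smartctl -a -d 3ware,8 /dev/twa0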
2017 Jan 20
4
CentOS 7 and Areca ARC-1883I SAS controller: JBOD or not to JBOD?
On Fri, January 20, 2017 5:16 pm, Joseph L. Casale wrote: >> This is why before configuring and installing everything you may want to >> attach drives one at a time, and upon boot take a note of which physical >> drive number the controller has for that drive, and definitely label it >> so >> you will know which drive to pull when drive failure is reported. > >
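A simple way to build that label map while attaching drives one at a time is to record each drive's serial number as it shows up; a sketch assuming ordinary Linux device names:

    # Serial number of the newly attached drive, to write on the label
    # next to the physical drive number the controller reports
    smartctl -i /dev/sdb | grep -i serial

    # Or list all disks with their serials in one go
    lsblk -o NAME,SERIAL,SIZE,MODEL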
2017 Nov 03
0
low end file server with h/w RAID - recommendations
Valeri Galtsev wrote: > If you have not Dell server hardware my choice of [hardware] RAID cards > would be: > > Areca Areca is forbiddingly expensive. > LSI (or whoever owns that line these days - Intel was the last one, I > recollect) > > With LSI beware that they have really nasty command line client, and do > not have raid watch daemon with web interface like late
2017 Nov 02
4
low end file server with h/w RAID - recommendations
On Thu, November 2, 2017 11:18 am, hw wrote: > m.roth at 5-cent.us wrote: >> hw wrote: >>> Richard Zimmerman wrote: >>>> DO NOT buy the newer HPE DL20 gen9 or ML10 gen9 servers then >>>> (especially >>>> if using CentOS 6.x) >>> >>> What would you suggest as alternative, something from Dell? >> >> Yep, Dell's
2017 Nov 02
0
low end file server with h/w RAID - recommendations
On 2017-11-02, Valeri Galtsev <galtsev at kicp.uchicago.edu> wrote: > > If you have not Dell server hardware my choice of [hardware] RAID cards > would be: > > Areca > LSI (or whoever owns that line these days - Intel was the last one, I > recollect) > > With LSI beware that they have really nasty command line client, and do > not have raid watch daemon with web
2017 Nov 02
1
low end file server with h/w RAID - recommendations
On Thu, November 2, 2017 4:43 pm, Keith Keller wrote: > On 2017-11-02, Valeri Galtsev <galtsev at kicp.uchicago.edu> wrote: >> >> If you have not Dell server hardware my choice of [hardware] RAID cards >> would be: >> >> Areca >> LSI (or whoever owns that line these days - Intel was the last one, I >> recollect) >> >> With LSI beware
2017 Nov 04
3
low end file server with h/w RAID - recommendations
On Sat, November 4, 2017 4:32 am, hw wrote: > Valeri Galtsev wrote: >> >> On Fri, November 3, 2017 3:36 am, hw wrote: >>> Valeri Galtsev wrote: >>>> If you have not Dell server hardware my choice of [hardware] RAID >>>> cards >>>> would be: >>>> >>>> Areca >>> >>> Areca is forbiddingly expensive.
2017 Nov 04
0
low end file server with h/w RAID - recommendations
Valeri Galtsev wrote: > > On Fri, November 3, 2017 3:36 am, hw wrote: >> Valeri Galtsev wrote: >>> If you have not Dell server hardware my choice of [hardware] RAID cards >>> would be: >>> >>> Areca >> >> Areca is forbiddingly expensive. > > Yes, and it is worth every dollar it costs. All good RAID cards will be on > the same
2020 Jun 18
2
Amd es1000
On 6/18/20 3:47 PM, John Pierce wrote: > On Thu, Jun 18, 2020 at 11:04 AM paride desimone <parided at gmail.com> wrote: > >> The trouble is the radeon driver. I've already tried to install the GUI, >> but the system hung on starting the GUI. >> The es1000 is a shit gpu. >> >> > > those are just intended to provide a minimal VGA for initial
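A common workaround when the radeon driver hangs on a server chip like the ES1000 is to disable kernel modesetting or blacklist the module entirely; a sketch for CentOS 7, not taken from this thread:

    # Disable kernel modesetting for all installed kernels
    grubby --update-kernel=ALL --args="nomodeset"

    # Or keep the radeon module from loading at all
    echo "blacklist radeon" > /etc/modprobe.d/blacklist-radeon.conf
    dracut -f   # rebuild the initramfs so the blacklist applies early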
2007 Oct 10
0
Areca 1100 SATA Raid Controller in JBOD mode Hangs on zfs root creation.
Just as I create a ZFS pool and copy the root partition to it... the performance seems to be really good, then suddenly the system hangs all my sessions and displays on the console: Oct 10 00:23:28 sunrise arcmsr: WARNING: arcmsr0: dma map got 'no resources' Oct 10 00:23:28 sunrise arcmsr: WARNING: arcmsr0: dma allocate fail Oct 10 00:23:28 sunrise arcmsr: WARNING: arcmsr0:
2005 May 02
0
[ANNOUNCE] Areca SATA RAID drivers for CentOS 4.0
Hi, To follow up my CentOS 4.0/Areca SATA RAID driver disk I created a month or two ago, I have now created kernel-module-arcmsr RPMs containing just the kernel module to track the kernel updates. This means: a) No need to patch, rebuild and maintain customised kernels for Areca support b) Keep everything maintained with RPM I've taken the latest 1.20.00.07 kernel driver source as found in
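A sketch of how such a packaged module is typically verified after installation (package and module names as described above; everything else is assumed):

    # Confirm the packaged arcmsr module is installed
    rpm -qa | grep -i arcmsr

    # Check which arcmsr driver version the running kernel will load
    modinfo arcmsr | grep -E '^(filename|version)'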
2005 Mar 20
0
[ANNOUNCE] Areca SATA RAID driver disk for CentOS 4.0
Hi, To satisfy my own requirements, I've created a driver disk for the Areca SATA RAID controllers[0]. It currently contains the 1.20.00.06 driver built for the x86_64 SMP and non-SMP kernels, but should be fairly straightforward to add the driver built for 32-bit x86 kernels as well. You can find the driver disk and instructions here: http://www.bodgit-n-scarper.com/code.html#centos
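For context, a driver disk like this is loaded from the installer's boot prompt on CentOS 4; a sketch of the usual procedure (the page linked above has the exact instructions):

    # At the installer boot: prompt, ask anaconda to prompt for a driver disk
    linux dd

    # Anaconda then asks for the medium (floppy, USB, or image) holding the
    # Areca arcmsr driver before it probes storage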
2017 Sep 08
4
cyrus spool on btrfs?
On Fri, September 8, 2017 12:56 pm, hw wrote: > Valeri Galtsev wrote: >> >> On Fri, September 8, 2017 9:48 am, hw wrote: >>> m.roth at 5-cent.us wrote: >>>> hw wrote: >>>>> Mark Haney wrote: >>>> <snip> >>>>>> BTRFS isn't going to impact I/O any more significantly than, say, >>>>>> XFS.
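The claim that Btrfs is no worse than XFS for this kind of I/O is easy to sanity-check with a small benchmark run on each filesystem; a sketch using fio, with sizes and paths as placeholders:

    # Small random writes, loosely resembling a busy mail spool; run the
    # same job against a directory on Btrfs and on XFS and compare
    fio --name=spool-test --directory=/mnt/test --rw=randwrite \
        --bs=4k --size=1G --numjobs=4 --group_reporting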