
Displaying 20 results from an estimated 3000 matches similar to: "CentOS 7 and Areca ARC-1883I SAS controller: JBOD or not to JBOD?"

2017 Jan 20
0
CentOS 7 and Areca ARC-1883I SAS controller: JBOD or not to JBOD?
> The disks I am going to use are 6TB Seagate Enterprise ST6000NM0034 > 7200rpm SAS/12Gbit 128 MB Sorry to hear that; in my experience the Seagate brand has the shortest MTBF of any disk I have ever used... > If hardware RAID is preferred, the controller's cache could be updated > to 4GB and I wonder how much performance gain this would give me? Lots, especially with slower
2017 Jan 20
2
CentOS 7 and Areca ARC-1883I SAS controller: JBOD or not to JBOD?
On Fri, January 20, 2017 12:59 pm, Joseph L. Casale wrote: >> The disks I am going to use are 6TB Seagate Enterprise ST6000NM0034 >> 7200rpm SAS/12Gbit 128 MB > > Sorry to hear that; in my experience the Seagate brand has the shortest > MTBF > of any disk I have ever used... > >> If hardware RAID is preferred, the controller's cache could be updated >> to
2017 Jan 20
4
CentOS 7 and Areca ARC-1883I SAS controller: JBOD or not to JBOD?
On Fri, January 20, 2017 5:16 pm, Joseph L. Casale wrote: >> This is why, before configuring and installing everything, you may want to >> attach drives one at a time, and upon boot take note of which physical >> drive number the controller has for that drive, and definitely label it >> so >> you will know which drive to pull when a drive failure is reported. > >
2017 Jan 21
1
CentOS 7 and Areca ARC-1883I SAS controller: JBOD or not to JBOD?
On Sat, January 21, 2017 12:16 am, Keith Keller wrote: > On 2017-01-20, Valeri Galtsev <galtsev at kicp.uchicago.edu> wrote: >> >> Hm, not certain what process you describe. Most of my controllers are >> 3ware and LSI; I just pull the failed drive (and I know the failed physical >> drive >> number), put a good one in its place, and the rebuild starts right away. > > I
2017 Jan 21
0
CentOS 7 and Areca ARC-1883I SAS controller: JBOD or not to JBOD?
Hi Valeri, Before you pull a drive you should check to make sure that doing so won't kill the whole array. MegaCli can help you prevent a storage disaster and can give you more insight into your RAID: the status of the virtual disks and of the disks that make up each array. MegaCli will let you see the health and status of each drive. Does it have media errors, is it in predictive
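A minimal sketch of the per-drive health check the poster describes. It assumes the usual `MegaCli -PDList -aALL` output fields ("Slot Number", "Media Error Count"); the binary path `/opt/MegaRAID/MegaCli/MegaCli64` varies by install and is only an example.

```shell
# Summarize drives with media errors from MegaCli -PDList output (read on stdin).
check_pdlist() {
  awk '/Slot Number/ {slot=$NF}
       /Media Error Count/ {if ($NF > 0) print "slot " slot ": " $NF " media errors"}'
}

# Typical use (needs root and the controller present):
#   /opt/MegaRAID/MegaCli/MegaCli64 -PDList -aALL | check_pdlist
```

Drives in "predictive failure" state can be spotted the same way by matching on the "Predictive Failure Count" field.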
2017 Jan 21
1
CentOS 7 and Areca ARC-1883I SAS controller: JBOD or not to JBOD?
On Fri, January 20, 2017 7:00 pm, Cameron Smith wrote: > Hi Valeri, > > > Before you pull a drive you should check to make sure that doing so > won't kill the whole array. Wow! What did I say to make you treat me as an ultimate idiot!? ;-) All my comments, at least in my own reading, were about things you need to do to make sure that when you hot-unplug a bad drive it is indeed failed
2017 Jan 20
0
CentOS 7 and Areca ARC-1883I SAS controller: JBOD or not to JBOD?
> This is why, before configuring and installing everything, you may want to > attach drives one at a time, and upon boot take note of which physical > drive number the controller has for that drive, and definitely label it so > you will know which drive to pull when a drive failure is reported. Sorry Valeri, that only works if you're the only guy in the org. In reality, you cannot
2017 Jan 21
0
CentOS 7 and Areca ARC-1883I SAS controller: JBOD or not to JBOD?
On 2017-01-20, Valeri Galtsev <galtsev at kicp.uchicago.edu> wrote: > > Hm, not certain what process you describe. Most of my controllers are > 3ware and LSI; I just pull the failed drive (and I know the failed physical drive > number), put a good one in its place, and the rebuild starts right away. I know for sure that LSI's storcli utility supports an identify operation, which (if the
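The storcli identify operation mentioned above blinks the fault LED on a drive tray so you can verify which tray to pull. A sketch of the flow, with made-up controller/enclosure/slot IDs (`/c0/e252/s4`); list the real ones with `storcli show all` first.

```shell
# Build the locate command for a specific drive; the IDs are example values.
CTL=/c0; ENC=e252; SLOT=s4
CMD="storcli $CTL/$ENC/$SLOT start locate"
echo "$CMD"    # review, then run it to start the LED blinking
# After physically confirming the tray:
#   storcli $CTL/$ENC/$SLOT stop locate
```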
2012 Jun 17
26
Recommendation for home NAS external JBOD
Hi, my oi151-based home NAS is approaching a frightening "drive space" level. Right now the data volume is a 4*1TB Raid-Z1: 3 1/2" local disks individually connected to an 8-port LSI 6Gbit controller. So I can either exchange the disks one by one with autoexpand, use 2-4 TB disks and be happy. This was my original approach. However I am totally unclear about the 512-byte vs 4K sector issue.
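The swap-one-by-one path the poster mentions relies on the pool's autoexpand property; a rough sketch of the commands and the resulting capacity (the pool name `data` and the disk counts are taken from the post, the arithmetic is just raidz1 parity overhead):

```shell
# Expansion path (run against the real pool; shown here as comments):
#   zpool set autoexpand=on data
#   zpool replace data <old 1TB disk> <new 4TB disk>   # wait for resilver, repeat x4
# Once all four disks are swapped, raidz1 still spends one disk on parity:
disks=4; new_tb=4
usable=$(( (disks - 1) * new_tb ))
echo "raidz1 usable after expansion: ~${usable} TB"
```

The pool only grows after the last disk in the vdev has been replaced and resilvered.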
2007 Oct 10
0
Areca 1100 SATA Raid Controller in JBOD mode Hangs on zfs root creation.
Just as I create a ZFS pool and copy the root partition to it... the performance seems to be really good, then suddenly the system hangs all my sessions and displays on the console: Oct 10 00:23:28 sunrise arcmsr: WARNING: arcmsr0: dma map got 'no resources' Oct 10 00:23:28 sunrise arcmsr: WARNING: arcmsr0: dma allocate fail Oct 10 00:23:28 sunrise arcmsr: WARNING: arcmsr0:
2007 Oct 10
1
Areca 1100 SATA Raid Controller in JBOD mode Hangs on zfs root creation
Updated to latest firmware 1.43-70417 ... same problem.. WARNING: arcmsr0: dma map got 'no resources' WARNING: arcmsr0: dma allocate fail WARNING: arcmsr0: dma allocate fail free scsi hba pkt WARNING: arcmsr0: dma map got 'no resources' WARNING: arcmsr0: dma allocate fail WARNING: a The only positive thing is that every time I try to copy my UFS
2008 Jul 25
18
zfs, raidz, spare and jbod
Hi. I installed Solaris Express Developer Edition (b79) on a Supermicro quad-core Harpertown E5405 with 8 GB RAM and two internal SATA drives. I installed Solaris onto one of the internal drives. I added an Areca ARC-1680 SAS controller and configured it in JBOD mode. I attached an external SAS cabinet with 16 SAS drives of 1 TB (931 binary GB). I created a raidz2 pool with ten disks and one spare.
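A sketch of the pool layout the poster describes (ten-disk raidz2 plus one spare). The `c2tXd0` device names are hypothetical Solaris-style names; substitute the output of `format` on the actual box. The arithmetic just accounts for raidz2's two parity disks.

```shell
# Pool creation (shown as a comment; needs the real device names):
#   zpool create tank raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 \
#                            c2t5d0 c2t6d0 c2t7d0 c2t8d0 c2t9d0 \
#                            spare c2t10d0
# raidz2 spends two disks' worth of space on parity:
disks=10; gib=931
usable=$(( (disks - 2) * gib ))
echo "usable: ${usable} GiB"
```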
2012 Jul 02
14
HP Proliant DL360 G7
Hello, Has anyone out there been able to qualify the Proliant DL360 G7 for your Solaris/OI/Nexenta environments? Any pros/cons/gotchas (vs. previous generation HP servers) would be greatly appreciated. Thanks in advance! -Anh
2009 Nov 17
13
ZFS storage server hardware
Hi, I know (from the zfs-discuss archives and other places [1,2,3,4]) that a lot of people are looking to use zfs as a storage server in the 10-100TB range. I'm in the same boat, but I've found that hardware choice is the biggest issue. I'm struggling to find something which will work nicely under solaris and which meets my expectations in terms of hardware.
2007 Apr 19
14
Experience with Promise Tech. arrays/JBODs?
Greetings, In looking for inexpensive JBOD and/or RAID solutions to use with ZFS, I've run across the recent "VTrak" SAS/SATA systems from Promise Technologies, specifically their E-class and J-class series: E310f FC-connected RAID: http://www.promise.com/product/product_detail_eng.asp?product_id=175 E310s SAS-connected RAID:
2011 May 30
13
JBOD recommendation for ZFS usage
Dear all Sorry if it's kind of off-topic for the list but after talking to lots of vendors I'm running out of ideas... We are looking for JBOD systems which (1) hold 20+ 3.5" SATA drives (2) are rack mountable (3) have all the nice hot-swap stuff (4) allow 2 hosts to connect via SAS (4+ lines per host) and see all available drives as disks, no RAID volume. In a
2007 Jan 25
4
high density SAS
Well, Solaris SAS isn't there yet, but anyway, just found some interesting high-density SAS/SATA enclosures. <http://xtore.com/product_list.asp?cat=JBOD> The XJ 2000 is like the x4500 in that it holds 48 drives; however, with the XJ 2000, 2 drives are on each carrier and you can get to them from the front. I don't like xtore in general but the 24 bay (2.5" SAS) and 48
2015 Aug 30
2
[OFFTOPIC] integrated LSI 3008 :: number of hdd support
On 08/30/2015 12:02 PM, Mike Mohr wrote: > In my experience the mass market HBAs and RAID cards typically do support > only 8 or 16 drives. For the internal variety in a standard rack-mount > server you'll usually see either 2 or 4 iPass cables (each of which supports > 4 drives) connected to the backplane. The marketing material you've > referenced has a white lie in it:
2018 Apr 09
2
JBOD / ZFS / Flash backed
Your question is difficult to parse. Typically RAID and JBOD are mutually exclusive. By "flash-backed", do you mean a battery backup unit (BBU) on your RAID controller? On Mon, Apr 9, 2018 at 8:49 AM, Vincent Royer <vincent at epicenergy.ca> wrote: > >> Is a flash-backed Raid required for JBOD, and should it be 1gb, 2, or 4gb >> flash? >> > > Is anyone
2018 Apr 09
0
JBOD / ZFS / Flash backed
Yes, the flash-backed RAID cards use a super-capacitor to back up the flash cache. You have a choice of flash module sizes to include on the card. The card supports RAID modes as well as JBOD. I do not know if Gluster can make use of battery-backed flash cache when the disks are presented by the RAID card in JBOD. The hardware vendor asked "Do you know if Gluster makes use of