similar to: RAIDs and JBOD?

Displaying 20 results from an estimated 12000 matches similar to: "RAIDs and JBOD?"

2006 Sep 28
13
jbod questions
Folks, we are in the process of purchasing new SANs that our mail server runs on (JES3). We have moved our mailstores to ZFS and continue to have checksum errors -- they are corrected, but this improves on the UFS inode errors that require a system shutdown and fsck. So, I am recommending that we buy small JBODs, do raidz2 and let ZFS handle the raiding of these boxes. As we need more
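The layout recommended above — whole JBOD disks under a raidz2 pool so ZFS handles redundancy itself — can be sketched as follows. Pool and device names are illustrative, not from the thread:

```shell
# Build a double-parity raidz2 pool from six whole JBOD disks.
# ZFS detects and self-heals checksum errors, so no hardware RAID
# layer is needed underneath. Device names are examples only.
zpool create mailpool raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0

# Give the mailstore its own filesystem (checksumming is on by default).
zfs create mailpool/mailstore

# Per-device checksum error counts show up here after a scrub.
zpool status mailpool
```

A raidz2 vdev survives any two disk failures, which is why it is attractive for inexpensive JBOD enclosures.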
2012 Jun 14
4
RAID options for Gluster
I think this discussion probably came up here already, but I couldn't find much in the archives. Would you be able to comment on or correct whatever might look wrong? What options do people think are more adequate to use with Gluster in terms of RAID underneath, with a good balance between cost, usable space and performance? I have thought about two main options with their pros and cons. No RAID (individual
2018 Apr 09
2
JBOD / ZFS / Flash backed
Your question is difficult to parse. Typically RAID and JBOD are mutually exclusive. By "flash-backed", do you mean a battery backup unit (BBU) on your RAID controller? On Mon, Apr 9, 2018 at 8:49 AM, Vincent Royer <vincent at epicenergy.ca> wrote: > >> Is a flash-backed Raid required for JBOD, and should it be 1gb, 2, or 4gb >> flash? >> > > Is anyone
2017 Jan 20
6
CentOS 7 and Areca ARC-1883I SAS controller: JBOD or not to JBOD?
Hi, Does anyone have experience with the ARC-1883I SAS controller and CentOS 7? I am planning a RAID1 setup and I am wondering if I should use the controller's RAID functionality, which has a 2GB cache, or should I go with JBOD + Linux software RAID? The disks I am going to use are 6TB Seagate Enterprise ST6000NM0034 7200rpm SAS/12Gbit 128 MB. If hardware RAID is preferred, the
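The software-RAID alternative asked about above is a standard mdadm mirror; a minimal sketch, assuming the controller exposes the disks in JBOD mode as /dev/sda and /dev/sdb (device and array names are assumptions):

```shell
# Create a two-disk RAID1 mirror from controller-exposed JBOD disks.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb

# Watch the initial resync progress.
cat /proc/mdstat

# Persist the array definition so it assembles at boot.
mdadm --detail --scan >> /etc/mdadm.conf
```

The trade-off discussed in the thread is that software RAID has no battery/flash-backed write cache, but it keeps the array portable across controllers.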
2018 Apr 09
0
JBOD / ZFS / Flash backed
Yes, the flash-backed RAID cards use a super-capacitor to back up the flash cache. You have a choice of flash module sizes to include on the card. The card supports RAID modes as well as JBOD. I do not know if Gluster can make use of battery-backed flash cache when the disks are presented by the RAID card in JBOD. The hardware vendor asked "Do you know if Gluster makes use of
2018 Apr 09
0
JBOD / ZFS / Flash backed
> > > Is a flash-backed Raid required for JBOD, and should it be 1gb, 2, or 4gb > flash? > Is anyone able to clarify this requirement for me?
2018 Apr 04
5
JBOD / ZFS / Flash backed
Hi, Trying to make the most of a limited budget. I need fast I/O for operations under 4MB, and high availability of VMs in an Ovirt cluster. I have 3 nodes running Ovirt and want to rebuild them with hardware for converged storage. Should I use 2 960GB SSDs in RAID1 in each node, replica 3? Or can I get away with 1 larger SSD per node, JBOD, replica 3? Is a flash-backed Raid required for
2012 Jul 18
1
RAID card selection - JBOD mode / Linux RAID
I don't think this is off topic, since I want to use JBOD mode so that Linux can do the RAID. I'm hoping to run this under CentOS 5 and Ubuntu 12.04 on a Sunfire x2250. It's hard to get answers I can trust out of vendors :-) I have a Sun RAID card which I am pretty sure is an LSI OEM. It is a 3Gb/s SAS1 card with 2 external connectors like the one on the right here :
2010 Mar 08
11
ZFS for my home RAID? Or Linux Software RAID?
Hello All, I built a new storage server to back up my data, keep archives of client files, etc. I recently had a near loss of important items. So I built a 16 SATA bay enclosure (16 hot-swappable + 3 internal), 2 x 3Ware 8-port RAID cards, 8gb RAM, dual AMD Opteron. I have a 1tb boot drive and I put in 8 x 1.5tb Seagate 7200 drives. In the future I want to fill the other 8 SATA bays
2010 Mar 03
6
Question about multiple RAIDZ vdevs using slices on the same disk
Hi all :) I've been wanting to make the switch from XFS over RAID5 to ZFS/RAIDZ2 for some time now, ever since I read about ZFS the first time. Absolutely amazing beast! I've built my own little hobby server at home and have a boatload of disks in different sizes that I've been using together to build a RAID5 array on Linux using mdadm in two layers; first layer is
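The multiple-raidz-vdev layout the question asks about looks like this in zpool syntax (slice names are illustrative; note that placing members of different vdevs on slices of the same physical disk means one disk failure degrades every vdev at once):

```shell
# A pool with two raidz vdevs; ZFS stripes writes across both vdevs.
# Each vdev here is built from slices rather than whole disks.
zpool create tank \
    raidz c0t0d0s0 c0t1d0s0 c0t2d0s0 \
    raidz c0t0d0s1 c0t1d0s1 c0t2d0s1

# Verify the vdev layout and health.
zpool status tank
```

This is why whole-disk vdevs are normally preferred: slicing trades the redundancy guarantee for flexibility with mismatched disk sizes.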
2006 Jul 17
28
Big JBOD: what would you do?
ZFS fans, I'm preparing some analyses on RAS for large JBOD systems such as the Sun Fire X4500 (aka Thumper). Since there are zillions of possible permutations, I need to limit the analyses to some common or desirable scenarios. Naturally, I'd like your opinions. I've already got a few scenarios in analysis, and I don't want to spoil the brainstorming, so
2006 Oct 16
11
Configuring a 3510 for ZFS
Hi folks, A colleague and I are currently involved in a prototyping exercise to evaluate ZFS against our current filesystem. We are looking at the best way to arrange the disks in a 3510 storage array. We have been testing with the 12 disks on the 3510 exported as "nraid" logical devices. We then configured a single ZFS pool on top of this, using two raid-z arrays. We are getting
2018 Apr 09
2
JBOD / ZFS / Flash backed
On 09/04/18 16:49, Vincent Royer wrote: > > > Is a flash-backed Raid required for JBOD, and should it be 1gb, 2, > or 4gb flash? > > RAID and JBOD are completely different things. JBODs are just that, bunches of disks, and they don't have any cache above them in hardware. If you're going to use ZFS under Gluster, look at the ZFS docs first. Short answer is no.
2007 Sep 26
9
Rule of Thumb for zfs server sizing with (192) 500 GB SATA disks?
I'm trying to get maybe 200 MB/sec over NFS for large movie files (need large capacity to hold all of them). Are there any rules of thumb on how much RAM is needed to handle this (probably RAIDZ for all the disks) with ZFS, and how large a server should be used? The throughput required is not so large, so I am thinking an X4100 M2 or X4150 should be plenty.
2018 Apr 04
0
JBOD / ZFS / Flash backed
Based on your message, it sounds like your total usable capacity requirement is around <1TB. With a modern SSD, you'll get something like 40k theoretical IOPS for a 4k I/O size. You don't mention budget. What is your budget? You mention "4MB operations"; where is that requirement coming from? On Wed, Apr 4, 2018 at 12:41 PM, Vincent Royer <vincent at epicenergy.ca>
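As a sanity check on the figures quoted above, streaming throughput is roughly IOPS times I/O size. A small sketch (the numbers are from the reply; the helper function is mine):

```python
def throughput_mb_s(iops: int, io_size_bytes: int) -> float:
    """Approximate throughput (MB/s) implied by an IOPS figure
    at a fixed I/O size: IOPS * bytes per I/O, scaled to MB."""
    return iops * io_size_bytes / 1_000_000

# 40k IOPS at 4 KiB per I/O, as in the reply above.
print(throughput_mb_s(40_000, 4096))  # ~163.84 MB/s
```

So a single modern SSD at the quoted IOPS already exceeds the small-I/O bandwidth most 3-node replica setups will demand; the network usually becomes the bottleneck first.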
2006 Jul 28
20
3510 JBOD ZFS vs 3510 HW RAID
Hi there Is it fair to compare the 2 solutions using Solaris 10 U2 and a commercial database (SAP SD scenario). The cache on the HW raid helps, and the CPU load is less... but the solution costs more and you _might_ not need the performance of the HW RAID. Has anybody with access to these units done a benchmark comparing the performance (and with the pricelist in hand) came to a conclusion.
2007 May 12
3
zfs and jbod-storage
Hi. I'm managing an HDS storage system which is slightly larger than 100 TB, and we have used approx. 3/4 of it. We use vxfs. The storage system is attached to a Solaris 9 on SPARC via a fibre switch. The storage is shared via NFS to our webservers. If I were to replace vxfs with zfs I could utilize raidz(2) instead of the built-in hardware raid controller. Are there any jbod-only storage
2009 Apr 27
23
Raidz vdev size... again.
Hi, I'm new to the list so please bear with me. This isn't an OpenSolaris-related problem, but I hope it's still the right list to post to. I'm on the way to moving a backup server to zfs-based storage, but I don't want to spend too many drives on parity (the 16 drives are attached to a 3ware raid controller so I could also just use raid6 there). I
2007 Apr 19
14
Experience with Promise Tech. arrays/JBODs?
Greetings, In looking for inexpensive JBOD and/or RAID solutions to use with ZFS, I've run across the recent "VTrak" SAS/SATA systems from Promise Technologies, specifically their E-class and J-class series: E310f FC-connected RAID: http://www.promise.com/product/product_detail_eng.asp?product_id=175 E310s SAS-connected RAID:
2011 May 30
13
JBOD recommendation for ZFS usage
Dear all, Sorry if it's kind of off-topic for the list, but after talking to lots of vendors I'm running out of ideas... We are looking for JBOD systems which (1) hold 20+ 3.5" SATA drives, (2) are rack mountable, (3) have all the nice hot-swap stuff, (4) allow 2 hosts to connect via SAS (4+ lanes per host) and see all available drives as disks, no RAID volume. In a