Displaying 20 results from an estimated 287 matches for "jbods".
2020 Sep 10
3
Btrfs RAID-10 performance
I cannot verify it, but I think that even a JBOD is propagated as a
virtual device. If you create a JBOD from 3 different disks, the
low-level parameters may differ.
And probably old firmware is the reason we used RAID-0 two or three
years ago.
Thank you for the ideas.
Kind regards
Milo
On 10.09.2020 at 16:15, Scott Q. wrote:
> Actually there is, filesystems like ZFS/BTRFS prefer to see
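Whether a controller's JBOD mode is a true pass-through can be checked from the OS side; a minimal sketch, assuming an LSI/MegaRAID-family card (device names and the megaraid index are hypothetical):

    # What the kernel sees for each block device
    lsblk -o NAME,MODEL,SERIAL,SIZE

    # Query the drive through the controller; if JBOD is a clean
    # pass-through, the reported identity matches the physical disk
    smartctl -i -d megaraid,0 /dev/sda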
2018 Apr 09
2
JBOD / ZFS / Flash backed
Your question is difficult to parse. Typically RAID and JBOD are mutually
exclusive. By "flash-backed", do you mean a battery backup unit (BBU) on
your RAID controller?
On Mon, Apr 9, 2018 at 8:49 AM, Vincent Royer <vincent at epicenergy.ca> wrote:
>
>> Is a flash-backed Raid required for JBOD, and should it be 1gb, 2, or 4gb
>> flash?
>>
>
> Is anyone
2018 Apr 09
0
JBOD / ZFS / Flash backed
Yes, the flash-backed RAID cards use a supercapacitor to back up the flash
cache. You have a choice of flash module sizes to include on the card.
The card supports RAID modes as well as JBOD.
I do not know if Gluster can make use of a battery-backed, flash-based cache
when the disks are presented by the RAID card in JBOD. The hardware
vendor asked "Do you know if Gluster makes use of
2020 Sep 10
2
Btrfs RAID-10 performance
Some controllers have a direct "pass through to OS" option for a drive;
that's what I meant. I can't recall why we chose RAID-0 instead of
JBOD, there was some reason, but I hope there is no difference with a
single drive.
Thank you
Milo
On 09.09.2020 at 15:51, Scott Q. wrote:
> The 9361-8i does support passthrough ( JBOD mode ). Make sure you have
> the latest
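On MegaRAID-family cards like the 9361-8i, pass-through is typically toggled with storcli; a minimal sketch (controller, enclosure, and slot numbers are hypothetical):

    # Confirm the controller supports JBOD, then enable it
    storcli /c0 show
    storcli /c0 set jbod=on

    # Expose an individual drive to the OS as JBOD
    storcli /c0/e252/s0 set jbod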
2018 Apr 04
5
JBOD / ZFS / Flash backed
Hi,
Trying to make the most of a limited budget. I need fast I/O for
operations under 4MB, and high availability of VMs in an oVirt cluster.
I have 3 nodes running oVirt and want to rebuild them with hardware for
converged storage.
Should I use 2 960GB SSDs in RAID1 in each node, replica 3?
Or can I get away with 1 larger SSD per node, JBOD, replica 3?
Is a flash-backed Raid required for
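The JBOD/replica-3 layout being asked about maps to one brick per node; a minimal sketch of the volume creation, with hostnames and brick paths hypothetical:

    # One SSD-backed brick per node, three-way replication
    gluster volume create vmstore replica 3 \
        node1:/bricks/ssd/vmstore \
        node2:/bricks/ssd/vmstore \
        node3:/bricks/ssd/vmstore
    gluster volume start vmstore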
2017 Jan 20
6
CentOS 7 and Areca ARC-1883I SAS controller: JBOD or not to JBOD?
Hi,
Does anyone have experiences about ARC-1883I SAS controller with CentOS7?
I am planning to have RAID1 setup and I am wondering if I should use
the controller's RAID functionality which has 2GB cache or should I go
with JBOD + Linux software RAID?
The disks I am going to use are 6TB Seagate Enterprise ST6000NM0034
7200rpm SAS/12Gbit 128 MB
If hardware RAID is preferred, the
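The JBOD + Linux software RAID alternative mentioned above would look roughly like this with mdadm (device names hypothetical):

    # Mirror two pass-through disks in software
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb

    # Watch the initial resync progress
    cat /proc/mdstat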
2012 Jul 18
1
RAID card selection - JBOD mode / Linux RAID
I don't think this is off topic since I want to use JBOD mode so that
Linux can do the RAID. I hope to run this on CentOS 5
and Ubuntu 12.04 on a Sunfire x2250.
Hard to get answers I can trust out of vendors :-)
I have a Sun RAID card which I am pretty sure is LSI OEM. It is a
3G/s SAS1 with 2 external connectors like the one on the right here :
2018 Apr 09
0
JBOD / ZFS / Flash backed
>
>
> Is a flash-backed Raid required for JBOD, and should it be 1gb, 2, or 4gb
> flash?
>
Is anyone able to clarify this requirement for me?
2017 Jan 20
0
CentOS 7 and Areca ARC-1883I SAS controller: JBOD or not to JBOD?
> The disks I am going to use are 6TB Seagate Enterprise ST6000NM0034
> 7200rpm SAS/12Gbit 128 MB
Sorry to hear that; in my experience the Seagate brand has the shortest MTBF
of any disk I have ever used...
> If hardware RAID is preferred, the controller's cache could be updated
> to 4GB and I wonder how much performance gain this would give me?
Lots, especially with slower
2017 Jan 20
0
CentOS 7 and Areca ARC-1883I SAS controller: JBOD or not to JBOD?
> This is why before configuring and installing everything you may want to
> attach drives one at a time, and upon boot take a note which physical
> drive number the controller has for that drive, and definitely label it so
> you will know which drive to pull when a drive failure is reported.
Sorry Valeri, that only works if you're the only guy in the org.
In reality, you cannot
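The label-as-you-go advice above can be supplemented from the OS side by recording serial numbers up front; a minimal sketch (device names hypothetical):

    # Match each Linux device to the serial printed on the drive sticker
    for d in /dev/sd?; do
        printf '%s: ' "$d"
        smartctl -i "$d" | grep -i serial
    done

    # by-path names encode which controller port a drive hangs off
    ls -l /dev/disk/by-path/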
2017 Jan 21
0
CentOS 7 and Areca ARC-1883I SAS controller: JBOD or not to JBOD?
On 2017-01-20, Valeri Galtsev <galtsev at kicp.uchicago.edu> wrote:
>
> Hm, not certain what process you describe. Most of my controllers are
> 3ware and LSI, I just pull the failed drive (and I know the failed
> physical drive number), put a good one in its place, and the rebuild
> starts right away.
I know for sure that LSI's storcli utility supports an identify
operation, which (if the
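The identify operation referred to above blinks the drive's locate LED so the right disk gets pulled; a minimal storcli sketch (enclosure and slot numbers are hypothetical):

    # Blink the LED on the drive in enclosure 252, slot 3
    storcli /c0/e252/s3 start locate

    # Turn it off again once the drive has been swapped
    storcli /c0/e252/s3 stop locate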
2017 Jan 21
1
CentOS 7 and Areca ARC-1883I SAS controller: JBOD or not to JBOD?
On Sat, January 21, 2017 12:16 am, Keith Keller wrote:
> On 2017-01-20, Valeri Galtsev <galtsev at kicp.uchicago.edu> wrote:
>>
>> Hm, not certain what process you describe. Most of my controllers are
>> 3ware and LSI, I just pull the failed drive (and I know the failed
>> physical drive number), put a good one in its place, and the rebuild
>> starts right away.
>
> I
2008 Nov 17
14
Storage 7000
I'm not sure if this is the right place for the question or not, but I'll
throw it out there anyway. Does anyone know, if you create your pool(s)
with a system running fishworks, can that pool later be imported by a
standard solaris system? IE: If for some reason the head running fishworks
were to go away, could I attach the JBOD/disks to a system running
snv/mainline
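The scenario asked about is ordinary ZFS pool migration; a minimal sketch, assuming the pool is named tank:

    # On the old head, if it is still running: release the pool cleanly
    zpool export tank

    # On the replacement host with the JBOD attached
    zpool import           # list pools found on the attached disks
    zpool import tank      # use -f if the pool was never exported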
2009 Jan 05
3
Don't Shout at your JBODs
http://www.youtube.com/watch?v=tDacjrSCeq4
I wonder if the inverse is true. If I whisper soothing
words of encouragement at my JBODs, will I get
more IOPS with reduced latency?
:^)
2018 Apr 04
0
JBOD / ZFS / Flash backed
Based on your message, it sounds like your total usable capacity
requirement is under 1TB. With a modern SSD, you'll get something like
40k theoretical IOPS for a 4k I/O size.
You don't mention a budget. What is your budget? You mention "4MB
operations"; where is that requirement coming from?
On Wed, Apr 4, 2018 at 12:41 PM, Vincent Royer <vincent at epicenergy.ca>
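Claims like "40k IOPS at 4k" are easy to verify on the actual hardware; a minimal fio sketch (file path and sizes are hypothetical):

    # Measure 4k random-read IOPS against a 4 GiB test file
    fio --name=rand4k --filename=/mnt/test/fio.dat --size=4G \
        --rw=randread --bs=4k --direct=1 --iodepth=32 \
        --ioengine=libaio --runtime=60 --time_based --group_reporting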
2007 Oct 20
4
Distributed ZFS
...lustre integration, which will take some time, will provide parallel
file system abilities. I am unsure if lustre at the moment supports
redundancy between storage nodes (it was on the road map).
But ZFS at the moment supports Sun Cluster 3.2 (no parallel access is
supported) and the upcoming SAS JBODs will let you implement a cheaper ZFS
cluster easily. (2 x entry-level Sun server + 4 x 48-slot JBOD)
Mertol Ozyoney
Storage Practice - Sales Manager
Sun Microsystems, TR
Istanbul TR
Phone +902123352200
Mobile +9053393107...
2007 May 12
3
zfs and jbod-storage
Hi.
I'm managing an HDS storage system which is slightly larger than 100 TB,
of which we have used approx. 3/4. We use vxfs. The storage system is
attached to a Solaris 9 on SPARC host via a fiber switch. The storage is
shared via NFS to our webservers.
If I were to replace vxfs with zfs, I could utilize raidz(2) instead of
the built-in hardware RAID controller.
Are there any jbod-only storage
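Replacing the hardware controller with raidz2, as contemplated above, is a single pool-creation step; a minimal sketch (the by-id names are hypothetical placeholders):

    # A 6-disk raidz2 vdev survives any two simultaneous drive failures
    zpool create tank raidz2 \
        /dev/disk/by-id/scsi-DISK1 /dev/disk/by-id/scsi-DISK2 \
        /dev/disk/by-id/scsi-DISK3 /dev/disk/by-id/scsi-DISK4 \
        /dev/disk/by-id/scsi-DISK5 /dev/disk/by-id/scsi-DISK6

    zpool status tank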
2018 Apr 09
2
JBOD / ZFS / Flash backed
On 09/04/18 16:49, Vincent Royer wrote:
>
>
> Is a flash-backed Raid required for JBOD, and should it be 1gb, 2,
> or 4gb flash?
>
>
RAID and JBOD are completely different things. JBODs are just that,
bunches of disks, and they don't have any cache above them in hardware.
If you're going to use ZFS under Gluster, look at the ZFS docs first.
Short answer is no. If Gluster passes sync writes down to the lower
level FS as sync, and you decide to use a ZFS SLOG device, usu...
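The SLOG device mentioned above is attached as a separate log vdev; a minimal sketch (pool name and device path hypothetical):

    # A fast SSD with power-loss protection absorbs the synchronous
    # writes before they are flushed to the main pool disks
    zpool add tank log /dev/disk/by-id/nvme-SSD1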
2017 Jan 21
0
CentOS 7 and Areca ARC-1883I SAS controller: JBOD or not to JBOD?
Hi Valeri,
Before you pull a drive you should check to make sure that doing so
won't kill the whole array.
MegaCli can help you prevent a storage disaster and can let you have more
insight into your RAID and the status of the virtual disks and the disks
than make up each array.
MegaCli will let you see the health and status of each drive. Does it have
media errors, is it in predictive
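The MegaCli checks described above boil down to a couple of commands; a minimal sketch (adapter and output filtering are illustrative):

    # Per-drive state, media-error and predictive-failure counters
    MegaCli64 -PDList -aALL | egrep -i 'slot|firmware state|media error|predictive'

    # Virtual disk health; degraded arrays show up here
    MegaCli64 -LDInfo -Lall -aALL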
2007 Jan 25
4
high density SAS
...enclosures.
<http://xtore.com/product_list.asp?cat=JBOD>
The XJ 2000 is like the x4500 in that it holds 48 drives; however, with
the XJ 2000, 2 drives are on each carrier and you can get to them from
the front.
I don't like xtore in general but the 24 bay (2.5" SAS) and 48 bay
JBODs are interesting. How badly can you mess up a JBOD?
-frank