Displaying 20 results from an estimated 10000 matches similar to: "BIOS RAID0 and differences between disks"
2020 Nov 12
1
BIOS RAID0 and differences between disks
On 11/04/2020 10:21 PM, John Pierce wrote:
> is it RAID 0 (striped) or RAID 1 (mirrored)?
>
> if you wrote on half of a raid0 stripe set, you basically trashed it.
> blocks are striped across both drives, so like 16k on the first disk, then
> 16k on the 2nd then 16k back on the first, repeat (replace 16k with
> whatever your raid stripe size is).
>
> if it's a RAID 1
2020 Nov 05
1
BIOS RAID0 and differences between disks
> On Nov 4, 2020, at 9:21 PM, John Pierce <jhn.pierce at gmail.com> wrote:
>
> is it RAID 0 (striped) or RAID 1 (mirrored)?
>
> if you wrote on half of a raid0 stripe set, you basically trashed it.
> blocks are striped across both drives, so like 16k on the first disk, then
> 16k on the 2nd then 16k back on the first, repeat (replace 16k with
> whatever your raid
2020 Nov 05
0
BIOS RAID0 and differences between disks
is it RAID 0 (striped) or RAID 1 (mirrored)?
if you wrote on half of a raid0 stripe set, you basically trashed it.
blocks are striped across both drives, so like 16k on the first disk, then
16k on the 2nd then 16k back on the first, repeat (replace 16k with
whatever your raid stripe size is).
if it's a RAID 1 mirror, then either disk by itself has the complete file
system on it, so you should be
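For illustration only (the thread is about a BIOS/fakeRAID set, and the device names and 16k chunk size below are placeholders), this is a minimal mdadm sketch of the two layouts being contrasted above:
# RAID0: blocks striped across both drives in 16 KiB chunks, as described above
mdadm --create /dev/md0 --level=0 --raid-devices=2 --chunk=16 /dev/sdX1 /dev/sdY1
# RAID1: each drive carries a complete copy of the filesystem on its own
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdX1 /dev/sdY1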
2009 Sep 08
4
Can ZFS simply concatenate LUNs (eg no RAID0)?
Hi,
I have a disk array that is providing striped LUNs to my Solaris box. Hence I'd like to simply concat those LUNs without adding another layer of striping.
Is this possible with ZFS?
As far as I understood, if I use
zpool create myPool lun-1 lun-2 ... lun-n
I will get a RAID0 striping where each data block is split across all "n" LUNs.
If that's
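For reference, a quick sketch of what the command in the excerpt produces (the Solaris-style LUN names below are placeholders; the comments reflect standard ZFS behaviour, where each LUN becomes its own top-level vdev and writes are striped dynamically across them rather than concatenated end to end):
zpool create myPool c1t0d0 c1t1d0 c1t2d0   # each LUN becomes a top-level vdev
zpool status myPool                        # lists the vdevs side by side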
2013 Oct 04
1
btrfs raid0
How can I verify the read speed of a btrfs raid0 pair in Arch Linux?
I assume raid0 means striped activity in parallel, at least
similar to raid0 in mdadm.
How can I measure the btrfs read speed, since it is copy-on-write, which
is not the norm in mdadm raid0?
Perhaps I cannot use the same approach in btrfs to determine the
performance.
Secondly, I see a methodology for raid10 using
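One rough way to get a sequential read figure on a mounted btrfs raid0, assuming a large test file already exists (the path and sizes below are placeholders):
sync && echo 3 > /proc/sys/vm/drop_caches                        # as root: drop the page cache first
dd if=/mnt/btrfs-raid0/testfile of=/dev/null bs=1M count=4096    # read 4 GiB back and note the rate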
2013 Aug 19
1
LVM RAID0 and SSD discards/TRIM
I'm trying to work out the kinks of a proprietary, old, and clunky
application that runs on CentOS. One of its main problems is that it
writes image sequences extremely non-linearly and in several passes,
using many CPUs, so the sequences get very fragmented.
The obvious solution to this seems to be to use SSDs for its output, and
some scripts that will pick up and copy out the sequences
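A minimal sketch of a striped LV across two SSDs with discards enabled at mount time (VG/LV names, devices, sizes and stripe size are placeholders; whether discards actually pass through the stripe target depends on the kernel in use):
lvcreate -i 2 -I 64 -L 100G -n scratch vg_ssd /dev/sda2 /dev/sdb2   # 2-way stripe, 64 KiB stripe size
mkfs.xfs /dev/vg_ssd/scratch
mount -o discard /dev/vg_ssd/scratch /mnt/scratch                   # or run fstrim periodically instead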
2017 Oct 17
1
lvconvert(split) - raid10 => raid0
hi guys, gals
do you know if conversion from lvm's raid10 to raid0 is
possible?
I'm fiddling with --splitmirrors but it gets me nowhere.
On "takeover" subject man pages says: "..between
striped/raid0 and raid10."" but no details, nowhere I could
find documentation, nor a howto.
many thanks, L.
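A hedged sketch of the takeover path the man page hints at, assuming an lvm2 release that supports it (VG/LV names are placeholders; check lvmraid(7) and lvconvert(8) on your version first):
lvconvert --type raid0 vg00/lv_data    # raid10 -> raid0, dropping the mirror images
lvs -a -o name,segtype vg00            # confirm the resulting segment type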
2012 Feb 07
2
Understanding Default RAID Behavior
The Wiki does not make it clear as to why adding a secondary device
defaults to RAID1 metadata and RAID0 data. I bought two SSDs with the
intention of doing a BTRFS RAID0 for my root.
What is the difference between forcing RAID0 on metadata and data as
opposed to the default behavior? Can anyone clarify that?
Thank you for your time,
Mario
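For comparison, a small sketch of choosing the profiles explicitly instead of taking the defaults (device names are placeholders; the convert filters need a reasonably recent kernel and btrfs-progs):
mkfs.btrfs -d raid0 -m raid1 /dev/sdb /dev/sdc              # striped data, mirrored metadata (the default behaviour described above)
mkfs.btrfs -d raid0 -m raid0 /dev/sdb /dev/sdc              # force RAID0 for both; no metadata redundancy at all
btrfs balance start -dconvert=raid0 -mconvert=raid0 /mnt    # or convert an existing filesystem in place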
2015 Feb 16
4
Centos 7.0 and mismatched swap file
On Mon, Feb 16, 2015 at 6:47 AM, Eliezer Croitoru <eliezer at ngtech.co.il> wrote:
> I am unsure I understand what you wrote.
> "XFS will create multiple AG's across all of those
> devices,"
> Are you comparing md linear/concat to md raid0? And that the upper-level XFS
> will run on top of them?
Yes to the first question, I'm not understanding the second
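A minimal sketch of the linear/concat arrangement being discussed, with XFS spreading its allocation groups across the whole device (device names and AG count are placeholders):
mdadm --create /dev/md0 --level=linear --raid-devices=2 /dev/sdb1 /dev/sdc1
mkfs.xfs -d agcount=8 /dev/md0    # the AGs end up distributed across both underlying disks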
2012 Mar 10
4
Any recommendations on Perc H700 controller on Dell Rx10 ?
Hi folks:
At work, I have an R510, and R610 and an R710 - all with the H700 PERC
controller.
Based on experiments, it seems there is no way to bypass the PERC
controller - one can only access the individual disks if
each of them is set up as a single-disk RAID0.
This brings me to ask some questions:
a. Is it fine (in terms of an intelligent controller coming in the way
of ZFS) to have the
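The exact syntax here is an assumption on my part (the H700 is LSI-based, so MegaCli generally works on it; the enclosure:slot numbers are placeholders), but the usual single-disk-RAID0 workaround looks roughly like:
MegaCli64 -CfgLdAdd -r0 [32:0] -a0    # one single-disk RAID0 virtual disk per physical disk
MegaCli64 -CfgLdAdd -r0 [32:1] -a0
MegaCli64 -LDInfo -Lall -a0           # verify the resulting virtual disks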
2012 Mar 15
2
Usage Case: just not getting the performance I was hoping for
All,
For our project, we bought 8 new Supermicro servers. Each server is a
quad-core Intel CPU with a 2U chassis supporting 8 x 7200 RPM SATA drives.
To start out, we only populated 2 x 2TB enterprise drives in each
server and added all 8 peers with their total of 16 drives as bricks to
our gluster pool as distributed replicated (2). The replica worked as
follows:
1.1 -> 2.1
1.2
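A cut-down sketch of a distribute-replicate (replica 2) volume of this shape with just four bricks (hostnames and brick paths are placeholders; consecutive bricks in the create command form the replica pairs):
gluster volume create gv0 replica 2 \
    server1:/export/brick1 server2:/export/brick1 \
    server1:/export/brick2 server2:/export/brick2
gluster volume start gv0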
2013 Jan 30
8
RAID 0 across SSD and HDD
I've been unable to find anything definitive about what happens if I use
RAID0 to join an SSD and HDD together with respect to performance
(latency, throughput). The future is obvious (hot data tracking, using
most appropriate device for the data, data migration).
In my specific case I have a 250GB SSD and a 500GB HDD, and about 250GB of
2012 Jan 17
8
[RFC][PATCH 1/2] Btrfs: try to allocate new chunks with degenerated profile
If there is no free space, the free space allocator will try to get space from
the block group with the degenerated profile. For example, if there is no free
space in the RAID1 block groups, the allocator will try to allocate space from
the DUP block groups. And besides that, the space reservation has the similar
behaviour: if there is not enough space in the space cache to reserve, it will
reserve
2009 Sep 24
1
Problem with raid0
Hey! I have a big problem with my CentOS 5.1. I have two hard disks, 500 GB and 40 GB, and they are in RAID0. I would like to remove the small 40 GB drive, put in a new 500 GB drive, and I don't want RAID0 anymore.
How can I copy the data from the 40 GB drive to the 500 GB drive and swap the 40 GB drive for the new 500 GB one?
Another question: Can I just copy all my files to a Windows laptop and if I want
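Not from the thread, just a hedged sketch of one way to do the copy step once the new disk is attached (device names, mount points and filesystem are placeholders; ext3 was the CentOS 5 default):
mkfs.ext3 /dev/sdc1                # the new 500 GB disk
mount /dev/sdc1 /mnt/newdisk
rsync -aHx /data/ /mnt/newdisk/    # copy everything off the old RAID0 volume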
2011 Oct 23
2
ssd quandry
On a CentOS 6 64bit system, I added a couple prototype SAS SSDs on a HP
P411 raid controller (I believe this is a rebranded LSI megaraid with HP
firmware) and am trying to format them for best random IO performance
with something like postgresql.
so, I used the raid command tool to build a raid0 with 2 SAS SSDs
# hpacucli ctrl slot=1 logicaldrive 3 show detail
Smart Array P410 in Slot 1
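For context, the sort of hpacucli commands used to build and inspect such a logical drive (the slot and physical drive IDs below are placeholders, not the poster's):
hpacucli ctrl slot=1 create type=ld drives=1I:1:1,1I:1:2 raid=0   # 2-SSD RAID0 logical drive
hpacucli ctrl slot=1 logicaldrive all show detail                 # inspect stripe size and cache settings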
2011 Jul 15
22
Zil on multiple usb keys
This might be a stupid question, but here goes... Would adding, say, four 4 or 8 GB USB keys as a ZIL make enough of a difference for writes on an iSCSI shared volume?
I am finding reads are not too bad (40ish MB/s over GigE on two 500 GB drives, striped) but writes top out at about 10 and drop a lot lower... If I were to add a couple of USB keys for ZIL, would it make a difference?
Thanks.
Sent from a
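For reference, adding separate log devices to an existing pool looks like this (pool and device names are placeholders; mirroring the log is the usual recommendation, and USB keys are likely to be far slower for sync writes than dedicated slog devices):
zpool add tank log mirror /dev/da1 /dev/da2   # attach a mirrored ZIL (slog) to pool "tank"
zpool status tank                             # the log vdev shows up in its own section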
2017 Sep 08
4
cyrus spool on btrfs?
On Fri, September 8, 2017 12:56 pm, hw wrote:
> Valeri Galtsev wrote:
>>
>> On Fri, September 8, 2017 9:48 am, hw wrote:
>>> m.roth at 5-cent.us wrote:
>>>> hw wrote:
>>>>> Mark Haney wrote:
>>>> <snip>
>>>>>> BTRFS isn't going to impact I/O any more significantly than, say,
>>>>>> XFS.
2006 Sep 21
1
Software versus hardware RAID performance.
With the Dell OpenManage question on my mind (and having seen it answered very
well), I was reminded of an interesting and a little surprising thing I saw
yesterday.
I upgraded a PowerEdge 2850 from CentOS4 to Fedora Core 5 (keeping everything
updated for GNUradio to run on CentOS 4 became more of a job than it should
have) for our pulsar data processing machine (it has a GNUradio Universal
2015 Feb 16
2
Centos 7.0 and mismatched swap file
On Mon, Feb 16, 2015 at 12:07 AM, Michael Schumacher
<michael.schumacher at pamas.de> wrote:
> Btw., are you sure you want to use XFS for a mail server? I made some
> tests about a year ago and found that EXT4 is faster than XFS by a
> factor of 10. The tests I performed were using the "maildir" style
> postfix installation that results in many thousands of files in
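A crude small-file test along those lines, assuming nothing beyond standard tools (file count, size and path are placeholders); run it once on each filesystem and compare the times:
mkdir -p /mnt/test/Maildir && cd /mnt/test/Maildir
time bash -c 'for i in $(seq 1 10000); do head -c 4096 /dev/urandom > msg.$i; done; sync'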
2012 Apr 09
4
Transfer speed
Hello,
I have a little problem which I can't solve with Samba under Debian
Squeeze (Samba version: 3.5.6).
I compared transfer speeds between copying a file from the server's local
disk directly to RAM and copying the same file through a Samba mount of
that disk, on the same PC, to RAM.
1. HDD to RAM => 105-115 MB/s
2. Shared the same HDD directory with Samba and mounted it locally on "/media"
with (mount -t smbfs):
Samba to RAM
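A sketch of the loopback comparison described above, using the cifs mount type that replaced smbfs (share name, options and file paths are placeholders):
mount -t cifs //localhost/share /media/sambatest -o guest
time cp /media/sambatest/bigfile /dev/shm/    # through Samba, to RAM
time cp /srv/share/bigfile /dev/shm/          # straight from the local disk, to RAM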