Displaying 20 results from an estimated 20000 matches similar to: "ZFS-Performance: Raid-Z vs. Raid5/6 vs. mirrored"
2010 Mar 26
23
RAID10
Hi All,
I am looking at ZFS and I get that they call it RAIDZ which is similar to RAID 5, but what about RAID 10? Isn't a RAID 10 setup better for data protection?
So if I have 8 x 1.5TB drives, wouldn't I:
- mirror drive 1 and 5
- mirror drive 2 and 6
- mirror drive 3 and 7
- mirror drive 4 and 8
Then stripe across the four mirror pairs?
How does one do this with ZFS?
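For what it's worth, ZFS stripes across all top-level vdevs automatically, so no separate "stripe" step is needed. A minimal sketch, assuming the eight drives appear under the placeholder names c0t1d0 through c0t8d0:

  # zpool create tank mirror c0t1d0 c0t5d0 mirror c0t2d0 c0t6d0 \
        mirror c0t3d0 c0t7d0 mirror c0t4d0 c0t8d0

This gives four 2-way mirrors with writes striped across all of them - the ZFS equivalent of RAID 10.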
2012 Jan 15
22
Does raidzN actually protect against bitrot? If yes - how?
"Does raidzN actually protect against bitrot?"
That's a somewhat radical, possibly provocative, question I have
been asking lately.
Reading up on the theory of RAID5, I grasped the idea of the write
hole (where one of the sectors of the stripe, such as the parity
data, doesn't get written - leading to invalid data upon read).
In general, I think the same applies to bitrot of
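For context: RAID-Z avoids the RAID5 write hole by always writing full, variable-width stripes copy-on-write, and bitrot is caught by the per-block checksums stored in the block pointers rather than by the parity itself. A scrub forces every block to be read, verified and, where a good copy can be reconstructed, repaired. A minimal check, assuming a pool named tank (placeholder):

  # zpool scrub tank
  # zpool status -v tank     (reports checksum errors found and repaired)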
2009 Apr 27
23
Raidz vdev size... again.
Hi,
I'm new to the list, so please bear with me. This isn't an OpenSolaris-related
problem, but I hope it's still the right list to post to.
I'm in the process of moving a backup server to ZFS-based storage, but I
don't want to spend too many drives on parity (the 16 drives are attached
to a 3ware RAID controller, so I could also just use RAID6 there).
I
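One common layout for a 16-drive box like this, assuming the controller exports the drives individually (JBOD) and using placeholder device names, is two 8-disk raidz2 vdevs, spending 4 of the 16 drives on parity:

  # zpool create backup \
        raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0 \
        raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 c2t7d0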
2009 Mar 28
53
Can this be done?
I currently have a 7x1.5TB raidz1.
I want to add "phase 2", which is another 7x1.5TB raidz1.
Can I add the second phase to the first and basically have two
RAID5s striped (in RAID terms)?
Yes, I probably should upgrade the zpool format too. Currently running
snv_104. Also should upgrade to 110.
If that is possible, would anyone happen to have the simple command
lines to
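For reference, the command lines would look roughly like this, assuming the existing pool is named tank and the seven new drives carry the placeholder names below:

  # zpool add tank raidz1 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0
  # zpool upgrade tank      (brings the pool format up to the current version)

zpool add creates a second top-level raidz1 vdev, and ZFS stripes new writes across both - the analogue of two RAID5s striped.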
2008 Feb 15
38
Performance with Sun StorageTek 2540
Under Solaris 10 on a 4-core Sun Ultra 40 with 20GB RAM, I am setting
up a Sun StorageTek 2540 with 12 300GB 15K RPM SAS drives and
connected via load-shared 4Gbit FC links. This week I have tried many
different configurations, using firmware managed RAID, ZFS managed
RAID, and with the controller cache enabled or disabled.
My objective is to obtain the best single-file write performance.
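A simple single-file write test along these lines, assuming the pool is mounted at /tank (placeholder), the file is sized well past the 20GB of RAM so the ARC cannot hide the disks, and compression is off (or the zeros compress away):

  # ptime dd if=/dev/zero of=/tank/bigfile bs=1024k count=32768    (writes 32GB)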
2010 Apr 19
4
upgrade zfs stripe
Hi there,
Since I am really new to ZFS, I have two important questions to start with. I have a NAS up and running ZFS in stripe mode with 2x 1.5TB HDDs. My question, for future-proofing: could I add just another drive to the pool and have ZFS integrate it seamlessly? And second, could this HDD be a different size than 1.5TB? Could I put in a 2TB drive and integrate it?
thanks in advance
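A sketch of the expansion, assuming the pool is named mypool and the new drive appears as c3t0d0 (both placeholders):

  # zpool add mypool c3t0d0

The new disk becomes another top-level vdev and ZFS spreads new writes across all of them. In a plain stripe the sizes need not match, so a 2TB drive contributes its full 2TB. Bear in mind there is no redundancy in this layout, and a vdev, once added, cannot be removed.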
2010 Jul 20
16
zfs raidz1 and traditional raid 5 perfomrance comparision
Hi,
For ZFS raidz1, I know that for random I/O the IOPS of a raidz1 vdev equal the IOPS of one physical disk. Since raidz1 is like RAID5, does RAID5 have the same performance as raidz1, i.e. random IOPS equal to one physical disk's IOPS?
Regards
Victor
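As a rough sanity check, assuming ~100 random IOPS per 7200rpm disk: one 8-disk raidz1 vdev delivers on the order of 100 random IOPS, while the same 8 disks arranged as four 2-way mirrors deliver roughly 400 for writes and up to 800 for reads (either side of a mirror can satisfy a read). This is why pools built for random I/O tend to use many narrow vdevs rather than one wide one.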
2010 Oct 20
5
Myth? 21 disk raidz3: "Don't put more than ___ disks in a vdev"
In a discussion a few weeks back, it was mentioned that the Best Practices
Guide says something like "Don't put more than ___ disks into a single
vdev." At first, I challenged this idea, because I see no reason why a
21-disk raidz3 would be bad. It seems like a good thing.
I was operating on the assumption that resilver time was limited by the sustainable
throughput of disks, which
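A back-of-the-envelope under that assumption: resilvering ~1TB of used data per disk at a sustained 100MB/s takes about 10^12 / 10^8 = 10^4 seconds, roughly 3 hours, regardless of vdev width. If the resilver instead degrades to small random reads at an effective 5MB/s, the same terabyte takes around 55 hours - which is where wide raidzN vdevs get their reputation.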
2009 Nov 08
5
Disk I/O in RAID-Z as new disks are added/removed
Hello,
As I understand it, in a traditional RAID 5 setup adding new disks to the pool provides more overall I/O as the load is spread out across multiple disks.
What exactly is this relationship in a RAID-Z setup? What should one expect in terms of overall I/O performance as disks are added and/or removed? I understand that the parity data is distributed across all disks, unlike a traditional
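One way to watch how I/O actually spreads across vdevs as they are added, assuming a pool named tank (placeholder):

  # zpool iostat -v tank 5    (per-vdev and per-disk ops and bandwidth, every 5 seconds)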
2009 Jan 15
21
4 disk raidz1 with 3 disks...
Hello,
I was hoping that this would work:
http://blogs.sun.com/zhangfan/entry/how_to_turn_a_mirror
I have 4x(1TB) disks, one of which is filled with 800GB of data (that I
can't delete or back up somewhere else)
> root at FSK-Backup:~# zpool create -f ambry raidz1 c4t0d0 c5t0d0 c5t1d0
> /dev/lofi/1
> root at FSK-Backup:~# zpool list
> NAME SIZE USED AVAIL CAP HEALTH ALTROOT
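For anyone following along, the trick in the linked post is roughly this; the sparse backing file and the name c5t2d0 for the eventually-freed fourth disk are placeholders:

  # mkfile -n 1000g /var/tmp/fakedisk     (sparse file, occupies almost no space)
  # lofiadm -a /var/tmp/fakedisk          (shows up as /dev/lofi/1)
  # zpool create ambry raidz1 c4t0d0 c5t0d0 c5t1d0 /dev/lofi/1
  # zpool offline ambry /dev/lofi/1       (run degraded so the file is never written)
  (copy the 800GB into ambry, then free up the fourth disk)
  # zpool replace ambry /dev/lofi/1 c5t2d0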
2010 Apr 10
41
Secure delete?
Hi all
Is it possible to securely delete a file from a zfs dataset/zpool once it's been snapshotted, meaning "delete (and perhaps overwrite) all copies of this file"?
Best regards
roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
roy at karlsbakk.net
http://blogg.karlsbakk.net/
--
In all pedagogy it is essential that the curriculum be presented intelligibly. That is an elementary imperative
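Because ZFS is copy-on-write, overwriting a file in place never touches the old blocks, and blocks referenced by a snapshot are only freed once every snapshot referencing them is destroyed. The practical, if blunt, approach is therefore (dataset and snapshot names are placeholders):

  # zfs list -t snapshot -r tank/data     (list every snapshot of the dataset)
  # zfs destroy tank/data@snap1           (repeat for each snapshot)
  # rm /tank/data/secret.file

Even then the freed blocks remain on the platters until reused, so true secure erasure really calls for encryption or wiping the underlying devices.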
2009 Dec 02
10
Separate Zil on HDD ?
Hi all,
I have a home server based on SNV_127 with 8 disks;
2 x 500GB mirrored root pool
6 x 1TB raidz2 data pool
This server performs a few functions;
NFS : for several 'lab' ESX virtual machines
NFS : mythtv storage (videos, music, recordings etc)
Samba : for home directories for all networked PCs
I backup the important data to external USB hdd each day.
I previously had
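For reference, attaching a dedicated log device is a one-liner, assuming the data pool is named datapool and a spare device c9t0d0 (placeholders) - though a slog on a plain HDD rarely beats the ZIL embedded in the pool itself:

  # zpool add datapool log c9t0d0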
2009 Aug 25
41
snv_110 -> snv_121 produces checksum errors on Raid-Z pool
I have a 5-500GB disk Raid-Z pool that has been producing checksum errors right after upgrading SXCE to build 121. They seem to be randomly occurring on all 5 disks, so it doesn't look like a disk failure situation.
Repeatedly running a scrub on the pool repairs between 20 and a few hundred checksum errors each time.
Since I hadn't physically touched the machine, it seems a
2010 Oct 17
10
RaidzN blocksize ... or blocksize in general ... and resilver
The default blocksize is 128K. If you are using mirrors, then each block on
disk will be 128K whenever possible. But if you're using raidzN with a
capacity of M disks (M data disks + N redundancy disks), then the
block size on each individual disk will be 128K / M. Right? This is one of
the reasons the raidzN resilver code is inefficient. Since you end up
waiting for the
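To put numbers on it: with 128K records on a 10-disk raidz2 (M=8 data disks), each disk holds 128K / 8 = 16K of every block, so resilvering 10 million blocks means on the order of 10 million 16K reads per surviving disk - random-I/O bound rather than bandwidth bound.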
2010 Apr 07
53
ZFS RaidZ recommendation
I have been searching this forum and just about every ZFS document I can find trying to find the answer to my questions. But I believe the answer I am looking for is not going to be documented and is probably best learned from experience.
This is my first time playing around with OpenSolaris and ZFS. I am in the midst of replacing my home-based file server. This server hosts all of my media
2010 Feb 08
17
ZFS ZIL + L2ARC SSD Setup
I have some questions about the choice of SSDs to use for ZIL and L2ARC.
I'm trying to build an OpenSolaris iSCSI SAN out of a whitebox system,
which is intended to be used as a backup SAN during storage migration,
so it's built on a tight budget.
The system currently has 4GB RAM, 3GHz Core2-Quad and 8x 500GB WD REII
SATA HDDs attached to an Areca 8-port ARC-1220 controller
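For comparison, both device classes attach with one command each. A sketch assuming a pool named tank and placeholder SSD names; slogs are usually mirrored since sync writes depend on them, while a lost L2ARC device is harmless:

  # zpool add tank log mirror c4t0d0 c4t1d0
  # zpool add tank cache c4t2d0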
2008 Nov 26
9
ZPool and Filesystem Sizing - Best Practices?
Hello,
We have a new Thor here with 24TB of disk in (first of many, hopefully).
We are trying to determine the best practices with respect to file system
management and sizing. Previously, we have tried to keep each file system
to a max size of 500GB to make sure we could fit it all on a single tape,
and to minimise restore times and impact should we experience some kind of
volume
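One way to carry that policy over to ZFS is cheap per-project filesystems with quotas, e.g. (pool and dataset names are placeholders):

  # zfs create -o quota=500g tank/proj1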
2008 Nov 16
8
Mirror and RaidZ on only 3 disks
Hi,
I have a small Linux server PC at home (Intel Core2 Q9300, 4 GB RAM),
and I'm seriously considering switching to OpenSolaris (Indiana,
2008.11) in the near future, mainly because of ZFS. The idea is to run
the existing CentOS 4.7 system inside a VM and let it NFS mount home
directories and other filesystems from OpenSolaris. I might migrate more
services from Linux over time, but for
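One layout people use on three disks is to slice each disk and build both pools from slices. A sketch, assuming each disk has been partitioned with format(1M) into a small s0 and a large s1 (placeholder names):

  # zpool create rpool mirror c1t0d0s0 c1t1d0s0
  # zpool create data raidz c1t0d0s1 c1t1d0s1 c1t2d0s1

Both pools still survive any single disk failure, at the cost of the two pools competing for the same spindles.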
2009 Jun 10
13
Apple Removes Nearly All Reference To ZFS
http://hardware.slashdot.org/story/09/06/09/2336223/Apple-Removes-Nearly-All-Reference-To-ZFS
2008 Jun 30
20
Some basic questions about getting the best performance for database usage
I'm new to OpenSolaris and very new to ZFS. In the past we have always used Linux for our database backends.
So now we are looking for a new database server to give us a big performance boost, and also the possibility for scalability.
Our current database consists mainly of a huge table containing about 230 million records and a few (relatively) smaller tables (something like 13 million
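If the workload is a database, one commonly cited starting point is matching the dataset's recordsize to the database page size before loading the data, e.g. assuming an 8K page size and a placeholder dataset name:

  # zfs set recordsize=8k tank/db

This avoids read-modify-write of full 128K records on small random updates.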