similar to: zfs raidz1 and traditional raid 5 performance comparison

Displaying 20 results from an estimated 8000 matches similar to: "zfs raidz1 and traditional raid 5 performance comparison"

2007 Apr 02
4
Convert raidz
Hi. Is it possible to convert a live 3-disk zpool from raidz to raidz2? And is it possible to add 1 new disk to a raidz configuration without backups and recreating the zpool from scratch? Thanks
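Neither was possible at the time: a raidz vdev could not change its parity level or width. The closest supported move was adding a whole new raidz vdev alongside the old one; a minimal sketch, with hypothetical pool and device names:

  # Adds a second raidz vdev; the pool stripes across both vdevs.
  # The original 3-disk raidz is left unchanged.
  zpool add tank raidz c2t0d0 c2t1d0 c2t2d0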
2007 Jul 07
17
Raid-Z expansion
Apologies for the blank message (if it came through). I have heard here and there that there might be a plan in development to make it so that a raid-z can grow its "raid-z'ness" to accommodate a new disk added to it. Example: I have 4 disks in a raid-z[12] configuration. I am uncomfortably low on space and would like to add a 5th disk. The idea is to pop in disk 5 and have
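That feature took many years to materialize; raidz expansion eventually shipped in OpenZFS 2.2. On a pool new enough to support it, the operation described above would look roughly like this (pool, vdev, and device names hypothetical):

  # Widen an existing 4-disk raidz1 vdev with a 5th disk (OpenZFS >= 2.2).
  # Existing blocks keep their old data/parity ratio until rewritten.
  zpool attach tank raidz1-0 c0t4d0
  zpool status tank    # reports expansion progress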
2009 Mar 28
53
Can this be done?
I currently have a 7x1.5tb raidz1. I want to add "phase 2", which is another 7x1.5tb raidz1. Can I add the second phase to the first phase and basically have two RAID5s striped (in RAID terms)? Yes, I probably should upgrade the zpool format too. Currently running snv_104. Also should upgrade to 110. If that is possible, would anyone happen to have the simple command lines to
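The striping part is straightforward; a sketch of the likely commands (device names hypothetical):

  # Add a second 7-disk raidz1 vdev; ZFS then stripes across both vdevs,
  # which is roughly "two RAID5s striped" in traditional terms.
  zpool add tank raidz1 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0 c3t6d0
  # Optionally bring the on-disk format up to the running release.
  zpool upgrade tank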
2010 Jan 28
13
ZFS configuration suggestion with 24 drives
Replacing my current media server with another larger-capacity media server. Also switching over to Solaris/ZFS. Anyhow, we have 24-drive capacity. These are for large sequential access (large media files) used by no more than 3 to 5 users at a time. I'm inquiring as to what the best configuration for vdevs is. I'm considering the following configurations: 4 x 6
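A common recommendation for this kind of large-sequential workload was a few raidz2 vdevs of equal width; one such layout as a sketch (pool and device names hypothetical):

  # 24 drives as 4 x 6-disk raidz2: capacity of 16 drives, and each
  # vdev survives two simultaneous disk failures.
  zpool create media \
      raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
      raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 \
      raidz2 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0 \
      raidz2 c4t0d0 c4t1d0 c4t2d0 c4t3d0 c4t4d0 c4t5d0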
2009 Oct 01
4
RAIDZ v. RAIDZ1
So, I took four 1.5TB drives and made RAIDZ, RAIDZ1 and RAIDZ2 pools. The sizes for the pools were 5.3TB, 4.0TB, and 2.67TB respectively. The man page for RAIDZ states that "The raidz vdev type is an alias for raidz1." So why was there a difference between the sizes for RAIDZ and RAIDZ1? Shouldn't the size be the same for "zpool create raidz ..." and "zpool
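Working the numbers back suggests an explanation (an inference, since the exact create commands are not quoted): the two smaller figures match raidz1 and raidz2 usable space, while the largest matches the raw capacity of a plain stripe with no parity at all:

  4 x 1.5 TB = 6.0 TB raw ~= 5.45 TiB  (close to the 5.3TB "RAIDZ" pool: no parity)
  3 x 1.5 TB = 4.5 TB     ~= 4.09 TiB  (the 4.0TB pool: raidz1, one parity disk)
  2 x 1.5 TB = 3.0 TB     ~= 2.73 TiB  (the 2.67TB pool: raidz2, two parity disks)

So the first pool was most likely created as a plain stripe, e.g. with the raidz keyword accidentally omitted, rather than raidz and raidz1 actually differing.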
2010 Mar 26
23
RAID10
Hi All, I am looking at ZFS and I get that they call it RAIDZ, which is similar to RAID 5, but what about RAID 10? Isn't a RAID 10 setup better for data protection? So if I have 8 x 1.5tb drives, wouldn't I: - mirror drives 1 and 5 - mirror drives 2 and 6 - mirror drives 3 and 7 - mirror drives 4 and 8 Then stripe 1,2,3,4. Then stripe 5,6,7,8. How does one do this with ZFS?
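ZFS builds this as a stripe of mirrors: list each mirror pair on the zpool create line and the pool stripes across them automatically. A sketch with hypothetical device names:

  # Four 2-way mirrors; the pool stripes writes across all four,
  # giving the RAID 10 equivalent.
  zpool create tank \
      mirror c0t1d0 c0t5d0 \
      mirror c0t2d0 c0t6d0 \
      mirror c0t3d0 c0t7d0 \
      mirror c0t4d0 c0t8d0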
2011 Feb 05
12
ZFS Newbie question
I've spent a few hours reading through the forums and wiki and honestly my head is spinning. I have been trying to study up on either buying or building a box that would allow me to add drives of varying sizes/speeds/brands (adding more later, etc.) and still be able to use the full space of the drives (minus parity? [not sure if I got the terminology right]) with redundancy. I have found the 'all-in
2011 May 13
27
Extremely slow zpool scrub performance
Running a zpool scrub on our production pool is showing a scrub rate of about 400K/s. (When this pool was first set up we saw rates in the MB/s range during a scrub.) Both zpool iostat and an iostat -xn show lots of idle disk time, no above-average service times, no abnormally high busy percentages. Load on the box is 0.59. 8 x 3GHz, 32GB RAM, 96 spindles arranged into raidz vdevs on OI 147.
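The usual first round of diagnostics for a crawling scrub looks something like this (a sketch; exact flag support varies by release, pool name hypothetical):

  zpool status -v tank     # scrub progress and per-device error counts
  zpool iostat -v tank 5   # per-vdev throughput while the scrub runs
  iostat -xn 5             # service times and %busy per device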
2010 Nov 18
5
RAID-Z/mirror hybrid allocator
Hi, I'm referring to: http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6977913 It should be in Solaris 11 Express. Has anyone tried this? How is this supposed to work? Any documentation available? Yours, Markus Kovero
2011 Mar 01
5
btrfs wishlist
Hi all. Having managed ZFS for about two years, I want to post a wishlist. INCLUDED IN ZFS - Mirror an existing single-drive filesystem, as in 'zpool attach' - RAIDZ stuff - single and hopefully multiple-parity RAID configuration with block-level checksumming - Background scrub/fsck - Pool-like management with multiple RAIDs/mirrors (vdevs) - Autogrow as in ZFS autoexpand NOT
2007 Jun 20
14
Z-Raid performance with Random reads/writes
Given a 1.6TB ZFS Z-Raid consisting of 6 disks, and a system that does an extreme amount of small (<20K) random reads (more than twice as many reads as writes): 1) What performance gains, if any, does Z-Raid offer over other RAID or large-filesystem configurations? 2) What hindrance, if any, is Z-Raid to this configuration, given the complete randomness and size of these accesses? Would
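The well-documented property behind both questions is that every small random read from a raidz vdev touches all of its data disks, so a raidz vdev delivers roughly the small-read IOPS of a single drive. A rough worked estimate, assuming ~100 IOPS per 7200 rpm spindle:

  6-disk raidz, small random reads:  ~1 spindle's worth  ~= 100 IOPS
  3 x 2-way mirrors (same 6 disks):  reads spread across all spindles
                                     ~= 6 x 100 = 600 IOPS

For a read-heavy random workload like this, mirrors are usually the better fit.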
2009 Jan 15
21
4 disk raidz1 with 3 disks...
Hello, I was hoping that this would work: http://blogs.sun.com/zhangfan/entry/how_to_turn_a_mirror I have 4x(1TB) disks, one of which is filled with 800GB of data (that I can't delete/back up somewhere else) > root@FSK-Backup:~# zpool create -f ambry raidz1 c4t0d0 c5t0d0 c5t1d0 > /dev/lofi/1 > root@FSK-Backup:~# zpool list > NAME SIZE USED AVAIL CAP HEALTH ALTROOT
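The trick in the linked post is to back the fourth raidz1 slot with a sparse lofi device and then take it offline, leaving a degraded-but-usable pool on the three real disks. A sketch of the full sequence (file path and final device name hypothetical):

  mkfile -n 1t /var/tmp/fakedisk       # sparse file, no real space used
  lofiadm -a /var/tmp/fakedisk         # shows up as /dev/lofi/1
  zpool create -f ambry raidz1 c4t0d0 c5t0d0 c5t1d0 /dev/lofi/1
  zpool offline ambry /dev/lofi/1      # pool now DEGRADED but usable
  # after migrating the 800GB onto ambry, hand over the real 4th disk:
  zpool replace ambry /dev/lofi/1 c5t2d0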
2009 Jan 12
1
ZFS size is different ?
Hi all, I have 2 questions about ZFS. 1. I have created a snapshot in my pool1/data1 and zfs send/recv'd it to pool2/data2, but I found the USED in zfs list is different: NAME USED AVAIL REFER MOUNTPOINT pool2/data2 160G 1.44T 159G /pool2/data2 pool1/data 176G 638G 175G /pool1/data1 It keeps about 30,000,000 files. The content of p_pool/p1 and backup/p_backup
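Differences like this usually come down to snapshots, compression, or copies settings on one side rather than missing data; comparing the standard space-accounting properties is a good start (a sketch):

  zfs get used,referenced,usedbysnapshots,compressratio pool1/data1
  zfs get used,referenced,usedbysnapshots,compressratio pool2/data2
  zfs list -t snapshot -r pool1/data1   # snapshots count against USED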
2010 Apr 27
42
Performance drop during scrub?
Hi all, I have a test system with snv134 and 8x2TB drives in RAIDZ2, and currently no ZIL or L2ARC. I noticed the I/O speed to NFS shares on the test pool drops to something hardly usable while scrubbing the pool. How can I address this? Will adding a ZIL or L2ARC help? Is it possible to tune down scrub's priority somehow? Best regards, Roy
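On builds of that era, the knob usually suggested was the zfs_scrub_delay tunable, which throttles scrub I/O in favor of user I/O (a sketch, assuming the tunable exists in snv134 as it did in contemporary OpenSolaris builds):

  # Increase the delay injected between scrub I/Os (live, via mdb):
  echo zfs_scrub_delay/W0t10 | mdb -kw
  # Or persistently in /etc/system:
  #   set zfs:zfs_scrub_delay = 10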
2006 Sep 28
13
jbod questions
Folks, we are in the process of purchasing new SANs that our mail server runs on (JES3). We have moved our mailstores to ZFS and continue to have checksum errors -- they are corrected, but this improves on the UFS inode errors that required system shutdown and fsck. So, I am recommending that we buy small JBODs, do raidz2, and let ZFS handle the raiding of these boxes. As we need more
2010 Nov 23
14
ashift and vdevs
zdb -C shows an ashift value on each vdev in my pool; I was just wondering if it is vdev-specific or pool-wide. Google didn't seem to know. I'm considering a mixed pool with some "advanced format" (4KB sector) drives and some normal 512B-sector drives, and was wondering if the ashift can be set per vdev or only per pool. Theoretically, this would save me some size on
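ashift is in fact recorded per top-level vdev, not per pool, which is why zdb shows one value for each vdev. A quick way to check (pool name hypothetical):

  zdb -C tank | grep -w ashift
  # ashift: 9  -> 512-byte sectors;  ashift: 12 -> 4K sectors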
2012 Apr 16
2
Any options on crypt+zfs ?
Hail, I have a Soekris box running an Atom and 2GB RAM, with ZFS using 7 drives (small capacity, though), to test and study whether I can make this box my home server this way. It will be a simple server, three users tops. I followed the handbook and did the geli step on the disks: Geom name: label/zfs1.eli State: ACTIVE EncryptionAlgorithm: AES-XTS KeyLength: 128 Crypto: software UsedKey: 0 Flags:
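For reference, the handbook's geli-then-zpool sequence looks roughly like this (a sketch using the algorithm and key length shown above; key file paths and labels hypothetical):

  geli init -e AES-XTS -l 128 -K /root/zfs1.key /dev/label/zfs1
  geli attach -k /root/zfs1.key /dev/label/zfs1
  # repeat per disk, then build the pool on the .eli providers:
  zpool create tank raidz1 label/zfs1.eli label/zfs2.eli label/zfs3.eli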
2010 Apr 24
3
ZFS RAID-Z2 degraded vs RAID-Z1
Had an idea; could someone please tell me why it's wrong? (I feel like it has to be.) A RAIDZ2 pool with one missing disk offers the same failure resilience as a healthy RAIDZ1 pool (no data loss when one disk fails). I had initially wanted to do a single-parity raidz pool (5-disk), but after a recent scare decided raidz2 was the way to go. With the help of a sparse file
2011 Mar 04
13
cannot replace c10t0d0 with c10t0d0: device is too small
In 2007 I bought 6 WD1600JS 160GB SATA disks and used 4 to create a raidz storage pool, then shelved the other two for spares. One of the disks failed last night, so I shut down the server and replaced it with a spare. When I tried to zpool replace the disk I got: zpool replace tank c10t0d0 cannot replace c10t0d0 with c10t0d0: device is too small The 4 original disk partition tables look like
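The usual cause is a replacement whose label or geometry exposes slightly fewer usable sectors. Comparing labels is the first step, and copying a working label across sometimes recovers enough space (a sketch; slice 2 as the whole-disk slice is the Solaris convention):

  prtvtoc /dev/rdsk/c10t1d0s2     # a surviving original disk
  prtvtoc /dev/rdsk/c10t0d0s2     # the new spare
  # copy the working VTOC onto the spare if the sizes allow:
  prtvtoc /dev/rdsk/c10t1d0s2 | fmthard -s - /dev/rdsk/c10t0d0s2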
2007 May 05
13
Optimal strategy (add or replace disks) to build a cheap and raidz?
Hello, I have an 8-port SATA controller and I don't want to spend the money for 8 x 750 GB SATA disks right now. I'm thinking about an optimal way of building a growing raidz pool without losing any data. As far as I know there are two ways to achieve this: - Adding 750 GB disks from time to time. But this would lead to multiple groups with multiple redundancy/parity disks. I
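The other classic approach is replacing the disks of an existing raidz one at a time with larger ones; once the last small disk is swapped out, the vdev can grow to the new size. A sketch (autoexpand arrived in later zpool versions; before that, an export/import picked up the new size):

  zpool replace tank c1t0d0        # swap in a 750 GB disk, same slot
  zpool status tank                # wait for the resilver to finish
  # ... repeat for each remaining disk ...
  zpool set autoexpand=on tank     # then let the pool grow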