similar to: Raidz vdev size... again.

Displaying 20 results from an estimated 8000 matches similar to: "Raidz vdev size... again."

2009 Mar 28
53
Can this be done?
I currently have a 7x1.5TB raidz1. I want to add "phase 2", which is another 7x1.5TB raidz1. Can I add the second phase to the first phase and basically have two raid5's striped (in RAID terms)? Yes, I probably should upgrade the zpool format too. Currently running snv_104. Also should upgrade to 110. If that is possible, would anyone happen to have the simple command lines to
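A minimal sketch of the commands in question, assuming the pool is named tank and the seven new drives show up as c2t0d0 through c2t6d0:

  # add a second 7-disk raidz1 top-level vdev; ZFS then stripes across both vdevs
  zpool add tank raidz1 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0
  # bring the pool's on-disk format up to the version the new build supports
  zpool upgrade tank
  zpool status tank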
2010 Apr 27
42
Performance drop during scrub?
Hi all I have a test system with snv134 and 8x2TB drives in RAIDz2 and currently no Zil or L2ARC. I noticed the I/O speed to NFS shares on the testpool drops to something hardly usable while scrubbing the pool. How can I address this? Will adding Zil or L2ARC help? Is it possible to tune down scrub's priority somehow? Best regards roy -- Roy Sigurd Karlsbakk (+47) 97542685 roy at
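On builds whose scan code exposes the scrub throttle tunables (an assumption worth verifying on snv_134), the scrub itself can be slowed down system-wide; a sketch, with example values only:

  # /etc/system -- takes effect after a reboot
  set zfs:zfs_scrub_delay = 8     # clock ticks to wait between scrub I/Os when the pool is busy
  set zfs:zfs_scan_idle = 50      # pool counts as busy if user I/O happened within this many ticks

  # or live via mdb, if the symbol exists on your build
  echo "zfs_scrub_delay/W0t8" | mdb -kw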
2012 Jan 15
22
Does raidzN actually protect against bitrot? If yes - how?
"Does raidzN actually protect against bitrot?" That''s a kind of radical, possibly offensive, question formula that I have lately. Reading up on theory of RAID5, I grasped the idea of the write hole (where one of the sectors of the stripe, such as the parity data, doesn''t get written - leading to invalid data upon read). In general, I think the same applies to bitrot of
2010 Mar 26
23
RAID10
Hi All, I am looking at ZFS and I get that they call it RAIDZ which is similar to RAID 5, but what about RAID 10? Isn't a RAID 10 setup better for data protection? So if I have 8 x 1.5tb drives, wouldn't I: - mirror drive 1 and 5 - mirror drive 2 and 6 - mirror drive 3 and 7 - mirror drive 4 and 8 Then stripe 1,2,3,4 Then stripe 5,6,7,8 How does one do this with ZFS?
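In ZFS terms that layout is simply a stripe of mirrors, built in one command; the device names below are placeholders for drives 1-8:

  # four 2-way mirrors; ZFS stripes writes across the mirror vdevs automatically
  zpool create tank mirror c0t1d0 c0t5d0 mirror c0t2d0 c0t6d0 \
                    mirror c0t3d0 c0t7d0 mirror c0t4d0 c0t8d0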
2010 Apr 07
53
ZFS RaidZ recommendation
I have been searching this forum and just about every ZFS document I can find trying to find the answer to my questions. But I believe the answer I am looking for is not going to be documented and is probably best learned from experience. This is my first time playing around with OpenSolaris and ZFS. I am in the midst of replacing my home based file server. This server hosts all of my media
2009 Aug 25
41
snv_110 -> snv_121 produces checksum errors on Raid-Z pool
I have a 5 x 500GB disk Raid-Z pool that has been producing checksum errors right after upgrading SXCE to build 121. They seem to be randomly occurring on all 5 disks, so it doesn't look like a disk failure situation. Repeatedly running a scrub on the pool randomly repairs between 20 and a few hundred checksum errors. Since I hadn't physically touched the machine, it seems a
2009 Jan 15
21
4 disk raidz1 with 3 disks...
Hello, I was hoping that this would work: http://blogs.sun.com/zhangfan/entry/how_to_turn_a_mirror I have 4x(1TB) disks, one of which is filled with 800GB of data (that I can't delete/backup somewhere else).

> root at FSK-Backup:~# zpool create -f ambry raidz1 c4t0d0 c5t0d0 c5t1d0
> /dev/lofi/1
> root at FSK-Backup:~# zpool list
> NAME   SIZE   USED   AVAIL   CAP   HEALTH   ALTROOT
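The trick in the linked blog post is to back the fourth raidz member with a sparse lofi device and run the pool degraded until the real fourth disk has been emptied; roughly, with illustrative sizes and paths:

  mkfile -n 931g /var/tmp/fakedisk      # sparse file about the size of a 1TB disk
  lofiadm -a /var/tmp/fakedisk          # prints the lofi device, e.g. /dev/lofi/1
  zpool create -f ambry raidz1 c4t0d0 c5t0d0 c5t1d0 /dev/lofi/1
  zpool offline ambry /dev/lofi/1       # keep the fake member from ever being written
  # ...copy the 800GB in, then swap the freed-up real disk for the placeholder:
  zpool replace ambry /dev/lofi/1 c5t2d0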
2010 Feb 16
2
Speed question: 8-disk RAIDZ2 vs 10-disk RAIDZ3
I currently am getting good speeds out of my existing system (8x 2TB in a RAIDZ2 exported over fibre channel) but there's no such thing as too much speed, and these other two drive bays are just begging for drives in them.... If I go to 10x 2TB in a RAIDZ3, will the extra spindles increase speed, or will the extra parity writes reduce speed, or will the two factors offset and leave things
2007 Jun 17
18
6 disk raidz2 or 3 stripe 2 way mirror
I'm playing around with ZFS and want to figure out the best use of my 6x 300GB SATA drives. The purpose of the drives is to store all of my data at home (video, photos, music, etc). I'm debating between: 6x 300GB disks in a single raidz2 pool --or-- 2x (3x 300GB disks in a pool) mirrored I've read up a lot on ZFS, but I can't really figure out which is
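For reference, the two layouts in the subject line would be created roughly like this (device names hypothetical):

  # one 6-disk raidz2 vdev: ~4 disks of usable space, survives any two failures
  zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0
  # three 2-way mirrors striped: ~3 disks usable, one failure tolerated per mirror
  zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0 mirror c1t4d0 c1t5d0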
2010 Mar 03
6
Question about multiple RAIDZ vdevs using slices on the same disk
Hi all :) I've been wanting to make the switch from XFS over RAID5 to ZFS/RAIDZ2 for some time now, ever since I read about ZFS the first time. Absolutely amazing beast! I've built my own little hobby server at home and have a boatload of disks in different sizes that I've been using together to build a RAID5 array on Linux using mdadm in two layers; first layer is
2010 Feb 08
17
ZFS ZIL + L2ARC SSD Setup
I have some questions about the choice of SSDs to use for ZIL and L2ARC. I'm trying to build an OpenSolaris iSCSI SAN out of a whitebox system, which is intended to be used as a backup SAN during storage migration, so it's built on a tight budget. The system currently has 4GB RAM, 3GHz Core2-Quad and 8x 500GB WD REII SATA HDDs attached to an Areca 8port ARC-1220 controller
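Assuming the pool is named tank and the SSDs show up as c3t0d0-c3t2d0, adding them afterwards is a one-liner each; mirroring the slog is the safer choice because in-flight synchronous writes live there, while L2ARC contents are disposable:

  # dedicated ZIL (slog)
  zpool add tank log mirror c3t0d0 c3t1d0
  # L2ARC cache device -- no redundancy needed
  zpool add tank cache c3t2d0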
2010 Jan 16
95
Best 1.5TB drives for consumer RAID?
Which consumer-priced 1.5TB drives do people currently recommend? I had zero read/write/checksum errors so far in 2 years with my trusty old Western Digital WD7500AAKS drives, but now I want to upgrade to a new set of drives that are big, reliable and cheap. As of Jan 2010 it seems the price sweet spot is the 1.5TB drives. As I had a lot of success with Western Digital drives I thought I would
2006 Sep 28
13
jbod questions
Folks, We are in the process of purchasing new SANs that our mail server runs on (JES3). We have moved our mailstores to zfs and continue to have checksum errors -- they are corrected, but this improves on the ufs inode errors that require system shutdown and fsck. So, I am recommending that we buy small jbods, do raidz2 and let zfs handle the raiding of these boxes. As we need more
2010 Jan 28
16
Large scale ZFS deployments out there (>200 disks)
While thinking about ZFS as the next generation filesystem without limits I am wondering if the real world is ready for this kind of incredible technology ... I'm actually speaking of hardware :) ZFS can handle a lot of devices. Once the import bug (http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6761786) is fixed, it should be able to handle a lot of disks. I want to
2011 Mar 01
5
btrfs wishlist
Hi all Having managed ZFS for about two years, I want to post a wishlist.

INCLUDED IN ZFS
- Mirror existing single-drive filesystem, as in 'zfs attach'
- RAIDz-stuff - single and hopefully multiple-parity RAID configuration with block-level checksumming
- Background scrub/fsck
- Pool-like management with multiple RAIDs/mirrors (VDEVs)
- Autogrow as in ZFS autoexpand

NOT
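For the first item, the ZFS command is actually 'zpool attach' (not 'zfs attach'); converting a single-disk pool to a two-way mirror looks roughly like this, with placeholder device names:

  # attach a second device to the existing single-disk vdev; it resilvers into a mirror
  zpool attach tank c0t0d0 c0t1d0
  zpool status tank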
2010 Apr 26
23
SAS vs SATA: Same size, same speed, why SAS?
I'm building another 24-bay rackmount storage server, and I'm considering what drives to put in the bays. My chassis is a Supermicro SC846A, so the backplane supports SAS or SATA; my controllers are LSI3081E, again supporting SAS or SATA. Looking at drives, Seagate offers an enterprise (Constellation) 2TB 7200RPM drive in both SAS and SATA configurations; the SAS model offers
2010 Jul 19
6
Performance advantages of a pool with 2x raidz2 vdevs vs. a single vdev
Hi guys, I am about to reshape my data pool and am wondering what performance difference I can expect from the new config vs. the old. The old config is a pool with a single 8-disk raidz2 vdev. The new config is two 7-disk raidz2 vdevs in a single pool. I understand it should be better with higher I/O throughput...and better read/write rates...but interested to hear the science
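One way to see the difference empirically once the new layout exists is zpool iostat, which breaks operations and bandwidth out per top-level vdev (pool name assumed to be tank):

  # per-vdev IOPS and bandwidth, sampled every 5 seconds
  zpool iostat -v tank 5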
2007 Dec 23
11
RAIDZ(2) expansion?
I skimmed the archives and found a thread from July earlier this year about RAIDZ expansion. Not adding more RAIDZ stripes to a pool, but adding more drives to the stripe itself. I'm wondering if an RFE has been submitted for this and if any progress has been made, or is expected? I find myself out of space on my current RAID5 setup and would love to flip over to a ZFS raidz2 solution
2010 Jul 05
5
never ending resilver
Hi list, Here's my case:

  pool: mypool
 state: DEGRADED
status: One or more devices is currently being resilvered. The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scrub: resilver in progress for 147h19m, 100.00% done, 0h0m to go
config:

        NAME                STATE     READ WRITE CKSUM
        filerbackup13
2008 Nov 26
9
ZPool and Filesystem Sizing - Best Practices?
Hello, We have a new Thor here with 24TB of disk in it (first of many, hopefully). We are trying to determine the best practices with respect to file system management and sizing. Previously, we have tried to keep each file system to a max size of 500GB to make sure we could fit it all on a single tape, and to minimise restore times and impact should we experience some kind of volume
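If the 500GB-per-filesystem convention is kept, ZFS quotas make the cap enforceable while every filesystem still draws from the shared pool; a sketch with hypothetical dataset names:

  # cap each filesystem at one tape's worth of data
  zfs create -o quota=500G tank/projects/fs01
  zfs set quota=500G tank/projects/fs02     # or adjust an existing filesystem later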