search for: raidz3

Displaying 20 results from an estimated 27 matches for "raidz3".

2010 Aug 06
3
Reconfigure zpool
I have a zpool like this:

  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          raidz3-0  ONLINE       0     0     0
            c6t0d0  ONLINE       0     0     0
            c6t1d0  ONLINE       0     0     0
            c6t2d0  ONLINE       0     0     0
            c6t3d0  ONLINE       0     0     0
            c6t4d0  ONLINE       0     0     0
            c6t5d0  ONLINE       0     0     0
            c6t6d0  ONLINE       0     0     0
            ...
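For reference, a single-vdev raidz3 pool like the one above is built in one zpool create call; a minimal sketch, with the c6t*d0 device names treated as placeholders rather than taken from the thread:

  # Sketch only: one raidz3 vdev over seven placeholder disks,
  # then confirm the layout that zpool status reports.
  zpool create tank raidz3 c6t0d0 c6t1d0 c6t2d0 c6t3d0 c6t4d0 c6t5d0 c6t6d0
  zpool status tank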
2010 Oct 20
5
Myth? 21 disk raidz3: "Don't put more than ___ disks in a vdev"
In a discussion a few weeks back, it was mentioned that the Best Practices Guide says something like "Don't put more than ___ disks into a single vdev." At first, I challenged this idea, because I see no reason why a 21-disk raidz3 would be bad. It seems like a good thing. I was operating on the assumption that resilver time was limited by the sustainable throughput of the disks, which was wrong. At present, resilver time is limited by random IO, so the ZFS resilver time is typically much longer than it would be if you were resilverin...
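A rough back-of-envelope illustration of the difference (assumed figures, not measurements from this thread): if resilver were throughput-bound, rewriting a 2 TB disk at roughly 100 MB/s would take about 2,000,000 MB / 100 MB/s, i.e. around 20,000 s or 5.5 hours; if it is instead bound by small random reads at perhaps 100-200 IOPS per surviving disk, walking the allocated blocks in temporal order can stretch the same job into days, and a wider vdev means every reconstructed block has more pieces that must be read back.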
2010 Feb 16
2
Speed question: 8-disk RAIDZ2 vs 10-disk RAIDZ3
I currently am getting good speeds out of my existing system (8x 2TB in a RAIDZ2 exported over fibre channel) but there's no such thing as too much speed, and these other two drive bays are just begging for drives in them.... If I go to 10x 2TB in a RAIDZ3, will the extra spindles increase speed, or will the extra parity writes reduce speed, or will the two factors offset and leave things a wash? (My goal is to be able to survive one controller failure, so if I add more drives I'll have to add redundancy to compensate for the fact that o...
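For a ballpark on the write side (illustration only): an 8-disk raidz2 spreads each full stripe over 8 - 2 = 6 data disks, while a 10-disk raidz3 spreads it over 10 - 3 = 7, so large sequential transfers pick up roughly one extra spindle of bandwidth while every write also computes and stores one additional parity column; whether that nets out faster depends on the workload and the controller, not just the disk count.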
2010 Mar 26
23
RAID10
Hi All, I am looking at ZFS and I get that they call it RAIDZ which is similar to RAID 5, but what about RAID 10? Isn't a RAID 10 setup better for data protection? So if I have 8 x 1.5tb drives, wouldn't I: - mirror drive 1 and 5 - mirror drive 2 and 6 - mirror drive 3 and 7 - mirror drive 4 and 8 Then stripe 1,2,3,4 Then stripe 5,6,7,8 How does one do this with ZFS?
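In ZFS terms that layout is simply a pool made of mirror vdevs; ZFS stripes across the top-level vdevs on its own, so there is no separate stripe step. A sketch with placeholder device names:

  # Four two-way mirrors; ZFS dynamically stripes writes across them,
  # giving the RAID10-style layout described above (placeholder names).
  zpool create tank mirror c0t1d0 c0t5d0 mirror c0t2d0 c0t6d0 \
                    mirror c0t3d0 c0t7d0 mirror c0t4d0 c0t8d0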
2010 Apr 24
3
ZFS RAID-Z2 degraded vs RAID-Z1
...I'll swap it in for the sparse file and let it resilver. Can someone with a stronger understanding of ZFS tell me why a degraded RaidZ2 (minus one disk) is less efficient than RaidZ1? (Besides the fact that your pools are always reported as degraded.) I guess the same would apply with RaidZ2 vs RaidZ3 minus 1 disk. Thanks -- This message posted from opensolaris.org
2010 Jan 16
95
Best 1.5TB drives for consumer RAID?
Which consumer-priced 1.5TB drives do people currently recommend? I had zero read/write/checksum errors so far in 2 years with my trusty old Western Digital WD7500AAKS drives, but now I want to upgrade to a new set of drives that are big, reliable and cheap. As of Jan 2010 it seems the price sweet spot is the 1.5TB drives. As I had a lot of success with Western Digital drives I thought I would
2011 Jul 30
7
NexentaCore 3.1 - ZFS V. 28
apt-get update
apt-clone upgrade
Any first impressions? -- Eugen Leitl, http://leitl.org
2010 Apr 27
42
Performance drop during scrub?
Hi all I have a test system with snv134 and 8x2TB drives in RAIDz2 and currently no Zil or L2ARC. I noticed the I/O speed to NFS shares on the testpool drops to something hardly usable while scrubbing the pool. How can I address this? Will adding Zil or L2ARC help? Is it possible to tune down scrub's priority somehow? Best regards roy -- Roy Sigurd Karlsbakk (+47) 97542685 roy at
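One knob people pointed at in that era was throttling the scanner itself; a sketch, assuming this snv_134 build exposes the zfs_scrub_delay tunable (check the Evil Tuning Guide for your build before poking kernel variables with mdb):

  # Assumed tunable name; inserts a delay (in ticks) between scrub I/Os.
  echo "zfs_scrub_delay/W0t10" | mdb -kw    # throttle the scrub
  echo "zfs_scrub_delay/D" | mdb -k         # read back the current value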
2010 Nov 18
5
RAID-Z/mirror hybrid allocator
Hi, I'm referring to: http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6977913 It should be in Solaris 11 Express; has anyone tried this? How is this supposed to work? Any documentation available? Yours Markus Kovero
2010 Nov 01
6
Excruciatingly slow resilvering on X4540 (build 134)
Hello, I'm working with someone who replaced a failed 1TB drive (50% utilized), on an X4540 running OS build 134, and I think something must be wrong. Last Tuesday afternoon, zpool status reported: scrub: resilver in progress for 306h0m, 63.87% done, 173h7m to go and a week being 168 hours, that put completion at sometime tomorrow night. However, he just reported zpool status shows:
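For what it's worth, the numbers zpool printed are at least self-consistent: 306 h elapsed at 63.87% done implies 306 / 0.6387, about 479 h total, and 479 - 306 is roughly the 173 h remaining that it reports, i.e. close to 20 days to resilver a single half-full 1 TB drive, which is what makes the report look so suspicious.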
2010 Nov 07
0
zfs under medium load causes SMB to delay writes
...ssage. Crossposting to zfs-discuss (where it perhaps primarily belongs) and to cifs-discuss, which also relates. > Hi, > > I have an I/O load issue and after days of searching > wanted to know if anyone has pointers on how to > approach this. > > My 1-year stable zfs system (raidz3 8 2TB drives, all > OK) just started to cause problems when I introduced > a new backup script that puts medium I/O load. This > script simply tars up a few filesystems and md5sums > the tarball, to copy to another system for off > OpenSolaris backup. The simple commands are: > &...
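The exact commands are cut off in the excerpt, but a job of the kind described would look roughly like this hypothetical sketch (dataset and paths invented for illustration):

  #!/bin/sh
  # Hypothetical backup job: tar a filesystem, checksum the tarball,
  # then copy both off-host. All names below are placeholders.
  tar cf /backup/home.tar /tank/home
  digest -a md5 /backup/home.tar > /backup/home.tar.md5
  scp /backup/home.tar /backup/home.tar.md5 backuphost:/dumps/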
2012 Feb 18
6
Cannot mount encrypted filesystems.
Looking for help regaining access to encrypted ZFS file systems that stopped accepting the encryption key. I have a file server with a setup as follows: Solaris 11 Express 2010.11/snv_151a, 8 x 2-TB disks, each one divided into three equal-size partitions, three raidz3 pools built from a "slice" across matching partitions:

  Disk 1      Disk 8    zpools
  +--+        +--+
  |p1|   ..   |p1|   <- slice_0
  +--+        +--+
  |p2|   ..   |p2|   <- slice_1
  +--+        +--+
  |p3|   ..   |p3|   <- slice_2
  +--+        +--+

zpool status shows: ... NAME STATE slice_0...
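For context, on Solaris 11 Express an encrypted dataset's wrapping key is normally reloaded with zfs key before mounting; a sketch, assuming a passphrase keysource and a placeholder dataset name:

  # Placeholder dataset; prompts for the wrapping-key passphrase,
  # after which the filesystem can be mounted as usual.
  zfs key -l slice_0/secure
  zfs mount slice_0/secure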
2011 Dec 15
31
Can I create a mirror for a root rpool?
On Solaris 10 If I install using ZFS root on only one drive is there a way to add another drive as a mirror later? Sorry if this was discussed already. I searched the archives and couldn't find the answer. Thank you.
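Yes; on Solaris 10 the usual recipe is to attach a second disk to the root pool and then install boot blocks on it. A sketch with placeholder device names (x86 shown; SPARC uses installboot instead of installgrub, and the new disk needs an SMI-labelled slice first):

  # Attaching converts the single-disk rpool vdev into a two-way mirror.
  zpool attach rpool c0t0d0s0 c0t1d0s0
  # Once resilvering finishes, make the second disk bootable (x86).
  installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0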
2010 Jun 04
5
Depth of Scrub
Hi, I have a small question about the depth of scrub in a raidz/2/3 configuration. I'm quite sure scrub does not check spares or unused areas of the disks (it could check whether the disks detect any errors there). But what about the parity? Obviously it has to be checked, but I can't find any indication of it in the literature. The man page only states that the data is being
2010 Apr 26
23
SAS vs SATA: Same size, same speed, why SAS?
I'm building another 24-bay rackmount storage server, and I'm considering what drives to put in the bays. My chassis is a Supermicro SC846A, so the backplane supports SAS or SATA; my controllers are LSI3081E, again supporting SAS or SATA. Looking at drives, Seagate offers an enterprise (Constellation) 2TB 7200RPM drive in both SAS and SATA configurations; the SAS model offers
2010 Jul 21
5
slog/L2ARC on a hard drive and not SSD?
Hi, Out of pure curiosity, I was wondering, what would happen if one tries to use a regular 7200RPM (or 10K) drive as slog or L2ARC (or both)? I know these are designed with SSDs in mind, and I know it's possible to use anything you want as cache. So would ZFS benefit from it? Would it be the same? Would it slow down? I guess it would slow things down, because it would be trying to
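Mechanically nothing prevents it; log and cache devices are added like any other vdev, so the question is only whether a spinning disk's latency helps. A sketch with placeholder device names:

  # Any block device is accepted; an HDD slog bounds sync-write latency
  # by seek time, and an HDD L2ARC rarely beats reading the pool itself.
  zpool add tank log c2t0d0      # separate intent log (slog)
  zpool add tank cache c2t1d0    # L2ARC device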
2010 Jul 19
6
Performance advantages of a pool with 2x raidz2 vdevs vs. a single vdev
Hi guys, I am about to reshape my data pool and am wondering what performance difference I can expect from the new config vs. the old. The old config is a pool with a single vdev of 8 disks in raidz2. The new pool config is 2 vdevs of 7-disk raidz2 in a single pool. I understand it should be better, with higher IO throughput and better read/write rates, but I am interested to hear the science
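For comparison, the two layouts differ only in how the top-level vdevs are declared at creation time; a sketch with placeholder device names:

  # Old layout: one 8-disk raidz2 vdev.
  zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0

  # New layout: two 7-disk raidz2 vdevs; ZFS stripes across both top-level
  # vdevs, which mainly buys more random IOPS at the cost of two extra
  # parity disks' worth of capacity.
  zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 \
                    raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0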
2011 Mar 01
5
btrfs wishlist
Hi all Having managed ZFS for about two years, I want to post a wishlist.

INCLUDED IN ZFS
- Mirror existing single-drive filesystem, as in 'zfs attach'
- RAIDz-stuff - single and hopefully multiple-parity RAID configuration with block-level checksumming
- Background scrub/fsck
- Pool-like management with multiple RAIDs/mirrors (VDEVs)
- Autogrow as in ZFS autoexpand

NOT
2009 Sep 08
4
Can ZFS simply concatenate LUNs (eg no RAID0)?
Hi, I do have a disk array that is providing striped LUNs to my Solaris box. Hence I'd like to simply concat those LUNs without adding another layer of striping. Is this possible with ZFS? As far as I understood, if I use zpool create myPool lun-1 lun-2 ... lun-n I will get a RAID0 striping where each data block is split across all "n" LUNs. If that's
2013 Oct 24
4
ZFS on Linux in production?
We are a CentOS shop, and have the lucky, fortunate problem of having ever-increasing amounts of data to manage. EXT3/4 becomes tough to manage when you start climbing, especially when you have to upgrade, so we're contemplating switching to ZFS. As of last spring, it appears that ZFS On Linux http://zfsonlinux.org/ calls itself production ready despite a version number of 0.6.2, and