We are using the following equipment:

- 12 x WD RE3 1TB SATA
- 1 x LSI 1068E HBA
- Supermicro expander
- Xeon 5520 / 12 GB memory

We're having very slow read performance on our SAN/NAS. We have one raidz2 pool of 12 devices. We use the pool for iSCSI (XenServer virtual machines) plus a CIFS share. The pool feels very unresponsive, and the CIFS share in particular is very slow.

bonnie benchmark:

Version 1.03b       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
san          25000M           335044  49  71691  16           128556  13 204.9   1

Is it true that a raidz2 pool has a read capacity equal to the slowest disk's IOPS? A 128 KB block in a 12-wide raidz2 vdev will be split into 128 / (12 - 2) = 12.8 KB chunks per data disk; does this hurt performance?

What bothers me is iostat: asvc_t shows values below 10 ms most of the time, but then jumps above 10 ms even when the disks are relatively idle.

                     extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    4.1   11.2   32.0   21.7  0.0  0.0    0.0    2.4   0   3 c0t10d0
    4.1   12.4   33.7   22.9  0.0  0.0    0.0    2.6   0   3 c0t11d0
    3.8   12.7   32.8   23.5  0.0  0.0    0.0    2.2   0   3 c0t12d0
    4.0   12.0   33.7   22.6  0.0  0.0    0.0    2.6   0   3 c0t13d0
    4.2   11.0   32.3   21.9  0.0  0.0    0.0    2.5   0   3 c0t14d0
    4.2   11.6   32.8   23.3  0.0  0.0    0.0    2.2   0   3 c0t15d0
    3.8   11.9   32.8   22.7  0.0  0.0    0.0    2.3   0   3 c0t16d0
    4.1   11.7   33.9   23.3  0.0  0.0    0.0    2.1   0   2 c0t18d0
    4.1   12.3   32.7   22.7  0.0  0.0    0.0    2.2   0   3 c0t19d0
    3.8   11.1   32.2   21.6  0.0  0.0    0.0    2.3   0   3 c0t17d0
    3.8   10.6   32.6   20.9  0.0  0.0    0.0    2.5   0   3 c0t21d0
    4.1   11.2   34.0   21.9  0.0  0.0    0.0    2.2   0   3 c0t20d0
    0.0    6.3    0.0   26.5  0.0  0.0    0.0    0.2   0   0 c1t1d0
                     extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.7    0.4    0.7    2.6  0.0  0.0    0.0   10.3   0   1 c0t10d0
    0.7    0.4    0.6    2.6  0.0  0.0    0.0   12.4   0   1 c0t11d0
    0.6    0.4    0.4    2.6  0.0  0.0    0.0   10.2   0   1 c0t12d0
    0.4    0.4    0.3    2.6  0.0  0.0    0.0    6.5   0   0 c0t13d0
    0.5    0.4    0.3    2.6  0.0  0.0    0.0    3.1   0   0 c0t14d0
    0.5    0.4    0.4    2.6  0.0  0.0    0.0   10.7   0   1 c0t15d0
    0.5    0.4    0.4    2.6  0.0  0.0    0.0   10.6   0   1 c0t16d0
    0.5    0.4    0.5    2.6  0.0  0.0    0.0    9.1   0   0 c0t18d0
    0.5    0.4    0.5    2.6  0.0  0.0    0.0   10.7   0   1 c0t19d0
    0.7    0.4    0.6    2.6  0.0  0.0    0.0   12.3   0   1 c0t17d0
    0.7    0.4    0.5    2.5  0.0  0.0    0.0   11.6   0   1 c0t21d0
    0.7    0.4    0.6    2.6  0.0  0.0    0.0   10.2   0   1 c0t20d0
                     extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.4    0.0    0.2    0.0  0.0  0.0    0.0   18.0   0   0 c0t10d0
    0.6    0.0    0.5    0.0  0.0  0.0    0.0   11.4   0   1 c0t11d0
    0.6    0.0    0.6    0.0  0.0  0.0    0.0   13.4   0   0 c0t12d0
    0.6    0.0    0.6    0.0  0.0  0.0    0.0   20.9   0   1 c0t13d0
    0.7    0.0    0.6    0.0  0.0  0.0    0.0   16.4   0   1 c0t14d0
    0.7    0.0    0.5    0.0  0.0  0.0    0.0   13.5   0   1 c0t15d0
    0.7    0.0    0.5    0.0  0.0  0.0    0.0   11.1   0   1 c0t16d0
    0.6    0.0    0.3    0.0  0.0  0.0    0.0   10.8   0   1 c0t18d0
    0.3    0.0    0.2    0.0  0.0  0.0    0.0   16.7   0   0 c0t19d0
    0.7    0.0    0.4    0.0  0.0  0.0    0.0   17.1   0   1 c0t17d0
    0.6    0.0    0.5    0.0  0.0  0.0    0.0   13.5   0   1 c0t21d0
    0.5    0.0    0.4    0.0  0.0  0.0    0.0   14.4   0   1 c0t20d0

-- 
This message posted from opensolaris.org
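(Aside on the arithmetic in the question above: a minimal Python sketch of how one record is split across the vdev. The recordsize and vdev width are simply the values quoted in this post; the per-disk figure is plain arithmetic, not a measurement from this system.)

    # How one ZFS record is split across a raidz2 vdev (arithmetic only).
    RECORDSIZE_KB = 128      # default ZFS recordsize, as assumed in the post
    VDEV_WIDTH = 12          # disks in the raidz2 vdev
    PARITY = 2               # raidz2 = two parity disks per stripe

    data_disks = VDEV_WIDTH - PARITY
    chunk_kb = RECORDSIZE_KB / data_disks
    print(f"{RECORDSIZE_KB} KB record / {data_disks} data disks = "
          f"{chunk_kb:.1f} KB per disk")
    # Output: 128 KB record / 10 data disks = 12.8 KB per disk
    #
    # Reading one full record therefore touches all 10 data disks, each
    # performing a small ~12.8 KB I/O, i.e. one seek on every disk per
    # logical read.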
> From: zfs-discuss-bounces at opensolaris.org [mailto:zfs-discuss-bounces at opensolaris.org] On Behalf Of Bueno
>
> Is it true that a raidz2 pool has a read capacity equal to the slowest
> disk's IOPS?

No, but there's a grain of truth there.

Random reads:

* If you have a single process issuing random reads, and waiting for the
  results of each read before issuing the next one, then your performance
  will be even worse than a single disk, possibly as bad as 50%. This is
  the situation you are asking about, so yes, it's possible for this to
  happen. This might happen if you decide to tar up or copy a whole
  directory tree, or something like that.

* If you have several processes, each issuing random reads, then each disk
  will have a bunch of commands queued up, and each disk will fetch them
  all as fast as possible. Each read request will only be satisfied when
  all (or enough) of the disks have obtained valid data. In my
  benchmarking, a 5-6 disk raidzN was able to do random reads approx 2x
  faster than a single disk.

Sequential reads:

* The raidzN will far outperform a single disk.
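(To put rough numbers on the two random-read cases above, a toy Python model. The per-disk IOPS figure is an assumed value for a 7200 RPM SATA disk, and the 0.5x and 2x factors are just the estimates quoted in this reply, not measurements.)

    # Toy model of the random-read cases described above (not a benchmark).
    DISK_IOPS = 75.0   # assumed random-read IOPS of one 7200 RPM SATA disk

    # Case 1: a single process, each read waiting on the previous one.
    # Every logical read must wait for all data disks holding that record,
    # so the vdev runs at or below single-disk speed ("possibly as bad as 50%").
    single_stream = DISK_IOPS * 0.5

    # Case 2: many concurrent readers.  Each disk keeps a queue of
    # outstanding commands and stays busy; the reply reports roughly 2x a
    # single disk for a 5-6 disk raidzN.
    concurrent = DISK_IOPS * 2.0

    print(f"single-stream random reads : ~{single_stream:.0f} IOPS")
    print(f"concurrent random reads    : ~{concurrent:.0f} IOPS")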