I created a raidz from three 70GB disks and got a total of 200GB out of it. Isn't that supposed to give 140GB? Here are some details:

# zpool status zpoll_c2raidz
  pool: zpoll_c2raidz
 state: ONLINE
 scrub: none requested
config:

        NAME            STATE     READ WRITE CKSUM
        zpoll_c2raidz   ONLINE       0     0     0
          raidz         ONLINE       0     0     0
            c2t10d0     ONLINE       0     0     0
            c2t11d0     ONLINE       0     0     0
            c2t12d0     ONLINE       0     0     0

errors: No known data errors
# df -k /zpoll_c2raidz
Filesystem            kbytes    used     avail capacity  Mounted on
zpoll_c2raidz        210567168      49 210564184     1%  /zpoll_c2raidz

Thanks,
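A rough sketch of how to compare the pool's raw size with the space actually available to datasets (standard commands; on current bits zpool list reports the raw vdev size including parity, while zfs list reports what datasets can use, so the outputs are expected to differ roughly as commented):

    # zpool list zpoll_c2raidz   # raw size of the vdevs, parity included (~210G here)
    # zfs list zpoll_c2raidz     # space usable by datasets (~140G expected for 3x70G raidz)
    # df -k /zpoll_c2raidz       # what statvfs()/df report for the mounted dataset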
On 11/6/06, Vahid Moghaddasi <vahid at cckeeper.com> wrote:
> I created a raidz from three 70GB disks and got a total of 200GB out of it. Isn't that supposed to give 140GB? Here are some details:
> # zpool status zpoll_c2raidz
>   pool: zpoll_c2raidz
>  state: ONLINE
>  scrub: none requested
> config:
>
>         NAME            STATE     READ WRITE CKSUM
>         zpoll_c2raidz   ONLINE       0     0     0
>           raidz         ONLINE       0     0     0
>             c2t10d0     ONLINE       0     0     0
>             c2t11d0     ONLINE       0     0     0
>             c2t12d0     ONLINE       0     0     0
>
> errors: No known data errors
> # df -k /zpoll_c2raidz
> Filesystem            kbytes    used     avail capacity  Mounted on
> zpoll_c2raidz        210567168      49 210564184     1%  /zpoll_c2raidz

Disclaimer: I am not an expert. But based on my experiences and readings... I believe that parity is maintained (somewhat) on the filesystem level. That is, if you write a ~66MB file, in actuality you use ~100MB of pool space. Modest tests I did just now with a new raidz pool support this.

Someone please correct me if I am wrong.

-- 
Eric Enright
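A minimal sketch of that kind of test (the path and the 66MB size are arbitrary; mkfile writes zero-filled blocks, which are stored in full with compression at its default of off, and the ~100MB du figure is only what the bug described in the next message would show):

    mkfile 66m /zpoll_c2raidz/testfile
    ls -l /zpoll_c2raidz/testfile    # logical length: ~66MB
    du -k /zpoll_c2raidz/testfile    # space charged to the file; ~100MB on affected builds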
Vahid Moghaddasi wrote:
> I created a raidz from three 70GB disks and got a total of 200GB out
> of it. Isn't that supposed to give 140GB?

You are hitting

  6288488 du reports misleading size on RAID-Z

which affects pools created before build 42 or s10u3.

--matt
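If you are not sure which bits you are running, a quick (approximate) check is:

    cat /etc/release   # first line names the release, e.g. Solaris 10 6/06 or a Nevada build
    uname -v           # on Nevada/OpenSolaris builds this prints something like snv_42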
When creating a raidz pool out of n disks, where n >= 2, the pool gets the size of the smallest disk multiplied by n:

# zpool create -f newpool raidz c1t12d0 c1t10d0 c1t13d0
# zpool list
NAME                    SIZE    USED   AVAIL    CAP  HEALTH     ALTROOT
newpool                 139G    141K    139G     0%  ONLINE     -

c1t10d0 <SEAGATE-ST150176LW-0002-46.58GB>
c1t12d0 <SEAGATE-ST1181677LWV-0002-169.09GB>
c1t13d0 <SEAGATE-ST3146807LW-0004-136.73GB>

SIZE should be 351GB and not 139GB. What happened to the rest of the space?
I have Solaris 10 06/06 patched with all patches.

Looks like you have to use disks of the same size for raidz.

borut.
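A back-of-the-envelope check of the 139G figure, using the sizes from the disk labels above (numbers are approximate):

    nawk 'BEGIN {
        smallest = 46.58                                          # GB, c1t10d0
        n = 3
        printf("raw raidz size : %.1f GB\n", smallest * n)        # ~139.7, the 139G zpool list shows
        printf("usable (n-1)   : %.1f GB\n", smallest * (n - 1))  # ~93.2 after one disk worth of parity
    }'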
Podlipnik wrote:
> When creating a raidz pool out of n disks, where n >= 2, the pool gets
> the size of the smallest disk multiplied by n:
>
> # zpool create -f newpool raidz c1t12d0 c1t10d0 c1t13d0
> # zpool list
> NAME                    SIZE    USED   AVAIL    CAP  HEALTH     ALTROOT
> newpool                 139G    141K    139G     0%  ONLINE     -
>
> c1t10d0 <SEAGATE-ST150176LW-0002-46.58GB>

46.58G * 3 = 139G

> c1t12d0 <SEAGATE-ST1181677LWV-0002-169.09GB>
> c1t13d0 <SEAGATE-ST3146807LW-0004-136.73GB>
>
> SIZE should be 351GB and not 139GB. What happened to the rest of the space?
> I have Solaris 10 06/06 patched with all patches.
>
> Looks like you have to use disks of the same size for raidz.

Yes, this is true for most, if not all, RAID-5-like implementations.
If you want more space, then you might need to get more clever in your
usage. For example, if you slice as follows:

   disk1     disk2     disk3     raid      space
   ----------------------------------------------
   46.58     46.58               mirror    46.58
             122.51    122.51    mirror    122.51
   total                                   169.09

Obviously, there are many more combinations possible and my example may
not meet your requirements.
 -- richard
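A hypothetical sketch of that layout as zpool commands (the pool name and slice numbers are made up; you would first have to partition c1t12d0 and c1t13d0 with format(1M) so the slices line up with the sizes in the table above):

    # mirror 1: whole 46.58GB disk + a ~46.58GB slice of the 169GB disk
    # mirror 2: the remaining ~122.51GB slice of the 169GB disk + a ~122.51GB slice of the 136GB disk
    zpool create tank \
        mirror c1t10d0   c1t12d0s0 \
        mirror c1t12d0s1 c1t13d0s0
    zpool list tank      # should come out to roughly 169G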
What is even stranger is that if I copy a file into a raidz configuration (let's say /usr/bin/ls), we get different results out of regular commands:

ls -l /usr/bin/ls /mapool/ls  => same size
ls -s /usr/bin/ls /mapool/ls  => different sizes
du -s /usr/bin/ls /mapool/ls  => different sizes

Shouldn't those values be similar?

I can understand that the zpool command shows parity as part of the total size, but the usual Unix commands should give the exact size.
On Mon, 2006-11-27 at 06:11 -0800, Marlanne DeLaSource wrote:
> What is even stranger is that if I copy a file into a raidz configuration (let's say /usr/bin/ls), we get different results out of regular commands:
>
> ls -l /usr/bin/ls /mapool/ls  => same size
> ls -s /usr/bin/ls /mapool/ls  => different sizes
> du -s /usr/bin/ls /mapool/ls  => different sizes
>
> Shouldn't those values be similar?

No, and you can even see different numbers between different UFS partitions under certain circumstances.

ls -l shows the logical length in bytes of the file.

ls -s and du count the number of 512-byte disk blocks occupied by the file; the on-disk footprint is an implementation artifact of the filesystem. On UFS, this number has always included the indirect blocks and excluded file holes (ranges of zeros not backed by allocated blocks); moreover, depending on the file system block size you use at newfs time, ls -s and du can show different values when the same file is copied between filesystems with different block sizes.

Given that ZFS already does other things quite differently, it shouldn't be that surprising that files occupy a different on-disk footprint...

					- Bill
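A small sketch of the hole behaviour described above (on Solaris, mkfile -n creates a file of the stated length without allocating its data blocks; the path and size are arbitrary):

    mkfile -n 10m /var/tmp/sparse
    ls -l /var/tmp/sparse    # logical length: 10485760 bytes
    ls -s /var/tmp/sparse    # 512-byte blocks actually allocated: only a handful
    du -k /var/tmp/sparse    # same story, in kilobytes
    rm /var/tmp/sparse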
I can understand that ls is giving you the "logical" size of the file and du the "physical" size of the file (the on-disk footprint).

But then, how do you explain that, when using a mirrored pool, ls and du return exactly the same size? According to your reasoning, du should return twice the logical size returned by ls.

The main problem (in my opinion) is the lack of consistency.
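For reference, a sketch of the comparison being described (pool and disk names here are purely hypothetical, and the pools would be created only for the test):

    # one mirrored pool, one raidz pool, same source file copied into each
    zpool create mirpool mirror c3t0d0 c3t1d0
    zpool create rzpool  raidz  c3t2d0 c3t3d0 c3t4d0
    cp /usr/bin/ls /mirpool/ls
    cp /usr/bin/ls /rzpool/ls
    ls -ls /usr/bin/ls /mirpool/ls /rzpool/ls   # logical length vs allocated blocks
    du -k  /mirpool/ls /rzpool/ls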