I have 7 x 1.5 TB disks in a raidz1 configuration, so as I understand it the system uses 1.5 TB (one disk) for parity. But when I use "df" on my newly created pool, the available space it reports is:

Filesystem            Size  Used Avail Use% Mounted on
bf                    8.0T   36K  8.0T   1% /bf

When I use zpool list it says:

NAME   SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
bf    9.50T   292K  9.50T     0%  ONLINE  -

The pool was created with the following command, and compression is set to off:

zpool create -f bf raidz1 c9t0d0 c9t1d0 c9t2d0 c9t3d0 c9t4d0 c9t5d0 c9t6d0

When I do the math: 7 x 1.5 TB = 10.5 TB, minus 1.5 TB for parity = 9.5 TB. So my questions are:

1. Why do I only have 8 TB in my bf pool?
2. Why do "zpool list" and "df" report different available disk space?

thanks
Per Jorgensen
ZFS and "df" use binary multipliers (1 KiB = 1024 bytes, and so on), while drive manufacturers use decimal prefixes (1 kB = 1000 bytes). So your 1.5 TB drives are in fact only about 1.36 TiB (binary terabytes) each:

7 x 1.36 TiB = 9.52 TiB, minus 1.36 TiB for parity = 8.16 TiB

--
Saso

On 08/06/2010 01:29 PM, Per Jorgensen wrote:
> 1. Why do I only have 8 TB in my bf pool?
> 2. Why do "zpool list" and "df" report different available disk space?
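A quick back-of-the-envelope check of those numbers, as a rough Python sketch; it assumes the drives are exactly 1.5 * 10^12 bytes and ignores ZFS metadata and reservation overhead, so the real figures come out a little lower:

# "1.5 TB" on the box is decimal; ZFS, df and zpool report binary units.
# Assumes ideal 1.5 * 10^12-byte drives; ignores label/metadata overhead.
DRIVE_BYTES = 1.5 * 10**12   # manufacturer's 1.5 TB
TIB = 2**40                  # one binary terabyte (TiB)

per_drive = DRIVE_BYTES / TIB
print(f"per drive:          {per_drive:.2f} TiB")      # ~1.36 TiB
print(f"7 drives raw:       {7 * per_drive:.2f} TiB")  # ~9.55 TiB
print(f"minus 1 for parity: {6 * per_drive:.2f} TiB")  # ~8.19 TiB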
Ahh, that explains it all. God damn that base-1000 standard, only useful for sales people :) Thanks for the help.

/pej
> From: Per Jorgensen <pej at combox.dk>
> Date: Fri, 06 Aug 2010 04:29:08 PDT
> To: <zfs-discuss at opensolaris.org>
> Subject: [zfs-discuss] Disk space on Raidz1 configuration
>
> 1. Why do I only have 8 TB in my bf pool?
> 2. Why do "zpool list" and "df" report different available disk space?

You only have 8 TB in your pool because your drives are "1.5 TB" counted with decimal prefixes (1 MB = 1000 * 1000 bytes), while df counts with binary prefixes (1 MB = 1024 * 1024 bytes). zpool list shows raw pool space, not usable space: raidz1 takes one disk's worth of space out of the pool to store parity data, so that part is not "usable" space.

--
Terry Hull
Network Resource Group, Inc.
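To make the raw-versus-usable distinction concrete, here is a small illustrative sketch in Python; the helper names are made up for this example, and the sizes again assume ideal 1.5 * 10^12-byte drives with no ZFS overhead:

# zpool list counts every device in the vdev, parity included;
# df / zfs list count only the space filesystems can actually use.
DRIVE_BYTES = 1.5 * 10**12
TIB = 2**40

def raw_pool_tib(disks):
    # roughly what `zpool list` reports: raw space across all disks
    return disks * DRIVE_BYTES / TIB

def usable_tib(disks, parity):
    # roughly what `df` reports: data disks only, parity excluded
    return (disks - parity) * DRIVE_BYTES / TIB

print(f"zpool list view: ~{raw_pool_tib(7):.2f} TiB")   # ~9.55; shown as 9.50T
print(f"df view:         ~{usable_tib(7, 1):.2f} TiB")  # ~8.19; shown as 8.0T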
> Ahh, that explains it all. God damn that base-1000 standard, only useful for sales people :)

As much as it all annoys me too, the SI prefixes are used correctly pretty much everywhere except in operating systems. A kilometer is not 1024 meters and a megawatt is not 1,048,576 watts. We, the IT community, grabbed a set of well-defined prefixes used by the rest of creation, redefined them, and then became angry that the remainder of civilization uses the correct terms. We have no one to blame but ourselves.