Hello, I've got a weird problem: ZFS does not seem to be utilizing all disks in my pool properly. For some reason, it's only using 2 of the 3 disks in my pool:

               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
database    8.48G  1.35T    202      0  12.4M      0
  c0t1d0    4.30G   460G    103      0  6.21M      0
  c0t3d0    4.12G   460G     96      0  6.00M      0
  c0t2d0    54.9M   464G      2      0   190K      0
----------  -----  -----  -----  -----  -----  -----

I've added all the disks at the same time, so it's not like the last disk was added later. Any ideas on what might be causing this? I'm using Solaris Express b62.
Hello Leon,

Thursday, May 10, 2007, 10:43:27 AM, you wrote:

LM> Hello,
LM> I've got a weird problem: ZFS does not seem to be utilizing
LM> all disks in my pool properly. For some reason, it's only using 2 of the 3 disks in my pool:
LM>
LM>                capacity     operations    bandwidth
LM> pool         used  avail   read  write   read  write
LM> ----------  -----  -----  -----  -----  -----  -----
LM> database    8.48G  1.35T    202      0  12.4M      0
LM>   c0t1d0    4.30G   460G    103      0  6.21M      0
LM>   c0t3d0    4.12G   460G     96      0  6.00M      0
LM>   c0t2d0    54.9M   464G      2      0   190K      0
LM> ----------  -----  -----  -----  -----  -----  -----
LM>
LM> I've added all the disks at the same time, so it's not like the
LM> last disk was added later. Any ideas on what might be causing this? I'm using Solaris Express b62.

Your third disk is 4GB larger than the first two, and ZFS tries to "load-balance" data so that all devices fill up evenly. Since you already have about 4GB on each of the first two disks, ZFS should start using the third disk once you copy additional data.

--
Best regards,
Robert                          mailto:rmilkowski at task.gda.pl
                                      http://milek.blogspot.com
Robert Milkowski wrote:
> Hello Leon,
>
> Thursday, May 10, 2007, 10:43:27 AM, you wrote:
>
> LM> I've got a weird problem: ZFS does not seem to be utilizing
> LM> all disks in my pool properly. For some reason, it's only using 2 of the 3 disks in my pool:
> [...]
>
> Your third disk is 4GB larger than the first two, and ZFS tries to
> "load-balance" data so that all devices fill up evenly. Since you
> already have about 4GB on each of the first two disks, ZFS should
> start using the third disk once you copy additional data.

No, it is not - the other two disks have 4G out of 464G used, and the disk in question has only 55M used. So to me it does not look like a weighting problem. I believe this is something else.

I'm not sure, but I suspect this may somehow be related to metadata allocation, given that ZFS stores two copies of file system metadata. But this is nothing more than a wild guess.

Leon, what kind of data is stored in this pool? What Solaris version are you using? How is your pool configured?

Cheers,
Victor
Simple test - mkfile 8gb now and see where the data goes... :)

Victor Latushkin wrote:
> Robert Milkowski wrote:
>> [...]
>> Your third disk is 4GB larger than the first two, and ZFS tries to
>> "load-balance" data so that all devices fill up evenly. Since you
>> already have about 4GB on each of the first two disks, ZFS should
>> start using the third disk once you copy additional data.
>
> No, it is not - the other two disks have 4G out of 464G used, and the
> disk in question has only 55M used. So to me it does not look like a
> weighting problem. I believe this is something else.
>
> I'm not sure, but I suspect this may somehow be related to metadata
> allocation, given that ZFS stores two copies of file system metadata.
> But this is nothing more than a wild guess.
>
> Leon, what kind of data is stored in this pool? What Solaris version
> are you using? How is your pool configured?
>
> Cheers,
> Victor
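A minimal sketch of that test, assuming the pool's filesystem sits at its default /database mountpoint (the path and the iostat interval are assumptions, not from the thread):

    # Write an 8 GB file of zeros into the pool...
    mkfile 8g /database/testfile

    # ...and watch how allocations and writes spread across the vdevs.
    zpool iostat -v database 5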
> I'm not sure, but I suspect this may somehow be related to metadata
> allocation, given that ZFS stores two copies of file system metadata.
> But this is nothing more than a wild guess.
>
> Leon, what kind of data is stored in this pool? What Solaris version
> are you using? How is your pool configured?

I'm currently running a PostgreSQL database on the pool; the transaction log file is on another (non-ZFS) disk, but that should be irrelevant.

I've really only done the basic configuration: I created a pool called 'database', added the 3 disks to it, enabled compression and set the recordsize to 8K to match my database's record size. Nothing more, nothing less - it has been running for about 24 hours now and it seems that the third disk is not being used. The disks in question are all identical.

I'm running Solaris Express Community Edition b62.

Thanks for any help!
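For reference, a hedged reconstruction of that setup - the device names and the 8K recordsize come from the thread, but the exact commands are an assumption about what was run, not a transcript:

    # Create a simple striped pool from the three disks (no redundancy).
    zpool create database c0t1d0 c0t2d0 c0t3d0

    # Enable compression and match the recordsize to the 8K database page size.
    zfs set compression=on database
    zfs set recordsize=8k database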
Hello Victor,

Thursday, May 10, 2007, 11:26:35 AM, you wrote:

VL> No, it is not - the other two disks have 4G out of 464G used, and the
VL> disk in question has only 55M used. So to me it does not look like a
VL> weighting problem. I believe this is something else.
VL> [...]

Yep, that's available space, not a size - you're right, my fault.

--
Best regards,
Robert                          mailto:rmilkowski at task.gda.pl
                                      http://milek.blogspot.com
> Simple test - mkfile 8gb now and see where the data goes... :)

Unless you've got compression=on, in which case you won't see anything!

cheers,
--justin
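Since mkfile writes zeros, and compression squeezes zeros down to almost nothing, a sketch of the same test with incompressible data (the file path is just an example; /dev/urandom is slow, but fine for a one-off test):

    # 8 GB of pseudo-random data in 1 MB blocks - this will not compress away.
    dd if=/dev/urandom of=/database/testfile bs=1024k count=8192

    # Then check where the blocks actually landed.
    zpool iostat -v database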
What does "zpool status database" say? This message posted from opensolaris.org
> What does "zpool status database" say?

Hello,

As far as I can see, there are no real errors:

-bash-3.00# zpool status database
  pool: database
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        database    ONLINE       0     0     0
          c0t1d0    ONLINE       0     0     0
          c0t3d0    ONLINE       0     0     0
          c0t2d0    ONLINE       0     0     0

errors: No known data errors
-bash-3.00#

Is there any way I can 'clear' all the metadata from the disks, so that ZFS starts learning about the disks again from scratch?

Regards,

Leon Mergen
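Before wiping anything, one non-destructive check is to dump the on-disk ZFS labels of the under-used disk and compare them with a healthy one; a sketch, where the s0 slice name is an assumption (whole-disk vdevs normally get an EFI label with the data on slice 0):

    # Print the four vdev labels ZFS keeps on each device.
    zdb -l /dev/dsk/c0t2d0s0
    zdb -l /dev/dsk/c0t1d0s0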
> As far as I can see, there are no real errors:
>
> -bash-3.00# zpool status database
>   pool: database
>  state: ONLINE
>  scrub: none requested
> config:
>
>         NAME        STATE     READ WRITE CKSUM
>         database    ONLINE       0     0     0
>           c0t1d0    ONLINE       0     0     0
>           c0t3d0    ONLINE       0     0     0
>           c0t2d0    ONLINE       0     0     0
>
> Is there any way I can 'clear' all the metadata from the disks, so that ZFS starts learning about the disks again from scratch?

dd if=/dev/zero of=/dev/dsk/c0t1d0

I had an issue with FreeBSD when I created a zpool on a hard drive which I had used previously, so it had a (FreeBSD) disklabel. I could create the zpool, but after a reboot the pool had status FAULTED. After zeroing the first part of the drive, it worked flawlessly.

--
regards
Claus

When lenity and cruelty play for a kingdom,
the gentlest gamester is the soonest winner.
                                 Shakespeare.
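A sketch of that cleanup on Solaris, assuming the pool has first been destroyed or the disk removed from it - the dd is destructive, and ZFS keeps two labels at the front and two at the end of the device, so the tail matters as well (the device/slice name and the oseek value are placeholders, not taken from the thread):

    # DESTRUCTIVE: only run against a disk that no longer belongs to an imported pool.
    # Wipe the front of the disk (old disklabels plus the first two ZFS labels).
    dd if=/dev/zero of=/dev/rdsk/c0t2d0s0 bs=1024k count=4

    # ZFS also stores two labels at the end of the device; oseek must be
    # set to (device size in MB) - 4, the number below is only a placeholder.
    dd if=/dev/zero of=/dev/rdsk/c0t2d0s0 bs=1024k oseek=474996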