Abdullah Al-Dahlawi
2010-Mar-05 09:46 UTC
[zfs-discuss] why L2ARC device is used to store files ?
Greeting All

I have created a pool that consists of a hard disk and an SSD as a cache:

  zpool create hdd c11t0d0p3
  zpool add hdd cache c8t0d0p0    (cache device)

I ran an OLTP benchmark to emulate a DBMS.

Once I ran the benchmark, the pool started creating the database files on the SSD cache device. Can anyone explain why this is happening? Isn't the L2ARC used to absorb data evicted from the ARC? Why is it being used this way?

-- 
Abdullah Al-Dahlawi
George Washington University
Department of Electrical & Computer Engineering
----
Check The Fastest 500 Super Computers Worldwide
http://www.top500.org/list/2009/11/100
Giovanni Tirloni
2010-Mar-05 10:13 UTC
[zfs-discuss] why L2ARC device is used to store files ?
On Fri, Mar 5, 2010 at 6:46 AM, Abdullah Al-Dahlawi <dahlawi at ieee.org> wrote:

> Once I ran the benchmark, the pool started creating the database files
> on the SSD cache device. Can anyone explain why this is happening?

Hello Abdullah,

I don't think I understand. How are you seeing files being created on the SSD disk?

You can check device usage with `zpool iostat -v hdd`. Please also send the output of `zpool status hdd`.

Thank you,

-- 
Giovanni Tirloni
sysdroid.com
Abdullah Al-Dahlawi
2010-Mar-05 10:41 UTC
[zfs-discuss] why L2ARC device is used to store files ?
Hi Giovanni

I was monitoring the SSD cache using `zpool iostat -v` as you suggested. The cache device within the pool showed persistent write IOPS while the benchmark was creating its ten 1GB files.

The benchmark even reported insufficient space and terminated, which proves that it was writing on the SSD cache (my HDD has 50GB of free space)!

Thanks

On Fri, Mar 5, 2010 at 5:13 AM, Giovanni Tirloni <gtirloni at sysdroid.com> wrote:

> I don't think I understand. How are you seeing files being created on
> the SSD disk?
>
> You can check device usage with `zpool iostat -v hdd`.

-- 
Abdullah Al-Dahlawi
PhD Candidate
George Washington University
Department of Electrical & Computer Engineering
----
Check The Fastest 500 Super Computers Worldwide
http://www.top500.org/list/2009/11/100
Giovanni Tirloni
2010-Mar-05 13:06 UTC
[zfs-discuss] why L2ARC device is used to store files ?
On Fri, Mar 5, 2010 at 7:41 AM, Abdullah Al-Dahlawi <dahlawi at ieee.org> wrote:

> The benchmark even reported insufficient space and terminated, which
> proves that it was writing on the SSD cache (my HDD has 50GB of free
> space)!

The L2ARC cache is not accessible to end-user applications. It's only used for reads that miss the ARC, and it's managed internally by ZFS.

I can't comment on the specifics of how ZFS evicts objects from ARC to L2ARC, but that should never give you insufficient-space errors. Your data is not getting stored in the cache device.

The writes you see on the SSD device are ZFS moving objects from ARC to L2ARC. It has to write data there, otherwise there is nothing to read back later when a read() misses the ARC cache and checks the L2ARC.

I don't know what your OLTP benchmark does, but my advice is to check whether it's really writing files in the 'hdd' zpool mount point.

-- 
Giovanni Tirloni
sysdroid.com
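A quick way to run that check (a sketch; it assumes the pool is mounted at the default /hdd and that the benchmark writes its files there):

  # Confirm where the pool's datasets are mounted
  zfs list -o name,used,avail,mountpoint -r hdd

  # See what the benchmark actually created under the mount point
  ls -lh /hdd
  du -sh /hdd

If the database files show up under /hdd and `zfs list` shows the space being consumed there, the writes on the SSD are just the L2ARC feed, not the files themselves.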
James Dickens
2010-Mar-06 02:02 UTC
[zfs-discuss] why L2ARC device is used to store files ?
Please post the output of `zpool status -v`.

Thanks

James Dickens

On Fri, Mar 5, 2010 at 3:46 AM, Abdullah Al-Dahlawi <dahlawi at ieee.org> wrote:

> Once I ran the benchmark, the pool started creating the database files
> on the SSD cache device. Can anyone explain why this is happening?
Abdullah Al-Dahlawi
2010-Mar-06 08:15 UTC
[zfs-discuss] why L2ARC device is used to store files ?
Hi James

Here is the output you've requested:

abdullah at HP_HDX_16:~/Downloads# zpool status -v
  pool: hdd
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        hdd         ONLINE       0     0     0
          c7t0d0p3  ONLINE       0     0     0
        cache
          c8t0d0p0  ONLINE       0     0     0

errors: No known data errors

  pool: rpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          c7t0d0s0  ONLINE       0     0     0

-----------------------

abdullah at HP_HDX_16:~/Downloads# zpool iostat -v hdd
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
hdd         1.96G  17.7G     10     64  1.27M  7.76M
  c7t0d0p3  1.96G  17.7G     10     64  1.27M  7.76M
cache           -      -      -      -      -      -
  c8t0d0p0  2.87G  12.0G      0     17    103  2.19M
----------  -----  -----  -----  -----  -----  -----

abdullah at HP_HDX_16:~/Downloads# kstat -m zfs
module: zfs    instance: 0
name:   arcstats    class: misc
        c                          2147483648
        c_max                      2147483648
        c_min                      268435456
        crtime                     34.558539423
        data_size                  2078015488
        deleted                    9816
        demand_data_hits           382992
        demand_data_misses         20579
        demand_metadata_hits       74629
        demand_metadata_misses     6434
        evict_skip                 21073
        hash_chain_max             5
        hash_chains                7032
        hash_collisions            31409
        hash_elements              36568
        hash_elements_max          36568
        hdr_size                   7827792
        hits                       481410
        l2_abort_lowmem            0
        l2_cksum_bad               0
        l2_evict_lock_retry        0
        l2_evict_reading           0
        l2_feeds                   1157
        l2_free_on_write           475
        l2_hdr_size                0
        l2_hits                    0
        l2_io_error                0
        l2_misses                  14997
        l2_read_bytes              0
        l2_rw_clash                0
        l2_size                    588342784
        l2_write_bytes             3085701632
        l2_writes_done             194
        l2_writes_error            0
        l2_writes_hdr_miss         0
        l2_writes_sent             194
        memory_throttle_count      0
        mfu_ghost_hits             9410
        mfu_hits                   343112
        misses                     33011
        mru_ghost_hits             4609
        mru_hits                   116739
        mutex_miss                 90
        other_size                 51590832
        p                          1320449024
        prefetch_data_hits         4775
        prefetch_data_misses       1694
        prefetch_metadata_hits     19014
        prefetch_metadata_misses   4304
        recycle_miss               484
        size                       2137434112
        snaptime                   1945.241664714

module: zfs    instance: 0
name:   vdev_cache_stats    class: misc
        crtime                     34.558587713
        delegations                3415
        hits                       5578
        misses                     3647
        snaptime                   1945.243484925

On Fri, Mar 5, 2010 at 9:02 PM, James Dickens <jamesd.wi at gmail.com> wrote:

> Please post the output of `zpool status -v`.

-- 
Abdullah Al-Dahlawi
PhD Candidate
George Washington University
Department of Electrical & Computer Engineering
----
Check The Fastest 500 Super Computers Worldwide
http://www.top500.org/list/2009/11/100
Fajar A. Nugraha
2010-Mar-06 11:37 UTC
[zfs-discuss] why L2ARC device is used to store files ?
On Sat, Mar 6, 2010 at 3:15 PM, Abdullah Al-Dahlawi <dahlawi at ieee.org> wrote:

> abdullah at HP_HDX_16:~/Downloads# zpool iostat -v hdd
>                capacity     operations    bandwidth
> pool         used  avail   read  write   read  write
> ----------  -----  -----  -----  -----  -----  -----
> hdd         1.96G  17.7G     10     64  1.27M  7.76M
>   c7t0d0p3  1.96G  17.7G     10     64  1.27M  7.76M

You only have 17.7GB of free space there, not the 50GB you said earlier.

-- 
Fajar
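The pool-level numbers can be double-checked directly (a sketch; `zpool list` reports raw pool capacity, `zfs list` reports the usable space the datasets see):

  # Raw pool size, allocation, and free space
  zpool list hdd

  # Usable space as seen by the datasets
  zfs list -r hdd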
Henrik Johansson
2010-Mar-06 12:58 UTC
[zfs-discuss] why L2ARC device is used to store files ?
Hello,

On Mar 5, 2010, at 10:46 AM, Abdullah Al-Dahlawi wrote:

> Isn't the L2ARC used to absorb data evicted from the ARC?

No, it is not. If we look in the source, there is a very good description of the L2ARC behavior:

http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/fs/zfs/arc.c

"1. There is no eviction path from the ARC to the L2ARC. Evictions from the ARC behave as usual, freeing buffers and placing headers on ghost lists. The ARC does not send buffers to the L2ARC during eviction as this would add inflated write latencies for all ARC memory pressure."

Regards

Henrik
http://sparcv9.blogspot.com
Andrey Kuzmin
2010-Mar-06 17:02 UTC
[zfs-discuss] why L2ARC device is used to store files ?
This is purely tactical, to avoid an L2ARC write penalty on eviction. You seem to have missed the very next paragraph:

3644 * 2. The L2ARC attempts to cache data from the ARC before it is evicted.
3645 *    It does this by periodically scanning buffers from the eviction-end of
3646 *    the MFU and MRU ARC lists, copying them to the L2ARC devices if they are
3647 *    not already there.

Regards,
Andrey

On Sat, Mar 6, 2010 at 3:58 PM, Henrik Johansson <henrikj at henkis.net> wrote:

> No, it is not. If we look in the source, there is a very good
> description of the L2ARC behavior: "1. There is no eviction path from
> the ARC to the L2ARC. ..."
Henrik Johansson
2010-Mar-06 17:13 UTC
[zfs-discuss] why L2ARC device is used to store files ?
Hello,

On Mar 6, 2010, at 6:02 PM, Andrey Kuzmin wrote:

> This is purely tactical, to avoid an L2ARC write penalty on eviction.
> You seem to have missed the very next paragraph.

My point was just that nothing is evicted from the ARC to the L2ARC. Of course, things that are evicted can be available in the L2ARC, but they are not pushed there when evicted. I was commenting on the question "Isn't the L2ARC used to absorb data evicted from the ARC?" Then no: the L2ARC absorbs non-evicted data from the ARC, which possibly gets evicted later. But it's just semantics.

Regards

Henrik
http://sparcv9.blogspot.com
James Dickens
2010-Mar-06 20:32 UTC
[zfs-discuss] why L2ARC device is used to store files ?
Hi,

Okay, it's not what I feared. The L2ARC is probably caching every bit of data and metadata you have written so far. Why shouldn't it? You have the space in the L2 cache, and it can't return data later if it was never in the cache. After the cache is full or nearly full, it will choose more carefully what to keep and what to throw away.
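If you want to watch this happening, the following rough sketch (statistic names taken from the arcstats output you posted; the 5-second interval is arbitrary) samples the L2ARC size and cumulative bytes written while the benchmark runs:

  # Sample L2ARC size and cumulative bytes written every 5 seconds
  while true; do
      kstat -p zfs:0:arcstats:l2_size zfs:0:arcstats:l2_write_bytes
      sleep 5
  done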
James Dickens
http://uadmin.blogspot.com

On Sat, Mar 6, 2010 at 2:15 AM, Abdullah Al-Dahlawi <dahlawi at ieee.org> wrote:

> Here is the output you've requested:
>
> abdullah at HP_HDX_16:~/Downloads# zpool iostat -v hdd
>                capacity     operations    bandwidth
> pool         used  avail   read  write   read  write
> ----------  -----  -----  -----  -----  -----  -----
> hdd         1.96G  17.7G     10     64  1.27M  7.76M
>   c7t0d0p3  1.96G  17.7G     10     64  1.27M  7.76M
> cache           -      -      -      -      -      -
>   c8t0d0p0  2.87G  12.0G      0     17    103  2.19M
Eric D. Mudama
2010-Mar-06 22:42 UTC
[zfs-discuss] why L2ARC device is used to store files ?
On Sat, Mar 6 at 3:15, Abdullah Al-Dahlawi wrote:

> hdd         ONLINE       0     0     0
>   c7t0d0p3  ONLINE       0     0     0
>
> rpool       ONLINE       0     0     0
>   c7t0d0s0  ONLINE       0     0     0

I trimmed your zpool status output a bit.

Are those two the same device? I'm barely familiar with Solaris partitioning and labels... what's the difference between a slice and a partition?

-- 
Eric D. Mudama
edmudama at mail.bounceswoosh.org
Richard Elling
2010-Mar-06 23:04 UTC
[zfs-discuss] why L2ARC device is used to store files ?
On Mar 6, 2010, at 2:42 PM, Eric D. Mudama wrote:

> Are those two the same device? I'm barely familiar with Solaris
> partitioning and labels... what's the difference between a slice and a
> partition?

In this context, "partition" is an fdisk partition and "slice" is an SMI- or EFI-labeled slice. The SMI or EFI labeling tools (format, prtvtoc, and fmthard) do not work on partitions. So when you choose to use ZFS on a partition, you have no tools other than fdisk to manage the space. This can lead to confusion... a bad thing.
-- richard

ZFS storage and performance consulting at http://www.RichardElling.com
ZFS training on deduplication, NexentaStor, and NAS performance
http://nexenta-atlanta.eventbrite.com (March 16-18, 2010)
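For example, the two layers are inspected with different tools (a sketch, reusing the device names from this thread; `fdisk -W -` dumps the partition table to stdout without modifying anything):

  # fdisk partitions (p* devices) are visible only to fdisk
  fdisk -W - /dev/rdsk/c7t0d0p0

  # SMI/EFI slices (s* devices) are shown by prtvtoc, which knows
  # nothing about how they map onto the fdisk partitions
  prtvtoc /dev/rdsk/c7t0d0s2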
Eric D. Mudama
2010-Mar-07 04:05 UTC
[zfs-discuss] why L2ARC device is used to store files ?
On Sat, Mar 6 at 15:04, Richard Elling wrote:

> In this context, "partition" is an fdisk partition and "slice" is an
> SMI- or EFI-labeled slice. ... So when you choose to use ZFS on a
> partition, you have no tools other than fdisk to manage the space.
> This can lead to confusion... a bad thing.

So in that context, is the above 'zpool status' snippet a "bad thing to do"?

-- 
Eric D. Mudama
edmudama at mail.bounceswoosh.org
Richard Elling
2010-Mar-07 05:05 UTC
[zfs-discuss] why L2ARC device is used to store files ?
On Mar 6, 2010, at 8:05 PM, Eric D. Mudama wrote:

> So in that context, is the above 'zpool status' snippet a "bad thing
> to do"?

If the partition containing c7t0d0s0 was p3, then it could be exceedingly bad. Normally, if you try to create a zpool on a slice which already has a zpool, you will get an error message to that effect, which you can override with the "-f" flag. However, that checking is done on slices, not fdisk partitions. Hence, there is an opportunity for confusion... a bad thing.
-- richard
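To illustrate, the in-use check fires at the slice level (an approximate transcript; exact error wording varies by build):

  $ zpool create test c7t0d0s0
  invalid vdev specification
  use '-f' to override the following errors:
  /dev/dsk/c7t0d0s0 is part of active ZFS pool rpool. Please see zpool(1M).

No equivalent warning is produced when a p* device overlaps a slice that already holds a pool, which is the trap described above.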
Abdullah Al-Dahlawi
2010-Mar-07 05:31 UTC
[zfs-discuss] why L2ARC device is used to store files ?
Hi all,

I might be a little bit confused, so I will try to ask my question in a simple way:

Why would a 16GB L2ARC device get filled by running a benchmark that uses a 2GB working set, while having a 2GB ARC max?

I know I am missing something here!

Thanks

On Sun, Mar 7, 2010 at 12:05 AM, Richard Elling <richard.elling at gmail.com> wrote:

> If the partition containing c7t0d0s0 was p3, then it could be
> exceedingly bad. ... However, that checking is done on slices, not
> fdisk partitions. Hence, there is an opportunity for confusion... a
> bad thing.

-- 
Abdullah Al-Dahlawi
PhD Candidate
George Washington University
Department of Electrical & Computer Engineering
----
Check The Fastest 500 Super Computers Worldwide
http://www.top500.org/list/2009/11/100
Richard Elling
2010-Mar-07 20:09 UTC
[zfs-discuss] why L2ARC device is used to store files ?
On Mar 6, 2010, at 9:31 PM, Abdullah Al-Dahlawi wrote:

> Why would a 16GB L2ARC device get filled by running a benchmark that
> uses a 2GB working set, while having a 2GB ARC max?

ZFS is COW, so if you are writing, then the changes are cached.
-- richard
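One way to see the copy-on-write effect (a sketch; the pool name, file path, and 1GB size are arbitrary) is to rewrite the same file repeatedly and watch l2_size grow well past the working set:

  # Create a 1GB file, then overwrite it in place a few times
  dd if=/dev/urandom of=/hdd/testfile bs=1024k count=1024
  for i in 1 2 3 4 5; do
      # Each pass allocates fresh blocks (copy-on-write); the newly
      # written buffers are candidates for the L2ARC feed thread
      dd if=/dev/urandom of=/hdd/testfile bs=1024k count=1024 conv=notrunc
      kstat -p zfs:0:arcstats:l2_size
  done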
Cindy Swearingen
2010-Mar-08 17:14 UTC
[zfs-discuss] why L2ARC device is used to store files ?
Good catch Eric, I didn't see this problem at first...

The problem here, as Richard described well, is that the ctdp* devices represent the larger fdisk partition, which might also contain a ctds* device. This means that in this configuration, c7t0d0p3 and c7t0d0s0 might share the same blocks.

My advice would be to copy the data from both the hdd pool and rpool and recreate these pools. My fear is that if you destroy the hdd pool, you will clobber your rpool data.

Recreating both pools using the two entire disks, and getting another disk for the cache device, would be less headache all around.

ZFS warns when you attempt to create this config, and we also have a current CR to prevent creating pools on p* devices.

Thanks,

Cindy

On 03/06/10 15:42, Eric D. Mudama wrote:

> hdd         ONLINE       0     0     0
>   c7t0d0p3  ONLINE       0     0     0
>
> rpool       ONLINE       0     0     0
>   c7t0d0s0  ONLINE       0     0     0
>
> Are those two the same device?
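A rough outline of that copy-and-recreate procedure for the hdd pool (a sketch only; /safe/location is a placeholder, the device names come from the original post, and the backup should be verified before anything is destroyed):

  # 1. Snapshot and copy the data somewhere safe first
  zfs snapshot -r hdd@backup
  zfs send -R hdd@backup > /safe/location/hdd.zfs

  # 2. Destroy the overlapping pool
  zpool destroy hdd

  # 3. Recreate it on a whole disk (no p*/s* suffix) so ZFS labels and
  #    manages the entire device, and use a separate disk for the cache
  zpool create hdd c11t0d0
  zpool add hdd cache c8t0d0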