similar to: l2arc-like ability in btrfs

Displaying 20 results from an estimated 9000 matches similar to: "l2arc-like ability in btrfs"

2012 Nov 14
3
SSD ZIL/L2ARC partitioning
Hi, I've ordered a new server with: - 4x 600GB Toshiba 10K SAS2 disks - 2x 100GB OCZ DENEVA 2R SYNC eMLC SATA (no expander, so I hope there are no SAS/SATA problems). Specs: http://www.oczenterprise.com/ssd-products/deneva-2-r-sata-6g-2.5-emlc.html I want to use the two OCZ SSDs as mirrored intent log devices, but as the intent log only needs a small amount of each disk (10GB?), I was wondering
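A common layout for this kind of setup is to slice each SSD, mirror a small slice as the intent log and give the remainder to the cache. A rough sketch, assuming slices s0 (small) and s1 (the rest) have already been created with format(1M) and the SSDs appear as c2t0d0/c2t1d0 (device and pool names are placeholders, not from the post):

    # small slice on each SSD, mirrored, as the separate intent log
    zpool add tank log mirror c2t0d0s0 c2t1d0s0
    # remaining slices as L2ARC cache devices (cache vdevs are never mirrored)
    zpool add tank cache c2t0d0s1 c2t1d0s1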
2012 Oct 22
2
What is L2ARC write pattern?
Hello all, A few months ago I saw a statement that L2ARC writes are simplistic in nature, and I got the (mis?)understanding that some sort of ring buffer may be in use, like for the ZIL. Is this true, and is the only write-performance metric that matters for an L2ARC SSD device its sequential write bandwidth (and IOPS)? In particular, there are some SD/MMC/CF cards for professional photography and
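One way to see what the L2ARC feed thread is actually writing on a live system is to sample the L2ARC counters in the arcstats kstat; a sketch, assuming Solaris/illumos kstat names:

    # cumulative bytes and write operations issued to the cache device(s)
    kstat -p zfs:0:arcstats:l2_write_bytes zfs:0:arcstats:l2_writes_sent
    sleep 10
    kstat -p zfs:0:arcstats:l2_write_bytes zfs:0:arcstats:l2_writes_sent

The delta between the two samples gives the effective L2ARC write rate over the interval.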
2010 Aug 03
2
When is the L2ARC refreshed if on a separate drive?
I'm running a mirrored pair of 2 TB SATA drives as my data storage drives on my home workstation, a Core i7-based machine with 10 GB of RAM. I recently added a SandForce-based 60 GB SSD (OCZ Vertex 2, NOT the Pro version) as an L2ARC to the single mirrored pair. I'm running B134, with ZFS pool version 22, with dedup enabled. If I understand correctly, the dedup table should be in
2010 Mar 15
1
persistent L2ARC
Greetings all, I understand that L2ARC is still under enhancement. Does anyone know if ZFS can be upgraded to include "persistent L2ARC", i.e. L2ARC that will not lose its contents after a system reboot? -- Abdullah Al-Dahlawi George Washington University Department of Electrical & Computer Engineering ---- Check The Fastest 500 Super Computers Worldwide
2010 Feb 20
6
l2arc current usage (population size)
Hello, How do you tell how much of your L2ARC is populated? I've been looking for a while now and can't seem to find it. It must be easy, as this blog entry shows it over time: http://blogs.sun.com/brendan/entry/l2arc_screenshots And as a follow-up, can you tell how much of each dataset is in the ARC or L2ARC? -- This message posted from opensolaris.org
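The counter usually pointed to for this is l2_size in the arcstats kstat (the same data those graphs are built from); a quick check, assuming Solaris/illumos kstat names:

    # bytes currently held on the cache device(s) and the RAM spent on L2ARC headers
    kstat -p zfs:0:arcstats:l2_size zfs:0:arcstats:l2_hdr_size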
2009 Dec 03
5
L2ARC in clusters
Hi, When deploying ZFS in a cluster environment it would be nice to be able to have some SSDs as local drives (not on the SAN), so that when a pool switches over to the other node ZFS would pick up that node's local disk drives as L2ARC. To clarify what I mean, let's assume there is a 2-node cluster with 1x 2540 disk array. Now let's put 4x SSDs in each node (as internal/local drives). Now
2010 Jul 10
2
block align SSD for use as a l2arc cache
I have an Intel X25-M 80GB SSD. For optimum performance, I need to block-align the SSD device, but I am not sure exactly how I should do it. If I run format -> fdisk it allows me to partition based on a cylinder, but I don't think that is sufficient. Can someone tell me how they block-aligned an SSD device for use as L2ARC. Thanks, Geoff
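One low-effort approach mentioned in similar threads is to skip fdisk slicing entirely and hand ZFS the whole disk: the EFI label ZFS writes typically starts the data slice at sector 256 (128 KiB into the device), which is already aligned for common SSD page and erase-block sizes. A sketch, with an illustrative device name:

    # whole-disk cache device; ZFS lays down an EFI label with an aligned data slice
    zpool add tank cache c9t1d0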
2010 Jun 19
6
does sharing an SSD as slog and l2arc reduces its life span?
Hi, I don't know if it's already been discussed here, but while thinking about using the OCZ Vertex 2 Pro SSD (which according to its spec page has supercaps built in) as a shared slog and L2ARC device, it struck me that this might not be such a good idea. Because this SSD is MLC-based, write cycles are an issue here, though I can't find any numbers in their spec. Why do I
2010 Jul 21
5
L2ARC and ZIL on same SSD?
Are there any drawbacks to partitioning an SSD into two parts and using L2ARC on one partition and the ZIL on the other? Any thoughts? -- This message posted from opensolaris.org
2010 Apr 02
6
L2ARC & Workingset Size
Hi all, I ran a workload that reads & writes within 10 files; each file is 256M, i.e. 10 * 256M = 2.5GB total dataset size. I have set the ARC max size to 1 GB in the /etc/system file. In the worst case, let us assume that the whole dataset is hot, meaning my working set size = 2.5GB. My SSD flash size = 8GB and is being used for L2ARC. No slog is used in the pool. My file system record size = 8K,
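For reference, capping the ARC this way is normally done with the zfs_arc_max tunable in /etc/system (value in bytes, effective after a reboot); a 1 GB cap would look roughly like:

    * limit the ARC to 1 GB (0x40000000 bytes)
    set zfs:zfs_arc_max = 0x40000000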
2010 Jul 21
5
slog/L2ARC on a hard drive and not SSD?
Hi, Out of pure curiosity, I was wondering: what would happen if one tried to use a regular 7200RPM (or 10K) drive as slog or L2ARC (or both)? I know these are designed with SSDs in mind, and I know it's possible to use anything you want as cache. So would ZFS benefit from it? Would it be the same? Would it slow down? I guess it would slow things down, because it would be trying to
2010 Mar 29
19
sharing a ssd between rpool and l2arc
Hi, as Richard Elling wrote earlier: "For more background, low-cost SSDs intended for the boot market are perfect candidates. Take a X-25V @ 40GB and use 15-20 GB for root and the rest for an L2ARC. For small form factor machines or machines with max capacity of 8GB of RAM (a typical home system) this can make a pleasant improvement over a HDD-only implementation." For the upcoming
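A sketch of what that split looks like once the SSD has been sliced, with roughly 15-20 GB on slice 0 for the root pool and the remainder on slice 1 (device and pool names are placeholders):

    # root pool is installed on c4t0d0s0; give the leftover slice to the data pool as L2ARC
    zpool add tank cache c4t0d0s1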
2010 Feb 08
17
ZFS ZIL + L2ARC SSD Setup
I have some questions about the choice of SSDs to use for ZIL and L2ARC. I'm trying to build an OpenSolaris iSCSI SAN out of a whitebox system, which is intended to be used as a backup SAN during storage migration, so it's built on a tight budget. The system currently has 4GB RAM, a 3GHz Core2-Quad and 8x 500GB WD REII SATA HDDs attached to an Areca 8-port ARC-1220 controller
2010 Oct 04
1
MySQL on BTRFS Experiences
Hi, My shop is a long-time (~2006) ZFS user, considering moving off OpenSolaris due to a number of non-technical issues. We use ZFS for all of our MySQL databases. Its cheap/fast snapshots are critical to our backup strategy, and we rely on the L2ARC and dedicated log device (i.e. NVRAM) features to accelerate our pools with SSDs. I've read through the archives for comments on
2010 Sep 14
9
dedicated ZIL/L2ARC
We are looking into the possibility of adding dedicated ZIL and/or L2ARC devices to our pool. We are looking into getting 4 x 32GB Intel X25-E SSD drives. Would this be a good solution for slow write speeds? We are currently sharing out different slices of the pool to Windows servers using COMSTAR and Fibre Channel. We are currently getting around 300MB/sec performance with 70-100% disk busy.
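A layout that comes up repeatedly for four SSDs like this is a mirrored slog pair plus two cache devices; a rough sketch with placeholder pool and device names:

    # two X25-Es as a mirrored intent log to absorb synchronous writes
    zpool add tank log mirror c3t0d0 c3t1d0
    # the other two as L2ARC read cache
    zpool add tank cache c3t2d0 c3t3d0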
2010 Mar 05
17
why L2ARC device is used to store files ?
Greetings all, I have created a pool that consists of a hard disk and an SSD as a cache: zpool create hdd c11t0d0p3 followed by zpool add hdd cache c8t0d0p0 (the cache device). I ran an OLTP benchmark to emulate a DBMS. Once I ran the benchmark, the pool started creating the database files on the SSD cache device? Can anyone explain why this is happening? Isn't the L2ARC used to absorb the evicted data
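A quick sanity check here is per-vdev I/O statistics: the cache device is listed in its own section, and writes to it come from the L2ARC feed thread rather than from files being created on it. A sketch for the pool named in the post:

    # per-vdev bandwidth every 5 seconds; the "cache" section is the L2ARC device
    zpool iostat -v hdd 5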
2011 Jan 10
0
L2ARC and prefetched data.
Hi. I can't reach Brendan Gregg with this question (user unknown; he doesn't work for Oracle anymore?), so I'm sending it here: FreeBSD users report much better performance and lower disk and CPU load when the L2ARC also holds prefetched data (l2arc_noprefetch = B_FALSE). I was wondering what the reason was for avoiding prefetched data on L2ARC vdevs by default. --
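For anyone wanting to try the same setting, the tunable in question is l2arc_noprefetch; a sketch of flipping it, assuming it is exposed on your build:

    # FreeBSD: let prefetched (streaming) buffers be cached in L2ARC
    sysctl vfs.zfs.l2arc_noprefetch=0

    # Solaris/illumos equivalent, in /etc/system (reboot required)
    set zfs:l2arc_noprefetch = 0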
2010 Oct 19
7
SSD partitioned into multiple L2ARC read cache
What would the performance impact be of splitting up a 64 GB SSD into four partitions of 16 GB each versus having the entire SSD dedicated to each pool?
Scenario A:
2 TB Mirror w/ 16 GB read cache partition
2 TB Mirror w/ 16 GB read cache partition
2 TB Mirror w/ 16 GB read cache partition
2 TB Mirror w/ 16 GB read cache partition
versus Scenario B:
2 TB Mirror w/ 64 GB read cache SSD
2 TB
2011 Apr 25
3
arcstat updates
Hi ZFSers, I've been working on merging the Joyent arcstat enhancements with some of my own and am now to the point where it is time to broaden the requirements gathering. The result is to be merged into the illumos tree. arcstat is a Perl script to show the value of ARC kstats as they change over time. This is similar to the ideas behind mpstat, iostat, vmstat, and friends. The current
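For context, typical usage looks like the sketch below, printing one line per interval; the exact field names differ between the original and Joyent versions, so treat them as illustrative:

    # one line per second: ARC reads and hit rate plus L2ARC hit rate and sizes
    arcstat.pl -f time,read,hit%,l2read,l2hit%,arcsz,l2size 1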
2010 Feb 01
0
quick overhead sizing for DDT and L2ARC
Two related questions:
- given an existing pool with dedup'd data, how can I find the current size of the DDT? I presume some zdb work to find and dump the relevant object, but what specifically?
- what's the expansion ratio for the memory overhead of L2ARC entries? If I know my DDT can fit on an SSD of size X, that's good - but how much RAM do I need
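For the first question, the usual starting point is zdb's dedup statistics; a sketch, assuming the pool is named tank:

    # summary of DDT entries and their in-core/on-disk sizes
    zdb -D tank
    # repeat the flag for a reference-count histogram
    zdb -DD tank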