Displaying 20 results from an estimated 1000 matches similar to: "When is the L2ARC refreshed if on a separate drive?"
2012 Nov 14
3
SSD ZIL/L2ARC partitioning
Hi,
I've ordered a new server with:
- 4x600GB Toshiba 10K SAS2 Disks
- 2x100GB OCZ DENEVA 2R SYNC eMLC SATA (no expander so I hope no SAS/SATA problems).
Specs: http://www.oczenterprise.com/ssd-products/deneva-2-r-sata-6g-2.5-emlc.html
I want to use the 2 OCZ SSDs as mirrored intent log devices, but as
the intent log only needs a small portion of each disk (10GB?), I was
wondering
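A rough sketch of what that split might look like, assuming a ~10GB slice (s0)
on each SSD for the slog and the remainder (s1) for L2ARC - the pool and device
names below are placeholders, not from the original post:

  # mirrored intent log across the two small SSD slices
  zpool add tank log mirror c2t0d0s0 c2t1d0s0
  # remaining space on both SSDs as L2ARC (cache devices are not mirrored)
  zpool add tank cache c2t0d0s1 c2t1d0s1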
2010 Feb 20
6
l2arc current usage (population size)
Hello,
How do you tell how much of your L2ARC is populated? I've been looking for a while now and can't seem to find it.
Must be easy, as this blog entry shows it over time:
http://blogs.sun.com/brendan/entry/l2arc_screenshots
And as a follow-up, can you tell how much of each dataset is in the ARC or L2ARC?
--
This message posted from opensolaris.org
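For what it's worth, the L2ARC counters live in the ARC kstats (exact stat
names can vary a little between releases):

  # current amount of data held in the L2ARC, plus its header overhead in RAM
  kstat -p zfs:0:arcstats:l2_size
  kstat -p zfs:0:arcstats:l2_hdr_size
  # or just look at everything L2ARC-related
  kstat -p zfs:0:arcstats | grep l2_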
2010 Jul 21
5
L2ARC and ZIL on same SSD?
Are there any drawbacks to partitioning an SSD into two parts and using L2ARC on one partition and ZIL on the other? Any thoughts?
--
This message posted from opensolaris.org
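Mechanically it is just two slices on the one device, e.g. (a sketch with
made-up names; note the slog then has no redundancy, and both workloads
compete for the same SSD):

  zpool add tank log c3t0d0s0     # small slice as the intent log
  zpool add tank cache c3t0d0s1   # the rest as L2ARC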
2010 Apr 02
6
L2ARC & Workingset Size
Hi all
I ran a workload that reads & writes within 10 files, each file 256M, i.e.
(10 * 256M = 2.5GB total dataset size).
I have set the ARC max size to 1 GB in the /etc/system file.
In the worst case, let us assume that the whole dataset is hot, meaning my
working set size = 2.5GB.
My SSD flash size = 8GB and is being used for L2ARC.
No slog is used in the pool.
My file system record size = 8K,
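For reference, capping the ARC at 1 GB is normally done with a line like this
in /etc/system (value in bytes, takes effect after a reboot):

  * limit the ARC to 1 GB
  set zfs:zfs_arc_max = 0x40000000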
2010 May 24
16
questions about zil
I recently got a new SSD (OCZ Vertex LE 50GB).
It seems to work really well as a ZIL performance-wise. My question is, how
safe is it? I know it doesn't have a supercap, so let's say data loss
occurs... is it just data loss or is it pool loss?
Also, does the fact that I have a UPS matter?
The numbers I'm seeing are really nice... these are some NFS tar times
before
2010 Mar 29
19
sharing a ssd between rpool and l2arc
Hi,
as Richard Elling wrote earlier:
"For more background, low-cost SSDs intended for the boot market are
perfect candidates. Take a X-25V @ 40GB and use 15-20 GB for root
and the rest for an L2ARC. For small form factor machines or machines
with max capacity of 8GB of RAM (a typical home system) this can make a
pleasant improvement over a HDD-only implementation."
For the upcoming
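As a sketch (slice and pool names are invented here), that split ends up as a
root pool on one slice and the leftover slice handed to the data pool as cache:

  # s0 (~15-20 GB) carries rpool, set up by the installer
  # s1 (the remainder) becomes L2ARC for the data pool
  zpool add tank cache c1t0d0s1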
2010 Jun 19
6
does sharing an SSD as slog and l2arc reduces its life span?
Hi,
I don't know if it's already been discussed here, but while
thinking about using the OCZ Vertex 2 Pro SSD (which according
to the spec page has supercaps built in) as a shared slog and L2ARC
device, it struck me that this might not be such a good idea.
Because this SSD is MLC based, write cycles are an issue here,
though I can't find any numbers in their spec.
Why do I
2010 Jul 02
14
NexentaStor 3.0.3 vs OpenSolaris - Patches more up to date?
I see in NexentaStor's announcement of Community Edition 3.0.3 they mention some backported patches in this release.
Aside from their management features / UI, what is the core OS difference if we move to Nexenta from OpenSolaris b134?
These dedup bugs are my main frustration - if a staff member does an rm * in a directory with dedup, you can take down the whole storage server - all with
2010 Oct 06
14
Bursty writes - why?
I have a 24 x 1TB system being used as an NFS file server. Seagate SAS disks connected via an LSI 9211-8i SAS controller, disk layout 2 x 11 disk RAIDZ2 + 2 spares. I am using 2 x DDR Drive X1s as the ZIL. When we write anything to it, the writes are always very bursty like this:
xpool   488K  20.0T      0      0      0      0
xpool   488K  20.0T      0      0      0      0
xpool
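(Output like the above comes from watching the pool at one-second intervals,
e.g.:

  zpool iostat -v xpool 1

where -v also breaks the numbers down per vdev and per log/cache device.)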
2010 Sep 14
9
dedicated ZIL/L2ARC
We are looking into the possibility of adding dedicated ZIL and/or L2ARC devices to our pool. We are looking into getting 4 x 32GB Intel X25-E SSD drives. Would this be a good solution for slow write speeds? We are currently sharing out different slices of the pool to Windows servers using COMSTAR and fibre channel. We are currently getting around 300MB/sec performance with 70-100% disk busy.
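If the four X25-Es do go in, the usual split would be something like the
following (a sketch only - tank and the device names are placeholders, and a
slog only helps synchronous writes):

  zpool add tank log mirror c6t0d0 c6t1d0   # mirrored slog
  zpool add tank cache c6t2d0 c6t3d0        # two L2ARC devices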
2010 Jul 21
5
slog/L2ARC on a hard drive and not SSD?
Hi,
Out of pure curiosity, I was wondering, what would happen if one tries to use a regular 7200RPM (or 10K) drive as slog or L2ARC (or both)?
I know these are designed with SSDs in mind, and I know it's possible to use anything you want as cache. So would ZFS benefit from it? Would it be the same? Would it slow down?
I guess it would slow things down, because it would be trying to
2010 Feb 18
3
improve meta data performance
We have a SunFire X4500 running Solaris 10U5 which does about 5-8k NFS ops,
of which about 90% are metadata. In hindsight it would have been
significantly better to use a mirrored configuration, but we opted for 4 x
(9+2) raidz2 at the time. We cannot take the downtime necessary to change
the zpool configuration.
We need to improve the metadata performance with little to no money. Does
anyone
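One low-cost angle that gets suggested is a cheap SSD as L2ARC steered at
metadata only - a sketch, assuming a pool called tank and an OS/pool version
new enough for cache devices and the secondarycache property:

  zpool add tank cache c5t0d0
  zfs set secondarycache=metadata tank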
2012 Oct 22
2
What is L2ARC write pattern?
Hello all,
A few months ago I saw a statement that L2ARC writes are simplistic
in nature, and I got the (mis?)understanding that some sort of ring
buffer may be in use, like for ZIL. Is this true, and the only metric
of write-performance important for L2ARC SSD device is the sequential
write bandwidth (and IOPS)? In particular, there are some SD/MMC/CF
cards for professional photography and
2010 Feb 08
17
ZFS ZIL + L2ARC SSD Setup
I have some questions about the choice of SSDs to use for ZIL and L2ARC.
I'm trying to build an OpenSolaris iSCSI SAN out of a whitebox system,
which is intended to be used as a backup SAN during storage migration,
so it's built on a tight budget.
The system currently has 4GB RAM, 3GHz Core2-Quad and 8x 500GB WD REII
SATA HDDs attached to an Areca 8-port ARC-1220 controller
2011 Nov 08
6
Couple of questions about ZFS on laptops
Hello all,
I am thinking about a new laptop. I see that there are
a number of higher-performance models (incidentally, they
are also marketed as "gamer" ones) which offer two SATA
2.5" bays and an SD flash card slot. Vendors usually
position the two-HDD bay part as either "get lots of
capacity with RAID0 over two HDDs, or get some capacity
and some performance by mixing one
2010 Mar 15
1
persistent L2ARC
Greetings all,
I understand that L2ARC is still under enhancement. Does anyone know if ZFS
can be upgraded to include "persistent L2ARC", i.e. L2ARC will not lose its
contents after a system reboot?
--
Abdullah Al-Dahlawi
George Washington University
Department of Electrical & Computer Engineering
----
Check The Fastest 500 Super Computers Worldwide
2009 Dec 03
5
L2ARC in clusters
Hi,
When deploying ZFS in a cluster environment it would be nice to be able
to have some SSDs as local drives (not on SAN), and when the pool switches
over to the other node ZFS would pick up that node's local disk drives as
L2ARC.
To better clarify what I mean, let's assume there is a 2-node cluster with
1x 2540 disk array.
Now let's put 4x SSDs in each node (as internal/local drives). Now
2012 May 17
6
SSD format/mount parameters questions
For using SSDs:
Are there any format/mount parameters that should be set for using btrfs
on SSDs (other than the "ssd" mount option)?
General questions:
How long is the 'delay' for the delayed alloc?
Are file allocations aligned to 4kiB boundaries, or larger?
What byte value is used to pad unused space?
(Aside: For some, the erased state reads all 0x00, and for
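For the first question, a typical fstab entry for btrfs on an SSD might look
like this (only "ssd" is btrfs-SSD-specific; noatime is generic and discard
requires working TRIM, so treat this as a sketch rather than a recommendation):

  /dev/sda2  /data  btrfs  defaults,ssd,noatime,discard  0  0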
2010 Jul 10
2
block align SSD for use as a l2arc cache
I have an Intel X25-M 80GB SSD.
For optimum performance, I need to block align the SSD device, but I am not
sure exactly how I should do it.
If I run format -> fdisk it allows me to partition based on a cylinder,
but I don't think that is sufficient.
Can someone tell me how they block aligned an SSD device for use as L2ARC?
Thanks,
Geoff
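One commonly suggested shortcut is to skip manual slicing and give ZFS the
whole disk, since it then writes an EFI label whose first partition starts at
sector 256 (128 KiB into the disk), which is aligned for 4K pages - a sketch,
with a made-up device name:

  zpool add tank cache c1t2d0
  # confirm where the data partition starts
  prtvtoc /dev/rdsk/c1t2d0s0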
2012 Dec 12
20
Solaris 11 System Reboots Continuously Because of a ZFS-Related Panic (7191375)
I've hit this bug on four of my Solaris 11 servers. Looking for anyone else
who has seen it, as well as comments/speculation on cause.
This bug is pretty bad. If you are lucky you can import the pool read-only
and migrate it elsewhere.
I've also tried setting zfs:zfs_recover=1,aok=1 with varying results.
http://docs.oracle.com/cd/E26502_01/html/E28978/gmkgj.html#scrolltoc
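For reference, the tunables mentioned above go in /etc/system (recovery use
only), and the read-only import looks like this - the pool name is a placeholder:

  * /etc/system
  set zfs:zfs_recover = 1
  set aok = 1

  # then, if the pool will come up at all:
  zpool import -o readonly=on -f tank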