Displaying 20 results from an estimated 6000 matches similar to: "slog/L2ARC on a hard drive and not SSD?"
2010 Aug 12
6
one ZIL SLOG per zpool?
I have three zpools on a server and want to add a mirrored pair of SSDs for the ZIL. Can the same pair of SSDs be used for the ZIL of all three zpools, or is it one ZIL SLOG device per zpool?
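A minimal sketch of one way this is commonly handled, assuming hypothetical pool names tank1-tank3 and two SSDs c3t0d0/c3t1d0 that have already been sliced with format(1M) into three small slices each (a vdev can belong to only one pool, so the SSDs have to be divided up):

    # give each pool its own mirrored pair of slices as a log device
    zpool add tank1 log mirror c3t0d0s0 c3t1d0s0
    zpool add tank2 log mirror c3t0d0s1 c3t1d0s1
    zpool add tank3 log mirror c3t0d0s2 c3t1d0s2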
2010 Jun 19
6
does sharing an SSD as slog and L2ARC reduce its life span?
Hi,
I don't know if it's already been discussed here, but while
thinking about using the OCZ Vertex 2 Pro SSD (which according
to its spec page has supercaps built in) as a shared slog and L2ARC
device, it struck me that this might not be such a good idea.
Because this SSD is MLC based, write cycles are an issue here,
though I can't find any figures in their spec.
Why do I
2010 Jul 21
5
L2ARC and ZIL on same SSD?
Are there any drawbacks to partitioning an SSD into two parts and using L2ARC on one partition and the ZIL on the other? Any thoughts?
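A minimal sketch, assuming a hypothetical SSD c4t0d0 already sliced into a small s0 for the log and the remainder as s1 for the cache:

    # intent log (ZIL) on the small slice, L2ARC on the rest of the same SSD
    zpool add tank log c4t0d0s0
    zpool add tank cache c4t0d0s1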
2009 Dec 03
5
L2ARC in clusters
Hi,
When deploying ZFS in a cluster environment it would be nice to be able
to have some SSDs as local drives (not on the SAN), so that when the pool
switches over to the other node, ZFS would pick up that node's local disk
drives as L2ARC.
To clarify what I mean, let's assume there is a 2-node cluster with
one 2540 disk array.
Now let's put 4x SSDs in each node (as internal/local drives). Now
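Cache devices can be added to and removed from a live pool, so a hedged sketch of a failover wrapper (all pool and device names are placeholders) might look like:

    # after importing the pool on this node, swap in this node's local SSDs as L2ARC
    zpool import tank
    zpool remove tank c2t0d0 c2t1d0      # the other node's cache devices, now unavailable
    zpool add tank cache c1t0d0 c1t1d0   # this node's local SSDs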
2012 Nov 14
3
SSD ZIL/L2ARC partitioning
Hi,
I've ordered a new server with:
- 4x600GB Toshiba 10K SAS2 Disks
- 2x100GB OCZ DENEVA 2R SYNC eMLC SATA (no expander so I hope no SAS/
SATA problems). Specs: http://www.oczenterprise.com/ssd-products/deneva-2-r-sata-6g-2.5-emlc.html
I want to use the 2 OCZ SSDs as mirrored intent log devices, but as
the intent log only needs a small part of each disk (10GB?), I was
wondering
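One hedged sketch of such a split, assuming hypothetical device names c5t0d0/c5t1d0 already sliced into a ~10 GB s0 and the remainder as s1:

    # mirrored intent log on the small slices, the rest as (unmirrored) L2ARC
    zpool add tank log mirror c5t0d0s0 c5t1d0s0
    zpool add tank cache c5t0d0s1 c5t1d0s1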
2010 Oct 12
2
Multiple SLOG devices per pool
I have a pool with a single SLOG device rated at Y IOPS.
If I add a second (non-mirrored) SLOG device also rated at Y IOPS, will
my zpool now theoretically be able to handle 2Y IOPS? Or close to
that?
Thanks,
Ray
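For what it's worth, a second log device is simply added as another top-level log vdev, and ZFS can spread log writes across them; a minimal sketch with hypothetical pool and device names:

    zpool add tank log c6t0d0
    zpool status tank     # both devices should now appear under "logs"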
2010 Oct 19
7
SSD partitioned into multiple L2ARC read cache
What would the performance impact be of splitting up a 64 GB SSD into four
partitions of 16 GB each versus dedicating the entire SSD to a single
pool?
Scenario A:
2 TB Mirror w/ 16 GB read cache partition
2 TB Mirror w/ 16 GB read cache partition
2 TB Mirror w/ 16 GB read cache partition
2 TB Mirror w/ 16 GB read cache partition
versus
Scenario B:
2 TB Mirror w/ 64 GB read cache SSD
2 TB
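A minimal sketch of Scenario A, assuming hypothetical pool names and an SSD c7t0d0 sliced into four 16 GB slices:

    # one cache slice per pool
    zpool add pool1 cache c7t0d0s0
    zpool add pool2 cache c7t0d0s1
    zpool add pool3 cache c7t0d0s2
    zpool add pool4 cache c7t0d0s3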
2010 Feb 08
17
ZFS ZIL + L2ARC SSD Setup
I have some questions about the choice of SSDs to use for ZIL and L2ARC.
I'm trying to build an OpenSolaris iSCSI SAN out of a whitebox system,
which is intended to be used as a backup SAN during storage migration,
so it's built on a tight budget.
The system currently has 4GB RAM, 3GHz Core2-Quad and 8x 500GB WD REII
SATA HDDs attached to an Areca 8port ARC-1220 controller
2010 Apr 02
6
L2ARC & Workingset Size
Hi all
I ran a workload that reads & writes within 10 files, each file 256MB, i.e.
10 * 256MB = 2.5GB total dataset size.
I have set the ARC max size to 1 GB in the /etc/system file.
In the worst case, let us assume that the whole dataset is hot, meaning my
working set size = 2.5GB.
My SSD flash size = 8GB and is being used for L2ARC.
No slog is used in the pool.
My file system record size = 8K,
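A rough, hedged way to see how much of such a working set actually lands in the ARC versus the L2ARC is to watch the arcstats kstats (these are the standard statistic names; the workload itself is as described above):

    kstat -p zfs:0:arcstats:size       # current ARC size in bytes
    kstat -p zfs:0:arcstats:l2_size    # bytes currently held in L2ARC
    kstat -p zfs:0:arcstats:l2_hits
    kstat -p zfs:0:arcstats:l2_misses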
2010 Mar 29
19
sharing an SSD between rpool and l2arc
Hi,
as Richard Elling wrote earlier:
"For more background, low-cost SSDs intended for the boot market are
perfect candidates. Take a X-25V @ 40GB and use 15-20 GB for root
and the rest for an L2ARC. For small form factor machines or machines
with max capacity of 8GB of RAM (a typical home system) this can make a
pleasant improvement over a HDD-only implementation."
For the upcoming
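A hedged sketch along the lines of that suggestion, assuming a hypothetical 40 GB SSD c8t0d0 with a ~20 GB s0 slice holding the root pool (normally laid out by the installer) and the remainder as s1:

    # add the leftover slice of the boot SSD as L2ARC for the data pool
    zpool add tank cache c8t0d0s1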
2008 Oct 08
1
Shutting down / exporting zpool without flushing slog devices
Hey folks,
This might be a daft idea, but is there any way to shut down Solaris / ZFS without flushing the slog device?
The reason I ask is that we're planning to use mirrored nvram slogs, and in the long term hope to use a pair of 80GB ioDrives. I'd like to have a large amount of that reserved for write cache (potentially 20-30GB), to facilitate rapid suspend to disk of
2010 Jul 10
2
block align SSD for use as a l2arc cache
I have an Intel X25-M 80GB SSD.
For optimum performance, I need to block align the SSD device, but I am not
sure exactly how I should do it.
If I run format -> fdisk, it allows me to partition on cylinder boundaries,
but I don't think that is sufficient.
Can someone tell me how they block aligned an SSD device for use as L2ARC.
Thanks,
Geoff
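One commonly cited approach (a sketch only, device and pool names hypothetical): hand ZFS the whole disk rather than a slice, so it writes an EFI label whose data slice starts at sector 256 (128 KiB into the disk), which is aligned for 4 KiB flash pages:

    zpool add tank cache c0t5d0
    prtvtoc /dev/rdsk/c0t5d0    # check the starting sector of the slice ZFS created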
2009 Apr 09
8
ZIL SSD performance testing... -IOzone works great, others not so great
Hi folks,
I would appreciate it if someone can help me understand some weird
results I'm seeing while trying to do performance testing with an SSD
offloaded ZIL.
I'm attempting to improve my infrastructure's burstable write capacity
(ZFS based WebDav servers), and naturally I'm looking at implementing
SSD based ZIL devices.
I have a test machine with the
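For this kind of test it is the synchronous write path that matters, since only sync writes go through the ZIL/slog; a hedged sketch (file path and pool name are placeholders; iozone's -o flag forces O_SYNC writes):

    iozone -i 0 -o -r 8k -s 512m -f /pool/fs/iozone.tmp
    zpool iostat -v pool 5    # watch the slog device while the test runs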
2011 Jul 30
7
NexentaCore 3.1 - ZFS V. 28
apt-get update
apt-clone upgrade
Any first impressions?
--
Eugen* Leitl leitl http://leitl.org
______________________________________________________________
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE
2012 Dec 01
3
6Tb Database with ZFS
Hello,
I'm about to migrate a 6TB database from Veritas Volume Manager to ZFS. I
want to set the arc_max parameter so ZFS can't use all my system's memory, but I
don't know how much I should set. Do you think 24GB will be enough for a 6TB
database? Obviously the more the better, but I can't set too much memory.
Has someone successfully implemented something similar?
We ran some tests and the
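The tunable in question is normally set in /etc/system; a hedged sketch for a 24 GiB cap (24 * 2^30 = 25769803776 bytes), which only takes effect after a reboot:

    echo "set zfs:zfs_arc_max = 25769803776" >> /etc/system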
2010 Sep 14
9
dedicated ZIL/L2ARC
We are looking into the possibility of adding dedicated ZIL and/or L2ARC devices to our pool. We are looking into getting 4 x 32GB Intel X25-E SSD drives. Would this be a good solution for slow write speeds? We are currently sharing out different slices of the pool to Windows servers using COMSTAR and fibre channel. We are currently getting around 300MB/sec performance with 70-100% disk busy.
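A hedged sketch of how four such drives are often split, and how to watch whether they actually absorb the write load (pool and device names hypothetical):

    zpool add tank log mirror c9t0d0 c9t1d0    # mirrored slog pair
    zpool add tank cache c9t2d0 c9t3d0         # remaining two drives as L2ARC
    zpool iostat -v tank 5                     # per-vdev activity while under load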
2009 Jul 24
6
When writing to SLOG at full speed all disk IO is blocked
Hello all...
I'm seeing this behaviour in an old build (89), and I just want to hear from you if there is some known bug about it. I'm aware of the "picket fencing" problem, and that ZFS is not choosing correctly whether writing to the slog is better or not (considering whether we get better throughput from the disks).
But I did not find anything about 100% slog activity (~115MB/s) blocks
2008 May 27
6
slog devices don't resilver correctly
This past weekend, my holiday was ruined due to a log device
"replacement" gone awry.
I posted all about it here:
http://jmlittle.blogspot.com/2008/05/problem-with-slogs-how-i-lost.html
In a nutshell, a resilver of a single log device with itself, due to
the fact that one can't remove a log device from a pool once defined, caused
ZFS to fully resilver but then attach the log
2011 Mar 01
14
Good SLOG devices?
Hi
I'm running OpenSolaris 148 on a few boxes, and newer boxes are getting installed as we speak. What would you suggest for a good SLOG device? It seems some new PCI-E-based ones are hitting the market, but will those require special drivers? Cost is obviously also an issue here...
Vennlige hilsener / Best regards
roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
roy at karlsbakk.net
2007 Nov 17
11
slog tests on read throughput exhaustion (NFS)
I have historically noticed that in ZFS, whenever there is a heavy
writer to a pool via NFS, reads can be held back (basically paused).
An example is a RAID10 pool of 6 disks, where a directory of files,
including some large ones 100+MB in size, being written can cause other
clients over NFS to pause for seconds (5-30 or so). This is on B70 bits.
I've gotten used to this behavior over NFS, but