Displaying 20 results from an estimated 2000 matches similar to: "does sharing an SSD as slog and l2arc reduces its life span?"
2011 Aug 11
19
Intel 320 as ZIL?
Are any of you using the Intel 320 as ZIL? It's MLC-based, but I
understand its wear and performance characteristics can be bumped up
significantly by increasing the overprovisioning to 20% (dropping
usable capacity to 80%).
Anyone have experience with this?
Ray
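One hedged way to do that overprovisioning on Linux is to shrink the drive's visible capacity with a Host Protected Area via hdparm's -N option. A sketch of the arithmetic; the sector count and device name below are illustrative, not from this thread:

```shell
# Illustrative total; read the real value first with: hdparm -N /dev/sdX
TOTAL_SECTORS=625142448                 # hypothetical 320GB drive
USABLE=$(( TOTAL_SECTORS * 80 / 100 ))  # keep 80% visible, leave 20% overprovisioned
echo "$USABLE"
# Destructive; only run once you are sure of the numbers ("p" makes it permanent):
# hdparm -N p$USABLE /dev/sdX
```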
2010 Jul 21
5
slog/L2ARC on a hard drive and not SSD?
Hi,
Out of pure curiosity, I was wondering, what would happen if one tries to use a regular 7200RPM (or 10K) drive as slog or L2ARC (or both)?
I know these are designed with SSDs in mind, and I know it's possible to use anything you want as cache. So would ZFS benefit from it? Would it be the same? Would it slow down?
I guess it would slow things down, because it would be trying to
2009 Dec 03
5
L2ARC in clusters
Hi,
When deploying ZFS in cluster environment it would be nice to be able
to have some SSDs as local drives (not on SAN) and when pool switches
over to the other node, ZFS would pick up that node's local disk drives as
L2ARC.
To better clarify what I mean, let's assume there is a 2-node cluster with
1x 2540 disk array.
Now let's put 4x SSDs in each node (as internal/local drives). Now
2010 Feb 08
17
ZFS ZIL + L2ARC SSD Setup
I have some questions about the choice of SSDs to use for ZIL and L2ARC.
I'm trying to build an OpenSolaris iSCSI SAN out of a whitebox system,
which is intended to be used as a backup SAN during storage migration,
so it's built on a tight budget.
The system currently has 4GB RAM, 3GHz Core2-Quad and 8x 500GB WD REII
SATA HDDs attached to an Areca 8-port ARC-1220 controller
2011 Jan 12
6
SSD drives are really fast running Dovecot
I just replaced my drives for Dovecot using Maildir format with a pair
of Solid State Drives (SSD) in a raid 0 configuration. It's really
really fast. Kind of expensive but it's like getting 20x the speed for
20x the price. I think the big gain is in the zero seek time.
Here's what I bought.
Crucial RealSSD C300 CTFDDAC256MAG-1G1 2.5" 256GB SATA III MLC Internal
Solid State
2008 Sep 10
7
Intel M-series SSD
Interesting flash technology overview and SSD review here:
http://www.anandtech.com/cpuchipsets/intel/showdoc.aspx?i=3403
and another review here:
http://www.tomshardware.com/reviews/Intel-x25-m-SSD,2012.html
Regards,
--
Al Hopper Logical Approach Inc,Plano,TX al at logical-approach.com
Voice: 972.379.2133 Timezone: US CDT
OpenSolaris Governing Board (OGB) Member - Apr 2005
2009 Apr 11
17
Supermicro SAS/SATA controllers?
The standard controller that has been recommended in the past is the
AOC-SAT2-MV8 - an 8-port card with a Marvell chipset. There have been several
mentions of LSI-based controllers on the mailing lists and I'm wondering
about them.
One obvious difference is that the Marvell controller is PCI-X and the LSI
controllers are PCI-E.
Supermicro have several LSI controllers. AOC-USASLP-L8i with the
2016 Feb 09
4
Utility to zero unused blocks on disk
On Mon, Feb 8, 2016 at 3:18 PM, <m.roth at 5-cent.us> wrote:
> Chris Murphy wrote:
>> DBAN is obsolete. NIST 800-88 for some time now says to use secure erase
>> or enhanced security erase or crypto erase if supported.
>>
>> Other options do not erase data in remapped sectors.
>
> dban doesn't? What F/OSS does "secure erase"? And does it do
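For reference, the ATA Secure Erase that the NIST guidance refers to can be issued from Linux with hdparm. A hedged sketch (device name illustrative; both commands are destructive and require that the drive not be in the "frozen" security state):

```shell
# Set a temporary security password, then issue the erase:
hdparm --user-master u --security-set-pass p /dev/sdX
hdparm --user-master u --security-erase p /dev/sdX
```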
2010 Aug 12
6
one ZIL SLOG per zpool?
I have three zpools on a server and want to add a mirrored pair of SSDs for the ZIL. Can the same pair of SSDs be used as the ZIL for all three zpools, or is it one ZIL SLOG device per zpool?
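A log vdev belongs to exactly one pool, so one common approach is to slice the SSD pair and give each pool its own mirrored pair of slices. A rough sketch, with hypothetical pool and slice names:

```shell
# One mirrored slog slice pair per pool (device/slice names are illustrative):
zpool add pool1 log mirror c2t0d0s0 c2t1d0s0
zpool add pool2 log mirror c2t0d0s1 c2t1d0s1
zpool add pool3 log mirror c2t0d0s2 c2t1d0s2
```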
--
This message posted from opensolaris.org
2011 Jul 30
7
NexentaCore 3.1 - ZFS V. 28
apt-get update
apt-clone upgrade
Any first impressions?
--
Eugen Leitl http://leitl.org
______________________________________________________________
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE
2012 Dec 01
3
6Tb Database with ZFS
Hello,
I'm about to migrate a 6TB database from Veritas Volume Manager to ZFS. I
want to set the arc_max parameter so ZFS can't use all of my system's memory, but I
don't know how much I should set. Do you think 24GB will be enough for a 6TB
database? Obviously the more the better, but I can't set too much memory.
Has anyone successfully implemented something similar?
We ran some test and the
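For a 24GB cap, the tunable goes in /etc/system with the value in bytes (24 * 2^30 = 25769803776); a sketch of the fragment:

```
* /etc/system fragment: cap the ZFS ARC at 24 GB (value is in bytes)
set zfs:zfs_arc_max=25769803776
```

A reboot is needed for /etc/system changes to take effect.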
2012 Oct 22
2
What is L2ARC write pattern?
Hello all,
A few months ago I saw a statement that L2ARC writes are simplistic
in nature, and I got the (mis?)understanding that some sort of ring
buffer may be in use, like for the ZIL. Is this true, and is the only
write-performance metric that matters for an L2ARC SSD device its sequential
write bandwidth (and IOPS)? In particular, there are some SD/MMC/CF
cards for professional photography and
2010 Jul 21
5
L2ARC and ZIL on same SSD?
Are there any drawbacks to partitioning an SSD into two parts and using L2ARC on one partition and ZIL on the other? Any thoughts?
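Mechanically it is straightforward; a sketch with an illustrative two-slice device:

```shell
# Slice 0 (small) as the slog, slice 1 (the rest) as L2ARC; names illustrative:
zpool add tank log c2t0d0s0
zpool add tank cache c2t0d0s1
```

The usual caveat is that the two workloads then compete for the same device's write bandwidth.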
2010 Oct 12
2
Multiple SLOG devices per pool
I have a pool with a single SLOG device rated at Y IOPS.
If I add a second (non-mirrored) SLOG device also rated at Y IOPS, will
my zpool now theoretically be able to handle 2Y IOPS? Or close to
that?
Thanks,
Ray
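ZFS load-balances synchronous writes across multiple log vdevs, so adding a second non-mirrored slog is a single command (device name illustrative):

```shell
zpool add tank log c3t0d0   # second slog; log writes spread across both devices
```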
2008 Oct 08
1
Shutting down / exporting zpool without flushing slog devices
Hey folks,
This might be a daft idea, but is there any way to shut down solaris / zfs without flushing the slog device?
The reason I ask is that we're planning to use mirrored NVRAM slogs, and in the long term hope to use a pair of 80GB ioDrives. I'd like to have a large amount of that reserved for write cache (potentially 20-30GB), to facilitate rapid suspend to disk of
2010 Apr 02
6
L2ARC & Workingset Size
Hi all
I ran a workload that reads & writes within 10 files; each file is 256M, i.e.
10 * 256M = 2.5GB total dataset size.
I have set the ARC max size to 1GB in the /etc/system file.
In the worst case, let us assume that the whole dataset is hot, meaning my
working-set size = 2.5GB.
My SSD flash size = 8GB and it is being used for L2ARC.
No slog is used in the pool.
My file system record size = 8K,
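A back-of-envelope for that setup; note the ~180 bytes of ARC header per L2ARC-cached record is an assumed ballpark figure, not from this post:

```shell
RECORDS=$(( 2560 * 1024 * 1024 / 8192 ))   # 2.5GB working set / 8K recordsize
echo "$RECORDS records could land in L2ARC"
echo "roughly $(( RECORDS * 180 / 1024 / 1024 )) MB of ARC consumed by L2ARC headers"
```

So with a 1GB ARC cap, the header overhead for spilling this working set is modest, but it grows linearly with L2ARC size and shrinks with larger recordsizes.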
2010 Oct 06
14
Bursty writes - why?
I have a 24 x 1TB system being used as an NFS file server. Seagate SAS disks connected via an LSI 9211-8i SAS controller, disk layout 2 x 11 disk RAIDZ2 + 2 spares. I am using 2 x DDR Drive X1s as the ZIL. When we write anything to it, the writes are always very bursty like this:
xpool 488K 20.0T 0 0 0 0
xpool 488K 20.0T 0 0 0 0
xpool
2009 Jul 24
6
When writing to SLOG at full speed all disk IO is blocked
Hello all...
I'm seeing this behaviour in an old build (89), and I just want to hear from you whether there is a known bug about it. I'm aware of the "picket fencing" problem, and that ZFS is not choosing correctly whether writing to the slog is better or not (considering whether we get better throughput from the disks).
But I did not find anything about 100% slog activity (~115MB/s) blocks
2010 Nov 18
9
WarpDrive SLP-300
http://www.lsi.com/channel/about_channel/whatsnew/warpdrive_slp300/index.html
Good stuff for ZFS.
Fred
2011 Mar 01
14
Good SLOG devices?
Hi
I'm running OpenSolaris 148 on a few boxes, and newer boxes are getting installed as we speak. What would you suggest for a good SLOG device? It seems some new PCI-E-based ones are hitting the market, but will those require special drivers? Cost is obviously also an issue here....
Vennlige hilsener / Best regards
roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
roy at karlsbakk.net