Displaying 20 results from an estimated 5000 matches similar to: "Intel M-series SSD"
2010 Oct 06
14
Bursty writes - why?
I have a 24 x 1TB system being used as an NFS file server: Seagate SAS disks connected via an LSI 9211-8i SAS controller, disk layout 2 x 11-disk RAIDZ2 + 2 spares. I am using 2 x DDRdrive X1s as the ZIL. When we write anything to it, the writes are always very bursty, like this:
xpool 488K 20.0T 0 0 0 0
xpool 488K 20.0T 0 0 0 0
xpool
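Burstiness in output like this is typically the transaction-group (TXG) flush cycle. A minimal way to observe it, assuming a pool named xpool as above and an OpenSolaris-era kernel (the tunable name below is an assumption about that vintage):

    # Sample pool throughput once per second; TXG flushes show up as
    # idle intervals punctuated by large write bursts.
    zpool iostat xpool 1

    # Read the flush interval (in seconds) on a live kernel.
    echo "zfs_txg_timeout/D" | mdb -k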
2010 Apr 10
41
Secure delete?
Hi all
Is it possible to securely delete a file from a zfs dataset/zpool once it's been snapshotted, meaning "delete (and perhaps overwrite) all copies of this file"?
Best regards
roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
roy at karlsbakk.net
http://blogg.karlsbakk.net/
--
In all pedagogy it is essential that the curriculum be presented intelligibly. That is an elementary imperative
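For context: ZFS has no per-file secure delete. Once snapshots reference the file's blocks, every such snapshot must be destroyed before the blocks are even freed, and freed blocks are still not overwritten. A minimal sketch with hypothetical names (tank/data, secret.file, snap1/snap2):

    # Find snapshots that may still reference the file's blocks.
    zfs list -t snapshot -r tank/data

    # Remove the live copy, then destroy the snapshots pinning it.
    rm /tank/data/secret.file
    zfs destroy tank/data@snap1
    zfs destroy tank/data@snap2

    # Note: this frees the blocks but does not overwrite them; see the
    # 2009 "zfs eradication" thread below for that discussion.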
2010 Jun 19
6
does sharing an SSD as slog and l2arc reduces its life span?
Hi,
I don't know if it's already been discussed here, but while
thinking about using the OCZ Vertex 2 Pro SSD (which according
to its spec page has supercaps built in) as a shared slog and L2ARC
device, it struck me that this might not be such a good idea.
Because this SSD is MLC-based, write cycles are an issue here,
though I can't find any number in their spec.
Why do I
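For reference, the shared layout in question is usually done by slicing the SSD: a small slice for the slog and the rest for L2ARC. A sketch with assumed pool and slice names (tank, c2t0d0s0, c2t0d0s1):

    # Small slice as separate intent log, the remainder as read cache;
    # both workloads then share the same MLC device's write endurance.
    zpool add tank log c2t0d0s0
    zpool add tank cache c2t0d0s1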
2010 May 24
16
questions about zil
I recently got a new SSD (ocz vertex LE 50gb)
It seems to work really well as a ZIL, performance-wise. My question is: how
safe is it? I know it doesn't have a supercap, so let's say data loss
occurs... is it just data loss or is it pool loss?
Also, does the fact that I have a UPS matter?
The numbers I'm seeing are really nice... these are some NFS tar times
before
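A common mitigation for slog failure, independent of the UPS question, is to mirror the log device so a single SSD death cannot take the ZIL with it. A sketch, assuming a pool named tank and two devices c2t0d0/c2t1d0:

    # Mirrored log vdev: in-flight synchronous writes survive the loss
    # of either SSD. A UPS helps with power loss, not device failure.
    zpool add tank log mirror c2t0d0 c2t1d0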
2010 Feb 08
17
ZFS ZIL + L2ARC SSD Setup
I have some questions about the choice of SSDs to use for ZIL and L2ARC.
I'm trying to build an OpenSolaris iSCSI SAN out of a whitebox system,
which is intended to be used as a backup SAN during storage migration,
so it's built on a tight budget.
The system currently has 4GB RAM, 3GHz Core2-Quad and 8x 500GB WD REII
SATA HDDs attached to an Areca 8port ARC-1220 controller
2008 Feb 15
38
Performance with Sun StorageTek 2540
Under Solaris 10 on a 4 core Sun Ultra 40 with 20GB RAM, I am setting
up a Sun StorageTek 2540 with 12 300GB 15K RPM SAS drives and
connected via load-shared 4Gbit FC links. This week I have tried many
different configurations, using firmware managed RAID, ZFS managed
RAID, and with the controller cache enabled or disabled.
My objective is to obtain the best single-file write performance.
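A rough single-file write test of the kind this objective implies (dd plus a final sync so cached writes are counted; file name, size, and record size are arbitrary):

    # Time a 16 GiB sequential write in 128 KB records.
    time sh -c 'dd if=/dev/zero of=/pool/testfile bs=128k count=131072 && sync'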
2009 Nov 11
20
zfs eradication
Hi,
I was discussing the common practice of disk eradication used by many firms for security. I was thinking it may be a useful feature for ZFS to have an option to eradicate data as it's removed, meaning: after the last reference/snapshot is gone and a block is freed, write the eradication patterns back to the freed blocks.
By any chance, has this been discussed or considered before?
2009 Jun 10
13
Apple Removes Nearly All Reference To ZFS
http://hardware.slashdot.org/story/09/06/09/2336223/Apple-Removes-Nearly-All-Reference-To-ZFS
2009 Jun 15
33
compression at zfs filesystem creation
Hi,
I just installed 2009.06 and found that compression isn't enabled by default when filesystems are created. Does it make sense to have an RFE open for this? (I'll open one tonight if need be.) We keep telling people to turn on compression. Are there any situations where turning on compression doesn't make sense, like rpool/swap? What about rpool/dump?
Thanks,
~~sa
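For reference, enabling compression per dataset is a one-liner; swap and dump volumes are the usual candidates to leave alone. A sketch with an assumed dataset name:

    # At creation time ...
    zfs create -o compression=on rpool/export/home
    # ... or later (only blocks written afterwards are compressed).
    zfs set compression=on rpool/export/home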
2010 Oct 19
7
SSD partitioned into multiple L2ARC read cache
What would the performance impact be of splitting a 64 GB SSD into four
partitions of 16 GB each (one per pool) versus dedicating the entire
SSD to a single pool?
Scenario A:
2 TB Mirror w/ 16 GB read cache partition
2 TB Mirror w/ 16 GB read cache partition
2 TB Mirror w/ 16 GB read cache partition
2 TB Mirror w/ 16 GB read cache partition
versus
Scenario B:
2 TB Mirror w/ 64 GB read cache SSD
2 TB
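In zpool terms the two scenarios look roughly like this, with assumed pool and slice names (pool1..pool4, device c3t0d0 sliced s0..s3):

    # Scenario A: one 16 GB slice of the SSD as L2ARC per pool.
    zpool add pool1 cache c3t0d0s0
    zpool add pool2 cache c3t0d0s1
    zpool add pool3 cache c3t0d0s2
    zpool add pool4 cache c3t0d0s3

    # Scenario B: the whole SSD as L2ARC for a single pool.
    zpool add pool1 cache c3t0d0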
2010 Apr 27
42
Performance drop during scrub?
Hi all
I have a test system with snv134 and 8 x 2TB drives in RAIDZ2, and currently no ZIL or L2ARC. I noticed the I/O speed to NFS shares on the testpool drops to something hardly usable while scrubbing the pool.
How can I address this? Will adding a ZIL or L2ARC help? Is it possible to tune down scrub's priority somehow?
Best regards
roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
roy at
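One coarse workaround available at the time, rather than tuning priority: stop the scrub during busy hours and restart it off-hours. A sketch, assuming the pool is named testpool:

    # Stop the running scrub now ...
    zpool scrub -s testpool
    # ... and kick it off again from cron at a quiet time, e.g.:
    # 0 2 * * 0 /usr/sbin/zpool scrub testpool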
2011 Nov 08
6
Couple of questions about ZFS on laptops
Hello all,
I am thinking about a new laptop. I see that there are
a number of higher-performance models (incidentally, they
are also marketed as "gamer" ones) which offer two SATA
2.5" bays and an SD flash card slot. Vendors usually
position the two-HDD bay part as either "get lots of
capacity with RAID0 over two HDDs, or get some capacity
and some performance by mixing one
2009 Dec 04
30
ZFS send | verify | receive
If there were a "zfs send" datastream saved someplace, is there a way to
verify the integrity of that datastream without doing a "zfs receive" and
occupying all that disk space?
I am aware that "zfs send" is not a backup solution, due to vulnerability of
even a single bit error, and lack of granularity, and other reasons.
However ... there is an attraction to "zfs send" as an augmentation to the
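One tool that fits this question: zstreamdump(1M) walks a saved stream and verifies its per-record checksums without receiving it, so no pool space is consumed. A sketch, assuming the stream was saved to /backup/tank.zfs:

    # Parse the stream and check record checksums; errors are reported
    # in the output, and nothing is written to any pool.
    zstreamdump < /backup/tank.zfs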
2010 Jun 25
13
OCZ Vertex 2 Pro performance numbers
Now the test for the Vertex 2 Pro. This was fun.
For more explanation please see the thread "Crucial RealSSD C300 and cache
flush?"
This time I made sure the device is attached via 3GBit SATA. This is also
only a short test. I'll retest after some weeks of usage.
cache enabled, 32 buffers, 64k blocks
linear write, random data: 96 MB/s
linear read, random data: 206 MB/s
linear
2009 Feb 18
4
Zpool scrub in cron hangs u3/u4 server, stumps tech support.
I've got a server that freezes when I run a zpool scrub from cron.
Zpool scrub runs fine from the command line, no errors.
The freeze happens within 30 seconds of the zpool scrub happening.
The one core dump I succeeded in taking showed the ARC cache eating up
all the RAM.
The server's running Solaris 10 u3, kernel patch 127727-11, but it's
been patched and seems to have
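Given the ARC exhausting RAM, one standard mitigation on Solaris 10 is to cap the ARC in /etc/system and reboot; the 4 GB value below is an arbitrary example:

    # /etc/system: limit the ZFS ARC to 4 GB (value in bytes).
    set zfs:zfs_arc_max = 0x100000000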
2008 May 21
11
Per-user home filesystems and OS-X Leopard anomaly
I encountered an issue that people using OS-X systems as NFS clients
need to be aware of. While not strictly a ZFS issue, it may be
encountered most often by ZFS users, since ZFS makes it easy to support
and export per-user filesystems. The problem I encountered was when
using ZFS to create exported per-user filesystems and the OS-X
automounter to perform the necessary mount magic.
OS-X
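The server-side half of that setup is typically one filesystem per user, shared over NFS. A sketch with hypothetical names (tank/home, user bob):

    # One dataset per user; the OS-X automounter mounts it on demand.
    zfs create -o mountpoint=/export/home tank/home
    zfs create tank/home/bob
    zfs set sharenfs=on tank/home/bob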
2009 Apr 15
5
StorageTek 2540 performance radically changed
Today I updated the firmware on my StorageTek 2540 to the latest
recommended version and am seeing radically different performance
when testing with iozone than I did in February of 2008. I am using
Solaris 10 U5 with all the latest patches.
This is the performance achieved (on a 32GB file) in February last
year:
KB reclen write rewrite read reread
33554432
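For reproduction, an iozone run of roughly this shape produces that output table (a 32 GB file; the record size here is an assumption):

    # -i 0 = write/rewrite, -i 1 = read/reread.
    iozone -i 0 -i 1 -s 32g -r 128k -f /pool/iozone.tmp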
2012 Jan 07
14
zfs defragmentation via resilvering?
Hello all,
I understand that relatively high fragmentation is inherent
to ZFS due to its COW and possible intermixing of metadata
and data blocks (of which metadata path blocks are likely
to expire and get freed relatively quickly).
I believe it was sometimes implied on this list that such
fragmentation for "static" data can be currently combatted
only by zfs send-ing existing
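The send/receive rewrite alluded to above, in sketch form, with assumed names (tank/fs, a scratch destination tank2/fs); every block is rewritten on receive, which lays the data out afresh:

    zfs snapshot tank/fs@defrag
    zfs send tank/fs@defrag | zfs receive tank2/fs
    # Verify the copy, then destroy/rename to swap it into place.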
2008 Nov 14
23
Still more questions WRT selecting a mobo for small ZFS RAID
Like many others, I am looking to put together a SOHO NAS based on ZFS/CIFS. The plan is 6 x 1TB drives in RAIDZ2 configuration, driven via mobo with 6 SATA ports.
I've read most, if not all, of the threads here, as well as sbredon's excellent article on building a home NAS, yet I still have a number of unanswered questions.
I was leaning heavily towards the M2N-E for a while,
2009 Sep 24
5
Checksum property change does not change pre-existing data - right?
My understanding is that if I "zfs set checksum=<different>" to change the algorithm that this will change the checksum algorithm for all FUTURE data blocks written, but does not in any way change the checksum for previously written data blocks.
I need to corroborate this understanding. Could someone please point me to a document that states this? I have searched and searched
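That understanding matches ZFS's copy-on-write behavior: like compression, the checksum property applies only at write time. A sketch with an assumed dataset name:

    # Only blocks written after this point use the new algorithm.
    zfs set checksum=sha256 tank/fs
    # Pre-existing blocks keep their original checksums until they are
    # rewritten (e.g. via a zfs send | zfs receive of the dataset).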