2010 Feb 08
17
ZFS ZIL + L2ARC SSD Setup
I have some questions about the choice of SSDs to use for ZIL and L2ARC.
I'm trying to build an OpenSolaris iSCSI SAN out of a whitebox system,
which is intended to be used as a backup SAN during storage migration,
so it's built on a tight budget.
The system currently has 4GB RAM, 3GHz Core2-Quad and 8x 500GB WD REII
SATA HDDs attached to an Areca 8port ARC-1220 controller
2011 Jan 29
27
ZFS and TRIM
My google-fu is coming up short on this one... I didn't see that it had
been discussed in a while...
What is the status of ZFS support for TRIM?
For the pool in general...
and...
Specifically for the slog and/or cache???
2010 Sep 14
9
dedicated ZIL/L2ARC
We are looking into the possibility of adding dedicated ZIL and/or L2ARC devices to our pool. We are looking into getting 4 x 32GB Intel X25-E SSD drives. Would this be a good solution to slow write speeds? We are currently sharing out different slices of the pool to Windows servers using COMSTAR and Fibre Channel. We are currently getting around 300MB/sec performance with 70-100% disk busy.
2012 Jul 30
10
encfs on top of zfs
Dear ZFS-Users,
I want to switch to ZFS, but still want to encrypt my data. Native
encryption for ZFS was added in "ZFS Pool Version Number 30"
(http://en.wikipedia.org/wiki/ZFS#Release_history),
but I'm using ZFS on FreeBSD with pool version 28. My question is: how would
encfs (FUSE encryption) affect ZFS-specific features like data integrity
and deduplication?
Regards
2013 Feb 15
28
zfs-discuss mailing list & opensolaris EOL
So, I hear, in a couple weeks' time, opensolaris.org is shutting down. What does that mean for this mailing list? Should we all be moving over to something at illumos or something?
I'm going to encourage somebody in an official capacity at opensolaris to respond...
I'm going to discourage unofficial responses, like, illumos enthusiasts etc simply trying to get people
2010 Jan 27
13
zfs destroy hangs machine if snapshot exists- workaround found
Hi,
I was suffering for weeks from the following problem:
a zfs dataset contained an automatic snapshot (monthly) that used 2.8 TB of data. The dataset was deprecated, so I chose to destroy it after I had deleted some files; eventually it was completely blank besides the snapshot that still locked 2.8 TB on the pool.
'zfs destroy -r pool/dataset'
hung the machine within seconds
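The subject line mentions that a workaround was found. One commonly suggested approach (a sketch only, with placeholder dataset and snapshot names, not the poster's actual commands) is to destroy the space-holding snapshots individually before destroying the dataset, instead of letting a single recursive destroy reclaim everything at once:

```shell
# Hypothetical sketch; 'pool/dataset' and the snapshot name are placeholders.
# Enumerate the snapshots that still hold space on the dataset:
zfs list -t snapshot -o name,used -r pool/dataset

# Destroy each snapshot first, so the big space reclaim happens here:
zfs destroy pool/dataset@monthly-snap

# Then destroy the (now nearly empty) dataset itself:
zfs destroy pool/dataset
```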
2010 Jun 18
25
Erratic behavior on 24T zpool
Well, I've searched my brains out and I can't seem to find a reason for this.
I'm getting bad to medium performance with my new test storage device. I've got 24 1.5T disks with 2 SSDs configured as a ZIL log device. I'm using the Areca RAID controller, the driver being arcmsr. Quad-core AMD with 16 gig of RAM, OpenSolaris upgraded to snv_134.
The zpool
2011 Jan 05
6
ZFS on top of ZFS iSCSI share
I have a filer running Opensolaris (snv_111b) and I am presenting a
iSCSI share from a RAIDZ pool. I want to run ZFS on the share at the
client. Is it necessary to create a mirror or use ditto blocks at the
client to ensure ZFS can recover if it detects a failure at the client?
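For reference, ditto blocks on the client side are controlled per dataset via the `copies` property. A minimal sketch, assuming a single iSCSI LUN as the client pool (pool and device names are placeholders, not from the original post):

```shell
# Hypothetical sketch: with copies=2, ZFS writes two copies of each data
# block, so it can self-heal corruption it detects even on a single-device
# pool. This does NOT protect against losing the whole iSCSI LUN.
zpool create tank /dev/dsk/c2t0d0   # the iSCSI share presented by the filer
zfs set copies=2 tank               # enable ditto blocks for user data
zfs get copies tank                 # verify the setting
```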
Thanks,
Bruin
2011 Jan 28
8
ZFS Dedup question
I created a zfs pool with dedup with the following settings:
zpool create data c8t1d0
zfs create data/shared
zfs set dedup=on data/shared
The thing I was wondering about was it seems like ZFS only dedup at the file level and not the block. When I make multiple copies of a file to the store I see an increase in the deup ratio, but when I copy similar files the ratio stays at 1.00x.
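A point worth noting here: ZFS dedup operates at the block (record) level, not the file level, so "similar" files only dedup where their records are byte-identical and aligned on `recordsize` boundaries. A sketch (reusing the pool name from the post) of how one might inspect what dedup is actually doing:

```shell
# Sketch only; 'data' and 'data/shared' are the names from the post above.
zpool list -o name,size,alloc,dedup data   # pool-wide dedup ratio
zdb -DD data                               # dedup table histogram, per-block detail
zfs get recordsize,dedup data/shared       # recordsize governs block matching
```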
--
This
2010 Oct 06
14
Bursty writes - why?
I have a 24 x 1TB system being used as an NFS file server. Seagate SAS disks connected via an LSI 9211-8i SAS controller, disk layout 2 x 11 disk RAIDZ2 + 2 spares. I am using 2 x DDR Drive X1s as the ZIL. When we write anything to it, the writes are always very bursty like this:
xpool 488K 20.0T 0 0 0 0
xpool 488K 20.0T 0 0 0 0
xpool
2010 Feb 12
13
SSD and ZFS
Hi all,
just after sending a message to sunmanagers I realized that my question
should rather have gone here. So sunmanagers please excuse the double
post:
I have inherited a X4140 (8 SAS slots) and have just setup the system
with Solaris 10 09. I first setup the system on a mirrored pool over
the first two disks
pool: rpool
state: ONLINE
scrub: none requested
config:
NAME
2011 Jul 15
22
Zil on multiple usb keys
This might be a stupid question, but here goes... Would adding, say, four 4 GB or 8 GB USB keys as a ZIL make enough of a difference for writes on an iSCSI shared volume?
I am finding reads are not too bad (40ish MB/s over GigE on 2 500 GB drives striped) but writes top out at about 10 and drop a lot lower... If I were to add a couple of USB keys for ZIL, would it make a difference?
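For context, a slog is added as a `log` vdev, and mirroring it avoids losing in-flight synchronous writes if one device dies. A sketch with placeholder device names (worth noting that USB keys typically have poor sync-write latency, which is exactly what a slog must be good at):

```shell
# Hypothetical sketch; c4t0d0/c5t0d0 are placeholder device names.
# Add a mirrored log vdev so a single failed key can't lose sync writes:
zpool add tank log mirror /dev/dsk/c4t0d0 /dev/dsk/c5t0d0

# The log vdev appears in its own section of the status output:
zpool status tank
```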
Thanks.
Sent from a
2012 Feb 26
3
zfs diff performance
I had high hopes of significant performance gains using zfs diff in
Solaris 11 compared to my home-brew stat based version in Solaris 10.
However the results I have seen so far have been disappointing.
Testing on a reasonably sized filesystem (4TB), a diff that listed 41k
changes took 77 minutes. I haven't tried my old tool, but I would
expect the same diff to take a couple of
2010 Jun 07
20
Homegrown Hybrid Storage
Hi,
I'm looking to build a virtualized web hosting server environment accessing
files on a hybrid storage SAN. I was looking at using the Sun Fire X4540
with the following configuration:
- 6 RAID-Z vdevs with one hot spare each (all 500GB 7200RPM SATA drives)
- 2 Intel X25 32GB SSDs as a mirrored ZIL
- 4 Intel X25 64GB SSDs as the L2ARC.
-
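The layout described above can be sketched roughly as follows (disk names are placeholders, not the actual X4540 device paths, and only two of the six RAID-Z vdevs are shown for brevity):

```shell
# Hypothetical sketch of the proposed layout; all device names are placeholders.
zpool create webpool \
  raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 \
  raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0
  # ...remaining four raidz vdevs elided...

zpool add webpool spare c3t0d0 c3t1d0                 # hot spares
zpool add webpool log mirror c4t0d0 c4t1d0            # mirrored SSD ZIL
zpool add webpool cache c5t0d0 c5t1d0 c5t2d0 c5t3d0   # L2ARC SSDs
```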
2010 May 24
16
questions about zil
I recently got a new SSD (ocz vertex LE 50gb)
It seems to work really well as a ZIL performance-wise. My question is, how
safe is it? I know it doesn't have a supercap, so let's say data loss
occurs... is it just data loss, or is it pool loss?
Also, does the fact that I have a UPS matter?
The numbers I'm seeing are really nice... these are some NFS tar times
before
2010 Oct 19
7
SSD partitioned into multiple L2ARC read cache
What would the performance impact be of splitting up a 64 GB SSD into four
partitions of 16 GB each, one per pool, versus having the entire SSD
dedicated to a single pool?
Scenario A:
2 TB Mirror w/ 16 GB read cache partition
2 TB Mirror w/ 16 GB read cache partition
2 TB Mirror w/ 16 GB read cache partition
2 TB Mirror w/ 16 GB read cache partition
versus
Scenario B:
2 TB Mirror w/ 64 GB read cache SSD
2 TB
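Scenario A might be set up roughly like this (a sketch; pool and slice names are placeholders). Since a cache vdev cannot be shared between pools, partitioning is the only way to split one SSD across several of them:

```shell
# Hypothetical sketch: one 16 GB slice of the shared SSD per pool.
# On Solaris, s2 conventionally refers to the whole disk, so it is skipped.
zpool add pool1 cache c6t0d0s0
zpool add pool2 cache c6t0d0s1
zpool add pool3 cache c6t0d0s3
zpool add pool4 cache c6t0d0s4
```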
2010 Aug 03
2
When is the L2ARC refreshed if on a separate drive?
I'm running a mirrored pair of 2 TB SATA drives as my data storage drives on my home workstation, a Core i7-based machine with 10 GB of RAM. I recently added a SandForce-based 60 GB SSD (OCZ Vertex 2, NOT the Pro version) as an L2ARC to the single mirrored pair. I'm running b134, with ZFS pool version 22, with dedup enabled. If I understand correctly, the dedup table should be in
2010 Dec 23
31
SAS/short stroking vs. SSDs for ZIL
Hi,
as I have learned from the discussion about which SSD to use as ZIL
drives, I stumbled across this article, that discusses short stroking
for increasing IOPs on SAS and SATA drives:
http://www.tomshardware.com/reviews/short-stroking-hdd,2157.html
Now, I am wondering if using a mirror of such 15k SAS drives would be a
good-enough fit for a ZIL on a zpool that is mainly used for file
2011 Nov 08
6
Couple of questions about ZFS on laptops
Hello all,
I am thinking about a new laptop. I see that there are
a number of higher-performance models (incidentally, they
are also marketed as "gamer" ones) which offer two SATA
2.5" bays and an SD flash card slot. Vendors usually
position the two-HDD bay part as either "get lots of
capacity with RAID0 over two HDDs, or get some capacity
and some performance by mixing one
2010 Jul 21
5
slog/L2ARC on a hard drive and not SSD?
Hi,
Out of pure curiosity, I was wondering, what would happen if one tries to use a regular 7200RPM (or 10K) drive as slog or L2ARC (or both)?
I know these are designed with SSDs in mind, and I know it's possible to use anything you want as cache. So would ZFS benefit from it? Would it be the same? Would it slow down?
I guess it would slow things down, because it would be trying to