similar to: ZFS - Implementation Successes and Failures

Displaying 20 results from an estimated 20000 matches similar to: "ZFS - Implementation Successes and Failures"

2011 Jan 05
6
ZFS on top of ZFS iSCSI share
I have a filer running OpenSolaris (snv_111b) and I am presenting an iSCSI share from a RAIDZ pool. I want to run ZFS on the share at the client. Is it necessary to create a mirror or use ditto blocks at the client to ensure ZFS can recover if it detects a failure at the client? Thanks, Bruin
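One of the options raised here is ditto blocks. A minimal sketch of enabling them on the client side, assuming a hypothetical client pool named tank built on the iSCSI LUN:

    # keep two copies of every data block so ZFS can self-heal single-block damage
    # even on a single-vdev pool (this only affects newly written data)
    zfs set copies=2 tank/data
    # confirm the setting
    zfs get copies tank/data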
2010 Jan 12
3
set zfs:zfs_vdev_max_pending
We have a zpool made of 4 512GB iSCSI LUNs located on a network appliance. We are seeing poor read performance from the ZFS pool. The release of Solaris we are using is: Solaris 10 10/09 s10s_u8wos_08a SPARC. The server itself is a T2000. I was wondering how we can tell if the zfs_vdev_max_pending setting is impeding read performance of the ZFS pool? (The pool consists of lots of small files.)
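For reference, the tunable can be inspected and changed on a live kernel with mdb, or set persistently via /etc/system as in the subject line; the value 10 below is purely illustrative:

    # current value on the running kernel
    echo zfs_vdev_max_pending/D | mdb -k
    # lower it temporarily (example value only)
    echo zfs_vdev_max_pending/W0t10 | mdb -kw
    # watch per-LUN queue depth (actv column) while the workload runs
    iostat -xn 5
    # or make it persistent across reboots with this line in /etc/system:
    # set zfs:zfs_vdev_max_pending = 10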
2010 Jan 30
3
Checksum fletcher4 or sha256 ?
Hi, I'm almost ready to deploy my new home server for final testing. Before that I want to be sure that nothing big is left untouched. Reading the ZFS Admin Guide about the checksum method, there's no advice about it. The default is fletcher4; there's also SHA256. Now sha256 is pretty 'heavy' to calculate, so I think that it's left out because can
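The checksum algorithm is an ordinary per-dataset property; a quick sketch of switching it, with a hypothetical dataset name:

    # show the algorithm currently in use
    zfs get checksum tank/home
    # switch to sha256; only blocks written after this point use the new checksum
    zfs set checksum=sha256 tank/home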
2008 Apr 01
29
OpenSolaris ZFS NAS Setup
If it's of interest, I've written up some articles on my experiences of building a ZFS NAS box which you can read here: http://breden.org.uk/2008/03/02/a-home-fileserver-using-zfs/ I used CIFS to share the filesystems, but it will be a simple matter to use NFS instead: issue the command 'zfs set sharenfs=on pool/filesystem' instead of 'zfs set
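The quoted command is cut off above; for comparison, the two sharing properties look like this (not necessarily the exact command from the original article):

    # share a dataset over NFS
    zfs set sharenfs=on pool/filesystem
    # or share the same dataset over CIFS/SMB via the OpenSolaris in-kernel CIFS server
    zfs set sharesmb=on pool/filesystem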
2009 Nov 11
20
zfs eradication
Hi, I was discussing the common practice of disk eradication used by many firms for security. I was thinking this may be a useful feature for ZFS to have: an option to eradicate data as it's removed, meaning after the last reference/snapshot is gone and a block is freed, write the eradication patterns back to the removed blocks. By any chance, has this been discussed or considered before?
2010 Jan 19
5
ZFS/NFS/LDOM performance issues
[Cross-posting to ldoms-discuss] We are occasionally seeing massive time-to-completions for I/O requests on ZFS file systems on a Sun T5220 attached to a Sun StorageTek 2540 and a Sun J4200, and using an SSD drive as a ZIL device. Primary access to this system is via NFS, and with NFS COMMITs blocking until the request has been sent to disk, performance has been deplorable. The NFS server is a
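The SSD ZIL mentioned here is a separate log vdev; a sketch of adding one to an existing pool, with hypothetical pool and device names:

    # dedicate an SSD as a separate intent-log (slog) device so synchronous
    # NFS COMMITs can return once the write hits the SSD rather than the pool disks
    zpool add tank log c5t0d0
    # verify the log vdev is attached
    zpool status tank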
2008 May 21
11
Per-user home filesystems and OS-X Leopard anomaly
I encountered an issue that people using OS-X systems as NFS clients need to be aware of. While not strictly a ZFS issue, it may be encountered most often by ZFS users, since ZFS makes it easy to support and export per-user filesystems. The problem I encountered was when using ZFS to create exported per-user filesystems and the OS-X automounter to perform the necessary mount magic. OS-X
2009 Jun 10
13
Apple Removes Nearly All Reference To ZFS
http://hardware.slashdot.org/story/09/06/09/2336223/Apple-Removes-Nearly-All-Reference-To-ZFS
2008 Feb 15
38
Performance with Sun StorageTek 2540
Under Solaris 10 on a 4 core Sun Ultra 40 with 20GB RAM, I am setting up a Sun StorageTek 2540 with 12 300GB 15K RPM SAS drives and connected via load-shared 4Gbit FC links. This week I have tried many different configurations, using firmware managed RAID, ZFS managed RAID, and with the controller cache enabled or disabled. My objective is to obtain the best single-file write performance.
2011 Sep 14
3
Is there any implementation of VSS for a ZFS iSCSI snapshot on Solaris?
I am using a Solaris + ZFS environment to export an iSCSI block layer device and use the snapshot facility to take a snapshot of the ZFS volume. Is there an existing Volume Shadow Copy (VSS) implementation on Windows for this environment? Thanks S Joshi
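Not an answer to the Windows VSS side, but a sketch of the Solaris-side setup being described, using the legacy shareiscsi property and hypothetical names (COMSTAR-based setups would use itadm/stmfadm instead):

    # create a block-level ZFS volume and export it over iSCSI
    zfs create -V 100G tank/iscsivol
    zfs set shareiscsi=on tank/iscsivol
    # take a point-in-time snapshot of the exported volume
    zfs snapshot tank/iscsivol@backup-20110914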
2010 Mar 18
13
ZFS/OSOL/Firewire...
An interesting thing I just noticed here testing out some FireWire drives with OpenSolaris. Setup: OpenSolaris 2009.06 and a dev version (snv_129), 2 500GB FireWire 400 drives with integrated hubs for daisy-chaining (net: 4 devices on the chain) - one SATA bridge - one PATA bridge. Created a zpool with both drives as simple vdevs. Started a zfs send/recv to back up a local filesystem. Watching
2009 Dec 08
1
Live Upgrade Solaris 10 UFS to ZFS boot pre-requisites?
I have a Solaris 10 U5 system massively patched so that it supports ZFS pool version 15 (similar to U8, kernel Generic_141445-09), live upgrade components have been updated to Solaris 10 U8 versions from the DVD, and GRUB has been updated to support redundant menus across the UFS boot environments. I have studied the Solaris 10 Live Upgrade manual (821-0438) and am unable to find any
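For context, the UFS-to-ZFS migration itself is normally a lucreate into a ZFS root pool; a sketch under assumed BE, pool, and device names:

    # create the ZFS root pool on a spare slice (SMI-labelled) ahead of time
    zpool create rpool c0t1d0s0
    # build a new boot environment on the ZFS pool from the running UFS BE
    lucreate -n zfsBE -p rpool
    # activate the new BE, then reboot with init 6
    luactivate zfsBE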
2009 Feb 17
32
Backing up ZFS snapshots
I have an OpenSolaris snv_105 server at home that holds my photos, docs, music, etc., in a ZFS pool. I back up my laptops with rsync to the OpenSolaris server. All of my important data is in one place, on the OpenSolaris server. I want to back up this data. I want to protect against losing my data, and I would also like to recover previous versions of files when I make mistakes. * I do not have a
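A common pattern for this kind of single-server backup is periodic snapshots plus incremental zfs send to a second pool, e.g. on an external disk; a hedged sketch with hypothetical names:

    # take a dated snapshot of the dataset holding the data
    zfs snapshot tank/photos@2009-02-17
    # full send into a backup pool
    zfs send tank/photos@2009-02-17 | zfs receive backup/photos
    # later, send only the changes since the previous snapshot
    zfs snapshot tank/photos@2009-03-01
    zfs send -i tank/photos@2009-02-17 tank/photos@2009-03-01 | zfs receive backup/photos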
2010 Jun 07
20
Homegrown Hybrid Storage
Hi, I'm looking to build a virtualized web hosting server environment accessing files on a hybrid storage SAN. I was looking at using the Sun X-Fire x4540 with the following configuration: - 6 RAID-Z vdevs with one hot spare each (all 500GB 7200RPM SATA drives) - 2 Intel X-25 32GB SSDs as a mirrored ZIL - 4 Intel X-25 64GB SSDs as the L2ARC. -
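The layout described maps onto a single zpool create roughly like this; device names are hypothetical and only two of the six raidz vdevs are shown for brevity:

    zpool create tank \
      raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 \
      raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 \
      spare c1t7d0 c2t7d0 \
      log mirror c3t0d0 c3t1d0 \
      cache c4t0d0 c4t1d0 c4t2d0 c4t3d0
    # spares are pool-wide; log = mirrored SSD ZIL, cache = L2ARC SSDs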
2009 Dec 04
30
ZFS send | verify | receive
If there were a "zfs send" datastream saved someplace, is there a way to verify the integrity of that datastream without doing a "zfs receive" and occupying all that disk space? I am aware that "zfs send" is not a backup solution, due to vulnerability of even a single bit error, and lack of granularity, and other reasons. However ... There is an attraction to "zfs send" as an augmentation to the
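One low-cost check is a dry-run receive, which parses the stream without writing anything; on builds that ship zstreamdump, that tool also walks the stream records. Neither is a full restore test, and the names below are hypothetical:

    # parse the saved stream and report what would be received, writing nothing
    zfs receive -nv tank/restore < /backup/photos.zfs
    # or summarize the stream contents where zstreamdump is available
    zstreamdump -v < /backup/photos.zfs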
2008 Jun 02
29
ZFS Hardware Check, OS X Compatibility, NEWBIE!!
This is my first post here, and I hope it is OK that I posted in this thread. I have been doing a bit of reading on the Solaris platforms, and seem to be inclined to try out the OpenSolaris OS or Solaris 10. My only worry is that my lack of knowledge with the command line may make this difficult regarding troubleshooting. It seems fairly straightforward creating zpools etc, but maybe Nexenta is
2010 Apr 19
4
upgrade zfs stripe
Hi there, since I am really new to ZFS, I have 2 important questions for starting. I have a NAS up and running ZFS in stripe mode with 2x 1.5TB HDDs. My question, for future-proofing, would be: could I add just another drive to the pool and have ZFS integrate it flawlessly? And second, could this HDD also be another size than 1.5TB? So could I put in a 2TB drive and integrate it? Thanks in advance
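To the first question: a striped pool grows by adding another top-level vdev, and the new disk need not match the existing ones in size. A sketch with hypothetical names:

    # add a third disk (e.g. a 2TB drive) as a new top-level vdev; capacity grows immediately
    zpool add tank c0t2d0
    zpool status tank
    # note: zpool add is effectively permanent here -- a disk added to a striped
    # pool cannot later be removed from it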
2008 Oct 06
15
Looking for some hardware answers, maybe someone on this list could help
I posted a thread here... http://forums.opensolaris.com/thread.jspa?threadID=596 I am trying to finish building a system and I kind of need to pick working NIC and onboard SATA chipsets (video is not a big deal - I can get a silent PCIe card for that, I already know one which works great) I need 8 onboard SATA. I would prefer Intel CPU. At least one gigabit port. That's about it. I
2008 Jul 09
4
RFE: ZFS commands "zmv" and "zcp"
I've run across something that would save me days of trouble. Situation: the contents of one ZFS file system need to be moved to another ZFS file system. The destination can be in the same zpool, even a brand new ZFS file system. A command to move the data from one ZFS file system to another, WITHOUT COPYING, would be nice. At present, the data is almost 1TB. Ideally a "zmv" or
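Within a single pool, the closest existing tool is zfs rename, which relocates a filesystem without copying its data; across pools there is still only copy via send/receive. A sketch with hypothetical dataset names:

    # same pool: rename moves the dataset in place -- no data is copied
    zfs rename tank/projects/old tank/archive/old
    # different pool: no move exists, only snapshot + send/receive, then destroy the source
    zfs snapshot tank/projects/old@move
    zfs send tank/projects/old@move | zfs receive otherpool/old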
2010 Jan 19
8
Panic running a scrub
This is probably unreproducible, but I just got a panic whilst scrubbing a simple mirrored pool on SXCE snv_124. Evidently one of the disks went offline for some reason and shortly thereafter the panic happened. I have the dump and the /var/adm/messages containing the trace. Is there any point in submitting a bug report? The panic starts with: Jan 19 13:27:13 host6