similar to: ZFS handling of many files

Displaying 20 results from an estimated 6000 matches similar to: "ZFS handling of many files"

2008 Feb 15
38
Performance with Sun StorageTek 2540
Under Solaris 10 on a 4 core Sun Ultra 40 with 20GB RAM, I am setting up a Sun StorageTek 2540 with 12 300GB 15K RPM SAS drives and connected via load-shared 4Gbit FC links. This week I have tried many different configurations, using firmware managed RAID, ZFS managed RAID, and with the controller cache enabled or disabled. My objective is to obtain the best single-file write performance.
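For reference, a single-file sequential write test of the kind described usually amounts to something like the sketch below, assuming a pool named tank mounted at /tank (both placeholder names):

    # Stream a large file and watch per-vdev throughput from another terminal
    dd if=/dev/zero of=/tank/bigfile bs=1024k count=16384
    zpool iostat -v tank 5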
2009 Jun 10
13
Apple Removes Nearly All Reference To ZFS
http://hardware.slashdot.org/story/09/06/09/2336223/Apple-Removes-Nearly-All-Reference-To-ZFS
2009 Dec 04
30
ZFS send | verify | receive
If there were a "zfs send" datastream saved someplace, is there a way to verify the integrity of that datastream without doing a "zfs receive" and occupying all that disk space? I am aware that "zfs send" is not a backup solution, due to its vulnerability to even a single bit error, its lack of granularity, and other reasons. However ... There is an attraction to "zfs send" as an augmentation to the
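One hedged answer: builds recent enough to ship zstreamdump can walk a saved stream and verify its embedded checksums without receiving it; /backup/tank.zstream below is a placeholder path.

    # Parses the saved stream and verifies record checksums; no pool space is consumed
    zstreamdump < /backup/tank.zstream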
2009 Dec 08
1
Live Upgrade Solaris 10 UFS to ZFS boot pre-requisites?
I have a Solaris 10 U5 system massively patched so that it supports ZFS pool version 15 (similar to U8, kernel Generic_141445-09), live upgrade components have been updated to Solaris 10 U8 versions from the DVD, and GRUB has been updated to support redundant menus across the UFS boot environments. I have studied the Solaris 10 Live Upgrade manual (821-0438) and am unable to find any
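For reference, the documented UFS-to-ZFS migration boils down to roughly the steps below, assuming a root pool named rpool on an SMI-labeled slice and a new boot environment named zfsBE (illustrative names):

    # The root pool must sit on a slice with an SMI (not EFI) label
    zpool create rpool c0t1d0s0
    # Build the ZFS boot environment from the running UFS one, then switch to it
    lucreate -n zfsBE -p rpool
    luactivate zfsBE
    init 6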
2010 Jan 19
5
ZFS/NFS/LDOM performance issues
[Cross-posting to ldoms-discuss] We are occasionally seeing massive time-to-completions for I/O requests on ZFS file systems on a Sun T5220 attached to a Sun StorageTek 2540 and a Sun J4200, and using a SSD drive as a ZIL device. Primary access to this system is via NFS, and with NFS COMMITs blocking until the request has been sent to disk, performance has been deplorable. The NFS server is a
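A quick sanity check in this situation, assuming the pool is called tank (placeholder), is to confirm the SSD log device is actually absorbing the synchronous NFS traffic:

    # During COMMIT-heavy NFS load, most write IOPS should land on the "logs" vdev
    zpool iostat -v tank 5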
2010 May 24
16
questions about zil
I recently got a new SSD (OCZ Vertex LE 50GB). It seems to work really well as a ZIL, performance-wise. My question is, how safe is it? I know it doesn't have a supercap, so let's say data loss occurs... is it just data loss, or is it pool loss? Also, does the fact that I have a UPS matter? The numbers I'm seeing are really nice... these are some NFS tar times before
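For context, attaching and (on pool version 19 or later) removing a dedicated log device looks roughly like this, with tank and c3t0d0 as placeholder names; on earlier pool versions a lost slog could indeed endanger the whole pool, which is the distinction being asked about.

    # Attach the SSD as a separate intent log device
    zpool add tank log c3t0d0
    # Pool version >= 19 supports removing a log device again
    zpool remove tank c3t0d0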
2010 Jan 15
1
ZFS ARC Cache and Solaris u8
Has the ARC cache in Solaris 10 u8 been improved? Have been reading some mixed messages. Also, should this parameter be tuned? Thanks
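If tuning does turn out to be necessary, the ARC cap is set in /etc/system; the 4 GB value below is only an example.

    # /etc/system -- limit the ARC to 4 GB (effective after reboot)
    set zfs:zfs_arc_max = 0x100000000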
2010 Mar 12
2
ZFS error while enabling DIRECT_IO for the DB chunks
Hi, We are using Solaris 10 update 7 with the ZFS file system, and the machine is used for an Informix DB. Solaris patch level Generic_142900-02 (Dec 09 PatchCluster release), Informix DB version 11.5FC6. We are facing an issue while enabling DIRECT_IO for the DB chunks. The error message which appears in the online.log file is "Direct I/O cannot be used for the chunk file <file_name>"
2009 Aug 23
3
zfs send/receive and compression
Is there a mechanism by which you can perform a zfs send | zfs receive and not have the data uncompressed and recompressed at the other end? I have a gzip-9 compressed filesystem that I want to backup to a remote system and would prefer not to have to recompress everything again at such great computational expense. If this doesn't exist, how would one go about creating an RFE for
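Nothing in that era's Solaris sent compressed blocks as stored; the workable option was compressing the stream in transit, and much later OpenZFS added a compressed-send flag. A sketch with placeholder dataset and host names:

    # Era-appropriate workaround: compresses only the transfer; the receiver still recompresses
    zfs send tank/data@snap | gzip | ssh backuphost 'gunzip | zfs receive backup/data'
    # Later OpenZFS releases: send blocks as stored, i.e. already gzip-9 compressed
    zfs send -c tank/data@snap | ssh backuphost zfs receive backup/data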
2010 Feb 27
1
slow zfs scrub?
Hi all, I have a server running snv_131 and the scrub is very slow. I have a cron job for starting it every week, and now it's been running for a while, and it's very, very slow: "scrub: scrub in progress for 40h41m, 12.56% done, 283h14m to go". The configuration is listed below, consisting of three raidz2 groups with seven 2TB drives each. The root fs is on a pair of X25-M (gen 1)
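One slow or quietly failing disk often drags an entire scrub down; two quick checks, assuming the pool is named tank (placeholder):

    # Watch per-disk throughput during the scrub; a single outlier usually stands out
    zpool iostat -v tank 10
    # Check for retried I/O errors recorded by FMA
    fmdump -e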
2009 Nov 04
1
ZFS non-zero checksum and permanent error with deleted file
Hello, I am actually using ZFS under FreeBSD, but maybe someone over here can help me anyway. I'd like some advice on whether I can still rely on one of my ZFS pools: [user at host ~]$ sudo zpool clear zpool01 ... [user at host ~]$ sudo zpool scrub zpool01 ... [user at host ~]$ sudo zpool status -v zpool01 pool: zpool01 state: ONLINE status: One or more devices has experienced an
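The usual sequence for aging out a permanent-error entry that refers to an already-deleted file is sketched below; if a completed scrub comes back clean, the pool can generally be trusted again.

    # Re-verify every block, then reset the error counters and stale error log
    zpool scrub zpool01
    zpool status -v zpool01    # wait for the scrub to finish
    zpool clear zpool01
    # A second clean scrub is the confirmation
    zpool scrub zpool01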
2009 Nov 18
2
ZFS and NFS
Hi, My customer says: ------------------------------------ The application has NFS directories with millions of files in a directory, and this can't be changed. We are having issues with the EMC appliance and RPC timeouts on the NFS lookup. What I am looking at doing is moving one of the major NFS exports to a Sun 25k, using VCS to cluster a ZFS RAIDZ that is then NFS exported. For performance I
2010 Apr 19
4
upgrade zfs stripe
Hi there, since I am really new to ZFS, I have two important questions to start with. I have a NAS up and running ZFS in stripe mode with 2x 1.5TB HDDs. My first question, for future-proofing, is whether I could add just another drive to the pool and have ZFS integrate it flawlessly. And second, could this HDD also be a different size than 1.5TB? So could I put in a 2TB drive as well and integrate it? Thanks in advance
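Both things asked about work: a striped pool grows by adding another top-level disk of any size, although existing data is not rebalanced onto it. A sketch with placeholder names:

    # Add a third disk (a 2TB drive is fine next to the 1.5TB ones)
    zpool add tank c1t2d0
    zpool list tank    # capacity grows immediately; new writes spread across all disks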
2010 Mar 02
4
ZFS Large scale deployment model
We have a virtualized environment of T-Series where each host has either zones or LDoms. All of the virtual systems will have their own dedicated storage on ZFS (and some may also get raw LUNs). All the SAN storage is delivered in fixed sized 33GB LUNs. The question I have to the community is whether it would be better to have a pool per virtual system, or create a large pool and carve out ZFS
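For the single-large-pool option, per-guest isolation is normally done with one dataset per virtual system plus quotas and reservations rather than separate pools; a minimal sketch with illustrative names:

    # One dataset per LDom/zone, sized like the 33GB LUNs it replaces
    zfs create -o quota=33g -o reservation=33g bigpool/guest01
    zfs create -o quota=33g -o reservation=33g bigpool/guest02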
2008 Jun 22
6
ZFS-Performance: Raid-Z vs. Raid5/6 vs. mirrored
Hi list, as this matter pops up every now and then in posts on this list, I just want to clarify that the real performance of RaidZ (in its current implementation) is NOT something that follows from raidz-style data-efficient redundancy or the copy-on-write design used in ZFS. In an M-way mirrored setup of N disks you get the write performance of the worst disk and a read performance that is
2009 May 06
12
Monitoring ZFS host memory use
Hi, Please forgive me if my searching-fu has failed me in this case, but I've been unable to find any information on how people are going about monitoring and alerting regarding memory usage on Solaris hosts using ZFS. The problem is not that the ZFS ARC is using up the memory, but that the script Nagios is using to check memory usage simply sees, say, 96% RAM used, and alerts. The
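The usual fix is to have the check subtract the ARC, which shrinks under memory pressure, from "used" memory; the ARC size is exported as a kstat:

    # Current ARC size in bytes; a monitoring plugin can treat this as reclaimable
    kstat -p zfs:0:arcstats:size
    # Target size the ARC is currently working toward
    kstat -p zfs:0:arcstats:c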
2008 Jul 09
4
RFE: ZFS commands "zmv" and "zcp"
I've run across something that would save me days of trouble. Situation: the contents of one ZFS file system need to be moved to another ZFS file system. The destination can be in the same zpool, even a brand new ZFS file system. A command to move the data from one ZFS file system to another, WITHOUT COPYING, would be nice. At present, the data is almost 1TB. Ideally a "zmv" or
2009 Mar 28
53
Can this be done?
I currently have a 7x1.5TB raidz1. I want to add "phase 2", which is another 7x1.5TB raidz1. Can I add the second phase to the first phase and basically have two RAID5s striped (in RAID terms)? Yes, I probably should upgrade the zpool format too. Currently running snv_104; I should also upgrade to 110. If that is possible, would anyone happen to have the simple command lines to
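Yes: a second raidz1 top-level vdev is added with zpool add, after which the pool stripes across both groups; the pool format upgrade is a separate step. Device names below are placeholders.

    # Add the second 7-disk raidz1 group
    zpool add tank raidz1 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0
    # Optionally move the pool to the newest on-disk version the OS supports
    zpool upgrade tank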
2010 Apr 12
5
How to Catch ZFS error with syslog ?
I have a simple mirror pool with 2 disks. I pulled out one disk to simulate a failed drive. zpool status shows that the pool is in a DEGRADED state. I want syslog to log these types of ZFS errors. I have syslog running and logging all sorts of errors to a log server, but this failed disk in the ZFS pool did not generate any syslog messages. The ZFS diagnosis engines are online, as seen below. hrs1zgpprd1#
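ZFS faults are reported through FMA rather than written straight to syslog; it is fmd's syslog-msgs agent that forwards diagnosed faults. Some checks worth running after pulling the disk:

    # Raw error telemetry ZFS handed to FMA
    fmdump -eV | tail
    # Diagnosed faults -- these are what get forwarded to syslog
    fmadm faulty
    # Confirm the syslog forwarding module is loaded
    fmadm config | grep syslog-msgs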
2010 Jan 12
3
set zfs:zfs_vdev_max_pending
We have a zpool made of four 512GB iSCSI LUNs located on a network appliance. We are seeing poor read performance from the ZFS pool. The release of Solaris we are using is Solaris 10 10/09 s10s_u8wos_08a SPARC, and the server itself is a T2000. I was wondering how we can tell whether the zfs_vdev_max_pending setting is impeding read performance of the ZFS pool? (The pool consists of lots of small files.)
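The live value can be inspected, and changed on the fly, with mdb before committing anything to /etc/system; the value 10 below is only an example.

    # Read the current value
    echo zfs_vdev_max_pending/D | mdb -k
    # Change it on the running system (careful: -kw writes kernel memory)
    echo zfs_vdev_max_pending/W0t10 | mdb -kw
    # Persistent form, in /etc/system
    set zfs:zfs_vdev_max_pending = 10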