similar to: ZFS and disk usage management?

Displaying 20 results from an estimated 20000 matches similar to: "ZFS and disk usage management?"

2010 May 07
2
ZFS root ARC memory usage on VxFS system...
Hi Folks.. We have started to convert our Veritas clustered systems over to ZFS root to take advantage of the extreme simplification of using Live Upgrade. Moving the data of these systems off VxVM and VxFS is not in scope, for reasons too numerous to go into. One thing my customers noticed immediately was a reduction in "free" memory as reported by 'top'. By way
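For context, the "missing" free memory is typically RAM held by the ZFS ARC. A minimal way to see how much, assuming the standard Solaris kstat layout:

  kstat -p zfs:0:arcstats:size     # bytes of RAM currently held by the ARC
  kstat -p zfs:0:arcstats:c_max    # the ARC's current upper limit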
2009 Jun 15
33
compression at zfs filesystem creation
Hi, I just installed 2009.06 and found that compression isn't enabled by default when filesystems are created. Does it make sense to have an RFE open for this? (I'll open one tonight if need be.) We keep telling people to turn on compression. Are there any situations where turning on compression doesn't make sense, like rpool/swap? What about rpool/dump? Thanks, ~~sa
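For anyone landing here, compression is a per-dataset property that can be set at creation time or afterwards. A sketch, with example dataset names (compression=on selects lzjb on that vintage):

  zfs create -o compression=on rpool/export/data   # enable at creation
  zfs set compression=gzip-9 rpool/export/logs     # heavier compression, more CPU
  zfs get -r compression rpool                     # verify what is set where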
2009 Jun 10
13
Apple Removes Nearly All Reference To ZFS
http://hardware.slashdot.org/story/09/06/09/2336223/Apple-Removes-Nearly-All-Reference-To-ZFS
2009 May 06
12
Monitoring ZFS host memory use
Hi, Please forgive me if my searching-fu has failed me in this case, but I've been unable to find any information on how people are going about monitoring and alerting on memory usage on Solaris hosts using ZFS. The problem is not that the ZFS ARC is using up the memory, but that the script Nagios uses to check memory usage simply sees, say, 96% RAM used, and alerts. The
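One common approach is to have the check subtract the ARC (which is released under memory pressure) from "used" memory. A sketch, assuming the monitoring user can run mdb and kstat:

  echo ::memstat | mdb -k          # kernel / ZFS file data / anon / free page breakdown (needs root)
  kstat -p zfs:0:arcstats:size     # ARC bytes to subtract from "used" RAM before alerting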
2012 Jan 06
4
ZFS Upgrade
Dear list, I'm about to upgrade a zpool from version 10 to version 29. I assume this upgrade will address several performance issues present in version 10; however, inside that pool we have several zfs filesystems, all of them version 1. My first question: is there a performance problem, or any other problem, in operating a version-29 zpool with version-1 zfs filesystems? Is it better
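For reference, pool and filesystem versions are upgraded separately, and upgrades are one-way. A sketch, with an example pool name:

  zpool upgrade -v          # list versions this release supports
  zpool upgrade mypool      # upgrade the pool itself
  zfs upgrade -r mypool     # upgrade all filesystems in the pool as well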
2009 Dec 08
1
Live Upgrade Solaris 10 UFS to ZFS boot pre-requisites?
I have a Solaris 10 U5 system massively patched so that it supports ZFS pool version 15 (similar to U8, kernel Generic_141445-09), Live Upgrade components have been updated to the Solaris 10 U8 versions from the DVD, and GRUB has been updated to support redundant menus across the UFS boot environments. I have studied the Solaris 10 Live Upgrade manual (821-0438) and am unable to find any
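For reference, the basic UFS-to-ZFS migration path looks roughly like this (a sketch; pool, slice, and BE names are examples, and the root pool must live on an SMI-labeled slice):

  zpool create rpool c1t0d0s0     # ZFS root pool on a slice, not an EFI-labeled whole disk
  lucreate -n zfsBE -p rpool      # copy the current UFS BE into the pool
  luactivate zfsBE                # activate it for the next boot
  init 6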
2009 Dec 04
30
ZFS send | verify | receive
If there were a "zfs send" datastream saved someplace, is there a way to verify the integrity of that datastream without doing a "zfs receive" and occupying all that disk space? I am aware that "zfs send" is not a backup solution, due to its vulnerability to even a single bit error, its lack of granularity, and other reasons. However ... There is an attraction to "zfs send" as an augmentation to the
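On newer builds there is zstreamdump, which walks a saved stream and its record checksums without receiving it. A sketch (availability depends on your release, and it only shows the stream is self-consistent, not that it will apply cleanly):

  zfs send -R tank/fs@backup > /backup/fs.zstream
  zstreamdump < /backup/fs.zstream     # summarize records and verify stream checksums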
2010 Apr 19
4
upgrade zfs stripe
Hi there, since I am really new to ZFS, I have two important questions to start with. I have a NAS up and running ZFS in stripe mode with 2x 1.5 TB HDDs. My first question, with future-proofing in mind: could I add just another drive to the pool and have ZFS integrate it flawlessly? And second, could that HDD be a different size than 1.5 TB? Could I put in a 2 TB drive and integrate it? Thanks in advance
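For what it's worth, adding a drive to a striped (non-redundant) pool is a one-liner, and mixed sizes are fine because each top-level vdev is simply used to its own capacity. A sketch, with example pool and device names (note the addition is permanent on this vintage, vdevs cannot be removed):

  zpool add nas c0t2d0     # adds the 2 TB drive alongside the two 1.5 TB drives
  zpool list nas           # capacity grows immediately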
2008 Jul 09
4
RFE: ZFS commands "zmv" and "zcp"
I've run across something that would save me days of trouble. Situation: the contents of one ZFS file system need to be moved to another ZFS file system. The destination can be in the same zpool, even a brand-new ZFS file system. A command to move the data from one ZFS file system to another, WITHOUT COPYING, would be nice. At present, the data is almost 1TB. Ideally a "zmv" or
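The usual workaround is a snapshot plus a local send/receive, which still copies the blocks but carries properties and child datasets along with -R. A sketch, with example dataset names:

  zfs snapshot -r tank/src@move
  zfs send -R tank/src@move | zfs receive tank/dst
  zfs destroy -r tank/src      # only after verifying the copy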
2010 Jan 19
5
ZFS/NFS/LDOM performance issues
[Cross-posting to ldoms-discuss] We are occasionally seeing massive times to completion for I/O requests on ZFS file systems on a Sun T5220 attached to a Sun StorageTek 2540 and a Sun J4200, and using an SSD drive as a ZIL device. Primary access to this system is via NFS, and with NFS COMMITs blocking until the request has been sent to disk, performance has been deplorable. The NFS server is a
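When chasing this kind of stall it helps to confirm whether the SSD log device is actually absorbing the synchronous NFS traffic. A sketch, with an example pool name:

  zpool status -v tank     # the SSD should appear under the "logs" section
  zpool iostat -v tank 5   # per-vdev I/O; the log vdev should be taking the sync writes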
2010 May 28
6
zfs send/recv reliability
After looking through the archives I haven't been able to assess the reliability of a backup procedure which employs zfs send and recv. Currently I'm attempting to create a script that will allow me to write a zfs stream to a tape via tar, like below. # zfs send -R pool@something | tar -c > /dev/tape I'm primarily concerned with the possibility
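An alternative to wrapping the stream in tar is writing it straight to the non-rewinding tape device. A sketch, with example device, pool, and snapshot names (keep in mind a single flipped bit still invalidates the whole stream on restore):

  zfs send -R tank@backup | dd of=/dev/rmt/0n obs=1048576      # stream to tape
  dd if=/dev/rmt/0n ibs=1048576 | zfs receive -d restorepool   # restore path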
2011 Jan 05
6
ZFS on top of ZFS iSCSI share
I have a filer running OpenSolaris (snv_111b) and I am presenting an iSCSI share from a RAIDZ pool. I want to run ZFS on the share at the client. Is it necessary to create a mirror or use ditto blocks at the client to ensure ZFS can recover if it detects a failure at the client? Thanks, Bruin
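Without a second LUN to mirror against, ditto blocks are the usual client-side option: they let ZFS repair corrupted blocks it detects, though they cannot help if the whole LUN disappears. A sketch, with example pool and dataset names:

  zfs set copies=2 tank/data    # keep two copies of each data block on the iSCSI LUN
  zpool status -v tank          # repaired checksum errors show up here after a scrub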
2008 Jun 22
6
ZFS-Performance: Raid-Z vs. Raid5/6 vs. mirrored
Hi list, as this matter pops up every now and then in posts on this list, I just want to clarify that the real performance of RaidZ (in its current implementation) is NOT anything that follows from raidz-style, data-efficient redundancy or the copy-on-write design used in ZFS. In an M-way mirrored setup of N disks you get the write performance of the worst disk and a read performance that is
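For readers comparing the two layouts under discussion, these are the alternative pool configurations (a sketch; device names are examples, and the two commands are alternatives, not meant to be run together):

  zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0   # two 2-way mirrors: random reads scale with the number of mirrors
  zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0           # one raidz group: random reads behave roughly like a single disk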
2006 Apr 25
3
ZFS quotas & zoned datasets
I'm seeing some unexpected and strange behaviour with respect to quotas and zones. Initially I set things up with no quota on the dataset that was delegated to the zone. Then, as the local zone admin, I created a new child dataset and set a quota on that. Now the global zone admin attempts to set a quota on the delegated dataset, and it appears to work, but.... global zone=pingpong local
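For context, the setup being described is roughly the following (a sketch; pool, zone, and dataset names are examples):

  zfs create pingpong/zoned
  zonecfg -z myzone "add dataset; set name=pingpong/zoned; end; commit"   # delegate to the zone
  # inside the zone, the zone admin can then do:
  zfs create pingpong/zoned/child
  zfs set quota=5g pingpong/zoned/child
  # back in the global zone, capping the whole delegation:
  zfs set quota=10g pingpong/zoned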
2010 Apr 12
5
How to Catch ZFS error with syslog ?
I have a simple mirror pool with 2 disks. I pulled out one disk to simulate a failed drive. zpool status shows that the pool is in a DEGRADED state. I want syslog to log these types of ZFS errors. I have syslog running and logging all sorts of errors to a log server, but this failed disk in the ZFS pool did not generate any syslog messages. The ZFS diagnosis engines are online, as seen below. hrs1zgpprd1#
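Disk and pool faults are reported through FMA rather than written to syslog directly, so the pieces to check are the fault logs and the syslog-msgs agent, which is what forwards diagnosed faults to syslogd. A sketch of the usual commands:

  fmdump -eV | tail                  # raw error telemetry (ereports) for the pulled disk
  fmadm faulty                       # diagnosed faults, if any
  fmadm config | grep syslog-msgs    # this agent must be active for faults to reach syslog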
2010 Mar 12
2
ZFS error while enabling DIRECT_IO for the DB chunks
Hi, We are using Solaris 10 update 7 with the ZFS file system, and we are using the machine for an Informix DB. Solaris patch level Generic_142900-02 (Dec 09 PatchCluster release), Informix DB version 11.5FC6. We are facing an issue while enabling DIRECT_IO for the DB chunks. The error message which appears in the online.log file is "Direct I/O cannot be used for the chunk file <file_name>"
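ZFS does not implement directio(), so a message like that is expected; the common ZFS-side tuning for database chunks is instead to match the recordsize to the DB page size and limit data caching. A sketch, with example dataset name and page size, and noting that recordsize only affects newly written files and primarycache may not exist on all releases:

  zfs set recordsize=8k tank/informix/chunks           # match to the Informix page size (8k is just an example)
  zfs set primarycache=metadata tank/informix/chunks   # avoid double-buffering data in the ARC, if the property is available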
2009 Aug 23
3
zfs send/receive and compression
Is there a mechanism by which you can perform a zfs send | zfs receive and not have the data uncompressed and recompressed at the other end? I have a gzip-9 compressed filesystem that I want to back up to a remote system and would prefer not to have to recompress everything again at such great computational expense. If this doesn't exist, how would one go about creating an RFE for
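On this vintage the stream always carries uncompressed data, so the practical options are to compress the transport and let the receiving dataset recompress on write. A sketch, with example host and dataset names:

  zfs set compression=gzip-9 backup     # children created by receive inherit this on the target
  zfs send tank/fs@snap | ssh -C backuphost zfs receive backup/fs   # -C compresses only the wire transfer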
2010 Mar 02
4
ZFS Large scale deployment model
We have a virtualized environment of T-Series hosts where each host has either zones or LDoms. All of the virtual systems will have their own dedicated storage on ZFS (and some may also get raw LUNs). All the SAN storage is delivered in fixed-size 33GB LUNs. The question I have for the community is whether it would be better to have a pool per virtual system, or to create a large pool and carve out ZFS
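If the single-large-pool route is taken, per-guest datasets with quotas and reservations recover much of the isolation of separate pools. A sketch, assuming a pool named guests already built from the 33GB LUNs:

  zfs create -o quota=33g -o reservation=33g guests/ldom1   # cap and guarantee space per guest
  zfs create -V 20g guests/zone2-disk                       # or a zvol where a guest needs a raw disk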
2010 Jan 12
3
set zfs:zfs_vdev_max_pending
We have a zpool made of 4 x 512 GB iSCSI LUNs located on a network appliance, and we are seeing poor read performance from the zfs pool. The release of Solaris we are using is Solaris 10 10/09 s10s_u8wos_08a SPARC, and the server itself is a T2000. I was wondering how we can tell whether the zfs_vdev_max_pending setting is impeding read performance of the zfs pool? (The pool consists of lots of small files.)
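The current value can be read, and changed on the running kernel, with mdb, and made persistent in /etc/system. A sketch (the value 10 is only the commonly discussed lower setting, not a recommendation):

  echo zfs_vdev_max_pending/D | mdb -k        # read the current value
  echo zfs_vdev_max_pending/W0t10 | mdb -kw   # set it to 10 live
  # to persist across reboots, add to /etc/system:
  #   set zfs:zfs_vdev_max_pending = 10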
2009 Jan 15
21
4 disk raidz1 with 3 disks...
Hello, I was hoping that this would work: http://blogs.sun.com/zhangfan/entry/how_to_turn_a_mirror I have 4x(1TB) disks, one of which is filled with 800GB of data (that I can't delete/back up somewhere else) > root@FSK-Backup:~# zpool create -f ambry raidz1 c4t0d0 c5t0d0 c5t1d0 > /dev/lofi/1 > root@FSK-Backup:~# zpool list > NAME SIZE USED AVAIL CAP HEALTH ALTROOT
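The trick in that blog post continues by taking the sparse lofi device offline, running the raidz degraded while the 800GB is copied in, and finally swapping the freed-up data disk in for the lofi member. A sketch of those remaining steps (c5t2d0 is a hypothetical name for the fourth disk):

  zpool offline ambry /dev/lofi/1           # run the raidz degraded, freeing the sparse file
  zpool replace ambry /dev/lofi/1 c5t2d0    # after copying the data in, swap in the real fourth disk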