Similar to: Does zpool clear delete corrupted files

Displaying 20 results from an estimated 8000 matches similar to: "Does zpool clear delete corrupted files"

2007 Dec 15
4
Is round-robin I/O correct for ZFS?
I'm testing an iSCSI multipath configuration on a T2000 with two disk devices provided by a NetApp filer. Both the T2000 and the NetApp have two Ethernet interfaces for iSCSI, going to separate switches on separate private networks. The scsi_vhci devices look like this in `format': 1. c4t60A98000433469764E4A413571444B63d0 <NETAPP-LUN-0.2-50.00GB>
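For reference, a quick way to check which load-balancing policy the Solaris multipath driver is actually using; the device path is taken from the excerpt above and the config line is the usual scsi_vhci default, so treat this as a sketch rather than a prescription:

  # Show the path state and load-balance policy for one multipathed LUN
  mpathadm show lu /dev/rdsk/c4t60A98000433469764E4A413571444B63d0s2

  # The global policy lives in /kernel/drv/scsi_vhci.conf; the usual default is:
  #   load-balance="round-robin";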
2008 May 04
2
Inconsistencies with scrub and zdb
Hi List, First of all: S10u4, 120011-14. So I have a weird situation. Earlier this week, I finally mirrored up two iSCSI-based pools. I had been wanting to do this for some time, because the availability of the data in these pools is important. One pool mirrored just fine, but the other pool is another story. First lesson (I think) is you should scrub your pools, at least those backed by
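A minimal sketch of the scrub-versus-zdb comparison the post is about; "tank" is a placeholder pool name and zdb's options vary by build, so this is an illustration only:

  zpool scrub tank        # start a scrub of the pool
  zpool status -v tank    # watch progress and list any files with permanent errors
  zdb -c tank             # independent checksum traversal of pool metadata (-cc for all data);
                          # its findings can disagree with what scrub reports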
2008 Jan 30
18
ZIL controls in Solaris 10 U4?
Is it true that Solaris 10 u4 does not have any of the nice ZIL controls that exist in the various recent OpenSolaris flavors? I would like to move my ZIL to solid state storage, but I fear I can't do it until I have another update. Heck, I would be happy to just be able to turn the ZIL off to see how my NFS on ZFS performance is affected before spending the $'s. Anyone
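For context, this is roughly what those controls look like on builds that do have them; the pool and device names are placeholders, and the tunable shown is the old all-or-nothing switch, so take it as a sketch:

  # Add a separate log (slog) device on releases that support it
  zpool add tank log c5t0d0

  # Where available, the ZIL could be disabled pool-wide via /etc/system (reboot required):
  #   set zfs:zil_disable=1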
2008 Nov 12
6
Inexpensive ZFS home server
For anyone looking for a cheap home ZFS server... Dell is having a sale on their PowerEdge SC440 for $199 (regular $598) through 11/12/2008. http://www.dell.com/content/products/productdetails.aspx/pedge_sc440?c=us&cs=04&l=en&s=bsd It's got a Dual Core Intel Pentium E2180, 2.0GHz, 1MB Cache, 800MHz FSB, and you can upgrade the memory (ECC too) to 2GB for $19. @$199, I just
2007 Apr 14
1
Move data from the zpool (root) to a zfs file system
Hi List, As a ZFS newbie, I foolishly copied my data set to the root zpool file system (a large iSCSI SAN array). Thus:
  # zpool create -f iscsi c4t19d0 c4t20d0 c4t21d0 c4t22d0 c4t23d0 c4t24d0
  # zpool list
  NAME    SIZE   USED   AVAIL  CAP  HEALTH  ALTROOT
  iscsi   9.53T  64.5K  5.34T  0%   ONLINE  -
  # zfs set mountpoint=/mydisks/iscsi iscsi
Then copied
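One common way out of this situation, sketched with hypothetical dataset and directory names: create a child filesystem and move the data into it, so the pool's root dataset stays empty:

  # Create a child filesystem under the pool and move the data off the root dataset
  zfs create iscsi/data
  # "mydata" is an illustrative directory name; the move crosses dataset boundaries,
  # so it is a copy-and-delete rather than an instant rename
  mv /mydisks/iscsi/mydata /mydisks/iscsi/data/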
2007 Sep 13
26
hardware sizing for a zfs-based system?
Hi all, I'm putting together an OpenSolaris ZFS-based system and need help picking hardware. I'm thinking about using this 26-disk case: [FYI: 2-disk RAID1 for the OS & 4*(4+2) RAIDZ2 for SAN] http://rackmountpro.com/productpage.php?prodid=2418 Regarding the mobo, CPUs, and memory - I searched Google and the ZFS site and all I came up with so far is that, for a
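For what it's worth, the 4*(4+2) layout mentioned above maps onto zpool syntax roughly as follows; all device names are placeholders, and the 2-disk OS mirror would be handled separately:

  # Data pool: four 6-disk raidz2 vdevs (4 data + 2 parity each)
  zpool create tank \
    raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
    raidz2 c1t6d0 c1t7d0 c2t0d0 c2t1d0 c2t2d0 c2t3d0 \
    raidz2 c2t4d0 c2t5d0 c2t6d0 c2t7d0 c3t0d0 c3t1d0 \
    raidz2 c3t2d0 c3t3d0 c3t4d0 c3t5d0 c3t6d0 c3t7d0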
2007 Sep 19
53
enterprise-scale redundant Solaris 10/ZFS server providing NFSv4/CIFS
We are looking for a replacement enterprise file system to handle storage needs for our campus. For the past 10 years, we have been happily using DFS (the distributed file system component of DCE), but unfortunately IBM killed off that product and we have been running without support for over a year now. We have looked at a variety of possible options, none of which have proven fruitful. We are
2007 Oct 02
53
Direct I/O ability with zfs?
We are using MySQL, and love the idea of using ZFS for this. We are used to using Direct I/O to bypass file system caching (let the DB do this). Does this exist for ZFS?
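ZFS of that era has no O_DIRECT-style bypass; the usual approximation, sketched below with placeholder dataset names and assuming a build new enough to have the primarycache property, is to limit what the ARC caches and match the recordsize to the database page size:

  zfs set primarycache=metadata tank/mysql   # keep only metadata in the ARC, let InnoDB cache data
  zfs set recordsize=16k tank/mysql          # match InnoDB's 16k page size (set before creating data files)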
2010 Nov 01
6
Excruciatingly slow resilvering on X4540 (build 134)
Hello, I'm working with someone who replaced a failed 1TB drive (50% utilized) on an X4540 running OS build 134, and I think something must be wrong. Last Tuesday afternoon, zpool status reported:
  scrub: resilver in progress for 306h0m, 63.87% done, 173h7m to go
A week being 168 hours, that put completion at sometime tomorrow night. However, he just reported zpool status shows:
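For anyone following along, the usual way to watch a resilver and judge whether it is actually making progress; "tank" is a placeholder pool name:

  zpool status -x       # one-line health summary across pools
  zpool status tank     # shows "resilver in progress ... % done ... to go"
  iostat -xn 5          # per-device throughput, to see whether the replacement disk is being written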
2008 Apr 01
29
OpenSolaris ZFS NAS Setup
If it's of interest, I've written up some articles on my experiences of building a ZFS NAS box which you can read here: http://breden.org.uk/2008/03/02/a-home-fileserver-using-zfs/ I used CIFS to share the filesystems, but it will be a simple matter to use NFS instead: issue the command 'zfs set sharenfs=on pool/filesystem' instead of 'zfs set
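The two sharing properties side by side, as a sketch; "pool/filesystem" is the placeholder used in the excerpt, and the CIFS variant assumes the OpenSolaris in-kernel SMB service is installed:

  zfs set sharenfs=on pool/filesystem    # share over NFS
  zfs set sharesmb=on pool/filesystem    # share over CIFS/SMB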
2009 Aug 04
7
Sol10u7: can't "zpool remove" missing hot spare
I'm using Solaris 10u6 updated to u7 via patches, and I have a pool with a mirrored pair and a (shared) hot spare. We reconfigured disks a while ago and now the controller is c4 instead of c2. The hot spare was originally on c2, and apparently on rebooting it didn't get found. So, I looked up what the new name for the hot spare was, then added it to the pool with "zpool
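The usual cleanup sequence for a spare that went missing after a controller renumbering, with placeholder pool, device, and GUID values; on some Solaris 10 updates removing a spare that shows as UNAVAIL can fail, which is what this thread is about:

  zpool status tank                   # note the spare's current name or its numeric GUID
  zpool remove tank c2t7d0            # try removing by the old name...
  zpool remove tank 1234567890123456  # ...or by GUID if the old device no longer exists
  zpool add tank spare c4t7d0         # re-add the spare under its new controller name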
2009 Nov 20
2
ZFS Send Priority and Performance
I have several X4540 Thor systems, each with one large zpool, that replicate data to a backup host via zfs send/recv. The process works quite well when there is little to no usage on the source systems. However, when the source systems are under load, replication slows down to a near crawl. Without load, replication usually streams along near 1 Gbps but drops down to anywhere between 0 - 5000
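The kind of pipeline being described, sketched with placeholder pool, snapshot, and host names; the incremental flags assume both sides already share the earlier snapshot:

  zfs snapshot -r tank@today
  zfs send -R -i tank@yesterday tank@today | ssh backuphost zfs recv -dF backup/tank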
2008 Nov 26
9
ZPool and Filesystem Sizing - Best Practices?
Hello, We have a new Thor here with 24TB of disk (the first of many, hopefully). We are trying to determine the best practices with respect to file system management and sizing. Previously, we have tried to keep each file system to a max size of 500GB to make sure we could fit it all on a single tape, and to minimise restore times and impact should we experience some kind of volume
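With ZFS that 500GB ceiling is usually enforced with per-filesystem quotas rather than fixed-size volumes; a sketch with placeholder names:

  zfs create tank/projects
  zfs set quota=500G tank/projects   # cap growth so a full dump still fits on one tape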
2010 Jan 12
6
x4500/x4540: do the internal controllers have a BBU?
Has anyone worked with an x4500/x4540 and know if the internal RAID controllers have a BBU? I'm concerned that we won't be able to turn off the write cache on the internal HDDs and SSDs to prevent data corruption in case of a power failure.
2009 Nov 17
14
X45xx storage vs 7xxx Unified storage
We are looking at adding to our storage. We would like ~20-30 TB. We have ~200 nodes (1100 cores) to feed data to using NFS, and we are looking for high reliability, good performance (up to at least 350 MBytes/second over a 10 GigE connection) and large capacity. For the X45xx (aka Thumper), capacity and performance seem to be there (we have 3 now). However, for system upgrades, maintenance
2009 Feb 11
8
Write caches on X4540
We're using some X4540s, with OpenSolaris 2008.11. According to my testing, to optimize our systems for our specific workload, I've determined that we get the best performance with the write cache disabled on every disk, and with zfs:zfs_nocacheflush=1 set in /etc/system. The only issue is setting the write cache permanently, or at least quickly. Right now, as it is,
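A sketch of the two settings referred to above; the /etc/system change only takes effect after a reboot, and the per-disk cache toggle is an interactive format(1M) session, so this is an outline rather than a script:

  # Tell ZFS not to issue cache-flush requests (the tunable named in the post)
  echo 'set zfs:zfs_nocacheflush=1' >> /etc/system

  # Per-disk write cache is toggled from format's expert menu:
  #   format -e  ->  select disk  ->  cache  ->  write_cache  ->  disable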
2011 Jul 07
8
Replacement disks for Sun X4500
I am bumping this thread because I too have the same question ... can I put modern 3TB disks (Hitachi Deskstars) into an old x4500? If not, would the x4540 accept them?
2010 Jun 07
20
Homegrown Hybrid Storage
Hi, I'm looking to build a virtualized web hosting server environment accessing files on a hybrid storage SAN. I was looking at using the Sun Fire X4540 with the following configuration:
  - 6 RAID-Z vdevs with one hot spare each (all 500GB 7200RPM SATA drives)
  - 2 Intel X-25 32GB SSDs as a mirrored ZIL
  - 4 Intel X-25 64GB SSDs as the L2ARC.
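Abbreviated to two of the six vdevs for readability, that layout maps onto zpool syntax roughly as follows; all pool and device names are placeholders:

  zpool create tank \
    raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 \
    raidz c0t6d0 c0t7d0 c1t0d0 c1t1d0 c1t2d0 c1t3d0 \
    log mirror c2t0d0 c2t1d0 \
    cache c2t2d0 c2t3d0 c2t4d0 c2t5d0
  zpool add tank spare c1t4d0 c1t5d0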
2009 Dec 15
7
ZFS Dedupe reporting incorrect savings
Hi, I created a zpool with 64k recordsize and enabled dedup on it:
  zpool create -O recordsize=64k TestPool device1
  zfs set dedup=on TestPool
I copied files onto this pool over NFS from a Windows client. Here is the output of zpool list:
  Prompt:~# zpool list
  NAME      SIZE  ALLOC  FREE  CAP  DEDUP  HEALTH  ALTROOT
  TestPool  696G  19.1G  677G  2%   1.13x  ONLINE  -
When I ran a
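Two places to look when the reported savings seem off; the pool-wide ratio comes from zpool, while the detailed dedup-table statistics come from zdb:

  zpool list TestPool    # the DEDUP column is the pool-wide dedup ratio
  zdb -DD TestPool       # dedup table histogram: how many blocks are referenced how many times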