similar to: ZFS read error

Displaying 20 results from an estimated 10000 matches similar to: "ZFS read error"

2007 Mar 23
2
ZFS ontop of SVM - CKSUM errors
Hi.

bash-3.00# uname -a
SunOS nfs-14-2.srv 5.10 Generic_125101-03 i86pc i386 i86pc

I created a first zpool (a stripe of 85 disks) and did some simple stress testing - everything seemed almost all right (~700 MB/s sequential reads, ~430 MB/s sequential writes). Then I destroyed the pool and put an SVM stripe on top of the same disks, utilizing the fact that ZFS had already put an EFI label on them and s0 represents almost the entire disk. Then on top of
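For orientation, a minimal sketch of the layering being described, assuming just two hypothetical slices in place of the original 85 disks (metadevice and pool names are made up):

metainit d100 1 2 c1t0d0s0 c1t1d0s0   # one SVM stripe of two slices
zpool create tank /dev/md/dsk/d100    # zpool on top of the metadevice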
2009 Apr 08
2
ZFS data loss
Hi, I have lost a ZFS volume and I am hoping to get some help to recover the information (a couple of months' worth of work :( ). I have been using ZFS for more than 6 months on this project. Yesterday I ran a "zvol status" command; the system froze and rebooted. When it came back the discs were not available. See below the output of "zpool status", "format"
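The usual first-response sequence in a pool-gone-missing case like this is roughly the following (pool name hypothetical; whether the pool comes back depends on the state of its labels):

zpool status            # what the booted system currently sees
zpool import            # scan devices for importable pools
zpool import -f mypool  # force the import if the pool is listed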
2009 Nov 17
1
upgrading to the latest zfs version
Hi guys, after reading the mailing list yesterday I noticed someone was after upgrading to ZFS v21 (deduplication). I'm after the same: I installed osol-dev-127 earlier, which comes with v19, and then followed the instructions on http://pkg.opensolaris.org/dev/en/index.shtml to bring my system up to date. However, the system reports that no updates are available and stays at ZFS v19. Any ideas?
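For what it's worth, the usual update path against the dev repository looks like the sketch below; deduplication needs zpool version 21, so the pool itself must also be upgraded once the new bits are booted. This assumes the publisher is already pointed at pkg.opensolaris.org/dev:

pkg refresh --full   # re-fetch the publisher catalogs
pkg image-update     # build and activate a new boot environment
# after rebooting into the new BE:
zpool upgrade -a     # bring pools up to the new on-disk version
zfs upgrade -a       # bring filesystems up to the new version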
2007 Jul 12
9
Again ZFS with expanding LUNs!
Hello, I know you had this discussion a few days ago, but I'm in the installation phase of our new production servers and I intend to migrate the data from UFS volumes to ZFS volumes in the near future. For this it must be ABSOLUTELY certain that I can resize the SAN LUNs, because during the last 4 years I have had to double the LUN size every year. I tried to resize a test volume
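A hedged sketch of how a grown LUN is usually picked up, with a hypothetical pool name; the autoexpand property and online -e only exist on newer releases, older ones need the export/import cycle:

zpool set autoexpand=on sanpool   # newer releases: grow automatically
zpool online -e sanpool c4t0d0    # expand one vdev in place
# older releases:
zpool export sanpool
zpool import sanpool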
2009 Nov 02
0
Kernel panic on zfs import (hardware failure)
Hey,

On Sat, Oct 31, 2009 at 5:03 PM, Victor Latushkin <Victor.Latushkin at sun.com> wrote:
> Donald Murray, P.Eng. wrote:
>>
>> Hi,
>>
>> I've got an OpenSolaris 2009.06 box that will reliably panic whenever
>> I try to import one of my pools. What's the best practice for
>> recovering (before I resort to nuking the pool and
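Before nuking such a pool, the options usually tried are sketched below (pool name hypothetical; the -F recovery mode only exists on builds with pool-recovery support, roughly b128 and later, and works by discarding the last few transactions):

zpool import             # list the pool and note its state and ID
zpool import -f tank     # plain forced import
zpool import -F -f tank  # recovery-mode import on newer builds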
2007 Jun 15
3
Virtual IP Integration
Has there been any discussion here about the idea of integrating a virtual IP into ZFS? It makes sense to me because of the integration of NFS and iSCSI via the sharenfs and shareiscsi properties. Since both of these depend on an IP, it would be pretty cool if there were also a virtual IP that automatically moved with the pool. Maybe something like "zfs set ip.nge0=x.x.x.x mypool"
2010 Feb 27
1
slow zfs scrub?
hi all I have a server running svn_131 and the scrub is very slow. I have a cron job that starts it every week, and by now it's been running for a while and is very, very slow:

scrub: scrub in progress for 40h41m, 12.56% done, 283h14m to go

The configuration is listed below, consisting of three raidz2 groups with seven 2 TB drives each. The root fs is on a pair of X25-M (gen 1)
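For context, the weekly cron entry behind a setup like this is typically a one-liner (pool name hypothetical); scrub I/O runs at low priority by design, so sustained foreground load will stretch the time estimate considerably:

0 3 * * 0 /usr/sbin/zpool scrub tank   # every Sunday at 03:00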
2007 Dec 03
2
Help replacing dual identity disk in ZFS raidz and SVM mirror
Hi, We have a number of 4200s set up using a combination of an SVM 4-way mirror and a ZFS raidz stripe. Each disk (of 4) is divided up like this:

/       6 GB   UFS  s0
swap    8 GB        s1
/var    6 GB   UFS  s3
metadb  50 MB  UFS  s4
/data   48 GB  ZFS  s5

For SVM we do a 4-way mirror of /, swap, and /var, so we have 3 SVM mirrors:
d0=root (submirrors d10, d20, d30, d40)
d1=swap (submirrors d11, d21, d31, d41)
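A hedged sketch of the usual order of operations when one such dual-role disk fails, assuming (hypothetically) that the failed disk is c0t2d0 and that its submirrors are the d2x devices in the layout above:

metadb -d c0t2d0s4     # drop the state replicas on the bad disk
metadetach -f d0 d20   # detach its submirror from each SVM mirror
# physically replace the disk, then copy the label from a healthy disk:
prtvtoc /dev/rdsk/c0t1d0s2 | fmthard -s - /dev/rdsk/c0t2d0s2
metadb -a c0t2d0s4     # recreate the replicas
metattach d0 d20       # reattach and resync each submirror
zpool replace pool c0t2d0s5   # resilver the raidz slice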
2009 Nov 04
1
ZFS non-zero checksum and permanent error with deleted file
Hello, I am actually using ZFS under FreeBSD, but maybe someone over here can help me anyway. I'd like some advice on whether I can still rely on one of my ZFS pools:

[user@host ~]$ sudo zpool clear zpool01
...
[user@host ~]$ sudo zpool scrub zpool01
...
[user@host ~]$ sudo zpool status -v zpool01
  pool: zpool01
 state: ONLINE
status: One or more devices has experienced an
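One detail worth knowing here: when the damaged file has since been deleted, zpool status -v lists it as a pair of hex numbers (e.g. zpool01:<0x2c5>, hypothetical), and the persistent error log may need two consecutive clean scrubs after the clear before such entries disappear:

zpool clear zpool01
zpool scrub zpool01   # wait for it to complete
zpool scrub zpool01   # a second clean pass rotates out the old error log
zpool status -v zpool01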
2006 Jan 27
2
Do I have a problem? (longish)
Hi, to keep the story short, I will describe the situation. I have 4 disks in a ZFS/SVM config:

c2t9d0   9G
c2t10d0  9G
c2t11d0  18G
c2t12d0  18G

c2t11d0 is divided in two:

selecting c2t11d0
[disk formatted]
/dev/dsk/c2t11d0s0 is in use by zpool storedge. Please see zpool(1M).
/dev/dsk/c2t11d0s1 is part of SVM volume stripe:d11. Please see metaclear(1M).
/dev/dsk/c2t11d0s2 is in use by zpool storedge. Please
2007 Sep 06
0
Zfs with storedge 6130
On 9/4/07 4:34 PM, "Richard Elling" <Richard.Elling at Sun.COM> wrote:
> Hi Andy,
> my comments below...
> note that I didn't see zfs-discuss at opensolaris.org in the CC for the
> original...
>
> Andy Lubel wrote:
>> Hi All,
>>
>> I have been asked to implement a ZFS-based solution using a StorEdge 6130 and
>> I'm chasing my own
2008 Aug 06
0
zfs status -v tries too hard?
After some errors were logged pointing to a problem with a ZFS file system, I ran zpool status followed by zpool status -v...

# zpool status
  pool: ehome
 state: ONLINE
status: One or more devices has experienced an error resulting in data
        corruption. Applications may be affected.
action: Restore the file in question if possible. Otherwise restore the
        entire pool from backup.
   see:
2007 Nov 16
0
ZFS mirror and sun STK 2540 FC array
Hi all, we have just bought a Sun X2200 M2 (4 GB RAM / 2 Opteron 2214 / 2 x 250 GB SATA2 disks, Solaris 10 Update 4) and a Sun StorageTek 2540 FC array (8 x 146 GB SAS disks, 1 RAID controller). The server is attached to the array with a single 4 Gb Fibre Channel link. I want to make a mirror using ZFS with this array. I have created 2 volumes on the array in RAID0 (128 KB stripe) presented to the host
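The ZFS side of this is a single command; a sketch assuming the two array volumes show up on the host as c2t0d0 and c2t1d0 (hypothetical names):

zpool create tank mirror c2t0d0 c2t1d0   # ZFS mirror across the two LUNs
zpool status tank                        # verify both sides are ONLINE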
2007 Feb 11
0
unable to mount legacy vol - panic in zfs:space_map_remove - zdb crashes
I have a 100 GB SAN LUN in a pool that had been running OK for about 6 months; it panicked the system this morning. The system was running S10U2. In the course of troubleshooting I've installed the latest recommended bundle, including KJP 118833-36 and ZFS patch 124204-03. Created as:

zpool create zfspool01 /dev/dsk/emcpower0c
zfs create zfspool01/nb60openv
zfs set mountpoint=legacy zfspool01/nb60openv
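When zdb crashes against a live pool, a hedged next step is to point it at the pool while it is exported; the -e flag reads the pool without importing it:

zpool export zfspool01
zdb -e zfspool01       # walk the exported pool's metadata
zdb -e -bb zfspool01   # full block traversal with statistics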
2006 Jun 22
1
zfs snapshot restarts scrubbing?
Hi, yesterday I implemented simple hourly snapshots on my filesystems. I also regularly initiate a manual "zpool scrub" on all my pools. Usually the scrubbing runs for about 3 hours, but after enabling hourly snapshots I noticed that the scrub is always restarted when a new snapshot is created - so basically it will never have the chance to finish:

# zpool scrub scratch
# zpool
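This restart-on-snapshot behavior was a known limitation of the scrub code at the time (later builds removed it); it is easy to demonstrate, with a hypothetical dataset name:

zpool scrub scratch
zfs snapshot scratch/data@hourly   # hypothetical hourly snapshot
zpool status scratch               # scrub shows it has started over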
2010 Jun 30
1
zfs rpool corrupt?????
Hello, has anyone encountered the following error message, running Solaris 10 u8 in an LDom?

bash-3.00# devfsadm
devfsadm: write failed for /dev/.devfsadm_dev.lock: Bad exchange descriptor
bash-3.00# zpool status -v rpool
  pool: rpool
 state: DEGRADED
status: One or more devices has experienced an error resulting in data
        corruption. Applications may be affected.
action: Restore the file in
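The usual triage for permanent errors on an rpool, sketched below; whether it is recoverable this way depends entirely on which files turn up in the list:

zpool status -v rpool   # list the files with permanent errors
# restore or delete each listed file from backup or package media, then:
zpool clear rpool
zpool scrub rpool       # confirm no new errors surface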
2006 Dec 12
3
ZFS Corruption
Please reply directly to me. I am seeing the message below. Is it possible to determine exactly which file is corrupted? I was thinking the OBJECT/RANGE info may point to it, but I don't know how to map that to a file.

# zpool status -v
  pool: u01
 state: ONLINE
status: One or more devices has experienced an error resulting in data
        corruption. Applications may be
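One way to map an OBJECT number from this output back to a filename is zdb against the dataset; a hedged sketch, with a made-up dataset name and object number (zdb takes the object number in decimal, while the status output prints hex):

zdb -ddddd u01/data 18977   # prints a "path" line if the object is still a file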
2011 Apr 24
2
zfs problem vdev I/O failure
Good morning, I have a problem with ZFS (ZFS filesystem version 4, ZFS storage pool version 15). Yesterday my machine running FreeBSD 8.2-RELEASE shut down with an "ad4 detached" error while I was copying a big file, and after the reboot two WD Green 1 TB drives said goodbye: one of them died, and the other shows ZFS errors:

Apr 24 04:53:41 Flash root: ZFS: vdev I/O failure, zpool=zroot path= offset=187921768448 size=512 error=6
2008 Aug 25
5
Unable to import zpool since system hang during zfs destroy
Hi all, I have a RAID-Z zpool made up of 4 SATA drives running on Nexenta 1.0.1 (OpenSolaris b85 kernel). It has on it some ZFS filesystems and a few volumes that are shared to various Windows boxes over iSCSI. On one particular iSCSI volume, I discovered that I had mistakenly deleted some files from the FAT32 partition on it. The files were still in a ZFS snapshot that was made earlier
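In this situation the hang typically recurs because the interrupted destroy resumes as part of the import. The options usually tried, sketched with a hypothetical pool name, are a forced import and, on builds with pool-recovery support, a recovery-mode import; neither is guaranteed, since the destroy will attempt to resume either way:

zpool import -f tank
zpool import -F -f tank   # recovery mode, newer builds only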
2009 Jul 25
1
OpenSolaris 2009.06 - ZFS Install Issue
I've installed OpenSolaris 2009.06 on a machine with 5 identical 1 TB WD Green drives to create a ZFS NAS. The intended install is one drive dedicated to the OS and the remaining 4 drives in a raidz1 configuration. The install works fine, but creating the raidz1 pool and rebooting causes the machine to report "Cannot find active partition" on reboot. Below is the command
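For reference, the data-pool side of this is one command (device names hypothetical). The boot complaint usually has nothing to do with the raidz pool itself: creating the pool puts EFI labels on the four data disks, and a BIOS set to boot from one of them will then fail to find an active partition, so checking the BIOS boot order is the usual first step:

zpool create tank raidz1 c8t1d0 c8t2d0 c8t3d0 c8t4d0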