similar to: zfs mount i/o error and workarounds

Displaying 20 results from an estimated 20000 matches similar to: "zfs mount i/o error and workarounds"

2009 Nov 02
0
Kernel panic on zfs import (hardware failure)
Hey, On Sat, Oct 31, 2009 at 5:03 PM, Victor Latushkin <Victor.Latushkin at sun.com> wrote: > Donald Murray, P.Eng. wrote: >> >> Hi, >> >> I've got an OpenSolaris 2009.06 box that will reliably panic whenever >> I try to import one of my pools. What's the best practice for >> recovering (before I resort to nuking the pool and
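A minimal recovery sketch for this class of failure, assuming a pool named tank (the post does not name the pool) and assuming a build new enough to have the -F recovery and readonly import options, which postdate some 2009-era releases:

    zpool import -F -n tank            # dry run: report what a rewind import would discard
    zpool import -o readonly=on tank   # read-only import, to copy data off untouched
    zpool import -F tank               # real rewind import, discarding the last few txgs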
2013 Feb 17
13
zfs raid1 error resilvering and mount
hi, I have a ZFS RAID1 (mirror) pool with 2 devices. The first device died, and booting from the second is not working... I grabbed the http://mfsbsd.vx.sk/ rescue flash image and booted from it to run zpool import (http://puu.sh/2402E). When I load zfs.ko and opensolaris.ko I see this message:
Solaris: WARNING: Can't open objset for zroot/var/crash
Solaris: WARNING: Can't open objset for zroot/var/crash
zpool status:
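From a rescue environment like mfsbsd, a typical import of a damaged root pool looks roughly like this (a sketch; the pool name zroot comes from the warnings above, the mountpoint is an assumption):

    zpool import                           # list pools the rescue kernel can see
    zpool import -f -o altroot=/mnt zroot  # force-import under /mnt (pool was last used by the dead install)
    zfs list -r zroot                      # verify the datasets survived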
2013 Oct 15
0
How to unstick ZFS resilver?
I have a large (88-drive) zpool in which a drive was recently replaced. (The pool has a bunch of duff Toshiba MK2001TRKB drives -- never ever pay money for these! -- and I'm trying to replace them one by one before they fail completely.) The resilver on the first drive replacement has been taking much, much too long, and currently it's stuck in this state:
  pool: export
 state: DEGRADED
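When a resilver appears wedged partway through a zpool replace, one commonly suggested (not guaranteed) sequence is to clear error state or cancel and retry the replacement, sketched here with hypothetical device names:

    zpool status -v export               # check whether the scanned-bytes counter moves at all
    zpool clear export                   # clearing error state sometimes kicks a stalled resilver
    zpool detach export c9t5d0           # last resort: detach the NEW half of the replace to cancel it
    zpool replace export c9t4d0 c9t5d0   # ...and start the replacement over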
2008 Mar 13
4
Disabling zfs xattr in S10u4
Hi, I want to disable extended attributes in my zfs on s10u4. I found out that the command to do is zfs set xattr=off <poolname>. But, I do not see this option in s10u4. How can I disable zfs extended attributes on s10u4? I'm not in the zfs-discuss alias. Please respond to me directly. Thanks Balaji
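The xattr property only exists on builds whose ZFS version is new enough; a quick way to check before setting it (dataset name hypothetical):

    zfs get xattr tank    # errors out if this release has no xattr property
    zfs set xattr=off tank   # the syntax from the post, if the property is supported
    zfs upgrade -v        # list the filesystem versions this build supports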
2019 Jun 14
3
zfs
Hi, folks, testing zfs. I'd created a raidz2 zpool and ran a large backup onto it. Then I pulled one drive (an 11-drive pool with one hot spare), and it resilvered with the hot spare. zpool status -x shows me:
 state: DEGRADED
status: One or more devices could not be used because the label is missing or invalid. Sufficient replicas exist for the pool to continue functioning in a degraded state.
2019 Jun 14
0
zfs [SOLVED]
mark wrote: > Hi, folks, > > > testing zfs. I'd created a raidz2 zpool and ran a large backup onto it. Then I > pulled one drive (an 11-drive pool with one hot spare), and it resilvered with > the hot spare. zpool status -x shows me state: DEGRADED > status: One or more devices could not be used because the label is missing > or invalid. Sufficient replicas exist for the pool to
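The usual resolution for this state, sketched with placeholder names: once the hot spare has finished resilvering, detach the dead disk so the spare is promoted to a permanent member, then clear the old error counters.

    zpool status -x           # note the name of the UNAVAIL/FAULTED device
    zpool detach tank c0t3d0  # promote the resilvered spare to a full vdev member
    zpool clear tank          # reset the stale error counts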
2009 Dec 22
2
Mirror of SAN Boxes with ZFS ? (split site mirror)
Hello, I'm thinking about a setup that looks like this:
- 2 headnodes with FC connectivity (OpenSolaris)
- 2 backend FC storages (disk shelves with RAID controllers presenting a huge 15 TB RAID5)
- 2 datacenters (distance 1 km with dark fibre)
- one headnode and one storage in each data center
(Sorry for this ascii art :) ( Data Center 1) <--1km--> (Data
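The ZFS half of such a split-site setup reduces to mirroring one LUN from each data center, roughly like this (a sketch; device names are hypothetical, each one standing for one SAN's 15 TB RAID5 LUN):

    zpool create sanpool mirror c2t0d0 c3t0d0   # one side of the mirror in each data center
    zpool status sanpool                        # every write now lands in both sites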
2011 Apr 24
2
zfs problem vdev I/O failure
Good morning, I have a problem with ZFS: ZFS filesystem version 4, ZFS storage pool version 15. Yesterday my machine running FreeBSD 8.2-RELENG shut down with an 'ad4 detached' error while I was copying a big file... and after the reboot my two WD Green 1 TB drives said goodbye. One of them died, and the other shows ZFS errors: Apr 24 04:53:41 Flash root: ZFS: vdev I/O failure, zpool=zroot path= offset=187921768448 size=512 error=6
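A salvage attempt for this situation might start like the sketch below (hedged: pool version 15 on FreeBSD 8.2 predates some newer recovery options, and the pool name is taken from the log line above):

    zpool import                           # check whether zroot is importable with a disk missing
    zpool import -f -o altroot=/mnt zroot  # force-import under /mnt to copy data off
    zpool status -v zroot                  # lists files with unrecoverable errors, if any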
2010 Mar 31
1
[PATCH] Add LocalCopy transfer method to transfer local files to a target
Also changes command line parsing to require a pool to be specified when using libvirt output, meaning storage will always be copied. ---
 MANIFEST                                 |  1 +
 lib/Sys/VirtV2V/Connection.pm            | 74 +++++++++---------
 lib/Sys/VirtV2V/Connection/LibVirt.pm    | 18 +++--
 lib/Sys/VirtV2V/Connection/LibVirtXML.pm | 11 ++-
 lib/Sys/VirtV2V/Target/LibVirt.pm
2018 Jan 14
0
Volume cannot write data when a quota limits its capacity and the volume is mounted a second time, on arm64 (aarch64) architecture
Thanks for reading this email. I found a problem while using GlusterFS. First, I created a Distributed-Dispersed volume on three nodes and limited the volume's capacity with the quota command; this volume is auto-mounted on /run/gluster/VOLUME_NAME. That volume can be read and written normally. Afterwards, I manually mounted the volume at another path to provide data storage for SAMBA and iSCSI services; after
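The setup described above can be reconstructed roughly as follows (a sketch; the volume name, quota size, server, and paths are placeholders):

    gluster volume quota VOLUME_NAME enable
    gluster volume quota VOLUME_NAME limit-usage / 10GB   # capacity limit on the volume root
    mount -t glusterfs node1:/VOLUME_NAME /mnt/storage    # the second, manual mount for SAMBA/iSCSI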
2010 Apr 14
1
Checksum errors on and after resilver
Hi all, I recently experienced a disk failure on my home server and observed checksum errors while resilvering the pool and on the first scrub after the resilver had completed. Now everything seems fine, but I'm posting this to get help with calming my nerves and detecting any possible future faults. Let's start with some specs. OSOL 2009.06 Intel SASUC8i (w/ LSI 1.30IT FW) Gigabyte
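For nerve-calming after a resilver, the standard check is another scrub, clearing the counters only once it comes back clean (pool name hypothetical):

    zpool scrub tank      # re-read and verify every allocated block
    zpool status -v tank  # afterwards: any new CKSUM counts or named damaged files?
    zpool clear tank      # reset the counters once a scrub passes cleanly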
2010 Mar 17
0
checksum errors increasing on "spare" vdev?
Hi, One of my colleagues was confused by the output of 'zpool status' on a pool where a hot spare is being resilvered in after a drive failure:
$ zpool status data
  pool: data
 state: DEGRADED
status: One or more devices is currently being resilvered. The pool will continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scrub:
2017 Nov 07
0
error logged in fuse-mount log file
Hi, I am using glusterfs 3.10.1 and I am seeing the message below in the fuse-mount log file. What does this error mean? Should I worry about it, and how do I resolve it? [2017-11-07 11:59:17.218973] W [MSGID: 109005] [dht-selfheal.c:2113:dht_selfheal_directory] 0-glustervol-dht: Directory selfheal failed: 1 subvolumes have unrecoverable errors. path = /fol1/fol2/fol3/fol4/fol5, gfid
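For dht (distribute) layout problems like this one, a commonly suggested step is a fix-layout rebalance; hedged, since the right fix depends on why a subvolume reports unrecoverable errors (the volume name glustervol is taken from the log prefix):

    gluster volume status glustervol                      # first confirm all bricks are up
    gluster volume rebalance glustervol fix-layout start  # rewrite directory layouts across subvolumes
    gluster volume rebalance glustervol status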
2010 Dec 05
4
Zfs ignoring spares?
Hi all, I have installed a new server with 77 2TB drives in 11 7-drive RAIDz2 VDEVs, all on WD Black drives. Now, it seems two of these drives were bad: one of them had a bunch of errors, the other was very slow. After offlining these and then replacing them with online spares, the resilver ended and I thought it'd be OK. Apparently not. Although the resilver succeeds, the pool status
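Two things worth checking here, sketched with a placeholder pool name: whether the spares actually show as INUSE, and whether automatic replacement is expected at all (autoreplace is off by default, so spares otherwise engage only via an explicit replace or the fault agent):

    zpool status tank              # the spares section should read INUSE, not AVAIL, mid-failure
    zpool get autoreplace tank
    zpool set autoreplace=on tank  # opt in to automatic replacement on device swap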
2011 Feb 01
1
zpool-poolname has 99 threads
After an upgrade of a busy server to Oracle Solaris 10 9/10, I notice a process called zpool-poolname that has 99 threads. This seems to be a limit, as it never goes above that. It is lower on workstations. The zpool(1M) man page says only: Processes: Each imported pool has an associated process, named zpool-poolname. The threads in this process are the pool's
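To see what those threads are doing, per-thread inspection on Solaris 10 looks roughly like this (a sketch; the pgrep pattern assumes the process name shown above):

    ps -efL | grep zpool-                      # one line per LWP of the pool's I/O taskq process
    prstat -mL -p $(pgrep -f zpool-poolname)   # per-thread microstate accounting for that process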
2017 Nov 10
0
Error logged in fuse-mount log file
Hi, Comments inline. Regards, Nithya On 9 November 2017 at 15:05, Amudhan Pandian <amudh_an at hotmail.com> wrote: > resending mail from another ID; not sure whether the mail reached the mailing list. > > > ---------- Forwarded message ---------- > From: *Amudhan P* <amudhan83 at gmail.com> > Date: Tue, Nov 7, 2017 at 6:43 PM > Subject: error logged in fuse-mount log
2012 Nov 11
0
Expanding a ZFS pool disk in Solaris 10 on VMWare (or other expandable storage technology)
Hello all, This is not so much a question but rather a "how-to" for posterity. Comments and possible fixes are welcome, though. I'm toying (for work) with a Solaris 10 VM, and it has a dedicated virtual HDD for data and zones. The template VM had a 20 GB disk, but a particular application needs more. I hoped ZFS autoexpand would do the trick transparently, but it turned out
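The sequence that makes this work, hedged as a generic outline rather than the post's exact steps (pool and device names hypothetical), combines the autoexpand property with an explicit expanding online of the grown device:

    zpool set autoexpand=on data   # let the pool grow when its device grows
    # ...enlarge the virtual disk in VMware, rescan/relabel, then:
    zpool online -e data c1t1d0    # -e expands the vdev into the newly added space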
2013 Oct 23
0
[PATCH] btrfs-progs: add filter for deleted but uncleaned subvolumes
New option to subvolume list that acts as a global filter and applies the other filters to either live subvolumes or the uncleaned ones. The path to a deleted subvolume is lost at deletion time; sample output looks like: ID 259 gen 7 top level 0 path <FS_TREE>/DELETED Signed-off-by: David Sterba <dsterba@suse.cz> ---
 btrfs-list.c | 24 +++++++++++++++++++++----
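Assuming the filter landed as the -d option of subvolume list (an assumption; check the btrfs-progs of that vintage), usage would look like:

    btrfs subvolume list -d /mnt   # show only deleted-but-not-yet-cleaned subvolumes
    # sample row, per the patch: ID 259 gen 7 top level 0 path <FS_TREE>/DELETED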
2009 Jul 13
7
OpenSolaris 2008.11 - resilver still restarting
Just look at this. I thought all the restarting-resilver bugs were fixed, but it looks like something odd is still happening at the start. Status immediately after starting the resilver:
# zpool status
  pool: rc-pool
 state: DEGRADED
status: One or more devices has experienced an unrecoverable error. An attempt was made to correct the error. Applications are unaffected.
action: Determine
2011 Apr 06
3
[PATCH V2] Btrfs: fix subvolume mount by name problem when default mount subvolume is set
We create two subvolumes (meego_root and meego_home) in the btrfs root directory and set meego_root as the default mount subvolume. After we remount the filesystem, meego_root is mounted at the top directory by default. Then, when we try to mount meego_home (subvol=meego_home) on a subdirectory, it fails. The problem is that when the default mount subvolume is set to meego_root, we search for meego_home inside meego_root but can not
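The reported setup can be reproduced roughly like this (a sketch; the device and mountpoints are hypothetical):

    btrfs subvolume create /mnt/meego_root
    btrfs subvolume create /mnt/meego_home
    btrfs subvolume list /mnt                       # note meego_root's subvolume ID
    btrfs subvolume set-default <ID> /mnt           # make meego_root the default mount
    umount /mnt && mount /dev/sdb1 /mnt             # now mounts meego_root at the top
    mount -o subvol=meego_home /dev/sdb1 /mnt/home  # this is the mount that failed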