similar to: some ZFS issues

Displaying 20 results from an estimated 7000 matches similar to: "some ZFS issues"

2010 Apr 05
3
no hot spare activation?
While testing a zpool with a different storage adapter using my "blkdev" device, I ran a test that made a disk unavailable -- all attempts to read from it report EIO. I expected my configuration (a 3-disk test, with 2 disks in a RAIDZ and a hot spare) to activate the hot spare automatically. But I'm finding that ZFS does not behave this way
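For reference, a minimal sketch of the setup being described, with hypothetical device names (c1t0d0, c1t1d0, c1t2d0). On Solaris the automatic activation is performed by the FMA retire agent, so when it does not fire, the spare can be substituted by hand:

    # 2-disk raidz plus one designated hot spare
    zpool create testpool raidz c1t0d0 c1t1d0 spare c1t2d0

    # manual fallback: swap the spare in for the failed disk
    zpool replace testpool c1t0d0 c1t2d0
    zpool status testpool    # the spare should now show as INUSE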
2009 Dec 15
7
ZFS Dedupe reporting incorrect savings
Hi, I created a zpool with a 64k recordsize and enabled dedup on it:

    zpool create -O recordsize=64k TestPool device1
    zfs set dedup=on TestPool

I copied files onto this pool over NFS from a Windows client. Here is the output of zpool list:

    Prompt:~# zpool list
    NAME      SIZE   ALLOC  FREE  CAP  DEDUP  HEALTH  ALTROOT
    TestPool  696G   19.1G  677G   2%  1.13x  ONLINE  -

When I ran a
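A note on reading those numbers: ALLOC is the post-dedup (physical) figure, so the logical amount of data written is roughly ALLOC times the DEDUP ratio. A sketch using the values quoted above:

    # 19.1G physical * 1.13 dedup ratio ~= 21.6G logical,
    # i.e. roughly 2.5G saved -- the savings are already folded into ALLOC
    zpool list TestPool

    # dedup-table histogram and overall ratio, for a closer look
    zdb -DD TestPool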
2007 Nov 26
1
some ZFS issues
Hello, I have read a lot about ZFS and I find it great, especially the checksums against silent data corruption, the COW writing policy, the snapshots, and of course the storage pooling. But there are some points I have problems with:
- What happens to the pools if the machine is shut down and rebooted? Are the pools automatically exported and imported again on boot up? Where is the
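To the first question: pools are not exported at shutdown. A host records its imported pools in a cache file (/etc/zfs/zpool.cache on Solaris) and re-imports them automatically at boot; an explicit export is only needed when moving a pool to another machine. A sketch, assuming a pool named tank:

    zpool export tank    # only when handing the pool to another host
    zpool import         # list pools visible on attached devices
    zpool import tank    # bring it back by name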
2010 May 29
4
ZFS and IBM SDD Vpaths
I have 6 ZFS pools, and after a reboot (init 6) the vpath device path names have changed for some unknown reason. But I can't detach, remove and reattach to the new device names.... ANY HELP! please

    pjde43m01  -  -  -  -  FAULTED  -
    pjde43m02  -  -  -  -  FAULTED  -
    pjde43m03  -  -  -  -  FAULTED  -
    poas43m01  -  -  -  -
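When device paths change underneath a pool, the usual remedy is to export it and re-import with -d pointed at the directory holding the new device nodes, so ZFS rescans the labels. A sketch, assuming the vpath nodes appear under /dev/dsk:

    zpool export pjde43m01
    zpool import -d /dev/dsk pjde43m01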
2008 Apr 29
24
recovering data from a detached mirrored vdev
Hi, my system (Solaris b77) was physically destroyed and I lost data saved in a zpool mirror. The only thing left is a detached vdev from the pool. I'm aware that the uberblock is gone and that I can't import the pool. But I still hope there is a way or a tool (like TCT, http://www.porcupine.org/forensics/) to recover at least some of the data. Thanks in advance for
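One place to start is inspecting what survives on the device: zdb can dump the four on-disk labels without importing anything. The device name below is hypothetical, and on a cleanly detached vdev the labels are overwritten -- which is exactly why a normal import fails:

    # dump the ZFS labels (uberblock array included) from the raw device
    zdb -l /dev/dsk/c1t1d0s0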
2006 Jun 27
28
Supporting ~10K users on ZFS
OK, I know that there's been some discussion on this before, but I'm not sure that any specific advice came out of it. What would the advice be for supporting a largish number of users (10,000 say) on a system that supports ZFS? We currently use vxfs and assign a user quota, and backups are done via Legato Networker. From what little I currently understand, the general
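The usual ZFS answer here is one file system per user, since file systems are cheap and quota is a per-dataset property. A minimal sketch, assuming a pool named tank and a stand-in user list:

    # parent dataset; children inherit the mountpoint hierarchy
    zfs create -o mountpoint=/export/home tank/home
    for u in alice bob carol; do        # stand-ins for the real 10,000 users
        zfs create tank/home/$u
        zfs set quota=1G tank/home/$u   # per-user quota
    done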
2008 Jan 31
7
mounting a copy of a zfs pool/file system while the original is still active
Hello Sun gurus, I do not know if this is supported. I have created a zpool consisting of SAN resources and created a ZFS file system on it. Using third-party software I have taken snapshots of all LUNs in the zfs pool. My question is: in a recovery situation, is there a way for me to mount the snapshots and import the pool while the original is still active? Right now all I am able to do is export
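The wrinkle here is that a LUN-level copy carries the original pool's name and GUID. zpool import can import a pool by its numeric ID under a different name, though whether two pools with the same GUID can coexist depends on the ZFS version; historically the copy could not be imported while the original stayed active. A sketch (the ID shown is made up):

    zpool import                           # lists importable pools with their IDs
    zpool import 6789123456789012345 poolcopy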
2008 Mar 12
5
[Bug 752] New: zfs set keysource no longer works on existing pools
http://defect.opensolaris.org/bz/show_bug.cgi?id=752
Summary: zfs set keysource no longer works on existing pools
Classification: Development
Product: zfs-crypto
Version: unspecified
Platform: Other
OS/Version: Solaris
Status: NEW
Severity: blocker
Priority: P1
Component: other
AssignedTo:
2007 Jul 26
8
Read-only (forensic) mounts of ZFS
Hi, I'm looking into forensic aspects of ZFS, in particular ways to use ZFS tools to investigate ZFS file systems without writing to the pools. I'm working on a test suite of file system images within VTOC partitions. At the moment, these only have 1 file system per pool per VTOC partition for simplicity's sake, and I'm using Solaris 10 6/06, which may not
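Two hedged pointers for this kind of work: later ZFS releases grew an explicit read-only import (it is not in Solaris 10 6/06), and zdb can examine on-disk structures without importing at all. Device and pool names below are hypothetical:

    # read-only import, where the ZFS version supports it
    zpool import -o readonly=on evidencepool

    # inspect labels/uberblocks with no import and no writes
    zdb -l /dev/dsk/c0t0d0s0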
2007 Dec 31
4
Help! ZFS pool is UNAVAILABLE
Hi All, I posted this in a different thread, but it was recommended that I post in this one. Basically, I have a 3-drive raidz array on internal Seagate drives, running build 64nv. I purchased 3 additional USB drives with the intention of mirroring and then migrating the data to the new USB drives. I accidentally added the 3 USB drives as a raidz to my original storage pool, so now I have 2
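Top-level vdevs could not be removed from a pool at the time (and even current OpenZFS device removal does not cover raidz vdevs), so the practical lesson is to preview the change first. zpool add takes -n for exactly this; device names below are hypothetical:

    # -n prints the resulting layout WITHOUT committing it -- this would
    # have shown the stray second raidz before it became permanent
    zpool add -n tank raidz c2t0d0 c2t1d0 c2t2d0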
2005 Nov 20
2
ZFS & small files
First - many, many congrats to team ZFS. Developing/writing a new Unix fs is a very non-trivial exercise with zero tolerance for developer bugs. I just loaded build 27a on a w1100z with a single AMD 150 CPU (2 GB RAM) and a single (for now) SCSI disk drive: FUJITSU MAP3367NP (Revision: 0108) hooked up to the built-in SCSI controller (the only device on the SCSI bus). My initial ZFS test was to
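A small-file test along those lines can be scripted directly; a sketch, with an arbitrary file count, size, and path:

    # write 10,000 4 KB files under /tank/smallfiles and time the run
    mkdir -p /tank/smallfiles
    time sh -c '
      i=0
      while [ $i -lt 10000 ]; do
        dd if=/dev/zero of=/tank/smallfiles/f$i bs=4k count=1 2>/dev/null
        i=$((i+1))
      done
    '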
2012 Jun 12
2
lost ZFS pool
Hail, I write just to make sure it's dead. I've lost the first disk in a ZFS pool (JBOD). Now I can't mount it with only the second disk. The first disk clicks to death :(

    [root@optimus ~]# zpool status
      pool: pool
     state: UNAVAIL
    status: One or more devices could not be opened. There are insufficient
            replicas for the pool to continue functioning.
    action: Attach the missing
2009 Nov 11
20
zfs eradication
Hi, I was discussing the common practice of disk eradication used by many firms for security. I was thinking this may be a useful feature for ZFS: an option to eradicate data as it is removed, meaning that after the last reference/snapshot is gone and a block is freed, the eradication patterns are written back to the freed blocks. By any chance, has this been discussed or considered before?
2008 Jun 02
6
[Bug 2114] New: delegation_004: a non-root user can't do 'zfs key -c' with keychange delegated
http://defect.opensolaris.org/bz/show_bug.cgi?id=2114
Summary: delegation_004: a non-root user can't do 'zfs key -c' with keychange delegated
Classification: Development
Product: zfs-crypto
Version: unspecified
Platform: Other
OS/Version: Solaris
Status: NEW
Severity: normal
2009 Mar 11
6
Export ZFS via ISCSI to Linux - Is it stable for production use now?
Hello, I want to set up an OpenSolaris box as a centralized storage server, using ZFS as the underlying FS, on RAID 10 SATA disks. I will export the storage blocks using iSCSI to RHEL 5 (fewer than 10 clients), and I will format the partitions as ext3. I want to ask:
1. Is this setup suitable for mission-critical use now?
2. Can I use LVM with this setup?
Currently we are using NFS as the
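For the block-export side, the OpenSolaris of that era made this a one-property affair: carve a zvol out of the pool and flip shareiscsi on (later systems moved to COMSTAR instead). Pool and volume names are assumptions:

    # a 100 GB block device (zvol) backed by the pool
    zfs create -V 100g tank/rhelvol

    # expose it as an iSCSI LUN via the legacy iscsitgt target
    zfs set shareiscsi=on tank/rhelvol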
2009 Apr 28
1
zfs-fuse mirror unavailable after upgrade to ubuntu 9.04
Hi there,

    juliusr@rainforest:~$ cat /etc/issue
    Ubuntu 9.04 \n \l
    juliusr@rainforest:~$ dpkg -l | grep -i zfs-fuse
    ii  zfs-fuse  0.5.1-1ubuntu5

I have two 320 GB SATA disks connected to a PCI RAID controller:

    juliusr@rainforest:~$ lspci | grep -i sata
    00:08.0 RAID bus controller: Silicon Image, Inc. SiI 3512 [SATALink/SATARaid] Serial ATA Controller (rev
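A common cause after a distribution upgrade is /dev/sdX nodes being renumbered out from under the pool. Importing via the stable by-id names usually clears it; the pool name below is a stand-in:

    zpool export mypool
    zpool import -d /dev/disk/by-id mypool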
2008 Apr 28
3
[Bug 1657] New: tests/functional/acl/nontrivial/ zfs_acl_cp_001_pos causes panic
http://defect.opensolaris.org/bz/show_bug.cgi?id=1657
Summary: tests/functional/acl/nontrivial/zfs_acl_cp_001_pos causes panic
Classification: Development
Product: zfs-crypto
Version: unspecified
Platform: Other
OS/Version: Solaris
Status: NEW
Severity: critical
Priority: P2
2008 May 26
5
[Bug 2033] New: 'zfs create' causes panic if key file doesn't exist
http://defect.opensolaris.org/bz/show_bug.cgi?id=2033
Summary: 'zfs create' causes panic if key file doesn't exist
Classification: Development
Product: zfs-crypto
Version: unspecified
Platform: Other
OS/Version: Solaris
Status: NEW
Severity: minor
Priority: P2
Component:
2008 Mar 27
4
dsl_dataset_t pointer during 'zfs create' changes
I've noticed that the dsl_dataset_t that points to a given dataset changes during the lifetime of a 'zfs create' command. We start out with one dsl_dataset_t* during dmu_objset_create_sync(), but by the time we are later mounting the dataset we have a different in-memory dsl_dataset_t* referring to the same dataset. This causes me a big issue with per-dataset
2006 May 16
2
Status of zfs create time options
What's the status of the zfs create-time option setting? Is someone working on it yet? -- Darren J Moffat
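The feature being asked about did eventually land as -o on zfs create, applying properties atomically at creation time. A sketch of that syntax, with hypothetical names:

    zfs create -o compression=on -o mountpoint=/export/data tank/data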