similar to: ZFS filesystems not mounted on reboot with Solaris 10 10/09

Displaying 20 results from an estimated 20000 matches similar to: "ZFS filesystems not mounted on reboot with Solaris 10 10/09"

2007 Sep 19
7
ZFS Solaris 10 Update 4 Patches
The latest ZFS patches for Solaris 10 are now available:
120011-14 - SunOS 5.10: kernel patch
120012-14 - SunOS 5.10_x86: kernel patch
ZFS Pool Version available with patches = 4
These patches will provide access to all of the latest features and bug fixes:
Features:
PSARC 2006/288 zpool history
PSARC 2006/308 zfs list sort option
PSARC 2006/479 zfs receive -F
PSARC 2006/486 ZFS canmount
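A minimal sketch of checking and upgrading the on-disk pool version after applying the patches; the pool name 'tank' is a placeholder:

    # list the pool versions the patched kernel supports
    zpool upgrade -v
    # report any pools still on an older on-disk version
    zpool upgrade
    # upgrade one pool to the newest supported version (a one-way operation)
    zpool upgrade tank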
2013 Dec 17
2
Setting up a lustre zfs dual mgs/mdt over tcp - help requested
Hi all, Here is the situation: I have 2 nodes, MDS1 and MDS2 (10.0.0.22, 10.0.0.23), which I wish to use as a failover MGS, active/active MDT with zfs. I have a jbod shelf with 12 disks, seen by both nodes as DAS (the shelf has 2 sas ports, connected to a sas hba on each node), and I am using lustre 2.4 on centos 6.4 x64. I have created 3 zfs pools: 1. mgs: # zpool
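A hedged sketch of the usual shape of such a setup, assuming Lustre 2.4 syntax as I recall it; the pool, dataset, and device names are placeholders:

    # create a mirrored pool for the MGS (device names are placeholders)
    zpool create mgs mirror /dev/disk/by-id/diskA /dev/disk/by-id/diskB
    # format the MGS target with a ZFS backend, declaring both failover nodes
    mkfs.lustre --mgs --backfstype=zfs \
      --servicenode=10.0.0.22@tcp --servicenode=10.0.0.23@tcp mgs/mgt
    # mount the target on whichever node is currently active
    mkdir -p /mnt/mgs && mount -t lustre mgs/mgt /mnt/mgs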
2012 Jun 13
0
ZFS NFS service hanging on Sunday morning problem
> > Shot in the dark here: > > What are you using for the sharenfs value on the ZFS filesystem? Something like rw=.mydomain.lan ? They are IP blocks or hosts specified as FQDNs, e.g., pptank/home/tcrane sharenfs rw=@192.168.101/24,rw=serverX.xx.rhul.ac.uk:serverY.xx.rhul.ac.uk > > I've had issues where a ZFS server loses connectivity to the primary DNS server and
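For reference, a sketch of writing an equivalent access list as a single option, assuming Solaris share_nfs syntax; the dataset and host names come from the quoted output:

    # subnet plus two named hosts, colon-separated inside one rw= list
    zfs set sharenfs='rw=@192.168.101/24:serverX.xx.rhul.ac.uk:serverY.xx.rhul.ac.uk' pptank/home/tcrane
    # confirm what the NFS server is actually exporting
    zfs get sharenfs pptank/home/tcrane
    share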
2013 Aug 21
1
Properties list for zfs in FreeBSD
Hi: Where can I find a list of properties (-o/-O property=value) for creating a zpool? I meant something like:
#zpool create \
  -o ashift=12 \
  -O dedup=off -O autoexpand=off -O atime=off \
  -O canmount=off \
  -O compression=lz4 \
  -O normalization=formD \
  -O mountpoint=/jail \
  tank \
  mirror \
  /dev/gptid/diskname0 \
  /dev/gptid/diskname1 \
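A hedged pointer, assuming stock FreeBSD documentation: pool-level properties (-o) are listed in zpool(8) and dataset-level properties (-O) in zfs(8); note that autoexpand is a pool property, so it belongs under -o rather than -O. An existing pool will also enumerate them all:

    # pool properties (-o at create time)
    man zpool
    # filesystem/dataset properties (-O at create time)
    man zfs
    # dump every property with its current value
    zpool get all tank
    zfs get all tank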
2010 Jun 30
1
zfs rpool corrupt?????
Hello, Has anyone encountered the following error message, running Solaris 10 u8 in an LDom?
bash-3.00# devfsadm
devfsadm: write failed for /dev/.devfsadm_dev.lock: Bad exchange descriptor
bash-3.00# zpool status -v rpool
  pool: rpool
 state: DEGRADED
status: One or more devices has experienced an error resulting in data corruption. Applications may be affected.
action: Restore the file in
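For context, a sketch of the usual follow-up when zpool status reports data corruption; 'rpool' comes from the quoted output, the rest is an assumption about this situation:

    # list the individual files affected by the corruption
    zpool status -v rpool
    # after restoring the damaged files from backup, reset the error counters
    zpool clear rpool
    # scrub to confirm no further checksum errors surface
    zpool scrub rpool
    zpool status rpool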
2008 Jan 07
0
CR 6647661 <User 1-5Q-12446>, Now responsible engineer P2 kernel/zfs "set once" / "create time only" properties can't be set for pool level dataset
*Synopsis*: "set once" / "create time only" properties can't be set for pool level dataset
Due to a change requested by <User 1-5Q-12446>, <User 1-5Q-12446> is now the responsible engineer for:
CR 6647661 changed on Jan 7 2008 by <User 1-5Q-12446>
=== Field ============ === New Value ============= === Old Value =============
Responsible Engineer
2009 Apr 15
0
CR 6647661 Updated, P2 kernel/zfs "set once" / "create time only" properties can't be set for pool level dataset
*Synopsis*: "set once" / "create time only" properties can't be set for pool level dataset
CR 6647661 changed on Apr 15 2009 by <User 1-ERV-6>
=== Field ============ === New Value ============= === Old Value =============
See Also 6828754
====================== ===========================
2011 Jun 30
1
cross platform (freebsd) zfs pool replication
Hi, I have two servers running: freebsd with a zpool v28 and a nexenta (opensolaris b134) running zpool v26. Replication (with zfs send/receive) from the nexenta box to the freebsd box works fine, but I have a problem accessing my replicated volume. When I'm typing and autocomplete with the tab key the command cd /remotepool/us (for /remotepool/users) I get a panic. check the panic @
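A minimal sketch of the replication path described, assuming ssh connectivity between the boxes; the host and snapshot names are placeholders, the dataset names come from the mail:

    # on the nexenta box: snapshot and stream to the freebsd box
    zfs snapshot remotepool/users@repl1
    zfs send remotepool/users@repl1 | ssh freebsd-host zfs receive -F remotepool/users
    # on the freebsd box: check the received filesystem before touching it
    zfs list -r remotepool
    zfs get mounted,mountpoint remotepool/users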
2008 Jul 15
1
Cannot share RW, "Permission Denied" with sharenfs in ZFS
Hi everyone, I have just installed Solaris and have added a 3x500GB raidz drive array. I am able to use this pool ('tank') successfully locally, but when I try to share it remotely, I can only read, I cannot execute or write. I didn't do anything other than the default 'zfs set sharenfs=on tank'... how can I get it so that any allowed user can access
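A hedged sketch of the usual suspects here: sharenfs=on does export read-write, but root is squashed and the files' own POSIX permissions still apply over NFS; 'adminhost' is a placeholder:

    # loosen the share: read-write, and let one trusted host keep root
    zfs set sharenfs='rw,root=adminhost' tank
    zfs get sharenfs tank
    # ordinary permissions still apply over NFS; check and adjust the modes
    ls -ld /tank
    chmod 775 /tank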
2008 Jul 15
2
Cannot share RW, "Permission Denied" with sharenfs in ZFS
Hi everyone, I have just installed Solaris and have added a 3x500GB raidz drive array. I am able to use this pool ('tank') successfully locally, but when I try to share it remotely, I can only read, I cannot execute or write. I didn't do anything other than the default 'zfs set sharenfs=on tank'... how can I get it so that any allowed user can access
2006 Jul 26
9
zfs questions from Sun customer
Please reply to david.curtis at sun.com
******** Background / configuration **************
zpool will not create a storage pool on fibre channel storage. I'm attached to an IBM SVC using the IBMsdd driver. I have no problem using SVM metadevices and UFS on these devices.
List steps to reproduce the problem (if applicable):
Build Solaris 10 Update 2 server
Attach to an external
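A hedged sketch of the first checks one might run in this situation; the vpath device name is a guess at the IBMsdd naming scheme, not something confirmed by the mail:

    # confirm the multipath device carries a valid label
    prtvtoc /dev/rdsk/vpath1c
    # try forcing pool creation directly on the vpath device
    zpool create -f tank /dev/dsk/vpath1c
    zpool status tank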
2011 Feb 01
1
zpool-poolname has 99 threads
After an upgrade of a busy server to Oracle Solaris 10 9/10, I notice a process called zpool-poolname that has 99 threads. This seems to be a limit, as it never goes above that. It is lower on workstations. The `zpool' man page says only:
Processes
Each imported pool has an associated process, named zpool-poolname. The threads in this process are the pool's
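A small sketch of confirming the count, assuming standard Solaris tools; 'mypool' is a placeholder pool name:

    # find the per-pool process and report its LWP (thread) count
    pgrep -f zpool-mypool
    ps -o pid,nlwp,comm -p $(pgrep -f zpool-mypool)
    # or watch the individual threads live
    prstat -L -p $(pgrep -f zpool-mypool)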
2006 Oct 31
0
6345875 The zfs "sharenfs" option fails after an alternate-root mount, until reboot
Author: lling
Repository: /hg/zfs-crypto/gate
Revision: 470ed1fa8c0b5104bdf6c9dcdb194eb4781ddc19
Log message:
6345875 The zfs "sharenfs" option fails after an alternate-root mount, until reboot
Files:
update: usr/src/cmd/zpool/zpool_dataset.c
update: usr/src/cmd/zpool/zpool_main.c
update: usr/src/cmd/zpool/zpool_util.h
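For context, a hedged reconstruction of the sequence the synopsis describes; the pool and dataset names are placeholders:

    # import the pool under an alternate root (e.g. from a rescue environment)
    zpool import -R /a tank
    # per the bug synopsis, subsequent NFS sharing then failed until reboot
    zfs set sharenfs=on tank/export
    zfs share tank/export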
2009 Feb 18
11
Confused about prerequisites for ZFS to work
I'm hoping to get some general clues about what all is required to get an experiment going with zfs. I've managed to install osol-11 in a vmware on windowsXP host from a recent *.iso. I'm following along with Simon's blog showing how to set up ZFS. I'm a newbie with both ZFS and Solaris but the instructions seem pretty clear. However I'm
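A minimal sketch of a zero-risk first experiment that needs no spare disk, using a file-backed vdev; the paths and names are placeholders:

    # build a throwaway pool on a 100 MB backing file
    mkfile 100m /var/tmp/zdisk0
    zpool create testpool /var/tmp/zdisk0
    zfs create testpool/home
    zpool status testpool
    # tear it down when done
    zpool destroy testpool
    rm /var/tmp/zdisk0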
2010 Jun 16
0
files lost in the zpool - retrieval possible ?
Greetings, my Opensolaris 06/2009 installation on a Thinkpad x60 notebook is a little unstable. From the symptoms during installation it seems there might be an issue with the ahci driver. No problems with the Opensolaris LiveCD system. Some weeks ago, during a copy of about 2 GB from a USB stick to the zfs filesystem, the system froze and afterwards refused to boot. Now when investigating
2013 Feb 17
13
zfs raid1 error resilvering and mount
Hi, I have a zfs raid1 (mirror) pool with 2 devices; the first device died and booting from the second does not work... I tried a http://mfsbsd.vx.sk/ flash image, booted from it, and ran zpool import: http://puu.sh/2402E when I load zfs.ko and opensolaris.ko I see this message: Solaris: WARNING: Can't open objset for zroot/var/crash Solaris: WARNING: Can't open objset for zroot/var/crash zpool status:
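A hedged sketch of importing the degraded mirror from the mfsbsd rescue environment; the pool name zroot comes from the mail, the device name is a placeholder:

    # load the module, then import the degraded pool under /mnt
    kldload zfs
    zpool import -f -R /mnt zroot
    zpool status zroot
    # once imported, the dead half of the mirror can be detached
    zpool detach zroot /dev/ada0p3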
2007 Jun 09
2
zfs bug
dd if=/dev/zero of=sl1 bs=512 count=256000
dd if=/dev/zero of=sl2 bs=512 count=256000
dd if=/dev/zero of=sl3 bs=512 count=256000
dd if=/dev/zero of=sl4 bs=512 count=256000
zpool create -m /export/test1 test1 raidz /export/sl1 /export/sl2 /export/sl3
zpool add -f test1 /export/sl4
dd if=/dev/zero of=sl4 bs=512 count=256000
zpool scrub test1
Panic, with a message like the one in the attached image. This message posted
2006 Nov 02
4
reproducible zfs panic on Solaris 10 06/06
Hi, I am able to reproduce the following panic on a number of Solaris 10 06/06 boxes (Sun Blade 150, V210 and T2000). The script to do this is:
#!/bin/sh -x
uname -a
mkfile 100m /data
zpool create tank /data
zpool status
cd /tank
ls -al
cp /etc/services .
ls -al
cd /
rm /data
zpool status
# uncomment the following lines if you want to see the system think
# it can still read and write to the
2009 Feb 05
0
zpool import
Even though zfs has snapshot and send/recv options for backup/replication, we were testing metro-mirror using zfs on a u6 system with a duplicate set of IBM SVC luns. I had the SAN backend sync done while zpool ZA was exported on system A. The import of pool ZA was flawless, using the same set of LUNS. zpool ignored the duplicate copy of pool ZA existing on another
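A brief sketch of the test sequence described; the pool name ZA comes from the mail:

    # on system A: cleanly export the pool before presenting the mirrored LUNs
    zpool export ZA
    # on the importing system: zpool lists candidates by pool GUID,
    # so only one copy of ZA should be offered
    zpool import
    zpool import ZA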
2010 Jan 12
11
How do separate ZFS filesystems affect performance?
I'm working with a Cyrus IMAP server running on a T2000 box under Solaris 10 10/09 with current patches. Mailboxes reside on six ZFS filesystems, each containing about 200 gigabytes of data. These are part of a single zpool built on four iSCSI devices from our Netapp filer. One of these ZFS filesystems contains a number of global and per-user databases in addition to one sixth of the