Displaying 20 results from an estimated 800 matches similar to: "zpool cross mount"
2011 Nov 22
3
SUMMARY: mounting datasets from a read-only pool with the aid of tmpfs
Hello all,
I'd like to report a tricky situation and a workaround
I've found useful - hope this helps someone in similar
situations.
To cut the long story short, I could not properly mount
some datasets from a readonly pool, which had a non-"legacy"
mountpoint attribute value set, but the mountpoint was not
available (directory absent or not empty). In this case
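The rest of the summary is cut off here, but the workaround being described is essentially a tmpfs interposed where the missing or cluttered mountpoint should be. A minimal sketch, assuming the pool was imported read-only (read-only import needs a reasonably recent build) and using placeholder pool, dataset and path names:
zpool import -o readonly=on ropool        # pool comes in read-only
zfs get mountpoint ropool/export/data     # say it reports /export/data
mount -F tmpfs swap /export               # writable tmpfs over the absent/non-empty tree
mkdir -p /export/data                     # recreate the expected, empty mountpoint
zfs mount ropool/export/data              # the normal mount now succeeds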
2012 Nov 09
3
Forcing ZFS options
There are times when ZFS options cannot be applied at the moment,
e.g. changing the desired mountpoints of active filesystems (or setting
a mountpoint over a filesystem location that is currently not empty).
Such attempts now bail out with messages like:
cannot unmount '/var/adm': Device busy
cannot mount '/export': directory is not empty
and such.
Is it
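The question is cut off, but the usual answers to these two errors are a forced unmount and an overlay mount. A hedged sketch with a placeholder dataset name:
zfs unmount -f rpool/export        # get past "Device busy"
zfs set mountpoint=/export rpool/export
zfs mount -O rpool/export          # -O = overlay mount on top of a non-empty directory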
2006 Oct 02
2
vfstab
Hello all,
Is there any way to mount a ZFS file system from vfstab?
Thanks,
Chris
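The usual answer is yes, provided the dataset uses a legacy mountpoint; a sketch with an invented dataset name:
zfs set mountpoint=legacy tank/export     # hand mounting over to vfstab
# then add a line like this to /etc/vfstab
# (device to fsck and fsck pass are "-" for zfs):
tank/export   -   /export   zfs   -   yes   -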
2001 Oct 18
3
group ownership
I am attempting to rsync data from an rsync server and set the
permissions to a different gid on the client:
my server's name is "rserver01"
my client's name is "rclient01"
here is the rsyncd.conf on rserver01:
# log file
log file = /var/adm/rsync_log
# global options for all modules
dont compress = *.gz *.tgz *.zip *.z *.rpm *.deb *.iso *.bz2 *.tbz
uid = nobody
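The config excerpt is cut off above; note that the gid option in rsyncd.conf only sets the group the daemon runs as while serving a module, not the group written on the client. A rough sketch (module name, paths and group are placeholders):
# rserver01: a module in rsyncd.conf
[data]
    path = /export/data
    uid = nobody
    gid = nobody          # identity the daemon uses while serving this module
# rclient01: pull the files, then force the desired group locally
# (only root may assign arbitrary groups):
rsync -av rserver01::data /srv/data/
chgrp -R localgroup /srv/data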
2008 Sep 30
6
something wrong with puppet client or Server
Hi All,
I have a Puppet client and server running on Solaris 10 x86.
Lately some of the Puppet client's behavior is weird!!! Or maybe
I am missing something...
For example:
I created a class to add one line to /etc/vfstab. The Puppet client
did it successfully the first time... but after a few days I saw that the
same line had been added more than 250 times.. [ see same line is
added so many
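The excerpt is truncated, but duplicate lines like this usually mean the class appends to /etc/vfstab without an idempotency guard; conceptually (shell shown only for illustration, with a placeholder vfstab line), every run has to check before appending:
grep -q '^tank/export[[:space:]]' /etc/vfstab || \
    echo "tank/export - /export zfs - yes -" >> /etc/vfstab   # no-op on re-runs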
2010 Mar 27
16
zpool split problem?
Zpool split is a wonderful feature and it seems to work well,
and the choice of which disk got which name was perfect!
But there seems to be an odd anomaly (at least with b132).
Started with c0t1d0s0 running b132 (root pool is called rpool)
Attached c0t0d0s0 and waited for it to resilver
Rebooted from c0t0d0s0
zpool split rpool spool
Rebooted from c0t0d0s0, both rpool and spool were mounted
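For reference, a rough sketch of the sequence described above, plus how the split-off pool is normally kept out of the way (the altroot path is a placeholder):
zpool attach rpool c0t1d0s0 c0t0d0s0   # mirror the root pool, wait for the resilver
zpool split rpool spool                # peel one side off into a new pool "spool"
zpool import -R /mnt spool             # import it under an altroot only when needed
zpool export spool                     # ...and export it again afterwards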
2014 Jun 22
3
mdraid Q on c6...
So, I installed a C6 system offsite today; I had to do it in a hurry.
The box has 2 disks meant to be mirrored... I couldn't figure out how to
get anaconda to build an LVM root on a mirror, so I ended up just
installing a /boot and vg_system on sda, planning to RAID it later.
Every howto I find for Linux says to half-RAID the OTHER disk, COPY
everything to it, then boot from it and wipe the first disk
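A hedged sketch of that half-RAID recipe (device names and filesystem are examples; /boot on RAID1 with GRUB legacy wants 0.90/1.0 metadata so the superblock sits at the end of the partition):
mdadm --create /dev/md0 --metadata=1.0 --level=1 --raid-devices=2 missing /dev/sdb1   # degraded mirror on disk 2
mkfs.ext4 /dev/md0
mount /dev/md0 /mnt
rsync -aHAX /boot/ /mnt/               # copy the existing data onto the array
# update fstab/grub, reboot onto the array, then pull in the first disk:
mdadm --add /dev/md0 /dev/sda1         # resilvers onto the original disk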
2010 Jan 28
2
Need help with repairing zpool :(
...how this could happen is not the topic of this message.
Now there is a problem and I need to solve it, if possible.
I have one HDD (80 GB); the entire disk is used for rpool, with the system and home folders on it.
Reinstalling the system is no problem, but I need to save some files from the user dirs.
And, of course, there is no backup.
So, the problem is that the zpool is broken :( when I try to start
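The message is cut off, but the usual rescue attempt is a forced, read-only import in recovery mode, then copying the files off (the altroot is a placeholder; read-only import and -F both need a reasonably recent build):
zpool import                                   # see whether rpool is visible at all
zpool import -f -F -o readonly=on -R /a rpool  # -F tries to roll back to the last good txg
zfs list -r rpool                              # if it imports, copy the home dirs off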
2007 Oct 25
1
How to have ZFS root with /usr on a separate datapool
Ref: http://blogs.sun.com/timf/entry/zfs_bootable_datasets_happily_rumbling
Ref: http://mediacast.sun.com/share/timf/zfs-actual-root-install.sh
This is my errata for Tim Foster's zfs root install script:
1/ Correct mode for /tmp should be 1777.
2/ The zfs boot install should allow you to have /usr on a separate zpool:
a/ We need to create /zfsroot/usr/lib in the root partition and
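The errata are cut off mid-step. Purely as an illustration of the general shape (the second pool and dataset names are invented), /usr typically ends up as a legacy-mounted dataset on the other pool, mounted via vfstab early in boot:
zfs create -o mountpoint=legacy datapool/usr   # invented names, for illustration
# /etc/vfstab on the ZFS root:
datapool/usr   -   /usr   zfs   -   yes   -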
2007 Sep 28
5
ZFS Boot Won't work with a straight or mirror zfsroot
Using build 70, I followed the zfsboot instructions at
http://www.opensolaris.org/os/community/zfs/boot/zfsboot-manual/ to the
letter.
I tried first with a mirrored zfsroot; when I try to boot to zfsboot
the screen is flooded with "init(1M) exited on fatal signal 9".
Then I tried with a simple zfs pool (not mirrored) and it just
reboots right away.
If I try to set up grub
2007 Oct 07
1
VFSTAB mounting for ProFTPD
I have ProFTPD successfully installed and running, though I would like to virtually mount some directories from my ZFS configuration. In a previous ProFTPD install on Ubuntu, I had an entry like this in my /etc/fstab:
/HDD ID/directory /home/FTP-shared/information vfat bind 0 0
However, I am not able to do this with my /etc/vfstab. This is my entry in my vfstab config file:
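The poster's actual vfstab line is cut off here. Solaris has no "bind" filesystem type; the equivalent is a loopback (lofs) mount, so a hedged sketch of the entry (the ZFS-backed source path is a placeholder) would be:
# /etc/vfstab: loopback-mount a ZFS-backed directory into the FTP tree
/pool/information   -   /home/FTP-shared/information   lofs   -   yes   -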
2006 Oct 31
0
6409257 /etc/vfstab isn't properly aligned
Author: sjelinek
Repository: /hg/zfs-crypto/gate
Revision: e2d8706d226b8bdada8284689c37d35a33396b15
Log message:
6409257 /etc/vfstab isn't properly aligned
6409251 typo in stmsboot
6409254 unused variables in svc-syseventd
6409228 typo in aclutils.h
Contributed by Rainer Orth <ro at TechFak.Uni-Bielefeld.DE>.
Files:
update: usr/src/cmd/initpkg/vfstab.sh
update:
2008 Dec 26
19
separate home "partition"?
(I use the term loosely because I know that ZFS likes whole volumes better)
When installing Ubuntu, I got in the habit of using a separate partition for my home directory so that my data and GNOME settings would all remain intact when I reinstalled or upgraded.
I'm running OSol 2008.11 on an Ultra 20, which has only two drives. I've got all my data located in my home directory,
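With ZFS the same effect usually comes from keeping home as its own dataset rather than a partition; a sketch using the default OpenSolaris layout ("backuppool" is a placeholder for any second disk or external pool):
zfs snapshot -r rpool/export/home@pre-reinstall                  # freeze home before reinstalling
zfs send -R rpool/export/home@pre-reinstall | zfs recv -d backuppool
# after the reinstall, send the stream back into the new rpool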
2009 Jan 13
6
mirror rpool
Hi
Host: VirtualBox 2.1.0 (WinXP SP3)
Guest: OSol 5.11snv_101b
IDE Primary Master: 10 GB, rpool
IDE Primary Slave: 10 GB, empty
format output:
AVAILABLE DISK SELECTIONS:
0. c3d0 <DEFAULT cyl 1302 alt 2 hd 255 sec 63>
/pci0,0/pci-ide@1,1/ide@0/cmdk@0,0
1. c3d1 <drive unknown>
/pci0,0/pci-ide@1,1/ide@0/cmdk@1,0
# ls
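The usual recipe from here (device names follow the format output above; the slice layout is an assumption) is to label the slave disk like the master, attach it to rpool, then install the boot loader:
prtvtoc /dev/rdsk/c3d0s2 | fmthard -s - /dev/rdsk/c3d1s2   # copy the partition table
zpool attach -f rpool c3d0s0 c3d1s0                        # start the mirror / resilver
zpool status rpool                                         # wait until resilvering completes
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c3d1s0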
2007 Sep 19
7
ZFS Solaris 10 Update 4 Patches
The latest ZFS patches for Solaris 10 are now available:
120011-14 - SunOS 5.10: kernel patch
120012-14 - SunOS 5.10_x86: kernel patch
ZFS Pool Version available with patches = 4
These patches will provide access to all of the latest features and bug
fixes:
Features:
PSARC 2006/288 zpool history
PSARC 2006/308 zfs list sort option
PSARC 2006/479 zfs receive -F
PSARC 2006/486 ZFS canmount
2013 Dec 17
2
Setting up a lustre zfs dual mgs/mdt over tcp - help requested
Hi all,
Here is the situation:
I have 2 nodes, MDS1 and MDS2 (10.0.0.22, 10.0.0.23), which I wish to use as a
failover MGS and active/active MDT with ZFS.
I have a JBOD shelf with 12 disks, seen by both nodes as DAS (the
shelf has 2 SAS ports, connected to a SAS HBA on each node), and I
am using lustre 2.4 on CentOS 6.4 x64
I have created 3 zfs pools:
1. mgs:
# zpool
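The pool list is cut off above. For reference, a rough sketch of formatting ZFS-backed MGS/MDT targets on lustre 2.4 using already-created pools (pool/dataset names and the fsname are placeholders; the NIDs follow the post):
mkfs.lustre --mgs --backfstype=zfs \
    --servicenode=10.0.0.22@tcp --servicenode=10.0.0.23@tcp  mgs/mgt
mkfs.lustre --mdt --backfstype=zfs --fsname=lustre1 --index=0 \
    --mgsnode=10.0.0.22@tcp --mgsnode=10.0.0.23@tcp \
    --servicenode=10.0.0.22@tcp --servicenode=10.0.0.23@tcp  mdt0/mdt0
mount -t lustre mgs/mgt /mnt/mgs      # mounted on whichever node is currently active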
2008 Jun 27
1
'zfs list' output showing incorrect mountpoint after boot -Z
I installed snv_92 with zfs root. Then took a snapshot of the root and cloned it. Now I am booting from the clone using the -Z option. The system boots fine from the clone, but 'zfs list' still shows that '/' is mounted on the original mountpoint instead of the clone, even though the output of 'mount' shows that '/' is
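The excerpt is cut off, but the quick ways to see which dataset really backs / are sketched below ("rpool/ROOT" is the conventional boot-environment container, assumed here); the clone's mountpoint property can legitimately still show the original value while the clone is what is actually mounted:
df -h /                                      # Filesystem column names the dataset behind /
zfs list -o name,mounted,mountpoint -r rpool/ROOT
zfs get -r mountpoint rpool/ROOT             # property values vs. what is actually mounted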
2013 Feb 17
13
zfs raid1 error resilvering and mount
Hi, I have a ZFS raid1 with 2 devices in the pool.
The first device died, and booting from the second is not working...
I tried booting the http://mfsbsd.vx.sk/ flash image and importing the pool from it with zpool import:
http://puu.sh/2402E
When I load zfs.ko and opensolaris.ko I see this message:
Solaris: WARNING: Can't open objset for zroot/var/crash
Solaris: WARNING: Can't open objset for zroot/var/crash
zpool status:
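A hedged sketch of continuing from the mfsbsd rescue environment (the pool name follows the post; the altroot is a placeholder):
kldload zfs                              # opensolaris.ko is pulled in as a dependency
zpool import                             # zroot should show up, likely DEGRADED
zpool import -f -o altroot=/mnt zroot    # import despite the missing mirror half
zfs list -r zroot                        # then copy data off or repair the boot blocks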
2010 Mar 01
2
flying ZFS pools
Hi everyone,
I'm preparing around 6 Solaris physical servers and I want to see if
it's possible to create a ZFS pool that can be shared
between all 6 servers (not concurrently, just in an active-passive way). Is
that possible? Is there any article that can show me how to do it?
Sorry if this is a basic question, but I'm new to the ZFS area; in UFS I
can just
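The question is cut off, but the standard active-passive pattern is to keep the pool imported on exactly one node at a time (the pool name is a placeholder); cluster frameworks such as Sun Cluster automate the same export/import handover:
# on the node giving up the storage:
zpool export shared_pool
# on the node taking over:
zpool import shared_pool
zpool import -f shared_pool     # -f only if the previous owner died without exporting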
2010 Mar 04
8
Huge difference in reporting disk usage via du and zfs list. Fragmentation?
Do we have enormous fragmentation here on our X4500 with Solaris 10, ZFS Version 10?
What, other than zfs send/receive, can be done to free the fragmented space?
One ZFS filesystem was used for some months to store large disk images (each about 50 GByte), which were copied there with rsync. That filesystem reports 6.39 TByte of usage with zfs list but only 2 TByte with du.
The other ZFS was used for similar
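A common first check for a du/zfs-list gap this large is snapshot usage, since rsync rewriting 50 GByte images makes snapshots diverge quickly; a sketch with a placeholder dataset name:
zfs list -t snapshot -r tank/images    # space held by snapshots shows up here
zfs get used,referenced tank/images    # "used" includes snapshots; "referenced" is roughly what du sees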