Displaying 20 results from an estimated 500 matches similar to: "S10u6, zfs and zones"
2010 Jan 24
4
zfs streams
Can I send a zfs send stream (ZFS pool version 22; ZFS filesystem
version 4) to zfs receive on Solaris 10 (ZFS pool version 15;
ZFS filesystem version 4)?
--
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | SunOS 10u7 5/09 | OpenSolaris 2010.03 b131
+ All that's really worth doing is what we do for others (Lewis Carrol)
2009 Feb 27
3
luactive question
After a liveupgrade and luactivate I can log in to the -new- BE.
My question is: do I have to luactivate the -old- BE again if I want to
choose that one from the grub menu, or can I just boot it if I want to?
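The documented path back is to run luactivate against the old BE name rather than just picking it from grub; booting a non-activated BE from the menu can leave the BEs out of sync. A minimal sketch (the BE name is made up):
lustatus            # list BEs and which one becomes active on the next reboot
luactivate oldBE    # hypothetical name of the old BE
init 6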
--
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | SunOS sxce snv107 ++
+ All that's really worth doing is what we do for others (Lewis Carrol)
2009 Feb 17
5
scrub on snv-b107
scrub completed after 1h9m with 0 errors on Tue Feb 17 12:09:31 2009
This is about twice as slow as the same scrub on a Solaris 10 box with a
mirrored zfs root pool. Has scrub become that much slower? And if so,
why?
--
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | SunOS sxce snv107 ++
+ All that's really worth doing is what we do for others (Lewis Carrol)
2009 Aug 23
23
incremental backup with zfs to file
FULL backup to a file
zfs snapshot -r rpool@0908
zfs send -Rv rpool@0908 > /net/remote/rpool/snaps/rpool.0908
INCREMENTAL backup to a file
zfs snapshot -i rpool@0908 rpool@090822
zfs send -Rv rpool@090822 > /net/remote/rpool/snaps/rpool.090822
As I understand it, the latter gives a file with the changes between 0908
and 090822. Is this correct?
How do I restore those files? I know
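As a hedged sketch of what the incremental pair usually looks like: the -i flag belongs to zfs send, not zfs snapshot, and the restore replays the full stream first, then the incremental. The paths and pool name are taken from the post above; the combined filename is invented:
zfs snapshot -r rpool@090822                       # second snapshot; no -i here
zfs send -R -i rpool@0908 rpool@090822 > /net/remote/rpool/snaps/rpool.0908-090822
# restore: receive the full stream first, then the incremental on top of it
zfs receive -Fd rpool < /net/remote/rpool/snaps/rpool.0908
zfs receive -Fd rpool < /net/remote/rpool/snaps/rpool.0908-090822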
2008 Jan 30
18
ZIL controls in Solaris 10 U4?
Is it true that Solaris 10 u4 does not have any of the nice ZIL controls
that exist in the various recent OpenSolaris flavors? I would like to
move my ZIL to solid state storage, but I fear I can't do it until I
have another update. Heck, I would be happy to just be able to turn the
ZIL off to see how my NFS on ZFS performance is affected before spending
the $'s. Anyone
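For reference, the two knobs people usually reach for in this era are the zil_disable tunable (testing only, needs a reboot) and, on builds whose pool version supports it, a separate log device. A rough sketch with made-up pool and device names:
# /etc/system (testing only; discards synchronous write guarantees)
set zfs:zil_disable = 1
# on a pool version that supports slogs, add an SSD as a separate intent log
zpool add tank log c1t2d0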
2008 Mar 20
7
ZFS panics solaris while switching a volume to read-only
Hi,
I just found out that ZFS triggers a kernel panic while switching a mounted volume
into read-only mode.
The system is attached to a Symmetrix; all ZFS I/O goes through PowerPath.
I ran some I/O-intensive stuff on /tank/foo and switched the device into
read-only mode at the same time (symrdf -g bar failover -establish).
ZFS went 'bam' and triggered a panic:
WARNING: /pci@
2009 Jun 29
7
ZFS - SWAP and lucreate..
Good morning everybody
I was migrating my UFS root filesystem to a ZFS one, but was a little upset to find that it became bigger (which was clearly because of the swap and dump size).
Now I am wondering whether it is possible to set the swap and dump size with the lucreate command (I want to try it again, but with less space). Unfortunately I did not find any advice in the man pages.
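With a ZFS root the swap and dump areas are zvols, so one option is to shrink them after the migration rather than through lucreate itself. A rough sketch; the sizes are only examples:
swap -d /dev/zvol/dsk/rpool/swap     # release the swap device before resizing
zfs set volsize=2G rpool/swap
swap -a /dev/zvol/dsk/rpool/swap
zfs set volsize=1G rpool/dump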
2008 Apr 08
6
lucreate error: Cannot determine the physical boot device ...
Hi,
after typing
# lucreate -n B85
I get the following error:
Analyzing system configuration.
No name for current boot environment.
INFORMATION: The current boot environment is not named - assigning name <BE1>.
Current boot environment is named <BE1>.
Creating initial configuration for primary boot environment <BE1>.
ERROR: Unable to determine major and
2008 Aug 08
1
[install-discuss] lucreate into New ZFS pool
Hello,
Since I've got my disk partitioning sorted out now, I want to move my BE
from the old disk to the new disk.
I created a new zpool, named RPOOL to distinguish it from the existing
"rpool".
I then did lucreate -p RPOOL -n new95
This completed without error; the log is at the bottom of this mail.
I have not yet dared to run luactivate. I also have not yet dared set the
2008 Jul 31
9
Terrible zfs performance under NFS load
Hello,
We have an S10U5 server exporting NFS shares from ZFS. While using the NFS mount as a syslog destination for 20 or so busy mail servers, we have noticed that throughput becomes severely degraded after a short time. I have tried disabling the ZIL and turning off cache flushing, and I have not seen any change in performance. The servers are only pushing about 1MB/s of constant
2008 Nov 25
2
Can a zpool cachefile be copied between systems?
Suppose that you have a SAN environment with a lot of LUNs. In the
normal course of events this means that 'zpool import' is very slow,
because it has to probe all of the LUNs all of the time.
In S10U6, the theoretical 'obvious' way to get around this for your
SAN filesystems seems to be to use a non-default cachefile (likely one
cachefile per virtual
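The mechanics of a per-pool cache file look roughly like the sketch below (pool and device names are invented); whether a cache file copied to another host stays valid depends on the device paths matching there as well:
zpool create -o cachefile=/etc/zfs/sanpool.cache sanpool c2t0d0
# on import, point at the cache file instead of probing every LUN
zpool import -c /etc/zfs/sanpool.cache sanpool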
2010 Aug 28
4
ufs root to zfs root liveupgrade?
hi all
I am trying to learn how a UFS root to ZFS root Live Upgrade works.
I downloaded the VirtualBox image of s10u8; it comes up with a UFS root.
add a new disk (16GB)
create zpool rpool
run lucreate -n zfsroot -p rpool
run luactivate zfsroot
run lustatus; it does show that zfsroot will be active on the next boot
init 6
but it comes up with a UFS root,
lustatus shows ufsroot active
zpool rpool is mounted but not used by boot
Is this a
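Two things worth checking in a case like this, sketched below with a made-up disk name: that the pool's bootfs property points at the new BE, and that grub was actually installed on the disk the BIOS boots from:
zpool get bootfs rpool
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0   # hypothetical new disk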
2009 Jan 28
11
destroy means destroy, right?
Hi,
I just said zfs destroy pool/fs, but meant to say zfs destroy
pool/junk. Is 'fs' really gone?
thx
jake
2009 Apr 19
21
[on-discuss] Reliability at power failure?
Casper.Dik@Sun.COM wrote:
>
> I would suggest that you follow my recipe: not check the boot-archive
> during a reboot. And then report back. (I'm assuming that that will take
> several weeks)
>
We are back at square one; or, at the subject line.
I did a zpool status -v, everything was hunky dory.
Next, a power failure, 2 hours later, and this is what zpool status
2007 Sep 30
1
Upgrading ZFS Version on Solaris 08/07
I wanted to crossgrade from OpenSolaris b65 to Solaris 10 08/07 for my main
fileserver, but I found that I could not import my zpool due to a version
mismatch. Is there any way to upgrade only ZFS on 08/07 so that it matches
or exceeds the version used by b68?
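Note that zpool upgrade only moves a pool forward, so a pool created on a newer build cannot be made readable by an older release this way; the commands below just show the mechanics (the pool name is made up):
zpool upgrade -v        # list the versions this release understands
zpool upgrade mypool    # upgrades only; there is no downgrade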
Blake
2012 Jan 21
2
patching a solaris server with zones on zfs file systems
Hi All,
Please let me know the procedure for patching a server which has 5
zones on ZFS file systems.
The root file system is on an internal disk and the zones are on SAN storage.
Thank you all,
Bhanu
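One commonly suggested approach, assuming the zone configuration is supported by Live Upgrade on that release, is to patch an inactive BE and switch to it; a rough sketch with made-up BE and patch-directory names:
lucreate -n patchBE
luupgrade -t -n patchBE -s /var/tmp/patches $(ls /var/tmp/patches)   # apply the patches to the inactive BE
luactivate patchBE
init 6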
2009 Feb 18
4
Zpool scrub in cron hangs u3/u4 server, stumps tech support.
I've got a server that freezes when I run a zpool scrub from cron.
Zpool scrub runs fine from the command line, no errors.
The freeze happens within 30 seconds of the zpool scrub starting.
The one core dump I succeeded in taking showed the ARC cache eating up
all the RAM.
The server's running Solaris 10 u3, kernel patch 127727-11, but it's
been patched and seems to have
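One workaround that gets suggested for this symptom is capping the ARC so a scrub cannot consume nearly all of RAM; a sketch of the tunable, with the 4 GB value purely as an example:
# /etc/system (takes effect after a reboot)
set zfs:zfs_arc_max = 0x100000000    # cap the ARC at 4 GB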
2008 Oct 28
4
blktap, vmdk, vdi, and disk management support
Just a quick fyi...
We've recently added support for blktap along with
support for managing virtual disks (disk file images).
There are some differences from a Linux dom0.
This is available in b101 @
http://www.opensolaris.org/os/downloads/sol_ex_dvd_1/
This allows you to create and manage vmdk and vdi
(VirtualBox) disk files. By default, virt-install
will now use a vmdk vdisk when
2008 Jun 04
17
Get your SXCE on ZFS here!
With the release of the Nevada build 90 binaries, it is now possible to install SXCE directly onto a ZFS root filesystem, and also put swap onto a ZFS volume without worrying about having it deadlock. ZFS now also supports crash dumps!
To install SXCE to a ZFS root, simply use the text-based installer, after choosing "Solaris Express" from the boot menu on the DVD.
DVD download
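For reference, with swap and dump on zvols the usual setup looks roughly like this (the dataset names follow the rpool convention):
dumpadm -d /dev/zvol/dsk/rpool/dump   # point crash dumps at the dump zvol
swap -a /dev/zvol/dsk/rpool/swap      # add the swap zvol as a swap device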
2009 Jun 05
4
Recover ZFS destroyed dataset?
I was asked by a coworker about recovering destroyed datasets on ZFS, and
whether it is possible at all. As a related question, if a filesystem dataset was
recursively destroyed along with all its snapshots, is there some means to at
least find some indication that it existed at all?
I remember "zpool import -D" can be used to import whole destroyed pools.
But crawling around the
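The pool-level mechanics mentioned above look like the sketch below (the pool name is made up); for a dataset destroyed inside a live pool there is no equivalent supported command in that era, so this only helps when the whole pool was destroyed:
zpool import -D              # list destroyed pools that are still importable
zpool import -D tank         # re-import one of them
zfs list -t all -r tank      # then look for the datasets and snapshots it held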