Displaying 20 results from an estimated 22 matches for "s10u6".
2008 Aug 04
1
S10u6, zfs and zones
My server runs S10u5. All slices are UFS. I run a couple of sparse
zones on a separate slice mounted on /zones.
When S10u6 comes out, booting off ZFS will become possible. That is great
news. However, will it still be possible to run the zones I have now?
I always understood that zones on a ZFS root are a difficult combination.
I hope to be able to change all filesystems to ZFS, including the space for the sparse zones.
Does somebody have more info...
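A minimal sketch of how zones are commonly laid out once a ZFS root is in place; the dataset and zone names below are made-up examples, not from the original post:
zfs create -o mountpoint=/zones rpool/zones     # container dataset for zone roots
zfs create rpool/zones/web                      # one dataset per zone
zonecfg -z web "create; set zonepath=/zones/web"
zoneadm -z web install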
2008 Nov 25
2
Can a zpool cachefile be copied between systems?
Suppose that you have a SAN environment with a lot of LUNs. In the
normal course of events this means that 'zpool import' is very slow,
because it has to probe all of the LUNs all of the time.
In S10U6, the theoretical 'obvious' way to get around this for your
SAN filesystems seems to be to use a non-default cachefile (likely one
cachefile per virtual fileserver, although you could go all the way to
one cachefile per pool) and then copy this cachefile from the master
host to all...
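A rough sketch of that approach; the pool name, cachefile path and standby host are assumptions for illustration:
zpool set cachefile=/etc/zfs/san.cache sanpool      # record the pool in a non-default cachefile
scp /etc/zfs/san.cache standby:/etc/zfs/            # copy the cachefile to the other host
zpool import -c /etc/zfs/san.cache -a               # on that host, import everything listed in it
                                                    # without probing every LUN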
2009 Mar 31
3
Bad SWAP performance from zvol
I've upgraded my system from ufs to zfs (root pool).
By default, it creates a zvol for dump and swap.
It's a 4GB Ultra-45 and every late night/morning I run a job which takes
around 2GB of memory.
With a zvol swap, the system becomes unusable and the Sun Ray client often
goes into "26B".
So I removed the zvol swap and now I have a standard swap partition.
The
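For reference, switching swap from the default zvol to a disk slice usually looks like this (the slice name is an example); the change is made permanent by updating the swap entry in /etc/vfstab:
swap -l                                  # list active swap devices
swap -d /dev/zvol/dsk/rpool/swap         # remove the zvol-backed swap
swap -a /dev/dsk/c1t0d0s1                # add a conventional disk slice instead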
2009 Mar 25
3
anonymous dtrace?
...simple reboot.
The bootpath property set in this file is getting changed after the
machine boots up for the first time in the newly created BE, resulting in a
kernel panic (gives error: cannot mount root path
/pci@0,0/pci-ide@7/ide@0/cmdk@0,0:e)
The primary boot environment is S10u3 and the ABE is S10u6.
I want to see at what point of time and which process is writing to the
bootenv.rc file. How can I achieve it using dtrace?
Thanks in advance.
Regards,
Nishchaya
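One possible approach, sketched here with an assumed bootenv.rc path, is to watch opens of the file and report the responsible process; for writes that happen during early boot the same clauses can be staged anonymously with dtrace -A -s and claimed after the reboot with dtrace -a:
dtrace -qn '
syscall::open*:entry
/copyinstr(arg0) == "/boot/solaris/bootenv.rc"/
{
        /* note who is touching the file and with what open flags */
        printf("%Y %s (pid %d) oflag 0x%x\n", walltimestamp, execname, pid, arg1);
}'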
2009 Jan 21
6
nfsv3 provider: "failed to grab process"
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
hi,
I'm trying to use the nfsv3 provider on S10U6, with the following simple
script:
#! /usr/sbin/dtrace -s
#pragma D option quiet
nfsv3:::op-read-start {
        printf("%s\n", args[1]->noi_curpath);
}
however, when running it, i get the following error:
dtrace: failed to compile script ./nfs2.d: line 5: failed to grab process 3
p...
2008 May 14
2
vdev cache - comments in the source
Hello zfs-code,
http://cvs.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/fs/zfs/vdev_cache.c
72 * All i/os smaller than zfs_vdev_cache_max will be turned into
73 * 1<<zfs_vdev_cache_bshift byte reads by the vdev_cache (aka software
74 * track buffer). At most zfs_vdev_cache_size bytes will be kept in each
75 * vdev's vdev_cache.
While it
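Those comments describe the zfs_vdev_cache_max, zfs_vdev_cache_bshift and zfs_vdev_cache_size tunables; a hypothetical /etc/system sketch (the values are illustrative, not recommendations):
* i/os smaller than this many bytes are inflated by the vdev cache
set zfs:zfs_vdev_cache_max = 16384
* ...into reads of 1 << bshift bytes (16 gives 64 KB)
set zfs:zfs_vdev_cache_bshift = 16
* per-vdev cache capacity in bytes
set zfs:zfs_vdev_cache_size = 10485760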
2008 Oct 28
4
blktap, vmdk, vdi, and disk management support
...]#; virsh console nevada
Oct 28 16:22:38 nevada genunix: NOTICE: Domain suspending for save/migrate
Oct 28 12:24:38 nevada unix: NOTICE: domain restore/migrate completed
nevada console login:
Hotplug a vdisk..
--
xm block-attach snv89 tap:vdisk:/tank/guests/nevada/b89/disk3 3 w
virsh attach-disk s10u6-02 /tank/guests/s10u6/disk2 hdb --driver tap --subdriver vdisk
create a new guest using blktap/vdisk (assuming /tank/guests/myguest/disk0 doesn't exist)
--
virt-install --p --nographics --noautoinstall -r 1024 -n myguest -s 16 -f /tank/guests/myguest/disk0 -l /tank/install/snv101.iso
2009 Feb 12
1
strange 'too many errors' msg
Hi,
just found on an X4500 with S10u6:
fmd: [ID 441519 daemon.error] SUNW-MSG-ID: ZFS-8000-GH, TYPE: Fault, VER: 1, SEVERITY: Major
EVENT-TIME: Wed Feb 11 16:03:26 CET 2009
PLATFORM: Sun Fire X4500, CSN: 00:14:4F:20:E0:2C , HOSTNAME: peng
SOURCE: zfs-diagnosis, REV: 1.0
EVENT-ID: 74e6f0ec-b1e7-e49b-8d71-dc1c9b68ad2b
DESC: The number o...
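The usual follow-up for a message like this (the UUID is the one from the event above) is along these lines:
fmadm faulty                                        # summarize outstanding faults
fmdump -v -u 74e6f0ec-b1e7-e49b-8d71-dc1c9b68ad2b   # full detail for this event
zpool status -xv                                    # see which vdev crossed the error threshold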
2008 Jan 30
18
ZIL controls in Solaris 10 U4?
Is it true that Solaris 10 u4 does not have any of the nice ZIL controls
that exist in the various recent OpenSolaris flavors? I would like to
move my ZIL to solid-state storage, but I fear I can't do it until I
have another update. Heck, I would be happy to just be able to turn the
ZIL off to see how my NFS on ZFS performance is affected before spending
the $'s. Anyone
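The controls asked about here arrived later: S10U6 brings a pool version that supports separate intent log devices, and the old zil_disable kernel tunable can switch the ZIL off for testing. A hedged sketch, with an example pool and device name:
zpool add tank log c3t0d0        # dedicate a solid-state device as a separate intent log
* /etc/system, testing only: run with the ZIL disabled
set zfs:zil_disable = 1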
2008 Mar 20
7
ZFS panics solaris while switching a volume to read-only
Hi,
I just found out that ZFS triggers a kernel panic while switching a mounted volume
into read-only mode:
The system is attached to a Symmetrix, all zfs-io goes through Powerpath:
I ran some io-intensive stuff on /tank/foo and switched the device into
read-only mode at the same time (symrdf -g bar failover -establish).
ZFS went 'bam' and triggered a panic:
WARNING: /pci@
2009 Feb 18
4
Zpool scrub in cron hangs u3/u4 server, stumps tech support.
...'m not really happy about doing as a bug fix to a production
server running a supported version of Sun OS. Once Upon a Time, Sun
used to offer *patches* to paying customers for operating system bugs.
I quote the latest ticket note in disgust: "I really don't know what
to tell you. S10u6 has many enhancements and improvements to zfs, but
most can be gained through patches, with the exception of new features."
I'm trying to escalate the ticket, but really, I'm angry. I've been a
big champion of staying with Sun/Solaris over Linux and one of the
reasons h...
2009 Apr 23
1
ZFS SMI vs EFI performance using filebench
I have been testing the performance of zfs vs. ufs using filebench. The setup is a v240, 4GB RAM, 2 @ 1503MHz, 1 320GB _SAN_ attached LUN, and using a ZFS mirrored root disk. Our SAN is a top-notch NVRAM-based SAN. There are lots of discussions about using ZFS with SAN-based storage, and it seems ZFS is designed to perform best with dumb disks (JBODs). The tests I ran support this observation.. and
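For anyone reproducing such a run, a minimal interactive filebench session might look like the following (the workload name and target directory are illustrative, not the poster's actual profile):
filebench> load fileserver
filebench> set $dir=/tank/fbtest
filebench> run 60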
2009 Jan 20
2
hot spare not so hot ??
I have configured a test system with a mirrored rpool and one hot spare. I
powered the systems off, pulled one of the disks from rpool to simulate a
hardware failure.
The hot spare is not activating automatically. Is there something more I
should have done to make this work?
pool: rpool
state: DEGRADED
status: One or more devices could not be opened. Sufficient replicas exist
for
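In this situation the spare can usually be pulled in by hand; the device names below are examples:
zpool status rpool                       # identify the UNAVAIL half of the mirror
zpool replace rpool c0t1d0s0 c0t2d0s0    # attach the spare in place of the missing disk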
2008 Jul 29
8
questions about ZFS Send/Receive
Hi guys,
we are proposing to a customer a couple of X4500s (24 TB) used as NAS
(i.e. NFS servers).
Both servers will contain the same files and should be accessed by
different clients at the same time (i.e. they should both be active).
So we need to guarantee that both X4500s contain the same files:
We could simply copy the contents onto both X4500s, which is an option
because the "new
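One way to keep the two boxes in step, sketched with an assumed pool name 'tank' and a peer host 'thumper2' (zfs send -R is available in the S10U6-era bits):
zfs snapshot -r tank@sync1
zfs send -R tank@sync1 | ssh thumper2 zfs receive -Fd tank             # initial full copy
zfs snapshot -r tank@sync2
zfs send -R -i tank@sync1 tank@sync2 | ssh thumper2 zfs receive -Fd tank   # later, incrementals only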
2008 Nov 16
4
[ldoms-discuss] Solaris 10 patch 137137-09 broke LDOM
I've tried using S10 U6 to reinstall the boot file (instead of U5) over Jumpstart as it's an LDOM, and noticed another error.
Boot device: /virtual-devices@100/channel-devices@200/network@0 File and args: -s
Requesting Internet Address for 0:14:4f:f9:84:f3
boot: cannot open kernel/sparcv9/unix
Enter filename [kernel/sparcv9/unix]:
Has anyone seen this error on U6 jumpstart or is
2008 Nov 08
7
Paravirtualized Solaris Update 6 (10/08)?
Gurus;
I've been running Solaris 10 in an HVM domain on my machine (running SXCE
snv_93 x86) for some time now.
Now that Solaris 10 Update 6 (10/08) has been released, I tried creating
a Paravirtualized Guest domain but got the same error message I got
previously...
# virt-install -n sol10 -p -r 1560 --nographics -f
/dev/zvol/dsk/rpool/sol10 -l /stage/sol-10-u6->
Starting
2008 May 20
4
Ways to speed up ''zpool import''?
We're planning to build a ZFS-based Solaris NFS fileserver environment
with the backend storage being iSCSI-based, in part because of the
possibilities for failover. In exploring things in our test environment,
I have noticed that 'zpool import' takes a fairly long time; about
35 to 45 seconds per pool. A pool import time this slow obviously
has implications for how fast
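Besides a non-default cachefile (discussed in the S10U6 cachefile thread above), the probe can be confined to a directory containing links to just the relevant devices; the paths here are illustrative:
mkdir /dev/sanpool1
ln -s /dev/dsk/c4t1d0s0 /dev/sanpool1/     # link in only the LUNs that belong to this pool
zpool import -d /dev/sanpool1 sanpool1     # probe only that directory during import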
2010 Jan 24
4
zfs streams
Can I send a zfs send stream (ZFS pool version 22 ; ZFS filesystem
version 4) to a zfs receive stream on Solaris 10 (ZFS pool version 15 ;
ZFS filesystem version 4)?
--
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | SunOS 10u7 5/09 | OpenSolaris 2010.03 b131
+ All that's really worth doing is what we do for others (Lewis Carroll)
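The version checks usually made before attempting such a send, with hypothetical pool and dataset names:
zpool get version sendpool        # 22 on the sending side in this case
zpool get version recvpool        # 15 on the receiving Solaris 10 host
zfs get version sendpool/data     # ZPL (filesystem) version, 4 here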
2008 Jul 25
11
send/receive
I created a snapshot of my whole zpool (zfs version 3):
zfs snapshot -r tank@`date +%F_%T`
then tried to send it to the remote host:
zfs send tank@2008-07-25_09:31:03 | ssh user@10.0.1.14 -i identitykey 'zfs
receive tank/tankbackup'
but got the error "zfs: command not found" since user is not superuser, even
though it is in the root group.
I found
2008 Jul 31
17
Can I trust ZFS?
Hey folks,
I guess this is an odd question to be asking here, but I could do with some feedback from anybody who's actually using ZFS in anger.
I'm about to go live with ZFS in our company on a new fileserver, but I have some real concerns about whether I can really trust ZFS to keep my data alive if things go wrong. This is a big step for us, we're a 100% windows