Displaying 15 results from an estimated 15 matches for "s10u2".
2006 May 16
3
ZFS snv_b39 and S10U2
Hello zfs-discuss,
Just to be sure - if I create ZFS filesystems on snv_39 and then
later just import that pool on S10U2 - can I safely assume it will
just work (i.e. nothing was added to or changed in the on-disk format
in the last few snv releases that will not be in U2)?
I have to put some data on ZFS right now, and later I want to
reinstall the system to S10U2, but I do not want to...
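For reference, a minimal sketch of that move, assuming a hypothetical pool name of tank and an on-disk format both releases understand:
# on snv_39, before the reinstall
zpool export tank
# on S10U2, after the reinstall
zpool import tank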
2006 Jul 18
4
add dataset
Hi,
a simple question..
is "add dataset" not part of zonecfg?
global# zonecfg -z myzone (OK)
zonecfg:myzone> add dataset (fails as there is no dataset option)
zonecfg:myzone> add zfs (fails as there is no zfs option either)
Basically, how do I add a dataset to a zone?
Thanks
Roshan
please cc me pererar at visa.com
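For comparison, a sketch of how dataset delegation is usually configured on a release where zonecfg has the dataset resource (zfspool01/myzone is a hypothetical dataset name):
global# zonecfg -z myzone
zonecfg:myzone> add dataset
zonecfg:myzone:dataset> set name=zfspool01/myzone
zonecfg:myzone:dataset> end
zonecfg:myzone> verify
zonecfg:myzone> commit
zonecfg:myzone> exit
If "add dataset" is not accepted at all, the installed build likely predates ZFS support in zonecfg.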
2006 Mar 29
3
ON 20060327 and upcoming solaris 10 U2 / coreutils
So, I noticed that a lot of the fixes discussed here recently,
including the ZFS/NFS interaction bug fixes and the deadlock fix, have
made it into 20060327, which was released this morning. My question is
whether we'll see all these up-to-the-minute bug fixes in the Solaris
10 update that brings ZFS to that product, or if there is a specific
date after which no further updates will make it in to
2006 Sep 21
1
Dtrace script compilation error.
Hi All,
One of our customers is seeing the following error messages.
He is using an S10U1 system with all the latest patches.
The DTrace script is attached.
I am not seeing these error messages on an S10U2 system.
What could be the problem?
Thanks,
Gangadhar.
------------------------------------------------------------------------
rroberto at njengsunu60-2:~$ dtrace -AFs /lib/svc/share/kcfd.d
dtrace: failed to compile script /lib/svc/share/kcfd.d: line 26: args[ ]
may not be referenced because pro...
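One way to narrow this down (a sketch, not from the original thread) is to list the probes the script uses, verbosely, on both the S10U1 and S10U2 systems and compare whether typed arguments are published for them; args[] can only be referenced where the provider defines argument types:
# substitute the probe description actually used at line 26 of kcfd.d
dtrace -l -v -n 'provider:module:function:name'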
2007 Feb 20
2
Multicast groups? within crossbow and over IPMP?
We have had an interesting issue with market data and network interface failures in the past with IPMP (or even VCS MultiNICA). Multicast groups cannot span multiple interfaces, and in our environment, when we have a network failure, the interface is failed over but the multicast group is not. The only way to recreate the multicast group is to restart the application with the active network
2007 Feb 11
0
unable to mount legacy vol - panic in zfs:space_map_remove - zdb crashes
I have a 100GB SAN LUN in a pool that had been running fine for about 6 months; it panicked the system this morning. The system was running S10U2. In the course of troubleshooting I've installed the latest recommended bundle, including kjp 118833-36 and ZFS patch 124204-03.
created as:
zpool create zfspool01 /dev/dsk/emcpower0c
zfs create zfspool01/nb60openv
zfs set mountpoint=legacy zfspool01/nb60openv
mkdir -p /zfs/NB60/nb60openv
m...
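For reference, a sketch of how a legacy-mountpoint dataset is normally mounted, either directly or via /etc/vfstab:
mount -F zfs zfspool01/nb60openv /zfs/NB60/nb60openv
# or as an /etc/vfstab entry:
zfspool01/nb60openv  -  /zfs/NB60/nb60openv  zfs  -  yes  -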
2006 Jul 15
2
zvol of files for Oracle?
Hello zfs-discuss,
What would you rather propose for ZFS+ORACLE - zvols or just files
from the performance standpoint?
--
Best regards,
Robert mailto:rmilkowski at task.gda.pl
http://milek.blogspot.com
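For context, a sketch of the zvol side of that comparison, with hypothetical pool/volume names and size; ZFS exposes the volume as block and raw devices under /dev/zvol/:
# create a 10 GB volume to hold Oracle data files
zfs create -V 10g zfspool01/oradata
# block device: /dev/zvol/dsk/zfspool01/oradata
# raw device:   /dev/zvol/rdsk/zfspool01/oradata
The alternative is a plain filesystem (zfs create zfspool01/oradata) holding ordinary data files.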
2007 Mar 21
4
HELP!! I can't mount my zpool!!
Hi all.
One of our servers had a panic and now I can't mount the zpool anymore!
Here is what I get at boot:
Mar 21 11:09:17 SERVER142 panic[cpu1]/thread=ffffffff90878200:
Mar 21 11:09:17 SERVER142 genunix: [ID 603766 kern.notice] assertion failed: ss->ss_start <= start (0x670000b800 <= 0x6700009000), file: ../../common/fs/zfs/space_map.c, line: 126
Mar 21 11:09:17 SERVER142
2006 Jun 20
3
nevada_41 and zfs disk partition
I just installed build 41 of Nevada on a SunBlade 1500 with 2GB of RAM. I wanted to check out ZFS since, with the delay of S10U2, I really could not wait any longer :)
I installed it on my system and created a zpool out of an approximately 40GB disk slice. I then wanted to build a version of Thunderbird that contains a local patch that we like. So I downloaded the source tarball. I tried to untar it on the ZFS filesystem and th...
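The pool-on-a-slice setup described above would look roughly like this (c0t1d0s7 is a hypothetical slice name):
# build a single-device pool on the spare 40GB slice
zpool create tank c0t1d0s7
zfs create tank/build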
2006 Nov 03
27
# devices in raidz.
For S10U2, the documentation recommends 3 to 9 devices in raidz. What is the
basis for this recommendation? I assume it is performance and not failure
resilience, but I am just guessing... [I know, the recommendation was intended
for people who know their RAID cold, so it needed no further explanation]
thanks... oz...
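For illustration, a raidz vdev sized inside that recommended 3-9 device range (device names are hypothetical):
# single-parity raidz across six disks
zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0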
2006 Mar 03
5
flag day: ZFS on-disk format change
Summary: If you use ZFS, do not downgrade from build 35 or later to
build 34 or earlier.
This putback (into Solaris Nevada build 35) introduced a backward-
compatible change to the ZFS on-disk format. Old pools will be
seamlessly accessed by the new code; you do not need to do anything
special.
However, do *not* downgrade from build 35 or later to build 34 or
earlier. If you do so, some of
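On releases that include pool versioning, a sketch of how to check where existing pools stand before moving between builds:
# report any pools whose on-disk format is older than the running software supports
zpool upgrade
# list the on-disk format versions this software release supports
zpool upgrade -v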
2007 Apr 19
14
Experience with Promise Tech. arrays/jbod's?
Greetings,
In looking for inexpensive JBOD and/or RAID solutions to use with ZFS, I've
run across the recent "VTrak" SAS/SATA systems from Promise Technologies,
specifically their E-class and J-class series:
E310f FC-connected RAID:
http://www.promise.com/product/product_detail_eng.asp?product_id=175
E310s SAS-connected RAID:
2006 Jun 27
28
Supporting ~10K users on ZFS
OK, I know that there's been some discussion on this before, but I'm not sure that any specific advice came out of it. What would the advice be for supporting a largish number of users (10,000, say) on a system that supports ZFS? We currently use VxFS and assign a user quota, and backups are done via Legato Networker.
From what little I currently understand, the general
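The usual ZFS analogue of a per-user VxFS quota is one filesystem per user with a quota set on it; a sketch with hypothetical names and sizes:
# one dataset per user, each with its own quota
zfs create pool/home/user1
zfs set quota=500m pool/home/user1
With ~10,000 users that means ~10,000 datasets, which is the part most worth testing (mount time, backup tooling such as Legato Networker, and so on).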
2006 Oct 11
41
ZFS Inexpensive SATA Whitebox
All,
So I have started working with Solaris 10 at work a bit (I'm a Linux
guy by trade) and I have a dying NFS box at home. So the long and short of
it is as follows: I would like to set up a SATA II whitebox that uses ZFS as
its filesystem. The box will probably be very lightly used; streaming media
to my laptop and workstation would be the bulk of the work. However, I do
have quite a
2007 Oct 02
53
Direct I/O ability with zfs?
We are using MySQL and love the idea of using ZFS for this. We are used to using direct I/O to bypass filesystem caching (letting the DB do this). Does this exist for ZFS?
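ZFS in these releases has no directio-style open flag; the common workaround at the time was to cap the ARC so the database keeps more of the caching to itself. A sketch, assuming an /etc/system tunable applied at reboot (the 1GB value is only an example):
* /etc/system: limit the ZFS ARC to 1 GB
set zfs:zfs_arc_max = 0x40000000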