Displaying 20 results from an estimated 1000 matches similar to: "zfs boot issue, changing device id"
2006 Mar 30
39
Proposal: ZFS Hot Spare support
As mentioned last night, we've been reviewing a proposal for hot spare
support in ZFS. Below you can find a current draft of the proposed
interfaces. This has not yet been submitted for ARC review, but
comments are welcome. Note that this does not include any enhanced FMA
diagnosis to determine when a device is "faulted". This will come in a
follow-on project, of which some
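For context, a rough sketch of a hot-spare workflow along these lines, assuming the zpool syntax that eventually shipped (pool and device names are placeholders):

  # associate a shared hot spare with an existing pool
  zpool add tank spare c1t7d0

  # replace a faulted device with the spare by hand
  zpool replace tank c1t2d0 c1t7d0

  # once resilvering completes, detach the failed device,
  # or take the spare back out of the pool entirely
  zpool detach tank c1t2d0
  zpool remove tank c1t7d0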
2009 Mar 25
3
anonymous dtrace?
Hello experts,
I heard that there is something called anonymous dtrace that would still
be running when I do a reboot.
Basically, I have the following problem:
The /boot/solaris/bootenv.rc file in my alternate boot environment is
getting modified when I reboot the machine after doing luactivate <ABE>.
It happens only on init 6, doesn't happen when I do a simple reboot.
The set
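Anonymous tracing is set up with dtrace -A, which persists the enablings into the dtrace driver configuration so they are active during early boot, and dtrace -a claims the results afterwards. A sketch of how it might be aimed at this problem (the probe and predicate are illustrative, not a tested script):

  # install an anonymous enabling that logs opens of bootenv.rc
  dtrace -A -n 'syscall::open*:entry
      /strstr(copyinstr(arg0), "bootenv.rc") != NULL/
      { printf("%s opened by %s (pid %d)", copyinstr(arg0), execname, pid); }'

  # init 6, then claim the anonymous state once the system is back up
  dtrace -a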
2006 Oct 24
3
determining raidz pool configuration
Hi all,
Sorry for the newbie question, but I've looked at the docs and haven't
been able to find an answer for this.
I'm working with a system where the pool has already been configured and
want to determine what the configuration is. I had thought that'd be
with zpool status -v <poolname>, but it doesn't seem to agree with the
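For what it's worth, zpool status -v does print the vdev tree, so a raidz pool should appear as a raidz line with its member disks indented beneath it, while zpool list and zfs list show raw versus usable capacity. Illustrative output only, names are placeholders:

  zpool status -v tank
  #   NAME        STATE     READ WRITE CKSUM
  #   tank        ONLINE       0     0     0
  #     raidz     ONLINE       0     0     0
  #       c1t2d0  ONLINE       0     0     0
  #       c1t3d0  ONLINE       0     0     0
  #       c1t4d0  ONLINE       0     0     0

  zpool list tank   # raw pool size, parity included
  zfs list tank     # space actually available to filesystems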
2009 Nov 17
2
p2v for sxce snv115 to xvm on opensolaris host?
hi folks,
is there a straightforward or well-documented way to migrate my physical sxce snv_115 (x64) system into an xvm ?
searching for "p2v" in an opensolaris context seems to pick up a few hits on zones, but nothing obvious relating to xvm on opensolaris
for what it's worth the host system is opensolaris (2010.02 snv_126), but i'm hoping that's not very
2006 Apr 06
15
A few Newbie questions about RAIDZ
1. I have a 4x18GB drive setup as RAIDZ. Now when thinking about it
in terms of RAID5 I would expect to get (4-1)x18 worth of drive
space, but df -h shows 4x18. Is this a bug or do I not understand?
2. Once again thinking in RAID5 terms if I have 4X18GB and 12X9GB
drives and I want to make a RAIDZ of all of them I would expect the
18GB to be treated as 9GB so the RAIDZ would be 16X9GB. Is
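For usable space the poster's RAID5 arithmetic is roughly right: a 4 x 18GB raidz should leave about (4-1) x 18GB = 54GB for data, with one disk's worth going to parity. The pool-level and filesystem-level tools report different numbers, which is a common source of this confusion. A hedged way to compare them (pool name is a placeholder):

  zpool list tank   # raw capacity, parity included (about 4 x 18GB)
  zfs list tank     # usable capacity (about 3 x 18GB)
  df -h /tank       # should track zfs list rather than zpool list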
2009 Jul 01
14
can't boot 2009.06 domU on Xen 3.4.1 / CentOS 5.3 dom0
I've got a CentOS 5.3 dom0 with Xen 3.4.1-rc5 (or so). I've tried the same stuff below with 3.4.0, no difference. I'm trying to install 2009.06 PV domU based on instructions from [1] and [2]. I can run the install fine, I can also get the kernel and boot archive (from [2]) after the install. But for the life of me I can't get the installed domU to boot.
If I
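For reference only, a PV guest of that era is usually described to the Xen toolstack with a config that points at the Solaris unix kernel and boot_archive pulled out of the installed image; everything below (paths, disk image, ZFS root dataset, bootpath) is an assumption, not taken from this thread:

  # /etc/xen/osol2009.cfg -- illustrative shape of a PV domU config
  name    = "osol2009"
  memory  = 1024
  kernel  = "/xen/osol/unix"            # copy of /platform/i86xpv/kernel/amd64/unix
  ramdisk = "/xen/osol/boot_archive"    # copy of /platform/i86pc/amd64/boot_archive
  extra   = "/platform/i86xpv/kernel/amd64/unix -B zfs-bootfs=rpool/ROOT/opensolaris,bootpath=/xpvd/xdf@0:a"
  disk    = ["file:/xen/osol/disk.img,xvda,w"]
  vif     = ["bridge=xenbr0"]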
2007 Oct 18
2
GRUB + zpool version mismatches
Apparently with zfs boot, if the zpool is a version grub doesn't
recognize, it merely ignores any zfs entries in menu.lst, and
apparently instead boots the first entry it thinks it can boot. I ran
into this myself due to some boneheaded mistakes while doing a very
manual zfs / install at the summit.
Shouldn't it at least spit out a warning? If so, I have no issues
filing a
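A hedged way to check for such a mismatch before rebooting is to compare the pool version against what the installed bits support, and to refresh grub's stages after any pool upgrade so its ZFS reader matches (device name is a placeholder):

  zpool get version rpool   # on-disk version grub has to understand
  zpool upgrade -v          # versions the installed zfs/zpool software supports

  # after a zpool upgrade of the root pool, reinstall grub's stages
  installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t0d0s0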
2005 Nov 20
2
ZFS & small files
First - many, many congrats to team ZFS. Developing/writing a new Unix fs
is a very non-trivial exercise with zero tolerance for developer bugs.
I just loaded build 27a on a w1100z with a single AMD 150 CPU (2Gb RAM) and
a single (for now) SCSI disk drive: FUJITSU MAP3367NP (Revision: 0108)
hooked up to the built-in SCSI controller (the only device on the SCSI
bus).
My initial ZFS test was to
2009 Aug 02
1
zpool status showing wrong device name (similar to: ZFS confused about disk controller )
Hi All,
over the last couple of weeks, I had to boot from my rpool from various physical
machines because some component on my laptop mainboard blew up (you know that
burned electronics smell?). I can't retrospectively document all I did, but I am
sure I recreated the boot-archive, ran devfsadm -C and deleted
/etc/zfs/zpool.cache several times.
Now zpool status is referring to a
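The names zpool status prints come from device paths recorded in the labels and in zpool.cache, so they go stale when hardware moves. A sketch of the usual fix, assuming a pool named tank for the non-root case:

  # non-root pool: export and re-import to re-resolve device paths
  zpool export tank
  zpool import tank

  # root pool: boot from other media or failsafe, then force an import
  zpool import -f rpool   # rewrites the cached device paths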
2008 Jan 05
11
Help with booting dom0 on a Dell 2950
Hi, I have installed b_78 on a Dell 2950 and booting to bare metal works fine but when I try to boot using the grub entry Solaris xVM it will boot to the point where it displays the uname info and then just stays there. It will not boot past that point. I have enabled VT technology in the BIOS (but only after the installation).
Where/what can I look at to troubleshoot this? I am new to xen and
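One generic first step (a suggestion, not from this thread) is to send hypervisor and dom0 console output to the serial port so the hang point becomes visible, roughly like this in the xVM grub entry (serial settings are placeholders):

  kernel$ /boot/$ISADIR/xen.gz console=com1 com1=115200,8n1
  module$ /platform/i86xpv/kernel/$ISADIR/unix /platform/i86xpv/kernel/$ISADIR/unix -B console=ttya
  module$ /platform/i86pc/$ISADIR/boot_archive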
2008 Jul 28
1
zpool status my_pool , shows a pulled disk c1t6d0 as ONLINE ???
New server build with Solaris-10 u5/08,
on a SunFire t5220, and this is our first rollout of ZFS and Zpools.
Have 8 disks, boot disk is hardware mirrored (c1t0d0 + c1t1d0)
Created Zpool my_pool as RaidZ using 5 disks + 1 spare:
c1t2d0, c1t3d0, c1t4d0, c1t5d0, c1t6d0, and spare c1t7d0
I am working on alerting & recovery plans for disk failures in the zpool.
As a test, I have pulled disk
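ZFS generally only notices a pulled disk when I/O to it fails, so an idle pool can keep reporting ONLINE for a while. A hedged set of checks:

  zpool scrub my_pool        # force I/O to every device in the pool
  zpool status -xv my_pool   # should now show the pulled disk as UNAVAIL/FAULTED
  fmadm faulty               # anything FMA has diagnosed
  fmdump -ev                 # recent error telemetry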
2007 Oct 09
4
dom0 boot panic after bfu to b75
After BFU-ing my system to b75 I ended up with a panicking system
when booting into Xen:
grub> #---------- ADDED BY BOOTADM - DO NOT EDIT ----------
grub> title Solaris on Xen
grub> kernel$ /boot/$ISADIR/xen.gz
grub> module$ /platform/i86xpv/kernel/$ISADIR/unix /platform/i86xpv/kernel/$ISADIR/unix -B console=ttyb
grub> module$ /platform/i86pc/$ISADIR/boot_archive
grub>
2008 Apr 01
29
OpenSolaris ZFS NAS Setup
If it's of interest, I've written up some articles on my experiences of building a ZFS NAS box which you can read here:
http://breden.org.uk/2008/03/02/a-home-fileserver-using-zfs/
I used CIFS to share the filesystems, but it will be a simple matter to use NFS instead: issue the command 'zfs set sharenfs=on pool/filesystem' instead of 'zfs set
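For reference, both sharing properties are set the same way (dataset name is a placeholder):

  zfs set sharenfs=on pool/filesystem    # NFS
  zfs set sharesmb=on pool/filesystem    # in-kernel CIFS service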
2008 Apr 02
1
delete old zpool config?
Hi experts
zpool import shows some weird config of an old zpool
bash-3.00# zpool import
  pool: data1
    id: 7539031628606861598
 state: FAULTED
status: One or more devices are missing from the system.
action: The pool cannot be imported. Attach the missing
        devices and try again.
   see: http://www.sun.com/msg/ZFS-8000-3C
config:

        data1    UNAVAIL   insufficient replicas
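What zpool import lists here comes from ZFS labels still present on the disks rather than from a config file, so the stale pool disappears once those labels are gone. Two hedged options, assuming the old data is expendable (device name is a placeholder):

  # if enough of the old devices are actually attached, force-import and destroy it
  zpool import -f data1
  zpool destroy data1

  # otherwise, reusing a disk in a new pool with -f overwrites its old labels
  zpool create -f newpool c1t2d0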
2006 Jul 18
1
file access algorithm within pools
Hello,
What is the access algorithm used within multi-component pools for a
given pool, and does it change when one or more members of the pool
become degraded?
examples:
zpool create mtank mirror c1t0d0 c2t0d0 mirror c3t0d0 c4t0d0 mirror
c5t0d0 c6t0d0
or;
zpool create ztank raidz c1t0d0 c2t0d0 c3t0d0 raidz c4t0d0 c5t0d0 c6t0d0
As files are created on the filesystem within these pools,
2009 Aug 04
7
Sol10u7: can't "zpool remove" missing hot spare
I'm using Solaris 10u6 updated to u7 via patches, and I have a pool
with a mirrored pair and a (shared) hot spare. We reconfigured disks
a while ago and now the controller is c4 instead of c2. The hot spare
was originally on c2, and apparently on rebooting it didn't get found.
So, I looked up what the new name for the hot spare was, then added
it to the pool with "zpool
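When the spare has gone missing, zpool remove may only recognise the name the pool still has on record; one hedged approach is to use exactly the name zpool status prints for it, falling back to the vdev GUID from zdb (names and GUID below are illustrative):

  zpool status tank            # note the exact name shown for the missing spare
  zpool remove tank c2t5d0     # old path, as still listed

  # if that fails, look up the spare's guid in the cached config
  zdb -C tank
  zpool remove tank 1234567890123456789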
2007 Sep 28
5
ZFS Boot Won't work with a straight or mirror zfsroot
Using build 70, I followed the zfsboot instructions at
http://www.opensolaris.org/os/community/zfs/boot/zfsboot-manual/ to the letter.
I tried first with a mirror zfsroot; when I try to boot to zfsboot
the screen is flooded with "init(1M) exited on fatal signal 9"
Then I tried with a simple zfs pool (not mirrored) and it just
reboots right away.
If I try to set up grub
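For comparison, the manual zfsboot procedure of that era boiled down to setting the pool's bootfs property, installing grub, and pointing the menu entry at the ZFS bootfs; a hedged sketch with placeholder names:

  zpool set bootfs=rootpool/rootfs rootpool
  installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t0d0s0

  # menu.lst entry
  title ZFS boot
  kernel$ /platform/i86pc/kernel/$ISADIR/unix -B $ZFS-BOOTFS
  module$ /platform/i86pc/$ISADIR/boot_archive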
2010 Jan 13
3
Recovering a broken mirror
We have a production SunFireV240 that had a zfs mirror until this week. One of the drives (c1t3d0) in the mirror failed.
The system was shut down and the bad disk replaced without an export.
I don't know what happened next but by the time I got involved there was no evidence that the remaining good disk (c1t2d0) had ever been part of a ZFS mirror.
Using dd on the raw device I can see data
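Before reaching for dd, zdb can say whether any ZFS labels survive on the remaining disk; each vdev keeps four copies of its label, and any one of them identifies the pool (slice name is a placeholder):

  zdb -l /dev/rdsk/c1t2d0s0    # prints whichever of the 4 labels are still readable
  zpool import                 # does the system still see a pool to import?
  zpool import -D              # also search for destroyed pools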
2007 Jan 11
4
Help understanding some benchmark results
G'day, all,
So, I've decided to migrate my home server from Linux+swRAID+LVM to Solaris+ZFS, because it seems to hold much better promise for data integrity, which is my primary concern.
However, naturally, I decided to do some benchmarks in the process, and I don't understand why the results are what they are. I thought I had a reasonable understanding of ZFS, but now
2006 Sep 11
7
installing a pseudo driver in a Solaris DOM U and DOM U reboot
Hello,
on a v20z, we have as DOM 0 a Solaris Xen on snv44 (64-bit),
and as DOM U a Solaris Xen on snv44 (64-bit).
We then install a pseudo driver in the Solaris DOM 1 XEN snv44:
installation is ok and driver works as expected.
But on reboot of DOM 1, the driver is no longer
there (in modinfo, the driver is not found).
Is there something special to do after a pseudo driver installation in
a Solaris
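A hedged checklist for making a freshly added pseudo driver survive a domU reboot: make sure add_drv actually recorded it, then rebuild the boot archive so the module is available at the next boot (driver name is hypothetical):

  add_drv mypseudo                   # hypothetical driver name
  grep mypseudo /etc/name_to_major   # the registration should persist here
  bootadm update-archive             # refresh the archive used at boot
  # after the reboot:
  modinfo | grep mypseudo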