Displaying 20 results from an estimated 900 matches similar to: "zfs sparc boot "Bad magic number in disk label""

2010 Jul 09
4
resilver of older root pool disk
This is a hypothetical question that could actually happen: Suppose a root pool is a mirror of c0t0d0s0 and c0t1d0s0, and for some reason c0t0d0s0 goes offline but comes back online after a shutdown. The primary boot disk would then be c0t0d0s0, which would have much older data than c0t1d0s0. Under normal circumstances ZFS would know that c0t0d0s0 needs to be resilvered. But in this case
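A minimal sketch of how one would usually check and force this, assuming the pool name rpool and the device names from the scenario above:

    # ZFS tracks per-device transaction groups, so the stale side should
    # show up in the status output:
    zpool status -v rpool
    # A scrub verifies and repairs the stale half from the current side:
    zpool scrub rpool
    # Or force a full resilver by detaching and re-attaching it:
    zpool detach rpool c0t0d0s0
    zpool attach rpool c0t1d0s0 c0t0d0s0
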
2010 Mar 27
16
zpool split problem?
Zpool split is a wonderful feature and it seems to work well, and the choice of which disk got which name was perfect! But there seems to be an odd anomaly (at least with b132). Started with c0t1d0s0 running b132 (root pool is called rpool). Attached c0t0d0s0 and waited for it to resilver. Rebooted from c0t0d0s0. zpool split rpool spool. Rebooted from c0t0d0s0; both rpool and spool were mounted
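For context, the sequence described reduces to roughly this (a sketch; device and pool names from the post, b132 or later for zpool split):

    zpool attach rpool c0t1d0s0 c0t0d0s0   # mirror the root pool
    zpool status rpool                     # wait for the resilver to finish
    zpool split rpool spool                # detach one half as new pool 'spool'
    # zpool split normally leaves the new pool exported; the anomaly
    # reported is that after a reboot both rpool and spool were mounted.
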
2008 Jul 09
8
Using zfs boot with MPxIO on T2000
Here is what I have configured: a T2000 with OBP 4.28.6 2008/05/23 12:07 and two 72 GB disks as the root disks. OpenSolaris Nevada Build 91: Solaris Express Community Edition snv_91 SPARC Copyright 2008 Sun Microsystems, Inc. All Rights Reserved. Use is subject to license terms. Assembled 03 June 2008
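For reference, a minimal sketch of turning multipathing on before a ZFS boot attempt, assuming stmsboot(1M) is the mechanism on this build:

    stmsboot -e    # enable MPxIO; prompts for a reboot
    stmsboot -L    # after the reboot, map old device names to the new MPxIO names
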
2008 Jun 04
17
Get your SXCE on ZFS here!
With the release of the Nevada build 90 binaries, it is now possible to install SXCE directly onto a ZFS root filesystem, and also to put swap on a ZFS volume without worrying about having it deadlock. ZFS now also supports crash dumps! To install SXCE to a ZFS root, simply use the text-based installer after choosing "Solaris Express" from the boot menu on the DVD. DVD download
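A sketch of the swap and crash-dump setup the announcement refers to (sizes are placeholders):

    zfs create -V 2G rpool/swap
    swap -a /dev/zvol/dsk/rpool/swap
    zfs create -V 1G rpool/dump
    dumpadm -d /dev/zvol/dsk/rpool/dump
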
2006 Nov 07
6
Best Practices recommendation on x4200
Greetings all- I have a new X4200 that I'm getting ready to deploy. It has four 146 GB SAS drives. I'd like to set up the box for maximum redundancy of the data stored on these drives. Unfortunately, it looks like ZFS boot/root aren't really options at this time. The LSI Logic controller in this box only supports either a RAID0 array with all four disks, or a RAID 1
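One common workaround from that era, sketched here under the assumption that boot stays on a hardware RAID 1 pair and the other two disks go to ZFS (device names are placeholders):

    zpool create data mirror c1t2d0 c1t3d0
    zpool status data
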
2009 Jan 09
24
zfs root, jumpstart and flash archives
I understand that currently, at least under Solaris 10u6, it is not possible to jumpstart a new system with a ZFS root using a flash archive as a source. Can anyone comment on whether this restriction will be lifted in the near term, or whether it will be a while (6+ months) before this is possible? Thanks, Jerry
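For reference, once ZFS root flash installs were supported (Solaris 10 10/09, if memory serves), a JumpStart profile looked roughly like this (server path and device names are placeholders):

    install_type     flash_install
    archive_location nfs server:/export/flar/s10u8.flar
    pool rpool auto auto auto mirror c0t0d0s0 c0t1d0s0
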
2010 May 28
21
expand zfs for OpenSolaris running inside vm
Hello, all. I have constrained disk space (only 8 GB) while running the OS inside a VM, and now I want to add more. It is easy to grow the disk on the VM side, but how can I grow the filesystem in the OS? I cannot use autoexpand because it isn't implemented in my build: $ uname -a SunOS sopen 5.11 snv_111b i86pc i386 i86pc If it were 171 it would be great, right? Doing the following: o added a new virtual HDD (it becomes
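On builds before the autoexpand property (which arrived around snv_117, along with zpool online -e), the usual fallback is a sketch like this (device names assumed):

    zpool attach rpool c0d0s0 c0d1s0   # attach the new, larger virtual disk
    zpool detach rpool c0d0s0          # drop the small one after the resilver
    # capacity is re-read when the pool is exported/imported or at reboot
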
2008 Nov 12
21
zfs boot - U6 kernel patch breaks sparc boot
Hi, in preparation to try ZFS boot on SPARC I installed all recent patches, including the feature patches coming from s10s_u3wos_10, and after a reboot finally 137137-09 (still having everything on UFS). Now it doesn't boot anymore: ############################### Sun Fire V240, No Keyboard Copyright 2006 Sun Microsystems, Inc. All rights reserved. OpenBoot 4.22.23, 2048 MB memory installed,
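A recovery sketch from that era: boot from install media and rewrite the boot block on the UFS root slice (device name is a placeholder):

    # ok boot cdrom -s
    installboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk /dev/rdsk/c0t0d0s0
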
2010 Aug 28
4
ufs root to zfs root liveupgrade?
Hi all. I'm trying to learn how a UFS root to ZFS root Live Upgrade works. I downloaded the VirtualBox image of s10u8; it comes up with a UFS root. Added a new disk (16 GB), created zpool rpool, ran lucreate -n zfsroot -p rpool, ran luactivate zfsroot, ran lustatus, and it does show zfsroot will be active on the next boot. init 6, but it comes up with a UFS root; lustatus shows ufsroot active, and zpool rpool is mounted but not used by boot. Is this a
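Things worth checking when the system comes back on the UFS BE (a sketch, not a diagnosis of this particular report):

    lustatus                 # which BE is marked active on reboot
    zpool get bootfs rpool   # the root dataset ZFS boot will use
    # luactivate prints SPARC boot-device instructions; shut down with
    # init 6 rather than reboot(1M) so the LU switch scripts actually run:
    init 6
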
2008 Aug 08
1
[install-discuss] lucreate into New ZFS pool
Hello, Since I've got my disk partitioning sorted out now, I want to move my BE from the old disk to the new disk. I created a new zpool, named RPOOL for distinction from the existing "rpool". I then did lucreate -p RPOOL -n new95. This completed without error; the log is at the bottom of this mail. I have not yet dared to run luactivate. I also have not yet dared set the
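Assuming the lucreate log really is clean, the cautious next steps look like this (BE name from the post; activation is reversible by booting the old BE from the boot prom):

    luactivate new95
    lustatus
    init 6    # not reboot(1M), so the LU activation scripts run
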
2007 Dec 03
2
Help replacing dual identity disk in ZFS raidz and SVM mirror
Hi, We have a number of 4200s set up using a combination of an SVM 4-way mirror and a ZFS raidz stripe. Each disk (of 4) is divided up like this:
/       6 GB   UFS  s0
swap    8 GB        s1
/var    6 GB   UFS  s3
metadb  50 MB  UFS  s4
/data   48 GB  ZFS  s5
For SVM we do a 4-way mirror on /, swap, and /var. So we have 3 SVM mirrors: d0=root (submirrors d10, d20, d30, d40), d1=swap (submirrors d11, d21, d31, d41)
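A hedged sketch of swapping out one such dual-identity disk, using the slice layout above with c1t0d0 as a hypothetical failed disk and 'data' as an assumed pool name:

    metadb -d c1t0d0s4                    # drop the state replicas on the bad disk
    metadetach d0 d10                     # detach its submirrors (repeat for d1, d3)
    cfgadm -c unconfigure c1::dsk/c1t0d0  # then replace the disk physically
    # after re-partitioning the new disk identically:
    metadb -a c1t0d0s4
    metattach d0 d10                      # reattach submirrors (repeat for d1, d3)
    zpool replace data c1t0d0s5           # resilver the raidz slice
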
2009 Nov 16
5
xVM fails on SXCE 127
During boot, I get the following error: Nov 16 09:16:41 sol11 svc.startd[7]: [ID 652011 daemon.warning] svc:/system/xvm/store:default: Method "/lib/svc/method/xenstored start" failed with exit status 96. Nov 16 09:16:41 sol11 svc.startd[7]: [ID 748625 daemon.error] system/xvm/store:default misconfigured: transitioned to maintenance (see 'svcs -xv' for details) It
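Standard first steps for a service that has transitioned to maintenance (the log path follows the usual SMF naming convention):

    svcs -xv svc:/system/xvm/store:default
    tail /var/svc/log/system-xvm-store:default.log
    svcadm clear svc:/system/xvm/store:default   # retry once the cause is fixed
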
2010 May 07
0
confused about zpool import -f and export
Hi, all, I think I'm missing a concept with import and export. I'm working on installing a Nexenta b134 system under Xen: I have to run the installer under HVM mode and then get it back up under PV mode. In that process the controller names change, and that's where I'm getting tripped up. I do a successful install, then I boot OK,
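The concept in brief, as a sketch (pool name is a placeholder): export releases a pool so it can be picked up by another host or under a different controller layout; import rescans the devices:

    zpool export tank
    zpool import          # lists importable pools and the device paths found
    zpool import -f tank  # -f overrides the "in use by another system" guard
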
2011 Jan 04
0
zpool import hangs system
Hello, I've been using NexentaStor Community Edition with no issues for a while now. Last week I was going to rebuild a different system, so I started to copy all the data off it to a raidz2 volume on my CE system. This was going fine until I noticed that the copy had stalled and the entire system was non-responsive. I let it sit for several hours with no
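On b134-based systems, one low-risk probe when an import hangs is the recovery-mode dry run (pool name assumed):

    zpool import -nF tank   # -n reports what rewinding to an older txg
                            # would discard, without changing anything
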
2011 Aug 05
0
Kernel panic on zpool import. 200G of data inaccessible! assertion failed: zvol_get_stats(os, nv) == 0
System: snv_151a 64-bit on Intel. Error: panic[cpu0] assertion failed: zvol_get_stats(os, nv) == 0, file: ../../common/fs/zfs/zfs_ioctl.c, line: 1815 Failure first seen on Solaris 10, update 8. History: I recently received two 320G drives and realized from reading this list that it would have been better to do the install on the small drives, but I didn't have them at the time.
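One way to inspect the pool from userland, without going through the kernel ioctl path that panics, is zdb against the unimported pool (pool name is a placeholder; output can be large):

    zdb -e tank      # examine an exported/unimported pool's metadata
    zdb -e -d tank   # list its datasets, to identify the damaged zvol
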
2005 Oct 26
1
Error message with fbt::copen:entry probe
All, The attached script is causing the following error message ... bash-3.00# ./zmon_bug.d dtrace: error on enabled probe ID 2 (ID 4394: fbt:genunix:copen:entry): invalid address (0xfd91747f) in predicate at DIF offset 120 dtrace: error on enabled probe ID 2 (ID 4394: fbt:genunix:copen:entry): invalid address (0xfef81a3f) in predicate at DIF offset 120 Any ideas? thxs Joe --
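The usual cause of "invalid address ... in predicate" is dereferencing user memory (e.g. via copyinstr()) inside the predicate before the page has been faulted in; the common fix is to do the copy in the action and filter there. A sketch, assuming the file name is in arg1 on this build:

    dtrace -n 'fbt:genunix:copen:entry { this->f = copyinstr(arg1); }'
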
2011 Jan 28
2
ZFS root clone problem
(for some reason I cannot find my original thread, so I'm reposting it) I am trying to move my data off of a 40 GB 3.5" drive to a 40 GB 2.5" drive. This is in a Netra running Solaris 10. Originally what I did was: zpool attach -f rpool c0t0d0 c0t2d0. Then I did an installboot on c0t2d0s0. Didn't work; I was not able to boot from my second drive (c0t2d0). I cannot remember
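For comparison, the slice-level attach that SPARC ZFS boot requires looks like this (attaching the whole disk c0t2d0, as above, leaves it without the SMI-labeled s0 slice the boot block needs):

    zpool attach -f rpool c0t0d0s0 c0t2d0s0
    installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t2d0s0
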
2010 Aug 28
0
zfs-discuss Digest, Vol 58, Issue 117
>> Hi all. I'm trying to learn how a UFS root to ZFS root Live Upgrade works. >> >> I downloaded the VirtualBox image of s10u8; it comes up with a UFS root. >> Added a new disk (16 GB), >> created zpool rpool, >> ran lucreate -n zfsroot -p rpool, >> ran luactivate zfsroot, >> ran lustatus, and it does show zfsroot will be active on the next boot. >> init 6, >> but it comes up
2007 Aug 21
12
Is ZFS efficient for large collections of small files?
Is ZFS efficient at handling huge populations of tiny-to-small files - for example, 20 million TIFF images in a collection, each between 5 and 500 KB in size? I am asking because I could have sworn that I read somewhere that it isn't, but I can't find the reference. Thanks, Brian -- - Brian Gupta http://opensolaris.org/os/project/nycosug/
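ZFS stores a file smaller than the recordsize in a single block sized to the file, so tiny files are mostly a metadata and caching question; the commonly suggested knobs are sketched here (dataset name assumed):

    zfs set atime=off tank/images        # avoid a metadata write per read
    zfs set compression=on tank/images   # cheap win if the TIFFs are uncompressed
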
2009 Jun 29
7
ZFS - SWAP and lucreate..
Good morning everybody. I was migrating my UFS root filesystem to a ZFS one, but was a little upset to find that it became bigger (which was clearly because of the swap and dump size). Now I am wondering whether it is possible to set the swap and dump size using the lucreate command (I want to try it again with less space). Unfortunately I did not find any advice in the manpages.
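As far as I know lucreate itself has no swap/dump sizing options; the usual approach is to resize the zvols after the migration (sizes are placeholders):

    swap -d /dev/zvol/dsk/rpool/swap   # swap must be removed before shrinking
    zfs set volsize=2G rpool/swap
    swap -a /dev/zvol/dsk/rpool/swap
    zfs set volsize=1G rpool/dump      # then re-run dumpadm if needed
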