similar to: b134 - Mirrored rpool won't boot unless both mirrors are present

Displaying 20 results from an estimated 700 matches similar to: "b134 - Mirrored rpool won't boot unless both mirrors are present"

2010 Mar 13
0
Re: [caiman-discuss] Preliminary Text Install Images for b134
Hi, It doesn't work as a PVM domain within xVM:
# uname -srv
SunOS 5.11 snv_133
# virt-install --name osvm01 -p -r 1024 -f /export/xvm/osvm01/disk1 -l nfs://localhost/export/install --nographics
Starting install...
Retrieving file unix...   100% |=========================| 2.1 MB 00:00
Retrieving file boot_arch 100% |=========================|  44 MB 00:00
Creating
2011 Apr 19
3
zero fill empty cell in data.frame
Hello List, I have a data frame like:
   V130 V131 V132 V133 V134 V135 V136
1  0 0 0.9 0 0.9 0 0
2  0 0 0 0 0 0.8
3  0 0 0 0 0.9 0 0
4  0.9 0 0 0 0 0 0.9
5  0 0 0
6  0 0 0 0.9 0 0 0.9
7  0 0 0.8 0 0 0 0
8  0.9 0 0 0.9 0.8 0
9  0 0 0 0.9 0.9 0 0
10 0 0 0 0 0 0 0.9
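One common answer to this kind of question: if the short rows come in as NA (for example because the file was read with fill = TRUE), the NAs can be overwritten with zero in one step. A minimal R sketch; the file name "mydata.txt" and the use of read.table are assumptions, not taken from the thread:

df <- read.table("mydata.txt", header = TRUE, fill = TRUE)  # short rows are padded with NA
df[is.na(df)] <- 0                                          # replace every NA cell with 0
df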
2010 May 04
8
iscsitgtd failed request to share on zpool import after upgrade from b104 to b134
Hi, I am posting my question to both storage-discuss and zfs-discuss as I am not quite sure what is causing the messages I am receiving. I have recently migrated my zfs volume from b104 to b134 and upgraded it from zfs version 14 to 22. It consists of two zvols, 'vol01/zvol01' and 'vol01/zvol02'. During zpool import I am getting a non-zero exit code,
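A likely culprit (not confirmed in this excerpt) is the legacy shareiscsi property: b134 ships COMSTAR rather than the old iscsitgtd, so a zvol still marked shareiscsi=on makes the import log a failed share request. A rough, hedged sketch of the usual cleanup and migration, using the zvol names from the post; the LU GUID is a placeholder printed by sbdadm:

zfs set shareiscsi=off vol01/zvol01
zfs set shareiscsi=off vol01/zvol02
svcadm enable stmf                                 # COMSTAR framework
sbdadm create-lu /dev/zvol/rdsk/vol01/zvol01       # repeat for zvol02; prints the LU GUID
stmfadm add-view <lu-guid>                         # expose the LU (placeholder GUID)
svcadm enable -r svc:/network/iscsi/target:default
itadm create-target                                # create an iSCSI target for initiators to log in to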
2011 May 17
3
Reboots when importing old rpool
I have a fresh install of Solaris 11 Express on a new SSD. I have inserted the old hard disk and tried to import it with:
# zpool import -f <long id number> Old_rpool
but the computer reboots. Why is that? On my old hard disk I have 10-20 BEs, starting with OpenSolaris 2009.06 and upgraded through b134 up to snv_151a. I also have a WinXP entry in GRUB. This hard disk is partitioned, with a
2008 Jul 14
2
long data frame selection error
Hello, I am trying to select the following headers from a data frame, but when I try to run the command it executes halfway through and gives me an error at V188 and V359.
Temp <- data.frame(V4, V5, V6, V7, V8, V9, V10, V11, V12, V13, V14, V15, V16, V17, V18, V19, V20, V21, V22, V23, V24, V25, V26, V27, V28, V29, V30, V31, V32, V33, V34, V35, V36, V37, V38, V39, V40, V41, V42, V43, V44, V45,
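If the point of that long data.frame() call is just "objects V4 through V<n>", it is usually easier to build the name list programmatically and keep only the objects that actually exist (errors at V188 and V359 typically mean those objects are missing). A hedged R sketch; the upper bound 400 is only an example:

wanted <- paste0("V", 4:400)                              # candidate object names
wanted <- wanted[sapply(wanted, exists, envir = globalenv())]  # drop names that do not exist (e.g. V188, V359)
Temp   <- as.data.frame(mget(wanted, envir = globalenv()))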
2006 Feb 01
2
sort columns
Hi. I have a simple (I think) question. My dataset has these variables:
names(data)
 [1] "v1" "v2" "v3" "v4" "v5" "v6" "v7" "v8" "v9" "v10" "v11" "v12" "v13" "v14" "v15" "v16" "v17"
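If the goal is to put v1..v17 into numeric rather than alphabetical order (v1, v10, v11, ... is the usual complaint), sorting the names by their numeric suffix does it. A small R sketch, assuming every column name is "v" followed by a number:

idx  <- order(as.numeric(sub("^v", "", names(data))))   # 1, 2, ..., 17 instead of 1, 10, 11, ...
data <- data[, idx]
names(data)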
2010 Mar 29
19
sharing a ssd between rpool and l2arc
Hi, as Richard Elling wrote earlier: "For more background, low-cost SSDs intended for the boot market are perfect candidates. Take a X-25V @ 40GB and use 15-20 GB for root and the rest for an L2ARC. For small form factor machines or machines with max capacity of 8GB of RAM (a typical home system) this can make a pleasant improvement over a HDD-only implementation." For the upcoming
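For reference, the usual way to carve up a single SSD like that is an SMI label with one slice for the root pool and another slice handed to the data pool as an L2ARC cache device. A hedged sketch; the device name c1t2d0 and the pool name tank are placeholders:

# after slicing the SSD with format(1M): s0 ~15-20 GB for the OS install, s1 = the rest
zpool add tank cache c1t2d0s1     # add the leftover slice as L2ARC for the data pool
zpool status tank                 # the slice appears under a "cache" heading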
2010 Jan 08
0
ZFS partially hangs when removing an rpool mirrored disk while having some IO on another pool on another partition of the same disk
Hello, Sorry for the (very) long subject but I've pinpointed the problem to this exact situation. I know about the other threads related to hangs, but in my case there was no < zfs destroy > involved, nor any compression or deduplication. To make a long story short, when
- a disk contains 2 partitions (p1 = 32 GB, p2 = 1800 GB) and
- p1 is used as part of a zfs mirror of rpool
2011 Mar 22
0
rpool to use emcpower device
I decided to post this question to the mailing list because it needs ZFS knowledge to be solved. The situation is this: I have a blade server that boots from a LUN; it has no internal disk or additional storage, only that boot LUN. MPxIO works perfectly, but management wants to use EMC PowerPath because the company already has an investment in its licensing. After disabling
2010 Sep 29
2
rpool spare
Using ZFS v22, is it possible to add a hot spare to rpool? Thanks
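Short, hedged answer from the sidelines: some releases accept a spare on the root pool, but a spare that kicks in has no boot blocks on it, so a common alternative is to attach the disk as an extra mirror side instead. Both variants below use placeholder device names (c0t0d0s0 for the existing rpool disk, c0t2d0s0 for the new one):

zpool add rpool spare c0t2d0s0                 # plain hot spare (may be rejected on some releases)
# or: make it a bootable extra mirror side
zpool attach rpool c0t0d0s0 c0t2d0s0
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t2d0s0   # x86; installboot on SPARC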
2010 Jun 30
1
zfs rpool corrupt?????
Hello, Has anyone encountered the following error message, running Solaris 10 u8 in an LDom?
bash-3.00# devfsadm
devfsadm: write failed for /dev/.devfsadm_dev.lock: Bad exchange descriptor
bash-3.00# zpool status -v rpool
  pool: rpool
 state: DEGRADED
status: One or more devices has experienced an error resulting in data corruption. Applications may be affected.
action: Restore the file in
2009 Jan 07
1
ZFS: Log device for rpool (/ root partition) not supported?
Why is it impossible to have a ZFS pool with a log device for the rpool (the device used for the root partition)? Is this a bug? I can't boot from a ZFS / partition on a zpool that also uses a log device. Maybe it's not supported because GRUB would then have to support it too?
2010 Jul 28
4
zfs allow does not work for rpool
I am trying to give a general user permission to create zfs filesystems in the rpool.
zpool set=delegation=on rpool
zfs allow <user> create rpool
Both run without any issues, and zfs allow rpool reports that the user does have the create permission.
zfs create rpool/test
cannot create rpool/test: permission denied
Can you not use zfs allow on the rpool?
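Two things worth checking here: the delegation property is spelled "zpool set delegation=on" (not "zpool set="), and creating a filesystem also mounts it, so the mount permission has to be delegated as well (plus write access under the parent mountpoint). A hedged sketch, with someuser standing in for the real account:

zpool set delegation=on rpool             # pool-wide switch (often already on by default)
zfs allow -u someuser create,mount rpool  # create without mount still fails at mount time
zfs allow rpool                           # verify what is delegated
# the user also needs write access to the parent mountpoint directory (e.g. /rpool)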
2010 Nov 15
0
SCSI timeouts with rPool on usb
I'm currently having a few problems with my storage server. Server specs are:
Open Solaris snv_134
Supermicro X8DTi motherboard
Intel Xeon 5520
6x 4GB DDR3
LSI RAID card - running 24x 1.5TB SATA drives
Adaptec 2405 - running 4x Intel X25-E SSDs
Boots from an 8GB USB flash drive
The initial problem started after a while when the console showed SCSI timeouts and the whole
2010 Nov 15
1
Moving rpool disks
We need to move the disks comprising our mirrored rpool on a Solaris 10 U9 x86_64 (not SPARC) system. We'll be relocating both drives to a different controller in the same system (they should go from c1* to c0*). We're curious as to the best way to go about this. We'd love to be able to just relocate the disks and update the system BIOS to boot off the drives in
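ZFS finds its vdevs by device ID rather than by path, so the move itself is usually uneventful; the pieces that do care about c1* vs c0* are the BIOS boot entry and the device tree. A cautious outline of the common sequence (no zpool export for a root pool), offered as a sketch rather than a procedure:

zpool status rpool            # note the current layout
touch /reconfigure            # request a reconfiguration boot on the next startup
init 5                        # power off; move both disks, point the BIOS at one of them
# after the move, boot and confirm the new c0* paths:
zpool status rpool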
2011 Aug 08
2
rpool recover not using zfs send/receive
Is it possible to recover the rpool with only a tar/star archive of the root filesystem? I have used the zfs send/receive methods and they work without a problem. What I am trying to do is recreate the rpool and the underlying zfs filesystems (rpool/ROOT, rpool/s10_uXXXXXX, rpool/dump, rpool/swap, rpool/export, and rpool/export/home). I then mount the pool at an alternate root and restore the tar
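For anyone attempting the same thing: it can work, but the pool, the BE dataset, the dump/swap zvols, and the boot blocks all have to be recreated by hand before the archive is extracted. A compressed, hedged sketch for an x86 system; the disk c0t0d0s0, the 2G sizes, and the tar path are placeholders, and s10_uXXXXXX is kept from the post:

zpool create -f -o altroot=/a rpool c0t0d0s0                # slice must carry an SMI label
zfs create -o canmount=noauto -o mountpoint=legacy rpool/ROOT
zfs create -o canmount=noauto -o mountpoint=/ rpool/ROOT/s10_uXXXXXX
zfs mount rpool/ROOT/s10_uXXXXXX                            # mounts under /a because of altroot
zfs create -V 2g rpool/dump
zfs create -V 2g rpool/swap
zfs create -o mountpoint=/export rpool/export
zfs create rpool/export/home
zpool set bootfs=rpool/ROOT/s10_uXXXXXX rpool
( cd /a && tar xpf /path/to/root.tar )                      # restore the archive into the new BE
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t0d0s0   # installboot on SPARC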
2010 Apr 16
1
cannot set property for 'rpool': property 'bootfs' not supported on EFI labeled devices
I am getting the following error; however, as you can see below, this is an SMI label...
cannot set property for 'rpool': property 'bootfs' not supported on EFI labeled devices
# zpool get bootfs rpool
NAME   PROPERTY  VALUE   SOURCE
rpool  bootfs    -       default
# zpool set bootfs=rpool/ROOT/s10s_u8wos_08a rpool
cannot set property for 'rpool': property
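In case it helps later readers: this message is ZFS's way of saying at least one device under the pool looks EFI-labeled to it (typical when a pool is created on a whole disk rather than a slice), even when the disk you inspected appears fine. A cautious way to double-check every vdev before digging further:

zpool status rpool      # list every underlying device
format -e               # select each disk and use "verify" to see whether the label is SMI (VTOC) or EFI
# an EFI-labeled vdev would have to be relabeled SMI (destructive) before bootfs can be set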
2009 Jan 13
6
mirror rpool
Hi
Host: VirtualBox 2.1.0 (WinXP SP3)
Guest: OSol 5.11snv_101b
IDE Primary Master: 10 GB, rpool
IDE Primary Slave: 10 GB, empty
format output:
AVAILABLE DISK SELECTIONS:
  0. c3d0 <DEFAULT cyl 1302 alt 2 hd 255 sec 63>
     /pci0,0/pci-ide@1,1/ide@0/cmdk@0,0
  1. c3d1 <drive unknown>
     /pci0,0/pci-ide@1,1/ide@0/cmdk@1,0
# ls
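For completeness, the usual recipe for this exact layout: give c3d1 the same fdisk/SMI layout as c3d0, attach the slice, let the resilver finish, then put GRUB on the new disk. A sketch assuming root lives on slice 0 of c3d0:

# on x86 the blank disk first needs a Solaris fdisk partition and a label (format -> fdisk -> label)
prtvtoc /dev/rdsk/c3d0s2 | fmthard -s - /dev/rdsk/c3d1s2    # copy the slice table to the new disk
zpool attach rpool c3d0s0 c3d1s0
zpool status rpool                                          # wait for the resilver to complete
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c3d1s0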
2010 Jul 02
14
NexentaStor 3.0.3 vs OpenSolaris - Patches more up to date?
I see in NexentaStor's announcement of Community Edition 3.0.3 they mention some backported patches in this release. Aside from their management features / UI what is the core OS difference if we move to Nexenta from OpenSolaris b134? These DeDup bugs are my main frustration - if a staff member does a rm * in a directory with dedup you can take down the whole storage server - all with
2011 Jul 22
4
add device to mirror rpool in sol11exp
In my new oracle server, sol11exp, it''s using multipath device names... Presently I have two disks attached: (I removed the other 10 disks for now, because these device names are so confusing. This way I can focus on *just* the OS disks.) 0. c0t5000C5003424396Bd0 <SEAGATE-ST32000SSSUN2.0-0514 cyl 3260 alt 2 hd 255 sec 252> /scsi_vhci/disk at g5000c5003424396b