Displaying 20 results from an estimated 2000 matches similar to: "Cannot Mirror RPOOL, Can't Label Disk to SMI"
2010 Apr 16
1
cannot set property for 'rpool': property 'bootfs' not supported on EFI labeled devices
I am getting the following error; however, as you can see below, this is an SMI
label...
cannot set property for 'rpool': property 'bootfs' not supported on EFI
labeled devices
# zpool get bootfs rpool
NAME   PROPERTY  VALUE  SOURCE
rpool  bootfs    -      default
# zpool set bootfs=rpool/ROOT/s10s_u8wos_08a rpool
cannot set property for 'rpool': property
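A quick way to confirm which label the disk really carries (device name is a placeholder) is to look at the VTOC: an SMI label reports cylinder-based geometry, while an EFI label reports only sector counts plus a small reserved slice. Roughly:
# prtvtoc /dev/rdsk/c0t0d0s0
# format c0t0d0
format> verify
If the disk turns out to be EFI-labeled after all, it has to be relabeled SMI before bootfs can be set (see the "Killing an EFI label" thread below).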
2009 Jul 25
1
OpenSolaris 2009.06 - ZFS Install Issue
I've installed OpenSolaris 2009.06 on a machine with 5 identical 1TB WD Green drives to create a ZFS NAS. The intended install is one drive dedicated to the OS and the remaining 4 drives in a raidz1 configuration. The install works fine, but creating the raidz1 pool and rebooting causes the machine to report "Cannot find active partition" upon reboot. Below is the command
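For reference, creating the raidz1 pool over the four non-OS drives would look roughly like this (device names hypothetical):
# zpool create tank raidz1 c8t1d0 c8t2d0 c8t3d0 c8t4d0
# zpool status tank
The "Cannot find active partition" message is printed by the BIOS/MBR stage, which suggests the machine is trying to boot from a disk other than the OS disk; checking the BIOS boot order is a hedged first guess, not a confirmed diagnosis.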
2010 Feb 18
2
Killing an EFI label
Since this seems to be a ubiquitous problem for people running ZFS, even
though it's really a general Solaris admin issue, I'm guessing the
expertise is actually here, so I'm asking here.
I found lots of online pages explaining how to do it.
None of them were correct or complete, I think. I seem to have
accomplished it in a somewhat hackish fashion, possibly not
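One approach that is often suggested, sketched here with a placeholder device name, is to relabel the disk in format's expert mode; note that writing a new label discards the existing partitioning:
# format -e c1t0d0
format> label
[0] SMI Label
[1] EFI Label
Specify Label type[1]: 0
format> quit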
2010 May 05
3
[indiana-discuss] image-update doesn't work anymore (bootfs not supported on EFI)
On 5/5/10 1:44 AM, Christian Thalinger wrote:
> On Tue, 2010-05-04 at 16:19 -0600, Evan Layton wrote:
>> Can you try the following and see if it really thinks it's an EFI label?
>> # dd if=/dev/dsk/c12t0d0s2 of=x skip=512 bs=1 count=10
>> # cat x
>>
>> This may help us determine if this is another instance of bug 6860320
>
> # dd
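For context: a GPT (EFI) label stores the ASCII signature "EFI PART" in the first 8 bytes of the header at LBA 1, i.e. at byte offset 512 on a 512-byte-sector disk, which is what the dd above reads. A slightly tidied version of the same check (device name taken from the quoted message):
# dd if=/dev/dsk/c12t0d0s2 of=/tmp/lbl skip=512 bs=1 count=8 2>/dev/null
# cat /tmp/lbl        # prints "EFI PART" if a GPT header is present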
2009 Dec 21
0
Mirror config and installgrub errors
I've just bought a second drive for my home PC and decided to set up a mirror. I ran
pfexec zpool attach rpool c9d0s0 c13d0s0
waited for the scrub to finish and tried to install GRUB on the second disk:
$ pfexec installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c13d0s0
cannot open/stat device /dev/rdsk/c13d0s2
$ pfexec installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c13d0
raw
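The stat error suggests installgrub could not find the slice device it expected on the new disk. A hedged sketch of the sequence usually recommended when preparing a second x86 boot disk from scratch (it may not address this particular error):
# fdisk -B /dev/rdsk/c13d0p0                                  # one Solaris2 partition spanning the disk
# prtvtoc /dev/rdsk/c9d0s2 | fmthard -s - /dev/rdsk/c13d0s2   # copy the slice table from the first disk
# zpool attach rpool c9d0s0 c13d0s0
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c13d0s0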
2012 Nov 11
0
Expanding a ZFS pool disk in Solaris 10 on VMWare (or other expandable storage technology)
Hello all,
This is not so much a question but rather a "how-to" for posterity.
Comments and possible fixes are welcome, though.
I'm toying (for work) with a Solaris 10 VM, and it has a dedicated
virtual HDD for data and zones. The template VM had a 20 GB disk,
but a particular application needs more. I hoped ZFS autoexpand
would do the trick transparently, but it turned out
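The two pieces that usually matter here, sketched with a hypothetical pool and device name, are the autoexpand pool property (available in later Solaris 10 updates) and an explicit expansion of the device once the virtual disk has been grown:
# zpool set autoexpand=on datapool
# zpool online -e datapool c1t1d0
# zpool list datapool          # SIZE should now reflect the larger virtual disk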
2009 Jan 21
8
CIFS performance
Hello!
I've set up a ZFS/CIFS home storage server, and now I'm getting poor performance when playing movies stored on this ZFS pool from a Windows client. The server hardware is not new, but under Windows its performance was normal.
The CPU is an AMD Athlon Burton Thunderbird 2500, running at 1.7 GHz, with 1024 MB RAM, and the storage is:
usb c4t0d0 ST332062-0A-3.AA-298.09GB /pci at 0,0/pci1458,5004 at 2,2/cdrom at 1/disk at
2016 Jul 15
3
[PATCH 1/4] Create a simple project to create version.h to run before any other
Avoids trying to create and replace version.h more than once, which
led to file-locking errors with multicore builds.
---
Makefile.am | 1 +
win32/VS2015/celt.vcxproj | 48 +++++++++++++++++---------
win32/VS2015/generate_version.vcxproj | 65 +++++++++++++++++++++++++++++++++++
win32/VS2015/opus.sln | 32 ++++++++++++++++-
2009 Aug 05
2
?: SMI vs. EFI label and a disk's write cache
For Solaris 10 5/09...
There are supposed to be performance improvements if you create a zpool
on a full disk, such as one with an EFI label. Does the same apply if
the full disk is used with an SMI label, which is required to boot?
I am trying to determine the trade-off, if any, of having a single rpool
on cXtYd0s2 (if I can even do that) with improved performance, compared to
having two
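Background for the question: when ZFS is given a whole disk it enables the disk's write cache itself, which is where much of the whole-disk advantage comes from. On an SMI-labeled boot disk the cache state can be inspected, and cautiously changed, in format's expert mode (placeholder device; SCSI/SAS disks):
# format -e c0t0d0
format> cache
cache> write_cache
write_cache> display
write_cache> enable        # only if nothing other than ZFS uses the disk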
2011 Jul 22
4
add device to mirror rpool in sol11exp
In my new Oracle server, running sol11exp, it's using multipath device names...
Presently I have two disks attached (I removed the other 10 disks for now,
because these device names are so confusing; this way I can focus on *just*
the OS disks):
0. c0t5000C5003424396Bd0 <SEAGATE-ST32000SSSUN2.0-0514 cyl 3260 alt 2
hd 255 sec 252>
/scsi_vhci/disk at g5000c5003424396b
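For the attach itself, a hedged sketch using the device quoted above and a made-up WWN for the second disk; the boot blocks still have to be put on the new half of the mirror afterwards:
# zpool attach rpool c0t5000C5003424396Bd0s0 c0t5000C500342439XXd0s0    # second WWN is hypothetical
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t5000C500342439XXd0s0    # x86
# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t5000C500342439XXd0s0    # SPARC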
2007 Jun 13
5
drive displayed multiple times
So I just imported an old zpool onto this new system. The problem is that one drive (c4d0) is showing up twice. First it's displayed as ONLINE, then it's displayed as "UNAVAIL". This is obviously causing a problem, as the zpool now thinks it's in a degraded state, even though all drives are there and all are online.
This pool should have 7 drives total,
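One hedged suggestion for stale or duplicated device paths after moving a pool between systems is to export and re-import it so the paths are re-resolved from /dev/dsk (pool name hypothetical):
# zpool export tank
# zpool import -d /dev/dsk tank
# zpool status tank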
2007 May 24
1
how do I revert back from ZFS partitioned disk to original partitions
I accidentally created a zpool on a boot disk; it panicked the system,
and now I can jumpstart and install the OS on it.
This is what it looks like.
partition> p
Current partition table (original):
Total disk sectors available: 17786879 + 16384 (reserved sectors)
Part  Tag  Flag  First Sector  Size    Last Sector
0     usr  wm    34            8.48GB
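Since creating a zpool on the whole disk will have written an EFI label over the original one, getting back to the old layout generally means writing an SMI label again and rebuilding the slices by hand; a hedged sketch with a placeholder device (this is destructive):
# format -e c0t0d0              # label -> choose SMI, then use the partition menu to recreate the slices
# prtvtoc /dev/rdsk/c0t0d0s2    # verify the new VTOC before jumpstarting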
2011 Aug 08
2
rpool recover not using zfs send/receive
Is it possible to recover the rpool with only a tar/star archive of the root filesystem? I have used the zfs send/receive methods and they work without a problem.
What I am trying to do is recreate the rpool and the underlying ZFS filesystems (rpool/ROOT, rpool/s10_uXXXXXX, rpool/dump, rpool/swap, rpool/export, and rpool/export/home). I then mount the pool at an alternate root and restore the tar
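A hedged outline of the kind of sequence this implies (x86 shown, device name hypothetical, dataset names taken from the post); the extra steps compared with a plain file restore are setting bootfs and reinstalling the boot blocks:
# zpool create -f -R /a rpool c0t0d0s0
# zfs create rpool/ROOT
# zfs create -o mountpoint=/ rpool/ROOT/s10_uXXXXXX     # plus dump, swap, export and export/home as before
# zpool set bootfs=rpool/ROOT/s10_uXXXXXX rpool
  (extract the tar/star archive into the root filesystem mounted under /a)
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t0d0s0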
2010 Jul 28
4
zfs allow does not work for rpool
I am trying to give a general user permission to create ZFS filesystems in the rpool.
zpool set delegation=on rpool
zfs allow <user> create rpool
Both run without any issues.
zfs allow rpool reports that the user does have the create permission.
zfs create rpool/test
cannot create rpool/test : permission denied.
Can you not delegate permissions on the rpool?
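For comparison, the delegation commands as normally written (username hypothetical). Creating a filesystem also implies mounting it, so the mount permission is usually delegated together with create, and even then some releases do not let ordinary users mount at all:
# zpool set delegation=on rpool
# zfs allow someuser create,mount rpool
# zfs allow rpool          # verify what has been delegated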
2011 May 17
3
Reboots when importing old rpool
I have a fresh install of Solaris 11 Express on a new SSD. I have inserted the old hard disk, and tried to import it, with:
# zpool import -f <long id number> Old_rpool
but the computer reboots. Why is that? On my old hard disk, I have 10-20 BEs, starting with OpenSolaris 2009.06 and upgraded through b134 up to snv_151a. I also have a WinXP entry in GRUB.
This hard disk is partitioned, with a
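One hedged way to make the import less likely to take the system down is to bring the pool in under an alternate root and without mounting its datasets, so none of the old BEs' mountpoints or boot settings are touched:
# zpool import -f -R /mnt/old -N <long id number> Old_rpool     # -N (skip mounting) where the release supports it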
2010 Jun 30
1
zfs rpool corrupt?????
Hello,
Has anyone encountered the following error message, running Solaris 10 u8 in
an LDom?
bash-3.00# devfsadm
devfsadm: write failed for /dev/.devfsadm_dev.lock: Bad exchange descriptor
bash-3.00# zpool status -v rpool
pool: rpool
state: DEGRADED
status: One or more devices has experienced an error resulting in data
corruption. Applications may be affected.
action: Restore the file in
2011 Mar 22
0
rpool to use emcpower device
I decided to post this question to the mailing list because it needs ZFS
knowledge to be solved.
The situation is like this:
I have a blade server that boots from a LUN; it has no additional storage
or internal disk, just that one LUN to boot from.
MPxIO works perfectly, but management wants to use EMC PowerPath,
because the company already has an investment in licensing.
After disabling
2010 Nov 15
0
SCSI timeouts with rpool on USB
I'm currently having a few problems with my storage server. Server specs are:
OpenSolaris snv_134
Supermicro X8DTi motherboard
Intel Xeon 5520
6x 4GB DDR3
LSI RAID card - running 24x 1.5TB SATA drives
Adaptec 2405 - running 4x Intel X25-E SSDs
Boots from an 8GB USB flash drive
The initial problem started after a while, when the console showed SCSI timeouts and the whole
2010 Nov 15
1
Moving rpool disks
We need to move the disks comprising our mirrored rpool on a Solaris 10
U9 x86_64 (not SPARC) system.
We'll be relocating both drives to a different controller in the same
system (they should go from c1* to c0*).
We're curious as to what the best way is to go about this. We'd love
to be able to just relocate the disks and update the system BIOS to
boot off the drives in
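ZFS normally copes with a controller change on its own, since pool members are tracked by device ID as well as by path, so the hedged expectation is that only the BIOS boot order needs attention; afterwards something like this is enough to confirm:
# zpool status rpool        # the mirror should show the new c0* device names once re-resolved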
2010 Sep 29
2
rpool spare
Using ZFS v22, is it possible to add a hot spare to rpool?
Thanks
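The command itself is the ordinary zpool add with a spare vdev (device name hypothetical); whether a given release actually allows spares on the root pool is a separate question, so treat this purely as the shape of the syntax:
# zpool add rpool spare c0t2d0s0
# zpool status rpool        # the spare should be listed under a "spares" section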