similar to: ?: SMI vs. EFI label and a disk's write cache

Displaying 20 results from an estimated 5000 matches similar to: "?: SMI vs. EFI label and a disk's write cache"

2011 Dec 15
31
Can I create a mirror for a root rpool?
On Solaris 10, if I install using a ZFS root on only one drive, is there a way to add another drive as a mirror later? Sorry if this was discussed already; I searched the archives and couldn't find the answer. Thank you.
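A minimal sketch of the usual procedure, assuming the existing root disk is c1t0d0 and the new disk is c1t1d0 (hypothetical names), both SMI-labeled with an s0 slice spanning the disk:

  # zpool attach -f rpool c1t0d0s0 c1t1d0s0
  # installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0   # x86; use installboot on SPARC

Wait for zpool status rpool to show the resilver complete before relying on the mirror.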
2010 Nov 05
3
ZFS vs mpxio vs cfgadm in Solaris.
Folks, I'm trying to figure out whether we should give ZFS / mpxio a shot on one of our research servers, or simply skip it (as we have previously). In Nov 2009 Cindy responded to a thread concerning ZFS device issues, cfgadm, and mpxio: http://mail.opensolaris.org/pipermail/zfs-discuss/2009-November/033496.html I've got an x2270 with the Sun EZ-SAS HBA and external SATA
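For reference, MPxIO is toggled with stmsboot(1M); a quick sketch (whether a given controller is supported depends on the HBA driver):

  # stmsboot -e    # enable multipathing; prompts for a reboot
  # stmsboot -L    # after reboot, list non-STMS to STMS device name mappings

ZFS itself is indifferent to the renaming; zpool import finds the pool under its new multipath device names.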
2008 Jul 17
2
zfs sparc boot "Bad magic number in disk label"
Hello, I recently installed SunOS 5.11 snv_91 onto an Ultra 60 UPA/PCI with OpenBoot 3.31 and two 300GB SCSI disks. The root file system is UFS on c0t0d0s0. Following the steps in the ZFS Admin guide I have attempted to convert root to ZFS utilizing c0t1d0s0. However, upon "init 6" I am always presented with: Bad magic number in disk label can't open disk label package My Steps: 1)
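On SPARC the ZFS boot block must be installed on the new root slice by hand; a sketch, assuming the ZFS root sits on c0t1d0s0 under an SMI label:

  # installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t1d0s0

"Bad magic number in disk label" from OpenBoot usually means boot-device still points at a slice without a valid label or boot block, so it is also worth checking which disk OBP is actually booting.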
2010 Jul 12
3
Need ZFS master!
Hello all. I am new... very new to OpenSolaris and I am having an issue and have no idea what is going wrong. So I have 5 drives in my machine, all 500 GB. I installed OpenSolaris on the first drive and rebooted. Now what I want to do is add a second drive so they are mirrored. How does one do this?! I am getting nowhere and need some help. -- This message posted from opensolaris.org
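A sketch of the common recipe for a second, identical disk (hypothetical names: c7t0d0 for the root disk, c7t1d0 for the new one): copy the SMI label, attach the slice, then install the boot loader:

  # prtvtoc /dev/rdsk/c7t0d0s2 | fmthard -s - /dev/rdsk/c7t1d0s2
  # zpool attach -f rpool c7t0d0s0 c7t1d0s0
  # installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c7t1d0s0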
2010 May 28
21
expand zfs for OpenSolaris running inside vm
hello, all I have constrained disk space (only 8 GB) while running the OS inside a VM. Now I want to add more. It is easy to enlarge the disk on the VM side, but how can I grow the filesystem in the OS? I cannot use autoexpand because it isn't implemented in my system: $ uname -a SunOS sopen 5.11 snv_111b i86pc i386 i86pc If it were build 171 it would be great, right? Doing the following: o added a new virtual HDD (it becomes
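On a build without autoexpand, one workaround is to mirror onto a larger virtual disk and drop the small one; a sketch with hypothetical device names:

  # zpool attach rpool c7d0s0 c7d1s0        # c7d1 is the new, larger vdisk, SMI-labeled
  # zpool status rpool                      # wait for the resilver to finish
  # installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c7d1s0
  # zpool detach rpool c7d0s0

On these older builds the pool typically picks up the extra capacity once the smaller half is detached, or after an export/import or reboot.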
2009 Feb 03
1
Cannot Mirror RPOOL, Can't Label Disk to SMI
Dear ZFS experts, I have 2 SATA 500 GB hard drives on my dual-core PC. I installed OpenSolaris 2008.11 using a Live CD I got from Sun Tech Days in Singapore. Now, using all the guidelines I got here on the Indiana discussion list, I can't attach my second drive to rpool to make them a mirror. Initially I was playing around with a similar configuration in VirtualBox, and it did not succeed. Finally
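A sketch of forcing an SMI label onto a disk that came up EFI-labeled (hypothetical device c8t1d0):

  # format -e c8t1d0
  format> label
  [0] SMI Label
  [1] EFI Label
  Specify Label type[1]: 0
  format> quit

Afterwards create an s0 slice spanning the disk in format's partition menu and attach that slice, not the whole disk, so ZFS keeps the SMI label.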
2010 Feb 18
2
Killing an EFI label
Since this seems to be a ubiquitous problem for people running ZFS, even though it's really a general Solaris admin issue, I'm guessing the expertise is actually here, so I'm asking here. I found lots of online pages telling how to do it. None of them were correct or complete. I think. I seem to have accomplished it in a somewhat hackish fashion, possibly not
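The clean route is the format -e relabel sketched above; the hackish route people resort to is zeroing both copies of the label, since EFI/GPT keeps a backup at the end of the disk. A hedged sketch (hypothetical device, and the seek value must be computed from the disk's actual sector count):

  # dd if=/dev/zero of=/dev/rdsk/c8t1d0p0 bs=512 count=34                  # primary GPT at the front
  # dd if=/dev/zero of=/dev/rdsk/c8t1d0p0 bs=512 seek=<last-33> count=33   # backup GPT at the end

format -e plus relabel is safer whenever it works.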
2010 Mar 02
11
Expand zpool capacity
Hello, experts. I've got a problem. I'm trying to expand my main zpool (rpool), but don't know how to do that (I'm a 100% newbie in the non-Windows world). I run OpenSolaris under VMware on Windows. I had a pretty small virtual HDD: only 12 GB. Yesterday I decided to expand my virtual drive to 20 GB. (After several tries to upgrade the OS to the newest dev releases and
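One hedged sketch of the in-place approach, assuming rpool lives on slice 0 of an SMI-labeled c8t0d0: after growing the virtual disk, use format's partition menu to extend s0 over the new cylinders (keeping the same starting cylinder), relabel, and reboot; builds that support it can then claim the space with:

  # zpool online -e rpool c8t0d0s0

Getting the starting cylinder wrong destroys the pool, so the attach-a-larger-disk/detach route sketched earlier on this page is the safer path.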
2010 Mar 05
17
why L2ARC device is used to store files ?
Greetings, all. I have created a pool that consists of a hard disk and an SSD as a cache:

  zpool create hdd c11t0d0p3
  zpool add hdd cache c8t0d0p0    # cache device

I ran an OLTP benchmark to emulate a DBMS. Once I ran the benchmark, the pool started creating the database files on the SSD cache device. Can anyone explain why this is happening? Isn't the L2ARC used to absorb the evicted data
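To see where writes are actually landing, per-vdev statistics are the first check; a sketch:

  # zpool iostat -v hdd 5

Note also that on x86 the p0 device denotes the whole disk and overlaps every other partition on it, so a cache on c8t0d0p0 will collide with anything else using that SSD.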
2011 Jan 28
2
ZFS root clone problem
(for some reason I cannot find my original thread, so I'm reposting it) I am trying to move my data off of a 40 GB 3.5" drive to a 40 GB 2.5" drive. This is in a Netra running Solaris 10. Originally what I did was: zpool attach -f rpool c0t0d0 c0t2d0. Then I did an installboot on c0t2d0s0. Didn't work. I was not able to boot from my second drive (c0t2d0). I cannot remember
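A hedged checklist for this case: the attach above used the whole disk (c0t2d0), which on many builds gives the new half an EFI label that SPARC OpenBoot cannot boot from. The usual sequence attaches the slice instead:

  # prtvtoc /dev/rdsk/c0t2d0s2                 # confirm an SMI label first
  # zpool attach -f rpool c0t0d0s0 c0t2d0s0
  # installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t2d0s0

then boot the second disk from OBP (e.g. boot disk2, if that alias exists on the Netra).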
2009 Oct 22
1
raidz "ZFS Best Practices" wiki inconsistency
<http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide#RAID-Z_Configuration_Requirements_and_Recommendations> says that the number of disks in a RAIDZ should be (N+P) with N = {2,4,8} and P = {1,2}. But if you go down the page just a little further to the thumper configuration examples, none of the 3 examples follow this recommendation! I will have 10 disks to put into a
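A worked example under that recommendation for 10 disks: two raidz1 vdevs of 4+1 (N=4, P=1), sketched with hypothetical device names:

  # zpool create tank \
      raidz1 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 \
      raidz1 c1t5d0 c1t6d0 c1t7d0 c1t8d0 c1t9d0

An alternative such as a single 8+2 raidz2 also fits the N = {2,4,8}, P = {1,2} rule.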
2007 Mar 30
0
On disk SMI & EFI label documentation
I'm not sure if this alias is only to discuss the Solaris ZFS implementation or others. I'm writing my own ZFS code from scratch in Java. I'll skip the reasons why I'm doing this in Java; let's just assume I have some. Is there any good documentation on the disk label structures? Right now my code is just reading the ZFS labels and the nvlist data but when
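For cross-checking a from-scratch reader, zdb can dump what ZFS itself sees: each vdev carries four 256 KiB labels (two at the front of the device, two at the end), each holding the nvlist config. A sketch:

  # zdb -l /dev/rdsk/c0t0d0s0                                    # prints labels 0-3 with their nvlists
  # dd if=/dev/rdsk/c0t0d0s0 bs=1024 count=256 of=/tmp/label0    # raw copy of label 0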
2009 Aug 04
7
Sol10u7: can't "zpool remove" missing hot spare
I'm using Solaris 10u6 updated to u7 via patches, and I have a pool with a mirrored pair and a (shared) hot spare. We reconfigured disks a while ago and now the controller is c4 instead of c2. The hot spare was originally on c2, and apparently on rebooting it didn't get found. So, I looked up what the new name for the hot spare was, then added it to the pool with "zpool
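When the old spare's device name no longer resolves, the usual trick is to remove it by the numeric GUID that zpool status prints in place of the missing device; a sketch:

  # zpool status tank                         # the UNAVAIL spare shows up as a bare GUID
  # zpool remove tank 1234567890123456789     # hypothetical GUID copied from the status output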
2011 Jul 22
4
add device to mirror rpool in sol11exp
In my new Oracle server, sol11exp, it's using multipath device names... Presently I have two disks attached: (I removed the other 10 disks for now, because these device names are so confusing. This way I can focus on *just* the OS disks.) 0. c0t5000C5003424396Bd0 <SEAGATE-ST32000SSSUN2.0-0514 cyl 3260 alt 2 hd 255 sec 252> /scsi_vhci/disk@g5000c5003424396b
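The WWN-style names attach the same way as the old cNtNdN ones; a sketch, with the second disk's WWN entirely hypothetical:

  # zpool attach rpool c0t5000C5003424396Bd0s0 c0t5000C500342443ABd0s0

followed by installboot (SPARC) or installgrub (x86) on the new half. The s0 suffix assumes SMI labels; format shows which slices exist on each multipath device.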
2006 Oct 09
1
Question regarding ZFS
Hi gurus, I was playing with ZFS on a V890 before it was installed for production. We reinstalled it for production, but the 4 disks we used to play with ZFS have a non-standard format (slices run from s0 to s8, with no s2 backup slice, and s7 does not exist). We need to recover those 4 disks to be used by Solaris Volume Manager. I know this would be an easy question, but I was trying to fix it
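A sketch of the usual recovery: relabel each disk with format -e (choosing an SMI label and the default partition table), which restores the s2 backup slice SVM expects; then, if you want the exact layout of a known-good disk, copy its VTOC across (hypothetical device names):

  # prtvtoc /dev/rdsk/c2t0d0s2 | fmthard -s - /dev/rdsk/c2t1d0s2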
2011 Oct 12
33
weird bug with Seagate 3TB USB3 drive
Banging my head against a Seagate 3TB USB3 drive. Its marketing name is: Seagate Expansion 3 TB USB 3.0 Desktop External Hard Drive STAY3000102 format(1M) shows it identifying itself as: Seagate-External-SG11-2.73TB Under both Solaris 10 and Solaris 11x, I receive the evil message: | I/O request is not aligned with 4096 disk sector size. | It is handled through Read Modify Write but the performance
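The warning means the pool's I/O offsets are not aligned to the drive's 4 KiB sectors. A hedged check on whatever partition holds the pool (hypothetical device): the starting sector of each slice should be divisible by 8:

  # prtvtoc /dev/rdsk/c9t0d0s2     # inspect the 'First Sector' column
  # zdb -C tank | grep ashift      # ashift=12 means 4 KiB-aligned blocks

Giving ZFS the whole disk usually yields an EFI label whose data partition starts at sector 256, which is 4 KiB-aligned.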
2011 Apr 01
15
Zpool resize
Hi, a LUN is connected to Solaris 10u9 from a NetApp FAS2020a via iSCSI. I changed the LUN size on the NetApp and Solaris format sees the new value, but zpool still shows the old value. I tried zpool export and zpool import but it didn't resolve my problem. bash-3.00# format Searching for disks...done AVAILABLE DISK SELECTIONS: 0. c0d1 <DEFAULT cyl 6523 alt 2 hd 255 sec 63>
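Solaris 10u9 does have autoexpand, and there is also an explicit per-device expansion; a sketch for a pool named tank on c0d1:

  # zpool set autoexpand=on tank
  # zpool online -e tank c0d1      # force a re-read of the device size

If the LUN carries an SMI label, the slice itself must first be grown in format, since ZFS can only expand into space the label exposes.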
2009 Oct 01
4
RAIDZ v. RAIDZ1
So, I took four 1.5TB drives and made RAIDZ, RAIDZ1 and RAIDZ2 pools. The sizes for the pools were 5.3TB, 4.0TB, and 2.67TB respectively. The man page for RAIDZ states that "The raidz vdev type is an alias for raidz1." So why was there a difference between the sizes for RAIDZ and RAIDZ1? Shouldn't the size be the same for "zpool create raidz ..." and "zpool
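Assuming the sizes came from zfs list (usable space), the arithmetic suggests what happened: four 1.5 TB drives are about 5.45 TiB raw, which matches the 5.3 TB pool, so the first pool was almost certainly a plain stripe (no raidz at all, perhaps a mistyped command). A genuine raidz1 keeps 3 of 4 disks for data, about 4.1 TiB, matching 4.0 TB, and raidz2 keeps 2 of 4, about 2.7 TiB, matching 2.67 TB.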
2010 Apr 16
1
cannot set property for 'rpool': property 'bootfs' not supported on EFI labeled devices
I am getting the following error; however, as you can see below, this is an SMI label...

  cannot set property for 'rpool': property 'bootfs' not supported on EFI labeled devices
  # zpool get bootfs rpool
  NAME   PROPERTY  VALUE   SOURCE
  rpool  bootfs    -       default
  # zpool set bootfs=rpool/ROOT/s10s_u8wos_08a rpool
  cannot set property for 'rpool': property
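A hedged way to double-check what label each vdev actually carries (hypothetical device c1t0d0): an SMI label reports a cylinder-based geometry, while EFI reports a plain sector count:

  # prtvtoc /dev/rdsk/c1t0d0s2     # 'cylinders' lines indicate SMI; 'accessible sectors' indicates EFI

If any device in rpool, including a freshly attached mirror half, carries an EFI label, the bootfs property is refused for the whole pool.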
2010 Mar 19
3
zpool I/O error
Hi all, I'm trying to delete a zpool and when I do, I get this error:

  # zpool destroy oradata_fs1
  cannot open 'oradata_fs1': I/O error
  #

The pools I have on this box look like this:

  # zpool list
  NAME          SIZE   USED    AVAIL   CAP   HEALTH     ALTROOT
  oradata_fs1   532G   119K    532G    0%    DEGRADED   -
  rpool         136G   28.6G   107G    21%   ONLINE     -
  #

Why
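A hedged first-aid sequence for a pool that refuses to be destroyed (the -f flag is part of zpool destroy, though it cannot always get past missing devices):

  # zpool status -v oradata_fs1    # see which vdev is dragging the pool into DEGRADED
  # zpool destroy -f oradata_fs1

If that still fails, clearing the member disks' labels (zpool labelclear on newer releases, or relabeling in format) removes the pool from view.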