similar to: ZFSboot : Initial disk layout

Displaying 20 results from an estimated 30000 matches similar to: "ZFSboot : Initial disk layout"

2008 Apr 08
6
lucreate error: Cannot determine the physical boot device ...
# lucreate -n B85 Analyzing system configuration. Hi, after typing # lucreate -n B85 I get the following error: No name for current boot environment. INFORMATION: The current boot environment is not named - assigning name <BE1>. Current boot environment is named <BE1>. Creating initial configuration for primary boot environment <BE1>. ERROR: Unable to determine major and
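For reference, lucreate can name the current boot environment explicitly with -c, which avoids the auto-assigned <BE1>; a minimal sketch, with illustrative BE names that are not from the thread:
  # lucreate -c BE_current -n B85     # -c names the existing BE, -n names the new one
  # lustatus                          # list boot environments and their state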
2007 Apr 12
9
status of zfs boot netinstall kit
I wanted to send out status on the effort to make a version of Solaris install available that supports zfs as a root file system. I've got a version of it ready for distribution, but I'd like to test it on the Build 62 community release before I make it available. Without the build 62 community release, I have to test it on a build 61 image, updated with some build 62 packages.
2010 Jul 01
3
zpool on raw disk. Do I need to format?
Folks, I am learning more about zfs storage. It appears a zfs pool can be created on a raw disk; there is no need to create any partitions, etc. on the disk. Does this mean there is no need to run "format" on a raw disk? I have added a new disk to my system. It shows up as /dev/rdsk/c8t1d0s0. Do I need to format it before I convert it to zfs storage? Or, can I simply use it as: # zfs
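For the whole-disk case, no prior partitioning is needed: handing zpool the bare device name (no slice suffix) lets ZFS label the disk itself. A hedged sketch; the pool name is assumed, not from the post:
  # zpool create tank c8t1d0     # whole disk: ZFS writes an EFI label and enables the disk write cache
  # zpool status tank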
2007 Sep 28
5
ZFS Boot Won't work with a straight or mirror zfsroot
Using build 70, I followed the zfsboot instructions at http://www.opensolaris.org/os/community/zfs/boot/zfsboot-manual/ to the letter. I tried first with a mirror zfsroot; when I try to boot to zfsboot the screen is flooded with "init(1M) exited on fatal signal 9". Then I tried with a simple zfs pool (not mirrored) and it just reboots right away. If I try to setup grub
2010 May 01
5
Single-disk pool corrupted after controller failure
I had a single spare 500GB HDD and I decided to install a FreeBSD file server on it for learning purposes, and I moved almost all of my data to it. Yesterday, and naturally after no longer having backups of the data in the server, I had a controller failure (SiS 180 (oh, the quality)) and the HDD was considered unplugged. When I noticed a few checksum failures on `zpool status` (including two on
2009 Jul 13
2
questions regarding RFE 6334757 and CR 6322205 disk write cache. thanks (case 11356581)
Hello experts, I would like to consult you on some questions regarding RFE 6334757 and CR 6322205 (disk write cache). ========================================== RFE 6334757 disk write cache should be enabled and should have a tool to switch it on and off CR 6322205 Enable disk write cache if ZFS owns the disk ========================================== The customer found on SPARC Enterprise T5140,
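On Solaris the per-disk write cache can be inspected and toggled from format's expert mode; a hedged sketch (the disk still has to be selected first, and the menus apply to SCSI/SAS disks):
  # format -e
  format> cache
  cache> write_cache
  write_cache> display     # show current state
  write_cache> enable      # or disable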
2010 Mar 29
19
sharing a ssd between rpool and l2arc
Hi, as Richard Elling wrote earlier: "For more background, low-cost SSDs intended for the boot market are perfect candidates. Take a X-25V @ 40GB and use 15-20 GB for root and the rest for an L2ARC. For small form factor machines or machines with max capacity of 8GB of RAM (a typical home system) this can make a pleasant improvement over a HDD-only implementation." For the upcoming
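One way to carve up a single SSD along those lines, as a hedged sketch with assumed device and pool names: slice the SSD, put the root pool on one slice (normally done by the installer) and hand the remainder to a data pool as L2ARC:
  # zpool create rpool c5t0d0s0        # root pool on the first slice (installer does this in practice)
  # zpool add tank cache c5t0d0s1      # remaining slice becomes L2ARC for the data pool "tank"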
2009 Oct 08
2
convert raidz from osx
I am converting a 4 disk raidz from osx to opensolaris, and I want to keep the data intact. I want zfs to get access to the full disk instead of a slice, i.e. c8d0 instead of c8d0s1. I wanted to do this one disk at a time and let it resilver. What is the proper way to do this? I tried, I believe from memory: zpool replace -f rpool c8d1s1 c8d1 but it didn't let me do that. Then I
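For reference, the generic replace-and-resilver pattern needs a target device that does not overlap the one being replaced, which is why replacing a slice with the whole disk it lives on is refused. A sketch with illustrative device names, not the thread's answer:
  # zpool replace tank c8d1s1 c9d0     # replace the slice with a different, non-overlapping whole disk
  # zpool status tank                  # wait for the resilver to finish before the next replace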
2006 Aug 21
12
SCSI synchronize cache cmd
Hi, I work on a support team for the Sun StorEdge 6920 and have a question about the use of the SCSI sync cache command in Solaris and ZFS. We have a bug in our 6920 software that exposes us to a memory leak when we receive the SCSI sync cache command: 6456312 - SCSI Synchronize Cache Command is flawed. It will take some time for this bug fix to roll out to the field, so we need to understand
2008 Jan 08
2
AMD-V extension disabled by BIOS
Hi, I am encountering a sad problem. HP disabled the AMD-V extension in the BIOS for the nx6325 model. It has an AMD Turion 64bit TL-60. I read that xen 3.1 would recognize if this flag is disabled. Would it be possible to enable this extension from xen as a consequence when it recognizes it? Or does someone from the community have an idea how to do that? Roman
2010 Aug 12
6
one ZIL SLOG per zpool?
I have three zpools on a server and want to add a mirrored pair of SSDs for the ZIL. Can the same pair of SSDs be used for the ZIL of all three zpools, or is it one ZIL SLOG device per zpool?
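Log devices are per-pool vdevs, so the same pair of whole SSDs cannot serve three pools at once; the usual workaround is to slice the SSDs and give each pool its own mirrored pair of slices. A hedged sketch with assumed pool and device names:
  # zpool add pool1 log mirror c5t0d0s0 c5t1d0s0
  # zpool add pool2 log mirror c5t0d0s1 c5t1d0s1
  # zpool add pool3 log mirror c5t0d0s2 c5t1d0s2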
2007 Feb 27
3
2-way mirror or RAIDZ?
I have a shiny new Ultra 40 running S10U3 with 2 x 250GB disks. I want to make best use of the available disk space and have some level of redundancy without impacting performance too much. What I am trying to figure out is: would it be better to have a simple mirror of an identical 200GB slice from each disk, or split each disk into 2 x 80GB slices plus one extra 80GB slice on one of the
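For the two-disk case, a plain mirror of equal slices is the simpler layout; a minimal sketch, assuming the data slice is s4 on each disk (names not from the post):
  # zpool create tank mirror c1t0d0s4 c1t1d0s4
  # zpool status tank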
2010 Mar 03
6
Question about multiple RAIDZ vdevs using slices on the same disk
Hi all :) I've been wanting to make the switch from XFS over RAID5 to ZFS/RAIDZ2 for some time now, ever since I read about ZFS the first time. Absolutely amazing beast! I've built my own little hobby server at home and have a boatload of disks in different sizes that I've been using together to build a RAID5 array on Linux using mdadm in two layers; first layer is
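A pool can hold several raidz vdevs built from slices, though putting two slices of the same physical disk into one vdev defeats the redundancy. A hedged sketch with made-up device names:
  # zpool create tank raidz2 c1t0d0s0 c1t1d0s0 c1t2d0s0 c1t3d0s0
  # zpool add    tank raidz2 c1t0d0s1 c1t1d0s1 c1t2d0s1 c1t3d0s1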
2008 Jan 24
7
Replacing Devices in a Storage Pool
Hi, Assume that you have a 2-way mirror of small drives that you want to replace with another 2-way mirror of larger drives. What is the best way to do this? If you use the zpool replace command, one at a time on each of the existing old drives, then you will end up wasting the additional space on the new drives. I have tried to access this space by creating a new pool and adding the unused
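The usual sequence is to replace each side of the mirror in turn and let it resilver; on releases that have the autoexpand pool property the extra capacity then becomes usable, otherwise an export/import of the pool picks it up. Device names below are illustrative, and autoexpand appeared in releases later than this post:
  # zpool replace tank c1t0d0 c2t0d0     # first old disk -> first new disk, wait for resilver
  # zpool replace tank c1t1d0 c2t1d0     # second old disk -> second new disk
  # zpool set autoexpand=on tank         # later releases; otherwise export and re-import the pool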
2009 Aug 05
2
?: SMI vs. EFI label and a disk's write cache
For Solaris 10 5/09... There are supposed to be performance improvements if you create a zpool on a full disk, such as one with an EFI label. Does the same apply if the full disk is used with an SMI label, which is required to boot? I am trying to determine the trade-off, if any, of having a single rpool on cXtYd0s2, if I can even do that, and improved performance compared to having two
2008 Jun 17
6
mirroring zfs slice
Hi All, I had a slice with a zfs file system which I want to mirror. I followed the procedure mentioned in the admin guide and I am getting this error. Can you tell me what I did wrong?
root # zpool list
NAME     SIZE   USED   AVAIL   CAP   HEALTH   ALTROOT
export   254G   230K   254G    0%    ONLINE   -
root # echo | format
Searching for disks...done
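For turning a single-slice pool into a mirror, the relevant command is zpool attach, naming the existing slice and a second slice of at least the same size; a hedged sketch, since the post does not show which slices are in use:
  # zpool attach export c1t0d0s5 c1t1d0s5    # second device name is assumed
  # zpool status export                      # watch resilver progress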
2010 Mar 05
17
why L2ARC device is used to store files ?
Greetings All, I have created a pool that consists of a hard disk and an ssd as a cache: zpool create hdd c11t0d0p3, then zpool add hdd cache c8t0d0p0 (the cache device). I ran an OLTP benchmark to emulate a DBMS. Once I ran the benchmark, the pool started creating the database files on the ssd cache device? Can anyone explain why this is happening? Isn't the L2ARC supposed to absorb the evicted data
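To see where writes actually land, per-vdev statistics help; the cache device should show mostly reads once the ARC has warmed up. A sketch using the pool name from the post:
  # zpool iostat -v hdd 5     # per-vdev read/write activity every 5 seconds
  # zpool status hdd          # confirms c8t0d0p0 is listed under "cache", not as a data vdev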
2011 Aug 09
7
Disk IDs and DD
Hiya, Is there any reason (and anything to worry about) if disk target IDs don't start at 0 (zero)? For some reason mine are like this (3 controllers - 1 onboard and 2 PCIe):
AVAILABLE DISK SELECTIONS:
  0. c8t0d0 <ATA -ST9160314AS -SDM1 cyl 19454 alt 2 hd 255 sec 63>
     /pci@0,0/pci10de,cb84@5/disk@0,0
  1. c8t1d0 <ATA -ST9160314AS -SDM1
2006 Nov 07
6
Best Practices recommendation on x4200
Greetings all- I have a new X4200 that I'm getting ready to deploy. It has four 146 GB SAS drives. I'd like to setup the box for maximum redundancy on the data stored on these drives. Unfortunately, it looks like ZFS boot/root aren't really options at this time. The LSI Logic controller in this box only supports either a RAID0 array with all four disks, or a RAID 1
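On that era of LSI onboard controller a common layout was a hardware mirror for the boot disks via raidctl, leaving the other pair for a ZFS mirror; a hedged sketch with assumed device names:
  # raidctl -c c1t0d0 c1t1d0                 # hardware RAID1 for boot (destroys data on the second disk)
  # zpool create data mirror c1t2d0 c1t3d0   # remaining two disks as a ZFS mirror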
2009 Jun 03
7
"no pool_props" for OpenSolaris 2009.06 with old SPARC hardware
Hi, yesterday evening I tried to upgrade my Ultra 60 to 2009.06 from SXCE snv_98. I can't use the AI Installer because OpenPROM is version 3.27. So I built IPS from source, then created a zpool on a spare drive and installed OS 2009.06 on it. To make the disk bootable I used: installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t1d0s0 using the executable from my new
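For completeness, the usual checks when an older bootblk complains are that the bootblk was taken from the new release and that bootfs is set on the root pool; a sketch assuming the pool is named rpool and the root dataset is rpool/ROOT/opensolaris:
  # zpool set bootfs=rpool/ROOT/opensolaris rpool
  # installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t1d0s0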