
Displaying 20 results from an estimated 6000 matches similar to: "SUMMARY: mounting datasets from a read-only pool with aid of tmpfs"

2011 Nov 08
1
Single-disk rpool with inconsistent checksums, import fails
Hello all, I have an oi_148a PC with a single root disk, and recently it started failing to boot: it hangs after the copyright message whichever GRUB menu option I use. Booting with an oi_148a LiveUSB I have had around since installation, I ran some zdb traversals over the rpool and some zpool import attempts. The imports fail by running the kernel out of RAM (as recently discussed in the list with
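
A common first step for a pool in this state, assuming a build new enough to support read-only import (the altroot path here is illustrative; the pool name is from the post):

  # import without replaying the intent log or allowing any writes,
  # mounted under an alternate root so nothing collides with the live media
  zpool import -o readonly=on -R /mnt rpool
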
2010 Mar 27
16
zpool split problem?
Zpool split is a wonderful feature and it seems to work well, and the choice of which disk got which name was perfect! But there seems to be an odd anomaly (at least with b132):
Started with c0t1d0s0 running b132 (root pool is called rpool).
Attached c0t0d0s0 and waited for it to resilver.
Rebooted from c0t0d0s0.
zpool split rpool spool
Rebooted from c0t0d0s0; both rpool and spool were mounted
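
For reference, the split workflow the post describes, as documented (device names from the post):

  zpool attach rpool c0t1d0s0 c0t0d0s0   # mirror the root disk, then wait for resilver
  zpool split rpool spool                # detach the second half as a new pool named 'spool'
  zpool import spool                     # the new pool is left exported by default

The anomaly reported above is exactly that 'spool' came up mounted without that explicit final import.
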
2009 Jan 09
24
zfs root, jumpstart and flash archives
I understand that currently, at least under Solaris 10u6, it is not possible to jumpstart a new system with a zfs root using a flash archive as a source. Can anyone comment on whether this restriction will be lifted in the near term, or whether it is a while out (6+ months) before this will be possible? Thanks, Jerry
2011 Nov 19
0
"zfs hold" and "zfs send" on a readonly pool
Hello, all. I'm in the process of repairing a corrupted unmirrored rpool, and my current idea was to evacuate all reachable data by "zfs send" to the redundant data pool, then recreate and repopulate the rpool with copies=2. As I previously wrote, my machine crashes when trying to import the rpool in any sort of read-write mode, however I got it to import with
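
A sketch of the evacuation idea, assuming the read-only import succeeded; 'datapool' and the snapshot name are hypothetical:

  # snapshots must already exist on the damaged pool: neither 'zfs snapshot'
  # nor 'zfs hold' can write to a pool imported read-only, but sends still work
  zfs create datapool/rpool-evac
  zfs send -R rpool@lastsnap | zfs recv -d datapool/rpool-evac
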
2010 Feb 08
5
zfs send/receive : panic and reboot
<copied from opensolaris-discuss as this probably belongs here.> I kept on trying to migrate my pool with children (see previous threads) and had the (bad) idea to try the -d option on the receive part. The system reboots immediately. Here is the log in /var/adm/messages:
Feb 8 16:07:09 amber unix: [ID 836849 kern.notice]
Feb 8 16:07:09 amber ^Mpanic[cpu1]/thread=ffffff014ba86e40:
Feb 8
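
Setting the panic itself aside, the documented behaviour of the flag involved is worth spelling out (pool and dataset names here are hypothetical):

  # recv -d drops the first element (the pool name) from the sent snapshot's
  # name and recreates the rest of the hierarchy under the target
  zfs send tank/home/user@snap | zfs recv -d backup
  # result: backup/home/user@snap
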
2011 Jul 21
4
Raidz2 slow read speed (under 5MB/s)
Hello all, I'm building a file server (or just a storage that I intend to access by Workgroup from primarily Windows machines) using zfs raidz2 and openindiana 148. I will be using this to stream blu-ray movies and other media, so I will be happy if I get just 20MB/s reads, which seems like a pretty low standard considering some people are getting 100+. This is my first time with OI, and
2011 Apr 05
11
ZFS send/receive to Solaris/FBSD/OpenIndiana/Nexenta VM guest?
Hello, I'm debating an OS change and also thinking about my options for data migration to my next server, whether it is on new or the same hardware. Migrating to a new machine I understand is a simple matter of ZFS send/receive, but reformatting the existing drives to host my existing data is an area I'd like to learn a little more about. In the past I've asked about
2011 Apr 08
11
How to rename rpool. Is that recommended ?
Hello, I have a situation where a host, which is booted off its 'rpool', needs to temporarily import the 'rpool' of another host, edit some files in it, and export the pool back retaining its original name 'rpool'. Can this be done? Here is what I am trying to do:
# zpool import -R /a rpool temp-rpool
# zfs set mountpoint=/mnt
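
Completing the sequence the post starts, as a sketch; the final rename has a catch:

  zpool import -R /a rpool temp-rpool   # import the foreign rpool under a temporary name
  # ...edit files under /a...
  zpool export temp-rpool
  # a pool keeps the name it was last imported under, so restoring the name
  # 'rpool' needs one more import/export from an environment whose own rpool
  # is not active (e.g. live media, or the other host itself):
  zpool import temp-rpool rpool
  zpool export rpool
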
2012 Jun 12
15
Recovery of RAIDZ with broken label(s)
Hi all, I have a 5 drive RAIDZ volume with data that I'd like to recover. The long story runs roughly:
1) The volume was running fine under FreeBSD on motherboard SATA controllers.
2) Two drives were moved to a HP P411 SAS/SATA controller.
3) I *think* the HP controllers wrote some volume information to the end of each disk (hence no more ZFS labels 2,3).
4) In its "auto
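
A useful first diagnostic here, assuming the drives are visible to the OS (device path hypothetical):

  # print all four ZFS labels of one member disk; labels 0 and 1 live at the
  # start of the device, labels 2 and 3 in the last 512 KB, exactly the region
  # a foreign RAID controller's metadata would clobber
  zdb -l /dev/dsk/c5t0d0s0
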
2008 Aug 04
16
zpool upgrade wrecked GRUB
Machine is running x86 snv_94 after a recent upgrade from opensolaris 2008.05. ZFS and zpool reported no troubles except suggesting an upgrade from ver. 10 to ver. 11. Seemed like a good idea at the time. System was up for several days after that point, then taken down for some unrelated maintenance. Now it will not boot the opensolaris, drops to a grub prompt, no menus. zfs was mirrored on two disks, c6d0s0 and
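
The usual repair for this on x86, run from a matching failsafe or live environment, is to reinstall GRUB stages that understand the new pool version (only the first disk's path survives in the excerpt; repeat for the second half of the mirror):

  installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c6d0s0
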
2010 Jan 28
2
Need help with repairing zpool :(
...how this can happen is not a topic of this message. Now there is a problem and I need to solve it, if it is possible. I have one HDD device (80gb); the entire disk is for rpool, with the system and home folders on it. It is no problem to reinstall the system, but I need to save some files from user dirs. And, o'cos, there is no backup. So, the problem is that the zpool is broken :( When I try to start
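
Two recovery options worth knowing here, assuming a build recent enough to have pool recovery (b128+):

  zpool import -nF rpool   # dry run: report whether discarding the last few transactions would allow import
  zpool import -F rpool    # actually rewind to the last consistent transaction group
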
2012 Oct 03
14
Changing rpool device paths/drivers
Hello all, It was often asked and discussed on the list about "how to change rpool HDDs from AHCI to IDE mode" and back, with the modern routine involving reconfiguration of the BIOS, bootup from separate live media, simple import and export of the rpool, and bootup from the rpool. The documented way is to reinstall the OS upon HW changes. Both are inconvenient to say the least.
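
The live-media routine referred to above boils down to a sketch like this (altroot path illustrative):

  # importing rewrites the current device paths into the pool labels;
  # export cleanly, then reboot from the rpool
  zpool import -f -R /a rpool
  zpool export rpool
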
2009 Jun 03
7
"no pool_props" for OpenSolaris 2009.06 with old SPARC hardware
Hi, yesterday evening I tried to upgrade my Ultra 60 to 2009.06 from SXCE snv_98. I can't use the AI Installer because OpenPROM is version 3.27. So I built IPS from source, then created a zpool on a spare drive and installed OpenSolaris 2009.06 on it. To make the disk bootable I used:
installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t1d0s0
using the executable from my new
2009 Mar 03
8
zfs list extentions related to pNFS
Hi, I am soliciting input from the ZFS engineers and/or ZFS users on an extension to "zfs list". Thanks in advance for your feedback. Quick Background: The pNFS project (http://opensolaris.org/os/project/nfsv41/) is adding a new DMU object set type which is used on the pNFS data server to store pNFS stripe DMU objects. A pNFS dataset gets created with the "zfs
2011 Apr 01
15
Zpool resize
Hi, a LUN is connected to Solaris 10u9 from a NetApp FAS2020a over iSCSI. I'm changing the LUN size on the NetApp, and Solaris format sees the new value, but zpool still shows the old value. I tried zpool export and zpool import but that didn't resolve my problem.
bash-3.00# format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
  0. c0d1 <DEFAULT cyl 6523 alt 2 hd 255 sec 63>
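
On 10u9 a pool does not pick up a grown LUN by itself; the usual fixes are one of these (pool name hypothetical, device from the post):

  zpool set autoexpand=on mypool    # grow automatically on future LUN resizes
  zpool online -e mypool c0d1       # or expand this device in place right now
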
2009 Dec 11
7
Doing ZFS rollback with preserving later created clones/snapshot?
Hi. Is it possible on Solaris 10 5/09 to roll back to a ZFS snapshot WITHOUT destroying later-created clones or snapshots? Example:
--($ ~)-- sudo zfs snapshot rpool/ROOT@01
--($ ~)-- sudo zfs snapshot rpool/ROOT@02
--($ ~)-- sudo zfs clone rpool/ROOT@02 rpool/ROOT-02
--($ ~)-- LC_ALL=C sudo zfs rollback rpool/ROOT@01
cannot rollback to 'rpool/ROOT@01': more
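
Short answer, per the documented rollback semantics: no. Intermediate snapshots and clones block a rollback, and the only way past them is destructive (names from the example above):

  zfs rollback -r rpool/ROOT@01    # -r destroys snapshots newer than the target
  # -R additionally destroys clones of those snapshots, such as rpool/ROOT-02;
  # there is no flag that rolls back while preserving them
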
2008 Jul 22
2
Problems mounting ZFS after install
Let me thank everyone in advance. I've read a number of posts here and it helped tremendously in getting the install done. I have a couple of remaining issues which I can't seem to overcome. Here are the basics:
dom0 - CentOS 5.2 32-bit, Xen 3.2.1 compiled from source
domU - os200805.iso
The install config:
[root@internetpowagroup oshman]# cat opensolaris.install
name =
2010 Jan 26
1
zfs root pool on upgraded host
Hi, I installed opensolaris on an x2200 M2 with two internal drives that had an existing root pool from a Solaris 10 update 6 install. After installing opensolaris 2009.06 the host refused to boot, though the opensolaris install itself was fine. I had to pull the second hard drive to get the host to boot, then insert the second drive and relabel the old root pool to something other than rpool. Then the host was
2009 Aug 14
16
What's eating my disk space? Missing snapshots?
Please can someone take a look at the attached file which shows the output on my machine of zfs list -r -t filesystem,snapshot -o space rpool/export/home/matt The USEDDS figure of ~2GB is what I would expect, and is the same figure reported by the Disk Usage Analyzer. Where is the remaining 13.8GB USEDSNAP figure coming from? If I total up the list of zfs-auto snapshots it adds up to about 4.8GB,
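
One thing that often explains this gap: a snapshot's own USED column only counts blocks unique to that snapshot, while USEDSNAP counts everything referenced by any snapshot, so blocks shared by several snapshots appear in USEDSNAP but in no individual snapshot's total. To see the per-snapshot figures (dataset path from the post):

  zfs list -r -t snapshot -o name,used,refer rpool/export/home/matt
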
2010 Jul 28
4
zfs allow does not work for rpool
I am trying to give a general user permission to create zfs filesystems in the rpool.
zpool set delegation=on rpool
zfs allow <user> create rpool
Both run without any issues, and zfs allow rpool reports the user does have create permissions.
zfs create rpool/test
cannot create rpool/test: permission denied
Can you not allow to the rpool?
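
The likely catch, per the zfs(1M) delegation notes: creating a filesystem also mounts it, so the 'create' permission needs 'mount' alongside it (user name is a placeholder):

  zfs allow someuser create,mount rpool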