Displaying 20 results from an estimated 10000 matches similar to: "ZFS and Virtualization"
2010 Feb 24
9
Import zpool from FreeBSD in OpenSolaris
I want to import my zpools from FreeBSD 8.0 into OpenSolaris 2009.06.
After reading the few posts (links below) I was able to find on the subject, it seems like there is a difference between FreeBSD and Solaris. FreeBSD operates directly on the disk while Solaris creates a partition and uses that... is that right? Is it impossible for OpenSolaris to use zpools from FreeBSD?
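For reference, a rough sketch of what checking for the FreeBSD labels could look like from the OpenSolaris side (the device names below are made up for illustration, and whether the labels are visible at all is exactly the open question here):

# zpool import                  # scan /dev/dsk for anything importable
# zdb -l /dev/dsk/c1t0d0p0      # dump the vdev labels from the whole disk, where FreeBSD would have written them
# zdb -l /dev/dsk/c1t0d0s0      # dump the labels from slice 0, where Solaris normally expects them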
2007 Jun 13
5
drive displayed multiple times
So I just imported an old zpool onto this new system. The problem is that one drive (c4d0) is showing up twice. First it's displayed as ONLINE, then it's displayed as "UNAVAIL". This is obviously causing a problem as the zpool now thinks it's in a degraded state, even though all drives are there, and all are online.
This pool should have 7 drives total,
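A first step that often gets suggested for stale duplicate device entries is to export the pool and import it again so the device paths are rescanned; roughly (the pool name "tank" is a placeholder):

# zpool export tank
# zpool import -d /dev/dsk tank
# zpool status tank             # check whether c4d0 now shows up only once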
2008 Oct 22
12
Hotplug issues on USB removable media.
Hi,
As a part of the next stages of the time-slider project we are looking into doing actual backups onto
removable media devices such as USB media. The goal is to be able to view snapshots stored on the
media and merge these into the list of viewable snapshots in nautilus giving the user a broader
selection of restore points. In an ideal world we would like to detect the insertion of the
2008 Aug 25
5
Unable to import zpool since system hang during zfs destroy
Hi all,
I have a RAID-Z zpool made up of 4 x SATA drives running on Nexenta 1.0.1 (OpenSolaris b85 kernel). It has on it some ZFS filesystems and a few volumes that are shared to various Windows boxes over iSCSI. On one particular iSCSI volume, I discovered that I had mistakenly deleted some files from the FAT32 partition that is on it. The files were still in a ZFS snapshot that was made earlier
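One commonly suggested way out of this kind of situation is to clone the snapshot into a new, writable volume and expose the clone over iSCSI, so the files can be copied off the FAT32 filesystem inside it without touching the original volume. A sketch with placeholder names (the last step assumes the old shareiscsi property rather than a COMSTAR setup):

# zfs list -t snapshot -r tank/winvol              # find the snapshot that still has the files
# zfs clone tank/winvol@earlier tank/winvol_rec    # writable clone of that snapshot
# zfs set shareiscsi=on tank/winvol_rec            # export the clone as a new iSCSI target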
2010 Jun 02
11
ZFS recovery tools
Hi,
I have just recovered from a ZFS crash. During the agonizing time this took, I was surprised to
learn how undocumented the tools and options for ZFS recovery were. I managed to recover thanks
to some great forum posts from Victor Latushkin, however without his posts I would still be crying
at night...
I think the worst example is the zdb man page, which all it does is ask you
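For anyone landing here with a similar crash, these are the zdb invocations that tend to come up in recovery threads (pool and device names are placeholders, and the available flags vary between builds):

# zdb -l /dev/dsk/c0t0d0s0      # dump the four vdev labels on one device
# zdb -u tank                   # print the active uberblock of an imported pool
# zdb -e tank                   # examine a pool that is exported or will not import
# zdb -b tank                   # traverse all blocks and report leaked or double-allocated space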
2008 Jun 04
3
Util to remove (s)log from remaining vdevs?
After having to reset my i-ram card, I can no longer import my raidz pool on 2008.05.
Also trying to import the pool using the zpool.cache causes a kernel panic on 2008.05 and B89 (I'm waiting to try B90 when released).
So I have 2 options:
* Wait for a release that can import after log failure... (no time frame ATM)
* Use a util that removes the log vdev info from the remaining vdevs.
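For later readers: builds after this post gained both of the missing pieces, roughly along these lines (syntax from memory, so treat it as a sketch; "tank" and the device name are placeholders):

# zpool import -m tank          # import even though the log device is missing (later zpool versions)
# zpool remove tank c3d0        # remove a still-present log device from a pool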
2007 May 29
6
Deterioration with zfs performance and recent zfs bits?
Has anyone else noticed a significant zfs performance deterioration
when running recent opensolaris bits?
My 32-bit / 768 MB Toshiba Tecra S1 notebook was able to do a
full opensolaris release build in ~ 4 hours 45 minutes (gcc shadow
compilation disabled; using an lzjb compressed zpool / zfs on a
single notebook hdd p-ata drive).
After upgrading to 2007-05-25 opensolaris release bits
2010 Sep 18
6
space_map again nuked!!
I'm really angry at ZFS:
My server no longer boots because the ZFS space map is corrupt again.
I just replaced the whole space map by recreating a new zpool from scratch and copying back the data with "zfs send & zfs receive".
Did it copy the corrupt space map?!
For me it's over now. I've lost too much time and money with this experimental filesystem.
My version is Zpool
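On the "did it copy the corrupt space map" question: as far as I understand it, a send stream carries dataset contents and properties, not allocator metadata, so space maps are rebuilt from scratch on the receiving pool. The copy described above would have looked roughly like this (placeholder pool names):

# zfs snapshot -r oldpool@migrate
# zfs send -R oldpool@migrate | zfs receive -F -d newpool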
2008 Dec 15
15
Need Help Invalidating Uberblock
I have a ZFS pool that has been corrupted. The pool contains a single device which was actually a file on UFS. The machine was accidentally halted and now the pool is corrupt. There are (of course) no backups and I've been asked to recover the pool. The system panics when trying to do anything with the pool.
root@:/$ zpool status
panic[cpu1]/thread=fffffe8000758c80: assertion failed:
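Since the vdev is just a file on UFS, a reasonable first step (assuming the panic allows it) is to inspect the labels and uberblock with zdb before modifying anything; the path and pool name below are placeholders:

# zdb -l /export/poolfile       # dump the vdev labels from the backing file
# zdb -e -p /export -u tank     # try to print the active uberblock without importing the pool

Much later builds also added "zpool import -F", which rewinds to an earlier txg automatically instead of requiring the uberblock to be invalidated by hand.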
2006 Jul 19
3
ZFS support for USB disks 120GB Western Digital
Hey,
I have a portable hard disk, a Western Digital 120GB USB. I'm running Nevada b42a on a Thinkpad T43. Is this a supported configuration for setting up ZFS on portable disks?
Found out some old blogs about this topic:
http://blogs.sun.com/roller/page/artem?entry=zfs_on_the_go and some other info under: http://www.sun.com/io_technologies/USB-Faq.html
Is this information still valid? Under ZFS FAQ
2009 Apr 15
6
Supermicro AOC-SASLP-MV8
Bouncing a thread from the device drivers list:
http://opensolaris.org/jive/thread.jspa?messageID=357176
Does anybody know if OpenSolaris will support this new Supermicro card,
based on the Marvell 88SE6480 chipset? It's a true PCI Express 8-port JBOD
SAS/SATA controller with pricing apparently around $125.
If it works with OpenSolaris it sounds pretty much perfect.
2011 May 19
2
Faulted Pool Question
I just got a call from another of our admins, as I am the resident ZFS
expert, and they have opened a support case with Oracle, but I figured
I'd ask here as well, as this forum often provides better, faster
answers :-)
We have a server (M4000) with 6 FC attached SE-3511 disk arrays
(some behind a 6920 DSP engine). There are many LUNs, all about 500 GB
and mirrored via ZFS. The LUNs
2006 Jun 26
2
raidz2 is alive!
Already making use of it, thank you!
http://www.justinconover.com/blog/?p=17
I took 6x250GB disks and tried raidz2/raidz/none
# zpool create zfs raidz2 c0d0 c1d0 c2d0 c3d0 c7d0 c8d0
df -h zfs
Filesystem             size   used  avail capacity  Mounted on
zfs                    915G    49K   915G     1%    /zfs
# zpool destroy -f zfs
Plain old raidz (raid-5ish)
# zpool create zfs raidz c0d0
2008 Dec 18
3
automatic forced zpool import with unmatched hostid
Hi,
since the hostid is stored in the label, "zpool import" fails if the hostid doesn't match. Under certain circumstances (ldom failover) it means you have to manually force the zpool import while booting. With more than 80 LDOMs on a single host it would be great if we could configure the machine back to the old behavior where it didn't fail, maybe with a /etc/system
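For reference, the manual step being automated away here is just a forced import (pool name is a placeholder):

# zpool import -f tank          # override the hostid / "last accessed by another system" check
# zpool import -f -a            # or force-import every pool that is found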
2008 Feb 16
4
Solaris snv81 xVM on a Thinkpad T30
Trying to boot my Thinkpad T30 with Solaris snv_81 xVM, I get the error message
This version of Solaris xVM does not support this hardware
The output of "echo ::interrupts | mdb -k" while booted without xVM is
bash-3.2# echo ::interrupts | mdb -k
IRQ  Vector  IPL(lo/hi)  Bus  Share  ISR(s)
0    0x20    14/14       -    1      cbe_fire
1    0x21    5/5         ISA  1      i8042_intr
3    0x23
2007 Oct 26
1
data error in dataset 0. what's that?
Hi forum,
I did something stupid the other day, managed to connect an external disk that was part of zpool A such that it appeared in zpool B. I realised as soon as I had done zpool status that zpool B should not have been online, but it was. I immediately switched off the machine, booted without that disk connected and destroyed zpool B. I managed to get zpool A back and all of my data appears
2009 Jul 01
14
can't boot 2009.06 domU on Xen 3.4.1 / CentOS 5.3 dom0
I've got a CentOS 5.3 dom0 with Xen 3.4.1-rc5 (or so). I've tried the same stuff below with 3.4.0, no difference. I'm trying to install 2009.06 PV domU based on instructions from [1] and [2]. I can run the install fine, I can also get the kernel and boot archive (from [2]) after the install. But for the life of me I can't get the installed domU to boot.
If I
2008 Nov 06
3
Help recovering zfs filesystem
Let me preface this by admitting that I'm a bonehead.
I had a mirrored zfs filesystem. I needed to use one of the mirrors temporarily so I did a zpool detach to remove the member (call it disk1), leaving disk0 in the pool. However, after the detach I mistakenly wiped disk0.
So here is the question. I haven't touched disk1 yet so the data is hopefully still there. Is there
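As far as I remember, zpool detach deliberately rewrites the labels on the detached disk so that it no longer looks like a pool member, which is why a plain import will not see disk1. The first thing worth checking is what is actually left on it (the device name is a placeholder):

# zdb -l /dev/dsk/c1t1d0s0      # see whether any of the four vdev labels survived on disk1
# zpool import -d /dev/dsk      # see whether anything importable is detected at all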
2008 Jan 15
2
s10 HVM 64-bit boot
Hi --
I wanted to boot s10 HVM in 64-bit mode but it always boots in 32-bit
mode. Is there a special grub entry to boot s10 HVM domU in 64-bit mode?
Or is it the HVM loader?
The grub entry is like this
title Solaris 10 5/08 s10x_u5wos_01 X86
kernel /platform/i86pc/multiboot
module /platform/i86pc/boot_archive
The s10.py file is as follows:
import os, re
arch = os.uname()[4]
if
2010 Apr 04
15
Diagnosing Permanent Errors
I would like to get some help diagnosing permanent errors on my files. The machine in question has 12 1TB disks connected to an Areca raid card. I installed OpenSolaris build 134 and according to zpool history, created a pool with
zpool create bigraid raidz2 c4t0d0 c4t0d1 c4t0d2 c4t0d3 c4t0d4 c4t0d5 c4t0d6 c4t0d7 c4t1d0 c4t1d1 c4t1d2 c4t1d3
I then backed up 806G of files to the machine, and had
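The usual starting points for chasing permanent errors, using the pool name from the zpool history above:

# zpool status -v bigraid       # list the files or datasets affected by permanent errors
# zpool scrub bigraid           # re-read and verify every block, then check zpool status -v again
# fmdump -eV                    # show the underlying checksum and I/O error reports from FMA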