similar to: Reboots when importing old rpool

Displaying 20 results from an estimated 300 matches similar to: "Reboots when importing old rpool"

2011 Sep 14
3
Is there any implementation of VSS for a ZFS iSCSI snapshot on Solaris?
I am using a Solaris + ZFS environment to export an iSCSI block-layer device, and use the snapshot facility to take a snapshot of the ZFS volume. Is there an existing Volume Shadow Copy (VSS) implementation on Windows for this environment? Thanks, S Joshi
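Independent of the Windows VSS question, the Solaris-side snapshot step would look like this; a minimal sketch, assuming a zvol named rpool/iscsivol is already exported over iSCSI (all names hypothetical):

  # Take a point-in-time snapshot of the exported zvol
  zfs snapshot rpool/iscsivol@vss-20110914
  # Verify the snapshot exists
  zfs list -t snapshot -r rpool/iscsivol
  # If a writable copy is needed, clone the snapshot into a new zvol
  zfs clone rpool/iscsivol@vss-20110914 rpool/iscsivol-clone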
2011 May 19
8
Mapping sas address to physical disk in enclosure
Hi, we have a SunFire X4140 connected to a Dell MD1220 SAS enclosure (single path, MPxIO disabled) via an LSI SAS9200-8e HBA. Disks are visible with SAS addresses such as this in "zpool status" output:

  NAME        STATE   READ WRITE CKSUM
  cuve        ONLINE     0     0     0
    mirror-0  ONLINE     0     0     0
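One common way to map those SAS addresses to physical slots is LSI's sas2ircu utility; a minimal sketch, assuming sas2ircu is installed and the HBA is adapter index 0:

  # List adapters to find the controller index
  sas2ircu LIST
  # Dump enclosure/slot/SAS-address details for each attached disk
  sas2ircu 0 DISPLAY
  # Match the "SAS Address" field in the output against the
  # c#t<WWN>d0 device names shown by zpool status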
2011 Jun 30
14
700GB gone?
I have a 1.5TB disk that has several partitions. One of them is 900GB, but now I can only see 300GB. Where is the rest? Is there a command I can use to reach the rest of the data? Will scrub help?
2008 Feb 12
4
xVM and VirtualBox
Hi, unfortunately VirtualBox does not yet work in a Solaris Dom0. I installed the VirtualBox beta and it runs fine on bare metal, but the necessary driver does not load automatically in an xVM Dom0. It can be loaded manually, but it looks like it does not work in a Dom0:

  bash-3.2# modinfo | grep vbox
  # this is a one time task:
  bash-3.2# cp /platform/i86pc/kernel/drv/amd64/vboxdrv
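A hedged sketch of the manual load being alluded to; the copy source is from the message, but the destination path and the add_drv step are assumptions:

  bash-3.2# cp /platform/i86pc/kernel/drv/amd64/vboxdrv /kernel/drv/amd64/
  bash-3.2# add_drv vboxdrv         # register the driver (one-time)
  bash-3.2# modload /kernel/drv/amd64/vboxdrv
  bash-3.2# modinfo | grep vbox     # confirm it is now loaded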
2012 Jul 25
8
online increase of zfs after LUN increase ?
Hello, there is a feature of ZFS (autoexpand, or zpool online -e) by which a pool can consume an increased LUN immediately and grow in size. That would be a very useful (vital) feature in an enterprise environment, but when I tried to use it, it did not work: the LUN was expanded and is visible in format, yet the zpool did not grow. I found a bug, SUNBUG:6430818 (Solaris Does Not Automatically
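For reference, the two documented ways to pick up a grown LUN; a minimal sketch with hypothetical pool and device names:

  # Let the pool grow automatically on future LUN expansions
  zpool set autoexpand=on mypool
  # Or expand one already-grown device explicitly
  zpool online -e mypool c2t0d0
  # Confirm the new capacity is visible
  zpool list mypool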
2012 Jan 11
3
Unable to allocate dma memory for extra SGL
Hi all, we have a Solaris 10 U9 x86 instance running on Silicon Mechanics / SuperMicro hardware. Occasionally under high load (a ZFS scrub, for example) the box becomes non-responsive: it continues to respond to ping, but nothing else works, not even the local console. Our only solution is a hard reset, after which everything comes up normally. Logs show the following: Jan 8
2011 Aug 09
7
Disk IDs and DD
Hiya, is there any reason (and anything to worry about) if disk target IDs don't start at 0 (zero)? For some reason mine are like this (3 controllers: 1 onboard and 2 PCIe):

  AVAILABLE DISK SELECTIONS:
    0. c8t0d0 <ATA-ST9160314AS-SDM1 cyl 19454 alt 2 hd 255 sec 63>
       /pci@0,0/pci10de,cb84@5/disk@0,0
    1. c8t1d0 <ATA-ST9160314AS-SDM1
2011 Apr 24
1
Infinite loop with bcmxcp_usb and Powerware 5115
I'm running Solaris 11 Express (snv_151a x86) with NUT 2.6.0 compiled from the source tarball, and a Powerware 5115. I was seeing the driver hang, so I gave it some '-D's (23 apparently ;-) and found that it seems to be getting into an infinite loop when reading data from the UPS. The debug output is attached, and I would be very grateful for some help or advice on what to try next.
2011 May 28
7
Have my RMA... Now what??
I have a raidz2 pool with one disk that seems to be going bad; several errors are noted in iostat. I have an RMA for the drive; now I am wondering how to proceed. I need to send the drive in, and then they will send me one back. If I had the new drive on hand, I could do a zpool replace. Do I do a zpool offline? zpool detach? Once I get the drive back and put it in the same drive bay..
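For a raidz2 vdev the usual sequence is offline, swap, replace (detach only applies to mirrors); a minimal sketch with hypothetical pool and device names:

  # Take the failing disk out of service before pulling it
  zpool offline tank c1t3d0
  # ...ship the drive, receive the replacement, insert it in the same bay...
  # Rebuild onto the new disk in that slot
  zpool replace tank c1t3d0
  # Watch the resilver complete
  zpool status -v tank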
2012 Jan 04
9
Stress test zfs
Hi all, I've got Solaris 10 9/10 running on a T3. It's an Oracle box with 128GB of memory. I've been trying to load test the box with bonnie++. I can get 80 to 90 K reads, but can't seem to get more than a couple K for writes. Any suggestions? Or should I take this to a bonnie++ mailing list? Any help is appreciated. I'm kinda
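A hedged example invocation; the mount point is hypothetical, and with 128GB of RAM the test file size should be about twice RAM so the ARC cannot mask disk performance:

  # 256GB of data on a 128GB-RAM box; skip the small-file tests
  bonnie++ -d /pool/bench -s 256g -n 0 -u nobody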
2012 Jul 11
5
Solaris derivative with the best long-term future
As a napp-it user who needs to upgrade from NexentaCore, I recently saw "preferred for OpenIndiana live but running under Illumian, NexentaCore and Solaris 11 (Express)" as a system recommendation for napp-it. I wonder about the future of OpenIndiana and Illumian: which fork is likely to see the most continued development, in your opinion? Thanks.
2010 Nov 11
3
Booting fails with `Can not read the pool label' error
I'm still trying to find a fix/workaround for the problem described in Unable to mount root pool dataset http://opensolaris.org/jive/thread.jspa?messageID=492460 Since the Blade 1500's rpool is mirrored, I've decided to detach the second half of the mirror, relabel the disk, create an alternative rpool (rpool2) there, copy the current BE (snv_134) using beadm
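The split-mirror procedure being described, as a minimal sketch; device names are hypothetical, and the disk must be relabeled between the detach and the create:

  # Detach the second half of the root mirror
  zpool detach rpool c0t1d0s0
  # (relabel the disk with format; a root pool needs an SMI label)
  # Create the alternative root pool on the freed disk
  zpool create rpool2 c0t1d0s0
  # Copy the current boot environment onto the new pool
  beadm create -p rpool2 snv_134-copy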
2010 Aug 28
4
ufs root to zfs root liveupgrade?
Hi all, I'm trying to learn how a UFS-root to ZFS-root Live Upgrade works. I downloaded the VirtualBox image of s10u8, which comes up with a UFS root. I added a new disk (16GB), created zpool rpool, ran lucreate -n zfsroot -p rpool, ran luactivate zfsroot, and ran lustatus, which does show zfsroot will be active on next boot. But after init 6 it comes up with a UFS root: lustatus shows ufsroot active, and zpool rpool is mounted but not used by boot. Is this a
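The sequence from the message, annotated; the df check at the end is a suggested way to see which root is really in use:

  lucreate -n zfsroot -p rpool   # build the ZFS BE on the new pool
  luactivate zfsroot             # mark it active for next boot
  lustatus                       # zfsroot should show "yes" under "Active On Reboot"
  init 6
  df -h /                        # after reboot: should report rpool/ROOT/zfsroot, not the UFS slice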
2007 Dec 03
31
How to enable 64bit solaris guest on top of solaris dom0
I can enable a 32-bit Solaris guest on top of a Solaris dom0, but I don't know how to enable a 64-bit Solaris guest on top of a Solaris dom0. What configuration do I need to modify?
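For a Solaris PV guest, the bitness is chosen by the kernel path in the domU configuration; a hedged sketch, where the exact paths below are assumptions:

  # 32-bit PV guest:
  #   kernel  = "/platform/i86xpv/kernel/unix"
  # 64-bit PV guest - point at the amd64 kernel and boot archive instead:
  kernel  = "/platform/i86xpv/kernel/amd64/unix"
  ramdisk = "/platform/i86pc/amd64/boot_archive"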
2015 Jun 04
3
[PATCH RFC][Resend] New API: btrfs_convert
Disable the test case temporarily for 2 reasons: 1. The default test disk size is 500M, and the btrfs convert command thinks that is too small to convert (actually, just adding 10M or 20M more is enough). 2. Btrfs-progs may have a tiny bug: when the command is executed in guestfish it reports an error, yet converts the filesystem to btrfs successfully. Signed-off-by: Pino Tsao
2010 Aug 19
3
SSD caching of MDT
Article by Jeff Layton: http://www.linux-mag.com/id/7839 anyone have views on whether this sort of caching would be useful for the MDT? My feeling is that MDT reads are probably pretty random but writes might benefit...? GREG -- Greg Matthews 01235 778658 Senior Computer Systems Administrator Diamond Light Source, Oxfordshire, UK
2010 Mar 27
14
b134 - Mirrored rpool won't boot unless both mirrors are present
I have two 500 GB drives on my system that are attached to built-in SATA ports on my Asus M4A785-M motherboard, running in AHCI mode. If I shut down the system, remove either drive, and then try to boot the system, it will fail to boot. If I disable the splash screen, I find that it will display the SunOS banner and the hostname, but it never gets as far as the "Reading ZFS config:"
2010 Sep 13
3
Proper procedure when device names have changed
I am running zfs-fuse on an Ubuntu 10.04 box. I have a dual mirrored pool:

  mirror sdd sde
  mirror sdf sdg

Recently the device names shifted on my box, and the devices are now sdc, sdd, sde and sdf. The pool is of course very unhappy: the mirrors are no longer matched up and one device is "missing". What is the proper procedure to deal with this? -brian
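The standard fix on Linux is to re-import the pool using persistent device ids; a minimal sketch with a hypothetical pool name:

  # Export, then re-import scanning the stable by-id names
  zpool export mypool
  zpool import -d /dev/disk/by-id mypool
  # The vdev labels now record ids, immune to sdX renumbering
  zpool status mypool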
2015 Jun 10
2
[PATCH] New API: btrfs_replace_start
Signed-off-by: Pino Tsao <caoj.fnst@cn.fujitsu.com>
---
 daemon/btrfs.c                    | 40 ++++++++++++++++++++++++++++++++++++++
 generator/actions.ml              | 19 +++++++++++++++++++
 tests/btrfs/test-btrfs-devices.sh |  8 ++++++++
 3 files changed, 67 insertions(+)

diff --git a/daemon/btrfs.c b/daemon/btrfs.c
index 39392f7..acc300d 100644
--- a/daemon/btrfs.c
+++
2015 Jun 05
2
Re: [PATCH RFC][Resend] New API: btrfs_convert
Hi Toscano, on 2015-06-05 00:37, Pino Toscano wrote: > Hi, > on Thursday 4 June 2015 at 11:56:41, Pino Tsao wrote: >> Disable the test case temporarily for 2 reasons: >> 1. The default test disk size is 500M, while the btrfs >> convert command thinks it is too small to convert >> (actually, just adding 10M or 20M more is enough). >