Displaying 20 results from an estimated 3000 matches similar to: "weird bug with Seagate 3TB USB3 drive"
2011 Dec 15
31
Can I create a mirror for a root rpool?
On Solaris 10, if I install using ZFS root on only one drive, is there a way
to add another drive as a mirror later? Sorry if this was discussed
already. I searched the archives and couldn't find the answer. Thank you.
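A minimal sketch of the usual procedure (device names hypothetical): attach the second drive's slice to the root pool, wait for the resilver to finish, then install the boot blocks on the new drive.
# zpool attach rpool c0t0d0s0 c0t1d0s0
# zpool status rpool        (wait until the resilver completes)
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0
(On SPARC, installboot is used instead of installgrub.)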
2010 Aug 13
15
NFS issue with ZFS
I have Solaris 10 U7 that is exporting a ZFS filesystem.
The client is Solaris 9 U7.
I can mount the filesystem just fine, but I am unable to write to it.
showmount -e shows my mount is set for everyone.
The dfstab file has the rw option set.
So what gives?
Phillip
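One hedged thing to check (share path and client name hypothetical): with a plain rw share, writes by root on the client are mapped to nobody, which often looks like "cannot write". Granting root access to the client may be what's missing:
share -F nfs -o rw,root=sol9client /export/data
Also make sure the dfstab entry does not conflict with a sharenfs property set on the dataset, e.g. zfs get sharenfs pool/export/data.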
2009 Aug 04
7
Sol10u7: can't "zpool remove" missing hot spare
I'm using Solaris 10u6 updated to u7 via patches, and I have a pool
with a mirrored pair and a (shared) hot spare. We reconfigured disks
a while ago and now the controller is c4 instead of c2. The hot spare
was originally on c2, and apparently on rebooting it didn't get found.
So, I looked up what the new name for the hot spare was, then added
it to the pool with "zpool
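Presumably the goal is something like the following (names hypothetical). When the old device name can no longer be resolved, zpool status usually prints a numeric GUID for the missing device, and that GUID can be given to zpool remove in place of the name:
# zpool status tank          (note the GUID shown for the missing spare)
# zpool remove tank c2t5d0   (or: zpool remove tank <guid>)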
2009 Dec 04
2
USB sticks show up as one set of devices in zpool, different devices in format
Hello,
I had snv_111b running for a while on an HP DL160 G5, with two 16 GB USB sticks comprising the mirrored rpool for boot, and four 1 TB drives comprising another pool, pool1, for data.
So that's been working just fine for a few months. Yesterday I got it into my mind to upgrade the OS to latest, which was then snv_127. That worked, and all was well. Also did an upgrade to the
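When removable devices get renumbered like this, one common remedy for a non-root pool, sketched here with the pool name from the post, is to export it, rebuild the device links, and re-import so ZFS rescans the paths:
# zpool export pool1
# devfsadm -Cv
# zpool import pool1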
2007 Oct 08
6
zfs boot issue, changing device id
Hi,
Given two disk c1t0d0 (DISK A) and c1t1d0 (DISK B)...
1/ Standard install on DISK A.
2/ zfs boot install on DISK B.
3/ I change the boot order and my zfs boot works fine.
4/ I install grub on the mbr of DISK B
5/ I disconnect and replace DISK A with DISK B
6/ Reboot, get the grub menu, select Solaris ZFS, and it panics that it
cannot mount root path @ device XXX...
This is not a ZFS
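The symptom is consistent with the pool remembering DISK A's device path. A hedged sketch of the usual recovery steps of that era (paths hypothetical): reinstall grub on the disk in its new position, remove the stale device cache, and rebuild the boot archive:
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t0d0s0
# rm /etc/zfs/zpool.cache
# bootadm update-archive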
2009 Aug 02
1
zpool status showing wrong device name (similar to: ZFS confused about disk controller )
Hi All,
over the last couple of weeks, I had to boot from my rpool from various physical
machines because some component on my laptop mainboard blew up (you know that
burned electronics smell?). I can't retrospectively document all I did, but I am
sure I recreated the boot-archive, ran devfsadm -C and deleted
/etc/zfs/zpool.cache several times.
Now zpool status is referring to a
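For a non-root pool, an export/import cycle normally refreshes the names shown by zpool status. For an rpool, which cannot be exported while booted from it, one hedged approach is to boot from install media and re-import under an altroot so the cache file gets rewritten with the current paths:
# zpool import -d /dev/dsk -R /a rpool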
2010 Jan 03
2
"zpool import -f" not forceful enough?
I had to use the labelfix hack (and I had to recompile it at that) on 1/2 of an old zpool. I made this change:
/* zio_checksum(ZIO_CHECKSUM_LABEL, &zc, buf, size); */
zio_checksum_table[ZIO_CHECKSUM_LABEL].ci_func[0](buf, size, &zc);
and I'm assuming [0] is the correct endianness, since afterwards I saw it come up with "zpool import".
Unfortunately, I
2010 Jul 12
3
Need ZFS master!
Hello all. I am new...very new to OpenSolaris, and I am having an issue with no idea what is going wrong. I have 5 drives in my machine, all 500 GB. I installed OpenSolaris on the first drive and rebooted. Now what I want to do is add a second drive so they are mirrored. How does one do this? I am getting nowhere and need some help.
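In outline (device names hypothetical): attaching the second disk to the existing pool turns its single-disk vdev into a two-way mirror, and zpool status then shows the resilver progress. As in the root-pool sketch under the first result above, boot blocks still need to be installed on the new disk.
# zpool attach rpool c7d0s0 c7d1s0
# zpool status rpool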
2009 Feb 02
8
ZFS core contributor nominations
The time has come to review the current Contributor and Core contributor
grants for ZFS. Since all of the ZFS core contributor grants are set
to expire on 02-24-2009, we need to renew the members that are still
contributing at core contributor levels. We should also add some new
members to both Contributor and Core contributor levels.
First the current list of Core contributors:
Bill
2010 Oct 04
8
Can I "upgrade" a striped pool of vdevs to mirrored vdevs?
Hi,
a while ago I created a zpool of single vdevs, not using mirroring of any kind. Now I wonder if it's possible to add disks and mirror the currently existing vdevs.
Thanks,
budy
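Yes, in principle: a hedged sketch, attaching one new disk to each existing single-disk top-level vdev so every stripe member becomes a two-way mirror (device names hypothetical, one attach per vdev):
# zpool attach tank c1t1d0 c2t1d0
# zpool attach tank c1t2d0 c2t2d0
Note that attach pairs a new disk with an existing vdev; there is no single operation that converts the whole pool at once.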
2012 Dec 12
20
Solaris 11 System Reboots Continuously Because of a ZFS-Related Panic (7191375)
I've hit this bug on four of my Solaris 11 servers. Looking for anyone else
who has seen it, as well as comments/speculation on cause.
This bug is pretty bad. If you are lucky you can import the pool read-only
and migrate it elsewhere.
I've also tried setting zfs:zfs_recover=1,aok=1 with varying results.
http://docs.oracle.com/cd/E26502_01/html/E28978/gmkgj.html#scrolltoc
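For reference, a sketch of how those pieces are usually applied (pool name hypothetical; no guarantee they help with this particular panic). The tunables go in /etc/system and take effect after a reboot:
set zfs:zfs_recover = 1
set aok = 1
and the read-only import mentioned above is:
# zpool import -o readonly=on tank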
2010 Jan 19
8
Panic running a scrub
This is probably unreproducible, but I just got a panic whilst
scrubbing a simple mirrored pool on SXCE snv_124. Evidently
one of the disks went offline for some reason and shortly
thereafter the panic happened. I have the dump and the
/var/adm/messages containing the trace.
Is there any point in submitting a bug report?
The panic starts with:
Jan 19 13:27:13 host6
2010 Jan 13
3
Recovering a broken mirror
We have a production SunFireV240 that had a zfs mirror until this week. One of the drives (c1t3d0) in the mirror failed.
The system was shutdown and the bad disk replaced without an export.
I don't know what happened next, but by the time I got involved there was no evidence that the remaining good disk (c1t2d0) had ever been part of a ZFS mirror.
Using dd on the raw device I can see data
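A more direct way to check whether any ZFS labels survive on that disk is to dump them with zdb (slice number an assumption; s0 is typical for a mirror of that era):
# zdb -l /dev/rdsk/c1t2d0s0
If all four labels are gone, raw dd output is unlikely to help reconstruct the mirror membership.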
2011 Jul 26
2
recover zpool with a new installation
Hi all,
I lost my storage because rpool won't boot. I tried to recover, but
OpenSolaris says to "destroy and re-create".
My rpool is installed on a flash drive, and my pool (with my data) is on
other disks.
My question is: is it possible to reinstall OpenSolaris on a new flash drive
without disturbing my pool of disks, and then recover that pool?
Thanks.
Regards,
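Yes: as long as the reinstall touches only the flash drive, the data pool can be picked up afterwards. A minimal sketch (pool name hypothetical):
# zpool import             (lists pools visible on the attached disks)
# zpool import -f mypool   (-f because the pool was last used by the old install)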
2010 Mar 19
3
zpool I/O error
Hi all,
I'm trying to delete a zpool and when I do, I get this error:
# zpool destroy oradata_fs1
cannot open 'oradata_fs1': I/O error
#
The pools I have on this box look like this:
#zpool list
NAME          SIZE  USED   AVAIL  CAP  HEALTH    ALTROOT
oradata_fs1   532G  119K   532G    0%  DEGRADED  -
rpool         136G  28.6G  107G   21%  ONLINE    -
#
Why
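Two hedged things to try: zpool status -x oradata_fs1 should say which device is failing (DEGRADED plus an I/O error suggests a vdev is unreachable), and a pool in this state can sometimes still be torn down with the force flag:
# zpool status -x oradata_fs1
# zpool destroy -f oradata_fs1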
2009 Oct 14
14
ZFS disk failure question
So, my Areca controller has been complaining via email of read errors for a couple of days on SATA channel 8. The disk finally gave up last night at 17:40. I've got to say, I really appreciate the Areca controller taking such good care of me.
For some reason, I wasn't able to log into the server last night or in the morning, probably because my home dir was on the zpool with the failed disk
2008 Jun 28
3
Proper wayto do disk replacement in an A1000 storage array and raidz2.
I'm using ZFS and a drive has failed.
I am quite new to Solaris and, frankly, I seem to know more about ZFS and how it works than I do the OS.
I have the hot spare taking over the failed disk. From here, do I need to remove the disk on the OS side (if so, what is the proper way), or do I need to take action on the ZFS side first?
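The usual order, sketched under the assumption that the spare has finished resilvering (device names hypothetical): unconfigure the failed disk on the OS side, swap it physically, then tell ZFS to replace it; once the replacement resilvers, the spare returns to the spare list (or can be detached by hand):
# cfgadm -c unconfigure c1::dsk/c1t4d0   (OS side, before pulling the disk)
# zpool replace tank c1t4d0              (after inserting the new disk)
# zpool detach tank c1t5d0               (only if the spare does not release itself)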
2009 Oct 17
3
zvol used apparently greater than volsize for sparse volume
What does it mean for the reported value of a zvol volsize to be
less than the product of used and compressratio?
For example,
# zfs get -p all home1/home1mm01
NAME             PROPERTY  VALUE        SOURCE
home1/home1mm01  type      volume       -
home1/home1mm01  creation  1254440045   -
home1/home1mm01  used      14902492672
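To make the arithmetic concrete (the compressratio value here is assumed, since the listing is truncated): used is 14902492672 bytes, about 13.9 GiB. With a compressratio of, say, 1.6x, the implied logical data is roughly 13.9 GiB x 1.6 ~ 22.2 GiB, which can exceed the volsize because used also counts snapshot blocks and metadata, not just the live volume contents.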
2012 Dec 21
4
zfs receive options (was S11 vs illumos zfs compatibility)
> From: zfs-discuss-bounces at opensolaris.org [mailto:zfs-discuss-
> bounces at opensolaris.org] On Behalf Of bob netherton
>
> You can, with recv, override any property in the sending stream that can
> be
> set from the command line (i.e., a writable).
>
> # zfs send repo/support@cpu-0412 | zfs recv -o version=4 repo/test
> cannot receive: cannot override received
2011 Mar 04
13
cannot replace c10t0d0 with c10t0d0: device is too small
In 2007 I bought 6 WD1600JS 160GB sata disks and used 4 to create a raidz storage pool and then shelved the other two for spares. One of the disks failed last night so I shut down the server and replaced it with a spare. When I tried to zpool replace the disk I get:
zpool replace tank c10t0d0
cannot replace c10t0d0 with c10t0d0: device is too small
The 4 original disk partition tables look like
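A hedged workaround when the replacement reports a few sectors short: copy the partition table from one of the surviving pool disks onto the spare so the slice ZFS uses matches exactly, then retry the replace (source device name hypothetical):
# prtvtoc /dev/rdsk/c10t1d0s2 | fmthard -s - /dev/rdsk/c10t0d0s2
# zpool replace tank c10t0d0
This only helps when the mismatch comes from the label layout; if the new drive genuinely has fewer sectors, a larger disk is needed.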