Displaying 20 results from an estimated 500 matches similar to: "drive replaced from spare"
2008 Apr 11
0
How to replace root drive if ZFS data is on it?
Hi, Experts:
A customer has an X4500 with the boot drives mirrored by SVM (c5t0d0s0
and c5t4d0s0). ZFS uses two other partitions on the same two drives
(c5t0d0s3 and c5t4d0s3).
If we need to replace disk c5t0d0, do we need to do anything on the
ZFS side (c5t0d0s3 and c5t4d0s3) first, or can we just follow the
regular boot-drive replacement procedure?
Below is a summary of their current ZFS
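A minimal sketch of the usual combined SVM-plus-ZFS flow, assuming a hot-swappable bay, an x86 GRUB boot disk, hypothetical metadevice names d10/d20, and a placeholder for the pool name (not shown in the post):
zpool offline <pool> c5t0d0s3        # take the ZFS slice out of service
metadetach -f d10 d20                # detach the failed submirror; see metastat
# ... physically replace c5t0d0, then copy the label from the good disk:
prtvtoc /dev/rdsk/c5t4d0s2 | fmthard -s - /dev/rdsk/c5t0d0s2
metattach d10 d20                    # SVM resyncs the boot slice
zpool replace <pool> c5t0d0s3        # ZFS resilvers its slice
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c5t0d0s0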
2007 Apr 11
0
raidz2 another resilver problem
Hello zfs-discuss,
One of the disks started to behave strangely.
Apr 11 16:07:42 thumper-9.srv sata: [ID 801593 kern.notice] NOTICE: /pci@1,0/pci1022,7458@3/pci11ab,11ab@1:
Apr 11 16:07:42 thumper-9.srv port 6: device reset
Apr 11 16:07:42 thumper-9.srv scsi: [ID 107833 kern.warning] WARNING: /pci@1,0/pci1022,7458@3/pci11ab,11ab@1/disk@6,0 (sd27):
Apr 11 16:07:42 thumper-9.srv
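To tell a dying drive from a flaky controller path, the FMA telemetry and per-device error counters are worth checking; a sketch:
fmdump -eV              # raw FMA error events; look for repeated ereports on one device
iostat -En              # per-device soft/hard/transport error counters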
2008 Apr 02
1
delete old zpool config?
Hi experts
zpool import shows some weird config of an old zpool
bash-3.00# zpool import
  pool: data1
    id: 7539031628606861598
 state: FAULTED
status: One or more devices are missing from the system.
action: The pool cannot be imported.  Attach the missing
        devices and try again.
   see: http://www.sun.com/msg/ZFS-8000-3C
config:

        data1    UNAVAIL    insufficient replicas
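The stale entry comes from old ZFS labels still present on a device; a sketch of two common cleanups, where the device name is a guess and the labelclear subcommand exists only on newer releases:
# If the pool can still be imported, destroy it cleanly:
zpool import -f data1 && zpool destroy data1
# Otherwise, wipe the leftover ZFS labels on the device:
zpool labelclear -f /dev/dsk/c0t0d0s0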
2009 Jun 19
8
x4500 resilvering spare taking forever?
I've got a Thumper running snv_57 and a large ZFS pool. I recently noticed a drive throwing some read errors, so I did the right thing and replaced it with a spare.
Everything went well, but the resilvering process seems to be taking an eternity:
# zpool status
  pool: bigpool
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
        attempt was
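Two quick ways to confirm the resilver is actually moving (pool name from the post):
zpool status bigpool    # the scrub: line shows percent done and an ETA
iostat -xnz 5           # the replacement disk should show sustained writes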
2008 Mar 12
3
Mixing RAIDZ and RAIDZ2 zvols in the same zpool
I have a customer who has implemented the following layout. As you can
see, he has mostly raidz vdevs but one raidz2 vdev in the same zpool.
What are the implications here? Is this a bad thing to do? Please
elaborate.
Thanks,
Scott Gaspard
Scott.J.Gaspard at Sun.COM
> NAME          STATE   READ WRITE CKSUM
> chipool1      ONLINE     0     0     0
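Mixing parity levels in one pool is possible, but zpool itself objects; a hypothetical transcript (device names invented, error text paraphrased):
# zpool add chipool1 raidz2 c3t0d0 c3t1d0 c3t2d0 c3t3d0
invalid vdev specification
use '-f' to override the following errors:
mismatched replication level: pool and new vdev have different parity
# zpool add -f chipool1 raidz2 c3t0d0 c3t1d0 c3t2d0 c3t3d0    (forces the mix)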
2006 Oct 24
3
determining raidz pool configuration
Hi all,
Sorry for the newbie question, but I've looked at the docs and haven't
been able to find an answer for this.
I'm working with a system where the pool has already been configured and
want to determine what the configuration is. I had thought that'd be
with zpool status -v <poolname>, but it doesn't seem to agree with the
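Besides zpool status, zdb can print the cached configuration; a sketch with a hypothetical pool name:
zpool status -v tank    # vdev tree as ZFS currently sees it
zdb -C tank             # cached on-disk configuration, including vdev children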
2010 Dec 17
3
Recent (unfun) experience with cron resource on Solaris 10 with puppet 0.25.5
I was attempting to set up some cron jobs via puppet.
I was trying to get cron to mail the output of the jobs to a specific user, so I
attempted to set MAILTO=user@example.com via the environment => specifier.
Puppet did as it was told.
Unfortunately, Solaris 10 does not appear to support setting environment variables directly in crontab files, so
when puppet attempted to
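For reference, a hedged sketch of the kind of cron resource involved, with hypothetical names and command:
cron { 'nightly_report':
  command     => '/usr/local/bin/report.sh',
  user        => 'root',
  hour        => 2,
  minute      => 0,
  environment => 'MAILTO=user@example.com',
}
Where the platform's cron ignores environment lines, the same effect can be had by piping inside the command itself, e.g. command => '/usr/local/bin/report.sh 2>&1 | mailx -s report user@example.com'.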
2007 Sep 08
1
zpool degraded status after resilver completed
I am curious why zpool status reports a pool to be in the DEGRADED state
after a drive in a raidz2 vdev has been successfully replaced. In this
particular case drive c0t6d0 was failing, so I ran
zpool offline home c0t6d0
zpool replace home c0t6d0 c8t1d0
and after the resilvering finished the pool reports a degraded state.
Hopefully this is incorrect. At this point, the vdev in question
now has
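The usual cause is the old device still hanging under a "replacing" vdev; a sketch of the cleanup using the names from the post:
zpool status home          # look for a lingering "replacing" entry holding c0t6d0
zpool detach home c0t6d0   # drop the old device once the resilver is done
zpool clear home           # reset the error counters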
2010 Nov 08
8
Any limit on pool hierarchy?
Folks,
From the ZFS documentation, it appears that a "vdev" can be built from other vdevs. That is, a raidz vdev can be built across a bunch of mirrored vdevs, and a mirror can be built across a few raidz vdevs.
Is my understanding correct? Also, is there a limit on the depth of a vdev?
Thank you in advance for your help.
Regards,
Peter
--
This message posted from opensolaris.org
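In practice the zpool command line only accepts leaf devices inside a mirror or raidz group, so vdevs cannot be nested arbitrarily; only flat top-level vdevs sit side by side. A sketch with hypothetical device names:
# Valid: two top-level mirrors striped together
zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0
# Not expressible: a raidz made of mirrors; zpool rejects the
# nested keyword as an invalid vdev specification
zpool create tank raidz mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0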
2007 Jul 31
0
controller number mismatch
Hi,
I just noticed something interesting ... don't know whether it's
relevant or not (two commands run in succession during a 'nightly' run):
$ iostat -xnz 6
[...]
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0    0.3    0.0    0.8  0.0  0.0    0.2    0.2   0   0 c2t0d0
    2.2
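To check whether two controller numbers actually point at the same hardware, trace the /dev link back to the physical device path:
ls -l /dev/dsk/c2t0d0s0    # the symlink target shows the physical /devices path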
2010 Mar 17
0
checksum errors increasing on "spare" vdev?
Hi,
One of my colleagues was confused by the output of 'zpool status' on a pool
where a hot spare is being resilvered in after a drive failure:
$ zpool status data
  pool: data
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scrub:
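Checksum counters shown against the spare line are often aggregation artifacts of the replacing vdev rather than fresh damage; once the resilver completes they can be checked and cleared:
zpool status -v data    # lists files with permanent errors, if any
zpool clear data        # resets the per-vdev error counters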
2013 Feb 15
28
zfs-discuss mailing list & opensolaris EOL
So, I hear, in a couple weeks' time, opensolaris.org is shutting down. What does that mean for this mailing list? Should we all be moving over to something at illumos or something?
I'm going to encourage somebody in an official capacity at opensolaris to respond...
I'm going to discourage unofficial responses, like, illumos enthusiasts etc simply trying to get people
2009 Aug 04
3
Managing about 30 users?
I have about 30 dev and operations users on my machines; is there a
recipe anywhere for doing this? The best-practices doc on the wiki is
incomplete and confusing.
Also, any workaround for the ssh_authorized_key bug in 24.8? All I
really want to do is create users and home directories and put ssh keys
in them, but it tries to add the keys first, so it doesn't work.
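A sketch of the usual ordering workaround, making each key depend explicitly on its user (names and key material hypothetical):
user { 'alice':
  ensure     => present,
  home       => '/export/home/alice',
  managehome => true,
}

ssh_authorized_key { 'alice@workstation':
  ensure  => present,
  user    => 'alice',
  type    => 'ssh-rsa',
  key     => 'AAAA...',        # placeholder key material
  require => User['alice'],
}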
2006 Aug 28
1
Sol 10 x86_64 intermittent SATA device locks up server
Hello All,
I have two SATA cards with 5 drives each in one zfs pool, and one of
the devices has been intermittently failing. The problem is that the
entire box seems to lock up on occasion when this happens. I currently
have the SATA cable to that device disconnected in the hope that the
box will at least stay up for now. This is a new build that I am
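On Solaris, an alternative to pulling the cable is to unconfigure the device through the hotplug framework; a sketch with a hypothetical attachment point:
cfgadm -al                      # list SATA attachment points and their state
cfgadm -c unconfigure sata1/4   # take the flaky drive out of service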
2007 Sep 14
3
Convert Raid-Z to Mirror
Is there a way to convert a two-disk raidz pool to a mirror without backing up the data and restoring?
We have this:
bash-3.00# zpool status
  pool: archives
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        archives    ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            c1t2d0  ONLINE       0     0     0
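There is no in-place conversion of a raidz vdev to a mirror; a common route is to build the mirror as a second pool and send the data across. A sketch, assuming hypothetical spare devices and a release whose zfs send supports -R:
zpool create archives2 mirror c1t4d0 c1t5d0
zfs snapshot -r archives@migrate
zfs send -R archives@migrate | zfs receive -F -d archives2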
2008 Jul 28
1
zpool status my_pool , shows a pulled disk c1t6d0 as ONLINE ???
New server build with Solaris 10 u5/08
on a SunFire T5220; this is our first rollout of ZFS and zpools.
We have 8 disks; the boot disk is hardware mirrored (c1t0d0 + c1t1d0).
Created zpool my_pool as raidz using 5 disks + 1 spare:
c1t2d0, c1t3d0, c1t4d0, c1t5d0, c1t6d0, and spare c1t7d0.
I am working on alerting & recovery plans for disk failures in the zpool.
As a test, I have pulled disk
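ZFS generally notices a pulled disk only when I/O is issued to it, so forcing pool-wide I/O makes the failure show up:
zpool scrub my_pool       # touches every device, so the missing disk faults
zpool status -x my_pool   # reports only pools with problems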
2007 Mar 07
0
anyone want a Solaris 10u3 core file...
I executed sync just before this happened....
ultra:ultra# mdb -k unix.0 vmcore.0
Loading modules: [ unix krtld genunix specfs dtrace ufs sd pcipsy md ip sctp
usba fctl nca crypto zfs random nfs ptm cpc fcip sppp lofs ]
> $c
vpanic(7b653bd8, 7036fca0, 7036fc70, 7b652990, 0, 60002d0b480)
zio_done+0x284(60002d0b480, 0, a8, 7036fca0, 0, 60000b08d80)
zio_vdev_io_assess+0x178(60002d0b480, 8000,
2010 Jan 13
3
Recovering a broken mirror
We have a production SunFire V240 that had a zfs mirror until this week. One of the drives (c1t3d0) in the mirror failed.
The system was shut down and the bad disk replaced without an export.
I don't know what happened next, but by the time I got involved there was no evidence that the remaining good disk (c1t2d0) had ever been part of a ZFS mirror.
Using dd on the raw device I can see data
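A quick first check is whether any ZFS labels survive on the remaining disk; zdb can dump them directly (the slice name is an assumption):
zdb -l /dev/rdsk/c1t2d0s0    # dumps the four ZFS labels, if any survive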
2007 Oct 12
0
zfs: allocating allocated segment(offset=77984887808 size=66560)
how does one free segment(offset=77984887808 size=66560)
on a pool that won't import?
looks like I found
http://bugs.opensolaris.org/view_bug.do?bug_id=6580715
http://mail.opensolaris.org/pipermail/zfs-discuss/2007-September/042541.html
when I used luupgrade on a ufs partition (a b62 DVD install
that had been BFU'd to b68) with a b74 DVD,
it booted fine and I was doing the same thing
that I had done on
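The workaround discussed for this class of panic in that era was to relax the allocator's assertions via /etc/system, import the pool, and evacuate the data; these tunables are unsupported and strictly a rescue measure (sketch):
* /etc/system (an asterisk starts a comment in this file)
set zfs:zfs_recover=1
set aok=1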
2008 Jun 07
4
Mixing RAID levels in a pool
Hi,
I had a plan to set up a zfs pool with different raid levels, but I ran
into an issue based on some testing I've done in a VM. I have 3x 750
GB hard drives and 2x 320 GB hard drives available, and I want to set
up a raidz for the 750 GB drives and a mirror for the 320 GB drives and
add it all to the same pool.
I tested detaching a drive and it seems to seriously mess up the
entire pool and I
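For what it's worth, zpool detach applies only to mirror members and permanently removes the device, which would explain the mess; offlining is the reversible way to simulate a failure. A sketch with hypothetical device names (mixing raidz and mirror top-level vdevs makes zpool create demand -f):
zpool create -f tank raidz c0t0d0 c0t1d0 c0t2d0 mirror c0t3d0 c0t4d0
zpool offline tank c0t1d0    # simulate a failed disk; reversible
zpool online tank c0t1d0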