Displaying 20 results from an estimated 1000 matches similar to: "remove disk (again)"
2008 Apr 11
0
How to replace root drive if ZFS data is on it?
Hi, Experts:
A customer has an X4500 with the boot drives mirrored (c5t0d0s0 and
c5t4d0s0) by SVM.
ZFS uses two other partitions on these drives (c5t0d0s3 and c5t4d0s3).
If we need to replace the disk drive c5t0d0, do we need to do anything
on the ZFS side (c5t0d0s3 and c5t4d0s3) first, or just follow the
regular boot drive replacement procedure?
Below is the summary of their current ZFS
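A hedged sketch of the usual order of operations, assuming hypothetical SVM metadevice names (d10/d11) and a hypothetical ZFS pool name (tank), neither of which is given in the post:
# 1) quiesce the ZFS slice on the failing disk first
zpool offline tank c5t0d0s3
# 2) then the SVM side: detach the submirror and delete any metadb replicas on the disk
metadetach d10 d11          # d10 = mirror, d11 = submirror on c5t0d0 (hypothetical names)
metadb -d c5t0d0s7          # only if state database replicas live on that slice
# 3) physically swap the drive, copy the partition table from the surviving disk
prtvtoc /dev/rdsk/c5t4d0s2 | fmthard -s - /dev/rdsk/c5t0d0s2
# 4) rebuild both halves and restore the boot blocks (x86)
metattach d10 d11
zpool replace tank c5t0d0s3
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c5t0d0s0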
2009 Jan 27
5
Replacing HDD in x4500
The vendor wanted to come in and replace an HDD in the 2nd X4500, as it
was "constantly busy", and since our X4500 has always died miserably in
the past when an HDD dies, they wanted to replace it before the HDD
actually died.
The usual was done: HDD replaced, resilvering started and ran for about
50 minutes. Then the system hung, same as always; all ZFS-related
commands would just
2008 Jan 30
2
Convert MBOX
To all,
I am using dovecot --version 1.0.10 and trying to convert MBOXes to
Maildirs, with the end goal of creating one folder filled with users' old
MBOXes that, when they log in for the first time, will be converted to
Maildir format.
I tried this and it did not work; it gave me this output:
<snip>
default_mail_env = maildir:%h/mail/
#convert_mail =
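For reference, the convert plugin in Dovecot 1.0.x is normally configured roughly as below; the old-mbox location and paths are assumptions, not taken from the post:
# dovecot.conf -- minimal sketch of the 1.0.x convert plugin
default_mail_env = maildir:%h/mail/
protocol imap {
  mail_plugins = convert
}
plugin {
  # convert the user's old mbox hierarchy into the maildir on first login
  convert_mail = mbox:%h/mail-old:INBOX=/var/mail/%u
}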
2009 Jun 19
8
x4500 resilvering spare taking forever?
I've got a Thumper running snv_57 and a large ZFS pool. I recently noticed a drive throwing some read errors, so I did the right thing and replaced it with a spare ('zpool replace').
Everything went well, but the resilvering process seems to be taking an eternity:
# zpool status
pool: bigpool
state: ONLINE
status: One or more devices has experienced an unrecoverable error. An
attempt was
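Whether the resilver is actually making progress can be checked with a few standard commands (generic usage, not taken from the post):
zpool status -v bigpool     # shows "resilver in progress", percent done and estimated time
zpool iostat -v bigpool 5   # per-vdev read/write bandwidth while it resilvers
iostat -xn 5                # confirm the replacement disk is really being written to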
2007 Dec 13
0
zpool version 3 & Uberblock version 9, zpool upgrade only half succeeded?
We are currently experiencing a huge performance drop on our ZFS storage server.
We have two pools: 'stor' is a raidz out of 7 iSCSI nodes, and 'home' is a local mirror pool. Recently we had some issues with one of the storage nodes, and because of that the pool was degraded. Since we did not succeed in bringing this storage node back online (on the ZFS level), we upgraded our NAS head from OpenSolaris b57
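A quick way to see what version each pool and each disk label actually carries (the device name below is hypothetical):
zpool upgrade               # lists pools still running an older on-disk version
zpool upgrade -v            # versions supported by the running ZFS bits
zdb -l /dev/dsk/c1t0d0s0    # dump the vdev labels; the "version" field shows what is on disk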
2012 Nov 11
0
Expanding a ZFS pool disk in Solaris 10 on VMWare (or other expandable storage technology)
Hello all,
This is not so much a question but rather a "how-to" for posterity.
Comments and possible fixes are welcome, though.
I'm toying (for work) with a Solaris 10 VM, and it has a dedicated
virtual HDD for data and zones. The template VM had a 20 GB disk,
but a particular application needs more. I hoped ZFS autoexpand
would do the trick transparently, but it turned out
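The gist of the procedure, sketched with a hypothetical device c1t1d0 and pool datapool; note that the autoexpand property and 'zpool online -e' only exist on newer Solaris 10 updates:
# after growing the virtual disk in VMware:
format -e c1t1d0                     # relabel the disk so Solaris sees the new capacity
zpool set autoexpand=on datapool     # where the property is available
zpool online -e datapool c1t1d0      # explicitly grow the vdev into the new space
zpool list datapool                  # SIZE should now reflect the larger disk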
2009 Oct 27
2
root pool can not have multiple vdevs ?
This seems like a bit of a restriction... is this intended?
# cat /etc/release
Solaris Express Community Edition snv_125 SPARC
Copyright 2009 Sun Microsystems, Inc. All Rights Reserved.
Use is subject to license terms.
Assembled 05 October 2009
# uname -a
SunOS neptune 5.11 snv_125 sun4u sparc SUNW,Sun-Fire-880
#
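For what it's worth, this is a documented root-pool restriction: a root pool is limited to a single top-level vdev, optionally mirrored. A sketch with hypothetical device names; the exact error text may differ by build:
zpool add rpool c0t1d0s0
# fails with an error along the lines of:
#   cannot add to 'rpool': root pool can not have multiple vdevs or separate logs
zpool attach rpool c0t0d0s0 c0t1d0s0   # adding another mirror side is allowed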
2008 Sep 16
3
iscsi target problems on snv_97
I've recently upgraded my x4500 to Nevada build 97, and am having problems with the iscsi target.
Background: this box is used to serve NFS underlying a VMware ESX environment (zfs filesystem-type datasets), and presents iSCSI targets (zfs zvol datasets) for a Windows host and as zone roots for Solaris 10 hosts. For optimal random-read performance, I've configured a single
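For context, a zvol-backed target on that vintage is typically set up as below (dataset name and size are hypothetical), which is also where to start checking after the snv_97 upgrade:
zfs create -V 200g tank/winlun         # the zvol backing the LUN
zfs set shareiscsi=on tank/winlun      # export it via the iscsitgt daemon
iscsitadm list target -v               # confirm the target, LUN and any sessions
svcs -xv svc:/system/iscsitgt:default  # check the target service is online and error-free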
2008 Jun 07
4
Mixing RAID levels in a pool
Hi,
I had a plan to set up a zfs pool with different RAID levels, but I ran
into an issue based on some testing I've done in a VM. I have 3x 750
GB hard drives and 2x 320 GB hard drives available, and I want to set
up a RAIDZ for the 750 GB and mirror for the 320 GB and add it all to
the same pool.
I tested detaching a drive and it seems to seriously mess up the
entire pool and I
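A sketch of the intended layout with hypothetical device names; note that zpool warns about the mismatched replication levels, and that a failed disk is normally handled with offline/replace rather than detach:
zpool create tank raidz c1t1d0 c1t2d0 c1t3d0   # the 3x 750 GB drives
zpool add -f tank mirror c1t4d0 c1t5d0         # the 2x 320 GB drives; -f overrides the
                                               # "mismatched replication level" warning
# simulating a failure: take the disk offline and replace it, rather than detaching it
zpool offline tank c1t4d0
zpool replace tank c1t4d0 c1t6d0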
2010 Feb 18
3
improve meta data performance
We have a SunFire X4500 running Solaris 10U5 which does about 5-8k NFS ops,
of which about 90% are metadata. In hindsight it would have been
significantly better to use a mirrored configuration, but we opted for 4 x
(9+2) raidz2 at the time. We cannot take the downtime necessary to change
the zpool configuration.
We need to improve the metadata performance with little to no money. Does
anyone
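The usual low-cost knobs, sketched with hypothetical pool/dataset names and hedged because some of them only exist on Solaris 10 updates newer than 10U5:
zfs set atime=off tank/export               # cut a large class of metadata writes
zpool add tank cache c2t0d0                 # an SSD as L2ARC, where cache vdevs are supported
zfs set primarycache=metadata tank/export   # cache only metadata in the ARC for this dataset (newer releases)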
2010 Jun 07
2
NOTICE: spa_import_rootpool: error 5
IHAC (I have a customer) who has an X4500 (x86 box) with a ZFS root
filesystem. They installed patches today,
the latest Solaris 10 x86 recommended patch cluster, and the patching
seemed to complete
successfully. Then when they tried to reboot the box, the machine would
not boot. They
get the following error:
NOTICE: spa_import_rootpool: error 5
Cannot mount root on
/pci at
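A hedged sketch of the usual recovery path from failsafe or install media; the pool, boot-environment and device names below are assumptions:
zpool import -f -R /a rpool                   # import the root pool under an alternate root
zpool get bootfs rpool                        # verify the boot dataset is still set
bootadm update-archive -R /a                  # rebuild the boot archive patching may have left stale
installgrub /a/boot/grub/stage1 /a/boot/grub/stage2 /dev/rdsk/c5t0d0s0   # x86 boot blocks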
2004 Mar 02
1
Samba 3.0.2 make command RUN AMUCK! fills filesystem...
Listers:
Am trying to compile Samba 3.0.2a on Solaris (9 - SPARC). After a bit of fiddling, we were encouraged, having finally
gotten the configuration to work smoothly. (This was itself a major coup; glad to share config and results with anyone
trying the same thing...)
However, the 'make' command goes nuts, filling the filesystem (with almost 900 MB!) before dying with a 'no
2008 Jul 29
8
questions about ZFS Send/Receive
Hi guys,
we are proposing to a customer a couple of X4500s (24 TB) used as NAS
(i.e. NFS servers).
Both servers will contain the same files and should be accessed by
different clients at the same time (i.e. they should both be active).
So we need to guarantee that both X4500s contain the same files:
We could simply copy the contents to both X4500s, which is an option
because the "new
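The snapshot-shipping alternative they are weighing looks roughly like this; the dataset, snapshot and host names are hypothetical:
# initial full copy to the second x4500
zfs snapshot tank/data@sync1
zfs send tank/data@sync1 | ssh thumper2 zfs receive -F tank/data
# afterwards, ship only the deltas since the last common snapshot
zfs snapshot tank/data@sync2
zfs send -i tank/data@sync1 tank/data@sync2 | ssh thumper2 zfs receive -F tank/data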
2008 Apr 01
29
OpenSolaris ZFS NAS Setup
If it's of interest, I've written up some articles on my experiences of building a ZFS NAS box which you can read here:
http://breden.org.uk/2008/03/02/a-home-fileserver-using-zfs/
I used CIFS to share the filesystems, but it will be a simple matter to use NFS instead: issue the command 'zfs set sharenfs=on pool/filesystem' instead of 'zfs set
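The two share properties in question, for a hypothetical dataset:
zfs set sharesmb=on tank/media    # CIFS, via the in-kernel SMB server
zfs set sharenfs=on tank/media    # NFS instead (or in addition)
zfs get sharenfs,sharesmb tank/media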
2009 Feb 12
1
strange 'too many errors' msg
Hi,
just found on an X4500 with S10u6:
fmd: [ID 441519 daemon.error] SUNW-MSG-ID: ZFS-8000-GH, TYPE: Fault, VER: 1, SEVERITY: Major
EVENT-TIME: Wed Feb 11 16:03:26 CET 2009
PLATFORM: Sun Fire X4500, CSN: 00:14:4F:20:E0:2C , HOSTNAME: peng
SOURCE: zfs-diagnosis, REV: 1.0
EVENT-ID: 74e6f0ec-b1e7-e49b-8d71-dc1c9b68ad2b
DESC: The number of checksum errors associated with a ZFS device exceeded
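To dig into this kind of fault and clear it once the underlying cause is handled (the UUID is the one from the message above; the pool name is not given, so 'tank' is a placeholder):
fmdump -v -u 74e6f0ec-b1e7-e49b-8d71-dc1c9b68ad2b   # full detail for that event
zpool status -xv                                     # which pool/device took the checksum errors
fmadm repair 74e6f0ec-b1e7-e49b-8d71-dc1c9b68ad2b    # mark the fault repaired afterwards
zpool clear tank                                     # reset the error counters on the device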
2009 Sep 25
1
NLM_DENIED_NOLOCKS Solaris 10u5 X4500
This was previously posted to the sun-managers mailing list, but the only
reply I received recommended I post here as well.
We have a production Solaris 10u5 / ZFS X4500 file server which is
reporting NLM_DENIED_NOLOCKS immediately for any NFS locking request. The
lockd does not appear to be busy so is it possible we have hit some sort of
limit on the number of files that can be locked? Are there
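On Solaris 10 the lock manager limits live in /etc/default/nfs; a hedged sketch of where to look (the values shown are just examples, not known defaults):
grep LOCKD /etc/default/nfs
# LOCKD_LISTEN_BACKLOG=32
# LOCKD_SERVERS=20
# LOCKD_RETRANSMIT_TIMEOUT=5
# after raising LOCKD_SERVERS, restart the lock manager:
svcadm restart svc:/network/nfs/nlockmgr:default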
2007 Oct 27
14
X4500 device disconnect problem persists
After applying 125205-07 on two X4500 machines running Sol10U4 and
removing "set sata:sata_func_enable = 0x5" from /etc/system to
re-enable NCQ, I am again observing drive disconnect error messages.
This is in spite of the patch description, which claims multiple fixes
in this area:
6587133 repeated DMA command timeouts and device resets on x4500
6538627 x4500 message logs contain multiple
2008 Jul 02
0
Q: grow zpool build on top of iSCSI devices
Hi all.
We currently move out a number of iSCSI servers based on Thumpers
(x4500) running both Solaris 10 and OpenSolaris build 90+. The
targets on the machines are based on ZVOLs. Some of the clients use those
iSCSI "disks" to build mirrored zpools. As the volume size on the x4500
can easily be grown, I would like to know if that growth in space can be
propagated to the client
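The sequence being asked about would look roughly like this; the zvol, pool and device names are hypothetical, and 'zpool online -e' / autoexpand only exist on newer builds:
# on the x4500 (target side): grow the backing zvol
zfs set volsize=500g tank/iscsivol
# on the client (initiator side): pick up the new LUN size and expand the pool
devfsadm -i iscsi
zpool online -e datapool c4t1d0     # or: zpool set autoexpand=on datapool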
2012 Jun 10
0
A disk on Thumper giving random CKSUM error counts
Hello all,
As some of you might remember, there is a Sun Fire X4500
(Thumper) server that I was asked to help upgrade to modern
disks. It is still in a testing phase, and the one UltraStar
3TB currently available to the server's owners is humming
in the server, with one small partition on its tail which
replaced a dead 250GB disk earlier in the pool. The OS is
still SXCE snv_117, so
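Narrowing down whether the disk, the slot/cabling, or transient events are to blame usually starts with the following (pool and device names hypothetical):
zpool status -v pond       # per-device READ/WRITE/CKSUM counters and affected files
fmdump -eV | less          # the raw ereports behind the counters
zpool clear pond c5t6d0    # reset the counters for that disk...
zpool scrub pond           # ...then re-read everything and see if the errors come back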
2008 Dec 23
1
Upgrade from UFS Sol 10u5 to ZFS Sol 10u6/OS 2008.11[SEC=UNCLASSIFIED]
Hi ZFS gods,
I have an x4500 I wish to upgrade from an SVM UFS Sol 10u5 to a ZFS
rpool 10u6 or OpenSolaris.
Since I know (via backing up my sharetab) what shares I need to have
(all NFS shares - no CIFS on this 4500 - YAY) and have organised
downtime for this server, would it be easier for me to go to Solaris
10u6 (or OpenSolaris) by just installing from scratch and re-importing
the ZPOOL
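If the data pool is separate from the boot disks, the fresh-install path is basically export, reinstall, import; share settings stored as ZFS properties travel with the pool. The pool name and backup path below are hypothetical:
zpool export datapool                    # before wiping/reinstalling the boot disks
cp /etc/dfs/sharetab /some/safe/place/   # the sharetab backup mentioned above
# ... install Solaris 10u6 / OpenSolaris onto a ZFS rpool ...
zpool import datapool
zfs get -r sharenfs datapool             # sharenfs properties come back with the pool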