Displaying 20 results from an estimated 10000 matches similar to: "Have my RMA... Now what??"
2012 Jan 17
6
Failing WD desktop drive in mirror, how to identify?
I have a desktop system with 2 ZFS mirrors. One drive in one mirror is
starting to produce read errors and slowing things down dramatically. I
detached it and the system is running fine. I can't tell which drive it is,
though! The error message and the format command tell me which pair the bad
drive is in, but I don't know how to get any more information than that, like
the serial number
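On Solaris, iostat can map a cXtYdZ device name to a model and serial number, which is usually enough to identify the physical drive. A minimal sketch (the device name and serial shown are hypothetical):

# List per-device error counts, vendor, model, and serial number
iostat -En

# Typical output fragment for the suspect disk:
# c1t1d0  Soft Errors: 0 Hard Errors: 12 Transport Errors: 0
# Vendor: ATA  Product: WDC WD10EARS-00  Serial No: WD-WCAV512345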
2010 Jun 18
25
Erratic behavior on 24T zpool
Well, I've searched my brains out and I can't seem to find a reason for this.
I'm getting bad to medium performance with my new test storage device. I've got 24 1.5T disks with 2 SSDs configured as a ZIL log device. I'm using the Areca RAID controller (the arcmsr driver), a quad-core AMD with 16 GB of RAM, and OpenSolaris upgraded to snv_134.
The zpool
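As a first diagnostic for a pool like this, per-device statistics usually show whether one disk or the log device is the bottleneck. A sketch, assuming the pool is named tank:

# Per-vdev and per-disk bandwidth and IOPS, refreshed every 5 seconds
zpool iostat -v tank 5

# Cross-check per-disk service times; one slow disk shows up as a
# disproportionate asvc_t (average service time, in ms)
iostat -xn 5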
2009 Jul 10
5
Slow Resilvering Performance
I know this topic has been discussed many times... but what the hell
makes zpool resilvering so slow? I'm running OpenSolaris 2009.06.
I have had a large number of problematic disks due to a bad production
batch, leading me to resilver quite a few times, progressively
replacing each disk as it dies (and now preemptively removing disks).
My complaint is that resilvering ends up
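For reference, the disk swaps being described map onto zpool replace. A sketch with hypothetical pool and device names:

# Replace a dying disk with a new one
zpool replace tank c2t5d0 c2t6d0

# If the replacement goes into the same slot and keeps the same name,
# the single-argument form is enough
zpool replace tank c2t5d0

# Watch the resulting resilver
zpool status tank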
2009 Oct 14
14
ZFS disk failure question
So, my Areca controller has been complaining via email about read errors for a couple of days on SATA channel 8. The disk finally gave up last night at 17:40. I have to say, I really appreciate the Areca controller taking such good care of me.
For some reason, I wasn't able to log into the server last night or in the morning, probably because my home dir was on the zpool with the failed disk
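When a disk drops out like this, the Solaris fault manager usually has the event trail. A sketch of where one might look:

# Summarize resources FMA has faulted (disks, pools, etc.)
fmadm faulty

# Dump the underlying error telemetry, e.g. the accumulated read errors
fmdump -eV | less

# Confirm what ZFS itself thinks of the pool
zpool status -x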
2010 Jul 05
5
never ending resilver
Hi list,
Here's my case:
  pool: mypool
 state: DEGRADED
status: One or more devices is currently being resilvered. The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scrub: resilver in progress for 147h19m, 100.00% done, 0h0m to go
config:

        NAME            STATE     READ WRITE CKSUM
        filerbackup13
2010 Nov 01
6
Excruciatingly slow resilvering on X4540 (build 134)
Hello,
I'm working with someone who replaced a failed 1TB drive (50% utilized)
on an X4540 running OS build 134, and I think something must be wrong.
Last Tuesday afternoon, zpool status reported:
scrub: resilver in progress for 306h0m, 63.87% done, 173h7m to go
and a week being 168 hours, that put completion at sometime tomorrow night.
However, he just reported zpool status shows:
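As a sanity check, the first report's numbers are internally consistent: 306h0m elapsed at 63.87% done implies roughly 306 / 0.6387 ≈ 479h total, leaving 479 - 306 ≈ 173h, which matches the quoted 173h7m; and 173 / 24 ≈ 7.2 days is indeed about a week, i.e. tomorrow night.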
2012 Nov 30
13
Remove disk
Hi all,
I would like to know whether with ZFS it's possible to do something like this:
http://tldp.org/HOWTO/LVM-HOWTO/removeadisk.html
meaning:
I have a zpool of 48 disks arranged as 4 raidz2 vdevs (12 disks each). Of those
48 disks, 36 are 3T and 12 are 2T.
Can I buy 12 new 4T disks, put them in the server, add them to the zpool, and ask
zpool to migrate all the data on those 12 old disks onto the new ones and
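ZFS cannot evacuate a top-level raidz vdev, so the usual route to the same end is a rolling replace of the 2T disks with 4T ones inside their raidz2. A sketch with hypothetical pool and device names:

# Let each vdev grow automatically once all its members are bigger
zpool set autoexpand=on mypool

# Replace the 2T disks with 4T disks one at a time, waiting for each
# resilver to complete before starting the next
zpool replace mypool c3t0d0 c4t0d0
zpool status mypool    # wait for "resilver completed"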
2011 Jun 01
11
SATA disk perf question
I figure this group will know better than any other I have contact
with: is 700-800 IOPS reasonable for a 7200 RPM SATA drive (1 TB Sun-badged
Seagate ST31000N in a J4400)? I have a resilver running and am
seeing about 700-800 writes/sec on the hot spare as it resilvers.
There is no other I/O activity on this box, as this is a remote
replication target for production data. I have a the
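As a rough sanity check on that number: a 7200 RPM drive has an average rotational latency of (60000 / 7200) / 2 ≈ 4.2 ms, and with a typical ~8.5 ms average seek, fully random I/O tops out near 1000 / (8.5 + 4.2) ≈ 80 IOPS. Sustaining 700-800 writes/sec therefore implies the resilver writes are largely sequential or coalesced by the drive's cache, not 700-800 random seeks.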
2010 Oct 16
4
resilver question
Hi all
I'm seeing some rather bad resilver times for a pool of WD Green drives (I know, bad drives, but leave that). Does resilver go through the whole pool or just the VDEV in question?
--
Vennlige hilsener / Best regards
roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
roy at karlsbakk.net
http://blogg.karlsbakk.net/
--
In all pedagogy it is essential that the curriculum is presented
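One way to answer this empirically is to watch which disks are busy while the resilver runs; if only the members of one vdev show heavy reads, the resilver is vdev-local. A sketch:

# Per-disk throughput and utilization, refreshed every 5 seconds
iostat -xn 5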
2009 Nov 13
11
scrub differs in execute time?
I have a raidz2 and did a scrub; it took 8h. Then I reconnected some drives to other SATA ports, and now it takes 15h to scrub??
Why is that?
2007 Jan 11
4
Help understanding some benchmark results
G'day, all,
So, I've decided to migrate my home server from Linux+swRAID+LVM to Solaris+ZFS, because it seems to hold much better promise for data integrity, which is my primary concern.
However, naturally, I decided to do some benchmarks in the process, and I don't understand why the results are what they are. I thought I had a reasonable understanding of ZFS, but now
2010 Apr 26
23
SAS vs SATA: Same size, same speed, why SAS?
I'm building another 24-bay rackmount storage server, and I'm considering
what drives to put in the bays. My chassis is a Supermicro SC846A, so the
backplane supports SAS or SATA; my controllers are LSI3081E, again
supporting SAS or SATA.
Looking at drives, Seagate offers an enterprise (Constellation) 2TB 7200RPM
drive in both SAS and SATA configurations; the SAS model offers
2007 Nov 02
2
What is the correct way to replace a good disk?
I have a 9-bay JBOD configured as a raidz2. One of the disks, which is online and fine, needs to be swapped out and replaced. I have been looking through the ZFS admin guide and am confused about how I should go about the swap. I thought I could take the disk offline, remove it, put a new disk in, and bring it back online. Does that sound right?
Any help would be great
Thanks
Chris
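The procedure being described maps onto zpool and cfgadm roughly as follows; a sketch with hypothetical pool, device, and attachment-point names:

# Take the disk out of service
zpool offline tank c1t4d0

# Unconfigure the slot so the drive can be pulled safely
cfgadm -c unconfigure sata1/4
# ...physically swap the drive, then...
cfgadm -c configure sata1/4

# Rebuild onto the new disk occupying the same name
zpool replace tank c1t4d0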
2009 Mar 28
53
Can this be done?
I currently have a 7x1.5TB raidz1.
I want to add "phase 2", which is another 7x1.5TB raidz1.
Can I add the second phase to the first phase and basically have two
RAID5s striped (in RAID terms)?
Yes, I probably should upgrade the zpool format too. Currently running
snv_104. Also should upgrade to 110.
If that is possible, would anyone happen to have the simple command
lines to
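A sketch of the command lines in question, with hypothetical pool and device names; since zpool add is irreversible, a dry run first is cheap insurance:

# Preview, then add a second 7-disk raidz1 vdev; writes then stripe
# across both vdevs (two RAID5s striped, in RAID terms)
zpool add -n tank raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0
zpool add tank raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0

# Upgrade the on-disk pool format to the current version
zpool upgrade tank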
2010 Oct 19
8
Balancing LVOL fill?
Hi all
I have this server with some 50TB disk space. It originally had 30TB on WD Greens, was filled quite full, and another storage chassis was added. Now, space problem gone, fine, but what about speed? Three of the VDEVs are quite full, as indicated below. VDEV #3 (the one with the spare active) just spent some 72 hours resilvering a 2TB drive. Now, those green drives suck quite hard, but not
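ZFS does not rebalance existing data across vdevs; it only biases new writes toward the emptier ones. One hedged workaround is rewriting a dataset so its blocks are reallocated pool-wide, sketched here with hypothetical names:

# See how full each top-level vdev is
zpool iostat -v tank

# Rewrite a dataset via send/receive, then swap the names
zfs snapshot tank/data@rebalance
zfs send tank/data@rebalance | zfs receive tank/data.new
zfs rename tank/data tank/data.old
zfs rename tank/data.new tank/data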
2010 Jan 25
24
Going from 6 to 8 disks on ASUS M2N-SLI Deluxe motherboard
My current home fileserver (running OpenSolaris 111b and ZFS) has an ASUS
M2N-SLI DELUXE motherboard. This has 6 SATA connections, which are
currently all in use (mirrored pair of 80GB for system zfs pool, two
mirrors of 400GB both in my data pool).
I've got two more hot-swap drive bays. And I'm getting up towards 90%
full on the data pool. So, it's time to expand,
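Filling the two extra bays would add one more top-level mirror; a sketch with hypothetical device names:

# Add a third mirror vdev to the data pool; capacity and IOPS
# grow immediately, with no resilver required
zpool add datapool mirror c5t0d0 c5t1d0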
2011 Jan 18
4
Zpool Import Hanging
Hi All,
I believe this has been asked before, but I wasn't able to find too much
information about the subject. Long story short, I was moving data around on
a storage zpool of mine and a zfs destroy <filesystem> hung (or so I
thought). This pool had dedup turned on at times while imported as well;
it's running on a Nexenta Core 3.0.1 box (snv_134f).
The first time the machine was
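Destroying a deduplicated filesystem has to walk the dedup table, which can look like a hang for many hours. Two hedged ways to tell whether the pool is actually making progress, assuming it is named tank:

# Steady small reads while the import grinds suggest DDT traversal
# rather than a true hang
iostat -xn 5

# On an exported pool, inspect dedup table statistics
zdb -e -DD tank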
2007 Dec 23
11
RAIDZ(2) expansion?
I skimmed the archives and found a thread from July earlier this year
about RAIDZ expansion. Not adding more RAIDZ stripes to a pool, but
adding more drives to the stripe itself. I'm wondering if an RFE has
been submitted for this and if any progress has been made, or is
expected? I find myself out of space on my current RAID5 setup and
would love to flip over to a ZFS raidz2 solution
2010 Nov 06
10
Apparent SAS HBA failure-- now what?
My setup: A SuperMicro 24-drive chassis with Intel dual-processor
motherboard, three LSI SAS3081E controllers, and 24 SATA 2TB hard drives,
divided into three pools with each pool a single eight-disk RAID-Z2. (Boot
is an SSD connected to motherboard SATA.)
This morning I got a cheerful email from my monitoring script: "Zchecker has
discovered a problem on bigdawg." The full output is
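To separate a failed HBA from failed disks, the fault manager and the controller topology are the natural first stops. A sketch:

# Any HBA, disk, or pool resources FMA has faulted
fmadm faulty

# List controllers and attachment points; targets behind a dead HBA
# typically show up as unavailable or disconnected
cfgadm -al

# What ZFS sees
zpool status -x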
2010 Sep 29
10
Resilver making the system unresponsive
This must be resilver day :)
I just had a drive failure. The hot spare kicked in, and access to the pool over NFS was effectively zero for about 45 minutes. Currently the pool is still resilvering, but for some reason I can access the file system now.
Resilver speed has been beaten to death, I know, but is there a way to avoid this? For example, is more enterprisy hardware less susceptible to
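On builds of that era, resilver pacing was tunable live through mdb; a hedged sketch (these are kernel tunables, so names, defaults, and availability vary by build):

# Inspect the current resilver throttle
echo "zfs_resilver_delay/D" | mdb -k

# Inject a longer delay per resilver I/O so foreground traffic
# keeps breathing
echo "zfs_resilver_delay/W0t4" | mdb -kw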