similar to: Resilver making the system unresponsive

Displaying 20 results from an estimated 600 matches similar to: "Resilver making the system unresponsive"

2008 Sep 05
6
resilver speed.
Is there any way to control the resilver speed? Having attached a third disk to a mirror (so I can replace the other disks with larger ones), the resilver goes at a fraction of the speed of the same operation under DiskSuite, yet it still renders the system pretty much unusable for anything else. So I would like to control the rate of the resilver. Either slow it down a lot so that the
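For what it's worth, later OpenSolaris/illumos builds throttle resilver with kernel tunables rather than a zpool option; the exact names and defaults vary by build, so treat this only as a sketch:

    Slow resilver I/O on the live system (zfs_resilver_delay is a per-I/O delay in ticks; default is 2):
      # echo zfs_resilver_delay/W0t10 | mdb -kw
    Or persistently, in /etc/system:
      set zfs:zfs_resilver_delay = 10
      set zfs:zfs_resilver_min_time_ms = 1000

Raising the delay and lowering the per-txg minimum time makes the resilver take longer but leaves more I/O headroom for everything else.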
2006 Sep 15
8
resilvering, how long will it take?
Being resilvered 444.00 GB 168.21 GB 158.73 GB Just wondering if anyone has any rough guesstimate of how long this will take? It's 3 x 1200JB ATA drives and one Seagate SATA drive; the SATA drive is the one that was replaced. Any idea how long this will take? As in 5 hours? 2 days? I don't see any way to get a status update on where it's at in the resilvering
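For reference, "zpool status" on the resilvering pool does report progress and an estimate while it runs; output along these lines (illustrative, not the original poster's):

    # zpool status tank
      ...
      scrub: resilver in progress, 37.52% done, 4h11m to go

The estimate is recalculated as the resilver proceeds, so it can swing considerably on a busy pool.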
2010 Sep 29
7
Is there any way to stop a resilver?
Is there any way to stop a resilver? We gotta stop this thing - at minimum, completion time is 300,000 hours, and maximum is in the millions. Raidz2 array, so it has the redundancy, we just need to get data off.
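There is no direct command to cancel a resilver on these builds ("zpool scrub -s" only stops a scrub). If the resilver was triggered by an attach or replace, detaching the incoming device usually cancels it; a sketch with a made-up device name:

    # zpool detach tank c7t3d0

The pool stays readable throughout, so data can also be copied off while the resilver is still running.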
2008 Jan 15
4
Moving zfs to an iSCSI EqualLogic LUN
We have a mirror setup in ZFS that's 73GB (two internal disks on a Sun Fire V440). We are going to attach this system to an EqualLogic box, and will attach an iSCSI LUN of about 200GB from the EqualLogic box to the V440. The EqualLogic box is configured as hardware RAID 50 (two hot spares for redundancy). My question is what's the best approach to moving the ZFS
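A hedged sketch of the mirror-walk approach (device names are made up): attach the iSCSI LUN as another side of the existing mirror, let it resilver, then drop the internal disks:

    # zpool attach tank c1t0d0 c4t600A0B8000XXXXd0    (add the LUN as a mirror member)
    ... wait for the resilver to complete ...
    # zpool detach tank c1t0d0
    # zpool detach tank c1t1d0

Note the pool will not grow to the 200GB LUN size until every remaining member is the larger size, and depending on the release that may also need an export/import. The other common route is a new pool on the LUN plus zfs send/recv.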
2010 Dec 05
4
ZFS ignoring spares?
Hi all I have installed a new server with 77 2TB drives in 11 7-drive RAIDz2 VDEVs, all on WD Black drives. Now, it seems two of these drives were bad: one of them had a bunch of errors, the other was very slow. After zfs offlining these and then zfs replacing them with online spares, the resilver ended and I thought it'd be OK. Apparently not. Although the resilver succeeds, the pool status
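For what it's worth, a hot spare that has taken over normally stays flagged INUSE, and the original device keeps showing up as faulted, until the situation is resolved one way or the other; roughly, with hypothetical device names:

    # zpool detach tank c5t2d0              (drop the failed disk: the spare becomes a permanent member)
      or
    # zpool replace tank c5t2d0 c5t9d0      (put in a real replacement; the spare then returns to AVAIL)

This is a sketch from memory of the spare workflow, so check zpool status after each step.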
2010 Oct 17
10
RaidzN blocksize ... or blocksize in general ... and resilver
The default blocksize is 128K. If you are using mirrors, then each block on disk will be 128K whenever possible. But if you're using raidzN with a capacity of M disks (M disks useful capacity + N disks redundancy) then the block size on each individual disk will be 128K / M. Right? This is one of the reasons the raidzN resilver code is inefficient. Since you end up waiting for the
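As a concrete worked example (assuming the 128K / M description above is right): a 7-disk raidz2 vdev has M = 5 data disks, so a full 128K record is cut into roughly 128K / 5 ≈ 26K per data disk, plus two parity pieces. Every logical block therefore touches all disks in the vdev, and a block-pointer-walking resilver turns into many ~26K random reads on every member rather than one long sequential stream, which is the inefficiency being referred to.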
2010 Jan 28
13
ZFS configuration suggestion with 24 drives
Replacing my current media server with another larger-capacity media server, and also switching over to Solaris/ZFS. Anyhow, we have 24-drive capacity. These are for large sequential access (large media files) used by no more than 3 or 5 users at a time. I'm inquiring as to what the best configuration for this is for vdevs. I'm considering the following configurations 4 x x6
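A hedged sketch of one commonly suggested layout for 24 disks (4 vdevs of 6-disk raidz2), with made-up device names:

    # zpool create media \
        raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
        raidz2 c1t6d0 c1t7d0 c2t0d0 c2t1d0 c2t2d0 c2t3d0 \
        raidz2 c2t4d0 c2t5d0 c2t6d0 c2t7d0 c3t0d0 c3t1d0 \
        raidz2 c3t2d0 c3t3d0 c3t4d0 c3t5d0 c3t6d0 c3t7d0

For a handful of large sequential readers this trades some random-I/O performance for capacity compared with mirrored pairs.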
2010 Sep 24
3
Kernel panic on ZFS import - how do I recover?
I posted this on the www.nexentastor.org forums, but no answer so far, so I apologize if you are seeing this twice. I am also engaged with nexenta support, but was hoping to get some additional insights here. I am running nexenta 3.0.3 community edition, based on 134. The box crashed yesterday, and goes into a reboot loop (kernel panic) when trying to import my data pool, screenshot attached.
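On build 134 the usual first attempts are a dry-run rewind import and then the real thing; whether these help depends on what is actually corrupted, so this is only a sketch (pool name left as a placeholder):

    # zpool import -nF <poolname>     (dry run: report what a rewind import would discard)
    # zpool import -F <poolname>      (roll back to the last importable txg)

Some people also boot with "set zfs:zfs_recover = 1" and "set aok = 1" in /etc/system before attempting the import, but those deliberately relax assertions and are very much a last resort.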
2012 Feb 28
4
JIRA anyone?
I'm trying to evaluate what use I can make of JIRA with Rails/Ruby and am getting frustrated. The jira4r gem will install but the soap4r gem won't load. Literally: 'irb -r soap4r' returns "cannot load such file...". I also set this up in a Gemfile and tried to run this from the rails console:
2013 Mar 23
0
Drives going offline in zpool
Hi, I have a Dell MD1200 connected to two heads (Dell R710). The heads have a PERC H800 card, and the drives are configured as RAID0 virtual disks in the RAID controller. One of the drives crashed and was replaced by a spare. Resilvering was triggered but fails to complete because the drives keep going offline. I have to reboot the head (R710) for the drives to come back online. This happened repeatedly when
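If the disks come back after a controller reset, they can usually be brought back into the pool without a full reboot, though this is only a sketch and the underlying H800/MD1200 resets still need fixing (pool and device names are hypothetical):

    # zpool online tank c8t4d0
    # zpool clear tank
    # zpool status -x

Presenting each disk as a single-drive RAID0 virtual disk also hides drive errors and hot-plug events from ZFS, which tends to make exactly this kind of flapping worse; JBOD/passthrough mode, where the controller supports it, is generally preferred under ZFS.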
2009 Oct 29
2
Difficulty testing an SSD as a ZIL
Hi all, I received my SSD and wanted to test it out using fake zpools with files as backing stores before attaching it to my production pool. However, when I exported the test pool and imported it, I got an error. Here is what I did: I created a file to use as a backing store for my new pool: mkfile 1g /data01/test2/1gtest Created a new pool: zpool create ziltest2 /data01/test2/1gtest Added the
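One likely explanation, offered as a guess: "zpool import" only scans /dev/dsk by default, so a pool built on plain files has to be pointed at the directory that holds them:

    # zpool export ziltest2
    # zpool import -d /data01/test2 ziltest2

If the error shows up even with -d, the actual message would help.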
2009 Aug 06
4
Can I set 'zil_disable' to increase ZFS/iSCSI performance?
Is there any way to increase the ZFS performance?
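For what it's worth, on builds of that vintage zil_disable was a global kernel tunable rather than a per-dataset property (the per-dataset 'sync' property came later); a sketch, with the usual warning that disabling the ZIL risks losing the last few seconds of acknowledged synchronous writes on a crash:

    In /etc/system (takes effect at boot):
      set zfs:zil_disable = 1
    Or on the live system:
      # echo zil_disable/W0t1 | mdb -kw

Datasets have to be remounted (or the pool exported/imported) for the change to take effect.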
2006 Feb 08
8
Riding the Rails to acquisition
Guys, I think many of us on this list would consider ourselves entrepreneurs. I'm willing to bet at least 40% of you are working on ideas for startups...hoping that you just might have the next Flickr, Oddpost, del.icio.us, etc. Me too, for what it's worth. Rails is good for this, in that it enables you to move quickly (after the learning curve) and it's
2007 Feb 27
16
understanding zfs/thumper "bottlenecks"?
Currently I'm trying to figure out the best ZFS layout for a Thumper with respect to read AND write performance. I did some simple mkfile 512G tests and found that on average ~500 MB/s seems to be the maximum one can reach (tried the initial default setup, all 46 HDDs as R0, etc.). According to http://www.amd.com/us-en/assets/content_type/DownloadableAssets/ArchitectureWP_062806.pdf I would
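When chasing this kind of thing it can help to watch where the time goes per vdev and per disk while the mkfile is running; a simple sketch (pool name is hypothetical):

    # zpool iostat -v tank 5      (per-vdev bandwidth and IOPS every 5 seconds)
    # iostat -xnz 5               (per-device service times and %busy)

If one controller's disks saturate while the others sit idle, the layout rather than the disks is the limit.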
2010 Apr 15
6
ZFS for iSCSI NTFS backing store.
I'm looking to move our file storage from Windows to OpenSolaris/ZFS. The Windows box will be connected to the storage through 10G for iSCSI. The Windows box will continue to serve the Windows clients and will be hosting approximately 4TB of data. The physical box is a Sun Fire X4240, single AMD 2435 processor, 16G RAM, LSI 3801E HBA, ixgbe 10G card. I'm looking for suggestions
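A hedged sketch of the usual OpenSolaris setup for this: carve a zvol out of the pool, export it over iSCSI with COMSTAR, and let Windows put NTFS on the LUN (names and sizes are made up):

    # zfs create -V 4T tank/winlun
    # sbdadm create-lu /dev/zvol/rdsk/tank/winlun
    # stmfadm add-view <GUID reported by sbdadm>
    # itadm create-target

Older builds also offered the simpler "zfs set shareiscsi=on tank/winlun" via the legacy iscsitgt daemon. A zvol gives block semantics over iSCSI; ZFS still provides checksumming and snapshots underneath, but it cannot see into the NTFS file system above it.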
2006 Oct 03
4
! camping 1.5 + markaby 0.5
Not too different from their corresponding last releases, but documentation has been filled in for both. To upgrade: gem install camping --source code.whytheluckystiff.net And, here is a complete changelog: == Camping 1.5 * Camping::Apps stores an array of classes for all loaded apps. * bin/camping can be given a directory. Like: <tt>camping examples/</tt> * Console mode -- thank
2010 Nov 01
6
Excruciatingly slow resilvering on X4540 (build 134)
Hello, I'm working with someone who replaced a failed 1TB drive (50% utilized) on an X4540 running OS build 134, and I think something must be wrong. Last Tuesday afternoon, zpool status reported: scrub: resilver in progress for 306h0m, 63.87% done, 173h7m to go and, a week being 168 hours, that put completion at sometime tomorrow night. However, he just reported zpool status shows:
2010 Oct 16
4
resilver question
Hi all I'm seeing some rather bad resilver times for a pool of WD Green drives (I know, bad drives, but leave that). Does resilver go through the whole pool or just the VDEV in question?
2010 Sep 02
5
what is zfs doing during a log resilver?
So, when you add a log device to a pool, it initiates a resilver. What is it actually doing, though? Isn't the slog a copy of the in-memory intent log? Wouldn't it just simply replicate the data that's in the other log, checked against what's in RAM? And presumably there isn't that much data in the slog so there isn't that much to check? Or
2009 Jul 13
7
OpenSolaris 2008.11 - resilver still restarting
Just look at this. I thought all the restarting resilver bugs were fixed, but it looks like something odd is still happening at the start: Status immediately after starting resilver: # zpool status pool: rc-pool state: DEGRADED status: One or more devices has experienced an unrecoverable error. An attempt was made to correct the error. Applications are unaffected. action: Determine