similar to: slog devices don't resilver correctly

Displaying 20 results from an estimated 1000 matches similar to: "slog devices don't resilver correctly"

2010 Sep 02
5
what is zfs doing during a log resilver?
So, when you add a log device to a pool, it initiates a resilver. What is it actually doing, though? Isn't the slog a copy of the in-memory intent log? Wouldn't it simply replicate the data that's in the other log, checked against what's in RAM? And presumably there isn't that much data in the slog so there isn't that much to check? Or
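For reference, the operation that kicks this off is a single command; the pool and device names below are placeholders:

    # adding a separate log device is what triggers the resilver described above
    zpool add tank log c4t0d0
    zpool status tank        # the scrub: line reports the resilver progress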
2007 Nov 17
11
slog tests on read throughput exhaustion (NFS)
I have historically noticed that in ZFS, whenever there is a heavy writer to a pool via NFS, the reads can be held back (basically paused). An example is a RAID10 pool of 6 disks, whereby a directory of files including some large 100+MB in size being written can cause other clients over NFS to pause for seconds (5-30 or so). This is on B70 bits. I've gotten used to this behavior over NFS, but
2009 Oct 14
14
ZFS disk failure question
So, my Areca controller has been complaining via email of read errors for a couple of days on SATA channel 8. The disk finally gave up last night at 17:40. I've got to say I really appreciate the Areca controller taking such good care of me. For some reason, I wasn't able to log into the server last night or in the morning, probably because my home dir was on the zpool with the failed disk
2008 Jan 23
4
Synchronous scrub?
Say I'm firing off an at(1) or cron(1) job to do scrubs, and say I want to scrub two pools sequentially because they share one device. The first pool, BTW, is a mirror comprising a smaller disk and a subset of a larger disk. The other pool is the remainder of the larger disk. I see no documentation mentioning how to scrub, then wait-until-completed. I'm happy to be pointed
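A minimal way to get "scrub, then wait until completed" from an at(1) or cron(1) job is to poll zpool status; the pool names below are placeholders:

    # scrub the first pool, block until it finishes, then scrub the second
    zpool scrub pool1
    while zpool status pool1 | grep -q "scrub in progress"; do
        sleep 60
    done
    zpool scrub pool2

Recent OpenZFS releases also offer zpool wait -t scrub <pool>, but that subcommand postdates the builds discussed in this thread.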
2008 Sep 05
6
resilver speed.
Is there any way to control the resilver speed? Having attached a third disk to a mirror (so I can replace the other disks with larger ones), the resilver goes at a fraction of the speed of the same operation using Disk Suite. However it still renders the system pretty much unusable for anything else. So I would like to control the rate of the resilver. Either slow it down a lot so that the
2010 Oct 16
4
resilver question
Hi all I'm seeing some rather bad resilver times for a pool of WD Green drives (I know, bad drives, but leave that). Does resilver go through the whole pool or just the VDEV in question? -- Vennlige hilsener / Best regards roy -- Roy Sigurd Karlsbakk (+47) 97542685 roy at karlsbakk.net http://blogg.karlsbakk.net/ -- In all pedagogy it is essential that the curriculum is presented
2010 Apr 05
3
no hot spare activation?
While testing a zpool with a different storage adapter using my "blkdev" device, I did a test which made a disk unavailable -- all attempts to read from it report EIO. I expected my configuration (which is a 3-disk test, with 2 disks in a RAIDZ and a hot spare) to work where the hot spare would automatically be activated. But I'm finding that ZFS does not behave this way
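If the spare never activates on its own, it can be swapped in by hand; the pool and device names below are invented for illustration:

    zpool status -x                     # confirm which device is reported as faulted
    zpool replace tank c1t1d0 c1t3d0    # attach the hot spare in place of the dead disk
    zpool detach tank c1t1d0            # after the resilver, detach the failed disk so the spare stays in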
2006 Mar 30
39
Proposal: ZFS Hot Spare support
As mentioned last night, we've been reviewing a proposal for hot spare support in ZFS. Below you can find a current draft of the proposed interfaces. This has not yet been submitted for ARC review, but comments are welcome. Note that this does not include any enhanced FMA diagnosis to determine when a device is "faulted". This will come in a follow-on project, of which some
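The hot spare interface that eventually shipped is close to this draft; a rough sketch with placeholder device names:

    zpool create tank raidz c0t0d0 c0t1d0 c0t2d0 spare c0t3d0   # declare spares at create time
    zpool add tank spare c0t4d0                                 # or add one later
    zpool remove tank c0t4d0                                    # unlike data vdevs, spares can be removed
    zpool replace tank c0t1d0 c0t3d0                            # manually press a spare into service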
2006 Jan 30
4
Adding a mirror to an existing single disk zpool
Hello All, I'm transitioning data off my old UFS partitions onto ZFS. I don't have a lot of space to duplicate the data, so I created a zpool, rsync'ed the data from UFS to the ZFS mount and then repartitioned the UFS drive to have partitions that match the cylinder count of the ZFS. The idea here is that once the data is over I wipe out UFS and then attach that partition to the
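The attach step itself is a one-liner once the old slice is free; the names below are placeholders:

    # convert the single-disk pool into a two-way mirror; ZFS resilvers onto the new side
    zpool attach tank c0t0d0s0 c0t1d0s0
    zpool status tank        # wait for the resilver to finish before trusting the mirror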
2010 Oct 12
2
Multiple SLOG devices per pool
I have a pool with a single SLOG device rated at Y IOPS. If I add a second (non-mirrored) SLOG device also rated at Y IOPS, will my zpool now theoretically be able to handle 2Y IOPS? Or close to that? Thanks, Ray
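Adding the second, non-mirrored log device simply creates another top-level log vdev; a sketch with placeholder names:

    zpool add tank log c2t0d0    # second slog alongside the existing one
    zpool status tank            # both devices now show up under "logs"

ZIL writes are spread across separate log vdevs, so aggregate throughput can approach 2Y IOPS under a sufficiently parallel synchronous workload, although any single stream still sees the latency of one device.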
2006 Jun 15
4
devid support for EFI partition improves zfs usability
Hi, guys, I have added devid support for EFI (not putback yet) and tested it with a zfs mirror; now the mirror can recover even if a USB hard disk is unplugged and replugged into a different USB port. But there is still something that needs improvement. I'm far from a zfs expert, correct me if I'm wrong. First, zfs should sense the hotplug event. I use zfs status to check the status of the
2010 Oct 17
10
RaidzN blocksize ... or blocksize in general ... and resilver
The default blocksize is 128K. If you are using mirrors, then each block on disk will be 128K whenever possible. But if you're using raidzN with a capacity of M disks (M disks useful capacity + N disks redundancy) then the block size on each individual disk will be 128K / M. Right? This is one of the reasons the raidzN resilver code is inefficient. Since you end up waiting for the
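A worked example of that arithmetic, with illustrative numbers only:

    # 6-disk raidz2: M = 4 data disks, N = 2 parity disks
    # a full 128K block is split across the data disks:
    #     128K / 4 = 32K of data per disk (plus parity on the remaining two)
    # so a resilver reconstructs the pool in 32K-per-disk pieces
    # rather than streaming whole 128K blocks from a single device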
2008 Oct 08
1
Shutting down / exporting zpool without flushing slog devices
Hey folks, This might be a daft idea, but is there any way to shut down solaris / zfs without flushing the slog device? The reason I ask is that we're planning to use mirrored nvram slogs, and in the long term hope to use a pair of 80GB ioDrives. I'd like to have a large amount of that reserved for write cache (potentially 20-30GB), to facilitate rapid suspend to disk of
2010 Aug 12
6
one ZIL SLOG per zpool?
I have three zpools on a server and want to add a mirrored pair of SSDs for the ZIL. Can the same pair of SSDs be used for the ZIL of all three zpools or is it one ZIL SLOG device per zpool? -- This message posted from opensolaris.org
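A log device can belong to only one pool, so the usual workaround is to slice the two SSDs and give each pool its own mirrored pair of slices; the slice names below are hypothetical:

    zpool add pool1 log mirror c3t0d0s0 c3t1d0s0
    zpool add pool2 log mirror c3t0d0s1 c3t1d0s1
    zpool add pool3 log mirror c3t0d0s2 c3t1d0s2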
2010 Jul 21
5
slog/L2ARC on a hard drive and not SSD?
Hi, Out of pure curiosity, I was wondering: what would happen if one tries to use a regular 7200RPM (or 10K) drive as slog or L2ARC (or both)? I know these are designed with SSDs in mind, and I know it's possible to use anything you want as cache. So would ZFS benefit from it? Would it be the same? Would it slow down? I guess it would slow things down, because it would be trying to
2010 Sep 29
10
Resilver making the system unresponsive
This must be resilver day :) I just had a drive failure. The hot spare kicked in, and access to the pool over NFS was effectively zero for about 45 minutes. Currently the pool is still resilvering, but for some reason I can access the file system now. Resilver speed has been beaten to death I know, but is there a way to avoid this? For example, is more enterprisy hardware less susceptible to
2010 Jul 05
5
never ending resilver
Hi list, Here's my case:

  pool: mypool
 state: DEGRADED
status: One or more devices is currently being resilvered. The pool will continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scrub: resilver in progress for 147h19m, 100.00% done, 0h0m to go
config:

        NAME         STATE     READ WRITE CKSUM
        filerbackup13
2009 Nov 22
9
Resilver/scrub times?
Hi all! I've decided to take the "big jump" and build a ZFS home filer (although it might also do "other work" like caching DNS, mail, usenet, bittorrent and so forth). YAY! I wonder if anyone can shed some light on how long a pool scrub would take on a fairly decent rig. These are the specs as-ordered: Asus P5Q-EM mainboard, Core2 Quad 2.83 GHz, 8GB DDR2/80, OS: 2 x
2010 Apr 14
1
Checksum errors on and after resilver
Hi all, I recently experienced a disk failure on my home server and observed checksum errors while resilvering the pool and on the first scrub after the resilver had completed. Now everything seems fine, but I'm posting this to get help with calming my nerves and detecting any possible future faults. Let's start with some specs. OSOL 2009.06, Intel SASUC8i (w/ LSI 1.30IT FW), Gigabyte
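For reassurance after an episode like this, the usual sequence is another scrub followed by clearing the counters; the pool name is a placeholder:

    zpool scrub tank
    zpool status -v tank    # -v lists any files affected by unrecoverable errors
    zpool clear tank        # reset the error counters once the scrub comes back clean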
2009 Jul 13
7
OpenSolaris 2008.11 - resilver still restarting
Just look at this. I thought all the restarting resilver bugs were fixed, but it looks like something odd is still happening at the start. Status immediately after starting the resilver:

# zpool status
  pool: rc-pool
 state: DEGRADED
status: One or more devices has experienced an unrecoverable error. An attempt was made to correct the error. Applications are unaffected.
action: Determine