search for: sparingly

Displaying 20 results from an estimated 2815 matches for "sparingly".

2006 Mar 30
39
Proposal: ZFS Hot Spare support
As mentioned last night, we've been reviewing a proposal for hot spare support in ZFS. Below you can find a current draft of the proposed interfaces. This has not yet been submitted for ARC review, but comments are welcome. Note that this does not include any enhanced FMA diagnosis to determine when a device is "faulted". This will come in a follow-on project, of which some
2010 Oct 04
3
hot spare remains in use
Hi, I had a hot spare used to replace a failed drive, but then the drive appears to be fine anyway. After clearing the error it shows that the drive was resilvered, but keeps the spare in use.
  zpool status pool2
    pool: pool2
   state: ONLINE
   scrub: none requested
  config:
        NAME        STATE     READ WRITE CKSUM
        pool2       ONLINE       0     0     0
          raidz2
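When the original drive turns out to be healthy, the spare usually has to be released by hand; a minimal sketch, assuming the spare is still attached to the pool as an in-use spare vdev (the device name c1t9d0 is a placeholder, not taken from the post):
  zpool clear pool2            # drop the recorded read/write/cksum error counters
  zpool detach pool2 c1t9d0    # detach the in-use spare so it returns to AVAIL
Detaching the original, resilvered disk instead would promote the spare to a permanent member of the raidz2 vdev.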
2009 Aug 04
7
Sol10u7: can't "zpool remove" missing hot spare
I'm using Solaris 10u6 updated to u7 via patches, and I have a pool with a mirrored pair and a (shared) hot spare. We reconfigured disks a while ago and now the controller is c4 instead of c2. The hot spare was originally on c2, and apparently on rebooting it didn't get found. So, I looked up what the new name for the hot spare was, then added it to the pool with "zpool
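A hedged sketch of the usual way out of this (pool and device names are placeholders): the stale spare can often be removed under whatever name zpool status still reports for it, or by its vdev GUID if the old c2 path is no longer accepted.
  zpool status tank            # note the name or GUID shown for the UNAVAIL spare
  zpool remove tank c2t5d0     # drop the stale entry; the re-added c4 spare stays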
2009 Sep 04
3
2.6.31-rc8: CIFS with 5 seconds hiccups
This is on 32 bit x86 on a Dell 1950. After mounting a cifs share we have 5 second hiccups. Typical log output when doing a simple "ls /mnt":
  Sep 4 16:21:43 rd-spare kernel: fs/cifs/transport.c: For smb_command 50
  Sep 4 16:21:43 rd-spare kernel: fs/cifs/transport.c: Sending smb: total_len 118
  Sep 4 16:21:43 rd-spare kernel: fs/cifs/inode.c: CIFS VFS: leaving cifs_revalidate (xid =
2013 Apr 11
6
RAID 6 - opinions
I'm setting up this huge RAID 6 box. I've always thought of hot spares, but I'm reading things that are comparing RAID 5 with a hot spare to RAID 6, implying that the latter doesn't need one. I *certainly* have enough drives to spare in this RAID box: 42 of 'em, so two questions: should I assign one or more hot spares, and, if so, how many? mark
2009 Oct 14
14
ZFS disk failure question
So, my Areca controller has been complaining via email of read errors for a couple days on SATA channel 8. The disk finally gave up last night at 17:40. I got to say I really appreciate the Areca controller taking such good care of me. For some reason, I wasn't able to log into the server last night or in the morning, probably because my home dir was on the zpool with the failed disk
2008 Jun 28
3
Proper way to do disk replacement in an A1000 storage array and raidz2.
I'm using ZFS and a drive has failed. I am quite new to Solaris and frankly I seem to know more about ZFS and how it works than I do the OS. I have the hot spare taking over the failed disk and from here, do I need to remove the disk on the OS side (if so, what is proper) or do I need to take action on the ZFS side first?
2010 Dec 05
4
Zfs ignoring spares?
Hi all. I have installed a new server with 77 2TB drives in 11 7-drive RAIDz2 VDEVs, all on WD Black drives. Now, it seems two of these drives were bad: one of them had a bunch of errors, the other was very slow. After zfs offlining these and then zfs replacing them with online spares, resilver ended and I thought it'd be ok. Apparently not. Albeit the resilver succeeds, the pool status
2019 Jan 30
4
C7, mdadm issues
On 01/30/19 03:45, Alessandro Baggi wrote: > On 29/01/19 20:42, mark wrote: >> Alessandro Baggi wrote: >>> On 29/01/19 18:47, mark wrote: >>>> Alessandro Baggi wrote: >>>>> On 29/01/19 15:03, mark wrote: >>>>> >>>>>> I've no idea what happened, but the box I was working on last week
2019 Jan 30
2
C7, mdadm issues
Alessandro Baggi wrote: > On 30/01/19 14:02, mark wrote: >> On 01/30/19 03:45, Alessandro Baggi wrote: >>> On 29/01/19 20:42, mark wrote: >>>> Alessandro Baggi wrote: >>>>> On 29/01/19 18:47, mark wrote: >>>>>> Alessandro Baggi wrote: >>>>>>> On 29/01/19 15:03, mark wrote:
2009 Jan 20
2
hot spare not so hot ??
I have configured a test system with a mirrored rpool and one hot spare. I powered the systems off, pulled one of the disks from rpool to simulate a hardware failure. The hot spare is not activating automatically. Is there something more I should have done to make this work?
    pool: rpool
   state: DEGRADED
  status: One or more devices could not be opened. Sufficient replicas exist for
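On Solaris, automatic spare activation relies on the fault-management agents diagnosing the missing disk; when that does not happen, the spare can be attached manually. A hedged sketch with placeholder device names (c0t1d0 for the pulled disk, c0t2d0 for the spare):
  zpool replace rpool c0t1d0 c0t2d0    # put the spare in place of the missing disk
  zpool status rpool                   # the spare should now show as INUSE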
2008 Feb 01
2
RAID Hot Spare
I've googled this question without a great deal of information. Monday I'm rebuilding a Linux server at work. Instead of purchasing 3 drives for this system I purchased 4 with the intent to create a hot spare. Here is my usual setup, which I'll do again but with a hot spare for each partition:
  Create /dev/md0, mount point /boot, RAID1, 3 drives with 1 hot spare
  Create two more raid setups
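A minimal mdadm sketch of the first array described above, assuming four partitions named /dev/sda1 through /dev/sdd1 (the names are illustrative, not the poster's):
  mdadm --create /dev/md0 --level=1 --raid-devices=3 --spare-devices=1 \
        /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
As an alternative to a dedicated spare per array, mdadm can also share one spare across arrays by giving them the same spare-group in mdadm.conf and running mdadm --monitor.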
2011 Mar 08
0
Race condition with mdadm at bootup?
Hello folks, I am experiencing a weird problem at bootup with large RAID-6 arrays. After Googling around (a lot) I find that others are having the same issues with CentOS/RHEL/Ubuntu/whatever. In my case it's Scientific Linux-6 which should behave the same way as CentOS-6. I had the same problem with the RHEL-6 evaluation version. I'm posting this question to the SL mailing list
2010 Sep 29
2
rpool spare
Using ZFS v22, is it possible to add a hot spare to rpool? Thanks
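For reference, the generic command for attaching a spare to a pool is sketched below; whether a given Solaris/ZFS release accepts it for the root pool is exactly the question being asked here, and the disk name is a placeholder:
  zpool add rpool spare c0t2d0s0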
2019 Jun 14
3
zfs
Hi, folks, testing zfs. I'd created a raidz2 zpool and ran a large backup onto it. Then I pulled one drive (an 11-drive pool with one hot spare), and it resilvered with the hot spare. zpool status -x shows me:
   state: DEGRADED
  status: One or more devices could not be used because the label is missing or invalid. Sufficient replicas exist for the pool to continue functioning in a degraded state.
2005 Jul 25
3
RAID 5 vs. RAID 10
Hi, I am looking into purchasing a new server. This server will be mission-critical. I have read and somewhat understood the theories behind RAIDs 0, 1, 5, 10 & JBOD. However, I would like to get some feedback from those who have experience in implementing and recovering from a HDD failure using RAID. Hardware specs include:- Dual Xeon 3.2 GHz 2 GB RAM I would like to implement
2019 Jan 30
1
C7, mdadm issues
Alessandro Baggi wrote: > On 30/01/19 16:33, mark wrote: > >> Alessandro Baggi wrote: >> >>> On 30/01/19 14:02, mark wrote: >>> >>>> On 01/30/19 03:45, Alessandro Baggi wrote: >>>> >>>>> On 29/01/19 20:42, mark wrote: >>>>> >>>>>> Alessandro Baggi wrote:
2013 May 01
2
Shorewall 4.5.15 fails to start using systemctl on FC18
Starting Shorewall using systemctl fails with the error message as below. Starting from the command line succeeds. I've tried changing the permissions on the /var/lib/shorewall folder to 777 but no change. The temp file isn't present after the error so I don't know if the permission issue is related to that. Selinux is disabled. I'm new to FC18 and systemctl so
2011 Mar 04
13
cannot replace c10t0d0 with c10t0d0: device is too small
In 2007 I bought 6 WD1600JS 160GB SATA disks and used 4 to create a raidz storage pool, then shelved the other two for spares. One of the disks failed last night, so I shut down the server and replaced it with a spare. When I try to zpool replace the disk I get:
  zpool replace tank c10t0d0
  cannot replace c10t0d0 with c10t0d0: device is too small
The 4 original disk partition tables look like
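A quick, hedged way to see whether the replacement really is a few sectors short is to compare the two drives' whole-disk slices before replacing; the second device name is a placeholder for the spare being swapped in:
  prtvtoc /dev/rdsk/c10t0d0s2
  prtvtoc /dev/rdsk/c10t4d0s2
If the spare reports fewer accessible sectors, the usual suggestions are to repartition so the pool uses a slightly smaller slice on every disk, or to find a replacement at least as large as the originals.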
2010 Apr 05
3
no hot spare activation?
While testing a zpool with a different storage adapter using my "blkdev" device, I did a test which made a disk unavailable -- all attempts to read from it report EIO. I expected my configuration (which is a 3 disk test, with 2 disks in a RAIDZ and a hot spare) to work where the hot spare would automatically be activated. But I'm finding that ZFS does not behave this way
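On OpenSolaris-era ZFS, spare activation is driven by the zfs-diagnosis and zfs-retire fault-management modules rather than by the I/O path itself, so a hedged first check when a spare never kicks in is whether a fault was actually diagnosed:
  fmadm faulty        # list devices the fault manager considers faulted
  fmdump -eV | tail   # recent error telemetry, including ZFS ereports
If nothing was diagnosed, an EIO-only failure may simply not have crossed the diagnosis threshold, and the spare can still be attached manually with zpool replace.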