Displaying 20 results from an estimated 54 matches for "hotspares".
2005 Nov 17
2
hotspares possible?
Hi,
I could not find any hints about hotspares with ZFS.
Are hotspares not possible?
Thanks for this really great filesystem
Willi
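Hot spares did later become a first-class ZFS feature (the "spare" vdev type). A minimal sketch, assuming a pool named tank; device names are placeholders:

# add a hot spare to an existing pool, then verify it appears under "spares"
zpool add tank spare c1t2d0
zpool status tank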
2006 Oct 31
3
Centos and Network Attached Storage
Hello,
I need to setup a NAS server on Centos. The machine will be:
Dual xeon/Dual Opteron
4GB memory
13x 320GB SATA + 1 hotspare
1x 320GB SATA for OS
The server can do RAID 5, 6, or 10.
Has anyone installed such software, and can you recommend a specific
product?
Thank you.
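If the controller's own RAID were not used, a software-RAID layout for that disk set might look roughly like this. This is only a sketch; it assumes the 13 data disks plus the spare show up as /dev/sdb through /dev/sdo:

# 13-disk RAID 6 with one hot spare, built with mdadm
mdadm --create /dev/md0 --level=6 --raid-devices=13 --spare-devices=1 /dev/sd[b-o]
cat /proc/mdstat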
2010 May 22
2
LSI software raid with centos 5.4
Hi,
I have been trying to install CentOS 5.4 on an Intel SR1530SHS, Intel S3200SH
mainboard. It has 3 x 1TB SATA hot-swap drives with LSI software RAID
onboard.
I had configured the LSI to use SATA0 and SATA1 as RAID 1, with the third
drive as a hot spare.
Formatting the hard disk and installation were a breeze, but the server rebooted into
a blank screen and the cursor just kept blinking.
Please
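A blinking cursor after an otherwise clean install is often (not always) the bootloader not landing on the disk the BIOS boots from. A rough sketch of reinstalling GRUB (legacy) from the install media's rescue mode, assuming the RAID 1 volume appears as /dev/sda:

# boot the CentOS media with: linux rescue
chroot /mnt/sysimage
grub-install /dev/sda
exit
# then exit the rescue shell and reboot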
2006 Jan 20
2
HP NetServer LC2000r and LH6000r install woes
Hello folks!
I've got four HP NetServers, two LC2000r and two LH6000r. The CentOS
4.2 ServerCD can't see hard drives on any of them.
Each machine has an HP NetRAID controller; the LC2000rs have a
NetRAID-1Si and the LH6000rs have an Integrated NetRAID. Each
controller has three 9GB drives, two mirrored and one hotspare.
Each machine boots OK (the LC2000rs need "linux
2014 Aug 25
3
Hardware raid health?
I just had an IBM server in a remote location with a hardware RAID 1 have both
drives go bad. With local machines I probably would have caught it
from the drive light before the 2nd one died... What is the state of
the art in Linux software monitoring for this? Long ago when that
box was set up I think the best I could have done was a Java GUI tool
that IBM had for their servers - and that seemed like
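For LSI MegaRAID-based controllers (discussed elsewhere in these threads), one approach is a cron job that wraps the vendor CLI and mails when a logical drive leaves the Optimal state. A rough sketch only; the MegaCli64 path and the grep pattern are assumptions and would need adjusting for other controllers or tools:

#!/bin/sh
# mail root if any MegaRAID logical drive is not Optimal (sketch only)
MEGACLI=/opt/MegaRAID/MegaCli/MegaCli64
STATE=$($MEGACLI -LDInfo -Lall -aALL | grep '^State')
echo "$STATE" | grep -qv Optimal && \
    echo "$STATE" | mail -s "RAID alert on $(hostname)" root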
2010 Jun 09
2
software raid - better management advice needed
Hi,
I've used mdadm for years now to manage software RAIDs.
The task of using fdisk to first create partitions on a spare drive
sitting on a shelf (RAID 0 where my 1st of 2 drives failed) is kind of
bugging me now.
Using fdisk to create the same partition layout on the new drive
as on the existing drive, and then using mdadm to finish everything
up, is a little tedious.
Any
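The usual shortcut is to dump the surviving disk's partition table straight onto the replacement rather than recreating it by hand in fdisk. A minimal sketch; device names are placeholders (sda is the surviving disk, sdb the new one):

# copy the partition table, then re-add the new partitions to the arrays
sfdisk -d /dev/sda | sfdisk /dev/sdb
mdadm /dev/md0 --add /dev/sdb1
mdadm /dev/md1 --add /dev/sdb2
watch cat /proc/mdstat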
2002 Dec 10
5
VRRPD (rfc2338)
Can someone point me to a good VRRPD (RFC 2338) implementation on Linux?
Something stable and actively maintained.
Thanks
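keepalived is one widely used VRRP implementation on Linux. A rough sketch of a single-instance configuration; the interface name, router id, and virtual IP below are placeholders:

# write a minimal VRRP instance config and start the daemon (sketch only)
cat > /etc/keepalived/keepalived.conf <<'EOF'
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    virtual_ipaddress {
        192.168.1.254
    }
}
EOF
service keepalived start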
2013 Dec 09
3
Gluster infrastructure question
Heyho guys,
I've been running GlusterFS for years in a small environment without big
problems.
Now I'm going to use GlusterFS for a bigger cluster, but I have some
questions :)
Environment:
* 4 Servers
* 20 x 2TB HDD, each
* Raidcontroller
* Raid 10
* 4x bricks => Replicated, Distributed volume
* Gluster 3.4
1)
I'm wondering if I can
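For that layout (4 servers, one RAID 10 brick each), a distributed-replicated volume could be created roughly like this. Hostnames and brick paths are placeholders:

# pairs of bricks (srv1+srv2, srv3+srv4) replicate; the pairs are distributed
gluster peer probe srv2
gluster peer probe srv3
gluster peer probe srv4
gluster volume create gv0 replica 2 \
    srv1:/export/brick1 srv2:/export/brick1 \
    srv3:/export/brick1 srv4:/export/brick1
gluster volume start gv0
gluster volume info gv0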
2016 Feb 16
3
slightly off-topic, RAID program for on-board SAS 2308-4i ?
Does anyone know what program can be used to query the RAID status
from the OS for an on-board LSI SAS 2308-4i?
On this page:
http://docs.avagotech.com/docs/12351997
there is a curious note on the left that reads:
"Integrated MegaRAID support available upon request"
After one mostly fruitless round of chatting with LSI/Avago/Broadcom
and one completely fruitless round of chatting
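For 2308-class HBAs running the IR firmware, LSI's sas2ircu utility can usually report volume and disk status; whether it covers this particular on-board implementation is an assumption. A sketch, with controller index 0 as a placeholder:

sas2ircu LIST
sas2ircu 0 DISPLAY
sas2ircu 0 STATUS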
2008 Nov 10
1
Autodetecting RAID members upon boot... need to update initrd?
Hello fellow CentOS'ers-
I've got a system running CentOS 5.0. The motherboard has two onboard SATA ports with two drives attached. I installed the system on a RAID1 setup. However, I'd like to add a hotspare disk to the array. Since there are no additional SATA ports, I've installed an additional controller. After partitioning, the additional drive was easily and successfully
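A rough sketch of the remaining steps on CentOS 5 (device names are placeholders): add the new partition as a spare, then rebuild the initrd so the extra controller's driver is available at boot:

mdadm /dev/md0 --add /dev/sdc1
mdadm --detail /dev/md0
mkinitrd -f /boot/initrd-$(uname -r).img $(uname -r)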
2015 Jan 30
5
Very slow disk I/O
On 1/30/2015 1:53 AM, Gordon Messmer wrote:
> On 01/29/2015 05:07 AM, Jatin Davey wrote:
>> Yes , it is a SATA disk. I am not sure of the speed. Can you tell me
>> how to find out this information ? Additionally we are using RAID 10
>> configuration with 4 disks.
>
> What RAID controller are you using?
>
> # lspci | grep RAID
[Jatin]
[root@localhost ~]# lspci |
2015 Jan 30
0
Very slow disk I/O
On 1/29/2015 7:21 PM, Jatin Davey wrote:
> [root@localhost ~]# lspci | grep RAID
> 05:00.0 RAID bus controller: LSI Logic / Symbios Logic MegaRAID SAS-3
> 3108 [Invader] (rev 02)
to get info out of those, you need to install MegaCli64 from LSI Logic,
which has the ugliest command lines and output you've ever seen.
I use the python script below, which I put in
2011 Apr 12
17
40TB File System Recommendations
Hello All
I have a brand spanking new 40TB hardware RAID 6 array to play around
with. I am looking for recommendations for which filesystem to use. I am
trying not to break this up into multiple file systems as we are going
to use it for backups. Other factors are performance and reliability.
CentOS 5.6
array is /dev/sdb
So here is what I have tried so far
reiserfs is limited to 16TB
ext4
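For a single filesystem over 16TB on that era of CentOS, XFS is the usual candidate, assuming xfsprogs (and the xfs kernel module) is available on that install. A minimal sketch; the device name is from the post, the mount point and label are placeholders:

mkfs.xfs -L backups /dev/sdb
mkdir -p /backup
mount /dev/sdb /backup
xfs_info /backup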
2016 Feb 16
0
slightly off-topic, RAID program for on-board SAS 2308-4i ?
On 2/16/2016 3:23 PM, Zube wrote:
> Does anyone know what program can be used to query the RAID status
> from the OS for an on-board LSI SAS 2308-4i?
The 2308 isn't actually a MegaRAID; it's a simple SAS HBA that has an
optional RAID mode IF it's flashed with IR firmware... this only supports
RAID 0/1/10. I always(!) flash these with the IT firmware that
turns them back into a
2010 Sep 29
10
Resilver making the system unresponsive
This must be resilver day :)
I just had a drive failure. The hot spare kicked in, and access to the pool over NFS was effectively zero for about 45 minutes. Currently the pool is still resilvering, but for some reason I can access the file system now.
Resilver speed has been beaten to death, I know, but is there a way to avoid this? For example, is more enterprisey hardware less susceptible to
2015 Feb 02
1
Very slow disk I/O
On 1/30/2015 9:44 AM, John R Pierce wrote:
> On 1/29/2015 7:21 PM, Jatin Davey wrote:
>> [root@localhost ~]# lspci | grep RAID
>> 05:00.0 RAID bus controller: LSI Logic / Symbios Logic MegaRAID SAS-3
>> 3108 [Invader] (rev 02)
>
> to get info out of those, you need to install MegaCli64 from LSI
> Logic, which has the ugliest command lines and output you've
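For reference (and separate from the wrapper script referenced in the posts above, which is not reproduced here), the raw MegaCli64 queries involved look roughly like this, assuming the default install path:

/opt/MegaRAID/MegaCli/MegaCli64 -LDInfo -Lall -aALL   # logical drive state
/opt/MegaRAID/MegaCli/MegaCli64 -PDList -aALL         # physical drive state
/opt/MegaRAID/MegaCli/MegaCli64 -AdpAllInfo -aALL     # adapter summary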
2007 Jan 09
2
ZFS Hot Spare Behavior
I physically removed a disk (c3t8d0 used by ZFS 'pool01') from a 3310 JBOD connected to a V210 running s10u3 (11/06) and 'zpool status' reported this:
# zpool status
pool: pool01
state: DEGRADED
status: One or more devices could not be opened. Sufficient replicas exist for
the pool to continue functioning in a degraded state.
action: Attach the
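The usual recovery path, once a replacement disk is in place, looks roughly like this. Pool and failed-disk names are from the post; the spare device name is a placeholder, and whether the spare detaches automatically after the resilver or needs a manual detach depends on the release:

zpool status -x pool01
# resilver onto the new disk in the same slot
zpool replace pool01 c3t8d0
# if the hot spare is still attached after the resilver, return it to the spare list
zpool detach pool01 c3t9d0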
2017 Feb 03
3
raid 10 not in consistent state?
Hi everyone,
I've just configured a simple RAID 10 on a Dell system, but
one thing is puzzling me.
I'm seeing the output below and I wonder why? There: Consist = No
...
/c0/v1 :
======
---------------------------------------------------------------
DG/VD TYPE State Access Consist Cache Cac sCC Size Name
---------------------------------------------------------------
3/1 RAID10 Optl
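Consist = No generally just means the virtual drive has not yet completed a background initialization or consistency check. A sketch of checking and kicking one off with storcli; the controller/VD numbers are taken from the output above, and the exact binary name (storcli vs. storcli64) depends on the install:

storcli64 /c0/v1 show init     # background initialization progress
# start an initialization, or run a consistency check instead (not both at once)
storcli64 /c0/v1 start init
storcli64 /c0/v1 start cc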
2011 Sep 22
4
Beginner Question: Limited conf: file-based storage pools vs. FSs directly on rpool
Hi, everyone!
I have a beginner's question:
I must configure a small file server. It only has two disk drives, and they
are (forcibly) destined to be used in a mirrored, hot-spare configuration.
The OS is installed and working, and rpool is mirrored on the two disks.
The question is: I want to create some ZFS file systems for sharing them via
CIFS. But given my limited configuration:
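Given that layout, the filesystems can simply be carved out of rpool and shared directly via the Solaris kernel CIFS service. A rough sketch; dataset and share names are placeholders:

zfs create -o mountpoint=/export/shares rpool/shares
zfs create rpool/shares/public
zfs set sharesmb=name=public rpool/shares/public
svcadm enable -r smb/server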
2007 May 31
3
zfs boot error recovery
Hi all,
I would like to ask some questions regarding best practices for ZFS
recovery if disk errors occur.
Currently I have ZFS boot (nv62) and the following setup:
2 si3224 controllers (each with 4 SATA disks)
8 SATA disks, same size, same type
I have two pools:
a) rootpool
b) datapool
The rootpool is a mirrored pool, where every disk has a slice (the s0,
which is 5 % of the whole disk) and this
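For the mirrored root pool, recovery from a failed disk usually amounts to replacing the device and making sure the new disk is bootable again. A rough sketch; device names are placeholders, and installgrub applies to x86/GRUB-based systems:

zpool status -x rootpool
zpool replace rootpool c1t0d0s0 c1t4d0s0
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t4d0s0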