Displaying 20 results from an estimated 2000 matches similar to: "RAID6 in production?"
2015 Aug 25
0
CentOS 6.6 - reshape of RAID 6 is stuck
Hello
I have a CentOS 6.6 server with 13 disks in a RAID 6. Some weeks ago, I upgraded it to 17 disks, two of them configured as spares. The reshape proceeded normally at first, but at 69% it stopped.
md2 : active raid6 sdj1[0] sdg1[18](S) sdh1[2] sdi1[5] sdm1[15] sds1[12] sdr1[14] sdk1[9] sdo1[6] sdn1[13] sdl1[8] sdd1[20] sdf1[19] sdq1[16] sdb1[10] sde1[17](S) sdc1[21]
19533803520
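A reshape that stalls at a fixed percentage is often held back by the md sysfs throttles rather than by a failed disk. A minimal sketch of the usual checks, assuming the array is /dev/md2 as in the mdstat output above (not taken from the thread):
cat /proc/mdstat                       # where the reshape currently stands
cat /sys/block/md2/md/sync_action      # should report "reshape"
cat /sys/block/md2/md/sync_max         # a finite value left by an interrupted mdadm --grow freezes it
echo max > /sys/block/md2/md/sync_max  # let the reshape run to the end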
2007 Nov 15
0
md raid6 recommended?
I notice that raid6 is recommended in the manual, e.g. "...RAID6 is a must."
http://manual.lustre.org/manual/LustreManual16_HTML/DynamicHTML-11-1.html
which I found a bit surprising given that in Dec '06 Peter Braam said
on this list "Some of our customers experienced data corruption in the
RAID6 layer.", and recommending the CFS optimised md raid5 path through
the
2013 Nov 24
3
The state of btrfs RAID6 as of kernel 3.13-rc1
Hi
What is the general state of btrfs RAID6 as of kernel 3.13-rc1 and the
latest btrfs tools?
More specifically:
- Is it able to correct errors during scrubs?
- Is it able to transparently handle disk failures without downtime?
- Is it possible to convert btrfs RAID10 to RAID6 without recreating the fs?
- Is it possible to add/remove drives to a RAID6 array?
Regards,
Hans-Kristian
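For reference, the operations asked about map onto btrfs-progs commands roughly as sketched below; the device name and mount point are illustrative, and this says nothing about how well raid6 actually behaved in 3.13-rc1:
btrfs scrub start /mnt                                    # scrub, repairing from redundancy where it can
btrfs device add /dev/sdx /mnt                            # grow the array with another drive
btrfs device delete /dev/sdx /mnt                         # shrink it again
btrfs balance start -dconvert=raid6 -mconvert=raid6 /mnt  # convert an existing fs to raid6 profiles in place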
2019 Jun 28
0
raid 5 install
On 27/06/2019 at 15:36, Nikos Gatsis - Qbit wrote:
> Do I have to consider anything before installation, because the disks
> are very large?
I'm doing this kind of installation quite regularly. Here's my two cents.
1. Use RAID6 instead of RAID5. You'll lose a little space, but you'll
gain a good deal of extra redundancy.
2. The initial sync will be very (!) long, something like a
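As an illustration of point 1, creating a RAID6 array with mdadm looks roughly like the sketch below; the four device names are assumptions, not from the thread:
mdadm --create /dev/md0 --level=6 --raid-devices=4 \
    /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
cat /proc/mdstat   # watch the (very long) initial sync from point 2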
2009 May 25
1
raid5 or raid6 level cluster
Hello,
Is there any way to create a raid6 or raid5 level glusterfs installation?
From the docs I understood that I can do a raid1-based glusterfs installation or
raid0 (striping data to all servers) and a raid10-based solution, but the raid10-
based solution is not cost effective because it needs too many servers.
Do you have a plan to keep one or two servers as parity for the whole
glusterfs system
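Later GlusterFS releases (3.6 and up) added dispersed, i.e. erasure-coded, volumes, which provide exactly this RAID5/6-style parity across servers; a hedged sketch with illustrative host and brick names:
# 6 bricks with redundancy 2: the volume survives the loss of any 2 servers (RAID6-like)
gluster volume create dispvol disperse 6 redundancy 2 \
    server1:/export/brick server2:/export/brick server3:/export/brick \
    server4:/export/brick server5:/export/brick server6:/export/brick
gluster volume start dispvol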
2015 Jun 25
0
LVM hatred, was Re: /boot on a separate partition?
On 06/25/2015 01:20 PM, Chris Adams wrote:
> ...It's basically a way to assemble one arbitrary set of block devices
> and then divide them into another arbitrary set of block devices, but
> now separate from the underlying physical structure.
> Regular partitions have various limitations (one big one on Linux
> being that modifying the partition table of a disk with in-use
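A minimal sketch of that idea with hypothetical devices and sizes:
pvcreate /dev/sdb1 /dev/sdc1           # take an arbitrary set of block devices
vgcreate vg_data /dev/sdb1 /dev/sdc1   # assemble them into one pool
lvcreate -L 100G -n lv_home vg_data    # carve new block devices out of the pool
lvextend -L +50G /dev/vg_data/lv_home  # later resizing never touches a partition table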
2010 Aug 12
2
Problem resizing partition of nfs volume
Hi:
I have an NFS volume that I'm trying to resize a partition on.
Something about the fdisk process is corrupting something on the drive.
Before running fdisk, I can mount the volume fine:
$ mount /dev/sdo1 /home
... and the volume is mounted fine.
And,
$ e2fsck -f /dev/sdo1
/dev/sdo1: clean, ...
But then I run fdisk to rewrite the partition table of this drive, to expand
the /dev/sdo1
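The usual grow sequence for this case looks roughly like the sketch below; it assumes the recreated /dev/sdo1 starts at exactly the same sector as before, which is the step that most often goes wrong:
umount /home
fdisk /dev/sdo        # delete sdo1, recreate it larger with the SAME start sector
partprobe /dev/sdo    # make the kernel re-read the partition table
e2fsck -f /dev/sdo1
resize2fs /dev/sdo1   # grow the ext filesystem into the enlarged partition
mount /dev/sdo1 /home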
2008 Jun 22
8
3ware 9650 issues
I've been having no end of issues with a 3ware 9650SE-24M8 in a server that's
coming on a year old. I've got 24 WDC WD5001ABYS drives (500GB) hooked to it,
running as a single RAID6 w/ a hot spare. These issues boil down to the card
periodically throwing errors like the following:
sd 1:0:0:0: WARNING: (0x06:0x002C): Command (0x8a) timed out, resetting card.
Usually when this
2013 May 23
11
raid6: rmw writes all the time?
Hi all,
we got a new test system here and I just also tested btrfs raid6 on
that. Write performance is slightly lower than hw-raid (LSI megasas) and
md-raid6, but it would probably be much better than either of those two if
it didn't read all the data during the writes. Is this a known issue? This
is with linux-3.9.2.
Thanks,
Bernd
2019 Jun 28
1
raid 5 install
On Fri, Jun 28, 2019 at 07:01:00AM +0200, Nicolas Kovacs wrote:
> 3. Here's a neat little trick you can use to speed up the initial sync.
>
> $ sudo echo 50000 > /proc/sys/dev/raid/speed_limit_min
>
> I've written a detailed blog article about the kind of setup you want.
> It's in French, but the Linux bits are universal.
>
>
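One detail worth noting about the quoted command: the redirection in "sudo echo ... > file" is performed by the calling shell, not by sudo, so run as a regular user it typically fails with permission denied. A sketch of equivalent forms that do work:
echo 50000 | sudo tee /proc/sys/dev/raid/speed_limit_min
sudo sysctl -w dev.raid.speed_limit_min=50000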
2019 Jan 31
0
C7, mdadm issues
> On 30/01/19 16:49, Simon Matter wrote:
>>> On 01/30/19 03:45, Alessandro Baggi wrote:
>>>> On 29/01/19 20:42, mark wrote:
>>>>> Alessandro Baggi wrote:
>>>>>> On 29/01/19 18:47, mark wrote:
>>>>>>> Alessandro Baggi wrote:
>>>>>>>> On 29/01/19 15:03, mark wrote:
2011 Jan 12
3
variable raid1 rebuild speed?
I have a 750Gb 3-member software raid1 where 2 partitions are always
present and the third is regularly rotated and re-synced (SATA disks in
hot-swap bays). The timing of the resync seems to be extremely variable
recently, taking anywhere from 3 to 10 hours even if the partition is
unmounted and the drives aren't doing anything else and regardless of
what I echo into
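For reference, besides the global /proc/sys/dev/raid limits there are per-array sysfs knobs that override them for a single array; a hedged sketch, assuming the array is /dev/md0:
cat /sys/block/md0/md/sync_speed                  # current resync speed in KiB/s
echo 50000  > /sys/block/md0/md/sync_speed_min    # per-array floor
echo 200000 > /sys/block/md0/md/sync_speed_max    # per-array ceiling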
2019 Jan 22
2
C7 and mdadm
A user's system had a hard drive failure over the weekend. Linux RAID 6. I
identified the drive, brought the system down (8 drives, and I didn't know
the s/n of the bad one. why it was there in the box, rather than where I
started looking...) Brought it up, RAID not working. I finally found that
I had to do an mdadm --stop /dev/md0, then I could do an assemble, then I
could add the new
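The recovery sequence described reads roughly like the sketch below; device names are assumptions, not taken from the post:
mdadm --stop /dev/md0
mdadm --assemble /dev/md0         # or: mdadm --assemble --scan
mdadm --add /dev/md0 /dev/sdh1    # add the replacement drive
cat /proc/mdstat                  # watch the RAID6 rebuild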
2019 Jan 30
3
C7, mdadm issues
On 30/01/19 16:49, Simon Matter wrote:
>> On 01/30/19 03:45, Alessandro Baggi wrote:
>>> On 29/01/19 20:42, mark wrote:
>>>> Alessandro Baggi wrote:
>>>>> On 29/01/19 18:47, mark wrote:
>>>>>> Alessandro Baggi wrote:
>>>>>>> On 29/01/19 15:03, mark wrote:
>>>>>>>
2010 Mar 04
1
removing a md/software raid device
Hello folks,
I successfully stopped the software RAID. How can I delete the ones
found on scan? I also see them in dmesg.
[root at extragreen ~]# mdadm --stop --scan ; echo $?
0
[root at extragreen ~]# mdadm --examine --scan
ARRAY /dev/md0 level=raid5 num-devices=4
UUID=89af91cb:802eef21:b2220242:b05806b5
ARRAY /dev/md0 level=raid6 num-devices=4
UUID=3ecf5270:339a89cf:aeb092ab:4c95c5c3
[root
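The stale arrays keep appearing because the member disks still carry md superblocks; wiping those (destructive for the old arrays) is what removes the --examine entries. A hedged sketch with assumed member devices:
mdadm --examine --scan                  # note which devices belong to the old arrays
mdadm --zero-superblock /dev/sd[b-e]1   # wipe the md metadata on each former member
mdadm --examine --scan                  # the /dev/md0 entries should be gone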
2017 Aug 19
0
Problem with softwareraid
18. Aug 2017 13:35 by euroregistrar at gmail.com:
> Hello all,
>
> I have already had a discussion on the software raid mailing list and I
> want to switch to this one :)
>
> I am having a really strange problem with my md0 device running
> CentOS 7. After a restart of my server the md0 was gone. Now, after
> trying to find the problem, I detected the following:
>
>
2019 Jan 30
0
C7, mdadm issues
> On 01/30/19 03:45, Alessandro Baggi wrote:
>> Il 29/01/19 20:42, mark ha scritto:
>>> Alessandro Baggi wrote:
>>>> Il 29/01/19 18:47, mark ha scritto:
>>>>> Alessandro Baggi wrote:
>>>>>> Il 29/01/19 15:03, mark ha scritto:
>>>>>>
>>>>>>> I've no idea what happened, but the box I was working
2007 Jun 14
0
(no subject)
I installed a fresh copy of Debian 4.0 and Xen 3.1.0 SMP PAE from the
binaries. I had a few issues getting fully virtualized guests up and
running, but finally managed to figure everything out. Now I'm having a
problem with paravirtualized guests and hoping that someone can help.
My domU config:
#
# Configuration file for the Xen instance dev.umucaoki.org, created
# by xen-tools
2017 Aug 18
4
Problem with softwareraid
Hello all,
I have already had a discussion on the software raid mailing list and I
want to switch to this one :)
I am having a really strange problem with my md0 device running
CentOS 7. After a restart of my server the md0 was gone. Now, after
trying to find the problem, I detected the following:
Booting any installed kernel gives me NO md0 device. (ls /dev/md*
doesn't give anything). a 'cat
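A hedged troubleshooting sketch for an md array that disappears at boot on CentOS 7; the commands are generic, not from the thread:
mdadm --assemble --scan                    # try to bring md0 back by hand
mdadm --detail --scan                      # print the ARRAY line for it
mdadm --detail --scan >> /etc/mdadm.conf   # make the assembly persistent
dracut -f                                  # rebuild the initramfs so early boot finds the array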
2013 Oct 09
1
mdraid strange surprises...
Hey,
I installed 2 new data servers with a big (12TB) RAID6 mdraid.
I formatted the whole arrays with bad-block checks.
One server is moderately used (nfs on one md), while the other not.
One week later, after the raid-check from cron, I get on both servers
a few block_mismatch... 1976162368 on the used one and a tiny bit less
on the other...? That seems a tiny little bit high...
I do the
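A hedged sketch of how the mismatch count from raid-check is usually inspected and cleared; the array name /dev/md0 is an assumption, since the post doesn't give it:
cat /sys/block/md0/md/mismatch_cnt            # count left by the last check
echo repair > /sys/block/md0/md/sync_action   # rewrite parity to match the data
# once the repair has finished:
echo check > /sys/block/md0/md/sync_action    # re-run the check
cat /sys/block/md0/md/mismatch_cnt            # should now be 0 or close to it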