Displaying 20 results from an estimated 20000 matches similar to: "mdadm raid-check"
2020 Nov 16
1
mdadm raid-check
On Sat, 2020-11-14 at 21:55 -0600, Valeri Galtsev wrote:
> > On Nov 14, 2020, at 8:20 PM, hw <hw at gc-24.de> wrote:
> >
> >
> > Hi,
> >
> > is it required to run /usr/sbin/raid-check once per week? Centos 7 does
> > this. Maybe it's sufficient to run it monthly? IIRC Debian did it monthly.
>
> On hardware RAIDs I do RAID
2020 Nov 15
0
mdadm raid-check
> On Nov 14, 2020, at 8:20 PM, hw <hw at gc-24.de> wrote:
>
>
> Hi,
>
> is it required to run /usr/sbin/raid-check once per week? Centos 7 does
> this. Maybe it's sufficient to run it monthly? IIRC Debian did it monthly.
On hardware RAIDs I do RAID verification once a week. Once a month is not often enough in my book. That RAID verification effectively reads
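As a rough sketch (not taken from the thread): on CentOS 7 the weekly run comes from a cron entry the mdadm package drops into /etc/cron.d/raid-check, and moving it to monthly is just a matter of editing that schedule. The exact stock line and location may differ slightly between releases:
# /etc/cron.d/raid-check -- stock entry runs every Sunday at 01:00 (approximate)
0 1 * * Sun root /usr/sbin/raid-check
# monthly instead: run on the first day of each month at 01:00
0 1 1 * * root /usr/sbin/raid-check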
2010 Jul 20
16
zfs raidz1 and traditional raid 5 performance comparison
Hi,
for zfs raidz1, I know that for random I/O the IOPS of a raidz1 vdev equal the IOPS of one physical disk. Since raidz1 is like raid5, does raid5 have the same performance as raidz1, i.e. random IOPS equal to one physical disk's IOPS?
Regards
Victor
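A rough rule-of-thumb calculation for the question above, with illustrative numbers (not from the thread), assuming ~150 random IOPS per 7200 rpm disk and 5-disk groups: a 5-disk raidz1 vdev delivers roughly 150 random-read IOPS, because every block is spread across all members, so a pool of four such vdevs gets about 4 x 150 = 600. A 5-disk raid5 set behaves differently: small random reads can be served by any single member, so they scale to roughly 5 x 150 = 750, while small random writes drop to about (5 x 150) / 4 = 190 because of the read-modify-write penalty. So the "raidz1 equals one disk" rule applies to raid5 writes far more than to raid5 reads.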
2008 Jun 16
2
mdadm on reboot
Hi,
I'm in the process of trying mdadm for the first time
I've been trying stuff out of tutorials, etc.
At this point I know how to create stripes, and mirrors.
My stripe is automatically restarting on reboot,
but the degraded mirror isn't.
--
Drew Einhorn
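A minimal sketch of the usual fix when a degraded mirror is not assembled automatically at boot (not from the thread; assumes the array is currently running and a dracut-based initramfs):
# record the running arrays so they are assembled at boot
mdadm --detail --scan >> /etc/mdadm.conf
# rebuild the initramfs so the updated config is seen early in boot
dracut -f            # mkinitrd / update-initramfs on older or Debian-style systems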
2007 Jul 07
17
Raid-Z expansion
Apologies for the blank message (if it came through).
I have heard here and there that there might be a plan in development
to make it such that a raid-z can grow its "raid-z'ness" to
accommodate a new disk added to it.
Example:
I have 4 disks in a raid-z[12] configuration. I am uncomfortably low on
space, and would like to add a 5th disk. The idea is to pop in disk 5
and have
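Until such in-place expansion exists, the workaround usually sketched is to grow the pool by whole vdevs rather than single disks (pool and device names below are only examples):
# adds a second raidz vdev; the pool then stripes across both vdevs,
# but the new group needs its own full set of disks and its own parity
zpool add tank raidz c1t4d0 c1t5d0 c1t6d0 c1t7d0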
2015 May 28
2
New controller card issues
On 05/28/2015 09:12 AM, Kirk Bocek wrote:
> I suggest everyone stay away from 3Ware from now on.
My experience has been that 3ware cards are less reliable than software
RAID for a long, long time. It took me a while to convince my previous
employer to stop using them. Inability to migrate disk sets across
controller families, data corruption, and boot failures due to bad
battery daughter
2011 Oct 08
1
CentOS 6.0 CR mdadm-3.2.2 breaks Intel BIOS RAID
I just upgraded my home KVM server to CentOS 6.0 CR to make use of the
latest libvirt and now my RAID array with my VM storage is missing. It
seems that the upgrade to mdadm-3.2.2 is the culprit.
This is the output from mdadm when scanning that array,
# mdadm --detail --scan
ARRAY /dev/md0 metadata=imsm UUID=734f79cf:22200a5a:73be2b52:3388006b
ARRAY /dev/md126 metadata=imsm
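A short sketch of the commands commonly used to poke at a missing IMSM array in this situation (device names are examples, not from the post):
# look at the firmware RAID metadata on a member disk and at the container
mdadm --examine /dev/sda
mdadm --detail /dev/md0
# ask mdadm to (re)assemble everything it can find
mdadm --assemble --scan --verbose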
2007 Apr 02
4
Convert raidz
Hi
Is it possible to convert a live 3-disk zpool from raidz to raidz2?
And is it possible to add 1 new disk to a raidz configuration without backups and recreating the zpool from scratch?
Thanks
2007 Dec 31
4
Help! ZFS pool is UNAVAILABLE
Hi All,
I posted this in a different thread, but it was recommended that I post in this one.
Basically, I have a 3 drive raidz array on internal Seagate drives, running build 64nv. I purchased 3 add'l USB drives with the intention of mirroring and then migrating the data to the new USB drives.
I accidentally added the 3 USB drives in a raidz to my original storage pool, so now I have 2
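For context, a hedged sketch of how this kind of accident happens and how a dry run would have shown it (pool and device names here are invented):
# -n shows the resulting pool layout without committing the change
zpool add -n tank raidz c5t0d0 c6t0d0 c7t0d0
# without -n (plus -f to silence warnings) the vdev is added permanently;
# on builds of that era a top-level vdev could not be removed again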
2019 Feb 25
7
Problem with mdadm, raid1 and automatically adds any disk to raid
Hi.
CENTOS 7.6.1810, fresh install - use this as a base to create/upgrade new/old machines.
I was trying to set up two disks as a RAID1 array, using these lines
mdadm --create --verbose /dev/md0 --level=0 --raid-devices=2 /dev/sdb1 /dev/sdc1
mdadm --create --verbose /dev/md1 --level=0 --raid-devices=2 /dev/sdb2 /dev/sdc2
mdadm --create --verbose /dev/md2 --level=0 --raid-devices=2
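For comparison, a sketch of what the mirror form of those commands would look like; note that the lines quoted above actually pass --level=0 (striping, i.e. RAID0), not RAID1:
mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
mdadm --create --verbose /dev/md1 --level=1 --raid-devices=2 /dev/sdb2 /dev/sdc2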
2007 May 05
13
Optimal strategy (add or replace disks) to build a cheap and raidz?
Hello,
I have an 8-port SATA controller and I don't want to spend the money for 8 x 750 GB SATA disks
right now. I'm thinking about an optimal way of building a growing raidz pool without losing
any data.
As far as i know there are two ways to achieve this:
- Adding 750 GB Disks from time to time. But this would lead to multiple groups with multiple
redundancy/parity disks. I
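The other route usually mentioned alongside that one is to keep a single raidz vdev and swap its members for larger disks one at a time; a sketch, assuming a ZFS release with the autoexpand pool property (device names are examples):
zpool set autoexpand=on tank
# replace one member at a time and wait for each resilver to finish;
# once every disk is larger, the vdev grows to the new size
zpool replace tank c2t0d0 c2t8d0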
2015 Feb 11
1
CentOS 7 : create RAID arrays manually using mdadm --create ?
On 2/10/2015 6:54 PM, Chris Murphy wrote:
> Why I avoid swap on md raid 1/10 is because of the swap caveats listed
> under man 4 md. It's possible for a page in memory to change between the
> writes to the two md devices such that the mirrors are in fact
> different. The man page only suggests this makes scrub check results
> unreliable, and that such a difference wouldn't be read
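A small sketch of how that shows up in practice (the array name is an example): trigger a check and read the mismatch counter afterwards; with swap on RAID1 a non-zero count is expected and not by itself a sign of corruption.
echo check > /sys/block/md0/md/sync_action
# after the check completes:
cat /sys/block/md0/md/mismatch_cnt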
2016 May 09
2
Internal RAID controllers question
On Mon, May 9, 2016 1:14 pm, Gordon Messmer wrote:
> On 05/09/2016 11:01 AM, Valeri Galtsev wrote:
>> Thanks Gordon! Yes, I know, ZFS, of course. I hear it as: you definitely
>> will use ZFS for the "bricks" of a distributed file system, right?
>
>
> You could, I suppose, but I don't think its use case is limited to
> that. There aren't many spaces where I
2008 Jun 07
1
Software raid tutorial and hardware raid questions.
I remember seeing one with an example migrating
from an old fashioned filesystem on a partition
to a new filesystem on a mirrored lvm logical volume
but only one side of the mirror is set up at this
time.
First I need to copy stuff from what will become
the second side of the mirror
to a filesystem on the first side of the mirror.
Then I will be ready to follow the rest of the tutorial
and
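One common way to do the copy-then-mirror step is sketched below using a degraded mdadm mirror (the tutorial in question may use LVM's own mirroring instead; device names and filesystem choice here are assumptions):
# build the mirror with only the new disk; "missing" reserves the second slot
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 missing
mkfs.ext4 /dev/md0          # or pvcreate /dev/md0 if LVM goes on top
# ... mount it and copy the data from the old filesystem ...
# then hand the old disk over as the second half and let it resync
mdadm --add /dev/md0 /dev/sda1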
2017 Feb 17
3
RAID questions
On 2017-02-15, John R Pierce <pierce at hogranch.com> wrote:
> On 2/14/2017 4:48 PM, tdukes at palmettoshopper.com wrote:
>
>> 3 - Can additional drive(s) be added later with a change in RAID level
>> without current data loss?
>
> Only some systems support that sort of restriping, and it's a dangerous
> activity (if the power fails or system crashes midway through
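For the record, mdadm can do some of that restriping in place; a hedged sketch of a raid1-to-raid5 grow (device names are examples, and a current backup is strongly advised for exactly the reasons given above):
mdadm --grow /dev/md0 --level=5
mdadm --add /dev/md0 /dev/sdd1
mdadm --grow /dev/md0 --raid-devices=3 --backup-file=/root/md0-grow.bak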
2009 Aug 06
10
RAID[56] status
If we've abandoned the idea of putting the number of redundant blocks
into the top bits of the type bitmask (and I hope we have), then we're
fairly much there. Current code is at:
git://git.infradead.org/users/dwmw2/btrfs-raid56.git, http://git.infradead.org/users/dwmw2/btrfs-raid56.git
git://git.infradead.org/users/dwmw2/btrfs-progs-raid56.git, http://git.infradead.org/users/dwmw2/btrfs-progs-raid56.git
We have recovery working, as well as both full-stripe writes
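On kernels where that code eventually landed, the user-facing side is just a profile choice at mkfs time; a sketch, with example device names:
mkfs.btrfs -d raid5 -m raid1 /dev/sdb /dev/sdc /dev/sdd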
2010 Jan 28
13
ZFS configuration suggestion with 24 drives
Replacing my current media server with another larger capacity media server. Also switching over to solaris/zfs.
Anyhow we have 24 drive capacity. These are for large sequential access (large media files) used by no more than 3 or 5 users at a time. I'm inquiring as to what the best vdev configuration for this is. I'm considering the following configurations
4 x x6
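One of the layouts that tends to come up for this kind of sequential workload is four 6-disk raidz2 vdevs; a sketch, with an invented pool name and example device names rather than anything from the post:
zpool create media \
  raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
  raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 \
  raidz2 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0 \
  raidz2 c4t0d0 c4t1d0 c4t2d0 c4t3d0 c4t4d0 c4t5d0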
2011 Feb 05
12
ZFS Newbie question
I've spent a few hours reading through the forums and wiki and honestly my head is spinning. I have been trying to study up on either buying or building a box that would allow me to add drives of varying sizes/speeds/brands (adding more later etc.) and still be able to use the full space of the drives (minus parity? [not sure if I got the terminology right]) with redundancy. I have found the "all in
2010 Oct 20
5
Myth? 21 disk raidz3: "Don't put more than ___ disks in a vdev"
In a discussion a few weeks back, it was mentioned that the Best Practices
Guide says something like "Don't put more than ___ disks into a single
vdev." At first, I challenged this idea, because I see no reason why a
21-disk raidz3 would be bad. It seems like a good thing.
I was operating on the assumption that resilver time was limited by sustainable
throughput of disks, which
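The arithmetic behind that assumption is simple (illustrative numbers, not from the post): a 2 TB disk read or written sequentially at ~100 MB/s takes about 2,000,000 MB / 100 MB/s = 20,000 s, roughly 5.5 hours, regardless of vdev width. In practice raidz resilver traverses the pool's block tree, so on a fragmented pool random IOPS rather than streaming bandwidth can become the limit, which is one reason the width guidance exists.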
2019 Jul 23
2
mdadm issue
Just rebuilt a C6 box last week as C7. Four drives, and sda and sdb for
root, with RAID-1 and luks encryption.
Layout:
lsblk
NAME     MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda        8:0    0 931.5G  0 disk
├─sda1     8:1    0   200M  0 part /boot/efi
├─sda2