Displaying 20 results from an estimated 1308 matches for "raid1s".
Did you mean: raid1
2022 Apr 24 (3): Installing mdadm and C7 on new computer
On 04/23/2022 09:19 PM, H wrote:
> On 04/19/2022 09:57 AM, Roberto Ragusa wrote:
>> On 4/18/22 1:27 PM, H wrote:
>>> I have a new computer with 2 x 2TB SSDs where I wanted to install C7 and use mdadm for RAID1 configuration and encrypting the /home partition. On the net I found https://tuxfixer.com/centos-7-installation-with-lvm-raid-1-mirroring/ which I adopted slightly with
2020 Sep 19 (1): storage for mailserver
On 9/17/20 4:25 PM, Phil Perry wrote:
> On 17/09/2020 13:35, Michael Schumacher wrote:
>> Hello Phil,
>>
>> Wednesday, September 16, 2020, 7:40:24 PM, you wrote:
>>
>> PP> You can achieve this with a hybrid RAID1 by mixing SSDs and HDDs, and
>> PP> marking the HDD members as --write-mostly, meaning most of the reads
>> PP> will come from the
2014 Apr 07 (3): Software RAID10 - which two disks can fail?
Hi All.
I have a server which uses RAID10 made of 4 partitions for / and boots from
it. It looks like so:
mdadm -D /dev/md1
/dev/md1:
Version : 00.90
Creation Time : Mon Apr 27 09:25:05 2009
Raid Level : raid10
Array Size : 973827968 (928.71 GiB 997.20 GB)
Used Dev Size : 486913984 (464.36 GiB 498.60 GB)
Raid Devices : 4
Total Devices : 4
Preferred Minor : 1
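For a four-device md RAID10, which pairs of disks can fail together depends on the layout. Assuming the default near-2 layout, member roles 0+1 mirror one half of the data and roles 2+3 the other, so the array survives one failure per pair but not both members of the same pair. A quick way to see the layout and the role-to-device mapping (array name is the one from the post; this is a sketch, not the poster's command):

```shell
# Print the Layout line plus the member table; under near-2,
# RaidDevice 0/1 form one mirror pair and 2/3 the other.
mdadm -D /dev/md1 | sed -n '/Layout/p; /RaidDevice State/,$p'
```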
2012 May 05 (5): Is it possible to reclaim block groups once they are allocated to data or metadata?
Hello list,
recently reformatted my home partition from XFS to RAID1 btrfs. I used
the default options to mkfs.btrfs except for enabling raid1 for data
as well as metadata. Filesystem is made up of two 1TB drives.
mike@mercury (0) pts/3 ~ $ sudo btrfs filesystem show
Label: none uuid: f08a8896-e03e-4064-9b94-9342fb547e47
Total devices 2 FS bytes used 888.06GB
devid 1 size 931.51GB used
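The usual answer to reclaiming allocated-but-underused block groups is a balance, which rewrites block groups and frees the ones that end up empty. A minimal sketch, assuming a kernel recent enough to support balance filters (roughly 3.3+; the thread predates some of this syntax) and the mount point from the post:

```shell
# Filtered balance: only rewrite data/metadata block groups that are
# less than 5% full, which is cheap and usually frees the empty ones.
btrfs balance start -dusage=5 -musage=5 /mnt
# Heavier fallback: rewrite everything.
# btrfs balance start /mnt
```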
2016 Jun 01 (3): Centos 7 and Software Raid Minimal Install
I am trying to install CentOS 7 on a couple of 4TB drives with software
raid. In the Supermicro BIOS I set UEFI/BIOS boot mode to legacy. I
am using the CentOS 7 minimal install ISO flashed to a USB thumb
drive.
So I do custom drive layout something like this using sda and sdb.
Create /boot as 512 MB XFS raid1 array.
Create SWAP as 32 GB SWAP raid1 array.
Create / on 3.xxx TB XFS raid1 array.
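When the installer's custom layout fights back, one workaround is to pre-create the arrays from a rescue shell and point the installer at the existing md devices. A sketch under assumed partition names (sda1-3/sdb1-3 matching the layout above; not the poster's actual commands):

```shell
# metadata 1.0 keeps the superblock at the end of the partition,
# which legacy-BIOS bootloaders tolerate for /boot.
mdadm --create /dev/md0 --level=1 --raid-devices=2 --metadata=1.0 /dev/sda1 /dev/sdb1  # /boot
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2                 # swap
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3                 # /
mkfs.xfs /dev/md0
mkswap /dev/md1
mkfs.xfs /dev/md2
```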
2011 Jan 12 (1): Filesystem creation in "degraded mode"
I've had a go at determining exactly what happens when you create a
filesystem without enough devices to meet the requested replication
strategy:
# mkfs.btrfs -m raid1 -d raid1 /dev/vdb
# mount /dev/vdb /mnt
# btrfs fi df /mnt
Data: total=8.00MB, used=0.00
System, DUP: total=8.00MB, used=4.00KB
System: total=4.00MB, used=0.00
Metadata, DUP: total=153.56MB, used=24.00KB
Metadata:
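The DUP chunks in that output are the single-device fallback: with only one device present, mkfs cannot honour raid1 and duplicates metadata on the same disk instead. On a sufficiently recent kernel (balance convert filters arrived around 3.3, so newer than the thread), the existing chunks can be converted once a second device is attached; device names here are illustrative:

```shell
# Attach the missing device, then rewrite existing chunks as raid1.
btrfs device add /dev/vdc /mnt
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt
btrfs fi df /mnt   # the DUP lines should now read RAID1
```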
2020 Sep 17 (2): storage for mailserver
Hello Phil,
Wednesday, September 16, 2020, 7:40:24 PM, you wrote:
PP> You can achieve this with a hybrid RAID1 by mixing SSDs and HDDs, and
PP> marking the HDD members as --write-mostly, meaning most of the reads
PP> will come from the faster SSDs retaining much of the speed advantage,
PP> but you have the redundancy of both SSDs and HDDs in the array.
PP> Read performance is
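The hybrid RAID1 described above can be sketched as follows, with hypothetical device names (/dev/sda2 the SSD, /dev/sdb2 the HDD); the write-behind part is optional and requires a write-intent bitmap:

```shell
# --write-mostly flags the HDD so md serves reads from the SSD
# whenever possible; writes still go to both members.
mdadm --create /dev/md0 --level=1 --raid-devices=2 \
    /dev/sda2 --write-mostly /dev/sdb2
# Optionally let writes to the HDD lag behind the SSD:
# mdadm --grow /dev/md0 --bitmap=internal --write-behind=1024
```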
2011 Feb 14 (2): rescheduling sector linux raid ?
Hi List,
What does this mean?
md: syncing RAID array md0
md: minimum _guaranteed_ reconstruction speed: 1000 KB/sec/disc.
md: using maximum available idle IO bandwidth (but not more than
200000 KB/sec) for reconstruction.
md: using 128k window, over a total of 2096384 blocks.
md: md0: sync done.
RAID1 conf printout:
--- wd:2 rd:2
disk 0, wo:0, o:1, dev:sda2
disk 1, wo:0, o:1, dev:sdb2
sd 0:0:0:0:
2012 Dec 17 (5): Feedback on RAID1 feature of Btrfs
Hello,
I'm testing the Btrfs RAID1 feature on 3 disks of ~10GB. The last one is not
exactly 10GB (that would be too easy).
About the test machine, it's a KVM VM running an up-to-date Arch Linux
with linux 3.7 and btrfs-progs 0.19.20121005.
#uname -a
Linux seblu-btrfs-1 3.7.0-1-ARCH #1 SMP PREEMPT Tue Dec 11 15:05:50 CET
2012 x86_64 GNU/Linux
Filesystem was created with :
# mkfs.btrfs -L
2014 Jan 24 (4): Booting Software RAID
I installed Centos 6.x 64 bit with the minimal ISO and used two disks
in RAID 1 array.
Filesystem Size Used Avail Use% Mounted on
/dev/md2 97G 918M 91G 1% /
tmpfs 16G 0 16G 0% /dev/shm
/dev/md1 485M 54M 407M 12% /boot
/dev/md3 3.4T 198M 3.2T 1% /vz
Personalities : [raid1]
md1 : active raid1 sda1[0] sdb1[1]
511936 blocks super 1.0
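With /boot on a metadata-1.0 RAID1 like the one above, the piece that is easy to forget is the bootloader itself: it must be installed on both members or the machine only boots while the first disk is alive. A sketch for the GRUB legacy used on CentOS 6 (device names are examples):

```shell
# Put the bootloader on both RAID1 members so either disk can boot.
grub-install /dev/sda
grub-install /dev/sdb
```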
2010 Oct 19 (3): more software raid questions
hi all!
back in Aug several of you assisted me in solving a problem where one
of my drives had dropped out of (or been kicked out of) the raid1 array.
something vaguely similar appears to have happened just a few mins ago,
upon rebooting after a small update. I received four emails like this,
one for /dev/md0, one for /dev/md1, one for /dev/md125 and one for
/dev/md126:
Subject: DegradedArray
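Before acting on a batch of DegradedArray mails, it helps to see which arrays are actually degraded: in /proc/mdstat a healthy n-member array shows a status bitmap like [UU], while a degraded one contains an underscore. A small sketch; the here-string stands in for the real /proc/mdstat:

```shell
# Flag degraded md arrays by scanning the [U_]-style status bitmaps.
mdstat='md126 : active raid1 sda2[0]
      1953382400 blocks [2/1] [U_]
md0 : active raid1 sda1[0] sdb1[1]
      511936 blocks [2/2] [UU]'
degraded=$(printf '%s\n' "$mdstat" |
  awk '/^md/ {name=$1} /\[[U_]+\]/ && /_/ {print name " is degraded"}')
echo "$degraded"
```

Against a live system, replace the sample with `cat /proc/mdstat`.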
2023 Jan 11 (1): Upgrading system from non-RAID to RAID1
I plan to upgrade an existing C7 computer, which currently has one 256 GB SSD, to use mdadm software RAID1 after adding two 4 TB M.2 SSDs, the rest of the system remaining the same. The system also has one additional internal and one external hard disk, but these should not be touched. The system will continue to run C7.
If I remember correctly, the existing SSD does not use an M.2 slot so they
2009 Nov 30 (3): /etc/cron.weekly/99-raid-check
hi,
it's been a few weeks since RHEL/CentOS 5.4 was released and there has been
much discussion about this new "feature", the weekly RAID partition check.
We've got lots of servers with RAID1 systems and I have already tried to
configure them not to send these messages, but I am not able to; i.e. I
already added all of my swap partitions to SKIP_DEVS (since I read it
on linux-kernel list
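The knobs for the weekly check live in /etc/sysconfig/raid-check on these releases. A sketch of the relevant variables; the values (and the exact device-name form SKIP_DEVS expects) are assumptions to verify against the /usr/sbin/raid-check script on your release:

```shell
# /etc/sysconfig/raid-check -- example values, not defaults.
ENABLED=yes
CHECK=check
# Arrays listed here are skipped by the weekly cron job:
SKIP_DEVS="md1 md3"
```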
2019 Jan 10 (3): Help finishing off Centos 7 RAID install
> On 1/9/19 2:30 AM, Gary Stainburn wrote:
>> 1) The big problem with this is that it is dependent on sda for booting.
>> I
>> did find an article on how to set up boot loading on multiple HDD's,
>> including cloning /boot/efi but I now can't find it. Does anyone know
>> of a
>> similar article?
>
>
> Use RAID1 for /boot/efi as well. The
2013 Mar 03 (4): Strange behavior from software RAID
Somewhere, mdadm is cacheing information. Here is my /etc/mdadm.conf file:
more /etc/mdadm.conf
# mdadm.conf written out by anaconda
DEVICE partitions
MAILADDR root
ARRAY /dev/md0 level=raid1 num-devices=4 metadata=0.90 UUID=55ff58b2:0abb5bad:42911890:5950dfce
ARRAY /dev/md1 level=raid1 num-devices=2 metadata=0.90 UUID=315eaf5c:776c85bd:5fa8189c:68a99382
ARRAY /dev/md2 level=raid1 num-devices=2
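When mdadm seems to remember stale array information despite /etc/mdadm.conf, the cached copy is usually the one inside the initramfs, not the file on disk. A sketch of regenerating both (commands are standard mdadm/dracut usage, not taken from the thread; back up mdadm.conf first since the scan output replaces hand-edits like MAILADDR):

```shell
# Rebuild the ARRAY lines from the currently running arrays...
mdadm --detail --scan >> /etc/mdadm.conf   # then prune the old ARRAY lines by hand
# ...and refresh the initramfs, which carries its own cached copy.
dracut -f                                   # CentOS 6/7
# mkinitrd -f /boot/initrd-$(uname -r).img $(uname -r)   # CentOS 5
```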
2009 May 08 (3): Software RAID resync
I have configured 2x 500 GB SATA HDDs as software RAID1 with three arrays,
md0, md1 and md2, with md2 at 400+ GB.
It has now been almost 36 hours and the status is:
cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 hdb1[1] hda1[0]
104320 blocks [2/2] [UU]
resync=DELAYED
md1 : active raid1 hdb2[1] hda2[0]
4096448 blocks [2/2] [UU]
resync=DELAYED
md2 : active raid1
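The resync=DELAYED lines are not errors: md serializes resyncs of arrays that share the same physical disks, so md0 and md1 are simply queued behind whichever array is currently syncing. If the active resync itself is crawling, the kernel speed limits can be raised; a sketch with example values in KB/s:

```shell
# Raise the resync speed floor and ceiling for all md arrays.
sysctl -w dev.raid.speed_limit_min=50000
sysctl -w dev.raid.speed_limit_max=200000
cat /proc/mdstat   # watch the resync progress
```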
2014 Dec 05 (3): CentOS 7 install software Raid on large drives error
...ns within the installer then appear to allow me to create my
LVM with Raid1, but the /boot and /boot/efi are then outside the Raid.
4. It looks like I can set the /boot partition to be Raid1, but then it is
a separate Raid1 from the LVM Raid1 on the rest of the disk. Resulting in
two separate Raid1s; a small Raid1 for /boot and a much larger Raid1 for the
LVM volume group.
I finally manually setup a base partition structure using GParted that
allowed the install to complete using the format below.
sda (3TB)
sda1 /boot fat32 500MB
sda2 /boot/efi fat32 500MB
sdb (3TB)
sd...
2007 Apr 25 (2): Raid 1 newbie question
Hi
I have a Raid 1 centos 4.4 setup and now have this /proc/mdstat output:
[root at server admin]# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 hdc2[1] hda2[0]
1052160 blocks [2/2] [UU]
md1 : active raid1 hda3[0]
77023552 blocks [2/1] [U_]
md0 : active raid1 hdc1[1] hda1[0]
104320 blocks [2/2] [UU]
What happens with md1 ?
My dmesg output is:
[root at
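What happened with md1 is that it lost its second member: [2/1] [U_] means two slots, one active, and the device listing shows only hda3, so the hdc-side partition (presumably hdc3, by analogy with md0/md2) has been kicked out. Assuming the disk itself checks out healthy, the usual fix is to add the member back and let md resync:

```shell
# Re-attach the missing member (partition name assumed from the
# md0/md2 pairing) and watch the rebuild.
mdadm /dev/md1 --add /dev/hdc3
watch cat /proc/mdstat
```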
2007 Aug 21 (6): Software RAID1 or Hardware RAID1 with Asterisk
Dear All,
I would like to get the community's feedback with regard to RAID1 (software or
hardware) implementations with Asterisk.
This is my setup
Motherboard with SATA RAID1 support
CENT OS 4.4
Asterisk 1.2.19
Libpri/zaptel latest release
2.8 Ghz Intel processor
2 80 GB SATA Hard disks
256 MB RAM
digium PRI/E1 card
Following are the concerns I am having
I'm planning to put this Asterisk
2017 Sep 20 (3): xfs not getting it right?
Chris Adams wrote:
> Once upon a time, hw <hw at gc-24.de> said:
>> xfs is supposed to detect the layout of a md-RAID devices when creating the
>> file system, but it doesn't seem to do that:
>>
>>
>> # cat /proc/mdstat
>> Personalities : [raid1]
>> md10 : active raid1 sde[1] sdd[0]
>> 499976512 blocks super 1.2 [2/2] [UU]
>>
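One likely explanation, offered here as an assumption rather than the thread's conclusion: the array in question is RAID1, which has no stripe unit or stripe width, so there is no geometry for mkfs.xfs to detect in the first place. For striped levels the geometry can be passed explicitly; a sketch for a hypothetical array with 64 KiB chunks and two data disks:

```shell
# su = chunk size, sw = number of data-bearing disks (e.g. a
# 4-disk RAID10 has two). Irrelevant for plain RAID1.
mkfs.xfs -d su=64k,sw=2 /dev/md10
```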