Displaying 20 results from an estimated 28 matches for "md125".
2010 Oct 19
3
more software raid questions
...me in solving a problem where one
of my drives had dropped out of (or been kicked out of) the raid1 array.
Something vaguely similar appears to have happened just a few minutes ago,
upon rebooting after a small update. I received four emails like this,
one for /dev/md0, one for /dev/md1, one for /dev/md125 and one for
/dev/md126:
Subject: DegradedArray event on /dev/md125:fcshome.stoneham.ma.us
This is an automatically generated mail message from mdadm
running on fcshome.stoneham.ma.us
A DegradedArray event h...
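When a notification like this arrives, the usual next step is to check which member dropped out and, if the disk itself is healthy, put it back. A minimal sketch (the member name /dev/sdb2 below is only an illustration, not taken from the report):
  cat /proc/mdstat                      # quick overview of all arrays
  mdadm --detail /dev/md125             # shows which member is missing or faulty
  mdadm /dev/md125 --re-add /dev/sdb2   # re-add the dropped member if the disk checks out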
2018 Dec 05
3
Accidentally nuked my system - any suggestions ?
...rebuilding parts.
Once I made sure I retrieved all my data, I followed your suggestion,
and it looks like I'm making big progress. The system booted again,
though it feels a bit sluggish. Here's the current state of things.
[root@alphamule:~] # cat /proc/mdstat
Personalities : [raid1]
md125 : active raid1 sdb2[1] sda2[0]
512960 blocks super 1.0 [2/2] [UU]
bitmap: 0/1 pages [0KB], 65536KB chunk
md126 : inactive sda1[0](S)
16777216 blocks super 1.2
md127 : active raid1 sda3[0]
959323136 blocks super 1.2 [2/1] [U_]
bitmap: 8/8 pages [32KB], 65536KB chunk...
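One hedged way to bring a layout like this back to health, assuming /dev/sdb1 and /dev/sdb3 are the partners that went missing (those device names are assumptions, not taken from the post):
  mdadm --stop /dev/md126                                  # currently inactive with only a spare (S) member
  mdadm --assemble --run /dev/md126 /dev/sda1 /dev/sdb1    # reassemble with both halves present
  mdadm /dev/md127 --add /dev/sdb3                         # give the degraded mirror its second half back
  cat /proc/mdstat                                         # watch [U_] become [UU] as it resyncs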
2018 Jan 12
5
[PATCH 1/1] appliance: init: Avoid running degraded md devices
...s=$?
+ set +e
+
+ # Don't delete the output files if non-zero exit
+ if [ "$status" -eq 0 ]; then rm -f $disk1 $disk2; fi
+
+ exit $status
+}
+trap cleanup INT QUIT TERM EXIT
+
+# Create 2 disks partitioned as:
+# sda1: 20M MD (md127)
+# sda2: 20M PV (vg1)
+# sda3: 20M MD (md125)
+#
+# sdb1: 20M PV (vg0)
+# sdb2: 20M PV (vg2)
+# sdb3: 20M MD (md125)
+#
+# lv0 : LV (vg0)
+# lv1 : LV (vg1)
+# lv2 : LV (vg2)
+# md127 : md (sda1, lv0)
+# md126 : md (lv1, lv2)
+# md125 : md (sda3, sdb3)
+# vg3 : VG (md125)
+# lv3 : LV (vg3)
+
+guestfish <<EOF
+# Add 2 empty disk...
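The goal named in the subject line can be approximated with mdadm's --no-degraded assembly option (whether the patch itself uses this flag isn't shown in the excerpt); a minimal sketch of the idea outside the test harness:
  # Assemble only arrays whose full set of members is present;
  # incomplete (degraded) arrays are left inactive rather than started.
  mdadm --assemble --scan --no-degraded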
2018 Dec 05
0
Accidentally nuked my system - any suggestions ?
...e sure I retrieved all my data, I followed your suggestion,
> and it looks like I'm making big progress. The system booted again,
> though it feels a bit sluggish. Here's the current state of things.
>
> [root@alphamule:~] # cat /proc/mdstat
> Personalities : [raid1]
> md125 : active raid1 sdb2[1] sda2[0]
> 512960 blocks super 1.0 [2/2] [UU]
> bitmap: 0/1 pages [0KB], 65536KB chunk
>
> md126 : inactive sda1[0](S)
> 16777216 blocks super 1.2
>
> md127 : active raid1 sda3[0]
> 959323136 blocks super 1.2 [2/1] [U_]
>...
2013 Mar 08
1
recover lvm from pv
...physical volume. This volume group, vg0, contains 10 ext3 file systems and I need to get the data from them.
What I do know:
[root@mickey ~]# pvscan
PV /dev/sda2 VG VolGroup00 lvm2 [465.66 GB / 0 free]
PV /dev/sdb1 VG VolGroup00 lvm2 [465.75 GB / 0 free]
PV /dev/md125 lvm2 [1.81 TB]
Total: 3 [2.72 TB] / in use: 2 [931.41 GB] / in no VG: 1 [1.81 TB]
[root@mickey ~]#
The first two contain the running system. The third one, /dev/md125, is my LVM physical volume.
[root@mickey ~]# pvdisplay
--------- snip first two pvs -------
"...
2019 Jan 10
3
Help finishing off Centos 7 RAID install
> On 1/9/19 2:30 AM, Gary Stainburn wrote:
>> 1) The big problem with this is that it is dependent on sda for booting.
>> I did find an article on how to set up boot loading on multiple HDDs,
>> including cloning /boot/efi, but I now can't find it. Does anyone know
>> of a similar article?
>
>
> Use RAID1 for /boot/efi as well.  The
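For reference, a minimal sketch of a mirrored ESP, assuming /dev/sda1 and /dev/sdb1 are the two EFI system partitions (device names are assumptions); metadata 1.0 keeps the RAID superblock at the end of the partition, so the firmware still sees a plain FAT filesystem:
  mdadm --create /dev/md125 --level=1 --raid-devices=2 --metadata=1.0 /dev/sda1 /dev/sdb1
  mkfs.vfat /dev/md125
  mount /dev/md125 /boot/efi     # and add a matching /etc/fstab entry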
2015 Feb 18
5
CentOS 7: software RAID 5 array with 4 disks and no spares?
...ichiers Taille Utilisé Dispo Uti% Monté sur
/dev/md127 226G 1,1G 213G 1% /
devtmpfs 1,4G 0 1,4G 0% /dev
tmpfs 1,4G 0 1,4G 0% /dev/shm
tmpfs 1,4G 8,5M 1,4G 1% /run
tmpfs 1,4G 0 1,4G 0% /sys/fs/cgroup
/dev/md125 194M 80M 101M 45% /boot
/dev/sde1 917G 88M 871G 1% /mnt
The root partition (/dev/md127) only shows 226 G of space. So where has
everything gone?
[root@nestor:~] # cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md125 : active raid1 sdc2[2] sdd2[3...
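A hedged way to see where the space went is to compare the assembled array sizes with the member partition sizes, for example:
  mdadm --detail /dev/md127 | grep -E 'Array Size|Used Dev Size|Raid Devices'
  lsblk -o NAME,SIZE,TYPE,MOUNTPOINT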
2013 Mar 03
4
Strange behavior from software RAID
...2
ARRAY /dev/md3 level=raid1 num-devices=2 metadata=0.90 UUID=4cc310ee:60201e16:c7017bd4:9feea350
ARRAY /dev/md4 level=raid1 num-devices=2 metadata=0.90 UUID=ea205046:3c6e78c6:ab84faa4:0da53c7c
After a system reboot, here are the contents of /proc/mdstat:
# cat /proc/mdstat
Personalities : [raid1]
md125 : active raid1 sdc3[0]
455482816 blocks [2/1] [U_]
md0 : active raid1 sdd1[3] sdc1[0] sdb1[1] sda1[2]
1000320 blocks [4/4] [UUUU]
md127 : active raid1 sdd3[1] sdb3[0]
971747648 blocks [2/2] [UU]
md3 : active raid1 sdf1[1] sde1[0]
1003904 blocks [2/2] [UU]
md4 : activ...
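When the assembled names (md125, md127) no longer match what mdadm.conf describes, one common fix, sketched here on the assumption of a dracut-based initramfs, is to regenerate the config from the running arrays and rebuild the initramfs:
  mdadm --detail --scan                     # compare these UUIDs against /etc/mdadm.conf
  mdadm --detail --scan > /etc/mdadm.conf   # or merge the ARRAY lines by hand
  dracut -f                                 # rebuild the initramfs with the updated config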
2015 Feb 18
3
CentOS 7: software RAID 5 array with 4 disks and no spares?
...the partitions has the wrong size. What's the output of lsblk?
[root@nestor:~] # lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 232,9G 0 disk
├─sda1 8:1 0 3,9G 0 part
│ └─md126 9:126 0 3,9G 0 raid1 [SWAP]
├─sda2 8:2 0 200M 0 part
│ └─md125 9:125 0 200M 0 raid1 /boot
└─sda3 8:3 0 76,4G 0 part
  └─md127 9:127 0 229G 0 raid5 /
sdb 8:16 0 232,9G 0 disk
├─sdb1 8:17 0 3,9G 0 part
│ └─md126 9:126 0 3,9G 0 raid1 [SWAP]
├─sdb2 8:18 0 200M 0 part
│ └─md125 9:125 0 200M 0 raid1 /...
2019 Jan 10
0
Help finishing off Centos 7 RAID install
...the vfat
> filesystem behind the RAID metadata.
Anaconda knows that it needs to use 1.0 metadata for EFI system
partitions, which is what I mean when I said "the installer should get
the details right."
# df /boot/efi/
Filesystem     1K-blocks  Used Available Use% Mounted on
/dev/md125        194284 11300    182984   6% /boot/efi
# mdadm --detail /dev/md125
/dev/md125:
          Version : 1.0
> Maybe certain EFI firmware is more tolerant but at least in my case I
> didn't get it to work on RAID1 at all.
>
> I'd really be interested if someone got it to work...
2019 Jul 23
2
mdadm issue
...8:4 0 4G 0 part
│ └─md126 9:126 0 4G 0 raid1
│   └─luks-9dd9aacc-e702-43d9-97c2-e7e954619886
253:1 0 4G 0 crypt [SWAP]
└─sda5 8:5 0 426.2G 0 part
  └─md125 9:125 0 426.1G 0 raid1
sdb 8:16 0 931.5G 0 disk
├─sdb1 8:17 0 500.1G 0 part
│ └─md127 9:127 0 500G 0 raid1
│   └─luks-5e007234-cd4c-47...
2019 Jan 10
1
Help finishing off Centos 7 RAID install
...D metadata.
>
>
> Anaconda knows that it needs to use 1.0 metadata for EFI system
> partitions, which is what I mean when I said "the installer should get
> the details right."
>
> # df /boot/efi/
> Filesystem     1K-blocks  Used Available Use% Mounted on
> /dev/md125        194284 11300    182984   6% /boot/efi
> # mdadm --detail /dev/md125
> /dev/md125:
>           Version : 1.0
OK I see. Do you also have /boot mounted on its own MD device?
>
>> Maybe certain EFI firmware is more tolerant but at least in my case I
>> didn't get...
2019 Jul 08
2
Server fails to boot
...and mouse only to find:
Warning: /dev/disk/by-id/md-uuid-xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx
does not exist
repeated three times - one for each of the /, /boot, and swap raid
member sets along with a
Warning: /dev/disk/by-uuid/xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx does not
exist
for /dev/md125, which is the actual RAID 1 root (/) device.
The system is in a root shell of some sort as it has not made the
transition from initramfs to the mdraid root drive.
There are some other lines of info and a text file with hundreds of lines
of boot info, ending with the above info (as I recall).
I tried a...
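From that initramfs emergency shell, one hedged recovery path (the array name below is an example) is to assemble and start the arrays by hand, let the boot finish, and then rebuild the initramfs:
  mdadm --assemble --scan        # inside the dracut emergency shell
  mdadm --run /dev/md125         # force-start it if it assembled degraded
  exit                           # continue the interrupted boot
  dracut -f                      # after booting, regenerate the initramfs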
2018 Dec 25
0
upgrading 7.5 ==> 7.6
...d devices in my raid1, and it seems pretty
weird. They were all created by the installer (Anaconda) when I first
installed C7, yet three of them are 1.2 and one is 1.0.
# mdadm --detail /dev/md124
/dev/md124:
Version : 1.2
Creation Time : Wed Dec 2 23:13:09 2015
# mdadm --detail /dev/md125
/dev/md125:
Version : 1.2
Creation Time : Wed Dec 2 23:13:21 2015
# mdadm -D /dev/md126
/dev/md126:
Version : 1.2
Creation Time : Wed Dec 2 23:13:22 2015
# mdadm -D /dev/md127
/dev/md127:
Version : 1.0
Creation Time : Wed Dec 2 23:13:18 2015
--...
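A small loop, assuming the four arrays really are md124 through md127 as shown, saves querying each device by hand:
  for md in /dev/md12{4,5,6,7}; do
      echo "== $md"
      mdadm --detail "$md" | grep -E 'Version|Creation Time'
  done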
2019 Jul 23
2
mdadm issue
Gordon Messmer wrote:
> On 7/23/19 11:12 AM, mark wrote:
>
>> Now, cryptsetup gives me the same UUID as I have in /etc/mdadm.conf.
>> The
>> entry in /etc/crypttab looks identical to the RAIDs for root and swap,
>> but nope.
>
>
> Can you post those files somewhere?  I'm confused by the idea that
> cryptsetup is involved in or using the same UUID as an
2019 Jul 23
1
mdadm issue
> Am 23.07.2019 um 22:39 schrieb Gordon Messmer <gordon.messmer at gmail.com>:
>
> I still don't understand how this relates to md125. I don't see it referenced in mdadm.conf. It sounds like you see it in the output from lsblk, but only because you manually assembled it. Do you expect there to be a luks volume there?
To check:
cryptsetup isLuks <device> && echo Success
cryptsetup luksDump <device>...
2011 Nov 24
1
mdadm / RHEL 6 error
libguestfs: error: md_detail: mdadm: md device /dev/md125 does not appear to be active.
FAIL: test-mdadm.sh
Rich.
--
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
virt-df lists disk usage of guests without needing to install any
software inside the virtual machine. Supports Linux and Windows.
http://et.redhat.com/~rjone...
2015 Feb 18
0
CentOS 7: software RAID 5 array with 4 disks and no spares?
...ichiers Taille Utilisé Dispo Uti% Monté sur
/dev/md127 226G 1,1G 213G 1% /
devtmpfs 1,4G 0 1,4G 0% /dev
tmpfs 1,4G 0 1,4G 0% /dev/shm
tmpfs 1,4G 8,5M 1,4G 1% /run
tmpfs 1,4G 0 1,4G 0% /sys/fs/cgroup
/dev/md125 194M 80M 101M 45% /boot
/dev/sde1 917G 88M 871G 1% /mnt
The root partition (/dev/md127) only shows 226 G of space. So where has
everything gone?
[root@nestor:~] # cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md125 : active raid1 sdc2[2] sdd2[3...
2019 Apr 03
4
New post message
Hello!
On my server PC I have CentOS 7 installed.
CentOS Linux release 7.6.1810.
There are four RAID1 arrays (software RAID):
md124 - /boot/efi
md125 - /boot
md126 - /bd
md127 - /
I have configured booting from both drives, and everything works fine as long as both drives are connected.
But if I disconnect either of the drives that the RAID1 arrays are built from, the system fails to boot: it gets partway through and then drops into emergency mode.
By all rules RAID1 in case of...
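For what it's worth, a degraded RAID1 can usually be started by hand from the emergency shell, and it is worth confirming that the firmware has a boot entry for each disk; a sketch using the array names listed above:
  # in the dracut emergency shell:
  mdadm --run /dev/md127         # force the degraded root array to start
  exit                           # let the boot continue
  # once booted, confirm both disks have EFI boot entries:
  efibootmgr -v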
2017 Nov 13
1
Shared storage showing 100% used
...l 4 nodes can
access it (baby steps).  It was all working great for a couple of weeks
until I was alerted that /run/gluster/shared_storage was full, see
below.  There was no warning; it went from fine to critical overnight.
Filesystem                          Size  Used Avail Use% Mounted on
/dev/md125                           50G  102M   47G   1% /
devtmpfs                             32G     0   32G   0% /dev
tmpfs                                32G     0   32G   0% /dev/shm
tmpfs                                32G   17M   32G   1% /run
tmpfs                                32G     0   32G   0% /sys...