Displaying 6 results from an estimated 6 matches for "mdxxx".
2017 Apr 08
2
lvm cache + qemu-kvm stops working after about 20GB of writes
...ils net-tools
# disable ksm (probably not important / needed)
systemctl disable ksm
systemctl disable ksmtuned
3. create LVM cache
# set some variables and create a raid1 array with the two SSDs
# (fill in VGBASE; the two SSD partitions and the two md devices must differ)
VGBASE= && ssddevice1=/dev/sdX1 && ssddevice2=/dev/sdY1 &&
hddraiddevice=/dev/mdXXX && ssdraiddevice=/dev/mdYYY && mdadm --create
--verbose ${ssdraiddevice} --level=mirror --bitmap=none --raid-devices=2
${ssddevice1} ${ssddevice2}
# create PV and extend VG
pvcreate ${ssdraiddevice} && vgextend ${VGBASE} ${ssdraiddevice}
# create slow LV on HDDs (use...
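The excerpt cuts off before the cache is actually assembled. The usual
continuation on CentOS 7 looks roughly like the sketch below; the LV names
and sizes (lv_slow, lv_cache, 500G, 100G) are illustrative assumptions, not
the poster's values:
# create the slow LV on the HDD raid (name and size are placeholders)
lvcreate -n lv_slow -L 500G ${VGBASE} ${hddraiddevice}
# create a cache pool on the SSD raid
lvcreate --type cache-pool -n lv_cache -L 100G ${VGBASE} ${ssdraiddevice}
# attach the cache pool to the slow LV (writethrough is the safer default)
lvconvert --type cache --cachepool ${VGBASE}/lv_cache ${VGBASE}/lv_slow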
2005 Apr 17
5
Mirrored drives won't boot after installation
I have a P4 motherboard with 2 IDE interfaces. I connect 2 40 GB drives as
hda and hdc, install CentOS 4 from a CDROM, and partition the drives with 2
raid partitions each plus a swap partition on hda, then make md0 and md1 to
hold /boot and / respectively. The install goes well and everything looks
great, but when I go to reboot from the drives, all I get is "grub" and no
boot. I have tried this ten
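The classic cause of a bare "grub" prompt with this layout is that GRUB was
written to the MBR of only one of the mirrored disks (or to neither). A
sketch of the usual GRUB-legacy fix, assuming /boot sits on the first
partition of hda and hdc as described above:
grub> root (hd0,0)
grub> setup (hd0)
grub> device (hd0) /dev/hdc
grub> root (hd0,0)
grub> setup (hd0)
grub> quit
The device line remaps hdc as hd0 so that GRUB writes an MBR there that can
boot on its own if hda is removed or fails.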
2015 Nov 20
1
EL7: Detecting FS errors on XFS while mounted
Is there a way of checking an XFS filesystem for clean/dirty status while
mounted?
One of the checks we've long performed is an FS-level error check. This is
*not a full-on fsck*, this is "asking the file system if it noted any
problems". This is done while the file system is mounted and "hot". For
example, here's how we'd check an ext* partition:
# debugfs
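The debugfs command is cut off in the excerpt; the ext* check being
described is typically along these lines (the device name is a placeholder,
and this is a reconstruction, not the poster's verbatim command):
# read the superblock state of a mounted ext2/3/4 filesystem
# (debugfs opens the device read-only by default)
debugfs -R "show_super_stats -h" /dev/sdXN 2>/dev/null | grep -i state
A state of "clean" means no errors were recorded; "clean with errors" means
the filesystem has noted a problem and wants a check.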
2018 Jul 31
1
Increase Disk Space with LVM
Dear CentOS-Community,
we have a server with four hard drives that are configured as raid10
(/dev/sda).
Now, /home and /root are almost full. Therefore, we decided to buy four
additional hard drives that should also be configured as raid10 (/dev/sdb).
I want to use LVM to extend disk space for root and home.
My (successful) test procedure in a virtual environment looks like this:
1. divide
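The test procedure is truncated after step 1. A minimal sketch of the usual
pattern follows; the partition, VG, and LV names (/dev/sdb1, vg_root, root,
home) are assumptions for illustration, not taken from the poster's setup:
# turn the new raid10 device into a PV and add it to the existing VG
# (vg_root is a placeholder VG name)
pvcreate /dev/sdb1
vgextend vg_root /dev/sdb1
# grow the LVs and their filesystems in one step (-r resizes the fs as well)
lvextend -r -l +50%FREE /dev/vg_root/home
lvextend -r -L +20G /dev/vg_root/root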
2017 Apr 10
0
lvm cache + qemu-kvm stops working after about 20GB of writes
2017 Apr 20
2
lvm cache + qemu-kvm stops working after about 20GB of writes