similar to: raid6: rmw writes all the time?

Displaying 20 results from an estimated 1000 matches similar to: "raid6: rmw writes all the time?"

2015 Feb 18
5
CentOS 7: software RAID 5 array with 4 disks and no spares?
Hi, I just replaced Slackware64 14.1 running on my office's HP ProLiant MicroServer with a fresh installation of CentOS 7. The server has 4 x 250 GB disks. Every disk is configured like this: * 200 MB /dev/sdX1 for /boot * 4 GB /dev/sdX2 for swap * 248 GB /dev/sdX3 for / There are supposed to be no spare devices. /boot and swap are all supposed to be assembled in RAID level 1 across
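As a rough sketch of how such a layout is typically assembled with mdadm (array numbers and device names here are illustrative, not taken from the original post):

    # 4-way RAID1 for /boot and swap, RAID5 with no spares for /
    mdadm --create /dev/md0 --level=1 --raid-devices=4 /dev/sd[abcd]1
    mdadm --create /dev/md1 --level=1 --raid-devices=4 /dev/sd[abcd]2
    mdadm --create /dev/md2 --level=5 --raid-devices=4 --spare-devices=0 /dev/sd[abcd]3
    cat /proc/mdstat    # verify that no member shows up as a spare (S)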
2018 Dec 05
3
Accidentally nuked my system - any suggestions ?
On 04/12/2018 at 23:50, Stephen John Smoogen wrote: > In the rescue mode, recreate the partition table which was on sdb > by copying over what is on sda > > > sfdisk -d /dev/sda | sfdisk /dev/sdb > > This will give the kernel enough to know it has things to do on > rebuilding parts. Once I made sure I retrieved all my data, I followed your suggestion, and it looks
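For context, the usual follow-up after cloning the partition table back is to re-add the partitions to their arrays and reinstall the bootloader; a minimal sketch, assuming a two-disk RAID1 with md0/md1 (names are placeholders):

    sfdisk -d /dev/sda | sfdisk /dev/sdb    # clone the partition table, as suggested above
    mdadm /dev/md0 --add /dev/sdb1          # re-add the members so the mirrors resync
    mdadm /dev/md1 --add /dev/sdb2
    watch cat /proc/mdstat                  # wait for the rebuild to finish
    grub2-install /dev/sdb                  # put a bootloader back on the overwritten disk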
2011 Oct 08
1
CentOS 6.0 CR mdadm-3.2.2 breaks Intel BIOS RAID
I just upgraded my home KVM server to CentOS 6.0 CR to make use of the latest libvirt and now my RAID array with my VM storage is missing. It seems that the upgrade to mdadm-3.2.2 is the culprit. This is the output from mdadm when scanning that array: # mdadm --detail --scan ARRAY /dev/md0 metadata=imsm UUID=734f79cf:22200a5a:73be2b52:3388006b ARRAY /dev/md126 metadata=imsm
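One commonly suggested workaround for IMSM arrays that stop auto-assembling (a sketch, not necessarily the fix applied in this thread) is to pin the container and volume in mdadm.conf and reassemble:

    mdadm --detail --scan >> /etc/mdadm.conf    # record the IMSM container and volume UUIDs
    mdadm --assemble --scan                     # assemble everything listed in mdadm.conf
    cat /proc/mdstat                            # check that the volume (e.g. md126) is active again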
2010 Oct 19
3
more software raid questions
hi all! back in Aug several of you assisted me in solving a problem where one of my drives had dropped out of (or been kicked out of) the raid1 array. something vaguely similar appears to have happened just a few mins ago, upon rebooting after a small update. I received four emails like this, one for /dev/md0, one for /dev/md1, one for /dev/md125 and one for /dev/md126: Subject: DegradedArray
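When such DegradedArray mails arrive, the usual first steps look something like this (a sketch; md0 and sdb1 are placeholders):

    cat /proc/mdstat                    # see which arrays show a missing member, e.g. [U_]
    mdadm --detail /dev/md0             # identify the removed or faulty device
    mdadm /dev/md0 --re-add /dev/sdb1   # try to re-add it; fall back to --add if re-add is refused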
2018 Dec 21
2
upgrading 7.5 ==> 7.6
> On Wed, Dec 19, 2018 at 01:50:06PM -0500, Fred Smith wrote: >> Hi all! >> >> There have been a large enough number of people posting here about >> difficulties when upgrading from 7.5 to 7.6 that I'm being somewhat >> paranoid about it. >> >> I have several machines to upgrade, but so far the only one I've dared >> to work on (least
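For reference, the point-release upgrade itself is normally just a full update followed by a reboot; a minimal sketch:

    yum clean all    # drop stale repository metadata
    yum update       # pulls in the 7.6 packages
    reboot           # boot into the new kernel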
2010 Sep 10
11
Large directory performance
We have been struggling with our Lustre performance for some time now, especially with large directories. I recently did some informal benchmarking (on a live system, so I know the results are not scientifically valid) and noticed a huge drop in performance of reads (stat operations) past 20k files in a single directory. I'm using bonnie++, disabling IO testing (-s 0) and just creating, reading,
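A bonnie++ invocation along the lines the poster describes (block IO tests disabled, only the file create/stat/delete phases) might look like this; the directory and file count are placeholders:

    # -s 0 disables the block IO tests; -n 40 creates 40*1024 files
    # in a single directory for the create/stat/unlink phases
    bonnie++ -d /mnt/lustre/benchdir -s 0 -n 40 -u nobody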
2020 Oct 22
3
ThinkStation with BIOS RAID and disk error messages in gparted
My ThinkStation runs CentOS 7, which I installed on a BIOS RAID 0 setup with two identical 256 GB SSDs after removing Windows. It runs fine, but I just discovered in gparted something that does not seem right: - Launching gparted it complains "invalid argument during seek for read on /dev/md126" and when I click on Ignore I get another error "The backup GPT table is corrupt, but the
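On firmware RAID the complaint about a corrupt backup GPT is often just the secondary header not sitting at the very end of the md device. It can be inspected, and if appropriate rewritten, with gdisk; a cautious sketch, assuming the array really is GPT-partitioned:

    gdisk -l /dev/md126    # read-only listing; reports whether the main/backup headers are OK
    # inside interactive gdisk, 'v' verifies the disk and 'w' rewrites both GPT headers,
    # but only write once you are sure the partition contents are intact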
2013 Jan 30
9
Poor performance of btrfs. Suspected unidentified btrfs housekeeping process which writes a lot
Welcome, I've been using btrfs for over 3 months to store my personal data on my NAS server. Almost all interactions with files on the server are done using the unison synchronizer. After another use of bedup (https://github.com/g2p/bedup) on my btrfs volume I experienced a huge performance loss with synchronization. It now takes over 3 hours for what used to take only 15 minutes! File
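Some starting points for narrowing down what the filesystem is doing (a diagnostic sketch, not a known fix for this report; the mount point is a placeholder):

    btrfs filesystem df /mnt/nas      # how data and metadata space is allocated
    btrfs filesystem show             # devices and usage per filesystem
    btrfs filesystem defragment -r -v /mnt/nas/data   # note: defrag can undo the shared extents bedup created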
2011 Oct 31
2
libguestfs and md devices
We've recently discovered that libguestfs can't handle guests which use md. There are (at least) 2 reasons for this: Firstly, the appliance doesn't include mdadm. Without this, md devices aren't detected during the boot process. Simply adding mdadm to the appliance package list fixes this. Secondly, md devices referenced in fstab as, e.g. /dev/md0, aren't handled
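Once mdadm is in the appliance, md detection can be checked from guestfish; a sketch, assuming a libguestfs build that includes the md API:

    guestfish --ro -a guest.img run : list-md-devices
    # should print /dev/md127 (or similar) if the guest's arrays were assembled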
2018 Jan 12
5
[PATCH 1/1] appliance: init: Avoid running degraded md devices
The '--no-degraded' flag in the first mdadm call inhibits the startup of an array unless all expected drives are present, which prevents starting arrays in a degraded state. The second mdadm call (after LVM is scanned) will scan the devices not yet used and attempt to run all found arrays, even if they are degraded. Two new tests are added. This fixes rhbz1527852. Here is boot-benchmark
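The two-phase assembly the patch describes can be sketched roughly like this (the flags are standard mdadm options, not necessarily the exact lines from the patch):

    mdadm -As --no-degraded    # first pass: assemble only arrays with all members present
    # ... LVM scan/activation happens here ...
    mdadm -As --run            # second pass: start whatever is left, even if degraded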
2020 Oct 23
0
ThinkStation with BIOS RAID and disk error messages in gparted
> My ThinkStation runs CentOS 7, which I installed on a BIOS RAID 0 setup > with two identical 256 GB SSDs after removing Windows. It runs fine but I > just discovered in gparted something that does not seem right: > > - Launching gparted it complains "invalid argument during seek for read on > /dev/md126" and when I click on Ignore I get another error "The backup
2019 Apr 03
4
New post message
Hello! On my server PC I have CentOS 7 installed: CentOS Linux release 7.6.1810. There are four RAID1 arrays (software RAID): md124 - /boot/efi, md125 - /boot, md126 - /bd, md127 - /. I have configured booting from both drives, and everything works fine if both drives are connected. But if I disable either of the drives from which the RAID1 is built, the system crashes; there is a partial boot and as a result the
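A common way to make either disk bootable on its own is to keep both ESPs populated and register both with the firmware; a sketch, assuming the ESP is partition 1 on each disk and the CentOS shim is installed on both:

    efibootmgr -c -d /dev/sda -p 1 -L "CentOS (sda)" -l '\EFI\centos\shimx64.efi'
    efibootmgr -c -d /dev/sdb -p 1 -L "CentOS (sdb)" -l '\EFI\centos\shimx64.efi'
    efibootmgr -v    # confirm both entries exist and check the boot order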
2018 Dec 04
3
Accidentally nuked my system - any suggestions ?
On 04/12/2018 at 23:10, Gordon Messmer wrote: > The system should boot normally if you disconnect sdb. Have you > tried that? Unfortunately that didn't work. The boot process stops here: [OK] Reached target Basic System. Now what? -- Microlinux - Solutions informatiques durables 7, place de l'Église - 30730 Montpezat Site : https://www.microlinux.fr Blog :
2013 Feb 11
1
mdadm: hot remove failed for /dev/sdg: Device or resource busy
Hello all, I have run into a sticky problem with a failed device in an md array, and I asked about it on the linux raid mailing list, but since the problem may not be md-specific, I am hoping to find some insight here. (If you are on the MD list, and are seeing this twice, I humbly apologize.) The summary is that during a reshape of a raid6 on an up to date CentOS 6.3 box, one disk failed, and
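When mdadm refuses to hot-remove a dead disk, a frequently used (and somewhat heavy-handed) sequence is to fail and remove it at the md level and then detach it at the SCSI layer; a sketch with placeholder names:

    mdadm /dev/md0 --fail /dev/sdg1         # mark the member faulty if md has not already done so
    mdadm /dev/md0 --remove /dev/sdg1       # the step that was reported as "Device or resource busy"
    echo 1 > /sys/block/sdg/device/delete   # last resort: tell the kernel to drop the disk entirely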
2013 Apr 17
2
libvirt support for qcow2 rebase?
I have not found support in libvirt (nor virsh) for doing the equivalent of "qemu-img rebase ....". The use case: You have copied a qcow2 stack and the new files have different names or reside in a different directory. Therefore you need to change the backing file. Is there a way to do this? Is this a planned addition to libvirt? Harald
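Outside of libvirt, the pointer-only change described here is what qemu-img's "unsafe" rebase does; a sketch (file names are placeholders):

    # -u only rewrites the backing-file reference in the overlay's header; it does not
    # copy data, so the new backing file must be content-identical to the old one
    qemu-img rebase -u -b /new/path/base.qcow2 overlay.qcow2
    qemu-img info overlay.qcow2    # verify the backing file now points at the new path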
2018 Dec 04
5
Accidentally nuked my system - any suggestions ?
Hi, My workstation is running CentOS 7 on two disks (sda and sdb) in a software RAID 1 setup. It looks like I accidentally nuked it. I wanted to write an installation ISO file to a USB disk, and instead of typing dd if=install.iso of=/dev/sdc I typed /dev/sdb. As soon as I hit <Enter>, the screen froze. I tried a hard reset, but of course, the boot process would stop short very early in
2017 Apr 08
2
lvm cache + qemu-kvm stops working after about 20GB of writes
Hello, I would really appreciate some help/guidance with this problem. First of all, sorry for the long message. I would file a bug, but do not know if it is my fault, dm-cache, qemu or (probably) a combination of both. And I can imagine some of you have this setup up and running without problems (or maybe you think it works, just like I did, but it does not): PROBLEM LVM cache writeback
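For reference, a writeback dm-cache setup of the kind described is usually built along these lines (volume group, LV and device names are placeholders, not taken from the report):

    lvcreate --type cache-pool -L 20G -n cpool vg_data /dev/nvme0n1   # cache pool on the fast device
    lvconvert --type cache --cachemode writeback --cachepool vg_data/cpool vg_data/vm_images
    lvs -a vg_data    # confirm the cache pool is attached to the origin LV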
2012 Oct 03
1
Retraction: Protocol stacking: gluster over NFS
Hi All, Well, it <http://goo.gl/hzxyw> was too good to be true. Under extreme, extended IO on a 48-core node, some part of the NFS stack collapses and leads to an IO lockup through NFS. We've replicated it on 48-core and 64-core nodes, but don't know yet whether it acts similarly on lower-core-count nodes. Though I haven't had time to figure out exactly /how/ it collapses, I
2002 Sep 22
2
Assertion failure in ext3_get_block() at inode.c:853: "handle != 0"
Hi, Got the following on Linux 2.5.37 trying to run apt-get update. MikaL Sep 21 23:10:05 devil kernel: Assertion failure in ext3_get_block() at inode.c:853: "handle != 0" Sep 21 23:10:05 devil kernel: kernel BUG at inode.c:853! Sep 21 23:10:05 devil kernel: invalid operand: 0000 Sep 21 23:10:05 devil kernel: CPU: 1 Sep 21 23:10:05 devil kernel: EIP:
2020 Sep 16
7
storage for mailserver
hi, I am planning to replace my old CentOS 6 mail server soon. Most details are quite obvious and do not need to be changed, but the old system was running on spinning discs, and this is certainly not the best option for today's mail servers. With spinning discs, HW-RAID6 was the way to go to increase reliability and speed. Today I get the feeling that traditional RAID is not the best option for