similar to: md raid6 recommended?

Displaying 20 results from an estimated 1000 matches similar to: "md raid6 recommended?"

2009 Apr 17
0
problem with 5.3 upgrade or just bad timing?
I've been experiencing delays accessing data off my file server since I upgraded to 5.3... either I hosed something, have bad hardware, or, very unlikely, found a bug. When reading or writing data, the stream to the hdd's stops every 5-10 min and %iowait goes through the roof. I checked the logs and they are filled with diagnostic data that I can't readily decipher. my setup
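When chasing stalls like this, correlating the %iowait spikes with per-device latency and with whatever the kernel logs at the same moment usually narrows it to one disk or controller; a minimal sketch using sysstat's iostat (the interval and log paths are just examples):
  iostat -x 5                                   # -x adds per-device await/%util alongside CPU iowait
  dmesg | tail -50                              # look for ata errors, resets, timeouts
  grep -i 'ata\|exception\|reset' /var/log/messages | tail -20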
2005 Oct 20
1
RAID6 in production?
Is anyone using RAID6 in production? In moving from hardware RAID on my dual 3ware 7500-8 based systems to md, I decided I'd like to go with RAID6 (since md is less tolerant of marginal drives than is 3ware). I did some benchmarking and was getting decent speeds with a 128KiB chunksize. So the next step was failure testing. First, I fired off memtest.sh as found at
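For reference, the create-then-fail cycle described here can be driven entirely with mdadm; the device names and member count below are placeholders rather than details from the post:
  mdadm --create /dev/md0 --level=6 --raid-devices=8 --chunk=128 /dev/sd[b-i]
  watch cat /proc/mdstat                             # wait for the initial resync to finish
  mdadm /dev/md0 --fail /dev/sdc --remove /dev/sdc   # simulate a dead drive
  mdadm /dev/md0 --add /dev/sdc                      # re-add it and watch the rebuild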
2013 Nov 24
3
The state of btrfs RAID6 as of kernel 3.13-rc1
Hi, what is the general state of btrfs RAID6 as of kernel 3.13-rc1 and the latest btrfs tools? More specifically: - Is it able to correct errors during scrubs? - Is it able to transparently handle disk failures without downtime? - Is it possible to convert btrfs RAID10 to RAID6 without recreating the fs? - Is it possible to add/remove drives to a RAID6 array? Regards, Hans-Kristian -- To
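For the convert and add/remove questions, the relevant btrfs-progs commands are sketched below; whether they behave reliably on RAID5/6 at 3.13-rc1 is exactly what is being asked, so treat this as syntax only (mount point and device names are placeholders):
  btrfs device add /dev/sde /mnt                              # grow the array
  btrfs balance start -dconvert=raid6 -mconvert=raid6 /mnt    # convert RAID10 to RAID6 in place
  btrfs scrub start -B /mnt                                   # checksum scrub, run in the foreground
  btrfs device delete /dev/sde /mnt                           # shrink the array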
2009 May 25
1
raid5 or raid6 level cluster
Hello, is there any way to create a raid6 or raid5 level glusterfs installation? From the docs I understood that I can do a raid1-based glusterfs installation or raid0 (striping data to all servers) and a raid10-based solution, but the raid10-based solution is not cost effective because it needs too many servers. Do you have a plan to keep one or two servers as parity for the whole glusterfs system
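GlusterFS has no raid5/raid6 volume type in the classic sense; the closest equivalent is a dispersed (erasure-coded) volume, added in later GlusterFS releases, which spreads parity-like redundancy across bricks instead of keeping full replicas. A sketch assuming six servers tolerating two failures (all names are placeholders):
  gluster volume create gv0 disperse 6 redundancy 2 \
      server1:/bricks/b1 server2:/bricks/b1 server3:/bricks/b1 \
      server4:/bricks/b1 server5:/bricks/b1 server6:/bricks/b1
  gluster volume start gv0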
2010 May 28
2
permanently add md device
Hi all, currently I'm setting up a 5.4 server and trying to create a 3rd RAID device. When I run: $mdadm --create /dev/md2 -v --raid-devices=15 --chunk=32 --level=raid6 /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk /dev/sdl /dev/sdm /dev/sdn /dev/sdo /dev/sdp /dev/sdq the device file "md2" is created and the RAID is being configured, but somehow
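Assuming the missing piece is persistence across reboots rather than creation, the usual fix on CentOS 5 is to record the array in /etc/mdadm.conf so it is assembled at boot; a minimal sketch:
  mdadm --detail --scan | grep md2 >> /etc/mdadm.conf   # append the ARRAY line for md2
  cat /etc/mdadm.conf                                   # verify the DEVICE/ARRAY entries look sane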
2013 May 23
11
raid6: rmw writes all the time?
Hi all, we got a new test system here and I also tested btrfs raid6 on it. Write performance is slightly lower than hw-raid (LSI megasas) and md-raid6, but it would probably be much better than either of these two if it wouldn't read all the time during the writes. Is this a known issue? This is with linux-3.9.2. Thanks, Bernd -- To unsubscribe from this list: send the line
2013 Feb 26
0
Dom0 OOM, page allocation failure
Hello, I'm running into some trouble with what appear on the surface to be OOM issues in Dom0, but I'm not seeing any other evidence. This typically happens during periods of high I/O, and has occurred during RAID initial sync, and mkfs.ext4ing (as a test, no intention to keep ext4 on this array). I've found some older posts citing very similar circumstances, however
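One commonly suggested mitigation for page-allocation failures in dom0 under heavy I/O is to pin dom0's memory instead of letting it balloon and to raise the kernel's free-memory floor; the values below are illustrative, not tuned recommendations:
  # Xen hypervisor boot line (grub): dom0_mem=2048M
  sysctl -w vm.min_free_kbytes=131072                  # keep more memory free for atomic allocations
  echo 'vm.min_free_kbytes = 131072' >> /etc/sysctl.conf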
2010 Mar 04
1
removing a md/software raid device
Hello folks, I successfully stopped the software RAID. How can I delete the ones found on scan? I also see them in dmesg. [root@extragreen ~]# mdadm --stop --scan ; echo $? 0 [root@extragreen ~]# mdadm --examine --scan ARRAY /dev/md0 level=raid5 num-devices=4 UUID=89af91cb:802eef21:b2220242:b05806b5 ARRAY /dev/md0 level=raid6 num-devices=4 UUID=3ecf5270:339a89cf:aeb092ab:4c95c5c3 [root
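Stopping an array leaves the md superblocks on the member disks, which is why --examine --scan still reports them; wiping those superblocks removes the arrays for good. A sketch, with member devices as placeholders since the post does not list them (this destroys the array metadata):
  mdadm --stop /dev/md0                                         # make sure nothing is assembled
  mdadm --zero-superblock /dev/sdb /dev/sdc /dev/sdd /dev/sde   # wipe md metadata from the members
  mdadm --examine --scan                                        # should now print nothing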
2001 May 17
0
Fwd: ext3 for 2.4
---------- Forwarded Message ---------- Subject: ext3 for 2.4 Date: Thu, 17 May 2001 21:20:38 +1000 From: Andrew Morton <andrewm@uow.edu.au> To: ext2-devel@lists.sourceforge.net, "Peter J. Braam" <braam@mountainviewdata.com>, Andreas Dilger <adilger@turbolinux.com>, "Stephen C. Tweedie" <sct@redhat.com> Cc: linux-fsdevel@vger.kernel.org Summary:
2008 Dec 15
2
Zaptel / TDM400P card stopped working
Hi, I have a Dell PE2300 with a Digium TDM400P line card in it (with one module to handle an inbound phone line). This is running on a Fedora 8 system with Asterisk 1.4.21.2-1.fc8. This system has been working nicely for about 12 months. After a recent move of office and relocation of the server, Asterisk is back online, but the TDM line card has stopped working. I have spent half a day
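To tell whether the card itself survived the move, the first checks are usually whether it still enumerates on the PCI bus and whether the Zaptel driver binds to it; the commands below assume the Zaptel-era tooling that ships alongside Asterisk 1.4:
  lspci | grep -i 'tiger jet'          # the TDM400P shows up as a Tiger Jet Network PCI device
  modprobe wctdm && dmesg | tail       # reload the TDM400P driver and check for errors
  ztcfg -vv                            # apply /etc/zaptel.conf and list configured channels
  asterisk -rx 'zap show channels'     # confirm Asterisk sees the FXO module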
2017 Mar 12
0
nVidia ION on Acer Revo out of signal after boot.
I think you want kernel 4.10, or cherry-pick upstream commit 7dfee6827780d4228148263545af936d0cae8930. On Sun, Mar 12, 2017 at 11:27 AM, Marcelo Ribeiro <mbribeiro at gmail.com> wrote: > Hi, I am changing from proprietary nvidia drivers to Nouveau drivers in my > mediacenter. > > I run a Gentoo linux in this hardware and I am facing an out of signal > screen just after the
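If staying on an older kernel, the referenced fix can be backported with a plain cherry-pick; this assumes a local clone of the mainline tree that already contains the commit:
  cd linux                             # local mainline clone
  git cherry-pick 7dfee6827780d4228148263545af936d0cae8930
  # then rebuild the kernel (or just the nouveau module) as usual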
2008 Jan 15
19
How do you make an MGS/OSS listen on 2 NICs?
I am running the CentOS 5 distribution without adding any updates from CentOS. I am using the lustre 1.6.4.1 kernel and software. I have two NICs that run through different switches. I have set the lustre options in my modprobe.conf to look like this: options lnet networks=tcp0(eth1,eth0) My MGS seems to be listening only on the first interface, however. When I try and ping the 1st interface (eth1)
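A common approach for multi-homed Lustre servers is to give each NIC its own LNET network, so each interface gets its own NID that clients can target; whether that fits depends on how the clients reach the servers. A sketch of that modprobe.conf variant (the interface names follow the post; the tcp1 network name is an assumption):
  # /etc/modprobe.conf
  options lnet networks="tcp0(eth0),tcp1(eth1)"
  # after reloading the lustre/lnet modules, verify the NIDs:
  lctl list_nids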
2017 Mar 12
2
nVidia ION on Acer Revo out of signal after boot.
Hi, I am changing from the proprietary nvidia drivers to the Nouveau drivers in my media center. I run Gentoo Linux on this hardware and I am facing an out-of-signal screen just after boot (when the nouveau module is loaded). If I blacklist the module and modprobe it at the prompt, the screen goes out of signal on my 1080p TV at that moment. I tried recompiling the kernel enabling and disabling a
2007 Jun 14
0
(no subject)
I installed a fresh copy of Debian 4.0 and Xen 3.1.0 SMP PAE from the binaries. I had a few issues getting fully virtualized guests up and running, but finally managed to figure everything out. Now I'm having a problem with paravirtualized guests and hoping that someone can help. My domU config: # # Configuration file for the Xen instance dev.umucaoki.org, created # by xen-tools
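For comparison, a minimal xen-tools-style paravirtualized domU config looks roughly like the sketch below; every path, device and size here is a placeholder rather than the poster's actual (truncated) config:
  # /etc/xen/dev.umucaoki.org.cfg  (illustrative)
  kernel  = '/boot/vmlinuz-2.6.18-4-xen-686'
  ramdisk = '/boot/initrd.img-2.6.18-4-xen-686'
  memory  = 256
  name    = 'dev.umucaoki.org'
  vif     = [ 'bridge=xenbr0' ]
  disk    = [ 'phy:/dev/vg0/dev-disk,xvda1,w', 'phy:/dev/vg0/dev-swap,xvda2,w' ]
  root    = '/dev/xvda1 ro'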
2007 Nov 03
4
Problems exporting a PCI device to a domU...
Hi! I am trying to export a PCI device (an AVM Fritzcard PCI ISDN card...) to a domU but when starting my domU I am getting this error: "pciback pci-4-0: 22 Couldn't locate PCI device (0000:00:06.0)! perhaps already in-use?" My system is running both debian etch in dom0 and domU... Below you can find (hopefully) all important information... Sincerely, Gaubatz Patrick
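The "perhaps already in-use?" part usually means dom0 still owns the device, so it has to be handed to pciback before the domU starts; a sketch for the Xen 3.x era, reusing the BDF from the error message:
  # pciback built as a module: hide the card before the dom0 ISDN driver claims it
  modprobe pciback hide='(0000:00:06.0)'
  # (if pciback is built into the dom0 kernel, pass pciback.hide=(0000:00:06.0) on its boot line)
  # then reference the device in the domU config:
  #   pci = [ '00:06.0' ]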
2009 Mar 31
2
DomU console appears to hang when starting
Hello, I have Xen running on an Ubuntu 8.04 LTS machine. This machine has been in production for a few months now. Up until recently, all the DomU machines have also been Ubuntu 8.04 machines. Recently, I tried creating a Debian Lenny machine. I use xen tools to create the DomU machines. Upon completion of creating the DomU with xen tools, I start the DomU using the [sanitized] command
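One frequent cause of a console that appears to hang with a Lenny domU is a console-name mismatch: the guest kernel writes to a Xen console device (hvc0 on newer kernels, xvc0 on older xenlinux ones) that has no getty attached. A hedged sketch of the usual fix, with the exact device name depending on the guest kernel:
  # in the domU config file:
  extra = 'console=hvc0'
  # inside the guest, /etc/inittab needs a getty on the same device, e.g.:
  #   co:2345:respawn:/sbin/getty 38400 hvc0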
2013 Dec 10
0
Re: gentoo linux, problem starting vm's when cache=none
On Tue, Dec 10, 2013 at 03:21:59PM +0100, Marko Weber | ZBF wrote: > Hello Daniel, > > On 2013-12-10 11:23, Daniel P. Berrange wrote: > >On Tue, Dec 10, 2013 at 11:20:35AM +0100, Marko Weber | ZBF wrote: > >> > >>hello mailinglist, > >> > >>on gentoo system with qemu-1.6.1, libvirt 1.1.4, libvirt-glib-0.1.7, > >>virt-manager 0.10.0-r1
2001 Jun 26
2
Re: Ext3 kernel RPMS (2.4.5 & 2.2.19)
hi, do these rpms differ from redhat's rawhide 2.4.5 kernel, which seems to contain ext3? So my question is whether your rpm contains a different ext3 than rh's rpm, or can I simply use rh's rawhide rpms? thanks. yours. ps. please reply to my private address too, since I'm not on the list. thanks. > Hi, > > Mostly for my own use, I prepared two kernel RPM's with Ext3 in them.
2007 Nov 26
15
bad 1.6.3 striped write performance
Hi, I'm seeing what can only be described as dismal striped write performance from lustre 1.6.3 clients :-/ 1.6.2 and 1.6.1 clients are fine. 1.6.4rc3 clients (from cvs a couple of days ago) are also terrible. the below shows that the OS (centos4.5/5) or fabric (gigE/IB) or lustre version on the servers doesn't matter - the problem is with the 1.6.3 and 1.6.4rc3 client kernels
2007 Feb 23
1
Samba + Bonding = Terrible Performance
Hello, please CC replies; I'm not subscribed. Performance with samba and only samba degrades terribly when we use the bonding driver to aggregate two ethernet cards. Instead of a steady file copy it seems to go in spurts. If I pull out one of the network cables (doesn't matter which) performance returns to full speed. I can pull the cable in the middle of a transfer and it will go
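Bursty transfers over a bond are often a symptom of the bonding mode rather than of Samba: balance-rr splits a single TCP stream across both NICs and the resulting out-of-order segments stall the connection, while modes that keep each flow on one NIC avoid this. A sketch of the RHEL-style module options (the mode choice and miimon value are assumptions):
  # /etc/modprobe.conf
  alias bond0 bonding
  options bond0 mode=802.3ad miimon=100   # or mode=balance-alb if the switch lacks LACP
  # check the active mode and slave status:
  cat /proc/net/bonding/bond0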