similar to: Problem with RAID on 6.3

Displaying 20 results from an estimated 300 matches similar to: "Problem with RAID on 6.3"

2005 Jul 27
0
Please, I am looking for help!
Hi all! I don't know where I can ask about this and I hope you can help me. I have a digital voice recorder, an Olympus DS-2300; it works with *.dss files (Digital Speech Standard). There is an Olympus DSS Player for Windows, but nothing can play the files on Linux. On the Olympus developer homepage I found a "Sound SDK for Windows", but they don't want to make it open. Please can you
2014 Oct 04
2
Mounting LUNs from a SAN array - LUN mappings to devices in /dev/ - are they static?
Hi All :) I am currently involved in a project with a SAN array (Sun StorageTek 2540) which exports LUNs to some servers running CentOS 5.2 x86. I will be performing a migration to CentOS 5.9 x86_64 in some time and am gathering the needed info now :) I am trying to find the place in the OS where the information about LUN mappings to /dev/ devices lives. For example, on the array level I
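A minimal sketch of where that mapping is usually visible, assuming a stock CentOS install (device names are placeholders); the /dev/sdX names themselves are not guaranteed to be stable, but the by-id/by-path symlinks and multipath maps are:

  $ ls -l /dev/disk/by-id/      # WWN/WWID-based symlinks pointing at the current sdX nodes
  $ ls -l /dev/disk/by-path/    # symlinks keyed on the HBA/target/LUN path
  $ multipath -ll               # dm-multipath maps, if device-mapper-multipath is in use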
2020 Aug 08
1
[PATCH nbdkit] plugins: python: Fix imageio example instructions
Fix the instructions so they should work now on CentOS 8.2: - Add note about oVirt setup - List all required packages - Add missing -W flag to the qemu-img command I tested this on Fedora 32, with ovirt-engine-sdk built from source, since the package is not available on Fedora 31 or 32, and the Fedora 30 package cannot be installed on Fedora 32. Signed-off-by: Nir Soffer
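For context, a hedged example of a qemu-img convert call using the -W (out-of-order writes) flag the patch mentions; file names and formats here are placeholders, not the exact command from the example instructions:

  $ qemu-img convert -p -f raw -O qcow2 -W disk.img disk.qcow2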
2016 Jun 01
0
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
I did some additional testing - I stopped Kafka on the host and kicked off a disk check, and it ran at the expected speed overnight. I started Kafka this morning, and the RAID check's speed immediately dropped to ~2000K/sec. I then enabled the write-back cache on the drives (hdparm -W1 /dev/sd*). The RAID check is now running between 100000K/sec and 200000K/sec, and has been for several
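A rough sketch of the knobs discussed above, assuming an md software array and SATA drives behind a plain HBA (device names are placeholders):

  $ hdparm -W1 /dev/sdb      # enable the drive's write-back cache, as in the post
  $ hdparm -W /dev/sdb       # query the current write-cache setting
  $ cat /proc/mdstat         # watch the check's progress and current speed
  $ sysctl dev.raid.speed_limit_min dev.raid.speed_limit_max   # md check/resync throttling limits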
2008 Apr 17
2
Question about RAID 5 array rebuild with mdadm
I'm using CentOS 4.5 right now, and I had a RAID 5 array stop because two drives became unavailable. After adjusting the cables on several occasions and shutting down and restarting, I was able to see the drives again. This is when I snatched defeat from the jaws of victory. Please, someone with vast knowledge of how RAID 5 with mdadm works, tell me if I have any chance at all
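Not the answer the poster received, just a generic sketch of the usual first steps before touching a failed RAID 5 (member names are placeholders; --force reassembly can lose data if the event counts diverge badly):

  $ mdadm --examine /dev/sd[bcde]1      # compare event counts and roles on each member
  $ mdadm --assemble --force /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
  $ cat /proc/mdstat                    # confirm the array assembled, possibly degraded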
2017 Dec 19
0
kernel: blk_cloned_rq_check_limits: over max segments limit., Device Mapper Multipath, iBFT, iSCSI COMSTAR
Hi, WARNING: Long post ahead. I have an issue when starting multipathd. The kernel complains about "blk_cloned_rq_check_limits: over max segments limit". The server in question is configured for KVM hosting. It boots via iBFT to an iSCSI volume. The target is COMSTAR and underlying that is a ZFS volume (100 GB). The server also has two InfiniBand cards providing four (4) more paths over SRP
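One hedged way to see the mismatch behind that message is to compare the queue limits of the individual paths with those of the multipath device stacked on top (the sd*/dm-* names are placeholders):

  $ grep . /sys/block/sd*/queue/max_segments    # per-path limits
  $ grep . /sys/block/dm-*/queue/max_segments   # limits on the dm-multipath devices
  $ grep . /sys/block/sd*/queue/max_sectors_kb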
2016 May 25
6
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
I've posted this on the forums at https://www.centos.org/forums/viewtopic.php?f=47&t=57926&p=244614#p244614 - posting to the list in the hopes of getting more eyeballs on it. We have a cluster of 23 HP DL380p Gen8 hosts running Kafka. Basic specs: 2x E5-2650, 128 GB RAM, 12 x 4 TB 7200 RPM SATA drives connected to an HP H220 HBA, dual-port 10 GB NIC. The drives are configured as one large
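For readers following along, a hedged sketch of how an md check is started and watched by hand (the array name is a placeholder):

  $ echo check > /sys/block/md0/md/sync_action   # kick off a scrub of /dev/md0
  $ cat /sys/block/md0/md/sync_speed             # current check speed, in K/sec
  $ cat /proc/mdstat                             # overall progress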
2016 May 27
2
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
All of our Kafka clusters are fairly write-heavy. The cluster in question is our second-heaviest; we haven't yet upgraded the heaviest, due to the issues we've been experiencing in this one. Here is an iostat example from a host within the same cluster, but without the RAID check running: [root@r2k1 ~]# iostat -xdmc 1 10 Linux 3.10.0-327.13.1.el7.x86_64 (r2k1) 05/27/16 _x86_64_ (32 CPU)
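As a reading aid for the invocation quoted above: -x extended statistics, -d device report, -m values in MB/s, -c CPU utilization, at a 1-second interval for 10 samples; the columns to compare against the RAID-check case are %iowait in the CPU block and await/%util per device.

  $ iostat -xdmc 1 10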
2011 Jan 18
6
BUG while writing to USB btrfs filesystem
While untarring an image to an SD card via a reader, I got the following bug. The system also has a btrfs root, and a whole swath of processes went into uninterruptible sleep. I was able to poke around via ssh and sysrq, and already had netconsole set up to capture the bug. The root fs is on /dev/sdi1, and /dev/sdj2 is the card reader which was the target of the untar. [29571.448889] sd
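The netconsole mentioned above is typically loaded with a parameter of this shape; the ports, addresses, interface and MAC here are placeholders, not the poster's setup:

  $ modprobe netconsole netconsole=6665@10.0.0.2/eth0,6666@10.0.0.1/00:11:22:33:44:55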
2005 Oct 11
0
AW: Re: xen 3.0 boot problem
> > Well, I'm using the qla2340 here on several boxes. It works > > with Xen 2.0 but not with Xen 3.0, as part of SUSE Linux 10.0: > > Interesting. If the driver really does work flawlessly in > Xen 2, then I think the culprit has to be interrupt routing. > > Under Xen 3, does /proc/interrupts show you're receiving interrupts? I cannot boot with
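A hedged way to answer the /proc/interrupts question from the quoted mail; the grep pattern assumes the qla2xxx driver names its interrupt line after itself:

  $ grep -i qla /proc/interrupts               # is the HBA registered at all?
  $ watch -n1 'grep -i qla /proc/interrupts'   # is the interrupt count actually increasing?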
2018 Jun 24
2
Build and testing issues
While setting up a development environment on a clean Fedora 28 host, I got some errors. I followed the instructions in http://libguestfs.org/guestfs-building.1.html: dnf builddep libguestfs; ./autogen.sh. autogen.sh failed: ./configure: line 57694: syntax error near unexpected token `external' ./configure: line 57694: `AM_GNU_GETTEXT(external)' Checking the entire log shows that
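The AM_GNU_GETTEXT error is the classic symptom of the gettext autoconf macros being absent; a hedged sketch of the usual fix on Fedora (package name assumed, not taken from the thread):

  $ sudo dnf install gettext-devel   # provides the AM_GNU_GETTEXT m4 macro
  $ ./autogen.sh                     # regenerate configure and retry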
2009 Apr 10
0
Anaconda kickstart laying out / randomly; ignoring --ondisk in part command
This problem occurred in both CentOS 5.2 and the new (awesome!) 5.3: I'm using a SuperMicro motherboard with a 6-port NVidia SATA controller on the motherboard and an 8-port SuperMicro Marvell controller in a slot. In my kickstart file, I have: part raid.01 ... --ondisk=sda, part raid.02 ... --ondisk=sdb, part raid.03 ... --ondisk=sdc, part raid.04 ... --ondisk=sdd ... raid /
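For comparison, a hedged kickstart fragment in the same shape as the one quoted; the sizes, mount point and RAID level are placeholders, the point being that --ondisk pins each RAID member to a named disk:

  part raid.01 --size=10240 --ondisk=sda
  part raid.02 --size=10240 --ondisk=sdb
  part raid.03 --size=10240 --ondisk=sdc
  part raid.04 --size=10240 --ondisk=sdd
  raid / --level=5 --device=md0 raid.01 raid.02 raid.03 raid.04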
2013 Oct 15
0
Antw: Xen-users Digest, Vol 104, Issue 18
On 10/12/13, xen-users-request@lists.xen.org wrote: > Send Xen-users mailing list submissions to > xen-users@lists.xen.org > > To subscribe or unsubscribe via the World Wide Web, visit > http://lists.xen.org/cgi-bin/mailman/listinfo/xen-users > or, via email, send a message with subject or body 'help' to > xen-users-request@lists.xen.org > > You
2010 May 28
2
permanently add md device
Hi All, Currently I'm setting up a 5.4 server and trying to create a third RAID device. When I run: $ mdadm --create /dev/md2 -v --raid-devices=15 --chunk=32 --level=raid6 /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk /dev/sdl /dev/sdm /dev/sdn /dev/sdo /dev/sdp /dev/sdq the device file "md2" is created and the RAID is being configured, but somehow
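A hedged sketch of how an array created this way is usually made to survive a reboot on CentOS 5 (config path as used on el5):

  $ mdadm --detail --scan >> /etc/mdadm.conf   # record the array UUID and member list
  $ cat /proc/mdstat                           # verify md2 is assembled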
2017 Mar 25
1
"isolinux.bin missing or corrupt" when booting USB flash drive in old PC
Hi, for some reason David always omits the Cc: to this list. He has now reported to Martin and me the outcome of the latest two MBR test proposals. - His BIOS announces no LBA addressing but another extra feature. This led to unexpected success in one of David's tests. - The newest fix proposal by Martin is a full success! Distros which produce isohybrid images should consider already
2020 Aug 08
0
Re: [PATCH nbdkit] plugins: python: Add imageio plugin example
On 8/6/20 5:54 PM, Nir Soffer wrote: > This is mainly for testing the new parallel python threading model, but > it is also an example of how to manage multiple connections from a plugin. > > I tested this with a local imageio server, serving a qcow2 image on a local > SSD. > > diff --git a/plugins/python/examples/imageio.py b/plugins/python/examples/imageio.py > new file mode 100644
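A hedged way to try the example plugin locally once merged; -f/-v keep nbdkit in the foreground and verbose, the path is the one from the diff, and the plugin-specific parameters (e.g. the imageio transfer URL) are omitted here:

  $ nbdkit -f -v python ./plugins/python/examples/imageio.py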
2014 Aug 29
3
*very* ugly mdadm issue
We have a machine that's a distro mirror - a *lot* of data, not just CentOS. We had the data on /dev/sdc. I added another drive, /dev/sdd, and created that as /dev/md4, with --missing, made an ext4 filesystem on it, and rsync'd everything from /dev/sdc. Note that we did this on *raw*, unpartitioned drives (not my idea). I then umounted /dev/sdc, and mounted /dev/md4, and it looked fine; I
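A hedged sketch of the first diagnostics for this kind of mix-up, assuming the disks are still visible under those names:

  $ mdadm --examine /dev/sdc /dev/sdd   # which raw device actually carries an md superblock?
  $ mdadm --detail /dev/md4             # current members and state of the degraded array
  $ blkid /dev/sdc /dev/sdd             # filesystem/RAID signatures present on each disk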
2008 Mar 14
0
Help needed in building Lustre using pre-packaged releases
Hi, Can anyone guide me in building Lustre using a pre-packaged Lustre release? I'm using Ubuntu 7.10. I want to build Lustre using the RHEL 2.6 RPMs available on my system. I'm referring to the how_to in the wiki, but it gives no detailed step-by-step procedure for building Lustre from a pre-packaged release. I'm in need of this. Thanks and Regards, Ashok Bharat -----Original
2005 Nov 24
1
boot with more scsi card
hi, we've got a server with an 8-port 3ware card and 2 IDE system disks. Now we'd like to replace the IDE disks with SCSI disks or SATA disks (these are also recognized as SCSI in the kernel), but we can't boot from them. The problem is twofold. First, in the normal case the first SCSI host, scsi0, is the 3ware card, but GRUB only sees the first 8 disks, so if the system disks are sdi and sdj the
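One knob commonly tried for this, sketched here with placeholder names: GRUB legacy's device.map tells grub-install which Linux device to treat as which BIOS drive, although what the BIOS itself exposes at boot time remains the real limit:

  # /boot/grub/device.map
  (hd0)  /dev/sdi
  (hd1)  /dev/sdj

  $ grub-install /dev/sdi   # reinstall GRUB against the remapped ordering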