similar to: GFS/LVM/RAID1 recovery question

Displaying 20 results from an estimated 4000 matches similar to: "GFS/LVM/RAID1 recovery question"

2007 Oct 19
2
CentOS 5 centosplus kernel + kmod-gfs
On CentOS 5, the current centosplus kernel doesn't want to play with the kmod-gfs package (because kmod-gfs wants the stock kernel): # yum install kmod-gfs gives me: Transaction Check Error: package kernel-2.6.18-8.1.14.el5.centos.plus (which is newer than kernel-2.6.18-8.1.14.el5) is already installed Should it work, or is there a separate centosplus build that I've not
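Not part of the quoted thread, but a quick way to see the version mismatch (package names as they appear on a stock CentOS 5 box; output will differ per system):
    # list every installed kernel build, stock and centosplus
    rpm -q kernel
    # show which exact kernel version kmod-gfs declares as its dependency
    yum deplist kmod-gfs | grep -i kernel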
2014 Aug 29
3
*very* ugly mdadm issue
We have a machine that's a distro mirror - a *lot* of data, not just CentOS. We had the data on /dev/sdc. I added another drive, /dev/sdd, and created that as /dev/md4, with --missing, made an ext4 filesystem on it, and rsync'd everything from /dev/sdc. Note that we did this on *raw*, unpartitioned drives (not my idea). I then umounted /dev/sdc, and mounted /dev/md4, and it looked fine; I
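For anyone unfamiliar with the --missing trick described above, a minimal sketch of building a one-legged RAID1 and later attaching the old disk (device names and mount points are the hypothetical ones implied by the post):
    # create a degraded RAID1 whose second member is deliberately absent
    mdadm --create /dev/md4 --level=1 --raid-devices=2 /dev/sdd missing
    mkfs.ext4 /dev/md4
    # copy the data across, then add the old disk so the mirror can resync
    mount /dev/md4 /mnt/new && rsync -a /mnt/old/ /mnt/new/
    mdadm --manage /dev/md4 --add /dev/sdc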
2016 May 25
1
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
On 2016-05-25 19:13, Kelly Lesperance wrote: > Hdparm didn't get far: > > [root at r1k1 ~] # hdparm -tT /dev/sda > > /dev/sda: > Timing cached reads: Alarm clock > [root at r1k1 ~] # Hi Kelly, Try running 'iostat -xdmc 1'. Look for a single drive that has substantially greater await than ~10msec. If all the drives except one are taking 6-8msec, but one is very
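A sketch of the suggested check, plus a follow-up SMART query on whichever drive stands out (the device name is a placeholder):
    # extended per-device stats, 1-second interval, 10 samples; compare the await column across drives
    iostat -xdmc 1 10
    # if one drive's await is far above its peers, inspect its health and error counters
    smartctl -a /dev/sdX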
2010 Oct 15
2
puppet-lvm and volume group issues
Trying to set up a volume group with puppet-lvm and this:- volume_group { "my_vg": ensure => present, physical_volumes => "/dev/sdb /dev/sdc /dev/sdd", require => [ Physical_volume["/dev/sdb"], Physical_volume["/dev/sdc"], Physical_volume["/dev/sdd"] ] } Fails with this in the debug
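The manifest above is Puppet, but the plain LVM commands the puppet-lvm module is expected to drive look roughly like this (a sketch, assuming the three disks are dedicated to the volume group):
    # initialise the physical volumes, then build the volume group from them
    pvcreate /dev/sdb /dev/sdc /dev/sdd
    vgcreate my_vg /dev/sdb /dev/sdc /dev/sdd
    # confirm the result
    pvs
    vgs my_vg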
2009 Sep 14
8
10 Node OCFS2 Cluster - Performance
Hi, I am currently running a 10-node OCFS2 cluster (version 1.3.9-0ubuntu1) on Ubuntu Server 8.04 x86_64. Linux n1 2.6.24-24-server #1 SMP Tue Jul 7 19:39:36 UTC 2009 x86_64 GNU/Linux The cluster is connected to a 1 TB iSCSI device presented by an IBM 3300 storage system, running over a 1 Gbit network. Mounted on all nodes: /dev/sdc1 on /cfs1 type ocfs2
2011 Sep 08
1
HBA port
Hi, I have a host which is connected to a SAN via a single Fibre Channel HBA (qlogic). I have several LUNs assigned to it (sdc, sdd). I added another single-port HBA to this host. I can now see two world wide names. Now the confusion is which world wide name sdc and sdd are (or were) using. scsi_id -g -u -s /block/sdc only gives the WWID, but I need the WWN for sdc and sdd. Thanks Paras.
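One way to answer this (not taken from the original thread) is to read the port WWNs from sysfs and map each disk back to its SCSI host; the paths below assume the qlogic driver registers fc_host entries:
    # WWPN of each FC HBA port, prefixed with its hostN path
    grep . /sys/class/fc_host/host*/port_name
    # see which SCSI host sdc and sdd sit behind, to match disk to port
    ls -l /sys/block/sdc/device /sys/block/sdd/device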
2017 Feb 18
3
usb drives & Orico ORICO 9548U3-BK
Everyone, is there a way to manually assign USB drives to a specific device node? Is there a way to force two USB drives to show up as /dev/sdc and /dev/sdd? I decided to build an archive server for the purpose of backing up other Fedora/CentOS desktops at the office. I built a machine and have installed CentOS 7.3 on it with all updates current. I also purchased a USB 3.0 SATA drive
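Forcing the kernel names /dev/sdc and /dev/sdd is generally discouraged; the usual approach is a udev rule keyed on each drive's serial number that adds a stable symlink instead. A sketch with made-up serial numbers:
    # /etc/udev/rules.d/99-archive-disks.rules  (SERIAL_A and SERIAL_B are placeholders)
    KERNEL=="sd?", SUBSYSTEMS=="usb", ENV{ID_SERIAL_SHORT}=="SERIAL_A", SYMLINK+="archive0"
    KERNEL=="sd?", SUBSYSTEMS=="usb", ENV{ID_SERIAL_SHORT}=="SERIAL_B", SYMLINK+="archive1"
    # the real serial can be read with: udevadm info --query=property --name=/dev/sdc | grep ID_SERIAL_SHORT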
2016 May 25
6
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
I've posted this on the forums at https://www.centos.org/forums/viewtopic.php?f=47&t=57926&p=244614#p244614 - posting to the list in the hopes of getting more eyeballs on it. We have a cluster of 23 HP DL380p Gen8 hosts running Kafka. Basic specs: 2x E5-2650 128 GB RAM 12 x 4 TB 7200 RPM SATA drives connected to an HP H220 HBA Dual port 10 GB NIC The drives are configured as one large
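Not part of the hardware list above, but when an md check drags a whole box down, the kernel's resync throttles are worth a look; a sketch (the value written is purely illustrative):
    # current floor/ceiling for md resync/check bandwidth, in KB/s per device
    sysctl dev.raid.speed_limit_min dev.raid.speed_limit_max
    # lower the ceiling so the check yields to normal I/O
    sysctl -w dev.raid.speed_limit_max=50000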
2013 Jan 03
33
Option LABEL
Hello, linux-btrfs, please delete the option "-L" (for labelling) in "mkfs.btrfs"; in some configurations it doesn't work as expected. My usual way: mkfs.btrfs -d raid0 -m raid1 /dev/sdb /dev/sdc /dev/sdd ... One call for several devices. When I add the option "-L mylabel", each device gets the same label, and therefore some other programs
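As a workaround (not something proposed in the quoted mail), the label can be set once on the finished filesystem instead of via -L at mkfs time; a sketch using the poster's devices:
    mkfs.btrfs -d raid0 -m raid1 /dev/sdb /dev/sdc /dev/sdd
    # label the filesystem as a whole, pointing at any one member device
    btrfs filesystem label /dev/sdb mylabel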
2016 May 27
2
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
All of our Kafka clusters are fairly write-heavy. The cluster in question is our second-heaviest; we haven't yet upgraded the heaviest, due to the issues we've been experiencing in this one. Here is an iostat example from a host within the same cluster, but without the RAID check running: [root at r2k1 ~] # iostat -xdmc 1 10 Linux 3.10.0-327.13.1.el7.x86_64 (r2k1) 05/27/16 _x86_64_ (32 CPU)
2010 Sep 13
3
Proper procedure when device names have changed
I am running zfs-fuse on an Ubuntu 10.04 box. I have a dual mirrored pool: mirror sdd sde mirror sdf sdg Recently the device names shifted on my box and the devices are now sdc sdd sde and sdf. The pool is of course very unhappy: the mirrors are no longer matched up and one device is "missing". What is the proper procedure to deal with this? -brian
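The usual fix (a hedged sketch, since zfs-fuse may behave slightly differently from in-kernel ZFS) is to export the pool and re-import it by the persistent /dev/disk/by-id names so future renumbering doesn't matter; the pool name is a placeholder:
    zpool export tank
    # re-import, scanning the stable by-id device links instead of sdX names
    zpool import -d /dev/disk/by-id tank
    zpool status tank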
2005 Nov 08
10
Hotplug script not working
I have a problem starting a domU; it fails with these errors: # xm create -c x335-hien1-vm4.cfg Using config file "x335-hien1-vm4.cfg". Error: Device 0 (vif) could not be connected. Hotplug scripts not working in changeset: changeset: 7700:98bcd8fbd5e36662c10becdcd0222a22161bb2b6 tag: tip user: kaf24@firebug.cl.cam.ac.uk date: Tue Nov 8 09:48:42 2005 summary: Fix alloc_skb()
2013 May 10
5
Btrfs balance invalid argument error
Hi list, I am using kernel 3.9.0, btrfs-progs 0.20-rc1-253-g7854c8b. I have a three disk array of level single: # btrfs fi sh Label: none uuid: 2e905f8f-e525-4114-afa6-cce48f77b629 Total devices 3 FS bytes used 3.80TB devid 1 size 2.73TB used 2.25TB path /dev/sdd devid 2 size 2.73TB used 1.55TB path /dev/sdc devid 3 size 2.73TB used 0.00 path /dev/sdb
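For context, a typical balance invocation on an array like this looks as follows (a sketch only; the mount point is a placeholder and the exact arguments that triggered the error are not shown in the excerpt):
    # rebalance only chunks that are less than 75% full
    btrfs balance start -dusage=75 -musage=75 /mnt/array
    # or convert data chunks to raid1 across the three disks
    btrfs balance start -dconvert=raid1 /mnt/array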
2010 Dec 01
12
Fsck, parent transid verify failed
Hi folks! I've been using btrfs for quite a while now; it worked great until now. My machine lost power, and now I have the "parent transid verify failed on X wanted X found X" problem. So I can't get it to mount. My btrfs is spread over sda (2 TB), sdc (2 TB), sdd (1 TB). Is this something that an offline fsck could fix? If so, is the fsck util being developed? Is there a way to
2012 May 28
1
Disk geometry problem.
Hi all. I have a CentOS server: CentOS release 5.7 (Final) 2.6.18-274.3.1.el5 x86_64 I have two SSD disks attached: smartctl -i /dev/sdc smartctl version 5.38 [x86_64-redhat-linux-gnu] Copyright (C) 2002-8 Bruce Allen Home page is http://smartmontools.sourceforge.net/ === START OF INFORMATION SECTION === Device Model: INTEL SSDSA2CW120G3 Serial Number: CVPR13010957120LGN Firmware
2009 May 04
2
FW: Oracle 9204 installation on linux x86-64 on ocfs
Hello All, I have installed Oracle Cluster Manager on Linux x86-64. I am using the ocfs file system for the quorum file, but I am getting the following error. Please see the ocfs configuration below. I would appreciate it if someone could help me understand whether I am doing something wrong. Thanks in advance. --------------------------------------------------cm.log file ---------------------------- oracm,
2013 Jun 05
8
btrfs raid1 on 16TB goes read-only after "btrfs: block rsv returned -28"
Dear Devs, I have 4x 4 TB HDDs formatted with: mkfs.btrfs -L bu-16TB_0 -d raid1 -m raid1 /dev/sd[cdef] /etc/fstab mounts with the options: noatime,noauto,space_cache,inode_cache All on kernel 3.8.13. Upon using rsync to copy some heavily hardlinked backups from ReiserFS, I've seen: The following "block rsv returned -28" is repeated 7 times until there is a call trace
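-28 is -ENOSPC, so the first thing worth checking (a sketch, not from the original report; the mount point is a placeholder) is how much of each allocation class is used versus allocated, since metadata running out is a common cause of this kind of forced read-only:
    # per-class allocation vs. usage for data, metadata and system chunks
    btrfs filesystem df /mnt/bu-16TB_0
    btrfs filesystem show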
2019 Jun 14
3
zfs
Hi, folks, testing zfs. I'd created a raidz2 zpool and ran a large backup onto it. Then I pulled one drive (it's an 11-drive pool with one hot spare), and it resilvered with the hot spare. zpool status -x shows me state: DEGRADED status: One or more devices could not be used because the label is missing or invalid. Sufficient replicas exist for the pool to continue functioning in a degraded state.
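Once the spare has resilvered, the usual sequence (a sketch with placeholder device and pool names) is to replace the pulled disk; after the replacement resilvers, the hot spare returns to the spares list on its own:
    # swap the faulted disk for its physical replacement
    zpool replace tank /dev/sdk /dev/sdp
    zpool status -x tank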
2008 Apr 17
2
Question about RAID 5 array rebuild with mdadm
I'm using CentOS 4.5 right now, and I had a RAID 5 array stop because two drives became unavailable. After adjusting the cables on several occasions and shutting down and restarting, I was able to see the drives again. This is when I snatched defeat from the jaws of victory. Please, someone with vast knowledge of how RAID 5 with mdadm works, tell me if I have any chance at all
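The standard first steps for a RAID 5 that lost two members (hedged, since every failure differs) are to record each member's metadata and then attempt a forced assemble without writing anything new; device names are placeholders:
    # capture the event counters and RAID state of every member before touching anything
    mdadm --examine /dev/sd[bcdef]1 > /root/md-examine.txt
    # try to reassemble, letting mdadm accept members whose event counts are slightly stale
    mdadm --assemble --force /dev/md0 /dev/sd[bcdef]1
    cat /proc/mdstat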
2010 May 28
2
permanently add md device
Hi All, currently I'm setting up a 5.4 server and trying to create a 3rd RAID device. When I run: $ mdadm --create /dev/md2 -v --raid-devices=15 --chunk=32 --level=raid6 /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk /dev/sdl /dev/sdm /dev/sdn /dev/sdo /dev/sdp /dev/sdq the device file "md2" is created and the RAID is being configured, but somehow
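The piece that usually makes a freshly created array persist across reboots on CentOS 5 is recording it in /etc/mdadm.conf; a sketch:
    # append the array's UUID-based definition to the config mdadm reads at boot
    mdadm --detail --brief /dev/md2 >> /etc/mdadm.conf
    # verify
    grep md2 /etc/mdadm.conf
    mdadm --detail /dev/md2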