Displaying 20 results from an estimated 2000 matches similar to: "permanently add md device"
2012 Sep 05
3
BTRFS thinks device is busy [kernel 3.5.3]
Hi,
I'm running OpenSuse 12.2 with kernel 3.5.3
HBA= LSI 1068e using the MPTSAS driver (patched)
(https://patchwork.kernel.org/patch/1379181/)
SANOS1:/media # uname -a
Linux SANOS1 3.5.3 #3 SMP Sun Sep 2 18:44:37 CEST 2012 x86_64 x86_64
x86_64 GNU/Linux
I've tried to simulate a disk replacement but it seems that now
/dev/sdg is stuck in the btrfs pool (RAID10)
SANOS1:/media #
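A rough sketch of the replacement sequence that usually clears a "stuck" member, assuming the pool is mounted at /media, /dev/sdg is the failed disk, and the replacement shows up as /dev/sdh (the mount point and new device name are assumptions, not taken from the post):
# confirm which devid still maps to /dev/sdg
btrfs filesystem show /media
# add the replacement first so the RAID10 profile keeps enough members
btrfs device add /dev/sdh /media
# then remove the old disk; if it is physically gone, mount -o degraded
# and use the keyword "missing" instead of the device node
btrfs device delete /dev/sdg /media
# finally spread the chunks back across all members
btrfs balance start /media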
2010 Sep 17
1
multipath troubleshoot
Hi,
My storage admin just assigned a LUN (fibre) to my server. Then rescanned using
echo "1" > /sys/class/fc_host/host5/issue_lip
echo "1" > /sys/class/fc_host/host6/issue_lip
I can see the SCSI device using dmesg,
but mpath devices are not created for this LUN.
Please see below. The last 4 should be active and I think this is the problem
Kernel:
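For reference, a minimal checklist for this situation (the host numbers are taken from the post; everything else is generic):
# rescan the SCSI hosts so the new LUN gets sd* device nodes
echo "- - -" > /sys/class/scsi_host/host5/scan
echo "- - -" > /sys/class/scsi_host/host6/scan
# ask multipathd to build the map and show what it found
multipath -v2
multipath -ll
# if the LUN's WWID never shows up, check the blacklist/exceptions
# sections of /etc/multipath.conf and restart multipathd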
2008 Apr 17
2
Question about RAID 5 array rebuild with mdadm
I'm using CentOS 4.5 right now, and I had a RAID 5 array stop because
two drives became unavailable. After adjusting the cables on several
occasions and shutting down and restarting, I was able to see the
drives again. This is when I snatched defeat from the jaws of
victory. Please, someone with vast knowledge of how RAID 5 with mdadm
works, tell me if I have any chance at all
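The usual recovery path when two members of a RAID 5 drop out is a forced assemble; a sketch with hypothetical partition and array names, since the post doesn't list them:
# check event counters and roles on the candidate members
mdadm --examine /dev/sd[bcde]1
# force-assemble using the freshest members (leave the stalest one out)
mdadm --assemble --force /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1
# once the array is up (degraded), add the remaining disk back and
# let the RAID 5 rebuild onto it
mdadm /dev/md0 --add /dev/sde1
cat /proc/mdstat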
2010 Jul 23
5
install on raid1
Hi All,
I'm currently trying to install CentOS 5.4 x86_64 on a RAID 1, so if one of the 2 disks fails the server will still be available.
I installed grub on /dev/sda using the advanced grub configuration option during the install.
After the install is done I boot into Linux rescue mode, chroot into the filesystem and set up grub on both drives using:
grub> root (hd0,0)
grub> setup (hd0)
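For the second disk, the common trick in the legacy grub shell is to remap it to (hd0) before running setup again (assuming the mirror's second member is /dev/sdb):
grub> device (hd0) /dev/sdb
grub> root (hd0,0)
grub> setup (hd0)
grub> quit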
2011 Nov 22
1
Recovering data from old corrupted file system
I have a multi-device file system that got corrupted ages
ago (as I recall, one of the drives stopped responding, causing btrfs
to panic). I am hoping to recover some of the data. For what it's
worth, here is the dmesg output from trying to mount the file system
on a 3.0 kernel:
device label Media devid 6 transid 816153 /dev/sdq
device label Media devid 7 transid 816153
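A conservative starting point for pulling data off a damaged multi-device btrfs, assuming reasonably current btrfs-progs (the label and /dev/sdq come from the dmesg above; the mount point and /rescue are just example targets):
# register all members of the multi-device fs with the kernel
btrfs device scan
# try a read-only mount first; on newer kernels "-o ro,usebackuproot"
# (formerly "recovery") falls back to an older tree root
mount -o ro LABEL=Media /mnt/media
# if mounting fails, copy files out offline with btrfs restore
mkdir -p /rescue
btrfs restore /dev/sdq /rescue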
2020 Sep 09
4
Btrfs RAID-10 performance
Hi, thank you for your reply. I'll continue inline...
On 09.09.2020 at 3:15, John Stoffel wrote:
> Miloslav> Hello,
> Miloslav> I sent this into the Linux Kernel Btrfs mailing list and I got reply:
> Miloslav> "RAID-1 would be preferable"
> Miloslav> (https://lore.kernel.org/linux-btrfs/7b364356-7041-7d18-bd77-f60e0e2e2112@lechevalier.se/T/).
>
2020 Sep 07
4
Btrfs RAID-10 performance
Hello,
I sent this into the Linux Kernel Btrfs mailing list and I got reply:
"RAID-1 would be preferable"
(https://lore.kernel.org/linux-btrfs/7b364356-7041-7d18-bd77-f60e0e2e2112@lechevalier.se/T/).
May I ask for comments from people around Dovecot?
We are using btrfs RAID-10 (/data, 4.7TB) on a physical Supermicro
server with Intel(R) Xeon(R) CPU E5-2620 v4 @
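If the advice to prefer RAID-1 is followed, the profile can be converted in place with a balance; a sketch, assuming /data is the mount point from the post:
# convert data and metadata chunks to the raid1 profile on a mounted fs;
# this can take a long time on 4.7 TB
btrfs balance start -dconvert=raid1 -mconvert=raid1 /data
# from another shell, check progress with
btrfs balance status /data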
2016 May 25
1
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
On 2016-05-25 19:13, Kelly Lesperance wrote:
> Hdparm didn't get far:
>
> [root at r1k1 ~] # hdparm -tT /dev/sda
>
> /dev/sda:
> Timing cached reads: Alarm clock
> [root at r1k1 ~] #
Hi Kelly,
Try running 'iostat -xdmc 1'. Look for a single drive that has
substantially greater await than ~10msec. If all the drives
except one are taking 6-8msec, but one is very
2016 May 25
6
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
I've posted this on the forums at https://www.centos.org/forums/viewtopic.php?f=47&t=57926&p=244614#p244614 - posting to the list in the hopes of getting more eyeballs on it.
We have a cluster of 23 HP DL380p Gen8 hosts running Kafka. Basic specs:
2x E5-2650
128 GB RAM
12 x 4 TB 7200 RPM SATA drives connected to an HP H220 HBA
Dual port 10 GB NIC
The drives are configured as one large
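If the check in question is the stock weekly md raid-check job, its throughput is governed by the md speed-limit sysctls; a sketch of how to inspect and throttle it (the values are only examples):
# per-device floor and ceiling for resync/check traffic, in KB/s
cat /proc/sys/dev/raid/speed_limit_min
cat /proc/sys/dev/raid/speed_limit_max
# watch the running check and its current rate
cat /proc/mdstat
# cap the check at roughly 20 MB/s per device so Kafka keeps getting I/O
echo 20000 > /proc/sys/dev/raid/speed_limit_max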
2016 May 27
2
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
All of our Kafka clusters are fairly write-heavy. The cluster in question is our second-heaviest; we haven't yet upgraded the heaviest, due to the issues we've been experiencing in this one.
Here is an iostat example from a host within the same cluster, but without the RAID check running:
[root at r2k1 ~] # iostat -xdmc 1 10
Linux 3.10.0-327.13.1.el7.x86_64 (r2k1) 05/27/16 _x86_64_ (32 CPU)
2010 Jun 03
2
Tracking down hangs
We're using a storage solution involving two SunFire X4500 servers using
DRBD to replicate a 15TB partition across the network with ocfs2 on top.
We're sharing the partition from one server over NFS and the other is
mounted read-only at present. The DRBD backing store is software RAID 60
on 40 disks.
We've been seeing periodic issues whereby our NFS clients (Debian Lenny)
are very
2019 Jun 14
3
zfs
Hi, folks,
testing zfs. I'd created a raidz2 zpool, ran a large backup onto it. Then I
pulled one drive (11-drive, one hot spare pool), and it resilvered with
the hot spare. zpool status -x shows me
state: DEGRADED
status: One or more devices could not be used because the label is missing or
invalid. Sufficient replicas exist for the pool to continue
functioning in a degraded state.
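Once the resilver onto the hot spare finishes, the usual follow-up is to replace or detach the faulted disk; a sketch with placeholder names, since the pool and device names aren't shown:
# see which vdev is FAULTED/UNAVAIL and which spare is INUSE
zpool status -v
# swap in a new disk; when the resilver to it completes, the hot
# spare returns to AVAIL on its own
zpool replace <pool> <failed-disk> <new-disk>
# or keep running on the spare permanently by detaching the dead disk
zpool detach <pool> <failed-disk>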
2024 Oct 19
2
How much disk can fail after a catastrophic failure occur?
Hi there.
I have 2 servers with this number of disks on each side:
pve01:~# df | grep disco
/dev/sdd 1.0T 9.4G 1015G 1% /disco1TB-0
/dev/sdh 1.0T 9.3G 1015G 1% /disco1TB-3
/dev/sde 1.0T 9.5G 1015G 1% /disco1TB-1
/dev/sdf 1.0T 9.4G 1015G 1% /disco1TB-2
/dev/sdg 2.0T 19G 2.0T 1% /disco2TB-1
/dev/sdc 2.0T 19G 2.0T 1%
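Assuming this is a Gluster setup (the per-brick mount points suggest it), how many bricks may fail depends on the volume layout, which the volume info shows:
# replica/arbiter/disperse counts determine the failure tolerance
gluster volume info
# per-brick health across both servers
gluster volume status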
2011 Mar 07
2
connection speeds between nodes
Hi All,
I've been asked to set up a 3D render farm at our office. At the start it
will contain about 8 nodes, but it should be built to grow. The
setup I had in mind is as follows:
All the data is already stored on a StorNext SAN filesystem (Quantum).
This should be mounted on a CentOS server through fibre optics, which
in turn shares the FS over NFS to all the render nodes
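A minimal sketch of the gateway half of that design (the paths, hostname, and subnet are made up for illustration):
# /etc/exports on the CentOS server that mounts the StorNext volume
/stornext/projects  192.168.10.0/24(rw,async,no_root_squash)
# on each render node
mount -t nfs gateway:/stornext/projects /mnt/projects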
2009 Jan 13
2
mounted.ocfs2 -f return Unknown: Bad magic number in inode
Hello,
I have installed ocfs2 without problem and use it for a RAC10gR2.
Only Clusterware files are ocfs2 type.
multipath is also used.
When I issue: mounted.ocfs2 -f
I have a strange result:
Device FS Nodes
/dev/sda ocfs2 Unknown: Bad magic number in inode
/dev/sda1 ocfs2 pocrhel2, pocrhel1
/dev/sdb ocfs2 Not mounted
/dev/sdf
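One likely explanation is that /dev/sda as a whole disk isn't an ocfs2 volume while /dev/sda1 is; a quick way to confirm what each node carries (device names taken from the output above):
# the whole-disk device normally holds only the partition table,
# not an ocfs2 superblock
blkid /dev/sda /dev/sda1
# read-only check of the real ocfs2 partition
fsck.ocfs2 -n /dev/sda1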
2006 Aug 10
3
MD raid tools ... did I miss something?
Hi
I have a degraded array /dev/md2
=====================================================================
$ mdadm -D /dev/md2
/dev/md2:
Version : 00.90.01
Creation Time : Thu Oct 6 20:31:57 2005
Raid Level : raid5
Array Size : 221953536 (211.67 GiB 227.28 GB)
Device Size : 110976768 (105.84 GiB 113.64 GB)
Raid Devices : 3
Total Devices : 2
Preferred Minor : 2
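With 3 raid devices but only 2 total, one slot is empty; a sketch of putting the missing member back (the partition name is hypothetical):
# confirm which slot is missing or faulty
mdadm -D /dev/md2
cat /proc/mdstat
# add the replacement partition and let the raid5 rebuild onto it
mdadm /dev/md2 --add /dev/sdc3
# follow the rebuild
watch cat /proc/mdstat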
2011 Oct 26
4
openldap missing modules
Hi List,
I'm currently setting up an OpenLDAP server and included the following
lines in my slapd.conf:
modulepath /usr/lib/ldap
moduleload back_hdb
After finishing up my config I ran slaptest on it and got an error
saying that the modulepath doesn't exist.
I checked and it indeed isn't there; in fact I can't find it anywhere on my
system (CentOS 5.7).
The packages I've
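On CentOS 5 the stock slapd is, as far as I know, built with the backends compiled in rather than as loadable modules, so the Debian-style /usr/lib/ldap module directory may simply not exist; a couple of hedged checks:
# see whether the package ships any loadable backend modules at all
rpm -ql openldap-servers | grep -i -e module -e back_
# if back_hdb is built in, comment out the modulepath/moduleload lines
# and re-run the config check
slaptest -f /etc/openldap/slapd.conf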
2008 Mar 23
4
md raid1 - no speed improvement
Hi,
I have two 320 GB SATA disks (/dev/sda, /dev/sdb) in a server running
CentOS release 5.
They both have three partitions setup as RAID1 using md (boot, swap,
and an LVM data partition).
# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdb1[1] sda1[0]
104320 blocks [2/2] [UU]
md1 : active raid1 sdb2[1] sda2[0]
4192896 blocks [2/2] [UU]
md2 : active raid1 sdb3[1]
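Worth noting: md RAID 1 does not stripe a single sequential read across mirrors; each request stream is served from one member, so a lone reader tops out at single-disk speed. A rough way to see both members in use is to run two readers at distant offsets (sizes and offsets are arbitrary; adjust the skip to stay inside the device):
# two concurrent sequential streams, far apart on the device, should
# be balanced across both mirror members
dd if=/dev/md2 of=/dev/null bs=1M count=2048 &
dd if=/dev/md2 of=/dev/null bs=1M count=2048 skip=100000 &
wait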
2013 Mar 28
1
question about replacing a drive in raid10
Hi all,
I have a question about replacing a drive in raid10 (and linux kernel 3.8.4).
A bad disk was physically removed from the server. After this a new disk
was added with "btrfs device add /dev/sdg /btrfs" to the raid10 btrfs
FS.
After this the server was rebooted and I mounted the filesystem in
degraded mode. It seems that a previously started balance continued.
At this point I want to
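After adding the new device, the record of the physically removed disk still has to be dropped, otherwise the filesystem keeps counting the missing member; a sketch using the mount point from the post:
# while mounted (degraded), remove the reference to the pulled disk
btrfs device delete missing /btrfs
# rebalance so the new /dev/sdg member receives its share of chunks,
# then confirm that no device is reported missing
btrfs balance start /btrfs
btrfs filesystem show /btrfs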
2011 May 16
1
bond empty after reboot
Hi all,
I've set up an ethernet bond on my CentOS 5.6 server. When I do a reboot
the bond does come up, but it has cleared all the slaves
and I have to manually re-add them with ifenslave.
Does anyone know a solution to this? Am I missing something? Of course I
can add it to my rc.local, but there must be a more elegant way. Please
see my configs below
Thanks,
Wessel
ifcfg-bond0:
DEVICE=bond0
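The piece that is usually missing is that each slave's ifcfg file has to point back at the bond, otherwise the initscripts bring bond0 up empty; a sketch assuming eth0 and eth1 are the slaves (interface names are assumptions):
# /etc/sysconfig/network-scripts/ifcfg-eth0 (repeat for eth1)
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=none
MASTER=bond0
SLAVE=yes
USERCTL=no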