Displaying 20 results from an estimated 4000 matches similar to: "Proper procedure when device names have changed"
2016 May 25
1
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
On 2016-05-25 19:13, Kelly Lesperance wrote:
> Hdparm didn't get far:
>
> [root@r1k1 ~]# hdparm -tT /dev/sda
>
> /dev/sda:
> Timing cached reads: Alarm clock
> [root@r1k1 ~]#
Hi Kelly,
Try running 'iostat -xdmc 1'. Look for a single drive that has
substantially greater await than ~10msec. If all the drives
except one are taking 6-8msec, but one is very
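For reference, a minimal sketch of that check; the device name and the example numbers are placeholders, not taken from the thread:
iostat -xdmc 1
# watch the await column for each member of the array: peers sitting at 6-8 ms
# with one drive at, say, 100+ ms points at that drive
smartctl -a /dev/sdl | grep -i -e reallocated -e pending   # cross-check the suspect's SMART counters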
2016 May 25
6
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
I've posted this on the forums at https://www.centos.org/forums/viewtopic.php?f=47&t=57926&p=244614#p244614 - posting to the list in the hopes of getting more eyeballs on it.
We have a cluster of 23 HP DL380p Gen8 hosts running Kafka. Basic specs:
2x E5-2650
128 GB RAM
12 x 4 TB 7200 RPM SATA drives connected to an HP H220 HBA
Dual port 10 GB NIC
The drives are configured as one large
2016 May 27
2
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
All of our Kafka clusters are fairly write-heavy. The cluster in question is our second-heaviest; we haven't yet upgraded the heaviest, due to the issues we've been experiencing in this one.
Here is an iostat example from a host within the same cluster, but without the RAID check running:
[root@r2k1 ~]# iostat -xdmc 1 10
Linux 3.10.0-327.13.1.el7.x86_64 (r2k1) 05/27/16 _x86_64_ (32 CPU)
2013 Sep 04
2
Error Attaching Seventh VirtIO-SCSI Device to Guest
I have run into a problem attempting to attach the seventh virtio-scsi device to a RHEL 6.4 Guest from a RHEL 6.4 host running libvirt version 0.10.2-18.
I have a guest that is running RHEL 6.4 where I can attach disks sda (boot), sdb, sdc, sdd, sde, and sdf, but when I try to attach sdg the virsh attach-disk command fails with the error:
error: Failed to attach disk
error: internal error Unable to
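For context, a hedged sketch of the commands involved; the guest name and disk path are placeholders, not taken from the post:
# how many SCSI controllers does the guest already carry?
virsh dumpxml rhel64-guest | grep -c "controller type='scsi'"
# attach the seventh disk explicitly on the scsi bus and persist it in the config
virsh attach-disk rhel64-guest /dev/mapper/lun7 sdg --targetbus scsi --persistent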
2008 Apr 17
2
Question about RAID 5 array rebuild with mdadm
I'm using CentOS 4.5 right now, and I had a RAID 5 array stop because
two drives became unavailable. After adjusting the cables on several
occasions and shutting down and restarting, I was able to see the
drives again. This is when I snatched defeat from the jaws of
victory. Please, someone with vast knowledge of how RAID 5 with mdadm
works, tell me if I have any chance at all
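For reference, the usual recovery path in that situation, sketched with placeholder device names:
# see what each member records about the array (event counts, state)
mdadm --examine /dev/sd[b-f]1
# if the event counts are close, a forced assemble will usually bring the array back
mdadm --assemble --force /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1
cat /proc/mdstat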
2019 Jun 14
3
zfs
Hi, folks,
testing zfs. I'd created a raidz2 zpool and ran a large backup onto it. Then I
pulled one drive (an 11-drive pool with one hot spare), and it resilvered with
the hot spare. zpool status -x shows me:
state: DEGRADED
status: One or more devices could not be used because the label is missing or
invalid. Sufficient replicas exist for the pool to continue
functioning in a degraded state.
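Once the failed disk is physically replaced, the usual follow-up looks roughly like this (pool name and device paths are placeholders):
zpool status -v tank
# swap the failed member for its replacement; the hot spare returns to the
# spare list on its own once the resilver completes
zpool replace tank /dev/disk/by-id/ata-OLDDISK /dev/disk/by-id/ata-NEWDISK
zpool clear tank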
2010 May 28
2
permanently add md device
Hi All
Currently I'm setting up a 5.4 server and trying to create a third RAID device. When I run:
$ mdadm --create /dev/md2 -v --raid-devices=15 --chunk=32 --level=raid6 /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk /dev/sdl /dev/sdm /dev/sdn /dev/sdo /dev/sdp /dev/sdq
the device file "md2" is created and the RAID is being configured, but somehow
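On CentOS 5 the missing piece is usually recording the array in mdadm.conf and the initrd so it is assembled at boot; a sketch, assuming the array built cleanly:
mdadm --detail --scan | grep /dev/md2 >> /etc/mdadm.conf
# rebuild the initrd so the array is known early in boot
mkinitrd -f /boot/initrd-$(uname -r).img $(uname -r)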
2009 Jan 13
2
mounted.ocfs2 -f returns Unknown: Bad magic number in inode
Hello,
I have installed ocfs2 without problem and use it for a RAC10gR2.
Only Clusterware files are ocfs2 type.
multipath is also used.
When I issue: mounted.ocfs2 -f
I have a strange result:
Device FS Nodes
/dev/sda ocfs2 Unknown: Bad magic number in inode
/dev/sda1 ocfs2 pocrhel2, pocrhel1
/dev/sdb ocfs2 Not mounted
/dev/sdf
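One hedged way to confirm that only the partitions, not the bare disks, carry an OCFS2 superblock (device names taken from the output above):
blkid /dev/sda /dev/sda1
# if /dev/sda reports no filesystem signature while /dev/sda1 shows TYPE="ocfs2",
# the "Bad magic number in inode" line for the whole disk is likely just scan noise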
2018 Jan 10
2
Issues accessing ZFS-shares on Linux
I just noticed that by running the commands /usr/sbin/smbd -D or
/usr/sbin/smbd -i without systemd's unit, all shares work perfectly, so
the problem must then be somehow related to systemd. Let the testing
continue.
I also tested what happens if I comment out everything and just use
ExecStart=/usr/sbin/smbd -D as that command worked on the console. That
did not help.
For the record, this is
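For comparison, a minimal sketch of a foreground-mode smbd unit; the Type, flags, and ordering are assumptions based on typical Samba 4 packaging, not the poster's actual file:
[Unit]
Description=Samba SMB Daemon
After=network.target zfs-mount.service

[Service]
Type=notify
ExecStart=/usr/sbin/smbd --foreground --no-process-group
LimitNOFILE=16384

[Install]
WantedBy=multi-user.target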
2024 Oct 19
2
How many disks can fail before a catastrophic failure occurs?
Hi there.
I have 2 servers with this number of disks on each side:
pve01:~# df | grep disco
/dev/sdd 1.0T 9.4G 1015G 1% /disco1TB-0
/dev/sdh 1.0T 9.3G 1015G 1% /disco1TB-3
/dev/sde 1.0T 9.5G 1015G 1% /disco1TB-1
/dev/sdf 1.0T 9.4G 1015G 1% /disco1TB-2
/dev/sdg 2.0T 19G 2.0T 1% /disco2TB-1
/dev/sdc 2.0T 19G 2.0T 1%
2012 Sep 05
3
BTRFS thinks device is busy [kernel 3.5.3]
Hi,
I'm running openSUSE 12.2 with kernel 3.5.3.
HBA = LSI 1068e, using the patched MPTSAS driver
(https://patchwork.kernel.org/patch/1379181/)
SANOS1:/media # uname -a
Linux SANOS1 3.5.3 #3 SMP Sun Sep 2 18:44:37 CEST 2012 x86_64 x86_64
x86_64 GNU/Linux
I've tried to simulate a disk replacement, but it seems that now
/dev/sdg is stuck in the btrfs pool (RAID10).
SANOS1:/media #
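For framing, the era-appropriate replacement flow looked roughly like this; the mount point and the surviving device name are placeholders, only /dev/sdg comes from the post:
mount -o degraded /dev/sdf /mnt/pool     # mount from any surviving member
btrfs device add /dev/sdX /mnt/pool      # the replacement disk
btrfs device delete missing /mnt/pool    # drop the pulled /dev/sdg from the pool metadata
btrfs filesystem balance /mnt/pool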
2013 Sep 07
1
Re: Error Attaching Seventh VirtIO-SCSI Device to Guest
> -----Original Message-----
> From: Osier Yang [mailto:jyang@redhat.com]
> Sent: Friday, September 06, 2013 10:54 PM
> To: McEvoy, James
> Cc: libvirt-users@redhat.com
> Subject: Re: [libvirt-users] Error Attaching Seventh VirtIO-SCSI Device to Guest
>
> On 04/09/13 09:34, McEvoy, James wrote:
> > I have run into a problem attempting to attach the seventh
2006 Sep 12
3
RE: Help: Xen HVM Domain can ONLY support four hard drives at most???
>-----Original Message-----
>From: xen-users-bounces@lists.xensource.com
>[mailto:xen-users-bounces@lists.xensource.com] On Behalf Of Liang Yang
>Sent: September 12, 2006 8:57
>To: xen-users@lists.xensource.com
>Subject: [Xen-users] Help: Xen HVM Domain can ONLY support four hard
>drives at most???
>
>Hi,
>
>I have 5 SATA hard drives and I want to expose all these five
2007 Oct 07
1
Replacing failed software RAID drive
CentOS release 4.5
Hi All:
First of all I will admit to being spoiled by my MegaRAID SCSI RAID
controllers. When a drive fails on one of them I just replace the
drive and carry on without having to do anything else.
I now find myself in the situation where I have a failed drive on a
non-MegaRAID controller, specifically an Adaptec 29160 SCSI controller.
The system is an Acer G700 with 8
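The md-level steps are roughly the same regardless of the controller; a sketch with hypothetical names (failed member /dev/sdb1 in /dev/md0, surviving disk /dev/sda):
mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
sfdisk -d /dev/sda | sfdisk /dev/sdb     # copy the partition layout onto the new disk
mdadm /dev/md0 --add /dev/sdb1           # the rebuild starts automatically
watch cat /proc/mdstat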
2010 Sep 17
1
multipath troubleshoot
Hi,
My storage admin just assigned a LUN (fibre) to my server. Then I rescanned using:
echo "1" > /sys/class/fc_host/host5/issue_lip
echo "1" > /sys/class/fc_host/host6/issue_lip
I can see the SCSI device using dmesg,
but the mpath device is not created for this LUN.
Please see below. The last 4 should be active, and I think this is the problem.
Kernel:
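A hedged set of checks for why multipathd skipped the new LUN; nothing below is taken from the elided output:
multipath -ll                    # what multipathd has already mapped
multipath -v3 2>&1 | less        # verbose scan; look for the new WWID being blacklisted or filtered
multipathd -k'show config'       # inspect the running blacklist and defaults
service multipathd reload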
2012 Nov 13
1
mdX and mismatch_cnt when building an array
CentOS 6.3, x86_64.
I have noticed when building a new software RAID-6 array on CentOS 6.3
that the mismatch_cnt grows monotonically while the array is building:
# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md11 : active raid6 sdg[5] sdf[4] sde[3] sdd[2] sdc[1] sdb[0]
3904890880 blocks super 1.2 level 6, 512k chunk, algorithm 2 [6/6] [UUUUUU]
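For reference, the standard way to re-check that counter once the initial build has finished (md11 taken from the mdstat output above):
cat /sys/block/md11/md/mismatch_cnt
echo check > /sys/block/md11/md/sync_action   # scrub the array after the initial sync completes
cat /sys/block/md11/md/mismatch_cnt           # a value that stays non-zero after the scrub is the one that matters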
2011 Feb 05
2
Strangeness on btrfs balance..
Hi there...
I have kernel version 2.6.36.3, compiled with gcc 4.4.5, btrfstools
version 0.19+20101101
I have a btrfs filesystem (/data) consisting of two 1TB hard disks, raid0.
I added in another 1TB hard drive.
root@X86-64:~# btrfs filesystem show
failed to read /dev/sdh
failed to read /dev/sdg
failed to read /dev/sdf
failed to read /dev/sde
failed to read /dev/sr0
failed to read /dev/fd0u800
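For reference, the add-and-rebalance sequence with btrfs-progs of that vintage; the new device name is a placeholder:
btrfs device add /dev/sdX /data
btrfs filesystem balance /data    # spreads the existing raid0 data across all three disks
btrfs filesystem show /data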
2024 Oct 20
1
How many disks can fail before a catastrophic failure occurs?
If it's replica 2, you can lose up to 1 replica per distribution group. For example, if you have a volume TEST with a setup like this:
server1:/brick1
server2:/brick1
server1:/brick2
server2:/brick2
You can lose any one brick of the "/brick1" replica and any one brick of the "/brick2" replica. So if you lose server1:/brick1 and server2:/brick2, no data loss will be experienced.
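In gluster syntax that layout corresponds to something like the following sketch, using the volume and brick names from the example above:
gluster volume create TEST replica 2 \
    server1:/brick1 server2:/brick1 \
    server1:/brick2 server2:/brick2
# bricks are paired into replica sets in the order given, which is why losing
# server1:/brick1 and server2:/brick2 still leaves one copy of every file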
2004 Oct 13
1
i-nodes showing 100% used whereas the partitions are empty
Output df -i
------------------
Filesystem Inodes IUsed IFree IUse% Mounted on
/
/dev/sde 348548 348195 353 100% /ocfsa01
/dev/sdf 348548 348195 353 100% /ocfsa02
/dev/sdg 348548 348195 353 100% /ocfsa03
/dev/sdk 139410 138073 1337 100% /ocfsq01
Output df -kP
-----------------------
Filesystem
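A generic way to see what is actually consuming the inodes on one of those mounts, assuming the files are visible to a normal directory walk (the mount point is taken from the df output above):
df -i /ocfsa01
find /ocfsa01 -xdev -printf '%h\n' | sort | uniq -c | sort -rn | head
# counts entries per directory, so the directories holding the bulk of the inodes float to the top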
2013 Jul 22
7
[Bug 67161] New: Blank video after resuming from S3 or S4
https://bugs.freedesktop.org/show_bug.cgi?id=67161
Priority: medium
Bug ID: 67161
Assignee: nouveau at lists.freedesktop.org
Summary: Blank video after resuming from S3 or S4
QA Contact: xorg-team at lists.x.org
Severity: normal
Classification: Unclassified
OS: Linux (All)
Reporter: mauromol at tiscali.it