similar to: CentOS 6.6 - reshape of RAID 6 is stuck

Displaying 20 results from an estimated 200 matches similar to: "CentOS 6.6 - reshape of RAID 6 is stuck"

2007 Aug 23
1
Transport endpoint not connected after crash of one node
Hi, I am on SLES 10 SP1, x86_64, running the distribution RPMs of OCFS2: ocfs2console-1.2.3-0.7, ocfs2-tools-1.2.3-0.7. I have a two-node OCFS2 cluster configured. One node died (manual reset), and the second immediately started having problems accessing the file system, with the following reason in the logs: Transport endpoint not connected. A mounted.ocfs2 on the still living
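A first diagnostic pass for this kind of hang is usually to ask the surviving node what it believes about the cluster and the volume. A minimal sketch, assuming the standard ocfs2-tools commands and a hypothetical device name /dev/sdb1:

    /etc/init.d/o2cb status       # is the cluster stack and heartbeat up?
    mounted.ocfs2 -f /dev/sdb1    # full detect: which nodes hold the volume mounted?

If the dead node is still listed as a holder, the survivor is typically blocked until the heartbeat timeout declares that node gone and recovery runs.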
2015 Jun 25
0
LVM hatred, was Re: /boot on a separate partition?
On 06/25/2015 01:20 PM, Chris Adams wrote: > ...It's basically a way to assemble one arbitrary set of block devices > and then divide them into another arbitrary set of block devices, but > now separate from the underlying physical structure. > Regular partitions have various limitations (one big one on Linux > being that modifying the partition table of a disk with in-use
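The "assemble one set of block devices, divide into another" model is visible directly in the commands. A minimal sketch with hypothetical devices /dev/sdb1 and /dev/sdc1:

    pvcreate /dev/sdb1 /dev/sdc1              # mark the devices as physical volumes
    vgcreate vg_data /dev/sdb1 /dev/sdc1      # assemble them into one pool
    lvcreate -n lv_home -L 100G vg_data       # carve a logical "partition" from the pool
    lvextend -r -L +50G /dev/vg_data/lv_home  # later: grow it online; -r resizes the fs too

This is also the answer to the in-use partition table problem quoted above: logical volumes are created and resized without ever rewriting a disk's partition table.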
2012 Feb 26
0
"device delete" kills contents
Hello, linux-btrfs, I've (once again) tried "add" and "delete". First, with 3 devices (partitions): mkfs.btrfs -d raid0 -m raid1 /dev/sdk1 /dev/sdl1 /dev/sdm1 Mounted (to /mnt/btr), filled with about 100 GByte of data. Then btrfs device add /dev/sdj1 /mnt/btr results in # show Label: none uuid: 6bd7d4df-e133-47d1-9b19-3c7565428770 Total devices 4 FS bytes
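For reference, the sequence that migrates data off a device before dropping it looks like the sketch below, against the mount point from the post (note that with -d raid0 the data profile needs at least two remaining devices):

    btrfs device add /dev/sdj1 /mnt/btr     # grow the filesystem onto a new device
    btrfs balance start /mnt/btr            # restripe existing chunks across all devices
    btrfs device delete /dev/sdm1 /mnt/btr  # relocate chunks off sdm1, then drop it
    btrfs filesystem show                   # verify device count and usage afterwards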
2011 Feb 10
0
(o2net, 6301, 0):o2net_connect_expired:1664 ERROR: no connection established with node 1 after 60.0 seconds, giving up and returning errors.
Hello, I am installing a two-node cluster. When I automount the file systems I get the o2net_connect_expired error and the cluster filesystems do not mount; if I mount the cluster file systems manually with mount -a, they mount without any issues. 1. If I bring Node1 up with Node2 down, the cluster file system automounts fine without any issues. 2. I checked the
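A common cause of this exact symptom is the boot-time mount firing before networking and the o2cb stack are up; mount -a succeeds later because by then everything is running. A hedged sketch of the usual setup (device and mount point hypothetical):

    # /etc/fstab: _netdev defers the mount until the network is up
    /dev/sdb1  /u01  ocfs2  _netdev,defaults  0 0

    chkconfig o2cb on
    chkconfig ocfs2 on   # the ocfs2 init script mounts the _netdev ocfs2 entries late in boot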
2009 Sep 24
4
mdadm size issues
Hi, I am trying to create a 10-drive RAID 6 array. The OS is CentOS 5.3 (64-bit). All 10 drives are 2 TB in size. Devices sd{a,b,c,d,e,f} are on my motherboard; devices sd{i,j,k,l} are on a PCI Express Areca card (relevant lspci info below). #lspci 06:0e.0 RAID bus controller: Areca Technology Corp. ARC-1210 4-Port PCI-Express to SATA RAID Controller The controller is set to JBOD the drives. All
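For comparison, the creation step for an array of this shape would normally look like the sketch below. One size pitfall that may be relevant on CentOS 5: mdadm's default 0.90 metadata caps each member device at 2 TiB, so an explicit newer superblock version often matters with 2 TB drives:

    mdadm --create /dev/md0 --level=6 --raid-devices=10 --metadata=1.2 \
          /dev/sd[abcdef]1 /dev/sd[ijkl]1
    cat /proc/mdstat          # watch the initial build
    mdadm --detail /dev/md0   # confirm the size (10-drive RAID 6 -> 8 drives of data)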
2002 Mar 02
4
ext3 on Linux software RAID1
Everyone, We just had a pretty bad crash on one of our production boxes, and the ext2 filesystem on the box's data partition suffered some major corruption. Needless to say, I am now looking into converting the filesystem to ext3, and I have some questions regarding ext3 and Linux software RAID. I have read that previously there were some issues running ext3 on a software raid device
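For what it's worth, the conversion itself is a single step; a minimal sketch, assuming the data partition is the md device (name hypothetical):

    tune2fs -j /dev/md0   # add a journal: the ext2 filesystem becomes mountable as ext3
    e2fsck -f /dev/md0    # optional sanity check while it is unmounted

After that, the fstab entry's filesystem type changes from ext2 to ext3; the on-disk data is otherwise untouched.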
2010 Aug 12
2
Problem resizing partition of nfs volume
Hi: I have an NFS volume that I'm trying to resize a partition on. Something about the fdisk process is corrupting something on the drive. Before running fdisk, I can mount the volume fine: $ mount /dev/sdo1 /home ... and the volume is mounted fine. And, $ e2fsck -f /dev/sdo1 /dev/sdo1: clean, ... But then I run fdisk to rewrite the partition table of this drive, to expand the /dev/sdo1
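The classic trap here is the new partition not starting on exactly the same sector as the old one, which shifts the filesystem's view of the disk and looks like corruption. A sketch of the usual grow sequence, using the device from the post:

    umount /home
    fdisk /dev/sdo        # delete sdo1, recreate it larger with the SAME starting sector
    partprobe /dev/sdo    # make the kernel re-read the partition table
    e2fsck -f /dev/sdo1   # must come back clean before growing the fs
    resize2fs /dev/sdo1   # expand the filesystem into the enlarged partition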
2009 Apr 17
0
problem with 5.3 upgrade or just bad timing?
I've been experiencing delays accessing data off my file server since I upgraded to 5.3... either I hosed something, have bad hardware, or (very unlikely) found a bug. When reading or writing data, the stream to the HDDs stops every 5-10 min and %iowait goes through the roof. I checked the logs and they are filled with this diagnostic data that I can't readily decipher. my setup
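For isolating a periodic stall like this, per-device statistics during one of the pauses usually say more than the aggregate %iowait. A sketch, assuming the sysstat and smartmontools packages are installed:

    iostat -x 5            # extended per-disk stats every 5s: watch await and %util
    smartctl -a /dev/sda   # reallocated/pending sector counts on a suspect drive

One disk showing await in the hundreds of milliseconds while its peers sit idle typically points at failing hardware rather than a 5.3 regression.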
2017 Jul 05
0
attempt to access beyond end of device XFS Disks
Hi, I rebooted some Ceph servers with 24 HDs and get some messages for some of the disks:

    [  519.667055] XFS (sdk1): Mounting V4 Filesystem
    [  519.692307] XFS (sdk1): Ending clean mount
    [  519.781975] attempt to access beyond end of device
    [  519.781984] sdk1: rw=0, want=1560774288, limit=1560774287

All disks are XFS-formatted and currently I don't see any problem on the Ceph side. But I
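The two numbers say the mismatch is exactly one 512-byte sector: the filesystem wants to read up to sector 1560774288 while the kernel sees the partition ending at 1560774287. A sketch for comparing the two views (mount point hypothetical):

    blockdev --getsz /dev/sdk1   # partition size in 512-byte sectors, per the kernel
    xfs_info /mnt/osd.12         # fs size in blocks; blocks * (blocksize / 512) = sectors

If the filesystem is larger than the partition, the partition was shrunk (or the disk now reports fewer sectors) after mkfs.xfs ran.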
2016 Jul 24
13
[Bug 97065] New: memory leak under Xwayland with old sdl1 applications
https://bugs.freedesktop.org/show_bug.cgi?id=97065

    Bug ID:    97065
    Summary:   memory leak under Xwayland with old sdl1 applications
    Product:   xorg
    Version:   unspecified
    Hardware:  Other
    OS:        All
    Status:    NEW
    Severity:  normal
    Priority:  medium
    Component: Driver/nouveau
    Assignee:  nouveau
2012 Mar 01
3
[LLVMdev] Aliasing bug or feature?
Hello everyone, I am working on some changes to the Hexagon VLIW PreRA scheduler, and as part of it I need to test the aliasing properties of two instructions. What it boils down to is the following code:

    char a[20];
    char s;
    char *p, *q;  // p == &a[0]; q == &s;

    void test() {
      register char reg;
      s = 0;
      reg = p[0] + p[1];
      s = q[0] + reg;
      return;
    }

When I ask the question whether
2011 Jun 25
8
how to determine last file system on disk?
Hi all, Does anyone know how to determine which file system a disk was formatted with, if fdisk -l doesn't show it?

    usb-storage: device found at 5
    usb-storage: waiting for device to settle before scanning
    Vendor:       Model:       Rev:
    Type:   Direct-Access                ANSI SCSI revision: 02
    sd 7:0:0:0: Attached scsi disk sda
    sd 7:0:0:0: Attached scsi generic
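Two standard probes read the filesystem signature from the device itself, independent of the partition table (fdisk -l only reports the partition type byte, not what is actually on disk); a sketch:

    blkid /dev/sda1    # prints TYPE="ext3", "vfat", etc., from the on-disk superblock
    file -s /dev/sda1  # libmagic's reading of the raw device contents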
2005 Oct 20
1
RAID6 in production?
Is anyone using RAID6 in production? In moving from hardware RAID on my dual 3ware 7500-8 based systems to md, I decided I'd like to go with RAID6 (since md is less tolerant of marginal drives than is 3ware). I did some benchmarking and was getting decent speeds with a 128KiB chunksize. So the next step was failure testing. First, I fired off memtest.sh as found at
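For the failure-testing step, md has built-in fault injection, so drives do not have to be physically pulled; a minimal sketch against a hypothetical /dev/md0:

    mdadm /dev/md0 --fail /dev/sdb1     # mark one member faulty
    mdadm /dev/md0 --remove /dev/sdb1   # remove it from the array
    cat /proc/mdstat                    # RAID6 should stay up through two such failures
    mdadm /dev/md0 --add /dev/sdb1      # re-add it and watch the rebuild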
2007 Oct 25
0
group descriptors corrupted on ext3 filesystem
I am unable to mount an ext3 filesystem on RHEL AS 2.1. This is not the boot or root filesystem. When I try to mount the file system, I get the following error: mount: wrong fs type, bad option, bad superblock on /dev/sdj1, or too many mounted file systems When I try to run e2fsck -vvfy /dev/sdj1 I get the following error: Group descriptors look bad... trying backup blocks...
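When the primary group descriptors are bad, e2fsck can be pointed at a backup superblock explicitly; a sketch, where 32768 is the usual first backup location for a 4 KiB-block filesystem:

    mke2fs -n /dev/sdj1         # -n is a dry run: it only prints where the backups live
    e2fsck -b 32768 /dev/sdj1   # retry the check using that backup superblock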
2010 Jan 08
7
SAN help
My CentOS 5.4 box has a single HBA card with 2 ports connected to my storage. 2 LUNs are assigned to my HBA card. Under /dev, instead of seeing 4 devices I can see 12 devices, from sdb to sdm. I am using the QLogic driver that is built into the OS. Has anyone seen this kind of situation? Paras
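Seeing each LUN several times is expected when it is reachable over several paths: every path gets its own sdX node. The usual answer on EL5 is device-mapper-multipath, which folds the paths back into one device; a sketch:

    yum install device-mapper-multipath
    chkconfig multipathd on
    service multipathd start
    multipath -ll   # each LUN now appears once, as /dev/mapper/mpathN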
2005 Aug 31
8
problem with OCFS label
I used this command to create a volume label on OCFS: mkfs.ocfs -F -b 128 -L data13 -m /oradata/data13 -u oracle -g dba -p 0775 /dev/emcpowerp1 emcpowerp is composed of /dev/sdad and /dev/sdk. It seems the above command created the same label for /dev/emcpowerp1, /dev/sdad1 and /dev/sdk1. But when I tried to mount this OCFS filesystem by label, it gave me the following error: # mount -L data13
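That duplication is expected with a PowerPath pseudo-device: the label lives in the on-disk superblock, so it is visible through the pseudo-device and through each native sd path, which makes mount-by-label ambiguous. Mounting the pseudo-device by path sidesteps the ambiguity; a sketch:

    mount -t ocfs /dev/emcpowerp1 /oradata/data13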
2003 Jun 12
2
How can I read superblock(ext2, ext3)'s information?
Hello, I'd like to read the superblock's information on Red Hat 7.3, but I don't know how to do it. For example, Input: "/dev/sdj2" Output: ext2_super_block struct's s_wtime (I saw it in "/usr/include/ext2fs/ext2_fs.h") Input: "/dev/sdj1" Output: ext3_super_block struct's s_wtime (I saw it at
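If a ready-made tool is acceptable, dumpe2fs already decodes the superblock, including s_wtime (printed as the last write time); a sketch:

    dumpe2fs -h /dev/sdj2 | grep -i 'write time'   # -h: superblock fields only
    debugfs -R stats /dev/sdj1                     # the same fields via debugfs

Programmatically, the same header the post mentions is wrapped by libext2fs, whose ext2fs_open() fills in an ext2_super_block for you.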
2011 Nov 09
1
[LLVMdev] .debug_info section size in arm executable
On Nov 9, 2011, at 2:12 PM, Chris Lattner wrote: > On Nov 9, 2011, at 1:08 PM, Jim Grosbach wrote: >>> On Nov 9, 2011, at 10:49 AM, Jim Grosbach wrote: >>>>> >>>>> It's not good, but people do it. Also constructing enums via & and | etc. It'd be nice to be able to get the name of whatever it is that the code generator actually produced :)
2012 Nov 13
1
mdX and mismatch_cnt when building an array
CentOS 6.3, x86_64. I have noticed when building a new software RAID-6 array on CentOS 6.3 that the mismatch_cnt grows monotonically while the array is building:

    # cat /proc/mdstat
    Personalities : [raid6] [raid5] [raid4]
    md11 : active raid6 sdg[5] sdf[4] sde[3] sdd[2] sdc[1] sdb[0]
          3904890880 blocks super 1.2 level 6, 512k chunk, algorithm 2 [6/6] [UUUUUU]
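A growing count during the initial build is generally reported as expected, since parity is being written for the first time over uninitialized blocks; the number only means something after an explicit scrub of a fully synced array. A sketch, using the array from the post:

    # after the initial resync has finished:
    echo check > /sys/block/md11/md/sync_action
    cat /sys/block/md11/md/mismatch_cnt   # meaningful once the check completes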
2012 Jul 10
1
Problem with RAID on 6.3
I have 4 ST2000DL003-9VT166 (2 TB) disks in a RAID 5 array. Because of the size I built them as GPT-partitioned disks. They were originally built on a CentOS 5.x machine but more recently plugged into a CentOS 6.2 machine, where they were detected just fine, e.g.

    % parted /dev/sdj print
    Model: ATA ST2000DL003-9VT1 (scsi)
    Disk /dev/sdj: 2000GB
    Sector size (logical/physical): 512B/512B