Displaying 20 results from an estimated 1000 matches similar to: "mdadm size issues"
2015 Aug 25
0
CentOS 6.6 - reshape of RAID 6 is stuck
Hello
I have a CentOS 6.6 server with 13 disks in a RAID 6. Some weeks ago I upgraded it to 17 disks, two of them configured as spares. The reshape started normally, but at 69% it stopped.
md2 : active raid6 sdj1[0] sdg1[18](S) sdh1[2] sdi1[5] sdm1[15] sds1[12] sdr1[14] sdk1[9] sdo1[6] sdn1[13] sdl1[8] sdd1[20] sdf1[19] sdq1[16] sdb1[10] sde1[17](S) sdc1[21]
19533803520
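A minimal sketch of how one might check why such a reshape has stalled, assuming the array is /dev/md2 as in the mdstat output above (standard md sysfs knobs, not commands from this thread):

cat /proc/mdstat                        # overall reshape progress and speed
cat /sys/block/md2/md/sync_action       # should read "reshape" while it is running
cat /sys/block/md2/md/sync_max          # a fixed sector count here (instead of "max")
                                        # means the reshape is throttled at that point
echo max > /sys/block/md2/md/sync_max   # allow the reshape to continue to the end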
2012 Feb 26
0
"device delete" kills contents
Hello, linux-btrfs,
I've (once again) tried "add" and "delete".
First, with 3 devices (partitions):
mkfs.btrfs -d raid0 -m raid1 /dev/sdk1 /dev/sdl1 /dev/sdm1
Mounted it (at /mnt/btr) and filled it with about 100 GByte of data.
Then
btrfs device add /dev/sdj1 /mnt/btr
results in
# show
Label: none uuid: 6bd7d4df-e133-47d1-9b19-3c7565428770
Total devices 4 FS bytes
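A sketch of the add/balance/delete sequence under discussion, reusing the device names from the snippet; note that with -d raid0 the data has no redundant copy, so removing a device depends entirely on chunk relocation succeeding (newer btrfs-progs syntax shown; older releases spell the balance as "btrfs filesystem balance"):

btrfs device add /dev/sdj1 /mnt/btr      # grow the filesystem onto the new partition
btrfs balance start /mnt/btr             # spread existing chunks across all devices
btrfs device delete /dev/sdj1 /mnt/btr   # relocate its chunks, then drop the device
btrfs filesystem show                    # verify the device count and per-device usage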
2011 Feb 10
0
(o2net, 6301, 0):o2net_connect_expired:1664 ERROR: no connection established with node 1 after 60.0 seconds, giving up and returning errors.
Hello,
I am installing a two-node cluster. When the file systems are automounted I get the o2net_connect_expired error and the cluster filesystems are not mounted; if I mount them manually with mount -a, they mount without any issues.
1. If I bring Node1 up while Node2 is down, the cluster file system automounts fine without any issues.
2. I checked the
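A common cause of automount-only failures like this is boot ordering: the ocfs2 volumes get mounted before the network and the o2cb cluster stack are fully up. A sketch of the usual checks, assuming EL-style init scripts and a placeholder device/mount point in the fstab line:

chkconfig o2cb on
chkconfig ocfs2 on
chkconfig netfs on
service o2cb status
# example fstab entry: _netdev defers the mount until networking is available
/dev/sdb1  /cluster  ocfs2  _netdev,defaults  0 0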
2007 Aug 23
1
Transport endpoint not connected after crash of one node
Hi,
I am on SLES 10, SP1, x86_64, running the distribution RPMs of ocfs2:
ocfs2console-1.2.3-0.7
ocfs2-tools-1.2.3-0.7
I have a two-node ocfs2 cluster configured. One node died (manual reset),
and the second one immediately started having problems accessing the file
system, with the following reason in the logs: Transport endpoint not
connected.
a mounted.ocfs2 on the still living
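A few commands one might run on the surviving node to see what the cluster stack thinks happened (a sketch only, not from the original report):

mounted.ocfs2 -f              # which nodes the on-disk slot map shows as mounted
service o2cb status           # heartbeat and cluster stack state on the survivor
cat /etc/ocfs2/cluster.conf   # node names and IPs must match on both nodes
service o2cb configure        # review heartbeat dead threshold and network timeouts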
2015 Jun 25
0
LVM hatred, was Re: /boot on a separate partition?
On 06/25/2015 01:20 PM, Chris Adams wrote:
> ...It's basically a way to assemble one arbitrary set of block devices
> and then divide them into another arbitrary set of block devices, but
> now separate from the underlying physical structure.
> Regular partitions have various limitations (one big one on Linux
> being that modifying the partition table of a disk with in-use
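To make the quoted point concrete, a small hypothetical example of assembling two disks into a volume group and carving out a logical volume that can later be grown without caring where the extents physically live (all names below are placeholders):

pvcreate /dev/sdb /dev/sdc
vgcreate vg_data /dev/sdb /dev/sdc
lvcreate -n lv_home -L 500G vg_data
mkfs.ext4 /dev/vg_data/lv_home
lvextend -L +200G /dev/vg_data/lv_home   # grow later, online,
resize2fs /dev/vg_data/lv_home           # regardless of which disk holds the new extents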
2023 Mar 30
1
Performance: lots of small files, hdd, nvme etc.
Well, you have *way* more files than we do... :)
On 30/03/2023 11:26, Hu Bert wrote:
> Just an observation: is there a performance difference between a sw
> raid10 (10 disks -> one brick) or 5x raid1 (each raid1 a brick)
Err... RAID10 is not 10 disks unless you stripe 5 mirrors of 2 disks.
> with
> the same disks (10TB hdd)? The heal processes on the 5xraid1-scenario
>
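For comparison, the two layouts being asked about might be built roughly like this (device names are hypothetical, not from the thread):

# Option A: one 10-disk RAID10, one brick
mdadm --create /dev/md10 --level=10 --raid-devices=10 /dev/sd[b-k]1
# Option B: five 2-disk RAID1 pairs, five bricks
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdd1 /dev/sde1
# ... and so on up to md5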
2009 Apr 17
0
problem with 5.3 upgrade or just bad timing?
I've been experiencing delays accessing data off my file server since I
upgraded to 5.3... either I hosed something, have bad hardware, or (very
unlikely) found a bug.
When reading or writing data, the stream to the HDDs stops every 5-10
min and %iowait goes through the roof. I checked the logs and they
are filled with this diagnostic data that I can't readily decipher.
my setup
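A sketch of how one might narrow down stalls like this while they happen (generic tools, nothing here is taken from the original post):

iostat -xk 5          # per-disk await and %util during the stall
cat /proc/mdstat      # rule out a scheduled raid-check resync window
dmesg | tail -50      # the undecipherable diagnostic data usually names the driver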
2013 Mar 03
4
Strange behavior from software RAID
Somewhere, mdadm is caching information. Here is my /etc/mdadm.conf file:
more /etc/mdadm.conf
# mdadm.conf written out by anaconda
DEVICE partitions
MAILADDR root
ARRAY /dev/md0 level=raid1 num-devices=4 metadata=0.90 UUID=55ff58b2:0abb5bad:42911890:5950dfce
ARRAY /dev/md1 level=raid1 num-devices=2 metadata=0.90 UUID=315eaf5c:776c85bd:5fa8189c:68a99382
ARRAY /dev/md2 level=raid1 num-devices=2
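The conf file is only one of the places the array definition lives; a sketch of where else to look (assuming an EL-style initramfs, with /dev/md0 as a placeholder):

mdadm --examine --scan            # what the on-disk superblocks actually say
mdadm --detail --brief /dev/md0   # what the running kernel has assembled
# after editing /etc/mdadm.conf, refresh the copy embedded in the initramfs:
dracut -f                         # (mkinitrd on EL5)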
2011 Dec 31
1
problem with missing bricks
Gluster-user folks,
I'm trying to use gluster in a way that may be considered an unusual use
case for gluster. Feel free to let me know if you think what I'm doing
is dumb. It just feels very comfortable doing this with gluster.
I have been using gluster in other, more orthodox configurations, for
several years.
I have a single system with 45 inexpensive sata drives - it's a
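A hypothetical sketch of that layout, one brick per disk on a single host in a plain distribute volume (host name and brick paths are placeholders):

mkfs.xfs /dev/sdb1 && mount /dev/sdb1 /bricks/d01   # repeat per drive
gluster volume create bigvol server1:/bricks/d01 server1:/bricks/d02 server1:/bricks/d03
gluster volume start bigvol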
2002 Mar 02
4
ext3 on Linux software RAID1
Everyone,
We just had a pretty bad crash on one of our production boxes and the ext2
filesystem on the data partition of our box had some major filesystem
corruption. Needless to say, I am now looking into converting the
filesystem to ext3 and I have some questions regarding ext3 and Linux
software RAID.
I have read that previously there were some issues running ext3 on a
software raid device
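The ext2-to-ext3 conversion itself is a single step; a minimal sketch, with the md device name used only as a placeholder for the data partition:

tune2fs -j /dev/md2    # add a journal in place; existing data is untouched
# then change the fstab entry's filesystem type from ext2 to ext3 and remount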
2012 Nov 13
1
mdX and mismatch_cnt when building an array
CentOS 6.3, x86_64.
I have noticed when building a new software RAID-6 array on CentOS 6.3
that the mismatch_cnt grows monotonically while the array is building:
# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md11 : active raid6 sdg[5] sdf[4] sde[3] sdd[2] sdc[1] sdb[0]
3904890880 blocks super 1.2 level 6, 512k chunk, algorithm 2 [6/6] [UUUUUU]
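The value accumulated during the initial build is generally not the one to worry about; the meaningful number comes from a scrub of the finished array. A sketch, using the md11 name from the mdstat output above:

cat /sys/block/md11/md/mismatch_cnt           # value left over from the build
echo check > /sys/block/md11/md/sync_action   # read-only scrub once the build is done
cat /sys/block/md11/md/mismatch_cnt           # the post-check value is the one that matters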
2007 Aug 27
3
mdadm --create on CentOS 5?
Is there some new trick to making raid devices on CentOS 5?
# mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdc1
mdadm: error opening /dev/md3: No such file or directory
I thought that worked on earlier versions. Do I have to do something
udev related first?
--
Les Mikesell
lesmikesell at gmail.com
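One hedged guess: mdadm may simply not be creating the /dev/mdX node unless asked. A sketch (the member devices below are placeholders, not a correction of the command above):

mdadm --create /dev/md3 --auto=yes --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1
# or create the device node by hand first (md is block major 9):
mknod /dev/md3 b 9 3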
2012 Jul 10
1
Problem with RAID on 6.3
I have 4 ST2000DL003-9VT166 (2Tbyte) disks in a RAID 5 array. Because
of the size I built them as GPT-partitioned disks. They were originally
built on a CentOS 5.x machine but more recently plugged into a CentOS
6.2 machine where they were detected just fine
e.g.
% parted /dev/sdj print
Model: ATA ST2000DL003-9VT1 (scsi)
Disk /dev/sdj: 2000GB
Sector size (logical/physical): 512B/512B
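A sketch of the usual things to compare when GPT-partitioned RAID members misbehave after moving between releases (the md name is a placeholder):

parted /dev/sdj unit s print   # exact partition boundaries in sectors
mdadm --examine /dev/sdj1      # superblock version, array UUID, device role
mdadm --detail /dev/md0        # how the running kernel actually assembled it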
2019 Jan 31
0
C7, mdadm issues
> On 30/01/19 16:49, Simon Matter wrote:
>>> On 01/30/19 03:45, Alessandro Baggi wrote:
>>>> On 29/01/19 20:42, mark wrote:
>>>>> Alessandro Baggi wrote:
>>>>>> On 29/01/19 18:47, mark wrote:
>>>>>>> Alessandro Baggi wrote:
>>>>>>>> On 29/01/19 15:03, mark wrote:
2019 Jan 30
3
C7, mdadm issues
On 30/01/19 16:49, Simon Matter wrote:
>> On 01/30/19 03:45, Alessandro Baggi wrote:
>>> On 29/01/19 20:42, mark wrote:
>>>> Alessandro Baggi wrote:
>>>>> On 29/01/19 18:47, mark wrote:
>>>>>> Alessandro Baggi wrote:
>>>>>>> On 29/01/19 15:03, mark wrote:
>>>>>>>
2019 Jan 30
0
C7, mdadm issues
> On 01/30/19 03:45, Alessandro Baggi wrote:
>> On 29/01/19 20:42, mark wrote:
>>> Alessandro Baggi wrote:
>>>> On 29/01/19 18:47, mark wrote:
>>>>> Alessandro Baggi wrote:
>>>>>> On 29/01/19 15:03, mark wrote:
>>>>>>
>>>>>>> I've no idea what happened, but the box I was working
2019 Jan 30
4
C7, mdadm issues
On 01/30/19 03:45, Alessandro Baggi wrote:
> On 29/01/19 20:42, mark wrote:
>> Alessandro Baggi wrote:
>>> On 29/01/19 18:47, mark wrote:
>>>> Alessandro Baggi wrote:
>>>>> On 29/01/19 15:03, mark wrote:
>>>>>
>>>>>> I've no idea what happened, but the box I was working on last week
2017 Jul 05
0
attempt to access beyond end of device XFS Disks
Hi,
I rebooted some CEPH servers with 24 HDDs and get some messages for
some of the disks:
[ 519.667055] XFS (sdk1): Mounting V4 Filesystem
[ 519.692307] XFS (sdk1): Ending clean mount
[ 519.781975] attempt to access beyond end of device
[ 519.781984] sdk1: rw=0, want=1560774288, limit=1560774287
All disks are XFS formatted and currently I don't see any problem on the
CEPH side.
But I
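The want/limit pair above differs by a single sector, which suggests XFS believes the device is one sector larger than the partition actually is. A sketch of how one might compare the two sizes (the mount point is a placeholder):

blockdev --getsz /dev/sdk1     # partition size in 512-byte sectors (the "limit" above)
grep sdk1 /proc/partitions     # kernel's view, in 1K blocks
xfs_info /mount/point          # blocks x bsize = what XFS thinks it has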
2010 Jan 08
7
SAN help
My CentOS 5.4 box has a single HBA card with 2 ports connected to my
storage. 2 LUNs are assigned to the HBA card. Under /dev, instead of seeing 4
devices, I see 12 devices, from sdb to sdm. I am using the QLogic driver that
is built into the OS. Has anyone seen this kind of situation?
Paras
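Twelve sd devices for two LUNs usually means each LUN is visible over several paths, and each path gets its own sd node; device-mapper-multipath is the usual way to collapse them. A sketch (EL5-era commands):

yum install device-mapper-multipath
# edit /etc/multipath.conf (the package ships a commented example), then:
service multipathd start
chkconfig multipathd on
multipath -ll    # each LUN should now appear once, with its sd* paths grouped under it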
2016 Jul 24
13
[Bug 97065] New: memory leak under Xwayland with old sdl1 applications
https://bugs.freedesktop.org/show_bug.cgi?id=97065
Bug ID: 97065
Summary: memory leak under Xwayland with old sdl1 applications
Product: xorg
Version: unspecified
Hardware: Other
OS: All
Status: NEW
Severity: normal
Priority: medium
Component: Driver/nouveau
Assignee: nouveau