Displaying 20 results from an estimated 100 matches similar to: "raid10 data fs full after degraded mount"
2010 Jan 12
0
dmu_zfetch_find - lock contention?
Hi,
I have a MySQL instance which, if I point more load at it, suddenly goes to 100% in SYS, as shown below. It can work fine for an hour, but eventually it jumps from 5-15% CPU utilization to 100% in SYS, as shown in the mpstat output below:
# prtdiag | head
System Configuration: SUN MICROSYSTEMS SUN FIRE X4170 SERVER
BIOS Configuration: American Megatrends Inc. 07060215
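If the jump to 100% SYS really is lock contention in the zfetch code, a rough way to confirm it on Solaris is to sample kernel locks and kernel time while the problem is happening (a sketch; adapt the durations):
# lockstat sleep 10              (summarize adaptive/spin lock contention)
# lockstat -kIW -D 20 sleep 10   (profile where kernel time is spent)
If dmu_zfetch_find dominates, the commonly cited workaround is to disable file-level prefetch, e.g. "set zfs:zfs_prefetch_disable = 1" in /etc/system followed by a reboot; treat that tunable as an assumption to verify for your release.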
2007 Apr 24
2
setting up CentOS 5 with Raid10
I would like to set up CentOS on 4 SATA hard drives configured in
RAID10. I read somewhere that RAID10 support is in the
latest kernel, but I can't seem to get anaconda to let me create it. I
only see raid 0, 1, 5, and 6.
Even when I tried to set up raid5 or raid1, it would not let me put the
/boot partition on it, and I thought that this was now possible.
Is it
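For reference, when anaconda only offers levels 0/1/5/6, the array can be created by hand from a shell (a sketch, assuming one partition on each of four drives; device names are placeholders):
# mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
/boot is a separate question: the GRUB of that era can only read RAID1 (or a plain partition), so /boot should not live on the raid10 array itself.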
2013 Mar 12
1
what is "single spanned virtual disk" on RAID10????
We have a DELL R910 with an H800 adapter in it, with several MD1220 enclosures connected to the H800.
Since the MD1220 holds 24 hard disks, when I configured RAID10 there was a choice called 'single spanned virtual disk' (22 disks).
Can anyone tell me how a 'single spanned virtual disk' works?
Any documents related to it?
Thanks.
2011 May 05
1
Converting 1-drive ext4 to 4-drive raid10 btrfs
Hello!
I have a 1 TB ext4 drive that's quite full (~50 GB free space, though I
could free up another 100 GB or so if necessary) and two empty 0.5 TB
drives.
Is it possible to get another 1 TB drive and combine the four drives to
a btrfs raid10 setup without (if all goes well) losing my data?
Regards,
Paul
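In outline, a path that can work on a kernel with the btrfs restriper (3.3 or later) is: build a three-disk filesystem, copy the data over, add the old drive as the fourth device, then convert (a sketch; /dev/sdb is the new 1 TB disk, /dev/sdc and /dev/sdd the 0.5 TB disks, /dev/sda the old ext4 drive, all placeholders):
# mkfs.btrfs -d single /dev/sdb /dev/sdc /dev/sdd
# mount /dev/sdb /mnt/new
# cp -a /mnt/old/. /mnt/new/        (verify the copy before touching /dev/sda)
# btrfs device add /dev/sda /mnt/new
# btrfs balance start -dconvert=raid10 -mconvert=raid10 /mnt/new
Until the balance finishes there is only one copy of the data, so this is not risk-free.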
2009 Dec 10
3
raid10, centos 4.x
I just created a 4-drive mdadm --level=raid10 array on a CentOS 4.8-ish system
here, and shortly thereafter remembered I hadn't updated it in a while,
so I ran yum update...
while installing/updating stuff, got these errors:
Installing: kernel #######################
[14/69]
raid level raid10 (in /proc/mdstat) not recognized
...
Installing: kernel-smp
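That error usually comes from the initrd tooling on CentOS 4 not recognizing the raid10 personality in /proc/mdstat, so the new kernel's initrd may be missing the module. A possible workaround is to rebuild the initrd with the module preloaded (a sketch; substitute the actual kernel version for <version>):
# mkinitrd -f --preload=raid10 /boot/initrd-<version>.img <version>
Check that the array still assembles from the new initrd before rebooting for real.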
2017 Oct 17
1
lvconvert(split) - raid10 => raid0
hi guys, gals
do you know if conversion from lvm's raid10 to raid0 is
possible?
I'm fiddling with --splitmirrors but it gets me nowhere.
On "takeover" subject man pages says: "..between
striped/raid0 and raid10."" but no details, nowhere I could
find documentation, nor a howto.
many thanks, L.
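For what it's worth, the takeover syntax the man page is alluding to looks like the following (a sketch only; whether lvm2 accepts a direct raid10-to-raid0 takeover depends on the version, and it may refuse with exactly this kind of dead end):
# lvconvert --type raid0 vg/lv
# lvs -o name,segtype vg          (confirm the resulting segment type)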
2019 Sep 30
1
CentOS 8 broken mdadm Raid10
Hello,
On my system with an Intel SCU controller and a RAID 10 setup, it is not
possible to install onto this RAID10. I have tested this with CentOS 7 and
openSUSE; both found my RAID, but with CentOS 8 it is broken.
At the start of the installation I get an error from mdadm, and that is all.
Now I am downloading the Stream ISO to test,
and hope .....
--
mit freundlichen Grüssen / best regards
Günther J,
2014 May 28
0
Failed Disk RAID10 Problems
Hi,
I have a Btrfs RAID 10 (data and metadata) file system that I believe
suffered a disk failure. In my attempt to replace the disk, I think
that I've made the problem worse and need some help recovering it.
I happened to notice a lot of errors in the journal:
end_request: I/O error, dev dm-11, sector 1549378344
BTRFS: bdev /dev/mapper/Hitachi_HDS721010KLA330_GTA040PBG71HXF1 errs:
wr
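For a raid10 that has lost a disk, kernels with the replace ioctl make this a one-step operation instead of add/delete (a sketch; the devid of the failed disk and the new device name are assumptions to adapt):
# mount -o degraded /dev/sdb /mnt     (if the failed disk is already gone)
# btrfs replace start 4 /dev/sdg /mnt
# btrfs replace status /mnt
Using the devid (here 4) avoids touching the failed device path at all.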
2012 Jul 14
2
bug: raid10 filesystem has suddenly ceased to mount
Hi!
The problem is that a BTRFS raid10 filesystem refuses to mount, without
any understandable cause.
Here is dmesg output:
[77847.845540] device label linux-btrfs-raid10 devid 3 transid 45639 /dev/sdc1
[77848.633912] btrfs: allowing degraded mounts
[77848.633917] btrfs: enabling auto defrag
[77848.633919] btrfs: use lzo compression
[77848.633922] btrfs: turning on flush-on-commit
[77848.658879]
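Typical triage for a filesystem in this state, assuming a kernel new enough to have these mount options (a sketch, not advice specific to this report):
# mount -o ro,recovery /dev/sdc1 /mnt    (fall back to an older tree root, read-only)
# mount -o ro,degraded /dev/sdc1 /mnt    (if a member device is missing)
# btrfs-zero-log /dev/sdc1               (last resort: discard the log tree)
A read-only mount that succeeds is the moment to copy the data off.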
2013 Jun 16
1
btrfs balance resume + raid5/6
Greetings!
I'm testing raid6, and recently added two drives.
I haven't been able to properly resume a balance operation: the number of
total chunks is always too low.
It seems that the balance starts and pauses properly, but always resumes
with ~7 chunks.
Here's an example:
vendikar tim # uname -r
3.10.0-031000rc4-generic
vendikar tim # btrfs fi sho
Label:
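Presumably the cycle being tested is the stock pause/resume sequence (a sketch of the commands involved):
# btrfs balance start /mnt &
# btrfs balance pause /mnt
# btrfs balance status /mnt     (reports how many chunks of the total are done)
# btrfs balance resume /mnt
The report above is that after resume the chunk total drops to ~7 instead of continuing where it paused.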
2013 Mar 28
1
question about replacing a drive in raid10
Hi all,
I have a question about replacing a drive in raid10 (and linux kernel 3.8.4).
A bad disk was physically removed from the server. After this a new disk
was added with "btrfs device add /dev/sdg /btrfs" to the raid10 btrfs
FS.
After this the server was rebooted and I mounted the filesystem in
degraded mode. It seems that a previously started balance continued.
At this point I want to
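After adding the new device, the add/delete style of replacement is normally completed by deleting the absent disk, which triggers the rebuild onto the new one (a sketch):
# mount -o degraded /dev/sdb /btrfs
# btrfs device add /dev/sdg /btrfs
# btrfs device delete missing /btrfs    (copies the missing disk's chunks to the new device)
Until the delete finishes, the filesystem is still degraded.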
2014 Apr 07
3
Software RAID10 - which two disks can fail?
Hi All.
I have a server which uses RAID10 made of 4 partitions for / and boots from
it. It looks like so:
mdadm -D /dev/md1
/dev/md1:
Version : 00.90
Creation Time : Mon Apr 27 09:25:05 2009
Raid Level : raid10
Array Size : 973827968 (928.71 GiB 997.20 GB)
Used Dev Size : 486913984 (464.36 GiB 498.60 GB)
Raid Devices : 4
Total Devices : 4
Preferred Minor : 1
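For the default near=2 layout on four devices, the copies sit on adjacent role numbers: RaidDevice 0+1 hold one mirrored pair and 2+3 the other, so any two disks can fail as long as they are not both members of the same pair. The layout can be confirmed from the detail output (a sketch):
# mdadm -D /dev/md1 | grep Layout
        Layout : near=2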
2005 Aug 12
3
Need a CONFIRMED working hardware sata raid10 card
Does anyone know of a card that actually works with CentOS 4
that supports RAID10? I don't think CentOS supports software RAID 10 from
the installer.
Thanks,
-Drew
2007 May 07
5
Anaconda doesn't support raid10
So after troubleshooting this for about a week, I was finally able to
create a raid 10 device by installing the system, copying the md modules
onto a floppy, and loading the raid10 module during the install.
Now the problem is that I can't get it to show up in anaconda. It
detects the other arrays (raid0 and raid1) fine, but the raid10 array
won't show up. Looking through the logs
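The variant anaconda of that era can handle natively is nested RAID 1+0: two RAID1 pairs that it does recognize, striped together with RAID0 (a sketch with placeholder partitions):
# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
# mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1
# mdadm --create /dev/md2 --level=0 --raid-devices=2 /dev/md0 /dev/md1
This loses the raid10 driver's layout options, but it shows up as array types the installer understands.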
2013 May 10
5
Btrfs balance invalid argument error
Hi list,
I am using kernel 3.9.0, btrfs-progs 0.20-rc1-253-g7854c8b.
I have a three disk array of level single:
# btrfs fi sh
Label: none uuid: 2e905f8f-e525-4114-afa6-cce48f77b629
Total devices 3 FS bytes used 3.80TB
devid 1 size 2.73TB used 2.25TB path /dev/sdd
devid 2 size 2.73TB used 1.55TB path /dev/sdc
devid 3 size 2.73TB used 0.00 path /dev/sdb
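For reference, the balance invocations that tools of that era accept look like the following ('invalid argument' from a balance often means the running kernel rejected a filter the tools passed, so kernel and progs versions both matter; a sketch):
# btrfs balance start /mnt
# btrfs balance start -dusage=50 /mnt      (only rewrite data chunks at most 50% full)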
2013 Aug 11
2
(un)mounting takes a long time
Hello!
I'm using ArchLinux with kernel Linux horus 3.10.5-1-ARCH #1 SMP PREEMPT.
Mounting and unmounting takes a long time:
# time mount -v /mnt/Archiv
mount: /dev/sde1 mounted on /mnt/Archiv.
mount -v /mnt/Archiv 0,00s user 0,16s system 1% cpu 9,493 total
# sync && time umount -v /mnt/Archiv
umount: /mnt/Archiv (/dev/sdd1) unmounted
umount -v /mnt/Archiv 0,00s user
2016 Jun 16
0
[PATCH v7 00/12] Support non-lru page migration
Hi,
On (06/16/16 08:12), Minchan Kim wrote:
> > [ 315.146533] kasan: CONFIG_KASAN_INLINE enabled
> > [ 315.146538] kasan: GPF could be caused by NULL-ptr deref or user memory access
> > [ 315.146546] general protection fault: 0000 [#1] PREEMPT SMP KASAN
> > [ 315.146576] Modules linked in: lzo zram zsmalloc mousedev coretemp hwmon crc32c_intel r8169 i2c_i801 mii
2012 May 06
4
btrfs-raid10 <-> btrfs-raid1 confusion
Greetings,
until yesterday I was running a btrfs filesystem across two 2.0 TiB
disks in RAID1 mode for both metadata and data without any problems.
As space was getting short I wanted to extend the filesystem by two
additional drives lying around, both of which are 1.0 TiB in size.
Knowing little about the btrfs RAID implementation I thought I had to
switch to RAID10 mode, which I was told is
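As an aside, btrfs RAID1 only requires that each chunk have two copies on two different devices, not equal-sized disks, so mixed sizes can simply be added to the existing RAID1 (a sketch, device names assumed):
# btrfs device add /dev/sdc /dev/sdd /mnt
# btrfs balance start /mnt     (spread existing chunks across all four drives)
With 2x2.0 TiB plus 2x1.0 TiB in RAID1, usable capacity works out to about 3 TiB.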
2016 Jun 15
0
[PATCH v7 00/12] Support non-lru page migration
Hello Minchan,
-next 4.7.0-rc3-next-20160614
[ 315.146533] kasan: CONFIG_KASAN_INLINE enabled
[ 315.146538] kasan: GPF could be caused by NULL-ptr deref or user memory access
[ 315.146546] general protection fault: 0000 [#1] PREEMPT SMP KASAN
[ 315.146576] Modules linked in: lzo zram zsmalloc mousedev coretemp hwmon crc32c_intel r8169 i2c_i801 mii snd_hda_codec_realtek snd_hda_codec_generic
2011 Sep 27
2
high CPU usage and low perf
Hiya,
Recently, a btrfs file system of mine started to behave very poorly,
with some btrfs kernel tasks taking 100% of CPU time.
# btrfs fi show /dev/sdb
Label: none uuid: b3ce8b16-970e-4ba8-b9d2-4c7de270d0f1
Total devices 3 FS bytes used 4.25TB
devid 2 size 2.73TB used 1.52TB path /dev/sdc
devid 1 size 2.70TB used 1.49TB path /dev/sda4
devid 3 size
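A first step for pinning down which kernel code is spinning (a sketch, assuming perf and sysrq are available):
# perf top                        (live view of the hottest kernel symbols)
# echo l > /proc/sysrq-trigger    (dump backtraces of busy CPUs to dmesg)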