similar to: question about replacing a drive in raid10

Displaying 20 results from an estimated 300 matches similar to: "question about replacing a drive in raid10"

2012 Sep 05
3
BTRFS thinks device is busy [kernel 3.5.3]
Hi, I'm running openSUSE 12.2 with kernel 3.5.3; the HBA is an LSI 1068e using the patched MPTSAS driver (https://patchwork.kernel.org/patch/1379181/). SANOS1:/media # uname -a Linux SANOS1 3.5.3 #3 SMP Sun Sep 2 18:44:37 CEST 2012 x86_64 x86_64 x86_64 GNU/Linux I've tried to simulate a disk replacement, but it seems that /dev/sdg is now stuck in the btrfs pool (RAID10). SANOS1:/media #
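For context, on a 3.5-era kernel (which predates the dedicated "btrfs replace" command added around kernel 3.8) the usual replacement sequence on a mounted RAID10 pool was add-then-delete; the device names and mount point below are hypothetical:

    btrfs device add /dev/sdh /media/pool      # bring in the replacement disk first
    btrfs device delete /dev/sdg /media/pool   # evict the old one; data is restriped onto the new disk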
2012 Dec 12
1
kernel BUG at fs/btrfs/extent_io.c:4052 (kernel 3.5.3)
Hi all, Last week our server crashed twice with an "uncorrectable ECC memory error", both times on the same memory module. After removing the faulty module and restarting the server, everything was working again. However, yesterday we had a soft lockup and had to restart the server again, with no warning or ECC error this time. Everything is working now, but we want to avoid this in the future
2014 May 28
0
Failed Disk RAID10 Problems
Hi, I have a Btrfs RAID 10 (data and metadata) file system that I believe suffered a disk failure. In my attempt to replace the disk, I think that I've made the problem worse and need some help recovering it. I happened to notice a lot of errors in the journal: end_request: I/O error, dev dm-11, sector 1549378344 BTRFS: bdev /dev/mapper/Hitachi_HDS721010KLA330_GTA040PBG71HXF1 errs: wr
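On kernels new enough to have it (3.8+), the dedicated replace command is usually the cleaner path here, and -r tells it to rebuild from the surviving RAID10 copies rather than read the failing disk; the devid and target device below are invented for illustration:

    btrfs filesystem show /mnt              # note the devid of the failed disk
    btrfs replace start -r 3 /dev/sdx /mnt  # rebuild devid 3 onto the new disk
    btrfs replace status /mnt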
2012 Nov 22
0
raid10 data fs full after degraded mount
Hello, on a fs with 4 disks, raid10 for data, one drive was failing and has been removed. After a reboot and 'mount -o degraded ...', the fs looks full, even though before removal of the failed device almost 80% was free. root@fs0:~# df -h /mnt/b Filesystem Size Used Avail Use% Mounted on /dev/sde 11T 2.5T 41M 100% /mnt/b root@fs0:~# btrfs fi df /mnt/b Data,
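A plausible recovery sequence for a 4-disk raid10 missing one member is sketched below; /dev/sde and the mount point come from the post, the replacement disk is hypothetical. (df output is often misleading on a degraded btrfs, which may account for the 100% figure.)

    mount -o degraded /dev/sde /mnt/b
    btrfs device add /dev/sdf /mnt/b     # raid10 needs at least 4 devices, so add before deleting
    btrfs device delete missing /mnt/b   # rebuilds the missing copies onto the new disk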
2012 May 06
4
btrfs-raid10 <-> btrfs-raid1 confusion
Greetings, until yesterday I was running a btrfs filesystem across two 2.0 TiB disks in RAID1 mode for both metadata and data without any problems. As space was getting short I wanted to extend the filesystem with two additional drives I had lying around, both 1.0 TiB in size. Knowing little about the btrfs RAID implementation, I thought I had to switch to RAID10 mode, which I was told is
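In fact btrfs raid1 can span any number of devices, so switching profiles is optional; if a conversion is wanted, it is done online with balance filters (kernel 3.3 or newer). A minimal sketch with hypothetical device names:

    btrfs device add /dev/sdc /dev/sdd /mnt
    btrfs balance start -dconvert=raid10 -mconvert=raid10 /mnt   # or -dconvert=raid1 to stay on raid1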
2012 Jul 14
2
bug: raid10 filesystem has suddenly ceased to mount
Hi! The problem is that the btrfs RAID10 filesystem refuses to mount, without any apparent cause. Here is the dmesg output: [77847.845540] device label linux-btrfs-raid10 devid 3 transid 45639 /dev/sdc1 [77848.633912] btrfs: allowing degraded mounts [77848.633917] btrfs: enabling auto defrag [77848.633919] btrfs: use lzo compression [77848.633922] btrfs: turning on flush-on-commit [77848.658879]
2013 Jun 16
1
btrfs balance resume + raid5/6
Greetings! I'm testing raid6, and recently added two drives. I haven't been able to properly resume a balance operation: the total chunk count is always too low. It seems that the balance starts and pauses properly, but always resumes with ~7 chunks. Here's an example: vendikar tim # uname -r 3.10.0-031000rc4-generic vendikar tim # btrfs fi sho Label:
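For reference, the pause/resume interface being exercised looks like this (mount point hypothetical); whether resume restores the full chunk count on raid5/6 under a 3.10-rc kernel was precisely the open question:

    btrfs balance pause /mnt
    btrfs balance status /mnt    # should show the paused state and chunk counts
    btrfs balance resume /mnt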
2011 Feb 05
2
Strangeness on btrfs balance..
Hi there... I have kernel version 2.6.36.3, compiled with gcc 4.4.5, btrfstools version 0.19+20101101 I have a btrfs filesystem (/data) consisting of two 1TB hard disks, raid0. I added in another 1TB hard drive. root@X86-64:~# btrfs filesystem show failed to read /dev/sdh failed to read /dev/sdg failed to read /dev/sdf failed to read /dev/sde failed to read /dev/sr0 failed to read /dev/fd0u800
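The steps that typically precede this situation are sketched below with hypothetical names; note that balance on 2.6.36-era tools had no filters and simply rewrote everything across all member disks:

    btrfs device add /dev/sdX /data
    btrfs filesystem balance /data   # old-style full balance, restripes raid0 over all three disks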
2010 Nov 02
0
raid0 corruption, how to restore?
I have two disks that I formatted as btrfs RAID0 on openSUSE 11.3. The raid worked well for several days until there was a power surge. The system successfully rebooted and the btrfs raid reappeared, but the kernel occasionally threw an oops; that was my first experience with kernel oopses. After two days, the btrfs raid failed to mount via fstab, and when I manually tried to mount it, there was a kernel
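When a btrfs no longer mounts, the usual last resort is offline extraction with btrfs restore, which copies whatever it can still reach onto a healthy filesystem; the paths here are made up, and the tool was younger and less capable in 2010-era btrfs-progs:

    btrfs restore /dev/sdb /mnt/recovery   # read-only scrape, does not modify the broken fs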
2012 May 05
5
Is it possible to reclaim block groups once they are allocated to data or metadata?
Hello list, I recently reformatted my home partition from XFS to RAID1 btrfs. I used the default options to mkfs.btrfs, except for enabling raid1 for data as well as metadata. The filesystem is made up of two 1TB drives. mike@mercury (0) pts/3 ~ $ sudo btrfs filesystem show Label: none uuid: f08a8896-e03e-4064-9b94-9342fb547e47 Total devices 2 FS bytes used 888.06GB devid 1 size 931.51GB used
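The answer that later became standard is the balance usage filter, which rewrites nearly-empty block groups and returns the freed space to the unallocated pool; it requires kernel 3.3 or newer, so it may not have applied to this 2012 install. The threshold below is an arbitrary example:

    btrfs balance start -dusage=5 -musage=5 /home   # reclaim block groups under 5% used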
2010 Dec 29
1
list files on a device
Hello, After another power loss (I am fortunate, am I not?) I have the following situation: Label: none uuid: ac155851-0e31-4aed-9ba4-ee712506368a Total devices 3 FS bytes used 1.02TB devid 1 size 931.51GB used 70.00GB path /dev/sdd1 devid 3 size 1.79TB used 66.52GB path /dev/md2 devid 2 size 914.70GB used 914.50GB path /dev/sda4 btrfs device delete does not work on both md2 and
2010 Oct 14
0
AMD/Supermicro machine - AS-2022G-URF
Sorry for the long post, but I know people trying to decide on hardware often want to see details about what others are using. I have the following AS-2022G-URF machine running OpenGaryIndiana[1] that I am starting to use. I successfully transferred a deduped zpool with 1.x TB of files and 60 or so zfs filesystems using mbuffer from an old 134 system with 6 drives - it ran at about 50MB/s or
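A transfer of that shape is typically wired up as follows; host name, port, snapshot name, and buffer size are illustrative only:

    mbuffer -I 9090 -m 1G | zfs receive -F tank/backup        # on the receiving machine
    zfs send -R oldpool@snap | mbuffer -O newhost:9090 -m 1G  # on the sending machine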
2007 Apr 24
2
setting up CentOS 5 with Raid10
I would like to set up CentOS on 4 SATA hard drives that I would like to configure as RAID10. I read somewhere that RAID10 support is in the latest kernel, but I can't seem to get anaconda to let me create it; I only see RAID 0, 1, 5, and 6. Even when I tried to set up raid5 or raid1, it would not let me put the /boot partition on it, and I thought that this was now possible. Is it
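A common workaround when anaconda only offers levels 0/1/5/6 is to make /boot a small RAID1 (bootable from any member) and build the RAID10 from the shell or after installation; the device names below are hypothetical:

    mdadm --create /dev/md0 --level=1 --raid-devices=4 /dev/sd[abcd]1    # small /boot mirror
    mdadm --create /dev/md1 --level=10 --raid-devices=4 /dev/sd[abcd]2   # RAID10 for the rest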
2013 Mar 12
1
what is a "single spanned virtual disk" on RAID10?
We have a DELL R910 with an H800 adapter in it and several MD1220 enclosures connected to the H800. An MD1220 holds 24 hard disks, and when I configured RAID10 there was a choice called "single spanned virtual disk" (22 disks). Can anyone tell me how a "single spanned virtual disk" works? Is there any documentation related to it? Thanks.
2011 May 05
1
Converting 1-drive ext4 to 4-drive raid10 btrfs
Hello! I have a 1 TB ext4 drive that's quite full (~50 GB free space, though I could free up another 100 GB or so if necessary) and two empty 0.5 TB drives. Is it possible to get another 1 TB drive and combine the four drives into a btrfs raid10 setup without (if all goes well) losing my data? Regards, Paul
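One hedged route, assuming a kernel with balance conversion filters (3.3+) and a verified backup first, since in-place conversion failures were not unheard of; the device names are invented:

    btrfs-convert /dev/sda1                  # in-place ext4 -> btrfs conversion
    mount /dev/sda1 /mnt
    btrfs device add /dev/sdb /dev/sdc /dev/sdd /mnt
    btrfs balance start -dconvert=raid10 -mconvert=raid10 /mnt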
2009 Dec 10
3
raid10, centos 4.x
I just created a 4 drive mdadm --level=raid10 array on a CentOS 4.8-ish system here, and shortly thereafter remembered I hadn't updated it in a while, so I ran yum update... while installing/updating stuff, I got these errors: Installing: kernel ####################### [14/69] raid level raid10 (in /proc/mdstat) not recognized ... Installing: kernel-smp
2017 Oct 17
1
lvconvert(split) - raid10 => raid0
Hi guys and gals, do you know if conversion from LVM's raid10 to raid0 is possible? I'm fiddling with --splitmirrors but it gets me nowhere. On the "takeover" subject the man page says: "..between striped/raid0 and raid10." but gives no details; I could find no documentation or howto anywhere. Many thanks, L.
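For what it's worth, on lvm2 builds that implement this takeover the direct form would be the following; whether the installed version actually supports the raid10-to-raid0 path is exactly the open question here:

    lvconvert --type raid0 vg/lv        # drop the mirror halves, keep the stripes
    lvconvert --type raid0_meta vg/lv   # variant that keeps per-image metadata LVs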
2019 Sep 30
1
CentOS 8 broken mdadm Raid10
Hello, On my system with an Intel SCU controller and a RAID 10 setup it is not possible to install onto this RAID10. I have tested this with CentOS 7 and openSUSE, and both found my RAID, but with CentOS 8 it is broken. At the start of the installation I get an error from mdadm, that is all. Now I am downloading and testing the Stream ISO and hope ..... -- mit freundlichen Grüssen / best regards Günther J,
2009 Jul 25
1
OpenSolaris 2009.06 - ZFS Install Issue
I've installed OpenSolaris 2009.06 on a machine with 5 identical 1TB WD Green drives to create a ZFS NAS. The intended install is one drive dedicated to the OS and the remaining 4 drives in a raidz1 configuration. The install works fine, but creating the raidz1 pool and rebooting causes the machine to report "Cannot find active partition" upon reboot. Below is the command
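The pool creation itself is one command (disk names illustrative); the reboot failure is more likely about which fdisk partition is marked active on the boot disk than about ZFS itself:

    zpool create tank raidz1 c8t1d0 c8t2d0 c8t3d0 c8t4d0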
2009 Dec 12
0
Messed up zpool (double device label)
Hi! I tried to add another FireWire drive to my existing four devices, but it turned out that the OpenSolaris IEEE1394 support doesn't seem to be well engineered. After it failed to recognize the new device, and after exporting and importing the existing zpool, I get this zpool status: pool: tank state: DEGRADED status: One or more devices could not be used because the label is missing or