similar to: Problem with raid0

Displaying 20 results from an estimated 3000 matches similar to: "Problem with raid0"

2013 Oct 04
1
btrfs raid0
How can I verify the read speed of a btrfs raid0 pair on Arch Linux? I assume raid0 means striped activity in parallel, at least similar to raid0 in mdadm. How can I measure the btrfs read speed, given that it is copy-on-write, which is not the norm for mdadm raid0? Perhaps I cannot use the same approach in btrfs to determine the performance. Secondly, I see a methodology for raid10 using
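One rough way to measure sequential read throughput, assuming a large file already exists on the mounted filesystem (the path, size, and device name below are illustrative):

    # drop the page cache first so reads actually hit the disks (needs root)
    echo 3 > /proc/sys/vm/drop_caches
    # time a large sequential read from the btrfs mount
    dd if=/mnt/btrfs/testfile of=/dev/null bs=1M count=4096
    # raw-device baseline for comparison, bypassing the filesystem
    hdparm -t /dev/sda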
2013 Aug 19
1
LVM RAID0 and SSD discards/TRIM
I'm trying to work out the kinks of a proprietary, old, and clunky application that runs on CentOS. One of its main problems is that it writes image sequences extremely non-linearly and in several passes, using many CPUs, so the sequences get very fragmented. The obvious solution to this seems to be to use SSDs for its output, and some scripts that will pick up and copy out the sequences
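For the TRIM side of this, one hedged approach (mount point is illustrative) is to let LVM pass discards down and run periodic trims rather than mounting with -o discard:

    # /etc/lvm/lvm.conf, devices section: pass discards down when LVs are removed or shrunk
    issue_discards = 1
    # trim the mounted filesystem on the striped LV periodically (e.g. from cron)
    fstrim -v /mnt/ssd-out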
2020 Nov 12
1
BIOS RAID0 and differences between disks
On 11/04/2020 10:21 PM, John Pierce wrote: > is it RAID 0 (striped) or raid1 (mirrored) ?? > > if you wrote on half of a raid0 stripe set, you basically trashed it. > blocks are striped across both drives, so like 16k on the first disk, then > 16k on the 2nd then 16k back on the first, repeat (replace 16k with > whatever your raid stripe size is). > > if it's a raid 1
2020 Nov 05
1
BIOS RAID0 and differences between disks
> On Nov 4, 2020, at 9:21 PM, John Pierce <jhn.pierce at gmail.com> wrote: > > is it RAID 0 (striped) or raid1 (mirrored) ?? > > if you wrote on half of a raid0 stripe set, you basically trashed it. > blocks are striped across both drives, so like 16k on the first disk, then > 16k on the 2nd then 16k back on the first, repeat (replace 16k with > whatever your raid
2020 Nov 05
3
BIOS RAID0 and differences between disks
My computer running CentOS 7 is configured to use BIOS RAID0 and has two identical SSDs, which are also encrypted. I had a crash the other day, and due to a bug in the operating system update I am unable to boot the system in RAID mode, since dracut does not recognize the disks in grub. After modifying the grub command line I am able to boot the system from one of the hard disks after entering the
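A cautious first step in this situation, assuming the BIOS RAID is Intel firmware RAID (IMSM) handled by mdadm (device names are guesses):

    mdadm --detail-platform              # what the firmware RAID supports
    mdadm --examine /dev/sda /dev/sdb    # inspect the RAID metadata on each SSD
    mdadm --assemble --scan              # try to assemble the container and member array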
2017 Oct 17
1
lvconvert(split) - raid10 => raid0
hi guys, gals, do you know if conversion from LVM's raid10 to raid0 is possible? I'm fiddling with --splitmirrors but it gets me nowhere. On the "takeover" subject the man page says: "..between striped/raid0 and raid10." but gives no details; I could find no documentation nor a howto anywhere. many thanks, L.
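With a reasonably recent lvm2, the takeover path the man page hints at is a direct conversion rather than --splitmirrors; a sketch, assuming the LV is vg0/lv0:

    # convert the raid10 LV to raid0 in place (use raid0_meta to keep per-image metadata LVs)
    lvconvert --type raid0 vg0/lv0
    lvs -o name,segtype vg0    # verify the new segment type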
2009 Sep 08
4
Can ZFS simply concatenate LUNs (eg no RAID0)?
Hi, I do have a disk array that is providing striped LUNs to my Solaris box. Hence I'd like to simply concat those LUNs without adding another layer of striping. Is this possible with ZFS? As far as I understood, if I use zpool create myPool lun-1 lun-2 ... lun-n I will get a RAID0 striping where each data block is split across all "n" LUNs. If that's
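Worth noting: ZFS does not split each block across all top-level vdevs the way rigid RAID0 does; each block lands on one vdev and allocation is balanced dynamically, so a plain multi-LUN pool already behaves closer to concat plus load-balancing. A sketch using the names from the post:

    zpool create myPool lun-1 lun-2 lun-3
    zpool iostat -v myPool    # per-vdev usage shows how writes spread across the LUNs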
2007 Apr 28
1
Problems with RAID0 array on new server
Hello, I recently installed CentOS 5 on a new server with a single SCSI disk. After the installation, I added 2 additional disks that were once the components of a raid0 array on another server. I get some errors and am unable to start the array. The following is an extract from dmesg output: md: Autodetecting RAID arrays. md: could not open unknown-block(8,17). md: could not open
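When autodetect fails like this, assembling by hand usually clarifies what is wrong; a sketch, with guessed device names for the two added disks:

    mdadm --examine /dev/sdb1 /dev/sdc1            # check the old raid0 superblocks
    mdadm --assemble /dev/md0 /dev/sdb1 /dev/sdc1  # assemble explicitly from the members
    cat /proc/mdstat                               # confirm the array came up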
2020 Nov 05
0
BIOS RAID0 and differences between disks
is it RAID 0 (striped) or raid1 (mirrored) ?? if you wrote on half of a raid0 stripe set, you basically trashed it. blocks are striped across both drives, so like 16k on the first disk, then 16k on the 2nd, then 16k back on the first, repeat (replace 16k with whatever your raid stripe size is). if it's a raid 1 mirror, then either disk by itself has the complete file system on it, so you should be
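To see the actual stripe (chunk) size rather than guessing 16k, assuming the array appears as /dev/md0:

    mdadm --detail /dev/md0 | grep -i chunk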
2010 Oct 28
0
RAID0 limiting disk utilization
I noticed that if I have single-device allocation for data in a multi-device btrfs filesystem, a balance operation will convert the data to RAID0. This is true even if '-d single' is specified explicitly when creating the filesystem. Then it wants to continue using RAID0 for future data allocations, and I run out of space once there are no longer two drives with space
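On kernels with balance filters (added after this post), the way out of this is an explicit convert back to single; the mount point is illustrative:

    btrfs balance start -dconvert=single /mnt
    btrfs fi df /mnt    # verify data is allocated as single afterwards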
2013 May 27
0
[Question] How to restore btrfs raid0 image file?
Hi, So the case is, now I've got a btrfs image file, which is created from a raid0 btrfs fs. And if I run 'btrfs-image -r image_file /dev/sdf', then I have to mount it with 'degraded' mode, and that still fails because raid0 requires two disks at least. So any ideas how to make it work? thanks, liubo
2014 Dec 18
2
sysvol /etc /private replication/backup via git
Hello List, we have a (test) Samba4 AD system that has been running successfully for several months. We are now trying to add more DCs to the mix and have a proper backup/disaster recovery strategy. Basically I want to adapt the samba_backup script found in the sources to back up everything via git into our repository. I understand that the best solution would be to tar everything before commit. In
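A rough sketch of that tar-then-commit idea, with purely illustrative paths; note the real samba_backup script also handles the tdb/ldb databases with tdbbackup, which this sketch skips:

    #!/bin/sh
    # snapshot samba state into tarballs, then commit them to a git repo
    BACKUP=/srv/backup/samba
    tar -cf "$BACKUP/sysvol.tar"  /var/lib/samba/sysvol
    tar -cf "$BACKUP/private.tar" /var/lib/samba/private
    tar -cf "$BACKUP/etc.tar"     /etc/samba
    cd "$BACKUP" && git add -A && git commit -m "samba state $(date +%F)"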
2006 Apr 03
0
[amr] raid config went from RAID5 to RAID0 ?
I just noticed something very strange on one of our Dell PE1750 servers. It is running FreeBSD 4-STABLE on dual CPUs with the embedded Dell RAID controller (amr driver). Attached are 3 disks of 145GB. On a RAID5 logical drive this gives me ~280GB storage. Up until the last reboot (35 days ago) the 'amrcontrol' status utility gave me: Logical drive 0 Stripes blah Size blah
2010 Nov 02
0
raid0 corruption, how to restore?
I have two disks that I formatted as btrfs RAID0 on openSUSE 11.3. The raid worked well for several days until there was a power surge. The system successfully rebooted and the btrfs raid reappeared, but the kernel occasionally threw an oops. That was my first experience with an oops. After two days, the btrfs raid failed to mount via fstab, and when I manually tried to mount it, there was a kernel
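Before reformatting, a read-only salvage attempt costs nothing; a sketch with guessed device and destination paths:

    btrfs check --readonly /dev/sdb        # assess the damage without writing anything
    btrfs restore /dev/sdb /mnt/recovery   # copy out whatever files are still readable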
2015 Feb 16
4
Centos 7.0 and mismatched swap file
On Mon, Feb 16, 2015 at 6:47 AM, Eliezer Croitoru <eliezer at ngtech.co.il> wrote: > I am unsure I understand what you wrote. > "XFS will create multiple AG's across all of those > devices," > Are you comparing md linear/concat to md raid0? and that the upper level XFS > will run on top of them? Yes to the first question; I'm not understanding the second
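For reference, the allocation-group count is explicit at mkfs time, which is where the concat-vs-raid0 distinction shows up; a sketch assuming the md linear device is /dev/md0:

    # lay out 8 allocation groups across the linear/concat device,
    # so XFS spreads independent allocations over the member disks
    mkfs.xfs -d agcount=8 /dev/md0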
2013 Apr 03
2
[bug] btrfs fi df doesn't show raid type after balance
Did something break? We are not reporting the raid type after balance.
-----------
# btrfs fi df /btrfs
Data, RAID0: total=2.00GB, used=2.03MB
Data: total=8.00MB, used=0.00
System, RAID0: total=16.00MB, used=4.00KB
System: total=4.00MB, used=0.00
Metadata, RAID0: total=2.00GB, used=216.00KB
Metadata: total=8.00MB, used=4.00KB
# btrfs bal /btrfs
Done, had to relocate 5 out of 5 chunks
# btrfs fi
2012 Feb 07
2
Understanding Default RAID Behavior
The wiki does not make clear why adding a second device defaults to RAID1 metadata and RAID0 data. I bought two SSDs with the intention of doing a BTRFS RAID0 for my root. What is the difference between forcing RAID0 on metadata and data as opposed to the default behavior? Can anyone clarify that? Thank you for your time, Mario
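Forcing both profiles up front looks like this (device names illustrative); the practical difference is that RAID1 metadata lets the filesystem structure survive a single-device failure, while RAID0 metadata means losing either disk destroys the whole filesystem, not just the striped data:

    mkfs.btrfs -d raid0 -m raid0 /dev/sda /dev/sdb
    btrfs fi df /    # afterwards, confirm which profiles are in use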
2011 Aug 08
7
“bio too big” regression and silent data corruption in 3.0
tl;dr version: 3.0 produces “bio too big” dmesg entries and silently corrupts data in “meta-raid1/data-single” configurations on disks with different max_hw_sectors, where 2.6.38 worked fine. tl;dr side-issue: on-line removal of partitions holding “single” data attempts to create raid0 (rather than single) block groups. If it can't get enough room for raid0 over all remaining disks, it
2012 Jan 17
8
[RFC][PATCH 1/2] Btrfs: try to allocate new chunks with degenerated profile
If there is no free space, the free space allocator will try to get space from the block group with the degenerated profile. For example, if there is no free space in the RAID1 block groups, the allocator will try to allocate space from the DUP block groups. Besides that, space reservation has similar behaviour: if there is not enough space in the space cache to reserve, it will reserve
2006 Sep 21
1
Software versus hardware RAID performance.
With the Dell OpenManage question on my mind (and having seen it answered very well), I was reminded of an interesting and a little surprising thing I saw yesterday. I upgraded a PowerEdge 2850 from CentOS 4 to Fedora Core 5 (keeping everything updated for GNUradio to run on CentOS 4 became more of a job than it should have) for our pulsar data processing machine (it has a GNUradio Universal