Displaying 20 results from an estimated 4000 matches similar to: "btrfs raid0"
2012 Jan 17
8
[RFC][PATCH 1/2] Btrfs: try to allocate new chunks with degenerated profile
If there is no free space, the free space allocator will try to get space from
the block group with the degenerated profile. For example, if there is no free
space in the RAID1 block groups, the allocator will try to allocate space from
the DUP block groups. Besides that, the space reservation has similar
behaviour: if there is not enough space in the space cache to reserve, it will
reserve
2017 Oct 17
1
lvconvert(split) - raid10 => raid0
hi guys, gals
do you know if conversion from lvm's raid10 to raid0 is
possible?
I'm fiddling with --splitmirrors but it gets me nowhere.
On the "takeover" subject the man page says: "..between
striped/raid0 and raid10." but gives no details; nowhere could I
find documentation, nor a howto.
many thanks, L.
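For what it's worth, on lvm2 releases that implement raid takeover (documented in lvmraid(7)), the direct route would be a single takeover conversion. A minimal sketch, with hypothetical VG/LV names and assuming a healthy raid10 LV:
  lvconvert --type raid0 vg0/mylv    # takeover: raid10 -> raid0
  lvs -o name,segtype vg0/mylv       # verify the new segment type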
2013 Jan 30
8
RAID 0 across SSD and HDD
I've been unable to find anything definitive about what happens if I use
RAID0 to join an SSD and HDD together with respect to performance
(latency, throughput). The future is obvious (hot data tracking, using
most appropriate device for the data, data migration).
In my specific case I have a 250GB SSD and a 500GB HDD, and about 250GB of
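For comparison, a minimal sketch of the two layouts in question (device names hypothetical): in broad terms -d raid0 stripes every large extent across both devices, so throughput tends to be gated by the slower HDD, while -d single merely concatenates the space:
  mkfs.btrfs -d raid0 -m raid1 /dev/sda /dev/sdb    # striped data, mirrored metadata
  mkfs.btrfs -d single -m raid1 /dev/sda /dev/sdb   # concatenated data instead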
2009 Nov 27
5
unexpected raid1 behavior?
Hi, I'm starting to play with btrfs on my new computer. I'm running Gentoo and
have compiled the 2.6.31 kernel, enabling btrfs.
Now I have 2 partitions (on 2 different sata disks) that are free for me to
play with, each about 375 GB in size. I wanted to create a "raid1" volume
using these two partitions, so I did:
# mkfs.btrfs -d raid1 /dev/sda5 /dev/sdb5
# mount
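A sketch of how the result can be verified afterwards, assuming a hypothetical mount point /mnt and reasonably recent btrfs-progs:
  btrfs device scan            # let the kernel discover both members
  mount /dev/sda5 /mnt
  btrfs filesystem df /mnt     # the Data line should report the raid1 profile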
2011 Oct 23
2
Subvolume level allocation policy
Is it (yet?) possible to manipulate the allocation policy on a
subvolume level instead of the fs level? For example, to make / use
raid1, and /home use raid0? Or to have / allocated from an SSD and
/home allocated from the giant 2 TB HD.
2020 Nov 05
3
BIOS RAID0 and differences between disks
My computer running CentOS 7 is configured to use BIOS RAID0 and has two identical SSDs, which are also encrypted. I had a crash the other day and, due to a bug in the operating system update, I am unable to boot the system in RAID mode since dracut does not recognize the disks in grub. After modifying the grub command line I am able to boot the system from one of the hard disks after entering the
2015 Feb 16
4
Centos 7.0 and mismatched swap file
On Mon, Feb 16, 2015 at 6:47 AM, Eliezer Croitoru <eliezer at ngtech.co.il> wrote:
> I am unsure I understand what you wrote.
> "XFS will create multiple AG's across all of those
> devices,"
> Are you comparing md linear/concat to md raid0? And that the upper-level XFS
> will run on top of them?
Yes to the first question, I'm not understanding the second
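To make the comparison concrete, a sketch of the linear/concat variant under discussion (device names hypothetical): md joins the members end to end, and XFS then distributes its allocation groups across the combined device:
  mdadm --create /dev/md0 --level=linear --raid-devices=2 /dev/sdb1 /dev/sdc1
  mkfs.xfs -d agcount=8 /dev/md0    # AGs end up spread over both members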
2012 May 03
2
How are files stored when using Btrfs on multiple devices? What happens when a device fails?
Hi, I have some questions about using Btrfs on multiple devices:
1. Will a large file always be stored wholly on one device, or may it
spread across several devices/partitions? Does Btrfs have an option to
specify this explicitly?
2. Suppose I have a directory tree like this:
Dir_1
|--> file_1A
|--> file_1B
|--> Dir_2
     |--> file_2C
     |--> file_2D
If Dir_2, file_2C on a failed device,
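In broad terms the answer depends on the data profile chosen at mkfs time, not on the directory layout; as far as I know there is no per-file placement option. A sketch with hypothetical devices:
  mkfs.btrfs -d single /dev/sdb /dev/sdc   # extents land on either device; a big file may span both
  mkfs.btrfs -d raid1 /dev/sdb /dev/sdc    # every extent mirrored, so one failed device is survivable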
2012 May 07
53
kernel 3.3.4 damages filesystem (?)
Hello,
"never change a running system" ...
For some months I ran btrfs under kernels 3.2.5 and 3.2.9, without
problems.
Yesterday I compiled kernel 3.3.4, and this morning I started the
machine with this kernel. There may be some ugly problems.
Copying something into the btrfs "directory" worked well for some files,
and then I got error messages (I've not
2020 Nov 12
1
BIOS RAID0 and differences between disks
On 11/04/2020 10:21 PM, John Pierce wrote:
> is it RAID 0 (striped) or raid1 (mirrored) ??
>
> if you wrote on half of a raid0 stripe set, you basically trashed it.
> blocks are striped across both drives, so like 16k on the first disk, then
> 16k on the 2nd then 16k back on the first, repeat (replace 16k with
> whatever your raid stripe size is).
>
> if it's a raid 1
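For illustration, the mapping described above can be computed directly; the values here are hypothetical:
  # which disk holds byte offset O in a 2-disk raid0 with 16 KiB stripes?
  O=1000000; STRIPE=16384; NDISKS=2
  echo $(( (O / STRIPE) % NDISKS ))   # prints 1: chunk 61 is odd, so the second disk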
2020 Nov 05
1
BIOS RAID0 and differences between disks
> On Nov 4, 2020, at 9:21 PM, John Pierce <jhn.pierce at gmail.com> wrote:
>
> is it RAID 0 (striped) or raid1 (mirrored) ??
>
> if you wrote on half of a raid0 stripe set, you basically trashed it.
> blocks are striped across both drives, so like 16k on the first disk, then
> 16k on the 2nd then 16k back on the first, repeat (replace 16k with
> whatever your raid
2013 Nov 24
3
The state of btrfs RAID6 as of kernel 3.13-rc1
Hi
What is the general state of btrfs RAID6 as of kernel 3.13-rc1 and the
latest btrfs tools?
More specifically:
- Is it able to correct errors during scrubs?
- Is it able to transparently handle disk failures without downtime?
- Is it possible to convert btrfs RAID10 to RAID6 without recreating the fs?
- Is it possible to add/remove drives to a RAID6 array?
Regards,
Hans-Kristian
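For reference, the conversion and reshape operations asked about map onto balance filters and the device subcommands. A sketch assuming a filesystem mounted at /mnt and a kernel that supports the raid6 profile:
  btrfs balance start -dconvert=raid6 -mconvert=raid6 /mnt   # raid10 -> raid6 in place
  btrfs device add /dev/sde /mnt                             # grow the array
  btrfs device delete /dev/sdb /mnt                          # shrink it again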
2012 May 06
4
btrfs-raid10 <-> btrfs-raid1 confusion
Greetings,
until yesterday I was running a btrfs filesystem across two 2.0 TiB
disks in RAID1 mode for both metadata and data without any problems.
As space was getting short, I wanted to extend the filesystem with two
additional drives I had lying around, both 1.0 TiB in size.
Knowing little about the btrfs RAID implementation I thought I had to
switch to RAID10 mode, which I was told is
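For what it's worth, btrfs raid1 is not limited to two devices: each chunk is simply mirrored on some pair of devices in the pool, so the simpler route is usually to stay on raid1. A sketch with hypothetical device names and mount point:
  btrfs device add /dev/sdc /dev/sdd /mnt   # add the two 1.0 TiB drives
  btrfs balance start /mnt                  # respread existing chunks over all four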
2009 Sep 08
4
Can ZFS simply concatenate LUNs (eg no RAID0)?
Hi,
I do have a disk array that is providing striped LUNs to my Solaris box. Hence I'd like to simply concat those LUNs without adding another layer of striping.
Is this possibile with ZFS?
As far as I understood, if I use
zpool create myPool lun-1 lun-2 ... lun-n
I will get a RAID0 striping where each data block is split across all "n" LUNs.
If that's
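As a point of clarification (from general ZFS behaviour, not this thread): each top-level vdev is an independent allocation target, and ZFS spreads whole blocks across vdevs dynamically rather than cutting every block n ways; there is no dedicated concat vdev type. The distribution can be observed with:
  zpool create myPool lun-1 lun-2
  zpool iostat -v myPool    # shows how allocations fall per vdev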
2011 Apr 09
16
wrong values in "df" and "btrfs filesystem df"
Hello, linux-btrfs,
First I create an array of 2 disks with
mkfs.btrfs -d raid0 -m raid1 /dev/sdb1 /dev/sdd1
and mount it at /srv/MM.
Then I fill it with about 1.6 TByte.
And then I add /dev/sde1 via
btrfs device add /dev/sde1 /srv/MM
btrfs filesystem balance /srv/MM
(it ran about 20 hours)
Then I work on it, copy some new files, delete some old files - all
works well. Only
df
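On kernels and tools recent enough to support balance convert filters (an assumption; they postdate some 2011-era releases), the restripe can also be requested explicitly:
  btrfs balance start -dconvert=raid0 -mconvert=raid1 /srv/MM   # respread data over all three devices
  btrfs filesystem df /srv/MM                                   # check the per-profile totals afterwards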
2014 Apr 07
3
Software RAID10 - which two disks can fail?
Hi All.
I have a server which uses RAID10 made of 4 partitions for / and boots from
it. It looks like so:
mdadm -D /dev/md1
/dev/md1:
Version : 00.90
Creation Time : Mon Apr 27 09:25:05 2009
Raid Level : raid10
Array Size : 973827968 (928.71 GiB 997.20 GB)
Used Dev Size : 486913984 (464.36 GiB 498.60 GB)
Raid Devices : 4
Total Devices : 4
Preferred Minor : 1
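For the common case the answer is: with the default near=2 layout, consecutive raid devices form mirror pairs (roles 0+1 and 2+3 here), so the array survives losing one disk from each pair but not both disks of the same pair. The layout line confirms which case applies:
  mdadm -D /dev/md1 | grep Layout    # "near=2" is the raid10 default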
2011 May 05
1
Converting 1-drive ext4 to 4-drive raid10 btrfs
Hello!
I have a 1 TB ext4 drive that's quite full (~50 GB free space, though I
could free up another 100 GB or so if necessary) and two empty 0.5 TB
drives.
Is it possible to get another 1 TB drive and combine the four drives into
a btrfs raid10 setup without (if all goes well) losing my data?
Regards,
Paul
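One possible migration sketch, assuming a fourth 1 TB drive is bought, hypothetical device names (/dev/sdb is the ext4 disk, mounted at /data), and an independent backup all the same:
  mkfs.btrfs -d raid1 -m raid1 /dev/sdc /dev/sdd /dev/sde    # start on the three empty drives
  mount /dev/sdc /mnt && cp -a /data/. /mnt/                 # copy everything off the ext4 disk
  umount /data                                               # retire the ext4 filesystem
  btrfs device add -f /dev/sdb /mnt                          # -f: overwrite the old ext4 signature
  btrfs balance start -dconvert=raid10 -mconvert=raid10 /mnt # convert to raid10 on four devices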
2013 Jan 03
33
Option LABEL
Hello, linux-btrfs,
please delete the option "-L" (for labelling) in "mkfs.btrfs"; in some
configurations it doesn't work as expected.
My usual way:
mkfs.btrfs -d raid0 -m raid1 /dev/sdb /dev/sdc /dev/sdd ...
One call for several devices.
When I add the option "-L mylabel", each device gets the same label,
and therefore some other programs
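A workaround sketch: label the filesystem once after mounting, instead of at mkfs time (mount point hypothetical, current btrfs-progs assumed):
  mkfs.btrfs -d raid0 -m raid1 /dev/sdb /dev/sdc /dev/sdd
  mount /dev/sdb /mnt
  btrfs filesystem label /mnt mylabel   # one label for the whole filesystem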
2013 Aug 19
1
LVM RAID0 and SSD discards/TRIM
I'm trying to work out the kinks of a proprietary, old, and clunky
application that runs on CentOS. One of its main problems is that it
writes image sequences extremely non-linearly and in several passes,
using many CPUs, so the sequences get very fragmented.
The obvious solution to this seems to be to use SSDs for its output, and
some scripts that will pick up and copy out the sequences
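On the discard side, a sketch of the usual knobs (paths hypothetical): LVM itself only issues discards when LVs are removed or reduced, so steady-state TRIM has to come from the filesystem layer:
  # /etc/lvm/lvm.conf, devices section:
  #     issue_discards = 1
  fstrim -v /mnt/ssd-output    # or mount the filesystem with -o discard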
2010 Oct 14
2
Metadata size
I'm a little concerned about the size of my metadata. I'm doing
raid10 on both data and metadata, and:
hrm@vlad:mnt $ sudo btrfs fi df /mnt
Data: total=488.01GB, used=487.23GB
Metadata: total=3.01GB, used=677.73MB
System: total=11.88MB, used=52.00KB
hrm@vlad:mnt $ find /mnt | wc -l
20137
By my calculations, that's something on the order of 17.5K per
filesystem
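Filling in the arithmetic: 677.73 MB of metadata across 20137 files is roughly 34 KiB each as reported; discounting the RAID10 duplication, which is presumably what the poster did, gives about 17 KiB, in line with the quoted 17.5K figure.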