Displaying 4 results from an estimated 4 matches for "4x2tb".
2011 Dec 28
3
Btrfs: blocked for more than 120 seconds, made worse by 3.2 rc7
...n machine 1 for the time being. Machine 2 has not crashed yet
after 200 GB of writes, and I am still testing that.
Machine 1: btrfs on a 6 TB sparse file, mounted as loop, on an XFS
filesystem that sits on a 10 TB md RAID 5. Mount options:
compress=zlib,compress-force
Machine 2: btrfs over md RAID 5 (4x2TB) = 5.5 TB filesystem. Mount options:
compress=zlib,compress-force
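Machine 1's layered setup (btrfs in a loop-mounted sparse file on XFS) can be reproduced with a few commands. This is a sketch only; the paths and loop device are illustrative, and it assumes root and an existing XFS mount at /mnt/xfs:

```shell
# Create a 6 TB sparse backing file on the XFS filesystem
# (truncate allocates no real blocks until data is written).
truncate -s 6T /mnt/xfs/btrfs.img

# Attach the file to a loop device and format it as btrfs.
losetup /dev/loop0 /mnt/xfs/btrfs.img
mkfs.btrfs /dev/loop0

# Mount with the compression options from the report.
mkdir -p /mnt/btrfs
mount -o compress=zlib,compress-force /dev/loop0 /mnt/btrfs
```

Because the backing file is sparse, the 6 TB image can live on a smaller filesystem until writes fill it, which is likely why it exceeds what a single underlying device would hold.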
pastebins:
machine1:
3.2rc7 http://pastebin.com/u583G7jK
3.2rc6 http://pastebin.com/L12TDaXa
machine2:
3.2rc6 http://pastebin.com/khD0wGXx
3.2rc7 (not crashed yet)
2011 Feb 05
12
ZFS Newbie question
I've spent a few hours reading through the forums and wiki and honestly my head is spinning. I have been trying to study up on either buying or building a box that would allow me to add drives of varying sizes/speeds/brands (adding more later etc.) and still be able to use the full space of the drives (minus parity? [not sure if I got the terminology right]) with redundancy. I have found the "all in
2014 Mar 17
1
Slow RAID resync
OK, today's problem.
I have an HP N54L MicroServer running CentOS 6.5.
In this box I have a 3x2TB disk RAID 5 array, which I am in the
process of extending to a 4x2TB RAID 5 array.
I've added the new disk --> mdadm --add /dev/md0 /dev/sdb
And grown the array --> mdadm --grow /dev/md0 --raid-devices=4
Now the problem: the resync speed is very slow; it refuses to rise
above 5 MB/s, and in general it sits at 4 MB/s.
From looking at Glances it would appear that writi...
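Not part of the original post, but a common first check for a slow md resync or reshape is the kernel's rebuild speed limits, which default to a 1 MB/s minimum per device and can throttle a rebuild when other I/O is present:

```shell
# Current limits, in KB/s per device (defaults: 1000 and 200000).
cat /proc/sys/dev/raid/speed_limit_min
cat /proc/sys/dev/raid/speed_limit_max

# Raise the floor so the reshape gets more bandwidth (needs root).
echo 50000 > /proc/sys/dev/raid/speed_limit_min

# Watch progress.
watch cat /proc/mdstat
```

A reshape to more devices also rewrites every stripe in place, so even with generous limits it is much slower than a plain resync.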
2011 Jun 24
1
How long should resize2fs take?
...S %CPU %MEM TIME+ COMMAND
32719 root 20 0 785m 768m 792 R 98 20.0 2443:18 resize2fs
$ sudo resize2fs /dev/mapper/data-data 3000G
resize2fs 1.41.11 (14-Mar-2010)
Resizing the filesystem on /dev/mapper/data-data to 786432000 (4k) blocks.
Time passes. :D
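The block count lines up: resize2fs interprets 3000G as 3000 GiB, and at 4 KiB per block that works out to exactly the 786432000 blocks it reports:

```shell
# 3000 GiB expressed in 4 KiB filesystem blocks.
echo $((3000 * 1024 * 1024 * 1024 / 4096))
```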
It's an LVM comprising 4x2TB disks in RAID10 and 4x500GB disks in RAID10.
$ cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid10] [raid6]
[raid5] [raid4]
md1 : active raid10 sdi1[0] sdg1[1] sdf1[3] sdh1[2]
976767872 blocks 64K chunks 2 near-copies [4/4] [UUUU]
md0 : active raid10 sda1[2] sdc1[3] sdb1...