Displaying 20 results from an estimated 11000 matches similar to: "OT - small hd recommendation"
2012 Feb 07
1
Recommendations for busy static web server replacement
Hi all,
After being a silent reader for some time, and not very successful in getting
good performance out of our test set-up, I'm finally coming to the list with
some questions.
Right now we are operating a web server that serves 4 MB files for a
distributed computing project. Data is requested from all over the world at a
rate of about 650k to 800k downloads a day. Each data file is usually
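For a rough sense of what that request rate implies in sustained bandwidth, a quick back-of-the-envelope calculation (not from the original post):

  # 800,000 downloads/day of 4 MB each, spread over 86,400 seconds
  echo $(( 800000 * 4 / 86400 ))   # ~37 MB/s sustained, roughly 300 Mbit/s on average, with peaks well above that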
2019 Jan 30
3
C7, mdadm issues
On 30/01/19 16:49, Simon Matter wrote:
>> On 01/30/19 03:45, Alessandro Baggi wrote:
>>> On 29/01/19 20:42, mark wrote:
>>>> Alessandro Baggi wrote:
>>>>> On 29/01/19 18:47, mark wrote:
>>>>>> Alessandro Baggi wrote:
>>>>>>> On 29/01/19 15:03, mark wrote:
>>>>>>>
2017 Aug 18
4
Problem with softwareraid
Hello all,
I have already had a discussion on the software RAID mailing list and I
want to switch to this one :)
I am having a really strange problem with my md0 device on CentOS 7.
After a reboot of the server, md0 was gone. While trying to find the
problem I noticed the following:
booting any installed kernel gives me NO md0 device (ls /dev/md*
doesn't show anything). A 'cat
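A minimal first-pass diagnostic for a missing md device, assuming the member disks are still attached (device names here are examples, not taken from the post):

  cat /proc/mdstat                      # is the array listed at all, perhaps under another name such as md127?
  mdadm --examine /dev/sd[ab]1          # do the members still carry md superblocks?
  mdadm --assemble --scan --verbose     # try to assemble from the superblocks and report why it fails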
2009 Aug 06
10
RAID[56] status
If we've abandoned the idea of putting the number of redundant blocks
into the top bits of the type bitmask (and I hope we have), then we're
fairly much there. Current code is at:
git:// or http://git.infradead.org/users/dwmw2/btrfs-raid56.git
git:// or http://git.infradead.org/users/dwmw2/btrfs-progs-raid56.git
We have recovery working, as well as both full-stripe writes
2020 Sep 18
4
Drive failed in 4-drive md RAID 10
I got the email that a drive in my 4-drive RAID10 setup failed. What are my
options?
Drives are WD1000FYPS (Western Digital 1 TB 3.5" SATA).
mdadm.conf:
# mdadm.conf written out by anaconda
MAILADDR root
AUTO +imsm +1.x -all
ARRAY /dev/md/root level=raid10 num-devices=4
UUID=942f512e:2db8dc6c:71667abc:daf408c3
/proc/mdstat:
Personalities : [raid10]
md127 : active raid10 sdf1[2](F)
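The usual replacement sequence, sketched against the array and failed member shown in the /proc/mdstat excerpt above (the replacement device name is an assumption):

  mdadm /dev/md127 --fail /dev/sdf1      # already marked (F), but make sure it is flagged as failed
  mdadm /dev/md127 --remove /dev/sdf1    # drop it from the array
  # physically replace the disk, partition it to match the others, then:
  mdadm /dev/md127 --add /dev/sdg1       # add the new member and let the array resync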
2013 Feb 18
1
btrfs send & receive produces "Too many open files in system"
I believe what I am going to write is a bug report.
I finally ran
# btrfs send -v /mnt/adama-docs/backups/20130101-192722 | btrfs receive
/mnt/tmp/backups
to migrate a btrfs filesystem from one partition layout to another.
After a while the system kept saying "Too many open files in system"
and denied access to almost every command line tool. When I had access
to iostat I confirmed the
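One thing worth checking while the send/receive runs, assuming the system-wide file handle table really is what gets exhausted (an assumption, since the message is cut off here):

  cat /proc/sys/fs/file-nr          # allocated handles vs. the current fs.file-max limit
  sysctl -w fs.file-max=1000000     # temporarily raise the limit to see whether the error goes away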
2012 Mar 29
3
RAID-10 vs Nested (RAID-0 on 2x RAID-1s)
Greetings-
I'm about to embark on a new installation of CentOS 6 x64 on 4x SATA HDDs. The plan is to use RAID-10 as a nice combo between data security (RAID1) and speed (RAID0). However, I'm finding either a lack of raw information on the topic, or I'm having a mental issue preventing the osmosis of the implementation into my brain.
Option #1:
My understanding of RAID10 using 4
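For reference, a minimal sketch of the two candidate layouts with mdadm (the partition names /dev/sd[b-e]1 are assumptions):

  # Option 1: native md RAID10 across all four drives
  mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1

  # Option 2: nested layout, a RAID0 striped over two RAID1 pairs
  mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
  mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdd1 /dev/sde1
  mdadm --create /dev/md3 --level=0 --raid-devices=2 /dev/md1 /dev/md2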
2006 Nov 21
3
RAID benchmarks
We (a small college with about 3000 active accounts) are currently in
the process of moving from UW IMAP running on Linux to Dovecot running
on a cluster of 3 or 4 new, faster Linux machines (initially using
Perdition to split the load).
As we are building and designing the system, I'm attempting to take (or
find) benchmarks everywhere I can in order to make informed decisions
and so
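One generic way to benchmark candidate mail-store disks is a small-random-I/O run with fio, which roughly resembles maildir access patterns (a sketch with made-up paths and sizes, not a tool mentioned in the thread):

  fio --name=maildir-sim --directory=/srv/mailtest --rw=randrw --bs=4k \
      --size=1g --numjobs=4 --iodepth=8 --ioengine=libaio --direct=1 --group_reporting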
2012 Nov 13
1
mdX and mismatch_cnt when building an array
CentOS 6.3, x86_64.
I have noticed when building a new software RAID-6 array on CentOS 6.3
that the mismatch_cnt grows monotonically while the array is building:
# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md11 : active raid6 sdg[5] sdf[4] sde[3] sdd[2] sdc[1] sdb[0]
3904890880 blocks super 1.2 level 6, 512k chunk, algorithm 2 [6/6] [UUUUUU]
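Once the initial build has finished, a meaningful figure can be obtained by running an explicit check, which resets the counter and then reports what it actually found (md11 taken from the excerpt above):

  echo check > /sys/block/md11/md/sync_action     # start a read-only consistency check
  cat /proc/mdstat                                # watch the check progress
  cat /sys/block/md11/md/mismatch_cnt             # inspect the count after the check completes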
2013 Nov 24
3
The state of btrfs RAID6 as of kernel 3.13-rc1
Hi
What is the general state of btrfs RAID6 as of kernel 3.13-rc1 and the
latest btrfs tools?
More specifically:
- Is it able to correct errors during scrubs?
- Is it able to transparently handle disk failures without downtime?
- Is it possible to convert btrfs RAID10 to RAID6 without recreating the fs?
- Is it possible to add drives to, or remove drives from, a RAID6 array?
Regards,
Hans-Kristian
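For the conversion and add/remove questions above, the commands themselves look like this; whether they behave robustly on a given kernel is exactly what the poster is asking (device and mount point names are examples):

  btrfs device add /dev/sdg /mnt/pool                             # grow the array by one disk
  btrfs balance start -dconvert=raid6 -mconvert=raid6 /mnt/pool   # convert data and metadata profiles in place
  btrfs device delete /dev/sdc /mnt/pool                          # remove a disk again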
2013 Oct 04
1
btrfs raid0
How can I verify the read speed of a btrfs raid0 pair on Arch Linux?
I assume raid0 means striped activity in parallel, at least
similar to raid0 in mdadm.
How can I measure the btrfs read speed, since it is copy-on-write,
which is not the norm in mdadm raid0?
Perhaps I cannot use the same approach in btrfs to determine the
performance.
Secondly, I see a methodology for raid10 using
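A simple way to measure raw sequential read throughput that works the same for btrfs and mdadm volumes is to read a large file while bypassing the page cache (file and mount point names are assumptions):

  dd if=/mnt/btrfs-raid0/testfile of=/dev/null bs=1M iflag=direct   # sequential read, cache bypassed
  # or drop the caches first and read through the filesystem normally:
  echo 3 > /proc/sys/vm/drop_caches
  dd if=/mnt/btrfs-raid0/testfile of=/dev/null bs=1M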
2009 Dec 10
3
raid10, centos 4.x
I just created a 4-drive mdadm --level=raid10 array on a CentOS 4.8-ish system
here, and shortly thereafter remembered I hadn't updated it in a while,
so I ran yum update...
While installing/updating stuff, I got these errors:
Installing: kernel ####################### [14/69]
raid level raid10 (in /proc/mdstat) not recognized
...
Installing: kernel-smp
2014 Apr 07
3
Software RAID10 - which two disks can fail?
Hi All.
I have a server which uses RAID10 made of 4 partitions for / and boots from
it. It looks like so:
mdadm -D /dev/md1
/dev/md1:
Version : 00.90
Creation Time : Mon Apr 27 09:25:05 2009
Raid Level : raid10
Array Size : 973827968 (928.71 GiB 997.20 GB)
Used Dev Size : 486913984 (464.36 GiB 498.60 GB)
Raid Devices : 4
Total Devices : 4
Preferred Minor : 1
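Which pairs mirror each other can be read from the same mdadm -D output: with the default near=2 layout on four devices, raid devices 0 and 1 form one mirror pair and devices 2 and 3 the other, so the array survives losing one disk from each pair but not both disks of the same pair. A quick way to see the layout and the device numbering (md1 from the excerpt above):

  mdadm -D /dev/md1 | grep -E 'Layout|RaidDevice|/dev/'   # e.g. "Layout : near=2" plus the numbered member list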
2010 Sep 25
3
Raid 10 questions...2 drive
I have been reading lots of stuff, trying to find out if a 2-drive raid10
setup is any better/worse than a normal raid 1 setup. I have two 1 TB drives
for my data and a separate system drive, and I am only interested in doing
RAID on my data.
So I set up my initial test like this:
mdadm -v --create /dev/md0 --chunk 1024 --level=raid10 --raid-devices=2
/dev/sdb1 /dev/sdc1
I have also read
2013 Dec 09
3
Gluster infrastructure question
Heyho guys,
I have been running GlusterFS for years in a small environment without big
problems.
Now I'm going to use GlusterFS for a bigger cluster, but I have some
questions :)
Environment:
* 4 Servers
* 20 x 2TB HDD, each
* Raidcontroller
* Raid 10
* 4x bricks => Replicated, Distributed volume
* Gluster 3.4
1)
I'm wondering whether I can
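The question is cut off, but for the environment described above (4 servers, one RAID-10 brick each, a replicated-distributed volume on Gluster 3.4) the volume creation would look roughly like this (hostnames and brick paths are placeholders):

  gluster volume create gv0 replica 2 transport tcp \
      srv1:/export/brick1 srv2:/export/brick1 \
      srv3:/export/brick1 srv4:/export/brick1
  gluster volume start gv0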
2011 Apr 12
17
40TB File System Recommendations
Hello All,
I have a brand spanking new 40TB hardware RAID6 array to play around
with. I am looking for recommendations for which filesystem to use. I am
trying not to break this up into multiple filesystems, as we are going
to use it for backups. Other factors are performance and reliability.
CentOS 5.6
array is /dev/sdb
So here is what I have tried so far:
reiserfs is limited to 16TB
ext4
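XFS is the usual suggestion at this size; a minimal sketch, assuming xfsprogs is available on the box (the device name is from the post, the mount point is not):

  mkfs.xfs /dev/sdb                      # no 16 TB limit; optionally pass -d su=...,sw=... to match the RAID6 stripe geometry
  mount -o inode64 /dev/sdb /backup      # inode64 lets inodes be allocated across the whole 40 TB rather than only the low part of the device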
2007 May 01
2
Raid5 issues
So when I couldn't get the raid10 to work, I decided to do raid5.
Everything installed and looked good. I left it overnight to rebuild
the array, and when I came in this morning, everything was frozen. Upon
reboot, it said that 2 of the 4 devices for the raid5 array failed.
Luckily, I didn't have any data on it, but how do I know that the same
thing won't happen when I have
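When two members of a RAID5 drop out at once, it is often a controller, cable, or timeout hiccup rather than two genuinely dead disks. In that case a forced assembly can usually bring the array back, which is only a reasonable experiment here because there is no data on it yet (device names are assumptions):

  mdadm --stop /dev/md0
  mdadm --examine /dev/sd[b-e]1                      # compare the event counters on the members
  mdadm --assemble --force /dev/md0 /dev/sd[b-e]1    # force-assemble using the freshest members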
2007 Jun 10
1
mdadm Linux Raid 10: is it 0+1 or 1+0?
The relevance of this question can be found here:
http://aput.net/~jheiss/raid10/
I read the mdadm documentation but could not find a definitive answer.
I even read the raid10 module source, but I didn't find the answer there
either.
Does anyone here know?
Thank you!
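md's raid10 is its own layout rather than a literal nesting of levels, but with the default near=2 arrangement on four disks it behaves like a stripe over mirrored pairs, i.e. 1+0. The layout of an existing array can be checked directly:

  mdadm --detail /dev/md0 | grep Layout     # e.g. "Layout : near=2"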
2016 Dec 12
2
raid 6 on centos 7
I have 6 SATA HDDs of 2 TB each. I want to install CentOS 7 on these disks in RAID 6 mode.
How can I do it?
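The CentOS 7 installer can build the array itself: in Anaconda's custom partitioning, set the device type to RAID and the level to RAID6 (it may insist on keeping /boot on RAID1 or a plain partition). Done by hand from a shell, the equivalent array creation would look roughly like this (partition names are assumptions):

  mdadm --create /dev/md0 --level=6 --raid-devices=6 /dev/sd[a-f]1
  mkfs.xfs /dev/md0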
2010 Nov 14
3
RAID Resynch...??
So, still coming up to speed with mdadm: I noticed this morning that one of my
servers was acting sluggish, so when I looked at the mdadm RAID device I saw
this:
mdadm --detail /dev/md0
/dev/md0:
Version : 0.90
Creation Time : Mon Sep 27 22:47:44 2010
Raid Level : raid10
Array Size : 976759808 (931.51 GiB 1000.20 GB)
Used Dev Size : 976759808 (931.51 GiB 1000.20 GB)
Raid
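If the sluggishness is the resync itself competing with normal I/O, the resync bandwidth can be capped while it runs (the values below are examples; the current defaults live in the same sysctls):

  cat /proc/mdstat                            # confirm a resync/recovery is actually in progress
  sysctl -w dev.raid.speed_limit_max=20000    # cap resync at roughly 20 MB/s per device
  sysctl -w dev.raid.speed_limit_min=1000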