similar to: Areca Raid 6 ARC-1231 Raid 6 Slow LS Listing Performance on large directory

Displaying 20 results from an estimated 1000 matches similar to: "Areca Raid 6 ARC-1231 Raid 6 Slow LS Listing Performance on large directory"

2013 Aug 27
6
Suggest changing dirhash defaults for FreeBSD 9.2.
I have been experimenting with dirhash settings, and have scoured the internet for other people's experience with it. (I found the performance improvement in compiling has forestalled the need to add an SSD drive. ;-) I believe that increasing the following values tenfold would benefit most FreeBSD users without disadvantage. vfs.ufs.dirhash_maxmem: 2097152 to 20971520
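As a sketch of how that tuning would be applied (the sysctl name and value are from the post; persisting it in /etc/sysctl.conf is the usual convention, not something the post specifies):

    # raise the UFS dirhash memory cap at runtime
    sysctl vfs.ufs.dirhash_maxmem=20971520
    # persist it across reboots
    echo 'vfs.ufs.dirhash_maxmem=20971520' >> /etc/sysctl.conf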
2015 Jul 05
1
7.1 install with Areca arc-1224
I must be doing something horribly wrong and I hope somebody can help. The Areca arc-1224 is not supported by the Areca driver included in 7.1, so I have to supply that when starting the install. The documentation provided by Areca and the Red Hat install guide say the same thing: put the driver on an accessible medium, then append inst.dd to the boot command, choose the driver, and now the
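For reference, the procedure being described amounts to a boot line along these lines (the label and file name below are illustrative placeholders, not from the post):

    # bare inst.dd makes anaconda prompt for a driver disk
    vmlinuz ... inst.dd
    # or point it straight at a driver update image
    vmlinuz ... inst.dd=hd:LABEL=DRIVERS:/arcmsr.rpm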
2016 Sep 23
1
OT: Areca ARC-1220 compatible with SATA III (6Gb/s) drives?
Running a C6 fileserver. Want to replace 7-year-old HDs connected to an Areca ARC-1220 SATA II (3Gb/s) RAID controller. Has anyone used this controller with newer 2TB SATA III (6Gb/s) WD Re drives like the WD2000FYYZ or the WD2004FBYZ?
2015 Jul 05
2
7.1 install with Areca arc-1224
On 07/05/15, Gordon Messmer wrote: anaconda will try to delete an rpm file if it gets an IOError. Your media may be corrupt. Check that first. ----- Above quoted ----- No such luck. On the system where I'm doing the install, I used dd to read the entire DVD and also copied every .rpm to /dev/null and didn't get any I/O errors. What next?
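The two read checks described would look roughly like this (the device name and mount point are assumptions):

    # read the entire disc front to back
    dd if=/dev/sr0 of=/dev/null bs=2M
    # stream every package to /dev/null, watching for I/O errors
    find /mnt/dvd -name '*.rpm' -exec cat {} \; > /dev/null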
2015 Jul 07
0
7.1 install with Areca arc-1224
On 07/06/15 18:06, C Linus Hicks wrote: > On 07/06/15, g wrote: >> you might try verifying that the system you are getting the error message on >> has a good cd/dvd drive. >> >> burn another dvd at least 4 speeds slower. >> if runs ok, bad dvd. >> if still fails, bad drive. >> another way you can check is to pull the iso on the system you are
2017 Jan 20
0
CentOS 7 and Areca ARC-1883I SAS controller: JBOD or not to JBOD?
> The disks I am going to use are 6TB Seagate Enterprise ST6000NM0034 > 7200rpm SAS/12Gbit 128 MB Sorry to hear that; in my experience the Seagate brand has the shortest MTBF of any disk I have ever used... > If hardware RAID is preferred, the controller's cache could be updated > to 4GB and I wonder how much performance gain this would give me? Lots, especially with slower
2017 Jan 20
0
CentOS 7 and Areca ARC-1883I SAS controller: JBOD or not to JBOD?
> This is why before configuring and installing everything you may want to > attach drives one at a time, and upon boot take a note which physical > drive number the controller has for that drive, and definitely label it so > you will know which drive to pull when drive failure is reported. Sorry Valeri, that only works if you're the only guy in the org. In reality, you cannot
2017 Jan 21
0
CentOS 7 and Areca ARC-1883I SAS controller: JBOD or not to JBOD?
On 2017-01-20, Valeri Galtsev <galtsev at kicp.uchicago.edu> wrote: > > Hm, not certain what process you describe. Most of my controllers are > 3ware and LSI; I just pull the failed drive (and I know the failed physical drive > number), put a good one in its place, and the rebuild starts right away. I know for sure that LSI's storcli utility supports an identify operation, which (if the
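For the record, the storcli identify operation amounts to blinking the slot's locate LED; the controller/enclosure/slot numbers below are placeholders:

    # blink the locate LED on controller 0, enclosure 32, slot 4
    storcli /c0/e32/s4 start locate
    # stop blinking once the drive has been found
    storcli /c0/e32/s4 stop locate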
2017 Jan 21
1
CentOS 7 and Areca ARC-1883I SAS controller: JBOD or not to JBOD?
On Sat, January 21, 2017 12:16 am, Keith Keller wrote: > On 2017-01-20, Valeri Galtsev <galtsev at kicp.uchicago.edu> wrote: >> >> Hm, not certain what process you describe. Most of my controllers are >> 3ware and LSI; I just pull the failed drive (and I know the failed physical drive >> number), put a good one in its place, and the rebuild starts right away. > > I
2015 Jul 06
2
7.1 install with Areca arc-1224
On 07/05/15, Gordon Messmer wrote: That's not the same as checking the media for corruption. You may be able to read all of the files, but if the data is corrupt, rpm may throw an IOError. So, the next thing to do is check your media. The DVD should offer to do that first when you boot from it. ------ Above quoted -------- Booted the DVD again, took the default. It got to 76.2% then
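That boot-time check is the installer's media verification pass; it can also be forced from the boot line or run by hand against the disc (the device name is an assumption):

    # force the media check when booting the installer
    vmlinuz ... rd.live.check
    # or verify the implanted md5 directly from a running system
    checkisomd5 /dev/sr0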
2008 Nov 18
3
High %system load.
Hello. Got a strange problem with high "%system" load and very slow user-level programs (apache+php+mysql). gstat shows 1.5-4% hard disk busy load, but the system shows about 20-30% system load while user load is at most 5%. vmstat shows from 2 to 35 processes in the "b" state. Now using 7.0-RELEASE-p5, but the same problem occurred with 7.0-RELEASE. I have no idea what to do about this.
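For anyone reproducing this, the FreeBSD diagnostics named in the post would be run along these lines (the flags shown are one reasonable choice, not from the post):

    # per-disk busy percentage, refreshed every second
    gstat -I 1s
    # processes blocked on I/O appear in the "b" column
    vmstat -w 1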
2015 Jul 05
1
7.1 install with Areca arc-1224
On 07/05/2015 09:17 AM, linush at verizon.net wrote: > Someone please tell me what I did to screw this thing up so badly. On 07/05/15, Gordon Messmer<gordon.messmer at gmail.com> wrote: Have you looked at the log files in /mnt/sysimage/root/? ------------- Quoting broken in this mailer ------------ So I looked in /mnt/sysimage/var/log/anaconda and found this in anaconda.packaging.log:
2015 Jul 06
2
7.1 install with Areca arc-1224
On 07/06/15, g wrote: you might try verifying that the system you are getting the error message on has a good cd/dvd drive. burn another dvd at least 4 speeds slower. if runs ok, bad dvd. if still fails, bad drive. another way you can check is to pull the iso on the system you are having the problem with and burn a dvd. if you get an error, get a new drive. -------- Above quoted -------- When I md5sum the DVD
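One caveat with md5summing a burned disc: drives often return padding past the end of the ISO, so a common workaround is to read back exactly the ISO's length before checksumming (the device and file names are placeholders):

    # checksum of the ISO image itself
    md5sum CentOS-7-x86_64-DVD.iso
    # read the disc back limited to the ISO's size, then checksum
    dd if=/dev/sr0 bs=2048 count=$(( $(stat -c%s CentOS-7-x86_64-DVD.iso) / 2048 )) | md5sum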
2017 Jan 21
0
CentOS 7 and Areca ARC-1883I SAS controller: JBOD or not to JBOD?
Hi Valeri, Before you pull a drive you should check to make sure that doing so won't kill the whole array. MegaCli can help you prevent a storage disaster and can give you more insight into your RAID: the status of the virtual disks and of the disks that make up each array. MegaCli will let you see the health and status of each drive. Does it have media errors, is it in predictive
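The MegaCli queries being described are roughly these (the adapter number is a placeholder):

    # per-drive state: media errors, predictive-failure counts, etc.
    MegaCli -PDList -aALL
    # health of the virtual disks on adapter 0
    MegaCli -LDInfo -Lall -a0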
2017 Jan 20
2
CentOS 7 and Areca ARC-1883I SAS controller: JBOD or not to JBOD?
On Fri, January 20, 2017 12:59 pm, Joseph L. Casale wrote: >> The disks I am going to use are 6TB Seagate Enterprise ST6000NM0034 >> 7200rpm SAS/12Gbit 128 MB > > Sorry to hear that; in my experience the Seagate brand has the shortest > MTBF of any disk I have ever used... > >> If hardware RAID is preferred, the controller's cache could be updated >> to
2017 Jan 20
6
CentOS 7 and Areca ARC-1883I SAS controller: JBOD or not to JBOD?
Hi, Does anyone have experience with the ARC-1883I SAS controller under CentOS 7? I am planning a RAID1 setup and I am wondering if I should use the controller's RAID functionality, which has a 2GB cache, or go with JBOD + Linux software RAID. The disks I am going to use are 6TB Seagate Enterprise ST6000NM0034 7200rpm SAS/12Gbit 128 MB. If hardware RAID is preferred, the
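For comparison, the software-RAID half of that question is a plain mdadm mirror, sketched here with placeholder device names:

    # create a two-disk RAID1 array from whole disks
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
    # watch the initial resync
    cat /proc/mdstat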
2017 Jan 21
1
CentOS 7 and Areca ARC-1883I SAS controller: JBOD or not to JBOD?
On Fri, January 20, 2017 7:00 pm, Cameron Smith wrote: > Hi Valeri, > > > Before you pull a drive you should check to make sure that doing so > won't kill the whole array. Wow! What did I say to make you treat me as an ultimate idiot!? ;-) All my comments, at least in my own reading, were about things you need to do to make sure that when you hot-unplug a bad drive it is indeed failed
2017 Jan 20
4
CentOS 7 and Areca ARC-1883I SAS controller: JBOD or not to JBOD?
On Fri, January 20, 2017 5:16 pm, Joseph L. Casale wrote: >> This is why before configuring and installing everything you may want to >> attach drives one at a time, and upon boot take a note which physical >> drive number the controller has for that drive, and definitely label it so >> you will know which drive to pull when drive failure is reported. > >
2008 Jun 06
3
6.2-STABLE => 7.0-STABLE Upgrade root partition more full
I successfully did my first FreeBSD upgrade yesterday after looking at the manual, cross-referencing with Google searches, and getting help from our network engineer here at CWU. Before the upgrade, running df showed:

Filesystem   1K-blocks  Used   Avail   Capacity  Mounted on
/dev/da0s1a  507630     77662  389358  17%       /
devfs        1          1      0       100%      /dev
/dev/da0s1e
2020 Mar 30
3
Multithreaded encoding?
I am interested in being able to encode a single Opus stream using several CPU cores. I get a raw audio input, and "opusenc" can transcode it at 1200% speed (Raspberry Pi 3B+). It saturates a single CPU core, but the other three are idle. Is there any project out there to add multithreading options to "opusenc", or something along those lines? Looking around, I have found this:
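Absent a multithreaded opusenc, the usual workaround for batch work (though not for a single live stream, which is what the post asks about) is to run one encoder per core; file names below are placeholders:

    # run up to four opusenc processes in parallel
    # (outputs end up named *.wav.opus)
    ls *.wav | xargs -P 4 -I {} opusenc {} {}.opus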