Displaying 5 results from an estimated 5 matches for "105mb".
2012 Nov 08
2
Help Read File With Odd Characters
I have a large (105MB) data file, tab-delimited with a header. There are
some odd characters at the beginning of the file that are preventing it
from being read by R.
> dfTemp = read.delim(filename)
Error in make.names(col.names, unique = TRUE) :
invalid multibyte string at '<ff><fe>m'
When I...
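The '<ff><fe>' bytes at the start of the file are a UTF-16 little-endian byte-order mark, so the file is UTF-16 encoded rather than plain ASCII. A minimal sketch of one fix, converting the file to UTF-8 in the shell before reading it (filenames here are hypothetical):
# Convert the UTF-16LE file to UTF-8 so read.delim() can parse the header
iconv -f UTF-16LE -t UTF-8 data.txt > data-utf8.txt
Alternatively, read.delim() accepts a fileEncoding argument (e.g. fileEncoding = "UTF-16LE") to do the conversion inside R.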
2003 Jun 05
0
Summary of hanging HP ProLiant
...a
"reasonable" level. (CPU at the time of the hang would be about 90-95% idle,
and load would go from about 40 to 100.)
We then changed the 5i config to present each disk as a RAID 0 device
(no processing on the card now) and used Linux RAID to do the mirroring
and RAID 5. We now get about 100-105MB/s throughput to the SW RAID 5,
and no more apparent hangs.
If anyone is using a SmartArray device, you may want to experiment with
SW raid instead.
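A minimal sketch of the software-RAID layout described above, assuming the controller exposes each disk as a single-drive RAID 0 volume and using hypothetical device names:
# Build the RAID 5 array in software across four controller-exposed disks
# (device names are hypothetical)
mdadm --create /dev/md0 --level=5 --raid-devices=4 \
    /dev/sda /dev/sdb /dev/sdc /dev/sdd
# The mirroring described above would be the same command with --level=1
# and two devices.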
Dan Liebster
Adecco
2012 Aug 28
2
Is glusterfs ready?
Hey,
Since Red Hat took control of GlusterFS, I've been looking to convert our old independent RAID storage servers to several non-RAID glustered ones.
The thing is, here and there I have heard a few frightening stories from some users (even with the latest release).
Has anyone used it long enough to say whether one can blindly trust it, or is it almost there but not yet ready?
Thx,
JD
2012 Feb 07
1
Recommendations for busy static web server replacement
...k on its own, i.e.
gluster volume create test-volume replica 2 transport tcp \
  $(for i in b c d e f g h i j k l m; do
      for n in 1 2; do echo -n "gluster0$n:/data-$i "; done
    done)
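For reference, the subshell above expands to a brick list in which each data directory appears on both servers back to back, so with replica 2 every pair is mirrored across gluster01 and gluster02:
gluster01:/data-b gluster02:/data-b gluster01:/data-c gluster02:/data-c ...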
As expected, the overhead here is larger. Initial file creation started slowly
at 45MB/s and peaked around 105MB/s; 50 clients saw a total bandwidth of about
41MB/s.
(3) I ran other tests across all 10 backend bricks, but never went beyond
~80MB/s when using a 4-disk RAID 0 over 10 servers.
All tests were run with glusterfs 3.2.5 on Debian Squeeze, md software RAID,
XFS file system, "default" sett...
2008 Feb 15
38
Performance with Sun StorageTek 2540
Under Solaris 10 on a 4 core Sun Ultra 40 with 20GB RAM, I am setting
up a Sun StorageTek 2540 with 12 300GB 15K RPM SAS drives and
connected via load-shared 4Gbit FC links. This week I have tried many
different configurations, using firmware managed RAID, ZFS managed
RAID, and with the controller cache enabled or disabled.
My objective is to obtain the best single-file write performance.
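For a single-stream write target like this, a ZFS pool of striped two-way mirrors is a common starting point, since it avoids raidz parity computation on the write path. A minimal sketch, using hypothetical Solaris device names rather than the actual 2540 LUNs:
# Twelve drives as six two-way mirrors; a single file's writes stripe
# across all six vdevs (device names are hypothetical)
zpool create tank \
    mirror c2t0d0 c2t1d0 \
    mirror c2t2d0 c2t3d0 \
    mirror c2t4d0 c2t5d0 \
    mirror c3t0d0 c3t1d0 \
    mirror c3t2d0 c3t3d0 \
    mirror c3t4d0 c3t5d0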