Displaying 20 results from an estimated 20 matches for "spead".
2018 Feb 22
3
Gluster performance / Dell Idrac enterprise conflict
...providing storage
for our RHEV environment. We've been having issues with inconsistent
performance from the VMs depending on which Hypervisor they are
running on. I've confirmed throughput to be ~9Gb/s to each of the
storage hosts from the hypervisors. I'm getting ~300MB/s disk read
spead when our test vm is on the slow Hypervisors and over 500 on the
faster ones. The performance doesn't seem to be affected much by the
cpu, memory that are in the hypervisors. I have tried a couple of
really old boxes and got over 500 MB/s. The common thread seems to be
that the poorly perfomi...
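As a point of reference, the kind of in-guest read check described above can be approximated with nothing more than a timed sequential read. The sketch below is illustrative only: the test file path and block size are assumptions, and the page cache should be cold (or the file larger than guest RAM) for the MB/s figure to reflect storage rather than cache.

import time

TEST_FILE = "/var/tmp/readtest.bin"   # hypothetical multi-gigabyte test file inside the VM
BLOCK = 1024 * 1024                   # read in 1 MiB chunks

def sequential_read_mb_s(path):
    # Time an unbuffered sequential read of the whole file and return MB/s.
    total = 0
    start = time.perf_counter()
    with open(path, "rb", buffering=0) as f:
        while True:
            chunk = f.read(BLOCK)
            if not chunk:
                break
            total += len(chunk)
    elapsed = time.perf_counter() - start
    return total / elapsed / (1024 * 1024)

if __name__ == "__main__":
    print(f"sequential read: {sequential_read_mb_s(TEST_FILE):.0f} MB/s")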
2006 Mar 28
12
Rails & PHP
Hi there - I wanted to know if anyone has used Rails & PHP on the same
production server and whether they've experienced any problems.
I'm looking to install Rails on our production server soon; however, I would
like to know if there are any issues I need to be aware of.
Many thanks,
Jared.
2018 Feb 26
2
Gluster performance / Dell Idrac enterprise conflict
...V environment. We've been having issues with inconsistent performance
> > from the VMs depending on which Hypervisor they are running on. I've
> > confirmed throughput to be ~9Gb/s to each of the storage hosts from the
> > hypervisors. I'm getting ~300MB/s disk read spead when our test vm is on
> > the slow Hypervisors and over 500 on the faster ones. The performance
> > doesn't seem to be affected much by the cpu, memory that are in the
> > hypervisors. I have tried a couple of really old boxes and got over 500
> > MB/s. The common th...
2009 Oct 20
7
Slow reads with ZFS+NFS
...Over a gig-e line, we're seeing ~30 MB/s reads on average - it doesn't seem to
matter if we're doing large numbers of small files or small numbers of large
files, the speed seems to top out there. We've disabled pre-fetching, which
may be having some effect on read speads, but that proved necessary due to severe
performance issues on database reads with it enabled. (Reading from the DB
with pre-fetching enabled was taking 4-5 times as long as with it
disabled.)
Write speed seems to be fine. Testing is showing ~95 MB/s, which seems
pretty decent considering there...
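One rough way to reproduce the "small files vs. large files" comparison mentioned above is to time a full read of each directory tree over the NFS mount. The sketch below is illustrative only; the directory paths and block size are assumptions.

import os
import time

SMALL_DIR = "/mnt/zfs/smallfiles"   # hypothetical tree of many small files
LARGE_DIR = "/mnt/zfs/largefiles"   # hypothetical tree of a few large files
BLOCK = 1024 * 1024

def read_tree_mb_s(root):
    # Read every file under root in 1 MiB chunks and return overall MB/s.
    total = 0
    start = time.perf_counter()
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            with open(os.path.join(dirpath, name), "rb", buffering=0) as f:
                while True:
                    chunk = f.read(BLOCK)
                    if not chunk:
                        break
                    total += len(chunk)
    elapsed = time.perf_counter() - start
    return total / elapsed / (1024 * 1024)

for label, path in (("small files", SMALL_DIR), ("large files", LARGE_DIR)):
    print(f"{label}: {read_tree_mb_s(path):.1f} MB/s")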
2018 Feb 22
0
Gluster performance / Dell Idrac enterprise conflict
...or our
> RHEV environment. We've been having issues with inconsistent performance
> from the VMs depending on which Hypervisor they are running on. I've
> confirmed throughput to be ~9Gb/s to each of the storage hosts from the
> hypervisors. I'm getting ~300MB/s disk read spead when our test vm is on
> the slow Hypervisors and over 500 on the faster ones. The performance
> doesn't seem to be affected much by the cpu, memory that are in the
> hypervisors. I have tried a couple of really old boxes and got over 500
> MB/s. The common thread seems to be tha...
2004 Jun 20
1
[LLVMdev] benchmarking LLVM
...ng from
> 3.98-4.58s. FWIW, these tests are on an Intel P4 Xeon 3.06GHz machine with
> HT enabled. In any case, it appears that we're slower than GCC on this one.
What is very funny is that the situation on my AMD is not similar
at all to what you state here. On my side, tests show a GREAT spead up
with ackerman, i.e. 2 to 3 times. Sorry, I can't access my
Linux box right now to get concrete values.
> If you
> wanted to dig into the test to see what is going wrong (is the LLVM code
> missing an optimization, or is it bad code generation?), that would be
> quite helpful.
i wi...
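For context, the Shootout "ackerman" test being compared here is essentially a timed evaluation of the Ackermann function. A minimal stand-in harness (written in Python purely for illustration, with an arbitrary N) looks like this:

import sys
import time

sys.setrecursionlimit(100000)  # naive Ackermann recursion gets deep quickly

def ack(m, n):
    # Classic two-argument Ackermann function used by the Shootout benchmark.
    if m == 0:
        return n + 1
    if n == 0:
        return ack(m - 1, 1)
    return ack(m - 1, ack(m, n - 1))

if __name__ == "__main__":
    n = 6  # the Shootout test computes Ack(3, N) for a small N
    start = time.perf_counter()
    result = ack(3, n)
    elapsed = time.perf_counter() - start
    print(f"Ack(3,{n}) = {result}  ({elapsed:.3f}s)")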
2018 Feb 26
0
Gluster performance / Dell Idrac enterprise conflict
...been having issues with inconsistent
>> > performance
>> > from the VMs depending on which Hypervisor they are running on. I've
>> > confirmed throughput to be ~9Gb/s to each of the storage hosts from the
>> > hypervisors. I'm getting ~300MB/s disk read spead when our test vm is
>> > on
>> > the slow Hypervisors and over 500 on the faster ones. The performance
>> > doesn't seem to be affected much by the cpu, memory that are in the
>> > hypervisors. I have tried a couple of really old boxes and got over 500
>...
2018 Feb 26
2
Gluster performance / Dell Idrac enterprise conflict
...nconsistent
> >> > performance
> >> > from the VMs depending on which Hypervisor they are running on. I've
> >> > confirmed throughput to be ~9Gb/s to each of the storage hosts from
> the
> >> > hypervisors. I'm getting ~300MB/s disk read spead when our test vm is
> >> > on
> >> > the slow Hypervisors and over 500 on the faster ones. The performance
> >> > doesn't seem to be affected much by the cpu, memory that are in the
> >> > hypervisors. I have tried a couple of really old boxes a...
2018 Feb 26
2
Gluster performance / Dell Idrac enterprise conflict
...>> > performance
>> >> > from the VMs depending on which Hypervisor they are running on. I've
>> >> > confirmed throughput to be ~9Gb/s to each of the storage hosts from
>> the
>> >> > hypervisors. I'm getting ~300MB/s disk read spead when our test vm
>> is
>> >> > on
>> >> > the slow Hypervisors and over 500 on the faster ones. The
>> performance
>> >> > doesn't seem to be affected much by the cpu, memory that are in the
>> >> > hypervisors. I have tr...
2018 Feb 26
0
Gluster performance / Dell Idrac enterprise conflict
...> performance
> >> > from the VMs depending on which Hypervisor they are running
> on. I've
> >> > confirmed throughput to be ~9Gb/s to each of the storage
> hosts from the
> >> > hypervisors. I'm getting ~300MB/s disk read spead when our
> test vm is
> >> > on
> >> > the slow Hypervisors and over 500 on the faster ones. The
> performance
> >> > doesn't seem to be affected much by the cpu, memory that are
> in the
> >> > hypervisors....
2018 Feb 27
0
Gluster performance / Dell Idrac enterprise conflict
...>>> >> > from the VMs depending on which Hypervisor they are running on.
>>> I've
>>> >> > confirmed throughput to be ~9Gb/s to each of the storage hosts from
>>> the
>>> >> > hypervisors. I'm getting ~300MB/s disk read spead when our test vm
>>> is
>>> >> > on
>>> >> > the slow Hypervisors and over 500 on the faster ones. The
>>> performance
>>> >> > doesn't seem to be affected much by the cpu, memory that are in the
>>> >> >...
2008 Nov 13
1
Error in Quantile function
If anyone can assist with this problem you have my great thanks:
I am trying to establish and plot confidence intervals on a bootstrapped
function. I have a more complicated function that has no problems with
determining the confidence intervals using the quantile command. This is
outside the bootstrap portion of the code, which is working fine; it is just
determining everything for the more
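The usual percentile approach for this - sort the bootstrap replicates of the statistic and read off the lower and upper quantiles - looks roughly like the following sketch. It is written in Python with a toy mean statistic, since the poster's actual R code is not shown.

import random
import statistics

random.seed(1)
data = [random.gauss(10, 2) for _ in range(200)]   # stand-in for the real sample

def bootstrap_ci(sample, stat=statistics.mean, reps=2000, alpha=0.05):
    # Percentile bootstrap confidence interval for an arbitrary statistic.
    n = len(sample)
    estimates = sorted(
        stat([random.choice(sample) for _ in range(n)]) for _ in range(reps)
    )
    lo = estimates[int((alpha / 2) * reps)]          # ~2.5th percentile
    hi = estimates[int((1 - alpha / 2) * reps) - 1]  # ~97.5th percentile
    return lo, hi

print("95% CI for the mean:", bootstrap_ci(data))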
2004 Dec 09
1
EDD error RE: Re: SYSLINUX 2.12-pre7 released
...high drive
>number (typically 9Fh or so); MS-DOS will normally ignore this since it
>asks the BIOS how many drives there are and ignores the rest.
>Does the error message come from UDMA2.SYS or...?
> -hpa
Yes, it comes from UDMA2.SYS; it does a series of initialisation tests (read spead, for example, and VDS).
I don't know the behaviour when booting to MS-DOS on a hard disk, only Isolinux --> A: and Isolinux --> Memdisk --> Disk image
Bernd
http://fdos.org/ripcord/beta9sr1/other/misc/Udma2_16.zip
output on screen (source code in UDMA2.ASM, lines 855 and 736???)
UDMA2 Disk...
2005 Aug 09
0
Console Auto-Completion Lockup
An interesting bug...
It may be more wide-spead than this, and there may be other ways to
reproduce it, but this is how I can produce the problem:
At the console, I type "iax2 show peer jc" and press tab to auto-complete
the peer "jcallen" that is usually registered. But sometimes (perhaps when
jcallen is not registered),...
2018 Feb 16
0
Fwd: gluster performance
...providing storage for our RHEV environment. We've been having issues with inconsistent performance from the VMs depending on which Hypervisor they are running on. I've confirmed throughput to be ~9Gb/s to each of the storage hosts from the hypervisors. I'm getting ~300MB/s disk read spead when our test vm is on the slow Hypervisors and over 500 on the faster ones. The performance doesn't seem to be affected much by the cpu, memory that are in the hypervisors. I have tried a couple of really old boxes and got over 500 MB/s. The common thread seems to be that the poorly perfomi...
2018 Feb 27
1
Gluster performance / Dell Idrac enterprise conflict
...> > from the VMs depending on which Hypervisor they are running on.
>>>> I've
>>>> >> > confirmed throughput to be ~9Gb/s to each of the storage hosts
>>>> from the
>>>> >> > hypervisors. I'm getting ~300MB/s disk read spead when our test
>>>> vm is
>>>> >> > on
>>>> >> > the slow Hypervisors and over 500 on the faster ones. The
>>>> performance
>>>> >> > doesn't seem to be affected much by the cpu, memory that are in the
>...
2013 Sep 04
4
Linux tool to check random I/O performance
We just purchased a new I/O card and would like to check its I/O performance.
For a sequential I/O performance test we can use "hdparm -t /dev/xxx",
but for a random I/O performance test, which Linux command can I use?
** our environment does NOT allow installing third-party software.
Thanks
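Within the "no third-party software" constraint, one option is a small random-read loop using only the Python standard library. The sketch below is illustrative: the test file path, block size, and iteration count are assumptions, and without dropping caches repeated runs will partly measure the page cache rather than the card.

import os
import random
import time

TEST_FILE = "/var/tmp/iotest.bin"   # hypothetical large test file on the new card
BLOCK = 4096                        # 4 KiB random reads
ITERATIONS = 20000

size = os.path.getsize(TEST_FILE)
fd = os.open(TEST_FILE, os.O_RDONLY)
try:
    start = time.perf_counter()
    for _ in range(ITERATIONS):
        offset = random.randrange(0, size - BLOCK)
        os.pread(fd, BLOCK, offset)   # read BLOCK bytes at a random offset
    elapsed = time.perf_counter() - start
finally:
    os.close(fd)

print(f"{ITERATIONS / elapsed:.0f} random {BLOCK}-byte reads/s "
      f"({ITERATIONS * BLOCK / elapsed / 1024 / 1024:.1f} MB/s)")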
2004 Jun 19
0
[LLVMdev] benchmarking LLVM
On Sat, 19 Jun 2004, Valery A. Khamenya wrote:
> I took a look at the LLVM benchmarks from the nightly tester and
> ran the Shootout tests on my own. Below are just a few observations.
>
> 1. Results on my AMD AthlonXP and on the Xeon used by the LLVM
> team are sometimes different. In particular, both Shootout
> and Shootout-C++ show a great speed-up with LLVM (in
>
2007 Jun 19
4
Speed up R
Dear R Users,
I hope that there is someone who has experience with the problem that I
describe below and will help me.
I must buy a new desktop computer and I'm wondering which processor to choose
if my only aim is to speed up R. I would like to reduce simulation time -
sometimes it takes days. I'm considering buying one of these (I'm working under
Win XP 32-bit):
1. Intel Core2 Duo E6700
2004 Jun 19
2
[LLVMdev] benchmarking LLVM
Hi all
I took a look at the LLVM benchmarks from the nightly tester and
ran the Shootout tests on my own. Below are just a few observations.
1. Results on my AMD AthlonXP and on the Xeon used by the LLVM
team are sometimes different. In particular, both Shootout
and Shootout-C++ show a great speed-up with LLVM (in
comparison to GCC) on the ackerman test on my AthlonXP.
But here: