Displaying 6 results from an estimated 9 matches for "rhod".
2007 Aug 20 · 3 replies · RAID storage - SATA, SCSI, or Fibre Channel?
I have a Dell PowerEdge 2950 and am looking to add more storage. I know a lot
of factors can go into the type of answer given, but for present and future
technology planning, should I look for a rack of SATA, SCSI, or fibre channel
drives? Maybe I'm dating myself with fibre channel, and possibly SCSI?
I may be looking to add a few TB now, and possibly more later.
What are people
2006 May 31 · 3 replies · Database can't connect -
I'm sorry guys. I'm really still such a newbie at this. I've been having
trouble getting my Rails apps to run on Dreamhost, and I think it might
be a problem with the database connection. Anyway, I read on Code
Snippets that I should do this:
[rhod]$ ruby -Iconfig -renvironment -e "p ActiveRecord::Base.connection"
to find out if my database.yml file is connecting properly. This is the
result I got:
/usr/lib/ruby/gems/1.8/gems/activerecord-1.14.2/lib/active_record/connection_adapters/mysql_adapter.rb:331:in
`real_connect'...
2014 Mar 13 · 3 replies · [PATCH net V2] vhost: net: switch to use data copy if pending DMAs exceed the limit
On 03/10/2014 04:03 PM, Michael S. Tsirkin wrote:
> On Fri, Mar 07, 2014 at 01:28:27PM +0800, Jason Wang wrote:
>> We used to stop the handling of tx when the number of pending DMAs
>> exceeds VHOST_MAX_PEND. This is used to reduce the memory occupation
>> of both host and guest. But it was too aggressive in some cases, since
>> any delay or blocking
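
For context, here is a minimal C sketch of the tx-path decision described above: the old code stopped handling tx outright once pending zero-copy DMAs reached VHOST_MAX_PEND, while the patch keeps going and falls back to a data copy. Only VHOST_MAX_PEND comes from vhost-net; the struct and helper names here are hypothetical, not the actual drivers/vhost/net.c code.

/* Hypothetical sketch, not the actual vhost-net code. */
#define VHOST_MAX_PEND 128

struct tx_zcopy_state {
        int upend_idx;          /* next slot for a pending zero-copy send */
        int done_idx;           /* first slot not yet completed */
};

static int pending_dmas(const struct tx_zcopy_state *s)
{
        return s->upend_idx - s->done_idx;
}

/* Old behavior: stop handling tx entirely until completions drain,
 * which stalls the whole queue behind one slow completion. */
static int tx_should_stop(const struct tx_zcopy_state *s)
{
        return pending_dmas(s) >= VHOST_MAX_PEND;
}

/* Patched behavior: keep handling tx, but send this packet by data
 * copy instead of zero copy once the pending limit is reached. */
static int tx_should_copy(const struct tx_zcopy_state *s)
{
        return pending_dmas(s) >= VHOST_MAX_PEND;
}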
2013 Nov 13 · 1 reply · [PATCH net-next 4/4] virtio-net: auto-tune mergeable rx buffer size for improved performance
On 11/13/2013 04:19 PM, Eric Dumazet wrote:
> On Wed, 2013-11-13 at 10:47 +0200, Ronen Hod wrote:
>
>> I looked at how ewma works, and although it is computationally efficient
>> and does what it is supposed to do, initially (at the first samples) it is strongly
>> biased towards the value added at the first ewma_add.
>> I suggest that you print the
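
The bias Ronen points out falls out of how the kernel's EWMA (lib/average.c at the time) seeds itself: the first ewma_add stores the sample as the average verbatim, and each later sample shifts it by only 1/weight. Below is a standalone C demo of that seeding logic; note this sketch takes log2 arguments directly, unlike the kernel's power-of-two ewma_init parameters.

#include <stdio.h>

struct ewma {
        unsigned long internal;
        unsigned long factor;   /* log2 of fixed-point scaling factor */
        unsigned long weight;   /* log2 of smoothing weight */
};

static void ewma_init(struct ewma *avg, unsigned long factor_log2,
                      unsigned long weight_log2)
{
        avg->internal = 0;
        avg->factor = factor_log2;
        avg->weight = weight_log2;
}

static void ewma_add(struct ewma *avg, unsigned long val)
{
        /* First sample becomes the average verbatim; afterwards each
         * sample contributes only 1/weight of the new average. */
        avg->internal = avg->internal ?
                (((avg->internal << avg->weight) - avg->internal) +
                 (val << avg->factor)) >> avg->weight :
                (val << avg->factor);
}

static unsigned long ewma_read(const struct ewma *avg)
{
        return avg->internal >> avg->factor;
}

int main(void)
{
        struct ewma avg;
        int i;

        ewma_init(&avg, 10, 6);         /* factor 1024, weight 64 */
        ewma_add(&avg, 10000);          /* outlier first sample */
        for (i = 0; i < 8; i++)
                ewma_add(&avg, 100);    /* typical samples */
        /* Prints roughly 8800: still dominated by the first sample. */
        printf("ewma after 8 samples of 100: %lu\n", ewma_read(&avg));
        return 0;
}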
2014 Mar 17 · 0 replies · [PATCH net V2] vhost: net: switch to use data copy if pending DMAs exceed the limit
On 03/13/2014 09:28 AM, Jason Wang wrote:
> On 03/10/2014 04:03 PM, Michael S. Tsirkin wrote:
>> On Fri, Mar 07, 2014 at 01:28:27PM +0800, Jason Wang wrote:
>>> We used to stop the handling of tx when the number of pending DMAs
>>> exceeds VHOST_MAX_PEND. This is used to reduce the memory occupation
>>> of both host and guest. But it was too
2013 Nov 13 · 2 replies · [PATCH net-next 4/4] virtio-net: auto-tune mergeable rx buffer size for improved performance
On 11/13/2013 12:21 AM, Michael Dalton wrote:
> Commit 2613af0ed18a ("virtio_net: migrate mergeable rx buffers to page frag
> allocators") changed the mergeable receive buffer size from PAGE_SIZE to
> MTU-size, introducing a single-stream regression for benchmarks with large
> average packet size. There is no single optimal buffer size for all workloads.
> For workloads
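
A rough C sketch of the auto-tuning idea under discussion: keep a per-receive-queue running average of packet length and clamp it between an MTU-sized minimum and PAGE_SIZE when sizing the next buffer. The names and constants below are assumptions for illustration, not the patch's actual code.

/* Illustrative only; not the virtio-net driver's code. */
#define GOOD_PACKET_LEN 1536U           /* roughly MTU-sized lower bound */
#define PAGE_SIZE       4096U           /* upper bound: one page */

struct rq_len_avg {
        unsigned long avg_pkt_len;      /* running average packet length */
};

/* Update the running average on every received packet (weight 64). */
static void rq_record_pkt_len(struct rq_len_avg *rq, unsigned int len)
{
        if (rq->avg_pkt_len == 0)
                rq->avg_pkt_len = len;  /* seed with the first sample */
        else
                rq->avg_pkt_len = (rq->avg_pkt_len * 63 + len) / 64;
}

/* Pick the size of the next mergeable receive buffer. */
static unsigned int rq_mergeable_buf_len(const struct rq_len_avg *rq)
{
        unsigned long len = rq->avg_pkt_len;

        if (len < GOOD_PACKET_LEN)
                return GOOD_PACKET_LEN;
        if (len > PAGE_SIZE)
                return PAGE_SIZE;
        return (unsigned int)len;
}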