similar to: DomU vs Dom0 performance.

Displaying 20 results from an estimated 1100 matches similar to: "DomU vs Dom0 performance."

2005 Mar 03
14
Serious performance issues
Hi. I have a Shuttle box with an AMD Athlon XP 2200+ and 1GB of RAM. I'm normally running it with Debian sarge/sid and kernel 2.6.10-1-k7, as built by Debian. I want to use Xen on it. I built a xen0 kernel that is as close to the Debian kernel as I could get (no power management, no HPET timers, broken ISA drivers disabled), disabled /lib/tls, and booted with the new kernel. Everything works.
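For reference, "disabled /lib/tls" in Xen setups of that era usually meant moving the directory aside so glibc stops loading the TLS-optimized libraries; the destination name below is only the common convention, not something stated in the post:
# mv /lib/tls /lib/tls.disabled
Moving the directory back restores the original behaviour.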
2009 Aug 25
5
uninitialized constant
API -> hello_message_api.rb:
class HelloMessageApi < ActionWebService::API::Base
  api_method :hello_message, :expects => [{:firstname => :string}, {:lastname => :string}], :returns => [:string]
end
Controller ->
class HelloMessageController < ApplicationController
  web_service_api HelloMessageApi
  web_service_dispatching_mode :direct
  wsdl_service_name
2017 Jun 13
2
Transport Endpoint Not connected while running sysbench on Gluster Volume
I'm having a hard time trying to get a gluster volume up and running. I have set up other gluster volumes on other systems without many problems, but this one is killing me. The gluster volume was created with the command:
gluster volume create mariadb_gluster_volume laeft-dccdb01p:/export/mariadb/brick
I had to lower frame-timeout since the system would become unresponsive until the frame failed
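The post doesn't show how the timeout was lowered; a minimal sketch of doing it with the gluster CLI, where the option name is the stock network.frame-timeout and the 600-second value is purely an assumption:
# gluster volume set mariadb_gluster_volume network.frame-timeout 600
# gluster volume info mariadb_gluster_volume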
2017 Jun 15
1
Transport Endpoint Not connected while running sysbench on Gluster Volume
<re-added gluster-users, it looks like it was dropped from your email>
----- Original Message -----
From: "Julio Guevara" <julioguevara150 at gmail.com>
To: "Ben Turner" <bturner at redhat.com>
Sent: Thursday, June 15, 2017 5:52:26 PM
Subject: Re: [Gluster-users] Transport Endpoint Not connected while running sysbench on Gluster Volume
2013 Nov 19
6
[PATCH] Btrfs: fix very slow inode eviction and fs unmount
The inode eviction can be very slow, because during eviction we tell the VFS to truncate all of the inode's pages. This results in calls to btrfs_invalidatepage(), which in turn calls lock_extent_bits() and clear_extent_bit(). These calls result in too many merges and splits of extent_state structures, which consume a lot of time and CPU when the inode has many pages. In some
2012 Dec 03
17
[PATCH 0 of 3] xen: sched_credit: fix tickling and add some tracing
Hello, This small series deals with some weirdness in the mechanism with which the credit scheduler chooses which PCPU to tickle upon a VCPU wake-up. Details are available in the changelog of the first patch. The new approach has been extensively benchmarked and proved itself either beneficial or harmless. That means it does not introduce any significant amount of overhead and/or performance
2020 Jun 14
4
very low performance of Xen guests
Hello. For the past months I've been testing upgrading my Xen hosts to CentOS 7 and I face an issue for which I need your help to solve. The testing machines are IBM blades, model H21 and H21XM. Initial tests were performed on the H21 with 16 GB RAM; during the last 6-7 weeks I've been using the H21XM with 64 GB. In all cases the guests were fully updated CentOS 7 --
2012 Apr 17
2
Kernel bug in BTRFS (kernel 3.3.0)
Hi, Doing some extensive benchmarks on BTRFS, I encountered a kernel bug in BTRFS (as reported in dmesg). Maybe the information below can help you make btrfs better. Situation: doing an intensive sequential write on a SAS 3TB disk drive (SEAGATE ST33000652SS) with 128 threads with Sysbench. Device is connected through an HBA. Blocksize was 256k; Kernel is 3.3.0 (x86_64); Btrfs is version
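The message lists the workload parameters but not the full command line; a sketch of a sysbench 0.4-style invocation matching them, where the 64G total file size and the prepare step are assumptions:
# sysbench --num-threads=128 --test=fileio --file-block-size=256K --file-total-size=64G --file-test-mode=seqwr prepare
# sysbench --num-threads=128 --test=fileio --file-block-size=256K --file-total-size=64G --file-test-mode=seqwr run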
2009 Mar 05
1
[PATCH] OCFS2: Pagecache usage optimization on OCFS2
Hi. I introduced an "is_partially_uptodate" aops for OCFS2. A page can have multiple buffers, and even if a page is not uptodate, some buffers can be uptodate in a pagesize != blocksize environment. This aops checks that all buffers corresponding to the part of the file we want to read are uptodate. If so, we do not have to issue an actual read IO to the HDD even if the page is not uptodate
2020 Jun 15
1
very low performance of Xen guests
On 6/15/20 2:46 PM, Stephen John Smoogen wrote:
> On Sun, 14 Jun 2020 at 14:49, Manuel Wolfshant <wolfy at nobugconsulting.ro> wrote:
> > Hello
> > For the past months I've been testing upgrading my Xen hosts to CentOS 7 and I face an issue for which I need your help to solve.
2010 May 05
6
Benchmark Disk IO
What is the best way to benchmark disk IO? I'm looking to move one of my servers, which is rather IO intensive, but not without first benchmarking the current and new disk arrays, to make sure this isn't a complete waste of time. thanks
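A minimal sketch of one way to do this with sysbench's fileio test, consistent with the sysbench invocations in the other threads here; the thread count, file size, and run length are assumptions:
# sysbench --num-threads=16 --test=fileio --file-total-size=8G --file-test-mode=rndrw prepare
# sysbench --num-threads=16 --test=fileio --file-total-size=8G --file-test-mode=rndrw --max-time=300 --max-requests=0 run
# sysbench --test=fileio --file-total-size=8G cleanup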
2009 May 25
1
Update rails
I can't update Rails to version 2.2.2; I have 2.1.0. What should I do? I use the command gem update rails -v 2.2.2. I also first tried gem install -v= 2.2.2 rails; it takes a lot of time but doesn't complete. -- Posted via http://www.ruby-forum.com/.
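For reference, the usual RubyGems syntax for installing a specific version is a separate -v flag; the commands below only restate the version from the post and then list what is installed:
# gem install rails -v 2.2.2
# gem list rails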
2017 Feb 14
6
[PATCH v2 0/3] x86/vdso: Add Hyper-V TSC page clocksource support
Hi, while we're still waiting for a definitive ACK from Microsoft that the algorithm is good for the SMP case (as we can't prevent the code in vdso from migrating between CPUs), I'd like to send v2 with some modifications to keep the discussion going. Changes since v1: - Document the TSC page reading protocol [Thomas Gleixner]. - Separate the TSC page reading code from
2009 Jun 08
1
[PATCH] Btrfs: fdatasync should skip metadata writeout
Hi. In btrfs, fdatasync and fsync are identical. I think fdatasync should skip committing the transaction when inode->i_state has only I_DIRTY_SYNC set, which indicates only atime and/or mtime updates. The following patch improves fdatasync throughput.
#sysbench --num-threads=16 --max-requests=10000 --test=fileio --file-block-size=4K --file-total-size=16G --file-test-mode=rndwr
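For context, a fileio run like the one above is normally preceded by a prepare step, and sysbench can be told to issue fdatasync instead of fsync; the --file-fsync-mode flag below is an assumption about how the fdatasync path was exercised, not something taken from the patch:
# sysbench --num-threads=16 --test=fileio --file-block-size=4K --file-total-size=16G --file-test-mode=rndwr prepare
# sysbench --num-threads=16 --max-requests=10000 --test=fileio --file-block-size=4K --file-total-size=16G --file-test-mode=rndwr --file-fsync-mode=fdatasync run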
2018 Jan 05
2
Intel Flaw
How does the latest Intel flaw relate to CentOS 6.x systems that run under VirtualBox hosted on Windows 7 computers? Given the virtual machine's degree of separation from the hardware, can this issue actually be detected and exploited in the operating systems that run virtually? If there is a slowdown associated with the fix, how much might it impact the virtual systems?
2023 Aug 29
2
[PATCH] virtio_balloon: Fix endless deflation and inflation on arm64
The deflation request to the target, which isn't aligned to the guest page size, causes endless deflation and inflation actions. For example, we receive flooding QMP events for the changes to the memory balloon's size after a deflation request to the unaligned target is sent for the ARM64 guest, where we have 64KB base page size. /home/gavin/sandbox/qemu.main/build/qemu-system-aarch64
2003 Dec 08
3
Strange variable chopping from AGI's
AGIs are resulting in unusual behaviors. Can someone please tell me if this is my inappropriate use of AGIs, inappropriate use of Time::HiRes, or a bug with *. I call this script twice:
#!/usr/bin/perl
use Time::HiRes qw( gettimeofday );
($seconds, $microseconds) = gettimeofday;
$hirestime = sprintf("%s", "$seconds$microseconds");
print "SET VARIABLE
2023 Aug 31
2
[PATCH v2] virtio_balloon: Fix endless deflation and inflation on arm64
The deflation request to the target, which isn't aligned to the guest page size, causes endless deflation and inflation actions. For example, we receive flooding QMP events for the changes to the memory balloon's size after a deflation request to the unaligned target is sent for the ARM64 guest, where we have 64KB base page size. /home/gavin/sandbox/qemu.main/build/qemu-system-aarch64
2023 Aug 30
1
[PATCH] virtio_balloon: Fix endless deflation and inflation on arm64
On 29.08.23 03:54, Gavin Shan wrote:
> The deflation request to the target, which isn't aligned to the guest page size, causes endless deflation and inflation actions. For example, we receive flooding QMP events for the changes to the memory balloon's size after a deflation request to the unaligned target is sent for the ARM64 guest, where we have 64KB base page