Displaying 20 results from an estimated 200 matches similar to: "very low performance of Xen guests"
2020 Jun 15
1
very low performance of Xen guests
On 6/15/20 2:46 PM, Stephen John Smoogen wrote:
>
>
> On Sun, 14 Jun 2020 at 14:49, Manuel Wolfshant
> <wolfy at nobugconsulting.ro> wrote:
>
> Hello
>
>
> For the past few months I've been testing an upgrade of my Xen hosts
> to CentOS 7, and I am facing an issue that I need your help to solve.
>
>
2020 Jun 15
0
very low performance of Xen guests
On Sun, 14 Jun 2020 at 14:49, Manuel Wolfshant <wolfy at nobugconsulting.ro>
wrote:
> Hello
>
>
> For the past few months I've been testing an upgrade of my Xen hosts to CentOS
> 7, and I am facing an issue that I need your help to solve.
>
> The testing machines are IBM blades, models H21 and H21XM. Initial
> tests were performed on the H21 with 16 GB RAM; during
2017 Jun 13
2
Transport Endpoint Not connected while running sysbench on Gluster Volume
I'm having a hard time trying to get a gluster volume up and running. I
have set up other gluster volumes on other systems without many problems, but
this one is killing me.
The gluster vol was created with the command:
gluster volume create mariadb_gluster_volume
laeft-dccdb01p:/export/mariadb/brick
I had to lower frame-timeout since the system would become unresponsive
until the frame failed
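For illustration only (not taken from the thread), a volume of this shape can be created and its frame timeout lowered roughly as follows; the 600-second value is just an example:

# Create and start the single-brick volume named in the post.
gluster volume create mariadb_gluster_volume \
    laeft-dccdb01p:/export/mariadb/brick
gluster volume start mariadb_gluster_volume

# Lower the RPC frame timeout (default 1800 seconds); 600 is an example value.
gluster volume set mariadb_gluster_volume network.frame-timeout 600

# Verify the setting.
gluster volume get mariadb_gluster_volume network.frame-timeout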
2017 Jun 15
1
Transport Endpoint Not connected while running sysbench on Gluster Volume
<re-added gluster-users; it looks like it was dropped from your email>
----- Original Message -----
> From: "Julio Guevara" <julioguevara150 at gmail.com>
> To: "Ben Turner" <bturner at redhat.com>
> Sent: Thursday, June 15, 2017 5:52:26 PM
> Subject: Re: [Gluster-users] Transport Endpoint Not connected while running sysbench on Gluster Volume
2013 Sep 29
9
DomU vs Dom0 performance.
Hi,
I have been doing some disk IO benchmarking of dom0 and domU (HVM). I ran
into an issue where domU
performed better than dom0. So I ran a few experiments to check if it is
just disk IO performance.
I have archlinux (kernel 3.5.0 + Xen 4.2.2) installed on an Intel Core
i7 Q720 machine. I have also installed
archlinux (kernel 3.5.0) in a domU running on this machine. The domU runs
with 8 vcpus.
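As a rough illustration of the kind of comparison described above (not the poster's exact commands), the same sysbench fileio workload can be prepared and run in both dom0 and the domU; the file size, test mode, and runtime below are assumed values:

# Prepare the test files, run a sequential-read workload, then clean up;
# repeat the identical run in dom0 and in the HVM domU and compare throughput.
sysbench --test=fileio --file-total-size=4G prepare
sysbench --test=fileio --file-total-size=4G --file-test-mode=seqrd \
         --num-threads=8 --max-time=120 --max-requests=0 run
sysbench --test=fileio --file-total-size=4G cleanup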
2013 Nov 19
6
[PATCH] Btrfs: fix very slow inode eviction and fs unmount
The inode eviction can be very slow, because during eviction we
tell the VFS to truncate all of the inode's pages. This results
in calls to btrfs_invalidatepage() which in turn does calls to
lock_extent_bits() and clear_extent_bit(). These calls result in
too many merges and splits of extent_state structures, which
consume a lot of time and cpu when the inode has many pages. In
some
2012 Dec 03
17
[PATCH 0 of 3] xen: sched_credit: fix tickling and add some tracing
Hello,
This small series deals with some weirdness in the mechanism with which the
credit scheduler chooses which PCPU to tickle upon a VCPU wake-up. Details are
available in the changelog of the first patch.
The new approach has been extensively benchmarked and proved itself either
beneficial or harmless. That means it does not introduce any significant amount
of overhead and/or performances
2012 Apr 17
2
Kernel bug in BTRFS (kernel 3.3.0)
Hi,
Doing some extensive benchmarks on BTRFS, I encountered a kernel bug
in BTRFS (as reported in dmesg).
Maybe the information below can help you make btrfs better.
Situation:
Doing an intensive sequential write on a SAS 3TB disk drive (SEAGATE
ST33000652SS) with 128 threads using Sysbench.
The device is connected through an HBA. Blocksize was 256k; kernel is
3.3.0 (x86_64); Btrfs is version
2009 Mar 05
1
[PATCH] OCFS2: Pagecache usage optimization on OCFS2
Hi.
I introduced "is_partially_uptodate" aops for OCFS2.
A page can have multiple buffers, and even if a page is not uptodate, some buffers
can be uptodate in a pagesize != blocksize environment.
This aops checks that all buffers which correspond to the part of a file
that we want to read are uptodate. If so, we do not have to issue an actual
read IO to the HDD even if the page is not uptodate
2010 May 05
6
Benchmark Disk IO
What is the best way to benchmark disk IO?
I'm looking to move one of my servers, which is rather IO intensive, but
not without first benchmarking the current and new disk arrays, to make
sure this isn't a complete waste of time.
thanks
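One common approach (an illustrative sketch, not a recommendation from the thread) is to run the same fio job against the current and the new array; every parameter below is an assumed example value:

# Random-read test; point --directory at the array under test and repeat on both.
fio --name=randread --directory=/mnt/testarray --rw=randread \
    --bs=4k --size=4G --numjobs=4 --iodepth=16 --direct=1 \
    --ioengine=libaio --runtime=120 --time_based --group_reporting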
2020 Jun 18
0
very low performance of Xen guests
On 6/15/20 5:40 PM, Stephen John Smoogen wrote:
>
>
> On Mon, 15 Jun 2020 at 09:42, Manuel Wolfshant
> <wolfy at nobugconsulting.ro> wrote:
>
> On 6/15/20 2:46 PM, Stephen John Smoogen wrote:
>
> I got inspired by Adi's earlier suggestion and after reading
> https://access.redhat.com/articles/3311301
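The article referenced above discusses the CPU vulnerability mitigations and their performance cost. As a hedged illustration (assuming a CentOS 7 kernel recent enough to expose the sysfs entries), their status can be checked like this:

# Show which mitigations the running kernel reports as active.
grep . /sys/devices/system/cpu/vulnerabilities/*

# Boot parameters of the sort the article covers, for temporarily disabling
# mitigations while measuring their cost (exact names vary by kernel version):
#   nopti spectre_v2=off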
2008 Sep 09
1
creating table of averages
Dear Colleagues,
I have a dataframe with variables:
[1] "ID" "category" "a11" "a12"
"a13" "a21"
[7] "a22" "a23" "a31" "a32"
"b11" "b12"
[13] "b13" "b21"
2017 Feb 14
6
[PATCH v2 0/3] x86/vdso: Add Hyper-V TSC page clocksource support
Hi,
while we're still waiting for a definitive ACK from Microsoft that the
algorithm is good for the SMP case (as we can't prevent the code in vdso from
migrating between CPUs) I'd like to send v2 with some modifications to keep
the discussion going.
Changes since v1:
- Document the TSC page reading protocol [Thomas Gleixner].
- Separate the TSC page reading code from
2009 Jun 08
1
[PATCH] Btrfs: fdatasync should skip metadata writeout
Hi.
In btrfs, fdatasync and fsync are identical.
I think fdatasync should skip committing the transaction when
inode->i_state is set to just I_DIRTY_SYNC, which indicates
only atime and/or mtime updates.
The following patch improves fdatasync throughput.
#sysbench --num-threads=16 --max-requests=10000 --test=fileio
--file-block-size=4K --file-total-size=16G --file-test-mode=rndwr
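As an illustrative variant (not the poster's exact command), sysbench's fileio test can be told to issue fdatasync instead of fsync via --file-fsync-mode; the flags below mirror the snippet, and the rest is an assumption:

# Random-write fileio run that calls fdatasync rather than fsync
# (run "sysbench --test=fileio --file-total-size=16G prepare" first).
sysbench --num-threads=16 --max-requests=10000 --test=fileio \
         --file-block-size=4K --file-total-size=16G \
         --file-test-mode=rndwr --file-fsync-mode=fdatasync run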
2008 Nov 26
1
Request for Assistance in R with NonMem
Hi
I am having some problems running a covariate analysis with my
colleague using R with the NonMem program we are using for a graduate
school project. R and NonMem run fine without adding in the
covariates, but the program is giving us a problem when the covariate
analysis is added. We think the problem is with the R code to run the
covariate data analysis. We have the control stream, R code
2018 Jan 05
2
Intel Flaw
How does the latest Intel flaw relate to CentOS 6.x systems
that run under VirtualBox hosted on Windows 7 computers? Given
the virtual machine's degree of separation from the hardware, can
this issue actually be detected and exploited in the operating
systems that run virtually? If there is a slowdown associated
with the fix, how much might it impact the virtual systems?
2019 Sep 11
4
Fw: Calling a LAPACK subroutine from R
Sorry for cross-posting, but I realized my question might be more appropriate for r-devel...
Thank you,
Giovanni
________________________________________
From: R-help <r-help-bounces at r-project.org> on behalf of Giovanni Petris <gpetris at uark.edu>
Sent: Tuesday, September 10, 2019 16:44
To: r-help at r-project.org
Subject: [R] Calling a LAPACK subroutine from R
Hello R-helpers!
2018 May 23
3
ceph_vms performance
Hi,
I'm testing out ceph_vms vs a cephfs mount with a cifs export.
I currently have 3 active ceph mds servers to maximise throughput and
when I configure a cephfs mount with a cifs export, I'm getting
reasonable benchmark results.
However, when I tried some benchmarking with the ceph_vms module, I
only got a third of the comparable write throughput.
I'm just wondering if
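For context, a hedged sketch of the cephfs-mount-plus-CIFS baseline described above, assuming a filesystem named "cephfs", a monitor reachable as mon1, and an admin keyring; the share definition is illustrative:

# Allow three active MDS daemons (assumes the filesystem is named "cephfs").
ceph fs set cephfs max_mds 3

# Kernel-client mount of CephFS, then re-export the mount point over CIFS.
mount -t ceph mon1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret

# Illustrative smb.conf share for the re-export:
#   [cephfs]
#       path = /mnt/cephfs
#       read only = no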
2007 Aug 16
2
Newbie
Hello,
I'm a bit new to the world of R so forgive my ignorance. I'm trying to do a zero-inflated negative binomial regression and have received an error message and I'm not sure what it means. I'm running R 2.5.1 on XP. I have just tried a really simple version of the model to see if it would run before I put all the variables in. I have attached all the variables to the