Displaying 20 results from an estimated 400 matches similar to: "Transport Endpoint Not connected while running sysbench on Gluster Volume"
2017 Jun 15
1
Transport Endpoint Not connected while running sysbench on Gluster Volume
<re added gluster users, it looks like it was dropped from your email>
----- Original Message -----
> From: "Julio Guevara" <julioguevara150 at gmail.com>
> To: "Ben Turner" <bturner at redhat.com>
> Sent: Thursday, June 15, 2017 5:52:26 PM
> Subject: Re: [Gluster-users] Transport Endpoint Not connected while running sysbench on Gluster Volume
2017 Jun 14
0
Transport Endpoint Not connected while running sysbench on Gluster Volume
Also, this is the profile output of this Volume:
gluster> volume profile mariadb_gluster_volume info cumulative
Brick: laeft-dccdb01p.core.epay.us.loc:/export/mariadb_backup/brick
-------------------------------------------------------------------
Cumulative Stats:
   Block Size:              16384b+               32768b+               65536b+
 No. of Reads:                    0                     0                     0
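For context on reproducing the stats above: profiling has to be switched on before cumulative counters are collected. A minimal sketch using the standard gluster CLI and the volume name shown in the output (the workload step itself is not from the original post):
# enable I/O profiling on the volume (run from any node in the trusted pool)
gluster volume profile mariadb_gluster_volume start
# ... run the sysbench workload against the mounted volume ...
# dump cumulative per-brick block-size and latency statistics
gluster volume profile mariadb_gluster_volume info cumulative
# disable profiling again to drop the small accounting overhead
gluster volume profile mariadb_gluster_volume stop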
2017 Aug 22
0
Performance testing with sysbench...
Hi all,
I'm doing some performance test...
If I test a simple sequential write using dd I get a throughput of about
550 Mb/s. When I do a sequential write test using sysbench this drops to
about 200 Mb/s. Is this due to the way sysbench tests? Or has sysbench itself
become the bottleneck in this case?
Krist
-- 
Vriendelijke Groet |  Best Regards | Freundliche Grüße |
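One way to narrow down the dd vs sysbench gap described above is to make the two workloads as close as possible: single-threaded, direct, sequential writes with the same block size. A rough sketch using the sysbench 0.4-style syntax seen elsewhere on this list (mount point, file size and block size are placeholders, not taken from the original post):
# dd: sequential write, 1 MiB blocks, bypassing the page cache
dd if=/dev/zero of=/mnt/test/ddfile bs=1M count=4096 oflag=direct
# sysbench: same pattern, one thread, direct I/O, no periodic fsync
sysbench --test=fileio --num-threads=1 --file-total-size=4G \
  --file-block-size=1M --file-test-mode=seqwr --file-extra-flags=direct \
  --file-fsync-freq=0 prepare
sysbench --test=fileio --num-threads=1 --file-total-size=4G \
  --file-block-size=1M --file-test-mode=seqwr --file-extra-flags=direct \
  --file-fsync-freq=0 run
One likely contributor to the gap: by default sysbench's fileio test issues periodic fsync() calls (controlled by --file-fsync-freq), which dd does not, so the two numbers are not directly comparable until that is aligned.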
2013 Sep 29
9
DomU vs Dom0 performance.
Hi,
I have been doing some diskIO bench-marking of dom0 and domU (HVM). I ran
into an issue where domU
performed better than dom0.  So I ran a few experiments to check whether it is
just diskIO performance.
I have archlinux (kernel 3.5.0 + xen 4.2.2) installed on an Intel Core
i7 Q720 machine. I have also installed
archlinux (kernel 3.5.0) in domU running on this machine. The domU runs
with 8 vcpus.
2012 Apr 17
2
Kernel bug in BTRFS (kernel 3.3.0)
Hi,
Doing some extensive benchmarks on BTRFS, I encountered a kernel bug
in BTRFS (as reported in dmesg)
Maybe the information below can help you make btrfs better.
Situation:
Doing an intensive sequential write on a SAS 3TB disk drive (SEAGATE
ST33000652SS) with 128 threads using Sysbench.
Device is connected through an HBA. Blocksize was 256k ; Kernel is
3.3.0 (x86_64) ; Btrfs is version
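For reference, the workload described above (128 threads, 256k blocks, sequential write) corresponds roughly to a sysbench invocation like the following; the total file size is a placeholder since it is not given in the excerpt:
sysbench --test=fileio --num-threads=128 --file-block-size=256K \
  --file-total-size=64G --file-test-mode=seqwr prepare
sysbench --test=fileio --num-threads=128 --file-block-size=256K \
  --file-total-size=64G --file-test-mode=seqwr run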
2009 Mar 05
1
[PATCH] OCFS2: Pagecache usage optimization on OCFS2
Hi.
I introduced "is_partially_uptodate" aops for OCFS2.
A page can have multiple buffers, and even if a page is not uptodate, some buffers
can be uptodate in an environment where pagesize != blocksize.
This aops checks that all buffers which correspond to a part of a file 
that we want to read are uptodate. If so, we do not have to issue actual 
read IO to HDD even if a page is not uptodate
2020 Jun 15
1
very low performance of Xen guests
On 6/15/20 2:46 PM, Stephen John Smoogen wrote:
>
>
> On Sun, 14 Jun 2020 at 14:49, Manuel Wolfshant 
> <wolfy at nobugconsulting.ro> wrote:
>
>     Hello
>
>
>     For the past months I've been testing upgrading my Xen hosts
>     to CentOS 7 and I face an issue for which I need your help to solve.
>
>   
2020 Jun 14
4
very low performance of Xen guests
Hello
    For the past months I've been testing upgrading my Xen hosts to
CentOS 7 and I face an issue for which I need your help to solve.
    The testing machines are IBM blades, model H21 and H21XM. Initial
tests were performed on the H21 with 16 GB RAM; during the last 6-7
weeks I've been using the H21XM with 64 GB. In all cases the guests were 
fully updated CentOS 7 --
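When chasing this kind of guest-vs-host slowdown, a useful first data point is the same direct-I/O write run in dom0 and inside the CentOS 7 guest so the numbers are directly comparable. A minimal sketch (path and size are placeholders, not from the original post):
# run identically in dom0 and in the guest, then compare the reported rates
dd if=/dev/zero of=/var/tmp/ddtest bs=1M count=2048 oflag=direct conv=fsync
rm -f /var/tmp/ddtest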
2013 Nov 19
6
[PATCH] Btrfs: fix very slow inode eviction and fs unmount
The inode eviction can be very slow, because during eviction we
tell the VFS to truncate all of the inode's pages. This results
in calls to btrfs_invalidatepage() which in turn does calls to
lock_extent_bits() and clear_extent_bit(). These calls result in
too many merges and splits of extent_state structures, which
consume a lot of time and cpu when the inode has many pages. In
some
2017 Feb 14
6
[PATCH v2 0/3] x86/vdso: Add Hyper-V TSC page clocksource support
Hi,
while we're still waiting for a definitive ACK from Microsoft that the
algorithm is good for the SMP case (as we can't prevent the code in vdso from
migrating between CPUs) I'd like to send v2 with some modifications to keep
the discussion going.
Changes since v1:
- Document the TSC page reading protocol [Thomas Gleixner].
- Separate the TSC page reading code from
2009 Jun 08
1
[PATCH] Btrfs: fdatasync should skip metadata writeout
Hi.
In btrfs, fdatasync and fsync are identical.
I think fdatasync should skip committing the transaction when
inode->i_state has only I_DIRTY_SYNC set, which indicates
only atime and/or mtime updates.
Following patch improves fdatasync throughput.
#sysbench --num-threads=16 --max-requests=10000 --test=fileio 
--file-block-size=4K --file-total-size=16G --file-test-mode=rndwr 
2009 Jan 28
0
smp_tlb_shootdown bottleneck?
Hi.
Sometimes I see heavy contention in smp_tlb_shootdown while running sysbench:
sysbench --test=fileio --num-threads=8 --file-test-mode=rndrd
--file-total-size=3G  run
kern.smp.cpus: 8
FreeBSD 7.1-R
CPU:  0.8% user,  0.0% nice, 93.8% system,  0.0% interrupt,  5.4% idle
Mem: 11M Active, 2873M Inact, 282M Wired, 8K Cache, 214M Buf, 765M Free
Swap: 4096M Total, 4096M Free
  PID USERNAME PRI NICE
2012 Dec 03
17
[PATCH 0 of 3] xen: sched_credit: fix tickling and add some tracing
Hello,
This small series deals with some weirdness in the mechanism with which the
credit scheduler chooses which PCPU to tickle upon a VCPU wake-up.  Details are
available in the changelog of the first patch.
The new approach has been extensively benchmarked and proved itself either
beneficial or harmless. That means it does not introduce any significant amount
of overhead and/or performances
2020 Jun 15
0
very low performance of Xen guests
On Sun, 14 Jun 2020 at 14:49, Manuel Wolfshant <wolfy at nobugconsulting.ro>
wrote:
> Hello
>
>
>     For the past months I've been testing upgrading my Xen hosts to CentOS
> 7 and I face an issue for which I need your help to solve.
>
>     The testing machines are IBM blades, model H21 and H21XM. Initial
> tests were performed on the H21 with 16 GB RAM; during
2010 May 05
6
Benchmark Disk IO
What is the best way to benchmark disk IO?
I'm looking to move one of my servers, which is rather IO intensive, but
not without first benchmarking the current and new disk arrays, to make
sure this isn't a complete waste of time.
thanks
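For what it's worth, a simple random-I/O benchmark with sysbench (0.4-style syntax, as used elsewhere in these threads) looks like the sketch below; the thread count and file size are placeholders, and the total size should exceed RAM so the page cache does not mask the array's real performance:
sysbench --test=fileio --num-threads=8 --file-total-size=8G \
  --file-test-mode=rndrw prepare
sysbench --test=fileio --num-threads=8 --file-total-size=8G \
  --file-test-mode=rndrw run
sysbench --test=fileio --num-threads=8 --file-total-size=8G \
  --file-test-mode=rndrw cleanup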
2017 Feb 08
3
[PATCH RFC 0/2] x86/vdso: Add Hyper-V TSC page clocksource support
Hi,
Hyper-V TSC page clocksource is suitable for vDSO; however, the protocol
defined by the hypervisor is different from VCLOCK_PVCLOCK. I implemented
the required support by re-using the pvclock_page VVAR. A simple sysbench test shows
the following results:
Before:
# time sysbench --test=memory --max-requests=500000 run
...
real    1m22.618s
user    0m50.193s
sys     0m32.268s
After:
# time sysbench
2009 Dec 10
4
Linux router with CentOS
Hello everybody.
I'm wondering if it is possible to set up a CentOS machine as a router
for two Internet connections in a LAN. This _router_ would work as the
gateway for the workstations, using DHCPD. The purpose of this is to make the
most of the broadband by "joining" both connections and, should one connection
fail, not to lose Internet access.
Is this possible? Is it too complicated?
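Balancing two uplinks like this is usually done with iproute2 multipath routing rather than with DHCPD itself; a minimal sketch, where both gateway addresses and interface names are invented for illustration:
# default route split across two hypothetical uplinks
ip route replace default scope global \
    nexthop via 192.0.2.1    dev eth1 weight 1 \
    nexthop via 198.51.100.1 dev eth2 weight 1
Note that this spreads new flows across the two links rather than bonding them into one faster pipe, and surviving a dead link still needs some form of gateway monitoring on top.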
2009 Jul 15
2
Crypto in 7000 Family
Hi Darren,
I found a presentation (Data at Rest: ZFS & Lofi Crypto) with your 
information. Do you have any information about the release date for
encryption in ZFS, or anything encryption-related in our 7000 Unified Storage family?
Thanks in advance for any help and information that you can send to me.
-- 
<http://www.sun.com> 	* H. Alejandro Guevara Castro *
Storage Practice Solution