search for: sysbench

Displaying 20 results from an estimated 39 matches for "sysbench".

2017 Jun 13
2
Transport Endpoint Not connected while running sysbench on Gluster Volume
...ter_volume laeft-dccdb01p:/export/mariadb/brick I had to lower frame-timeout since the system would become unresponsive until the frame failed by timeout: gluster volume set mariadb_gluster_volume networking.frame-timeout 5 running gluster version: glusterfs 3.8.12 The workload I'm using is: sysbench --test=fileio --file-total-size=4G --file-num=64 prepare sysbench version: sysbench 0.4.12-5.el6 kernel version: 2.6.32-696.1.1.el6 centos: 6.8 Issue: Whenever I run sysbench over the mount /var/lib/mysql_backups I get the error that is shown in the log output. It is a constant issue, I ca...
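A full sysbench 0.4 fileio sequence along the lines of the quoted workload might look like the sketch below; the test mode, thread count, and runtime are assumptions, since only the prepare step survives in the excerpt:

  $ cd /var/lib/mysql_backups            # the Gluster-backed mount from the thread
  $ sysbench --test=fileio --file-total-size=4G --file-num=64 prepare
  $ sysbench --test=fileio --file-total-size=4G --file-num=64 \
      --file-test-mode=rndrw --num-threads=4 --max-time=300 --max-requests=0 run
  $ sysbench --test=fileio --file-total-size=4G --file-num=64 cleanup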
2017 Jun 15
1
Transport Endpoint Not connected while running sysbench on Gluster Volume
...il> ----- Original Message ----- > From: "Julio Guevara" <julioguevara150 at gmail.com> > To: "Ben Turner" <bturner at redhat.com> > Sent: Thursday, June 15, 2017 5:52:26 PM > Subject: Re: [Gluster-users] Transport Endpoint Not connected while running sysbench on Gluster Volume > > I stumbled upon the problem. > > We are using deep security agent (da_agent) as our main antivirus. When the > antivirus gets activated it installs kernel modules: > redirfs > gsch > > Apparently when these modules are present and loaded to the...
2009 Jan 28
0
smp_tlb_shootdown bottleneck?
Hi. Sometimes I see much contention in smp_tlb_shootdown while running sysbench: sysbench --test=fileio --num-threads=8 --file-test-mode=rndrd --file-total-size=3G run kern.smp.cpus: 8 FreeBSD 7.1-R CPU: 0.8% user, 0.0% nice, 93.8% system, 0.0% interrupt, 5.4% idle Mem: 11M Active, 2873M Inact, 282M Wired, 8K Cache, 214M Buf, 765M Free Swap: 4096M Total, 4096M Free P...
2017 Jun 14
0
Transport Endpoint Not connected while running sysbench on Gluster Volume
...iadb/brick > > I had to lower frame-timeout since the system would become unresponsive > until the frame failed by timeout: > gluster volume set mariadb_gluster_volume networking.frame-timeout 5 > > running gluster version: glusterfs 3.8.12 > > The workload I'm using is: sysbench --test=fileio --file-total-size=4G > --file-num=64 prepare > > sysbench version: sysbench 0.4.12-5.el6 > > kernel version: 2.6.32-696.1.1.el6 > > centos: 6.8 > > Issue: Whenever I run sysbench over the mount /var/lib/mysql_backups I > get the error that is shown on...
2017 Aug 22
0
Performance testing with sysbench...
Hi all, I'm doing some performance tests... If I test a simple sequential write using dd I get a throughput of about 550 Mb/s. When I do a sequential write test using sysbench this drops to about 200. Is this due to the way sysbench tests? Or has the performance of sysbench itself become the bottleneck in this case? Krist -- Vriendelijke Groet | Best Regards | Freundliche Grüße | Cordialement ------------------------------ Krist van Besien senior architect, RHCE,...
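Part of the gap is likely methodological: by default dd measures buffered sequential writes in large blocks, while sysbench's fileio test writes 16 KiB blocks and fsyncs every 100 requests. A hedged way to bring the two closer together, assuming the legacy sysbench syntax used elsewhere in these threads:

  # make dd flush to disk at the end instead of timing the page cache
  $ dd if=/dev/zero of=ddtest bs=1M count=4096 conv=fdatasync
  # sequential write with sysbench; --file-fsync-freq=0 disables the periodic fsync
  $ sysbench --test=fileio --file-num=1 --file-total-size=4G prepare
  $ sysbench --test=fileio --file-num=1 --file-total-size=4G \
      --file-test-mode=seqwr --file-fsync-freq=0 --max-time=60 --max-requests=0 run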
2013 Sep 29
9
DomU vs Dom0 performance.
...hlinux (kernel 3.5.0) in domU running on this machine. The domU runs with 8 vcpus. I have allotted both dom0 and domU 4096M RAM. I performed the following experiments to compare the performance of domU vs dom0. experiment 1] 1. Created a file.img of 5G 2. Mounted the file with an ext2 filesystem. 3. Ran sysbench with the following command. sysbench --num-threads=8 --test=fileio --file-total-size=1G --max-requests=1000000 prepare 4. Read the files into memory with a script: <snip> for i in `ls test_file.*` do sudo dd if=./$i of=/dev/zero done </snip> 5. Ran sysbench. sysbench --num-thread...
2013 Nov 19
6
[PATCH] Btrfs: fix very slow inode eviction and fs unmount
...of time and cpu when the inode has many pages. In some scenarios I have experienced umount times higher than 15 minutes, even when there's no pending IO (after a btrfs fs sync). A quick way to reproduce this issue: $ mkfs.btrfs -f /dev/sdb3 $ mount /dev/sdb3 /mnt/btrfs $ cd /mnt/btrfs $ sysbench --test=fileio --file-num=128 --file-total-size=16G \ --file-test-mode=seqwr --num-threads=128 \ --file-block-size=16384 --max-time=60 --max-requests=0 run $ time btrfs fi sync . FSSync '.' real 0m25.457s user 0m0.000s sys 0m0.092s $ cd .. $ time umount /mnt/btrfs real 1m...
2012 Apr 17
2
Kernel bug in BTRFS (kernel 3.3.0)
Hi, Doing some extensive benchmarks on BTRFS, I encountered a kernel bug in BTRFS (as reported in dmesg). Maybe the information below can help you make btrfs better. Situation: Doing an intensive sequential write on a SAS 3TB disk drive (SEAGATE ST33000652SS) with 128 threads with Sysbench. Device is connected through an HBA. Blocksize was 256k; Kernel is 3.3.0 (x86_64); Btrfs is version v0.19. Write is done through an LVS volume formatted with BTRFS of course; Mount options are: rw,noatime,nodiratime,compress=lzo,nospace_cache Dmesg gives me the following error: [370517.203926] sd...
2009 Mar 05
1
[PATCH] OCFS2: Pagecache usage optimization on OCFS2
...t to read are uptodate. The "block_is_partially_uptodate" function is already used by ext2/3/4. With the following patch, random read/write mixed workloads, or random reads after random writes, can be optimized and we get a performance improvement. I did a performance test using sysbench. #sysbench --num-threads=16 --max-requests=150000 --test=fileio --file-num=1 --file-block-size=8K --file-total-size=1G --file-test-mode=rndrw --file-fsync-freq=0 --file-rw-ratio=0.5 run -2.6.29-rc7 Test execution summary: total time: 82.3114s total number of ev...
2020 Jun 14
4
very low performance of Xen guests
...ating the initramfs images. I've done rough tests with the storage ( via dd if=/dev/zero of=a_test_file size bs=10M count=1000 ) and the speed was comparable between the hosts and the guests. The version of the kernel in use inside the guest also did not seem to make any difference. OTOH, sysbench ( https://github.com/akopytov/sysbench/ ) as well as p7zip benchmark report for the guests a speed which is between 10% and 50% of the host. Quite obviously, changing the elevator had no influence either. Here is the info which I think that should be relevant for the software versions in...
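Since the thread points at the upstream sysbench repository, a CPU- and memory-bound comparison between dom0 and the guest could use the current CLI; the prime limit, thread count, and buffer size below are arbitrary placeholders:

  # run identical commands on the host and in the guest, then compare events per second
  $ sysbench cpu --cpu-max-prime=20000 --threads=4 run
  $ sysbench memory --memory-total-size=10G run

Because these stay entirely in CPU and memory, a large host/guest gap here points away from storage and toward vCPU scheduling or timer/clocksource overhead.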
2020 Jun 15
1
very low performance of Xen guests
...I've done rough tests with the storage ( via dd if=/dev/zero > of=a_test_file size bs=10M count=1000 ) and the speed was > comparable between the hosts and the guests. The version of the > kernel in use inside the guest also did not seem to make any > difference. OTOH, sysbench ( > https://github.com/akopytov/sysbench/ ) as well as p7zip benchmark > report for the guests a speed which is between 10% and 50% of the > host. Quite obviously, changing the elevator had no influence either. > > Here is the info which I think that should be rel...
2012 Dec 03
17
[PATCH 0 of 3] xen: sched_credit: fix tickling and add some tracing
...of the first patch. The new approach has been extensively benchmarked and proved itself either beneficial or harmless. That means it does not introduce any significant amount of overhead and/or performance regressions while, for some workloads, it improves performance quite noticeably (e.g., `sysbench --test=memory`). Full results in the first changelog too. The rest of the series introduces some macros to enable generating per-scheduler tracing events, retaining the possibility of distinguishing them, even with more than one scheduler running at any given time (via cpupools), and add...
2017 Feb 14
6
[PATCH v2 0/3] x86/vdso: Add Hyper-V TSC page clocksource support
...t to asm/mshyperv.h to use from both hv_init.c and vdso. - Add explicit barriers [Thomas Gleixner] Original description: Hyper-V TSC page clocksource is suitable for vDSO, however, the protocol defined by the hypervisor is different from VCLOCK_PVCLOCK. Implemented the required support. Simple sysbench test shows the following results: Before: # time sysbench --test=memory --max-requests=500000 run ... real 1m22.618s user 0m50.193s sys 0m32.268s After: # time sysbench --test=memory --max-requests=500000 run ... real 0m47.241s user 0m47.117s sys 0m0.008s Patches 1 and 2 are made on to...
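The drop in sys time is consistent with gettimeofday()/clock_gettime() being served from the vDSO instead of via syscalls once the TSC page clocksource is usable there. A rough way to check this on a guest (the commands are illustrative, not from the thread):

  # show which clocksource the kernel is currently using
  $ cat /sys/devices/system/clocksource/clocksource0/current_clocksource
  # with a vDSO-capable clocksource, few or no time syscalls should be counted here
  $ strace -f -c -e trace=gettimeofday,clock_gettime \
      sysbench --test=memory --max-requests=500000 run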
2020 Jun 15
0
very low performance of Xen guests
...mfs images. I've done rough tests with the > storage ( via dd if=/dev/zero of=a_test_file size bs=10M count=1000 ) and > the speed was comparable between the hosts and the guests. The version of > the kernel in use inside the guest also did not seem to make any difference > . OTOH, sysbench ( https://github.com/akopytov/sysbench/ ) as well as > p7zip benchmark report for the guests a speed which is between 10% and 50% > of the host. Quite obviously, changing the elevator had no influence > either. > > Here is the info which I think that should be relevant for the ...
2009 Jun 08
1
[PATCH] Btrfs: fdatasync should skip metadata writeout
Hi. In btrfs, fdatasync and fsync are identical. I think fdatasync should skip committing the transaction when inode->i_state is set to just I_DIRTY_SYNC, which indicates only atime and/or mtime updates. The following patch improves fdatasync throughput. #sysbench --num-threads=16 --max-requests=10000 --test=fileio --file-block-size=4K --file-total-size=16G --file-test-mode=rndwr --file-fsync-mode=fdatasync run Results: -2.6.30-rc8 Test execution summary: total time: 1980.6540s total number of events: 10001...
2010 May 05
6
Benchmark Disk IO
What is the best way to benchmark disk IO? I'm looking to move one of my servers, which is rather IO intensive. But not without first benchmarking the current and new disk arrays, to make sure this isn't a full waste of time. Thanks
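sysbench's fileio test, which appears throughout these threads, is one option for that kind of before/after comparison; direct I/O keeps the page cache from standing in for the array. The sizes, mode, and duration below are placeholders to adapt:

  $ sysbench --test=fileio --file-total-size=8G --file-num=64 prepare
  $ sysbench --test=fileio --file-total-size=8G --file-num=64 \
      --file-test-mode=rndrd --file-extra-flags=direct \
      --num-threads=8 --max-time=120 --max-requests=0 run
  $ sysbench --test=fileio --file-total-size=8G --file-num=64 cleanup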
2017 Feb 08
3
[PATCH RFC 0/2] x86/vdso: Add Hyper-V TSC page clocksource support
Hi, Hyper-V TSC page clocksource is suitable for vDSO, however, the protocol defined by the hypervisor is different from VCLOCK_PVCLOCK. I implemented the required support re-using pvclock_page VVAR. Simple sysbench test shows the following results: Before: # time sysbench --test=memory --max-requests=500000 run ... real 1m22.618s user 0m50.193s sys 0m32.268s After: # time sysbench --test=memory --max-requests=500000 run ... real 0m50.218s user 0m50.171s sys 0m0.016s So it seems it is w...
2017 Feb 09
4
[PATCH 0/2] x86/vdso: Add Hyper-V TSC page clocksource support
Hi, Hyper-V TSC page clocksource is suitable for vDSO, however, the protocol defined by the hypervisor is different from VCLOCK_PVCLOCK. Implemented the required support. Simple sysbench test shows the following results: Before: # time sysbench --test=memory --max-requests=500000 run ... real 1m22.618s user 0m50.193s sys 0m32.268s After: # time sysbench --test=memory --max-requests=500000 run ... real 0m47.241s user 0m47.117s sys 0m0.008s So it seems it is worth it. As...