similar to: tuned-adm fixed Windows VM disk write performance on CentOS 6

Displaying 20 results from an estimated 200 matches similar to: "tuned-adm fixed Windows VM disk write performance on CentOS 6"

2012 Apr 28
1
SMB2 write performance slower than SMB1 in 10Gb network
Hi folks: I've been testing SMB2 performance with samba 3.6.4 these days, and I've found an odd result: SMB2 write performance is slower than SMB1 on a 10Gb ethernet network. Server ----------------------- Linux: Redhat Enterprise 6.1 x64 Kernel: 2.6.31 x86_64 Samba: 3.6.4 (almost using the default configuration) Network: Chelsio T4 T420-SO-CR 10GbE network adapter RAID: Adaptec 51645 RAID
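A minimal smb.conf sketch for this kind of SMB1-vs-SMB2 comparison on Samba 3.6 (share name and path are illustrative; in 3.6 SMB2 is only negotiated if max protocol is raised):

    [global]
        # Toggle between the two benchmark runs:
        # SMB2 enables the newer dialect, NT1 pins the server to classic SMB1.
        max protocol = SMB2
        # max protocol = NT1

    [bench]
        path = /data/bench
        read only = no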
2016 Feb 17
2
Amount of CPUs
Quick question. In my host I've got two processors, each with 6 cores, and each core has two threads. I use iometer to do some testing of hard drive performance. I get the impression that using more cores gives me better results in iometer (whether it will improve the speed of my guest is another question...). For a Windows 2012 R2 server guest, can I just give the guest 24 cores? Just to make
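One way to expose all 24 threads to the guest while mirroring the host's 2-socket, 6-core, 2-thread layout is a libvirt domain-XML fragment along these lines (the CPU mode is just one reasonable choice):

    <vcpu placement='static'>24</vcpu>
    <cpu mode='host-passthrough'>
      <!-- matches the 2 x 6 x 2 host described above -->
      <topology sockets='2' cores='6' threads='2'/>
    </cpu>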
2014 Dec 03
2
Problem with AIO random read
Hello list, I set up Iometer to test AIO with a 100% random read workload. If the "Transfer Request Size" is greater than or equal to 256 kilobytes, the transfer starts out fine, but 3~5 seconds later the throughput drops to zero. Server OS: Ubuntu Server 14.04.1 LTS Samba: Version 4.1.6-Ubuntu Dialect: SMB 2.0 AIO settings : aio read size = 1 aio write size = 1 vfs objects =
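For reference, the AIO knobs being tested look like this in smb.conf (the vfs objects value is cut off in the snippet above, so it is not reproduced here):

    # a size of 1 means any request larger than 1 byte is handed to AIO,
    # i.e. effectively every read and write
    aio read size = 1
    aio write size = 1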
2012 Oct 01
3
Best way to measure performance of ZIL
Hi all, I currently have an OCZ Vertex 4 SSD as a ZIL device and am well aware of their exaggerated claims of sustained performance. I was thinking about getting a DRAM-based ZIL accelerator such as Christopher George's DDRdrive, one of the STEC products, etc. Of course the key question I'm trying to answer is: is the price premium worth it? --- What is the (average/min/max)
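A rough way to see what the existing log device is doing under a synchronous-write load, before paying for a DRAM-based one (pool and dataset names below are illustrative):

    zfs create tank/ziltest              # scratch dataset for the test
    zfs set sync=always tank/ziltest     # force every write through the ZIL
    zpool iostat -v tank 1               # per-vdev activity, including the log device, once a second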
2016 Feb 18
0
Re: Amount of CPUs
On Wed, Feb 17, 2016 at 07:14:33PM +0000, Dominique Ramaekers wrote: >Quick question. > >In my host, I've got two processors with each 6 cores and each core has >two threads. > >I use iometer to do some testings on hard drive performance. > >I get the idea that using more cores give me better results in >iometer. (if it will improve the speed of my guest is an other
2018 May 28
0
Re: VM I/O performance drops dramatically during storage migration with drive-mirror
On Mon, May 28, 2018 at 02:05:05PM +0200, Kashyap Chamarthy wrote: > Cc the QEMU Block Layer mailing list (qemu-block@nongnu.org), [Sigh; now add the QEMU Block Layer e-mail list to Cc, without typos.] > who might > have more insights here; and wrap long lines. > > On Mon, May 28, 2018 at 06:07:51PM +0800, Chunguang Li wrote: > > Hi, everyone. > > > > Recently
2018 May 28
4
Re: VM I/O performance drops dramatically during storage migration with drive-mirror
Cc the QEMU Block Layer mailing list (qemu-block@nongnu.org), who might have more insights here; and wrap long lines. On Mon, May 28, 2018 at 06:07:51PM +0800, Chunguang Li wrote: > Hi, everyone. > > Recently I am doing some tests on the VM storage+memory migration with > KVM/QEMU/libvirt. I use the following migrate command through virsh: > "virsh migrate --live
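For context, a combined storage+memory live migration of the kind described usually looks something like this (domain name and destination URI are placeholders):

    virsh migrate --live --copy-storage-all --verbose \
         guest01 qemu+ssh://dest-host/system

--copy-storage-all mirrors the full disks to the destination; --copy-storage-inc is the incremental variant when the destination already has a base image.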
2019 Jul 05
3
Have you run "tuned-adm profile throughput-performance" ?
On 7/4/19 10:18 PM, Steven Tardy wrote: > I would also look at power settings in the BIOS and c-state settings in the > BIOS and OS as disabling c-states (often enabled by default to meet > green/energy star compliance) can make a noticeable performance difference. I'd be surprised if it did, but now that you mention it, I think that we should probably mention more often that
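The profile change from the subject line is a one-liner; checking what is currently active first makes the before/after comparison cleaner:

    tuned-adm active                          # show the current profile
    tuned-adm list                            # list available profiles
    tuned-adm profile throughput-performance  # switch; takes effect immediately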
2012 Mar 09
2
btrfs_search_slot BUG...
When testing out 16KB blocks with direct I/O [1] on 3.3-rc6, we quickly see btrfs_search_slot returning positive numbers, popping an assertion [2]. Are >4KB block sizes known to be broken for now? Thanks, Daniel --- [1] mkfs.btrfs -m raid1 -d raid1 -l 16k -n 16k /dev/sda /dev/sdb mount /dev/sda /store && cd /store fio /usr/share/doc/fio/examples/iometer-file-access-server --- [2]
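The reproduction steps from footnote [1], one command per line:

    mkfs.btrfs -m raid1 -d raid1 -l 16k -n 16k /dev/sda /dev/sdb
    mount /dev/sda /store && cd /store
    fio /usr/share/doc/fio/examples/iometer-file-access-server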
2009 Dec 15
1
IOZone: Number of outstanding requests..
Hello: Sorry for asking an iozone question on this mailing list, but I couldn't find any mailing list for iozone... In IOzone, is there a way to configure the number of outstanding requests the client sends to the server side? Something along the lines of the IOMeter option "Number of outstanding requests". Thanks a lot!
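Whether iozone exposes such a knob is exactly the open question here; as a point of comparison, fio models it directly as an I/O queue depth. A hedged sketch (the file path is a placeholder for a file on the share under test):

    fio --name=outstanding --filename=/mnt/share/testfile --size=1g \
        --rw=randread --bs=64k --ioengine=libaio --direct=1 \
        --iodepth=8   # number of requests kept in flight, IOMeter-style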
2008 Oct 02
1
Terrible performance when setting zfs_arc_max snv_98
Hi there. I just got a new Adaptec RAID 51645 controller in because the old one (another type) was malfunctioning. It is paired with 16 Seagate 15k5 disks, of which two are used in hardware RAID 1 for OpenSolaris snv_98, and the rest are configured as striped mirrors in a zpool. I created a zfs filesystem on this pool with a blocksize of 8K. This server has 64GB of memory and will be running
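For reference, on OpenSolaris the cap in question is normally set in /etc/system and picked up at boot; the 48 GiB value below is only an example for a 64 GB machine:

    * limit the ARC to 48 GiB, leaving headroom for the application
    set zfs:zfs_arc_max = 0xc00000000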
2008 Jun 24
0
HVM domU disk i/o slow after resume
After resuming an HVM domU, disk throughput decreases by 75 percent. Subsequent save/resume cycles do not further decrease disk throughput, but it remains at 25 percent of the original rate. The Iometer all-in-one test is used to measure throughput.
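A minimal way to reproduce the save/resume cycle being described (domain name and checkpoint path are placeholders):

    xm save winguest /var/lib/xen/save/winguest.chkpt     # suspend the domU to disk
    xm restore /var/lib/xen/save/winguest.chkpt           # resume it, then re-run the Iometer test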
2012 Oct 11
0
samba performance downgrade with glusterfs backend
Hi folks, We found that samba performance degrades a lot with a glusterfs backend. Volume info as follows: Volume Name: vol1 Type: Distribute Status: Started Number of Bricks: 1 Transport-type: tcp Bricks: Brick1: pana53:/data/ Options Reconfigured: auth.allow: 192.168.* features.quota: on nfs.disable: on Use dd (bs=1MB) or iozone (block=1MB) to test write performance, about 400MB/s. #dd
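The write tests referred to above are of roughly this shape (the mount point and sizes are illustrative; conv=fdatasync is added here so the dd figure is not just page-cache speed):

    dd if=/dev/zero of=/mnt/vol1/ddtest bs=1M count=4096 conv=fdatasync
    iozone -i 0 -r 1m -s 4g -f /mnt/vol1/iozonetest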
2018 Mar 08
0
fuse vs libgfapi LIO performances comparison: how to make tests?
Dear support, I need to export a gluster volume with LIO for a virtualization system. At the moment I have a very basic test configuration: 2x HP 380 G7 (2 * Intel X5670 (six cores @ 2.93GHz), 72GB RAM, HD RAID10 6x SAS 10krpm, LAN Intel X540 T2 10GB), directly interconnected. The Gluster configuration is replica 2. The OS is Fedora 27. For my tests I used dd and I found strange results. Apparently the
2016 Apr 11
0
High Guest CPU Utilization when using libgfapi
Hi, I am currently testing an OpenStack instance running on a Cinder volume with libgfapi. The instance is a Windows instance, and I found that when running a random 4k write workload the CPU utilization is very high: 90%, with about 86% in privileged time. I also tested the workload with a volume from NFS and the CPU utilization is only around 5%. For gluster fuse, the CPU utilization
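For reference, a libgfapi-backed disk in the guest definition looks roughly like this (host, volume and image names are placeholders):

    <disk type='network' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source protocol='gluster' name='cinder-vol/volume-1234.img'>
        <host name='gluster1.example.com' port='24007'/>
      </source>
      <target dev='vdb' bus='virtio'/>
    </disk>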
2006 May 23
0
Samba on NAS perfs
Hi, I am running a samba 3.0.14a server on a 2.6.15.6 kernel, on a 4T SATA2 NAS. The disks are formatted with xfs and configured in hardware RAID5. The network is bonded with 2 Gbit links using 802.3ad aggregation. The NAS is connected to a Gbit switch (Summit 400 - Extreme Networks), also configured with dynamic link aggregation. I am trying to get the best performance reading/writing to the NAS, and what I
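Typical smb.conf knobs experimented with for raw throughput in that Samba 3.0.x era look like this (values are starting points, not recommendations; later Samba releases generally advise leaving socket options at their defaults):

    socket options = TCP_NODELAY SO_RCVBUF=65536 SO_SNDBUF=65536
    use sendfile = yes
    read raw = yes
    write raw = yes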
2014 Dec 03
0
Problem with AIO random read
On Wed, Dec 03, 2014 at 03:45:24AM -0800, mikeliu wrote: > Hello list, > > I setup Iometer to test AIO for 100% random read. > If "Transfer Request Size" is more than or equal to 256 kilobytes,in the > beginning the transmission is good. > But 3~5 seconds later,the throughput will drop to zero. > > Server OS: > Ubuntu Server 14.04.1 LTS > > Samba: >
2012 Oct 13
1
low samba performance with glusterfs backend
Hello folks, We tested samba performance with local ext4 and glusterfs backends, and the results are very different. The samba server has 4 1Gbps NICs bonded with mode 6; the backend storage is RAID0 with 12 SAS disks. A LUN is created over all the disks, formatted as an EXT4 file system, and used as the glusterfs brick. On the samba server, using dd to test local ext4 and glusterfs, the write bandwidths are 477MB/s and
2012 Aug 10
3
CentOS 6 kvm disk write performance
I have 2 similar servers. Since upgrading one from CentOS 5.5 to 6, disk write performance in kvm guest VMs is much worse. There are many, many posts about optimising kvm, many mentioning disk performance in CentOS 5 vs 6. I've tried various changes to speed up write performance, but nothing's made a significant difference so far: - Install virtio disk drivers in guest - update the
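The disk-side tweaks usually tried alongside the virtio drivers live in the guest's disk definition; an illustrative fragment (the source device path is a placeholder):

    <disk type='block' device='disk'>
      <!-- cache='none' bypasses the host page cache; io='native' uses kernel AIO -->
      <driver name='qemu' type='raw' cache='none' io='native'/>
      <source dev='/dev/vg0/guest01'/>
      <target dev='vda' bus='virtio'/>
    </disk>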