similar to: SMB2 write performance slower than SMB1 in 10Gb network

Displaying 19 results from an estimated 1000 matches similar to: "SMB2 write performance slower than SMB1 in 10Gb network"

2016 Feb 17
2
Number of CPUs
Quick question. My host has two processors, each with 6 cores, and each core has two threads. I use iometer to do some testing of hard drive performance. I get the impression that using more cores gives me better results in iometer (whether it will improve the speed of my guest is another question...). For a Windows 2012 R2 server guest, can I just give the guest 24 cores? Just to make
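For context, the topology in question can be handed to a KVM guest explicitly; a minimal sketch, assuming plain QEMU/KVM on the command line (the memory size and any further options are placeholders, not from the thread):

    # expose 2 sockets x 6 cores x 2 threads = 24 vCPUs to the guest
    qemu-system-x86_64 -enable-kvm -m 8192 \
        -smp 24,sockets=2,cores=6,threads=2 \
        ...

Mirroring the host's socket/core/thread split is a common starting point; iometer inside the guest is then what shows whether the extra vCPUs actually help.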
2012 Oct 01
3
Best way to measure performance of ZIL
Hi all, I currently have an OCZ Vertex 4 SSD as a ZIL device and am well aware of their exaggerated claims of sustained performance. I was thinking about getting a DRAM-based ZIL accelerator such as Christopher George's DDRdrive, one of the STEC products, etc. Of course the key question I'm trying to answer is: is the price premium worth it? --- What is the (average/min/max)
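As a rough sketch of one way to watch what a log device actually does under a synchronous-write load (pool name, dataset, and sizes are illustrative; the dd flags below are GNU dd syntax):

    # per-vdev throughput, including the separate log device, refreshed every second
    zpool iostat -v tank 1

    # in another shell: generate synchronous 8K writes that must go through the ZIL
    dd if=/dev/zero of=/tank/fs/zil-test bs=8k count=100000 oflag=sync

Comparing the log device's latency and throughput under such a load against the vendor's sustained-performance claims is the kind of average/min/max data the question above is after.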
2008 Oct 02
1
Terrible performance when setting zfs_arc_max snv_98
Hi there. I just got a new Adaptec RAID 51645 controller in because the old one (a different type) was malfunctioning. It is paired with 16 Seagate 15k5 disks, of which two are used with hardware RAID 1 for OpenSolaris snv_98, and the rest are configured as striped mirrors in a zpool. I created a zfs filesystem on this pool with a blocksize of 8K. This server has 64GB of memory and will be running
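For context, on OpenSolaris builds of that era zfs_arc_max is normally capped via /etc/system; a minimal sketch, with an illustrative 16GB value rather than whatever the poster used:

    * /etc/system entry: cap the ZFS ARC at 16 GB (0x400000000 bytes); takes effect after a reboot
    set zfs:zfs_arc_max = 0x400000000

Capping the ARC far below the working set on a 64GB machine can itself produce exactly this kind of terrible performance.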
2016 May 24
0
Improving 30-40MB/sec Sequential Reads
I'm seeing some really poor performance out of my FreeNAS (ver 9.10) machine, running Samba "4.3.6-GIT-UNKNOWN". I'm using IOMeter to benchmark sequential reads, and getting around 35-40 MB/sec, which seems unusual. Mostly, I'm hoping someone can point me in the direction of a decent tuning guide for a SOHO machine running Samba, but if you're inclined, I'll delve
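A minimal smb.conf sketch of the knobs usually tried first for sequential-read throughput on Samba 4.x (the values are illustrative assumptions, not FreeNAS defaults, and on FreeNAS they go in as auxiliary parameters):

    [global]
        use sendfile = yes
        aio read size = 16384
        aio write size = 16384
        socket options = TCP_NODELAY

Whether any of this helps depends on whether the bottleneck is Samba at all, rather than the disks, the client, or the network path.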
2012 Aug 12
1
tuned-adm fixed Windows VM disk write performance on CentOS 6
On a 32-bit Windows 2008 Server guest VM on a CentOS 5 host, iometer reported a disk write speed of 37MB/s. The same VM on a CentOS 6 host reported 0.3MB/s, i.e. the VM was unusable. Write performance in a CentOS 6 VM was also much worse, but it was usable. (See http://lists.centos.org/pipermail/centos-virt/2012-August/002961.html) With iometer still running in the guest, I installed tuned on
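For reference, the tuned workflow being described is roughly the following (the profile name is the one commonly suggested for virtualization hosts on CentOS 6; treat it as an assumption rather than the one used in the thread):

    yum install -y tuned tuned-utils
    tuned-adm list                   # show the available profiles
    tuned-adm profile virtual-host   # or e.g. enterprise-storage; applied immediately
    tuned-adm active                 # confirm which profile is in effect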
2018 May 28
4
Re: VM I/O performance drops dramatically during storage migration with drive-mirror
Cc the QEMU Block Layer mailing list (qemu-block@nongnu.org), who might have more insights here; and wrap long lines. On Mon, May 28, 2018 at 06:07:51PM +0800, Chunguang Li wrote: > Hi, everyone. > > Recently I am doing some tests on the VM storage+memory migration with > KVM/QEMU/libvirt. I use the following migrate command through virsh: > "virsh migrate --live
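For context, a combined storage+memory live migration of the kind being tested typically looks something like this (hostnames and the domain name are placeholders; the exact flags used in the thread are cut off above):

    virsh migrate --live --copy-storage-all --verbose \
        guest-vm qemu+ssh://dest-host/system

With --copy-storage-all, QEMU mirrors the disk to the destination in the background while the guest keeps running, which is where the drive-mirror I/O contention named in the subject comes in.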
2014 Dec 03
2
Problem with AIO random read
Hello list, I set up Iometer to test AIO with 100% random reads. If "Transfer Request Size" is greater than or equal to 256 kilobytes, the transfer is good at the beginning, but 3~5 seconds later the throughput drops to zero. Server OS: Ubuntu Server 14.04.1 LTS Samba: Version 4.1.6-Ubuntu Dialect: SMB 2.0 AIO settings : aio read size = 1 aio write size = 1 vfs objects =
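A minimal sketch of the share-level AIO setup being tested; note that the 'vfs objects' value below is a guess, since the original line is truncated:

    [global]
        aio read size = 1
        aio write size = 1
        # the module name here is an assumption; the quoted post is cut off at this point
        vfs objects = aio_pthread

With both aio sizes set to 1, every read and write of at least one byte is pushed through the asynchronous path.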
2003 Jan 07
2
MRTG drop/reject hits
I have created a shell script for MRTG statistics of dropped/rejected packets: ftp://slovakia.shorewall.net/mirror/shorewall/mrtg/ http://slovakia.shorewall.net/pub/shorewall/mrtg/ rsync://slovakia.shorewall.net/shorewall/mrtg/ example: http://slovakia.shorewall.net/pub/shorewall/mrtg/example/ It is not based on /var/log/messages (syslog), but on iptables counters. A lot of packets are dropped/rejected
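A minimal sketch of the kind of counter read such a script performs, assuming the DROP/REJECT rules sit in the standard filter table (chain layout and rule targets are assumptions):

    #!/bin/sh
    # sum the packet counters of all DROP and REJECT rules and print one total per line for MRTG
    iptables -L -v -n -x | awk '/DROP/ {drop += $1} /REJECT/ {reject += $1} END {print drop+0; print reject+0}'

Because MRTG graphs the deltas of these counters, this keeps working even when syslog-based logging is rate-limited or disabled.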
2012 Mar 09
2
btrfs_search_slot BUG...
When testing out 16KB blocks with direct I/O [1] on 3.3-rc6, we quickly see btrfs_search_slot returning positive numbers, popping an assertion [2]. Are >4KB block sizes known broken for now? Thanks, Daniel --- [1] mkfs.btrfs -m raid1 -d raid1 -l 16k -n 16k /dev/sda /dev/sdb mount /dev/sda /store && cd /store fio /usr/share/doc/fio/examples/iometer-file-access-server --- [2]
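For anyone reproducing the setup, a simplified direct-I/O load against the same mount can also be expressed on the fio command line (a stand-in sketch, not the iometer-file-access-server workload itself; sizes are illustrative):

    # 16k random read/write with O_DIRECT against the btrfs mount
    fio --name=directtest --directory=/store --rw=randrw --bs=16k \
        --direct=1 --size=1g --numjobs=4 --group_reporting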
2009 Dec 15
1
IOZone: Number of outstanding requests..
Hello: Sorry for asking an iozone question on this mailing list, but I couldn't find any mailing list for iozone... In IOzone, is there a way to configure the number of outstanding requests the client sends to the server side? Something along the lines of the IOMeter option "Number of outstanding requests". Thanks a lot!
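For what it's worth, the closest IOzone switch appears to be its POSIX async I/O depth; a sketch, with -H standing in for Iometer's "Number of outstanding requests" (treat that mapping as an assumption):

    # sequential write/rewrite (-i 0) and read/reread (-i 1), 64k records, 1GB file,
    # with up to 16 outstanding POSIX async I/O operations
    iozone -i 0 -i 1 -r 64k -s 1g -H 16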
1997 Jul 24
1
Print Jobs
When sending print jobs back to back .. say 5 times or so .. sometimes a character or two gets dropped .. causing the printout to be somewhat garbled. This happens from NT as well ... heh .. which is really fun when they're printing PostScript .. cause when the 1st character or so is dropped from a PostScript print job .. its PostScript commands are printed out as ASCII. Is anyone else out
2023 Aug 21
2
Increase data length for SMB2 write and read requests for Windows 10 clients
Hello Jeremy, > OH - that's *really* interesting ! I wonder how it is > changing the SMB3+ redirector to do this ? It looks like applications could do something to give a hint to the SMB3+ redirector; so far I am not quite sure how to do it. Process Monitor (procmon) shows that the write I/O size seems to be passed down from the application layers,
2023 Aug 18
1
Increase data length for SMB2 write and read requests for Windows 10 clients
On Fri, Aug 18, 2023 at 04:25:28PM +0000, Jones Syue wrote: >Hello Ivan, > >'FastCopy' has an option to revise max I/O size and works for SMB :) >it is a tool for file transferring and could be installed to win10, >download here: https://fastcopy.jp/ > >This is an example for writing, a job would write a file named '1GB.img' >from a local disk
2019 Nov 20
2
Is it possible to re-share a SMB2 filesystem for SMB1 clients?
Hi everyone, I have the following situation: I need to migrate all SMB file services to a new appliance that only supports the SMB2+ protocol. Unfortunately, there are still some very old Linux clients ("modinfo cifs" says version 1.60) that only speak SMB1 and need to access these shares after the migration. Is it possible to have a Linux with a modern Samba
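One way such a gateway is commonly sketched: mount the appliance share with the kernel cifs client over SMB2, then re-export that mountpoint from a Samba that still allows SMB1 (share names, paths, and credentials below are placeholders):

    # on the gateway box: mount the new appliance over SMB2
    mount -t cifs //new-appliance/data /srv/reshare -o vers=2.1,credentials=/etc/smbcred

    # smb.conf on the gateway: let the legacy clients connect with SMB1
    [global]
        server min protocol = NT1
    [data]
        path = /srv/reshare
        read only = no

Locking semantics and permission mapping across the double hop are the usual caveats to test before relying on this.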
2008 Sep 28
2
Does "--link-dest" option supports link to remote backup server?
Hello everyone: I keep using rsync to back up the files on my laptop from one folder to another, and to reduce disk usage I use the "--link-dest" option for incremental backups like this: rsync -a --link-dest=/local/old /tmp/myfile /local/new "/local/old" was backed up some days ago; I use the "--link-dest" option to keep unchanged files as links to the new destination
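To the question itself: the --link-dest directory is resolved on the receiving side, so it also works when the destination is a remote rsync-over-ssh target; a relative --link-dest path is taken relative to the destination directory. A sketch with a placeholder host and paths:

    # hard-link unchanged files against the previous backup on the remote server
    rsync -a --link-dest=../old /tmp/myfile user@backupserver:/backups/new/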
2014 Jul 22
1
asterisk performance 64bits
Hello, I'm running Asterisk on a 64-bit CentOS server. If I compile Asterisk using ./configure --libdir=/usr/lib64 instead of plain ./configure, do I get a relative performance gain? Has anyone done any comparison? Is there any way, in the compilation or even in the settings, that I can improve the performance of Asterisk? tks Eduardo
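For what it's worth, --libdir only changes where the libraries get installed; on a 64-bit host the compiler emits 64-bit code either way, so by itself the flag should not change performance. The two builds being compared are roughly:

    # 64-bit library path vs. the default layout
    ./configure --libdir=/usr/lib64 && make && make install
    ./configure && make && make install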
2011 May 13
0
sun (oracle) 7110 zfs low performance with high latency and high disk util.
Hello! Our company has two Sun 7110s with the following configuration: Primary: 7110 with 2 quad-core 1.9GHz HE Opterons and 32GB RAM, and 16 2.5" 10Krpm SAS disks (2 system, 1 spare); a pool is configured from the rest, so we have 13 active working disks in raidz-2 (called main). There is a Sun J4200 JBOD connected to this device with 12x750GB disks, with 1 spare and 11 active disks; there is another pool
2013 Jun 07
0
Bad performance for NV18 driver
On Fri, Jun 7, 2013 at 5:56 PM, Carlos Garces <carlos.garces at gmail.com> wrote: > Hi! > > I have some strange results in my tests with the nouveau drivers > > I'm using an old GeForce4 MX 4000 (NV18) with 128 MB. > > I have made 2 tests. > > -The default configuration, using nomodeset without nouveau_vieux_dri.so > > $inxi -G > Graphics: Card: NVIDIA NV18
2008 Feb 02
17
New binary release of GPL PV drivers for Windows
I've just uploaded a new binary release of the GPL PV drivers for Windows - release 0.6.3. Fixes in this version are: . Should now work on any combination of front and backend bit widths (32, 32p, 64). Previous crashes that I thought were due to Intel arch were actually due to this problem. . The vbd driver will now not enumerate 'boot' disks (eg those normally serviced