
Displaying 12 results from an estimated 12 matches for "162mb".

2017 Nov 03
3
samba 4.x slow ...
...>10G) 199MB (mtu 1500).
>
> 33692     pkoch     users     10.0.3.100 (ipv4:10.0.3.100:54821)     NT1
>
> Using smbclient (using NT1) we see the following values:
>
> Client <-> Server (using get)
> 10GB   <-> 10GB      199MB/s
> 10GB   <-> 2x1GB     162MB/s  (???)
>  1GB   <-> 2x1GB      60MB/s
>
> Client <-> Server (using put)
> 10GB   <-> 2x1GB    ~100MB/s
>  1GB   <-> 2x1GB      50MB/s

Try using SMB3 (smbclient -mSMB3). See if that makes a difference.
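For reference, a minimal sketch of the suggested comparison; the share name and test file are assumptions, and smbclient's -m flag caps the protocol dialect the client will negotiate:

    # Re-run the get test with the SMB3 dialect (share and file names
    # are assumptions)
    smbclient -m SMB3 //server/share -U pkoch -c 'get bigfile /dev/null'

    # Same transfer pinned to the old NT1 dialect, for comparison
    smbclient -m NT1 //server/share -U pkoch -c 'get bigfile /dev/null'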
2015 Jul 20
1
[PATCH 0/1] lss16 parser
...and erroneous) attempt to solve the issue. http://www.syslinux.org/archives/2014-October/022732.html http://www.syslinux.org/archives/2014-November/022778.html Now the core/graphics.c lss16 parser works as it should. This can be seen e.g. when loading "slacko-5.7.0-PAE.iso" (162MB) with 6.03. It contains an "isolinux.cfg" file with a "DISPLAY boot.msg" directive. The "boot.msg" DISPLAY file uses the "logo.16" lss16 image file as background image. This ISO image file uses some variant of isolinux.bin version 4.05, and the lss16 background is s...
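As a sketch of the file chain described above (the contents are assumptions based on the description, not the actual ISO): isolinux.cfg names the DISPLAY file, and the DISPLAY file pulls in the LSS16 image via a Ctrl-X (ASCII 24) control byte:

    # isolinux.cfg (sketch; the real file has more entries)
    DISPLAY boot.msg

    # boot.msg then contains, among its message text, a Ctrl-X control
    # byte followed by the image name, which tells syslinux to enter
    # graphics mode and draw the LSS16 file as background:
    #   <Ctrl-X>logo.16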
2017 Nov 03
4
samba 4.x slow ...
just to verify basic facts: Did you cross-check via a network sniff which SMB protocol version the server and the Win 7 clients agree on? Or did you pin it down via the registry? AFAIK only starting with Win 8 or Win 10 clients can you ask with PowerShell which protocol version is in use. Did you also cross-check the Samba logs for a name resolution issue (Windows names, not DNS) if one of your boxes is an
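Two quick server-side checks, as a sketch (the capture interface name is an assumption): smbstatus on the Samba host lists the negotiated dialect per connection, and a capture of the SMB ports lets you inspect the negotiation in a sniffer:

    # On the Samba server: the protocol column shows the dialect each
    # client negotiated (e.g. NT1, SMB2_10, SMB3_11)
    smbstatus

    # Capture the SMB traffic for offline inspection (interface name
    # is an assumption)
    tcpdump -i eth0 -w smb.pcap 'tcp port 445 or tcp port 139'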
2020 Sep 17
2
storage for mailserver
Hello Phil,

Wednesday, September 16, 2020, 7:40:24 PM, you wrote:

PP> You can achieve this with a hybrid RAID1 by mixing SSDs and HDDs, and
PP> marking the HDD members as --write-mostly, meaning most of the reads
PP> will come from the faster SSDs, retaining much of the speed advantage,
PP> but you have the redundancy of both SSDs and HDDs in the array.
PP> Read performance is
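A minimal sketch of the hybrid array being described, with device names as assumptions; mdadm applies --write-mostly to the member devices listed after the flag:

    # RAID1 of one NVMe SSD and one HDD; the HDD is marked write-mostly,
    # so reads are steered to the SSD (device names are assumptions)
    mdadm --create /dev/md0 --level=1 --raid-devices=2 \
          /dev/nvme0n1p1 --write-mostly /dev/sda1

    # Members flagged write-mostly show a (W) marker here
    cat /proc/mdstat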
2020 Sep 19
1
storage for mailserver
...ical performance with and without the SSD in the array as once any cache has been saturated, write speeds are presumably limited by the slowest device in the array.

> Sequential read QD32
>  187MB/s (2 x HDD RAID1)
> 1725MB/s (1 x NVMe, 2 x HDD RAID1)
>
> Sequential read QD1
>  162MB/s (2 x HDD RAID1)
> 1296MB/s (1 x NVMe, 2 x HDD RAID1)
>
> 4K random read
> 712kB/s (2 x HDD RAID1)
> 55.0MB/s (1 x NVMe, 2 x HDD RAID1)
>
> The read speeds are a completely different story, and the array essentially performs identically to the native speed of the SSD device on...
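For context, a sketch of how numbers like these are commonly gathered with fio; the job names, target device, and runtimes are assumptions, not the poster's actual commands:

    # Sequential read at queue depth 32 and 1 (the QD32/QD1 tests above)
    fio --name=seq-qd32 --filename=/dev/md0 --rw=read --bs=1M --iodepth=32 \
        --ioengine=libaio --direct=1 --runtime=60 --time_based
    fio --name=seq-qd1 --filename=/dev/md0 --rw=read --bs=1M --iodepth=1 \
        --ioengine=libaio --direct=1 --runtime=60 --time_based

    # 4K random read
    fio --name=rand-4k --filename=/dev/md0 --rw=randread --bs=4k --iodepth=1 \
        --ioengine=libaio --direct=1 --runtime=60 --time_based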
2017 Apr 05
2
[Bug 12732] hard links can cause rsync to block or to silently skip files
...but I never got a hang -- and that's the part I'm thinking might be NFS related, since I've seen several issues with NFS not working the same way as a local fs. I usually use smbfs -- in my usage, it's faster and I usually have fewer compatibility problems (faster meaning reads ~162MB/s, writes ~220MB/s, though I've seen Explorer hit over 400MB/s, using a 10Gb link). But the above testing shows some unexplained behaviors out of rsync that sure look like a bug. Good test case! :-)
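For context, a minimal sketch of the hard-link case the bug report exercises (paths are hypothetical); rsync only preserves hard links when asked with -H:

    # Two directory entries pointing at the same inode (paths are
    # hypothetical)
    mkdir -p src && echo data > src/a && ln src/a src/b

    # -a for the usual archive options, -H to detect and recreate
    # hard links on the receiver
    rsync -aH src/ dest/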
2017 Nov 03
0
samba 4.x slow ...
...o test the transfer speed and get (10G<->10G) 199MB (mtu 1500).

33692     pkoch     users     10.0.3.100 (ipv4:10.0.3.100:54821)     NT1

Using smbclient (using NT1) we see the following values:

Client <-> Server (using get)
10GB   <-> 10GB      199MB/s
10GB   <-> 2x1GB     162MB/s  (???)
 1GB   <-> 2x1GB      60MB/s

Client <-> Server (using put)
10GB   <-> 2x1GB    ~100MB/s
 1GB   <-> 2x1GB      50MB/s

Bye, Peer

On 03.11.2017 11:58, Michael Arndt wrote:
> just to verify basic facts:
>
> Did you cross-check via a network sniff which SMB...
2017 Nov 06
0
samba 4.x slow ...
...
>> 33692     pkoch     users     10.0.3.100 (ipv4:10.0.3.100:54821)     NT1
>>
>> Using smbclient (using NT1) we see the following values:
>>
>> Client <-> Server (using get)
>> 10GB   <-> 10GB      199MB/s
>> 10GB   <-> 2x1GB     162MB/s  (???)
>>  1GB   <-> 2x1GB      60MB/s
>>
>> Client <-> Server (using put)
>> 10GB   <-> 2x1GB    ~100MB/s
>>  1GB   <-> 2x1GB      50MB/s

> Try using SMB3 (smbclient -mSMB3). See if that makes a difference.

-- With kind regards...
2020 Sep 17
0
storage for mailserver
...The write tests give near identical performance with and without the SSD in the array as once any cache has been saturated, write speeds are presumably limited by the slowest device in the array.

Sequential read QD32
 187MB/s (2 x HDD RAID1)
1725MB/s (1 x NVMe, 2 x HDD RAID1)

Sequential read QD1
 162MB/s (2 x HDD RAID1)
1296MB/s (1 x NVMe, 2 x HDD RAID1)

4K random read
712kB/s (2 x HDD RAID1)
55.0MB/s (1 x NVMe, 2 x HDD RAID1)

The read speeds are a completely different story, and the array essentially performs identically to the native speed of the SSD device once the slower HDDs are set to -...
2017 Apr 05
0
[Bug 12732] hard links can cause rsync to block or to silently skip files
...that's the part I'm thinking might be NFS
> related, since I've seen several issues with NFS not
> working the same way as a local fs.
>
> I usually use smbfs -- in my usage, it's faster and I
> usually have fewer compatibility problems
> (faster meaning reads ~162MB/s, writes ~220MB/s,
> though I've seen Explorer hit over 400MB/s,
> using a 10Gb link).
>
> But the above testing shows some unexplained behaviors
> out of rsync that sure look like a bug.
>
> Good test case!
> :-)
2007 Jan 11
4
Help understanding some benchmark results
...d, while a ZFS raidz using the same disks returns about 120MB/sec write, but 420MB/sec read.

* 16-disk RAID10 on Linux returns 165MB/sec write and 440MB/sec read, while a ZFS pool with 8 mirrored disks returns 140MB/sec write and 410MB/sec read.
* 16-disk RAID6 on Linux returns 126MB/sec write, 162MB/sec read, while a 16-disk raidz2 returns 80MB/sec write and 142MB/sec read.

The biggest problem I am having understanding "why is it so" is that I was under the impression, with ZFS's CoW, etc., that writing (*especially* writes like this, to a raidz array) should be much fast...
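For reference, a sketch of the two ZFS layouts being compared (pool and disk names are assumptions):

    # One raidz2 vdev spanning all 16 disks (disk names are assumptions)
    zpool create tank raidz2 d0 d1 d2 d3 d4 d5 d6 d7 \
          d8 d9 d10 d11 d12 d13 d14 d15

    # Eight two-way mirrors, the ZFS analogue of a 16-disk RAID10
    zpool create tank mirror d0 d1 mirror d2 d3 mirror d4 d5 \
          mirror d6 d7 mirror d8 d9 mirror d10 d11 \
          mirror d12 d13 mirror d14 d15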
2017 Apr 04
5
[Bug 12732] New: hard links can cause rsync to block or to silently skip files
https://bugzilla.samba.org/show_bug.cgi?id=12732

            Bug ID: 12732
           Summary: hard links can cause rsync to block or to silently
                    skip files
           Product: rsync
           Version: 3.1.2
          Hardware: x64
                OS: Linux
            Status: NEW
          Severity: normal
          Priority: P5
         Component: core
          Assignee: