Displaying 12 results from an estimated 12 matches for "100mbytes".
2012 Nov 18
6
Xen + IPv6 + Netapp = NFS read problem
...ch as a central
point of network. Internally we use IPv6 protocol.
The problem is the following:
The read performance of the backup server is dramatically low: a typical
sequential read from the NetApp share falls to 5-10 MBytes/s!
On the other hand, write performance is fine, i.e. 80-100 MBytes/s.
During tests on our testbed system we could read about 250-280 MBytes/s
from our NetApp storage (using a 10 Gbit network). The backup server is
connected via a 1 Gbit/s network, so we expected around 100 MBytes/s.
We tried to find the reason for this slow performance and it looks like
it is som...
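A quick way to quantify the read/write asymmetry described above is a plain sequential dd over the NFS mount; a minimal sketch, assuming a hypothetical mount point /mnt/netapp and a pre-existing large file:

```shell
# Hypothetical paths: /mnt/netapp is the NFS mount, testfile an existing
# large file on the share.
# echo 3 > /proc/sys/vm/drop_caches   # drop the page cache first (Linux, as root)
sync

# Sequential read: dd reports throughput when it finishes.
dd if=/mnt/netapp/testfile of=/dev/null bs=1M count=1024

# For comparison, a sequential write to the same share:
dd if=/dev/zero of=/mnt/netapp/ddtest bs=1M count=1024 conv=fsync
rm /mnt/netapp/ddtest
```

Running both directions with the same block size makes it easy to see whether only the read path is affected, as in the report above.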
2004 Jan 15
3
ISDN CAPI and anonymous callers
...'t. Apparently because the ISDN CAPI doesn't
use 0 for callers who don't send their number.
Is there a way to make * identify ISDN callers who use CLIR?
-Walter
--
Walter Doerr =*= wd@infodn.rmi.de =*= FAX: +49 2421 962001
"The poor folks who only have 100MBytes of RAM five years
from now may not be able to buffer a 16MB packet, but that's their
tough luck." (John Gilmore on Mon, 10 Oct 88 18:10:21 PDT)
2007 Jun 22
1
Nagging performance issues with Vista
...too 2006.1,
tested with both Samba 3.0.24 and 3.0.25a. Gigabit network is handled by
the onboard nForce controller, and it's got a software RAID 5 setup that
has been running fine for months.
The client is a dual core Windows box with a PCI-Express gigabit card.
Netperf pegs the network at 100 MBytes/sec, so there's no problem there.
Transfers via FTP on both Windows XP and Vista come out at around
55 MBytes/sec consistently, and SMB transfers under Windows XP also top
out around 50-55 MBytes/sec, which seems to be the limit of the I/O on
the client.
Vista however, no matter what I d...
2008 Jan 14
1
direct assignment of PCIe virtual function to domU
Hi all,
I have been using Xen for a couple of weeks now.
So far, I could run domU with direct access to the NIC using the late
binding method.
This way the NIC gives throughput close to that of a native Linux
system.
Now I would like to know whether Xen supports direct assignment of a
PCIe virtual function to domU.
If yes, how can I do this?
Thanks and Regards,
Masroor
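Xen does support passing an SR-IOV virtual function through to a domU like any other PCI device. A rough sketch, assuming an SR-IOV-capable NIC whose parent device is eth0 and whose first VF appears at the hypothetical address 0000:01:10.0 (the `xl` toolstack is shown; older Xen 3.x used `xm` and the `pciback.hide` boot parameter instead):

```shell
# On dom0: create virtual functions on the physical NIC.
echo 4 > /sys/class/net/eth0/device/sriov_numvfs

# Find the VFs and make one assignable (bound to pciback):
lspci | grep "Virtual Function"
xl pci-assignable-add 0000:01:10.0

# In the domU config file, assign the VF:
#   pci = [ '0000:01:10.0' ]
# or hot-plug it into a running domU:
xl pci-attach mydomu 0000:01:10.0
```

The domU then drives the VF with the NIC's own VF driver, which is what gives near-native throughput.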
2004 Jul 17
3
chan_capi: sending incoming calls to different contexts
...have tried something like
msn=1234
incomingmsn=1234
context=msn1
msn=4567
incomingmsn=4567
context=msn2
in capi.conf but with no results.
Thanks for any hints.
-Walter
--
Walter Doerr =*= wd@infodn.rmi.de =*= FAX: +49 2421 962001
"The poor folks who only have 100MBytes of RAM five years
from now may not be able to buffer a 16MB packet, but that's their
tough luck." (John Gilmore on Mon, 10 Oct 88 18:10:21 PDT)
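One guess, purely as a sketch: in capi.conf each bracketed section describes one interface entry, so splitting the MSNs into separate sections (section names here are hypothetical) may let each get its own context:

```ini
; capi.conf -- sketch only; section names [ISDN1]/[ISDN2] are assumptions
[ISDN1]
msn=1234
incomingmsn=1234
context=msn1

[ISDN2]
msn=4567
incomingmsn=4567
context=msn2
```

With everything in a single section, a later msn/context line may simply overwrite the earlier one, which would explain the "no results" behaviour.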
2017 May 31
2
OT: Want to capture all SIP messages
On Wed, 31 May 2017, Barry Flanagan wrote:
> sngrep?
Isn't sngrep a great tool? Since discovering it my use of
tcpdump/wireshark has cratered.
Being able to compare an INVITE that worked with one that didn't (with
color highlighting) rocks.
--
Thanks in advance,
-------------------------------------------------------------------------
Steve Edwards sedwards at sedwards.com
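For capturing all SIP traffic to a file that sngrep (or wireshark) can browse later, a minimal sketch, assuming signalling on the default port 5060:

```shell
# Capture SIP to a pcap for later analysis (run as root; adjust the
# port for TLS or non-standard setups).
tcpdump -i any -n -s 0 -w /tmp/sip.pcap port 5060

# Browse a saved capture, or capture live, with sngrep:
sngrep -I /tmp/sip.pcap       # read a saved pcap
sngrep port 5060              # live capture with call-flow view
```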
2009 Jan 24
3
zfs read performance degrades over a short time
I appear to be seeing the performance of a local ZFS file system degrading over a short period of time.
My system configuration:
32 bit Athlon 1800+ CPU
1 Gbyte of RAM
Solaris 10 U6
SunOS filer 5.10 Generic_137138-09 i86pc i386 i86pc
2x250 GByte Western Digital WD2500JB IDE hard drives
1 zfs pool (striped with the two drives, 449 GBytes total)
1 hard drive has
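To see whether the degradation shows up at the device level, one can watch per-disk throughput while repeating the read test; a sketch, with the pool name 'tank' and the file path as assumptions:

```shell
# Per-device pool statistics every 5 seconds, while the test runs:
zpool iostat -v tank 5 &

# Sequential read of a large file on the pool; repeat after a few
# minutes and compare the reported rates.
dd if=/tank/bigfile of=/dev/null bs=1M
```

If `zpool iostat` shows the disks themselves slowing down, the problem is below ZFS; if the disks stay fast while dd slows, it points at caching or ARC behaviour (plausible on a 1 GByte 32-bit box).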
2019 Nov 09
0
Sudden, dramatic performance drops with Glusterfs
...connection.)
>>
>> Try to rsync a file directly to one of the bricks, then to the other brick (don't forget to remove the files after that, as gluster will not know about them).
>
> If I rsync manually, or scp a file directly to the zpool bricks (outside of gluster) I get 30-100MBytes/s (depending on what I'm copying.)
> If I rsync THROUGH gluster (via the glusterfs mounts) I get 1 - 5MB/s
>>
>> What are your mounting options ? Usually 'noatime,nodiratime' are a good start.
>
> I'll try these. Currently using ...
> (mounting TO serverA) se...
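The suggested options can be applied directly at mount time; a sketch, with the volume name gv0, host serverA, and mount point all hypothetical:

```shell
# Remount the gluster volume with relaxed atime updates:
mount -t glusterfs -o noatime,nodiratime serverA:/gv0 /mnt/gluster

# Then compare the same transfer through the mount vs. direct to a brick:
rsync --progress bigfile /mnt/gluster/
```

If the brick-direct copy stays at 30-100 MBytes/s while the FUSE-mounted copy stays at 1-5 MB/s even with these options, the bottleneck is in the gluster translator stack or network config rather than the disks.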
2007 Mar 14
3
I/O bottleneck Root cause identification w Dtrace ?? (controller or IO bus)
Dtrace and Performance Teams,
I have the following I/O performance-specific questions (and I'm already
savvy with lockstat and the pre-DTrace
utilities for performance analysis, but in need of details on
pinpointing I/O bottlenecks at the controller or I/O bus):
Q.A> Determining I/O saturation bottlenecks (beyond service
times and kernel contention)
I'm
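The DTrace `io` provider is the usual starting point for this kind of question; a sketch of two one-liners (run as root on Solaris) that aggregate by device, which makes a saturated controller or bus stand out:

```shell
# Bytes issued per device:
dtrace -n 'io:::start { @bytes[args[1]->dev_statname] = sum(args[0]->b_bcount); }'

# I/O latency distribution per device:
dtrace -n '
io:::start { start[arg0] = timestamp; }
io:::done /start[arg0]/ {
  @lat[args[1]->dev_statname] = quantize(timestamp - start[arg0]);
  start[arg0] = 0;
}'
```

If several devices behind the same controller all show fat latency tails while their individual byte counts are modest, the contention is above the disks, i.e. at the controller or bus.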
2008 Jul 16
1
[Fwd: [Fwd: The results of iozone stress on NFS/ZFS and SF X4500 shows the very bad performance in read but good in write]]
...how the mixed results as follows:
* The read performance sharply degrades (almost down to 1/20, i.e.
from 2,000,000 down to 100,000) when the file sizes are larger
than 256 KBytes.
* The write performance remains good (roughly 1,000,000) even with
file sizes larger than 100 MBytes.
The NFS/ZFS server configuration and the test environment are briefed as:
* The ZFS pool for NFS is composed of 6 disks in a stripe, with
one on each SATA controller.
* Solaris 10 Update 5 (Solaris Factory Installation)
* The on-board GigE ports are trunked for better I/O and...
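The sweep above can be reproduced with a single iozone invocation; a sketch, with the NFS mount point and file name as assumptions:

```shell
# -i 0 = write test, -i 1 = read test, -r record size, -s file size,
# -c/-e include close() and fsync() in the timing (important over NFS).
iozone -i 0 -i 1 -r 128k -s 512m -c -e -f /mnt/nfs/iozone.tmp
```

Varying `-s` across the 256 KByte boundary mentioned above should show exactly where the read numbers fall off.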
2012 Dec 14
12
any more efficient way to transfer snapshot between two hosts than ssh tunnel?
Assuming in a secure and trusted env, we want to get the maximum transfer speed without the overhead from ssh.
Thanks.
Fred
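In a trusted network the common answer is to pipe `zfs send` over netcat or mbuffer instead of ssh; a sketch, with pool/snapshot names, the receiving host name, and port 9090 all hypothetical:

```shell
# Receiver (start first; note: some netcat variants want "nc -l -p 9090"):
nc -l 9090 | zfs receive tank/backup

# Sender:
zfs send tank/data@snap1 | nc receiver-host 9090

# mbuffer adds a large buffer and a live throughput display:
# receiver: mbuffer -I 9090 -s 128k -m 1G | zfs receive tank/backup
# sender:   zfs send tank/data@snap1 | mbuffer -O receiver-host:9090 -s 128k -m 1G
```

This removes the ssh encryption overhead entirely; mbuffer's buffering also smooths out the bursty nature of `zfs send`, which often matters as much as the cipher cost.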
2009 Feb 24
7
bad I/O performance with HP Smart Array RAID
Hi all,
I discovered lately that two of my Xen servers suffer from really
bad disk throughput - in domU as well as in dom0. All of them run
Debian/Etch, Kernel 2.6.18-6-xen (orig. Debian) with XEN 3.2 (backported).
The software versions seem to be ok for most machines:
Lenovo A57 with S-ATA and (E8200, VT enabled): up to 90MB/s
Shuttle, FB61 based, ATA-Disk (Celeron, no VT): up to