similar to: Lowest latency remote file system

Displaying 20 results from an estimated 10000 matches similar to: "Lowest latency remote file system"

2009 Jun 17
3
Yum Repo that has xcache
Hi, I am wondering where I can get a repo that has xcache (or if anyone has tips on a PHP optimizer). Thanks, James -- http://www.goldwatches.com http://www.jewelerslounge.com
2009 May 17
2
When will we be moving to lighttpd 1.5?
I know I can compile it, however I prefer to use the package manager and have an updated version. -- http://www.goldwatches.com
2009 Apr 03
1
Yum trying to install both i386 and x64 binaries
Hi, I am trying to install lighttpd and yum wants to install both versions. Package Arch Version Repository Size
2004 Sep 13
1
throughput of 300MB/s
Hello, are there any experiences with Samba as a _really_ fast server? Assuming the filesystem and network are fast enough, has anyone managed to get a throughput in Samba of, let's say, 300 MB/s? Are there any benchmarks? regards, Martin
2019 Jul 25
2
SMB Direct support?
Hello all, I was reading up on SMB Direct support and it seems very interesting. I looked through slides of a presentation by Stefan Metzmacher about SMB Direct last year: https://www.samba.org/~metze/presentations/2018/SDC/StefanMetzmacher_SDC2018-SMB-Direct-Status-rev1-presentation.pdf I was wondering what the current state of SMB Direct support in Samba is. I have a Windows 10 Pro for Workstations
2013 Mar 24
5
How to make a network interface come up automatically on link up?
I have a recently installed Mellanox VPI interface in my server. This is an InfiniBand interface, which, through the use of adapters, can also do 10GbE over fiber. I have one of the adapter's two ports configured for 10GbE in this way, with a point-to-point link to a Mac workstation with a Myricom 10GbE card. I've configured this interface on the Linux box (eth2) using
2008 Nov 24
2
Getting lowest latency sound?
I have been trying to get lowest-latency sound (with highest fidelity) to use with Dragon NaturallySpeaking. I have Jaunty and the latest RT kernel, which I know has problems for many applications but works fine to run DNS. (It will not, however, install the program nor train it.) I set up real-time audio access as follows: sudo su -c 'echo @audio - rtprio 99 >>
2004 Aug 06
0
Configuring icecast for lowest buffering/latency
> How can I configure the Icecast server to use as little buffering as > possible so I can reduce the latency that the HTTP streaming on the local > box introduces? As far as I know, icecast and liveice have the smallest influence on the delay in that chain. The biggest delay is produced by the client player's buffer. Have you checked this out? Why do you need such a low latency?
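Server-side buffering of the kind discussed above is controlled in icecast's configuration; a minimal hedged sketch of the relevant `<limits>` section of icecast.xml (element names from the icecast documentation; the values are illustrative, not recommendations):

```xml
<!-- Hedged sketch: burst-on-connect/burst-size control the initial chunk of
     audio icecast pushes to a newly connected client. Disabling the burst
     trades startup smoothness for lower perceived latency. -->
<limits>
    <burst-on-connect>0</burst-on-connect> <!-- do not pre-fill the client buffer -->
    <queue-size>65536</queue-size>         <!-- per-client backlog, in bytes -->
</limits>
```

As the reply notes, though, the dominant delay is usually the client player's own buffer, which the server cannot shrink.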
2004 Aug 06
1
Configuring icecast for lowest buffering/latency
On Wednesday 24 March 2004 11:00, Enrico Minack wrote: > > ...HTTP is very inefficient compared to RTP. > > what exactly do you mean by 'efficient'? Used bandwidth or available > features? In terms of used bandwidth, RTP is the clear winner. In terms of features, neither is a clear winner: HTTP has features RTP doesn't have, RTP has features HTTP doesn't have.
2004 Aug 06
2
Configuring icecast for lowest buffering/latency
On Wednesday 24 March 2004 03:53, Enrico Minack wrote: > Why do you consider livecaster's stream to be more efficient than the > HTTP stream? Actually, after the HTTP header there is just raw MP3 data. > In comparison to that, livecaster puts this MP3 data into an > RTP protocol, which produces more overhead than 'raw' HTTP. And you may be > faced with random packet loss.
2009 Oct 14
2
Best practice settings for channel bonding interface mode?
Hi, maybe there are some best-practice suggestions for the "best mode" for a channel bonding interface? Or in other words, when should/would I use which mode? E.g. I have some fileservers connected to the users' LAN and to some iSCSI storages, or some webservers only connected to the LAN. The switches are all new Cisco models. I've read some docs (1), (2) and (3) so the theory
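For LACP-capable Cisco switches like those mentioned above, a hedged sketch of mode 4 (802.3ad) bonding in the RHEL/CentOS ifcfg style (device names, the address, and the option values are illustrative; the switch ports must be configured as an LACP port-channel):

```
# Hedged sketch: /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
BOOTPROTO=static
IPADDR=192.168.1.10        # example address, adjust for your LAN
NETMASK=255.255.255.0
ONBOOT=yes
BONDING_OPTS="mode=802.3ad miimon=100"

# Hedged sketch: /etc/sysconfig/network-scripts/ifcfg-eth0 (repeat for eth1)
DEVICE=eth0
MASTER=bond0
SLAVE=yes
ONBOOT=yes
```

Where the switch does not support 802.3ad, mode=active-backup is the usual fallback, as the later bonding thread below also notes.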
2004 Aug 06
4
Configuring icecast for lowest buffering/latency
Hi, I'm using Icecast on a Windows PC as a "go-between": output from a streaming encoder is bounced off an icecast server locally on the workstation and then picked up from the local icecast server and relayed on again. (I'm using liveCaster from www.live.com as it allows me to send the audio stream as UDP, which is more efficient than HTTP streaming - unfortunately it does
2008 Nov 10
2
Parallel/Shared/Distributed Filesystems
I'm looking at using GFS for parallel access to shared storage, most likely an iSCSI resource. It will most likely work just fine, but I am curious whether folks are using anything with fewer prerequisites (e.g. installing and configuring the Cluster Suite). Specific to our case, we have 50 nodes running in-house code (some in Java, some in C) which (among other things) receives JPGs,
2010 Mar 29
2
Samba SMB throughput
Hello everyone, Quoting from Samba Team Blog #2 (25 Sept 2009): "Volker showed how to get more than 700MB/sec from Samba using smbclient and a modern Samba server, which shows what you can really do when you understand the protocol thoroughly and don't feel you have to invent a new one (SMB2 :-)." Would it be possible to get a complete accounting of how this was achieved? Thanks,
2019 Jul 31
1
SMB Direct support?
On Wed, Jul 31, 2019 at 11:02:18AM +0000, douxevip via samba wrote: > Hi all, is there anybody on the mailing list who is more knowledgeable about SMB direct? Would appreciate some pointers. See below. Thanks. > > -------- Original Message -------- > On Jul 25, 2019, 22:52, douxevip via samba < samba at lists.samba.org> wrote: > Hello all, I was reading up on SMB Direct
2020 May 15
2
CentOS7 and NFS
The number of threads has nothing to do with the number of cores on the machine. It depends on the I/O, network speed, type of workload etc. We usually start with 32 threads and increase if necessary. You can check the statistics with: watch 'cat /proc/net/rpc/nfsd | grep th' Or you can check on the client: nfsstat -rc Client rpc stats: calls retrans authrefrsh 1326777974 0
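The starting point suggested above is set in a config file on CentOS 7; a hedged sketch (the variable name comes from the stock nfs-utils packaging - verify it on your system):

```
# Hedged sketch: /etc/sysconfig/nfs on CentOS 7.
# Start with 32 nfsd threads, as suggested above, then restart nfs-server.
# Raise the count if the "th" line of /proc/net/rpc/nfsd shows the
# thread pool saturated under load.
RPCNFSDCOUNT=32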
2009 Sep 14
2
Opinions on bonding modes?
I am working on setting up an NFS server, which will mainly serve files to web servers, and I want to set up two bonds. I have a question regarding *which* bonding mode to use. None of the documentation I have read suggests any mode is "better" than another, with the exception of specific use cases (e.g. switch does not support 802.3ad, active-backup). Since my switch *does* support
2005 May 31
2
pbx -> fiber -> network media converter -> wifi -> network media converter -> fiber -> pbx ???
Please forgive the (almost?) OT post. (and the fact that I need a clue-bat) We've got a situation at one of our sites where a construction crew is likely to dig up our conduit, which houses some data fiber and one pair of fiber used to tie a Definity 3gsi at a small office building to the rest of the phone system (school district). We're using a pair of Aeronets so the data network stays
2011 Feb 08
2
PXElinux boot sequence with multiple ethernets
Hello, I am attempting a PXE boot between two systems, each with multiple network cards. While there are a total of 8 ports on each computer, only two (each) are connected as follows: Boot Server eth0 - 10GbE fiber channel (private to the set of computers being managed) (Qlogic) eth4 - 1Gb ethernet (public and out of my sphere of management) (NetExtreme II)
2009 Mar 13
2
Help setting up multipathing on CentOS 4.7 to an Equallogic iSCSI target
I'm trying to test out an Equallogic PS5500 with a server running CentOS 4.7 I can create a volume and mount it fine using the standard iscsi-initiator-utils tools. The Equallogic box has 3 Gigabit interfaces and I would like to try to set up things so I can read/write from/to the volume using multiple NICs on the server i.e. get 200+ Mbyte/s access to the volume - I've had some