search for: 2gbps

Displaying 18 results from an estimated 18 matches for "2gbps".

2007 May 24
9
No zfs_nocacheflush in Solaris 10?
Hi, I'm running SunOS Release 5.10 Version Generic_118855-36 64-bit and in /etc/system I put: set zfs:zfs_nocacheflush = 1 And after rebooting, I get the message: sorry, variable 'zfs_nocacheflush' is not defined in the 'zfs' module So is this variable not available in the Solaris kernel? I'm getting really poor
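
A quick way to tell whether the running kernel knows the tunable at all is to ask the kernel debugger for the symbol; a minimal sketch, assuming Solaris mdb (the variable only exists in kernels recent enough to ship it, and older Solaris 10 patch levels predate it):

    # print the symbol's value from the live kernel; an "unknown symbol"
    # error means this kernel simply has no zfs_nocacheflush tunable
    echo "zfs_nocacheflush/D" | mdb -k
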
2016 Sep 28
2
Multichannel working at half speed with 2x NICs
...re capable of sustaining read/writes of well over 250MB/s, so I don't think this is a disk bottleneck issue. What happens during a file transfer is that both interfaces are only utilized at around 50%, so they both combine to give me ~1Gbps, when they really should be working at 100% each for ~2Gbps total. What can I do to troubleshoot this? Thanks in advance for any help.
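
For context, SMB3 multichannel in Samba of that era sat behind an explicit, experimental smb.conf switch; a minimal sketch, assuming a Samba 4.4+ server (addresses are placeholders):

    [global]
        # experimental in 4.4-era releases; required for multichannel at all
        server multi channel support = yes
        # advertise both NICs explicitly
        interfaces = 192.168.1.10 192.168.1.11
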
2008 Aug 28
4
Very Slow!
System info: Red Hat Enterprise Linux Server release 5 (Tikanga) Kernel 2.6.18-8.el5 SMP x86_64 Samba version 3.0.23c-2 Eth0 && Eth1 bonded to bond0, 2Gbps. /etc/samba/smb.conf attached below... I'm seeing very slow transfers from Samba.... I'm not sure how else to describe it. If I try and copy a 4GB DVD image from the server to any Windows box (XP, 2003, 2008, MacOS) it estimates more than 4 hours to copy. However, if I FTP to the server fro...
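
Since FTP from the same box is fast, the bond itself is probably fine and Samba-level socket settings are the usual first suspect; one commonly tried tweak from that era, shown only as a hedged sketch (not a guaranteed fix):

    [global]
        # keep it minimal; explicit SO_RCVBUF/SO_SNDBUF overrides often
        # defeat kernel autotuning on 2.6 kernels
        socket options = TCP_NODELAY
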
2023 Apr 17
1
[Bug 5124] Parallelize the rsync run using multiple threads and/or connections
...ulo.marques at bitfile.pt> --- Using multiple connections also helps when you have LACP network links, which are relatively common in data center setups for both redundancy and increased bandwidth. If you have two 1Gbps links aggregated, a single rsync connection can only use 1Gbps, but you could use 2Gbps if rsync made several connections from different TCP ports.
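
Until rsync grows native multi-connection support, a common workaround is to run several rsync processes in parallel so the LACP hash can spread the flows across both links; a rough sketch (source path and host are hypothetical):

    # one rsync per top-level entry, up to 4 at a time -> multiple TCP flows
    ls /data | xargs -n1 -P4 -I{} rsync -a /data/{} backup:/data/
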
2010 Oct 08
4
login_* options for 1.0.15
Hello all, Although I'm aware that version 1.0.15 is rather old, that's what is used in Lenny, so... Either way, the setup is rather simple, regular dovecot install, with maildirs residing on a "local" ext3 filesystem accessed through FC to a SAN (2Gbps link). The server has 2 cores (with HT), so "almost" 4 cores and 3GB of RAM. A couple weeks ago we had a major number of account migrations, from POP to IMAP. We started to notice that in times of peaks, especially after lunch hour, dovecot started to get REALLY slow and closing client...
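
With Dovecot 1.0.x, peak-time login stalls are often down to the login process model; a hedged sketch of the 1.0-era knobs (values are illustrative, not recommendations):

    # dovecot.conf, Dovecot 1.0.x option names
    login_process_per_connection = no   # reuse login processes under load
    login_processes_count = 8           # login processes kept ready
    login_max_connections = 256         # connections per login process
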
2012 Apr 12
7
10gig in domU
Using 10G interfaces on kernel 3.2 + xen 4.1.2 we're seeing: dom0 ~9.2Gbps domU *~2.5Gbps* dmesg on domU: XENBUS: Device with no driver: device/vif/0 Is this normal? Thanks Kristoffer
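
That "Device with no driver: device/vif/0" line suggests the PV network frontend never bound, leaving the domU on a slower path; a quick hedged check from inside the domU (module name as in mainline kernels):

    lsmod | grep xen_netfront || modprobe xen-netfront
    dmesg | grep -i vif    # the frontend should attach after the modprobe
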
2016 Sep 28
0
Multichannel working at half speed with 2x NICs
...read/writes of well over 250MB/s, so I don't think this is a > disk bottleneck issue. > > What happens during a file transfer is that both interfaces are only > utilized at around 50%, so they both combine to give me ~1Gbps, when they > really should be working at 100% each for ~2Gbps total. > > What can I do to troubleshoot this? strace the connected smbd to ensure it's using the pthreadpool.
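
Concretely, that check might look like the following (PID discovery via smbstatus; the syscall filter is just a convenience, and <PID> is a placeholder):

    smbstatus -p                      # list smbd PIDs per connected client
    strace -f -p <PID> -e trace=read,write,pread64,pwrite64
    # with the pthreadpool in use, the I/O shows up on worker threads (-f)
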
2010 Dec 11
1
Storage performance
Hi, I recently did some benchmarking on a Rackspace VM and was surprised that bonnie++ showed a read throughput of almost 500MB/sec. Does anyone have an idea how they achieve these speeds in a shared environment? While you can achieve this with a RAID easily these days once you have lots of VMs accessing that array I would expect that speed to go down quite a bit. Was I just lucky that I was
2016 Sep 28
2
Multichannel working at half speed with 2x NICs
...B/s, so I don't think this is > a > > disk bottleneck issue. > > > > What happens during a file transfer is that both interfaces are only > > utilized at around 50%, so they both combine to give me ~1Gbps, when they > > really should be working at 100% each for ~2Gbps total. > > > > What can I do to troubleshoot this? > > strace the connected smbd to ensure it's using the pthreadpool. >
2009 May 24
1
Dovecot Max Connections & mbox vs. maildir format - Recommendations?
...at I go from mbox format to Maildir, I want to plan that as well. However I don't want to end up in a situation where I have to move back from Maildir to mbox again. So please do advise me on the best approach. BTW one question: Assuming that my server has enough resources (8 CPU @ 3.0GHz, 8GB RAM, 2Gbps FC Storage directly attached to the server & Gigabit NIC), what is the maximum number of concurrent connections that Dovecot can handle, POP3 + IMAP combined? And I am asking for all the processes combined: POP3-login processes IMAP-login processes POP3 sessions IMAP sessions Any TCP/IP kernel...
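
The trailing question about TCP/IP kernel settings is apt: with POP3 and IMAP combined, per-process descriptor limits and the listen backlog usually bind before 8 cores and 8GB of RAM do; a hedged sketch of the usual OS-side checks (values illustrative, Linux syntax):

    ulimit -n 65536                       # fd ceiling for the dovecot master
    sysctl -w net.core.somaxconn=1024     # listen backlog
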
2012 Jan 23
1
Director questions
In playing with dovecot director, a couple of things came up, one related to the other: 1) Is there an effective maximum of directors that shouldn't be exceeded? That is, even if technically possible, that I shouldn't go over? Since we're 100% NFS, we've scaled servers horizontally quite a bit. At this point, we've got servers operating as MTAs, servers doing IMAP/POP
2009 Jan 29
8
Help on setting up a PVM
I'm going to set up a PVM on xen-3.3.1 debian-amd64. I need advice on the best method to install a fresh debian on it (i.e. how to choose the kernel for the PVM). I've installed xen from sources and only have one xen kernel in /boot which I'm using for dom0; should I use the same kernel for domUs? Is there any problem with using vcpus=2? The other vm is an hvm running
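
For the vcpus part, that is just a line in the guest config; a minimal PV guest sketch for xm on 3.3-era Xen (every path and name here is hypothetical, and the dom0 kernel can often serve domUs too if it was built with PV guest support):

    # /etc/xen/debian-pv.cfg
    kernel  = "/boot/vmlinuz-xen"
    ramdisk = "/boot/initrd-xen.img"
    memory  = 1024
    vcpus   = 2
    disk    = ["phy:/dev/vg0/debian,xvda,w"]
    vif     = [""]
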
2015 Jun 17
0
EFI & PXE-booting: very slow TFTP performance on a VMWare test setup
...inutes. So, there's something else going on, but what? > > I tried to figure out why a UDP packet on a virtual network with > unlimited bandwidth and zero packet-loss would hit a 15ms timeout. The You'd be surprised how much CPU and RAM speed influence this. Handling a sustained 2Gbps, for example, doesn't go well on most servers. > comment says it waits for 15ms, but it does this by counting volatile > uint32_t __jiffies, which gets incremented from efi/main.c by an event > initiated from the UEFI API. According to the comment near the #define > DEFAULT_TIMER_TIC...
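
One way to separate "TFTP is slow on this network" from "the firmware's TFTP client is slow" is to fetch the same file with an ordinary client from another machine; a hedged sketch (curl speaks TFTP; host and filename are placeholders):

    curl -o /dev/null tftp://192.168.1.1/bootx64.efi
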
2017 Mar 08
13
[Bug 1127] New: running nft command creates lag for forwarded packets
...ates lag for forwarded packets. In the test case, ICMP packets going through the boxes normally have about 5ms of latency. When running nft (regardless of whether the command lists a set with a few items or with several thousand), latencies go up to 30-100ms. This is observed when router throughput is from 600Mbps to 2Gbps. When throughput is about 300Mbps, latencies go up too, but to 8-12ms. Routers are using multiple NIC queues with affinity to CPU cores; maximum load when tests were performed was about 10-20%. Userspace processes (including nft) are affined to other cores than the NIC queues. Older boxes with iptab...
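
Given that nft runs hurt forwarding latency even at 10-20% load, double-checking the CPU separation between NIC queues and the nft process is a cheap first step; a hedged sketch (interface name, <N>, and core list are placeholders):

    grep eth0 /proc/interrupts              # which IRQs the NIC queues use
    cat /proc/irq/<N>/smp_affinity_list     # cores serving each queue
    taskset -c 6,7 nft list ruleset         # run nft away from those cores
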
2007 Dec 06
12
DO NOT REPLY [Bug 5124] New: Lessons to learn from other tools, better use of resources, speed gains
https://bugzilla.samba.org/show_bug.cgi?id=5124 Summary: Lessons to learn from other tools, better use of resources, speed gains Product: rsync Version: 3.0.0 Platform: All OS/Version: All Status: NEW Severity: enhancement Priority: P3 Component: core AssignedTo:
2006 Jan 27
23
5,000 concurrent calls system rollout question
Hi, we are currently considering different options for rolling out a large-scale IP PBX to handle around 3,000+ concurrent calls. Can this be done with Asterisk? Has it been done before? I really would like input on this. Thanks!
2015 Jun 17
3
EFI & PXE-booting: very slow TFTP performance on a VMWare test setup
Dear people on the Syslinux Mailinglist, Are there any known problems with the performance of TFTP in (U)EFI environments in general or maybe just on VMWare? Right now I'm running tests on two virtual environments using either VMWare Workstation 11.1.0 on a Fedora 21 system or VMWare Player 7.1.0 on an Ubuntu 14.10 system. Both virtual systems support PXE booting in UEFI mode. Both systems
2012 Mar 15
2
Usage Case: just not getting the performance I was hoping for
...Where "1.1" above represents server 1, brick 1, etc... We set up 4 gigabit network ports in each server (2 on motherboard and 2 as intel pro dual-nic PCI-express). The network ports were bonded in Linux to the switch giving us 2 "bonded nics" in each server with theoretical 2Gbps throughput aggregate per bonded nic pair. One network was the "san/nas" network for gluster to communicate and the other was the lan interface where Samba would run. After tweaking settings the best we could, we were able to copy files from Mac and Win7 desktops across the network bu...