search for: 10gbps

Displaying 18 results from an estimated 86 matches for "10gbps".

2019 Jan 25
1
10Gbps network interfaces under KVM
Hi all, Does anyone know if there is any plan to support Intel 10Gbps NICs, for example the 82575, 82576, 82580 or 82598EB Ethernet controllers, under KVM for virtual guests, the way it is done with the e1000 driver? Regards, C. L. Martinez
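For context, a hedged sketch of how an emulated NIC model is selected for a KVM guest; the disk image name is a placeholder, and the -net nic,model=? listing form applies to older QEMU builds:

    # Ask this QEMU build which NIC models it can emulate (older -net syntax):
    qemu-system-x86_64 -net "nic,model=?"

    # Boot a guest with an emulated e1000 NIC and user-mode networking:
    qemu-system-x86_64 -enable-kvm -m 2048 -hda guest.img \
        -net nic,model=e1000 -net user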
2008 Dec 19
3
dom0 using only a single CPU (networking)
Hello, I'm using a server with 10Gbps network interfaces and 4 CPUs, running several domUs. The problem is that in this setup, under high network load, dom0 turns out to be the bottleneck, using only a single CPU, which is saturated at 100%. So the network speed is limited to much less than 10Gbps. How could I make dom0 use more CPUs in...
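One knob often mentioned in this context is giving dom0 more vCPUs on the Xen hypervisor command line; a minimal sketch, assuming a Xen 3.x-style grub entry (paths and values are illustrative only):

    # In the grub entry for the hypervisor (e.g. /boot/grub/menu.lst):
    #   kernel /boot/xen.gz dom0_max_vcpus=4 dom0_vcpus_pin
    #   module /boot/vmlinuz-2.6.18-xen ...
    # After a reboot, confirm how many vCPUs dom0 actually has:
    xm vcpu-list Domain-0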
2013 Sep 03
0
Request for suggestions -- inexpensive 10Gbps NAS / 300TB JBOD with CentOS head
Greetings, I am located in India. Any experience with or suggestions for building blocks, preferably using copper (and not a fiber SAN)? Enclosures, technologies (iSCSI etc.). Many US companies are very picky about export regulations for such humongous data appetites. The application is mainly A/V or VFX file handling. -- Regards, Rajagopal
2014 Feb 07
1
Re: SR-IOV: no traffic isolation between VFs with Broadcom 10Gbps cards
> Instead of using <hostdev>, you should instead try using <interface type='hostdev'>, which will allow you to specify the mac address for the > interface directly in the guest's XML config (rather than needing to do > it separately). Here's a link to documentation on this feature: > >
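A minimal sketch of the <interface type='hostdev'> form being described, attached with virsh; the MAC address, the VF's PCI address, and the guest name guest1 are placeholders:

    cat > vf-iface.xml <<'EOF'
    <interface type='hostdev' managed='yes'>
      <mac address='52:54:00:6d:90:02'/>
      <source>
        <address type='pci' domain='0x0000' bus='0x02' slot='0x10' function='0x0'/>
      </source>
    </interface>
    EOF
    virsh attach-device guest1 vf-iface.xml --config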
2010 Feb 08
4
Experiencing continual eth0 link up/down on a 10G Chelsio NIC (cxgb3 driver)
...re-version: 1.0-0 Driver for eth2 driver: e1000e version: 1.0.2-k2 firmware-version: 1.0-0 The last 3-4 weeks, I have noticed that the eth0 link keeps going up and down, confirmed by "dmesg" output as well in /var/log/messages (dmesg sample shown below). eth0: link down eth0: link up, 10Gbps, full-duplex eth0: link down eth0: link up, 10Gbps, full-duplex eth0: link down eth0: link up, 10Gbps, full-duplex The kernel RPM verification shows no errors # uname --kernel-release 2.6.18-164.2.1.el5.plus # rpm --verify kernel-2.6.18-164.2.1.el5.plus The hardware vendor tells me that the car...
2014 Feb 05
0
Re: SR-IOV: no traffic isolation between VFs with Broadcom 10Gbps cards
On 02/04/2014 05:10 PM, Yoann Juet wrote: > Hi all, > > I'm testing on debian/unstable SR-IOV feature with Broadcom BCM57810 > cards and KVM hypervisor: > > Compiled against library: libvirt 1.2.1 > Using library: libvirt 1.2.1 > Using API: QEMU 1.2.1 > Running hypervisor: QEMU 1.7.0 > > bnx2x > -> firmware 7.8.17 > -> driver from kernel 3.12.7 >
2020 May 28
1
Performance Degradation when copying >1500 files to mac
...7.13. The macOS client is running Darwin 18.7.0 (I have also tested and confirmed the behavior on Mojave 10.14.6) and writing to SSD, and I confirmed that comparable FS operations locally do not incur the performance penalty, only from SMB to disk. iperf gives the connection its full line rate of 10Gbps, and disk read performance of the SMB server is >10Gbps. I've tested the same operations from Linux <-> Linux via the SMB share, but no performance drop-off was seen. Further, copying large files back and forth also showed no problem. I'm wondering if anyone might have any knowledge...
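For reference, a line-rate check like the one mentioned is typically run with iperf3 (or classic iperf); the hostname smbserver and the stream/time options are placeholders:

    iperf3 -s                        # on the SMB server
    iperf3 -c smbserver -P 4 -t 30   # on the client: 4 parallel streams for 30 seconds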
2016 Jan 30
1
bonding (IEEE 802.3ad) not working with qemu/virtio
...g a whitelist is just ridiculous. > > There should be a default speed/duplex setting for such devices as well. > We can pick one that will be used universally for these kinds of devices. > Yes, that's the other thing - the default setting. From a brief grepping I see that veth uses 10Gbps, tun uses 10Mbps and batman-adv uses 10Mbps. Adding a default get_settings to ethtool that virtual devices can use, returning 10Gbps with the settings set the way veth does, sounds good to me. What do you think? In fact they all set the same settings (apart from speed) so we can consolid...
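The speed a virtual device reports can be checked from user space; a small sketch, assuming a kernel recent enough to report it and using placeholder veth names (run as root):

    ip link add veth-a type veth peer name veth-b
    ip link set veth-a up && ip link set veth-b up
    ethtool veth-a | grep -E 'Speed|Duplex'
    ip link del veth-a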
2011 Feb 14
8
e1000 gig nic howto?
We have a 10Gbps connection to our server, so I need to be able to create VMs using the faster e1000 NICs instead of the default Realtek 100Mbps ones, but I'm not sure how to go about it. Is there a walkthrough or "howto", or can someone point me at some simple instructions? I'm using xen-3...
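A hedged sketch of what the vif line for a Xen 3.x HVM guest typically looks like with the e1000 model; the config path, MAC address and bridge name are placeholders:

    # In the domU config (e.g. /etc/xen/guest1.cfg):
    #   vif = [ 'type=ioemu, model=e1000, mac=00:16:3e:12:34:56, bridge=xenbr0' ]
    # Recreate the guest so the new model takes effect:
    xm create /etc/xen/guest1.cfg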
2014 Feb 04
2
SR-IOV: no traffic isolation between VFs with Broadcom 10Gbps cards
Hi all, I'm testing on debian/unstable SR-IOV feature with Broadcom BCM57810 cards and KVM hypervisor: Compiled against library: libvirt 1.2.1 Using library: libvirt 1.2.1 Using API: QEMU 1.2.1 Running hypervisor: QEMU 1.7.0 bnx2x -> firmware 7.8.17 -> driver from kernel 3.12.7 8 VFs are created on the first PF. For each VF, a specific mac address is set manually using "ip
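One common way to do what is described above, sketched with a placeholder interface name, VF count and MAC addresses (some drivers take the VF count via a module parameter such as num_vfs instead of sysfs); run as root:

    echo 8 > /sys/class/net/eth4/device/sriov_numvfs   # create 8 VFs on the PF
    ip link set dev eth4 vf 0 mac 52:54:00:00:00:10    # pin the MAC of VF 0
    ip link set dev eth4 vf 1 mac 52:54:00:00:00:11
    ip link show dev eth4                              # lists the VFs and their MACs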
2008 Jun 02
2
RE: Largish filesystems [was Re: XFS install issue]
...acing an 80TB SAN system based on StorNext with an Isilon system with 10G network connections. If there were a way to create a Linux (CentOS) 100TB-500TB or larger clustered file system, with the nodes connected via InfiniBand, that was easily manageable and with throughput that could support multiple 10Gbps Ethernet connections, I would be very interested. And once more, thanks for the fast response. Mike
2013 Dec 05
2
Ubuntu GlusterFS in Production
Hi, Is anyone using GlusterFS on Ubuntu in production? Specifically, I'm looking at using the NFS portion of it over a bonded interface. I believe I'll get better speed than using the gluster client across a single interface. Setup: 3 servers running KVM (about 24 VMs), 2 NAS boxes running Ubuntu (13.04 and 13.10). Since Gluster NFS does server-side replication, I'll put
2012 May 12
0
bug report for network device i82599er: eepro100_write4: Assertion `!"feature is missing in this emulation: " "unknown longword write"' failed.
...missing in this emulation: " "unknown longword write"' failed. The domain crashes. I suspect this occurs because I am trying to build the domain with a vif where model=i82559er. Changing the model to e1000 does build the domain. I would prefer to have the ability to use 10Gbps because my physical NIC is also a 10Gbps NIC: 02:00.0 Ethernet controller: Intel Corporation 82599EB 10-Gigabit SFI/SFP+ Network Connection (rev 01) For further contact please be aware I am not receiving xen-devel emails, so please include me in CCs when more info is required. Thank yo...
2018 May 09
0
3.12, ganesha and storhaug
...onflicted need: the NFS is needed to resolve the mmap issue (and hopefully provide some speed-up that the users see), but the real high-speed need is possibly through pNFS clients to NFS-Ganesha, and those connections will be over InfiniBand, so only TCP and no RDMA. That drops my raw speed from 40Gbps to 10Gbps (ConnectX-2 gear). I can run gnfs on RDMA (40Gbps!) but with no NFSv4.1 for pNFS AND a manual mess setting up HA for connections. The last connection fun is that not all clients connect over InfiniBand, so I do have standard Ethernet ports as well (40Gbps feeding multiple switches for 10Gbps and 1Gbps). --...
2008 Mar 14
3
about Xen accelerated network plugin modules
hi, I want to know some details about these network acceleration modules. 1) What's the main purpose of this plugin module? 2) Is there any requirement for a hardware NIC, or another hardware platform? 3) I've not found the code in the Xen 3.1 code set, so has it been added to the Xen source code yet? Thanks in advance
2018 Sep 04
3
authentication performance with 4.7.6 -> 4.7.8 upgrade (was: Re: gencache.tdb size and cache flush)
...would start to deteriorate after a little while (would take more than 10 seconds). So we now delete it (and locks/locking.tdb that also tends to grow forever) and restart our samba processes every morning at 7 am - which gives us much more stable performance. > > - Servers with 256GB of RAM, 10Gbps ethernet interfaces and around 110TB of disk per server. > - FreeBSD 11.2-p2 > - Samba 4.7.6 with some local patches to allow (much) bigger socket listening queues in order to handle the case of many clients connecting at the same time. > > (We are trying to upgrade to a more recent Sa...
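A rough sketch of the daily cleanup described above, expressed as a root crontab entry; the tdb paths and the FreeBSD service name are assumptions, not taken from the post:

    # m h dom mon dow  command
    0 7 * * *  rm -f /var/db/samba4/gencache.tdb /var/db/samba4/locks/locking.tdb && service samba_server restart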
2012 Jan 18
4
Performance of Maildir vs sdbox/mdbox
...to be migrated in over the course of 3-4 months. Some details on our new environment: * Approximately 1.6M+ mailboxes once all legacy systems are combined * NetApp FAS6280 storage w/ 120TB usable for mail storage, 1TB of FlashCache in each controller * All mail storage presented via NFS over 10Gbps Ethernet (Jumbo Frames) * Postfix will feed new email to Dovecot via LMTP * Dovecot servers have been split based on their role - Dovecot LDA Servers (running LMTP protocol) - Dovecot POP/IMAP servers (running POP/IMAP protocols) - LDA & POP/IMAP servers are segmented into geographic...
2016 Sep 01
1
[PATCH v3 kernel 0/7] Extend virtio-balloon for fast (de)inflating & fast live migration
...ot of CPU cycles and > > network bandwidth. We put the guest's free page information in a bitmap and > > send it to the host through the virtio-balloon virtqueue. For an idle 8GB > > guest, this can help shorten the total live migration time from > > 2 sec to about 500ms in a 10Gbps network environment. > > I just read the slides of this feature from the recent KVM Forum; cloud > providers care more about live migration downtime, to avoid customers' > perception, than about total time. However, this feature will increase downtime > when acquiring the benefit of reduci...