Displaying 20 results from an estimated 305 matches for "gbps".
2019 Jun 24
2
Issue with dvd/cdrom drive
...libata version 3.00 loaded.
[ 1.879093] ata1: DUMMY
[ 1.879096] ata2: DUMMY
[ 1.879099] ata3: SATA max UDMA/133 abar m524288 at 0xc5700000 port 0xc5700200 irq 136
[ 1.879102] ata4: SATA max UDMA/133 abar m524288 at 0xc5700000 port 0xc5700280 irq 137
[ 2.183682] ata4: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
[ 2.183825] ata3: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
[ 7.183893] ata3.00: qc timeout (cmd 0xec)
[ 7.183908] ata3.00: failed to IDENTIFY (I/O error, err_mask=0x4)
[ 7.183960] ata4.00: qc timeout (cmd 0xa1)
[ 7.183974] ata4.00: failed to IDENTIF...
2024 Mar 14
1
Unable to utilize past 1 gbps
Dear Icecast,
We are operating a web radio on a 10 Gbps dedicated server line. The line
bandwidth is tested and available. The web radio is hosted in a Proxmox
virtual environment. We own the physical server itself and made sure to
allocate sufficient resources to the virtual machine.
We found that no matter what we do, overall uploa...
2024 Mar 14
1
Unable to utilize past 1 gbps
Dear Marius,
In addition, please find attached a screenshot of the Proxmox statistics.
Do you know of any Icecast servers that put out more than 1 Gbps for a
longer period? Just curious whether anyone has been able to go beyond
1 Gbps for an extended time.
Thank you!
Best,
Zsolt
zsolt makkai <gvmzsolt at gmail.com> wrote (on 2024 Mar 14, Thu, 23:39):
> Dear Marius,
>
> Our station is Megadanceradio in Hungary.
>
>...
2024 Mar 14
1
Unable to utilize past 1 gbps
...xsl For the slave:
http://45.67.158.94:8000/status.xsl
We are peaking at around 11:00 in the morning and 13:00 in the afternoon.
We have tested the bandwidth extensively with Speedtest. We tried iperf,
but it did not finish for a long time, so we stopped it.
Currently we are load balancing across two 10 Gbps dedicated lines, so for
now we are OK, but we do not know how long that will last. We are limiting
the number of listeners to 5200 on each server so as not to cripple the
system. When we approach the bandwidth limit we get loads of timeouts until
finally no...
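A bounded iperf run does finish; a minimal sketch, assuming iperf3 is
installed on both ends and using a placeholder address for the far end:

  # far end (any well-connected host you control)
  iperf3 -s

  # Icecast host: 8 parallel streams, 30-second run, upload direction
  iperf3 -c 203.0.113.10 -P 8 -t 30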
2024 Mar 14
1
Unable to utilize past 1 gbps
I don't know if any such limitation exists, but maxing out just shy of 1 Gbps
sounds more like an interface's line speed being limited/set to 1 Gbps
somewhere in your production chain. You wrote that you have tested the
bandwidth - what did you use? Iperf? Speedtest? Maybe consider scaling with
additional Icecast servers and a load balancer in front? And then just scale accor...
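One quick way to rule out a 1 Gbps hop is to check the negotiated speed at
each layer; a sketch with hypothetical interface names and VM ID:

  # physical host: negotiated speed of the uplink NIC
  ethtool eno1 | grep Speed

  # Proxmox: which port the bridge uses, and which NIC model the VM was given
  # (an emulated e1000 advertises only 1 Gbps; virtio is usually preferred)
  brctl show vmbr0
  grep ^net /etc/pve/qemu-server/100.conf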
2024 Mar 15
1
Unable to utilize past 1 gbps
...looking at a load balancer and scale with
several more VMs?
--
Marius
On 14.03.2024 23:52, zsolt makkai wrote:
> Dear Marius,
>
> In addition, please find attached a screenshot of the Proxmox statistics.
>
> Do you know of any Icecast servers that put out more than 1 Gbps for a
> longer period? Just curious whether anyone has been able to go beyond
> 1 Gbps for an extended time.
>
> Thank you!
>
> Best,
>
> Zsolt
>
>
> zsolt makkai <gvmzsolt at gmail.com> wrote (on 2024 Mar 14, Thu, 23:39):
>
> Dear Ma...
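For the load-balancer suggestion in this reply, a minimal HAProxy sketch
(names, addresses and ports are placeholders, not a tested config):

  frontend icecast_listeners
      bind *:8000
      mode http
      default_backend icecast_pool

  backend icecast_pool
      mode http
      balance leastconn
      server ice1 10.10.0.11:8000 check
      server ice2 10.10.0.12:8000 check
      server ice3 10.10.0.13:8000 check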
2024 Mar 15
1
Unable to utilize past 1 gbps
...r and scale with several more
> VMs?
>
> --
> Marius
> On 14.03.2024 23:52, zsolt makkai wrote:
>
> Dear Marius,
>
> In addition, please find attached a screenshot of the Proxmox statistics.
>
> Do you know of any Icecast servers that put out more than 1 Gbps for a
> longer period? Just curious whether anyone has been able to go beyond
> 1 Gbps for an extended time.
>
> Thank you!
>
> Best,
>
> Zsolt
>
>
> zsolt makkai <gvmzsolt at gmail.com> wrote (on 2024 Mar 14, Thu, 23:39):
>
>> Dear Mariu...
2019 Jun 24
0
Issue with dvd/cdrom drive
>
> [ 2.183682] ata4: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
> [ 2.183825] ata3: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
> [ 7.183893] ata3.00: qc timeout (cmd 0xec)
> [ 7.183908] ata3.00: failed to IDENTIFY (I/O error, err_mask=0x4)
> [ 7.183960] ata4.00: qc timeout (cmd 0xa1)
> [ 7.183974] a...
2014 Nov 11
2
10 Gbps adapter recommendation
Hi guys,
I'm yet to use 10 Gbps with CentOS, hence my question. I'm looking for a cheap (doh) adapter that won't cause me problems with CentOS. Any recommendations?
Cheers
Lucian
--
Sent from the Delta quadrant using Borg technology!
Nux!
www.nux.ro
2015 Jul 06
2
Live migration using shared storage in different networks
Hi!
I am building a KVM cluster that needs VM live migration.
My shared storage as well as the KVM hosts will be running
CentOS.
Because 10 Gbps Ethernet switches are very expensive at the
moment I will connect the KVM hosts to the storage by
cross-over cables and create private networks for each
connection (10.0.0.0/30 and 10.0.0.4/30).
The following diagram shows the topology
Management Management Management...
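A sketch of the addressing this topology implies, one /30 per cross-over
cable (interface names are hypothetical):

  # KVM host 1 <-> storage, first link (10.0.0.0/30)
  ip addr add 10.0.0.1/30 dev eth1    # on KVM host 1
  ip addr add 10.0.0.2/30 dev eth1    # on the storage server

  # KVM host 2 <-> storage, second link (10.0.0.4/30)
  ip addr add 10.0.0.5/30 dev eth1    # on KVM host 2
  ip addr add 10.0.0.6/30 dev eth2    # on the storage server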
2019 Jun 24
2
Issue with dvd/cdrom drive
On Mon, Jun 24, 2019 at 3:35 AM Pete Biggs <pete at biggs.org.uk> wrote:
> On Sun, 2019-06-23 at 22:13 -0400, doug schmidt wrote:
> > Hi,
> > I'm having an issue with my Thinkpad P70 laptop/workstation. This system
> > is a dual boot, Windows 10 Pro and CentOS 7. I have not needed to use the
> > cdrom until now, however the system does not
2019 Apr 04
2
[PATCH RFC 0/4] vsock/virtio: optimizations to increase the throughput
...,netdev=net0
I also did a test using TCP_NODELAY, just to be fair, because VSOCK
doesn't implement something like this.
In both cases I set the MTU to the maximum allowed (65520).
                          VSOCK                        TCP + virtio-net + vhost
                   host -> guest [Gbps]                  host -> guest [Gbps]
pkt_size  before opt.  patch 1  patches 2+3  patch 4                TCP_NODELAY
  64         0.060      0.102      0.102      0.096        0.16        0.15
 256         0.22       0.40       0.40       0.36         0.32        0.57
 512         0.42       0.82       0.85...
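For reference, a sketch of how the TCP side of such a comparison could be
reproduced with iperf3 (the excerpt does not show the original test tool;
device name and guest address are placeholders; -N sets TCP_NODELAY):

  # guest: raise the virtio-net MTU to the value used in the test
  ip link set dev eth0 mtu 65520
  iperf3 -s

  # host: 64-byte writes with Nagle disabled
  iperf3 -c 192.168.122.42 -N -l 64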
2009 Jul 26
2
SATA DVD Burner / AHCI / CentOS 4.7 (kernel: 2.6.9-78.0.22.EL)
...bit from the boot up (SATA ports 2-6 are empty at present):
Jul 26 16:29:27 sauron kernel: ACPI: PCI Interrupt 0000:00:09.0[A] -> GSI 20 (level, low) -> IRQ 201
Jul 26 16:29:27 sauron kernel: MSI INIT SUCCESS
Jul 26 16:29:27 sauron kernel: ahci 0000:00:09.0: AHCI 0001.0200 32 slots 6 ports 3 Gbps 0x3f impl SATA mode
Jul 26 16:29:27 sauron kernel: ahci 0000:00:09.0: flags: 64bit ncq led clo pmp pio
Jul 26 16:29:27 sauron kernel: ata1: SATA max UDMA/133 cmd 0xF8874100 ctl 0x0 bmdma 0x0 irq 58
Jul 26 16:29:27 sauron kernel: ata2: SATA max UDMA/133 cmd 0xF8874180 ctl 0x0 bmdma 0x0 irq 58
Jul 2...
2015 Jul 11
2
iSCSI on CentOS 6
Hi!
I am about to deploy a virtualisation cluster based on a storage
server with 10 Gbps interfaces for iSCSI and two computing nodes
running VMs in KVM that will access the storage via the 10 Gbps
network.
I am trying to find real use cases of people using CentOS 6 as an
iSCSI target but can't find any.
Anyone using a configuration similar to this one?
Thanks in advance!
Migue...
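For what it's worth, the stock CentOS 6 target stack is scsi-target-utils
(tgtd); a minimal sketch with a made-up IQN, backing volume and initiator
addresses:

  yum install scsi-target-utils

  # /etc/tgt/targets.conf (read by tgtd at start)
  <target iqn.2015-07.local.storage:kvm-pool>
      backing-store /dev/vg_storage/kvm_pool
      initiator-address 10.0.0.1
      initiator-address 10.0.0.5
  </target>

  chkconfig tgtd on && service tgtd start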
2007 Apr 30
3
Slow performance
...onfigured with a single 160 GiB OS drive (with
CentOS 5.0) and 4x500 GiB drives set up in a RAID-5 configuration. All
drives are set up for a 3.0 Gbps SATA link, and the motherboard also
supports that. Looking in dmesg when the system comes up, I see that
reflected as well:
ata1: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
ata1.00: ATA-7, max UDMA/133, 312581808 sectors: LBA48 NCQ (depth 0/32)
ata1.00: configured for UDMA/133
scsi1 : ahci
ata2: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
ata2.00: ATA-7, max UDMA/133, 976773168 sectors: LBA48 NCQ (depth 0/32)
ata2.00: configured for UDM...
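To separate per-drive speed from RAID-5 overhead, it can help to benchmark
both; a sketch with hypothetical device and mount names:

  # raw sequential read from one member drive
  hdparm -tT /dev/sdb

  # sequential read from the assembled array
  hdparm -tT /dev/md0

  # 1 GiB sequential write through the filesystem, flushed before reporting
  dd if=/dev/zero of=/mnt/raid/testfile bs=1M count=1024 conv=fdatasync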
2019 Apr 04
0
[PATCH RFC 0/4] vsock/virtio: optimizations to increase the throughput
...g TCP_NODELAY, just to be fair, because VSOCK
> doesn't implement something like this.
Why not?
> In both cases I set the MTU to the maximum allowed (65520).
>
>                           VSOCK                        TCP + virtio-net + vhost
>                    host -> guest [Gbps]                  host -> guest [Gbps]
> pkt_size  before opt.  patch 1  patches 2+3  patch 4                TCP_NODELAY
>   64         0.060      0.102      0.102      0.096        0.16        0.15
>  256         0.22       0.40       0.40       0.36         0.32        0.57
>  512         0.42...
2013 Sep 12
15
large packet support in netfront driver and guest network throughput
...from netback to netfront is segmented into MTU-sized pieces? Is GRO not supported in the guest?
I am seeing extremely low throughput on a 10 Gb/s link. Two Linux guests (CentOS 6.4 64-bit, 4 VCPUs and 4 GB of memory) are running on two different XenServer 6.1 hosts, and an iperf session between them shows at most 3.2 Gbps.
I am using a Linux bridge as the network backend switch. Dom0 is configured with 2940 MB of RAM.
In most cases, after a few runs the throughput drops to ~2.2 Gbps. top shows the netback thread in dom0 at about 70-80% CPU utilization. I have checked the dom0 network configuration and there...
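The GRO question can be checked (and toggled) from inside the guest with
ethtool; the interface name is a placeholder:

  # show current offload settings
  ethtool -k eth0 | egrep 'generic-receive-offload|generic-segmentation-offload|tcp-segmentation-offload'

  # try enabling GRO on the guest-side interface
  ethtool -K eth0 gro on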
2009 Jan 17
25
GPLPV network performance
Just reporting some iperf results. In each case, Dom0 is iperf server,
DomU is iperf client:
(1) Dom0: Intel Core2 3.16 GHz, CentOS 5.2, xen 3.0.3.
DomU: Windows XP SP3, GPLPV 0.9.12-pre13, file based.
Iperf: 1.17 Gbits/sec
(2) Dom0: Intel Core2 2.33 GHz, CentOS 5.2, xen 3.0.3.
DomU: Windows XP SP3, GPLPV 0.9.12-pre13, file based.
Iperf: 725 Mbits/sec
(3) Dom0: Intel Core2 2.33 GHz,
2004 Nov 10
5
etherbridge bottleneck
...en-2.0, 2.4.27-xen0 and 2.4.27-xenU.
My iperf numbers:
940 Mbps stock linux -> stock linux
470 Mbps stock linux -> xenU
533 Mbps xenU -> stock linux
ether bridge speed
533 Mbps xenU -> xen0 on the same host
422 Mbps xen0 -> xenU on the same host
loopback speed
4.4 Gbps stock linux
3.2 Gbps xenU
3.2 Gbps xen0
(stock linux is 2.4.25)