similar to: iSCSI on CentOS 6

Displaying 20 results from an estimated 2000 matches similar to: "iSCSI on CentOS 6"

2015 Jul 11
1
iSCSI on CentOS 6
On Jul 11, 2015 11:37 AM, "Miguel Barbosa Gonçalves" <m at mbg.pt> wrote: > > Hi! > > I am about to deploy a virtualisation cluster based on a storage > server with 10 Gbps interfaces for iSCSI and two computing nodes > running VMs in KVM that will access the storage via the 10 Gbps > network. > > I am trying to find real use cases of people using CentOS
2015 Jul 11
0
iSCSI on CentOS 6
Hi Mauricio! On 11/07/2015, at 13:37, Mauricio Tavares <raubvogel at gmail.com> wrote: On Jul 11, 2015 11:37 AM, "Miguel Barbosa Gonçalves" <m at mbg.pt> wrote: > > Hi! > > I am about to deploy a virtualisation cluster based on a storage > server with 10 Gbps interfaces for iSCSI and two computing nodes > running VMs in KVM that will access the
2015 Jul 06
2
Live migration using shared storage in different networks
Hi! I am building a KVM cluster that needs VM live migration. My shared storage as well as the KVM hosts will be running CentOS. Because 10 Gbps Ethernet switches are very expensive at the moment, I will connect the KVM hosts to the storage with cross-over cables and create a private network for each connection (10.0.0.0/30 and 10.0.0.4/30). The following diagram shows the topology Management
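The /30 subnets in the post above are a good fit for cross-over links: each one carries exactly two usable host addresses, one per end of the cable. A minimal sketch with Python's ipaddress module, using the subnet values from the post (which address goes to the storage port and which to the KVM host is an assumption for illustration):

```python
import ipaddress

# The two private point-to-point networks from the post.
links = [ipaddress.ip_network("10.0.0.0/30"),
         ipaddress.ip_network("10.0.0.4/30")]

for net in links:
    # .hosts() excludes the network and broadcast addresses,
    # leaving the two endpoints of one cross-over cable.
    hosts = [str(h) for h in net.hosts()]
    print(net, "->", hosts)
# 10.0.0.0/30 -> ['10.0.0.1', '10.0.0.2']
# 10.0.0.4/30 -> ['10.0.0.5', '10.0.0.6']
```

A /30 per link keeps each storage path in its own broadcast domain, so migration traffic on one cable cannot leak onto the other.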
2019 Jun 24
2
Issue with dvd/cdrom drive
> > > > > [root at darkness ~]# ls -al /dev/sr* > > ls: cannot access /dev/sr*: No such file or directory > > [root at darkness ~]# ls /dev/s* > > /dev/sda /dev/sda3 /dev/sdb1 /dev/sg0 /dev/stderr > > /dev/sda1 /dev/sda4 /dev/sdb2 /dev/sg1 /dev/stdin > > /dev/sda2 /dev/sdb /dev/sdb3 /dev/snapshot /dev/stdout > > [root at
2024 Mar 14
1
Unable to utilize past 1 gbps
Dear Marius, In addition, please find attached a screenshot of the statistics from Proxmox. Do you know any Icecast servers that put out more than 1 Gbps for a longer period? Just curious whether anyone has been able to go beyond 1 Gbps for an extended time. Thank you! Best, Zsolt zsolt makkai <gvmzsolt at gmail.com> wrote (on 14 Mar 2024, Thu, 23:39): > Dear Marius,
2024 Mar 15
1
Unable to utilize past 1 gbps
Hi Zsolt, looking at the metrics from Proxmox, it doesn't look like anything is breaking there. What about at the operating system level? Could there be some limits there? Have you checked the corresponding kernel or system logs? Which OS/distribution are you using? If nothing screams there, maybe start looking at a load balancer and scale out with several more VMs? -- Marius
2024 Mar 14
1
Unable to utilize past 1 gbps
Dear Marius, Our station is Megadanceradio in Hungary. Our status page for the master server is: http://45.67.158.93:8000/status.xsl For the slave: http://45.67.158.94:8000/status.xsl We peak at around 11:00 am and 1:00 pm. We have tested the bandwidth extensively with speedtest. We tried iperf, but it did not finish for a long time, so we stopped it. Currently
2024 Mar 15
1
Unable to utilize past 1 gbps
A quick Google search indicates that it might be a Proxmox limitation: https://forum.proxmox.com/threads/i-cant-exceed-1gb-of-internet-with-a-10gb-network-card.128710/ On Fri, Mar 15, 2024 at 12:24 PM Marius Flage <marius at flage.org> wrote: > Hi Zsolt, > > looking at the metrics from Proxmox it doesn't look like anything breaks > there. What about at the operating
2024 Mar 14
1
Unable to utilize past 1 gbps
I don't know if any such limitation exists, but maxing out just shy of 1 Gbps sounds more like an interface's line speed being limited/set to 1 Gbps somewhere in your production chain. You wrote that you have tested the bw - what did you use? Iperf? Speedtest? Maybe consider scaling with additional Icecast servers and a load balancer in front? And then just scale accordingly? This sounds
2024 Mar 14
1
Unable to utilize past 1 gbps
Dear Icecast, We are operating a web radio on a 10 Gbps dedicated server line. The line bandwidth is tested and available. The web radio is hosted in a Proxmox virtual environment. We own the physical server itself and made sure to allocate sufficient resources to the virtual machine. We found that no matter what we do, overall upload cannot go over 1 Gbps on one mount point. It
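A back-of-the-envelope check of what a 1 Gbps ceiling means for an Icecast mount: the link cap bounds the concurrent listener count for a given stream bitrate. A rough sketch of that arithmetic — the 128/320 kbps bitrates and the 10% protocol-overhead reserve are illustrative assumptions, not figures from this thread:

```python
def max_listeners(link_bps: float, stream_kbps: float, overhead: float = 0.10) -> int:
    """Listeners that fit on a link, reserving a fraction for TCP/IP overhead."""
    usable = link_bps * (1.0 - overhead)
    return int(usable // (stream_kbps * 1000))

for kbps in (128, 320):
    print(f"{kbps} kbps stream: "
          f"~{max_listeners(1e9, kbps)} listeners at 1 Gbps, "
          f"~{max_listeners(10e9, kbps)} at 10 Gbps")
# 128 kbps stream: ~7031 listeners at 1 Gbps, ~70312 at 10 Gbps
# 320 kbps stream: ~2812 listeners at 1 Gbps, ~28125 at 10 Gbps
```

The tenfold gap between the two columns is what a 1 Gbps bottleneck on a 10 Gbps line costs in audience capacity, which is why finding where the cap sits (NIC, bridge, or hypervisor) matters.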
2019 Apr 04
2
[PATCH RFC 0/4] vsock/virtio: optimizations to increase the throughput
On Thu, Apr 04, 2019 at 11:52:46AM -0400, Michael S. Tsirkin wrote: > I simply love it that you have analysed the individual impact of > each patch! Great job! Thanks! I followed Stefan's suggestions! > > For comparison's sake, it could be IMHO beneficial to add a column > with virtio-net+vhost-net performance. > > This will both give us an idea about whether the
2009 Jul 26
2
SATA DVD Burner / AHCI / CentOS 4.7 (kernel: 2.6.9-78.0.22.EL)
OK, it seems that the stock CentOS 4.7 kernel (2.6.9-78.0.22.EL i686) chokes on the nVidia SATA controller in AHCI mode. It is NOT able to deal with the SATA DVD burner I have. It is a "Sony Optiarc DVD Burner with LightScribe Black SATA Model AD-7241S-0B LightScribe Support". Is there some special magic I need to do? Here is the relevant bit from the boot up (SATA ports
2007 Apr 30
3
Slow performance
Hi folks. I'm posting this to both the Fedora and the CentOS lists in hopes that somewhere, someone can help me figure out what's going on. I have a dual Xeon 3GHz server that performs rather slowly when it comes to disk activity. The machine is configured with a single 160 GiB OS drive (with CentOS 5.0) and 4x500 GiB drives set up in a RAID-5 configuration.
2010 Feb 17
1
CentOS 5.3 host not seeing storage device
Maybe one of you has experienced something like this before. I have a host running CentOS5.3, x86_64 version with the standard qla2xxx driver. Both ports are recognized and show output in dmesg but they never find my storage device: qla2xxx 0000:07:00.1: LIP reset occured (f700). qla2xxx 0000:07:00.1: LIP occured (f700). qla2xxx 0000:07:00.1: LIP reset occured (f7f7). qla2xxx 0000:07:00.0: LOOP
2008 Mar 11
1
Question on SATA DVD using centos 5.1
On my machine I have SATA0: HD SATA1: HD these two drives are set as RAID1 SATA2: HD extra SATA3: DVD SATA4: external USB disk Snip from dmesg shows the ATAPI device being detected. ata3: SATA max UDMA/133 cmd 0x9e0 ctl 0xbe0 bmdma 0xe400 irq 10 ata4: SATA max UDMA/133 cmd 0x960 ctl 0xb60 bmdma 0xe408 irq 10 ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) ata3.00: ATAPI: PIONEER BD-ROM
2009 Sep 04
2
Xen & netperf
First, I apologize if this message has been received multiple times. I'm having problems subscribing to this mailing list: Hi xen-users, I am trying to decide whether I should run a game server inside a Xen domain. My primary reason for wanting to virtualize is that I want to isolate this environment from the rest of my server. I really like the idea of isolating the game server
2006 Dec 21
2
Help with SUSE 10.2 and Sangoma A104D
Hi all, how is everyone? I am trying to install asterisk-1.2.14, zaptel-1.2.12, libpri-1.2.4, addons-1.2.5, sounds-1.2.1, wanpipe-2.3.4-3 and hwec-utils-beta4-2.3.4, but the Sangoma drivers do not compile, which is why the udev device nodes for the board in "/dev/zap" (1-31, channel, ctl, pseudo, timer) are not created. But when I install a Digium TE110P board, the udev nodes are created and asterisk works perfectly. : )
2014 Nov 11
2
10 Gbps adapter recommendation
Hi guys, I'm yet to use 10 Gbps with CentOS, hence my question. I'm looking for a cheap (doh) adapter that won't cause me problems with CentOS. Any recommendations? Cheers Lucian -- Sent from the Delta quadrant using Borg technology! Nux! www.nux.ro
2017 May 12
2
Poor network performance
Hello, I have a problem with poor network performance on libvirt with qemu and openvswitch. I'm using libvirt 1.3.1, qemu 2.5 and openvswitch 2.6.0 on Ubuntu 16.04 currently. My connection diagram looks like below: +---------------------------+ +---------------------------+