search for: 1gbe

Displaying 20 results from an estimated 34 matches for "1gbe".

2020 Jan 17
3
Twin HDMI
Hello, I have an Intel NUC7PJYH running CentOS 6.8. This is a NUC with standard USB, 1GbE and 2*HDMI. Installation was no problem provided acpi=off. The problem is that by default the two displays are mirrored and I can't seem to separate them. I can only see one HDMI port from CentOS. I need to see both HDMI ports discretely. Can you please help? Regards, Mark Woolfson MW Consul...
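If both outputs are actually detected by the X server, one common approach is to place them side by side with xrandr instead of leaving them cloned. A minimal sketch, assuming the outputs are named HDMI1 and HDMI2 (the real names come from xrandr's own listing):

# list detected outputs and their current modes
xrandr --query
# extend HDMI2 to the right of HDMI1 instead of mirroring it (output names are assumptions)
xrandr --output HDMI1 --auto --output HDMI2 --auto --right-of HDMI1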
2010 Oct 04
1
samba 3.3 - poor performance (compared to NFS)
...3.3.8 client: Linux CentOS 5.5 cifs mount, "mount -t cifs -o rsize=32768,wsize=32768 //server/storage /storage" Client is on the same LAN as the server, albeit on different VLANs. Traffic is routed through Intel gigabit NICs and Cisco Nexus 5000/7000 series switches. The NAS server has a 4x 1GbE 802.3ad port channel set up with the Cisco 7000 switch, although I've run these tests both with and without the port channel with very similar results (as I'd expect, since the client only has a single 1GbE interface to begin with). (the 32768 numbers are the same as used in the NFS3/NFS...
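One quick sanity check on the server side is to confirm that the 802.3ad aggregation actually negotiated with the switch and that traffic spreads across the slaves; a sketch, assuming the bond is named bond0:

# show the bonding mode, LACP partner details and per-slave link state
cat /proc/net/bonding/bond0
# per-interface byte counters reveal whether the load really spreads across the slaves
cat /proc/net/dev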
2010 Jan 13
2
Bonding modes
I have a bonded interface running in mode 1, which is active/passive, and have no issue with this. I need to change it to mode 0 for an active/active setup. Is mode 0 dependent on the switch configuration? My setup is: the 2 links of the bonded interface are connected to different switches. When I change from mode 1 to mode 0, bond0 does not come up. These are the steps I performed 1) changed to options
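Mode 0 (balance-rr) typically does depend on the switch: the member ports must be grouped into a static EtherChannel on one switch (or a stacked pair), which is why it tends to fail when the two links go to independent switches. A minimal sketch of the CentOS-style configuration, with example addresses and interface names:

# /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
BOOTPROTO=static
IPADDR=192.0.2.10            # example address
NETMASK=255.255.255.0
ONBOOT=yes
BONDING_OPTS="mode=0 miimon=100"   # balance-rr; switch ports must be in a static port-channel

# /etc/sysconfig/network-scripts/ifcfg-eth0  (repeat for eth1)
DEVICE=eth0
MASTER=bond0
SLAVE=yes
ONBOOT=yes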
2017 Jul 12
1
Hi all
I have set up a distributed glusterfs volume with 3 servers. The network is 1GbE, and I ran a filebench test from a client. Refer to this link: https://s3.amazonaws.com/aws001/guided_trek/Performance_in_a_Gluster_Systemv6F.pdf According to it, the more servers Gluster has, the more throughput should be gained. I have tested the network; the bandwidth is 117 MB/s, so when I have 3 servers I should gain abo...
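Back-of-the-envelope numbers for that setup, assuming a pure distribute (no replica) volume: each file lives on exactly one brick, so a single client streaming a single file is still capped by one 1GbE hop. The extra servers only pay off when many files or many clients are in flight at once:

single client, single file  ≈ min(client NIC, one server NIC) ≈ 117 MB/s
3 clients on 3 files        ≈ up to 3 x 117 MB/s aggregate, if the files hash to different bricks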
2018 Aug 02
1
NFS/RDMA connection closed
...r setup is a cluster with one head node (NFS server) and 9 compute nodes (NFS clients). All the machines are running CentOS 6.9 2.6.32-696.30.1.el6.x86_64 and using the "Inbox"/CentOS RDMA implementation/drivers (not Mellanox OFED). (We also have other NFS clients but they are using 1GbE for NFS connection and, while they will still hang with messages like "NFS server not responding, retrying" or "timed out", they will eventually recover and don't need a reboot.) On the server (which is named pac) I will see messages like this: Jul 30 18:19:38 pac kernel:...
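For comparison while debugging, the RDMA clients are distinguished from the 1GbE ones by their mount options; a sketch, assuming the conventional NFS/RDMA port and example hostnames/export paths:

# NFS over RDMA (IPoIB address of the server; 20049 is the conventional NFS/RDMA port)
mount -t nfs -o rdma,port=20049,vers=3 pac-ib:/export /mnt/scratch
# plain NFS over the 1GbE path, as used by the clients that only hang temporarily
mount -t nfs -o tcp,vers=3 pac:/export /mnt/scratch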
2011 May 19
2
[PATCH] arch/tile: add /proc/tile, /proc/sys/tile, and a sysfs cpu attribute
...or. For > example: > > # cat /proc/tile/board > board_part: 402-00002-05 > board_serial: NBS-5002-00012 > chip_serial: P62338.01.110 > chip_revision: A0 > board_revision: 2.2 > board_description: Tilera TILExpressPro-64, TILEPro64 processor (866 MHz-capable), 1 10GbE, 6 1GbE > # cat /proc/tile/switch > control: mdio gbe/0 I think it's ok to have it below /sys/hypervisor, because the information is provided through a hypervisor ABI, even though it describes something else. This is more like /sys/firmware, but the boundaries between that and /sys/hypervisor ar...
2017 Aug 13
0
throughput question
Hi everybody, I have some questions about throughput for a glusterfs volume. I have 3 servers for glusterfs, each with one brick and 1GbE for their network; I have made a distributed replica 3 volume with these 3 bricks. The network between the clients and the servers is 1GbE. Refer to this link: https://s3.amazonaws.com/aws001/guided_trek/Performance_in_a_Gluster_Systemv6F.pdf I have set up clients at 4 times the server number, I mean:...
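With replica 3, every byte a client writes goes to all three bricks, so the client's own 1GbE uplink is split three ways (assuming the client-side replication done by the native FUSE client). Rough expectations:

write, single client  ≈ 117 MB/s / 3 ≈ 39 MB/s
read, single client   ≈ 117 MB/s (reads are served from one replica)
aggregate writes      scale with the number of clients, until the servers' own 1GbE links saturate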
2009 Oct 14
2
Best practice settings for channel bonding interface mode?
Hi, maybe there are some best-practice suggestions for the "best mode" for a channel bonding interface? Or in other words, when should/would I use which mode? E.g. I have some fileservers connected to the users' LAN and to some iSCSI storages, or some webservers only connected to the LAN. The switches are all new Cisco models. I've read some docs (1), (2) and (3) so the theory
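A rough rule of thumb that keeps coming up, offered as a sketch rather than a definitive matrix:

mode 1 (active-backup)   no switch configuration needed; failover only, no extra bandwidth
mode 0 (balance-rr)      needs a static EtherChannel on one switch; can stripe a single flow
mode 4 (802.3ad/LACP)    needs LACP on the switch (Cisco: channel-group N mode active); per-flow balancing
mode 6 (balance-alb)     no switch configuration; balances via ARP tricks, same-LAN traffic only

For the iSCSI links specifically, multipathing at the iSCSI/dm-multipath layer is often preferred over bonding.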
2020 Jan 18
1
Twin HDMI
...Fleur On Sat 18 Jan 2020 at 01:07, John Pierce <jhn.pierce at gmail.com> wrote: > > On Fri, Jan 17, 2020 at 8:53 AM Mark (Netbook) <mrw at mwcltd.co.uk> wrote: > > > I have an Intel NUC7PJYH running CentOS 6.8. This is a NUC with standard > > USB, 1GbE and 2*HDMI. > > > FWIW, that has a Pentium Silver J5005, which has Intel® UHD Graphics 605 > > that's fairly new stuff, and CentOS 6 is pretty old now. > > -- > -john r pierce > recycling used bits in santa cruz > _______________________________________________ > C...
2009 Jul 21
2
Best Practices for PV Disk IO?
I was wondering if anyone's compiled a list of places to look to reduce disk IO latency for Xen PV DomUs. I've gotten reasonably acceptable performance from my setup (Dom0 as an iSCSI initiator, providing phy volumes to DomUs), at about 45MB/sec writes and 80MB/sec reads (this is to an IET target running in blockio mode). As always, reducing latency for small disk operations
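A couple of low-hanging knobs that often come up for this kind of stack, offered as a sketch rather than a checklist (device names are examples):

# inside the domU: avoid double-scheduling, let dom0/the target do the real elevator work
echo noop > /sys/block/xvda/queue/scheduler
# in dom0, on the iSCSI-backed block device exported to the guest as phy:
echo deadline > /sys/block/sdb/queue/scheduler
blockdev --setra 1024 /dev/sdb      # bump readahead for sequential workloads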
2008 May 05
4
RE: Running MS Terminal Server and MS Small Business Serverunder Xen?
Jamie J. Begin wrote: > > I have the crazy idea to run both Microsoft Terminal Server > and Small Business Server (SBS is a license-restricted > version of Windows Server with Exchange for shops with <50 > users) in separate HVM domUs. Assuming that I have beefy > enough underlying hardware, how likely do you think this > would work? I know that Exchange
2012 Jun 23
7
GPLPV xennet bsod when vcpu>15
Hello, I installed the signed drivers from http://wiki.univention.de/index.php?title=Installing-signed-GPLPV-drivers and I ran into a BSOD on a Windows 2008 Server R2 Enterprise domU with a large number of vcpu's. The BSOD is related to xennet.sys. After some trials I found that it runs fine up to 15 cores. From 16 or more, the BSOD kicks in when booting the domU. The hardware (4 times
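Until the xennet.sys issue itself is understood, the obvious workaround is to keep the guest below the observed threshold in the domU configuration; a sketch for an xm/xl-style config file:

# domU config: stay at or below the 15-vCPU ceiling observed with these GPLPV builds
vcpus = 15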
2008 May 05
7
iscsi conn error: Xen related?
Hello all, I got some severe iscsi connection loss on my dom0 (Gentoo 2.6.20-xen-r6, xen 3.1.1). Happening several times a day. open-iscsi version is 2.0.865.12. Target iscsi is the open-e DSS product. Here is a snip of my messages log file: May 5 16:52:50 ying connection226:0: iscsi: detected conn error (1011) May 5 16:52:51 ying iscsid: connect failed (111) May 5 16:52:51 ying iscsid:
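When the target itself is healthy, transient drops like these are often ridden out by loosening the open-iscsi NOP-out and replacement timeouts; a sketch of the relevant /etc/iscsi/iscsid.conf knobs (values are examples, not recommendations):

node.conn[0].timeo.noop_out_interval = 10      # seconds between NOP-out pings
node.conn[0].timeo.noop_out_timeout = 30       # how long to wait for the NOP-in reply
node.session.timeo.replacement_timeout = 120   # how long I/O is queued before erroring out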
2016 Jun 20
1
bad iscsi performance after upgrade to CentOS 7.2
Hi all, after I upgraded a physical server (SUN FIRE X4170) from CentOS 6.8 to 7.2 I am not able to get the same iSCSI read performance. The server is connected to HP P2000 storage via 2 x 1GbE Ethernet. CentOS 6.8 gives me full read performance on raw iSCSI devices /dev/sdxx at 115MB/s. CentOS 7.2 allows only 90-100MB/s; read performance varies and is not stable like on 6.8. The multipath performance on 7.2 is even worse: CentOS 6.8 allows reading at a full and stable 220MB/s. Cent...
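One thing worth ruling out is the multipath I/O distribution policy, since the EL6 and EL7 device-mapper-multipath stacks do not necessarily apply the same per-array defaults. A hedged sketch of a per-device section in /etc/multipath.conf; the vendor/product strings for a P2000 are assumptions to be checked against the output of multipath -ll:

devices {
    device {
        vendor "HP"
        product "P2000.*"
        path_grouping_policy multibus          # use both 1GbE paths at once
        path_selector "round-robin 0"
        rr_min_io_rq 1                         # EL7 request-based multipath knob
    }
}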
2020 Jan 18
0
Twin HDMI
On Fri, Jan 17, 2020 at 8:53 AM Mark (Netbook) <mrw at mwcltd.co.uk> wrote: > I have an Intel NUC7PJYH running CentOS 6.8. This is a NUC with standard > USB, 1GbE and 2*HDMI. FWIW, that has a Pentium Silver J5005, which has Intel® UHD Graphics 605; that's fairly new stuff, and CentOS 6 is pretty old now. -- -john r pierce recycling used bits in santa cruz
2011 Sep 16
0
QoS to prioritise netlogon traffic over SMB file shares
Exactly what it says in the subject really. Looking for a way to prioritise the netlogon requests on the network above the file share transfers. We have a few desktops that are used for design files and are connected by 1Gbe along with a fleet of WiFi (802.11g generally) enabled laptops. Trying to logon while the desktops are transferring large files can cause some nasty issues. Thinking about giving the Desktops a dedicated port for their connection, but would also like to be able to QoS prioritise. What ports do all...
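Since classic domain logons ride the same TCP 139/445 as the file transfers themselves, a simple port-based classifier cannot tell them apart; what can work is de-prioritising the known bulk talkers instead. A sketch with tc on the server's uplink, where the desktop subnet, interface name and rates are all placeholders:

# cap bulk traffic to the design desktops so interactive/logon traffic keeps headroom
tc qdisc add dev eth0 root handle 1: htb default 20
tc class add dev eth0 parent 1: classid 1:10 htb rate 700mbit ceil 700mbit   # bulk desktops
tc class add dev eth0 parent 1: classid 1:20 htb rate 300mbit ceil 1000mbit  # everything else
tc filter add dev eth0 parent 1: protocol ip u32 match ip dst 192.168.1.0/24 flowid 1:10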
2012 Mar 15
2
Usage Case: just not getting the performance I was hoping for
...ranted, but I was really hoping for better performance than this, given that raw drive speeds using dd show that we can write at 125+ MB/s to each "brick" 2TB disk. Our network switch is a decent Layer 2 D-Link switch (actually, 2 of them stacked with a 10Gb cable), and we are only using 1GbE NICs rather than InfiniBand or 10GbE in the servers. Overall, we spent about 22K on servers, where drives were more than 1/3 of that cost due to the Thailand flooding. My team and I have been tearing apart our entire network to try to see where the performance was lost. We've questione...
2009 Oct 10
11
SSD over 10gbe not any faster than 10K SAS over GigE
...SD c1xxxxx c1xxxxxx c1xxxxx. I have a single 10GbE card with a single IP on it. I created an NFS filesystem for vmware by using: zfs create SSD/vmware. I had to set permissions for Vmware anon=0, but that's it. Below is what zpool iostat reads: File copy 10GbE to SSD -> 40M max File copy 1GbE to SSD -> 5.4M max File copy SAS to SSD internal -> 90M File copy SSD to SAS internal -> 55M Top shows no matter what I always have 2.5 G free and every other test says the same thing. Can anyone tell me why this seems to be slow? Does 90M mean MegaBytes or MegaBits? Thanks, De...
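On the units question, the rough wire-speed arithmetic is:

1GbE   ≈ 1000 Mbit/s / 8 ≈ 125 MB/s theoretical, roughly 110-117 MB/s in practice
10GbE  ≈ 10000 Mbit/s / 8 ≈ 1250 MB/s theoretical

so 90M for the internal SAS-to-SSD copy is believable as megabytes per second, while 40 MB/s over 10GbE and 5.4 MB/s over 1GbE are nowhere near wire speed, which suggests the bottleneck sits above the network layer rather than in it.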
2016 May 24
0
Improving 30-40MB/sec Sequential Reads
...io T320 10Gbe card as well. I've benchmarked the local disk read speed and I get around 565MB/sec reading a file that far-exceeds the NAS' RAM. I've also used iperf to confirm the network interfaces aren't the bottleneck. A single instance/thread of iperf pushes 921Mb/sec over the 1Gbe line and 5.07 Gb/sec over the 10Gbe. There's not much else going on my NAS at the time I'm seeing these slow transfers. A peek at top shows the overal CPU utilization staying under 20%, a peek at the individual cores during the slow transfers doesn't show any one core ever exceeding...