Displaying 20 results from an estimated 20000 matches similar to: "IP over IB support."
2007 Dec 21
0
FW: faking IB multi-rail with multihomed clients
Guys,
For those of you not party to the original email exchange, this is
about how we can aggregate bandwidth across both rails of a dual-rail
IB cluster using current lustre/LNET (i.e. before we have implemented
transparent LNET support for failover and bandwidth aggregation across
multiple networks).
The following 2 points are fundamental - everything below is a direct
consequence...
1. LNET
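A sketch of how this is usually expressed with current LNET: each rail is declared as its own o2ib network in the module options, and ip2nets splits the clients between the rails by address (the interface names and address ranges below are hypothetical):

```
# /etc/modprobe.d/lustre.conf - clients 1-50 use rail ib0, 51-100 use rail ib1
options lnet ip2nets="o2ib0(ib0) 192.168.0.[1-50]; o2ib1(ib1) 192.168.0.[51-100]"
```

A dual-rail server would instead appear on both networks, e.g. `options lnet networks="o2ib0(ib0),o2ib1(ib1)"`, so each client half reaches it over its own rail.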
2014 May 20
1
CTDB + InfiniBand Public IP Addresses
Hi,
I'm trying to set up a small CTDB cluster running in an IPoIB InfiniBand
network. When I try to start up a cluster with a set of public IP
addresses, the public addresses do not come online. So, I removed the
public address configuration and started a single node up by hand, then
tried to add a public address as follows:
[root at gp-1-0 ctdb]# ctdb ip
Public IPs on node 0
[root at
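For context, the file CTDB reads (pointed to by CTDB_PUBLIC_ADDRESSES in /etc/sysconfig/ctdb) is one address plus interface per line; the addresses and the IPoIB interface name below are examples only:

```
# /etc/ctdb/public_addresses
10.0.0.10/24 ib0
10.0.0.11/24 ib0
```

If the interface column does not name the actual IPoIB interface on each node, the public addresses will fail to come online much as described above.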
2008 Mar 07
2
Multihomed question: want Lustre over IB andEthernet
Chris,
Perhaps you need to perform some write_conf-like command. I'm not sure if this is needed in 1.6 or not.
Shane
----- Original Message -----
From: lustre-discuss-bounces at lists.lustre.org <lustre-discuss-bounces at lists.lustre.org>
To: lustre-discuss <lustre-discuss at lists.lustre.org>
Sent: Fri Mar 07 12:03:17 2008
Subject: Re: [Lustre-discuss] Multihomed
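The "write_conf-like command" referred to is presumably tunefs.lustre --writeconf, which regenerates the configuration logs so that newly added NIDs are picked up; a sketch (device paths are hypothetical), run only with the whole filesystem stopped:

```
# Stop all Lustre targets first; then on the MGS/MDT, then on each OST:
tunefs.lustre --writeconf /dev/sdb    # hypothetical device path
# Remount the MGS/MDT first, then the OSTs, then the clients.
```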
2012 Jul 13
1
R combining many vectors of predictable name into one data frame
G'day R (power) users,
I have a many vectors, called:
ib1
ib2
ib3
...
ib100
and I would like them in one data frame (df) such that:
> df
ib1 ib2 ib3 ib4 ..... ib100
x x x x x
x x x x x
x x x x x
I have attempted:
hold.list <- list(objects(pattern="ib"))
df <- data.frame(hold.list)
but that
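For what it's worth, a working sketch (assuming all the vectors have the same length): objects() returns only the names as character strings, so the data frame ends up holding names rather than values, whereas mget() fetches the objects themselves:

```r
ib1 <- 1:3; ib2 <- 4:6; ib3 <- 7:9                  # toy stand-ins for the real vectors
nms <- ls(pattern = "^ib[0-9]+$")                   # matching object names, as strings
nms <- nms[order(as.numeric(sub("ib", "", nms)))]   # numeric order: ib1, ib2, ..., ib100
df  <- data.frame(mget(nms))                        # fetch the objects, bind as columns
```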
2016 May 25
3
Recommendations for Infiniband with CentOS 6.7
We have a new install of CentOS 6.7 with infiniband support installed.
We can see the card in hardware and we can see the mlx4 drivers loaded
in the kernel but cannot see the card as an ethernet interface, using
ifconfig -a. Can you recommend an install procedure to see this as an
ethernet interface?
Thanks
On 05/25/2016 07:32 AM, Fabian Arrotin wrote:
> On 25/05/16 03:08, Pat Haley
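Assuming a ConnectX-family VPI card (which the mlx4 driver suggests), two steps are usually involved; the PCI device path below is hypothetical:

```
modprobe mlx4_en    # the Ethernet side of the mlx4 driver stack
# Switch port 1 of the HCA from InfiniBand to Ethernet:
echo eth > /sys/bus/pci/devices/0000:03:00.0/mlx4_port1
ip link show        # the port should now show up as an ethN interface
```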
2010 Apr 07
6
using ipoib with xcp
Hello,
I have been playing with XCP for a while now, and must say I'm very
excited about the technology. I had no prior experience with Xen so it has
taken me a while to understand the concepts, but now I feel the most important
issues are solved and I've purchased some hardware to build my (tiny) cloud
on.
The box is a Supermicro 1026TT-IBXF, so I have 2 x Ethernet and 1 x
2010 Jun 22
7
lnet infiniband config
Hi all,
I'm getting my feet wet in the infiniband lake and of course I run into
some problems.
It would seem I got the compilation part of sles11 kernel 2.6.27 +
Lustre 1.8.3 + ofed 1.4.2 right, because it allows me to see and use the
infiniband fabric, and because ko2iblnd loads without any complaints.
In /etc/modprobe.d/lustre (this is a Debian system, hence this subdir of
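For reference, the ko2iblnd side of that file typically needs no more than a single line; the interface name here is an assumption:

```
# /etc/modprobe.d/lustre
options lnet networks="o2ib0(ib0)"
```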
2011 Mar 23
2
OFFTOPIC :: IB hardware choice
Hi! I would need an advice from those that use IB (as admins :) )
i have a choice between :
1. Mellanox InfiniHost III Lx HCA card, single-port CX4, DDR, PCIe
x8, mem-free, tall bracket, RoHS R5
2. QLogic Single Port 20 Gb InfiniBand to x16 PCI Express Adapter
(Single Pack)
aside from the price, is there anything else that could help me make a
distinction between these two?
(these will be used in
2013 Jun 10
1
Mellanox SR-IOV IB PCI passthrough in Xen - MSI-X pciback issue
Greetings Xen user community,
I am interested in using Mellanox ConnectX cards with SR-IOV capabilities to passthrough pci-e Virtual Functions (VFs) to Xen guests. The hope is to allow for the use of InfiniBand directly within virtual machines and thereby enable a plethora of high performance computing applications that already leverage InfiniBand interconnects. However, I have run into some
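A rough outline of the pieces involved, with hypothetical VF counts and PCI addresses (the MSI-X/pciback problem itself is a separate matter):

```
# /etc/modprobe.d/mlx4_core.conf - create VFs, leave them unbound in dom0
options mlx4_core num_vfs=8 probe_vf=0

# Hand a VF to pciback, then to the guest (PCI address hypothetical):
xl pci-assignable-add 0000:03:00.1
# In the guest config:  pci = [ '0000:03:00.1' ]
```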
2011 Jan 14
1
mixing tcp/ip and ib/rdma in distributed replicated volume for disaster recovery.
Hi,
we would like to build a gluster storage systems that combines our
need for performance with our need for disaster recovery. I saw a
couple of posts indicating that this is possible
(http://gluster.org/pipermail/gluster-users/2010-February/003862.html)
but am not 100% clear if that is possible
Let's assume I have a total of 6 storage servers and bricks and want
to spread them across 2
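On the volume side, GlusterFS lets a volume register both transports at creation time; a sketch with hypothetical server and brick names (replicas pair up in the order the bricks are listed):

```
gluster volume create dr-vol replica 2 transport tcp,rdma \
    srv1:/brick srv2:/brick srv3:/brick \
    srv4:/brick srv5:/brick srv6:/brick
gluster volume start dr-vol
```

Clients at the remote site could then mount over TCP while local clients mount over RDMA.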
2018 Jun 13
0
Difficulty configuring RDMA in CentOS
Hi
We are trying to configure RDMA for an infiniband connection between our
data server (running CentOS 6.8) and our compute nodes (running CentOS
6.6). We have been trying to follow the instructions in
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/networking_guide/sec-configuring_the_base_rdma_subsystem
however we are getting conflicting information on
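One likely source of the conflict: the linked guide is for RHEL 7 (systemd), while these nodes run CentOS 6, where the base RDMA subsystem is an init script. A sketch of the EL6 sequence (package names as shipped in EL6):

```
yum install rdma libibverbs-utils infiniband-diags
chkconfig rdma on && service rdma start   # EL6 init script, not rdma.service
ibv_devinfo                               # HCA ports should report PORT_ACTIVE
```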
2011 Aug 05
2
Problem running Gluterfs over Infiniband
Dear List,
We have lots of issues running GlusterFS over InfiniBand. The client
can mount the share, but when it is accessed with ls, df, touch, du
or any other command, the client host freezes the accessing shell and
a reboot is hardly possible.
OS: CentOS 6.0 64 bit on i7 2600k machines
Version: GlusterFS 3.2.2, also found in lower 3.2.1
Kernel Version 2.6.32
IB Stack and kernel
2008 Oct 07
4
gluster over infiniband....
Hey guys,
I am running gluster over infiniband, and I have a couple of questions.
We have four servers, each with 1 disk that I am trying to access over infiniband using gluster. The servers look like they start okay, here are the last 10 or so lines of a client log (they are all identical):
2008-10-07 07:18:40 D [spec.y:196:section_sub] parser: child:stripe0->remote1
2008-10-07 07:18:40 D
2013 Jul 09
1
tips/best practices for gluster rdma?
Hey guys,
So, we're testing Gluster RDMA storage, and are having some issues. Things
are working... just not as we expected. There isn't a whole lot that I've
found in the way of docs for Gluster RDMA, aside from basically
"install gluster-rdma", create a volume with transport=rdma, and mount w/
transport=rdma...
I've done that...and the IB fabric is known to be
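For the record, the client-side step that usually trips people up is selecting the transport at mount time; a sketch with hypothetical names:

```
# Volume created with transport=rdma (or tcp,rdma); mount over RDMA:
mount -t glusterfs -o transport=rdma srv1:/rvol /mnt/rvol
```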
2012 Oct 04
0
Announce: Facter 1.6.13 Available
Facter 1.6.13 is a maintenance release in the 1.6.x branch with bug fixes.
Downloads are available at:
* Source: https://downloads.puppetlabs.com/facter/facter-1.6.13.tar.gz
RPMs are available at https://yum.puppetlabs.com/el or /fedora
Rubygem available at http://rubygems.org/gems/facter
Debs are available at https://apt.puppetlabs.com
Mac package is available at
2012 Jul 31
1
Request: Infiniband Support in ipconfig
When attempting to boot a system using Infiniband interfaces, ipconfig does not recognize Infiniband NICs as having valid MAC addresses, so they are silently ignored. As I would like to be able to netboot using Infiniband, I have patched ipconfig to support Infiniband interfaces.
Fortunately, it's a pretty simple and straightforward patch. Below is the patch for your consideration.
---
2019 Apr 11
1
[RFC 3/3] RDMA/virtio-rdma: VirtIO rdma driver
Signed-off-by: Yuval Shaia <yuval.shaia at oracle.com>
---
drivers/infiniband/Kconfig | 1 +
drivers/infiniband/hw/Makefile | 1 +
drivers/infiniband/hw/virtio/Kconfig | 6 +
drivers/infiniband/hw/virtio/Makefile | 4 +
drivers/infiniband/hw/virtio/virtio_rdma.h | 40 +
.../infiniband/hw/virtio/virtio_rdma_device.c | 59 ++
2019 Apr 13
1
[RFC 3/3] RDMA/virtio-rdma: VirtIO rdma driver
On 2019/4/11 19:01, Yuval Shaia wrote:
> Signed-off-by: Yuval Shaia <yuval.shaia at oracle.com>
> ---
> drivers/infiniband/Kconfig | 1 +
> drivers/infiniband/hw/Makefile | 1 +
> drivers/infiniband/hw/virtio/Kconfig | 6 +
> drivers/infiniband/hw/virtio/Makefile | 4 +
>
2013 Nov 12
0
InfiniBand Passthrough not working
2010 Aug 16
3
XCP & InfiniBand
I am running XCP 0.5. I have an InfiniBand network in place that I would
like to use to connect to my NFS server for VM image storage. Has anybody
tried getting IB to work with XCP? Is there a document that outlines the
steps for getting IB to work with XCP?
Also, I don't need to expose the IB network to any of the domUs. I only want
to use the IB network to connect to NFS based
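Since dom0 owns the IPoIB interface, nothing IB-specific needs exposing to the domUs; the storage repository just points at the NFS server's IPoIB address. A sketch with hypothetical values:

```
xe sr-create type=nfs content-type=user name-label="ib-nfs" \
    device-config:server=10.10.0.1 \
    device-config:serverpath=/export/vmstore
```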