similar to: using ipoib with xcp

Displaying 20 results from an estimated 100 matches similar to: "using ipoib with xcp"

2010 Aug 16
3
XCP & InfiniBand
I am running XCP 0.5. I have an InfiniBand network in place that I would like to use to connect to my NFS server for VM image storage. Has anybody tried getting IB to work with XCP? Is there a document that outlines the steps for getting IB to work with XCP? Also, I don't need to expose the IB network to any of the domU; I only want to use the IB network to connect to NFS-based
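A minimal sketch of the usual approach (the interface name ib0, server address 192.168.10.10, and export path /vmstore are placeholder assumptions, not details from the thread): give the dom0 IPoIB interface an address, then create the NFS storage repository over it with the xe CLI.

    # dom0: bring up the IPoIB interface (addresses are assumed)
    ifconfig ib0 192.168.10.2 netmask 255.255.255.0 up

    # create an NFS SR reachable over the IB network
    xe sr-create name-label=ib-nfs type=nfs content-type=user \
        device-config:server=192.168.10.10 device-config:serverpath=/vmstore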
2010 Jun 22
7
lnet infiniband config
Hi all, I'm getting my feet wet in the InfiniBand lake and of course I run into some problems. It would seem I got the compilation part of the SLES11 kernel 2.6.27 + Lustre 1.8.3 + OFED 1.4.2 right, because it allows me to see and use the InfiniBand fabric, and because ko2iblnd loads without any complaints. In /etc/modprobe.d/lustre (this is a Debian system, hence this subdir of
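For reference, the common form of that file for o2ib (a sketch, assuming the IB interface is ib0; not the poster's actual contents) is a single LNET options line:

    # /etc/modprobe.d/lustre: send Lustre traffic over InfiniBand via ko2iblnd
    options lnet networks="o2ib0(ib0)"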
2013 Jan 18
1
Configuration...
Hi, I have what might be some elementary questions. Really, what I'd love would be for someone who has had good success to publish his/her configuration files and, maybe, the output from ifconfig. At this point, when I see the not-so-good performance I'm getting, I don't realistically know if I'm in the right ballpark. It seems to me that, with so many Mellanox cards out
2016 Jun 01
4
smbpasswd stops working post-upgrade
Background: I have a network of machines behind an air-gap, so upgrades are a tedious business normally performed four times per year. The systems run various versions of CentOS and I use the Samba that is distributed with CentOS. Last weekend I updated the 5.7 machines with updates to 18 April 2016, not the current 5.8. Those of my users who run Windows boxes (Windows 7 Enterprise)
2016 May 25
3
Recommendations for Infiniband with CentOS 6.7
We have a new install of CentOS 6.7 with InfiniBand support installed. We can see the card in hardware and we can see the mlx4 drivers loaded in the kernel, but we cannot see the card as an Ethernet interface using ifconfig -a. Can you recommend an install procedure to see this as an Ethernet interface? Thanks On 05/25/2016 07:32 AM, Fabian Arrotin wrote: > On 25/05/16 03:08, Pat Haley
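One frequently suggested approach (a sketch, assuming a ConnectX-family card on mlx4_core; the PCI address below is a placeholder) is to load the Ethernet driver and flip the port type via sysfs:

    modprobe mlx4_en
    # switch port 1 of the HCA from InfiniBand to Ethernet (PCI address assumed)
    echo eth > /sys/bus/pci/devices/0000:02:00.0/mlx4_port1

On RHEL/CentOS 6 the port type can usually be made persistent through /etc/rdma/mlx4.conf.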
2008 Mar 07
2
Multihomed question: want Lustre over IB andEthernet
Chris, Perhaps you need to perform some writeconf-like command. I'm not sure if this is needed in 1.6 or not. Shane ----- Original Message ----- From: lustre-discuss-bounces at lists.lustre.org <lustre-discuss-bounces at lists.lustre.org> To: lustre-discuss <lustre-discuss at lists.lustre.org> Sent: Fri Mar 07 12:03:17 2008 Subject: Re: [Lustre-discuss] Multihomed
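For a multihomed Lustre server, the usual pattern (a sketch; interface names and the target device are assumptions) is to declare both LNET networks in the module options and, if the targets were formatted before the second NID existed, regenerate the configuration logs with writeconf:

    # modprobe options: expose the server on both TCP and InfiniBand
    options lnet networks="tcp0(eth0),o2ib0(ib0)"

    # on each unmounted target, rewrite the configuration logs
    tunefs.lustre --writeconf /dev/<target-device>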
2009 Jun 30
0
help friend :java nio problem
I don't think your example would copy the file correctly, but I'm getting the same error when I run a similar test on our file system. It works on the local file system. Trygve On Mon, Jun 29, 2009 at 4:59 PM, eagleeyes <eagleeyes at 126.com> wrote: > Thanks, the attachment is a Java NIO test with mmap; you could use it for > testing. You should mount GFS at directory
2008 Apr 15
5
o2ib module prevents shutdown
Hello, Not sure if this is the right forum: I'm encountering difficulties with o2ib which prevents an LNET shutdown from proceeding: Unloading OpenIB kernel modules: NET: Unregistered protocol family 27 Failed to unload rdma_cm Failed to unload rdma_cm Failed to unload ib_cm Failed to unload ib_sa LustreError: 131-3: Received notification of device removal Please shutdown LNET
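A shutdown ordering that usually avoids this (both commands ship with standard Lustre installs, though whether they cure this particular hang is an assumption): take LNET down before the OpenIB modules are unloaded.

    lctl network down   # stop LNET first
    lustre_rmmod        # then unload the Lustre/LNET modules in dependency order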
2012 Nov 02
3
lctl ping of Pacemaker IP
Greetings! I am working with Lustre-2.1.2 on RHEL 6.2. First I configured it using the standard defaults over TCP/IP. Everything worked very nicely using a real, static --mgsnode=a.b.c.x value which was the actual IP of the MGS/MDS system1 node. I am now trying to integrate it with Pacemaker-1.1.7. I believe I have most of the set-up completed with a particular exception. The "lctl
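For reference, lctl ping takes an LNET NID rather than a bare IP address, so pinging the Pacemaker-managed address would look like the following (a.b.c.y stands in for the floating virtual IP):

    lctl ping a.b.c.y@tcp0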
2013 Jul 09
1
tips/best practices for gluster rdma?
Hey guys, So, we're testing Gluster RDMA storage, and are having some issues. Things are working...just not as we expected them. There isn't a whole lot that I've found in the way of docs for gluster rdma, aside from basically "install gluster-rdma", create a volume with transport=rdma, and mount w/ transport=rdma.... I've done that...and the IB fabric is known to be
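For comparison, the basic recipe mentioned above looks like this (volume, host, and brick names are placeholders):

    # create and start a volume that speaks RDMA (use tcp,rdma for both transports)
    gluster volume create testvol transport rdma server1:/brick1 server2:/brick1
    gluster volume start testvol

    # mount it over the rdma transport
    mount -t glusterfs -o transport=rdma server1:/testvol /mnt/gluster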
2014 May 20
1
CTDB + InfiniBand Public IP Addresses
Hi, I'm trying to set up a small CTDB cluster running in an IPoIB InfiniBand network. When I try to start up a cluster with a set of public IP addresses, the public addresses do not come online. So, I removed the public address configuration and started a single node up by hand, then tried to add a public address as follows: [root at gp-1-0 ctdb]# ctdb ip Public IPs on node 0 [root at
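For reference, CTDB reads public addresses from a file listing one CIDR address and the interface it should float on per line; over IPoIB that interface is the ib device (addresses below are placeholders):

    # /etc/ctdb/public_addresses
    10.10.0.100/24 ib0
    10.10.0.101/24 ib0

ctdbd then needs CTDB_PUBLIC_ADDRESSES (in its sysconfig file) pointing at that file.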
2009 Jun 04
2
alfresco with GFS2.0.0
Hello: Has anybody used Alfresco with GFS successfully? I ran into some problems when Alfresco created its index files in GFS; Alfresco couldn't start up, but with the server's local dir for Alfresco it starts fine. The Alfresco log is this: 14:22:43,009 User:System ERROR [lucene.index.IndexInfo] Channel reopen failed on index info files in:
2017 Jun 27
0
Slow write times to gluster disk
Hi Soumya, One example, we have a common working directory dri_fleat in the gluster volume drwxrwsr-x 22 root dri_fleat 4.0K May 1 15:14 dri_fleat my user (phaley) does not own that directory but is a member of the group dri_fleat and should have write permissions. When I go to the nfs-mounted version and try to use the touch command I get the following ibfdr-compute-0-4(dri_fleat)%
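A quick client-side reproduction of this kind of check looks like the following (user, group, and directory names follow the post, but the mount path and exact commands are a sketch):

    id phaley                          # confirm membership in dri_fleat
    cd /gdata/dri_fleat                # the NFS-mounted copy (path assumed)
    touch testfile                     # fails here despite group write + setgid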
2017 Jun 27
2
Slow write times to gluster disk
On 06/27/2017 10:17 AM, Pranith Kumar Karampuri wrote: > The only problem with using gluster mounted via NFS is that it does not > respect the group write permissions which we need. > > We have an exercise coming up in a couple of weeks. It seems to me > that in order to improve our write times before then, it would be good > to solve the group write permissions for gluster
2017 Jun 30
2
Slow write times to gluster disk
Hi, I was wondering if there were any additional test we could perform to help debug the group write-permissions issue? Thanks Pat On 06/27/2017 12:29 PM, Pat Haley wrote: > > Hi Soumya, > > One example, we have a common working directory dri_fleat in the > gluster volume > > drwxrwsr-x 22 root dri_fleat 4.0K May 1 15:14 dri_fleat > > my user (phaley) does
2017 Jul 03
2
Slow write times to gluster disk
Hi Soumya, When I originally did the tests I ran tcpdump on the client. I have rerun the tests, doing tcpdump on the server tcpdump -i any -nnSs 0 host 172.16.1.121 -w /root/capture_nfsfail.pcap The results are in the same place http://mseas.mit.edu/download/phaley/GlusterUsers/TestNFSmount/ capture_nfsfail.pcap has the results from the failed touch experiment capture_nfssucceed.pcap has
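To inspect the two captures side by side, plain tcpdump read-back is enough (standard usage; display filters can be added as needed):

    tcpdump -nn -r capture_nfsfail.pcap | head -50
    tcpdump -nn -r capture_nfssucceed.pcap | head -50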
2017 Aug 18
1
Is transport=rdma tested with "stripe"?
On Wed, Aug 16, 2017 at 4:44 PM, Hatazaki, Takao <takao.hatazaki at hpe.com> wrote: >> Note that "stripe" is not tested much and practically unmaintained. > > Ah, this was what I suspected. Understood. I'll be happy with "shard". > > Having said that, "stripe" works fine with transport=tcp. The failure reproduces with just 2 RDMA servers
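For context, the failing reproducer described above would be created along these lines (host and brick names are placeholders; "stripe" is deprecated in later releases):

    gluster volume create stripevol stripe 2 transport rdma \
        server1:/brick1 server2:/brick1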
2017 Jul 03
0
Slow write times to gluster disk
On 06/30/2017 07:56 PM, Pat Haley wrote: > > Hi, > > I was wondering if there were any additional test we could perform to > help debug the group write-permissions issue? Sorry for the delay. Please find response inline -- > > Thanks > > Pat > > > On 06/27/2017 12:29 PM, Pat Haley wrote: >> >> Hi Soumya, >> >> One example, we have a
2017 Jul 05
2
Slow write times to gluster disk
Hi Soumya, (1) In http://mseas.mit.edu/download/phaley/GlusterUsers/TestNFSmount/ I've placed the following 2 log files etc-glusterfs-glusterd.vol.log gdata.log The first has repeated messages about nfs disconnects. The second had the <fuse_mnt_directory>.log name (but not much information). (2) About the gluster-NFS native server: do you know where we can find documentation on
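For what it's worth, the built-in gluster NFS server (gnfs) can be toggled and inspected per volume; assuming the volume here is called gdata (a guess from the log file name):

    gluster volume set gdata nfs.disable off   # enable gluster's native NFS
    gluster volume status gdata nfs            # show the NFS server process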
2017 Mar 13
2
[RFC] improvements to LLVM diagnostic infrastructure
Hi all, I'm working on improvements to diagnostics handling in LLVM, specifically for the benefit of the (integrated) assembler. The goal is to support options such as -Werror, -w, and -W<warning> for files assembled with clang and inline assembly. Clang already has support for these options but does not apply them to diagnostics originating from (inline) assembly. I plan to add an
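As a concrete illustration of the intended behaviour (the flags exist today for C diagnostics; applying them to integrated-assembler output is what the RFC proposes):

    clang -c foo.s -Werror      # promote assembler warnings to errors
    clang -c foo.s -w           # suppress assembler warnings entirely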