Similar to: Recommendations for Infiniband with CentOS 6.7

Displaying 20 results from an estimated 1000 matches similar to: "Recommendations for Infiniband with CentOS 6.7"

2016 May 24
5
Hard drives being renamed
Hi, We are running CentOS 6.7 - 2.6.32-573.22.1.el6.x86_64 on a Quanta Cirrascale, up to date with patches. We have had a couple of instances in which the hard drives have become renamed after reboot (e.g. drive sda is renamed to sdc after reboot). One time this occurred when we rebooted following the installation of a 10 GbE NIC, another time after we tried to install Mellanox drivers
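For reference, a common way to make mounts immune to this kind of sdX reshuffling (the thread itself does not settle on a fix) is to reference filesystems by UUID rather than by device name; a minimal sketch, with a hypothetical UUID:

    # look up the filesystem UUID (run as root)
    blkid /dev/sda1
    # /dev/sda1: UUID="3e6be9de-8139-11d1-9106-a43f08d823a6" TYPE="ext4"

    # then refer to it by UUID in /etc/fstab instead of /dev/sda1
    UUID=3e6be9de-8139-11d1-9106-a43f08d823a6  /data  ext4  defaults  0 2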
2016 May 25
3
Recommendations for Infiniband with CentOS 6.7
We have a new install of CentOS 6.7 with InfiniBand support installed. We can see the card in hardware and we can see the mlx4 drivers loaded in the kernel, but we cannot see the card as an Ethernet interface using ifconfig -a. Can you recommend an install procedure to see this as an Ethernet interface? Thanks On 05/25/2016 07:32 AM, Fabian Arrotin wrote: > On 25/05/16 03:08, Pat Haley
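A hedged sketch of one possible cause, not confirmed in the thread: ConnectX VPI ports come up in InfiniBand mode by default, so the mlx4_en Ethernet driver has to be loaded and the port switched to Ethernet before ifconfig will show an eth interface (the PCI address below is hypothetical):

    # load the mlx4 Ethernet driver
    modprobe mlx4_en

    # switch port 1 of the adapter from InfiniBand to Ethernet mode
    echo eth > /sys/bus/pci/devices/0000:05:00.0/mlx4_port1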
2016 May 25
0
Recommendations for Infiniband with CentOS 6.7
On 25/05/16 03:08, Pat Haley wrote: > > Hi All, > > We are looking for suggestions on dealing with Mellanox drivers in CentOS 6.7 > > We tried installing Mellanox drivers > (MLNX_OFED_LINUX-3.2-2.0.0.0-rhel6.7-x86_64) on a Quanta Cirrascale > server running CentOS 6.7 - 2.6.32-573.22.1.el6.x86_64. When we > rebooted the machine after installing the drivers, it went
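For context, the MLNX_OFED bundle named above ships an installer script; a minimal sketch, assuming the stock bundle layout, that rebuilds the driver modules against the running kernel rather than the kernel the bundle was packaged for:

    # from the extracted MLNX_OFED_LINUX-3.2-2.0.0.0-rhel6.7-x86_64 directory
    ./mlnxofedinstall --add-kernel-support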
2016 May 24
0
Hard drives being renamed
On 5/24/2016 2:08 PM, Pat Haley wrote: > > We are running CentOS 6.7 - 2.6.32-573.22.1.el6.x86_64 on a Quanta > Cirrascale, up to date with patches. We have had a couple of instances > in which the hard drives have become renamed after reboot (e.g. drive > sda is renamed to sdc after reboot). One time this occurred when we > rebooted following the installation of a 10 GbE NIC
2013 Jan 18
1
Configuration...
Hi, I have what might be some elementary questions. Really, what I'd love would be for someone who has had good success to publish his/her configuration files and, maybe, the output from ifconfig. At this point, when I see the not-so-good performance I'm getting, I don't realistically know if I'm in the right ballpark. It seems to me, that with so many Mellanox cards out
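One frequently-cited IPoIB tuning step, offered here only as a hedged sketch and not necessarily this poster's issue, is switching the interface from datagram to connected mode, which permits a much larger MTU:

    # ib0 is the usual name of the first IPoIB interface
    echo connected > /sys/class/net/ib0/mode
    ip link set ib0 mtu 65520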
2018 Apr 25
2
RDMA Client Hang Problem
Dear Gluster-Users, I am experiencing RDMA problems. I have installed Ubuntu 16.04.4 (4.15.0-13-generic kernel) with MLNX_OFED_LINUX-4.3-1.0.1.0-ubuntu16.04-x86_64 on 4 different servers. All of them have Mellanox ConnectX-4 LX dual-port NICs. These four servers are connected via a Mellanox SN2100 switch. I have installed GlusterFS Server v3.10 (from Ubuntu PPA) on 3 servers. These 3
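For reference, a replica volume is told to use the RDMA transport at creation time; a minimal sketch with hypothetical hostnames and brick paths:

    gluster volume create gv0 replica 3 transport rdma \
        srv1:/bricks/gv0 srv2:/bricks/gv0 srv3:/bricks/gv0
    gluster volume start gv0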
2017 Oct 23
1
problems running a vol over IPoIB, and qemu off it?
Hi people, I wonder if anybody has experienced any problems with volumes in replica mode that run across IPoIB links, where libvirt stores qcow images on such a volume? I wonder if maybe devel could confirm it should just work, and then I should blame the hardware/InfiniBand. I have a direct IPoIB link between two hosts, a gluster replica volume, and libvirt stores disk images there. I start a guest on hostA and
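A hedged first diagnostic for this kind of setup is to rule out the IPoIB link and the volume's health before blaming libvirt (hostname and volume name hypothetical):

    # large-payload ping across the IPoIB link
    ping -c 3 -s 8192 hostB-ib0

    # check for pending self-heals on the replica volume
    gluster volume heal gv0 info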
2017 Jun 23
2
Slow write times to gluster disk
On Fri, Jun 23, 2017 at 2:23 AM, Pat Haley <phaley at mit.edu> wrote: > > Hi, > > Today we experimented with some of the FUSE options that we found in the > list. > > Changing these options had no effect: > > gluster volume set test-volume performance.cache-max-file-size 2MB > gluster volume set test-volume performance.cache-refresh-timeout 4 > gluster
2017 Jun 24
0
Slow write times to gluster disk
On Fri, Jun 23, 2017 at 9:10 AM, Pranith Kumar Karampuri < pkarampu at redhat.com> wrote: > > > On Fri, Jun 23, 2017 at 2:23 AM, Pat Haley <phaley at mit.edu> wrote: > >> >> Hi, >> >> Today we experimented with some of the FUSE options that we found in the >> list. >> >> Changing these options had no effect: >> >>
2017 Jun 26
3
Slow write times to gluster disk
Hi All, Decided to try another test of gluster mounted via FUSE vs gluster mounted via NFS, this time using the software we run in production (i.e. our ocean model writing a netCDF file). gluster mounted via NFS: the run took 2.3 hr gluster mounted via FUSE: the run took 44.2 hr The only problem with using gluster mounted via NFS is that it does not respect the group write permissions which
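For reference, gluster's built-in NFS server speaks NFSv3 only, so the NFS mount compared above would look something like this (server name and paths hypothetical):

    mount -t nfs -o vers=3,proto=tcp server:/test-volume /gluster-nfs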
2017 Jun 02
2
Slow write times to gluster disk
Are you sure using conv=sync is what you want? I normally use conv=fdatasync; I'll look up the difference between the two and see if it affects your test. -b ----- Original Message ----- > From: "Pat Haley" <phaley at mit.edu> > To: "Pranith Kumar Karampuri" <pkarampu at redhat.com> > Cc: "Ravishankar N" <ravishankar at redhat.com>,
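The distinction matters for benchmarking: conv=sync only pads short input blocks with zeros (it does not flush anything), while conv=fdatasync calls fdatasync(2) once after the final write, so the elapsed time includes reaching stable storage. A sketch with a hypothetical target path:

    # reported rate includes the final flush to stable storage
    dd if=/dev/zero of=/gluster/testfile bs=1M count=1024 conv=fdatasync

    # O_SYNC on every write; expect this to be much slower
    dd if=/dev/zero of=/gluster/testfile bs=1M count=1024 oflag=sync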
2017 Jun 20
2
Slow write times to gluster disk
Hi Ben, Sorry this took so long, but we had a real-time forecasting exercise last week and I could only get to this now. Backend Hardware/OS: * Much of the information on our back-end system is included at the top of http://lists.gluster.org/pipermail/gluster-users/2017-April/030529.html * The specific model of the hard disks is Seagate Enterprise Capacity V.4 6TB
2012 Dec 28
6
problem with installing lustre and ofed
Hello, I am having trouble installing the server modules for Lustre 2.1.4 and use Mellanox's OFED distribution so we may use InfiniBand. Would you folks look at my procedure and results below and let me know what you think? Thanks very much! The Mellanox OFED installation builds and installs some kernel modules too, so I used this method to ensure OFED compiled against the correct
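For reference, Lustre's configure script can be pointed at an external OFED tree instead of the in-kernel stack; a minimal sketch, assuming the usual MLNX_OFED source location:

    ./configure --with-linux=/usr/src/kernels/$(uname -r) \
                --with-o2ib=/usr/src/ofa_kernel/default
    make rpms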
2017 Jun 22
0
Slow write times to gluster disk
Hi, Today we experimented with some of the FUSE options that we found in the list. Changing these options had no effect: gluster volume set test-volume performance.cache-max-file-size 2MB gluster volume set test-volume performance.cache-refresh-timeout 4 gluster volume set test-volume performance.cache-size 256MB gluster volume set test-volume performance.write-behind-window-size 4MB gluster
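A quick way to confirm such options actually took effect (assuming a gluster release new enough to have "volume get", 3.8 or later):

    gluster volume get test-volume performance.cache-size
    gluster volume get test-volume performance.write-behind-window-size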
2017 Jun 27
0
Slow write times to gluster disk
On Mon, Jun 26, 2017 at 7:40 PM, Pat Haley <phaley at mit.edu> wrote: > > Hi All, > > Decided to try another tests of gluster mounted via FUSE vs gluster > mounted via NFS, this time using the software we run in production (i.e. > our ocean model writing a netCDF file). > > gluster mounted via NFS the run took 2.3 hr > > gluster mounted via FUSE: the run took
2012 Dec 18
1
Problem with srptools
Hello, I have a problem with the srptools to connect my Dom0 to the SCST over IB resources. When I'm on the Debian kernel (without Dom0): root@blade1:/# ibsrpdm -c id_ext=003048ffff9dd3b4,ioc_guid=003048ffff9dd3b4,dgid=fe80000000000000003048ffff9dd3b5,pkey=ffff,service_id=003048ffff9dd3b4
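For reference, each line that ibsrpdm -c prints is a target descriptor that can be written to the SRP initiator's add_target file to log in to that target (the srp device name varies by HCA and port; srp-mthca0-1 below is hypothetical):

    echo id_ext=003048ffff9dd3b4,ioc_guid=003048ffff9dd3b4,dgid=fe80000000000000003048ffff9dd3b5,pkey=ffff,service_id=003048ffff9dd3b4 > /sys/class/infiniband_srp/srp-mthca0-1/add_target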
2017 Jun 12
0
Slow write times to gluster disk
Hi Guys, I was wondering what our next steps should be to solve the slow write times. Recently I was debugging a large code and writing a lot of output at every time step. When I tried writing to our gluster disks, it was taking over a day to do a single time step, whereas if I had the same program (same hardware, network) write to our NFS disk, the time per time-step was about 45 minutes.
2011 Jun 27
2
Using TSM to back-up glusterfs
Hi We have been trying to back up a glusterfs (v3.1.4) area using the Tivoli TSM software to an off-site area. The back-up keeps failing with the following typical error messages 06/14/2011 22:22:58 ANS1587W I/O error reading file attributes for: /gdata/projects/philex/OAG/2011/May16/mdor3km10/coast_den2.in. errno = 22, Invalid argument 06/14/2011 22:22:59 ANS4007E Error processing
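A hedged reproduction step outside TSM, to see whether the filesystem itself returns EINVAL (errno 22) when the attributes of the named file are read:

    stat /gdata/projects/philex/OAG/2011/May16/mdor3km10/coast_den2.in
    getfattr -d -m . /gdata/projects/philex/OAG/2011/May16/mdor3km10/coast_den2.in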
2017 Jun 27
2
Slow write times to gluster disk
On 06/27/2017 10:17 AM, Pranith Kumar Karampuri wrote: > The only problem with using gluster mounted via NFS is that it does not > respect the group write permissions which we need. > > We have an exercise coming up in a couple of weeks. It seems to me > that in order to improve our write times before then, it would be good > to solve the group write permissions for gluster
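One hedged candidate for NFS group-permission trouble (not confirmed as the fix in this thread): AUTH_SYS caps each NFS request at 16 supplementary groups, which gluster can work around by resolving group membership on the server side:

    # volume name hypothetical
    gluster volume set test-volume server.manage-gids on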
2016 May 26
0
Recommendations for Infiniband with CentOS 6.7
On Wed, 25 May 2016 11:48:55 -0400 Pat Haley <phaley at mit.edu> wrote: > We have a new install of CentOS 6.7 with InfiniBand support > installed. We can see the card in hardware and we can see the mlx4 > drivers loaded in the kernel but cannot see the card as an Ethernet > interface, using ifconfig -a. Can you recommend an install procedure > to see this as an Ethernet