Displaying 20 results from an estimated 4000 matches similar to: "Hard drives being renamed"
2016 May 25 (3 replies): Recommendations for Infiniband with CentOS 6.7
Hi All,
We are looking for suggestions on dealing with Mellanox drivers in CentOS 6.7.
We tried installing the Mellanox drivers
(MLNX_OFED_LINUX-3.2-2.0.0.0-rhel6.7-x86_64) on a Quanta Cirrascale
server running CentOS 6.7 - 2.6.32-573.22.1.el6.x86_64. When we
rebooted the machine after installing the drivers, it went into a kernel
panic for every installed kernel except for CentOS 6.7
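For what it's worth, panics after installing a prebuilt MLNX_OFED bundle often come down to kernel modules built for a different kernel than the one booted. A hedged sketch of one common workaround, using the installer script shipped inside the bundle (run from the unpacked MLNX_OFED directory; this is an outline, not a verified procedure for this exact hardware):

```shell
uname -r                                # confirm the kernel actually booted
./mlnxofedinstall --add-kernel-support  # rebuild the OFED modules against `uname -r`
dracut -f                               # regenerate the initramfs so boot picks them up
```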
2016 May 25 (3 replies): Recommendations for Infiniband with CentOS 6.7
We have a new install of CentOS 6.7 with InfiniBand support installed.
We can see the card in hardware and the mlx4 drivers are loaded in the
kernel, but the card does not show up as an Ethernet interface in
ifconfig -a. Can you recommend an install procedure to make it appear as
an Ethernet interface?
Thanks
On 05/25/2016 07:32 AM, Fabian Arrotin wrote:
> On 25/05/16 03:08, Pat Haley
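A hedged sketch of what usually makes an mlx4 ConnectX port show up as an Ethernet NIC rather than an IB port (the PCI address below is an example; find the real one with lspci):

```shell
modprobe mlx4_en                      # Ethernet driver for ConnectX (mlx4) cards
# Each port's mode (ib or eth) is set through the mlx4_core sysfs node;
# 0000:03:00.0 is an example PCI address
echo eth > /sys/bus/pci/devices/0000:03:00.0/mlx4_port1
ip link show                          # the port should now list as an Ethernet interface
```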
2016 May 24 (0 replies): Hard drives being renamed
On 5/24/2016 2:08 PM, Pat Haley wrote:
>
> We are running CentOS 6.7 - 2.6.32-573.22.1.el6.x86_64 on a Quanta
> Cirrascale, up to date with patches. We have had a couple of instances
> in which the hard drives have been renamed after a reboot (e.g. drive
> sda is renamed to sdc after reboot). One time this occurred when we
> rebooted following the installation of a 10Gb NIC
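The renaming itself is expected behavior: /dev/sdX names are handed out in probe order, which can change when new hardware (like the NIC) shifts device enumeration. The usual defense is to reference disks by a stable identifier instead of sdX; a sketch (the UUID shown is made up):

```shell
# List the stable identifiers udev creates for each disk/partition
blkid /dev/sda1
ls -l /dev/disk/by-id /dev/disk/by-uuid

# /etc/fstab entry keyed on UUID instead of /dev/sdX (example UUID):
# UUID=0a1b2c3d-4e5f-6789-abcd-ef0123456789  /data  ext4  defaults  0 2
```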
2016 May 25 (0 replies): Recommendations for Infiniband with CentOS 6.7
On 25/05/16 03:08, Pat Haley wrote:
>
> Hi All,
>
> We are looking for suggestions on dealing with Mellanox drivers in CentOS 6.7
>
> We tried installing mellanox drivers
> (MLNX_OFED_LINUX-3.2-2.0.0.0-rhel6.7-x86_64) on a Quanta Cirrascale
> server running CentOS 6.7 - 2.6.32-573.22.1.el6.x86_64. When we
> rebooted the machine after installing the drivers, it went
2017 Jun 26 (3 replies): Slow write times to gluster disk
Hi All,
Decided to try another test of gluster mounted via FUSE vs gluster
mounted via NFS, this time using the software we run in production (i.e.
our ocean model writing a netCDF file).
gluster mounted via NFS: the run took 2.3 hr
gluster mounted via FUSE: the run took 44.2 hr
The only problem with using gluster mounted via NFS is that it does not
respect the group write permissions which
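For reference, the two client setups being compared would look something like this (server and volume names are hypothetical; gluster's built-in NFS server only speaks NFSv3):

```shell
# Native FUSE client
mount -t glusterfs server1:/gv0 /mnt/gv0

# gluster-NFS (NFSv3 over TCP)
mount -t nfs -o vers=3,tcp server1:/gv0 /mnt/gv0
```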
2017 Jun 02 (2 replies): Slow write times to gluster disk
Are you sure using conv=sync is what you want? I normally use conv=fdatasync. I'll look up the difference between the two and see if it affects your test.
-b
----- Original Message -----
> From: "Pat Haley" <phaley at mit.edu>
> To: "Pranith Kumar Karampuri" <pkarampu at redhat.com>
> Cc: "Ravishankar N" <ravishankar at redhat.com>,
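The difference is easy to show locally: conv=sync pads each input block with NULs up to bs and does not flush anything to disk, while conv=fdatasync issues one fdatasync() before dd exits, so it is the option that makes write benchmarks include the actual flush. A small demo (file names under /tmp are just for illustration):

```shell
printf 'ten bytes!' > /tmp/conv_in.dat   # 10-byte input file

# conv=sync: each input block is padded with NULs up to bs (1 KiB here)
dd if=/tmp/conv_in.dat of=/tmp/conv_sync.dat bs=1k conv=sync 2>/dev/null

# conv=fdatasync: data is copied as-is, then flushed to disk at the end
dd if=/tmp/conv_in.dat of=/tmp/conv_fdatasync.dat bs=1k conv=fdatasync 2>/dev/null

stat -c%s /tmp/conv_sync.dat        # 1024: padded to the block size
stat -c%s /tmp/conv_fdatasync.dat   # 10: same data, just flushed
```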
2017 Jun 12 (0 replies): Slow write times to gluster disk
Hi Guys,
I was wondering what our next steps should be to solve the slow write times.
Recently I was debugging a large code and writing a lot of output at
every time step. When I tried writing to our gluster disks, it was
taking over a day to do a single time step whereas if I had the same
program (same hardware, network) write to our nfs disk the time per
time-step was about 45 minutes.
2017 Jun 23 (2 replies): Slow write times to gluster disk
On Fri, Jun 23, 2017 at 2:23 AM, Pat Haley <phaley at mit.edu> wrote:
>
> Hi,
>
> Today we experimented with some of the FUSE options that we found in the
> list.
>
> Changing these options had no effect:
>
> gluster volume set test-volume performance.cache-max-file-size 2MB
> gluster volume set test-volume performance.cache-refresh-timeout 4
> gluster
2017 Jun 27 (0 replies): Slow write times to gluster disk
On Mon, Jun 26, 2017 at 7:40 PM, Pat Haley <phaley at mit.edu> wrote:
>
> Hi All,
>
> Decided to try another test of gluster mounted via FUSE vs gluster
> mounted via NFS, this time using the software we run in production (i.e.
> our ocean model writing a netCDF file).
>
> gluster mounted via NFS: the run took 2.3 hr
>
> gluster mounted via FUSE: the run took
2017 Jun 20 (2 replies): Slow write times to gluster disk
Hi Ben,
Sorry this took so long, but we had a real-time forecasting exercise
last week and I could only get to this now.
Backend Hardware/OS:
* Much of the information on our back end system is included at the
top of
http://lists.gluster.org/pipermail/gluster-users/2017-April/030529.html
* The specific model of the hard disks is SeaGate ENTERPRISE CAPACITY
V.4 6TB
2017 Jun 24 (0 replies): Slow write times to gluster disk
On Fri, Jun 23, 2017 at 9:10 AM, Pranith Kumar Karampuri <
pkarampu at redhat.com> wrote:
>
>
> On Fri, Jun 23, 2017 at 2:23 AM, Pat Haley <phaley at mit.edu> wrote:
>
>>
>> Hi,
>>
>> Today we experimented with some of the FUSE options that we found in the
>> list.
>>
>> Changing these options had no effect:
>>
>>
2017 Jun 22 (0 replies): Slow write times to gluster disk
Hi,
Today we experimented with some of the FUSE options that we found in the
list.
Changing these options had no effect:
gluster volume set test-volume performance.cache-max-file-size 2MB
gluster volume set test-volume performance.cache-refresh-timeout 4
gluster volume set test-volume performance.cache-size 256MB
gluster volume set test-volume performance.write-behind-window-size 4MB
gluster
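If it helps with cross-checking, on glusterfs 3.7 the tunables can be read back and reverted per volume (volume name as in the post):

```shell
gluster volume get test-volume performance.cache-size    # current value of one option
gluster volume info test-volume                          # lists non-default options
gluster volume reset test-volume performance.cache-size  # undo an experiment
```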
2017 Jul 05 (2 replies): Slow write times to gluster disk
Hi Soumya,
(1) In http://mseas.mit.edu/download/phaley/GlusterUsers/TestNFSmount/
I've placed the following 2 log files
etc-glusterfs-glusterd.vol.log
gdata.log
The first has repeated messages about nfs disconnects. The second had
the <fuse_mnt_directory>.log name (but not much information).
(2) About the gluster-NFS native server: do you know where we can find
documentation on
2017 Jul 07 (0 replies): Slow write times to gluster disk
Hi All,
A follow-up question. I've been looking at various pages on nfs-ganesha
& gluster. Is there a version of nfs-ganesha that is recommended for
use with
glusterfs 3.7.11 built on Apr 27 2016 14:09:22
CentOS release 6.8 (Final)
Thanks
Pat
On 07/05/2017 11:36 AM, Pat Haley wrote:
>
> Hi Soumya,
>
> (1) In
2017 Jul 03 (2 replies): Slow write times to gluster disk
Hi Soumya,
When I originally did the tests I ran tcpdump on the client.
I have rerun the tests, doing tcpdump on the server
tcpdump -i any -nnSs 0 host 172.16.1.121 -w /root/capture_nfsfail.pcap
The results are in the same place
http://mseas.mit.edu/download/phaley/GlusterUsers/TestNFSmount/
capture_nfsfail.pcap has the results from the failed touch experiment
capture_nfssucceed.pcap has
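The captures can also be inspected offline; a quick sketch filtering for NFS traffic (port 2049), using the same -nn style as the capture command above:

```shell
tcpdump -nn -r capture_nfsfail.pcap 'port 2049' | head -20
```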
2017 Jun 30 (2 replies): Slow write times to gluster disk
Hi,
I was wondering if there are any additional tests we could perform to
help debug the group write-permissions issue.
Thanks
Pat
On 06/27/2017 12:29 PM, Pat Haley wrote:
>
> Hi Soumya,
>
> One example, we have a common working directory dri_fleat in the
> gluster volume
>
> drwxrwsr-x 22 root dri_fleat 4.0K May 1 15:14 dri_fleat
>
> my user (phaley) does
2017 Jul 14 (0 replies): Slow write times to gluster disk
Hi Soumya,
I just noticed some of the notes at the bottom. In particular
* Till glusterfs-3.7, gluster-NFS (gNFS) gets enabled by default. The
only requirement is that kernel-NFS has to be disabled for
gluster-NFS to come up. Please disable kernel-NFS server and restart
glusterd to start gNFS. In case of any issues with starting gNFS
server, please look at
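On EL6 that note translates to roughly the following sequence (a sketch; the service names are the stock CentOS 6 ones):

```shell
service nfs stop && chkconfig nfs off  # kernel NFS must release the rpcbind registrations
service rpcbind status                 # rpcbind itself must stay running
service glusterd restart               # gNFS is started by glusterd
showmount -e localhost                 # volumes should now be exported by gNFS
```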
2017 Jun 27 (0 replies): Slow write times to gluster disk
Hi Soumya,
One example, we have a common working directory dri_fleat in the gluster
volume
drwxrwsr-x 22 root dri_fleat 4.0K May 1 15:14 dri_fleat
my user (phaley) does not own that directory but is a member of the
group dri_fleat and should have write permissions. When I go to the
nfs-mounted version and try to use the touch command I get the following
ibfdr-compute-0-4(dri_fleat)%
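In that listing, the `s` in the group triad of `drwxrwsr-x` is the setgid bit: files created inside inherit the directory's group, which is exactly why a dri_fleat member should get write access. A minimal local sketch of those semantics (the /tmp path is demo-only, not the real volume):

```shell
mkdir -p /tmp/dri_demo
chmod 2775 /tmp/dri_demo     # rwxrwsr-x, like dri_fleat: 2 = setgid bit
stat -c%A /tmp/dri_demo      # drwxrwsr-x

touch /tmp/dri_demo/f        # a new file inherits the directory's group
stat -c%G /tmp/dri_demo/f    # same group as the directory itself
```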
2017 Jul 07 (2 replies): Slow write times to gluster disk
Hi,
On 07/07/2017 06:16 AM, Pat Haley wrote:
>
> Hi All,
>
> A follow-up question. I've been looking at various pages on nfs-ganesha
> & gluster. Is there a version of nfs-ganesha that is recommended for
> use with
>
> glusterfs 3.7.11 built on Apr 27 2016 14:09:22
> CentOS release 6.8 (Final)
For glusterfs 3.7, nfs-ganesha-2.3-* version can be used.
I see
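For what it's worth, a hedged install sketch; the package names are the ones the CentOS Storage SIG has used and should be verified against whichever repo is actually enabled:

```shell
yum install nfs-ganesha nfs-ganesha-gluster  # 2.3.x line, to pair with glusterfs 3.7
# Exports are defined in /etc/ganesha/ganesha.conf (one FSAL GLUSTER block per volume)
```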
2016 Sep 01 (2 replies): group write permissions not being respected
For the enforcing=0, is that referring to SELinux? If so, we are not
running SELinux.
On 08/31/2016 11:38 PM, Chris Murphy wrote:
> Try booting with enforcing=0 and if that fixes it, you need to find out
> what security label is needed for gluster.
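Confirming the SELinux question is quick; these report the runtime mode and the persistent boot-time setting, respectively:

```shell
getenforce                            # Enforcing / Permissive / Disabled
grep '^SELINUX=' /etc/selinux/config  # setting applied at boot
```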