Displaying 20 results from an estimated 2000 matches similar to: "CentOS 6: files now owned by nobody:nobody"
2016 Aug 29
0
CentOS 6: files now owned by nobody:nobody
On Mon, 29 Aug 2016 18:59:31 -0400
Pat Haley wrote:
> We noticed that all the files were owned by nobody
Here are my notes for dealing with this issue:
If all users come up as nobody on a nfs mount:
Add the NFS server name to the Domain = line in /etc/idmapd.conf on both the server and the clients, e.g. Domain = nameof.server
/sbin/service rpcidmapd restart
/sbin/service nfslock restart
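For reference, a minimal sketch of the relevant /etc/idmapd.conf fragment (the domain value is a placeholder; it must be identical on the server and every client):
# /etc/idmapd.conf -- same Domain on server and all clients
[General]
Domain = nameof.server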
2017 Aug 07
2
Slow write times to gluster disk
Hi Soumya,
We just had the opportunity to try the option of disabling the
kernel NFS and restarting glusterd to start gNFS. However, the gluster
daemon crashes immediately on startup. What additional information,
besides what we provide below, would help in debugging this?
Thanks,
Pat
-------- Forwarded Message --------
Subject: gluster-nfs crashing on start
Date: Mon, 7 Aug 2017 16:05:09
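The usual starting points for a crashing gNFS process are its own log and any core file; a sketch along these lines (the log path, volume name, and core location are assumptions):
# gNFS logs separately from glusterd
tail -100 /var/log/glusterfs/nfs.log
# is the NFS server process registered for the volume?
gluster volume status <volname> nfs
# if a core file was dropped, a backtrace helps the list debug
gdb --batch -ex bt /usr/sbin/glusterfs /path/to/core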
2014 Mar 17
1
NFS Mount: files owned by nobody
This is one of those simple-been-doing-this-forever things that, for
some reason, has me stumped today.
When I try to NFS (v4) mount a directory, the user/group ownership shows
up as user "nobody" even though /etc/passwd has values for the correct
user names. How do I get it to mount with the correct user IDs?
Hume is the server, running CentOS 6, all updates applied, maybe a week
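One way to narrow such cases down (a sketch, assuming an ID-mapping problem rather than a bad export): if the numeric IDs are correct but the names display as nobody, the export is fine and only the NFSv4 name mapping is broken.
# numeric uid/gid right but names show as nobody => idmapping issue
ls -ln /mnt/point
# compare the NFSv4 domain setting on client and server
grep -i 'Domain' /etc/idmapd.conf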
2017 Aug 08
0
Slow write times to gluster disk
----- Original Message -----
> From: "Pat Haley" <phaley at mit.edu>
> To: "Soumya Koduri" <skoduri at redhat.com>, gluster-users at gluster.org, "Pranith Kumar Karampuri" <pkarampu at redhat.com>
> Cc: "Ben Turner" <bturner at redhat.com>, "Ravishankar N" <ravishankar at redhat.com>, "Raghavendra
2017 Aug 08
1
Slow write times to gluster disk
Soumya,
it's:
[root at mseas-data2 ~]# glusterfs --version
glusterfs 3.7.11 built on Apr 27 2016 14:09:20
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2013 Red Hat, Inc. <http://www.redhat.com/>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser
General Public License, version 3 or any later version (LGPLv3
2013 Sep 20
2
NFS mounted files owned by nobody
I have two CentOS boxes and I want to NFS-mount a dir from one to the
other. When I do that, the files on the client are all owned by
nobody.nobody. I verified that the user and group of the files on the
server exist on both hosts and have the same uid and gid. I googled
and found this:
http://whacked.net/2006/07/26/nfsv4nfs-mapid-nobody-domain/
domainname on both machines returns (none). I edited
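The uid/gid comparison described above can be done in one shot from either host (hostname and username here are placeholders):
# the numeric ids must match on both machines
getent passwd someuser
ssh otherhost getent passwd someuser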
2017 Dec 20
2
glusterfs, ganesh, and pcs rules
Hi,
I've just created the gluster volume again with NFS-Ganesha, glusterfs version 3.8.
When I run the command gluster nfs-ganesha enable, it returns success.
However, looking at the pcs status, I see this:
[root at tlxdmz-nfs1 ~]# pcs status
Cluster name: ganesha-nfs
Stack: corosync
Current DC: tlxdmz-nfs2 (version 1.1.16-12.el7_4.5-94ff4df) - partition
with quorum
Last updated: Wed Dec 20
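Without the rest of the pcs output it is hard to say which resource failed, but the usual next steps look something like this (the ganesha log path varies between versions):
# full resource view, including any Failed Actions section
pcs status --full
# after fixing the underlying problem, clear the failed state
pcs resource cleanup
# check the ganesha daemon itself on each node
systemctl status nfs-ganesha
tail -50 /var/log/ganesha.log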
2011 Oct 10
2
can't snapshot
Good morning Btrfs list,
I am trying to create a subvolume of a directory tree (approximately 1.1
million subvolumes under nfs1). The following error is thrown and
without the wiki I don't know what argument is needed. I am running
kernel 3.1.0-rc4.
[root@btrfs ~]# btrfs sub snapshot /btrfs/nfs1/ /btrfs/snaps/
Invalid arguments for subvolume snapshot
[root@btrfs ~]# btrfs sub list
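If memory serves, two constraints bite here: the snapshot source must itself be a subvolume (a plain directory tree cannot be snapshotted), and the destination should name the new snapshot. A sketch, assuming /btrfs/nfs1 is a subvolume:
# the destination names the snapshot itself, not just a parent directory
btrfs subvolume snapshot /btrfs/nfs1 /btrfs/snaps/nfs1-snap
btrfs subvolume list /btrfs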
2017 Dec 21
0
glusterfs, ganesh, and pcs rules
Hi,
In your ganesha-ha.conf do you have your virtual IP addresses set something like this?:
VIP_tlxdmz-nfs1="192.168.22.33"
VIP_tlxdmz-nfs2="192.168.22.34"
Renaud
From: gluster-users-bounces at gluster.org [mailto:gluster-users-bounces at gluster.org] On behalf of Hetz Ben Hamo
Sent: 20 December 2017 04:35
To: gluster-users at gluster.org
Subject: [Gluster-users]
2017 Dec 24
1
glusterfs, ganesh, and pcs rules
I checked, and I have it like this:
# Name of the HA cluster created.
# must be unique within the subnet
HA_NAME="ganesha-nfs"
#
# The gluster server from which to mount the shared data volume.
HA_VOL_SERVER="tlxdmz-nfs1"
#
# N.B. you may use short names or long names; you may not use IP addrs.
# Once you select one, stay with it as it will be mildly unpleasant to
# clean up
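Putting the two excerpts together, a minimal ganesha-ha.conf of that era looks roughly like this (the names and VIPs are the ones from the thread; the HA_CLUSTER_NODES line is an assumption about the remaining required setting):
HA_NAME="ganesha-nfs"
HA_VOL_SERVER="tlxdmz-nfs1"
HA_CLUSTER_NODES="tlxdmz-nfs1,tlxdmz-nfs2"
VIP_tlxdmz-nfs1="192.168.22.33"
VIP_tlxdmz-nfs2="192.168.22.34"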
2000 Jun 20
2
Multiple Services on one Server
Newbie question!
We are currently running a product called
TAS from Syntax Corporation and would like to move to Samba. I have reviewed
the documentation and cannot find how to set up multiple services on one
server. I tried using the netbios name = and the include statement to
bring in another smb.conf file, but I don't think I'm on the right track.
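For what it is worth, the usual recipe for serving several NetBIOS names from one box is netbios aliases plus a per-name include; a sketch (server names and config path are placeholders):
# smb.conf on the single physical server
[global]
   netbios name = SERVER1
   netbios aliases = SERVER2 SERVER3
   # %L expands to the NetBIOS name the client used to connect
   include = /etc/smb.conf.%L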
2017 Jun 02
2
Slow write times to gluster disk
Are you sure using conv=sync is what you want? I normally use conv=fdatasync; I'll look up the difference between the two and see if it affects your test.
-b
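For reference, the two flags do very different things: conv=fdatasync makes dd call fdatasync() once at the end, so the elapsed time includes flushing the data to disk, while conv=sync merely pads short input blocks with zeros and implies nothing about durability. Roughly (the mount path and sizes are placeholders):
# elapsed time includes one final flush to stable storage
dd if=/dev/zero of=/gluster/mnt/testfile bs=1M count=1024 conv=fdatasync
# pads partial input blocks with NULs; forces no flush at all
dd if=/dev/zero of=/gluster/mnt/testfile bs=1M count=1024 conv=sync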
----- Original Message -----
> From: "Pat Haley" <phaley at mit.edu>
> To: "Pranith Kumar Karampuri" <pkarampu at redhat.com>
> Cc: "Ravishankar N" <ravishankar at redhat.com>,
2017 Jun 23
2
Slow write times to gluster disk
On Fri, Jun 23, 2017 at 2:23 AM, Pat Haley <phaley at mit.edu> wrote:
>
> Hi,
>
> Today we experimented with some of the FUSE options that we found in the
> list.
>
> Changing these options had no effect:
>
> gluster volume set test-volume performance.cache-max-file-size 2MB
> gluster volume set test-volume performance.cache-refresh-timeout 4
> gluster
2017 Jun 20
2
Slow write times to gluster disk
Hi Ben,
Sorry this took so long, but we had a real-time forecasting exercise
last week and I could only get to this now.
Backend Hardware/OS:
* Much of the information on our back end system is included at the
top of
http://lists.gluster.org/pipermail/gluster-users/2017-April/030529.html
* The specific model of the hard disks is Seagate ENTERPRISE CAPACITY
V.4 6TB
2017 Jun 26
3
Slow write times to gluster disk
Hi All,
Decided to try another test of gluster mounted via FUSE vs gluster
mounted via NFS, this time using the software we run in production (i.e.
our ocean model writing a netCDF file).
gluster mounted via NFS: the run took 2.3 hr
gluster mounted via FUSE: the run took 44.2 hr
The only problem with using gluster mounted via NFS is that it does not
respect the group write permissions which
2017 Jun 12
0
Slow write times to gluster disk
Hi Guys,
I was wondering what our next steps should be to solve the slow write times.
Recently I was debugging a large code that writes a lot of output at
every time step. When I tried writing to our gluster disks, it was
taking over a day to do a single time step, whereas if I had the same
program (same hardware, same network) write to our NFS disk, the time per
time step was about 45 minutes.
2017 Jun 24
0
Slow write times to gluster disk
On Fri, Jun 23, 2017 at 9:10 AM, Pranith Kumar Karampuri <
pkarampu at redhat.com> wrote:
>
>
> On Fri, Jun 23, 2017 at 2:23 AM, Pat Haley <phaley at mit.edu> wrote:
>
>>
>> Hi,
>>
>> Today we experimented with some of the FUSE options that we found in the
>> list.
>>
>> Changing these options had no effect:
>>
>>
2017 Jun 22
0
Slow write times to gluster disk
Hi,
Today we experimented with some of the FUSE options that we found in the
list.
Changing these options had no effect:
gluster volume set test-volume performance.cache-max-file-size 2MB
gluster volume set test-volume performance.cache-refresh-timeout 4
gluster volume set test-volume performance.cache-size 256MB
gluster volume set test-volume performance.write-behind-window-size 4MB
gluster
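A note on verifying these experiments: options applied with volume set show up under "Options Reconfigured" in the volume info output, and a single option can be reverted with volume reset, e.g.:
# confirm which options are actually in effect
gluster volume info test-volume
# revert one option if an experiment makes things worse
gluster volume reset test-volume performance.write-behind-window-size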
2017 Jun 27
0
Slow write times to gluster disk
On Mon, Jun 26, 2017 at 7:40 PM, Pat Haley <phaley at mit.edu> wrote:
>
> Hi All,
>
> Decided to try another test of gluster mounted via FUSE vs gluster
> mounted via NFS, this time using the software we run in production (i.e.
> our ocean model writing a netCDF file).
>
> gluster mounted via NFS: the run took 2.3 hr
>
> gluster mounted via FUSE: the run took
2012 Nov 30
3
Cannot mount gluster volume
Hi,
We recently installed glusterfs 3.3.1. We have a 3-brick gluster system
running that was being mounted successfully earlier. Yesterday we experienced a
power outage, and now, after rebooting our systems, we are unable to mount
this gluster file system. On the gluster client, a df -h command shows 41TB
out of 55TB, while an ls command shows broken links for directories and
missing files.
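First-pass triage for this kind of post-outage state usually looks like the following (the volume name is a placeholder, and the heal command assumes a replicated volume):
gluster peer status               # are all nodes connected?
gluster volume status myvol       # are all brick processes up?
gluster volume heal myvol info    # pending or split-brain entries
tail -50 /var/log/glusterfs/etc-glusterfs-glusterd.vol.log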