Anoop C S
2018-Nov-13 08:02 UTC
[Gluster-users] does your samba work with 4.1.x (centos 7.5)
On Mon, 2018-11-12 at 20:04 -0500, Diego Remolina wrote:
> Hi Anoop,
>
> This is an overview of how to use Central files in Revit:
>
> https://revitpure.com/blog/how-to-use-central-and-local-files-in-revit
>
> Once a central file is created, other folders are also created in the
> location of the file, and these contain many other files.
>
> [root at ysmha02 vfsgluster]# ls -la
> total 385588
> drwxrws---.  4 dijuremo Staff      4096 Nov 12 19:03 .
> drwxr-xr-x. 21 root     root       4096 Nov  7 19:51 ..
> drwxrws---.  2 dijuremo Staff      4096 Nov 12 19:05 2017-07-06 CAPE CORAL CJDR_CENTRAL_R2017_backup
> -rw-rw----.  1 dijuremo Staff 394825728 Jul 23  2017 2017-07-06 CAPE CORAL CJDR_CENTRAL_R2017.rvt
> drwxrws---.  2 dijuremo Staff      4096 Nov 12 19:03 Revit_temp
>
> So I copied the file 2017-07-06 CAPE CORAL CJDR_CENTRAL_R2017.rvt to
> the network share:
>
> \\ysmserver\vfsgluster
>
> When I attempted to create a central file, it failed giving this error
> message:
>
> [error dialog screenshot not preserved in the plain-text archive]
>
> A simple ls -l of the _backup folder shows there is an existing file
> there called incrementtable.2108.dat:
>
> [root at ysmha02 vfsgluster]# ls -l 2017-07-06\ CAPE\ CORAL\ CJDR_CENTRAL_R2017_backup/incrementtable.2108.dat
> -rw-rw----. 1 dijuremo Staff 2357 Nov 12 19:13 2017-07-06 CAPE CORAL CJDR_CENTRAL_R2017_backup/incrementtable.2108.dat
>
> However, at this point things are not OK. The file is *not* a central
> file. If I close and open the file again, Revit hangs, usually going
> into the usual Windows "Not Responding" state. This can last for
> several minutes; I ended up closing the application via End Task after
> 5 minutes of waiting. Rather than double-clicking the file from the
> share, I also tried opening Revit first and then opening the file from
> Revit's Open dialog. That hangs the program as well.
> On one occasion, I tried to manually delete the folders
> (long_name_backup and Revit_temp) from Windows using File Explorer, to
> try to recreate the central file. The delete process then hung on one
> file, preview.1957.dat, for almost a minute, but it finally succeeded.
> This is not normal behavior.
>
> On the server, I can see this is a rather small file:
>
> [root at ysmha02 2017-07-06 CAPE CORAL CJDR_CENTRAL_R2017_backup]# ls -la
> total 11
> drwxrws---. 2 dijuremo Staff 4096 Nov 12 19:39 .
> drwxrws---. 4 dijuremo Staff 4096 Nov 12 19:03 ..
> -rw-rw----. 1 dijuremo Staff 2753 Nov 12 19:36 preview.1957.dat

Thanks for explaining the issue.

I understand that you are experiencing hangs while performing some
operations on files/directories in a GlusterFS volume shared to a
Windows client. For simplicity, can you attach the output of the
following commands:

# gluster volume info <volume>
# testparm -s --section-name global

> This is the test samba share exported using vfs object = glusterfs:
>
> [vfsgluster]
>         path = /vfsgluster
>         browseable = yes
>         create mask = 660
>         directory mask = 770
>         write list = @Staff
>         kernel share modes = No
>         vfs objects = glusterfs
>         glusterfs:loglevel = 7
>         glusterfs:logfile = /var/log/samba/glusterfs-vfsgluster.log
>         glusterfs:volume = export

Since you have mentioned the path as /vfsgluster, I assume you are
sharing a subdirectory under the root of the volume.

> Full smb.conf
> http://termbin.com/y4j0

I see the "clustering" parameter set to 'yes'. How many nodes are there
in the cluster? Out of those, how many are running as Samba and/or
Gluster nodes?

> /var/log/samba/glusterfs-vfsgluster.log
> http://termbin.com/5hdr
>
> Please let me know if there is any other information I can provide.

Are there any errors in /var/log/samba/log.<IP/hostname>?
(IP/hostname = the Windows client machine.)
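To illustrate the last question, a quick way to scan a per-client Samba log for error-level messages might look like the sketch below. The log path and its contents are made-up examples (the `log.<client>` naming assumes `log file = /var/log/samba/log.%m` in smb.conf); only the grep pattern is the point.

```shell
#!/bin/sh
# Sketch: scan a per-client Samba log for error-level messages.
# The sample log below is fabricated for illustration; in practice
# point LOG at /var/log/samba/log.<IP/hostname> of the Windows client.
LOG=/tmp/log.10.0.1.50
cat > "$LOG" <<'EOF'
[2018/11/12 19:13:02.123456,  0] ../source3/smbd/open.c:1234(open_file)
  Error opening file: NT_STATUS_SHARING_VIOLATION
[2018/11/12 19:13:05.000000,  2] ../source3/smbd/close.c:99(close_file)
  closed file preview.1957.dat
EOF
# Keep only lines that look like errors or NT_STATUS failures:
grep -iE 'error|NT_STATUS' "$LOG"
```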
Diego Remolina
2018-Nov-13 12:50 UTC
[Gluster-users] does your samba work with 4.1.x (centos 7.5)
> Thanks for explaining the issue.
>
> I understand that you are experiencing hangs while performing some
> operations on files/directories in a GlusterFS volume shared to a
> Windows client. For simplicity, can you attach the output of the
> following commands:
>
> # gluster volume info <volume>
> # testparm -s --section-name global

# gluster v status export
Status of volume: export
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.0.1.7:/bricks/hdds/brick           49153     0          Y       2540
Brick 10.0.1.6:/bricks/hdds/brick           49153     0          Y       2800
Self-heal Daemon on localhost               N/A       N/A        Y       2912
Self-heal Daemon on 10.0.1.6                N/A       N/A        Y       3107
Self-heal Daemon on 10.0.1.5                N/A       N/A        Y       5877

Task Status of Volume export
------------------------------------------------------------------------------
There are no active volume tasks

# gluster volume info export

Volume Name: export
Type: Replicate
Volume ID: b4353b3f-6ef6-4813-819a-8e85e5a95cff
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 10.0.1.7:/bricks/hdds/brick
Brick2: 10.0.1.6:/bricks/hdds/brick
Options Reconfigured:
diagnostics.brick-log-level: INFO
diagnostics.client-log-level: INFO
performance.cache-max-file-size: 256MB
client.event-threads: 5
server.event-threads: 5
cluster.readdir-optimize: on
cluster.lookup-optimize: on
performance.io-cache: on
performance.io-thread-count: 64
nfs.disable: on
cluster.server-quorum-type: server
performance.cache-size: 10GB
server.allow-insecure: on
transport.address-family: inet
performance.cache-samba-metadata: on
features.cache-invalidation-timeout: 600
performance.md-cache-timeout: 600
features.cache-invalidation: on
performance.cache-invalidation: on
network.inode-lru-limit: 65536
performance.cache-min-file-size: 0
performance.stat-prefetch: on
cluster.server-quorum-ratio: 51%

I had already sent you the full smb.conf, so there is no need to run
testparm -s --section-name global; please see:
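When reviewing a long "Options Reconfigured" list like the one above, it can help to filter out just the md-cache / cache-invalidation settings that matter for Samba workloads. A minimal sketch, operating on a saved excerpt of the output quoted above (the /tmp path is an example; in practice you would pipe `gluster volume info export` directly into grep):

```shell
#!/bin/sh
# Sketch: filter a saved `gluster volume info` dump for the caching
# options relevant to SMB metadata performance.
INFO=/tmp/export-volinfo.txt
cat > "$INFO" <<'EOF'
Options Reconfigured:
performance.cache-samba-metadata: on
features.cache-invalidation-timeout: 600
performance.md-cache-timeout: 600
features.cache-invalidation: on
performance.cache-invalidation: on
performance.stat-prefetch: on
nfs.disable: on
EOF
# Keep only the Samba-relevant caching/invalidation options:
grep -E 'cache-samba-metadata|cache-invalidation|md-cache|stat-prefetch' "$INFO"
```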
http://termbin.com/y4j0

> > This is the test samba share exported using vfs object = glusterfs:
> >
> > [vfsgluster]
> >         path = /vfsgluster
> >         browseable = yes
> >         create mask = 660
> >         directory mask = 770
> >         write list = @Staff
> >         kernel share modes = No
> >         vfs objects = glusterfs
> >         glusterfs:loglevel = 7
> >         glusterfs:logfile = /var/log/samba/glusterfs-vfsgluster.log
> >         glusterfs:volume = export
>
> Since you have mentioned the path as /vfsgluster, I assume you are
> sharing a subdirectory under the root of the volume.

Yes, vfsgluster is a directory at the root of the export volume. The
volume is also currently mounted at /export so that the rest of the
files can be exported via Samba using FUSE mounts:

# mount | grep export
10.0.1.7:/export on /export type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,allow_other,max_read=131072)

# ls -ld /export/vfsgluster
drwxrws---. 4 dijuremo Staff 4096 Nov 12 20:24 /export/vfsgluster

> > Full smb.conf
> > http://termbin.com/y4j0
>
> I see the "clustering" parameter set to 'yes'. How many nodes are there
> in the cluster? Out of those, how many are running as Samba and/or
> Gluster nodes?

There are a total of 3 gluster peers, but only two have bricks. The
third is just present, not even configured as an arbiter. The two nodes
with bricks run CTDB and Samba.

> > /var/log/samba/glusterfs-vfsgluster.log
> > http://termbin.com/5hdr
> >
> > Please let me know if there is any other information I can provide.
>
> Are there any errors in /var/log/samba/log.<IP/hostname>?
> (IP/hostname = the Windows client machine.)

I do not currently have the log file directive enabled in smb.conf, so
I would have to enable it. Do you need me to repeat the process with it
enabled?

Diego
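[For reference, enabling per-client logs is normally just a couple of lines in the [global] section of smb.conf; the path and level below are illustrative, not taken from the poster's configuration:]

```
[global]
        # One log file per client machine; %m expands to the client's
        # NetBIOS name (or IP, depending on name resolution settings).
        log file = /var/log/samba/log.%m
        # Level 2-3 is usually enough for troubleshooting; 10 is a full
        # debug trace.
        log level = 3
        # Rotate each client log once it reaches ~5 MB.
        max log size = 5000
```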