similar to: no dentry for non-root inode

Displaying 20 results from an estimated 100 matches similar to: "no dentry for non-root inode"

2013 Feb 18
1
Directory metadata inconsistencies and missing output ("mismatched layout" and "no dentry for inode" errors)
Hi, I'm running into a rather strange and frustrating bug and wondering if anyone on the mailing list might have some insight into what might be causing it. I'm running a cluster of two dozen nodes, where the processing nodes are also the Gluster bricks (using the SLURM resource manager). Each node has the Gluster volumes mounted natively (not NFS). All nodes are using v3.2.7. Each job in the
2010 Jan 07
2
Random directories/files become unavailable after some time
Hello, I am using glusterfs v3.0.0 and having some problems with random directories/files. They work fine for some time (hours) and then suddenly become unavailable: # ls -lh ls: cannot access MyDir: No such file or directory total 107M d????????? ? ? ? ? ? MyDir ( long dir list, intentionally hidden ) In the logs I get a lot of messages like these: [2010-01-07
2010 Oct 27
1
Gluster 3.1 and NFS problem
Hi, I'm using Gluster 3.1 and after 2 days of working the NFS mount stops with the following error: [2010-10-27 14:59:54.687519] I [client-handshake.c:699:select_server_supported_programs] storage-client-0: Using Program GlusterFS-3.1.0, Num (1298437), Version (310) [2010-10-27 14:59:54.720898] I [client-handshake.c:535:client_setvolume_cbk] storage-client-0: Connected to 192.168.2.2:24009,
2013 Nov 09
2
Failed rebalance - lost files, inaccessible files, permission issues
I'm starting a new thread on this, because I have more concrete information than I did the first time around. The full rebalance log from the machine where I started the rebalance can be found at the following link. It is slightly redacted - one search/replace was made to replace an identifying word with REDACTED. https://dl.dropboxusercontent.com/u/97770508/mdfs-rebalance-redacted.zip
2011 Aug 01
1
[Gluster 3.2.1] Replication issues on a two-brick volume
Hello, I installed GlusterFS one month ago, and replication has many issues. First of all, our infrastructure: 2 storage arrays of 8 TB in replication mode... We have our backup files on these arrays, so 6 TB of data. I want to replicate the data onto the second storage array, so I use this command: # gluster volume rebalance REP_SVG migrate-data start And Gluster started to replicate; in 2 weeks
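For context, the rebalance workflow touched on in that post can be sketched as follows. This is an illustrative Gluster CLI transcript, not runnable without a live cluster; `REP_SVG` is the volume name from the post, and `migrate-data` is the 3.2-era rebalance mode the poster used:

```shell
# Sketch only: assumes a running Gluster 3.2.x cluster with a volume REP_SVG.
gluster volume rebalance REP_SVG migrate-data start   # kick off data migration
gluster volume rebalance REP_SVG status               # poll per-node progress
gluster volume rebalance REP_SVG stop                 # abort if it misbehaves
```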
2008 Dec 09
1
File uploaded to webDAV server on GlusterFS AFR - ends up without xattr!
Hello list. I'm testing GlusterFS AFR mode as a solution for implementing highly available webDAV file storage for our production environment. While doing performance tests I've noticed a strange behavior: files which are uploaded via a webDAV server end up without extended attributes, which removes the ability to self-heal. The setup is a simple testing environment with 2
2016 Jun 27
2
How to debug non-working roaming profiles on a Samba 4 AD setup?
Hi, thank you for your answer. > Are the 'File servers' joined to the domain ? Yes > Are the smb.conf files you posted complete No, they are abstracted ones, because they are very long > if not, can you post the complete ones, exactly as they are on the computers (you can sanitize them if you need to) Yes > Try taking a look here:
2016 Jun 27
2
How to debug non-working roaming profiles on a Samba 4 AD setup?
Hi, some months ago I was serving files and profiles using a Samba 3 PDC server (I will name it PDCSERV); these are some abstracts from smb.conf: PDCSERV:/etc/samba/smb.conf [general] logon path = \\%N\profile logon drive = U: logon home = \\%N\%U logon script = "logon.cmd" valid users = %S [homes] path =
2020 Feb 04
0
Re: PCI/GPU Passthrough with xen
this config does not work... why? <domain type='xen'> <name>marax.chao5.int</name> <uuid>72f8f7cf-d538-41cd-828a-9945b9157719</uuid> <memory unit='GiB'>32</memory> <currentMemory unit='GiB'>32</currentMemory> <vcpu placement='static'>16</vcpu> <os> <type
2016 Jun 28
0
How to debug non-working roaming profiles on a Samba 4 AD setup?
On 27/06/16 22:42, Thomas DEBESSE wrote: > Hi, thank you for your answer. > > > Are the 'File servers' joined to the domain ? > Yes > > > Are the smb.conf files you posted complete > No, they are abstracted ones, because they are very long > > > if not, can you post the complete ones, exactly as they are on the > computers (you can sanitize them if
2017 Jul 11
2
Gluster native mount is really slow compared to nfs
Hello, Here is the volume info as requested by Soumya: # gluster volume info www Volume Name: www Type: Replicate Volume ID: 5d64ee36-828a-41fa-adbf-75718b954aff Status: Started Snapshot Count: 0 Number of Bricks: 1 x 3 = 3 Transport-type: tcp Bricks: Brick1: 192.168.140.41:/gluster/www Brick2: 192.168.140.42:/gluster/www Brick3: 192.168.140.43:/gluster/www Options Reconfigured:
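A "Replicate, 1 x 3" volume like the one shown in that output would typically be built with commands along these lines. A sketch only, not runnable without three Gluster servers; the brick addresses and paths are taken from the post:

```shell
# Sketch: create, start, and inspect a 3-way replicated volume
# matching the "1 x 3 = 3" layout shown above.
gluster volume create www replica 3 \
  192.168.140.41:/gluster/www \
  192.168.140.42:/gluster/www \
  192.168.140.43:/gluster/www
gluster volume start www
gluster volume info www
```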
2020 Feb 06
1
Re: PCI/GPU Passthrough with xen
I know these are mostly gamers but they have a lot of experience doing PCI passthrough: https://discord.gg/du9ecG I have found them extremely helpful in the past doing libvirt PCI passthrough. *Paul O'Rorke* On 2020-02-05 10:13 a.m., Jim Fehlig wrote: > On 2/4/20 1:04 AM, Christoph wrote: >> this config does not work... why? > > Without more details, I don't know why
2008 Dec 10
3
AFR healing problem after returning one node.
I've got a configuration which, put simply, combines afr and unify - the servers export n[1-3]-brick[12] and n[1-3]-ns, and the client has this cluster configuration: volume afr-ns type cluster/afr subvolumes n1-ns n2-ns n3-ns option data-self-heal on option metadata-self-heal on option entry-self-heal on end-volume volume afr1 type cluster/afr subvolumes n1-brick2
2017 Jul 11
0
Gluster native mount is really slow compared to nfs
My standard response to someone needing filesystem performance for www traffic is generally, "you're doing it wrong". https://joejulian.name/blog/optimizing-web-performance-with-glusterfs/ That said, you might also look at these mount options: attribute-timeout, entry-timeout, negative-timeout (set to some large amount of time), and fopen-keep-cache. On 07/11/2017 07:48 AM, Jo
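The four FUSE mount options named in that reply could be applied roughly like this. The option names come from the post; the timeout values and the server/volume/mountpoint names are placeholders, not recommendations from the thread:

```shell
# Sketch: mount a Gluster volume with aggressive client-side caching.
# 600-second timeouts are illustrative values; tune for your workload.
mount -t glusterfs \
  -o attribute-timeout=600,entry-timeout=600,negative-timeout=600,fopen-keep-cache \
  192.168.140.41:/www /mnt/www
```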
2004 Jul 26
0
Migration NT4 PDC to Smb3/LDAP/TOOLS: A Success Procedure
Greetings, After a few weeks of trying, I figured out how to migrate from an NT4 PDC to Samba-3/LDAP/SMBLDAP-TOOLS, at least in my case. I will just explain my setup and my understanding of why it works and why it fails. I hope it is helpful to others who are in the same situation as I was. Basic Setup: OS: Fedora-2 (FC2) samba-3.0.3 that comes with FC2. openldap-2.1.29 that comes
2010 Apr 22
1
Transport endpoint not connected
Hey guys, I've recently implemented Gluster to share web content read-write between two servers. Version : glusterfs 3.0.4 built on Apr 19 2010 16:37:50 Fuse : 2.7.2-1ubuntu2.1 Platform : ubuntu 8.04LTS I used the following command to generate my configs: /usr/local/bin/glusterfs-volgen --name repstore1 --raid 1 10.10.130.11:/data/export
2017 Jul 11
2
Gluster native mount is really slow compared to nfs
Hello Joe, I really appreciate your feedback, but I already tried the opcache stuff (to not validate at all). It improves of course then, but not completely somehow. Still quite slow. I did not try the mount options yet, but I will now! With NFS (doesn't matter much, built-in version 3 or Ganesha version 4) I can even host the site perfectly fast without these extreme opcache settings.
2016 Jun 28
2
How to debug non-working roaming profiles on a Samba 4 AD setup?
> OK, I think your problem is that you are trying to run your AD domain as if it is still an NT4-style domain. This does not sound like a surprise to me. ;-) > with AD, you would add […] to each users object in AD. You can do this with ADUC or by creating an ldif file on the DC and then using ldbmodify to add it. Oh, yes, you're right, I had to do the same for the logon.cmd, I already
2017 Jul 11
0
Gluster native mount is really slow compared to nfs
On 07/11/2017 08:14 AM, Jo Goossens wrote: > RE: [Gluster-users] Gluster native mount is really slow compared to nfs > > Hello Joe, > > I really appreciate your feedback, but I already tried the opcache > stuff (to not validate at all). It improves of course then, but not > completely somehow. Still quite slow. > > I did not try the mount options yet, but I will now!
2017 Jul 06
2
Very slow performance on Sharded GlusterFS
Hi Krutika, I also did one more test. I re-created another volume (a single volume; the old one was destroyed and deleted), then ran 2 dd tests: one for 1GB, the other for 2GB. Both use a 32MB shard size and eager-lock off. Samples: sr:~# gluster volume profile testvol start Starting volume profile on testvol has been successful sr:~# dd if=/dev/zero of=/testvol/dtestfil0xb bs=1G count=1 1+0 records in 1+0
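The dd throughput test in that post can be reproduced in rough outline like this. The post writes 1 GB to a mounted sharded Gluster volume; this sketch writes a scaled-down 32 MiB file to a temporary path so it runs anywhere, and `TARGET` is a placeholder, not a path from the thread:

```shell
# Rough, scaled-down reproduction of the dd test above. Point TARGET at a
# file on a mounted Gluster volume to measure real volume throughput.
TARGET=${TARGET:-/tmp/dd-shard-test}
dd if=/dev/zero of="$TARGET" bs=1M count=32 conv=fsync   # write 32 MiB, flush to disk
stat -c %s "$TARGET"                                     # confirm the bytes written
rm -f "$TARGET"                                          # clean up
```

With `conv=fsync`, dd forces the data to stable storage before reporting its rate, which makes the reported throughput much closer to what the storage actually sustained.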