similar to: Other troubles

Displaying 20 results from an estimated 1000 matches similar to: "Other troubles"

2010 Feb 25
0
Nobody can log on from a trusted domain, EXCEPT my own account
Hello. One of those "strange problems" is here again. This time it's even stranger. I've set up a lab based on Mandriva 2010.0. I use winbind for authentication. I installed just ONE machine, then cloned it onto the others, changing IP and name and rejoining the domain. We have two main domains (PERSONALE and STUDENTI). Machines have to be joined to PERSONALE, but the majority of users are
2012 Feb 23
1
Error accessing others domains in forest
Hello all. After the last update (from winbind-3.5.3 and krb5-1.8.1 to winbind-3.5.10 and krb5-1.9.1) users from a trusted domain can't authenticate any more. Machines are joined to domain PERSONALE, and users from domain STUDENTI aren't recognized. Domains are handled by W2k8 or W2k8r2 (I have no control over these). The last lines from /var/log/samba/log.wb-STUDENTI report: [2012/02/23
2009 Dec 01
0
Mapping 'emails' to realms
Hello all. Still no luck with UPN logon. I think there's something missing in my krb5.conf, but I can't find WHAT. Our UPNs are in the form of email addresses (name.surnameX at unibo.it for people in PERSONALE and name.surnameX at studio.unibo.it for people in the STUDENTI domain). I could never make logon-by-UPN work, but SOMETIMES "wbinfo -n UPN" resolves to the right SID
2023 Mar 24
1
How to configure?
In glfsheal-Connection.log I see many lines like: [2023-03-13 23:04:40.241481 +0000] E [MSGID: 104021] [glfs-mgmt.c:586:glfs_mgmt_getspec_cbk] 0-gfapi: failed to get the volume file [{from server}, {errno=2}, {error=File o directory non esistente}] (errno 2, "No such file or directory") And *lots* of gfid-mismatch errors in glustershd.log. Couldn't find anything that would prevent heal from starting. :( Diego On 21/03/2023
2023 Mar 24
1
How to configure?
Can you check your volume file contents? Maybe it really can't find (or access) a specific volfile? Best Regards, Strahil Nikolov On Fri, Mar 24, 2023 at 8:07, Diego Zuccato <diego.zuccato at unibo.it> wrote: In glfsheal-Connection.log I see many lines like: [2023-03-13 23:04:40.241481 +0000] E [MSGID: 104021] [glfs-mgmt.c:586:glfs_mgmt_getspec_cbk] 0-gfapi: failed to get the
2023 Mar 24
1
How to configure?
There are 285 files in /var/lib/glusterd/vols/cluster_data ... including many files with names related to quorum bricks already moved to a different path (like cluster_data.client.clustor02.srv-quorum-00-d.vol, which should already have been replaced by cluster_data.clustor02.srv-bricks-00-q.vol -- and both vol files exist). Is there something I should check inside the volfiles? Diego On
2023 Mar 21
1
How to configure?
I have no clue. Have you checked for errors in the logs? Maybe you might find something useful. Best Regards, Strahil Nikolov On Tue, Mar 21, 2023 at 9:56, Diego Zuccato <diego.zuccato at unibo.it> wrote: Killed glfsheal, after a day there were 218 processes, then they got killed by OOM during the weekend. Now there are no processes active. Trying to run "heal info" reports
2023 Mar 21
1
How to configure?
Killed glfsheal; after a day there were 218 processes, then they got killed by the OOM killer during the weekend. Now there are no processes active. Trying to run "heal info" reports lots of files quite quickly but does not spawn any glfsheal process. And neither does restarting glusterd. Is there some way to selectively run glfsheal to fix one brick at a time? Diego On 21/03/2023 01:21,
2023 Mar 21
1
How to configure?
Theoretically it might help. If possible, try to resolve any pending heals. Best Regards, Strahil Nikolov On Thu, Mar 16, 2023 at 15:29, Diego Zuccato <diego.zuccato at unibo.it> wrote: In Debian stopping glusterd does not stop brick processes: to stop everything (and free the memory) I have to: systemctl stop glusterd → killall glusterfs{,d} → killall glfsheal → systemctl start
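The stop/start cycle described in the snippet above can be sketched as a small script (Debian with systemd assumed; `DRYRUN="echo"` makes it a harmless dry run that only prints the commands -- clear it and run as root to actually execute them):

```shell
#!/bin/sh
# Full stop/start cycle for Gluster on Debian, per the message above.
# DRYRUN="echo" turns every command into a print; set DRYRUN= to execute.
DRYRUN="echo"
$DRYRUN systemctl stop glusterd        # stops only the management daemon
$DRYRUN killall glusterfs glusterfsd   # brick/client processes keep running otherwise
$DRYRUN killall glfsheal               # leftover heal-info helper processes
$DRYRUN systemctl start glusterd
```

Note that `killall glusterfs{,d}` in the original message is just brace expansion for `killall glusterfs glusterfsd`.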
2023 Apr 23
1
How to configure?
After a lot of tests and unsuccessful searching, I decided to start from scratch: I'm going to ditch the old volume and create a new one. I have 3 servers with 30 12TB disks each. Since I'm going to start a new volume, would it be better to group the disks into 10 three-disk (or 6 five-disk) RAID-0 volumes to reduce the number of bricks? Redundancy would be given by replica 2 (still undecided
2012 Jul 30
1
'x' bit always set?
Hello all. Seems I can't find the root cause of $subj. When I store a file on my "home", it gets chmodded ugo+x ... My smb.conf is:
-8<--
[global]
	workgroup = PERSONALE
	realm = PERSONALE.EXAMPLE.COM
	server string = Local shares
	netbios name = STR00160-SAMBA
	security = ADS
	encrypt passwords = true
	password server =
2023 Mar 16
1
How to configure?
Can you restart the glusterd service (first check that it was not modified to kill the bricks)? Best Regards, Strahil Nikolov On Thu, Mar 16, 2023 at 8:26, Diego Zuccato <diego.zuccato at unibo.it> wrote: OOM is just a matter of time. Today mem use is up to 177G/187 and: # ps aux|grep glfsheal|wc -l 551 (well, one is actually the grep process, so "only" 550 glfsheal
2023 Mar 16
1
How to configure?
OOM is just a matter of time. Today mem use is up to 177G/187G and:
# ps aux | grep glfsheal | wc -l
551
(well, one is actually the grep process, so "only" 550 glfsheal processes). I'll take the last 5:
root 3266352 0.5 0.0 600292 93044 ? Sl 06:55 0:07 /usr/libexec/glusterfs/glfsheal cluster_data info-summary --xml
root 3267220 0.7 0.0 600292 91964 ?
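As the snippet notes, `ps aux | grep X | wc -l` counts the grep process itself. A small sketch of a count without that off-by-one (assumes procps-style `pgrep` is available):

```shell
#!/bin/sh
# pgrep matches on the process name, so the pipeline itself is not
# counted and no "minus the grep" correction is needed.
count=$(pgrep -c glfsheal)   # prints 0 (and exits non-zero) when none found
echo "glfsheal processes: ${count:-0}"
```

`pgrep -c` still prints `0` when nothing matches, so the echo always reports a number.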
2023 Mar 15
1
How to configure?
If you don't experience any OOM, you can focus on the heals. 284 glfsheal processes seems odd. Can you check the ppid for 2-3 randomly picked ones? ps -o ppid= <pid> Best Regards, Strahil Nikolov On Wed, Mar 15, 2023 at 9:54, Diego Zuccato <diego.zuccato at unibo.it> wrote: I enabled it yesterday and that greatly reduced memory pressure. Current volume info: -8<-- Volume
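The ppid check suggested above can be sketched as a loop over a few randomly picked PIDs (assumes procps `pgrep`/`ps` and coreutils `shuf`):

```shell
#!/bin/sh
# Print the parent PID of up to 3 randomly picked glfsheal processes,
# to see what keeps spawning them. Prints nothing if none are running.
for pid in $(pgrep glfsheal | shuf -n 3); do
    echo "pid=$pid ppid=$(ps -o ppid= -p "$pid" | tr -d ' ')"
done
```

If all the parents turn out to be the same process (e.g. glusterd or a monitoring agent), that is the spawner to investigate.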
2023 Mar 15
1
How to configure?
I enabled it yesterday and that greatly reduced memory pressure. Current volume info:
-8<--
Volume Name: cluster_data
Type: Distributed-Replicate
Volume ID: a8caaa90-d161-45bb-a68c-278263a8531a
Status: Started
Snapshot Count: 0
Number of Bricks: 45 x (2 + 1) = 135
Transport-type: tcp
Bricks:
Brick1: clustor00:/srv/bricks/00/d
Brick2: clustor01:/srv/bricks/00/d
Brick3: clustor02:/srv/bricks/00/q
2023 Mar 15
1
How to configure?
Do you use brick multiplexing? Best Regards, Strahil Nikolov On Tue, Mar 14, 2023 at 16:44, Diego Zuccato <diego.zuccato at unibo.it> wrote: Hello all. Our Gluster 9.6 cluster is showing increasing problems. Currently it's composed of 3 servers (2x Intel Xeon 4210 [20 cores dual thread, total 40 threads], 192GB RAM, 30x HGST HUH721212AL5200 [12TB]), configured in replica 3
2009 Nov 30
1
Disable local login check Winbind
Does anybody know how to stop local login and ssh from checking winbind? I'm seeing a delay when trying to log in through ssh when the Winbind cache is not populated yet. I'm not using Winbind for local logins, so I'd like to disable it there. In the past, when Winbind was having a problem, I could not log on to the box because it was checked in the login procedure. Winbind is used for AD
2024 Jan 18
1
Upgrade 10.4 -> 11.1 making problems
Since glusterd does not consider it a split brain, you can't solve it with the standard split-brain tools. I've found no way to resolve it except by manually handling one file at a time: completely unmanageable with thousands of files, having to juggle between the actual path on the brick and the metadata files! Previously I "fixed" it by: 1) moving all the data from the volume to a temp
2009 Nov 23
1
Samba 3.0.33/3.2.15 AD joined slow initial connect with LDAP backend
I'm hoping someone can help me with the following. I currently have 2 Samba fileservers, version 3.0.23d, joined to our corporate Active Directory. Clients are currently Windows XP. I've been asked to prepare a migration from XP to Windows 7. From testing it looks like Samba 3.0.23d is not compatible with Windows 7. Therefore I started testing with the latest version available on RHEL5, 3.0.33-3.1e.el5.
2011 Nov 30
1
Failing identification of users in trusted domains?
Hi all. I'm getting mad at this. I use winbind to authenticate users in multiple AD domains. The config worked well before upgrading from 3.5.3 to 3.5.10 in Mandriva. Now, if I run 'wbinfo -i user.name' (so using the joined domain PERSONALE) I get the correct info, but if I do 'wbinfo -i STUDENTI\\another.name' the answer is 'Could not get info for user