
Displaying 20 results from an estimated 25 matches for "heinonen".

2018 Jan 29
2
Stale locks on shards
On 29 Jan 2018 10:50 am, "Samuli Heinonen" <samppah at neutraali.net> wrote: Hi! Yes, thank you for asking. I found out this line in the production environment: lgetxattr("/tmp/zone2-ssd1-vmstor1.s6jvPu//.shard/f349ffbd-a423-4fb2-b83c-2d1d5e78e1fb.32", "glusterfs.clrlk.tinode.kblocked", 0x7f2d7c4379f0, 409...
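For readers skimming this result: the lgetxattr() call quoted above is what the thread found "gluster vol clear-locks" ends up issuing, namely a getxattr on a virtual "glusterfs.clrlk.tinode.kblocked" attribute through a temporary auxiliary mount. The sketch below reproduces only that call; the path and attribute name are copied from the excerpt, the buffer size and error handling are assumptions, and it only does something useful against a live GlusterFS FUSE mount.

    /* Minimal sketch of the call seen in the strace excerpt above.
     * Assumptions: the volume is still FUSE-mounted at the quoted /tmp
     * path and the shard file exists; the buffer size is arbitrary. */
    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/xattr.h>

    int main(void)
    {
        const char *path = "/tmp/zone2-ssd1-vmstor1.s6jvPu/"
                           ".shard/f349ffbd-a423-4fb2-b83c-2d1d5e78e1fb.32";
        char buf[4096];

        /* Reading this virtual xattr (type "inode", kind "blocked") asks
         * the bricks to drop blocked inode locks on that file. */
        ssize_t n = lgetxattr(path, "glusterfs.clrlk.tinode.kblocked",
                              buf, sizeof(buf));
        if (n < 0) {
            perror("lgetxattr");
            return 1;
        }
        printf("%.*s\n", (int)n, buf);
        return 0;
    }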
2018 Jan 25
2
Stale locks on shards
On Thu, Jan 25, 2018 at 1:49 PM, Samuli Heinonen <samppah at neutraali.net> wrote: > Pranith Kumar Karampuri wrote on 25.01.2018 07:09: > >> On Thu, Jan 25, 2018 at 2:27 AM, Samuli Heinonen >> <samppah at neutraali.net> wrote: >> >> Hi! >>> >>> Thank you very much for your help so far....
2018 Jan 25
2
Stale locks on shards
On Thu, Jan 25, 2018 at 2:27 AM, Samuli Heinonen <samppah at neutraali.net> wrote: > Hi! > > Thank you very much for your help so far. Could you please tell an example > command how to use aux-gid-mount to remove locks? "gluster vol clear-locks" > seems to mount volume by itself. > You are correct, sorry, this...
2018 Jan 29
0
Stale locks on shards
Pranith Kumar Karampuri wrote on 29.01.2018 07:32: > On 29 Jan 2018 10:50 am, "Samuli Heinonen" <samppah at neutraali.net> > wrote: > >> Hi! >> >> Yes, thank you for asking. I found out this line in the production >> environment: >> > lgetxattr("/tmp/zone2-ssd1-vmstor1.s6jvPu//.shard/f349ffbd-a423-4fb2-b83c-2d1d5e78e1fb.32", >...
2018 Jan 29
0
Stale locks on shards
Hi, Did you find the command from strace? On 25 Jan 2018 1:52 pm, "Pranith Kumar Karampuri" <pkarampu at redhat.com> wrote: > > > On Thu, Jan 25, 2018 at 1:49 PM, Samuli Heinonen <samppah at neutraali.net> > wrote: > >> Pranith Kumar Karampuri wrote on 25.01.2018 07:09: >> >>> On Thu, Jan 25, 2018 at 2:27 AM, Samuli Heinonen >>> <samppah at neutraali.net> wrote: >>> >>> Hi! >>>> >>>>...
2018 Jan 25
0
Stale locks on shards
Pranith Kumar Karampuri wrote on 25.01.2018 07:09: > On Thu, Jan 25, 2018 at 2:27 AM, Samuli Heinonen > <samppah at neutraali.net> wrote: > >> Hi! >> >> Thank you very much for your help so far. Could you please tell an >> example command how to use aux-gid-mount to remove locks? "gluster >> vol clear-locks" seems to mount volume by itself. >...
2018 Jan 23
2
Stale locks on shards
On Tue, Jan 23, 2018 at 1:38 PM, Samuli Heinonen <samppah at neutraali.net> wrote: > Pranith Kumar Karampuri wrote on 23.01.2018 09:34: > >> On Mon, Jan 22, 2018 at 12:33 AM, Samuli Heinonen >> <samppah at neutraali.net> wrote: >> >> Hi again, >>> >>> here is more information regarding...
2018 Jan 24
0
Stale locks on shards
Hi! Thank you very much for your help so far. Could you please tell an example command how to use aux-gid-mount to remove locks? "gluster vol clear-locks" seems to mount volume by itself. Best regards, Samuli Heinonen > Pranith Kumar Karampuri <mailto:pkarampu at redhat.com> > 23 January 2018 at 10.30 > > > On Tue, Jan 23, 2018 at 1:38 PM, Samuli Heinonen > <samppah at neutraali.net <mailto:samppah at neutraali.net>> wrote: > > Pranith Kumar Karampuri wrote on 23....
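Since the thread keeps circling around how to invoke the lock-clearing command, a sketch of the generic form may help. It follows the syntax documented in the GlusterFS admin guide, roughly "gluster volume clear-locks <VOLNAME> <path> kind {blocked|granted|all} {inode [range] | entry [basename] | posix [range]}", and is not the answer given in this thread; the volume name and shard path below are guesses taken from the excerpts, not verified values. A C wrapper is used only to keep the example self-contained and runnable; in practice one would run the command directly from a shell.

    /* Thin wrapper that just invokes the documented clear-locks CLI form.
     * Both arguments below are placeholders inferred from the excerpts. */
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        char *argv[] = {
            "gluster", "volume", "clear-locks",
            "zone2-ssd1-vmstor1",                               /* <VOLNAME>, assumed from the aux-mount path */
            "/.shard/f349ffbd-a423-4fb2-b83c-2d1d5e78e1fb.32",  /* <path> relative to the volume root, assumed */
            "kind", "blocked", "inode",
            NULL
        };
        execvp(argv[0], argv);   /* only returns if the gluster CLI could not be started */
        perror("execvp gluster");
        return 1;
    }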
2018 Jan 23
3
Stale locks on shards
On Mon, Jan 22, 2018 at 12:33 AM, Samuli Heinonen <samppah at neutraali.net> wrote: > Hi again, > > here is more information regarding issue described earlier > > It looks like self healing is stuck. According to "heal statistics" crawl > began at Sat Jan 20 12:56:19 2018 and it's still going on (It's aro...
2018 Jan 21
0
Stale locks on shards
...d1-vmstor1-server: 37374187: SETATTR <gfid:a52055bd-e2e9-42dd-92a3-e96b693bcafe> (a52055bd-e2e9-42dd-92a3-e96b693bcafe) ==> (Operation not permitted) [Operation not permitted] Is there any way to force self heal to stop? Any help would be very much appreciated :) Best regards, Samuli Heinonen > Samuli Heinonen <mailto:samppah at neutraali.net> > 20 January 2018 at 21.57 > Hi all! > > One hypervisor in our virtualization environment crashed and now some > of the VM images cannot be accessed. After investigation we found out > that there were lots of image...
2018 Jan 20
3
Stale locks on shards
...unt: 64 performance.cache-size: 2048MB performance.write-behind-window-size: 256MB server.allow-insecure: on cluster.ensure-durability: off config.transport: rdma server.outstanding-rpc-limit: 512 diagnostics.brick-log-level: INFO Any recommendations how to advance from here? Best regards, Samuli Heinonen
2018 Jan 23
0
Stale locks on shards
Pranith Kumar Karampuri wrote on 23.01.2018 09:34: > On Mon, Jan 22, 2018 at 12:33 AM, Samuli Heinonen > <samppah at neutraali.net> wrote: > >> Hi again, >> >> here is more information regarding issue described earlier >> >> It looks like self healing is stuck. According to "heal statistics" >> crawl began at Sat Jan 20 12:56:19 2018 and i...
2009 Mar 21
1
Windows server 2003 SP2, SFU 3.5 and Samba 3.0.28
...nfo? Googling has revealed that the two possibilities are "sfu" and "rfc2307". But I haven't been able to find any decent documentation about when sfu should be used and when rfc2307. Are these somehow related to what SFU version is in use at the AD side? - Regards, Petteri Heinonen log.winbindd: [2009/03/21 22:59:04, 6] nsswitch/winbindd.c:new_connection(628) accepted socket 18 [2009/03/21 22:59:04, 3] nsswitch/winbindd_misc.c:winbindd_interface_version(491) [ 1876]: request interface version [2009/03/21 22:59:04, 3] nsswitch/winbindd_misc.c:winbindd_priv_pipe_dir(524)...
2011 Apr 07
3
Ubuntu Execution of '/etc/puppet/etckeeper-commit-pre' returned 1:
Hi, I have just put puppet onto a new Ubuntu install and it ran a couple of times, but now I get Execution of '/etc/puppet/etckeeper-commit-pre' returned 1: whenever puppetd runs. Grepping on etckeeper-commit turns up lots of posts to ubuntu and debian forums about changes made in February. Does anyone know what the story is? I've ended up with a screwed pam
2009 Oct 16
2
nss_winbind / offline logon
...) make nsswitch understand that I do not want it to query anything from winbind if a user is found in local files, b) make winbind at least somewhat responsive in the situation where it has to start up without a network connection. Any help or pointers would be greatly appreciated. Regards, Petteri Heinonen
2013 Mar 20
2
Geo-replication broken in 3.4 alpha2?
...e Options Reconfigured: performance.stat-prefetch: off performance.io-cache: off performance.read-ahead: off performance.quick-read: off network.remote-dio: enable geo-replication.indexing: on storage.owner-uid: 36 storage.owner-gid: 36 network.ping-timeout: 10 nfs.disable: on Best regards, Samuli Heinonen
2009 Feb 11
1
zfs crashes with nfs and snapshots
Hi folks, I just saw one of my FreeBSD servers (7.0-stable of June 2008) crash while trying to access the .zfs snapshot directory via an NFS client machine. The server got a page fault caused by the nfsd process. It wasn't even able to dump the kernel image anymore. After resetting the machine, it first appeared to come back fine, but shortly before the login prompt the nfsd made it crash hard again
2002 Sep 23
0
Latest on Abacus DPMI problem.
I contacted Mr. Heinonen, and got the response below. So, he says the problem is that it is 16-bit DPMI, which is rare, and the real problem is that DOSVM has a bug which allows two simultaneous interrupts. I assured him I was not asking anyone to work on the problem at all. I needed to get a handle on it; I had thought it wa...
2009 Jan 15
2
[patch] libc Berkeley DB information leak
Hi, FreeBSD libc Berkeley DB can leak sensitive information to database files. The problem is that it writes uninitialized memory obtained from malloc(3) to database files. You can use this simple test program to reproduce the behavior: http://www.saunalahti.fi/~jh3/dbtest.c Run the program and see the resulting test.db file which will contain a sequence of 0xa5 bytes directly from malloc(3).
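The linked dbtest.c is not reproduced here; as a rough, independent illustration of the kind of usage the report describes, the sketch below writes a single record through the 1.85-style dbopen() interface in libc and closes the file, after which an affected system would show the 0xa5 junk-fill pattern in the resulting test.db. The file name, record contents, and flags are arbitrary assumptions.

    /* Independent sketch, not the author's dbtest.c (see the URL above).
     * Uses the BSD libc Berkeley DB 1.85 API (<db.h>); on Linux a libdb1
     * compatibility package would be needed instead. */
    #include <sys/types.h>
    #include <fcntl.h>
    #include <limits.h>
    #include <stdio.h>
    #include <db.h>

    int main(void)
    {
        DB *db = dbopen("test.db", O_CREAT | O_RDWR | O_TRUNC, 0644,
                        DB_HASH, NULL);
        if (db == NULL) {
            perror("dbopen");
            return 1;
        }

        DBT key  = { (void *)"key",   3 };
        DBT data = { (void *)"value", 5 };
        if (db->put(db, &key, &data, 0) != 0)
            perror("put");

        db->close(db);
        /* Inspect test.db afterwards, e.g. with hexdump -C, for 0xa5 filler. */
        return 0;
    }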
2009 Mar 23
1
Internal Error Signal 11 (Samba 3.2.3)
...<DOMAIN> file (consequently, logins are failing also of course). Samba version is 3.2.3. Used config and log file below. Any help would be much appreciated. With some help, I guess I should also be able to use gdb to further study the coredump, if that's what is needed. -Regards, Petteri Heinonen Config: [global] # general part security = ADS interfaces = eth0 realm = DOMAIN.FI workgroup = DOMAIN netbios name = PJHVMWARE1 domain master = no local master = no preferred master = no server string = %h encrypt passwords = yes wins support = no wins serve...