search for: saruman

Displaying 20 results from an estimated 30 matches for "saruman".

2018 Oct 19
2
systemd automount of cifs share hangs
> > But if I start the automount unit and ls the mount point, the shell hangs
> and eventually, a long time later (I haven't timed it, maybe an hour), I
> eventually get a prompt again. Control-C won't interrupt it. I can still
> ssh in and get another session so it's just the process that's accessing
> the mount point that hangs.
> I don't have a
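
For reference, a minimal sketch of the kind of systemd unit pair being discussed. The share path, mount point, unit names, and options below are assumptions for illustration, not taken from the thread:

# /etc/systemd/system/mnt-nas1.mount   (name must match the mount path)
[Unit]
Description=NAS1 share 1

[Mount]
What=//nas1/share1
Where=/mnt/nas1
Type=cifs
Options=credentials=/etc/cifs-credentials,vers=3.0

# /etc/systemd/system/mnt-nas1.automount
[Unit]
Description=Automount NAS1 share 1

[Automount]
Where=/mnt/nas1
TimeoutIdleSec=600

[Install]
WantedBy=multi-user.target

With the .automount unit enabled, the .mount unit is started on first access to /mnt/nas1, which is exactly the point at which the hang described above occurs.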
2018 Oct 26
0
systemd automount of cifs share hangs
...and it also hung. I logged in yet another session and tried to ls the mountpoint and that hung completing the directory name. Here's what I see in /var/log/messages when dovecot hangs and I manually mount the shares from another shell session. SELinux is in permissive mode.
Oct 26 09:11:39 saruman systemd: Mounting NAS1 share 1...
Oct 26 09:11:39 saruman systemd: Failed to expire automount, ignoring: No such device
Oct 26 09:11:39 saruman systemd: Mounted NAS1 share 1.
Oct 26 09:11:45 saruman kernel: INFO: task dovecot:831 blocked for more than 120 seconds.
Oct 26 09:11:45 saruman kernel:...
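
Not from the thread, but when the kernel reports a task "blocked for more than 120 seconds", its view of the hang can be captured with generic Linux commands like these (the PID comes from the log above; the unit names follow the hypothetical sketch earlier):

# stack of the hung process (dovecot was PID 831 in the log above)
cat /proc/831/stack
# dump all blocked (D-state) tasks to the kernel log via sysrq
echo w > /proc/sysrq-trigger
dmesg | tail -n 100
# state of the units involved
systemctl status mnt-nas1.automount mnt-nas1.mount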
2018 Oct 26
2
systemd automount of cifs share hangs
...et another session and tried to ls
> the mountpoint and that hung completing the directory name.
>
> Here's what I see in /var/log/messages when dovecot hangs and I manually
> mount the shares from another shell session. SELinux is in permissive
> mode.
>
> Oct 26 09:11:39 saruman systemd: Mounting NAS1 share 1...
> Oct 26 09:11:39 saruman systemd: Failed to expire automount, ignoring: No
> such device
> Oct 26 09:11:39 saruman systemd: Mounted NAS1 share 1.
> Oct 26 09:11:45 saruman kernel: INFO: task dovecot:831 blocked for more
> than 120 seconds.
> Oct 26 09:11:4...
2018 Jul 11
3
LMTP crashing heavily for my 2.2.36 installation
Hi, I'm running 2.2.36 (as provided by openSUSE in their server:mail repository) and at least on one of my systems LMTP is crashing regularly on certain messages (apparently a lot of them). Sometimes (but not always) a backtrace is posted to the logs:
2018-07-11T07:34:56.741848+02:00 saruman dovecot: lmtp(14690): Fatal: master: service(lmtp): child 14690 killed with signal 11 (core dumps disabled)
2018-07-11T07:34:56.820474+02:00 saruman dovecot: lmtp(an007498): Panic: file imap-bodystructure.c: line 116 (part_write_body_multipart): assertion failed: (part->data != NULL)
2018-07-11...
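
The log says "core dumps disabled"; a rough sketch of enabling them on a typical Linux system so the crash produces a usable core (paths and sysctl values are generic assumptions, not dovecot-specific instructions from this thread):

# allow core files for the crashing service
ulimit -c unlimited                          # processes started from this shell
sysctl -w fs.suid_dumpable=2                 # allow priv-dropping daemons to dump
sysctl -w kernel.core_pattern=/var/core/%e-%p.core
mkdir -p /var/core && chmod 1777 /var/core

For a systemd-managed dovecot, LimitCORE=infinity in a drop-in for dovecot.service has the same effect as the ulimit call.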
2005 Apr 09
1
Samba Dynamic DNS DHCP , client won't register
...date { univerzal; };
};
include "/etc/rndc.key";
--------------------------------------------------------------------------------
db.domainame
--------------------------------------------------------------------------------
;
; BIND data file for helpserver domain
;
$TTL 86400
@ IN SOA saruman.domainame.co.yu. root.saruman.domainname.co.yu. (
        2005040901 ; Serial
        28800      ; Refresh
        7200       ; Retry
        604800     ; Expire
        86400      ; Negative Cache ttl
)
@ IN NS saruman.domainname.co.yu.
;-----------------------------------------------------------;
saruman IN A 192.168.10.2 ;
;------------------------------...
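
For context, the DHCP side normally has to be told to send the dynamic updates as well; a minimal ISC dhcpd sketch of that piece (the key name and secret are placeholders, only the zone names and server address are taken from the configuration above):

ddns-update-style interim;
ddns-updates on;

key "ddns-key" {
    algorithm hmac-md5;
    secret "base64secret==";
}

zone domainname.co.yu. {
    primary 192.168.10.2;
    key "ddns-key";
}

zone 10.168.192.in-addr.arpa. {
    primary 192.168.10.2;
    key "ddns-key";
}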
2018 Jul 11
4
LMTP crashing heavily for my 2.2.36 installation
...d by openSUSE in their server:mail
> repository) and at least at one of my systems LMTP is crashing regularly on
> certain messages (apparently a lot of them).
> >
> > Sometimes (but not always a backtrace is posted to the logs:
> >
> > 2018-07-11T07:34:56.741848+02:00 saruman dovecot: lmtp(14690): Fatal:
> master: service(lmtp): child 14690 killed with signal 11 (core dumps
> disabled)
> > 2018-07-11T07:34:56.820474+02:00 saruman dovecot: lmtp(an007498):
> Panic: file imap-bodystructure.c: line 116 (part_write_body_multipart):
> assertion failed: (part...
2004 Nov 16
2
RE: basic encoder help
>I'm currently facing the same problem.
>I added the libFLAC++ libraries to my MSVC application.
>I implemented the same quality levels (0-8) as used in the FLAC frontend application.
>But the resulting files are remarkably different between my application and the FLAC frontend
>(although using the same settings).
It did turn out to be something in my byte ordering in the end
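
Since the fix turned out to be byte ordering, here is a minimal sketch of the conversion that usually goes wrong: libFLAC expects FLAC__int32 samples, so little-endian 16-bit PCM has to be assembled and sign-extended before process_interleaved(). Modern libFLAC++ API assumed (set_compression_level and the init() return value differ in the 2004-era releases); encode_pcm16le and its parameters are illustrative, not from the thread:

#include <FLAC++/encoder.h>
#include <cstdint>
#include <vector>

bool encode_pcm16le(const std::uint8_t *pcm, std::size_t nbytes,
                    unsigned channels, unsigned rate, const char *outfile)
{
    FLAC::Encoder::File enc;
    enc.set_channels(channels);
    enc.set_bits_per_sample(16);
    enc.set_sample_rate(rate);
    enc.set_compression_level(8);                 // roughly the frontend's "quality 8"
    if (enc.init(outfile) != FLAC__STREAM_ENCODER_INIT_STATUS_OK)
        return false;

    const std::size_t nsamples = nbytes / 2;      // 16-bit samples across all channels
    std::vector<FLAC__int32> buf(nsamples);
    for (std::size_t i = 0; i < nsamples; ++i)    // low byte first, then sign-extend
        buf[i] = static_cast<std::int16_t>(pcm[2 * i] | (pcm[2 * i + 1] << 8));

    bool ok = enc.process_interleaved(buf.data(), nsamples / channels);
    return enc.finish() && ok;
}

Swapping the two bytes, or zero-extending instead of sign-extending, still encodes without errors, which is why the symptom is just "remarkably different" output files rather than a failure.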
2018 Jul 12
0
LMTP crashing heavily for my 2.2.36 installation (and now with 2.3.2.1)
Hi, I will try to create a coredump later but now I see version 2.3.2.1 also crashing in LMTP :-(
2018-07-12T10:09:57.336062+02:00 saruman dovecot: lmtp(an007498)<11814><zrPDEdUMR1smLgAAQ/KzDw>: Fatal: master: service(lmtp): child 11814 killed with signal 6 (core dumps disabled - https://dovecot.org/bugreport.html#coredumps)
2018-07-12T10:09:57.382925+02:00 saruman dovecot: lmtp(an007498)<11819><fqw2E9UMR1srLgAAQ/...
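
Once cores are enabled (see the earlier sketch), a backtrace for the lmtp crash above would typically be pulled out roughly like this. The binary path varies by distribution, so /usr/lib/dovecot/lmtp and the core file name are assumptions:

gdb /usr/lib/dovecot/lmtp /var/core/lmtp-11814.core
(gdb) bt full
(gdb) quit

Installing the matching dovecot debuginfo/dbg package first makes the backtrace readable.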
2004 Dec 29
0
[Fwd: mounting as a regular user]
...omir/wwwroot
#
# Necromancer (Linux JSP/Oracle Server)
#
mount -t smbfs -o username=$user,password=$pass,ro //necromancer/webroot-tomcat /backup/smb-mounts/necromancer/webroot-tomcat
mount -t smbfs -o username=$user,password=$pass,ro //necromancer/oracle$ /backup/smb-mounts/necromancer/oracle
#
# Saruman (Win2K Exchange Server)
#
mount -t smbfs -o username=$user,password=$pass,ro //saruman/exchange-backup$ /backup/smb-mounts/saruman/exchange-backup
#
I can provide smb.conf files if needed for the FreeBSD and linux systems. Also for the local system I am trying to mount these shares too (Sauron)...
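
Since the subject is mounting as a regular user, one common approach is an fstab entry with the user option so the same share can be mounted without root. A sketch using one of the shares above (the options and credentials path are assumptions, not from the message):

# /etc/fstab
//saruman/exchange-backup$  /backup/smb-mounts/saruman/exchange-backup  smbfs  noauto,user,ro,credentials=/home/backup/.smbcreds  0  0

# then, as the unprivileged user:
mount /backup/smb-mounts/saruman/exchange-backup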
2018 Feb 13
2
Failover problems with gluster 3.8.8-1 (latest Debian stable)
...built on Jan 11 2017 14:07:11
Repository revision: git://git.gluster.com/glusterfs.git

# gluster volume status
Status of volume: palantir
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick saruman:/var/local/brick0/data        49154     0          Y       10690
Brick gandalf:/var/local/brick0/data        49155     0          Y       18732
Brick azathoth:/var/local/brick0/data       49155     0          Y       9507
Brick yog-sothoth:/var/local/brick0/data    49153     0          Y       39559
Brick cthulhu:/var/lo...
2018 Feb 27
2
Quorum in distributed-replicate volume
...ve the output of "gluster volume info <volname>"
> and which brick is of what size.

Volume Name: palantir
Type: Distributed-Replicate
Volume ID: 48379a50-3210-41b4-9a77-ae143c8bcac0
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x 2 = 6
Transport-type: tcp
Bricks:
Brick1: saruman:/var/local/brick0/data
Brick2: gandalf:/var/local/brick0/data
Brick3: azathoth:/var/local/brick0/data
Brick4: yog-sothoth:/var/local/brick0/data
Brick5: cthulhu:/var/local/brick0/data
Brick6: mordiggian:/var/local/brick0/data
Options Reconfigured:
features.scrub: Inactive
features.bitrot: off
trans...
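
For readers unfamiliar with how a "3 x 2 = 6" layout maps onto those bricks: consecutive bricks on the create command form each replica pair, so a volume like this would typically have been created along these lines (a sketch, not the command actually used here):

gluster volume create palantir replica 2 transport tcp \
    saruman:/var/local/brick0/data    gandalf:/var/local/brick0/data \
    azathoth:/var/local/brick0/data   yog-sothoth:/var/local/brick0/data \
    cthulhu:/var/local/brick0/data    mordiggian:/var/local/brick0/data
# replica pairs: (saruman, gandalf), (azathoth, yog-sothoth), (cthulhu, mordiggian)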
2018 Feb 15
0
Failover problems with gluster 3.8.8-1 (latest Debian stable)
...itory revision: git://git.gluster.com/glusterfs.git
>
> # gluster volume status
> Status of volume: palantir
> Gluster process                             TCP Port  RDMA Port  Online  Pid
> ------------------------------------------------------------------------------
> Brick saruman:/var/local/brick0/data        49154     0          Y       10690
> Brick gandalf:/var/local/brick0/data        49155     0          Y       18732
> Brick azathoth:/var/local/brick0/data       49155     0          Y       9507
> Brick yog-sothoth:/var/local/brick0/data    49153     0...
2018 Jul 11
0
LMTP crashing heavily for my 2.2.36 installation
...m running 2.2.36 (as provided by openSUSE in their server:mail repository) and at least at one of my systems LMTP is crashing regularly on certain messages (apparently a lot of them).
>
> Sometimes (but not always a backtrace is posted to the logs:
>
> 2018-07-11T07:34:56.741848+02:00 saruman dovecot: lmtp(14690): Fatal: master: service(lmtp): child 14690 killed with signal 11 (core dumps disabled)
> 2018-07-11T07:34:56.820474+02:00 saruman dovecot: lmtp(an007498): Panic: file imap-bodystructure.c: line 116 (part_write_body_multipart): assertion failed: (part->data != NULL)
...
2018 Jul 11
0
LMTP crashing heavily for my 2.2.36 installation
...erver:mail
>> repository) and at least at one of my systems LMTP is crashing regularly on
>> certain messages (apparently a lot of them).
>> >
>> > Sometimes (but not always a backtrace is posted to the logs:
>> >
>> > 2018-07-11T07:34:56.741848+02:00 saruman dovecot: lmtp(14690): Fatal:
>> master: service(lmtp): child 14690 killed with signal 11 (core dumps
>> disabled)
>> > 2018-07-11T07:34:56.820474+02:00 saruman dovecot: lmtp(an007498):
>> Panic: file imap-bodystructure.c: line 116 (part_write_body_multipart):
>> ass...
2018 Jul 12
0
LMTP crashing heavily for my 2.2.36 installation
...repository) and at least at one of my systems LMTP is crashing regularly on
>>> certain messages (apparently a lot of them).
>>> >
>>> > Sometimes (but not always a backtrace is posted to the logs:
>>> >
>>> > 2018-07-11T07:34:56.741848+02:00 saruman dovecot: lmtp(14690): Fatal:
>>> master: service(lmtp): child 14690 killed with signal 11 (core dumps
>>> disabled)
>>> > 2018-07-11T07:34:56.820474+02:00 saruman dovecot: lmtp(an007498):
>>> Panic: file imap-bodystructure.c: line 116 (part_write_body_multipar...
2004 Sep 17
1
linking against the static libraries
We would like to use the static libraries in our commercial software. This software is an MFC application which is statically linked to the MFC libraries. We added LibFLAC_static.lib and LibFLAC++_static.lib but this causes an error when trying to run our application ('A required file was missing MSVCRTXX.DLL'). After looking in the Project Settings of the FLAC source, I found that the
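
Not stated in the message, but the usual cause of a missing MSVCRT*.DLL when linking static libraries is a C runtime mismatch: the static FLAC libraries were built against the DLL runtime (/MD) while the application uses the static runtime (/MT), or vice versa. The flags below are standard MSVC options, shown only to illustrate the difference:

cl /MT myapp.cpp libFLAC_static.lib libFLAC++_static.lib   # static CRT, no MSVCRT dependency
cl /MD myapp.cpp libFLAC_static.lib libFLAC++_static.lib   # DLL CRT, needs MSVCR*.DLL at run time

Rebuilding the FLAC static libraries with the same runtime setting as the application is the usual fix.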
2004 Nov 05
1
RE: basic encoder help
I'm currently facing the same problem. I added the libFLAC++ libraries to my MSVC application. I implemented the same quality levels (0-8) as used in the FLAC frontend application. But the resulting files are remarkably different between my application and the FLAC frontend (although using the same settings). For example:

FLAC frontend (quality = 8)
--------------------------------
2018 Feb 27
2
Quorum in distributed-replicate volume
...
> No it doesn't matter as long as the bricks of same replica subvol are not
> on the same nodes.

OK, great. So basically just install the gluster server on the new node(s), do a peer probe to add them to the cluster, and then

gluster volume create palantir replica 3 arbiter 1 [saruman brick] [gandalf brick] [arbiter 1] [azathoth brick] [yog-sothoth brick] [arbiter 2] [cthulhu brick] [mordiggian brick] [arbiter 3]

Or is there more to it than that?

-- 
Dave Sherohman
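
One hedged note on the command sketched above: on an already-existing replica 2 volume, arbiter bricks are normally introduced with add-brick rather than by re-creating the volume, roughly like this (the arbiter host names and paths are placeholders):

gluster volume add-brick palantir replica 3 arbiter 1 \
    arb1:/var/local/arbiter/data arb2:/var/local/arbiter/data arb3:/var/local/arbiter/data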
2018 Feb 27
0
Quorum in distributed-replicate volume
...
> > and which brick is of what size.
>
> Volume Name: palantir
> Type: Distributed-Replicate
> Volume ID: 48379a50-3210-41b4-9a77-ae143c8bcac0
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 3 x 2 = 6
> Transport-type: tcp
> Bricks:
> Brick1: saruman:/var/local/brick0/data
> Brick2: gandalf:/var/local/brick0/data
> Brick3: azathoth:/var/local/brick0/data
> Brick4: yog-sothoth:/var/local/brick0/data
> Brick5: cthulhu:/var/local/brick0/data
> Brick6: mordiggian:/var/local/brick0/data
> Options Reconfigured:
> features.scrub:...
2018 Feb 27
0
Quorum in distributed-replicate volume
On Mon, Feb 26, 2018 at 6:14 PM, Dave Sherohman <dave at sherohman.org> wrote: > On Mon, Feb 26, 2018 at 05:45:27PM +0530, Karthik Subrahmanya wrote: > > > "In a replica 2 volume... If we set the client-quorum option to > > > auto, then the first brick must always be up, irrespective of the > > > status of the second brick. If only the second brick is up,