search for: 4581

Displaying 20 results from an estimated 37 matches for "4581".

2003 Dec 07
2
Incoming IAX2 problems with NuFone
I've been using NuFone with Asterisk for a while, but I've started seeing this error with incoming calls: NOTICE[114696]: File chan_iax2.c, Line 4581 (socket_read): Rejected connect attempt from 216.234.116.189, requested/capability 0x4/0x4 incompatible with our capability 0xff03. Outgoing works just fine, but I can't get incoming to work at all. Any ideas? I googled for the error, but I couldn't find anything. David -- David Cou...
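The rejection in that NOTICE is a plain bitwise check: the caller's requested format mask (0x4) shares no set bits with the receiving server's capability mask (0xff03), so the two ends have no codec in common. A minimal sketch of that check in shell arithmetic (variable names are illustrative, not Asterisk's own):

```shell
# Values taken from the NOTICE line above.
requested=0x4        # codec bitmask offered by the incoming caller
capability=0xff03    # codec bitmask we accept (driven by allow=/disallow= in iax.conf)

# If the masks share no bits there is no codec both sides support,
# and the connect attempt is rejected as "incompatible".
if [ $(( requested & capability )) -eq 0 ]; then
    echo "incompatible: no common codec"
else
    echo "compatible"
fi
```

Here 0x4 & 0xff03 is 0, so the script prints "incompatible: no common codec". The usual fix is to enable the codec the peer is offering (an extra allow= line in iax.conf on the receiving end) so the masks overlap; which codec bit 0x4 names depends on the Asterisk version's format table.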
2008 Feb 26
1
iax trunking problem
...xecuting [3000 at centos-context:2] Dial('IAX2/iax1Centos-1', 'sip/3000/20') in new stackTx-Frame Retry[000] -- OSeqno: 001 ISeqno: 002 Type: IAX Subclass: ACCEPT Timestamp: 00117ms SCall: 00001 DCall: 00001 [192.168.0.25:4569] FORMAT : 4[Feb 26 18:14:42] WARNING[4581]: app_dial.c:1196 dial_exec_full: Unable to create channel of type 'sip' (cause 3 - No route to destination) == Everyone is busy/congested at this time (1:0/0/1) -- Executing [3000 at centos-context:3] VoiceMail('IAX2/iax1Centos-1', 'u3000 at default') in new stack[Feb 2...
2017 Jun 11
2
How to remove dead peer, osrry urgent again :(
...t; of them? Three good nodes - vnb, vng, vnh and one dead - vna from node vng: root at vng:~# gluster peer status Number of Peers: 3 Hostname: vna.proxmox.softlog Uuid: de673495-8cb2-4328-ba00-0419357c03d7 State: Peer in Cluster (Disconnected) Hostname: vnb.proxmox.softlog Uuid: 43a1bf8c-3e69-4581-8e16-f2e1462cfc36 State: Peer in Cluster (Connected) Hostname: vnh.proxmox.softlog Uuid: 9eb54c33-7f79-4a75-bc2b-67111bf3eae7 State: Peer in Cluster (Connected) -- Lindsay Mathieson
2017 Jun 11
2
How to remove dead peer, osrry urgent again :(
...roxmox.softlog Uuid: 9eb54c33-7f79-4a75-bc2b-67111bf3eae7 State: Peer in Cluster (Connected) *Hostname: vna.proxmox.softlog* *Uuid: de673495-8cb2-4328-ba00-0419357c03d7* *State: Peer in Cluster (Disconnected)* Hostname: vnb.proxmox.softlog Uuid: 43a1bf8c-3e69-4581-8e16-f2e1462cfc36 State: Peer in Cluster (Connected) Do I just: rm /var/lib/glusterd/peers/de673495-8cb2-4328-ba00-0419357c03d7 On all the live nodes and restart glusterd? nothing else? thanks. -- Lindsay Mathieson ...
2008 Jan 28
0
Package Installation produces "gcc fails sanity check" error when installing RODBC error
...:4120: checking whether ln -s works configure:4124: result: yes configure:4131: checking how to recognise dependent libraries configure:4307: result: pass_all configure:4395: gcc -c -g -O2 conftest.c >&5 configure:4398: $? = 0 configure:4541: checking how to run the C preprocessor configure:4581: gcc -E conftest.c In file included from /usr/include/bits/posix1_lim.h:153, from /usr/include/limits.h:145, from /usr/lib64/gcc/x86_64-suse-linux/4.2.1/include/limits.h:122, from /usr/lib64/gcc/x86_64-suse-linux/4.2.1/include/syslimits.h:7,...
2010 Dec 17
15
Centos 5.5 - Kernel Panic while booting.
Dear centos community, I was in the process of loading the latest 5.5 release of centos in a VMWARE ESX 4.1 host as my first virtual machine, suddenly while booting I got a panic error with the following on screen. Can someone point me in the right direction. This machine has 24 cores and I allocated 1 for Centos to use with 1024MB of memory. Any clues or workaround to solve this problem? Thank
2002 Jun 07
0
smbd: Too many open files
...KO-I_X8-eh.cfg switch message SMBclose (pid 6617) [2002/06/07 15:03:27, 3] smbd/reply.c:reply_close(3019) close fd=34 fnum=4580 (numopen=2) [2002/06/07 15:03:27, 2] smbd/close.c:close_normal_file(211) johny closed file O/a1/KO-I_X8-eh.cfg (numopen=1) allocated file structure 485, fnum = 4581 (3 used) [2002/06/07 15:03:27, 4] smbd/open.c:open_file_shared1(891) calling open_file with flags=0x0 flags2=0x0 mode=0764 [2002/06/07 15:03:27, 2] smbd/open.c:open_file(230) johny opened file O/a1/KO-I_X8-eh.cfg read=Yes write=No (numopen=2) reply_ntcreate_and_X: fnum = 4581, open name =...
2005 Jun 08
1
[Bug 1008] GSSAPI authentication failes with Round Robin DNS hosts
http://bugzilla.mindrot.org/show_bug.cgi?id=1008 ------- Additional Comments From dleonard at vintela.com 2005-06-08 22:16 ------- a workaround at http://blog.macnews.de/unspecific/stories/4581/ ------- You are receiving this mail because: ------- You are the assignee for the bug, or are watching the assignee.
2017 Jun 11
0
How to remove dead peer, osrry urgent again :(
On 6/10/2017 4:38 PM, Lindsay Mathieson wrote: > Since my node died on friday I have a dead peer (vna) that needs to be > removed. > > > I had major issues this morning that I haven't resolve yet with all > VM's going offline when I rebooted a node which I *hope * was due to > quorum issues as I now have four peers in the cluster, one dead, three > live. >
2017 Jun 11
0
How to remove dead peer, osrry urgent again :(
...ead - vna > > from node vng: > > root at vng:~# gluster peer status > Number of Peers: 3 > > Hostname: vna.proxmox.softlog > Uuid: de673495-8cb2-4328-ba00-0419357c03d7 > State: Peer in Cluster (Disconnected) > > Hostname: vnb.proxmox.softlog > Uuid: 43a1bf8c-3e69-4581-8e16-f2e1462cfc36 > State: Peer in Cluster (Connected) > > Hostname: vnh.proxmox.softlog > Uuid: 9eb54c33-7f79-4a75-bc2b-67111bf3eae7 > State: Peer in Cluster (Connected) > I thought you had removed vna as defective and then ADDED in vnh as the replacement? Why is vna still the...
2017 Jun 11
0
How to remove dead peer, osrry urgent again :(
...x.softlog > Uuid: 9eb54c33-7f79-4a75-bc2b-67111bf3eae7 > State: Peer in Cluster (Connected) > > *Hostname: vna.proxmox.softlog* > *Uuid: de673495-8cb2-4328-ba00-0419357c03d7* > *State: Peer in Cluster (Disconnected)* > > Hostname: vnb.proxmox.softlog > Uuid: 43a1bf8c-3e69-4581-8e16-f2e1462cfc36 > State: Peer in Cluster (Connected) > > Do I just: > > rm /var/lib/glusterd/peers/de673495-8cb2-4328-ba00-0419357c03d7 > > Yes. And please ensure you do this after bringing down all the glusterd instances and then once the peer file is removed from all the no...
2017 Jun 11
0
How to remove dead peer, osrry urgent again :(
On Sun, 11 Jun 2017 at 16:35, Gandalf Corvotempesta < gandalf.corvotempesta at gmail.com> wrote: > > > Il 11 giu 2017 1:00 PM, "Atin Mukherjee" <amukherj at redhat.com> ha scritto: > > Yes. And please ensure you do this after bringing down all the glusterd > instances and then once the peer file is removed from all the nodes restart > glusterd on all the
2010 Feb 13
2
Wine, ICC compilation and performance tests.
...ould expect wine created by it to be at least a bit faster then "normal" wine - compiled with GCC... 3D Mark2000: wine-1.1.38-GCC scored 6020 and wine-1.1.38-ICC scored 5880 points As we can see ICC is a bit slower here. 3D Mark2001SE: wine-1.1.38-GCC scored 4586 and wine-1.1.38-ICC=4581 points On 2001SE I think we can say that speed is pretty much the same. 3D Mark2003: wine-1.1.38-GCC scored 2160 and wine-1.1.38-ICC scored 2151 points Again very similar speeds here. As for stability ? at least in games, I don't see any difference - today I played on it for few hours ? n...
2017 Jun 11
2
How to remove dead peer, osrry urgent again :(
Il 11 giu 2017 1:00 PM, "Atin Mukherjee" <amukherj at redhat.com> ha scritto: Yes. And please ensure you do this after bringing down all the glusterd instances and then once the peer file is removed from all the nodes restart glusterd on all the nodes one after another. If you have to bring down all gluster instances before file removal, you also bring down the whole gluster
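The advice quoted in this thread amounts to a four-step procedure. A sketch, assuming a systemd-managed glusterd and using the dead peer's UUID as shown in the thread; note that stopping glusterd stops only the management daemon, not the brick (glusterfsd) processes, so client I/O should continue, though that is worth confirming on your version:

```shell
# Sketch of the dead-peer removal described in this thread.
# DEAD_UUID is vna's UUID as reported by `gluster peer status`.
DEAD_UUID=de673495-8cb2-4328-ba00-0419357c03d7

# 1. On ALL nodes: bring glusterd down before touching peer files.
systemctl stop glusterd

# 2. On ALL live nodes: remove the dead peer's state file.
rm "/var/lib/glusterd/peers/${DEAD_UUID}"

# 3. Restart glusterd on each node, one after another.
systemctl start glusterd

# 4. Verify the dead peer no longer appears.
gluster peer status
```

This is the manual fallback for the case described above, where `gluster peer detach` refuses to remove a peer that can no longer be brought online.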
2017 Jun 10
4
How to remove dead peer, osrry urgent again :(
Since my node died on friday I have a dead peer (vna) that needs to be removed. I had major issues this morning that I haven't resolve yet with all VM's going offline when I rebooted a node which I *hope * was due to quorum issues as I now have four peers in the cluster, one dead, three live. Confidence level is not high. -- Lindsay Mathieson
2017 Jun 11
0
How to remove dead peer, osrry urgent again :(
On Sun, 11 Jun 2017 at 06:25, Lindsay Mathieson <lindsay.mathieson at gmail.com> wrote: > On 11/06/2017 10:46 AM, WK wrote: > > I thought you had removed vna as defective and then ADDED in vnh as > > the replacement? > > > > Why is vna still there? > > Because I *can't* remove it. It died, was unable to be brought up. The > gluster peer detach command
2000 Jun 19
2
dyn.load error:
...ibrary "D:\Reza\476\tv.gonsrc.R\deldirld.o": LoadLibrary failure ************************************************************************** Reza S. Mahani e-mail: mahani at uiuc.edu Department of Economics phone(Office): (217) 333-4581 University of Illinois at Urbana-Champaign phone(Home): (217) 384-0987 330 comm. west, 1206 S. sixth st. FAX: (217) 244-6678 Champaign, IL 61820, USA www.students.uiuc.edu\~mahani -.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-....
2011 Mar 08
5
warnings from 2.0.11
...ntry at line 275: 1299558462.M498819P35210.gromit.example.com,S=15495,W=15663 (uid 160 -> 285) Mar 8 06:20:11 gromit dovecot[59204]: imap(pid 68864 user user36): Warning: /Volumes/Mail/user36/dovecot-uidlist: Duplicate file entry at line 281: 1299535339.M353973P61136.gromit.example.com,S=4498,W=4581 (uid 360 -> 604) Mar 8 06:21:00 gromit dovecot[59204]: imap(pid 68864 user user239): Warning: /Volumes/Mail/user239/dovecot-uidlist: Duplicate file entry at line 105: 1299547390.M813768P69393.gromit.example.com,S=4773,W=4861 (uid 22 -> 164) Mar 8 06:21:00 gromit dovecot[59204]: imap(pid 688...
2017 Jun 11
5
How to remove dead peer, osrry urgent again :(
On 11/06/2017 10:46 AM, WK wrote: > I thought you had removed vna as defective and then ADDED in vnh as > the replacement? > > Why is vna still there? Because I *can't* remove it. It died, was unable to be brought up. The gluster peer detach command only works with live servers - A severe problem IMHO. -- Lindsay Mathieson
2016 Jul 10
0
Debian Jessie joining AD as member fails with "The object name is not found."
...SERVER_SELECT_SECRET_DOMAIN_6 1: NBT_SERVER_FULL_SECRET_DOMAIN_6 1: NBT_SERVER_ADS_WEB_SERVICE 0: NBT_SERVER_HAS_DNS_NAME 0: NBT_SERVER_IS_DEFAULT_NC 0: NBT_SERVER_FOREST_ROOT domain_uuid : 681ea09d-d921-4581-b653-8f8b8f4eb470 forest : 'domain.local' dns_domain : 'domain.local' pdc_dns_name : 'domain-controller.domain.local' domain_name : 'DOMAIN' pdc_name : ...