search for: 49156

Displaying 20 results from an estimated 43 matches for "49156".

2003 Oct 22
0
capi incoming call
...aces are used and somebody wants to call in, the response is "ha-la-li", which sounds as if the MSN or called number does not exist. I think the right response should be a line-busy tone... right? Can anybody tell me what is wrong? Regards, Marian. This appears in the log:
Oct 22 09:22:36 NOTICE[49156]: File chan_capi.c, Line 1813 (capi_handle_msg): CONNECT_IND ID=001 #0x4f00 LEN=0053
Controller/PLCI/NCCI = 0x301
CIPValue = 0x1
CalledPartyNumber = <c1>5430754
CallingPartyNumber = <01 83>0903517715
CalledPartySubad...
2018 May 30
2
RDMA inline threshold?
...TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick spidey.ib.runlevelone.lan:/gluster/brick/rhev_vms_primary     0    49157  Y  15666
Brick deadpool.ib.runlevelone.lan:/gluster/brick/rhev_vms_primary   0    49156  Y  2542
Brick groot.ib.runlevelone.lan:/gluster/brick/rhev_vms_primary      0    49156  Y  2180
Self-heal Daemon on localhost                                       N/A  N/A    N  N/A   << Brick process is not running on any node.
Self-heal Daemon on spidey.ib.runlevelone.lan...
2018 May 30
0
RDMA inline threshold?
...id
> ------------------------------------------------------------------------------
> Brick spidey.ib.runlevelone.lan:/gluster/brick/rhev_vms_primary     0    49157  Y  15666
> Brick deadpool.ib.runlevelone.lan:/gluster/brick/rhev_vms_primary   0    49156  Y  2542
> Brick groot.ib.runlevelone.lan:/gluster/brick/rhev_vms_primary      0    49156  Y  2180
> Self-heal Daemon on localhost                                       N/A  N/A    N  N/A   << Brick process is n...
2017 Sep 04
2
heal info OK but statistics not working
...gluster vol status QEMU-VMs
Status of volume: QEMU-VMs
Gluster process                                                     TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.5.6.32:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-QEMU-VMs   49156     0          Y       9302
Brick 10.5.6.49:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-QEMU-VMs   49156     0          Y       7610
Brick 10.5.6.100:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-QEMU-VMs  49156     0          Y       11013
Self-heal Daemon on localhost                                       N/A...
2017 Sep 07
1
Firewalls and ports and protocols
...ter servers. Apart from these ports, you need to open one port for each brick starting from port 49152 (instead of 24009 onwards as with previous releases). The brick ports assignment scheme is now compliant with IANA guidelines. For example: if you have five bricks, you need to have ports 49152 to 49156 open. This part of the page is actually in the "Setting up Clients" section, but it clearly mentions servers. To add some more confusion, there is an example of using iptables: `$ sudo iptables -A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 24007:24008 -j ACCEPT ` `$ su...
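Following that quoted example, rules covering the five brick ports described above might look like this (a sketch; the RH-Firewall-1-INPUT chain name comes from the quoted snippet and will differ on other distributions):

`$ sudo iptables -A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 24007:24008 -j ACCEPT` (glusterd management ports)
`$ sudo iptables -A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 49152:49156 -j ACCEPT` (one port per brick, five bricks)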
2017 Sep 04
0
heal info OK but statistics not working
...Status of volume: QEMU-VMs
> Gluster process                                                     TCP Port  RDMA Port  Online  Pid
> ------------------------------------------------------------------------------
> Brick 10.5.6.32:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-QEMU-VMs   49156     0          Y       9302
> Brick 10.5.6.49:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-QEMU-VMs   49156     0          Y       7610
> Brick 10.5.6.100:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-QEMU-VMs  49156     0          Y       11013
> Self-heal Daemon on localhos...
2017 Jul 18
2
Sporadic Bus error on mmap() on FUSE mount
...uter.org:/data/glusterfs/flow/brick1/safety_dir    49155  0  Y  26441
Brick dc2.liberouter.org:/data/glusterfs/flow/brick2/safety_dir    49155  0  Y  26110
Brick dc2.liberouter.org:/data/glusterfs/flow/brick1/safety_dir    49156  0  Y  26129
Brick dc3.liberouter.org:/data/glusterfs/flow/brick2/safety_dir    49152  0  Y  8703
Brick dc3.liberouter.org:/data/glusterfs/flow/brick1/safety_dir    49153  0  Y  8722
Brick dc1.liberouter.org:/data/glusterfs/flow/brick2...
2018 May 30
0
RDMA inline threshold?
Stefan, sounds like a brick process is not running. I have noticed some strangeness in my lab when using RDMA: I often have to forcibly restart the brick process, often as in every single time I do a major operation (add a new volume, remove a volume, stop a volume, etc.).

gluster volume status <vol>

Does any of the self-heal daemons show N/A? If that's the case, try forcing a restart on
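A minimal sketch of that forced restart (the volume name is a placeholder; `gluster volume start <vol> force` re-spawns brick and self-heal processes that are not running, without touching bricks that are already up):

$ gluster volume status <vol>          # check the Online column and the self-heal daemons
$ gluster volume start <vol> force     # restart any missing brick processes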
2003 Apr 24
3
new mgcp patch errors
...ith new mode: recvonly on callid: 5a4b82ad79f47a70
-- MGCP Asked to indicate tone: on aaln/1@iptlf03-1 in cxmode: recvonly
-- MGCP mgcp_hangup(MGCP/aaln/1@iptlf03-1) on aaln/1@iptlf03 set vmwi(-)
-- MGCP Asked to indicate tone: vmwi(-) on aaln/1@iptlf03-1 in cxmode: inactive
NOTICE[49156]: File chan_capi.c, Line 1435 (capi_handle_msg): CONNECT_IND ID=002 #0x08a7 LEN=0045

--
Roy Sigurd Karlsbakk, Datavaktmester
ProntoTV AS - http://www.pronto.tv/
Tel: +47 9801 3356

Computers are like air conditioners. They stop working when you open Windows.
2018 May 29
2
RDMA inline threshold?
Dear all, I faced a problem with a glusterfs volume (pure distributed, _not_ dispersed) over RDMA transport. One user had a directory with a large number of files (50,000 files), and just doing an "ls" in this directory yields a "Transport endpoint not connected" error. The effect is that "ls" shows only some of the files, but not all. The respective log file shows this
2017 Jul 18
0
Sporadic Bus error on mmap() on FUSE mount
...fl
> ow/brick1/safety_dir    49155  0  Y  26441
> Brick dc2.liberouter.org:/data/glusterfs/flow/brick2/safety_dir    49155  0  Y  26110
> Brick dc2.liberouter.org:/data/glusterfs/flow/brick1/safety_dir    49156  0  Y  26129
> Brick dc3.liberouter.org:/data/glusterfs/flow/brick2/safety_dir    49152  0  Y  8703
> Brick dc3.liberouter.org:/data/glusterfs/flow/brick1/safety_dir    49153  0  Y  8722
> Brick dc1.liberouter.org:/da...
2017 Oct 24
0
brick is down but gluster volume status says it's fine
...a
>> Gluster process                                              TCP Port  RDMA Port  Online  Pid
>> ------------------------------------------------------------------------------
>> Brick gluster-2:/export/brick7/digitalcorpora                49156     0          Y       125708
>> Brick gluster1.vsnet.gmu.edu:/export/brick7/digitalcorpora   49152     0          Y       12345
>> Brick gluster0:/export/brick7/digitalcorpora                 49152     0...
2017 Jul 18
1
Sporadic Bus error on mmap() on FUSE mount
...1/safety_dir    49155  0  Y  26441
>> Brick dc2.liberouter.org:/data/glusterfs/flow/brick2/safety_dir    49155  0  Y  26110
>> Brick dc2.liberouter.org:/data/glusterfs/flow/brick1/safety_dir    49156  0  Y  26129
>> Brick dc3.liberouter.org:/data/glusterfs/flow/brick2/safety_dir    49152  0  Y  8703
>> Brick dc3.liberouter.org:/data/glusterfs/flow/brick1/safety_dir    49153  0  Y  8722
>> Brick dc...
2017 Sep 04
0
heal info OK but statistics not working
Please provide the output of gluster volume info, gluster volume status and gluster peer status.

On Mon, Sep 4, 2017 at 4:07 PM, lejeczek <peljasz at yahoo.co.uk> wrote:
> hi all
>
> this:
> $ vol heal $_vol info
> outputs ok and exit code is 0
> But if I want to see statistics:
> $ gluster vol heal $_vol statistics
> Gathering crawl statistics on volume GROUP-WORK
2017 Sep 04
2
heal info OK but statistics not working
hi all

this:
$ vol heal $_vol info
outputs OK and the exit code is 0. But if I want to see statistics:
$ gluster vol heal $_vol statistics
Gathering crawl statistics on volume GROUP-WORK has been unsuccessful on bricks that are down. Please check if all brick processes are running.

I suspect gluster's inability to cope with a situation where one peer (which is not even a brick for a single vol on
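That error message points at a down brick process; a quick check might look like this (a sketch, reusing the volume name from the message above):

$ gluster volume status GROUP-WORK        # any brick with Online = N is down
$ gluster volume start GROUP-WORK force   # re-spawn the missing brick process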
2014 Apr 16
1
Possible SYN flooding
Anyone seen this problem?

server:
Apr 16 14:34:28 nas1 kernel: [7506182.154332] TCP: TCP: Possible SYN flooding on port 49156. Sending cookies. Check SNMP counters.
Apr 16 14:34:31 nas1 kernel: [7506185.142589] TCP: TCP: Possible SYN flooding on port 49157. Sending cookies. Check SNMP counters.
Apr 16 14:34:53 nas1 kernel: [7506207.126193] TCP: TCP: Possible SYN flooding on port 49159. Sending cookies. Check SNMP count...
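One way to follow the kernel's "Check SNMP counters" hint on a Linux host (a sketch; exact counter names vary by kernel, and raising the backlog only helps if the connection bursts are legitimate traffic):

$ netstat -s | grep -i -e listen -e cookie      # e.g. "SYNs to LISTEN sockets dropped", "SYN cookies sent"
$ sysctl net.ipv4.tcp_syncookies net.ipv4.tcp_max_syn_backlog
$ sudo sysctl -w net.ipv4.tcp_max_syn_backlog=4096   # illustrative value, not a recommendation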
2017 Oct 24
2
brick is down but gluster volume status says it's fine
...of volume: digitalcorpora
> Gluster process                                             TCP Port  RDMA Port  Online  Pid
> ------------------------------------------------------------------------------
> Brick gluster-2:/export/brick7/digitalcorpora               49156     0          Y       125708
> Brick gluster1.vsnet.gmu.edu:/export/brick7/digitalcorpora  49152     0          Y       12345
> Brick gluster0:/export/brick7/digitalcorpora                49152     0          Y       16098
> S...
2009 Jun 19
2
Cisco 7941G & Auth
...d from 192.168.1.61 on port 49155 [19/06 10:16:40.046]
Read request for file </mk-sip.jar>. Mode octet [19/06 10:16:40.046]
File <\mk-sip.jar> : error 2 in system call CreateFile Impossibile trovare il file specificato. [i.e. cannot find the specified file] [19/06 10:16:40.046]
Connection received from 192.168.1.61 on port 49156 [19/06 10:16:40.984]
Read request for file <Italy/g3-tones.xml>. Mode octet [19/06 10:16:40.999]
File <Italy\g3-tones.xml> : error 3 in system call CreateFile Impossibile trovare il percorso specificato. [i.e. cannot find the specified path] [19/06 10:16:40.999]
Connection received from 192.168.1.61 on port 49164 [19/06 10...
2010 Sep 17
1
multipath troubleshoot
...32771      sdp 8:240   1 [active][ready]  XXX....... 7/20
5:0:0:49155  sdq 65:0    1 [active][ready]  XXX....... 7/20
5:0:0:4      sdr 65:16   1 [active][ready]  XXX....... 7/20
5:0:0:16388  sds 65:32   1 [active][ready]  XXX....... 7/20
5:0:0:32772  sdt 65:48   1 [active][ready]  XXX....... 7/20
5:0:0:49156  sdu 65:64   1 [active][ready]  XXX....... 7/20
5:0:0:5      sdv 65:80   0 [undef] [faulty]  [orphan]
5:0:0:16389  sdw 65:96   0 [undef] [faulty]  [orphan]
5:0:0:32773  sdx 65:112  0 [undef] [faulty]  [orphan]
5:0:0:49157  sdy 65:128  0 [undef] [faulty]  [orphan]
multipathd>

Thanks in advance,
Paras
2018 Mar 04
1
tiering
...on Ubuntu 16.04 with a 3 ssd tier where one ssd is bad.

Status of volume: labgreenbin
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Hot Bricks:
Brick labgfs81:/gfs/p1-tier/mount           49156     0          Y       4217
Brick labgfs51:/gfs/p1-tier/mount           N/A       N/A        N       N/A
Brick labgfs11:/gfs/p1-tier/mount           49152     0          Y       643
Cold Bricks:
Brick labgfs11:/gfs/p1/mount                49153     0          Y       312
Brick labgfs51:/gfs/p1/moun...