Search results for "b3ff"

Displaying 10 results from an estimated 10 matches for "b3ff".

2018 Sep 12 (2): hangup the _called_ channel ?
On 9/12/18 1:22 PM, Joshua Colp wrote: > On Wed, Sep 12, 2018, at 2:19 PM, sean darcy wrote: >> I understand that HangUp() hangs up the calling channel. I want to >> hangup the called channel. >> >> SIP/mycall-xxxxx calls and bridges with DAHDI/1-1. >> >> I send SIP/.... to listen to a long, very long, file. > > Define "send". How are you
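
A rough sketch of one answer to the question being asked, assuming an Asterisk version whose CLI offers the "channel request hangup" command; the channel name is just the one from the quoted example:

    # List active channels, then ask Asterisk to hang up the called leg by name.
    asterisk -rx "core show channels"
    asterisk -rx "channel request hangup DAHDI/1-1"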

2004 Oct 19 (1): Problem with Internal accessing internal via web
...inet 129.15.70.48/23 brd 129.15.71.255 scope global eth1 inet 129.15.70.56/23 brd 129.15.71.255 scope global secondary eth1:1 inet 129.15.70.24/23 brd 129.15.71.255 scope global secondary eth1:2 inet 129.15.70.49/23 brd 129.15.71.255 scope global secondary eth1:3 inet6 fe80::202:b3ff:fea8:52/64 scope link valid_lft forever preferred_lft forever 6: sit0: <NOARP> mtu 1480 qdisc noop link/sit 0.0.0.0 brd 0.0.0.0 ip route show 192.168.1.0/24 dev eth0 scope link 129.15.70.0/23 dev eth1 scope link 169.254.0.0/16 dev eth1 scope link 127.0.0.0/8 dev lo scope link...

2011 Sep 12 (2): interface not responding to arp requests
...44.101.183/27 brd 65.44.101.191 scope global secondary p2p1 inet 65.44.101.185/27 brd 65.44.101.191 scope global secondary p2p1 inet 65.44.101.187/27 brd 65.44.101.191 scope global secondary p2p1 inet 65.44.101.188/27 brd 65.44.101.191 scope global secondary p2p1 inet6 fe80::202:b3ff:fea1:9b03/64 scope link valid_lft forever preferred_lft forever 5: p2p2: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc htb state UP qlen 1000 link/ether 00:02:b3:a1:9b:04 brd ff:ff:ff:ff:ff:ff inet 4.28.99.98/30 brd 4.28.99.99 scope global p2p2 inet6 fe80::2...

2005 Feb 05 (4): Wireless connectivity issues
...t6 fe80::280:c6ff:fee7:d7e7/64 scope link valid_lft forever preferred_lft forever 3: eth1: <BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast qlen 1000 link/ether 00:02:b3:da:d6:96 brd ff:ff:ff:ff:ff:ff inet 192.168.1.1/24 brd 192.168.2.255 scope global eth1 inet6 fe80::202:b3ff:feda:d696/64 scope link valid_lft forever preferred_lft forever 4: eth2: <BROADCAST,MULTICAST,NOTRAILERS,UP> mtu 8192 qdisc pfifo_fast qlen 1000 link/ether 00:0c:f1:7d:3d:72 brd ff:ff:ff:ff:ff:ff inet 67.49.72.255/20 brd 255.255.255.255 scope global eth2 inet6 fe80::20c:f1f...

2017 Mar 25 (0): [Bug 1138] New: icmpv6 mld-listener-query not detected
...packets 65 bytes 4680 log prefix "UNKOWN Scanner!: " reject } I get type 131 (mld-listener-report) packets dropped, but not 130 (mld-listener-query) ... dmesg [45184.023825] UNKOWN Scanner!: IN=ens192 OUT= MAC=33:33:00:00:00:01:64:66:b3:80:77:42:86:dd SRC=fe80:0000:0000:0000:6666:b3ff:fe80:7742 DST=ff02:0000:0000:0000:0000:0000:0000:0001 LEN=72 TC=0 HOPLIMIT=1 FLOWLBL=0 PROTO=ICMPv6 TYPE=130 CODE=0 Also it seems that this issue has been around for quite some time and I have found it reported before: https://www.spinics.net/lists/netfilter/msg55746.html Best regards, Bratislav...
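
As a hedged sketch only (the "inet filter input" table and chain names are assumptions, not taken from the report), one way to accept both MLD types ahead of any log/reject rules and then confirm where the rule landed:

    # Accept MLD query (type 130) and report (type 131) explicitly.
    nft add rule inet filter input icmpv6 type '{ mld-listener-query, mld-listener-report }' accept
    # List the ruleset with handles to verify rule order and watch counters.
    nft -a list ruleset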

2006 Feb 14 (14): [Bug 448] IPv6 conntrack does not work on a tunnel interface
https://bugzilla.netfilter.org/bugzilla/show_bug.cgi?id=448 laforge@netfilter.org changed: What |Removed |Added ---------------------------------------------------------------------------- Component|ip_conntrack |nf_conntrack ------- Additional Comments From laforge@netfilter.org 2006-02-14 09:05 MET ------- ipv6 conntrack is

2005 Mar 03 (20): Network config and troubleshooting with Ping
...fe80::20e:cff:fe60:9c5d/64 scope link valid_lft forever preferred_lft forever 4: eth2: <BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast qlen 1000 link/ether 00:02:b3:f0:93:0a brd ff:ff:ff:ff:ff:ff inet 209.126.225.34/28 brd 209.126.225.47 scope global eth2 inet6 fe80::202:b3ff:fef0:930a/64 scope link valid_lft forever preferred_lft forever 5: eth3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop qlen 1000 link/ether 00:11:11:3f:32:a8 brd ff:ff:ff:ff:ff:ff 6: sit0: <NOARP> mtu 1480 qdisc noop link/sit 0.0.0.0 brd 0.0.0.0 ip route show root@ipowa...

2017 Oct 26 (0): not healing one file
Hey Richard, Could you share the following informations please? 1. gluster volume info <volname> 2. getfattr output of that file from all the bricks getfattr -d -e hex -m . <brickpath/filepath> 3. glustershd & glfsheal logs Regards, Karthik On Thu, Oct 26, 2017 at 10:21 AM, Amar Tumballi <atumball at redhat.com> wrote: > On a side note, try recently released health
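
A rough sketch of gathering the requested information, with <volname> and <brickpath/filepath> left as placeholders from the message above, and assuming the default log directory /var/log/glusterfs:

    # 1. Volume configuration
    gluster volume info <volname>
    # 2. Extended attributes of the file, run on every brick
    getfattr -d -e hex -m . <brickpath/filepath>
    # 3. Self-heal daemon and glfsheal logs
    ls -l /var/log/glusterfs/glustershd.log /var/log/glusterfs/glfsheal-<volname>.log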

2017 Oct 26 (3): not healing one file
On a side note, try recently released health report tool, and see if it does diagnose any issues in setup. Currently you may have to run it in all the three machines. On 26-Oct-2017 6:50 AM, "Amar Tumballi" <atumball at redhat.com> wrote: > Thanks for this report. This week many of the developers are at Gluster > Summit in Prague, will be checking this and respond next

2017 Oct 26 (2): not healing one file
...n.c:1327:afr_log_selfheal] 0-home-replicate-0: Completed metadata selfheal on bea04f8f-c079-40d6-b827-1bf19ba9379c. sources=0 [2] sinks=1 [2017-10-25 10:40:20.456782] I [MSGID: 108026] [afr-self-heal-common.c:1327:afr_log_selfheal] 0-home-replicate-0: Completed data selfheal on 98682511-31db-4c5d-b3ff-8bdc523eb98c. sources=0 [2] sinks=1 [2017-10-25 10:40:20.461263] I [MSGID: 108026] [afr-self-heal-metadata.c:52:__afr_selfheal_metadata_do] 0-home-replicate-0: performing metadata selfheal on e68c7655-38e6-4b9e-86cf-8c658e9d34a8 [2017-10-25 10:40:20.464788] I [MSGID: 108026] [afr-self-heal-common...
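
Not part of the quoted log, but after selfheal completions like these, a quick check for anything still pending is the heal info command; the volume name is a placeholder:

    # Show entries that still need healing on each replica
    gluster volume heal <volname> info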