Displaying 10 results from an estimated 10 matches for "61ff".
2008 Feb 01
3
CentOS 5 loses ip address (newbie question)
Reserved IP in the 192.168.x.x range for CentOS 5 (Samba server)
loses Samba clients due to eth0 losing its IP.
eth0      Link encap:Ethernet  HWaddr 00:04:61:72:AB:98
          inet addr:169.254.66.122  Bcast:169.254.255.255  Mask:255.255.0.0
          inet6 addr: fe80::204:61ff:fe72:ab98/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1492  Metric:1
          RX packets:60058 errors:0 dropped:0 overruns:0 frame:0
          TX packets:66564 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:11387965 (10.8 MiB) TX...
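The 169.254.66.122 address above is the zeroconf fallback that the box assigns itself when DHCP fails, so the reserved address was never actually obtained. A minimal sketch of pinning the address statically instead, with hypothetical values, in /etc/sysconfig/network-scripts/ifcfg-eth0; setting NOZEROCONF=yes in /etc/sysconfig/network additionally suppresses the 169.254.0.0/16 fallback route:

# /etc/sysconfig/network-scripts/ifcfg-eth0 -- hypothetical static assignment
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.1.10        # substitute the actual reserved address
NETMASK=255.255.255.0
GATEWAY=192.168.1.1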
2005 Apr 01
2
Problems using VMWare with a Bridged Firewall
...:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast qlen 1000
link/ether 00:0d:61:1a:e2:25 brd ff:ff:ff:ff:ff:ff
inet6 fe80::20d:61ff:fe1a:e225/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast qlen 1000
link/ether 00:04:5a:8c:67:6a brd ff:ff:ff:ff:ff:ff
inet6 fe80::204:5aff:fe8c:676a/64 scope link
valid_lft forever preferred_lft forever
4...
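Every "61ff" hit in these results comes from the same mechanism: Linux derives the link-local inet6 address from the MAC via modified EUI-64, flipping the universal/local bit (0x02) of the first octet and inserting ff:fe between the third and fourth octets, so any NIC whose OUI ends in 61 (00:0d:61, 00:04:61) yields ...61ff:fe... in its address. A minimal bash sketch of the derivation:

#!/bin/bash
# Derive the EUI-64 link-local address from a MAC address.
# Example: 00:0d:61:1a:e2:25 -> fe80::20d:61ff:fe1a:e225
mac=$(echo "${1:-00:0d:61:1a:e2:25}" | tr 'A-F' 'a-f')
set -- $(echo "$mac" | tr ':' ' ')          # split into six octets
first=$(( 0x$1 ^ 0x02 ))                    # flip the universal/local bit
printf 'fe80::%x%s:%xff:fe%s:%x%s\n' "$first" "$2" "0x$3" "$4" "0x$5" "$6"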
2004 Sep 06
0
Problems with Firewall start at Boot time
...7.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast qlen 1000
link/ether 00:0d:61:73:66:60 brd ff:ff:ff:ff:ff:ff
inet 10.0.0.1/24 brd 10.0.0.255 scope global eth0
inet6 fe80::20d:61ff:fe73:6660/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast qlen 1000
link/ether 00:04:5a:8c:67:6a brd ff:ff:ff:ff:ff:ff
inet 192.168.0.1/24 brd 192.168.0.255 scope global eth1
inet6 fe80::204:5aff:fe8c:676a/64 sco...
2004 Sep 06
0
[Fwd: Problems with Firewall start at Boot time]
...7.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast qlen 1000
link/ether 00:0d:61:73:66:60 brd ff:ff:ff:ff:ff:ff
inet 10.0.0.1/24 brd 10.0.0.255 scope global eth0
inet6 fe80::20d:61ff:fe73:6660/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast qlen 1000
link/ether 00:04:5a:8c:67:6a brd ff:ff:ff:ff:ff:ff
inet 192.168.0.1/24 brd 192.168.0.255 scope global eth1
inet6 fe80::204:5aff:fe8c:676a/64 sco...
2005 Jan 12
2
Samba and ProxyArp
...cope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast qlen 1000
link/ether 00:0d:61:73:66:60 brd ff:ff:ff:ff:ff:ff
inet 192.168.0.4/24 brd 192.168.0.255 scope global eth0
inet6 fe80::20d:61ff:fe73:6660/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast qlen 1000
link/ether 00:04:5a:8c:67:6a brd ff:ff:ff:ff:ff:ff
inet 10.0.0.1/24 brd 10.0.0.255 scope global eth1
inet6 fe80::204:5aff:fe8c:676a/64 scope...
2005 Jan 11
5
Problem starting Shorewall using Bridge configuration
...0:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,PROMISC,UP> mtu 1500 qdisc pfifo_fast qlen 1000
link/ether 00:0d:61:73:66:60 brd ff:ff:ff:ff:ff:ff
inet6 fe80::20d:61ff:fe73:6660/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,PROMISC,UP> mtu 1500 qdisc pfifo_fast qlen 1000
link/ether 00:04:5a:8c:67:6a brd ff:ff:ff:ff:ff:ff
inet6 fe80::204:5aff:fe8c:676a/64 scope link
valid_lft forever preferred_lft f...
2006 Jun 12
9
Network stops responding after some time
...t triggers it. After a few hours, the network just stops working.
Running ifconfig -a before and after gives me the following:
Before
eth0 Link encap:Ethernet HWaddr 00:0D:61:42:F5:8D
inet addr:172.18.41.82 Bcast:172.18.43.255 Mask:255.255.252.0
inet6 addr: fe80::20d:61ff:fe42:f58d/64 Scope:Link
UP BROADCAST NOTRAILERS RUNNING MULTICAST MTU:1500 Metric:1
RX packets:278 errors:0 dropped:0 overruns:0 frame:0
TX packets:39 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:41433 (40.4 Kb)...
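A simple way to capture the before/after snapshots described above so the comparison is easy to read (file paths are arbitrary):

ifconfig -a > /tmp/net-before.txt
# ...wait until the network stops responding...
ifconfig -a > /tmp/net-after.txt
diff -u /tmp/net-before.txt /tmp/net-after.txt   # counters or flags that changed stand out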
2017 Oct 26
0
not healing one file
Hey Richard,
Could you share the following information, please?
1. gluster volume info <volname>
2. getfattr output of that file from all the bricks
getfattr -d -e hex -m . <brickpath/filepath>
3. glustershd & glfsheal logs
Regards,
Karthik
On Thu, Oct 26, 2017 at 10:21 AM, Amar Tumballi <atumball at redhat.com> wrote:
> On a side note, try recently released health
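Spelled out, the checklist above looks roughly like this; the volume name "home" and the brick path are hypothetical placeholders for your own volume and file:

gluster volume info home
# run on every brick host against the affected file:
getfattr -d -e hex -m . /bricks/home/brick1/path/to/file
# self-heal daemon and heal logs, at their default locations on most installs:
tail -n 100 /var/log/glusterfs/glustershd.log
tail -n 100 /var/log/glusterfs/glfsheal-home.log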
2017 Oct 26
3
not healing one file
On a side note, try the recently released health report tool and see if it
diagnoses any issues in the setup. Currently you may have to run it on all
three machines.
On 26-Oct-2017 6:50 AM, "Amar Tumballi" <atumball at redhat.com> wrote:
> Thanks for this report. This week many of the developers are at Gluster
> Summit in Prague; they will check this and respond next
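Assuming the tool referred to is the gluster-health-report project (an assumption; it ships separately from the core gluster packages), a run is a single command repeated on each of the three nodes:

# hypothetical invocation, run on every node in the cluster:
sudo gluster-health-report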
2017 Oct 26
2
not healing one file
...0E9AB558AB215193C83B (057c97fa-2ada-4322-abb3-6b201301dca3) on home-client-2
[2017-10-25 10:14:19.097169] W [MSGID: 108015] [afr-self-heal-entry.c:56:afr_selfheal_entry_delete] 0-home-replicate-0: expunging file a3f5a769-8859-48e3-96ca-60a988eb9358/0C7D2A125101EBD5E4C8A00B95FD2B04B84A69F2 (0ad4b44f-61ff-4491-b2b3-8a5d1909607b) on home-client-2
[2017-10-25 10:14:19.119910] W [MSGID: 108015] [afr-self-heal-entry.c:56:afr_selfheal_entry_delete] 0-home-replicate-0: expunging file a3f5a769-8859-48e3-96ca-60a988eb9358/6F4930EF7144E4A3F1649B560D831AB50274DCC5 (347a49b6-cb9f-4993-bf71-1de4bc7cd5da) on hom...