Displaying 12 results from an estimated 12 matches for "4f40".
2017 Oct 17
2
Gluster processes remaining after stopping glusterd
...opt-glusterfs-advdemo -p /var/lib/glusterd/vols/advdemo/run/dvihcasc0s-opt-glusterfs-advdemo.pid -S /var/run/gluster/b7cbd8cac308062ef1ad823a3abf54f5.socket --brick-name /opt/glusterfs/advdemo -l /var/log/glusterfs/bricks/opt-glusterfs-advdemo.log --xlator-option *-posix.glusterd-uuid=30865b77-4da5-4f40-9945-0bd2cf55ac2a --brick-port 49152 --xlator-option advdemo-server.listen-port=49152
root 2058 1 0 Oct05 ? 00:00:28 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/lib/glusterd/glustershd/run/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/...
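A minimal shell sketch of the behaviour being discussed, assuming systemd and the volume name advdemo from the process listing above: stopping the management daemon by itself leaves the brick (glusterfsd) and self-heal (glusterfs) daemons running.
systemctl stop glusterd
pgrep -af 'glusterfsd|glusterfs'   # leftover brick and self-heal processes
# to bring everything down, stop the volume while glusterd is still running,
# then stop glusterd itself
gluster volume stop advdemo
systemctl stop glusterd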
2018 Mar 12
2
trashcan on dist. repl. volume with geo-replication
...'m concerned it will
interrupt the geo-replication entirely.
has anybody else been faced with this situation... any hints,
workarounds...?
best regards
Dietmar Putz
root at gl-node1:~/tmp# gluster volume info mvol1
Volume Name: mvol1
Type: Distributed-Replicate
Volume ID: a1c74931-568c-4f40-8573-dd344553e557
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: gl-node1-int:/brick1/mvol1
Brick2: gl-node2-int:/brick1/mvol1
Brick3: gl-node3-int:/brick1/mvol1
Brick4: gl-node4-int:/brick1/mvol1
Options Reconfigured:
changelog.changelog: on
geo-r...
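The .trashcan directory comes from the trash translator, which is an ordinary volume option, so a hedged sketch of how it could be inspected or switched off on the master volume (mvol1 from the excerpt; features.trash and features.trash-dir are the stock option names):
gluster volume get mvol1 features.trash       # is the trash feature enabled?
gluster volume get mvol1 features.trash-dir   # trash directory name, .trashcan by default
gluster volume set mvol1 features.trash off   # one possible workaround if .trashcan entries upset geo-replication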
2004 May 18
0
No luck using asterisk as proxy...
...ks like:
Sip read:
INVITE sip:8378@asterisk SIP/2.0
Via: SIP/2.0/UDP
213.208.99.115:5060;rport;branch=z9hG4bK280F039561C44F4A93B15B494551D18A
From: Tony Hoyle <sip:6001@asterisk>;tag=3751201687
To: <sip:8378@asterisk>
Contact: <sip:6001@213.208.99.115:5060>
Call-ID: 6D3C9176-5684-4F40-8620-D7A105CD0A42@213.208.99.115
CSeq: 1567 INVITE
Max-Forwards: 70
Content-Type: application/sdp
User-Agent: X-Lite release 1103a
Content-Length: 295
v=0
o=6001 8049593 8049593 IN IP4 213.208.99.115
s=X-Lite
c=IN IP4 213.208.99.115
t=0 0
m=audio 8000 RTP/AVP 0 8 3 98 97 101
a=rtpmap:0 pcmu/8000
a...
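For a trace like the one above, a minimal sketch of watching the same exchange from the Asterisk side; command names vary between Asterisk releases ("sip set debug on" replaces "sip debug" on newer builds):
asterisk -rx "sip show peers"   # is extension 6001 registered and reachable?
asterisk -rx "sip debug"        # print inbound and outbound SIP packets on the console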
2018 Mar 13
0
trashcan on dist. repl. volume with geo-replication
...tirely.
> has anybody else been faced with this situation... any hints,
> workarounds...?
>
> best regards
> Dietmar Putz
>
>
> root at gl-node1:~/tmp# gluster volume info mvol1
>
> Volume Name: mvol1
> Type: Distributed-Replicate
> Volume ID: a1c74931-568c-4f40-8573-dd344553e557
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 2 x 2 = 4
> Transport-type: tcp
> Bricks:
> Brick1: gl-node1-int:/brick1/mvol1
> Brick2: gl-node2-int:/brick1/mvol1
> Brick3: gl-node3-int:/brick1/mvol1
> Brick4: gl-node4-int:/brick1/mvol1
> O...
2017 Oct 18
0
Gluster processes remaining after stopping glusterd
...o -p
> /var/lib/glusterd/vols/advdemo/run/dvihcasc0s-opt-glusterfs-advdemo.pid
> -S /var/run/gluster/b7cbd8cac308062ef1ad823a3abf54f5.socket --brick-name
> /opt/glusterfs/advdemo -l /var/log/glusterfs/bricks/opt-glusterfs-advdemo.log
> --xlator-option *-posix.glusterd-uuid=30865b77-4da5-4f40-9945-0bd2cf55ac2a
> --brick-port 49152 --xlator-option advdemo-server.listen-port=49152
> root 2058 1 0 Oct05 ? 00:00:28 /usr/sbin/glusterfs -s
> localhost --volfile-id gluster/glustershd -p /var/lib/glusterd/glustershd/run/glustershd.pid
> -l /var/log/glusterfs/gluster...
2013 Nov 20
4
puppet testing
...you are subscribed to the Google Groups "Puppet Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email to puppet-users+unsubscribe@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/puppet-users/5ef065b1-8a57-4f40-a43c-3e989da23101%40googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.
2018 Mar 13
1
trashcan on dist. repl. volume with geo-replication
...se been faced with this situation... any hints,
> workarounds...?
>
> best regards
> Dietmar Putz
>
>
> root at gl-node1:~/tmp# gluster volume info mvol1
>
> Volume Name: mvol1
> Type: Distributed-Replicate
> Volume ID: a1c74931-568c-4f40-8573-dd344553e557
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 2 x 2 = 4
> Transport-type: tcp
> Bricks:
> Brick1: gl-node1-int:/brick1/mvol1
> Brick2: gl-node2-int:/brick1/mvol1
> Brick3: gl-node3-int:/brick1/mvol1
> Brick...
2003 Aug 13
0
All "GNU" software potentially Trojaned
...ftp://alpha.gnu.org/before-2003-08-01.md5sums.asc
Note that both of these files and the announcement above are signed by
Bradley Kuhn, Executive Director of the FSF, with the following PGP
key:
pub 1024D/DB41B387 1999-12-09 Bradley M. Kuhn <bkuhn@fsf.org>
Key fingerprint = 4F40 645E 46BE 0131 48F9 92F6 E775 E324 DB41 B387
uid Bradley M. Kuhn (bkuhn99) <bkuhn@ebb.org>
uid Bradley M. Kuhn <bkuhn@gnu.org>
sub 2048g/75CA9CB3 1999-12-09
The CERT/CC believes this key to be valid.
As a matter of good sec...
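A minimal sketch of the verification the announcement asks for, using the key ID from the pub 1024D/DB41B387 line above; keyserver selection is left to gpg's default:
gpg --recv-keys DB41B387
gpg --fingerprint DB41B387                  # must match 4F40 645E 46BE 0131 48F9 92F6 E775 E324 DB41 B387
gpg --verify before-2003-08-01.md5sums.asc  # then compare the listed md5sums with the downloaded tarballs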
2012 Jun 24
0
nouveau _BIOS method
...*
4f00: 5b 12 5c 2f 04 5f 53 42 5f 50 43 49 30 49 45 49 [.\/._SB_PCI0IEI
4f10: 54 45 49 54 56 00 5c 2f 04 5f 53 42 5f 50 43 49 TEITV.\/._SB_PCI
4f20: 30 49 45 49 54 45 49 54 56 86 5c 2e 5f 54 5a 5f 0IEITEITV.\._TZ_
4f30: 54 5a 30 30 0a 80 86 5c 2e 5f 54 5a 5f 54 5a 30 TZ00...\._TZ_TZ0
4f40: 31 0a 80 a0 0c 5b 12 54 4e 4f 54 00 54 4e 4f 54 1....[.TNOT.TNOT
4f50: 14 34 5f 4c 30 36 00 a0 2d 90 5c 2f 04 5f 53 42 .4_L06..-.\/._SB
4f60: 5f 50 43 49 30 47 46 58 30 47 53 53 45 92 47 53 _PCI0GFX0GSSE.GS
4f70: 4d 49 5c 2f 04 5f 53 42 5f 50 43 49 30 47 46 58 MI\/._SB_PCI0GFX
4f80: 30...
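Rather than reading raw AML bytes, the same region is easier to follow as disassembled ASL; a minimal sketch assuming the ACPICA tools (acpidump, acpixtract, iasl) are installed:
acpidump > acpi.dump      # dump the ACPI tables of the running system
acpixtract -a acpi.dump   # split the dump into dsdt.dat, ssdt*.dat, ...
iasl -d dsdt.dat          # disassemble to dsdt.dsl, where the _SB.PCI0 ... GSSE/GSMI calls and _TZ thermal zones are readable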
2017 Oct 26
0
not healing one file
Hey Richard,
Could you share the following information, please?
1. gluster volume info <volname>
2. getfattr output of that file from all the bricks
getfattr -d -e hex -m . <brickpath/filepath>
3. glustershd & glfsheal logs
Regards,
Karthik
On Thu, Oct 26, 2017 at 10:21 AM, Amar Tumballi <atumball at redhat.com> wrote:
> On a side note, try the recently released health
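A minimal sketch of gathering the information requested above, run on every brick host; <volname> and the brick/file path are the placeholders from the mail, and the log locations are assumed to be the usual /var/log/glusterfs defaults:
gluster volume info <volname>
getfattr -d -e hex -m . <brickpath>/<filepath>
tail -n 200 /var/log/glusterfs/glustershd.log /var/log/glusterfs/glfsheal-<volname>.log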
2017 Oct 26
3
not healing one file
On a side note, try the recently released health report tool and see if it
diagnoses any issues in the setup. Currently you may have to run it on all
three machines.
On 26-Oct-2017 6:50 AM, "Amar Tumballi" <atumball at redhat.com> wrote:
> Thanks for this report. This week many of the developers are at Gluster
> Summit in Prague, will be checking this and respond next
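The tool referred to is presumably the gluster-health-report project; both the pip install and the bare invocation below are assumptions about its CLI, not something stated in the thread:
pip install gluster-health-report   # on each of the three nodes
gluster-health-report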
2017 Oct 26
2
not healing one file
...eal-common.c:1327:afr_log_selfheal] 0-home-replicate-0: Completed data selfheal on ffcc6db2-6440-4a40-970c-28e33a33c011. sources=0 [2] sinks=1
[2017-10-25 10:40:30.841457] I [MSGID: 108026] [afr-self-heal-common.c:1327:afr_log_selfheal] 0-home-replicate-0: Completed data selfheal on 6beb050a-ae38-4f40-9d08-a223cc5bb7a5. sources=0 [2] sinks=1
[2017-10-25 10:40:30.849418] I [MSGID: 108026] [afr-self-heal-common.c:1327:afr_log_selfheal] 0-home-replicate-0: Completed data selfheal on 94060bd4-79e4-4071-b973-42fc25ef083e. sources=0 [2] sinks=1
[2017-10-25 10:40:30.860329] I [MSGID: 108026] [afr-s...
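After "Completed data selfheal" messages like these, a minimal sketch for checking whether anything is still pending; the volume name home is inferred from the 0-home-replicate-0 tag in the log lines:
gluster volume heal home info              # per-brick list of entries still needing heal
gluster volume heal home info split-brain  # entries the self-heal daemon cannot resolve on its own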