Displaying 12 results from an estimated 12 matches for "30706".
2017 May 09
1
Empty info file preventing glusterd from starting
...wed open file descriptors set to 65536
[2017-05-06 03:33:39.807974] I [MSGID: 106479] [glusterd.c:1399:init]
0-management: Using /system/glusterd as working directory
[2017-05-06 03:33:39.826833] I [MSGID: 106513]
[glusterd-store.c:2047:glusterd_restore_op_version] 0-glusterd: retrieved
op-version: 30706
[2017-05-06 03:33:39.827515] E [MSGID: 106206]
[glusterd-store.c:2562:glusterd_store_update_volinfo] 0-management: Failed
to get next store iter
[2017-05-06 03:33:39.827563] E [MSGID: 106207]
[glusterd-store.c:2844:glusterd_store_retrieve_volume] 0-management: Failed
to update volinfo for c_gluster...
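The subject points at a zero-length volume info file; a minimal check for that condition, assuming the volume store sits under the /system/glusterd working directory shown in the log:

  # List zero-length info files; an empty one makes
  # glusterd_store_update_volinfo fail exactly as logged above.
  find /system/glusterd/vols -name info -empty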
2012 Nov 08
1
OpenStack+libvirt+lxc: lxcContainerGetSubtree:1199 : Failed to read /proc/mounts
...t index for interface veth0: No such device
2012-11-08 12:41:10.370+0000: 24640: error : virLXCProcessStop:701 :
internal error Invalid PID -1 for container
2012-11-08 12:41:10.370+0000: 24640: error : virLXCProcessStop:701 :
internal error Invalid PID -1 for container
2012-11-08 12:48:26.136+0000: 30706: info : libvirt version: 0.10.2,
package: 1.el6 (Unknown, 2012-11-08-20:20:52, localhost)
2012-11-08 12:48:26.136+0000: 30706: error : virDomainObjParseNode:10094 :
XML error: unexpected root element <domain>, expecting <domstatus>
2012-11-08 12:48:50.878+0000: 30695: error : virNetS...
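The <domstatus> parse error means libvirt was handed domain XML where it expected a saved status document; a hedged diagnostic, assuming the stock LXC state directory /var/run/libvirt/lxc:

  # List state files whose root element is <domain> instead of <domstatus>;
  # such files trigger the virDomainObjParseNode error shown above.
  grep -l '<domain' /var/run/libvirt/lxc/*.xml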
2017 Jun 01
2
[Gluster-devel] Empty info file preventing glusterd from starting
...>>>>>>>> directory
> > >>>>>>>> [2017-05-06 03:33:39.826833] I [MSGID: 106513]
> > >>>>>>>> [glusterd-store.c:2047:glusterd_restore_op_version] 0-glusterd:
> > >>>>>>>> retrieved op-version: 30706
> > >>>>>>>> [2017-05-06 03:33:39.827515] E [MSGID: 106206]
> > >>>>>>>> [glusterd-store.c:2562:glusterd_store_update_volinfo]
> > >>>>>>>> 0-management: Failed to get next store iter
> > >>>>...
2017 Feb 12
0
Maildirsize not updated
...48 1
27713 1
27744 1
29845 1
27121 1
27744 1
26032 1
30146 1
27121 1
45454 1
26241 1
28332 1
32103 1
3859 1
26016 1
27121 1
28336 1
4272 1
29709 1
29688 1
27125 1
28336 1
1757 1
4631 1
54951 1
26170 1
1757 1
1975 1
29765 1
28882 1
1757 1
25683 1
71184 1
28332 1
32080 1
55040 1
26464 1
1757 1
4631 1
30706 1
4322 1
49607 1
1757 1
1757 1
46535 1
47378 1
27717 1
27744 1
21723 1
1759 1
1757 1
20218 1
21737 1
21724 1
539443 1
1892 1
27713 1
27744 1
26036 1
27713 1
27740 1
32079 1
12815596 1
26523 1
54511 1
26020 1
27125 1
29847 1
28336 1
30423 1
27009 1
27065 1
26510 1
27121 1
324880 1
27740 1
21128 1
21...
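In the Maildir++ scheme the first line of maildirsize holds the quota definition and every later line is a "bytes messages" delta appended at delivery time; a small sketch, assuming the dump above is such a file, to total what has been recorded:

  # Sum the per-delivery size/count entries (skipping the quota line)
  awk 'NR > 1 { bytes += $1; msgs += $2 } END { print bytes, msgs }' maildirsize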
2017 Jun 01
0
[Gluster-devel] Empty info file preventing glusterd from starting
...>>> directory
> > > >>>>>>>> [2017-05-06 03:33:39.826833] I [MSGID: 106513]
> > > >>>>>>>> [glusterd-store.c:2047:glusterd_restore_op_version] 0-glusterd:
> > > >>>>>>>> retrieved op-version: 30706
> > > >>>>>>>> [2017-05-06 03:33:39.827515] E [MSGID: 106206]
> > > >>>>>>>> [glusterd-store.c:2562:glusterd_store_update_volinfo]
> > > >>>>>>>> 0-management: Failed to get next store iter
> >...
2017 Feb 12
2
Maildirsize not updated
I am using Dovecot LMTP.
root at messagerie[10.10.10.19] ~ # grep virtual_transport /etc/postfix/main.cf
# transport_maps = hash:/var/lib/mailman/data/transport-mailman, proxy:mysql:/etc/postfix/mysql-virtual_transports.cf
# virtual_transport = maildrop
virtual_transport = lmtp:unix:private/dovecot-lmtp
root at messagerie[10.10.10.19] ~ #
On Thursday, February 9, 2017 7:54 PM, WJCarpenter
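Because grep also matches the commented-out transport lines above, a hedged way to confirm the value Postfix actually uses is postconf, which ignores comments:

  # Print the effective setting from main.cf
  postconf virtual_transport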
2007 Sep 19
18
sip.conf best practices?
All - I've been wrestling with how best to structure the SIP device
accounts on a new Asterisk server I'm deploying. All of the SIP
devices (currently only Linksys SPA941s) will reside on the same
subnet as the server, and I have already set up a decent automatic
provisioning system for the phones. When the rollout is complete,
there will be about 100 SIP devices authenticating and
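One common approach (a sketch, not from the thread) is a sip.conf template section so each phone entry reduces to a name and a secret; the (!) template syntax is assumed available in the Asterisk release in use:

  # Hypothetical sip.conf fragment using a shared template
  cat >> /etc/asterisk/sip.conf <<'EOF'
  [spa941-common](!)        ; settings shared by all SPA941 handsets
  type=friend
  host=dynamic
  context=phones
  qualify=yes

  [1001](spa941-common)     ; one short section per device
  secret=changeme
  EOF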
2017 Dec 20
2
Upgrading from Gluster 3.8 to 3.12
Looks like a bug as I see tier-enabled = 0 is an additional entry in the
info file in shchhv01. As per the code, this field should be written into
the glusterd store if the op-version is >= 30706. What I am guessing is
that since we didn't have the commit 33f8703a1 "glusterd: regenerate volfiles
on op-version bump up" in 3.8.4, the info and volfiles were not regenerated
while bumping up the op-version, which caused the tier-enabled entry to be
missing in the info file.
For now, you...
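A hedged way to verify this guess on an affected node, assuming the default /var/lib/glusterd store location:

  # Cluster op-version as glusterd recorded it
  grep operating-version /var/lib/glusterd/glusterd.info
  # Volumes whose info file lacks the tier-enabled key
  grep -L 'tier-enabled' /var/lib/glusterd/vols/*/info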
2017 Dec 20
0
Upgrading from Gluster 3.8 to 3.12
...ok.
On Wed, Dec 20, 2017 at 11:28 AM, Atin Mukherjee <amukherj at redhat.com> wrote:
> Looks like a bug as I see tier-enabled = 0 is an additional entry in the
> info file in shchhv01. As per the code, this field should be written into
> the glusterd store if the op-version is >= 30706. What I am guessing is
> that since we didn't have the commit 33f8703a1 "glusterd: regenerate volfiles
> on op-version bump up" in 3.8.4, the info and volfiles were not regenerated
> while bumping up the op-version, which caused the tier-enabled entry to be
> missing in the info...
2012 Sep 14
0
Wine release 1.5.13
...nger crashes on start (kernel32.OutputDebugStringA needs to cope with NULL pointer)
30610 64-bit JRE installer needs kernel32.dll _local_unwind and kernel32.dll _C_specific_handler
30690 no mouse or keyboard in orcs must die
30693 Mono: Could not load Mono into this process in Wine 1.5.4
30706 Sony USB Driver installer fails on unimplemented function setupapi.dll.SetupAddToSourceListA
30771 Comm port Properties missing Interval Timeouts capability
30965 Diablo III (installer): Progress bar stays at 0%
31085 Pulsen complains "A required *.pulsen file is missing"
3110...
2017 Dec 20
0
Upgrading from Gluster 3.8 to 3.12
I was attempting the same on a local sandbox and also have the same problem.
Current: 3.8.4
Volume Name: shchst01
Type: Distributed-Replicate
Volume ID: bcd53e52-cde6-4e58-85f9-71d230b7b0d3
Status: Started
Snapshot Count: 0
Number of Bricks: 4 x 3 = 12
Transport-type: tcp
Bricks:
Brick1: shchhv01-sto:/data/brick3/shchst01
Brick2: shchhv02-sto:/data/brick3/shchst01
Brick3:
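For the upgrade itself, the op-version has to be bumped explicitly once every node runs the new release; a sketch with an assumed target number (check the 3.12 release notes for the exact value):

  # Show the current cluster op-version
  gluster volume get all cluster.op-version
  # Bump it after all nodes are upgraded (31202 assumed here)
  gluster volume set all cluster.op-version 31202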
2017 Dec 19
2
Upgrading from Gluster 3.8 to 3.12
I have not done the upgrade yet. Since this is a production cluster I
need to make sure it stays up, or schedule some downtime if it doesn't.
Thanks.
On Tue, Dec 19, 2017 at 10:11 AM, Atin Mukherjee <amukherj at redhat.com> wrote:
>
>
> On Tue, Dec 19, 2017 at 1:10 AM, Ziemowit Pierzycki <ziemowit at pierzycki.com>
> wrote:
>>
>> Hi,
>>