search for: 106488

Displaying 20 results from an estimated 28 matches for "106488".

2018 Jan 11
3
Creating cluster replica on 2 nodes 2 bricks each.
...:40.485497] I [input.c:31:cli_batch] 0-: Exiting with: 0 -- etc-glusterfs-glusterd.vol.log -- [2018-01-10 14:59:23.676814] I [MSGID: 106499] [glusterd-handler.c:4349:__glusterd_handle_status_volume] 0-management: Received status volume req for volume scratch [2018-01-10 15:00:29.516071] I [MSGID: 106488] [glusterd-handler.c:1537:__glusterd_handle_cli_get_volume] 0-management: Received get vol req [2018-01-10 15:01:09.872082] I [MSGID: 106482] [glusterd-brick-ops.c:447:__glusterd_handle_add_brick] 0-management: Received add brick req [2018-01-10 15:01:09.872128] I [MSGID: 106578] [glusterd-brick-op...
2018 Jan 12
0
Creating cluster replica on 2 nodes 2 bricks each.
...3:40.485497] I [input.c:31:cli_batch] 0-: Exiting with: 0 -- etc-glusterfs-glusterd.vol.log -- [2018-01-10 14:59:23.676814] I [MSGID: 106499] [glusterd-handler.c:4349:__glusterd_handle_status_volume] 0-management: Received status volume req for volume scratch [2018-01-10 15:00:29.516071] I [MSGID: 106488] [glusterd-handler.c:1537:__glusterd_handle_cli_get_volume] 0-management: Received get vol req [2018-01-10 15:01:09.872082] I [MSGID: 106482] [glusterd-brick-ops.c:447:__glusterd_handle_add_brick] 0-management: Received add brick req [2018-01-10 15:01:09.872128] I [MSGID: 106578] [glusterd-brick-op...
2018 Jan 15
1
Creating cluster replica on 2 nodes 2 bricks each.
...tch] 0-: Exiting with: 0 > > -- etc-glusterfs-glusterd.vol.log -- > > [2018-01-10 14:59:23.676814] I [MSGID: 106499] > [glusterd-handler.c:4349:__glusterd_handle_status_volume] 0-management: > Received status volume req for volume scratch > [2018-01-10 15:00:29.516071] I [MSGID: 106488] > [glusterd-handler.c:1537:__glusterd_handle_cli_get_volume] 0-management: > Received get vol req > [2018-01-10 15:01:09.872082] I [MSGID: 106482] > [glusterd-brick-ops.c:447:__glusterd_handle_add_brick] 0-management: > Received add brick req > [2018-01-10 15:01:09.872128] I [MSG...
2018 Jan 11
0
Creating cluster replica on 2 nodes 2 bricks each.
Hi Jose, Gluster is working as expected. The Distribute-replicated type just means that there are now 2 replica sets and files will be distributed across them. A volume of type Replicate (1xn where n is the number of bricks in the replica set) indicates there is no distribution (all files on the volume will be present on all the bricks in the volume). A volume of type Distributed-Replicate
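The distinction described above comes from how bricks are grouped at volume-creation time: brick order on the command line determines replica pairing. A minimal sketch, using hypothetical hostnames and brick paths (the thread's real layout is not shown in the excerpt):

```shell
# Sketch: create a 2 x 2 Distributed-Replicate volume. Adjacent bricks
# (in groups of the replica count, here 2) form one replica set; files
# are then distributed across the two sets.
gluster volume create scratch replica 2 \
    gluster01:/bricks/b1 gluster02:/bricks/b1 \
    gluster01:/bricks/b2 gluster02:/bricks/b2

# Equivalently, growing an existing 1 x 2 Replicate volume by one more
# replica pair turns its type into Distributed-Replicate, as seen in
# `gluster volume info`.
gluster volume add-brick scratch \
    gluster01:/bricks/b2 gluster02:/bricks/b2
```

These commands require a running trusted storage pool; the hostnames and brick directories are placeholders, not the ones from the thread.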
2018 Jan 10
2
Creating cluster replica on 2 nodes 2 bricks each.
Hi Nithya, This is what I have so far: I have peered both cluster nodes together as a replica, from node 1A and 1B. Now when I try to add it, I get the error that it is already part of a volume. When I run the gluster volume info, I see that it has switched to distributed-replica. Thanks Jose [root@gluster01 ~]# gluster volume status Status of volume: scratch Gluster process
2018 Jan 16
1
[Possibile SPAM] Re: Problem with Gluster 3.12.4, VM and sharding
...network.remote-dio: enable performance.low-prio-threads: 32 performance.io-cache: off performance.read-ahead: off performance.quick-read: off transport.address-family: inet nfs.disable: off performance.client-io-threads: off /var/log/glusterfs/glusterd.log: [2018-01-15 14:17:50.196228] I [MSGID: 106488] [glusterd-handler.c:1548:__glusterd_handle_cli_get_volume] 0-management: Received get vol req [2018-01-15 14:25:09.555214] I [MSGID: 106488] [glusterd-handler.c:1548:__glusterd_handle_cli_get_volume] 0-management: Received get vol req (empty because today it's 2018-01-16) /var/log/gluste...
2018 Jan 16
1
[Possibile SPAM] Re: Problem with Gluster 3.12.4, VM and sharding
...-threads: 32 > performance.io-cache: off > performance.read-ahead: off > performance.quick-read: off > transport.address-family: inet > nfs.disable: off > performance.client-io-threads: off > > /var/log/glusterfs/glusterd.log: > > [2018-01-15 14:17:50.196228] I [MSGID: 106488] > [glusterd-handler.c:1548:__glusterd_handle_cli_get_volume] > 0-management: Received get vol req > [2018-01-15 14:25:09.555214] I [MSGID: 106488] > [glusterd-handler.c:1548:__glusterd_handle_cli_get_volume] > 0-management: Received get vol req > > (empty because today it&...
2018 Jan 16
1
[Possibile SPAM] Re: Problem with Gluster 3.12.4, VM and sharding
...he: off >> performance.read-ahead: off >> performance.quick-read: off >> transport.address-family: inet >> nfs.disable: off >> performance.client-io-threads: off >> >> /var/log/glusterfs/glusterd.log: >> >> [2018-01-15 14:17:50.196228] I [MSGID: 106488] >> [glusterd-handler.c:1548:__glusterd_handle_cli_get_volume] >> 0-management: Received get vol req >> [2018-01-15 14:25:09.555214] I [MSGID: 106488] >> [glusterd-handler.c:1548:__glusterd_handle_cli_get_volume] >> 0-management: Received get vol req >> >&...
2018 Jan 17
1
[Possibile SPAM] Re: Problem with Gluster 3.12.4, VM and sharding
...-threads: 32 > performance.io-cache: off > performance.read-ahead: off > performance.quick-read: off > transport.address-family: inet > nfs.disable: off > performance.client-io-threads: off > > /var/log/glusterfs/glusterd.log: > > [2018-01-15 14:17:50.196228] I [MSGID: 106488] [glusterd-handler.c:1548:__ > glusterd_handle_cli_get_volume] 0-management: Received get vol req > [2018-01-15 14:25:09.555214] I [MSGID: 106488] [glusterd-handler.c:1548:__ > glusterd_handle_cli_get_volume] 0-management: Received get vol req > > (empty because today it's 2018-0...
2018 Jan 17
1
[Possibile SPAM] Re: Problem with Gluster 3.12.4, VM and sharding
...; performance.quick-read: off >>> transport.address-family: inet >>> nfs.disable: off >>> performance.client-io-threads: off >>> >>> /var/log/glusterfs/glusterd.log: >>> >>> [2018-01-15 14:17:50.196228] I [MSGID: 106488] >>> [glusterd-handler.c:1548:__glusterd_handle_cli_get_volume] >>> 0-management: Received get vol req >>> [2018-01-15 14:25:09.555214] I [MSGID: 106488] >>> [glusterd-handler.c:1548:__glusterd_handle_cli_get_volume] >>> 0-management:...
2018 Jan 17
1
[Possibile SPAM] Re: Problem with Gluster 3.12.4, VM and sharding
...he: off >> performance.read-ahead: off >> performance.quick-read: off >> transport.address-family: inet >> nfs.disable: off >> performance.client-io-threads: off >> >> /var/log/glusterfs/glusterd.log: >> >> [2018-01-15 14:17:50.196228] I [MSGID: 106488] >> [glusterd-handler.c:1548:__glusterd_handle_cli_get_volume] 0-management: >> Received get vol req >> [2018-01-15 14:25:09.555214] I [MSGID: 106488] >> [glusterd-handler.c:1548:__glusterd_handle_cli_get_volume] 0-management: >> Received get vol req >> >>...
2003 Sep 29
1
encoding prob
Hi, I'm new to this list. I use Wine 20030115 on SuSE 8.2. I don't have any Turkish encoding problems within KDE, but in the Windows software I use with Wine, I can't read or write any Turkish characters. The encoding must be ISO 8859-9 or windows-1254. Any help appreciated.
2018 Jan 17
1
[Possibile SPAM] Re: Problem with Gluster 3.12.4, VM and sharding
...t; transport.address-family: inet >>>> nfs.disable: off >>>> performance.client-io-threads: off >>>> >>>> /var/log/glusterfs/glusterd.log: >>>> >>>> [2018-01-15 14:17:50.196228] I [MSGID: 106488] >>>> [glusterd-handler.c:1548:__glusterd_handle_cli_get_volume] >>>> 0-management: Received get vol req >>>> [2018-01-15 14:25:09.555214] I [MSGID: 106488] >>>> [glusterd-handler.c:1548:__glusterd_handle_cli_get_volume...
2018 Jan 17
1
[Possibile SPAM] Re: Problem with Gluster 3.12.4, VM and sharding
...ress-family: inet >>>>> nfs.disable: off >>>>> performance.client-io-threads: off >>>>> >>>>> /var/log/glusterfs/glusterd.log: >>>>> >>>>> [2018-01-15 14:17:50.196228] I [MSGID: 106488] >>>>> [glusterd-handler.c:1548:__glusterd_handle_cli_get_volume] >>>>> 0-management: Received get vol req >>>>> [2018-01-15 14:25:09.555214] I [MSGID: 106488] >>>>> [glusterd-handler.c:1548:__glusterd_handl...
2018 Jan 16
1
Problem with Gluster 3.12.4, VM and sharding
Also to help isolate the component, could you answer these: 1. on a different volume with shard not enabled, do you see this issue? 2. on a plain 3-way replicated volume (no arbiter), do you see this issue? On Tue, Jan 16, 2018 at 4:03 PM, Krutika Dhananjay <kdhananj at redhat.com> wrote: > Please share the volume-info output and the logs under /var/log/glusterfs/ > from all your
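The two isolation tests asked for above can be sketched as follows, assuming three hypothetical hosts with spare brick directories (none of these names come from the thread):

```shell
# Test 2 from the list above: a plain 3-way replicated volume, no arbiter.
# Sharding is disabled on new volumes unless explicitly enabled, which
# also covers test 1 (same workload, shard off).
gluster volume create testrep3 replica 3 \
    host1:/bricks/t1 host2:/bricks/t1 host3:/bricks/t1
gluster volume start testrep3

# Confirm sharding really is off before re-running the VM workload:
gluster volume get testrep3 features.shard
```

If the corruption reproduces here, sharding is likely not the culprit; if it only appears with `features.shard on`, that narrows the component as the questions intend.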
2018 Jan 18
1
[Possibile SPAM] Re: Problem with Gluster 3.12.4, VM and sharding
...t;>>>> nfs.disable: off >>>>>> performance.client-io-threads: off >>>>>> >>>>>> /var/log/glusterfs/glusterd.log: >>>>>> >>>>>> [2018-01-15 14:17:50.196228] I [MSGID: 106488] >>>>>> [glusterd-handler.c:1548:__glusterd_handle_cli_get_volume] >>>>>> 0-management: Received get vol req >>>>>> [2018-01-15 14:25:09.555214] I [MSGID: 106488] >>>>>> [glusterd-handler.c:1548:...
2018 Jan 18
1
[Possibile SPAM] Re: Problem with Gluster 3.12.4, VM and sharding
...d-ahead: off >>> performance.quick-read: off >>> transport.address-family: inet >>> nfs.disable: off >>> performance.client-io-threads: off >>> >>> /var/log/glusterfs/glusterd.log: >>> >>> [2018-01-15 14:17:50.196228] I [MSGID: 106488] >>> [glusterd-handler.c:1548:__glusterd_handle_cli_get_volume] >>> 0-management: Received get vol req >>> [2018-01-15 14:25:09.555214] I [MSGID: 106488] >>> [glusterd-handler.c:1548:__glusterd_handle_cli_get_volume] >>> 0-management: Received get vol re...
2017 Oct 04
2
Glusterd not working with systemd in redhat 7
...a reboot. Gluster is not starting; it seems that glusterd starts before the network layer is up. Some logs here: Thanks [2017-10-04 15:33:00.506396] I [MSGID: 106143] [glusterd-pmap.c:277:pmap_registry_bind] 0-pmap: adding brick /opt/glusterfs/advdemo on port 49152 [2017-10-04 15:33:01.206401] I [MSGID: 106488] [glusterd-handler.c:1538:__glusterd_handle_cli_get_volume] 0-management: Received get vol req [2017-10-04 15:33:01.206936] I [MSGID: 106488] [glusterd-handler.c:1538:__glusterd_handle_cli_get_volume] 0-management: Received get vol req [2017-10-04 15:33:18.043104] W [glusterfsd.c:1360:cleanup_and_e...
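One common remedy for this ordering problem (a sketch, not the fix the thread settled on) is a systemd drop-in that delays glusterd until the network is actually online rather than merely configured:

```shell
# Create a drop-in for the glusterd unit (opens an editor):
#   systemctl edit glusterd
# and add:
#
#   [Unit]
#   Wants=network-online.target
#   After=network-online.target
#
# network-online.target only waits if the matching wait service is enabled:
systemctl enable NetworkManager-wait-online.service   # or systemd-networkd-wait-online.service
systemctl daemon-reload
```

After a reboot, glusterd should then start only once interfaces have addresses, so brick processes can bind correctly.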
2018 Jan 19
1
[Possibile SPAM] Re: Problem with Gluster 3.12.4, VM and sharding
...isable: off >>>>>>> performance.client-io-threads: off >>>>>>> >>>>>>> /var/log/glusterfs/glusterd.log: >>>>>>> >>>>>>> [2018-01-15 14:17:50.196228] I [MSGID: 106488] >>>>>>> [glusterd-handler.c:1548:__glusterd_handle_cli_get_volume] >>>>>>> 0-management: Received get vol req >>>>>>> [2018-01-15 14:25:09.555214] I [MSGID: 106488] >>>>>>>...
2005 Apr 08
6
Asterisk Memory Requirements
I have Asterisk installed on a Dell 2850 dual-Xeon 3.0GHz box with 2GB of memory. This is serving about 75 SIP clients, Polycom 500's and 600's. We are running into problems with memory: Asterisk, right now, is using about 1.8GB of system memory. I am using Asterisk 1.0.7, Zaptel 1.0.7 with Digium's TE410 1xT1 RBS and 1xT1 PRI, and libpri 1.0.7 on Fedora Core 3. My question is: is this