Displaying 20 results from an estimated 50 matches for "replicats".
2012 Nov 24
0
Grouped data objects within GLS and Variogram
Dear R Help,
I am having difficulty using Variogram within GLS to examine the spatial structure of nested data. My data frame consists of ecological measurements of a forest in which three landscape positions ("landposi") are compared. Each landscape position is replicated five times ("replicat"), and the environment is measured ("canopy", "litdepth", etc.).
2017 Aug 01
3
How to delete geo-replication session?
Hi,
I would like to delete a geo-replication session on my GlusterFS 3.8.11 replicat 2 volume in order to re-create it. Unfortunately the "delete" command does not work, as you can see below:
$ sudo gluster volume geo-replication myvolume gfs1geo.domain.tld::myvolume-geo delete
Staging failed on arbiternode.domain.tld. Error: Geo-replication session between myvolume and
2012 Dec 18
0
R function for computing Simultaneous confidence intervals for multinomial proportions
...hod, I think to bootstrap the mean of each proportion
and get in that way a confidence interval of the mean.
I observed 21 times a response that could be one out of 8 categories
(multinomial response). I computed the proportions for each category.
I did it independently 12 times. Hence I have 12 replicats for each
proportion.
Is bootstrapping the mean proportion over the 12 replicats a good way
to get the confidence interval?
I tried (see code below) and got confidence intervals that are quite large.
Actually, according to the bootstrapped CI, there is no difference in
proportions, while a...
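The procedure the poster describes (resampling the 12 replicates and bootstrapping the mean of each category's proportion) can be sketched in Python; the counts below are made up for illustration, since the original data and R code are not shown in the excerpt:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical stand-in for the post's data:
# 12 independent replicates, each with 21 trials over 8 categories.
counts = rng.multinomial(21, [1 / 8] * 8, size=12)
props = counts / 21.0  # observed proportion of each category per replicate

# Bootstrap the mean proportion of each category by resampling
# the 12 replicates with replacement.
n_boot = 2000
boot_means = np.empty((n_boot, 8))
for b in range(n_boot):
    idx = rng.integers(0, 12, size=12)
    boot_means[b] = props[idx].mean(axis=0)

# Percentile 95% confidence interval for each category's mean proportion.
lower, upper = np.percentile(boot_means, [2.5, 97.5], axis=0)
```

With only 12 replicates the resampling distribution is coarse, which is consistent with the wide intervals the poster reports; the simultaneous multinomial intervals named in the subject line are a more direct approach to this problem.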
2003 Dec 19
2
SMB 3.0.1/LDAP Cannot add computer to domain
I'm trying to set up samba with ldapsam (Novell eDir 8.7.1). Right now I
can log in to samba and browse my shares as user "Administrator", but when
I try to add a computer to the domain I get an "unknown user name or bad
password" error.
I have administrator, root and nobody accounts in ldap, and I have
manually added the following group mappings to ldap-groups:
Domain Users
2017 Aug 21
2
self-heal not working
Hi,
I have a replicat 2 with arbiter GlusterFS 3.8.11 cluster and there is currently one file listed to be healed, as you can see below, but it never gets healed by the self-heal daemon:
Brick node1.domain.tld:/data/myvolume/brick
/data/appdata_ocpom4nckwru/preview/1344699/64-64-crop.png
Status: Connected
Number of entries: 1
Brick node2.domain.tld:/data/myvolume/brick
2018 Mar 13
0
trashcan on dist. repl. volume with geo-replication
Hi Dietmar,
I am trying to understand the problem and have few questions.
1. Is trashcan enabled only on master volume?
2. Does the 'rm -rf' done on master volume synced to slave ?
3. If trashcan is disabled, the issue goes away?
The geo-rep error just says that it failed to create the directory
"Oracle_VM_VirtualBox_Extension" on the slave.
Usually this would be because of gfid
2017 Aug 07
0
How to delete geo-replication session?
Hi,
I would really like to get rid of this geo-replication session as I am stuck with it right now. For example I can't even stop my volume as it complains about that geo-replication...
Can someone let me know how I can delete it?
Thanks
> -------- Original Message --------
> Subject: How to delete geo-replication session?
> Local Time: August 1, 2017 12:15 PM
> UTC Time: August
2017 Aug 08
2
How to delete geo-replication session?
Do you see any session listed when the Geo-replication status command is
run (without any volume name)?
gluster volume geo-replication status
Volume stop force should work even if Geo-replication session exists.
From the error it looks like node "arbiternode.domain.tld" in Master
cluster is down or not reachable.
regards
Aravinda VK
On 08/07/2017 10:01 PM, mabi wrote:
> Hi,
>
2018 Mar 12
2
trashcan on dist. repl. volume with geo-replication
Hello,
in regard to
https://bugzilla.redhat.com/show_bug.cgi?id=1434066
I have run into another issue when using the trashcan feature on a
dist. repl. volume running geo-replication (gfs 3.12.6 on ubuntu 16.04.4),
e.g. when removing an entire directory with subfolders:
tron@gl-node1:/myvol-1/test1/b1$ rm -rf *
afterwards, listing the files in the trashcan:
tron@gl-node1:/myvol-1/test1$
2017 Aug 21
0
self-heal not working
----- Original Message -----
> From: "mabi" <mabi at protonmail.ch>
> To: "Gluster Users" <gluster-users at gluster.org>
> Sent: Monday, August 21, 2017 9:28:24 AM
> Subject: [Gluster-users] self-heal not working
>
> Hi,
>
> I have a replicat 2 with arbiter GlusterFS 3.8.11 cluster and there is
> currently one file listed to be healed as
2017 Aug 08
0
How to delete geo-replication session?
When I run "gluster volume geo-replication status" I see my geo-replication session correctly, including the volume name under the "VOL" column. I see my two nodes (node1 and node2) but not arbiternode, as I added it later, after setting up geo-replication. For more details have a quick look at my previous post here:
2017 Aug 21
2
self-heal not working
Hi Ben,
So it is really a 0 kByte file everywhere (on all nodes, including the arbiter, and from the client).
Below you will find the output you requested. Hopefully that will help to find out why this specific file is not healing... Let me know if you need any more information. Btw node3 is my arbiter node.
NODE1:
STAT:
File:
2017 Aug 08
1
How to delete geo-replication session?
Sorry I missed your previous mail.
Please perform the following steps once a new node is added:
- Run gsec create command again
gluster system:: execute gsec_create
- Run Geo-rep create command with force and run start force
gluster volume geo-replication <mastervol> <slavehost>::<slavevol>
create push-pem force
gluster volume geo-replication <mastervol>
2019 Jun 25
3
methods package: A _R_CHECK_LENGTH_1_LOGIC2_=true error
**Maybe this bug needs to be understood further before applying the
patch, because the patch is most likely also wrong.**
Because, from just looking at the expressions, I think neither the R
3.6.0 version:
omittedSig <- omittedSig && (signature[omittedSig] != "missing")
nor the patched version (I proposed):
omittedSig <- omittedSig & (signature[omittedSig] !=
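The distinction under discussion (R's scalar `&&` versus elementwise `&` when the operand is a logical vector of length greater than 1) can be illustrated with a NumPy analogy; the vectors below are invented for illustration, and this is not the R code itself:

```python
import numpy as np

# Stand-ins for the R objects: a logical vector of omitted slots and
# the corresponding signature entries.
omittedSig = np.array([True, True, False])
signature = np.array(["missing", "x", "missing"])

# Elementwise AND, analogous to R's `&`: the result has the same
# length as the inputs, which is what the patched expression relies on.
elementwise = omittedSig & (signature != "missing")
print(elementwise.tolist())  # [False, True, False]

# R's `&&` is a scalar operator; applying it to vectors like these is
# exactly what _R_CHECK_LENGTH_1_LOGIC2_=true turns into an error.
```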
2012 Feb 17
3
portable parallel seeds project: request for critiques
I've got another edition of my simulation replication framework. I'm
attaching 2 R files and pasting in the readme.
I would especially like to know if I'm doing anything that breaks
.Random.seed or other things that R's parallel uses in the
environment.
In case you don't want to wrestle with attachments, the same files are
online in our SVN
2006 Mar 15
0
Raise your hand if you're going to the MySQL conference
If you are attending the MySQL User Conference 2006, you are encouraged to come attend my session about Applied Ruby on Rails and AJAX.
The session information is as follows:
Applied Ruby on Rails and AJAX
Farhan Mashraqi
Track: LAMP, Community Projects
Date: Thursday, April 27
Time: 2:20pm - 3:05pm
Location: Ballroom B
Adoppt (http://adoppt.com) is a fully
2009 Jun 12
0
Problems with ReceiveFAX (asterisk 1.6.0.3 and t38)
Hello users,
We have been facing problems with t38 passthrough using
asterisk 1.6.0.3.
We also observed that in the case of SendFAX we are not having
major issues; it is almost always successful.
ReceiveFAX has problems most of the time.
We have been using a ringcentral account for testing this
receivefax.
So ringcentral tries 3 times if sending the fax failed the
first time.
What I observed is
2017 Aug 21
0
self-heal not working
Can you also provide:
gluster v heal <my vol> info split-brain
If it is split-brain, just delete the incorrect file from the brick and run heal again. I haven't tried this with an arbiter but I assume the process is the same.
-b
----- Original Message -----
> From: "mabi" <mabi at protonmail.ch>
> To: "Ben Turner" <bturner at redhat.com>
> Cc:
2017 Aug 21
2
self-heal not working
Sure, it doesn't look like a split brain based on the output:
Brick node1.domain.tld:/data/myvolume/brick
Status: Connected
Number of entries in split-brain: 0
Brick node2.domain.tld:/data/myvolume/brick
Status: Connected
Number of entries in split-brain: 0
Brick node3.domain.tld:/srv/glusterfs/myvolume/brick
Status: Connected
Number of entries in split-brain: 0
> -------- Original
2015 May 16
2
Couldn't create lock .dovecot-sync.lock
Hi,
In a cluster with two servers and replication via dovecot-dsync, this
error is logged:
server1 dovecot: dsync-server(<user>): Error: Couldn't create lock
/var/lib/imap/user/6a/<user>/.dovecot-sync.lock: No such file or directory
This is because "/var/lib/imap/user/6a/<user>/" doesn't exist on
server1. On another cluster node, the directory exists and