Displaying 20 results from an estimated 500 matches similar to: "How to write a function in a graph"
2009 Jan 23
2
Write to multiple connections or multiple text files
Hi all,
I want to modify a large number of text files (ca. 4000) by replacing a
value found on a particular line in each with a value from an R object.
For a single file I would normally use:
con <- file("foo.txt", open = "r+")
content <- readLines(con)   # read the whole file into memory
content[n] <- "test"        # replace line n
writeLines(content, con)
close(con)
For repeating this for several files I can
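A minimal sketch of the loop version, assuming a vector of file paths and one
replacement value per file (n, the target line number, is from the post;
writing back by file name truncates and rewrites each file):

files  <- list.files("data", pattern = "\\.txt$", full.names = TRUE)
values <- as.character(new_values)   # hypothetical: one replacement per file
n <- 5                               # line to replace

for (i in seq_along(files)) {
  content <- readLines(files[i])
  content[n] <- values[i]
  writeLines(content, files[i])      # write the whole file back
}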
2010 Apr 30
1
gluster-volgen - syntax for mirroring/distributing across 6 nodes
NOTE: posted this to gluster-devel when I meant to post it to gluster-users
01 | 02 mirrored --|
03 | 04 mirrored --| distributed
05 | 06 mirrored --|
1) Would this command work for that?
glusterfs-volgen --name repstore1 \
  --raid 1 clustr-01:/mnt/data01 clustr-02:/mnt/data01 \
  --raid 1 clustr-03:/mnt/data01 clustr-04:/mnt/data01 \
  --raid 1 clustr-05:/mnt/data01 clustr-06:/mnt/data01
So the
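As an aside, glusterfs-volgen dates from the pre-3.1 tooling; on current
releases the same 3x2 layout (three mirrored pairs, distributed) is normally
created with the gluster CLI. A sketch, reusing the brick paths above:

gluster volume create repstore1 replica 2 transport tcp \
  clustr-01:/mnt/data01 clustr-02:/mnt/data01 \
  clustr-03:/mnt/data01 clustr-04:/mnt/data01 \
  clustr-05:/mnt/data01 clustr-06:/mnt/data01

With replica 2 and six bricks, consecutive pairs become the mirrors and the
three pairs are distributed, matching the diagram.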
2017 Nov 16
0
Missing files on one of the bricks
Hello, we are using glusterfs 3.10.3.
We currently have a full heal (gluster volume heal ... full) running; the
crawl is still in progress.
Starting time of crawl: Tue Nov 14 15:58:35 2017
Crawl is in progress
Type of crawl: FULL
No. of entries healed: 0
No. of entries in split-brain: 0
No. of heal failed entries: 0
getfattr from both files:
# getfattr -d -m . -e hex
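For reference, the usual form of that command against the same file on each
brick looks like this (the brick path is illustrative, taken from the volume
info later in this thread):

getfattr -d -m . -e hex /mnt/AIDATA/data/path/to/file

The trusted.gfid and trusted.afr.* attributes in the output are what the
self-heal logic compares across bricks.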
2012 Feb 26
2
Help needed! Error in setwd(newdir) : cannot change working directory
Hi Guys,
I am new to R and just trying to write a small script to automate a couple of commands, but I run into the setwd() error: cannot change working directory.
I googled a little bit and tried all fixes/suggestions with no success.
Basically I have a script that works from inside a directory with my data (/home/sean/Rtest/Data01). Now I want to modify the script to make it run from the upper directory
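A small sketch of the usual fix, assuming the script lives one level above
the data: build the path explicitly and verify it before switching.

base    <- "/home/sean/Rtest"        # where the script now runs from
datadir <- file.path(base, "Data01")
if (file.exists(datadir)) {          # dir.exists() in R >= 3.2.0 is stricter
  setwd(datadir)
} else {
  stop("No such directory: ", datadir)
}

setwd() fails with exactly the reported error when the path does not exist
or is not readable.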
2018 Feb 22
4
What is exit code 5888?
rsync v3.1.0
linux v4.4.104-39-default x86_64
Found in the system log:
2018-02-22T05:02:00-0700 sma-server3 python3[31371]: backintime
(sma-user3x/3): WARNING: Command "rsync -rtDHh --links --no-p --no-g
--no-o --info=progress2 --no-i-r --delete --delete-excluded -i
--dry-run --out-format="BACKINTIME: %i %n%L" --chmod=Du+wx
--exclude="/bkp/cgate-backintime"
2017 Jun 15
2
Interesting split-brain...
I am new to gluster but already like it. I did maintenance last week,
shutting down both nodes (one after the other). I had many files that needed
to be healed after that. Everything worked well, except for one file. It is
in split-brain, with two different GFIDs. I read the documentation, but it
only covers the cases where the GFID is the same on both bricks. BTW, I am
running Gluster 3.10.
Here
2009 Oct 29
2
Difficulty testing an SSD as a ZIL
Hi all,
I received my SSD and wanted to test it out using fake zpools (files as backing stores) before attaching it to my production pool. However, when I exported the test pool and re-imported it, I got an error. Here is what I did:
I created a file to use as a backing store for my new pool:
mkfile 1g /data01/test2/1gtest
Created a new pool:
zpool create ziltest2 /data01/test2/1gtest
Added the
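The truncated step is presumably adding the device as a separate log vdev;
the usual form, with a second file standing in for the SSD, is:

mkfile 1g /data01/test2/1gtest-log
zpool add ziltest2 log /data01/test2/1gtest-log

zpool status ziltest2 should then list it under a separate "logs" section.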
2017 Jun 15
0
Interesting split-brain...
Hi Ludwig,
There is no way to resolve gfid split-brains with type mismatch. You have
to do it manually by following the steps in [1].
In the case of a type mismatch it is recommended to resolve it manually. But
for a plain gfid mismatch, in 3.11 we have a way to resolve it by using the
*favorite-child-policy* option.
Since the file is not important, you can simply delete it.
[1]
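For the policy route mentioned above, the option is set per volume; a sketch
with an assumed volume name:

gluster volume set myvol cluster.favorite-child-policy mtime
gluster volume heal myvol

Accepted policies include size, ctime, mtime and majority; per the advice
above it handles plain gfid/data/metadata split-brains but not type
mismatches, which still need the manual steps.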
2017 Nov 16
2
Missing files on one of the bricks
On 11/16/2017 04:12 PM, Nithya Balachandran wrote:
>
>
> On 15 November 2017 at 19:57, Frederic Harmignies
> <frederic.harmignies at elementai.com> wrote:
>
> Hello, we have 2x files that are missing from one of the bricks.
> No idea how to fix this.
>
> Details:
>
> # gluster volume
2017 Jun 15
1
Interesting split-brain...
Can you please explain how we ended up in this scenario? I think that will
help in understanding more about these scenarios and why Gluster recommends
replica 3 or arbiter volumes.
Regards
Rafi KC
On 06/15/2017 10:46 AM, Karthik Subrahmanya wrote:
> Hi Ludwig,
>
> There is no way to resolve gfid split-brains with type mismatch. You
> have to do it manually by following the steps in [1].
2009 Jul 10
2
error: optim(rho, n2ll.rho, method = method, control = control, beta = parm$beta, : initial value in 'vmmin' is not finite
I am trying to use the lnam autocorrelation model from the SNA package. It runs just fine for smaller adjacency matrices (<1,500), but when my matrices are bigger (4000+) I get the error:
> lnam1_01.adj<- lnam(data01$adopt,x01,ec2001.csr)
Error in optim(rho, n2ll.rho, method = method, control = control, beta = parm$beta, :
initial value in 'vmmin' is not
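This error usually means the likelihood is NaN or Inf at the starting
values, which big, unscaled inputs make more likely. A hedged pre-flight
check, using the object names from the post:

library(sna)
y <- data01$adopt
stopifnot(all(is.finite(y)), all(is.finite(as.matrix(x01))))  # catch NA/Inf
x01s <- scale(x01)                  # standardize covariates
lnam1 <- lnam(y, x01s, ec2001.csr)  # retry on the rescaled inputs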
2017 Nov 15
2
Missing files on one of the bricks
Hello, we have 2x files that are missing from one of the bricks. No idea
how to fix this.
Details:
# gluster volume info
Volume Name: data01
Type: Replicate
Volume ID: 39b4479c-31f0-4696-9435-5454e4f8d310
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 192.168.186.11:/mnt/AIDATA/data
Brick2: 192.168.186.12:/mnt/AIDATA/data
Options Reconfigured:
2009 Nov 10
3
HEEELP!!!!
Hello.
My name is Ana. I'm doing an ecology master's, and I'm just learning how R
works.
I have a Mac running OS X 10.5.6, and I'm trying to run just a simple ANOVA
analysis.
I downloaded R version 2.10.0, and it seems I have problems with the script.
I don't know what to do. I've already changed the language, made sure I'm
working in the correct directory, and it still doesn't work.
The script is:
#1. example
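For reference, a minimal one-way ANOVA in R looks like this (illustrative
data, not Ana's):

dat <- data.frame(
  y     = c(5.1, 4.9, 6.2, 6.0, 7.3, 7.1),
  group = factor(rep(c("A", "B", "C"), each = 2))
)
fit <- aov(y ~ group, data = dat)
summary(fit)   # prints the ANOVA table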
2010 Sep 24
3
Kernel panic on ZFS import - how do I recover?
I posted this on the www.nexentastor.org forums, but no answer so far, so I apologize if you are seeing this twice. I am also engaged with nexenta support, but was hoping to get some additional insights here.
I am running nexenta 3.0.3 community edition, based on 134. The box crashed yesterday, and goes into a reboot loop (kernel panic) when trying to import my data pool, screenshot attached.
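The usual first step on builds with pool recovery support (present in b134)
is a dry-run rewind import, then the real thing; the pool name below is
assumed:

zpool import -nF data01   # -n: report what a rewind would discard, do nothing
zpool import -F data01    # rewind to the last consistent txg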
2015 Feb 23
2
Quota-status service on Director
Hello,
I'm trying to configure the quota-status service, but I'm not having any success with my director setup (2.2.9). I activated the quota-status service like this on my director server:
$ cat 91-quota-status.conf
##
## Quota-Status configuration.
##
# Load Module quota-status and listen on TCP/IP Port for connections.
service quota-status {
  executable = quota-status -p postfix
2017 Jun 20
0
trash can feature, crashed???
On Tue, 2017-06-20 at 08:52 -0400, Ludwig Gamache wrote:
> All,
>
> I currently have 2 bricks running Gluster 3.10.1. This is a Centos installation. On Friday last
> week, I enabled the trashcan feature on one of my volumes:
> gluster volume set date01 features.trash on
I think you misspelled the volume name. Is it data01 or date01?
> I also limited the max file size to 500MB:
2011 Jul 15
1
Strange Behavior using FUSE client
I've recently setup a distributed/replicated cluster and have had an issue
with seeing the directories on the cluster. Also, a df -h only shows data
from one of the three bricks.
The strange behavior doesn't end there. If I log into the 'primary' server
as root, then do an ls on the client, the directories appear. However, df -h
is still incorrect.
I'm not sure exactly
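On a healthy distributed/replicated volume the FUSE mount's df -h should
show the aggregate of the distribute legs, so output matching a single brick
usually points at a stale or partial client connection. A common first check
(names illustrative) is to remount and watch the client log:

umount /mnt/gluster
mount -t glusterfs primary-server:/volname /mnt/gluster
tail -f /var/log/glusterfs/mnt-gluster.log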
2016 Oct 10
2
Quota-status service on Director
Hi!
quota-status is not supported in proxy configuration. You should use
quota_warning and quota_over_flag scripts instead.
Aki
On 08.10.2016 03:51, Michael Kliewe wrote:
> Hello,
> any news on this topic? I tried it again with Dovecot 2.2.25, but it's
> still not possible to run the quota-status services on the directors.
> They try to access the mailbox of the user, which they
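For the script route Aki mentions, the general shape of a quota_warning
setup is roughly this (script path and thresholds illustrative):

plugin {
  quota_warning  = storage=95%% quota-warning 95 %u
  quota_warning2 = storage=80%% quota-warning 80 %u
}
service quota-warning {
  executable = script /usr/local/bin/quota-warning.sh
  unix_listener quota-warning {
    user = vmail
  }
}

The quota_over_flag settings are configured separately.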
2017 Nov 16
0
Missing files on one of the bricks
On 15 November 2017 at 19:57, Frederic Harmignies
<frederic.harmignies at elementai.com> wrote:
> Hello, we have 2x files that are missing from one of the bricks. No idea
> how to fix this.
>
> Details:
>
> # gluster volume info
>
> Volume Name: data01
> Type: Replicate
> Volume ID: 39b4479c-31f0-4696-9435-5454e4f8d310
> Status: Started
> Snapshot Count:
2017 Jun 20
2
trash can feature, crashed???
All,
I currently have 2 bricks running Gluster 3.10.1. This is a Centos
installation. On Friday last week, I enabled the trashcan feature on one of
my volumes:
gluster volume set date01 features.trash on
I also limited the max file size to 500MB:
gluster volume set data01 features.trash-max-filesize 500MB
3 hours after I enabled this, this specific gluster volume went down:
[2017-06-16