similar to: Rsync help needed...

Displaying 20 results from an estimated 600 matches similar to: "Rsync help needed..."

2006 Jan 15
2
rsync of file list
Hi All, I would like to rsync data spread over many files from a remote site. Each file may live in a totally different location - the path for each file may be different. My question is: can I do it with one single rsync command, giving a file containing a list of paths as a parameter, or do I need to run rsync for each file? I did not find any option for this in the man page. I tried to play with
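What the poster is after exists as rsync's --files-from option. A minimal sketch, assuming a hypothetical list file and host:

    # paths.txt lists one path per line, relative to the source root.
    # --files-from reads the list and implies --relative, so each path
    # is recreated under the destination; -a preserves attributes.
    rsync -a --files-from=paths.txt remotehost:/ /local/backup/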
2006 Mar 21
3
Rsync 4TB datafiles...?
I need to rsync 4 TB of datafiles to a remote server and clone them into a new Oracle database. I have about 40 drives that contain this 4 TB of data. I would like to run rsync at the directory level using the --files-from=FILE option. But the problem is: if the network connection fails, the whole rsync will fail, right? rsync -a srchost:/ / --files-from=dbf-list and dbf-list would contain this:
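For what it's worth, an interrupted rsync does not lose the files that already arrived: rerunning the same command skips completed files via its delta check. A minimal sketch, reusing the poster's dbf-list and host names:

    # --partial keeps partially transferred files, so a multi-GB
    # datafile resumes on the rerun instead of restarting from zero.
    rsync -a --partial --files-from=dbf-list srchost:/ /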
2009 Mar 27
1
General help for a function I'm attempting to write
Hello, I have written a small function ('JostD', based upon a recent molecular ecology paper) to calculate genetic distance between populations (columns in my data set). As I have it now, I have to tell it which two columns to use (X, Y). I would like it to automatically calculate 'JostD' for all combinations of columns, perhaps returning a matrix of distances. Thanks for any help
2013 Sep 27
2
RV: cronbach
Is there a method that makes this manual work unnecessary, so that the names of the questions appear automatically? label.cronbach <- label.var(p01, "¿Le agrada el programa que se le ha mostrado? ") label.cronbach <- label.var(p02, "¿Cree que ayuda en el aprendizaje?") label.cronbach <- label.var(p03, "¿Propicia el trabajo en el equipo?") label.cronbach
2013 Sep 27
1
RV: cronbach
rm(list = ls()) #clear everything previously in memory setwd("G:/Public/Documents/R/EPICALC/") #this is how the data is read from its path sanda<-read.csv("sandavid2.csv",header=TRUE, sep=",", dec=".") use(sanda) attach (sanda) library (MASS) label.cronbach <- label.var(p01, "¿Consume mucho pan? ") ####Is there any way to name them
2008 Jul 09
1
Help with installing add-on packages
Dear R users, I recently wanted to update my R distribution to the current one (R-2.7.1). I am running a Fedora Core 8 distribution. The installation went fine, but when I tried to add some additional packages, the majority exited with an error. Only a few of the least demanding ones (e.g. RColorBrewer) made it through the installation process. What happened between versions 2.6.x and 2.7.x
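When source packages fail en masse after a major R upgrade, a missing toolchain or missing headers is a common culprit. A hedged sketch of checking that first; the yum package names are illustrative and vary by Fedora release:

    # Compilers and headers R needs to build package sources:
    sudo yum install gcc gcc-c++ gcc-gfortran readline-devel
    # Retry a representative package from a fresh session:
    Rscript -e 'install.packages("RColorBrewer", repos="http://cran.r-project.org")'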
2007 Aug 07
5
Extending RAIDZ.
Yeah :) I'd like to work on this. Here are my first observations: - We need to call the vdev_op_asize method with an additional 'offset' argument. - We need to move data to the new disk starting from the very beginning, so we can't reuse the scrub/resilver code, which does a tree-walk through the data. Below you can see how I imagine extending RAIDZ. Here is the legend:
2008 Sep 18
2
Problem installing packages in newer versions of R
Dear all, I was wondering what could be wrong with my system (a regularly updated Fedora Core 8) such that installing packages fails for almost every package. I follow the procedure specified in the help file R-admin, section 6.3. This is not new to the current version; it happened (to me) in the previous version as well. Additionally, the R-help HTML pages try to follow wrong links
2008 Nov 26
1
multiple imputation with fit.mult.impute in Hmisc - how to replace NA with imputed value?
I am doing multiple imputation with Hmisc, and can't figure out how to replace the NA values with the imputed values. Here's a general outline of the process: > set.seed(23) > library("mice") > library("Hmisc") > library("Design") > d <- read.table("DailyDataRaw_01.txt",header=T) > length(d);length(d[,1]) [1] 43 [1] 2666
2006 Mar 28
2
RSYNC authors help plz...
I started the massive transfer based on the historical timings, but now I am facing lots of problems in the transfer. I am getting a lot of errors like the one below. This is a very critical production update, and I have only 3 weeks to complete this 4 TB transfer, an estimate taken from historical rsync timings. rsync: writefd_unbuffered failed to write 4813 bytes: phase "unknown" [sender]:
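One common way to ride out a flaky link is to let rsync abort on stalls and restart it in a loop; completed files are skipped on each pass. A minimal sketch, reusing the dbf-list name from the earlier thread; the timeout and sleep values are illustrative:

    # --timeout makes a stalled connection fail fast instead of
    # hanging; the loop restarts until rsync exits cleanly.
    until rsync -a --partial --timeout=300 --files-from=dbf-list srchost:/ /
    do
        echo "rsync failed (exit $?), retrying in 60s..." >&2
        sleep 60
    done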
2001 Jun 05
2
a bug? (PR#968)
Dear R, I would like to report what I think is a bug in R. I am running R within Emacs on a Digital AlphaStation. See the version information at the end of my R session for details. I also attach a copy of the file that is read by the `read.table' command. Here's my R session, with a few
2004 Dec 01
2
cp --o_direct
Another question. When my database is running, I do [oracle@LNCSTRTLDB03 LPTE3]$ cp --o_direct xdb01.dbf /tmp cp: cannot open `xdb01.dbf' for reading: Permission denied [oracle@LNCSTRTLDB03 LPTE3]$ When the database is shut down, it works. Is this normal for ocfs? With any other filesystem I can just copy a file at any time. (It's only a test, I know I can't copy datafiles and have
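The --o_direct flag appears to come from Oracle's patched coreutils for OCFS, which wants O_DIRECT access to datafiles. As a hedged aside, stock GNU dd can do a direct-I/O read without the patched cp; this sketch does not address whatever lock the running database holds on the file, and the block size is illustrative:

    # Open the source with O_DIRECT on the read side:
    dd if=xdb01.dbf of=/tmp/xdb01.dbf iflag=direct bs=1M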
2005 Sep 24
1
10gR2, oifcfg fails on ocfs2
Howdy, I am trying to set up a 10gR2 RAC on top of some nbd devices. The OS is CentOS 4.1/i386 (2.6.9-11.0.0.10.3.EL, ocfs2-2.6.9-11.0.0.10.3.EL-1.0.4-1). The ocfs2 setup went perfectly well, btw (besides the fact that ocfs2console apparently ignores nbd disks). mount says /dev/nbd1 on /opt/oradata/data1 type ocfs2 (rw,_netdev) on both nodes. ls says (after runInstaller was started, so the files were
2004 Apr 19
5
OCFS Hang
Greetings, Having read about the previous OCFS hangs, I think the one we are seeing is different, but I'm not sure whether it is caused by OCFS or the Linux OS. We are running OCFS version 1.09 with Linux AS 3.0 and 9i RAC. We have a 2-node Intel cluster (Node 1 and Node 2). This morning the DBA tried to do an "ls" command on /u06/oradata/database
2004 Apr 21
1
Fwd: RE: OCFS Hang
Oh yeah - an easy way to check, Randy: next time your node hangs, get on the OTHER NODE, go into each directory where files are being opened (datafiles, archive logs, controlfiles, redo logs, etc.) and delete a file (you can create one first, then delete it). If this causes the hung node to recover, then you're having the same problem I was having. Jeremy >>> "Jeremy
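The check Jeremy describes, as a tiny sketch; the directory is the one from the earlier post and the probe filename is made up:

    # Run on the healthy node, in each directory holding open files;
    # the create-then-delete forces a directory update that, per the
    # post, can nudge the hung node back to life.
    cd /u06/oradata/database
    touch probe.tmp && rm probe.tmp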
2005 Feb 11
3
OCFS file system used as archived redo destination is corrupted
We started using an OCFS file system about 4 months ago as the shared archived redo destination for the 4-node RAC instances (HP DL380, MSA1000, RH AS 2.1). Last night we started seeing some weird behavior, and my guess is that the inode directory in the file system is getting corrupted. I've always had a bad feeling about OCFS not being very robust at handling constant file creation and deletion
2013 Sep 27
0
RV: cronbach
Hi, We are all very well, thanks. Regarding your question, I think you provide too little information to reproduce or try to emulate your problem (although in this group of volunteers there are very skilled people who can surely help you better). I wonder, for example: what do your data look like originally? Are you reading them from another program? Do you have them in a plain-text file? Could you send
2013 Sep 27
0
cronbach
Is there a method that makes this manual work unnecessary, so that the names of the questions appear automatically? label.cronbach <- label.var(p01, "¿Le agrada el programa que se le ha mostrado? ") label.cronbach <- label.var(p02, "¿Cree que ayuda en el aprendizaje?") label.cronbach <- label.var(p03, "¿Propicia el trabajo en el equipo?") label.cronbach
2006 Aug 04
3
OCFS2 and ASM Question
OK guys & gals, here is the scenario:
1.) Host: RHEL 4 U3, 2.6.9-34.0.2.EL
2.) OCFS2, latest version
3.) Successfully formatted & mounted OCFS2 filesystems on 2 nodes:
    /dev/sdb1 /u02/oradata/usdev/voting
    /dev/sdc1 /u02/oradata/usdev/data01
    /dev/sdd1 /u02/oradata/usdev/data02
    /dev/sde1 /u02/oradata/usdev/data03
4.) Downloaded & installed ASMLib 2.0 on both nodes
5.) Ran
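For context, a typical ASMLib 2.0 sequence on a setup like this looks roughly as follows; it is a hedged sketch (the disk label is illustrative, and not necessarily what the poster ran in step 5):

    # Configure the ASM library driver on each node:
    /etc/init.d/oracleasm configure
    # Mark a shared partition once, from a single node:
    /etc/init.d/oracleasm createdisk DATA01 /dev/sdc1
    # On the other node, just scan for the labels:
    /etc/init.d/oracleasm scandisks
    /etc/init.d/oracleasm listdisks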
2004 Apr 12
2
FW: cluster1 error
I am trying to use: ocfs-support-1.0.10-1, ocfs-2.4.21-EL-smp-1.0.11-1, ocfs-tools-1.0.10-1 with RedHat AS 3.0 on a 2-node cluster with shared SCSI: two Dell 1650s, dual CPUs, PERC 3/DC cards chained to a PowerVault 220S. I am using LVM, and here is my layout:
[root@cluster1 archive]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda2              32G  5.1G   25G