similar to: confused by classes and methods.

Displaying 20 results from an estimated 300 matches similar to: "confused by classes and methods."

2010 Feb 16
1
RODBC missing values in integer columns
Hello, We are having some strange issues with RODBC related to integer columns. Whenever we run an SQL query, the data in an integer column comes back as 150 actual data points, then 150 zeros, then 150 actual data points, then 150 zeros. However, our database actually has numbers where the zeros appear. Furthermore, other data types do not have this problem: double and varchar are correct and do not
2010 Feb 22
3
relative file path
Hello, Is there a way to find, from within a script, where that script is located? getwd() doesn't do what I want because it depends on where R was called from. I want something like source("randomFile"), and within randomFile there is a function called whereAmI() which returns c:\blah\blah2\randomFile.R. In Perl there is a module called FindBin, and $FindBin::Bin has the directory of the file
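A minimal sketch of a whereAmI()-style helper for this question (the function name comes from the post; the --file= approach and everything else in the body are assumptions, and it only works when the file is run via Rscript, not via source() in an interactive session):

whereAmI <- function() {
  # When a script is run with Rscript, its path is passed as --file=<path>
  args <- commandArgs(trailingOnly = FALSE)
  file_arg <- grep("^--file=", args, value = TRUE)
  if (length(file_arg) == 1) {
    normalizePath(sub("^--file=", "", file_arg))
  } else {
    NA_character_  # interactive session or source(): no --file= argument
  }
}
dirname(whereAmI())  # directory of the running script, akin to $FindBin::Bin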
2010 Feb 24
1
RODBC connection name
Hello all, I've scoured the RODBC.pdf, but there appears to be no way to set the name of the RODBC connection. This is useful because the DBAs know that some processes should only run for so long and can be automatically killed. But currently the name is just "R", so they aren't sure whether a given connection can be automatically killed. We are able to set this with our Perl ODBC interfaces
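One hedged workaround, assuming a SQL Server ODBC driver (the APP= keyword, the DSN, and the credentials below are assumptions, not anything documented in RODBC.pdf): some drivers let the connection string carry an application name, which then shows up to the DBAs instead of a bare "R".

library(RODBC)
# APP= sets the application name reported to the server, on drivers that honour it
ch <- odbcDriverConnect("DSN=mydsn;UID=report_user;PWD=secret;APP=nightly-report-job")
sqlQuery(ch, "SELECT 1")  # this session is now identifiable as 'nightly-report-job'
odbcClose(ch)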
2010 Mar 03
1
data.table evaluating columns
Hi everyone, I have the following code that works on data frames that I would like to work on data.tables. However, I'm not really sure how to go about it. I basically have the following: names = c("data1", "data2") frame = data.frame(list(key1=as.integer(c(1,2,3,4,5,6)), key2=as.integer(c(1,2,3,2,5,6)), data1 = c(3,3,2,3,5,2), data2 = c(3,3,2,3,5,2))) for(i in
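A minimal sketch of one way to drive a data.table by column names held in a character vector; the toy data mirrors the frame in the post, while the grouping and the sum() are assumptions about what the truncated loop was doing:

library(data.table)
dt <- data.table(key1 = 1:6,
                 key2 = c(1L, 2L, 3L, 2L, 5L, 6L),
                 data1 = c(3, 3, 2, 3, 5, 2),
                 data2 = c(3, 3, 2, 3, 5, 2))
cols <- c("data1", "data2")
# .SDcols restricts .SD to the named columns, so they can be chosen at run time
dt[, lapply(.SD, sum), by = key2, .SDcols = cols]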
2010 Mar 15
1
rbind, data.frame, classes
Hi, This has bugged me for a bit. The first question is how to keep classes with rbind, and the second is how to properly get vectors instead of lists after turning an rbind of lists into a data.frame: list1=list(a=2, b=as.Date("20090102", format="%Y%m%d")) list2=list(a=2, b=as.Date("20090102", format="%Y%m%d")) rbind(list1, list2) #this loses the
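A hedged sketch of the usual answer: rbind() the objects as one-row data.frames rather than as lists, which keeps the Date class and yields atomic columns instead of list columns.

df1 <- data.frame(a = 2, b = as.Date("20090102", format = "%Y%m%d"))
df2 <- data.frame(a = 2, b = as.Date("20090102", format = "%Y%m%d"))
res <- rbind(df1, df2)
class(res$b)  # "Date": the class survives
str(res)      # both columns are plain vectors, not lists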
2010 May 20
1
computer out of memory when using sigpathway
Dear R users, I am sorry to disturb you, but I really need your help with the usage of sigPathway. I want a sliding-window analysis for possible chromosome expression pattern mining. My research microorganism is a plant pathogen, Gibberella zeae, and I first used SAS to divide the loci into windows of 10, 20, 30, or 40 on the fungal chromosome according to their location. I really
2010 Feb 26
2
dramatic speed difference in lapply
So I have a function that does lapply calls for me based on dimension. It currently only works for length(pivotColumns)=2 because I haven't fixed the rbinds. I have two versions; one runs WAY faster than the other, and I'm not sure why. Fast version: fedb.ddplyWrapper2Fast <- function(data, pivotColumns, listNameFunctions, ...){ lapplyFunctionRecurse <- function(cdata, level=1,
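The wrapper itself is cut off, so the cause can only be guessed at; a common one, sketched here purely as an assumption, is growing the result with rbind() inside the recursion versus collecting the pieces and binding them once at the end:

pieces <- split(mtcars, mtcars$cyl)   # stand-in list of data.frame chunks

slow_bind <- function(xs) {           # copies the accumulated result on every pass
  out <- NULL
  for (x in xs) out <- rbind(out, x)
  out
}
fast_bind <- function(xs) do.call(rbind, xs)   # single bind at the end

nrow(slow_bind(pieces)) == nrow(fast_bind(pieces))  # same rows, very different cost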
2010 Mar 17
1
Reg GARCH+ARIMA
Hi, My question is probably basic; as I am not from a stats background, I am not sure how to proceed. Currently I am doing forecasting: I used ARIMA to forecast, and since the time series was volatile I used garchFit on the residuals. How do I use the output of GARCH to correct the forecasted values from ARIMA? Here is my code: ### delta is the data fit <- arima(delta, order=c(2,0,1)) fit.res <-
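The snippet above is truncated, so here is a hedged sketch of one common way to combine the two models (the simulated series stands in for delta, and the GARCH(1,1) order and the 95% band are assumptions): the ARIMA fit supplies the point forecast, and the GARCH fit on its residuals supplies the forecast of conditional volatility used to widen or narrow the interval around it.

library(fGarch)
set.seed(1)
delta <- arima.sim(model = list(ar = c(0.5, -0.2), ma = 0.3), n = 500)  # stand-in data

fit     <- arima(delta, order = c(2, 0, 1))
fit.res <- residuals(fit)
g       <- garchFit(~ garch(1, 1), data = fit.res, trace = FALSE)

mean.fc <- predict(fit, n.ahead = 10)$pred               # ARIMA point forecast
vol.fc  <- predict(g, n.ahead = 10)$standardDeviation    # GARCH conditional sd

upper <- mean.fc + 1.96 * vol.fc   # GARCH does not shift the forecast level,
lower <- mean.fc - 1.96 * vol.fc   # it reshapes the uncertainty band around it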
2012 Apr 29
1
CForest Error Logical Subscript Too Long
Hi, This is my code (my data is attached): library(languageR) library(rms) library(party) OLDDATA <- read.csv("/Users/Abigail/Documents/OldData250412.csv") OLDDATA$YD <- factor(OLDDATA$YD, labels=c("Yes", "No")) OLDDATA$ND <- factor(OLDDATA$ND, labels=c("Yes", "No")) attach(OLDDATA) defaults <- cbind(YD, ND) set.seed(47) data.controls
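The attached CSV is not available here, so only a hedged, self-contained cforest() sketch on built-in data is shown; the ntree and mtry values, and the use of a single factor response instead of the cbind(YD, ND) pair, are assumptions:

library(party)
set.seed(47)
# cforest_unbiased() builds the controls object that the truncated data.controls
# line was presumably constructing
fit <- cforest(Species ~ ., data = iris,
               controls = cforest_unbiased(ntree = 500, mtry = 2))
varimp(fit)  # conditional-inference forest variable importance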
2010 May 21
3
Concatenation
Hi, I have a dataframe with some 800 rows and 14 columns. Could you please advise how I can concatenate the rows, one after another, and similarly the columns, one below the other? Many thanks. Cheers, Santana
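A hedged reading of the question, shown on a small stand-in frame: unlist() stacks the columns one below the other, and transposing first lays the rows end to end.

df <- data.frame(a = 1:3, b = 4:6, c = 7:9)   # stand-in for the 800 x 14 frame
by_column <- unlist(df, use.names = FALSE)    # 1 2 3 4 5 6 7 8 9
by_row    <- as.vector(t(as.matrix(df)))      # 1 4 7 2 5 8 3 6 9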
2012 Mar 26
1
assigning vector or matrix sparsely (for use with mclapply)
Dear R wizards--- I have a wrapper on mclapply() that makes it a little easier for me to do multiprocessing. (Posting this may make life easier for other googlers.) I pass a data frame, a vector that tells me what rows should be recomputed, and the function; and I get back a vector or matrix of answers. d <- data.frame( id=1:6, val=11:16 ) loc <- c(TRUE,TRUE,FALSE,TRUE,FALSE,TRUE)
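A hedged sketch of the idea rather than the poster's wrapper: recompute only the rows flagged in loc in parallel, then assign the answers back into the frame sparsely (the toy computation and the mc.cores value are assumptions; mclapply forks, so mc.cores > 1 is not available on Windows).

library(parallel)
d   <- data.frame(id = 1:6, val = 11:16)
loc <- c(TRUE, TRUE, FALSE, TRUE, FALSE, TRUE)

recompute <- function(row) row$val * 10               # stand-in for the real per-row work
rows <- split(d[loc, ], seq_len(sum(loc)))            # one single-row frame per flagged row
d$val[loc] <- unlist(mclapply(rows, recompute, mc.cores = 2))
d   # only the flagged rows changed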
2017 Jun 09
2
Extremely slow du
Hi, I have just moved our 400 TB HPC storage from Lustre to Gluster. It is part of a research institute, and users have files ranging from very small to big (a few KB to 20 GB). Our setup consists of 5 servers, each with 96 TB of RAID 6 disks. All servers are connected through 10G Ethernet, but not all clients are. Gluster volumes are distributed without any replication. There are approximately 80 million files in
2017 Jun 09
2
Extremely slow du
Hi Vijay, Thanks for your quick response. I am using Gluster 3.8.11 on CentOS 7 servers (glusterfs-3.8.11-1.el7.x86_64); clients are CentOS 6, but I tested with a CentOS 7 client as well and the results didn't change. gluster volume info: Volume Name: atlasglust Type: Distribute Volume ID: fbf0ebb8-deab-4388-9d8a-f722618a624b Status: Started Snapshot Count: 0 Number of Bricks: 5 Transport-type: tcp
2017 Jun 12
2
Extremely slow du
Hi Vijay, I have enabled client profiling and used this script, https://github.com/bengland2/gluster-profile-analysis/blob/master/gvp-client.sh, to extract data. I am attaching the output files. I don't have any reference data to compare with my output; hopefully you can make some sense out of it. On Sat, Jun 10, 2017 at 10:47 AM, Vijay Bellur <vbellur at redhat.com> wrote: > Would it be
2017 Jun 09
0
Extremely slow du
Can you please provide more details about your volume configuration and the version of gluster that you are using? Regards, Vijay On Fri, Jun 9, 2017 at 5:35 PM, mohammad kashif <kashif.alig at gmail.com> wrote: > Hi > > I have just moved our 400 TB HPC storage from lustre to gluster. It is > part of a research institute and users have very small files to big files > ( few
2009 Feb 22
1
split/decompose lines
Dear R users, I have a very simple problem but I can't find the function in R to deal with it. I need to split (or decompose) one line into many lines using one field as a reference. I have a table in the following format:
A  B  Frequency
23 3  2
24 2  5
25 1  3
and I need to split each line into several lines according to the frequency, to achieve something like this: A B
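A minimal sketch using the three rows from the post: indexing with rep() repeats each row Frequency times, which is the decomposition being asked for.

tab <- data.frame(A = c(23, 24, 25), B = c(3, 2, 1), Frequency = c(2, 5, 3))
expanded <- tab[rep(seq_len(nrow(tab)), tab$Frequency), c("A", "B")]
rownames(expanded) <- NULL
expanded  # 2 + 5 + 3 = 10 rows of A and B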
2001 Dec 07
2
question
Isn't anything in a data frame that is not explicitly numeric a *factor*? -Greg > -----Original Message----- > From: Peter Dalgaard BSA [mailto:p.dalgaard@biostat.ku.dk] > Sent: Friday, December 07, 2001 5:32 PM > To: Erich Neuwirth > Cc: r-devel@stat.math.ethz.ch > Subject: Re: [Rd] question > > > Erich Neuwirth
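A short illustration with made-up data of the behaviour being asked about: character columns were converted to factors by default (in modern R this is controlled by stringsAsFactors, whose default changed from TRUE to FALSE in R 4.0.0).

df1 <- data.frame(x = 1:3, y = c("a", "b", "c"), stringsAsFactors = TRUE)
sapply(df1, class)   # y is "factor"
df2 <- data.frame(x = 1:3, y = c("a", "b", "c"), stringsAsFactors = FALSE)
sapply(df2, class)   # y stays "character"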
2014 Aug 13
1
adjust SOA record
Hi, We have outdated SOA information in our Samba DNS. We used to have a DC1; it no longer exists, but it's still listed in the SOA records on both remaining DCs. I think this is not correct. I am under the impression that in order to get full failover support, all DCs need to list themselves as SOA. This is also what Google tells me:
2017 Jun 10
0
Extremely slow du
Would it be possible for you to turn on client profiling and then run du? Instructions for turning on client profiling can be found at [1]. Providing the client profile information can help us figure out where the latency could be stemming from. Regards, Vijay [1] https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Performance%20Testing/#client-side-profiling On Fri, Jun 9, 2017 at
2017 Jun 16
0
Extremely slow du
Hi Vijay, Did you manage to look into the gluster profile logs? Thanks, Kashif On Mon, Jun 12, 2017 at 11:40 AM, mohammad kashif <kashif.alig at gmail.com> wrote: > Hi Vijay > > I have enabled client profiling and used this script > https://github.com/bengland2/gluster-profile-analysis/blob/ > master/gvp-client.sh to extract data. I am attaching output files. I >