Displaying 20 results from an estimated 30 matches for "0.0000000000".

2008 Oct 31
6
[LLVMdev] polyhedron 2005 results for llvm svn
I am finding with the patch that all of the Polyhedron 2005 benchmarks pass on i686-apple-darwin9. Could someone clarify the regression rules for releases? Not building a secondary language on a primary target is usually considered a P1 regression for FSF gcc; not treating it that way here gives one the impression that llvm.org isn't playing by the same rules. No one is ever going to want to use these
2003 Jan 02
1
aggregate: "sum" not meaningful for factors
Dear all, I am trying to summarise my data per category using aggregate, but for some reason I get the error message '"sum" not meaningful for factors' even though my vector is numeric. The data set is shown below. Could someone please give a hint? Thanks in advance! Sincerely, Tord

    > names(test)
    [1] "ObjektID"     "tallstubbyta"
    > is.factor(test$ObjektID);
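The usual fix, sketched below on the assumption that test$tallstubbyta is the column that was read in as a factor (read.table does this whenever a column contains any non-numeric token): convert it via character before aggregating.

    # Hedged sketch: column names taken from the post; the stray token that
    # caused the factor conversion is not shown in the snippet.
    test$tallstubbyta <- as.numeric(as.character(test$tallstubbyta))
    aggregate(test["tallstubbyta"], by = list(ObjektID = test$ObjektID),
              FUN = sum)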
2013 Feb 01
29
cumulative sum by group and under some criteria
Thank you very much for your reply. Your code works well with this example. When I modified it a little to fit my real data, I got an error message:

    Error in split.default(x = seq_len(nrow(x)), f = f, drop = drop, ...) :
      Group length is 0 but data length > 0

On Thu, Jan 31, 2013 at 12:21 PM, arun kirshna [via R] <ml-node+s789695n4657196h87@n4.nabble.com> wrote: > Hi, > Try this: >
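For reference, a grouped cumulative sum that sidesteps split() entirely can be written with ave(); a minimal sketch with made-up data, not the poster's:

    df <- data.frame(group = c("a", "a", "b", "b"), value = c(1, 2, 3, 4))
    df$csum <- ave(df$value, df$group, FUN = cumsum)   # gives 1 3 3 7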
2009 Jul 16
0
Rsocp
Hi, The following works fine:

    > f
    [1] 0.08 0.03 0.04
    > A2
                 [,1]         [,2]         [,3]
    [1,] 0.000000e+00 0.0000000000 0.000000e+00
    [2,] 0.000000e+00 0.0000000000 0.000000e+00
    [3,] 2.999651e-03 0.0009094342 1.945708e-03
    [4,] 4.124431e-05 0.0001360390 1.203345e-05
    [5,] 3.027932e-04 0.0000412920 4.668090e-04
    [6,] 0.000000e+00 0.0000000000 0.000000e+00
    > b2
    [1] 0 0 0 0 0 0
2013 Jun 24
1
K-means results understanding!!!
Dear members, I am having problems understanding the kmeans results in R. I am applying the kmeans algorithm to my big data file, and it produces the clustering results. Q1) Does anybody know how to find out which data points went into which cluster (I have fixed the number of clusters at 5)? COMMAND: kmeans.results <- kmeans(mydata, centers = 5, iter.max = 1000, nstart = 10000) Q2) When I
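For Q1, kmeans() returns the assignment in its $cluster component, one entry per row of the input. A runnable sketch (mydata below is a stand-in matrix, not the poster's file):

    mydata <- matrix(rnorm(500), ncol = 5)             # stand-in data
    kmeans.results <- kmeans(mydata, centers = 5, iter.max = 1000,
                             nstart = 10000)
    head(kmeans.results$cluster)                       # cluster id per row
    which(kmeans.results$cluster == 3)                 # rows in cluster 3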
2003 Nov 23
3
make check reg-tests-3
Should I submit this as a bug report?

    --- reg-tests-3.Rout.save	Thu Jul  3 09:55:40 2003
    +++ reg-tests-3.Rout	Sun Nov 23 13:10:57 2003
    @@ -1,17 +1,18 @@
    -R : Copyright 2003, The R Development Core Team
    -Version 1.8.0 Under development (unstable) (2003-07-03)
    +R : Copyright 2003, The R Foundation for Statistical Computing
    +Version 1.8.1 (2003-11-21), ISBN 3-900051-00-3
     R is free software and
2013 Feb 28
11
new question
Hi,

    directory <- "/home/arunksa111/data.new"
    # first function
    filelist <- function(directory, number, list1){
      setwd(directory)
      filelist1 <- dir(directory)
      direct <- dir(directory,
                    pattern = paste("MSMS_", number, "PepInfo.txt", sep = ""),
                    full.names = FALSE, recursive = TRUE)
      list1 <- lapply(direct, function(x) read.table(x, header = TRUE, sep =
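A self-contained sketch of the same idea using list.files(), which returns full paths and so avoids setwd(); the tab separator is an assumption, since the original call is cut off at sep =:

    read_pepinfo <- function(directory, number) {
      files <- list.files(directory,
                          pattern = paste("MSMS_", number, "PepInfo.txt", sep = ""),
                          full.names = TRUE, recursive = TRUE)
      lapply(files, read.table, header = TRUE, sep = "\t")  # sep is assumed
    }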
2002 Apr 15
1
glm link = logit, passing arguments
Hello R-users. I haven't used R for a lifetime and this might be trivial; I hope you do not mind. I have a question about arguments in the glm function. There seems to be something I cannot cope with. The basics are ok:

    > y <- as.double(rnorm(20) > .5)
    > logit.model <- glm(y ~ rnorm(20), family = binomial(link = logit), trace = TRUE)
    Deviance = 28.34255 Iterations - 1
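The call works because glm() forwards extra arguments such as trace to glm.control(); the explicit and arguably clearer form is sketched below, with reproducible stand-in data:

    set.seed(1)
    x <- rnorm(20)
    y <- as.double(rnorm(20) > .5)
    logit.model <- glm(y ~ x, family = binomial(link = logit),
                       control = glm.control(trace = TRUE))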
2006 Aug 26
1
Problem on Histogram
Dear all, maybe this question seems trivial to most R users, but at least for me it has turned out to be quite problematic. Suppose I have the following data:

    > r
     [1] -0.0008179960 -0.0277968529 -0.0105731583 -0.0254050262  0.0321847131  0.0328170674
     [7]  0.0431894392 -0.0217614918 -0.0218366946  0.0048939739 -0.0012212499  0.0032533579
    [13] -0.0081533269 -0.0098725606
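If the goal is simply a density histogram of r, one hedged sketch (with stand-in data of the same scale, since the vector above is cut off):

    set.seed(1)
    r <- rnorm(14, sd = 0.02)                      # stand-in for the returns
    hist(r, breaks = seq(min(r), max(r), length.out = 10), freq = FALSE)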
2009 Sep 28
0
msm and pmatrix
Dear All, I'm using the R package 'msm' to fit a multi-state model to infection history data (counts of infections per month up to diagnosis of a particular disease; the sink state is state 11). The observed transitions are as follows:

        to
    from     1    2   3   4  5  6 7 8 10  11
       1 35192 3806 899 233 46 11 3 0  1 534
       2  3801  790 249  69 15
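Tables of observed transitions like this are what msm's statetable.msm() produces; a runnable sketch using the cav example data shipped with msm (the poster's own data and column names are not shown in the snippet):

    library(msm)
    statetable.msm(state, PTNUM, data = cav)   # counts of observed transitions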
2003 Jul 21
1
help on barplot
Hello, I am trying to compare two histograms using barplot. The idea is to plot the histograms as pairs of columns side by side for each x value. I was able to do this using barplot before, but for the life of me I can't remember now how I did it:

    > d
                   [,1]         [,2]
    -37.5  0.0000000000 2.789396e-05
    -32.5  0.0001394700 5.578801e-05
    -27.5  0.0019804742
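The idiom being remembered is most likely beside = TRUE with the matrix transposed, so each row of d becomes a pair of adjacent bars; a sketch built from the two fully visible rows:

    d <- rbind("-37.5" = c(0.0000000000, 2.789396e-05),
               "-32.5" = c(0.0001394700, 5.578801e-05))
    barplot(t(d), beside = TRUE, names.arg = rownames(d),
            legend.text = c("hist 1", "hist 2"))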
2007 May 15
3
qr.solve and lm
Dear R experts, I have Matlab code which I am translating to R in order to examine and enhance it. First of all, I need to reproduce in R the results that were already obtained in Matlab (to make sure that everything is correct). There are some matrix manipulations in the code, among them the '\' operation. I have the following data frame

    > ABS.df
    Pro syn
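Matlab's A \ b (a least-squares solve when A is tall) corresponds to qr.solve(A, b) in R, or equivalently an intercept-free lm() fit; a sketch with stand-in matrices:

    set.seed(1)
    A <- matrix(rnorm(12), nrow = 4)   # 4 equations, 3 unknowns
    b <- rnorm(4)
    x1 <- qr.solve(A, b)               # least-squares solution, like A \ b
    x2 <- coef(lm(b ~ A - 1))          # same coefficients via lm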
2007 Jan 06
1
garchFit in R
Dear all, I have a problem here: I'm using garchFit from the fSeries package. Here is part of the script:

    > data <- read.table("d:/data.txt")
    > a <- garchFit(~garch(1,1), ts(data))

I also attached the file here. When I run this, R stops responding. I also tried

    > a <- garchFit(~garch(1,1), ts(data*10))

and that worked. I
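The scaling trick works because the optimiser struggles when the series' variance is tiny; a sketch of the workaround (garchFit later moved to the fGarch package, and the file path is the poster's):

    library(fGarch)                            # later home of garchFit
    x <- ts(read.table("d:/data.txt")[[1]])    # poster's file
    fit <- garchFit(~ garch(1, 1), data = x * 10, trace = FALSE)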
2012 Apr 09
1
Pairwise comparison matrix elements
Hi! I'm really hoping someone out there will be able to help me. I recently started my MSc dissertation on Population Projection Matrices, which had been going well until now. I am trying to set up a general script that does a pairwise comparison of all elements in my matrices. So, for example, given that I have the following matrix S:

    > S
         [,1] [,2] [,3]
    [1,]
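One way to compare every element with every other is to flatten the matrix and use outer(); the comparison function here (a ratio) is a placeholder for whatever the dissertation actually needs, and S is hypothetical:

    S <- matrix(c(0.2, 0.5, 0.1, 1.2, 0.3, 0.7, 3.1, 0.4, 0.9), nrow = 3)
    v <- as.vector(S)
    ratios <- outer(v, v, FUN = "/")   # ratios[i, j] = element i / element j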
2010 Nov 09
0
convergence message & SE calculation when using optim( )
Hi R-users, I am trying to estimate function parameters using optim(). My count observations follow a Poisson-like distribution. The problem is that I want to express the lambda coefficient in the Poisson likelihood function as a linear function of other covariates (and thus of other coefficients). The code that I am using (except the data frame) is the following (FYI, the parameters need to be
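A minimal sketch of that setup: a log link, lambda = exp(X %*% beta), keeps the rate positive, and optim(..., hessian = TRUE) yields standard errors from the observed information. The data and starting values below are made up:

    set.seed(1)
    X <- cbind(1, rnorm(100))                    # intercept + one covariate
    y <- rpois(100, exp(X %*% c(0.5, 0.3)))
    negll <- function(beta, X, y) {
      lambda <- exp(X %*% beta)                  # log link keeps lambda > 0
      -sum(dpois(y, lambda, log = TRUE))
    }
    fit <- optim(c(0, 0), negll, X = X, y = y, hessian = TRUE)
    se <- sqrt(diag(solve(fit$hessian)))         # SEs from observed information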
2008 Nov 25
0
Vector autoregression, panel data
Hi! I'm a new R user and I have a question about estimating a VAR on panel data. What I'm trying to do is to explain a stock's volume using its lagged volume, its lagged returns, and lagged market returns (and vice versa). In addition, I have generated an exogenous variable controlling for the stock's volatility. Some of you may be familiar with this experiment, since it follows
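A per-stock VAR with an exogenous volatility series can be sketched with the vars package (pooling across the panel is a separate step this does not do; all data and names below are stand-ins):

    library(vars)
    set.seed(1)
    stock <- data.frame(volume = rnorm(200), ret = rnorm(200), mret = rnorm(200))
    vola  <- matrix(rnorm(200), ncol = 1, dimnames = list(NULL, "vola"))
    fit <- VAR(stock, p = 1, type = "const", exogen = vola)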
2001 Feb 05
4
Removing "row.names"
I need to completely remove row.names from a dataframe. Are there other ways to remove them (and not anything else) besides:

    mydataframe <- data.frame(mydataframe, row.names = NULL)

I realize that this doesn't really remove the row.names; it merely replaces the current row.names vector with the numbers 1..nrow (in quotes). Dr. Marc R. Feldesman, Professor and
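The shorter idiom assigns NULL directly; like the data.frame() call, it resets the row names to the default "1".."n" rather than truly removing them, since a data frame always has row names. A sketch with a hypothetical frame:

    mydataframe <- data.frame(x = 1:3, row.names = c("a", "b", "c"))
    row.names(mydataframe) <- NULL     # back to the default "1".."3"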
2010 May 20
2
Re : Manipulating Data Frames
Dear All, I have data something like this:

    > data <- read.csv(file = 'ipsample.csv', sep = ',', header = TRUE)
    > data
      State  Jan  Feb  Mar  Apr  May Jun
    1   AAA    1    1    0    2    2   0
    2   BBB 1298 1195 1212 1244 1158 845
    3   CCC    0    0    0    1    2   1
    4   DDD    5   11   17   15   10   9
    5   EEE   18   28   27   23   23  16
    6   FFF   68  152  184  135  111
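Since the question is cut off, one guess at the intended manipulation is a wide-to-long reshape of the month columns; this sketch makes that assumption explicit and uses only the first two rows as stand-ins:

    # Assumed goal: one row per State/month pair.
    data <- data.frame(State = c("AAA", "BBB"),
                       Jan = c(1, 1298), Feb = c(1, 1195))
    long <- reshape(data, direction = "long", varying = list(2:3),
                    v.names = "count", times = names(data)[2:3],
                    timevar = "month", idvar = "State")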
2003 Aug 11
0
Designing and incorporating a digital filter
I have a time series of data from an electroencephalogram (EEG). I wish to filter the data to get rid of 50 Hz mains 'hum'. I have 'designed' a combination bandpass and notch filter using a web site. The site returns the filter as "ANSI C" source code. It is:

    /* Digital filter designed by mkfilter/mkshape/gencode   A.J. Fisher
       Command line:
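The same 50 Hz notch can be built natively in R with the signal package, avoiding the C round-trip; the sampling rate here is an assumed 256 Hz (adjust to the EEG's actual rate), and filtfilt() gives zero-phase filtering:

    library(signal)
    fs <- 256                                                 # assumed rate, Hz
    notch <- butter(4, c(48, 52) / (fs / 2), type = "stop")   # band-stop at 50 Hz
    eeg <- sin(2 * pi * 50 * seq(0, 1, by = 1 / fs)) + rnorm(fs + 1, sd = 0.1)
    clean <- filtfilt(notch, eeg)                             # hum removed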