similar to: using list to pass argument to function

Displaying 20 results from an estimated 6000 matches similar to: "using list to pass argument to function"

2008 Apr 13
2
Arrays and functions
Hi, I am doing a stats project using R to work out the size of a t-test and Wilcoxon test depending on the distribution and sample size. I just can't get it to work - I want to put my results from the function size() into an array. At the moment I keep getting the error message: Error in res[distribution, test, samplesize] <- results : subscript out of bounds. Can anyone tell me where
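The usual cause of that "subscript out of bounds" error is assigning into an array that was never given those dimensions, or indexing by character names without dimnames. A minimal sketch, with distribution, test, and sample-size labels assumed purely for illustration (they are not from the original post):

distributions <- c("normal", "t", "exponential")
tests         <- c("t.test", "wilcoxon")
samplesizes   <- c("10", "30", "100")
## pre-allocate the result array with dimnames so character subscripts are in bounds
res <- array(NA_real_,
             dim = c(length(distributions), length(tests), length(samplesizes)),
             dimnames = list(distributions, tests, samplesizes))
## an assignment like the one in the post now works
res["normal", "wilcoxon", "30"] <- 0.051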
2006 Apr 15
1
Removing Rows/Records from a Table
I would like to selectively remove rows from a table. I had hoped that I could create a table and selectively add rows with something like > NewTable<-table(nrow=100, ncol=4) > NewTable[1,]<-OldTable[10,] but that doesn't work. The former call gives > NewTable ncol nrow 4 100 1 while the latter call gives a table the length of OldTable. Making a matrix, m, with the
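table() tabulates data; it is not a container that can be pre-sized with nrow/ncol, which is why the calls above do not behave as hoped. A sketch of the matrix route hinted at in the last sentence, with OldTable stood in by random data (an assumption):

OldTable <- matrix(rnorm(400), nrow = 100, ncol = 4)   # stand-in for the real data
NewTable <- matrix(NA, nrow = 100, ncol = 4)           # pre-allocated container
NewTable[1, ] <- OldTable[10, ]                        # copy a chosen row across
Kept <- OldTable[-c(10, 20), ]                         # or simply drop unwanted rows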
2009 Sep 18
1
lapply - value changes as parameters to function?
Hi, I'm trying to get better at things like lapply but it still stumps me. I have a function I've written, tested and debugged using individual calls to the function, ala: ResultList5 = DoAvgCalcs(IndexData, Lookback=5, SampleSize=TestSamples, Iterations=TestIterations) ResultList8 = DoAvgCalcs(IndexData, Lookback=8, SampleSize=TestSamples, Iterations=TestIterations) ResultList13
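Since the calls differ only in Lookback, lapply over a vector of lookback values does the same job in one pass. A self-contained sketch; DoAvgCalcs below is a stand-in with a placeholder body, not the poster's function:

DoAvgCalcs <- function(x, Lookback, SampleSize, Iterations) {
  mean(tail(x, Lookback))                  # placeholder body only
}
IndexData <- rnorm(250)                    # stand-in data
lookbacks <- c(5, 8, 13)
ResultLists <- lapply(lookbacks, function(lb)
  DoAvgCalcs(IndexData, Lookback = lb, SampleSize = 100, Iterations = 10))
names(ResultLists) <- paste0("Lookback", lookbacks)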
2000 Sep 20
1
SV: sample from contingency table
I have had the same problem and I wrote this function rmulti <- function(n, size, p) { NrDim <- length(p) if(NrDim<2) stop("The simulated variable has to be at least 2-dimensional") res <- matrix(data=NA, nrow=n, ncol=NrDim) p <- p/sum(p) TempSize <- size for(i in 1:NrDim) { TempP <- p[i]/sum(p[i:NrDim]) TempBin <- rbinom(n=n, size=TempSize,
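In current R the same draw is available directly as stats::rmultinom, which generates n multinomial samples of a given size over the cell probabilities; the conditional-binomial loop above is the classical way to build it by hand. A short sketch with assumed example values:

p    <- c(0.1, 0.3, 0.6)                 # cell probabilities (example values)
size <- 50                               # total count per sample
rmultinom(4, size = size, prob = p)      # 3 x 4 matrix; each column sums to 50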
2010 Jul 22
1
How do I get rid of list elements where the value is NULL before applying rbind?
Here is the function that makes the data.frames in the list: funweek <- function(df) if (length(df$elapsed_time) > 5) { res = fitdist(df$elapsed_time,"exp") year = df$sale_year[1] sample = df$sale_week[1] mid = df$m_id[1] estimate = res$estimate sd = res$sd samplesize = res$n loglik = res$loglik aic = res$aic bic = res$bic chisq =
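Once the list is built, the NULL elements (the data frames the function never returned for short groups) can be dropped before binding. A minimal sketch with toy data frames:

results  <- list(data.frame(a = 1, b = 2), NULL, data.frame(a = 3, b = 4), NULL)
results  <- Filter(Negate(is.null), results)   # keep only the non-NULL elements
combined <- do.call(rbind, results)            # then rbind the survivors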
2018 Oct 04
2
Bug : Autocorrelation in sample drawn from stats::rnorm (hmh)
Hi Hugo, I've been able to replicate your bug, including for other distributions (runif, rexp, rgamma, etc) which shouldn't be surprising since they're probably all drawing from the same pseudo-random number generator. Interestingly, it does not seem to depend on the choice of seed, I am not sure why that is the case. I'll point out first of all that the R-devel mailing list is
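A quick way to look for the reported effect is to compute the lag-1 autocorrelation of a long rnorm draw directly; for the default Mersenne-Twister generator it should be statistically indistinguishable from zero. A sketch (the seed is arbitrary; the post says the effect does not depend on it):

set.seed(1)
x <- rnorm(1e5)
acf(x, lag.max = 5, plot = FALSE)        # sample autocorrelations, all ~ 0
cor(x[-length(x)], x[-1])                # explicit lag-1 correlation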
2004 Sep 24
1
algorithm reference for sample()
Hi, Don't know if it belongs to r-devel or r-help, but since I am planning to alter some of R's internal code I am sending it here. The existing implementation of the sample() function, when the optional 'prob' argument is given, is quite inefficient. The complexity is O(sampleSize * universeSize), see ProbSampleReplace() and ProbSampleNoReplace() in random.c. This makes the
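For sampling with replacement, one standard alternative to a per-draw linear scan (not what random.c does, just an illustration of the idea) is to precompute the cumulative probabilities once and map uniform draws onto them with findInterval:

weighted_sample <- function(n, prob) {
  cp <- cumsum(prob) / sum(prob)         # cumulative distribution over the universe
  findInterval(runif(n), cp) + 1L        # each uniform draw lands in one cell
}
set.seed(1)
table(weighted_sample(10000, prob = c(0.1, 0.2, 0.7)))   # roughly 1000 / 2000 / 7000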
2001 Dec 09
1
Help for Power analysis
Dear colleague, I am not sure whether this R code is correct. I would like to show, on the sample-size axis, the sample size where a line drawn from the power axis (at 80%) meets the curve, using R code. How do I show this and select the most appropriate sample size for this power (.79955687 - .80983575)? Thanks for your help and answer. Best Regards, Nikom Thanomsieng, Email: nikom at kku.ac.th .... #Power analysis: Sample size for
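For a two-sample t-test, base R's power.t.test solves for whichever argument is left out, so setting power = 0.80 and omitting n returns the required per-group sample size. A sketch with assumed effect size and standard deviation (not taken from the original code):

power.t.test(delta = 0.5, sd = 1, sig.level = 0.05, power = 0.80,
             type = "two.sample", alternative = "two.sided")   # solves for n per group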
2011 Mar 25
1
Appending data to a data.frame and writing a csv
Dear R helpers exposure <- data.frame(id = c(1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20), ead = c(9483.686,50000,6843.4968,10509.37125,21297.8905,50000,706152.8354, 62670.5625, 687.801995,50641.4875,59227.125,43818.5778,52887.72534,601788.7937, 56813.14859,4012356.056,1419501.179,210853.4743,749961,6599.0862), pd =
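New rows and derived columns can be added with rbind/cbind and the result written out with write.csv. A sketch using a shortened version of the exposure frame; the pd values and the expected_loss column are illustrative assumptions, since the posted snippet is cut off at "pd =":

exposure <- data.frame(id  = 1:3,
                       ead = c(9483.686, 50000, 6843.4968),
                       pd  = c(0.01, 0.02, 0.05))            # assumed values
exposure$expected_loss <- exposure$ead * exposure$pd         # derived column
exposure <- rbind(exposure,
                  data.frame(id = 4, ead = 10509.37125, pd = 0.03,
                             expected_loss = 10509.37125 * 0.03))  # appended row
write.csv(exposure, "exposure.csv", row.names = FALSE)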
2001 Apr 15
1
contingency tables in R
Dear List: Most of the analysis I do involves contingency tables. I am migrating to R from Stata and I have a number of questions about using contingency tables in R. I suspect that most of the things I want to do are very short R scripts that people on this list probably have. I wonder if you would be willing to share them. First, the presentation of tables by table() is not
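Much of the routine presentation work around table() is covered by a handful of base helpers; a short sketch on a built-in data set:

tab <- with(warpbreaks, table(wool, tension))   # counts
addmargins(tab)                                 # add row and column totals
prop.table(tab, margin = 1)                     # row proportions
summary(tab)                                    # chi-squared test of independence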
2010 Sep 19
1
Weibull- Random Censoring
I generate a random vector from the Weibull distribution: sampWB <- urweibull(sampleSize, shape=shape.true, scale=scale.true, lb=0, ub=Inf). How can I create a subvector containing 30% of the sample size of sampWB, to be treated as censored data? The probability of each value in sampWB being included in the subvector should be uniform.
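Flagging a uniformly chosen 30% of the sample as censored amounts to sampling indices without replacement. urweibull is not in base R (it comes from an add-on package), so the sketch below draws the data with stats::rweibull and uses assumed shape/scale values:

sampleSize <- 100
sampWB  <- rweibull(sampleSize, shape = 1.5, scale = 2)         # assumed parameters
cens_id <- sample(sampleSize, size = round(0.30 * sampleSize))  # uniform choice of 30%
status  <- rep(1L, sampleSize)       # 1 = fully observed
status[cens_id] <- 0L                # 0 = censored
censored_values <- sampWB[cens_id]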
2004 Sep 24
1
algorithm reference for sample() - Knuth
Thank you for the reference to Knuth. Indeed in vol. 2 he has a > -----Original Message----- > From: Tony Plate [mailto:tplate@blackmesacapital.com] > Sent: Friday, September 24, 2004 8:05 AM > To: Vadim Ogranovich > Subject: Re: [Rd] algorithm reference for sample() > > Have you tried looking in Knuth's books on computer > algorithms? (They are classics for good
2018 Oct 05
2
Bug : Autocorrelation in sample drawn from stats::rnorm (hmh)
On 05/10/2018, 09:45, "R-help on behalf of hmh" <r-help-bounces at r-project.org on behalf of hugomh at gmx.fr> wrote: Hi, Thanks William for this fast answer, and sorry for sending the 1st mail to r-help instead of to r-devel. I noticed that bug while I was simulating many small random walks using c(0,cumsum(rnorm(10))). Then the negative
2009 May 22
1
Paste Strings as logical for functions?
Dear R Users, I have some dynamic selection rules that I want to pass around for my functions: >rules <- paste(g$TrialList==1 & g$Session==2) >myfunction <- function(rules) { > index <- which(rules) > anotherFunction(index) > } However, I can't find a way to pass around these selection rules easily (for subset, for which, etc) Please let me know if you have
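paste() turns the condition into a character vector of "TRUE"/"FALSE" strings, which which() cannot use. Keeping the rule as a function, or as an unevaluated expression evaluated inside the data, passes around cleanly. A sketch with stand-in data (g and anotherFunction are the poster's objects, replaced here by toy equivalents):

g <- data.frame(TrialList = c(1, 1, 2), Session = c(2, 1, 2))   # stand-in data

rule <- function(d) d$TrialList == 1 & d$Session == 2   # the rule travels as a function
myfunction <- function(rule, data) {
  index <- which(rule(data))
  index                              # hand index on to the next step as needed
}
myfunction(rule, g)                  # -> 1

expr <- quote(TrialList == 1 & Session == 2)   # or: an unevaluated expression
which(eval(expr, g))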
2010 Aug 11
1
sem & psych
Dear R users, I am trying to simulate some multitrait-multimethod models using the packages sem and psych but whatever I do to deal with models which do not converge I always get stuck and get error messages such as these: "Error in summary.sem(M1) : coefficient covariances cannot be computed" "Error in solve.default(res$hessian) : System ist für den Rechner singulär: reziproke
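When individual replications fail to converge, wrapping the fit (or its summary) in tryCatch lets a simulation loop record the failure and carry on instead of stopping. A generic sketch; fit_one_model is a hypothetical stand-in for the sem()/summary.sem() call, not the actual model code:

fit_one_model <- function(seed) {
  set.seed(seed)
  if (runif(1) < 0.3) stop("coefficient covariances cannot be computed")  # simulated failure
  list(estimate = rnorm(1))
}
results <- lapply(1:10, function(s)
  tryCatch(fit_one_model(s), error = function(e) NULL))   # NULL marks a failed replication
sum(vapply(results, is.null, logical(1)))                 # how many replications failed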
2009 Jul 08
3
Fitting a trend-line
Hi all, I am new to R. How does one go about fitting a trend-line to a scatter plot? Any help is appreciated. Thanks and regards, Anupam
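The usual base-graphics answer is to fit a linear model and add it with abline, optionally with a smoother for comparison. A sketch on a built-in data set:

x <- cars$speed; y <- cars$dist            # example data shipped with R
plot(x, y, xlab = "speed", ylab = "dist")
fit <- lm(y ~ x)
abline(fit, col = "blue")                  # straight trend line
lines(lowess(x, y), col = "red", lty = 2)  # optional smoother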
2007 May 04
5
Something weird with .save method
Hi, I have a very simple app. I need to create as many records in the db as there are selected checkboxes (named :isselected) in the view. Here is the code snippet from the controller: isselectedhash = params[:isselected] for mod in isselectedhash.keys @issue = Issue.new(params[:issue]) @issue.state = 'NEW' @issue.mod_id = isselectedhash[mod] # this gets
2023 Jun 29
3
Plotting factors in graph panel
Thanks, Pikal and Jim. Yes, it has been a long time Jim. I hope you have been well. Pikal, thanks. Your solution may be close to what I want. I did not know that I was posting in HTML. I just copied the data from Excel and posted in the email in Gmail. The data is still in Excel, because I have not yet figured out what is a good way to organize it in R. I am posting it again below as text. These
2010 Apr 02
2
tetrachoric correlations
Hi, Is there any R library/package that calculates tetrachoric correlations from given marginals and Pearson correlations among ordinal variables? Inputs to polychor function in polycor package are either contingency tables or ordinal data themselves. I am looking for something that takes marginal distributions and Pearson correlation as inputs. For example, Y1=(1,2,3) with P(Y1=1)=0.3,