search for: 0.0256

Displaying 20 results from an estimated 24 matches for "0.0256".

2024 Jul 12
2
grep
Thanks. In this case below, what is "x"? I tried rownames(out), which did not work. Sorry. Does this sound like homework to you? On 7/12/2024 5:09 PM, Uwe Ligges wrote: > > > On 12.07.2024 10:54, Steven Yen wrote: >> Below is part of a regression printout. How can I use "grep" to identify >> rows headed by variables (first column) with a certain label. In
2024 Jul 12
1
grep
Below is part of a regression printout. How can I use "grep" to identify rows headed by variables (first column) with a certain label? In this case, I would like to find variables containing "somewhath", "veryh", "somewhatm", "verym", "somewhatc", "veryc", "somewhatl", "veryl". The result should be an index 6:13 or
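[Editor's note: a minimal sketch of the idea being asked about. The pattern and the expected index 6:13 come from the thread; whether the labels live in rownames(out) or in names() of an estimate vector is an assumption -- later posts in the thread end up using names().]

## grep() returns an integer index that can be used to subset the rows
idx <- grep("somewhat|very", rownames(out))   # or names(goprobit.p$est)
out[idx, ]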
2007 Aug 12
1
Write values on y axe
Hi, I have values on the y axis from 0.0001 to 3.086. When I plot, the written values are 0.001, 0.050, 1.000 ..., but how can I write the minimum and maximum values on the graph, with all decimals (I don't want to use the format 1e-0x)? I am using a log scale. For example, if I have the values: 0.0001 0.0015 0.0256 0.0236 .... 0.0201 2.9668 3.0086 I need to have each of these values put on the y axis,
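[Editor's note: a minimal sketch of one way to do this in base graphics. The vector y reuses the values quoted in the post; what goes on the x axis is an assumption.]

y <- c(0.0001, 0.0015, 0.0256, 0.0236, 0.0201, 2.9668, 3.0086)
plot(seq_along(y), y, log = "y", yaxt = "n", xlab = "index", ylab = "value")
## label the y axis at the data values themselves, with all decimals
axis(2, at = y, labels = format(y, scientific = FALSE), las = 2, cex.axis = 0.7)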
2024 Jul 12
1
grep
On 12.07.2024 10:54, Steven Yen wrote: > Below is part of a regression printout. How can I use "grep" to identify > rows headed by variables (first column) with a certain label? In this > case, I would like to find variables containing "somewhath", > "veryh", "somewhatm", "verym", "somewhatc", "veryc", "somewhatl",
2010 Mar 17
1
constrOptim - error: initial value not feasible
Hello all, working with a dataset I am trying to optimize a non-linear function with a constraint.
test <- read.csv2("C:/Users/Herb/Desktop/Opti/NORM.csv")
fkt <- function(x){
  a <- c(0)
  s <- c(0)
  # Minimizing square error
  for(j in 1:107){
    s <- (test[j,2] - (x[1] * test[j,3]) - (x[2] * test[j,4]) - (x[3]*test[j,5]) - (x[4]*test[j,6]) - (x[5]*test[j,7]))^2
    a <- a + s
  }
  a <- as.double(a)
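[Editor's note: the post is truncated, so the sketch below is only an illustration of how the pieces usually fit together. Only the objective comes from the post; the non-negativity constraint and starting value are made up. constrOptim() requires the starting value to satisfy ui %*% theta - ci > 0 strictly, which is the usual cause of the "initial value not feasible" error.]

## vectorised version of the objective: residual sum of squares
fkt <- function(x) sum((test[, 2] - as.matrix(test[, 3:7]) %*% x)^2)

ui <- diag(5)           # illustrative constraint: x[i] >= 0 for all i
ci <- rep(0, 5)
theta0 <- rep(0.1, 5)   # strictly inside the feasible region
res <- constrOptim(theta0, fkt, grad = NULL, ui = ui, ci = ci)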
2009 Apr 09
1
arima on defined lags
Dear all, The standard call to ARIMA in the base package, such as arima(y, c(5,0,0), include.mean=FALSE), gives a full 5th-order lag polynomial model with, for example, coefficients
Coefficients:
         ar1    ar2      ar3     ar4      ar5
      0.4715  0.067  -0.1772  0.0256  -0.2550
s.e.  0.1421  0.158   0.1569  0.1602   0.1469
Is it possible (I doubt it but am
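[Editor's note: the snippet is cut off, but the question the subject line points to -- estimating only selected lags -- can be handled with the fixed argument of arima(). A sketch; which lags to keep is made up for illustration.]

## NA = estimate, 0 = constrain to zero; transform.pars must be FALSE
## whenever AR terms are fixed this way.
fit <- arima(y, order = c(5, 0, 0), include.mean = FALSE,
             fixed = c(NA, 0, 0, NA, NA), transform.pars = FALSE)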
2024 Jul 12
1
grep
Could not get "which" to work, but my grep worked. Thanks. > which(grep("very|somewhat",names(goprobit.p$est))) Error in which(grep("very|somewhat", names(goprobit.p$est))) : argument to 'which' is not logical > grep("very|somewhat",names(goprobit.p$est)) [1] 6 7 8 9 10 11 12 13 28 29 30 31 32 33 34 35 50 51 52 53 54 55 56 57 On 7/12/2024
2011 Apr 09
2
[LLVMdev] dragonegg/llvm-gfortran/gfortran benchmarks
With the case-insensitive file system patch from http://llvm.org/bugs/show_bug.cgi?id=9656#c15 applied to dragonegg 2.9, the following Polyhedron 2005 benchmarks are seen on x86_64-apple-darwin10 under gcc 4.5.3svn using the dragonegg plugin... ================================================================================ Date & Time : 8 Apr 2011 19:52:56 Test Name :
2008 Mar 02
0
coxpath() in package glmpath
Hi, I am new to model selection by coefficient shrinkage methods such as the lasso, and I have become particularly interested in variable selection in Cox regression by lasso. I became aware that coxpath() in the R package glmpath does lasso on the Cox model. I have tried the sample script on the help page of coxpath(), but I have a difficult time understanding the output. Therefore, I would greatly appreciate it if
2017 Oct 31
0
Final models from caret's train function
Using caret on the Titanic data from Kaggle, I tried various models, including rfRules, which produces a model, partly described as such:
> caret.rfRules.cv$finalModel
$model
     len  freq     err
[1,] "2"  "0.0368" "0"
[2,] "2"  "0.032"  "0.05"
[3,] "2"
2024 Jul 12
0
grep
Now I've found another way to make it work. All I need is to pick up the names in the column (x.1.age...).
> v <- pr(goprobit.p); v
Maximum-Likelihood Estimates
weighted = FALSE
iterations = 5
logLik = -14160.75
finalHessian = TRUE
Covariance matrix is Robust
Number of parameters = 66
Sample size = 17922
             est     se      t      p           g sig
x.1.age   0.0341 0.0138 2.4766 0.0133 -3.8835e-04  **
x.1.sleep
2024 Jul 14
0
grep
Yes. Any of the following worked. The pipe greater-than (|>) is neat! Thanks.
> v <- goprobit.p$est
> names(v) |> grep("somewhat|very", x = _)
 [1]  6  7  8  9 10 11 12 13 28 29 30 31 32 33 34 35 50 51 52 53 54 55 56 57
> v |> names() |> grep("somewhat|very", x = _)
 [1]  6  7  8  9 10 11 12 13 28 29 30 31 32 33 34 35 50 51 52 53 54 55 56 57
>
2011 Jul 24
2
[LLVMdev] [llvm-testresults] bwilson__llvm-gcc_PROD__i386 nightly tester results
A big compile time regression. Any ideas? Ciao, Duncan. On 22/07/11 19:13, llvm-testresults at cs.uiuc.edu wrote: > > bwilson__llvm-gcc_PROD__i386 nightly tester results > > URL http://llvm.org/perf/db_default/simple/nts/253/ > Nickname bwilson__llvm-gcc_PROD__i386:4 > Name curlew.apple.com > > Run ID Order Start Time End Time > Current 253 0 2011-07-22 16:22:04
2005 May 26
0
Confidence intervals for prediction based on the logistic equation
Greetings, We are performing a meta-analysis of mink pup survival data versus chemical concentration. We have modeled percent survival successfully using nls as shown below and the plot. What we need to do is construct a confidence interval on the concentration at which we get 50% survival (aka the EC50, although we may want other percent survivals in the future). My first question is, what seems
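[Editor's note: a minimal sketch of one common route to such an interval. The data frame, column names, logistic parameterization, and starting values are all assumptions, since the actual nls call is not shown: reparameterize so that the log-EC50 is itself a parameter, then build the interval from its standard error.]

## percent survival modelled as a declining logistic in log concentration;
## lec50 is log(EC50), so 50% of Asym survival occurs at conc = exp(lec50)
fit <- nls(survival ~ Asym / (1 + exp((log(conc) - lec50) / scal)),
           data = mink, start = list(Asym = 100, lec50 = 0, scal = 0.5))
est <- summary(fit)$coefficients["lec50", ]
exp(est["Estimate"] + c(-1, 1) * qnorm(0.975) * est["Std. Error"])  # Wald CI for EC50
## confint(fit) gives profile-likelihood intervals as an alternative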
2011 Apr 09
0
[LLVMdev] dragonegg/llvm-gfortran/gfortran benchmarks
Hi Jack, thanks for the numbers. Any chance of analysing why gcc does better on those where it does much better than dragonegg? Ciao, Duncan. > With the case-insensitive file system patch from http://llvm.org/bugs/show_bug.cgi?id=9656#c15 > applied to dragonegg 2.9, the following Polyhedron 2005 benchmarks are seen on x86_64-apple-darwin10 > under gcc 4.5.3svn using the dragonegg
2013 Feb 01
29
cumulative sum by group and under some criteria
Thank you very much for your reply. Your code works well with this example. I modified it a little to fit my real data, and I got an error message. Error in split.default(x = seq_len(nrow(x)), f = f, drop = drop, ...) : Group length is 0 but data length > 0 On Thu, Jan 31, 2013 at 12:21 PM, arun kirshna [via R] < ml-node+s789695n4657196h87@n4.nabble.com> wrote: > Hi, > Try this: >
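[Editor's note: that error appears whenever the grouping variable handed to split() ends up with length zero while the data do not, e.g. when a grouping column name is misspelled. A tiny illustration; the data frame below is made up.]

x <- data.frame(a = 1:3)
split(x, x$b)   # x$b is NULL, so the grouping factor has length 0
## Error in split.default(x = seq_len(nrow(x)), f = f, drop = drop, ...) :
##   group length is 0 but data length > 0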
2011 Jul 24
0
[LLVMdev] [llvm-testresults] bwilson__llvm-gcc_PROD__i386 nightly tester results
On Jul 24, 2011, at 3:02 AM, Duncan Sands wrote: > A big compile time regression. Any ideas? > > Ciao, Duncan. False alarm. For some reason that I have not yet been able to figure out, these tests run significantly more slowly when I run them during the daytime, which I did for that run. I checked a few of the worst regressions reported here and they all recovered in subsequent
2012 Aug 22
1
Error in if (n > 0)
I've searched the Web with Google and have not found what might cause this particular error from an invocation of cenboxplot:
cenboxplot(cu.t$quant, cu.t$ceneq1, cu.t$era, range=1.5, main='Total Recoverable Copper', ylab='Concentration (mg/L)', xlab='Time Period')
Error in if (n > 0) (1L:n - a)/(n + 1 - 2 * a) else numeric() : argument is of length zero
I do
2006 Jul 17
1
sem: negative parameter variances
Dear Spencer and Prof. Fox, Thank you for your replies. I would very much appreciate it if you have any ideas concerning the problem described below. First, I'd like to describe the model in brief. In general I consider a model with three equations. The first one is for annual GRP growth; in general it looks like: 1) GRP growth per capita = G(investment, migration, initial GRP per
2012 Nov 23
2
[LLVMdev] [cfe-dev] costing optimisations
On 23.11.2012, at 15:12, john skaller <skaller at users.sourceforge.net> wrote: > > On 23/11/2012, at 5:46 PM, Sean Silva wrote: > >> Adding LLVMdev, since this is intimately related to the optimization passes. >> >>> I think this is roughly because some function level optimisations are >>> worse than O(N) in the number of instructions. >>