similar to: optim and singularity

Displaying 20 results from an estimated 3000 matches similar to: "optim and singularity"

2010 Dec 22
0
adjust secondary y-axis bounds to minimize visual residuals
Hello, I'm plotting two sets of data, each referenced to either the left or the right y-axis. The first, water table depth (blue circles), is plotted on the left y-axis in reverse order (0 at the top), as this is more intuitive when thinking in terms of depth. The second is electrical conductance (a surrogate for salinity) and is referenced to the right y-axis. The data and plot commands follow.
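A minimal sketch of the layout described above, assuming hypothetical vectors `date`, `wtd` (water table depth), and `ec` (electrical conductance); it is not the poster's code:

par(mar = c(5, 4, 4, 4) + 0.1)            # leave room for the right-hand axis
plot(date, wtd, col = "blue",
     ylim = rev(range(wtd)),              # reversed scale: depth 0 at the top
     xlab = "Date", ylab = "Water table depth")
par(new = TRUE)                           # overlay the second series
plot(date, ec, pch = 2, axes = FALSE, xlab = "", ylab = "")
axis(side = 4)                            # conductance scale on the right
mtext("Electrical conductance", side = 4, line = 3)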
2011 Jan 19
1
Using subset to filter data table
I am having difficulty understanding how I would constrain a data set by filtering out 'records' based on certain criteria. Using SQL I could query with 'select * from my.data where LithClass in ('sand', 'clay')' or some such. Using subset, there seem to be ghosts left behind (that is, all of the LithClass *.Labels* remain after subset).
> dput(tcc)
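For reference, a minimal sketch of the SQL-style filter, assuming `tcc` has a factor column LithClass as in the post; the "ghosts" are unused factor levels, which droplevels() discards:

sub <- subset(tcc, LithClass %in% c("sand", "clay"))
sub$LithClass <- droplevels(sub$LithClass)   # drop the now-empty levels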
2010 Feb 17
2
extract the data that match
Hi R-users, I would like to extract the data that match. Attached is my data: I'm interested in matching the value in column 'intg' with the value in column 'rand_no'.
> cbind(z=z, intg=dd, rand_no=rr)
         z  intg rand_no
 [1,] 0.00 0.000   0.001
 [2,] 0.01 0.000   0.002
 [3,] 0.02 0.000   0.002
 [4,] 0.03 0.000   0.003
 [5,] 0.04 0.000   0.003
 [6,]
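One hedged reading of "match" here is an inverse-CDF lookup: for each rand_no, find the first row whose cumulative value intg reaches it. A sketch assuming dd (intg) is nondecreasing and rr is rand_no, as in the post:

idx <- findInterval(rr, dd) + 1        # first index with dd >= rr
matched <- cbind(z = z[idx], intg = dd[idx], rand_no = rr)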
2010 Mar 20
2
different forms of nls recommendations
Hello, using this data: http://n4.nabble.com/file/n1676330/US_Final_Values.txt and the following code, I got the image at the end of this message:
US.final.values <- read.table("c:/tmp/US_Final_Values.txt", header = TRUE, sep = " ")
US.nls.1 <- nls(ECe ~ a * WTD^b + c, data = US.final.values,
                start = list(a = 2.75, b = -0.95, c = 0.731), trace = TRUE)
2011 Apr 20
2
survexp with weights
Hello, I probably have a syntax error in trying to generate an expected survival curve from a weighted Cox model, but I can't see it. I used the help sample code to generate a weighted model, with the addition of a "weights=albumin" argument (I only chose albumin because it had no missing values, not because of any real relevance). Below is my code with the resulting error
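A minimal sketch of a weighted fit plus expected survival, assuming the survival package's pbc data as in the help example; the details are guesses, not the poster's exact code:

library(survival)
fit <- coxph(Surv(time, status == 2) ~ age + albumin, data = pbc,
             weights = albumin)          # albumin as weights, as in the post
es  <- survexp(~ 1, data = pbc, ratetable = fit)   # expected survival curve
plot(es)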
2006 Mar 16
4
problem for wtd.quantile()
Dear R-users, I don't know if there is a problem in wtd.quantile (from library "Hmisc"):
x <- c(1,2,3,4,5)
w <- c(0.5,0.4,0.3,0.2,0.1)
wtd.quantile(x, weights=w)
The output is:
  0%  25%  50%  75% 100%
3.00 3.25 3.50 3.75 4.00
The version of R I am using is 2.1.0. Best, Jing
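The surprise comes from wtd.quantile() reading the weights as frequency counts by default; a hedged fix is to renormalize them to sum to length(x):

library(Hmisc)
wtd.quantile(x, weights = w, normwt = TRUE)   # treat w as sampling weights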
2012 Jul 24
1
Function for ddply
Hello, all. I'm new to R and just beginning to learn to write functions. I know I'm out of my depth posting here, and I'm sure my issue is mundane. But here goes. I'm analyzing the American National Election Study (nes), looking at mean values of a numeric dep_var (environ.therm) across values of a factor (partyid3). I use ddply from plyr and wtd.mean from Hmisc. The nes requires a
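A hedged sketch of the grouped weighted mean described above; the weight column name is an assumption:

library(plyr)
library(Hmisc)
ddply(nes, .(partyid3), summarise,
      environ.mean = wtd.mean(environ.therm, weights = weight))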
2007 May 31
3
Problem with Weighted Variance in Hmisc
The function wtd.var(x,w) in Hmisc calculates the weighted variance of x, where w are the weights. I expected wtd.var(x,w) to equal var(x) when all of the weights are equal, but this does not appear to be the case. Can someone point out to me where I am going wrong here? Thanks. Tom La Bone
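A hedged illustration of what is going on: by default wtd.var() treats w as frequency counts, so equal non-integer weights change the effective n; normwt = TRUE restores the match with var():

library(Hmisc)
x <- rnorm(10)
w <- rep(0.5, 10)
var(x)
wtd.var(x, w)                 # differs: weights read as counts summing to 5
wtd.var(x, w, normwt = TRUE)  # agrees with var(x)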
2012 Apr 20
1
pasting a formula string with double quotes in it
Hello everyone, I have tried several ways of doing this and searched the documentation and help lists, and I have been unable to find an answer, or even whether it is possible. I am pasting together a formula and I need to insert double quotes around the strings. Here's an example:
location <- c("AL", "AK", "MA", "PA")
v <- 2
test <-
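A hedged sketch of one way to build such a string: escaping with \" puts literal double quotes inside it (shQuote() with type = "cmd" is an alternative):

location <- c("AL", "AK", "MA", "PA")
test <- paste0("c(\"", paste(location, collapse = "\", \""), "\")")
test
# [1] "c(\"AL\", \"AK\", \"MA\", \"PA\")"
cat(test, "\n")   # prints: c("AL", "AK", "MA", "PA")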
2009 Jun 23
3
subset POSIXct
Hi, I have a data frame with two columns: dt and tf. The dt column is a datetime and the tf column is a temperature.
                   dt tf
1 2009-06-20 00:53:00 73
2 2009-06-20 01:08:00 73
3 2009-06-20 01:44:00 72
4 2009-06-20 01:53:00 71
5 2009-06-20 02:07:00 72
...
I need a subset of the rows where the minutes are 53. The hour is immaterial. I cannot find a wildcard
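No wildcard is needed: pull the minute field out of the datetime and compare. A minimal sketch, assuming the data frame is called df and dt is POSIXct:

subset(df, format(dt, "%M") == "53")     # minutes as a string
# or, equivalently:
df[as.POSIXlt(df$dt)$min == 53, ]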
2007 Jul 23
1
replacing double for loops with apply's
Hi, I am doing double for loops to calculate SDs with some weights, and wondering if I can get rid of the outer for loop as well. I made a simple example which is essentially what I am doing. Thanks for your help! -Young
#------------------------------------------------------
# wtd.var is in the Hmisc package
# you can replace the 3 lines inside the for loop as
# sdx[i,] =
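A hedged sketch of dropping the outer loop, assuming a matrix x whose rows each need a weighted SD against a common weight vector w:

library(Hmisc)
sdx <- apply(x, 1, function(row) sqrt(wtd.var(row, weights = w)))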
2010 Apr 16
0
Blocking and Nested ANOVA Design. Am I using the aov() function correctly?
Dear list members, I am a new member and fairly new to the R world! I hope what I have is not beyond the purpose of this list; I did first search for similar experimental designs, without success. I want to perform an ANOVA analysis using the aov() function, but I am not 100% sure that I have it right. If anyone can help me, that will be greatly appreciated. My design is not balanced for any of the
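For a blocked design with nesting, the usual aov() idiom puts the block in an Error() stratum; a hedged sketch with invented names, not the poster's design:

fit <- aov(response ~ treatment + Error(block/subplot), data = mydata)
summary(fit)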
2009 Jan 19
1
conditional weighted quintiles
Dear All, I am an economist working on poverty and income inequality. I need descriptive statistics like the ratio of education expenditures between different income quintiles, where each household has a different weight. After a bit of Google searching I found the 'Hmisc' and 'quantreg' libraries for weighted quantiles. The problem is that these packages give me only weighted quintiles;
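A hedged sketch of conditional (group-wise) weighted statistics: cut at weighted quintile break points, then summarise within each group. The column names (income, weight, educ.exp) are assumptions:

library(Hmisc)
brk <- wtd.quantile(hh$income, weights = hh$weight, probs = seq(0, 1, 0.2))
hh$quint <- cut(hh$income, breaks = brk, include.lowest = TRUE)
sapply(split(hh, hh$quint),
       function(d) wtd.mean(d$educ.exp, weights = d$weight))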
2008 Jan 07
2
How should I improve the following R code?
I'm looking for a way to improve code that's proven to be inefficient. Suppose that a data source generates the following table every minute:
Index  Count
    0    234
    1    120
    7     11
   30      1
I save the tables in the following CSV format:
time,index,count
0,0:1:7:30,234:120:11:1
1,0:2:3:19,199:110:87:9
That is, each line represents a table, and I
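A hedged sketch of parsing one such line back into a table with strsplit() rather than looping over fields; the layout follows the post:

line  <- "0,0:1:7:30,234:120:11:1"
parts <- strsplit(line, ",")[[1]]
data.frame(time  = as.integer(parts[1]),
           index = as.integer(strsplit(parts[2], ":")[[1]]),
           count = as.integer(strsplit(parts[3], ":")[[1]]))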
2012 Mar 06
1
How to eliminate for next loops in this script
I needed to compute a complicated cross tabulation to show weighted means and standard deviations, and the only method I could get to work uses a series of nested for loops. I know that there must be a better way, but could use some assistance pointing the way. Here is my working, but inefficient, script:
library(Hmisc)
rm(list=ls())
load('NHTS.Rdata')
day.wt <-
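A hedged sketch of a loop-free weighted cross tabulation using the index-splitting trick; the variable names are invented stand-ins, not the NHTS columns:

library(Hmisc)
with(nhts, tapply(seq_along(value), list(rowfac, colfac),
                  function(i) wtd.mean(value[i], weights = day.wt[i])))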
2006 Jan 12
2
tapply and weighted means
I'm trying to compute weighted means on different groups but it only returns NA. I use the following data.frame, truc:
x y w
1 1 1
1 2 2
1 3 1
1 4 2
0 2 1
0 3 2
0 4 1
0 5 1
where x is a factor. I then use the command:
tapply(truc$y, list(truc$x), wtd.mean, weights=truc$w)
and I just get NA. What's the problem? What can I do?
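The NAs arise because tapply() splits y by group but hands the full, unsplit weight vector to every call. A hedged fix is to split the whole data frame instead:

library(Hmisc)
sapply(split(truc, truc$x),
       function(d) wtd.mean(d$y, weights = d$w))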
2017 Nov 24
2
number to volume weighted distribution
Hi Duncan, I tried Ecdf and/or wtd.quantile from Hmisc and it is working (probably):
Ecdf(x, q=.5)
Ecdf(x, weights=xw, col=2, add=T, q=.5)
wtd.quantile(x)
  0%  25%  50%  75% 100%
  10   10   10  100  300
wtd.quantile(x, weights=xw, type="i/n")
      0%      25%      50%      75%     100%
 10.0000 138.8667 192.5778 246.2889 300.0000
But could you please be more specific in this?
2017 Nov 24
0
number to volume weighted distribution
Hi Petr, I think that Duncan suggests something like this:
x <- c(rep(10,20), rep(300,5), rep(100, 10))
tx <- table(x)
prop.x <- tx / sum(tx)
vx <- as.integer(names(tx))
prop.wx <- tx * vx / sum(tx * vx)
plot(ecdf(x))
plot(vx, cumsum(prop.x), ylim = 0:1)
plot(vx, cumsum(prop.wx), ylim = 0:1)
Best regards, Thierry
2008 Mar 20
3
Problem with diff(strptime(...
Hi all, I have been chipping away at a problem I encountered in calculating rates per year from a moderately large data file (46412 rows). When I ran the following command, I got obviously wrong output:
interval <- c(NA, as.numeric(diff(strptime(mkdf$MEAS_DATE, "%d/%m/%Y"))) / 365.25)
The values in MEAS_DATE looked like this:
> mkdf$MEAS_DATE[1:10]
 [1] 1/5/1962 1/5/1963
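The wrong values typically come from diff() on a date-time object choosing its own units (hours or days, depending on the data), while the /365.25 assumes days. A hedged fix is to force the units with difftime():

d <- as.POSIXct(strptime(mkdf$MEAS_DATE, "%d/%m/%Y"))
interval <- c(NA, as.numeric(difftime(d[-1], d[-length(d)],
                                      units = "days")) / 365.25)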
2008 Nov 24
1
weighted ftable
I need to do some fairly deep tables, and ftable() offers most of what I need, except for the weighting. With smaller samples, I've just used replicate to let me have a weighted data set, but with this data set, I'm afraid replicate is going to make my data set too big for R to handle comfortably. That being said, is there some way to weight my table (similar to wtd.table) but offer the
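A hedged sketch of one way to get a weighted ftable without replicating rows: xtabs() accepts a weight variable on the left-hand side of its formula, and the result feeds straight into ftable(). The variable names are invented:

ftable(xtabs(wt ~ region + sex + agegrp, data = df))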