search for: downweighting

Displaying 16 results from an estimated 16 matches for "downweighting".

2011 Nov 07
2
ordination in vegan: what does downweight() do?
Can anyone point me in the right direction for figuring out what downweight() is doing? I am using vegan to perform CCA on diatom assemblage data. I have a lot of rare species, so I want to reduce their influence in my CCA. I have read that some authors reduce rare species by only including species with an abundance of at least 1% in at least one sample (other authors use 5% as a
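A minimal sketch of the two approaches mentioned, assuming hypothetical data frames 'diatoms' (species abundances) and 'env' (environmental variables): vegan's downweight() reduces the abundances of rare species before ordination, while the manual alternative simply drops species that never reach 1% relative abundance.

  library(vegan)
  dw  <- downweight(diatoms)               # Hill's downweighting of rare species
  mod <- cca(dw ~ ., data = env)

  ## manual filter: keep species reaching at least 1% in at least one sample
  rel  <- sweep(diatoms, 1, rowSums(diatoms), "/") * 100
  keep <- apply(rel, 2, max) >= 1
  mod2 <- cca(diatoms[, keep] ~ ., data = env)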
2008 Apr 22
0
Downweighting of cases in GLM
...about it. I guess the weights argument in the glm function does not do what I intend? Might the survey package be a solution? And if so, do I ignore all the arguments except weights when specifying the svydesign? Thanks a lot for your help! Eva -- View this message in context: http://www.nabble.com/Downweighting-of-cases-in-GLM-tp16823770p16823770.html Sent from the R help mailing list archive at Nabble.com.
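A hedged sketch of the two routes usually contrasted here, with a hypothetical data frame 'dat' containing a response y, a predictor x and a weight column w: glm() treats weights as prior/frequency weights, whereas the survey package treats them as sampling (design) weights and adjusts the standard errors accordingly.

  fit1 <- glm(y ~ x, family = binomial, data = dat, weights = w)

  library(survey)
  des  <- svydesign(ids = ~1, weights = ~w, data = dat)   # no clusters or strata assumed
  fit2 <- svyglm(y ~ x, design = des, family = quasibinomial())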
2011 Mar 28
1
ordination in vegan
Hi all, I have site data with plant species cover and am looking for trends. I'm kind of new to this, but have done lots of reading and can't find an answer. I tried decorana (I know it's been replaced by ca.) and see a trend, but I'm not sure what it means. Is there a way to get the loadings/eigenvectors of the axes (like in PCA)? Is there a way to do this with rda() too? How
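A minimal sketch of pulling axis scores out of vegan ordinations, assuming a hypothetical species matrix 'veg'; note that vegan returns scaled scores rather than raw eigenvectors.

  library(vegan)
  dca <- decorana(veg)
  scores(dca, display = "species")   # species scores on the DCA axes
  pca <- rda(veg, scale = TRUE)
  scores(pca, display = "species")   # the analogue of PCA loadings
  eigenvals(pca)                     # eigenvalues of the axes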
2006 Apr 06
5
pros and cons of "robust regression"? (i.e. rlm vs lm)
Can anyone comment or point me to a discussion of the pros and cons of robust regressions, vs. a more "manual" approach to trimming outliers and/or "normalizing" data used in regression analysis?
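A small illustration with the stackloss data (used on the rlm() help page): rlm() downweights outlying observations continuously via M-estimation instead of dropping them, and the final case weights show which points were effectively trimmed.

  library(MASS)
  fit_ols <- lm(stack.loss ~ ., data = stackloss)
  fit_rob <- rlm(stack.loss ~ ., data = stackloss)   # Huber M-estimation by default
  cbind(ols = coef(fit_ols), robust = coef(fit_rob))
  round(fit_rob$w, 2)                                # small weights = downweighted observations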
2012 Dec 17
2
How to get transparent colors to sum to complete opacity?
Dear List, I want to use transparency in R to represent downweighting of observations based on clusters (repeated observations in a dataset). Some clusters will have identical covariate values in a parameter space -- in the 2D x,y case, these represent a bunch of semi-transparent dots in the same place. I'd like these overlapping dots to be completely opaqu...
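One hedged sketch of the arithmetic: with per-point alpha a, a stack of k identical semi-transparent points has combined opacity 1 - (1 - a)^k, so a can be solved for a chosen stack size; the cluster size k below is hypothetical.

  k <- 10                           # assumed maximum number of overlapping points
  a <- 1 - (1 - 0.99)^(1 / k)       # per-point alpha so k stacked points are ~99% opaque
  x <- c(rnorm(50), rep(0, k))      # k points piled at the origin
  y <- c(rnorm(50), rep(0, k))
  plot(x, y, pch = 16, cex = 2, col = rgb(0, 0, 1, alpha = a))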
2002 Oct 09
1
s.window in stl()
Hi, This is actually a theory question. I'm a bit confused by the s.window parameter in the stl() function (which is in the ts package). For example, in the stl documentation it uses the nottem data, and then: plot(stl(nottem, s.win = 4, t.win = 50, t.jump = 1)) What does it mean by s.win = 4? Is it because a year has 4 seasons (namely Spring, Summer, Autumn and Winter)? If so will it
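For what it is worth, s.window is not the number of seasons: it is the span of the loess window used to smooth each seasonal sub-series (in lags of that sub-series, i.e. years for monthly data), and it should be odd, or the string "periodic" for a seasonal pattern held constant over time. A sketch with the same data:

  plot(stl(nottem, s.window = "periodic"))         # fixed seasonal pattern
  plot(stl(nottem, s.window = 7, t.window = 51))   # odd spans; seasonal component evolves slowly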
2004 Mar 18
2
logistic regression with temporal correlation
Hello We would like to perform a logistic regression analysis weighting the observations in a temporal fashion, i.e. events occurring most recently get the highest weight. Does anyone know how to do this in R? Regards S. Merser and S. Lophaven
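One hedged way to do this with glm(), assuming a hypothetical data frame 'dat' with a 0/1 response 'event', predictors x1 and x2, and a time column where larger values are more recent; the decay rate is arbitrary. quasibinomial avoids the warning binomial gives for non-integer weights without changing the coefficient estimates.

  dat$w <- exp(-0.05 * (max(dat$time) - dat$time))   # most recent observations get weight 1
  fit <- glm(event ~ x1 + x2, family = quasibinomial, data = dat, weights = w)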
2005 Jul 27
2
GAM weights
Dear all, we are trying to model some data from rare plants, so we always have fewer than 50 1x1 km presences, and the total area is about 550,000 square km. So we have a real problem when we perform a GAM if we consider only the same number of absences as presences. We have thought of using a greater number of absences, but in this case we should downweight them. Does anybody know how to use the
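A sketch of the usual weighting, assuming an mgcv GAM and a hypothetical data frame 'dat' with a 0/1 presence column 'pres': give each absence a weight such that absences and presences contribute equally in total (a warning about non-integer weights under the binomial family can be ignored here).

  library(mgcv)
  n1 <- sum(dat$pres == 1)
  n0 <- sum(dat$pres == 0)
  dat$w <- ifelse(dat$pres == 1, 1, n1 / n0)   # downweight the (many) absences
  fit <- gam(pres ~ s(x1) + s(x2), family = binomial, weights = w, data = dat)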
2009 Feb 08
0
library vegan - cca - versus CANOCO
..."good for my eyes"; however, when I did summary(CCA1), the first two axes accounted for 0.04 on CCA1 and similar on CCA2... so the variation explained by each axis was small. On the other hand, when I performed CCA in CANOCO, without selecting the option to log-transform the data matrix and without downweighting rare species, the results were the opposite of the CCA performed in R. The axes accounted for a high percentage of the variation and the model was also significant, but the plot made little sense. Thank you very much!
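A hedged sketch of mimicking those two CANOCO options in R before calling cca(), with hypothetical 'spe' and 'env'; note that vegan's summary() reports proportions of total inertia, which are often small even for a reasonable model, so the percentages are not directly comparable with CANOCO's output without checking what each is a proportion of.

  library(vegan)
  spe_log <- log1p(spe)             # log(x + 1) transformation
  spe_dw  <- downweight(spe_log)    # downweighting of rare species
  mod <- cca(spe_dw ~ ., data = env)
  summary(mod)                      # eigenvalues and proportions explained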
2009 Feb 19
1
Difference between GEE and Robust Cluster Standard Errors
Hello, I know that two possible approaches to dealing with clustered data would be GEE or a robust cluster covariance matrix from a standard regression. What are the differences between these two methods, or are they doing the same thing? Thanks. -- View this message in context: http://www.nabble.com/Difference-between-GEE-and-Robust-Cluster-Standard-Errors-tp22092897p22092897.html Sent from the
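In rough terms both give population-averaged coefficients with cluster-adjusted standard errors; GEE additionally specifies a working within-cluster correlation, which can improve efficiency. A hedged sketch with a hypothetical data frame 'dat' and cluster identifier column 'id':

  library(geepack)
  fit_gee <- geeglm(y ~ x, id = id, data = dat, family = gaussian,
                    corstr = "exchangeable")

  library(sandwich); library(lmtest)
  fit_lm <- lm(y ~ x, data = dat)
  coeftest(fit_lm, vcov = vcovCL(fit_lm, cluster = ~id))   # cluster-robust sandwich SEs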
1999 Aug 14
1
leaps and bounds
Dear friends. On the Bayesian averaging homepage http://www.research.att.com/~volinsky/bma.html I found some S code, some of which may perhaps run in R. There was a call to an algorithm possibly within S but not supported by R 0.64.1: "leaps and bounds". I guess it is a minimization step. Can anyone clarify the algorithm and perhaps even give a pointer to some code? I guess this may be
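The algorithm referred to is Furnival and Wilson's branch-and-bound ("leaps and bounds") search for best regression subsets; in current R it is available through the leaps package. A sketch with the built-in swiss data:

  library(leaps)
  fit <- regsubsets(Fertility ~ ., data = swiss, nbest = 2)   # exhaustive branch-and-bound search
  summary(fit)$which                                          # best subsets of each size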
2005 Apr 13
1
logistic regression weights problem
Hi All, I have a problem with weighted logistic regression. I have a number of SNPs and a case/control scenario, but not all genotypes are as "guaranteed" as others, so I am using weights to downsample the importance of individuals whose genotype has been heavily "inferred". My data is quite big, but with a dummy example: > status <- c(1,1,1,0,0) > SNPs <-
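Continuing the dummy example with made-up SNP values and weights (the original is truncated above): non-integer weights trigger a warning under family = binomial, which quasibinomial sidesteps without changing the coefficient estimates.

  status <- c(1, 1, 1, 0, 0)
  SNPs   <- c(0, 1, 2, 0, 1)        # hypothetical genotypes coded 0/1/2
  w      <- c(1, 0.6, 1, 1, 0.8)    # lower weight = more heavily inferred genotype
  fit <- glm(status ~ SNPs, family = quasibinomial, weights = w)
  summary(fit)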
2012 May 30
0
Survival with different probabilities of censoring
Dear all I have a fairly funky problem that I think demands some sort of survival analysis. There are two Red List assessments for mammals: 1986 and 2008. Some mammals changed their Red List status between those dates. Those changes can be regarded as "events" and are "interval censored" in the sense that we don't know at what point between 1986 and 2008 each species
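One hedged way to cast this with the survival package, using hypothetical toy data with time in years since the 1986 assessment: species whose status changed are only known to have "failed" somewhere before the 2008 assessment (lower bound NA, i.e. censored on the left at 22 years), while unchanged species are right-censored at 22 years.

  library(survival)
  dat <- data.frame(lower = c(NA, 22, NA, 22),    # NA lower = change occurred by 2008
                    upper = c(22, NA, 22, NA))    # NA upper = no change by 2008
  fit <- survreg(Surv(lower, upper, type = "interval2") ~ 1,
                 data = dat, dist = "exponential")
  summary(fit)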
2008 Dec 08
1
residual standard error in rlm (MASS package)
Hi, I would appreciate it if someone could explain how the residual standard error is computed for rlm models (MASS package). Usually, one would expect to get the residual standard error by > sqrt(sum((y-fitted(fm))^2)/(n-2)) where y is the response, fm a linear model with an intercept and slope for x, and n the number of observations. This does not seem to work for rlm models and I am wondering
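For reference, a sketch of where the printed value comes from: summary.rlm() reports the robust scale estimate stored in fit$s (by default a re-scaled MAD of the residuals), not the classical sqrt(RSS/(n - p)).

  library(MASS)
  fit <- rlm(stack.loss ~ ., data = stackloss)
  fit$s                                                 # robust scale, printed as the residual standard error
  sqrt(sum(residuals(fit)^2) / (nrow(stackloss) - 4))   # classical formula; generally differs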
2007 Sep 12
2
k-means clustering
Dear list, first, apologies as this is not strictly an R question but a theoretical one. I have read that use of k-means clustering assumes sphericity of the data distribution. Can anyone explain to me what this means? My statistical background is rather poor. Is it another kind of distribution, like Gaussian or binomial? What happens if the distribution is not spherical? Could you give me an
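Roughly, the "sphericity" is not about the marginal distribution of the data: k-means minimises within-cluster squared Euclidean distance, so it implicitly assumes compact, roughly round (isotropic) clusters of similar spread, and it will happily cut an elongated cluster in half. A small made-up illustration:

  set.seed(1)
  g1 <- cbind(rnorm(100, 0, 1), rnorm(100, 0, 6))   # two tall, elongated clusters
  g2 <- cbind(rnorm(100, 5, 1), rnorm(100, 0, 6))
  cl <- kmeans(rbind(g1, g2), centers = 2, nstart = 25)
  table(cl$cluster, rep(1:2, each = 100))           # the split need not match the true groups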
2006 May 20
1
(PR#8877) predict.lm does not have a weights argument for newdata
Dear R developers, I am a little disappointed that my bug report only made it to the wishlist, with the argument: "Well, it does not say it has. Only relevant to prediction intervals." predict.lm does calculate prediction intervals for linear models from weighted regression, so they should be correct, right? As far as I can see they are bound to be wrong in almost all cases, if no weights
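For context, current versions of R do have a weights argument in predict.lm(), taking either a numeric vector or a one-sided formula evaluated in newdata, so prediction intervals can reflect the variance weight of the new observation. A hedged sketch with made-up data:

  dat <- data.frame(x = 1:20, w = rep(1:2, 10))
  dat$y <- dat$x + rnorm(20, sd = 1 / sqrt(dat$w))
  fit <- lm(y ~ x, data = dat, weights = w)
  predict(fit, newdata = data.frame(x = 10, w = 2),
          weights = ~w, interval = "prediction")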