search for: seber

Displaying 13 results from an estimated 13 matches for "seber".

2008 Feb 27
1
dhyper, phyper (PR#10853)
...timation in Vertebrate Samples. They say that "calculation of precise confidence intervals for population sizes is, in principle at least, straightforward. It involves calculation of cumulative hypergeometric probabilities (i.e. the summation of probabilities given by equation 3.1 of Seber, 1973)." The reference is to G.A.F. Seber's book, The Estimation of Animal Abundance. I went to equation 3.1 and wrote a small function to sum its probabilities, modeled after phyper() and taking the arguments in the same order (the names have changed to suit the archaeological s...
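The cumulative summation described here — equation 3.1 of Seber (1973) — is exactly what R's phyper() computes. As an illustrative sketch (in Python rather than R, standard library only; the function names are my own, not from the post), the sum looks like:

```python
from math import comb

def hyper_pmf(k, m, n, N):
    """P(X = k): probability of k marked animals in a sample of n,
    drawn without replacement from N animals of which m are marked."""
    return comb(m, k) * comb(N - m, n - k) / comb(N, n)

def hyper_cdf(k, m, n, N):
    """P(X <= k): the cumulative sum that R's phyper(k, m, N - m, n) returns."""
    lo = max(0, n - (N - m))   # smallest feasible count of marked animals
    return sum(hyper_pmf(j, m, n, N) for j in range(lo, k + 1))
```

A confidence interval for the population size N then comes from inverting this CDF: scan candidate values of N and keep those for which the observed count does not fall in either tail.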
2003 Feb 25
1
Error analysis
Dear R experts, I fitted data (length == m) with an external library (it's a non-linear problem) and obtained the final parameter vector (length == n) and the Jacobian matrix (dim == c(m, n)). Now I want to analyse the standard errors of the parameters correctly. How can I do it? Where else can I read about it? Thanks! -- WBR, Timur.
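The standard recipe here (covered at length in Seber & Wild's Nonlinear Regression) is cov(θ̂) ≈ σ̂² (JᵀJ)⁻¹ with σ̂² = RSS/(m − n); the standard errors are the square roots of the diagonal. A minimal sketch of that arithmetic (Python, hard-coded to two parameters to stay dependency-free; in R one would just call vcov() on an nls fit):

```python
from math import sqrt

def std_errors(J, residuals, n_params):
    """Approximate standard errors from a nonlinear least-squares fit:
    cov(theta) ~ sigma^2 * (J'J)^{-1}, with sigma^2 = RSS / (m - n)."""
    m = len(residuals)
    rss = sum(r * r for r in residuals)
    sigma2 = rss / (m - n_params)          # residual variance estimate
    # entries of J'J for a 2-column Jacobian
    a = sum(row[0] * row[0] for row in J)
    b = sum(row[0] * row[1] for row in J)
    d = sum(row[1] * row[1] for row in J)
    det = a * d - b * b
    # diagonal of the inverse of [[a, b], [b, d]]
    return sqrt(sigma2 * d / det), sqrt(sigma2 * a / det)
```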
2010 Mar 05
0
R algorithm for maximum curvatures measures of nonlinear models
Hi all, I'm looking for the algorithm to calculate the maximum intrinsic and parameter-effects curvatures of Bates & Watts (1980). I have the original Bates & Watts (1980) article, the Bates et al. (1983) article, and the Seber & Wild (1989) and Ratkowsky (1983) books. Those sources show the steps and algorithms to get these measures, but I don't know how to translate the C code to R and have had no success so far. I know and use rms.curv() in the MASS package, but I would like the maximum curvature measures. Does someo...
2012 May 23
1
how a latent state matrix is updated using package R2WinBUGS
...and how a latent state matrix is updated by the MCMC iterations in a WinBUGS model, using the package R2WinBUGS and an example from Kery and Schaub's (2012) book, "Bayesian Population Analysis Using WinBUGS". The example I'm using is 7.3.1. from a chapter on the Cormack-Jolly-Seber model. Some excerpted code is included at the end of this message; the full code is available at http://www.vogelwarte.ch/downloads/files/publications/BPA/bpa-code.txt The latent state of individual i on occasion t is stored in the z matrix where rows index individuals (owls that are...
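For context, in the CJS model the z matrix is only partly latent: between an individual's first and last detection it must equal 1, so the sampler leaves those cells fixed and only updates the genuinely unknown states. A sketch of that "known state" fill (Python for illustration; Kery & Schaub's book supplies the R equivalent for WinBUGS):

```python
def known_states(capture_histories):
    """Between first and last detection z must be 1; elsewhere the latent
    state is unknown (None here) and left for the MCMC sampler to update."""
    out = []
    for ch in capture_histories:
        first = ch.index(1)                       # first occasion detected
        last = len(ch) - 1 - ch[::-1].index(1)    # last occasion detected
        row = [None] * len(ch)
        for t in range(first, last + 1):
            row[t] = 1   # certainly alive between first and last capture
        out.append(row)
    return out
```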
2003 Jun 13
5
covariate data errors
Greetings, I would like to fit a multiple linear regression model in which the residuals are expected to follow a multivariate normal distribution, using weighted least squares. I know that the data in question have biases that would result in correlated residuals, and I have a means for quantifying those biases as a covariance matrix. I cannot, unfortunately, correct the data for these biases.
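With a known residual covariance matrix Σ this is generalized least squares: β̂ = (XᵀΣ⁻¹X)⁻¹XᵀΣ⁻¹y (in R, e.g. gls() in nlme with a fixed correlation structure). A bare-bones sketch of that formula for a two-column X, taking W = Σ⁻¹ as given (Python, standard library only, purely illustrative):

```python
def gls_two_col(X, W, y):
    """Generalized least squares beta = (X'WX)^{-1} X'Wy for a 2-column X,
    where W is the inverse of the residual covariance matrix."""
    m, p = len(y), 2
    # X'W  (p x m), then A = X'WX (p x p) and b = X'Wy (p)
    XtW = [[sum(X[k][i] * W[k][j] for k in range(m)) for j in range(m)]
           for i in range(p)]
    A = [[sum(XtW[i][k] * X[k][j] for k in range(m)) for j in range(p)]
         for i in range(p)]
    b = [sum(XtW[i][k] * y[k] for k in range(m)) for i in range(p)]
    # closed-form solve of the 2x2 system A beta = b
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(A[1][1] * b[0] - A[0][1] * b[1]) / det,
            (-A[1][0] * b[0] + A[0][0] * b[1]) / det]
```

With W equal to the identity this reduces to ordinary least squares, which is a convenient sanity check.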
2005 Sep 18
0
How to test homogeneity of covariance matrices?
...ibe the outlines of specimens. I rather like to explore precisely these harmonic parameters. It is known that Box's M-test of homogeneity of variance-covariance matrices is oversensitive to heteroscedasticity and to deviation from multivariate normality, and that it is not useful (Everitt, 2005; Seber, 1984; Layard, 1974). I have tried a quick and dirty intuitive comparison between two covariance matrices and I am seeking the opinion of professional statisticians about this stuff. The idea is to compare the two matrices using the absolute value of their difference, then to make a quadratic...
2003 Nov 08
2
Effects of rounding on regression
Does anyone know of research on the effects of rounding on regression? e.g., when you ask people "How often have you _______?" you are more likely to get answers like 100, 200, etc. than 98, 203, etc. I'm interested in investigating this, but don't want to reinvent the wheel. thanks Peter Peter L. Flom, PhD Assistant Director, Statistics and Data Analysis Core Center for
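Before digging into the literature (the usual keyword is "heaping", or coarsened data), a quick Monte Carlo is a cheap first look at what rounding does to a slope estimate. A sketch (Python, standard library only; the model and the rounding grid are made up for illustration):

```python
import random

random.seed(42)
# hypothetical model: y = 2 + 0.5 * x + noise; respondents report x
# "heaped" to the nearest 100 (answers like 100, 200, ...)
n = 500
x_true = [random.uniform(0, 1000) for _ in range(n)]
y = [2 + 0.5 * x + random.gauss(0, 10) for x in x_true]
x_heaped = [round(x / 100) * 100 for x in x_true]

def ols_slope(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))

# compare the slope fitted on the exact predictor vs the heaped one
slope_exact = ols_slope(x_true, y)
slope_heaped = ols_slope(x_heaped, y)
```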
2007 Nov 20
1
How is the Gauss-Newton method compared to Levenberg-Marquardt for curve-fitting?
Hi, It seems to me that the most suitable method in R for curve-fitting is the use of nls, which uses a Gauss-Newton (GN) algorithm, while the use of the Levenberg-Marquardt (LM) algorithm does not seem to be very stressed in R. According to this [1] by Ripley, 'Levenberg-Marquardt is hardly competitive these days', which could explain the low emphasis on LM in R. The position of LM is, to
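For context, the two algorithms differ only by a damping term: Gauss-Newton solves (JᵀJ)δ = −Jᵀr, while LM solves (JᵀJ + λI)δ = −Jᵀr and adapts λ (large λ behaves like gradient descent, small λ like GN). A one-parameter toy sketch of the LM loop (Python for brevity; in R one would reach for nls() or the minpack.lm package):

```python
from math import exp

# toy data generated from y = exp(0.3 * x); we fit the single parameter b
xs = [0, 1, 2, 3, 4]
ys = [exp(0.3 * x) for x in xs]

def lm_fit(b0, lam=1e-3, iters=50):
    b = b0
    for _ in range(iters):
        r = [exp(b * x) - y for x, y in zip(xs, ys)]   # residuals
        J = [x * exp(b * x) for x in xs]               # d r_i / d b
        g = sum(j * ri for j, ri in zip(J, r))         # J'r
        h = sum(j * j for j in J)                      # J'J (a scalar here)
        step = -g / (h + lam)                          # damped Gauss-Newton step
        sse_old = sum(ri * ri for ri in r)
        sse_new = sum((exp((b + step) * x) - y) ** 2 for x, y in zip(xs, ys))
        if sse_new < sse_old:
            b += step
            lam *= 0.5   # success: trust the quadratic model more (toward GN)
        else:
            lam *= 2.0   # failure: damp harder (toward gradient descent)
    return b
```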
2001 Jun 11
1
Additional output in cancor
Hi everyone, Can I suggest an additional output component in cancor, from package mva? It would be useful to have the number of canonical correlation vectors, equivalently the rank of the covariance between x and y (label "rank"). This would usually be min(dx, dy), where dx and dy have already been computed for the svd function, but there might be situations where it was less than
2000 Jul 26
3
Correlation matrices
Hello, are there any good methods in R that will test if two correlation matrices (obtained in different ways) are equal? Something better than the Mantel test would be preferable. Regards, Patrik Waldmann
2003 Oct 30
3
Change in 'solve' for r-patched
...other results from the least squares calculation, such as fitted values or residuals, you may want to save qr(X) so you can reuse it qrX <- qr(X) betahat <- qr.coef(qrX, y) res <- qr.resid(qrX, y) ... There are alternatives but solve(t(X) %*% X) %*% t(X) %*% y is never a good one. Seber and Lee discuss such calculations at length in chapter 11 of their "Linear Regression Analysis (2nd ed)" (Wiley, 2003). Some other comments: - the condition number of X'X is the square of the condition number of X, which is why it is a good idea to avoid working with X...
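The numerical point generalizes beyond R: a QR route applies orthogonal transformations to X directly and never forms XᵀX, whose condition number is the square of X's. A toy two-column version of the idea (Python, modified Gram-Schmidt; real code should use qr()/LAPACK as the post says):

```python
from math import sqrt

def qr_lstsq_2col(X, y):
    """Least squares via modified Gram-Schmidt QR on a 2-column X,
    never forming X'X."""
    m = len(X)
    # orthonormalize the two columns: X = QR
    r11 = sqrt(sum(X[i][0] ** 2 for i in range(m)))
    q1 = [X[i][0] / r11 for i in range(m)]
    r12 = sum(q1[i] * X[i][1] for i in range(m))
    v = [X[i][1] - r12 * q1[i] for i in range(m)]
    r22 = sqrt(sum(vi ** 2 for vi in v))
    q2 = [vi / r22 for vi in v]
    # solve R beta = Q'y by back-substitution
    c1 = sum(q1[i] * y[i] for i in range(m))
    c2 = sum(q2[i] * y[i] for i in range(m))
    b2 = c2 / r22
    return [(c1 - r12 * b2) / r11, b2]
```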
2005 Dec 14
2
suggestions for nls error: false convergence
Hi, I'm trying to fit some data using a logistic function defined as y ~ a * (1+m*exp(-x/tau)) / (1+n*exp(-x/tau)) My data is below: x <- 1:100 y <- c(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 0,0,0,0,0,1,1,1,2,2,2,2,2,3,4,4,4,5, 5,5,5,6,6,6,6,6,8,8,9,9,10,13,14,16,19,21, 24,28,33,40,42,44,50,54,69,70,93,96,110,127,127,141,157,169,
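A "false convergence" warning from nls often traces back to poor starting values. One cheap diagnostic is to evaluate the residual sum of squares over a coarse grid of candidate parameters before calling the optimizer. A sketch of the idea (Python; the data below are synthetic with known parameters, since the post's y vector is truncated, and the grid values are arbitrary):

```python
from math import exp

def model(x, a, m, n, tau):
    # the poster's logistic-type curve
    return a * (1 + m * exp(-x / tau)) / (1 + n * exp(-x / tau))

def sse(xs, ys, a, m, n, tau):
    return sum((model(x, a, m, n, tau) - y) ** 2 for x, y in zip(xs, ys))

# synthetic data generated from known parameters, to show the grid picks them out
xs = list(range(1, 101))
ys = [model(x, 150.0, -1.0, 50.0, 12.0) for x in xs]

# scan a coarse grid of tau values with the other parameters held fixed;
# the best cell is a sane starting value to hand to the real optimizer
best_tau = min([5.0, 8.0, 12.0, 20.0],
               key=lambda t: sse(xs, ys, 150.0, -1.0, 50.0, t))
```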
2000 Jun 03
4
How to do linear regression with errors in x and y?
QUESTION: how should I do a linear regression in which there are errors in x as well as y? SUPPLEMENT: I've seen folks approach this problem by computing eigenvectors of the covariance matrix, and that makes sense to me. But I'm wondering if this has a "pedigree" (i.e. if it makes sense to folks on this list, and if it's something that has been published, so I can refer to
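The eigenvector approach does have a pedigree: it is orthogonal regression (total least squares, or "major axis" regression) — the fitted line's direction is the leading eigenvector of the 2×2 covariance matrix of (x, y); Deming regression is the weighted variant when the two error variances differ. A compact sketch (Python; closed form for the 2×2 symmetric eigenproblem, vertical lines not handled):

```python
from math import atan2, tan

def orthogonal_fit(x, y):
    """Slope and intercept of the line minimizing perpendicular distances.
    The direction is the leading eigenvector of the covariance matrix of
    (x, y); theta below is the principal-axis angle."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    theta = 0.5 * atan2(2 * sxy, sxx - syy)   # angle of the major axis
    slope = tan(theta)                        # undefined for a vertical line
    return slope, my - slope * mx
```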