similar to: predicted values after fitting gamma2 function

Displaying 20 results from an estimated 700 matches similar to: "predicted values after fitting gamma2 function"

2009 Jun 05
2
p-values from VGAM function vglm
Does anyone know how to get p-values for the t-values from the coefficients produced by vglm? Attached are the code and output; see the comment added to the output showing where I need p-values + print(paste("********** Using VGAM function gamma2 **********")) + modl2<- vglm(MidPoint~Count,gamma2,data=modl.subset,trace=TRUE,crit="c") + print(coef(modl2,matrix=TRUE))
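A minimal sketch of one way to get those p-values, assuming modl2 is the fitted vglm object from the code above: compute two-sided Wald p-values from the coefficients and their standard errors (recent versions of VGAM also report these via summary(modl2)).
library(VGAM)
est <- coef(modl2)                           # coefficient estimates
se  <- sqrt(diag(vcov(modl2)))               # standard errors from the covariance matrix
z   <- est / se                              # Wald statistics (the printed "t values")
p   <- 2 * pnorm(abs(z), lower.tail = FALSE) # two-sided p-values
cbind(Estimate = est, SE = se, z = z, p.value = p)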
2009 Jun 16
0
Generation from COX PH with gamma frailty
Hello, I want to generate a data set from a Cox PH model with gamma frailty effects. theta (the frailty distribution parameter) = 2, beta = 1.5, n = 300, cluster size = 30, number of clusters = 10. I think I should first generate u from Gamma(theta, theta), but after that I cannot decide how to generate the survival times. Is there any package for this, or any document you could suggest? Any
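A hedged sketch, not from the original thread: with an assumed exponential baseline hazard lambda0, survival times can be drawn by inverse transform, since T given the frailty u and covariate x is exponential with rate lambda0 * u * exp(beta * x). Names such as lambda0 and csize are illustrative, and the gamma parameterisation below (mean 1, variance theta) is one common choice.
set.seed(1)
theta      <- 2                      # frailty variance parameter
beta       <- 1.5
n.clusters <- 10
csize      <- 30
lambda0    <- 0.1                    # assumed baseline hazard
frailty <- rgamma(n.clusters, shape = 1/theta, rate = 1/theta)  # mean 1, variance theta
cluster <- rep(seq_len(n.clusters), each = csize)
x       <- rnorm(n.clusters * csize)                            # one covariate
rate    <- lambda0 * frailty[cluster] * exp(beta * x)
time    <- rexp(length(rate), rate = rate)                      # conditional survival times
dat     <- data.frame(cluster, x, time, status = 1)
head(dat)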
2019 Mar 03
2
bug: sample( x, size, replace = TRUE, prob= skewed.probs) produces uniform sample
When `length( skewed.probs ) > 200`, uniform samples are generated in R-devel. R-3.5.1 behaves as expected. `epsilon` can be a lot bigger than illustrated and the uniform distribution is still produced. Chuck > set.seed(123) > > epsilon <- 1e-10 > > ## uniform to 200 then small > p200 <- prop.table( rep( c(1, epsilon), c(200, 999-200))) > ## uniform to 201
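A quick, hedged check of the reported behaviour (a simplified stand-in for the poster's full example, not the truncated p201 case): with heavily skewed probabilities, almost no draws should land on the near-zero-probability indices.
p <- prop.table(rep(c(1, 1e-10), c(200, 799)))          # uniform to 200, then tiny
s <- sample(seq_along(p), size = 1e5, replace = TRUE, prob = p)
table(s > 200)   # with correct behaviour, indices above 200 should be essentially absent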
2010 Sep 16
2
use same breaks and colors, but the displayed scale are different-image.plot()
Hi all, I want to put several figures into one figure for easy comparison, so I need to use the same method to plot them. The following is an example. I also list my method, but it does not work. # Example data x<- 1:10; y<- 1:10; z<- outer( x,y,"+"); z2<- outer( x,y,"-") # Quick view of them image.plot(x,y,z) # relatively larger values image.plot(x,y,z2)
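A minimal sketch of one way to get matching colour scales, assuming the fields package provides image.plot here: give both calls the same zlim (explicit breaks and col vectors can be shared in the same way if fixed break points are wanted).
library(fields)
x <- 1:10; y <- 1:10
z  <- outer(x, y, "+")
z2 <- outer(x, y, "-")
zl <- range(z, z2)                  # shared range for both panels
par(mfrow = c(1, 2))
image.plot(x, y, z,  zlim = zl)     # both panels now use the same colour scale
image.plot(x, y, z2, zlim = zl)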
2013 Jan 04
1
SpatialPolygon with the max value gets no color assigned in spplot function when using "at" parameter
Hi, I would like to color map regions based on the region value "weight". The approach I am taking is first to break the regions into equal intervals, classIntervals(spdf$weight,4)$brks # 4 intervals in this case, and then to color all regions within an interval with the same color, col = brewer.pal(4,"RdYlGn"). The max "weight" is also the boundary of the
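A hedged sketch of one common fix, assuming spdf is the poster's SpatialPolygonsDataFrame: when the maximum value sits exactly on the last at break it can be left uncolored, so nudging the top break upward keeps it inside the last interval.
library(sp); library(classInt); library(RColorBrewer)
brks <- classIntervals(spdf$weight, 4)$brks
brks[length(brks)] <- brks[length(brks)] + 1e-6          # include the maximum value
spplot(spdf, "weight", at = brks, col.regions = brewer.pal(4, "RdYlGn"))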
2012 Oct 17
2
loop of quartile groups
Greetings R users, My goal is to generate quartile groups of each variable in my data set. I would like each experiment to have its designated group added as a subsequent column. I can accomplish this individually with the following code: brks <- with(data_variables, cut2(var2, g=4)) #I don't want the actual numbers, I need a numbered group data$test1=factor(brks,
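A hedged sketch of how to do this for every variable at once rather than one at a time, assuming data_variables is a data frame of numeric columns (the object name is from the post, the rest is illustrative).
library(Hmisc)
grp <- lapply(data_variables, function(v) as.integer(cut2(v, g = 4)))   # quartile group 1-4
names(grp) <- paste0(names(data_variables), "_q4")
data_variables <- cbind(data_variables, as.data.frame(grp))             # groups added as new columns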
2011 Nov 29
1
Hmisc break points error
I am making frequency histograms using the histbackback function on my 2 datasets. However, when I try to use the brks argument: foo<-histbackback(log(fie11), log(fie86),ylim=c(0,9),probability=FALSE,axes=TRUE,ylab=("log10 Parcel Size"),brks=16) the graphic results in an 'NA' label for the y axis (no intervals are returned). Also, when I use 'summary(foo)' the
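A hedged sketch of one thing worth trying, assuming fie11 and fie86 are positive numeric vectors as in the post: pass brks an explicit vector of break points covering the plotted range rather than a single count.
library(Hmisc)
brk <- seq(0, 9, length.out = 17)                  # 16 intervals spanning 0 to 9
foo <- histbackback(log(fie11), log(fie86), brks = brk,
                    probability = FALSE, ylab = "log10 Parcel Size")
summary(foo)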
2006 Nov 07
2
wrong fill colors in polygon-map
Dear all, I would like to produce a map with information about the patenting activity in German districts, by coloring districts with different degrees of patenting activity in different colors. I work with the packages maptools, maps and spdep. The map data is read from an external .shp file (+ the corresponding .shx and .dbf files). Plotting a map with the IDs or the patenting indicator itself
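A heavily hedged sketch (all object names here are hypothetical, not from the post): one common cause of wrong fill colors is a color vector ordered by the attribute table rather than by the polygons, so the data should be matched to the polygon IDs before building the colors.
library(maptools); library(RColorBrewer)
map  <- readShapePoly("districts.shp")                       # assumed shapefile
ids  <- sapply(slot(map, "polygons"), slot, "ID")            # polygon IDs in plotting order
pat  <- patents$count[match(ids, patents$district_id)]       # assumed attribute data frame
cols <- brewer.pal(5, "Blues")[cut(pat, 5, labels = FALSE)]
plot(map, col = cols)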
2010 Mar 22
1
Setting breaks to data more appropriately
Basic question. For the data below, I would like to put each of the values in a bin that represents its value. So this would hopefully put .1 in the 0-.1 bin, .2 in the .11-.2 bin, and so forth. The outlying values would then be put into an outer category representing everything > 1. I'm using the breaks to inform some code for making a choropleth map that represents probabilities,
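A minimal sketch of that binning with cut(), using a made-up vector vals for illustration: the Inf break acts as the open-ended ">1" category.
vals <- c(0.05, 0.1, 0.2, 0.35, 0.9, 1.4)                     # illustrative data
bins <- cut(vals, breaks = c(seq(0, 1, by = 0.1), Inf),
            include.lowest = TRUE)                            # [0,0.1], (0.1,0.2], ..., >1
table(bins)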
2017 Jun 18
3
R_using non linear regression with constraints
I am not as expert as John, but I thought it worth pointing out that the variable substitution technique gives up one set of constraints for another (b=0 in this case). I also find that plots help me see what is going on, so here is my reproducible example (note inclusion of library calls for completeness). Note that NONE of the optimizers mentioned so far appear to be finding the true best
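A hedged, self-contained illustration of an alternative to the substitution trick: simple bound constraints can be imposed directly via nls(algorithm = "port"). The model and data below are a toy example, not the thread's problem.
set.seed(42)
x <- seq(0, 10, length.out = 50)
y <- 2 * exp(-0.3 * x) + rnorm(50, sd = 0.05)
fit <- nls(y ~ a * exp(-b * x), start = list(a = 1, b = 0.1),
           algorithm = "port", lower = c(a = 0, b = 0))       # keep both parameters non-negative
coef(fit)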
2017 Jun 18
0
R_using non linear regression with constraints
I've seen a number of problems like this over the years. The fact that the singular values of the Jacobian have a ratio larger than the usual convergence tolerances can mean the codes stop well before the best fit. That is the "numerical analyst" view. David and Jeff have given geometric and statistical arguments. All views are useful, but it takes some time to sort them all out and
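A hedged sketch of how to look at that conditioning, continuing the toy exponential fit from the sketch above (fit is an nls object): the ratio of the largest to smallest singular value of the Jacobian at the solution is the quantity being discussed.
J  <- fit$m$gradient()          # Jacobian of the model at the fitted parameters
sv <- svd(J)$d
sv[1] / sv[length(sv)]          # a very large ratio signals the near-singularity described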
2009 Jun 04
1
hist returning density larger than 1
The following code is giving me problems. I want to export densities of a distribution to a csv file. At the bottom of the code I use the hist function to generate the densities, but hist is returning values greater than 1. I don't understand why. Any help you can supply is greatly appreciated. # Set work path dir<-"~/Research/MR Distribution Analysis/" setwd(dir)
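A short, hedged illustration of why this is expected rather than a bug: hist() densities integrate to 1 over the bin widths, so individual density values exceed 1 whenever the bins are narrower than 1.
set.seed(1)
x <- rnorm(1000, sd = 0.1)                 # tightly concentrated data
h <- hist(x, breaks = 50, plot = FALSE)
max(h$density)                             # greater than 1 here, and legitimately so
sum(h$density * diff(h$breaks))            # the total area is still 1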
2017 Jul 09
2
Histogram plots in Lattice with spatialgrid dataframe data
Hi all, I can not seem to get what I want using the Lattice package to generate an array of histograms of spatialgrid dataframe data. I can use the sp package and spplot to generate an array of maps that display an array of spatialgrid dataframe data -- that's good. I have:
2006 Oct 27
0
VGAM package released on CRAN
Dear useRs, upon request, the VGAM package (currently version 0.7-1) has been officially released on CRAN (the package has been at my website http://www.stat.auckland.ac.nz/~yee/VGAM for a number of years now). VGAM implements a general framework for several classes of regression models using iteratively reweighted least squares (IRLS). The key ideas are Fisher scoring, generalized linear and
2017 Jul 09
0
Histogram plots in Lattice with spatialgrid dataframe data
Hello all, After more digging I was able to find out how to do this. The answer came from an example here: https://stackoverflow.com/questions/3541713/how-to-plot-two-histograms-together-in-r yr_1997<-data.frame(bias=ann_bias$bias1997) yr_1998<-data.frame(bias=ann_bias$bias1998) yr_1999<-data.frame(bias=ann_bias$bias1999) yr_2000<-data.frame(bias=ann_bias$bias2000)
2016 Apr 18
0
R [coding : do not run for every row ]
You can make this much more readable with apply functions. result <- apply( all_combine1, 1, function(x){ p.value <- sapply( seq_len(nSims), function(sim){ gamma1 <- rgamma(x["m"], x["sp(skewness1.5)"], x["scp1"]) gamma2 <- rgamma(x["n"], x["scp1"], 1) gamma1 <- gamma1 -
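A hedged, self-contained version of the same row-wise pattern with made-up parameter names (the columns of the poster's all_combine1 are not fully shown): apply() walks the parameter grid one row at a time and sapply() runs the inner simulations.
param.grid <- data.frame(m = c(10, 20), n = c(10, 20), shape = c(1.5, 2))   # illustrative grid
nSims <- 100
result <- apply(param.grid, 1, function(x) {
  p.value <- sapply(seq_len(nSims), function(sim) {
    g1 <- rgamma(x["m"], shape = x["shape"])
    g2 <- rgamma(x["n"], shape = x["shape"])
    t.test(g1, g2)$p.value
  })
  mean(p.value < 0.05)          # rejection rate for this parameter row
})
result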
2017 Jul 10
1
Histogram plots in Lattice with spatialgrid dataframe data
Glad you found an answer, though it looks more self-educational than efficient (see suggestions below). In the future, follow the recommendations of the Posting Guide: use plain text, and provide a reproducible example. Some elaborations on what "reproducible" means are [1][2][3]. One issue here was that you did not include sample data to work with (I have assumed below that ann_bias has
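A hedged sketch of the more efficient route hinted at here, assuming (as the reply does) that ann_bias is a data frame with columns bias1997, bias1998, and so on: reshape to long format once and let lattice facet by year instead of building one data frame per year.
library(lattice)
long <- stack(ann_bias[, grep("^bias", names(ann_bias))])   # values plus a year indicator
names(long) <- c("bias", "year")
histogram(~ bias | year, data = long)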
2014 May 06
3
Quantile map with spplot
Hi, The problem with Olivier's proposal is that the intervals are different for each variable. The quick and simple way is: spplot(zm["part88"], col.regions=plotclr, at=class$brks) But to make it more elegant, a few more steps are needed: ## Intervals as character strings op <- options(digits=4) tab <- print(class) options(op) intChar <- names(tab) ## Index
2016 Apr 18
0
R [coding : do not run for every row ]
Always keep the mailing list in cc. The code runs for each row in the data. However, I get the feeling that there is a mismatch between what you think is in the data and what is actually there. ir. Thierry Onkelinx Instituut voor natuur- en bosonderzoek / Research Institute for Nature and Forest team Biometrie & Kwaliteitszorg / team Biometrics & Quality Assurance Kliniekstraat 25 1070
2010 Mar 24
0
Getting choropleth map intervals correct
Hello all, I am working on mapping some probabilities in R to a geographic unit called a TAZ. The data below will work, but you will have to set your directory for the shapefile. I have never done this before, so hopefully this works. ResProbs is just supposed to be a value between 0 and 1; sorry if that is more complicated than it needed to be. TazFile <- "*directory*/TAZ.shp" TazShape <-
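A hedged sketch (using the poster's object names, and assuming TazShape ends up as a SpatialPolygonsDataFrame with a ResProbs column): fixed breaks spanning 0 to 1 keep the choropleth intervals identical across maps.
library(sp); library(RColorBrewer)
brks <- seq(0, 1, by = 0.2)                                   # the same intervals for every map
spplot(TazShape, "ResProbs", at = brks,
       col.regions = brewer.pal(length(brks) - 1, "Blues"))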