similar to: data frame method for as.table()

Displaying 20 results from an estimated 10000 matches similar to: "data frame method for as.table()"

2016 May 23
0
data frame method for as.table()
> On May 23, 2016, at 11:46 AM, Ernest Adrogué <eac at openmailbox.org> wrote: > > Hello, > > Currently it's possible to convert an object of class table to a data frame > with as.data.frame.table(), but there's no ready-made function, AFAIK, to do > the reverse operation, i.e. conversion of a data frame to a table. > > Do you think it would be a good
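
A minimal sketch of the reverse conversion being asked about: xtabs() already rebuilds a table from a data frame with a count column (the Freq name below is what as.data.frame.table() produces):

    tab  <- table(state.region)          # a small example table
    df   <- as.data.frame(tab)           # columns: state.region, Freq
    back <- xtabs(Freq ~ ., data = df)   # rebuild the table from the data frame
    all(back == tab)                     # TRUE: round-trip recovers the counts
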
2007 May 05
1
How to latex tables?
Suppose I have a table constructed with structable, or simply an object of class table. How can I convert it to a LaTeX object? I looked in RSiteSearch, but only found information about matrices or data frames. Steve For example, here is a table t2 > str(t2) table [1:2, 1:2, 1:2] 6 8 594 592 57 ... - attr(*, "dimnames")=List of 3 ..$ Hospital : chr [1:2] "A"
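
One hedged route (assuming the xtable package is installed): flatten the multi-way table with ftable(), turn it into a character matrix with format(), and hand that to xtable():

    library(xtable)                  # assumed installed
    f <- ftable(UCBAdmissions)       # flatten the 3-way table to 2-D
    m <- format(f, quote = FALSE)    # character matrix, labels included
    print(xtable(m), include.rownames = FALSE, include.colnames = FALSE)
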
2020 May 13
7
justify hard coded in format.ftable
Dear all, I haven't received any feedback so far on my proposal to make the "justify" argument available in stats:::format.ftable. Is this list the appropriate place for this kind of proposal? I hope this follow-up to my message won't be taken as rude. Of course it's not meant to be, but I'm not used to the R mailing lists... Thank you in advance for your comments, Best,
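
Until such an argument exists, one workaround sketch is to post-process the character matrix that format() returns and re-justify it with formatC():

    f <- ftable(UCBAdmissions)
    m <- format(f)                   # justification is fixed inside format.ftable
    m[] <- formatC(trimws(m), width = max(nchar(m)), flag = "-")  # left-justify
    write.table(m, quote = FALSE, row.names = FALSE, col.names = FALSE)
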
2007 Feb 24
1
Woolf's test, Odds ratio, stratification
Just a general question concerning the Woolf test (package vcd): when we have stratified data (2x2 tables) and the p-value of the Woolf test is below 0.05, do we then assume that there is heterogeneity and that a common odds ratio cannot be computed? Does this mean that we have to try to add more stratification variables (stratify further) to make the Woolf test p-value insignificant? Also in the
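
For the mechanics (the interpretation question aside): vcd's woolf_test() takes a 2x2xK table, and mantelhaen.test() in base stats estimates the common odds ratio that is only meaningful when homogeneity is not rejected:

    library(vcd)                     # assumed installed
    woolf_test(UCBAdmissions)        # homogeneity of odds ratios across Dept
    mantelhaen.test(UCBAdmissions)   # common OR, sensible only under homogeneity
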
2002 Feb 26
1
Cross-tabulation of data from database
I am quite new to R, so please bear with me if I have problems with the R terminology. I want to (try to) use R for some analyses within vegetation ecology, using the vegan package. I have my data in a postgresql database, and I manage to get them into R as a data frame with columns for, respectively: name of the analysed m2, name of the species, and coverage of the species in the square in %. I
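
A minimal sketch of the long-to-wide step, with hypothetical column names (plot, species, cover) standing in for the three database columns:

    # hypothetical long-format data as it comes from the database
    veg <- data.frame(plot    = c("p1", "p1", "p2"),
                      species = c("sp1", "sp2", "sp1"),
                      cover   = c(10, 5, 80))
    comm <- as.matrix(xtabs(cover ~ plot + species, data = veg))
    comm                             # plots x species matrix, ready for vegan
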
2012 Jan 26
1
ftable.formula
I apologize in advance if this is the wrong forum for this report/request, and for the fact that I have not read the code for ftable.formula in any detail. From reading the documentation for ftable.formula, I expected that the following two calls to ftable would produce the same results: data(UCBAdmissions) ftable(UCBAdmissions, row.vars = "Dept", col.vars = c("Gender",
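
To reproduce the comparison: per the documentation, ftable.formula takes the column variables on the left-hand side and the row variables on the right, so the formula counterpart of the first call is the second:

    data(UCBAdmissions)
    ftable(UCBAdmissions, row.vars = "Dept", col.vars = c("Gender", "Admit"))
    ftable(Gender + Admit ~ Dept, data = UCBAdmissions)   # formula counterpart
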
2012 Aug 31
2
test Breslow-Day for svytable??
Hi all, I want to know how to perform the Breslow-Day test for homogeneity of odds ratios (OR) stratified for svytable. This test is obtained with the following code: epi.2by2(dat = daty, method = "case.control", conf.level = 0.95, units = 100, homogeneity = "breslow.day", verbose = TRUE) where "daty" is an object of class table; I would like a svytable to be considered, but
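
The Breslow-Day statistic itself aside, a svytable inherits from class "table", so homogeneity tests that accept a 2x2xK table can at least be run on it; here a Woolf test from vcd as a stand-in (a swapped-in technique, not Breslow-Day, and the statistical validity with weighted counts is a separate question):

    library(survey)                  # assumed installed
    library(vcd)
    data(api)
    dclus1 <- svydesign(id = ~dnum, weights = ~pw, data = apiclus1, fpc = ~fpc)
    tab <- svytable(~sch.wide + comp.imp + stype, design = dclus1)  # 2x2x3
    woolf_test(tab)                  # homogeneity of ORs on the weighted counts
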
2017 Jan 19
2
xtabs(), factors and NAs
Hi all, I know this issue has been discussed a few times in the past already, but Martin Maechler suggested in a bug report [1] that I raise it here. Basically, there is no (easy) way of printing NAs for all variables when calling xtabs() on factors. Passing 'exclude=NULL, na.action=na.pass' works for character vectors, but not for factors. > test <-
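
One workaround sketch for factors: make the NA an explicit level with addNA() before tabulating:

    test <- data.frame(x = factor(c("a", "b", NA)),
                       y = factor(c("u", NA, "v")))
    xtabs(~ addNA(x) + addNA(y), data = test)   # NA shown as a level for both
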
2010 Dec 02
1
latex tables for 3+ dimensional tables/arrays
I'm looking for an R method to produce LaTeX versions of tables for table/array objects of 3 or more dimensions, which of necessity are flattened to a 2-D display, for example with ftable() or vcd::structable, as shown below. I'd be happy to settle for a flexible solution for the 3D case. > UCB <- aperm(UCBAdmissions, c(2, 1, 3)) > ftable(UCB) Dept A B
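
One candidate, assuming a recent memisc (which ships toLatex() methods for ftable objects):

    library(memisc)                  # assumed: provides toLatex.ftable
    UCB <- aperm(UCBAdmissions, c(2, 1, 3))
    toLatex(ftable(UCB))             # emits a LaTeX tabular for the flat table
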
2015 Feb 09
3
xtabs and NA
Hi I haven't found a way to produce a tabulation from factor data with NA values using xtabs. Please find a minimal example below, it's also on R-pubs [1]. Tested with R 3.1.2 and R-devel r67720. It doesn't seem to be documented explicitly that it's not supported. From reading the code [2] it looks like the relevant call to table() doesn't set the "useNA"
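
As a point of comparison while xtabs() lacks the option, table() exposes the switch directly:

    d <- data.frame(g = factor(c("a", NA, "b", "a")))
    xtabs(~ g, data = d)             # NA silently dropped for factors
    table(d$g, useNA = "ifany")      # NA counted
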
2011 Apr 03
1
style question
Hi everyone, I am trying to build a table putting standard errors horizontally. I haven't been able to do it. library(memisc) berkeley <- aggregate(Table(Admit,Freq)~.,data=UCBAdmissions) berk0 <- glm(cbind(Admitted,Rejected)~1,data=berkeley,family="binomial") berk1 <- glm(cbind(Admitted,Rejected)~Gender,data=berkeley,family="binomial") berk2 <-
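
A package-free sketch of the end result (estimates and standard errors side by side), swapped in for the memisc template machinery the thread is about:

    # rebuild the Berkeley counts without memisc::Table
    df <- as.data.frame(UCBAdmissions)
    wide <- reshape(df, idvar = c("Gender", "Dept"), timevar = "Admit",
                    direction = "wide")
    names(wide)[names(wide) == "Freq.Admitted"] <- "Admitted"
    names(wide)[names(wide) == "Freq.Rejected"] <- "Rejected"
    fit <- glm(cbind(Admitted, Rejected) ~ Gender, data = wide, family = binomial)
    cf <- coef(summary(fit))
    cbind(est = cf[, "Estimate"], se = cf[, "Std. Error"])  # SEs beside estimates
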
2005 Apr 13
1
lm() with many responses
Hi all, I have one array of predictors, one observation per row, and one array of responses, also arranged one observation per row. I arrange these into a data.frame and call lm() with a pasted-together formula. I would like to call lm() with a number of responses in excess of 100, but for some reason, 39 seems to be a limit. Why do I get an "invalid variable names" error from
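
The pasted-formula limit can be sidestepped entirely: lm() accepts a matrix response and fits all columns at once:

    set.seed(1)
    X <- matrix(rnorm(50 * 3), 50, 3)      # predictors, one observation per row
    Y <- matrix(rnorm(50 * 120), 50, 120)  # 120 responses, well past 39
    fit <- lm(Y ~ X)                       # class "mlm": one fit per column of Y
    dim(coef(fit))                         # 4 x 120
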
2016 Aug 17
1
table(exclude = NULL) always includes NA
The quirk as in table(1:3, exclude = 1, useNA = "ifany") is actually somewhat documented, and still in R-devel r71104. In the R help on 'table', in the "Details" section: It is best to supply factors rather than rely on coercion. In particular, 'exclude' will be used in coercion to a factor, and so values (not levels) which appear in 'exclude' before coercion will be mapped to
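
A small sketch of the documented coercion behaviour being quoted:

    table(1:3, exclude = 1, useNA = "ifany")  # the excluded value 1 resurfaces as NA
    table(factor(1:3, levels = 2:3))          # supplying a factor keeps it out
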
2016 Aug 15
1
table(exclude = NULL) always includes NA
>>>>> Martin Maechler <maechler at stat.math.ethz.ch> on Mon, 15 Aug 2016 11:07:43 +0200 writes: >>>>> Suharto Anggono <suharto_anggono at yahoo.com> on Sun, 14 Aug 2016 03:42:08 +0000 writes: >> useNA <- if (missing(useNA) && !missing(exclude) && !(NA %in%
2011 Apr 03
1
setCoefTemplate
Hi everyone, I am trying to build a table putting standard errors horizontally. I haven't been able to do it. library(memisc) berkeley <- aggregate(Table(Admit,Freq)~.,data=UCBAdmissions) berk0 <- glm(cbind(Admitted,Rejected)~1,data=berkeley,family="binomial") berk1 <- glm(cbind(Admitted,Rejected)~Gender,data=berkeley,family="binomial") berk2 <-
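
For the template route, memisc lets you inspect the shipped coefficient templates before defining a new one with setCoefTemplate():

    library(memisc)                  # assumed installed
    getCoefTemplate()                # list the built-in coefficient templates
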
2006 Apr 04
2
documenting s4 methods in package
Hi, I have written a package that contains many S4 generic functions and associated methods. I am having a lot of trouble getting R to build the help pages for these generic functions without reporting "missing link(s): ~~fun~~", which means that it cannot find the appropriate function when code in the example section of the help is run. Right? After some playing around I can get it to
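
A minimal sketch of the pattern: each S4 method needs its own \alias entry in the Rd file. With roxygen2 (an assumption here; the thread predates it and used hand-written Rd) the @aliases tag generates those entries:

    library(methods)

    #' Summarize an object.
    #' @aliases mySummary mySummary,numeric-method
    #' @export
    setGeneric("mySummary", function(x) standardGeneric("mySummary"))
    setMethod("mySummary", "numeric", function(x) mean(x))

    mySummary(1:10)                  # 5.5
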
2008 Nov 05
1
How do I read a text (.csv) file to match a matrix/cross tab? (Object confusion??)
I'm having a problem reading data to set control totals for a dataframe. I want to adjust a dataframe based on a 2-d table of values, which I get by using : > CurrentX1Sums <- as.matrix(xtabs(~tripid_nu+lineon, data=SurveyData)) > CurrentX2Sums <- apply(CurrentX1Sums, 1, sum) I've created a .csv file with new (target) sums that looks like this: tripid_nu Warner
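
A self-contained sketch of getting the target sums in as a numeric matrix rather than a data frame (the inline csv text below is hypothetical):

    csv <- "tripid_nu,Warner,Other\n101,10,5\n102,3,7\n"
    targets <- as.matrix(read.csv(text = csv, row.names = 1, check.names = FALSE))
    str(targets)   # numeric matrix keyed by tripid_nu, comparable to the xtabs result
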
2017 Jun 06
2
integrating 2 lists and a data frame in R
> On Jun 6, 2017, at 4:01 AM, Jim Lemon <drjimlemon at gmail.com> wrote: > > Hi Bogdan, > Kinda messy, but: > > N <- data.frame(N=c("n1","n2","n3","n4")) > M <- data.frame(M=c("m1","m2","m3","m4","m5")) > C <-
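
If the goal is every pairwise combination of the two lists, expand.grid() is a tidier sketch than binding data frames by hand:

    N <- c("n1", "n2", "n3", "n4")
    M <- c("m1", "m2", "m3", "m4", "m5")
    C <- expand.grid(N = N, M = M, stringsAsFactors = FALSE)  # 20 rows, all pairs
    head(C)
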
2011 Feb 16
1
Saturated model in binomial glm
Hi all, Could somebody be so kind to explain to me what is the saturated model on which deviance and degrees of freedom are calculated when fitting a binomial glm? Everything makes sense if I fit the model using as response a vector of proportions or a two-column matrix. But when the response is a factor and counts are specified via the "weights" argument, I am kind of lost as far as
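
A small demonstration of why the reported deviance differs: the grouped (binomial) fit and the expanded (Bernoulli) fit have the same coefficients but different saturated models, one parameter per covariate row versus one per observation:

    df <- data.frame(x = factor(c("a", "a", "b", "b")),
                     succ = c(3, 5, 2, 7), fail = c(7, 5, 8, 3))
    fit1 <- glm(cbind(succ, fail) ~ x, data = df, family = binomial)
    # expand to one 0/1 row per trial
    long <- data.frame(x = rep(df$x, df$succ + df$fail),
                       y = unlist(Map(function(s, f) rep(1:0, c(s, f)),
                                      df$succ, df$fail)))
    fit2 <- glm(y ~ x, data = long, family = binomial)
    all.equal(coef(fit1), coef(fit2))                        # TRUE
    c(grouped = deviance(fit1), bernoulli = deviance(fit2))  # not equal
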
2003 Jun 17
1
probability values ?
Hello, I try to find probability values of some predictor combinations using logistic regression, on the response level. Firstly I found coefficients with the glm function. Then I followed two ways to get probability values: 1- probability <- exp(b0 + b1*X1 + b2*X2 + ...)/(1 + exp(b0 + b1*X1 + b2*X2 + ...)) 2- probability <- predict(glm.obj, type = "resp") Should these two have given the same result? If so, I did not get it. Why
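
For the record, the two routes do agree when applied to the same fitted model; a mismatch usually means hand-typed coefficients or different data (the mtcars example below is an assumption):

    fit <- glm(am ~ wt, data = mtcars, family = binomial)
    eta <- predict(fit, type = "link")   # b0 + b1*X1, the linear predictor
    p1  <- exp(eta) / (1 + exp(eta))     # hand-computed inverse logit
    p2  <- predict(fit, type = "response")
    all.equal(p1, p2)                    # TRUE: the two routes agree
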