Displaying 20 results from an estimated 2000 matches similar to: "Plot complete dataset"
2009 May 07
13
What database field type should I use?
In a database table there is a field which has a certain fixed set of
values, for example:
status => {Single, Married, Divorced}
OR
state => {California, Alabama, Alaska ...}
Which of the following should be the preferred way of storing the
values?
1. Keep the field as a string (Rails) / VARCHAR (MySQL) itself, and
when showing the field just show the field value.
2. Keep
2008 Jul 16
4
Likelihood ratio test between glm and glmer fits
Dear list,
I am fitting a logistic multi-level regression model and need to test the difference between the ordinary logistic regression from a glm() fit and the mixed-effects fit from glmer(). Basically, I want to do a likelihood ratio test between the two fits.
The data are like this:
My outcome is a (1,0) indicator for health status, and I have several (1,0) dummy variables: RURAL, SMOKE, DRINK, EMPLOYED,
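A minimal sketch of such a comparison (the outcome name health, grouping factor cluster and data frame dat are placeholders; the dummy variable names come from the description above):

library(lme4)
fit_glm   <- glm(health ~ RURAL + SMOKE + DRINK + EMPLOYED,
                 family = binomial, data = dat)
fit_glmer <- glmer(health ~ RURAL + SMOKE + DRINK + EMPLOYED + (1 | cluster),
                   family = binomial, data = dat)
# Likelihood ratio statistic: twice the difference in log-likelihoods,
# referred to a chi-squared distribution with 1 df (the random-effect variance)
lr <- 2 * (as.numeric(logLik(fit_glmer)) - as.numeric(logLik(fit_glm)))
pchisq(lr, df = 1, lower.tail = FALSE)

Because the null hypothesis places the random-effect variance on the boundary of its parameter space, this naive chi-squared p-value tends to be conservative.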
2017 Dec 14
1
Aggregation across two variables in data.table
Dear all,
I have a data.frame that includes a series of demographic variables for a
set of respondents plus a dependent variable (Theta). For example:
   Age        Education  Marital Familysize Income             Housing    Theta
1:  50 Associate degree Divorced          4   70K+ Owned with mortgage 9.147777
2:  65
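A minimal sketch, assuming the aim is the mean of Theta by two of the demographic variables (column names are taken from the example above; the object names respondents and DT are placeholders):

library(data.table)
DT <- as.data.table(respondents)   # respondents: the data.frame described above
DT[, .(mean_theta = mean(Theta)), by = .(Education, Marital)]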
2012 Mar 07
4
problem with data
Good Afternoon,
I have a small problem with the following code.
# x.sub$Time[[1]] is 2006-10-31 19:03:01 EST
# but when put in the variable star it gives me
star <- x.sub$Time[[1]]
print(star)
print(x.sub$Time[[1]])
[1] 1 36 32 -........
I do not understand why.
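One possible explanation (an assumption, not verified against the poster's data): if Time is stored as POSIXlt, "[[" can reach into the internal component list instead of returning the first date-time, while "[" keeps the class intact. A small illustration:

t <- as.POSIXlt("2006-10-31 19:03:01", tz = "EST")
star <- t[1]            # keeps the date-time class; prints the full timestamp
print(star)
print(unclass(t)$sec)   # bare numeric components, similar to the output above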
2019 Apr 20
3
User mapping/login issue
I have been a bit divorced from Samba for a while and am stumped by a recently seen issue.
My Samba server (v4.8.3) runs on CentOS 7 and the remote clients are Windows boxes at the other end of a VPN (OpenVPN).
At some point in "recent" history, access to shares on the Centos server started to fail with password failures.
The reason seems to be associated with user mapping. (See log fragment
2005 Mar 02
2
--one-file-system problem
rsync commandline:
/usr/bin/rsync -e /usr/bin/ssh --archive --compress --sparse
--verbose --stats --delete --numeric-ids --partial --relative
--one-file-system target.host:/ /destination/path/
target rsync version: 2.6.3
destination rsync version: 2.6.2
The server we're trying to synchronize contains directories within "/"
that are mounted to other locations within
2011 Jul 28
1
[LLVMdev] git
Jason Kim <jasonwkim at google.com> writes:
> On Wed, Jul 27, 2011 at 8:59 PM, Mark Lacey <641 at rudkx.com> wrote:
>
> Besides, the git-svn readonly bridge is a great solution for those who want to use git
>
> It seems to be a reasonable solution for those individuals who
> want to use git, but in my experience not for organizations that
>
2012 Mar 08
8
Copy dataframe for another
I'm trying to copy the results of one data frame to another within a for
loop, but I am not able to implement the rbind, because give th
d <- NULL
df <- NULL
for (r in 2:nrow(x)) {
  val_user <- x.name[[r]]
  pos <- x.pos[[r]] - 4
  age <- x.age[[r]]
  d <- data.frame(val_user, pos, age)
  print(d)
  df <- rbind(df, d)   # accumulate rows inside the loop
}
Can someone help me solve this?
Thanks
2008 Aug 01
1
Smartest way to evaluate question forms
Hi,
I'm trying to help a friend who is doing a thesis at a nursing college to
evaluate medical question forms.
There are about 30 questions giving more than 110 parameters describing
each respondent (gender, health, etc.), and there are about 120
question forms to evaluate.
I have basically two questions:
1. What to look for.
2. How to evaluate it statistically.
As for No. 1, I have
2012 Mar 07
4
Column with codes
Good Day,
I have a small question which I think is simple to solve. I have a column
with the following records:
name
saucer
cup
tea
saucer
saucer
What is the quickest way to create a new column of codes, like this?
1
1
3
1
1
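A minimal sketch, assuming the column lives in a data frame called d; the exact coding scheme intended is not clear from the truncated post, so two common options are shown:

d$code  <- match(d$name, unique(d$name))   # numbered in order of first appearance
d$code2 <- as.integer(factor(d$name))      # numbered alphabetically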
2015 Jan 22
2
Programming Tools CTV
On Thu, Jan 22, 2015 at 12:45 PM, Achim Zeileis
<Achim.Zeileis at uibk.ac.at> wrote:
> On Thu, 22 Jan 2015, Max Kuhn wrote:
>
>> I've had a lot of requests for additions to the reproducible research
>> task view that fall into a grey area (to me at least).
>>
>> For example, roxygen2 is a tool that broadly enables reproducibility
>> but I see it more as
2008 Dec 21
2
data format issue
Dear all-
I have a dataset (see a sample below - but the whole dataset is June
2005 - June 2008). The "LST" format is "YYMMDDHHmm" and I would like to
get the hourly average of the "mph" for the summer months (spanning all
years). I have been trying to use "aggregate" but am not having much
success at all! Any thoughts would be greatly appreciated.
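A minimal sketch, assuming the data frame is called d, LST is a character column in "YYMMDDHHmm" form, and "summer" means June-August:

lst    <- strptime(as.character(d$LST), format = "%y%m%d%H%M")
hour   <- lst$hour
summer <- (lst$mon + 1) %in% 6:8   # POSIXlt months are 0-based
aggregate(d$mph[summer], by = list(hour = hour[summer]), FUN = mean)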
2010 Apr 16
2
managing data and removing lines
Hi,
I am very new to R and I've been trying to work through the R book to gain a
better idea of the code (which is also completely new to me).
Initially I imported my data from a text file and that seemed to work ok, but
I'm trying to examine linear relationships between gdist and gair, gdist and
gsub, m6dist and m6air, etc.
This didn't work and I think it might have something to do
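A minimal sketch for one of the pairs mentioned (gdist and gair); the data frame name mydata is a placeholder:

fit <- lm(gair ~ gdist, data = mydata)
summary(fit)
plot(gair ~ gdist, data = mydata)
abline(fit)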
2008 Jun 07
2
Rails integration tests without stories
I'm looking to drive the development of a rails app that does nothing
but serve a JSON API. All of the models are well tested elsewhere, so
I needn't worry about that. My only immediate goal is to be able to
fire off requests to a path and check the returned JSON.
I've tried a number of methods for this today, without being
particularly enthused about any of them.
I
2015 Jan 22
1
Programming Tools CTV
On Thu, Jan 22, 2015 at 1:05 PM, Achim Zeileis <Achim.Zeileis at uibk.ac.at> wrote:
> On Thu, 22 Jan 2015, Max Kuhn wrote:
>
>> On Thu, Jan 22, 2015 at 12:45 PM, Achim Zeileis
>> <Achim.Zeileis at uibk.ac.at> wrote:
>>>
>>> On Thu, 22 Jan 2015, Max Kuhn wrote:
>>>
>>>> I've had a lot of requests for additions to the
2015 May 14
3
Behaviour of data.table when doing calculations by groups
Dear community, I have a problem and cannot find much information on the
web that helps me.
I am doing calculations by groups with data.table. I have a file
(zp.res) with three columns that classify the data (sol, con, dia) and
one column of numeric data (media), in the following form:
sol con dia media
1: con 0 1 -22.6
2: con 0 1 -36.6
3: con 0 1 -35.6
and
2009 Aug 10
0
logistic regression with repeated measures
Hello,
I am writing because I need some advice on the following question. I am working on paternity in a monogamous bird species, and I am performing analyses to check whether the probability for a male to be cuckolded (a binary variable) depends on his body size, the body size of his female, the degree of genetic relatedness to his female, and nest density around his own nest (all continuous
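A hedged sketch of one common way to set up such a model (a binomial GLMM with a random intercept per male); every name below is a placeholder, not taken from the poster's data:

library(lme4)
fit <- glmer(cuckolded ~ male_size + female_size + relatedness + nest_density +
               (1 | male_id),
             family = binomial, data = birds)
summary(fit)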
2011 Nov 18
2
Export Tree for latex
Hello everybody.
I'm trying to export the result of a decision tree to LaTeX, but I cannot
manage it with the xtable package. Is there a package that makes this export?
I want to export this for LaTeX:
marital.status = Divorced
| educational.num <= 12: <=50K (1795.0/90.0)
| educational.num > 12
| | hours.per.week <= 41: <=50K (302.0/58.0)
| | hours.per.week > 41
| | |
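In the absence of a dedicated package, one workable sketch is to capture the printed tree and wrap it in a LaTeX verbatim environment rather than use xtable (the object name tree_fit and the file name tree.tex are placeholders):

txt <- capture.output(print(tree_fit))
writeLines(c("\\begin{verbatim}", txt, "\\end{verbatim}"), "tree.tex")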
2012 Jul 06
4
differences between survival models between STATA and R
Dear Community,
I have been using two types of survival programs to analyse a data set.
The first one is an R function called aftreg. The second one is a Stata
function called streg.
Both of them run the same analysis with a Weibull distribution. Yet, the
results are very different.
Shouldn't the results be the same?
Kind regards,
J
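A minimal sketch of the R side of such a comparison (the variable and data frame names are placeholders; aftreg comes from the eha package):

library(survival)
library(eha)
fit <- aftreg(Surv(time, status) ~ x1 + x2, data = d, dist = "weibull")
summary(fit)

One common source of apparent discrepancies in this kind of comparison is parameterisation: aftreg reports an accelerated-failure-time model, whereas streg's Weibull output is, by default, in the proportional-hazards metric, so the coefficients are not directly comparable without conversion.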
2005 Jun 15
4
Multiple line plots
Greetings,
I would like to plot three lines on the same figure, and I am lost. There is
an answer to a similar thread… but I tried matplot and it is beyond me. An
example of the data follows:
Year EM IM BM
1983 9.1 16.8 -7.7
1984 12.0 18.0 -6.0
1985 13.6 19.1 -5.5
1986 12.4 17.3 -4.9
1987 14.6 20.3 -5.7
1988 20.6 23.3 -2.6
1989 25.0 27.2 -2.2
1990 28.4 30.2 -1.8
1991 33.3 31.2 2.1
1992 40.6
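A minimal sketch, assuming the table above has been read into a data frame d with columns Year, EM, IM and BM:

matplot(d$Year, d[, c("EM", "IM", "BM")], type = "l", lty = 1, col = 1:3,
        xlab = "Year", ylab = "Value")
legend("topleft", legend = c("EM", "IM", "BM"), lty = 1, col = 1:3)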