Displaying 20 results from an estimated 70 matches similar to: "3D Scatter Plot"
2009 Jan 11
1
Boxplot from matrices
Hi,
I want to create boxplots from matrices. I have the following data set:
5.0 1.78 2.99 2.019 0
10.0 1.79 3.00 1.744 0
15.0 1.78 2.98 1.936 0
20.0 1.78 2.99 1.975 0
25.0 1.73 2.91 3.591 0
30.0 1.79 3.00 1.966 0
35.0 1.79 3.00 2.451 0
40.0 1.79 3.00
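A minimal sketch of one way to do this (the matrix below just re-keys the first rows shown above); boxplot() draws one box per column of a data frame:

m <- matrix(c( 5.0, 1.78, 2.99, 2.019, 0,
              10.0, 1.79, 3.00, 1.744, 0,
              15.0, 1.78, 2.98, 1.936, 0), ncol = 5, byrow = TRUE)
boxplot(as.data.frame(m), names = paste0("col", 1:5))  # one box per column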
2008 Sep 03
1
R puts '+' within my numbers
Hello,
my test.R file contains two huge arrays (>3000 entries each) from which R needs to calculate the Pearson correlation. If I look at the file, the numbers look correct.
If I run R with
R < test.R --no-save
I see things like this:
0.723, 0.838, 1.002, 0.364, 0.357, 0.227, 0.982+ , 0.963, 0.535, 1.214, 1.270, 0.832, 1.033, 0.632, 2.482, 1.239, 0.743, 1.077, 0.962, 1.052, 1.075, 1.427, 1.395,
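For what it's worth, the '+' here is R's continuation prompt being echoed where the long expression wraps across input lines; it is not inserted into the data. A hedged sketch that sidesteps the echo by reading the arrays from files (the file names are hypothetical):

a <- scan("array1.txt")   # hypothetical file holding the first array
b <- scan("array2.txt")   # and the second
cor(a, b, method = "pearson")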
2008 Apr 23
3
dom0 lost packets.
I am trying to get VLAN and bonding working together for both dom0 and domU. I
lose packets sent to dom0, while domU is OK.
Nightly stats for dom0:
52879 packets transmitted, 45293 received, 14% packet loss, time 52879599ms
rtt min/avg/max/mdev = 0.144/0.224/717.306/5.129 ms
Nightly stats for domU:
52952 packets transmitted, 52952 received, 0% packet loss, time 52952554ms
rtt min/avg/max/mdev =
2010 Dec 01
1
Poisson GLM warning message
Hi,
I receive the following warning message when I run a Poisson GLM in R:
"glm.fit: fitted rates numerically 0 occurred"
The model summary is shown below. The variable 'Species' consists of
counts of different species ranging from 0 to 4. I suspect this may
have something to do with the warning message but I'm not sure. Can
anybody help?
Thank you!
Anna
Call:
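A hedged sketch that reproduces the warning with made-up data: when one level of a predictor carries only zero counts, the fitted Poisson mean for that level is driven to numerical zero.

Species <- c(0, 0, 0, 0, 2, 4, 1, 3)
site <- factor(c("a", "a", "a", "a", "b", "b", "b", "b"))
fit <- glm(Species ~ site, family = poisson)  # group "a" is all zeros
# Warning: glm.fit: fitted rates numerically 0 occurred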
2017 Sep 01
3
How to use getSymbols() to get annual data
Dear Sir/Madam,
How do I use getSymbols() to get annual data? For example, I need the annual stock price of Apple from 2000 to 2016. How do I write the command? I only know how to get the daily data:
getSymbols("AAPL",from="2000-01-01",to="2016-12-31")
Thank you very much.
Have a good week!
Best regards,
Yingrui Liu
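One hedged route, assuming the quantmod/xts pair the poster is already using: fetch the daily series and collapse it to yearly bars with xts's to.yearly():

library(quantmod)
getSymbols("AAPL", from = "2000-01-01", to = "2016-12-31")
AAPL.yearly <- to.yearly(AAPL)   # one OHLC bar per calendar year
head(AAPL.yearly)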
2009 Jan 26
1
glm StepAIC with all interactions and update to remove a term vs. glm specifying all but a few terms and stepAIC
Problem:
I am working through the model-selection process for the first time and want to
make sure that I have used glm, stepAIC, and update correctly. Something is
strange, because I get different results between:
1) a glm of 12 predictor variables followed by a stepAIC where all
interactions are considered and then an update to remove one specific
interaction.
vs.
2) entering all the terms
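A hedged sketch of the two routes being compared, on made-up data with three predictors standing in for the twelve (MASS supplies stepAIC); the two searches start from different models, so their paths and final fits can differ:

library(MASS)
set.seed(1)
d <- data.frame(y = rnorm(60), x1 = rnorm(60), x2 = rnorm(60), x3 = rnorm(60))
full <- glm(y ~ (x1 + x2 + x3)^2, data = d)  # main effects plus all 2-way interactions
m1 <- update(stepAIC(full, trace = FALSE), . ~ . - x1:x2)  # route 1: select, then drop x1:x2
m2 <- stepAIC(glm(y ~ (x1 + x2 + x3)^2 - x1:x2, data = d),
              trace = FALSE)                 # route 2: exclude x1:x2 up front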
2009 Apr 21
4
RELENG_7 crash
The box has a fairly heavy UDP load. It's RELENG_7 as of today, and it
took 3 hrs to dump core.
Fatal trap 12: page fault while in kernel mode
cpuid = 1; apic id = 01
fault virtual address = 0x68
fault code = supervisor read, page not present
instruction pointer = 0x20:0xc0637146
stack pointer = 0x28:0xe766eaac
frame pointer = 0x28:0xe766eb54
code segment
2010 Feb 25
2
error using pvcm() on unbalanced panel data
Dear all
I am trying to fit Variable Coefficients Models on Unbalanced Panel
Data. I managed to fit such models on balanced panel data (the example
from the "plm" vignette), but I failed to do so on my real, unbalanced
panel data.
I can reproduce the error on a modified example from the vignette:
> require(plm)
> data("Hedonic")
> Hed <- pvcm(mv ~ crim + zn + indus
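For comparison, a minimal sketch of the balanced case the poster says works, using the Grunfeld data that ships with plm:

library(plm)
data("Grunfeld", package = "plm")
gw <- pvcm(inv ~ value + capital, data = Grunfeld, model = "within")  # one regression per firm
summary(gw)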
2011 Oct 09
2
help with statistics in R - how to measure the effect of users in groups
Hi,
I'm a newbie to R. My knowledge of statistics is mostly self-taught. My
problem is how to measure the effect of users in groups. I can calculate a
particular attribute for a user in a group. But my hypothesis is that users'
attributes are not independent of each other and that a user's attribute
depends on the group, i.e. that users' behaviour changes based on the
group.
Let
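One hedged way to frame that hypothesis, assuming the lme4 package (the data below are made up): a random group intercept measures how strongly the group pulls its members' attribute.

library(lme4)
set.seed(1)
d <- data.frame(group = factor(rep(1:10, each = 8)))
d$attribute <- rnorm(10)[d$group] + rnorm(80, sd = 0.5)  # toy data with a real group effect
m <- lmer(attribute ~ 1 + (1 | group), data = d)
summary(m)  # a sizeable group variance component supports the dependence hypothesis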
2013 Mar 12
5
extract values
Hello all!
I have a problem extracting values greater than, for example, 1820.
I tried this code: x1 <- x[x[, 1] > 1820, ]
Please help me!
Thank you!
The data structure is:
structure(c(2.576, 1.728, 3.434, 2.187, 1.928, 1.886, 1.2425,
1.23, 1.075, 1.1785, 1.186, 1.165, 1.732, 1.517, 1.4095, 1.074,
1.618, 1.677, 1.845, 1.594, 1.6655, 1.1605, 1.425, 1.099, 1.007,
1.1795, 1.3855, 1.4065, 1.138, 1.514,
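The posted code has the right shape for a matrix or data frame; a minimal sketch on toy data (the structure above is truncated, and if x is a plain vector the row/column comma must be dropped):

x <- matrix(c(1800, 0.5, 1821, 0.7, 1900, 0.9), ncol = 2, byrow = TRUE)
x1 <- x[x[, 1] > 1820, ]   # keep rows whose first column exceeds 1820
v <- c(2.576, 1.728, 3.434)
v1 <- v[v > 1.8]           # vector case: no comma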
2012 Oct 16
2
Creating Optimization Constraints
Good afternoon,
In the code below, I have a set of functions (m1, m2, m3, s1, s2, and s3) which represent response surface designs for the mean and variance for three response variables, followed by an objective function that uses the "Big M" method to minimize variance (that is, push s1, s2, and s3 as close to 0 as possible) and hit targets for each of the three means (which are 0, 10,
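A hedged sketch of the Big M device with toy quadratic stand-ins for the poster's six surfaces; the third mean target is cut off above, so the 0 used below is a placeholder:

m1 <- function(x) x[1] + x[2];       s1 <- function(x) (x[1] - 1)^2
m2 <- function(x) 10 + x[1] - x[2];  s2 <- function(x) x[2]^2
m3 <- function(x) x[1] * x[2];       s3 <- function(x) (x[1] + x[2])^2
obj <- function(x, M = 1e4, t = c(0, 10, 0)) {  # t[3] is a placeholder
  s1(x) + s2(x) + s3(x) +                       # push variances toward 0
    M * sum(abs(c(m1(x), m2(x), m3(x)) - t))    # heavily penalise missed targets
}
res <- optim(c(0, 0), obj)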
2013 Mar 13
2
merge data
Hello all!
I have a problem in R. I am trying to merge data like this:
structure(c(2.1785, 1.868, 2.1855, 2.5175, 2.025, 2.435, 1.809,
1.628, 1.327, 1.3485, 1.4335, 2.052, 2.2465, 2.151, 1.7945, 1.79,
1.6055, 1.616, 1.633, 1.665, 2.002, 2.152, 1.736, 1.7985, 1.9155,
1.7135, 1.548, 1.568, 1.713, 2.079, 1.875, 2.12, 2.072, 1.906,
1.4645, 1.3025, 1.407, 1.5445, 1.437, 1.463, 1.5235, 1.609, 1.738,
1.478,
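A minimal sketch of the two usual routes on made-up vectors (the structures above are truncated, so which one applies is a guess): cbind() for positional column binding, merge() when a shared key exists:

a <- data.frame(id = 1:3, v1 = c(2.1785, 1.868, 2.1855))
b <- data.frame(id = 1:3, v2 = c(2.576, 1.728, 3.434))
cbind(a$v1, b$v2)        # column-wise, positions must correspond
merge(a, b, by = "id")   # key-based join, order-independent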
2012 Jun 04
2
SPEI package: thornthwaite function
Hello, R users.
Let me explain with an example:
# load the package and the example data
install.packages("SPEI")
library("SPEI")
data(wichita)
# the first 12 observations
head(wichita, 12)
# my subset of the first 12 observations
meu <- wichita[1:12, ]
meu
# as you can see, the TMED values are identical in both data frames.
# now comes the problem
# we calculate
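A hedged sketch of the call this appears to lead up to: SPEI's thornthwaite() takes a monthly mean-temperature series and a latitude (the 37.65 used for Wichita below is an assumption). If the method derives its heat index from the whole series, the same 12 TMED values can yield different PET results in the full data and in the subset:

library(SPEI)
data(wichita)
pet_full <- thornthwaite(wichita$TMED, lat = 37.65)          # full series
pet_12   <- thornthwaite(wichita[1:12, ]$TMED, lat = 37.65)  # first 12 months only
# compare pet_full[1:12] with pet_12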
2008 Sep 11
0
Bug#498659: logcheck-database: amavis filter a little too verbose?
Package: logcheck-database
Version: 1.2.68
Severity: normal
Hi,
I use postfix, amavisd-new and logcheck on a lenny server, and the mails I
receive seem a little too verbose. For example, the report includes every
mail that is received:
Sep 11 08:35:11 heracles amavis[19788]: (19788-02) Passed CLEAN, [xxx.xxx.xxx.xxx] [xxx.xxx.xxx.xxx] <user at domain> -> <user at
2001 Dec 28
0
flattening return value of tapply
Dear R-Users,
Does anyone know how to flatten, i.e. convert to a table, the return value of
tapply when its INDEX argument is a list? Here is an example of what I need:
> x <- rnorm(100)
> f1 <- rep(c(T,F),50)
> f2 <- c(rep(T,50), rep(F,50))
> y <- tapply(x, list(f1=f1,f2=f2), summary)
> y
f2
f1 FALSE TRUE
FALSE "Numeric,6"
2002 Sep 10
2
Traceroute
How do I allow traceroute to reach my server? Pings work fine, but
traceroute stops at the last hop before my server. If I shut off the
firewall, it reaches it fine.
PING danicar.net (24.222.246.120): 56 data bytes
64 bytes from 24.222.246.120: icmp_seq=0 ttl=237 time=104.0 ms
64 bytes from 24.222.246.120: icmp_seq=1 ttl=237 time=74.9 ms
64 bytes from 24.222.246.120: icmp_seq=2 ttl=237 time=90.6
2009 Jun 27
1
Regression; how to get t-values for all parameters estimates
Dear all,
Even after a couple of hours looking at old messages, I still haven't found a
solution to my problem.
I'm trying to fit an additive linear regression model with 2 effects, both
fixed, to some dataset. The function contrasts(effectA) <- contr.sum can
guarantee that the coefficients per parameter sum to zero, and the function
dummy.coef provides the estimates of all
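A minimal sketch of the setup described, with hypothetical factors A and B; dummy.coef() recovers every level, while summary() reports t-values only for the coded columns:

set.seed(1)
d <- data.frame(y = rnorm(24),
                A = factor(rep(letters[1:4], 6)),
                B = factor(rep(LETTERS[1:2], each = 12)))
contrasts(d$A) <- contr.sum(4)    # sum-to-zero coding, as the poster uses
fit <- lm(y ~ A + B, data = d)
dummy.coef(fit)                   # estimates for all levels, incl. the last
summary(fit)$coefficients         # t-values only for the 3 coded A columns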
2011 Jun 21
1
Help interpreting ANCOVA results
Please help me interpret the following results.
The full model (Schwa~Dialect*Prediction*Reduction) was reduced via both
update() and step().
The minimal adequate model is:
ancova <- lm(Schwa ~ Dialect + Prediction + Reduction + Dialect:Prediction)
Schwa is the response variable.
Dialect is a factor with two levels ("QF", "SF").
Prediction is a factor with two levels ("High", "Low").
2004 Jun 11
4
Regression query
Hi
I have a set of data with both quantitative and categorical predictors.
After scaling the response variable, I looked for multicollinearity (VIF
values) among the predictors and removed the predictors that were hiding
some of the other significant predictors. I'm curious to know whether the
predictors that are not significant in a simple 'lm' will be involved in
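A minimal sketch of the VIF screening step described, assuming the car package supplies vif():

library(car)
set.seed(1)
d <- data.frame(x1 = rnorm(50), x2 = rnorm(50))
d$x3 <- d$x1 + rnorm(50, sd = 0.1)   # nearly collinear with x1
d$y  <- d$x1 + d$x2 + rnorm(50)
vif(lm(y ~ x1 + x2 + x3, data = d))  # values above ~5-10 flag collinearity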
2009 Aug 19
5
scale or not to scale that is the question - prcomp
Dear all
here is my data called "rglp"
structure(list(vzorek = structure(1:17, .Label = c("179/1/1",
"179/2/1", "180/1", "181/1", "182/1", "183/1", "184/1", "185/1",
"186/1", "187/1", "188/1", "189/1", "190/1", "191/1", "192/1",