similar to: formatting data???

Displaying 20 results from an estimated 60 matches similar to: "formatting data???"

2013 Apr 11
2
Read the data from a text file and reshape the data
I have a data set for different time intervals. The data has three comment lines before the data for each time interval, and each time interval has 500 data points. I want to reshape the dataset so that I have the following format:
t1      t2      t3      ................
0.00208 0.00417 0.00625 .................
a1      a2      a3      ...................
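One way to get there (a minimal sketch; the file name "intervals.txt", the "#" comment prefix, and the exact block layout are assumptions, not details from the post) is to drop the comment lines and refold the values into a matrix:

lines <- readLines("intervals.txt")              # hypothetical file name
vals  <- as.numeric(lines[!grepl("^#", lines)])  # drop the comment lines
mat   <- matrix(vals, nrow = 500)                # one column of 500 points per interval
wide  <- t(mat)                                  # one row per interval, if that is the target layout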
2006 May 24
2
data.frame
Dear all, does anyone know why I get the following error message when trying to create a simple data.frame?
DataF <- data.frame(Subject, BiomR, Spp, Capas, Litter, Herbs, LitterD, MaxCanH, DDifSp, DSSp, Slope, CanDens, NearestSp)
Error in data.frame(Subject, BiomR, Spp, Capas, Litter, Herbs, LitterD, : arguments imply differing number of rows: 202, 0
The data I am using
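That error usually means one of the vectors passed to data.frame() has a different length from the others - here, apparently, length 0 against 202. A quick check, using the variable names from the post:

# report each vector's length so the length-0 culprit stands out
lengths(list(Subject = Subject, BiomR = BiomR, Spp = Spp, Capas = Capas,
             Litter = Litter, Herbs = Herbs, LitterD = LitterD))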
2007 Dec 08
0
help for segmented package
Hi, I am trying to find m breakpoints of a linear regression model using the segmented package. It works fine for a small number of predictors and breakpoints (3 variables, 3 breakpoints). However, my model has 14 variables and it does not work even for just one breakpoint. The error message is always "estimated breakpoints are out of range". Since my problem is time related, I
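The "estimated breakpoints are out of range" error from segmented() often traces back to poor starting values for the breakpoints. A minimal sketch (the base-model formula and the psi values are hypothetical, not from the post):

library(segmented)
base.fit <- lm(y ~ x1 + x2, data = d)              # hypothetical base model
seg.fit  <- segmented(base.fit, seg.Z = ~x1,
                      psi = list(x1 = c(10, 20)))  # supply in-range starting breakpoints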
2013 Apr 10
6
means in tables
Hi. I have 2 tables with the same dimensions (8000 x 5). Something like:
tab1:
V1    V2   V3   V4   V5
14.23 1.71 2.43 15.6 127
13.20 1.78 2.14 11.2 100
13.16 2.36 2.67 18.6 101
14.37 1.95 2.50 16.8 113
13.24 2.59 2.87 21.0 118
tab2:
V1   V2  V3  V4  V5
1.23 1.1 2.3 1.6 17
1.20 1.8 2.4 1.2 10
1.16 2.6 2.7 1.6 11
1.37 1.5 2.0 1.8 13
1.24 2.9 2.7 2.0 18
I need to generate a table of averages, the
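If both objects are numeric with identical dimensions, the element-wise average needs no loop; a minimal sketch:

tab.mean <- (tab1 + tab2) / 2   # element-wise mean of two 8000 x 5 tables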
2014 Sep 01
1
Correlation Matrix with a Covariate
R Help - I'm trying to run a correlation matrix with a covariate of "age", and at some point I will also want to covary other variables concurrently. I'm using the "psych" package and have tried other methods, such as writing a loop to extract semi-partial correlations, but it does not seem to be working. How can I accomplish this? library(psych) > set.cor(y =
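One option in the psych package is partial.r(), which partials covariates out of a correlation matrix; a hedged sketch (the data frame and variable names below are hypothetical):

library(psych)
# correlations among v1..v3 with age partialled out of all of them
partial.r(data = df, x = c("v1", "v2", "v3"), y = "age")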
2010 Jul 15
2
taking daily means from hourly data
I have a data frame (morgan) of hourly river flow, river levels and wind direction and speed, thus:
                 Time hour lev.morgan lev.lock2 lev.lock1   flow direction velocity
1 2009-07-06 15:00:00   15      3.266     3.274     3.240 1710.6   180.282    4.352
2 2009-07-06 16:00:00   16      3.268     3.272     3.240 1441.8   192.338    5.496
3 2009-07-06 17:00:00   17      3.268
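A minimal sketch of one way to collapse the hourly rows to daily means, assuming Time is (or can be coerced to) a date-time:

# one mean flow per calendar day; repeat for the other columns as needed
daily.flow <- aggregate(flow ~ as.Date(Time), data = morgan, FUN = mean)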
2012 Aug 14
1
bootstrapped CI for nonlinear models using nlsBoot from nlstools
Hi all, I'm trying to get confidence intervals for parameters from nls modelling. I fitted an nls model to the following variables:
> x
 [1]   2   1   1   5   4   6  13  11  13 101 101 101
> y
 [1]  1.281055090  1.563609934  0.001570796  2.291579783  0.841891853
 [6]  6.553951324 14.243274230 14.519899320 15.066473610 21.728809880
[11] 18.553054450 23.722637370
The model fitted was:
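Assuming fit is the nls object from that model, a minimal nlsBoot sketch:

library(nlstools)
boo <- nlsBoot(fit, niter = 999)  # resample residuals and refit the model
summary(boo)                      # bootstrap medians and 95% confidence intervals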
2007 Jan 03
1
User defined split function in Rpart
Dear all, I'm trying to work with the user-defined split function in rpart (file rpart\tests\usersplits.R in http://cran.r-project.org/src/contrib/rpart_3.1-34.tar.gz - see bottom of the email). Suppose we have the following data.frame (note that x's values are already sorted):
> D
   y     x
1  7 0.428
2  3 0.876
3  1 1.467
4  6 1.492
5  3 1.703
6  4 2.406
7  8 2.628
8  6 2.879
9  5 3.025
10 3 3.494
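For reference, a condensed sketch of the interface that usersplits.R demonstrates - three functions (init, eval, split) passed as the method argument; this version mimics anova-style splitting on a continuous x and omits the categorical branch:

library(rpart)
itemp <- function(y, offset, parms, wt) {        # initialisation
  if (!is.null(offset)) y <- y - offset
  list(y = y, parms = NULL, numresp = 1, numy = 1,
       summary = function(yval, dev, wt, ylevel, digits)
         paste("mean =", format(signif(yval, digits))))
}
etemp <- function(y, wt, parms) {                # node evaluation
  wmean <- sum(y * wt) / sum(wt)                 # node prediction
  list(label = wmean, deviance = sum(wt * (y - wmean)^2))
}
stemp <- function(y, wt, x, parms, continuous) { # split search
  n <- length(y)
  y <- y - sum(y * wt) / sum(wt)                 # center y within the node
  temp     <- cumsum(y * wt)[-n]                 # x arrives pre-sorted from rpart
  left.wt  <- cumsum(wt)[-n]
  right.wt <- sum(wt) - left.wt
  lmean <- temp / left.wt
  rmean <- -temp / right.wt
  goodness <- (left.wt * lmean^2 + right.wt * rmean^2) / sum(wt * y^2)
  list(goodness = goodness, direction = sign(lmean))
}
fit <- rpart(y ~ x, data = D, control = rpart.control(minsplit = 4),
             method = list(init = itemp, eval = etemp, split = stemp))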
2012 Aug 14
0
error using boxcox.nls during non linear estimation
Hi all, I've got a problem using the boxcox.nls function in the nlrwr package. I'm fitting several non-linear models to these data:
> x
 [1]   2   1   1   5   4   6  13  11  13 101 101 101
> y
 [1]  1.281055090  1.563609934  0.001570796  2.291579783  0.841891853
 [6]  6.553951324 14.243274230 14.519899320 15.066473610 21.728809880
[11] 18.553054450 23.722637370
I used the nls function with self
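A minimal sketch of the intended call, assuming a self-starting Michaelis-Menten model (the post's actual model is cut off); note that Box-Cox profiling needs strictly positive responses, and y[3] above is very close to zero:

library(nlrwr)
fit <- nls(y ~ SSmicmen(x, Vm, K))  # hypothetical self-starting fit
bc  <- boxcox.nls(fit)              # profile the Box-Cox lambda for a transform-both-sides fit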
2010 Aug 18
1
reading lmer table
Dear all, I'm quite new to R, and especially to linear mixed-effects models, and I'm not completely sure I am reading the lmer table in the right way. For example:
> head(march.f)
  fam subjID Cond  Code reg     total     first     second  log.total log.second cat
3   f     30   an fDan1   3 1.2304688 0.6679688 0.56250000 0.20739519 0.44628710   f
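For orientation, a hedged sketch of fitting and inspecting such a model (the formula is hypothetical, built from the column names above):

library(lme4)
m <- lmer(log.second ~ Cond + (1 | subjID), data = march.f)  # hypothetical model
summary(m)  # fixed effects: Estimate, Std. Error, t value; random effects: variances
fixef(m)    # fixed-effect estimates only
ranef(m)    # per-group deviations (BLUPs)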
2018 Jan 05
0
Calculating the correlations of nested random effects in lme4
I postulate the following model:
AC <- glmer(Accuracy ~ RT*Group + (1+RT|Group:subject) + (1+RT|Group:Trial), data = da, family = binomial, verbose = T)
Here I predict Accuracy from RT, Group (which has values 0 or 1) and the interaction of Group and RT (those are the fixed effects). I also estimate the random effects for both intercepts and slopes, for subjects and for different trials.
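The estimated random-effect variances and the intercept-slope correlations can then be read off with VarCorr(); a minimal sketch against the AC model above:

vc <- VarCorr(AC)
print(vc)          # SDs and correlations per grouping factor
as.data.frame(vc)  # long format: grp, var1, var2, vcov, sdcor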
2006 Nov 20
1
Proportional data with categorical explanatory variables
An embedded text with an undefined character set was scrubbed.
Name: not available
URL: https://stat.ethz.ch/pipermail/r-help/attachments/20061120/73240e63/attachment.pl
2019 May 13
0
[PATCH v2 0/8] vsock/virtio: optimizations to increase the throughput
On 2019/5/10 8:58 PM, Stefano Garzarella wrote:
> While I was testing this new series (v2) I discovered a huge use of memory
> and a memory leak in the virtio-vsock driver in the guest when I sent
> 1-byte packets to the guest.
>
> These issues have been present since the introduction of the virtio-vsock
> driver. I added patches 1 and 2 to fix them in this series in order
> to
2009 Jun 08
3
Plotting two regression lines on one graph
Hi! I have fitted two GLMs assuming a Poisson distribution:
fit1 <- glm(Aids ~ Year, data=aids, family=poisson())
fit2 <- glm(Aids ~ Year+I(Year^2), data=aids, family=poisson())
I am trying to work out how to draw the fitted regression curves of fit1 and fit2 on the same graph. I have tried:
graphics.off()
plot(Aids ~ Year, data = aids)
line(glm(Aids ~ Year,
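Note that line() (which fits a robust line through points) is not the plotting function; lines() is. A minimal sketch, allowing for aids$Year being unsorted:

o <- order(aids$Year)
plot(Aids ~ Year, data = aids)
lines(aids$Year[o], fitted(fit1)[o], col = "blue")  # fitted() is already on the response scale
lines(aids$Year[o], fitted(fit2)[o], col = "red")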
2019 May 10
18
[PATCH v2 0/8] vsock/virtio: optimizations to increase the throughput
While I was testing this new series (v2) I discovered a huge use of memory and a memory leak in the virtio-vsock driver in the guest when I sent 1-byte packets to the guest. These issues have been present since the introduction of the virtio-vsock driver. I added patches 1 and 2 to fix them in this series in order to better track the performance trends. v1:
2012 Nov 19
6
loop to subtract arrays / error
Hi everyone, I am having trouble creating a loop to subtract arrays. In R, this is what I have done:
> Vobsr <- read.csv("Observed_Flow.csv", header = TRUE, sep =",")     # see data below
> Vsimr <- read.csv("1000Samples_Vsim.csv", header = TRUE, sep =",")  # see data below
> Vobsr <- as.matrix(Vobsr[,-1])  # remove column 1 from
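If the goal is simply to subtract the observed column from every simulated column, no loop is needed; a hedged sketch assuming Vobsr ends up as a single column and Vsimr as an n x 1000 matrix:

Vsimr <- as.matrix(Vsimr[,-1])                   # mirror the column-1 removal
err   <- sweep(Vsimr, 1, as.vector(Vobsr), "-")  # row-wise: Vsimr[i, j] - Vobsr[i]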
2011 Oct 04
2
About stepwise regression problem
First of all, I have GAMs:
noxd <- gam(newNOX ~ pressure + maxtemp + s(avetemp, bs="cr") + s(mintemp, bs="cr") + s(RH, bs="cr") + s(solar, bs="cr") + s(windspeed, bs="cr") + s(transport, bs="cr"), family = gaussian(link=log), data = groupD, method = "REML")
Then I type summary(noxd), which shows:
Family: gaussian
Link function: log
Formula: newNO2 ~ pressure
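As an alternative to classical stepwise selection, mgcv can shrink whole smooth terms to zero during fitting; a minimal sketch that just adds select = TRUE to the call above:

library(mgcv)
noxd.sel <- gam(newNOX ~ pressure + maxtemp + s(avetemp, bs="cr") + s(mintemp, bs="cr") +
                  s(RH, bs="cr") + s(solar, bs="cr") + s(windspeed, bs="cr") +
                  s(transport, bs="cr"),
                family = gaussian(link=log), data = groupD,
                method = "REML", select = TRUE)  # double-penalty shrinkage can drop terms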
2016 Jan 12
3
[PATCH 3/4] x86,asm: Re-work smp_store_mb()
On Mon, Nov 02, 2015 at 04:06:46PM -0800, Linus Torvalds wrote:
> On Mon, Nov 2, 2015 at 12:15 PM, Davidlohr Bueso <dave at stgolabs.net> wrote:
> >
> > So I ran some experiments on an IvyBridge (2.8GHz) and the cost of XCHG is
> > consistently cheaper (by at least half the latency) than MFENCE. While there
> > was a decent amount of variation, this difference