similar to: data messed up by read.table ? (PR#9779)

Displaying 20 results from an estimated 400 matches similar to: "data messed up by read.table ? (PR#9779)"

2007 Aug 28
1
subcripts on data frames (PR#9885)
I'm not sure if this is a bug, or if I'm doing something wrong. From the worms dataframe, which is in a file called worms.txt at http://www.imperial.ac.uk/bio/research/crawley/therbook <http://www.imperial.ac.uk/bio/research/mjcraw/therbook/index.htm> the idea is to extract a subset of the rows, sorted in declining order of worm density, with only the maximum
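A possible approach (not from the original post), assuming worms.txt is a whitespace-delimited file with a header row and a column called Worm.density (the column name is an assumption):

worms <- read.table("worms.txt", header = TRUE)
# rows sorted in declining order of worm density
worms.sorted <- worms[order(worms$Worm.density, decreasing = TRUE), ]
# keep only the rows at the maximum density
subset(worms.sorted, Worm.density == max(Worm.density))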
2006 Jul 19
1
Random structure of nested design in lme
All, I'm trying to analyze the results of a reciprocal transplant experiment using lme(). While I get the error term right in aov(), in lme() it appears impossible to get as expected. I would be grateful for any help. My experiment aimed to identify whether two fixed factors (habitat type and soil type) affect the development of plants. I took soil from six random sites each of two types
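One way such a design is often written, sketched here under heavy assumptions (the data frame, column names, and response below are invented purely so the call is runnable; whether site alone captures the intended error term is exactly the open question in the thread):

library(nlme)
set.seed(1)
plants <- expand.grid(site = paste0("s", 1:12), rep = 1:5)
plants$soil <- ifelse(plants$site %in% paste0("s", 1:6), "A", "B")  # site nested in soil type
plants$habitat <- sample(c("wet", "dry"), nrow(plants), replace = TRUE)
plants$growth <- rnorm(nrow(plants))
m <- lme(growth ~ habitat * soil,   # two fixed factors and their interaction
         random = ~ 1 | site,       # random intercept for site
         data = plants)
summary(m)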
2003 Jul 22
1
Making a group membership matrix
Hi Helpers: I have a factor object that has 314k entries of 39 land cover types. (This object can be coerced to characters neatly should that be easier to work with.)

> length(foo)
[1] 314482
> foo[1:10]
 [1] Montane Chaparral Barren  Red Fir Red Fir
 [5] Red Fir Red Fir Red Fir Red Fir
 [9] Red Fir Red Fir
39 Levels:
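A minimal sketch of one way to build a group membership matrix from such a factor (the toy factor below just mimics the structure described):

foo <- factor(c("Montane Chaparral", "Barren", rep("Red Fir", 8)))
membership <- model.matrix(~ foo - 1)   # one 0/1 indicator column per level
colnames(membership) <- levels(foo)
head(membership)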
2010 Oct 24
1
best predictive model for mixed categorical/continuous variables
Would anybody be able to advise on which package would offer the best approach for producing a model able to predict the probability of species occupation based upon a range of variables, some of them categorical (e.g. ten soil types, where the numbers assigned are not related to any qualitative/quantitative continuum, or vegetation type) and others continuous, such as field size or vegetation height.
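One commonly suggested option (a sketch, not a recommendation from the thread) is a random forest, which handles factor and numeric predictors together and returns class probabilities; the data frame and column names below are made up for illustration:

library(randomForest)
set.seed(42)
dat <- data.frame(occupied   = factor(sample(c("yes", "no"), 200, replace = TRUE)),
                  soil       = factor(sample(1:10, 200, replace = TRUE)),  # arbitrary codes as factors
                  field.size = runif(200, 1, 20),
                  veg.height = runif(200, 0, 2))
fit <- randomForest(occupied ~ soil + field.size + veg.height, data = dat, ntree = 500)
head(predict(fit, newdata = dat, type = "prob"))  # class probabilities (in-sample, just to show the call)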
2006 Aug 23
0
Random structure of nested design in lme
Why are the results not reliable?

From: ESCHEN Rene [mailto:rene.eschen@unifr.ch]
Sent: Wednesday, August 23, 2006 3:48 AM
To: Spencer Graves; r-help@stat.math.ethz.ch
Cc: Doran, Harold
Subject: RE: [R] Random structure of nested design in lme

The output of the suggested lmer model looks very similar to the output of aov, also when I ran the model
2009 Feb 07
3
Re-post data format question (apologies)
Hello all, I have a *.csv file that looks like this (actual file is orders of magnitude larger):

Site   taxa no.ind
forest LMA  1
forest LCY  1
forest SCO  1
meadow LMA  2
meadow LCY  1
meadow PNT
2006 Feb 26
2
subtotal, submean, aggregate
Dear All, I would like to make partial sums (or means or any other function) of the values in intervals along a sequence (spatial transect) where groups are defined. For instance:

habitats <- rep(c("meadow","forest","meadow","pasture"), c(10,5,12,6))
observations <- rpois(length(habitats), 2)
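A small base-R sketch: label each contiguous run of habitat along the transect with rle() and sum within those intervals (so the two meadow stretches stay separate); tapply() on the habitat itself pools them instead.

habitats <- rep(c("meadow", "forest", "meadow", "pasture"), c(10, 5, 12, 6))
observations <- rpois(length(habitats), 2)
runs <- rle(habitats)
run.id <- rep(seq_along(runs$lengths), runs$lengths)  # 1 1 ... 2 2 ... 3 ... 4
tapply(observations, run.id, sum)    # subtotal for each interval along the transect
tapply(observations, habitats, sum)  # totals pooled by habitat type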
2003 Jul 21
1
Setting name attributes to a vector - join?
I have a vector of land cover class data from a GIS:

> landcov[1:10,2]   # the first ten examples of a large (100k+) object
 1  2  3  4  5  6  7  8  9 10
12 12 01 12 01 15 15 15 15 15
etc.

I also have a lookup table for the class data that gives the cover type as a string:

> cnd.names   # the look-up table, i.e., landcov[3,2] == 1 == "Montane Meadow"
CndVal
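A hedged sketch of the usual match()-based lookup; the lookup table's second column name (CndName) and the cover names for codes 12 and 15 are invented, and the codes are matched as characters to cope with values like "01":

cnd.names <- data.frame(CndVal  = c("01", "12", "15"),
                        CndName = c("Montane Meadow", "Mixed Conifer", "Red Fir"),
                        stringsAsFactors = FALSE)   # toy lookup table
codes <- c("12", "12", "01", "12", "01", "15", "15", "15", "15", "15")
covername <- cnd.names$CndName[match(codes, cnd.names$CndVal)]
names(codes) <- covername   # attach the cover-type strings as a name attribute
codes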
2004 Dec 18
4
variables - data-structure
Dear R-friends, I've got a large dataset of vegetation samples with about 500 variables (= species) in the following format:

1 spec1
1 spec23
1 spec54
1 spec63
2 spec1
2 spec2
2 spec253
2 spec300
2 spec423
3 spec20
3 spec88
3 spec121
3 spec200
3 spec450
.
.

This means: sample 1 (grassland) contains the species (= spec) 1, 23, 54, 63. Is it possible to get the following data structure for further
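A minimal sketch of turning that long sample/species listing into a samples x species presence-absence table, assuming the two columns are read into a data frame (the values below are just the first lines of the post):

veg <- data.frame(sample  = c(1, 1, 1, 1, 2, 2, 2, 2, 2),
                  species = c("spec1", "spec23", "spec54", "spec63",
                              "spec1", "spec2", "spec253", "spec300", "spec423"))
pa <- (table(veg$sample, veg$species) > 0) * 1   # 0/1 presence-absence matrix
pa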
2011 Dec 16
1
Model design
Dear List, I am relatively inexperienced, so I apologise in advance and ask for understanding in the simplicity of my question: I have data on the amount of grass per km in a cell (of which I have lots), "grass", and for each cell I have x/y coordinates - required due to spatial autocorrelation. Cells can be classified in a hierarchical nature into AREAS and STATES, i.e. Cell 1, Cell 2,
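One possible starting point, sketched under strong assumptions (the data frame, column names, and values below are invented so the call runs; nlme's corExp is mentioned only as an example of a spatial correlation structure, not as the answer from the thread):

library(nlme)
set.seed(1)
cells <- data.frame(STATE = rep(c("S1", "S2"), each = 50),
                    AREA  = rep(paste0("A", 1:10), each = 10),  # areas nested in states
                    x = runif(100), y = runif(100),
                    grass = rgamma(100, shape = 2))
m <- lme(grass ~ 1, random = ~ 1 | STATE/AREA, data = cells)   # cells in areas in states
summary(m)
# spatial autocorrelation on the coordinates could then be added via, e.g.,
# update(m, correlation = corExp(form = ~ x + y))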
2011 Nov 11
6
need help
Hello all R experts, how do I calculate the reliability between the two groups using ICCs? I'd appreciate your reply. Thanks Sincerely, Supreet Kaur, Biomedical research engineer, Nationwide Childrens Hospital, Columbus, OH (614)355-3509 [[alternative HTML version deleted]]
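A hedged sketch using the psych package, assuming the two groups' measurements form paired columns of a small data frame (the numbers are invented):

library(psych)
ratings <- data.frame(group1 = c(9, 6, 8, 7, 10, 6),
                      group2 = c(8, 5, 9, 6, 10, 7))
ICC(ratings)   # reports several ICC variants (ICC1, ICC2, ICC3 and their averages)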
2011 Feb 18
1
Using Weights in R
I am new to R. I have a data set like this (given below is a fictional dataset):

AgeCat FINWT
1      98
2      62
1      75
3      39
4      28
2      47
2      66
4      83
1      19
3      50

I need to calculate the weighted distribution of the variable AgeCat. In SAS I can do:

proc freq data=ageval;
  tables agecat;
  weight finwt;
run;

Is there an equivalent in R? TIA, Krishnan -- Krishnan Viswanathan 1101 High Meadow Dr
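A base-R sketch of the same computation, using the data shown above: xtabs() with the weight on the left-hand side gives weighted frequencies, and prop.table() turns them into the weighted distribution.

ageval <- data.frame(AgeCat = c(1, 2, 1, 3, 4, 2, 2, 4, 1, 3),
                     FINWT  = c(98, 62, 75, 39, 28, 47, 66, 83, 19, 50))
wtab <- xtabs(FINWT ~ AgeCat, data = ageval)  # weighted counts per AgeCat
prop.table(wtab)                              # weighted proportions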
2011 Jun 01
1
git push heroku master - has error
I am trying to put my app on Heroku, following the instructions. When I get here I get this error:

$ git push heroku master
Enter passphrase for key '/c/Users/Laurence/.ssh/id_rsa':
Counting objects: 277, done.
Delta compression using up to 2 threads.
Compressing objects: 100% (246/246), done.
Read from remote host heroku.com: The connection was aborted
fatal: sha1 file
2003 Oct 17
1
netbios and samba
Hi all Bear with me on this one... We have a problem connecting to a specific port via Oracle TNSPING. There are two PCs in our office that are on the same network as the server (i.e. my desktop PC cannot get to the server). The server has multiple network interfaces; the primary is a gigabit fibre card on IP xx.16, the secondary is a megabit card on IP xx.80. The tnsping command fails
2009 Sep 07
1
How to reduce memory demands in a function?
I've written a function that regularly throws the "cannot allocate vector of size X Kb" error, since it contains a loop that creates large numbers of big distance matrices. I'd be very grateful for any simple advice on how to reduce the memory demands of my function. Besides increasing memory.size to the maximum available, I've tried reducing my "dist" objects to 3
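Without seeing the function, a generic sketch of the usual memory hygiene in such loops: keep only one large distance matrix alive at a time, store just the small summaries you need, and free the big object before the next iteration (the data here are made up for illustration):

subsets <- replicate(5, matrix(rnorm(2000), ncol = 4), simplify = FALSE)
results <- numeric(length(subsets))
for (i in seq_along(subsets)) {
  d <- dist(subsets[[i]])   # potentially large "dist" object
  results[i] <- mean(d)     # keep only the summary, not the whole matrix
  rm(d)                     # drop the big object ...
  gc()                      # ... and return the memory before the next pass
}
results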
2012 Apr 03
1
Compare by row and insert previous row value (Or non Time Series Lag)
I have the following sample dataset (CSV input here: http://goo.gl/YR8LP. CSV output here: http://goo.gl/EFCC8) which I want to transform as follows. For each person in a household I want to create two new variables, OrigTAZ and DestTAZ. It should take the value in TripendTAZ and put that in DestTAZ. For OrigTAZ it should put the value of TripendTAZ from the previous row. For the first trip of every
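A hedged base-R sketch using ave(): within each household/person the previous row's TripendTAZ becomes OrigTAZ (the id column names are guesses, and the sample values are not taken from the linked files):

trips <- data.frame(hhid = c(1, 1, 1, 1), persid = c(1, 1, 2, 2),
                    TripendTAZ = c(101, 205, 101, 330))
trips$DestTAZ <- trips$TripendTAZ
trips$OrigTAZ <- ave(trips$TripendTAZ, trips$hhid, trips$persid,
                     FUN = function(x) c(NA, head(x, -1)))  # previous row's TAZ within person
trips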
2017 Dec 01
1
Timezone problem with 3.4.2
From Peter Dalgaard's announcement earlier today:

CHANGES IN R 3.4.3:

INSTALLATION on a UNIX-ALIKE:

* A workaround has been added for the changes in location of time-zone files in macOS 10.13 'High Sierra' and again in 10.13.1, so the default time zone is deduced correctly from the system setting when R is configured with --with-internal-tzcode (the default on
2011 Jan 28
1
Help with ape - read.GenBank()
Hi, I am trying to work with the ape package, and there is one thing I am struggling with. When calling the read.GenBank() function, I can get it to work with an object created like this:

> x <- c("AY395554", "AY611035", ...)
> read.GenBank(x)

However, I am trying to use the function to fetch several hundred sequences at once. So I have been testing with small
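A hedged sketch of fetching a long accession vector in smaller batches (the chunk size of 50 is arbitrary, and whether the batches can be recombined with c() depends on the ape version):

library(ape)
x <- c("AY395554", "AY611035")                 # ... in practice several hundred accessions
chunks <- split(x, ceiling(seq_along(x) / 50)) # batches of up to 50
seqs <- lapply(chunks, read.GenBank)           # one DNAbin object per batch
all.seqs <- do.call(c, seqs)                   # combine the batches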
2017 Dec 01
0
Timezone problem with 3.4.2
Mark, thanks for pointing this out. I did a default installation of R. Does this mean that I need to reinstall from the command line? Dennis Dennis Fisher MD P < (The "P Less Than" Company) Phone / Fax: 1-866-PLessThan (1-866-753-7784) www.PLessThan.com <http://www.plessthan.com/> > On Nov 30, 2017, at 6:42 PM, R. Mark Sharp <rmsharp at me.com> wrote:
2003 Feb 19
3
trying to get better ogg quality for this clip
Hi folks, in my (unlucky) first test of ogg vs other encoders, I found a case where wma and mp3pro sound much better than ogg at 64k. Can anyone suggest a setting that I haven't tried yet that can rival the wma and mp3pro samples at 64k? It's the "gravel effect" that is troublesome. The part in question is the first 15 seconds of this wave file: