Displaying 20 results from an estimated 79 matches for "dilut".
2003 Oct 04
2
mixed effects with nlme
Dear R users:
I have some difficulties analyzing data with mixed effects (nlme) and the
latest version of R. More concretely, I have a repeated-measures design with
a single group and 2 experimental factors (say A and B) and my interest is
to compare additive and nonadditive models.
suj rv A B
1 s1 4 a1 b1
2 s1 5 a1 b2
3 s1 7 a1 b3
4 s1 1 a2
2012 Jul 24
2
limit of detection (LOD) by logistic regression
Dear all,
I am trying to apply the logistic regression to determine the limit of
detection (LOD) of a molecular biology assay, the polymerase chain reaction
(PCR). The aim of the procedure is to identify the value (variable
"dilution") that determine a 95% probability of success, that is
"positive"/"total"=0.95. The procedure I have implemented seemed to work
looking at the figure obtained from the sample set 1; however the figure
obtained from the sample set 2 shows that interpolation is not correct...
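A minimal sketch of the general approach, assuming columns named positive, total and dilution with invented values (the post's own data is not shown), using glm() and MASS::dose.p() to invert the fit at p = 0.95:
library(MASS)                       # dose.p() for inverse prediction
dat <- data.frame(dilution = c(1, 0.5, 0.25, 0.125, 0.0625),
                  positive = c(24, 22, 15, 6, 1),
                  total    = rep(24, 5))
fit <- glm(cbind(positive, total - positive) ~ log10(dilution),
           family = binomial, data = dat)
xp <- dose.p(fit, p = 0.95)         # estimate on the log10(dilution) scale
10^as.numeric(xp)                   # back-transformed dilution giving 95% positives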
2007 Jul 30
2
problems saving and loading (PLMset) objects
...test R on a presumably up-to-date Linux server.
'Doing something silly I'm sure, but can't see why my saved PLMset objects
come out all wrong. To use an example:
Setting up an example PLMset (I have the same problem no matter what example
I use)
> library(affyPLM)
> data(Dilution) # affybatch object
> Dilution = updateObject(Dilution)
> options(width=36)
> expr <- fitPLM(Dilution)
This works, and I'm able to get the probeset coefficients with coefs(expr),
until I save and try reloading:
> save(expr, file="expr.RData")
> rm(expr...
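For what it's worth, one common pitfall with saved S4 objects is reloading them in a session where the defining package is not yet attached; a minimal sketch of the reload step, assuming the same file name as above:
library(affyPLM)            # class definitions should be available before load()
load("expr.RData")          # restores the object 'expr'
head(coefs(expr))           # probeset coefficients, as before saving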
2006 Feb 07
0
lme and Assay data: Test for block effect when block is systematic - anova/summary goes wrong
Consider the Assay data, where block, sample within block, and dilut within block are random.
This model can be fitted with (where I define Assay2 to get an ordinary data frame rather
than a grouped data object):
Assay2 <- as.data.frame(Assay)
fm2<-lme(logDens~sample*dilut, data=Assay2,
random=list(Block = pdBlocked(list(pdIdent(~1), pdIdent(~sample-1),pdId...
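For readability, a sketch of the full form of this random-effects specification (the same pdBlocked structure also appears verbatim in the update.lme post further down this page); the final pdIdent(~dilut - 1) term completes what the excerpt truncates:
library(nlme)
data(Assay)
Assay2 <- as.data.frame(Assay)
fm2 <- lme(logDens ~ sample * dilut, data = Assay2,
           random = list(Block = pdBlocked(list(pdIdent(~1),
                                                pdIdent(~sample - 1),
                                                pdIdent(~dilut - 1)))))
anova(fm2)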
2005 Apr 11
2
How to calculate the AUC in R
Hello R-listers,
I'm working on an experiment that tries to determine the degree of
infection of different clones of a fungus. One of the measures we
use to determine this degree is the count of antibodies in the
plasma at different dilutions; in this experiment the maximum number of
dilutions was eleven. I have already checked for differences in the maximum
concentration of the antibodies as a function of each clone of the fungus.
However, one measure of interest is the area under the curve (AUC) for
the count of antibodies in funct...
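As a side note, a minimal sketch of a trapezoidal-rule AUC in R over the dilution series, with made-up counts (the post's actual data is not shown):
# Trapezoidal-rule AUC for one clone, invented data
dilution <- 1:11
count    <- c(120, 110, 95, 80, 60, 45, 30, 20, 12, 6, 2)
auc <- sum(diff(dilution) * (head(count, -1) + tail(count, -1)) / 2)
auc
# The same idea could be applied per clone with tapply() or by().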
2002 Oct 05
2
R-1.6.0 and R CMD check
Hi,
I upgraded to R-1.6.0 and R CMD check is behaving a bit weird.
The package I am checking cannot make it through because of
errors like
> ##___ Examples ___:
>
> data(Dilution)
> hist(Dilution[,1])
Error in if (log == T) { : missing value where logical needed
Execution halted
while the function called in the example defined in the .Rd file
has a signature in which 'log=T' is stated...
If I comment out this example I end up with another similar erro...
2003 May 12
1
update.lme trouble (PR#2985)
Try this
data(Assay)
as1 <- lme(logDens~sample*dilut, data=Assay,
random=pdBlocked(list(
pdIdent(~1),
pdIdent(~sample-1),
pdIdent(~dilut-1))))
update(as1,random=pdCompSymm(~sample-1))
update(as1,random=pdCompSymm(~sample-1))
update(as1,random=pdCompSymm(~sample-1))
update(as1,...
2012 Oct 04
1
data structure for plsr
...for plsr" posts. I have
spectroscopic data I'd like to run through a PLSR and have read the
tutorial series, but still do not understand the data format required for
the code to process my data. My current data structure consists of a .csv
file read into R containing 15 columns (a charcoal dilution series going
from 100% to 0%) and 1050 rows of absorbance data from 400 nm to 2500 nm at
2 nm intervals. I think I need to transpose the data such that the specific
wavelengths become my columns and dilutions are defined in rows, but after
that point I am lost. Should I (and how do I) make my ab...
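A hedged sketch of one way such a file could be reshaped for pls::plsr(), with the file name, header layout, and dilution levels all assumed rather than taken from the post:
library(pls)
spec <- read.csv("charcoal.csv")                 # 1050 rows x 15 columns (assumed)
X <- t(as.matrix(spec))                          # 15 rows (dilutions) x 1050 columns
colnames(X) <- seq(400, by = 2, length.out = ncol(X))          # wavelengths in nm
d <- data.frame(dilution = seq(100, 0, length.out = nrow(X)))  # assumed response, %
d$spectra <- X                                   # matrix column, as plsr() expects
fit <- plsr(dilution ~ spectra, ncomp = 10, data = d, validation = "CV")
summary(fit)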
2024 Jan 30
2
Use of geometric mean for geochemical concentrations
Dear Rich,
It depends on how the data are generated.
Although I am not an expert in ecology, I can explain it based on a biomedical example.
Certain variables are generated geometrically (exponentially), e.g. MIC or Titer.
MIC = Minimum Inhibitory Concentration for bacterial resistance
Titer = dilution which still has an effect, e.g. serially diluting blood samples;
Obviously, diluting the samples will generate the following concentrations:
1, 1/2, 1/4, 1/8, 1/16, ...
(or the reciprocal: 1, 2, 4, 8, 16, ...)
It makes no sense to compute the arithmetic mean. Results are usually reported as so...
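For concreteness, the usual computation in R is exp(mean(log(x))); a tiny sketch on the reciprocal titers above:
x <- c(1, 2, 4, 8, 16)
exp(mean(log(x)))   # geometric mean = 4, i.e. the middle dilution step
mean(x)             # arithmetic mean = 6.2, pulled up by the largest value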
2006 Jan 03
3
Package for multiple membership model?
Hello all:
I am interested in computing what the multilevel modeling literature calls a multiple membership model. More specifically, I am working with a data set involving clients and providers. The clients are the lower-level units who are nested within providers (higher-level). However, this is not nesting in the usual sense, as clients can belong to multiple providers, which I understand
2008 Dec 09
1
creating standard curves for ELISA analysis
...orbance values. I usually use Excel to calculate the
concentrations of unknowns, but it is too tedious and manual, especially when
I have 100's of files to process. I would appreciate some help in creating
an R script to do this with minimal manual input. A1-G1 and A2-G2 are
standards, serially diluted; H1 and H2 are blanks. A3 to H12 are serum
samples. I am pasting the structure of my data below:
A1 14821
B1 11577
C1 5781
D1 2580
E1 902
F1 264
G1 98
H1 4
A2 14569.5
B2 11060
C2 5612
D2 2535
E2 872
F2 285
G2 85
H2 3
A3 1016
B3 2951.5
C3 547
D3 1145
E3 4393
F3 4694
G3 112...
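One possible approach, sketched with invented standard concentrations (the post lists only absorbances): interpolate the unknowns against the standards on a log-log scale. A four-parameter logistic fit (e.g. via nls() or the drc package) would be the more conventional ELISA model, but the interpolation shows the mechanics:
# Standards in wells G1..A1 (concentrations 'conc' are invented)
std <- data.frame(conc = c(15.6, 31.25, 62.5, 125, 250, 500, 1000),
                  od   = c(98, 264, 902, 2580, 5781, 11577, 14821))
unknown_od <- c(1016, 2951.5, 547)                  # e.g. wells A3, B3, C3
est <- approx(x = log(std$od), y = log(std$conc), xout = log(unknown_od))$y
exp(est)                                            # estimated concentrations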
2009 Jun 15
2
[LLVMdev] unwind/invoke design
...unwind the stack down to the nearest 'unwind
label' of an invoke. Thread-local storage, a global variable, or any
other approach one might think of to carry the exception information
is a frontend policy decision. Adding this support to 'unwind'
complicates its implementation and also dilutes its usefulness.
2003 Nov 10
8
Memory issues..
Hi dear R-listers, I'm trying to fit a 3-level model using lme in R. My
sample size is about 2965, with 3 factors:
year (5 levels), ssize (4 levels), condition (2 levels).
When I issue the following command:
>
lme(var~year*ssize*condition,random=~ssize+condition|subject,data=smp,method
="ML")
I got the following error:
Error in logLik.lmeStructInt(lmeSt, lmePars) :
2011 Feb 26
2
Reproducibility issue in gbm (32 vs 64 bit)
Dear List,
The gbm package on Win 7 produces different results for the
relative importance of input variables in 32-bit R than in 64-bit R. Any
idea why? Any idea which one is correct?
Based on this example, it looks like the relative importance of 2 perfectly
correlated predictors is "diluted" by half in 32-bit, whereas in 64-bit, one
of these predictors gets all the importance and the other gets none. I found
this interesting.
### Sample code
library(gbm)
set.seed(12345)
xc=matrix(rnorm(100*20),100,20)
y=sample(1:2,100,replace=TRUE)
xc[,2] <- xc[,1]
gbmfit <- gbm(y~xc[,1...
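Since the posted call is cut off, here is a hedged reconstruction of how such a comparison might be run end to end; the formula/data wiring, 0/1 recoding of y, and tuning values are assumptions, not the poster's exact code:
library(gbm)
set.seed(12345)
xc <- matrix(rnorm(100 * 20), 100, 20)
y  <- sample(0:1, 100, replace = TRUE)       # 0/1 outcome for "bernoulli"
xc[, 2] <- xc[, 1]                           # two perfectly correlated predictors
d  <- data.frame(y = y, xc)
gbmfit <- gbm(y ~ ., data = d, distribution = "bernoulli",
              n.trees = 500, interaction.depth = 2)
summary(gbmfit, plotit = FALSE)              # relative influence per predictor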
2006 Jan 17
1
Rails too Active?
I feel the need to protest about a disturbing trend in the vibrant RoR
community - name dilution.
ActiveRecord is called that precisely because it is that. The name comes
from Martin Fowler, and it expresses a class which is a database record,
only _active_ - that is, with methods & behaviors (unlike a classical
database record, which is completely passive). If you look in the
Acti...
2006 Mar 23
1
PCA, Source analysis and Unmixing, environmental forensics
...al
source or a mixture of sources based on the composition (unmixing and
source allocation). Typically there are 10 to 50 chemicals that have
been analyzed in all of the samples. In most cases concentrations are
converted to proportion of total as we are interested in composition
rather than simple dilution.
I have had great success with ratio analysis; simple exploratory
analysis such as property-property plots, etc.; and cluster analysis such
as principal components analysis (PCA) and hierarchical cluster analysis.
I have also been experimenting with glyph representation, k-means
clusters, and simila...
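A minimal sketch of the "proportion of total" conversion followed by PCA, using invented concentration data:
# Convert concentrations to proportions of the sample total (removing
# overall dilution), then run PCA on the resulting composition.
set.seed(1)
conc <- matrix(rexp(30 * 12), nrow = 30,
               dimnames = list(NULL, paste0("chem", 1:12)))
prop <- conc / rowSums(conc)          # each row now sums to 1
pca  <- prcomp(prop, scale. = TRUE)
summary(pca)
biplot(pca)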
2010 Aug 03
1
Garbled messages - format_wav_gsm.c:414 wav_read: Short read (60) (No such file or directory)!
...the time, on rare occasions
it will work fine - rare enough that I can't pin down what it is that works.
The problem is that voicemail messages get played back garbled.
Occasionally, I can make out moments of a voice or another sound that may
be in the actual message, though it's far too diluted with garbling and
chirps to detect any words or phrases. When running asterisk -r, I will get
a message on the console after the system completes playback of the file:
format_wav_gsm.c:414 wav_read: Short read (60) (No such file or directory)!
I am running under a linux virtual server, and her...
2010 Apr 25
1
R for Engineering (Mechanical, Industrial , Civil, etc.)
...mpositions, and such) and other
visualisations/computations that are required for such studies.
If any R user has employed R to teach engineering courses (which do
not require much statistics), I would highly appreciate your feedback
and insights gained from such an undertaking.
I am not trying to dilute R's primary focus as a statistical
tool, but I would like to make R available to an audience that does not
deal with a lot of statistics.
Of course, there are other tools for engineering drawing, circuitry
design and such others, but maybe there is a niche area (somewhere in
between core e...
2007 Aug 30
2
Q: Mean, median and confidence intervals with functions "summary" & "boxplot.stats"
An embedded text with an unspecified character set was
scrubbed from the message ...
Name: not available
URL: https://stat.ethz.ch/pipermail/r-help/attachments/20070830/e557d2a7/attachment.ksh
2016 Mar 31
0
RFC: Large, especially super-linear, compile time regressions are bugs.
...t the commit brings compared to the compile-time slowdown.
This is a fallacy.
Compile time often regresses across all targets, while execution
improvements are focused on specific targets and can have negative
effects on those that were not benchmarked. Overall, though,
compile time regressions are diluted by the improvements, but not on a
commit-per-commit basis. That's what I meant by witch hunt.
I think we should keep an eye on those changes, ask for numbers in
code review, and maybe even do some benchmarking of our own before
accepting them. Also, we should not commit code that we know hurts...