Displaying 20 results from an estimated 10759 matches for "adjustment".
2006 Apr 04
1
Can't receive Fax: No carrier detected - Asterisk + iaxmodem + HylaFax --- sorry, wrong log.
I'm able to receive faxes with pure SpanDSP 0.0.2 + Asterisk successfully,
but I have problems with some fax machines, so I wanted to try using
HylaFax to receive faxes instead of SpanDSP, hoping that it'll solve my
problem. I'm trying to connect Asterisk with SpanDSP using iaxmodem. My
system looks like this:
ISDN <---> Asterisk <---> IAXModem <---> Hylafax
Asterisk and
2006 Apr 04
0
Can't receive Fax: No carrier detected - Asterisk + iaxmodem + HylaFax
I'm able to receive faxes with pure SpanDSP 0.0.2 + Asterisk successfully,
but I have problems with some fax machines, so I wanted to try using
HylaFax to receive faxes instead of SpanDSP, hoping that it'll solve my
problem. I'm trying to connect Asterisk with SpanDSP using iaxmodem. My
system looks like this:
ISDN <---> Asterisk <---> IAXModem <---> Hylafax
Asterisk
2018 Mar 15
1
Adjusting OHLC data via quantmod
...ust my data.
My overarching goal is to adjust my OHLC data appropriately to minimize the
difference between my backtest returns, and the returns I would get if I
was trading for real (which I'll be doing shortly).
Background:
1. I'm using Alpha Vantage's data, and quantmod's data adjustment tools.
2. I used Joshua Ulrich's DataCamp guidance (
https://campus.datacamp.com/courses/importing-and-managing-financial-data-in-r/importing-text-data-and-adjusting-for-corporate-actions?ex=10)
(and quantmod documentation) to determine how Alpha Vantage's data is
adjusted.
Here are my fi...
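A minimal sketch of the kind of workflow this post describes, assuming a quantmod version with Alpha Vantage support; the ticker, API key, and options are placeholders, not the poster's actual setup:

library(quantmod)

# Pull daily data from Alpha Vantage (placeholder ticker and key).
getSymbols("SPY", src = "av", api.key = "YOUR_KEY",
           output.size = "full", adjusted = TRUE)

# Back-adjust Open/High/Low/Close for splits and dividends.
# use.Adjusted = TRUE rescales from the vendor's Adjusted column
# instead of downloading split/dividend series separately.
SPY.adj <- adjustOHLC(SPY, use.Adjusted = TRUE)

head(SPY.adj)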
2012 Apr 30
5
Different variable lengths
Hi!
I'm trying to do a lm() test on three objects. My problem is that R protests
and says that the variable lengths differ for one of the objects
(Sweden.GDP.gap). But I have double-checked that the number of observations
is the same. All three objects should contain 9 observations, but R only
accepts 9 observations in two of the objects. The third must have 10! Very
confusing because there
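The usual way to run this error down (object names below are hypothetical, echoing the post) is to check the lengths directly and then put everything into one data frame, which makes a mismatch or stray observation show up immediately:

# Hypothetical reconstruction: one object has a stray 10th observation.
y  <- rnorm(9)
x1 <- rnorm(9)
Sweden.GDP.gap <- rnorm(10)

sapply(list(y = y, x1 = x1, Sweden.GDP.gap = Sweden.GDP.gap), length)

# Building one data frame lets lm() see consistent lengths and makes
# any NA handling explicit.
d <- data.frame(y = y, x1 = x1, gap = Sweden.GDP.gap[1:9])
fit <- lm(y ~ x1 + gap, data = d)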
2010 Aug 07
4
basic question about t-test with adjusted p value
I have read the R manual and help archives, sorry but I'm still stuck.
How would I do a t-test with an adjusted p-value?
Suppose that I use t.test(), with the function argument alternative =
"two.sided", and data such that degrees of freedom = 20. The function
calculates a t-statistic of 2.086, and p-value = 0.05.
How do I then adjust the p-value? My thought is to do
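A hedged sketch of what "adjusting" usually means here: the correction is applied to the collection of p-values from several tests, not inside t.test() itself. The data below are made up:

# Raw p-values from several two-sided t-tests on made-up data.
set.seed(1)
pvals <- replicate(5, t.test(rnorm(11), rnorm(11))$p.value)

# p.adjust() applies the chosen multiple-testing correction to the
# whole vector of p-values.
p.adjust(pvals, method = "bonferroni")
p.adjust(pvals, method = "holm")

# For a single p-value of 0.05 from k tests, the Bonferroni-adjusted
# value is simply min(1, 0.05 * k).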
2012 Apr 29
3
Sieve doesn't find user scripts
Hi,
I want to use Sieve filtering with my Dovecot 1.2 installation on Debian
squeeze. I have a virtual domain setup using PostgreSQL.
ManageSieve works fine so far, I can edit and activate/deactivate scripts (using
Thunderbird + Plugin) and they show up in the filesystem where I expect them to
be, see below.
The problem is that LDA doesn't find the script. From
/var/log/dovecot-deliver.log:
2007 Mar 16
1
Probably simple function problem
# I have a simple function problem. I thought that I
could write a function to modify a couple of vectors
but I am doing something wrong
#I have a standard cost vector called "fuel" and some
adjustments to the
#costs called "adjusts". The changes are completely
dependent on the length
#of the dataframe newdata. I then need to take the
modified vectors and use
# them later. I need to do this several times and the
only change in the variables
# is the length of the data.frame.
# Can anyo...
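The post is cut off, so this is only a guess at the shape of the problem, but the usual R idiom is to have the function return the modified vector rather than change it in place. The names below mirror the post; the logic is hypothetical:

# Hypothetical sketch: adjust "fuel" by "adjusts" for as many rows as
# newdata has, and return the modified vector.
adjust_costs <- function(fuel, adjusts, newdata) {
  n <- nrow(newdata)
  fuel[seq_len(n)] <- fuel[seq_len(n)] + adjusts[seq_len(n)]
  fuel
}

fuel    <- c(10, 12, 15, 18, 20)
adjusts <- c(1, -2, 0.5, 3, -1)
newdata <- data.frame(x = 1:3)      # only the number of rows matters here

new.fuel <- adjust_costs(fuel, adjusts, newdata)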
2017 Jul 12
2
How to make a figure plotting p-values by a range of different adjustment values?
Hi all,
Thank you for taking the time to read my message. I'm trying to make a
figure that plots p-values by a range of different adjustment values.
(Using the **logit** function in package **car**)
My statistical analyses were conducted on probability estimates ranging
from 0% to 100%. As it's not ideal to run linear models on proportions that
are bounded between 0 and 1, these estimates were logit transformed.
However, this int...
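A hedged sketch of one way to produce such a figure, with made-up data: refit the same model over a grid of logit() adjust values and plot the p-value of the coefficient of interest.

library(car)

set.seed(3)
d <- data.frame(group = factor(rep(1:2, each = 20)),
                prop  = c(runif(20, 0.25, 1), runif(20, 0.2, 0.92)))

adj.values <- seq(0.005, 0.05, by = 0.005)
pvals <- sapply(adj.values, function(a) {
  fit <- lm(logit(prop, adjust = a) ~ group, data = d)
  summary(fit)$coefficients[2, 4]   # p-value for the group effect
})

plot(adj.values, pvals, type = "b",
     xlab = "logit adjust value", ylab = "p-value")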
2004 Dec 20
1
[BioC] limma, FDR, and p.adjust
...t you should read the article Wright (1992), which is cited in the help entry for
p.adjust(), and which explains quite clearly the concept of an adjusted p-value.
The idea that you're having trouble with actually has nothing specifically to do with FDR or with
B&H's (1995) method. Any adjustment method for multiple testing can be expressed in terms of
adjusted p-values. The function p.adjust() actually implements several adjustment methods, not
just B&H's, which were not expressed in terms of p-values in their original papers. The adjusted
p-value approach is exactly equivalent to...
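A small illustration of that point: p.adjust() returns every supported correction as adjusted p-values on the same input vector (the numbers below are arbitrary).

p <- c(0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205, 0.212, 0.216)

# Each method, expressed as adjusted p-values.
sapply(c("bonferroni", "holm", "hochberg", "BH", "BY"),
       function(m) p.adjust(p, method = m))

p.adjust.methods   # the full list of implemented methods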
2017 Jul 13
1
How to make a figure plotting p-values by a range of different adjustment values?
Hi Jim,
Thanks for your help, I really appreciate it.
Perhaps I'm misunderstanding, but does this formula run different adjustment
values for this function?
logit(p = doc$value, adjust = 0.025)
I'm looking to plot the p-values of different adjustment values.
Thanks so much,
Kirsten
On Wed, Jul 12, 2017 at 8:49 PM, Jim Lemon <drjimlemon at gmail.com> wrote:
> Hi Kirsten,
> Perhaps this will help:
>
> set.seed(3)
> kmdf<-data.frame(group=rep(1:4,each=20),
> prop=c(runif(20,0.25,1),runif(20,0.2,0.92),
> runif(20...
2013 Jul 20
1
BH correction with p.adjust
Dear List,
I have been trying to use p.adjust() to do BH multiple test correction and have gotten some unexpected results. I thought that the equation for this was:
pBH = p*n/i
where p is the original p value, n is the number of tests and i is the rank of the p value. However when I try and recreate the corrected p from my most significant value it does not match up to the one computed by the
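The mismatch is most likely the monotonicity step: the BH adjusted p-value is p*n/i followed by a cumulative minimum taken from the largest p-value downward (and capped at 1), not p*n/i alone. A small sketch with an already-sorted, made-up vector:

p <- c(0.003, 0.004, 0.019, 0.049, 0.095, 0.2, 0.34, 0.61)  # sorted
n <- length(p)
i <- seq_len(n)

raw <- p * n / i                       # the formula in the post
bh  <- pmin(1, rev(cummin(rev(raw))))  # plus the cumulative-minimum step

cbind(raw, bh, p.adjust = p.adjust(p, method = "BH"))
# The first entry differs: raw = 0.024, but BH carries the smaller
# value 0.016 back from the second position.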
2017 Jul 13
0
How to make a figure plotting p-values by a range of different adjustment values?
...x)
staxlab(1,at=1:8,labels=p.adjust.methods)
Jim
On Thu, Jul 13, 2017 at 12:53 AM, Kirsten Morehouse
<kmoreho1 at swarthmore.edu> wrote:
> Hi all,
>
> Thank you for taking the time to read my message. I'm trying to make a
> figure that plots p-values by a range of different adjustment values.
>
> (Using the **logit** function in package **car**)
>
> My statistical analyses were conducted on probability estimates ranging
> from 0% to 100%. As it's not ideal to run linear models on proportions that
> are bounded between 0 and 1, these estimates were logit tra...
2008 Nov 10
3
in R when I get negative adjusted R^2 using "lm", what might be the problem?
This is a linear regression of Y onto factors...
If I take log of Y, and regress onto the factors, I got:
Multiple R-squared: 0.4023, Adjusted R-squared: 0.2731
If I don't take log of Y, and directly regress Y onto the factors, I got:
Multiple R-squared: 0.1807, Adjusted R-squared: -0.001112
Is this negative adjusted R^2 a problem?
What observation can I make here and what might
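A negative adjusted R^2 is not an error in itself: adjusted R^2 = 1 - (1 - R^2)(n - 1)/(n - p - 1), so it drops below zero whenever R^2 < p/(n - 1). The sample size and regressor count below are hypothetical (the post doesn't give them), chosen only to show how a small negative value like the one reported can arise:

R2 <- 0.1807
n  <- 45     # hypothetical number of observations
p  <- 8      # hypothetical number of regressors (factor columns)
1 - (1 - R2) * (n - 1) / (n - p - 1)
# about -0.0014, i.e. a small negative adjusted R^2 from a modest R^2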
2002 Mar 18
3
function design
I have a, no doubt, simple question. I wish to write a function such
that
a <- 9
b <- 10
changer <- function(x, y) { if (y > x) { x <<- y + 1 } }
Of course there are easier ways to accomplish the task above, but I am
more interested in how to have the "x <<- y + 1" part of the function
change x in place for purposes of a much larger function.
I have been wrestling with
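For the in-place part of the question, a minimal sketch of how <<- behaves: it rebinds the variable in an enclosing environment (here the global one), so the change persists after the call. Note that with a formal argument named x, the test y > x would use the argument while the assignment still changed the global x, which is easy to trip over:

x <- 9
y <- 10

changer <- function(y) {
  if (y > x) x <<- y + 1   # rebinds the global x, not a local copy
  invisible(NULL)
}

changer(y)
x   # now 11

In larger functions it is usually cleaner to return the new value and assign it at the call site than to rely on <<-.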
2005 Jan 08
1
p.adjust(<NA>s), was "Re: [BioC] limma and p-values"
...NA's and forget about them till the very end''
(where they are wanted in the result),
i.e., your sample size `n' would be sum(!is.na(p)) instead of
length(p).
To me it doesn't seem obvious that this setting
"n = #{non-NA observations}" is desirable for all
P-value adjustment methods. One argument for keeping
``n = #{all observations}'' at least for some correction
methods is the following "continuity" one:
If only a few ``irrelevant'' (let's say > 0.5) P-values are
replaced by NA, the adjusted relevant small P-values shouldn't
cha...
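A small way to see the difference being discussed: p.adjust() takes an explicit n argument, so both conventions can be compared directly on a vector containing NAs (the thread is about which of these the default should correspond to).

p <- c(0.01, 0.02, 0.03, NA, NA, 0.6)

p.adjust(p, method = "holm", n = length(p))        # n = all observations
p.adjust(p, method = "holm", n = sum(!is.na(p)))   # n = non-NA observations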
2004 Dec 20
1
Re: [BioC] limma, FDR, and p.adjust
Mark,
there is an FDR website link via Yoav Benjamini's homepage, which is: http://www.math.tau.ac.il/%7Eroee/index.htm
On it you can download an S-Plus function (under the downloads link) which calculates the false discovery rate threshold alpha level using step-up, step-down, dependence methods, etc.
Some changes are required to the plotting code when porting it to R. I removed the
2009 Jan 22
2
Standard errors of least squares adjusted means
Hello,
I have the following model:
lm.7 <- lm(Y ~ F + C1 + C2, data = EM4)
F is a 4-level factor, the rest are covariates centered at their mean (Y
is a two-column matrix).
I have tried to find functions to give the model-adjusted means
(adjusted at the covariates' means) and their standard errors for each.
(That is, what I believe is called in SAS "least square or LS-means,
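A common recipe for this, shown as a simplified sketch with a single-column response and made-up data (EM4 isn't shown in the post): predict from the lm fit at each level of F with the covariates held at their means, asking predict() for standard errors. The emmeans package automates the same calculation.

set.seed(1)
EM4 <- data.frame(F  = factor(rep(1:4, each = 10)),
                  C1 = rnorm(40), C2 = rnorm(40))
EM4$Y <- 2 + as.numeric(EM4$F) + 0.5 * EM4$C1 + rnorm(40)

lm.7 <- lm(Y ~ F + C1 + C2, data = EM4)

# Adjusted ("LS") means: predictions at each level of F with the
# covariates fixed at their sample means, plus standard errors.
newd <- data.frame(F  = factor(levels(EM4$F), levels = levels(EM4$F)),
                   C1 = mean(EM4$C1), C2 = mean(EM4$C2))
predict(lm.7, newdata = newd, se.fit = TRUE)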
2005 Jun 17
2
adjusted R^2 vs. ordinary R^2
I thought the point of adjusting the R^2 for degrees of
freedom is to allow comparisons about goodness of fit between
similar models with different numbers of data points. Someone
has suggested to me off-list that this might not be the case.
Is an ADJUSTED R^2 for a four-parameter, five-point model
reliably comparable to the adjusted R^2 of a four-parameter,
100-point model? If such values
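A quick simulation (not from the thread) gives a feel for the answer: with four parameters fit to pure noise, adjusted R^2 is wildly variable at n = 5 but tightly clustered near zero at n = 100, so the two values are not comparable with equal confidence.

adj.r2 <- function(n) {
  d <- data.frame(y = rnorm(n), x1 = rnorm(n), x2 = rnorm(n), x3 = rnorm(n))
  summary(lm(y ~ x1 + x2 + x3, data = d))$adj.r.squared
}

set.seed(42)
summary(replicate(2000, adj.r2(5)))    # huge spread (only 1 residual df)
summary(replicate(2000, adj.r2(100)))  # tightly clustered around 0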
2008 Mar 09
2
p.adjust using Benjamini and Hochberg
Hello,
I am trying to use the p.adjust function for multiple testing.
Here is what I have:
9997 201674_s_at 0.327547396
9998 221013_s_at 0.834211067
9999 221685_s_at 0.185099475
I import them from Excel and have the gene symbol as well as the p-value.
Here is the issue:
> pa<-p.adjust(pt,method="BH")
Error in p[nna] : object is not
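The truncated error suggests that pt never became a numeric vector (pt is also the name of base R's t-distribution function, so if the import failed, pt would refer to that function instead). A hedged sketch of the usual fix, with made-up file and column names: pull the p-value column out as a plain numeric vector before calling p.adjust().

# Hypothetical file and column names -- adapt to the actual spreadsheet.
dat <- read.csv("pvalues.csv", stringsAsFactors = FALSE)

pvals <- as.numeric(dat$pvalue)          # a plain numeric vector
pa <- p.adjust(pvals, method = "BH")

head(cbind(dat$gene, pvals, pa))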
2012 Nov 26
1
Plotting an adjusted survival curve
First a statistical issue: The survfit routine will produce predicted survival curves for
any requested combination of the covariates in the original model. This is not the same
thing as an "adjusted" survival curve. Confusion on this is prevalent, however. True
adjustment requires a population average over the confounding factors and is closely
related to the standardized incidence ratio concept found in epidemiology.
To answer your technical question:
fit <- coxph(Surv(.........
mysurv <- survfit(fit, newdata= mydata)
This will give a set of predict...
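A minimal runnable version of that recipe, using the survival package's built-in lung data since the original data aren't shown; as the text above stresses, these are predicted curves at chosen covariate values, not population-averaged "adjusted" curves.

library(survival)

fit <- coxph(Surv(time, status) ~ age + sex, data = lung)

# Predicted survival for two specific covariate patterns.
newd <- data.frame(age = c(60, 60), sex = c(1, 2))
mysurv <- survfit(fit, newdata = newd)

plot(mysurv, col = 1:2, xlab = "Days", ylab = "Survival")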