Displaying 20 results from an estimated 1000 matches similar to: "help"
2017 Dec 27
2
require help
Respected sir,
I hope you are well. Sir, I am trying to run a Toda-Yamamoto causality
test with my data. I have three variables, but when running wald.test in R I
face problems (especially with the 'Terms' argument). My result shows
Error in L %*% V : non-conformable arguments
Kindly help me solve this issue. I have also attached my code
and data to this email.
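For anyone hitting the same error, a minimal sketch of the usual Toda-Yamamoto setup with the vars and aod packages follows. The data object, lag orders, and coefficient indices (dat, p = 2, d = 1, Terms = c(2, 5)) are placeholders; the key point is that b, Sigma and Terms must all refer to the same single equation, otherwise wald.test() stops with "L %*% V : non-conformable arguments".
library(vars)  # VAR() for estimating the system
library(aod)   # wald.test() for the linear restrictions
# Toda-Yamamoto: fit the VAR in levels with p + d lags (here p = 2, d = 1)
fit <- VAR(dat, p = 3, type = "const")
eq <- fit$varresult[[1]]   # the equation for the first variable
b  <- coef(eq)             # coefficient vector of that single equation
V  <- vcov(eq)             # its covariance matrix (same length and order as b)
# Test only the first p lags of one regressor; the indices must be positions
# in names(b) of this equation -- indices outside that range, or b and V taken
# from different objects, give the non-conformable arguments error.
wald.test(b = b, Sigma = V, Terms = c(2, 5))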
2017 Sep 22
0
require help
Thanks to everyone for your valuable suggestions. One query regarding the
GARCH model.
I have applied the GARCH model to the same data that I sent you all, and
my result comes out as
Error in .sgarchfit(spec = spec, data = data, out.sample = out.sample, :
ugarchfit-->error: function requires at least 100 data
points to run
can you suggest something on it.
On Fri, Sep 22, 2017 at 6:02
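For reference, a minimal sketch of the check implied by that error, assuming the yearly series is in a numeric vector x (the name and the GARCH(1,1) spec are placeholders): ugarchfit() refuses to run with fewer than 100 observations, so a roughly 30-point annual series cannot be fitted as is.
library(rugarch)
length(x)   # with yearly data this is about 30, well below the required 100
spec <- ugarchspec(variance.model = list(model = "sGARCH", garchOrder = c(1, 1)),
                   mean.model     = list(armaOrder = c(0, 0), include.mean = TRUE))
if (length(x) >= 100) {
  fit <- ugarchfit(spec = spec, data = x)
  show(fit)
} else {
  message("too few observations for ugarchfit(); use a longer or higher-frequency series")
}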
2017 Sep 22
2
require help
Assuming the input data.frame, DF, is of the form shown reproducibly
in the Note below, to convert the series to zoo or ts:
library(zoo)
# convert to zoo
z <- read.zoo(DF)
# convert to ts
as.ts(z) #
Note:
DF <- structure(list(year = c(1980, 1981, 1982, 1983, 1984), cnsm = c(174,
175, 175, 172, 173), incm = c(53.4, 53.7, 53.5, 53.2, 53.3),
with = c(60.3, 60.5, 60.2, 60.1, 60.7)),
class = "data.frame", row.names = c(NA, -5L))
2017 Sep 16
0
require help
oky.. thank you very much to all of you
On Sat, Sep 16, 2017 at 2:06 PM, Eric Berger <ericjberger at gmail.com> wrote:
> You can just use the same code that I provided before but now use your
> dataset. Like this
>
> df <- read.csv(file="data2.csv",header=TRUE)
> dates <- as.Date(paste(df$year,"-01-01",sep=""))
> myXts <- xts(df,order.by=dates)
2017 Nov 21
2
help
Thank you for your valuable reply. I have attached my commands, results, and
data to this mail; perhaps they will help you give feedback.
On Tue, Nov 21, 2017 at 9:13 PM, Jeff Newmiller <jdnewmil at dcn.davis.ca.us>
wrote:
> Your example is incomplete... as the bottom of this and every post says,
> we need to be able to proceed from an empty R environment to wherever you
2017 Sep 15
0
require help
> On 15 Sep 2017, at 12:38, yadav neog <yadavneog at gmail.com> wrote:
>
> hello to all. I am working on a macroeconomic data series of India, which is
> on a yearly basis. I am unable to convert my data frame into a time series.
Do you really need to convert your data to time series/xts/zoo? I don't know what kind of analysis you are trying to do, but perhaps you don't have to.
> kindly
2017 Nov 21
0
help
Your example is incomplete... as the bottom of this and every post says, we need to be able to proceed from an empty R environment to wherever you are having the problem (reproducible), in as few steps as possible (minimal). The example needs to include data, preferably in R syntax as the dput function creates... see the howtos referenced below for help with that. [1], [2], [3]
You also need to
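As a small illustration of the kind of reproducible data being asked for (the toy data frame below is made up), dput() prints any object in R syntax that can be pasted straight into an empty session:
dat <- data.frame(year = 1980:1984, cnsm = c(174, 175, 175, 172, 173))
dput(dat)
# printed output:
# structure(list(year = 1980:1984, cnsm = c(174, 175, 175, 172, 173)),
#           class = "data.frame", row.names = c(NA, -5L))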
2017 Sep 15
7
require help
Hello to all. I am working on a macroeconomic data series of India, which is
on a yearly basis. I am unable to convert my data frame into a time series.
Kindly help me.
I am also using the zoo and xts packages, but they take only monthly observations.
'data.frame': 30 obs. of 4 variables:
$ year: int 1980 1981 1982 1983 1984 1985 1986 1987 1988 1989 ...
$ cnsm: num 174 175 175 172 173 ...
$ incm:
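A minimal sketch of one way to do this, assuming the data frame is called DF and its first column is the year (as in the str() output above): annual data only needs frequency = 1, and zoo is not restricted to monthly observations either.
# base R: a yearly ts object starting at the first year in the data
tsdat <- ts(DF[ , -1], start = min(DF$year), frequency = 1)
# zoo: index the series directly by the year
library(zoo)
z <- zoo(DF[ , -1], order.by = DF$year)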
2017 Sep 16
0
require help
> On 15 Sep 2017, at 11:38, yadav neog <yadavneog at gmail.com> wrote:
>
> hello to all. I am working on a macroeconomic data series of India, which is
> on a yearly basis. I am unable to convert my data frame into a time series.
> kindly help me.
> I am also using the zoo and xts packages, but they take only monthly observations.
>
> 'data.frame': 30 obs. of 4 variables:
2017 Sep 16
2
require help
You can just use the same code that I provided before but now use your
dataset. Like this
library(xts)
df <- read.csv(file = "data2.csv", header = TRUE)
dates <- as.Date(paste(df$year, "-01-01", sep = ""))
myXts <- xts(df, order.by = dates)
head(myXts)
# The last command "head(myXts)" shows you the first few rows of the xts object
year cnsm incm wlth
2017 Nov 21
2
help
I am working on the Johansen cointegration test, using the urca and vars packages.
In selecting the VAR lag order, I got the following results.
> VARselect(newd, lag.max = 10, type = "none")
$selection
AIC(n) HQ(n) SC(n) FPE(n)
6 6 6 5
$criteria
                   1             2
AIC(n) -3.818646e+01 -3.864064e+01
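For context, a minimal sketch of the usual next step, assuming newd is the same multivariate series: VARselect() points to 6 lags by AIC/HQ/SC, so the Johansen test in urca would be run with K = 6 (K is the VAR lag order in levels; the type, ecdet and spec choices below are illustrative).
library(urca)
jo <- ca.jo(newd, type = "trace", ecdet = "none", K = 6, spec = "transitory")
summary(jo)   # trace statistics and critical values for each cointegration rank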
2017 Sep 15
0
require help
> On 15 Sep 2017, at 11:38, yadav neog <yadavneog at gmail.com> wrote:
>
> hello to all. I am working on macroeconomic data series of India, which in
> a yearly basis. I am unable to convert my data frame into time series.
> kindly help me.
> also using zoo and xts packages. but they take only monthly observations.
>
> 'data.frame': 30 obs. of 4 variables:
2009 Apr 28
1
kernlab - custom kernel
hi,
I am using R's "kernlab" package; specifically, I am doing classification using
ksvm() and predict(). I want to use a custom kernel, but I am getting an
error.
# Following R code works (with promotergene dataset):
library("kernlab")
s <- function(x, y) {
  sum((x*y)^1.25)
}
class(s) <- "kernel"
data("promotergene")
gene <- ksvm(Class ~ .,
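For comparison, here is the custom-kernel example from the kernlab documentation, adapted as a self-contained sketch (the kernel, C and cross values come from that example, not from the poster's code): a custom kernel is any function of two vectors returning a scalar, given class "kernel" so that ksvm() accepts it.
library(kernlab)
# user-defined kernel function, as in the kernlab documentation example
k <- function(x, y) (sum(x * y) + 1) * exp(-0.001 * sum((x - y)^2))
class(k) <- "kernel"
data(promotergene)
gene <- ksvm(Class ~ ., data = promotergene, kernel = k, C = 10, cross = 5)
pred <- predict(gene, promotergene[1:20, ])   # predict() dispatches to the ksvm method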
2013 Apr 30
0
Panel Granger Causality Tests
Hi,
I was wondering if there is a package/function for Panel Granger
non-causality tests? I am interested in the Toda-Yamamoto procedure in a panel
data setting.
Thank you,
--
2013 May 04
0
Panel Granger Non-Causality Tests in R
Hi,
I was wondering if there is a package/function for Panel Granger
non-causality tests? I am interested in a Toda-Yamamoto-like procedure for
panel models.
Thank you,
--
2018 Mar 21
0
Confidence intervals for the Instrumental Variable estimators of TWO causal effects
Dear all,
I am using the Instrumental Variable approach to estimate the causal
effects of TWO endogenous variables in a Mendelian Randomization study.
As far as point estimation is concerned, I have no problem: both "ivreg"
in library "AER" and "tsls" in library "sem" do the job perfectly. The
problems begin
when I try to obtain confidence intervals for
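A minimal sketch of the point-estimation step described above, with placeholder names (outcome y, endogenous exposures x1 and x2, instruments z1 to z3 in a data frame dat); standard Wald-type intervals for both causal effects can then be read off the 2SLS fit with confint(), though the truncated question may concern intervals beyond these.
library(AER)
fit <- ivreg(y ~ x1 + x2 | z1 + z2 + z3, data = dat)
summary(fit)
# Wald confidence intervals for the two causal effects
confint(fit, parm = c("x1", "x2"), level = 0.95)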
2020 Oct 29
1
R: sim1000G
Hi,
I am using the sim1000G R package to simulate data for a case/control study.
I cannot figure out how to modify this code so that it generates 10%
or 50% causal SNPs in R.
This is the whole code provided as an example on GitHub:
library(sim1000G)
vcf_file = "region-chr4-357-ANK2.vcf.gz" #nvariants = 442, ss=1000
vcf = readVCF( vcf_file, maxNumberOfVariants = 442 ,min_maf =
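Since the excerpt is cut off here, only a generic sketch is possible and it is not sim1000G-specific. Assuming a simulated genotype matrix geno with one column per variant (a hypothetical name), one way to mark 10% or 50% of SNPs as causal is to sample column indices and give only those nonzero effects:
set.seed(42)
n_snp       <- ncol(geno)                 # e.g. 442 variants, as in the example above
prop_causal <- 0.10                       # change to 0.50 for 50% causal SNPs
causal_idx        <- sample(n_snp, size = round(prop_causal * n_snp))
beta              <- numeric(n_snp)
beta[causal_idx]  <- rnorm(length(causal_idx), sd = 0.2)   # illustrative effect sizes
risk_score <- as.vector(geno %*% beta)    # could feed a logistic model for case/control status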
2020 Nov 01
0
R: sim1000G
Hi Berina,
I'm not an expert on genetics.
I haven't looked at the package.
And I've only glanced at your question.
So, this is probably not the best response.
But as no one else has responded, here are some comments:
(1)
Have you checked if there's a function in the package to do what you want?
The remainder of these questions assume that you have, and the answer is no.
(2)
2006 Apr 06
1
Look What 911 Will Cost in Canada
Check out the proposed prices when this is approved.
BELL CANADA REPORT ON THE ECONOMIC EVALUATION FOR THE TARIFF REVISION OF
Bell Canada's Access Services Tariff Item 315 - Zero-Dialed
Emergency Call Routing Service (0-ECRS)
2 March 2006
TABLE OF CONTENTS
1.0 GENERAL
1.1 Purpose of the Study
2.0 SERVICE DESCRIPTION
2.1 Service Characteristics
2.2
2007 Dec 02
1
speeding up likelihood computation
R Users:
I am trying to estimate a model of fertility behaviour using birth history data with maximum likelihood. My code works but is extremely slow (because of several for loops and my programming inefficiencies); when I use a genetic algorithm to optimize the likelihood function, it takes several days to complete (on a machine with an Intel Core 2 processor [2.66 GHz] and 2.99 GB RAM). Computing
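A generic illustration of the usual first fix for slow R likelihoods (not the poster's model; a simple exponential-duration likelihood stands in for it): replace the observation-by-observation loop with one vectorised call, then hand the fast function to the optimiser.
loglik_loop <- function(rate, durations) {
  ll <- 0
  for (i in seq_along(durations)) {              # one R-level iteration per observation: slow
    ll <- ll + dexp(durations[i], rate = rate, log = TRUE)
  }
  ll
}
loglik_vec <- function(rate, durations) {
  sum(dexp(durations, rate = rate, log = TRUE))  # same value, one vectorised call
}
# example: maximise the vectorised log-likelihood instead of feeding the loop to a GA
durations <- rexp(5000, rate = 0.4)
optimize(function(r) -loglik_vec(r, durations), interval = c(0.01, 5))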