Displaying 20 results from an estimated 700 matches similar to: "numerical derivative R help"
2010 Jul 06
0
Help needed with numericDeriv and optim functions
Hello All:
I have defined the following function (fitterma as a sum of exponentials)
that best fits my cumulative distribution. I am also attaching the "xtime"
values that I have. I want to try two things as indicated below and am
experiencing problems. Any help will be greatly appreciated.
Best, Parmee
-----------------------
fitterma <- function(xtime) {
a <-
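The function body is truncated in the archive. For orientation, a minimal sketch of this kind of fit, assuming made-up "xtime" data, a two-term exponential form and arbitrary starting values (none of these come from the original post):

## stand-in for the attached "xtime" values (assumed data)
set.seed(1)
xtime <- sort(rexp(200, rate = 0.5))
ycum  <- ecdf(xtime)(xtime)            # empirical cumulative distribution

## assumed two-term sum-of-exponentials model for the CDF
fitterma <- function(p, t) 1 - p[1] * exp(-p[2] * t) - (1 - p[1]) * exp(-p[3] * t)

## least-squares objective minimised with optim()
sse <- function(p) sum((ycum - fitterma(p, xtime))^2)
opt <- optim(c(0.5, 0.5, 2), sse)
opt$par

## numericDeriv() differentiates an expression with respect to named
## variables found in an environment, here the fitted parameters
rho <- list2env(list(a = opt$par[1], r1 = opt$par[2], r2 = opt$par[3], t = xtime),
                envir = new.env())
g <- numericDeriv(quote(1 - a * exp(-r1 * t) - (1 - a) * exp(-r2 * t)),
                  c("a", "r1", "r2"), rho)
head(attr(g, "gradient"))

The real post presumably uses different data and possibly more exponential terms, but the same optim()/numericDeriv() pattern applies.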
2010 Jun 23
1
Probabilities from survfit.coxph:
Hello:
In the example below (or for censored data) using survfit.coxph, can
anyone point me to a link or a pdf as to how the probabilities appearing in
bold under "summary(pred$surv)" are calculated? Do these represent
a cumulative probability distribution in time (not including censored times)?
Thanks very much,
parmee
fit <- coxph(Surv(futime, fustat) ~ age, data = ovarian)
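For reference, a sketch of where such probabilities come from, assuming `pred` is the curve returned by survfit() on the fit above (the post does not show how `pred` was built):

library(survival)

fit  <- coxph(Surv(futime, fustat) ~ age, data = ovarian)
pred <- survfit(fit)   # predicted survival curve (at the mean of age by default)

## pred$surv is the estimated survival probability S(t) at each event time,
## a non-increasing step function; 1 - pred$surv is the cumulative
## probability of an event by time t
cbind(time = pred$time, surv = pred$surv, cum.event = 1 - pred$surv)
summary(pred$surv)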
2010 Apr 01
1
predicted time length differs from survfit.coxph:
Hello All,
Does anyone know why length(fit1$time) < length(fit2$n) in survfit.coxph
output? Why is the predicted time length not the same as the number of samples (n)?
I tried: example(survfit.coxph).
Thanks,
parmee
> fit2$n
[1] 241
> fit2$time
 [1]   0  31  32  60  61 152 153 174 273 277 362 365 499 517 518 547
[17] 566 638 700 760 791
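A short illustration of why the two lengths differ, using the lung data shipped with the survival package (the data behind fit2 are not shown in the post): the time component holds one entry per distinct follow-up time on the curve, while n counts subjects.

library(survival)

fit2 <- survfit(coxph(Surv(time, status) ~ age, data = lung))
fit2$n                # number of subjects used in the fit
length(fit2$time)     # number of distinct follow-up times on the curve

## tied follow-up times collapse onto a single point of the curve, so
## length(fit2$time) can be no larger than the number of distinct times
length(unique(lung$time))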
2007 Apr 28
6
Where is xtime updated in a domU with an independent wallclock?
Hi All,
I have just started looking at the code for Xen so please bear with me.
A domU Linux kernel running with independent_wallclock=1 seems to sync
its time with dom0 after every "xm unpause" (obviously preceded by an
"xm pause").
I don't see where the xtime variable is being updated after an "xm
unpause", i.e., domain_unpause_by_systemcontroller().
2013 Aug 28
1
GlusterFS extended attributes, "system" namespace
Hi,
I'm running GlusterFS 3.3.2 and I'm having trouble getting geo-replication to work. I think it is a problem with extended attributes. I'm using ssh with a normal user to perform the replication.
In the server log at /var/log/glusterfs/geo-replication/VOLNAME/ssh?.log I'm getting the error "RepceClient: call ?:?:? (xtime) failed on peer with OSError". On the
2017 Aug 21
2
self-heal not working
Hi Ben,
So it is really a 0 kBytes file everywhere (all nodes including the arbiter and from the client).
Below you will find the output you requested. Hopefully that will help to find out why this specific file is not healing... Let me know if you need any more information. Btw, node3 is my arbiter node.
NODE1:
STAT:
File:
2017 Aug 21
0
self-heal not working
Can you also provide:
gluster v heal <my vol> info split-brain
If it is split-brain, just delete the incorrect file from the brick and run heal again. I haven't tried this with arbiter but I assume the process is the same.
-b
----- Original Message -----
> From: "mabi" <mabi at protonmail.ch>
> To: "Ben Turner" <bturner at redhat.com>
> Cc:
2017 Aug 21
2
self-heal not working
Sure, it doesn't look like a split brain based on the output:
Brick node1.domain.tld:/data/myvolume/brick
Status: Connected
Number of entries in split-brain: 0
Brick node2.domain.tld:/data/myvolume/brick
Status: Connected
Number of entries in split-brain: 0
Brick node3.domain.tld:/srv/glusterfs/myvolume/brick
Status: Connected
Number of entries in split-brain: 0
> -------- Original
2011 Jul 26
1
Error during geo-replication : Unable to get <uuid>.xtime attr
Hi,
I got a problem during geo-replication:
The master Gluster server log has the following error every second:
[2011-07-26 04:20:50.618532] W [libxlator.c:128:cluster_markerxtime_cbk]
0-flvol-dht: Unable to get <uuid>.xtime attr
While the slave log has the error every few seconds:
[2011-07-26 04:25:08.77133] E
[stat-prefetch.c:695:sp_remove_caches_from_all_fds_opened]
2017 Aug 22
0
self-heal not working
Explore the following:
- Launch index heal and look at the glustershd logs of all bricks for
possible errors
- See if the glustershd in each node is connected to all bricks.
- If not try to restart shd by `volume start force`
- Launch index heal again and try.
- Try debugging the shd log by setting client-log-level to DEBUG
temporarily.
On 08/22/2017 03:19 AM, mabi wrote:
> Sure, it
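A hedged sketch of the full commands the checklist above refers to, assuming the volume is named myvolume as elsewhere in the thread:

# launch an index heal and inspect what is still pending
gluster volume heal myvolume
gluster volume heal myvolume info

# restart the self-heal daemon (shd) without taking the volume down
gluster volume start myvolume force

# raise client log verbosity temporarily, then set it back to the default
gluster volume set myvolume diagnostics.client-log-level DEBUG
gluster volume set myvolume diagnostics.client-log-level INFO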
2017 Aug 22
0
self-heal not working
On 08/22/2017 02:30 PM, mabi wrote:
> Thanks for the additional hints, I have the following 2 questions first:
>
> - In order to launch the index heal is the following command correct:
> gluster volume heal myvolume
>
Yes
> - If I run a "volume start force" will it have any short disruptions
> on my clients which mount the volume through FUSE? If yes, how long?
2017 Aug 22
3
self-heal not working
Thanks for the additional hints, I have the following 2 questions first:
- In order to launch the index heal is the following command correct:
gluster volume heal myvolume
- If I run a "volume start force" will it have any short disruptions on my clients which mount the volume through FUSE? If yes, how long? This is a production system, which is why I am asking.
> --------
2017 Aug 23
2
self-heal not working
I just saw the following bug which was fixed in 3.8.15:
https://bugzilla.redhat.com/show_bug.cgi?id=1471613
Is it possible that the problem I described in this post is related to that bug?
> -------- Original Message --------
> Subject: Re: [Gluster-users] self-heal not working
> Local Time: August 22, 2017 11:51 AM
> UTC Time: August 22, 2017 9:51 AM
> From: ravishankar at
2017 Aug 24
0
self-heal not working
Unlikely. In your case only the afr.dirty is set, not the
afr.volname-client-xx xattr.
`gluster volume set myvolume diagnostics.client-log-level DEBUG` is right.
On 08/23/2017 10:31 PM, mabi wrote:
> I just saw the following bug which was fixed in 3.8.15:
>
> https://bugzilla.redhat.com/show_bug.cgi?id=1471613
>
> Is it possible that the problem I described in this post is
2012 Apr 27
1
geo-replication and rsync
Hi,
can someone tell me the difference between geo-replication and plain rsync?
At what frequency are files replicated with geo-replication?
1999 Jul 02
0
Bug in "[.ts" for multivariate ts {Problem with plot.ts, "[" (PR#217)
There was some discussion a while back on R-devel between Ross Ihaka,
Paul Gilbert and myself about row subsetting in time series. I think
the consensus was that "[.ts" should not try to coerce its result
back to a time series object (which is underlying the problem
2017 Aug 24
2
self-heal not working
Thanks for confirming the command. I have now enabled DEBUG client-log-level, run a heal and then attached the glustershd log files of all 3 nodes in this mail.
The volume concerned is called myvol-pro, the other 3 volumes have no problem so far.
Also note that in the meantime it looks like the file has been deleted by the user and as such the heal info command does not show the file name
2017 Aug 25
0
self-heal not working
Hi Ravi,
Did you get a chance to have a look at the log files I have attached in my last mail?
Best,
Mabi
> -------- Original Message --------
> Subject: Re: [Gluster-users] self-heal not working
> Local Time: August 24, 2017 12:08 PM
> UTC Time: August 24, 2017 10:08 AM
> From: mabi at protonmail.ch
> To: Ravishankar N <ravishankar at redhat.com>
> Ben Turner
1999 Jul 02
1
Bug in "[.ts" for multivariate ts {Problem with plot.ts, "["} (PR#216)
>>>>> On Fri, 02 Jul 1999, Adrian Trapletti <Adrian.Trapletti@wu-wien.ac.at> said:
Adrian> There seems to be a problem with plot.ts (R Version 0.64.2)
> x<-cbind(1:10,2:11)
> x<-as.ts(x)
> plot(x)
Adrian> Error: subscript (20) out of bounds, should be at most 10
This is definitely a bug
--> CC: R-bugs
ALL NOTE: This is *not* new
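For context, a small sketch of the behaviour under discussion as it stands in current R (the report above is against R 0.64.2, so the exact error no longer reproduces): row-subsetting a multivariate ts with "[" returns a plain matrix instead of guessing new time attributes, and window() is the supported way to keep the ts class.

x <- as.ts(cbind(1:10, 2:11))

x[3:6, ]                        # "[" with a row index drops the ts attributes
window(x, start = 3, end = 6)   # still a multivariate time series

plot(x)                         # plots both series without error in current R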
2017 Aug 27
2
self-heal not working
Yes, the shds did pick up the file for healing (I saw messages like "
got entry: 1985e233-d5ee-4e3e-a51a-cf0b5f9f2aea") but no error afterwards.
Anyway I reproduced it by manually setting the afr.dirty bit for a zero
byte file on all 3 bricks. Since there are no afr pending xattrs
indicating good/bad copies and all files are zero bytes, the data
self-heal algorithm just picks the