Displaying 2 results from an estimated 2 matches for "op1l".
2018 Feb 09 | 1 | Optim function returning always initial value for parameter to be optimized
...data.input = data.frame(state1 = (1:500), state2 = (201:700))
with data that partially overlap in their values.
I want to minimize the assessment error for each state using this function:
err.th.scalar <- function(threshold, data) {
  state1 <- data$state1
  state2 <- data$state2
  op1l <- length(state1)                      # number of state1 observations
  op2l <- length(state2)                      # number of state2 observations
  op1.err <- sum(state1 <= threshold) / op1l  # fraction of state1 at or below the threshold
  op2.err <- sum(state2 >= threshold) / op2l  # fraction of state2 at or above the threshold
  total.err <- op1.err + op2.err
  return(total.err)
}
So I'm trying to minimize the total error. This total error should be U-shaped...
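A quick way to check that expectation (not part of the original post; it assumes data.input and err.th.scalar exactly as quoted above) is to evaluate the objective on a grid of candidate thresholds and plot it:

th.grid <- seq(0, 700, by = 0.5)                  # candidate thresholds spanning both states
tot.err <- sapply(th.grid, err.th.scalar, data = data.input)
plot(th.grid, tot.err, type = "s",                # step plot: the objective is piecewise constant
     xlab = "threshold", ylab = "total error")
th.grid[which.min(tot.err)]                       # first grid point attaining the smallest total error

Besides showing whether the curve really is U-shaped, the plot makes the flat steps visible, which matters for the choice of optimizer in the reply below.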
2018 Feb 10 | 0 | Optim function returning always initial value for parameter to be optimized
> ...nk so. It's zero, so of course
> you end up where you start.
>
> Try
>
> data.input= data.frame(state1 = (1:500), state2 = (201:700) )
> err.th.scalar <- function(threshold, data){
>
> state1 <- data$state1
> state2 <- data$state2
>
> op1l <- length(state1)
> op2l <- length(state2)
>
> op1.err <- sum(state1 <= threshold)/op1l
> op2.err <- sum(state2 >= threshold)/op2l
I think this function is not smooth, and not even continuous: since it is built
from counts of observations on either side of the threshold, it is piecewise
constant, jumping only at the data values. Gradient methods require
differentiable (smooth) functions...
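To illustrate the point in the quoted reply, here is a sketch (the thread does not show which optim() method was used; method = "BFGS" is assumed here, and data.input and err.th.scalar are as defined earlier): on a piecewise-constant objective the finite-difference gradient at the starting value is zero, so a gradient-based method never moves.

fit <- optim(par = 350, fn = err.th.scalar, data = data.input, method = "BFGS")
fit$par          # still 350: the numerical gradient on the flat step is zero
fit$convergence  # 0 ("converged"), but at the unchanged starting value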
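Because the objective only changes value at the data points, one simple derivative-free alternative (a sketch, not advice taken from the thread) is an exhaustive search over candidate thresholds placed midway between adjacent sorted data values:

vals <- sort(unique(c(data.input$state1, data.input$state2)))
cand <- (head(vals, -1) + tail(vals, -1)) / 2      # midpoints between adjacent data values
errs <- sapply(cand, err.th.scalar, data = data.input)
cand[which.min(errs)]                              # first candidate attaining the minimum
min(errs)

For a one-dimensional, piecewise-constant objective like this one, enumerating the break points is usually simpler and more reliable than optim(); plotting errs against cand also shows whether the minimizer falls inside the overlap region or at the edge of the data range, which depends on the direction of the two comparisons in err.th.scalar.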