# Calibrating `heemod` models

The parameters of health economic models can be difficult to measure, either because they cannot be observed directly or because appropriate data are not systematically gathered in the area of interest. When expected model results are known, *model calibration* is the search for values of the initially unknown parameters that allow the model to reproduce these results.

For example, the shape and scale parameters of a Weibull survival model may be unknown. But from the literature we may know the expected probability of being alive at time *t*. If this probability is a result of the model, we can search for the values of the shape and scale parameters that allow the model results to match, as closely as possible, the observed probability of being alive.
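To make this concrete, here is a minimal sketch (not part of the original example; the shape and scale values are hypothetical) of the quantity being matched: the probability of being alive at time *t* under a Weibull survival model.

```
# Hypothetical parameter values, for illustration only
shape <- 1.45
scale <- 10

# Probability of being alive at time t = 5 under a Weibull model:
# S(t) = exp(-(t / scale)^shape)
p_alive <- pweibull(5, shape = shape, scale = scale, lower.tail = FALSE)
p_alive
```

Calibration would run this computation in reverse: given an observed `p_alive`, find `shape` and `scale` values that reproduce it.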

In order to perform calibration, the user must provide:

- A heemod object from `run_model()` or `update()`[^1].
- The names of the model parameters to calibrate (the parameters for which we seek appropriate values).
- A function that when applied to the model returns the result we want to match with reference values.
- The target values we would like the model results to match.

For this example we will use the results from the assessment of a new total hip replacement previously described in `vignette("d-non-homogeneous", "heemod")`.

We will calibrate the parameters `gamma` (a Weibull survival parameter) and `rrNP1` (the relative risk associated with the new treatment), which originally have values of 1.45 and 0.26 respectively.

The original number of patients with a THR revision after 20 cycles is obtained as follows:

```
library(dplyr)

get_counts(res_mod) %>% 
  dplyr::filter(markov_cycle == 20 & state_names == "RevisionTHR")
```

```
## # A tibble: 2 x 4
##   .strategy_names markov_cycle state_names count
##   <chr>                  <int> <chr>       <dbl>
## 1 standard                  20 RevisionTHR 2.60 
## 2 np1                       20 RevisionTHR 0.690
```

We want to calibrate `gamma` and `rrNP1` to obtain 3 patients for the `standard` strategy and 1 patient for the `np1` strategy at time 20. We need to define a function that extracts the values we want to change from the model and returns them as a numeric vector:

```
extract_values <- function(x) {
  dplyr::filter(
    get_counts(x),
    markov_cycle == 20 & state_names == "RevisionTHR"
  )$count
}

extract_values(res_mod)
```

`## [1] 2.5964891 0.6902178`

Any arbitrary function of any model output would work, as long as it returns numeric values. A convenience function, `define_calibration_fn()`, exists to help define calibration functions easily.

```
calib_fn <- define_calibration_fn(
  type = "count",
  strategy_names = c("standard", "np1"),
  element_names = c("RevisionTHR", "RevisionTHR"),
  cycles = c(20, 20)
)

calib_fn(res_mod)
```

`## [1] 2.5964891 0.6902178`

We can now call `calibrate_model()` and give the values we want to reach as `target_values`.

```
res_cal <- calibrate_model(
  res_mod,
  parameter_names = c("gamma", "rrNP1"),
  fn_values = extract_values,
  target_values = c(2.5, 0.8)
)
```

`## Loading required namespace: optimx`

```
## gamma rrNP1 value convcode
## 1 1.442252 0.3144459 0.00000000008630486 0
```

The new parameter values are 1.44 for `gamma` and 0.31 for `rrNP1`. A `convcode` of 0 indicates that the calibration was successful.

It is possible to specify several possible starting values for the calibration procedure in order to explore the parameter space:

```
start <- data.frame(
  gamma = c(1.0, 1.5, 2.0),
  rrNP1 = c(0.2, 0.3, 0.4)
)

res_cal_2 <- calibrate_model(
  res_mod,
  parameter_names = c("gamma", "rrNP1"),
  fn_values = extract_values,
  target_values = c(3, 1),
  initial_values = start,
  lower = c(0, 0), upper = c(2, 1)
)
```

Additional options to control the optimization process can be passed to `calibrate_model()`. These options are parameters of the `optimx()` function, such as `upper` and `lower` to specify bounds on parameter values, `method` to change the optimization method, etc.

Calibration uses optimization to minimize the sum of squared errors between calculated and desired values, and is therefore subject to all the usual difficulties of optimization. Different optimization methods may work better for different problems: for example, Nelder-Mead does not require gradients, while BFGS and conjugate-gradient methods require gradients but can approximate them numerically. Some attempted optimizations may not converge.
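The following toy sketch illustrates this principle with base R's `optim()` rather than `calibrate_model()` itself; the two-parameter "model" is entirely hypothetical. It minimizes the sum of squared errors between a model output and target values, using the gradient-free Nelder-Mead method:

```
# Hypothetical toy "model": maps two parameters to two outputs
toy_model <- function(par) c(2 * par[1], par[2] + 1)

target <- c(3, 1)

# Objective: sum of squared errors between model output and targets
sse <- function(par) sum((toy_model(par) - target)^2)

# Nelder-Mead does not require gradients
fit <- optim(c(1, 1), sse, method = "Nelder-Mead")
fit$par          # near c(1.5, 0)
fit$convergence  # 0 indicates successful convergence
```

Swapping `method = "Nelder-Mead"` for `"BFGS"` or `"CG"` on a harder objective is exactly the kind of experiment the paragraph above suggests.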

It may be impossible to evaluate the function at badly-specified initial parameter values (for example, if a negative initial value is given for a parameter that must be positive); using box limits on some parameters may help with this.

Even if the calibration converges from different initial values, it may not converge to the same parameter values every time; in general, an underconstrained model can have different parameter sets that fit equally well. For these and other reasons, the user is advised to carefully check the results of calibrations.
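One simple check, sketched below with a toy objective and base R's `optim()` (the objective and values are illustrative, not from `heemod`), is to rerun the optimization from several starting points and compare the fitted parameters:

```
# Toy objective with a unique minimum at c(1.5, 0.3)
f <- function(par) sum((par - c(1.5, 0.3))^2)

# Re-run the optimization from several starting points
starts <- list(c(0, 0), c(1, 1), c(3, 2))
fits <- lapply(starts, function(s) optim(s, f, method = "BFGS")$par)

# Maximum disagreement between any fit and the known optimum; for this
# well-behaved objective all runs agree, whereas disagreement across
# starts in a real calibration signals an underconstrained model
spread <- max(sapply(fits, function(p) max(abs(p - c(1.5, 0.3)))))
spread
```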

[^1]: Calibrating models from `update()` is *extremely* time-consuming.