Evaluates the performance of a random forest model on unseen data over independent spatial folds.
rf_evaluate(
  model = NULL,
  xy = NULL,
  repetitions = 30,
  training.fraction = 0.75,
  metrics = c("r.squared", "pseudo.r.squared", "rmse", "nrmse", "auc"),
  distance.step = NULL,
  distance.step.x = NULL,
  distance.step.y = NULL,
  grow.testing.folds = FALSE,
  seed = 1,
  verbose = TRUE,
  n.cores = parallel::detectCores() - 1,
  cluster = NULL
)
model: Model fitted with rf(), rf_repeat(), or rf_spatial().
xy: Data frame or matrix with two columns named "x" and "y" containing the coordinates of the records. If NULL, the function will throw an error. Default: NULL
repetitions: Integer, number of spatial folds to use during cross-validation. Must be lower than the total number of rows available in the model's data. Default: 30
training.fraction: Proportion between 0.5 and 0.9 indicating the proportion of records to be used as training set during spatial cross-validation. Default: 0.75
metrics: Character vector, names of the performance metrics to compute. The possible values are: "r.squared" (cor(obs, pred)^2), "pseudo.r.squared" (cor(obs, pred)), "rmse" (sqrt(sum((obs - pred)^2)/length(obs))), "nrmse" (rmse/(quantile(obs, 0.75) - quantile(obs, 0.25))), and "auc" (only for binary responses with values 1 and 0). Default: c("r.squared", "pseudo.r.squared", "rmse", "nrmse", "auc")
distance.step: Numeric, argument distance.step of thinning_til_n(). Distance step used during the selection of the centers of the training folds. These fold centers are selected by thinning the data until a number of folds equal to or lower than repetitions is reached. Its default value is 1/1000th of the maximum distance among the records in xy. Reduce it if the number of training folds is lower than expected. Default: NULL
distance.step.x: Numeric, argument distance.step.x of make_spatial_folds(). Distance step used during the growth in the x axis of the buffers defining the training folds. Default: NULL (1/1000th of the range of the x coordinates).
distance.step.y: Numeric, argument distance.step.y of make_spatial_folds(). Distance step used during the growth in the y axis of the buffers defining the training folds. Default: NULL (1/1000th of the range of the y coordinates).
grow.testing.folds: Logical. By default, this function grows contiguous training folds to keep the spatial structure of the data as intact as possible. However, when grow.testing.folds = TRUE, the argument training.fraction is set to 1 - training.fraction, and the training and testing folds are switched. This option might be useful when the training data has a spatial structure that does not match well with the default behavior of the function. Default: FALSE
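A hedged usage sketch, assuming a model fitted with rf() and a coordinate table named xy (both hypothetical names, as in the examples at the end of this page):

#sketch only: grow testing folds instead of training folds
rf.model <- rf_evaluate(
  model = rf.model,
  xy = xy,
  grow.testing.folds = TRUE,
  n.cores = 1
)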
seed: Integer, random seed to facilitate reproducibility. If set to a given number, the results of the function are always the same. Default: 1.
verbose: Logical. If TRUE, messages and plots generated during the execution of the function are displayed. Default: TRUE
n.cores: Integer, number of cores to use for parallel execution. Creates a socket cluster with parallel::makeCluster(), runs operations in parallel with foreach and %dopar%, and stops the cluster with parallel::stopCluster() when the job is done. Default: parallel::detectCores() - 1
cluster: A cluster definition generated with parallel::makeCluster(). If provided, it overrides n.cores. When cluster = NULL (the default value) and model is provided, the cluster stored in model, if any, is used instead. If that cluster is also NULL, then the function uses n.cores instead. The function does not stop a provided cluster, so it should be stopped with parallel::stopCluster() afterwards. The cluster definition is stored in the output list under the name "cluster" so it can be passed to other functions via the model argument, or using the %>% pipe. Default: NULL
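The sketch below (assuming rf.model and xy are already available) illustrates the described workflow of providing and stopping an external cluster:

#user-defined cluster
my.cluster <- parallel::makeCluster(2)

#the cluster overrides n.cores and is stored in the output under "cluster"
rf.model <- rf_evaluate(
  model = rf.model,
  xy = xy,
  cluster = my.cluster
)

#user-provided clusters are not stopped by the function
parallel::stopCluster(my.cluster)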
A model of the class "rf_evaluate" with a new slot named "evaluation", which is a list with the following slots:
training.fraction: Value of the argument training.fraction.
spatial.folds: Result of applying make_spatial_folds() to the data coordinates. It is a list with as many slots as repetitions indicated by the user. Each slot has two slots named "training" and "testing", each containing the indices of the cases used in the training and testing models.
per.fold: Data frame with the evaluation results per spatial fold (or repetition). It contains the ID of each fold, its central coordinates, the number of training and testing cases, and the training and testing performance measures: R squared, pseudo R squared (cor(observed, predicted)), RMSE, and normalized RMSE.
per.model: Same data as above, but organized per fold and model ("Training", "Testing", and "Full").
aggregated: Same data, but aggregated by model and performance measure.
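For example (a sketch assuming an evaluated model named rf.model), these slots can be inspected directly, or retrieved with get_evaluation():

#accessing the slots of the "evaluation" list
rf.model$evaluation$training.fraction
rf.model$evaluation$spatial.folds
rf.model$evaluation$per.fold
rf.model$evaluation$per.model
rf.model$evaluation$aggregated

#get_evaluation() retrieves the evaluation results as a data frame
evaluation.df <- get_evaluation(rf.model)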
The evaluation algorithm works as follows: the number of repetitions and the input dataset (stored in model$ranger.arguments$data) are used as inputs for the function thinning_til_n(), which applies thinning() to the input data until as many cases as repetitions are left, and as spatially separated as possible. Each of these remaining records is used as a "fold center". From that point, the fold grows until the proportion of records it contains is equal (or close) to training.fraction. The indices of the records within the grown spatial fold are stored as "training" in the output list, and the remaining ones as "testing". Then, for each spatial fold, a "training model" is fitted on the cases corresponding to the training indices and used to predict over the cases corresponding to the testing indices. The model predictions on the "unseen" data are compared with the observations, and the performance measures (R squared, pseudo R squared, RMSE, and NRMSE) are computed.
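The following is a simplified, non-spatial sketch of that train/predict/compare loop, using random fold indices and a linear model on simulated data purely to illustrate the structure; the actual implementation relies on thinning_til_n(), make_spatial_folds(), and ranger models instead.

#simulated data (hypothetical, for illustration only)
set.seed(1)
df <- data.frame(y = rnorm(100), x1 = rnorm(100), x2 = rnorm(100))

#hypothetical folds: lists of training and testing indices
folds <- lapply(
  1:5,
  function(i){
    training <- sample(nrow(df), size = 75)
    list(training = training, testing = setdiff(seq_len(nrow(df)), training))
  }
)

#for each fold: fit on the training cases, predict the testing cases,
#and compare predictions with observations
results <- lapply(
  folds,
  function(fold){
    m <- lm(y ~ x1 + x2, data = df[fold$training, ])
    pred <- predict(m, newdata = df[fold$testing, ])
    obs <- df$y[fold$testing]
    rmse <- sqrt(sum((obs - pred)^2) / length(obs))
    data.frame(
      r.squared = cor(obs, pred)^2,
      pseudo.r.squared = cor(obs, pred),
      rmse = rmse,
      nrmse = as.numeric(rmse / (quantile(obs, 0.75) - quantile(obs, 0.25)))
    )
  }
)

#one row of performance measures per fold
do.call(rbind, results)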
if(interactive()){

  #loading example data
  data(plant_richness_df)
  data(distance_matrix)

  #fitting random forest model
  rf.model <- rf(
    data = plant_richness_df,
    dependent.variable.name = "richness_species_vascular",
    predictor.variable.names = colnames(plant_richness_df)[5:21],
    distance.matrix = distance_matrix,
    distance.thresholds = 0,
    n.cores = 1,
    verbose = FALSE
  )

  #evaluating the model with spatial cross-validation
  rf.model <- rf_evaluate(
    model = rf.model,
    xy = plant_richness_df[, c("x", "y")],
    n.cores = 1
  )

  #checking evaluation results
  plot_evaluation(rf.model)
  print_evaluation(rf.model)
  x <- get_evaluation(rf.model)

}