Evaluates the performance of a random forest model on unseen data over independent spatial folds.

```r
rf_evaluate(
  model = NULL,
  xy = NULL,
  repetitions = 30,
  training.fraction = 0.75,
  metrics = c("r.squared", "pseudo.r.squared", "rmse", "nrmse", "auc"),
  distance.step = NULL,
  distance.step.x = NULL,
  distance.step.y = NULL,
  grow.testing.folds = FALSE,
  seed = 1,
  verbose = TRUE,
  n.cores = parallel::detectCores() - 1,
  cluster = NULL
)
```

Argument | Description
---|---
`model` | Model fitted with `rf()`.
`xy` | Data frame or matrix with two columns containing coordinates and named "x" and "y". Default: `NULL`.
`repetitions` | Integer, number of spatial folds to use during cross-validation. Must be lower than the total number of rows available in the model's data. Default: `30`.
`training.fraction` | Proportion between 0.5 and 0.9 indicating the fraction of records to be used as training set during spatial cross-validation. Default: `0.75`.
`metrics` | Character vector, names of the performance metrics selected. The possible values are "r.squared", "pseudo.r.squared", "rmse", "nrmse", and "auc". Default: all of them.
`distance.step` | Numeric, argument `distance.step` of `thinning_til_n()`. Default: `NULL`.
`distance.step.x` | Numeric, argument `distance.step.x` of `make_spatial_folds()`. Default: `NULL`.
`distance.step.y` | Numeric, argument `distance.step.y` of `make_spatial_folds()`. Default: `NULL`.
`grow.testing.folds` | Logical. By default, the function grows contiguous training folds to keep the spatial structure of the data as intact as possible; when set to `TRUE`, the testing folds are grown instead. Default: `FALSE`.
`seed` | Integer, random seed to facilitate reproducibility. If set to a given number, the results of the function are always the same. Default: `1`.
`verbose` | Logical. If `TRUE`, messages generated during the execution of the function are printed. Default: `TRUE`.
`n.cores` | Integer, number of cores to use for parallel execution. Creates a socket cluster with `n.cores` cores, executes the task, and stops the cluster. Default: `parallel::detectCores() - 1`.
`cluster` | A cluster definition generated with `parallel::makeCluster()`. Default: `NULL`.
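When evaluating several models in a row, a user-created cluster can be passed through the `cluster` argument to avoid creating and stopping a socket cluster on every call. A minimal sketch, assuming a model `rf.model` fitted with `rf()` and its coordinates in a data frame `xy` (both hypothetical names):

```r
#create a socket cluster once and reuse it across calls
my.cluster <- parallel::makeCluster(2)

rf.model <- rf_evaluate(
  model = rf.model,
  xy = xy,
  cluster = my.cluster
)

#stop the cluster manually once all evaluations are done
parallel::stopCluster(my.cluster)
```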

A model of the class "rf_evaluate" with a new slot named "evaluation", which is a list with the following slots:

`training.fraction`

: Value of the argument `training.fraction`.

`spatial.folds`

: Result of applying `make_spatial_folds()` on the data coordinates. It is a list with as many slots as `repetitions` indicated by the user. Each slot has two slots named "training" and "testing", each one containing the indices of the cases used in the training and testing models.

`per.fold`

: Data frame with the evaluation results per spatial fold (or repetition). It contains the ID of each fold, its central coordinates, the number of training and testing cases, and the training and testing performance measures: R squared, pseudo R squared (cor(observed, predicted)), RMSE, and normalized RMSE.

`per.model`

: Same data as above, but organized per fold and model ("Training", "Testing", and "Full").

`aggregated`

: Same data, but aggregated by model and performance measure.
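The slots above can be inspected directly on the returned model, or retrieved via `get_evaluation()`. A minimal sketch of direct access, assuming a model `rf.model` (hypothetical name) already processed by `rf_evaluate()`:

```r
#aggregated performance scores per model
rf.model$evaluation$aggregated

#per-fold results, including fold centers and number of cases
rf.model$evaluation$per.fold

#indices of the training and testing cases of the first spatial fold
rf.model$evaluation$spatial.folds[[1]]$training
rf.model$evaluation$spatial.folds[[1]]$testing
```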

The evaluation algorithm works as follows: the number of `repetitions` and the input dataset (stored in `model$ranger.arguments$data`) are used as inputs for the function `thinning_til_n()`, which applies `thinning()` to the input data until only as many cases as `repetitions` are left, as spatially separated as possible. Each of these remaining records is used as a "fold center". From that point, the fold grows until a proportion of points equal (or close) to `training.fraction` is reached. The indices of the records within the grown spatial fold are stored as "training" in the output list, and the remaining ones as "testing". Then, for each spatial fold, a "training model" is fitted using the cases corresponding to the training indices, and predicted over the cases corresponding to the testing indices. The model predictions on the "unseen" data are compared with the observations, and the performance measures (R squared, pseudo R squared, RMSE, and NRMSE) are computed.
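For reference, the testing metrics named above can be reproduced from a pair of observed and predicted vectors. A minimal sketch using the definitions given in this document (pseudo R squared as cor(observed, predicted)); note that the normalization used for NRMSE below (range of the observations) is one common choice and an assumption, since the exact normalization applied internally is not described here:

```r
#toy observed and predicted vectors (made-up values)
observed  <- c(10, 25, 14, 30, 22)
predicted <- c(12, 23, 15, 27, 20)

#pseudo R squared: correlation between observations and predictions
pseudo.r.squared <- cor(observed, predicted)

#root mean squared error
rmse <- sqrt(mean((observed - predicted)^2))

#normalized rmse (normalization by range is an assumption)
nrmse <- rmse / (max(observed) - min(observed))
```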

```r
if(interactive()){

  #loading example data
  data(plant_richness_df)
  data(distance_matrix)

  #fitting random forest model
  rf.model <- rf(
    data = plant_richness_df,
    dependent.variable.name = "richness_species_vascular",
    predictor.variable.names = colnames(plant_richness_df)[5:21],
    distance.matrix = distance_matrix,
    distance.thresholds = 0,
    n.cores = 1,
    verbose = FALSE
  )

  #evaluation with spatial cross-validation
  rf.model <- rf_evaluate(
    model = rf.model,
    xy = plant_richness_df[, c("x", "y")],
    n.cores = 1
  )

  #checking evaluation results
  plot_evaluation(rf.model)
  print_evaluation(rf.model)
  x <- get_evaluation(rf.model)

}
```