Extracts aggregated performance metrics from a model evaluated with rf_evaluate().
Arguments
- model
Model object with class rf_evaluate, as returned by rf_evaluate().
Value
Data frame with aggregated evaluation metrics containing:
- model: Model type: "Full" (the original model), "Training" (model trained on the training folds), or "Testing" (performance on the testing folds, representing generalization ability).
- metric: Metric name: "rmse", "nrmse", "r.squared", or "pseudo.r.squared".
- mean, sd, min, max: Summary statistics across cross-validation repetitions.
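As a minimal sketch of working with these columns (assuming m_evaluated is an object already processed with rf_evaluate(), as in the Examples below), the "Testing" rows can be isolated with base R subsetting:

# assumes m_evaluated comes from rf_evaluate(), as in the Examples below
eval_metrics <- get_evaluation(m_evaluated)

# keep only the "Testing" rows, which summarize generalization performance
testing_metrics <- eval_metrics[eval_metrics$model == "Testing", ]
testing_metrics[, c("metric", "mean", "sd", "min", "max")]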
Details
This function returns aggregated statistics across all cross-validation repetitions. The "Testing" model metrics indicate the model's ability to generalize to unseen spatial locations.
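For instance, a quick check for overfitting (a sketch, again assuming an m_evaluated object produced by rf_evaluate()) is to compare the mean r.squared of the "Training" and "Testing" models:

r2 <- get_evaluation(m_evaluated)
r2 <- r2[r2$metric == "r.squared", ]

# a large drop from "Training" to "Testing" suggests limited spatial generalization
r2$mean[r2$model == "Training"] - r2$mean[r2$model == "Testing"]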
See also
rf_evaluate(), plot_evaluation(), print_evaluation()
Other model_info:
get_importance(),
get_importance_local(),
get_moran(),
get_performance(),
get_predictions(),
get_residuals(),
get_response_curves(),
get_spatial_predictors(),
print.rf(),
print_evaluation(),
print_importance(),
print_moran(),
print_performance()
Examples
if(interactive()){

  data(plants_rf, plants_xy)

  # Evaluate model with spatial cross-validation
  m_evaluated <- rf_evaluate(
    model = plants_rf,
    xy = plants_xy,
    repetitions = 5,
    n.cores = 1
  )

  # Extract evaluation metrics
  eval_metrics <- get_evaluation(m_evaluated)
  eval_metrics
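
  # A sketch of typical follow-ups: the companion functions listed
  # under 'See also' summarize the same cross-validation results
  print_evaluation(m_evaluated)
  plot_evaluation(m_evaluated)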
}