Computes aggregated statistics from bootstrap AUC iterations. This function processes the raw output of auc_parallel to produce meaningful summary metrics of the partial ROC test.
Arguments
- auc_results: Numeric matrix output from auc_parallel (dimensions: n_iterations x 4)
- has_complete_auc: Boolean indicating whether the complete AUC was computed in the bootstrap iterations (affects the first summary column)
Value
A numeric matrix with 1 row and 5 columns containing:
- mean_complete_auc: Mean of complete AUC values (NA if not computed)
- mean_pauc: Mean of partial AUC values for the model
- mean_pauc_rand: Mean of partial AUC values for the random model (reference)
- mean_auc_ratio: Mean of AUC ratios (model/random)
- prop_ratio_gt1: Proportion of iterations where the ratio > 1 (performance better than random)
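Because the result is a plain one-row matrix, individual metrics can be read by column index. A minimal illustration, using the column order documented above (auc_matrix here refers to the simulated bootstrap matrix constructed in the Examples below):

res <- summarize_auc_results(auc_matrix, has_complete_auc = TRUE)
res[1, 4]  # mean AUC ratio (model/random)
res[1, 5]  # proportion of iterations where the ratio exceeded 1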
Details
This function:
1. Filters out iterations with non-finite ratio values (handles bootstrap failures)
2. Computes the mean of each AUC metric across valid iterations
3. Calculates the proportion of iterations where the model outperforms random (ratio > 1); this proportion is how the p-value of the test is obtained (see the sketch after the list below)
Special handling:
- Returns all NAs if no valid iterations exist
- The first column (complete AUC) depends on the has_complete_auc parameter
- Handles NaN/Inf values safely by filtering
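For intuition, the steps above amount to the following sketch. This is an illustrative reimplementation of the documented behavior, not the package's actual code; summarize_sketch is a hypothetical name:

# Illustrative sketch only; assumes the 4-column layout of auc_parallel output
summarize_sketch <- function(auc_results, has_complete_auc = TRUE) {
  # 1. Keep only iterations whose ratio (column 4) is finite
  valid <- auc_results[is.finite(auc_results[, 4]), , drop = FALSE]
  # Return all NAs when no valid iterations remain
  if (nrow(valid) == 0) {
    return(matrix(NA_real_, nrow = 1, ncol = 5))
  }
  # 2.-3. Means per metric, plus the proportion of ratios above 1
  matrix(c(
    if (has_complete_auc) mean(valid[, 1]) else NA_real_,  # complete AUC
    mean(valid[, 2]),     # partial AUC, model
    mean(valid[, 3]),     # partial AUC, random reference
    mean(valid[, 4]),     # AUC ratio
    mean(valid[, 4] > 1)  # proportion of iterations with ratio > 1
  ), nrow = 1)
}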
Interpretation Guide
- mean_auc_ratio > 1: Model generally outperforms random predictions
- prop_ratio_gt1 = 0.9: The ratio exceeded 1 in 90% of bootstrap iterations (see the snippet after this list)
- mean_pauc: Absolute performance measure (higher = better discrimination)
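As a quick worked reading (auc_summary and the "p_value" column name follow the naming assigned in the Examples below; the 0.95 cutoff is an arbitrary choice for illustration):

# Hypothetical interpretation check, not part of the package
ratio_ok <- auc_summary[1, "mean_pAUCratio"] > 1  # better than random on average
support  <- auc_summary[1, "p_value"]             # proportion of ratios > 1
if (ratio_ok && support >= 0.95) {
  message("Model outperformed random in at least 95% of bootstrap iterations")
}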
See also
auc_parallel for generating the input matrix
Examples
# Basic usage with simulated results
set.seed(123)
# Simulate bootstrap output (100 iterations x 4 metrics)
auc_matrix <- cbind(
complete = rnorm(100, 0.85, 0.05), # Complete AUC
pmodel = rnorm(100, 0.15, 0.03), # Partial model AUC
prand = rnorm(100, 0.08, 0.02), # Partial random AUC
ratio = rnorm(100, 1.9, 0.4) # Ratio
)
# Summarize results (assuming complete AUC was computed)
auc_summary <- summarize_auc_results(auc_matrix, has_complete_auc = TRUE)
# Typical output interpretation:
# - mean_complete_auc: 0.85 (good overall discrimination)
# - mean_pauc: 0.15 (absolute partial AUC)
# - mean_pauc_rand: 0.08 (random expectation)
# - mean_pAUCratio: 1.9 (model 90% better than random)
# - p_value: 0.98 (98% of iterations showed model > random)
# Real-world usage with actual AUC function output
# \donttest{
# First run bootstrap AUC calculation
bg_pred <- runif(1000)
test_pred <- runif(500)
auc_output <- auc_parallel(
test_prediction = test_pred,
prediction = bg_pred,
iterations = 100
)
# Then summarize results (complete AUC not computed in this case)
auc_summary <- summarize_auc_results(auc_output, has_complete_auc = FALSE)
# Print summary statistics
colnames(auc_summary) <- c("mean_complete_auc", "mean_pauc",
                           "mean_pauc_rand", "mean_pAUCratio", "p_value")
print(auc_summary)
#> mean_complete_auc mean_pauc mean_pauc_rand mean_pAUCratio p_value
#> [1,] NA 0.0507199 0.05050317 1.003634 0.31
# Expected output structure:
# mean_complete_auc mean_pauc mean_pauc_rand mean_pAUCratio p_value
# [1,] NA 0.152 0.083 1.83 0.94
# }