tidymodels-tree1

statlearning
trees
tidymodels
string
Published: November 8, 2023

library(tidymodels)
── Attaching packages ────────────────────────────────────── tidymodels 1.1.1 ──
✔ broom        1.0.5     ✔ recipes      1.0.8
✔ dials        1.2.0     ✔ rsample      1.2.0
✔ dplyr        1.1.3     ✔ tibble       3.2.1
✔ ggplot2      3.4.4     ✔ tidyr        1.3.0
✔ infer        1.0.5     ✔ tune         1.1.2
✔ modeldata    1.2.0     ✔ workflows    1.1.3
✔ parsnip      1.1.1     ✔ workflowsets 1.0.1
✔ purrr        1.0.2     ✔ yardstick    1.2.0
── Conflicts ───────────────────────────────────────── tidymodels_conflicts() ──
✖ purrr::discard() masks scales::discard()
✖ dplyr::filter()  masks stats::filter()
✖ dplyr::lag()     masks stats::lag()
✖ recipes::step()  masks stats::step()
• Dig deeper into tidy modeling with R at https://www.tmwr.org

Exercise

Fit the following predictive models and compare their performance:

  1. Decision tree
  2. Bagging (bootstrapped trees)

Model formula: am ~ . (dataset: mtcars)

Report the model performance (ROC AUC).

Notes:

  • Tune all parameters that the engine offers.
  • Use the defaults unless stated otherwise.
  • Use v = 2-fold cross-validation (because the sample is so small).
  • Mind the usual guidelines.

Solution

Setup

library(tidymodels)
data(mtcars)
library(tictoc)  # timing
library(baguette)

For classification, tidymodels requires a nominal (factor) outcome, not a numeric one:

mtcars <-
  mtcars %>% 
  mutate(am = factor(am))
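
A quick sanity check of the class distribution (a small addition, not part of the original code):

mtcars %>% 
  count(am)
# am = 0: 19 cars, am = 1: 13 cars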

Splitting the data

d_split <- initial_split(mtcars)
d_train <- training(d_split)
d_test <- testing(d_split)
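
By default, initial_split() puts prop = 3/4 of the rows into training, i.e. 24 training and 8 test rows here; this matches the [12/12] analysis/assessment splits shown further below. A quick check (added for illustration):

c(train = nrow(d_train), test = nrow(d_test), total = nrow(mtcars))
# train  test total 
#    24     8    32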

Model(s)

mod_tree <-
  decision_tree(mode = "classification",
                cost_complexity = tune(),
                tree_depth = tune(),
                min_n = tune())

mod_bag <-
  bag_tree(mode = "classification",
           cost_complexity = tune(),
           tree_depth = tune(),
           min_n = tune())
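
Both specifications rely on the default engine: parsnip fits decision_tree() with rpart, and baguette's bag_tree() also uses rpart, fitting 11 bootstrapped trees per resample by default (which is why the log below speaks of "11 warnings"). Written out explicitly, the bagging spec would look like this (an equivalent sketch; times = 11 restates the default):

mod_bag_explicit <-
  bag_tree(mode = "classification",
           cost_complexity = tune(),
           tree_depth = tune(),
           min_n = tune()) %>% 
  set_engine("rpart", times = 11)  # times = number of bagged trees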

Recipe(s)

rec_plain <- 
  recipe(am ~ ., data = d_train)

Resampling

rsmpl <- vfold_cv(d_train, v = 2)
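
With 24 training rows, each analysis set in 2-fold CV holds only 12 rows; this interacts with the min_n tuning values later on. A quick illustrative check, using rsample's analysis() accessor:

rsmpl %>% 
  mutate(n_analysis = map_int(splits, ~ nrow(analysis(.x))))
# each of the two folds should show n_analysis = 12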

Workflows

wf_tree <-
  workflow() %>%  
  add_recipe(rec_plain) %>% 
  add_model(mod_tree)
wf_bag <-
  workflow() %>%  
  add_recipe(rec_plain) %>% 
  add_model(mod_bag)
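
As an aside: since both workflows share the same recipe, the comparison could also be run in one pass with workflowsets (a sketch of an alternative, not the approach taken below):

wf_set <-
  workflow_set(preproc = list(plain = rec_plain),
               models  = list(tree = mod_tree, bag = mod_bag))

wf_set_res <-
  workflow_map(wf_set,
               fn = "tune_grid",
               resamples = rsmpl,
               grid = 25,
               metrics = metric_set(roc_auc))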

Tuning/Fitting

Tuning grid:

tune_grid <- grid_regular(extract_parameter_set_dials(mod_tree), levels = 5)
tune_grid
# A tibble: 125 × 3
   cost_complexity tree_depth min_n
             <dbl>      <int> <int>
 1    0.0000000001          1     2
 2    0.0000000178          1     2
 3    0.00000316            1     2
 4    0.000562              1     2
 5    0.1                   1     2
 6    0.0000000001          4     2
 7    0.0000000178          4     2
 8    0.00000316            4     2
 9    0.000562              4     2
10    0.1                   4     2
# ℹ 115 more rows

Since both models have the same tuning parameters, we only need to create one grid. (Note that the object name tune_grid shadows the function tune_grid(); R still resolves the function call below correctly, but a distinct name such as my_grid would be cleaner.)
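
The grid values come from the default dials parameter ranges (shown here for reference; the comments restate the current dials defaults):

cost_complexity()  # default range: 10^-10 to 10^-1 (log10 scale)
tree_depth()       # default range: 1 to 15
min_n()            # default range: 2 to 40

With levels = 5, min_n is thus tried at 2, 11, 21, 30 and 40. The last three values exceed the 12 rows per analysis set, which is exactly what the warnings below complain about; per the warning text, the engine then falls back to using all 12 rows.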

tic()
fit_tree <-
  tune_grid(object = wf_tree,
            grid = tune_grid,
            metrics = metric_set(roc_auc),
            resamples = rsmpl)
→ A | warning: 21 samples were requested but there were 12 rows in the data. 12 will be used.
There were issues with some computations   A: x1
There were issues with some computations   A: x11
→ B | warning: 30 samples were requested but there were 12 rows in the data. 12 will be used.
There were issues with some computations   A: x11
There were issues with some computations   A: x25   B: x16
→ C | warning: 40 samples were requested but there were 12 rows in the data. 12 will be used.
There were issues with some computations   A: x25   B: x16
There were issues with some computations   A: x25   B: x25   C: x24
There were issues with some computations   A: x26   B: x25   C: x25
There were issues with some computations   A: x50   B: x31   C: x25
There were issues with some computations   A: x50   B: x50   C: x38
There were issues with some computations   A: x50   B: x50   C: x50
toc()
20.49 sec elapsed
fit_tree
# Tuning results
# 2-fold cross-validation 
# A tibble: 2 × 4
  splits          id    .metrics           .notes           
  <list>          <chr> <list>             <list>           
1 <split [12/12]> Fold1 <tibble [125 × 7]> <tibble [75 × 3]>
2 <split [12/12]> Fold2 <tibble [125 × 7]> <tibble [75 × 3]>

There were issues with some computations:

  - Warning(s) x50: 21 samples were requested but there were 12 rows in the data. 12 ...
  - Warning(s) x50: 30 samples were requested but there were 12 rows in the data. 12 ...
  - Warning(s) x50: 40 samples were requested but there were 12 rows in the data. 12 ...

Run `show_notes(.Last.tune.result)` for more information.
tic()
fit_bag <-
  tune_grid(object = wf_bag,
            grid = tune_grid,
            metrics = metric_set(roc_auc),
            resamples = rsmpl)
→ A | warning: There were 11 warnings in `dplyr::mutate()`.
               The first warning was:
               ℹ In argument: `model = iter(...)`.
               Caused by warning:
               ! 21 samples were requested but there were 12 rows in the data. 12 will be used.
               ℹ Run `dplyr::last_dplyr_warnings()` to see the 10 remaining warnings.
There were issues with some computations   A: x1
There were issues with some computations   A: x7
There were issues with some computations   A: x13
There were issues with some computations   A: x19
There were issues with some computations   A: x25
→ B | warning: There were 11 warnings in `dplyr::mutate()`.
               The first warning was:
               ℹ In argument: `model = iter(...)`.
               Caused by warning:
               ! 30 samples were requested but there were 12 rows in the data. 12 will be used.
               ℹ Run `dplyr::last_dplyr_warnings()` to see the 10 remaining warnings.
There were issues with some computations   A: x25
There were issues with some computations   A: x25   B: x5
There were issues with some computations   A: x25   B: x11
There were issues with some computations   A: x25   B: x16
There were issues with some computations   A: x25   B: x22
→ C | warning: There were 11 warnings in `dplyr::mutate()`.
               The first warning was:
               ℹ In argument: `model = iter(...)`.
               Caused by warning:
               ! 40 samples were requested but there were 12 rows in the data. 12 will be used.
               ℹ Run `dplyr::last_dplyr_warnings()` to see the 10 remaining warnings.
There were issues with some computations   A: x25   B: x22
There were issues with some computations   A: x25   B: x25   C: x3
There were issues with some computations   A: x25   B: x25   C: x9
There were issues with some computations   A: x25   B: x25   C: x14
There were issues with some computations   A: x25   B: x25   C: x19
There were issues with some computations   A: x25   B: x25   C: x25
There were issues with some computations   A: x26   B: x25   C: x25
There were issues with some computations   A: x30   B: x25   C: x25
There were issues with some computations   A: x35   B: x25   C: x25
There were issues with some computations   A: x40   B: x25   C: x25
There were issues with some computations   A: x47   B: x25   C: x25
There were issues with some computations   A: x50   B: x27   C: x25
There were issues with some computations   A: x50   B: x33   C: x25
There were issues with some computations   A: x50   B: x39   C: x25
There were issues with some computations   A: x50   B: x45   C: x25
There were issues with some computations   A: x50   B: x50   C: x27
There were issues with some computations   A: x50   B: x50   C: x33
There were issues with some computations   A: x50   B: x50   C: x38
There were issues with some computations   A: x50   B: x50   C: x44
There were issues with some computations   A: x50   B: x50   C: x50
There were issues with some computations   A: x50   B: x50   C: x50
toc()
112.989 sec elapsed

Best candidate

show_best(fit_tree)
# A tibble: 5 × 9
  cost_complexity tree_depth min_n .metric .estimator  mean     n std_err
            <dbl>      <int> <int> <chr>   <chr>      <dbl> <int>   <dbl>
1    0.0000000001          1     2 roc_auc binary     0.847     2  0.0694
2    0.0000000178          1     2 roc_auc binary     0.847     2  0.0694
3    0.00000316            1     2 roc_auc binary     0.847     2  0.0694
4    0.000562              1     2 roc_auc binary     0.847     2  0.0694
5    0.1                   1     2 roc_auc binary     0.847     2  0.0694
# ℹ 1 more variable: .config <chr>
show_best(fit_bag)
# A tibble: 5 × 9
  cost_complexity tree_depth min_n .metric .estimator  mean     n std_err
            <dbl>      <int> <int> <chr>   <chr>      <dbl> <int>   <dbl>
1    0.000562              8     2 roc_auc binary     0.889     2  0.111 
2    0.0000000178          4    40 roc_auc binary     0.889     2  0.111 
3    0.1                  15    11 roc_auc binary     0.884     2  0.0880
4    0.00000316            8    11 roc_auc binary     0.875     2  0.0972
5    0.000562              4    30 roc_auc binary     0.875     2  0.0972
# ℹ 1 more variable: .config <chr>
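
show_best() picks roc_auc here because it is the only metric that was computed; stating the metric explicitly is more robust (an equivalent, illustrative call):

show_best(fit_bag, metric = "roc_auc", n = 3)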

Bagging achieved clearly better performance (in the validation samples) than the decision-tree model.

Finalize

wf_best_finalized <-
  wf_bag %>% 
  finalize_workflow(select_best(fit_bag))

Last Fit

final_fit <- 
  last_fit(object = wf_best_finalized, d_split)

collect_metrics(final_fit)
# A tibble: 2 × 4
  .metric  .estimator .estimate .config             
  <chr>    <chr>          <dbl> <chr>               
1 accuracy binary         0.875 Preprocessor1_Model1
2 roc_auc  binary         0.906 Preprocessor1_Model1

Note that here the test-sample ROC AUC (0.906) actually comes out slightly higher than the best validation estimate (0.889). With a sample this small, such estimates are very noisy; the more typical pattern, test performance falling below train/validation performance, need not appear in any single split.
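
The test-set predictions stored in final_fit can also be inspected directly, e.g. to draw a ROC curve (a sketch; .pred_0 is the probability column for the first factor level, which yardstick treats as the event by default):

final_fit %>% 
  collect_predictions() %>% 
  roc_curve(truth = am, .pred_0) %>% 
  autoplot()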

