germeval05

Published: November 16, 2023

Task

Build a predictive model for text data. Use word vectors for feature engineering.

Use the GermEval 2018 data.

The data are licensed under CC-BY-4.0. Author: Wiegand, Michael (Spoken Language Systems, Saarland University (2010-2018); Leibniz Institute for the German Language (since 2019)).

The data can also be obtained via the R package PradaData.

library(tidyverse)
data("germeval_train", package = "pradadata")
data("germeval_test", package = "pradadata")

The outcome variable is c1. The (single) predictor is text.
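
For context when judging the metrics later, it helps to look at the class balance of the outcome. A minimal sketch (not part of the original task), using the training data as loaded above:

germeval_train |> 
  count(c1) |> 
  mutate(prop = n / sum(n))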

Hints:

Solution

library(tictoc)
library(tidymodels)
library(syuzhet)
library(beepr)
library(textrecipes)

# Keep only the ID, the outcome (c1), and the single predictor (text):
d_train <-
  germeval_train |> 
  select(id, c1, text)

A template for a tidymodels pipeline can be found here.

Importing word vectors

library(textdata)

# GloVe 6B (50-dimensional) vectors; manual_download = TRUE assumes the
# archive has already been downloaded to the given directory.
glove_embedding <- embedding_glove6b(
  dir = "/Users/sebastiansaueruser/datasets",
  return_path = TRUE,
  manual_download = TRUE
)

head(glove_embedding)
# A tibble: 6 × 51
  token     d1      d2     d3      d4    d5      d6     d7     d8        d9
  <chr>  <dbl>   <dbl>  <dbl>   <dbl> <dbl>   <dbl>  <dbl>  <dbl>     <dbl>
1 the   0.418   0.250  -0.412  0.122  0.345 -0.0445 -0.497 -0.179 -0.000660
2 ,     0.0134  0.237  -0.169  0.410  0.638  0.477  -0.429 -0.556 -0.364   
3 .     0.152   0.302  -0.168  0.177  0.317  0.340  -0.435 -0.311 -0.450   
4 of    0.709   0.571  -0.472  0.180  0.544  0.726   0.182 -0.524  0.104   
5 to    0.680  -0.0393  0.302 -0.178  0.430  0.0322 -0.414  0.132 -0.298   
6 and   0.268   0.143  -0.279  0.0163 0.114  0.699  -0.513 -0.474 -0.331   
# ℹ 41 more variables: d10 <dbl>, d11 <dbl>, d12 <dbl>, d13 <dbl>, d14 <dbl>,
#   d15 <dbl>, d16 <dbl>, d17 <dbl>, d18 <dbl>, d19 <dbl>, d20 <dbl>,
#   d21 <dbl>, d22 <dbl>, d23 <dbl>, d24 <dbl>, d25 <dbl>, d26 <dbl>,
#   d27 <dbl>, d28 <dbl>, d29 <dbl>, d30 <dbl>, d31 <dbl>, d32 <dbl>,
#   d33 <dbl>, d34 <dbl>, d35 <dbl>, d36 <dbl>, d37 <dbl>, d38 <dbl>,
#   d39 <dbl>, d40 <dbl>, d41 <dbl>, d42 <dbl>, d43 <dbl>, d44 <dbl>,
#   d45 <dbl>, d46 <dbl>, d47 <dbl>, d48 <dbl>, d49 <dbl>, d50 <dbl>

Workflow

# model:
mod1 <-
  logistic_reg()


# cv:
set.seed(42)
rsmpl <- vfold_cv(d_train, v = 5)


# recipe:
rec1 <-
  recipe(c1 ~ ., data = d_train) |> 
  update_role(id, new_role = "id")  |> 
  step_tokenize(text) |> 
  # note: step_stopwords() defaults to the English stop word list
  step_stopwords(text, keep = FALSE) |> 
  # one averaged 50-dimensional GloVe vector per document:
  step_word_embeddings(text,
                       embeddings = glove_embedding,
                       aggregation = "mean") |> 
  step_normalize(all_numeric_predictors()) 


# workflow:
wf1 <-
  workflow() %>% 
  add_model(mod1) %>% 
  add_recipe(rec1)
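
To see what the recipe actually hands to the model, it can be prepped and baked before fitting. A small sketch, not part of the original workflow; with new_data = NULL, bake() returns the processed training data, i.e. the averaged embedding columns:

rec1 |> 
  prep() |> 
  bake(new_data = NULL) |> 
  glimpse()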

Tuning/Fitting

tic()
wf1_fit <-
  wf1 %>% 
  fit_resamples(
    resamples = rsmpl,
    metrics = metric_set(accuracy, f_meas, roc_auc),
    control = control_grid(verbose = TRUE))
toc()
26.374 sec elapsed
beep()
wf1_fit |> collect_metrics()
# A tibble: 3 × 6
  .metric  .estimator  mean     n std_err .config             
  <chr>    <chr>      <dbl> <int>   <dbl> <chr>               
1 accuracy binary     0.656     5 0.00726 Preprocessor1_Model1
2 f_meas   binary     0.129     5 0.0156  Preprocessor1_Model1
3 roc_auc  binary     0.593     5 0.00947 Preprocessor1_Model1

Best configuration:

show_best(wf1_fit)
# A tibble: 1 × 6
  .metric  .estimator  mean     n std_err .config             
  <chr>    <chr>      <dbl> <int>   <dbl> <chr>               
1 accuracy binary     0.656     5 0.00726 Preprocessor1_Model1

Fit

fit1 <- 
  wf1 |> 
  fit(data = d_train)
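
Since the model is a plain logistic regression, its coefficients on the embedding dimensions can be inspected. A sketch (not in the original post) via the fitted workflow and broom:

fit1 |> 
  extract_fit_parsnip() |>   # pull the underlying parsnip/glm fit
  tidy() |>                  # one row per coefficient
  arrange(desc(abs(estimate)))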

Test set performance

Predictions on the test set:

tic()
preds <-
  predict(fit1, new_data = germeval_test)
toc()
2.229 sec elapsed

Then add the predictions to the test set so that truth and estimate can be compared:

d_test <-
  germeval_test |> 
  bind_cols(preds) |> 
  mutate(c1 = as.factor(c1))
my_metrics <- metric_set(accuracy, f_meas)
my_metrics(d_test,
           truth = c1,
           estimate = .pred_class)
# A tibble: 2 × 3
  .metric  .estimator .estimate
  <chr>    <chr>          <dbl>
1 accuracy binary         0.652
2 f_meas   binary         0.138
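
Given the low F1 score, a confusion matrix shows where the model errs. A small sketch using yardstick on the augmented test set:

conf_mat(d_test,
         truth = c1,
         estimate = .pred_class)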

Conclusion

glove6b is pretrained on English. That makes little sense for a German-language corpus.
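
A natural follow-up would be to plug in German word vectors instead. The sketch below is not from the original post; it only illustrates the table layout that step_word_embeddings() expects (a token column plus numeric dimension columns). de_embedding and its toy values are placeholders; a real table could be built from German fastText or word2vec vectors.

library(tibble)

# Toy stand-in for a real German embedding table (placeholder values):
de_embedding <- tribble(
  ~token,    ~d1,   ~d2,   ~d3,
  "der",    0.12, -0.40,  0.33,
  "die",    0.10, -0.35,  0.31,
  "nicht", -0.22,  0.18, -0.05
)

# Same recipe as above, with German stop words and the English GloVe
# table swapped out for the (placeholder) German embeddings:
rec_de <-
  recipe(c1 ~ ., data = d_train) |> 
  update_role(id, new_role = "id") |> 
  step_tokenize(text) |> 
  step_stopwords(text, language = "de") |> 
  step_word_embeddings(text,
                       embeddings = de_embedding,
                       aggregation = "mean") |> 
  step_normalize(all_numeric_predictors())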


Categories:

  • textmining
  • datawrangling
  • germeval
  • prediction
  • tidymodels
  • wordvec
  • string