Description

Utilities for Scoring and Assessing Predictions.

Provides a collection of metrics and proper scoring rules (Tilmann Gneiting & Adrian E Raftery (2007) <doi:10.1198/016214506000001437>, Jordan, A., Krüger, F., & Lerch, S. (2019) <doi:10.18637/jss.v090.i12>) within a consistent framework for evaluation, comparison and visualisation of forecasts. In addition to proper scoring rules, functions are provided to assess bias, sharpness and calibration (Sebastian Funk, Anton Camacho, Adam J. Kucharski, Rachel Lowe, Rosalind M. Eggo, W. John Edmunds (2019) <doi:10.1371/journal.pcbi.1006785>) of forecasts. Several types of predictions (e.g. binary, discrete, continuous) which may come in different formats (e.g. forecasts represented by predictive samples or by quantiles of the predictive distribution) can be evaluated. Scoring metrics can be used either through a convenient data.frame format, or can be applied as individual functions in a vector / matrix format. All functionality has been implemented with a focus on performance and is robustly tested. Find more information about the package in the accompanying paper (<doi:10.48550/arXiv.2205.07090>).

scoringutils: Utilities for Scoring and Assessing Predictions


The scoringutils package provides a collection of metrics and proper scoring rules and aims to make it simple to score probabilistic forecasts against the true observed values.

You can find additional information and examples in the papers Evaluating Forecasts with scoringutils in R and Scoring epidemiological forecasts on transformed scales, as well as in the package vignettes (Getting started, Details on the metrics implemented, and Scoring forecasts directly).

The scoringutils package offers convenient automated forecast evaluation through the function score(). The function operates on data.frames (it uses data.table internally for speed and efficiency) and can easily be integrated into a workflow based on dplyr or data.table. It also provides experienced users with a set of reliable lower-level scoring metrics operating on vectors/matrices, which they can build upon in other applications. In addition, it implements a wide range of flexible plots designed to cover many use cases.
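
For example, the lower-level metrics can be applied directly to vectors. A minimal sketch, assuming the interval_score() vector function as documented in scoringutils 1.x (see ?interval_score for the exact signature):

library(scoringutils)

# Score three 90% central prediction intervals directly on vectors;
# lower/upper are the interval bounds, interval_range the nominal level.
interval_score(
  true_values    = c(100, 120, 130),
  lower          = c(80, 95, 100),
  upper          = c(140, 150, 165),
  interval_range = 90
)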

Where available, scoringutils relies on functionality from scoringRules, which provides a comprehensive collection of proper scoring rules for predictive probability distributions represented by samples or parametric families. For some forecast types, such as quantile forecasts, scoringutils implements additional metrics. On top of providing an interface to the proper scoring rules implemented in scoringRules and natively, scoringutils also offers utilities for summarising and visualising forecasts and scores, and for obtaining relative scores between models, which may be useful for non-overlapping forecasts and forecasts across scales.
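
Relative scores are based on pairwise model comparisons. A minimal sketch, assuming the pairwise_comparison() function as documented in scoringutils 1.x (see ?pairwise_comparison):

library(scoringutils)
library(magrittr) # pipe operator

# Score the example data, then compare all pairs of models and scale
# relative skill to a chosen baseline model.
example_quantile %>%
  check_forecasts() %>%
  score() %>%
  pairwise_comparison(baseline = "EuroCOVIDhub-ensemble") %>%
  head()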

Predictions can be handled in various formats: scoringutils can handle probabilistic forecasts in either a sample-based or a quantile-based format. For more detail on the expected input formats, please see below. True values can be integer, continuous or binary, and appropriate scores for each of these value types are selected automatically.
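
The bundled example datasets illustrate these formats, and inspecting them is a quick way to see the expected columns (dataset names as in scoringutils 1.x):

library(scoringutils)

# example_quantile   - one row per forecast unit and quantile level
# example_continuous - one row per predictive sample (continuous targets)
# example_integer    - one row per predictive sample (integer targets)
# example_binary     - one row per binary prediction
head(example_quantile)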

Installation

Install the CRAN version of this package using:

install.packages("scoringutils")

Install the stable development version of the package with:

install.packages("scoringutils", repos = "https://epiforecasts.r-universe.dev")

Install the unstable development version from GitHub using:

remotes::install_github("epiforecasts/scoringutils", dependencies = TRUE)

Quick start

In this quick start guide we explore some of the functionality of the scoringutils package using quantile forecasts from the ECDC forecasting hub as an example. For more detailed documentation please see the package vignettes and the individual function documentation.

Plotting forecasts

As a first step to evaluating the forecasts we visualise them. For the purposes of this example, we use make_NA() to filter the available forecasts down to a single model and forecast date, and plot_predictions() to plot them.

library(scoringutils)
library(magrittr) # pipe operator
library(ggplot2)  # facet_wrap()

example_quantile %>%
  make_NA(what = "truth", 
          target_end_date >= "2021-07-15", 
          target_end_date < "2021-05-22"
  ) %>%
  make_NA(what = "forecast",
          model != "EuroCOVIDhub-ensemble", 
          forecast_date != "2021-06-28"
  ) %>%
  plot_predictions(
    x = "target_end_date",
    by = c("target_type", "location")
  ) +
  facet_wrap(target_type ~ location, ncol = 4, scales = "free") 

Scoring forecasts

Forecasts can be easily and quickly scored using the score() function. score() automatically tries to determine the forecast_unit, i.e. the set of columns that uniquely defines a single forecast, by taking all column names of the data into account. However, it is recommended to set the forecast unit manually using set_forecast_unit(), as this helps to avoid errors, especially when scoringutils is used in automated pipelines; set_forecast_unit() simply drops unneeded columns. To verify everything is in order, use check_forecasts(); its result can then be passed directly into score(). score() returns unsummarised scores, which in most cases is not what the user wants. Here we use additional functions from scoringutils to add empirical coverage levels (add_coverage()) and scores relative to a baseline model (here chosen to be the EuroCOVIDhub-ensemble model). See the getting started vignette for more details. Finally, we summarise these scores by model and target type.

example_quantile %>%
  set_forecast_unit(c("location", "target_end_date", "target_type", "horizon", "model")) %>%
  check_forecasts() %>%
  score() %>%
  add_coverage(ranges = c(50, 90), by = c("model", "target_type")) %>%
  summarise_scores(
    by = c("model", "target_type"),
    relative_skill = TRUE,
    baseline = "EuroCOVIDhub-ensemble"
  ) %>%
  summarise_scores(
    fun = signif, 
    digits = 2
  ) %>%
  knitr::kable()
#> The following messages were produced when checking inputs:
#> 1.  144 values for `prediction` are NA in the data provided and the corresponding rows were removed. This may indicate a problem if unexpected.
| model                 | target_type | interval_score | dispersion | underprediction | overprediction | coverage_deviation |   bias | ae_median | coverage_50 | coverage_90 | relative_skill | scaled_rel_skill |
|:----------------------|:------------|---------------:|-----------:|----------------:|---------------:|-------------------:|-------:|----------:|------------:|------------:|---------------:|-----------------:|
| EuroCOVIDhub-baseline | Cases       |          28000 |       4100 |         10000.0 |        14000.0 |             -0.110 |  0.098 |     38000 |        0.33 |        0.82 |           1.30 |              1.6 |
| EuroCOVIDhub-baseline | Deaths      |            160 |         91 |             2.1 |           66.0 |              0.120 |  0.340 |       230 |        0.66 |        1.00 |           2.30 |              3.8 |
| EuroCOVIDhub-ensemble | Cases       |          18000 |       3700 |          4200.0 |        10000.0 |             -0.098 | -0.056 |     24000 |        0.39 |        0.80 |           0.82 |              1.0 |
| EuroCOVIDhub-ensemble | Deaths      |             41 |         30 |             4.1 |            7.1 |              0.200 |  0.073 |        53 |        0.88 |        1.00 |           0.60 |              1.0 |
| UMass-MechBayes       | Deaths      |             53 |         27 |            17.0 |            9.0 |             -0.023 | -0.022 |        78 |        0.46 |        0.88 |           0.75 |              1.3 |
| epiforecasts-EpiNow2  | Cases       |          21000 |       5700 |          3300.0 |        12000.0 |             -0.067 | -0.079 |     28000 |        0.47 |        0.79 |           0.95 |              1.2 |
| epiforecasts-EpiNow2  | Deaths      |             67 |         32 |            16.0 |           19.0 |             -0.043 | -0.005 |       110 |        0.42 |        0.91 |           0.98 |              1.6 |

scoringutils contains additional functionality to transform forecasts, to summarise scores at different levels, to visualise them, and to explore the forecasts themselves. See the package vignettes and function documentation for more information.
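
For instance, summarised scores can be displayed as a coloured score table. A minimal sketch, assuming the plot_score_table() function from scoringutils 1.x:

library(scoringutils)
library(magrittr) # pipe operator

# Summarise scores by model and target type and show them as a table
# in which cells are shaded according to the value of each metric.
example_quantile %>%
  check_forecasts() %>%
  score() %>%
  summarise_scores(by = c("model", "target_type")) %>%
  plot_score_table(y = "model", by = "target_type")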

You may want to score forecasts based on transformations of the original data in order to obtain a more complete evaluation (see this paper for more information). This can be done using the function transform_forecasts(). In the following example, we truncate values at 0 and use the function log_shift() to add 1 to all values before applying the natural logarithm.

example_quantile %>%
  .[, true_value := ifelse(true_value < 0, 0, true_value)] %>% # truncate at 0
  transform_forecasts(append = TRUE, fun = log_shift, offset = 1) %>%
  score() %>%
  summarise_scores(by = c("model", "target_type", "scale")) %>%
  head()
#> The following messages were produced when checking inputs:
#> 1.  288 values for `prediction` are NA in the data provided and the corresponding rows were removed. This may indicate a problem if unexpected.
#>                    model target_type   scale interval_score   dispersion
#> 1: EuroCOVIDhub-baseline       Cases     log   1.169972e+00    0.4373146
#> 2: EuroCOVIDhub-baseline       Cases natural   2.209046e+04 4102.5009443
#> 3: EuroCOVIDhub-ensemble       Cases     log   5.500974e-01    0.1011850
#> 4: EuroCOVIDhub-ensemble       Cases natural   1.155071e+04 3663.5245788
#> 5:  epiforecasts-EpiNow2       Cases     log   6.005778e-01    0.1066329
#> 6:  epiforecasts-EpiNow2       Cases natural   1.443844e+04 5664.3779484
#>    underprediction overprediction coverage_deviation        bias    ae_median
#> 1:    3.521964e-01      0.3804607        -0.10940217  0.09726562 1.185905e+00
#> 2:    1.028497e+04   7702.9836957        -0.10940217  0.09726562 3.208048e+04
#> 3:    1.356563e-01      0.3132561        -0.09785326 -0.05640625 7.410484e-01
#> 4:    4.237177e+03   3650.0047554        -0.09785326 -0.05640625 1.770795e+04
#> 5:    1.858699e-01      0.3080750        -0.06660326 -0.07890625 7.656591e-01
#> 6:    3.260356e+03   5513.7058424        -0.06660326 -0.07890625 2.153070e+04

Citation

If using scoringutils in your work, please consider citing it using the output of citation("scoringutils"):

#> To cite scoringutils in publications use the following. If you use the
#> CRPS, DSS, or Log Score, please also cite scoringRules.
#> 
#>   Nikos I. Bosse, Hugo Gruson, Sebastian Funk, Anne Cori, Edwin van
#>   Leeuwen, and Sam Abbott (2022). Evaluating Forecasts with
#>   scoringutils in R, arXiv. DOI: 10.48550/ARXIV.2205.07090
#> 
#> To cite scoringRules in publications use:
#> 
#>   Alexander Jordan, Fabian Krueger, Sebastian Lerch (2019). Evaluating
#>   Probabilistic Forecasts with scoringRules. Journal of Statistical
#>   Software, 90(12), 1-37. DOI 10.18637/jss.v090.i12
#> 
#> To see these entries in BibTeX format, use 'print(<citation>,
#> bibtex=TRUE)', 'toBibtex(.)', or set
#> 'options(citation.bibtex.max=999)'.

How to make a bug report or feature request

Please briefly describe your problem and what output you expect in an issue. If you have a question, please don’t open an issue. Instead, ask on our Q and A page.

Contributing

We welcome contributions and new contributors! We particularly appreciate help on priority problems in the issues. Please check and add to the issues, and/or add a pull request.

Code of Conduct

Please note that the scoringutils project is released with a Contributor Code of Conduct. By contributing to this project, you agree to abide by its terms.

Metadata

Version

1.2.2

License

Unknown

Platforms (75)

  • aarch64-darwin
  • aarch64-genode
  • aarch64-linux
  • aarch64-netbsd
  • aarch64-none
  • aarch64_be-none
  • arm-none
  • armv5tel-linux
  • armv6l-linux
  • armv6l-netbsd
  • armv6l-none
  • armv7a-darwin
  • armv7a-linux
  • armv7a-netbsd
  • armv7l-linux
  • armv7l-netbsd
  • avr-none
  • i686-cygwin
  • i686-darwin
  • i686-freebsd
  • i686-genode
  • i686-linux
  • i686-netbsd
  • i686-none
  • i686-openbsd
  • i686-windows
  • javascript-ghcjs
  • loongarch64-linux
  • m68k-linux
  • m68k-netbsd
  • m68k-none
  • microblaze-linux
  • microblaze-none
  • microblazeel-linux
  • microblazeel-none
  • mips-linux
  • mips-none
  • mips64-linux
  • mips64-none
  • mips64el-linux
  • mipsel-linux
  • mipsel-netbsd
  • mmix-mmixware
  • msp430-none
  • or1k-none
  • powerpc-netbsd
  • powerpc-none
  • powerpc64-linux
  • powerpc64le-linux
  • powerpcle-none
  • riscv32-linux
  • riscv32-netbsd
  • riscv32-none
  • riscv64-linux
  • riscv64-netbsd
  • riscv64-none
  • rx-none
  • s390-linux
  • s390-none
  • s390x-linux
  • s390x-none
  • vc4-none
  • wasm32-wasi
  • wasm64-wasi
  • x86_64-cygwin
  • x86_64-darwin
  • x86_64-freebsd
  • x86_64-genode
  • x86_64-linux
  • x86_64-netbsd
  • x86_64-none
  • x86_64-openbsd
  • x86_64-redox
  • x86_64-solaris
  • x86_64-windows