Description

Behavioral Economic Easy Discounting.

Facilitates some of the analyses performed in studies of behavioral economic discounting. The package supports scoring of the 27-Item Monetary Choice Questionnaire (see Kaplan et al., 2016; <doi:10.1007/s40614-016-0070-9>), calculating k values (Mazur's simple hyperbolic and exponential) using nonlinear regression, calculating various Area Under the Curve (AUC) measures, plotting regression curves for both fit-to-group and two-stage approaches, checking for unsystematic discounting (Johnson & Bickel, 2008; <doi:10.1037/1064-1297.16.3.264>) and scoring of the minute discounting task (see Koffarnus & Bickel, 2014; <doi:10.1037/a0035973>) using the Qualtrics 5-trial discounting template (see the Qualtrics Minute Discounting User Guide; <doi:10.13140/RG.2.2.26495.79527>), which is also available as a .qsf file in this package.

Behavioral Economic (be) Easy (ez) Discounting


An R package containing commonly used functions for analyzing behavioral economic discounting data.

The package supports scoring of the 27-Item Monetary Choice Questionnaire (see Kaplan et al., 2016), calculating k values (and Area Under the Curve metrics) from indifference points using nonlinear regression (Mazur’s simple hyperbola and exponential), and scoring of the minute discounting task (see Koffarnus & Bickel, 2014) using the Qualtrics 5-trial discounting template (see the Qualtrics Minute Discounting User Guide), which is also available as a .qsf file in this package.

Note About Use

Currently, this version (0.3.2) appears stable. I encourage you to use it but be aware that, as with any software release, there might be (unknown) bugs present. I’ve tried hard to make this version usable while including the core functionality (described more below). However, if you find issues or would like to contribute, please open an issue on my GitHub page or email me.

You may also use these functions in the Shinybeez web application, which can likewise be found on the GitHub page.

Citing the Package

If you use this package in your own work, please consider citing the package:

Kaplan, B. A. (2023). beezdiscounting: Behavioral Economic Easy Discounting. R package version 0.3.1, https://github.com/brentkaplan/beezdiscounting

You can also find the latest citation using citation("beezdiscounting")

Installing beezdiscounting

CRAN Release (recommended method)

The latest stable version of beezdiscounting (currently v.0.3.1) can be found on CRAN and installed using the following command. The first time you install the package, you may be asked to select a CRAN mirror. Simply select the mirror geographically closest to you.

install.packages("beezdiscounting")

library(beezdiscounting)

GitHub Release

To install a stable release directly from GitHub, first install and load the devtools package. Then, use install_github() to install the package and its associated vignette. You don't need to download anything from GitHub manually; simply run the following commands:

install.packages("devtools")

devtools::install_github("brentkaplan/beezdiscounting")

library(beezdiscounting)

Using the Package

27-item Monetary Choice Questionnaire Scoring Overview

Example Dataset

An example dataset of responses on the 27-Item Monetary Choice Questionnaire is provided. This object is called mcq27 and is located within the beezdiscounting package. These data are the example data used in Kaplan et al. (2016). Note that the data are in "long format": repeated observations are stacked in multiple rows, rather than spread across columns.

subjectid  questionid  response
1          1           0
1          2           0
1          3           0
1          4           1
1          5           1
1          6           0
1          7           1
2          1           0
2          2           1
2          3           1
2          4           1
2          5           1
2          6           0
2          7           1

The first column contains the subject id. The second column contains the question id. The third column contains the response (0 = smaller sooner, 1 = larger later).
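
Before formal scoring, the long format makes quick sanity checks easy. As an illustrative sketch (base R only, not a beezdiscounting function), the proportion of larger-later choices per subject can be computed like this:

```r
# Sketch: proportion of larger-later (response == 1) choices per subject
# from long-format data. Toy values, not the mcq27 dataset.
long_dat <- data.frame(
  subjectid  = rep(1:2, each = 3),
  questionid = rep(1:3, times = 2),
  response   = c(0, 0, 1, 1, 1, 0)
)
tapply(long_dat$response, long_dat$subjectid, mean)
# subject 1: 1/3 larger-later; subject 2: 2/3 larger-later
```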

Converting from Wide to Long and Vice Versa

beezdiscounting includes several helper functions to reshape data.

long_to_wide_mcq()

Long format data are widened such that subject id is the first column and each subsequent column contains the response associated with the question (specified as column names).

wide <- long_to_wide_mcq(generate_data_mcq(2))

knitr::kable(wide[, c(1:5, 24:28)], caption = "Wide Format Data")
subjectid  1  2  3  4  23  24  25  26  27
1          1  1  1  1  1   1   1   1   0
2          1  1  1  0  1   1   1   1   1

Wide Format Data

wide_to_long_mcq()

Wide data (see example of wide data above) are made long such that subject id is in the first column, question id (inferred from the column names from the wide format dataframe) is the second column, and the response is the third column.

long <- wide_to_long_mcq(wide, items = 27)

knitr::kable(long[c(1:5, 28:32), ], caption = "Long Format Data")
subjectid  questionid  response
1          1           1
1          2           1
1          3           1
1          4           1
1          5           0
2          1           1
2          2           1
2          3           1
2          4           0
2          5           1

Long Format Data

wide_to_long_mcq_excel()

A different 'type' of wide data is that used in the 27-Item Monetary Choice Questionnaire Automated Excel Scorer (Kaplan et al., 2016). In this format, the first column is the question id and each subsequent column represents a subject (as the column name), with responses in rows (see the example below). This function converts data from that format to the format needed by beezdiscounting functions.

knitr::kable(wide_excel[c(1:5, 22:27), ],
             caption = "Format Expected in the 27-Item MCQ Excel Scorer")
questionid  1  2
1           1  1
2           1  1
3           1  1
4           1  0
5           0  1
22          1  0
23          1  1
24          1  1
25          1  1
26          1  1
27          0  1

Format Expected in the 27-Item MCQ Excel Scorer

long_excel <- wide_to_long_mcq_excel(wide_excel)

knitr::kable(long_excel[c(1:5, 28:32), ], caption = "Long Format")
subjectid  questionid  response
1          1           1
1          2           1
1          3           1
1          4           1
1          5           0
2          1           1
2          2           1
2          3           1
2          4           0
2          5           1

Long Format

long_to_wide_mcq_excel()

Data can be manipulated from long form into a form used by the 27-Item Monetary Choice Questionnaire Automated Excel Scorer.

wide_excel <- long_to_wide_mcq_excel(long_excel)

knitr::kable(wide_excel[c(1:5, 22:27), ],
             caption = "Format Expected in the 27-Item MCQ Excel Scorer")
questionid  1  2
1           1  1
2           1  1
3           1  1
4           1  0
5           0  1
22          1  0
23          1  1
24          1  1
25          1  1
26          1  1
27          0  1

Format Expected in the 27-Item MCQ Excel Scorer

Generate Fake MCQ Data

Generate fake data, specifying a seed for reproducibility and the proportion of NA responses.

## fake data with no missing values
fake_data_no_missing <- generate_data_mcq(n_ids = 2, n_items = 27,
                                          seed = 1234, prop_na = 0)
knitr::kable(fake_data_no_missing, caption = "Fake Data - No Missings")
subjectid  questionid  response
1          1           1
1          2           1
1          3           1
1          4           1
1          5           0
1          6           1
1          7           0
1          8           0
1          9           0
1          10          1
1          11          1
1          12          1
1          13          1
1          14          0
1          15          1
1          16          1
1          17          1
1          18          0
1          19          1
1          20          1
1          21          1
1          22          1
1          23          1
1          24          1
1          25          1
1          26          1
1          27          0
2          1           1
2          2           1
2          3           1
2          4           0
2          5           1
2          6           0
2          7           0
2          8           0
2          9           1
2          10          0
2          11          1
2          12          1
2          13          0
2          14          1
2          15          0
2          16          1
2          17          1
2          18          1
2          19          0
2          20          0
2          21          0
2          22          0
2          23          1
2          24          1
2          25          1
2          26          1
2          27          1

Fake Data - No Missings

## fake data with missing values
fake_data_missing <- generate_data_mcq(n_ids = 2, n_items = 27,
                                          seed = 1234, prop_na = .1)
knitr::kable(fake_data_missing, caption = "Fake Data - Missings")
subjectid  questionid  response
1          1           1
1          2           NA
1          3           1
1          4           1
1          5           0
1          6           1
1          7           0
1          8           0
1          9           0
1          10          1
1          11          1
1          12          1
1          13          1
1          14          0
1          15          NA
1          16          1
1          17          1
1          18          0
1          19          1
1          20          1
1          21          1
1          22          1
1          23          1
1          24          1
1          25          1
1          26          1
1          27          0
2          1           1
2          2           1
2          3           1
2          4           0
2          5           1
2          6           0
2          7           0
2          8           0
2          9           1
2          10          0
2          11          NA
2          12          1
2          13          0
2          14          1
2          15          0
2          16          NA
2          17          1
2          18          1
2          19          0
2          20          0
2          21          0
2          22          NA
2          23          1
2          24          1
2          25          1
2          26          1
2          27          1

Fake Data - Missings

Score 27-item MCQ

MCQ data can be scored normally, and missing responses can also be imputed using the various methods described by Yeh et al. (2023).

Normal (no imputation)

No missing data
## normal scoring of data with no missing values
tbl1 <- score_mcq27(fake_data_no_missing)
subjectid  overall_k  small_k   medium_k  large_k   geomean_k
1          0.000158   0.000158  0.000158  0.000251  0.000185
2          0.000251   0.001562  0.004469  0.000158  0.001034

k Values

subjectid  overall_consistency  small_consistency  medium_consistency  large_consistency  composite_consistency
1          0.740741             0.666667           0.666667            1.000000           0.777778
2          0.629630             0.777778           0.555556            0.666667           0.666667

Consistency Scores

subjectid  overall_proportion  small_proportion  medium_proportion  large_proportion  impute_method
1          0.740741            0.666667          0.666667           0.888889          none
2          0.592593            0.555556          0.555556           0.666667          none

Proportions

Missing data
## normal scoring of data with missings with no imputation
tbl2 <- score_mcq27(fake_data_missing)
subjectid  overall_k  small_k   medium_k  large_k   geomean_k
1          NA         0.000158  0.000158  NA        NA
2          NA         NA        NA        0.000158  NA

k Values

subjectid  overall_consistency  small_consistency  medium_consistency  large_consistency  composite_consistency
1          NA                   0.666667           0.666667            NA                 NA
2          NA                   NA                 NA                  0.666667           NA

Consistency Scores

subjectid  overall_proportion  small_proportion  medium_proportion  large_proportion  impute_method
1          NA                  0.666667          0.666667           NA                none
2          NA                  NA                NA                 0.666667          none

Proportions

GGM imputation

This approach (Group Geometric Mean) "…calculates the composite k when at least one of the three amount set ks is fully available" (Yeh et al., 2023).
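
The geometric-mean step itself is straightforward. As an illustrative sketch (base R, not the package internals), the composite k is the geometric mean of whichever amount-set ks are available:

```r
# Sketch: geometric mean of the available amount-set ks.
# Toy values; the large-amount k is unavailable (NA) here.
ks <- c(small = 0.000158, medium = 0.000158, large = NA)
geomean_k <- exp(mean(log(ks), na.rm = TRUE))
# equals 0.000158, the geometric mean of the two available ks
```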

tbl3 <- score_mcq27(fake_data_missing, impute_method = "GGM")
subjectid  overall_k  small_k   medium_k  large_k   geomean_k
1          NA         0.000158  0.000158  NA        0.000158
2          NA         NA        NA        0.000158  0.000158

k Values

subjectid  overall_consistency  small_consistency  medium_consistency  large_consistency  composite_consistency
1          NA                   0.666667           0.666667            NA                 NA
2          NA                   NA                 NA                  0.666667           NA

Consistency Scores

subjectid  overall_proportion  small_proportion  medium_proportion  large_proportion  impute_method
1          NA                  0.666667          0.666667           NA                GGM
2          NA                  NA                NA                 0.666667          GGM

Proportions

INN imputation (no random component)

This approach (Item Nearest Neighbor) "…replaces the missing value with the congruent non-missing responses to the items corresponding to the same k value" (Yeh et al., 2023).

tbl4 <- score_mcq27(fake_data_missing, impute_method = "INN")
subjectid  overall_k  small_k   medium_k  large_k   geomean_k
1          0.000158   0.000158  0.000158  0.000251  0.000185
2          NA         NA        0.063154  0.000158  NA

k Values

subjectid  overall_consistency  small_consistency  medium_consistency  large_consistency  composite_consistency
1          0.740741             0.666667           0.666667            1.000000           0.777778
2          NA                   NA                 0.666667            0.666667           NA

Consistency Scores

subjectid  overall_proportion  small_proportion  medium_proportion  large_proportion  impute_method
1          0.740741            0.666667          0.666667           0.888889          INN
2          NA                  NA                0.444444           0.666667          INN

Proportions

INN imputation (with random component)

This approach (Item Nearest Neighbor with Random) "…is identical to [INN no random component], except that when a missing response cannot be resolved, this datum will be randomly replaced with 0 or 1, corresponding to choosing immediate or delayed rewards, respectively" (Yeh et al., 2023).

tbl5 <- score_mcq27(fake_data_missing, impute_method = "INN",
                    random = TRUE)
subjectid  overall_k  small_k   medium_k  large_k   geomean_k
1          0.000158   0.000158  0.000158  0.000251  0.000185
2          0.000251   0.001562  0.063154  0.000158  0.002500

k Values

subjectid  overall_consistency  small_consistency  medium_consistency  large_consistency  composite_consistency
1          0.740741             0.666667           0.666667            1.000000           0.777778
2          0.592593             0.777778           0.666667            0.666667           0.703704

Consistency Scores

subjectid  overall_proportion  small_proportion  medium_proportion  large_proportion  impute_method
1          0.740741            0.666667          0.666667           0.888889          INN with random
2          0.555556            0.555556          0.444444           0.666667          INN with random

Proportions

Return a list

You can also return a list when INN imputation with the random component is specified. This is helpful for seeing which values replaced the missing responses (NAs) in the original dataset.

lst <- score_mcq27(fake_data_missing, impute_method = "INN",
                    random = TRUE, return_data = TRUE)

The scoring summary metric dataframe as before (access via ...$results):

subjectid  overall_k  small_k   medium_k  large_k   geomean_k
1          0.000158   0.000158  0.000158  0.000251  0.000185
2          0.000251   0.001562  0.063154  0.000158  0.002500

k Values

subjectid  overall_consistency  small_consistency  medium_consistency  large_consistency  composite_consistency
1          0.740741             0.666667           0.666667            1.000000           0.777778
2          0.555556             0.666667           0.666667            0.666667           0.666667

Consistency Scores

subjectid  overall_proportion  small_proportion  medium_proportion  large_proportion  impute_method
1          0.740741            0.666667          0.666667           0.888889          INN with random
2          0.518519            0.444444          0.444444           0.666667          INN with random

Proportions

The original data and the new responses imputed (access via ...$data):

subjectid  questionid  response  newresponse
1          1           1         1
1          2           NA        1
1          3           1         1
1          4           1         1
1          5           0         0
1          6           1         1
1          7           0         0
1          8           0         0
1          9           0         0
1          10          1         1
1          11          1         1
1          12          1         1
1          13          1         1
1          14          0         0
1          15          NA        1
1          16          1         1
1          17          1         1
1          18          0         0
1          19          1         1
1          20          1         1
1          21          1         1
1          22          1         1
1          23          1         1
1          24          1         1
1          25          1         1
1          26          1         1
1          27          0         0
2          1           1         1
2          2           1         1
2          3           1         1
2          4           0         0
2          5           1         1
2          6           0         0
2          7           0         0
2          8           0         0
2          9           1         1
2          10          0         0
2          11          NA        0
2          12          1         1
2          13          0         0
2          14          1         1
2          15          0         0
2          16          NA        0
2          17          1         1
2          18          1         1
2          19          0         0
2          20          0         0
2          21          0         0
2          22          NA        0
2          23          1         1
2          24          1         1
2          25          1         1
2          26          1         1
2          27          1         1

Original Data and Imputed Data

Discount Rates via Indifference Points

Data format

The data must be in a dataframe with the following columns:

  • id: participant ID

  • x: delay

  • y: indifference point
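
A minimal dataframe in this format can be built directly (an illustrative sketch with arbitrary values; only the column names id, x, and y matter):

```r
# Sketch: one participant's indifference points in the required format.
ip_dat <- data.frame(
  id = rep("P1", 3),
  x  = c(1, 30, 365),     # delays
  y  = c(0.90, 0.40, 0.05) # indifference points
)
```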

For example, the following data set is available in the package: dd_ip

knitr::kable(dd_ip[1:12, ], caption = "Indifference Point Data")
id  x    y
P1  1    0.8162505
P1  7    0.3908523
P1  30   0.0191631
P1  90   0.0990859
P1  180  0.0134581
P1  365  0.0035518
P2  1    0.5724503
P2  7    0.1652014
P2  30   0.0326867
P2  90   0.0802244
P2  180  0.0275921
P2  365  0.0247967

Indifference Point Data

Identifying unsystematic data (Johnson & Bickel, 2008)

The check_unsystematic() function can be used to check whether the data conform to the assumptions of the Johnson & Bickel (2008) method. The function is designed to operate on a single participant's data, so you will typically want to run it for each unique participant in the dataset, as shown below:

unsys <- dd_ip |>
  dplyr::group_split(id) |>
  purrr::map_dfr(~ check_unsystematic(
    dat = .x,
    ll = 1, # LL specification
    c1 = 0.2, # Criterion 1 threshold
    c2 = 0.1 # Criterion 2 threshold
  )) |>
  dplyr::mutate(id = factor(id, levels = unique(dd_ip$id))) |>
  dplyr::arrange(id) |>
  dplyr::slice(1:5)

knitr::kable(unsys, caption = "Unsystematic Data Output")
id  c1_pass  c2_pass
P1  TRUE     TRUE
P2  TRUE     TRUE
P3  TRUE     TRUE
P4  TRUE     TRUE
P5  TRUE     TRUE

Unsystematic Data Output

Calculating k

The fit_dd() function can be used to estimate k values from either the simple hyperbola (Mazur, 1987) or exponential equation. The output of this function can then be used in results_dd() and plot_dd() to obtain a table of results and plots of data.

First use the fit_dd() function to fit the data:

dd_fit <- fit_dd(
    dat = dd_ip,
    equation = "Hyperbolic",
    method = "Two Stage"
)

Then use the results_dd() function to get a table of results. The results table automatically includes measures of Area Under the Curve (AUC). Three different AUC measures are calculated:

  • auc_regular: AUC calculated using the regular trapezoidal rule

  • auc_log10: AUC calculated using the trapezoidal rule on the log10-transformed x values (Borges et al., 2016)

  • auc_ord: AUC calculated using the trapezoidal rule on the ordinally transformed x values (Borges et al., 2016)
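The three variants can be sketched in base R (illustrative only, not the package internals): each applies the trapezoidal rule after a different transformation of the delay (x) axis. Values here are rounded from participant P1 in dd_ip.

```r
# Sketch: trapezoidal AUC after normalizing the (possibly transformed)
# x axis to [0, 1], so the three variants are comparable.
trap_auc <- function(x, y) {
  x <- (x - min(x)) / (max(x) - min(x))
  sum(diff(x) * (head(y, -1) + tail(y, -1)) / 2)
}

x <- c(1, 7, 30, 90, 180, 365)                 # delays
y <- c(0.82, 0.39, 0.02, 0.10, 0.01, 0.004)    # indifference points

auc_regular <- trap_auc(x, y)                  # raw delays
auc_log10   <- trap_auc(log10(x), y)           # log10-transformed delays
auc_ord     <- trap_auc(seq_along(x), y)       # ordinal delays (1, 2, 3, ...)
```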

dd_results <- results_dd(dd_fit) |>
  dplyr::mutate(id = factor(id, levels = unique(dd_ip$id))) |>
  dplyr::arrange(id) |>
  dplyr::slice(1:5)

knitr::kable(dd_results[, c(1:7, 21:22)], caption = "Parameter Estimates and Information")
method     id  term  estimate   std.error  statistic  p.value    conf_low   conf_high
Two Stage  P1  k     0.2481617  0.0432495  5.737907   0.0022525  0.1369853  0.3593381
Two Stage  P2  k     0.7338717  0.0839848  8.738149   0.0003252  0.5179819  0.9497615
Two Stage  P3  k     0.5551845  0.0810197  6.852465   0.0010110  0.3469168  0.7634522
Two Stage  P4  k     0.2844655  0.0264744  10.744931  0.0001210  0.2164109  0.3525200
Two Stage  P5  k     1.0135883  0.0760710  13.324235  0.0000426  0.8180415  1.2091351

Parameter Estimates and Information

knitr::kable(dd_results[, c(1:3, 8:17)], caption = "Model Information")
method     id  term  sigma      isConv  finTol  logLik     AIC        BIC        deviance   df.residual  nobs  R2
Two Stage  P1  k     0.0529456  TRUE    0       9.664278   -15.32856  -15.74504  0.0140162  5            6     0.9735088
Two Stage  P2  k     0.0324102  TRUE    0       12.609020  -21.21804  -21.63452  0.0052521  5            6     0.9769628
Two Stage  P3  k     0.0419241  TRUE    0       11.064704  -18.12941  -18.54589  0.0087881  5            6     0.9715690
Two Stage  P4  k     0.0278923  TRUE    0       13.509754  -23.01951  -23.43599  0.0038899  5            6     0.9917668
Two Stage  P5  k     0.0205973  TRUE    0       15.328915  -26.65783  -27.07431  0.0021212  5            6     0.9886979

Model Information

knitr::kable(dd_results[, c(1:2, 18:20)], caption = "Area Under the Curve Values")
method     id  auc_regular  auc_log10  auc_ord
Two Stage  P1  0.0508842    0.2347151  0.1864921
Two Stage  P2  0.0482794    0.1462015  0.1208656
Two Stage  P3  0.0409701    0.1685302  0.1352153
Two Stage  P4  0.0313241    0.2169982  0.1664288
Two Stage  P5  0.0169814    0.1126664  0.0867586

Area Under the Curve Values

Finally, use the plot_dd() function to plot the data:

plot_dd(
    fit_dd_object = dd_fit,
    xlabel = "Delay (days)", # Specify x label
    ylabel = "Indifference Point", # Specify y label
    title = "Two Stage Plot", # Specify plot title
    logx = TRUE # Specify log scale for x axis
    )

Scoring the Minute Discounting Tasks

5.5 Trial Delay Discounting Task

dd_out <- calc_dd(five.fivetrial_dd)

knitr::kable(dd_out, caption = "Scoring Summary of the 5.5 Trial Delay Discounting Task")
ResponseId  index     q  firstclick  lastclick  pagesubmit  totalclicks  response  attentionflag  kval       ed50
1           I16       1  1.761       1.761      3.337       1            ll        No             0.0067058  149.1249275
1           I24       2  7.729       7.729      8.457       1            ss        No             0.0067058  149.1249275
1           I20       3  1.558       1.558      3.377       1            ll        No             0.0067058  149.1249275
1           I22       4  2.333       3.949      4.501       2            ss        No             0.0067058  149.1249275
1           I21       5  3.161       3.161      3.728       1            ss        No             0.0067058  149.1249275
2           I16       1  3.779       3.779      4.351       1            ss        No             4.8989795  0.2041241
2           I8        2  1.454       1.454      3.190       1            ss        No             4.8989795  0.2041241
2           I4        3  1.179       1.179      3.144       1            ll        No             4.8989795  0.2041241
2           I6        4  0.873       0.873      3.256       1            ss        No             4.8989795  0.2041241
2           I5        5  2.621       2.621      3.258       1            ss        No             4.8989795  0.2041241
3           I16       1  1.115       1.115      3.272       1            ss        Yes            NA         NA
3           I8        2  0.679       0.679      3.074       1            ss        Yes            NA         NA
3           I4        3  0.606       0.606      3.044       1            ss        Yes            NA         NA
3           I2        4  0.745       0.745      3.302       1            ss        Yes            NA         NA
3           I1        5  0.924       0.924      4.181       1            ss        Yes            NA         NA
3           AttendSS  6  1.450       1.450      4.181       1            ss        Yes            NA         NA
4           I16       1  1.011       1.011      3.190       1            ll        Yes            NA         NA
4           I24       2  1.041       1.041      3.109       1            ll        Yes            NA         NA
4           I28       3  0.806       0.806      3.113       1            ll        Yes            NA         NA
4           I30       4  0.822       0.822      3.487       1            ll        Yes            NA         NA
4           I31       5  0.914       0.914      3.170       1            ll        Yes            NA         NA
4           AttendLL  6  2.158       2.158      3.573       1            ll        Yes            NA         NA

Scoring Summary of the 5.5 Trial Delay Discounting Task

5.5 Trial Probability Discounting Task

pd_out <- calc_pd(five.fivetrial_pd)

knitr::kable(pd_out, caption = "Scoring Summary of the 5.5 Trial Probability Discounting Task")
ResponseId  index     q  firstclick  lastclick  pagesubmit  totalclicks  response  attentionflag  hval       etheta50   ep50
1           I16       1  3.980       3.980      5.184       1            sc        No             7.435436   0.1344911  88.14525
1           I8        2  4.010       4.010      4.763       1            lu        No             7.435436   0.1344911  88.14525
1           I12       3  2.061       2.061      3.252       1            sc        No             7.435436   0.1344911  88.14525
1           I10       4  1.525       1.525      3.019       1            sc        No             7.435436   0.1344911  88.14525
1           I9        5  2.253       2.954      3.738       2            lu        No             7.435436   0.1344911  88.14525
2           I16       1  2.873       2.873      3.883       1            sc        No             99.000000  0.0101010  99.00000
2           I8        2  3.745       3.745      4.864       1            sc        No             99.000000  0.0101010  99.00000
2           I4        3  1.159       1.159      6.356       1            sc        No             99.000000  0.0101010  99.00000
2           I2        4  3.064       3.064      5.408       1            sc        No             99.000000  0.0101010  99.00000
2           I1        5  2.049       2.049      5.097       1            sc        No             99.000000  0.0101010  99.00000
2           AttendSS  6  2.295       2.295      4.641       1            lu        No             99.000000  0.0101010  99.00000
3           I16       1  8.933       8.933      9.769       1            sc        No             1.601445   0.6244361  61.55983
3           I8        2  2.163       2.163      2.981       1            lu        No             1.601445   0.6244361  61.55983
3           I12       3  3.129       3.129      3.895       1            lu        No             1.601445   0.6244361  61.55983
3           I14       4  2.655       2.655      4.855       1            lu        No             1.601445   0.6244361  61.55983
3           I15       5  4.021       4.021      4.705       1            sc        No             1.601445   0.6244361  61.55983
4           I16       1  4.415       4.415      5.382       1            sc        No             7.435436   0.1344911  88.14525
4           I8        2  6.123       6.123      6.974       1            lu        No             7.435436   0.1344911  88.14525
4           I12       3  1.673       1.673      3.191       1            sc        No             7.435436   0.1344911  88.14525
4           I10       4  1.757       1.757      3.259       1            sc        No             7.435436   0.1344911  88.14525
4           I9        5  1.207       1.207      4.592       1            lu        No             7.435436   0.1344911  88.14525

Scoring Summary of the 5.5 Trial Probability Discounting Task

Learn More About Functions

To learn more about a function and what arguments it takes, type “?” in front of the function name.

?score_mcq27

Recommended Readings

  • Kaplan, B. A., Amlung, M., Reed, D. D., Jarmolowicz, D. P., McKerchar, T. L., & Lemley, S. M. (2016). Automating scoring of delay discounting for the 21- and 27-item monetary choice questionnaires. The Behavior Analyst, 39, 293-304. https://doi.org/10.1007/s40614-016-0070-9

  • Reed, D. D., Niileksela, C. R., & Kaplan, B. A. (2013). Behavioral economics: A tutorial for behavior analysts in practice. Behavior Analysis in Practice, 6 (1), 34–54. https://doi.org/10.1007/BF03391790

  • Mazur, J. E. (1987). An adjusting procedure for studying delayed reinforcement. In M. L. Commons, J. E. Mazur, J. A. Nevin, & H. Rachlin (Eds.), The effect of delay and of intervening events on reinforcement value (pp. 55–73). Lawrence Erlbaum Associates, Inc.

  • Borges, A. M., Kuang, J., Milhorn, H. and Yi, R. (2016), An alternative approach to calculating Area-Under-the-Curve (AUC) in delay discounting research. Journal of the Experimental Analysis of Behavior, 106, 145-155. https://doi.org/10.1002/jeab.219

  • Kirby, K. N., Petry, N. M., & Bickel, W. K. (1999). Heroin addicts have higher discount rates for delayed rewards than non-drug-using controls. Journal of Experimental Psychology: General, 128 (1), 78-87. https://doi.org/10.1037//0096-3445.128.1.78

  • Yeh, Y. H., Tegge, A. N., Freitas-Lemos, R., Myerson, J., Green, L., & Bickel, W. K. (2023). Discounting of delayed rewards: Missing data imputation for the 21- and 27-item monetary choice questionnaires. PLOS ONE, 18(10), e0292258. https://doi.org/10.1371/journal.pone.0292258

  • Koffarnus, M. N., & Bickel, W. K. (2014). A 5-trial adjusting delay discounting task: accurate discount rates in less than one minute. Experimental and Clinical Psychopharmacology, 22(3), 222-228. https://doi.org/10.1037/a0035973

  • Koffarnus, M. N., Rzeszutek, M. J., & Kaplan, B. A. (2021). Additional discounting rates in less than one minute: Task variants for probability and a wider range of delays. https://doi.org/10.13140/RG.2.2.31281.92000

  • Koffarnus, M. N., Kaplan, B. A., & Stein, J. S. (2017). User guide for Qualtrics minute discounting template. https://doi.org/10.13140/RG.2.2.26495.79527

Metadata

Version

0.3.2

License

Unknown

Platforms (77)

    Darwin
    FreeBSD
    Genode
    GHCJS
    Linux
    MMIXware
    NetBSD
    none
    OpenBSD
    Redox
    Solaris
    WASI
    Windows