
Optimal Binning and Weight of Evidence Framework for Modeling.

High-performance implementation of 36 optimal binning algorithms (16 categorical, 20 numerical) for Weight of Evidence ('WoE') transformation, credit scoring, and risk modeling. Includes advanced methods such as Mixed Integer Linear Programming ('MILP'), Genetic Algorithms, Simulated Annealing, and Monotonic Regression. Features automatic method selection based on Information Value ('IV') maximization, strict monotonicity enforcement, and efficient handling of large datasets via 'Rcpp'. Fully integrated with the 'tidymodels' ecosystem for building robust machine learning pipelines. Based on methods described in Siddiqi (2006) <doi:10.1002/9781119201731> and Navas-Palencia (2020) <doi:10.48550/arXiv.2001.08025>.

OptimalBinningWoE

Overview

OptimalBinningWoE is a high-performance R package for optimal binning and Weight of Evidence (WoE) transformation, designed for credit scoring, risk assessment, and predictive modeling applications.

Why OptimalBinningWoE?

| Feature | Benefit |
|---|---|
| 36 Algorithms | Choose the best method for your data characteristics |
| C++ Performance | Process millions of records efficiently via Rcpp/RcppEigen |
| tidymodels Ready | Seamless integration with modern ML pipelines |
| Regulatory Compliance | Monotonic binning for Basel/IFRS 9 requirements |
| Production Quality | Comprehensive testing and documentation |

Installation

# Install from CRAN
install.packages("OptimalBinningWoE")

# Or install the development version from GitHub
# install.packages("pak")
pak::pak("evandeilton/OptimalBinningWoE")

Quick Start

Basic Usage with German Credit Data

library(OptimalBinningWoE)
library(scorecard)

# Load the German Credit dataset
data("germancredit", package = "scorecard")

# Create binary target variable
german <- germancredit
german$default <- factor(
  ifelse(german$creditability == "bad", 1, 0),
  levels = c(0, 1),
  labels = c("good", "bad")
)
german$creditability <- NULL

# Select key features for demonstration
german_model <- german[, c(
  "default",
  "duration.in.month",
  "credit.amount",
  "age.in.years",
  "status.of.existing.checking.account",
  "credit.history",
  "savings.account.and.bonds"
)]

# Run Optimal Binning with JEDI algorithm (general purpose)
binning_results <- obwoe(
  data = german_model,
  target = "default",
  algorithm = "jedi",
  min_bins = 3,
  max_bins = 5
)

# View summary
print(binning_results)

# Check Information Value (IV) summary to see feature importance
print(binning_results$summary)

# View detailed binning for a specific feature
binning_results$results$duration.in.month

Single Feature Binning

library(OptimalBinningWoE)
library(scorecard)

# Load data
data("germancredit", package = "scorecard")
german <- germancredit
german$default <- factor(
  ifelse(german$creditability == "bad", 1, 0),
  levels = c(0, 1),
  labels = c("good", "bad")
)

# Bin a single feature with specific algorithm
result_single <- obwoe(
  data = german,
  target = "default",
  feature = "credit.amount",
  algorithm = "mob",
  min_bins = 3,
  max_bins = 6
)

# View results
print(result_single)

# Detailed binning table
bins <- result_single$results$credit.amount
data.frame(
  Bin = bins$bin,
  Count = bins$count,
  Event_Rate = round(bins$count_pos / bins$count * 100, 2),
  WoE = round(bins$woe, 4),
  IV = round(bins$iv, 4)
)

Apply WoE Transformation to New Data

library(OptimalBinningWoE)
library(scorecard)

# Load and prepare data
data("germancredit", package = "scorecard")
german <- germancredit
german$default <- factor(
  ifelse(german$creditability == "bad", 1, 0),
  levels = c(0, 1),
  labels = c("good", "bad")
)

# Train/test split
set.seed(123)
train_idx <- sample(1:nrow(german), size = 0.7 * nrow(german))
train_data <- german[train_idx, ]
test_data <- german[-train_idx, ]

# Fit binning model on training data
model <- obwoe(
  data = train_data,
  target = "default",
  algorithm = "mob",
  min_bins = 2,
  max_bins = 5
)

# Apply learned bins to training and test data
train_woe <- obwoe_apply(train_data, model, keep_original = FALSE)
test_woe <- obwoe_apply(test_data, model, keep_original = FALSE)

# View transformed features
head(train_woe[, c("default", "duration.in.month_woe", "credit.amount_woe")])

Gains Table Analysis

library(OptimalBinningWoE)
library(scorecard)

# Load and prepare data
data("germancredit", package = "scorecard")
german <- germancredit
german$default <- factor(
  ifelse(german$creditability == "bad", 1, 0),
  levels = c(0, 1),
  labels = c("good", "bad")
)

# Fit binning model
model <- obwoe(
  data = german,
  target = "default",
  algorithm = "cm",
  min_bins = 3,
  max_bins = 5
)

# Compute gains table for a feature
gains <- obwoe_gains(model, feature = "duration.in.month", sort_by = "id")

# View gains table with KS, Gini, and lift metrics
print(gains)

# Visualize gains curves
par(mfrow = c(2, 2))
plot(gains, type = "cumulative")
plot(gains, type = "ks")
plot(gains, type = "lift")
plot(gains, type = "woe_iv")
par(mfrow = c(1, 1))

Integration with tidymodels

OptimalBinningWoE integrates seamlessly with tidymodels recipes.

library(tidymodels)
library(OptimalBinningWoE)
library(scorecard)

# Load and prepare data
data("germancredit", package = "scorecard")
german <- germancredit
german$default <- factor(
  ifelse(german$creditability == "bad", 1, 0),
  levels = c(0, 1),
  labels = c("good", "bad")
)
german$creditability <- NULL

# Select features
german_model <- german[, c(
  "default",
  "duration.in.month",
  "credit.amount",
  "age.in.years",
  "status.of.existing.checking.account",
  "credit.history"
)]

# Train/test split
set.seed(123)
german_split <- initial_split(german_model, prop = 0.7, strata = default)
train_data <- training(german_split)
test_data <- testing(german_split)

# Create recipe with WoE transformation
rec_woe <- recipe(default ~ ., data = train_data) %>%
  step_obwoe(
    all_predictors(),
    outcome = "default",
    algorithm = "jedi",
    min_bins = 2,
    max_bins = 5,
    bin_cutoff = 0.05,
    output = "woe"
  )

# Define model specification
lr_spec <- logistic_reg() %>%
  set_engine("glm") %>%
  set_mode("classification")

# Create workflow
wf_credit <- workflow() %>%
  add_recipe(rec_woe) %>%
  add_model(lr_spec)

# Fit the workflow
final_fit <- fit(wf_credit, data = train_data)

# Evaluate on test data
test_pred <- augment(final_fit, test_data)

# Performance metrics
metrics <- metric_set(roc_auc, accuracy)
metrics(test_pred,
  truth = default,
  estimate = .pred_class,
  .pred_bad,
  event_level = "second"
)

# ROC curve
roc_curve(test_pred,
  truth = default,
  .pred_bad,
  event_level = "second"
) %>%
  autoplot() +
  labs(title = "ROC Curve - German Credit Model")

Hyperparameter Tuning

library(tidymodels)
library(OptimalBinningWoE)
library(scorecard)

# Load and prepare data
data("germancredit", package = "scorecard")
german <- germancredit
german$default <- factor(
  ifelse(german$creditability == "bad", 1, 0),
  levels = c(0, 1),
  labels = c("good", "bad")
)
german$creditability <- NULL

german_model <- german[, c(
  "default",
  "duration.in.month",
  "credit.amount",
  "status.of.existing.checking.account",
  "credit.history"
)]

# Split data
set.seed(123)
german_split <- initial_split(german_model, prop = 0.7, strata = default)
train_data <- training(german_split)

# Recipe with tunable max_bins
rec_woe <- recipe(default ~ ., data = train_data) %>%
  step_obwoe(
    all_predictors(),
    outcome = "default",
    algorithm = "jedi",
    min_bins = 2,
    max_bins = tune(),
    output = "woe"
  )

# Model specification
lr_spec <- logistic_reg() %>%
  set_engine("glm") %>%
  set_mode("classification")

# Workflow
wf_credit <- workflow() %>%
  add_recipe(rec_woe) %>%
  add_model(lr_spec)

# Cross-validation folds
set.seed(456)
cv_folds <- vfold_cv(train_data, v = 5, strata = default)

# Tuning grid
tune_grid <- tibble(max_bins = c(3, 4, 5, 6))

# Tune
tune_results <- tune_grid(
  wf_credit,
  resamples = cv_folds,
  grid = tune_grid,
  metrics = metric_set(roc_auc)
)

# Best parameters
best_params <- select_best(tune_results, metric = "roc_auc")
print(best_params)

# Visualize tuning results
autoplot(tune_results, metric = "roc_auc")

Core Concepts

Weight of Evidence (WoE)

WoE quantifies the predictive power of each bin as the log ratio of the good and bad distributions:

$$\text{WoE}_i = \ln\left(\frac{\text{Distribution of Goods}_i}{\text{Distribution of Bads}_i}\right)$$

Interpretation:

  • WoE > 0: Lower risk than average (more “goods” than expected)
  • WoE < 0: Higher risk than average (more “bads” than expected)
  • WoE ≈ 0: Similar to population average
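The formula can be checked by hand. A minimal base-R sketch, using made-up bin counts (not output from the package):

```r
# Illustrative counts of goods (non-events) and bads (events) in three bins
goods <- c(300, 250, 150)
bads  <- c( 40,  60, 100)

# Share of all goods / all bads falling in each bin
dist_goods <- goods / sum(goods)
dist_bads  <- bads  / sum(bads)

# WoE per bin: log of the ratio of the two distributions
woe <- log(dist_goods / dist_bads)
round(woe, 4)
```

The first bin has a positive WoE (goods over-represented, lower risk) and the last a negative one (bads over-represented, higher risk).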

Information Value (IV)

IV measures the overall predictive power of a feature:

$$\text{IV} = \sum_{i=1}^{n} (\text{Dist. Goods}_i - \text{Dist. Bads}_i) \times \text{WoE}_i$$

| IV Range | Predictive Power | Recommendation |
|---|---|---|
| < 0.02 | Unpredictive | Exclude |
| 0.02 – 0.10 | Weak | Use cautiously |
| 0.10 – 0.30 | Medium | Good predictor |
| 0.30 – 0.50 | Strong | Excellent predictor |
| > 0.50 | Suspicious | Check for data leakage |
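The IV formula can likewise be verified by hand in base R (illustrative counts, not package output):

```r
# Illustrative counts of goods and bads in three bins
goods <- c(300, 250, 150)
bads  <- c( 40,  60, 100)

dist_goods <- goods / sum(goods)
dist_bads  <- bads  / sum(bads)
woe <- log(dist_goods / dist_bads)

# IV: distribution gap times WoE, summed over bins
iv <- sum((dist_goods - dist_bads) * woe)
round(iv, 4)
```

Here `iv` lands in the 0.30 – 0.50 band, i.e. a strong predictor.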

Algorithm Reference

OptimalBinningWoE provides 36 algorithms optimized for different scenarios:

Universal Algorithms (Numerical & Categorical)

| Algorithm | Function | Best For |
|---|---|---|
| JEDI | ob_numerical_jedi() | General purpose, balanced performance |
| MOB | ob_numerical_mob() | Regulatory compliance (monotonic) |
| ChiMerge | ob_numerical_cm() | Statistical significance-based merging |
| DP | ob_numerical_dp() | Optimal partitioning with constraints |
| Sketch | ob_numerical_sketch() | Large-scale / streaming data |

Numerical-Only Algorithms (20)

| Algorithm | Function | Specialty |
|---|---|---|
| MDLP | ob_numerical_mdlp() | Entropy-based discretization |
| MBLP | ob_numerical_mblp() | Monotonic binning via linear programming |
| IR | ob_numerical_ir() | Isotonic regression binning |
| EWB | ob_numerical_ewb() | Fast equal-width binning |
| KMB | ob_numerical_kmb() | K-means clustering approach |

View all 20 numerical algorithms

| Acronym | Full Name | Description |
|---|---|---|
| BB | Branch and Bound | Exact optimization |
| CM | ChiMerge | Chi-square merging |
| DMIV | Decision Tree MIV | Recursive partitioning |
| DP | Dynamic Programming | Optimal partitioning |
| EWB | Equal Width | Fixed-width bins |
| Fast-MDLP | Fast MDLP | Optimized entropy |
| FETB | Fisher’s Exact Test | Statistical significance |
| IR | Isotonic Regression | Order-preserving |
| JEDI | Joint Entropy-Driven | Information maximization |
| JEDI-MWoE | JEDI Multinomial | Multi-class targets |
| KMB | K-Means Binning | Clustering-based |
| LDB | Local Density | Density estimation |
| LPDB | Local Polynomial | Smooth density |
| MBLP | Monotonic LP | LP optimization |
| MDLP | Min Description Length | Entropy-based |
| MOB | Monotonic Optimal | IV-optimal + monotonic |
| MRBLP | Monotonic Regression LP | Regression + LP |
| OSLP | Optimal Supervised LP | Supervised learning |
| Sketch | KLL Sketch | Streaming quantiles |
| UBSD | Unsupervised StdDev | Standard deviation |
| UDT | Unsupervised DT | Decision tree |

Categorical-Only Algorithms (16)

| Algorithm | Function | Specialty |
|---|---|---|
| SBLP | ob_categorical_sblp() | Similarity-based grouping |
| IVB | ob_categorical_ivb() | IV maximization |
| GMB | ob_categorical_gmb() | Greedy monotonic |
| SAB | ob_categorical_sab() | Simulated annealing |

View all 16 categorical algorithms

| Acronym | Full Name | Description |
|---|---|---|
| CM | ChiMerge | Chi-square merging |
| DMIV | Decision Tree MIV | Recursive partitioning |
| DP | Dynamic Programming | Optimal partitioning |
| FETB | Fisher’s Exact Test | Statistical significance |
| GMB | Greedy Monotonic | Greedy monotonic binning |
| IVB | Information Value | IV maximization |
| JEDI | Joint Entropy-Driven | Information maximization |
| JEDI-MWoE | JEDI Multinomial | Multi-class targets |
| MBA | Modified Binning | Modified approach |
| MILP | Mixed Integer LP | LP optimization |
| MOB | Monotonic Optimal | IV-optimal + monotonic |
| SAB | Simulated Annealing | Stochastic optimization |
| SBLP | Similarity-Based LP | Similarity grouping |
| Sketch | Count-Min Sketch | Streaming counts |
| SWB | Sliding Window | Window-based |
| UDT | Unsupervised DT | Decision tree |

Algorithm Selection Guide

| Use Case | Recommended | Rationale |
|---|---|---|
| General Credit Scoring | jedi, mob | Best balance of speed and predictive power |
| Regulatory Compliance | mob, mblp, ir | Guaranteed monotonic WoE patterns |
| Large Datasets (>1M rows) | sketch, ewb | Sublinear memory, single-pass |
| High Cardinality Categorical | sblp, gmb, ivb | Intelligent category grouping |
| Interpretability Focus | dp, mdlp | Clear, explainable bins |
| Multi-class Targets | jedi_mwoe | Multinomial WoE support |

Key Functions

| Function | Purpose |
|---|---|
| obwoe() | Main interface for optimal binning and WoE |
| obwoe_apply() | Apply learned binning to new data |
| obwoe_gains() | Compute gains table with KS, Gini, lift |
| step_obwoe() | tidymodels recipe step |
| ob_preprocess() | Data preprocessing with outlier handling |
| obwoe_algorithms() | List all available algorithms |
| control.obwoe() | Create control parameters |

Complete Workflow Example

Here is a complete end-to-end credit scoring workflow:

library(OptimalBinningWoE)
library(scorecard)
library(pROC)

# ============================================
# 1. Data Preparation
# ============================================

# Load German Credit dataset
data("germancredit", package = "scorecard")

# Create binary target
german <- germancredit
german$default <- factor(
  ifelse(german$creditability == "bad", 1, 0),
  levels = c(0, 1),
  labels = c("good", "bad")
)
german$creditability <- NULL

# Select features for modeling
features_num <- c("duration.in.month", "credit.amount", "age.in.years")
features_cat <- c(
  "status.of.existing.checking.account",
  "credit.history",
  "savings.account.and.bonds",
  "purpose"
)

german_model <- german[, c("default", features_num, features_cat)]

# Train/test split
set.seed(123)
train_idx <- sample(1:nrow(german_model), size = 0.7 * nrow(german_model))
train_data <- german_model[train_idx, ]
test_data <- german_model[-train_idx, ]

cat("Training set:", nrow(train_data), "observations\n")
cat("Test set:", nrow(test_data), "observations\n")
cat(
  "Training default rate:",
  round(mean(train_data$default == "bad") * 100, 2), "%\n"
)

# ============================================
# 2. Fit Optimal Binning Model
# ============================================

# Use Monotonic Optimal Binning for regulatory compliance
sc_binning <- obwoe(
  data = train_data,
  target = "default",
  algorithm = "mob",
  min_bins = 2,
  max_bins = 5,
  control = control.obwoe(
    bin_cutoff = 0.05,
    convergence_threshold = 1e-6
  )
)

# View summary
summary(sc_binning)

# ============================================
# 3. Feature Selection by IV
# ============================================

# Extract IV summary and select predictive features
iv_summary <- sc_binning$summary[!sc_binning$summary$error, ]
iv_summary <- iv_summary[order(-iv_summary$total_iv), ]

cat("\nFeature Ranking by Information Value:\n")
print(iv_summary[, c("feature", "total_iv", "n_bins")])

# Select features with IV >= 0.02
selected_features <- iv_summary$feature[iv_summary$total_iv >= 0.02]
cat("\nSelected features (IV >= 0.02):", length(selected_features), "\n")
print(selected_features)

# ============================================
# 4. Apply WoE Transformation
# ============================================

# Transform training and test data
train_woe <- obwoe_apply(train_data, sc_binning, keep_original = FALSE)
test_woe <- obwoe_apply(test_data, sc_binning, keep_original = FALSE)

# Preview transformed features
cat("\nTransformed training data (first 5 rows):\n")
print(head(train_woe[, c(
  "default",
  paste0(selected_features[1:3], "_woe")
)], 5))

# ============================================
# 5. Build Logistic Regression Model
# ============================================

# Build formula with WoE-transformed features
woe_vars <- paste0(selected_features, "_woe")
formula_str <- paste("default ~", paste(woe_vars, collapse = " + "))

# Fit logistic regression
scorecard_glm <- glm(
  as.formula(formula_str),
  data = train_woe,
  family = binomial(link = "logit")
)

cat("\nModel Summary:\n")
summary(scorecard_glm)

# ============================================
# 6. Model Evaluation
# ============================================

# Predictions on test set
test_woe$score <- predict(scorecard_glm, newdata = test_woe, type = "response")

# ROC curve and AUC
roc_obj <- roc(test_woe$default, test_woe$score, quiet = TRUE)
auc_val <- auc(roc_obj)

# KS statistic
ks_stat <- max(abs(
  ecdf(test_woe$score[test_woe$default == "bad"])(seq(0, 1, 0.01)) -
    ecdf(test_woe$score[test_woe$default == "good"])(seq(0, 1, 0.01))
))

# Gini coefficient
gini <- 2 * auc_val - 1

cat("\n============================================\n")
cat("Scorecard Performance Metrics:\n")
cat("============================================\n")
cat("  AUC:  ", round(auc_val, 4), "\n")
cat("  Gini: ", round(gini, 4), "\n")
cat("  KS:   ", round(ks_stat * 100, 2), "%\n")

# Plot ROC curve
plot(roc_obj,
  main = "Scorecard ROC Curve",
  print.auc = TRUE,
  print.thres = "best"
)

# ============================================
# 7. Gains Analysis
# ============================================

# Compute gains for best numerical feature
best_num_feature <- iv_summary$feature[iv_summary$feature %in% features_num][1]

gains <- obwoe_gains(sc_binning, feature = best_num_feature, sort_by = "id")
print(gains)

# Plot WoE and IV
plot(gains, type = "woe_iv")

Data Preprocessing

Handle missing values and outliers before binning:

library(OptimalBinningWoE)

# Simulate problematic feature
set.seed(2024)
problematic_feature <- c(
  rnorm(800, 5000, 2000), # Normal values
  rep(NA, 100), # Missing values
  runif(100, -10000, 50000) # Outliers
)
target_sim <- rbinom(1000, 1, 0.3)

# Preprocess with IQR method
preproc_result <- ob_preprocess(
  feature = problematic_feature,
  target = target_sim,
  outlier_method = "iqr",
  outlier_process = TRUE,
  preprocess = "both"
)

# View preprocessing report
print(preproc_result$report)

# Access cleaned feature
cleaned_feature <- preproc_result$preprocess$feature_preprocessed

Algorithm Comparison

Compare different algorithms on the same feature:

library(OptimalBinningWoE)
library(scorecard)

# Load data
data("germancredit", package = "scorecard")
german <- germancredit
german$default <- factor(
  ifelse(german$creditability == "bad", 1, 0),
  levels = c(0, 1),
  labels = c("good", "bad")
)

# Test multiple algorithms
algorithms <- c("jedi", "mob", "mdlp", "ewb", "cm")

compare_results <- lapply(algorithms, function(algo) {
  tryCatch(
    {
      fit <- obwoe(
        data = german,
        target = "default",
        feature = "credit.amount",
        algorithm = algo,
        min_bins = 3,
        max_bins = 6
      )

      data.frame(
        Algorithm = algo,
        N_Bins = fit$summary$n_bins[1],
        IV = round(fit$summary$total_iv[1], 4),
        Converged = fit$summary$converged[1]
      )
    },
    error = function(e) {
      data.frame(
        Algorithm = algo,
        N_Bins = NA,
        IV = NA,
        Converged = FALSE
      )
    }
  )
})

# Combine and display results
comparison_df <- do.call(rbind, compare_results)
comparison_df <- comparison_df[order(-comparison_df$IV), ]

cat("Algorithm Comparison on 'credit.amount':\n\n")
print(comparison_df, row.names = FALSE)

# View available algorithms
algorithms_info <- obwoe_algorithms()
print(algorithms_info[, c("algorithm", "numerical", "categorical")])

Performance

OptimalBinningWoE is optimized for speed through:

  • RcppEigen: Vectorized linear algebra operations
  • Efficient algorithms: O(n log n) or better complexity
  • Memory-conscious design: Streaming algorithms for large data

Typical performance on a standard laptop:

| Data Size | Processing Time |
|---|---|
| 100K rows | < 1 second |
| 1M rows | 2–5 seconds |
| 10M rows | 20–60 seconds |

Best Practices

Workflow Recommendations

  1. Start Simple: Use algorithm = "jedi" as default
  2. Check IV: Select features with IV ≥ 0.02
  3. Validate Monotonicity: Use mob, mblp, or ir for regulatory models
  4. Cross-Validate: Tune binning parameters with CV
  5. Monitor Stability: Track WoE distributions over time
  6. Document Thoroughly: Save metadata with models
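For monitoring stability (step 5), a common check is the Population Stability Index (PSI) on bin proportions. It can be computed directly in base R; the bin shares below are illustrative, and the 0.10/0.25 thresholds are conventional rules of thumb, not package output:

```r
# Bin shares at development time vs. on recent data (illustrative)
expected <- c(0.30, 0.40, 0.30)
actual   <- c(0.25, 0.35, 0.40)

# PSI: divergence between the two bin distributions
psi <- sum((actual - expected) * log(actual / expected))

# Common reading: < 0.10 stable, 0.10-0.25 watch, > 0.25 significant shift
round(psi, 4)
```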

Common Pitfalls to Avoid

# WRONG: Bin on full dataset before splitting (causes data leakage!)
bad_approach <- obwoe(full_data, target = "default")
train_woe <- obwoe_apply(train_data, bad_approach)

# CORRECT: Bin only on training data
good_approach <- obwoe(train_data, target = "default")
test_woe <- obwoe_apply(test_data, good_approach)

# WRONG: Ignore IV thresholds (IV > 0.50 likely indicates target leakage)
suspicious_features <- result$summary$feature[result$summary$total_iv > 0.50]
# Investigate these features carefully!

# WRONG: Over-bin (too many bins reduces interpretability)
# max_bins > 10 may cause overfitting

Documentation

Contributing

Contributions are welcome! Please see our Contributing Guidelines and Code of Conduct.

Citation

If you use OptimalBinningWoE in your research, please cite:

@software{optimalbinningwoe,
  author = {José Evandeilton Lopes},
  title = {OptimalBinningWoE: Optimal Binning and Weight of Evidence Framework for Modeling},
  year = {2026},
  url = {https://github.com/evandeilton/OptimalBinningWoE}
}

References

  • Siddiqi, N. (2006). Credit Risk Scorecards: Developing and Implementing Intelligent Credit Scoring. John Wiley & Sons.
  • Thomas, L. C., Edelman, D. B., & Crook, J. N. (2002). Credit Scoring and Its Applications. SIAM.
  • Navas-Palencia, G. (2020). Optimal Binning: Mathematical Programming Formulation. arXiv:2001.08025.
  • Anderson, R. (2007). The Credit Scoring Toolkit: Theory and Practice for Retail Credit Risk Management. Oxford University Press.

License

MIT License © 2026 José Evandeilton Lopes.

Version

1.0.8