Description

Compare Supervised Machine Learning Models Using Shiny App.

Implementation of a shiny app to easily compare supervised machine learning model performances. You provide the data and configure each model parameter directly in the shiny app. Different supervised learning algorithms can be tested on either the Spark or H2O framework to suit your regression and classification tasks. The implementation of the available machine learning models in R follows Lantz (2013, ISBN:9781782162148).

Compare Supervised Machine Learning Models Using Shiny App


shinyML

Implement a shareable web app in one line of code to compare supervised machine learning models for regression and classification tasks!

With shinyML, you can compare your favorite regression or classification models built with the H2O or Spark frameworks without any effort.

Installation

The package can be installed from CRAN:

install.packages("shinyML")

You can also install the latest development version from GitHub:

devtools::install_github("JeanBertinR/shinyML")
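
Depending on the framework you plan to use, a local H2O or Spark backend may also be needed. A minimal sketch, assuming the h2o and sparklyr packages are used as backends (check the package documentation for the exact requirements):

# Assumed backend setup; not part of shinyML itself
install.packages(c("h2o", "sparklyr"))

# Start a local H2O cluster (used when framework = "h2o")
h2o::h2o.init()

# Download and install a local Spark distribution (used when framework = "spark")
sparklyr::spark_install()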

Getting started: create the shinyML web app in just one line of code

Here are basic examples showing how to run the app:

library(shinyML)

# An example of a regression task
shinyML_regression(data = iris, y = "Petal.Width", framework = "h2o")

# An example of a classification task
shinyML_classification(data = iris, y = "Species", framework = "h2o")

Please note that shinyML_regression and shinyML_classification will automatically detect if your input dataset contains a time-based column; in that case, train/test splitting will be adapted to time-series forecasting.

# An example of time-series forecasting
library(dplyr)
longley2 <- longley %>% mutate(Year = as.Date(as.character(Year), format = "%Y"))
shinyML_regression(data = longley2, y = "Population", framework = "h2o")

Explore the input dataset before running the models…

Before running machine learning models, it can be useful to inspect the distribution of each variable and to get an insight into the dependencies between explanatory variables. Both shinyML_regression and shinyML_classification allow you to check the classes of explanatory variables, plot histograms of each distribution, and show the correlation matrix between all variables. These tabs can be used to determine whether some variables are strongly correlated with others and should possibly be removed from the training phase. You can also plot the variation of any variable as a function of another using the “Explore dataset” tab.
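
If you prefer to run the same kind of checks outside the app, here is a minimal sketch in base R (these calls are not part of shinyML, just an equivalent manual inspection):

# Manual equivalent of the exploration tabs, using base R only
str(iris)                                        # classes of explanatory variables
hist(iris$Petal.Width, main = "Petal.Width")     # distribution of one variable
cor(iris[, sapply(iris, is.numeric)])            # correlation matrix between numeric variables
plot(Petal.Length ~ Sepal.Length, data = iris)   # one variable as a function of another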

Test different machine learning techniques and hyper-parameter configurations with just a few clicks

To test supervised machine learning models with the shinyML package, the first step consists of separating the train and test periods of your dataset: this can be done in one second using the slider button on the right-hand side of the shinyML app. You can also remove variables from your initial selection directly in the app using the “Input variable” text box. You are then free to select the hyper-parameter configuration for your favorite machine learning model.
Note that the hidden layers of the deep learning technique can be set inside the corresponding text box: the default c(200,200) configuration corresponds to a neural network with two hidden layers of 200 neurons each.
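
For reference, this c(200,200) value matches the hidden argument of H2O's deep learning interface. The sketch below shows the kind of call involved; the exact call made by shinyML is an assumption here:

# Assumed underlying call: a two hidden-layer network with 200 neurons per layer
library(h2o)
h2o.init()
train <- as.h2o(iris)
model <- h2o.deeplearning(
  x = c("Sepal.Length", "Sepal.Width", "Petal.Length"),
  y = "Petal.Width",
  training_frame = train,
  hidden = c(200, 200)   # two hidden layers, 200 neurons each
)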

Run all machine learning techniques at the same time to compare variable importances and error metrics

You can easily use the shinyML package to compare different machine learning techniques with your own hyper-parameter configurations. To do so, just adjust the shiny app inputs corresponding to your parameters and then click “Run tuned models!”

You will see a validation message box once all models have been trained: at that point, you can get an overview of your results by comparing variable importances and error metrics such as MAPE or RMSE.
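
As a reminder of what those metrics measure, here is a minimal sketch of MAPE and RMSE computed by hand (illustrative values, not shinyML code):

# MAPE and RMSE computed manually on toy values
actual    <- c(10, 12, 15, 20)
predicted <- c(11, 11, 16, 18)
mape <- mean(abs((actual - predicted) / actual)) * 100   # mean absolute percentage error
rmse <- sqrt(mean((actual - predicted)^2))               # root mean squared error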

Run the autoML algorithm to automatically find and configure the best machine learning model for your dataset

The autoML algorithm will automatically find the best algorithm to suit your regression or classification task: the user is informed of the machine learning model that has been selected and of the hyper-parameters that have been chosen.

The only setting that must be adjusted by the user is the maximum time allowed for the search.
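
In H2O, this corresponds to the max_runtime_secs argument of the AutoML interface; a minimal sketch of the kind of call involved (the exact call made by shinyML is an assumption here):

# Assumed underlying call: H2O AutoML limited to 60 seconds of search time
library(h2o)
h2o.init()
train <- as.h2o(iris)
aml <- h2o.automl(
  y = "Petal.Width",
  training_frame = train,
  max_runtime_secs = 60   # maximum time allowed for the search
)
aml@leaderboard   # models found, ranked by error metric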

For more information take a look at the package vignette.

Metadata

Version

1.0.1

License

Unknown

Platforms (77)

    Darwin
    FreeBSD
    Genode
    GHCJS
    Linux
    MMIXware
    NetBSD
    none
    OpenBSD
    Redox
    Solaris
    WASI
    Windows
  • aarch64-darwin
  • aarch64-freebsd
  • aarch64-genode
  • aarch64-linux
  • aarch64-netbsd
  • aarch64-none
  • aarch64-windows
  • aarch64_be-none
  • arm-none
  • armv5tel-linux
  • armv6l-linux
  • armv6l-netbsd
  • armv6l-none
  • armv7a-darwin
  • armv7a-linux
  • armv7a-netbsd
  • armv7l-linux
  • armv7l-netbsd
  • avr-none
  • i686-cygwin
  • i686-darwin
  • i686-freebsd
  • i686-genode
  • i686-linux
  • i686-netbsd
  • i686-none
  • i686-openbsd
  • i686-windows
  • javascript-ghcjs
  • loongarch64-linux
  • m68k-linux
  • m68k-netbsd
  • m68k-none
  • microblaze-linux
  • microblaze-none
  • microblazeel-linux
  • microblazeel-none
  • mips-linux
  • mips-none
  • mips64-linux
  • mips64-none
  • mips64el-linux
  • mipsel-linux
  • mipsel-netbsd
  • mmix-mmixware
  • msp430-none
  • or1k-none
  • powerpc-netbsd
  • powerpc-none
  • powerpc64-linux
  • powerpc64le-linux
  • powerpcle-none
  • riscv32-linux
  • riscv32-netbsd
  • riscv32-none
  • riscv64-linux
  • riscv64-netbsd
  • riscv64-none
  • rx-none
  • s390-linux
  • s390-none
  • s390x-linux
  • s390x-none
  • vc4-none
  • wasm32-wasi
  • wasm64-wasi
  • x86_64-cygwin
  • x86_64-darwin
  • x86_64-freebsd
  • x86_64-genode
  • x86_64-linux
  • x86_64-netbsd
  • x86_64-none
  • x86_64-openbsd
  • x86_64-redox
  • x86_64-solaris
  • x86_64-windows