Description

Model-Free Reinforcement Learning.

Performs model-free reinforcement learning in R. This implementation enables the learning of an optimal policy based on sample sequences consisting of states, actions and rewards. In addition, it supplies multiple predefined reinforcement learning algorithms, such as experience replay. Methodological details can be found in Sutton and Barto (1998) <ISBN:0262039249>.

Reinforcement Learning


ReinforcementLearning performs model-free reinforcement learning in R. This implementation enables the learning of an optimal policy based on sample sequences consisting of states, actions and rewards. In addition, it supplies multiple predefined reinforcement learning algorithms, such as experience replay.
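
Under the hood, this kind of model-free learning estimates a state-action value function Q(s, a) from observed transitions. As a rough illustration in plain R (a sketch of the standard Q-learning update, not the package's internal code):

# Toy Q-table over two states and two actions
Q <- matrix(0, nrow = 2, ncol = 2,
            dimnames = list(c("s1", "s2"), c("left", "right")))

alpha <- 0.1  # learning rate
gamma <- 0.5  # discount factor

# One observed transition: in s1, action 'right' yields reward -1 and leads to s2
s <- "s1"; a <- "right"; r <- -1; s_new <- "s2"

# Q-learning update: move Q(s, a) towards r + gamma * max over a' of Q(s_new, a')
Q[s, a] <- Q[s, a] + alpha * (r + gamma * max(Q[s_new, ]) - Q[s, a])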

Overview

The key capabilities of ReinforcementLearning are:

  • Learning an optimal policy from a fixed set of a priori known transition samples
  • Predefined learning rules and action selection modes
  • A highly customizable framework for model-free reinforcement learning tasks

Installation

You can install the latest version of ReinforcementLearning with:

# Recommended option: download and install latest version from CRAN
install.packages("ReinforcementLearning")

# Alternatively, install the development version from GitHub:
# install.packages("devtools")
devtools::install_github("nproellochs/ReinforcementLearning")

Usage

This section demonstrates how to perform reinforcement learning with the package. First, load ReinforcementLearning.

library(ReinforcementLearning)

The following example shows how to train a reinforcement learning agent using input data in the form of sample sequences consisting of states, actions and rewards. The result of the learning process is a state-action table and an optimal policy that defines the best possible action in each state.

# Generate sample experience in the form of state transition tuples
data <- sampleGridSequence(N = 1000)
head(data)
#>   State Action Reward NextState
#> 1    s4   left     -1        s4
#> 2    s2  right     -1        s3
#> 3    s2  right     -1        s3
#> 4    s3   left     -1        s2
#> 5    s4     up     -1        s4
#> 6    s1   down     -1        s2
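
Note that the input does not have to come from sampleGridSequence(): any data frame with one row per observed transition and columns for state, action, reward and next state can be passed to ReinforcementLearning(). A minimal hand-built sketch (values are purely illustrative):

# Hypothetical experience collected from an external source
data_manual <- data.frame(
  State     = c("s1", "s2", "s3"),
  Action    = c("down", "right", "up"),
  Reward    = c(-1, -1, 10),
  NextState = c("s2", "s3", "s3"),
  stringsAsFactors = FALSE
)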

# Define reinforcement learning parameters
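# alpha: learning rate (between 0 and 1)
# gamma: discount factor for future rewards (between 0 and 1)
# epsilon: exploration degree for epsilon-greedy action selection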
control <- list(alpha = 0.1, gamma = 0.1, epsilon = 0.1)

# Perform reinforcement learning
model <- ReinforcementLearning(data, s = "State", a = "Action", r = "Reward", 
                               s_new = "NextState", control = control)

# Print result
print(model)
#> State-Action function Q
#>          right        up        down       left
#> s1 -1.09619438 -1.098533 -1.00183072 -1.0978962
#> s2 -0.01980279 -1.097758 -1.00252228 -1.0037977
#> s3 -0.02335524  9.884394 -0.01722548 -0.9985081
#> s4 -1.09616040 -1.106392 -1.10548631 -1.1059655
#> 
#> Policy
#>      s1      s2      s3      s4 
#>  "down" "right"    "up" "right" 
#> 
#> Reward (last iteration)
#> [1] -263
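
The learned agent can also be refined once new observations become available. The sketch below assumes that the model argument of ReinforcementLearning() accepts a previously trained agent for sequential updating, as described in the package documentation:

# Sample fresh experience and update the existing agent
# ('model = model' passes in the previously trained agent)
data_new <- sampleGridSequence(N = 1000)
model_new <- ReinforcementLearning(data_new, s = "State", a = "Action",
                                   r = "Reward", s_new = "NextState",
                                   control = control, model = model)
print(model_new)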

Learning Reinforcement Learning

If you are new to reinforcement learning, you are better off starting with a systematic introduction, rather than trying to learn from reading individual documentation pages. There are three good places to start:

  1. A thorough introduction to reinforcement learning is provided in Sutton and Barto (1998).

  2. The package vignette demonstrates the main functionalities of the ReinforcementLearning R package by drawing upon common examples from the literature (e.g. finding optimal game strategies).

  3. Multiple blog posts on R-bloggers demonstrate the capabilities of the ReinforcementLearning package using practical examples.

Contributing

If you experience any difficulties with the package, have suggestions, or want to contribute directly, please open an issue or submit a pull request on the GitHub repository (nproellochs/ReinforcementLearning).

License

ReinforcementLearning is released under the MIT License.

Copyright (c) 2019 Nicolas Pröllochs & Stefan Feuerriegel.

Metadata

Version

1.0.5

License

MIT

Platforms (75)
  • aarch64-darwin
  • aarch64-genode
  • aarch64-linux
  • aarch64-netbsd
  • aarch64-none
  • aarch64_be-none
  • arm-none
  • armv5tel-linux
  • armv6l-linux
  • armv6l-netbsd
  • armv6l-none
  • armv7a-darwin
  • armv7a-linux
  • armv7a-netbsd
  • armv7l-linux
  • armv7l-netbsd
  • avr-none
  • i686-cygwin
  • i686-darwin
  • i686-freebsd
  • i686-genode
  • i686-linux
  • i686-netbsd
  • i686-none
  • i686-openbsd
  • i686-windows
  • javascript-ghcjs
  • loongarch64-linux
  • m68k-linux
  • m68k-netbsd
  • m68k-none
  • microblaze-linux
  • microblaze-none
  • microblazeel-linux
  • microblazeel-none
  • mips-linux
  • mips-none
  • mips64-linux
  • mips64-none
  • mips64el-linux
  • mipsel-linux
  • mipsel-netbsd
  • mmix-mmixware
  • msp430-none
  • or1k-none
  • powerpc-netbsd
  • powerpc-none
  • powerpc64-linux
  • powerpc64le-linux
  • powerpcle-none
  • riscv32-linux
  • riscv32-netbsd
  • riscv32-none
  • riscv64-linux
  • riscv64-netbsd
  • riscv64-none
  • rx-none
  • s390-linux
  • s390-none
  • s390x-linux
  • s390x-none
  • vc4-none
  • wasm32-wasi
  • wasm64-wasi
  • x86_64-cygwin
  • x86_64-darwin
  • x86_64-freebsd
  • x86_64-genode
  • x86_64-linux
  • x86_64-netbsd
  • x86_64-none
  • x86_64-openbsd
  • x86_64-redox
  • x86_64-solaris
  • x86_64-windows