Description

Interface to 'TensorFlow SIG Addons'.

'TensorFlow SIG Addons' <https://www.tensorflow.org/addons> is a repository of community contributions that conform to well-established API patterns, but implement new functionality not available in core 'TensorFlow'. 'TensorFlow' natively supports a large number of operators, layers, metrics, losses, optimizers, and more. However, in a fast-moving field like Machine Learning, there are many interesting new developments that cannot be integrated into core 'TensorFlow' (because their broad applicability is not yet clear, or because they are mostly used by a smaller subset of the community).

R interface to useful extra functionality for TensorFlow 2.x by SIG-addons

The tfaddons package provides R wrappers to TensorFlow Addons.

TensorFlow Addons is a repository of contributions that conform to well-established API patterns, but implement new functionality not available in core TensorFlow. TensorFlow natively supports a large number of operators, layers, metrics, losses, and optimizers. However, in a fast-moving field like ML, there are many interesting new developments that cannot be integrated into core TensorFlow (because their broad applicability is not yet clear, or because they are mostly used by a smaller subset of the community).


tfaddons provides the following feature families, all compatible with the keras library (a short sketch of the naming convention follows the list):

  • activations
  • callbacks
  • image
  • layers
  • losses
  • metrics
  • optimizers
  • rnn
  • seq2seq
  • text
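
Each family maps onto a prefixed set of R functions exported by tfaddons. A minimal sketch, using only functions that appear later in this README:

library(tfaddons)

# Feature families are exposed through prefixed R functions, for example:
#   activations -> activation_*   (activation_gelu)
#   callbacks   -> callback_*     (callback_time_stopping)
#   layers      -> layer_*        (layer_group_normalization)
#   losses      -> loss_*         (loss_triplet_semihard)
#   metrics     -> metric_*       (metric_cohen_kappa)
#   optimizers  -> optimizer_*    (optimizer_radam)
opt <- optimizer_radam()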

Installation

Requirements:

  • TensorFlow 2.x

Install the development version from GitHub:

devtools::install_github('henry090/tfaddons')

Then, install the Python module tensorflow-addons:

tfaddons::install_tfaddons()
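
To confirm that the TensorFlow 2.x requirement is met, a quick check (tf_version() comes from the tensorflow package):

library(tensorflow)
library(tfaddons)

# Should report a TensorFlow version of 2.0 or later
tf_version()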

Usage: the basics

Here's how to build a sequential model with keras using additional features from the tfaddons package.

Import and prepare the MNIST dataset.

library(keras)
library(tfaddons)

mnist = dataset_mnist()

x_train <- mnist$train$x
y_train <- mnist$train$y

# reshape the dataset
x_train <- array_reshape(x_train, c(nrow(x_train), 28, 28, 1))

# Scale pixel values into the [0, 1] range (MNIST images are grayscale)
x_train <- x_train / 255

y_train <- to_categorical(y_train, 10)
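
A quick sanity check on the prepared arrays (the expected dimensions for the 60,000-image MNIST training split):

dim(x_train)  # 60000 28 28 1
dim(y_train)  # 60000 10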

Using the Sequential API, define the model architecture.

# Build a sequential model
model = keras_model_sequential() %>% 
  layer_conv_2d(filters = 10, kernel_size = c(3, 3), input_shape = c(28, 28, 1),
                # apply the GELU activation from tfaddons
                activation = activation_gelu) %>% 
  # apply group normalization layer
  layer_group_normalization(groups = 5, axis = 3) %>% 
  layer_flatten() %>% 
  layer_dense(10, activation='softmax')

# Compile
model %>% compile(
  # apply rectified adam
  optimizer = optimizer_radam(),
  # apply sparsemax loss
  loss = loss_sparsemax(),
  # use Cohen's kappa as the metric
  metrics = metric_cohen_kappa(10)
)

Train the Keras model.

model %>% fit(
  x_train, y_train,
  batch_size = 128,
  epochs = 1,
  validation_split = 0.2
)
Train on 48000 samples, validate on 12000 samples
48000/48000 [==============================] - 24s 510us/sample - loss: 0.1193 - cohen_kappa: 0.8074 - 
val_loss: 0.0583 - val_cohen_kappa: 0.9104
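
After training, the same metric can be computed on held-out data. A minimal sketch, preparing the test split the same way as x_train:

x_test <- array_reshape(mnist$test$x, c(nrow(mnist$test$x), 28, 28, 1)) / 255
y_test <- to_categorical(mnist$test$y, 10)

# Returns the loss and Cohen's kappa on the test set
model %>% evaluate(x_test, y_test, verbose = 0)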

Let's apply Weight Normalization, a simple reparameterization technique that accelerates the training of deep neural networks:

Note: only the model architecture changes; the compile and fit calls are the same as in the first example (see the sketch after the timing output).

# Build a sequential model
model = keras_model_sequential() %>% 
  layer_weight_normalization(input_shape = c(28L,28L,1L),
                             layer_conv_2d(filters = 10, kernel_size = c(3,3))) %>% 
  layer_flatten() %>% 
  layer_weight_normalization(layer_dense(units = 10, activation='softmax'))
Train on 48000 samples, validate on 12000 samples
48000/48000 [==============================] - 12s 253us/sample - loss: 0.1276 - cohen_kappa: 0.7920 - 
val_loss: 0.0646 - val_cohen_kappa: 0.9044

One epoch now finishes in about 12 seconds, compared to 24 seconds without weight normalization.
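
For completeness, the compile and fit calls omitted above are unchanged from the first example:

model %>% compile(
  optimizer = optimizer_radam(),
  loss = loss_sparsemax(),
  metrics = metric_cohen_kappa(10)
)

model %>% fit(
  x_train, y_train,
  batch_size = 128,
  epochs = 1,
  validation_split = 0.2
)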

Callbacks

Training can be stopped after a fixed amount of time. For this purpose, set the seconds parameter of the callback_time_stopping() function:

model %>% fit(
  x_train, y_train,
  batch_size = 128,
  epochs = 4,
  validation_split = 0.2,
  verbose = 0,
  callbacks = callback_time_stopping(seconds = 6, verbose = 1)
)
Timed stopping at epoch 1 after training for 0:00:06
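
Because fit() accepts a list of callbacks, time stopping can be combined with the callbacks built into keras. A minimal sketch, assuming keras's callback_early_stopping:

model %>% fit(
  x_train, y_train,
  batch_size = 128,
  epochs = 4,
  validation_split = 0.2,
  verbose = 0,
  callbacks = list(
    callback_time_stopping(seconds = 60, verbose = 1),           # from tfaddons
    callback_early_stopping(monitor = 'val_loss', patience = 2)  # built into keras
  )
)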

Losses

Triplet loss can be applied as follows.

The first task is to create a Keras model.

library(tensorflow)  # provides the `tf` object used in the lambda layer below

model = keras_model_sequential() %>% 
  layer_conv_2d(filters = 64, kernel_size = 2, padding = 'same', input_shape = c(28, 28, 1)) %>% 
  layer_max_pooling_2d(pool_size = 2) %>% 
  layer_flatten() %>% 
  layer_dense(256, activation = NULL) %>% 
  # L2-normalize the embeddings so that distances between them are comparable
  layer_lambda(f = function(x) tf$math$l2_normalize(x, axis = 1L))

model %>% compile(
  optimizer = optimizer_lazy_adam(),
  # apply triplet semihard loss
  loss = loss_triplet_semihard())

With the tfdatasets package we can build the input pipeline and then fit. Note that loss_triplet_semihard() expects integer class labels, so the raw mnist$train$y is used rather than the one-hot y_train.

library(tfdatasets)

# triplet loss expects integer class labels (not one-hot), and x_train is already scaled to [0, 1]
train = tensor_slices_dataset(list(tf$cast(x_train, 'float32'), tf$cast(mnist$train$y, 'int64'))) %>% 
  dataset_shuffle(1024) %>% dataset_batch(32)
  
# fit
model %>% fit(
  train,
  epochs = 1
)
Train for 1875 steps
1875/1875 [==============================] - 74s 39ms/step - loss: 0.4227
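
Once trained, the network maps each image to an L2-normalized, 256-dimensional embedding. A minimal sketch of extracting embeddings for the test images (prepared the same way as the training inputs):

x_test <- array_reshape(mnist$test$x, c(nrow(mnist$test$x), 28, 28, 1)) / 255
embeddings <- predict(model, x_test)
dim(embeddings)  # 10000 256
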
Metadata

Version

0.10.0

License

Unknown

Platforms (77)

  • aarch64-darwin
  • aarch64-freebsd
  • aarch64-genode
  • aarch64-linux
  • aarch64-netbsd
  • aarch64-none
  • aarch64-windows
  • aarch64_be-none
  • arm-none
  • armv5tel-linux
  • armv6l-linux
  • armv6l-netbsd
  • armv6l-none
  • armv7a-darwin
  • armv7a-linux
  • armv7a-netbsd
  • armv7l-linux
  • armv7l-netbsd
  • avr-none
  • i686-cygwin
  • i686-darwin
  • i686-freebsd
  • i686-genode
  • i686-linux
  • i686-netbsd
  • i686-none
  • i686-openbsd
  • i686-windows
  • javascript-ghcjs
  • loongarch64-linux
  • m68k-linux
  • m68k-netbsd
  • m68k-none
  • microblaze-linux
  • microblaze-none
  • microblazeel-linux
  • microblazeel-none
  • mips-linux
  • mips-none
  • mips64-linux
  • mips64-none
  • mips64el-linux
  • mipsel-linux
  • mipsel-netbsd
  • mmix-mmixware
  • msp430-none
  • or1k-none
  • powerpc-netbsd
  • powerpc-none
  • powerpc64-linux
  • powerpc64le-linux
  • powerpcle-none
  • riscv32-linux
  • riscv32-netbsd
  • riscv32-none
  • riscv64-linux
  • riscv64-netbsd
  • riscv64-none
  • rx-none
  • s390-linux
  • s390-none
  • s390x-linux
  • s390x-none
  • vc4-none
  • wasm32-wasi
  • wasm64-wasi
  • x86_64-cygwin
  • x86_64-darwin
  • x86_64-freebsd
  • x86_64-genode
  • x86_64-linux
  • x86_64-netbsd
  • x86_64-none
  • x86_64-openbsd
  • x86_64-redox
  • x86_64-solaris
  • x86_64-windows