Description

Language Model Agents in R for AI Workflows and Research.

Provides modular, graph-based agents powered by large language models (LLMs) for intelligent task execution in R. Supports structured workflows for tasks such as forecasting, data visualization, feature engineering, data wrangling, data cleaning, 'SQL', code generation, weather reporting, and research-driven question answering. Each agent performs iterative reasoning: recommending steps, generating R code, executing, debugging, and explaining results. Includes built-in support for packages such as 'tidymodels', 'modeltime', 'plotly', 'ggplot2', and 'prophet'. Designed for analysts, developers, and teams building intelligent, reproducible AI workflows in R. Compatible with LLM providers such as 'OpenAI', 'Anthropic', 'Groq', and 'Ollama'. Inspired by the Python package 'langagent'.

LLMAgentR


Overview

LLMAgentR is an R package for building Language Model Agents using a modular state graph execution framework. Inspired by LangGraph and LangChain architectures, it supports iterative workflows for research, data analysis, and automation.
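To make the state-graph idea concrete, here is a minimal, self-contained sketch of a graph executor in base R. This is illustrative only, not LLMAgentR's actual implementation: each node is a function that updates shared state and names the next node, mimicking an agent's recommend, generate, explain loop.

```r
# Minimal state-graph executor: each node is a function that updates the
# shared state and names the next node; execution stops at "END".
run_graph <- function(nodes, state, start = "recommend", max_steps = 10) {
  current <- start
  for (i in seq_len(max_steps)) {
    if (identical(current, "END")) break
    step    <- nodes[[current]](state)
    state   <- step$state
    current <- step$next_node
  }
  state
}

# Toy nodes mimicking an agent's recommend -> generate -> explain loop
nodes <- list(
  recommend = function(s) list(state = c(s, "recommended"), next_node = "generate"),
  generate  = function(s) list(state = c(s, "generated"),  next_node = "explain"),
  explain   = function(s) list(state = c(s, "explained"),  next_node = "END")
)

run_graph(nodes, character(0))  # "recommended" "generated" "explained"
```

In the real package, each node would call an LLM and execute generated R code rather than append strings, but the control flow follows the same pattern.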


Installation

install.packages("LLMAgentR")

Development version

To get the latest features or bug fixes, you can install the development version of LLMAgentR from GitHub:

# If needed
install.packages("remotes")

remotes::install_github("knowusuboaky/LLMAgentR")

See the full function reference or the package website for more details.


Environment Setup

API Setup

Sys.setenv(
  OPENAI_API_KEY     = "your-openai-key",
  GROQ_API_KEY       = "your-groq-key",
  ANTHROPIC_API_KEY  = "your-anthropic-key",
  DEEPSEEK_API_KEY   = "your-deepseek-key",
  DASHSCOPE_API_KEY  = "your-dashscope-key",
  GH_MODELS_TOKEN    = "your-github-models-token"
)
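Typically only the providers you plan to call need a key. A quick way to check which keys are currently set, without printing the secrets themselves:

```r
# Report which provider keys are set (TRUE/FALSE), without revealing values
keys <- c("OPENAI_API_KEY", "GROQ_API_KEY", "ANTHROPIC_API_KEY",
          "DEEPSEEK_API_KEY", "DASHSCOPE_API_KEY", "GH_MODELS_TOKEN")
key_status <- nzchar(Sys.getenv(keys))
names(key_status) <- keys
print(key_status)
```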

LLM Support (Minimal Wrapper)

The chatLLM package lets you interact with large language models (LLMs) either through direct calls or via a reusable minimal wrapper.

Load the Package

library(chatLLM)

Minimal Wrapper Function

call_llm() can be invoked directly, or wrapped in a lightweight function for reuse. A direct call:


call_llm(
  prompt     = "Summarize the capital of France.",
  provider   = "groq",
  model      = "llama3-8b",
  temperature = 0.7,
  max_tokens = 200,
  verbose = TRUE
)

And a reusable wrapper with optional verbose output:

my_llm_wrapper <- function(prompt, verbose = FALSE) {
  if (verbose) {
    message("[my_llm_wrapper] Sending prompt to LLM...")
  }

  # When verbose = FALSE, suppress console output but still return the response
  response_text <- if (verbose) {
    call_llm(
      prompt     = prompt,
      provider   = "openai",
      model      = "gpt-4o",
      max_tokens = 3000,
      verbose    = TRUE
    )
  } else {
    suppressMessages(
      suppressWarnings(
        call_llm(
          prompt     = prompt,
          provider   = "openai",
          model      = "gpt-4o",
          max_tokens = 3000,
          verbose    = FALSE
        )
      )
    )
  }

  if (verbose) {
    message("[my_llm_wrapper] Response received.")
  }

  return(response_text)
}
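To exercise the wrapper pattern offline, you can swap in a stub for the LLM call. The stub below is a hypothetical stand-in, not part of chatLLM; it lets you test the wrapper's control flow without an API key or network access.

```r
# Hypothetical stub standing in for chatLLM::call_llm(), so the wrapper
# pattern can be exercised without an API key or network access
call_llm_stub <- function(prompt, ...) paste("ECHO:", prompt)

my_stub_wrapper <- function(prompt, verbose = FALSE) {
  if (verbose) message("[my_stub_wrapper] Sending prompt...")
  response_text <- call_llm_stub(prompt)
  if (verbose) message("[my_stub_wrapper] Response received.")
  response_text
}

my_stub_wrapper("Hello")  # "ECHO: Hello"
```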

Quick Access Shortcut

Alternatively, preconfigure an LLM call for one-liners:

my_llm_wrapper <- call_llm(
  provider   = "openai",
  model      = "gpt-4o",
  max_tokens = 3000,
  verbose    = TRUE
)

Related Package: chatLLM

The chatLLM package (now available on CRAN) offers a modular interface for interacting with LLM providers including OpenAI, Groq, Anthropic, DeepSeek, DashScope, and GitHub Models.

install.packages("chatLLM")

Agent Articles

Detailed guides live in the pkgdown Articles (one per agent), along with a guide to custom graph workflows and a full index page.

License

MIT (c) Kwadwo Daddy Nyame Owusu Boakye.

Metadata

Version

0.3.2

License

MIT