Description

A Flexible Interface for 'LLM' API Interactions.

Provides a flexible interface for interacting with Large Language Model ('LLM') providers including 'OpenAI', 'Azure OpenAI', 'Azure AI Foundry', 'Groq', 'Anthropic', 'DeepSeek', 'DashScope', 'Gemini', 'Grok', 'GitHub Models', and AWS Bedrock. Supports both synchronous and asynchronous chat-completion APIs, with features such as retry logic, dynamic model selection, customizable parameters, and multi-message conversation handling. Designed to streamline integration with state-of-the-art LLM services across multiple platforms.

chatLLM


Overview

chatLLM is an R package providing a single, consistent interface to multiple “OpenAI‑compatible” chat APIs (OpenAI, Groq, Anthropic, DeepSeek, Alibaba DashScope, Gemini, Grok, GitHub Models, AWS Bedrock, Azure OpenAI, and Azure AI Foundry).

Key features:

  • 🔄 Uniform API across providers
  • 🗣 Multi‑message context (system/user/assistant roles)
  • 🔁 Retries & backoff with clear timeout handling
  • 🔈 Verbose control (verbose = TRUE/FALSE)
  • ⚙️ Discover models via list_models()
  • 🏗 Factory interface for repeated calls
  • 🌐 Custom endpoint override and advanced tuning

Installation

From CRAN:

install.packages("chatLLM")

Development version:

# install.packages("remotes")  # if needed
remotes::install_github("knowusuboaky/chatLLM")

Setup

Set your API keys or tokens once per session:

Sys.setenv(
  OPENAI_API_KEY       = "your-openai-key",
  GROQ_API_KEY         = "your-groq-key",
  ANTHROPIC_API_KEY    = "your-anthropic-key",
  DEEPSEEK_API_KEY     = "your-deepseek-key",
  DASHSCOPE_API_KEY    = "your-dashscope-key",
  GH_MODELS_TOKEN      = "your-github-models-token",
  GEMINI_API_KEY       = "your-gemini-key",
  XAI_API_KEY          = "your-grok-key",
  AWS_ACCESS_KEY_ID    = "your-aws-access-key",
  AWS_SECRET_ACCESS_KEY = "your-aws-secret-key",
  AWS_REGION           = "us-east-1",
  AZURE_OPENAI_KEY     = "your-azure-openai-key",
  AZURE_OPENAI_ENDPOINT = "https://your-resource.openai.azure.com",
  AZURE_FOUNDRY_KEY    = "your-azure-foundry-key",
  AZURE_FOUNDRY_ENDPOINT = "https://your-foundry-endpoint"
)
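Before making calls, it can help to confirm which credentials are actually visible to the current R session. A minimal sketch using only base R (the variable names mirror the `Sys.setenv()` call above):

```r
# Report which provider credentials are set in this session.
keys <- c("OPENAI_API_KEY", "GROQ_API_KEY", "ANTHROPIC_API_KEY",
          "DEEPSEEK_API_KEY", "DASHSCOPE_API_KEY", "GH_MODELS_TOKEN",
          "GEMINI_API_KEY", "XAI_API_KEY", "AZURE_OPENAI_KEY")
status <- nzchar(Sys.getenv(keys))  # TRUE if non-empty, FALSE if unset
names(status) <- keys
print(status)
```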

Usage

1. Simple Prompt

response <- call_llm(
  prompt     = "Who is Messi?",
  provider   = "openai",
  max_tokens = 300
)
cat(response)

2. Multi‑Message Conversation

conv <- list(
  list(role = "system", content = "You are a helpful assistant."),
  list(role = "user",   content = "Explain recursion in R.")
)
response <- call_llm(
  messages          = conv,
  provider          = "openai",
  max_tokens        = 200,
  presence_penalty  = 0.2,
  frequency_penalty = 0.1,
  top_p             = 0.95
)
cat(response)
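To keep the conversation going, append the assistant's reply and your next user turn to the message list before calling again. A sketch, assuming the `conv` and `response` objects from the example above:

```r
# Continue the conversation: add the assistant's answer plus a follow-up,
# then call the API with the extended history.
conv <- c(conv, list(
  list(role = "assistant", content = response),
  list(role = "user",      content = "Now show a recursive factorial in R.")
))
followup <- call_llm(messages = conv, provider = "openai", max_tokens = 200)
cat(followup)
```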

3. Verbose Off

Suppress informational messages:

res <- call_llm(
  prompt      = "Tell me a joke",
  provider    = "openai",
  verbose     = FALSE
)
cat(res)

4. Factory Interface

Create a reusable LLM function:

# Build a “GitHub Models” engine with defaults baked in
GitHubLLM <- call_llm(
  provider    = "github",
  max_tokens  = 60,
  verbose     = FALSE
)

# Invoke it like a function:
story <- GitHubLLM("Tell me a short story about libraries.")
cat(story)
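Because the factory returns an ordinary R function, it composes with the usual apply-family tools. A sketch, assuming the `GitHubLLM` engine built above:

```r
# Reuse one configured engine across several prompts.
prompts <- c("Define recursion in one sentence.",
             "Define memoisation in one sentence.")
answers <- vapply(prompts, GitHubLLM, character(1))
cat(answers, sep = "\n\n")
```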

5. Discover Available Models

# All providers at once
all_models <- list_models("all")
names(all_models)

# Only OpenAI models
openai_models <- list_models("openai")
head(openai_models)

6. Call a Specific Model

Pick from the list and pass it to call_llm():

anthro_models <- list_models("anthropic")
cat(call_llm(
  prompt     = "Write a haiku about autumn.",
  provider   = "anthropic",
  model      = anthro_models[1],
  max_tokens = 60
))
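When iterating over models or running unattended scripts, it is worth guarding calls so a single failure (bad model name, expired key) does not abort everything. A sketch using base R's `tryCatch()` around `call_llm()`; `safe_llm` is a hypothetical helper name:

```r
# Return NA instead of erroring when a call fails.
safe_llm <- function(prompt, ...) {
  tryCatch(
    call_llm(prompt = prompt, ...),
    error = function(e) {
      message("call_llm failed: ", conditionMessage(e))
      NA_character_
    }
  )
}
res <- safe_llm("Write a haiku about autumn.", provider = "anthropic")
```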

Troubleshooting

  • Timeouts: increase n_tries / backoff, or supply a custom .post_func with a longer timeout().
  • Model Not Found: use list_models("<provider>") or consult provider docs.
  • Auth Errors: verify your API key/token and environment variables.
  • Network Issues: check VPN/proxy, firewall, or SSL certs.
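For the timeout case, a hedged sketch of a custom poster: the exact `.post_func` contract is an assumption here (this assumes it is invoked like `httr::POST()` — check the package documentation before relying on it):

```r
library(httr)

# A poster that allows up to 120 seconds per request.
long_post <- function(url, ...) POST(url, ..., timeout(120))

res <- call_llm(
  prompt     = "Summarise the httr package.",
  provider   = "openai",
  .post_func = long_post
)
```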

Contributing & Support

Issues and PRs welcome at https://github.com/knowusuboaky/chatLLM


License

MIT © Kwadwo Daddy Nyame Owusu - Boakye


Acknowledgements

Inspired by RAGFlowChainR, powered by httr and the R community. Enjoy!
