chatLLM: A Flexible Interface for 'LLM' API Interactions

Provides a flexible interface for interacting with Large Language Model ('LLM') providers, including 'OpenAI', 'Groq', 'Anthropic', 'DeepSeek', 'DashScope', and 'GitHub Models'. Supports both synchronous and asynchronous chat-completion APIs, with features such as retry logic, dynamic model selection, customizable parameters, and multi-message conversation handling. Designed to streamline integration with state-of-the-art LLM services across multiple platforms.
# chatLLM

## Overview

chatLLM is an R package providing a single, consistent interface to multiple "OpenAI-compatible" chat APIs (OpenAI, Groq, Anthropic, DeepSeek, Alibaba DashScope, and GitHub Models).
Key features:

- 🔄 Uniform API across providers
- 🗣 Multi-message context (system/user/assistant roles)
- 🔁 Retries & backoff with clear timeout handling
- 🔈 Verbose control (`verbose = TRUE/FALSE`)
- ⚙️ Discover models via `list_models()`
- 🏗 Factory interface for repeated calls
- 🌐 Custom endpoint override and advanced tuning
## Installation

From CRAN:

```r
install.packages("chatLLM")
```

Development version:

```r
# install.packages("remotes")  # if needed
remotes::install_github("knowusuboaky/chatLLM")
```
## Setup

Set your API keys or tokens once per session:

```r
Sys.setenv(
  OPENAI_API_KEY    = "your-openai-key",
  GROQ_API_KEY      = "your-groq-key",
  ANTHROPIC_API_KEY = "your-anthropic-key",
  DEEPSEEK_API_KEY  = "your-deepseek-key",
  DASHSCOPE_API_KEY = "your-dashscope-key",
  GH_MODELS_TOKEN   = "your-github-models-token"
)
```
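Before making any calls, you can confirm that a key is actually visible to the session using base R alone; the `key_is_set()` helper below is illustrative and not part of chatLLM:

```r
# Illustrative helper (not exported by chatLLM): TRUE only when the
# environment variable is set to a non-empty string.
key_is_set <- function(var) nzchar(Sys.getenv(var, unset = ""))

key_is_set("OPENAI_API_KEY")
```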
## Usage

### 1. Simple Prompt

```r
response <- call_llm(
  prompt     = "Who is Messi?",
  provider   = "openai",
  max_tokens = 300
)
cat(response)
```
### 2. Multi-Message Conversation

```r
conv <- list(
  list(role = "system", content = "You are a helpful assistant."),
  list(role = "user",   content = "Explain recursion in R.")
)

response <- call_llm(
  messages          = conv,
  provider          = "openai",
  max_tokens        = 200,
  presence_penalty  = 0.2,
  frequency_penalty = 0.1,
  top_p             = 0.95
)
cat(response)
```
### 3. Verbose Off

Suppress informational messages:

```r
res <- call_llm(
  prompt   = "Tell me a joke",
  provider = "openai",
  verbose  = FALSE
)
cat(res)
```
### 4. Factory Interface

Create a reusable LLM function:

```r
# Build a "GitHub Models" engine with defaults baked in
GitHubLLM <- call_llm(
  provider   = "github",
  max_tokens = 60,
  verbose    = FALSE
)

# Invoke it like a function:
story <- GitHubLLM("Tell me a short story about libraries.")
cat(story)
```
### 5. Discover Available Models

```r
# All providers at once
all_models <- list_models("all")
names(all_models)

# Only OpenAI models
openai_models <- list_models("openai")
head(openai_models)
```
### 6. Call a Specific Model

Pick a model from the list and pass it to `call_llm()`:

```r
anthro_models <- list_models("anthropic")

cat(call_llm(
  prompt     = "Write a haiku about autumn.",
  provider   = "anthropic",
  model      = anthro_models[1],
  max_tokens = 60
))
```
## Troubleshooting

- **Timeouts:** increase `n_tries`/`backoff`, or supply a custom `.post_func` with a higher `timeout()`.
- **Model Not Found:** use `list_models("<provider>")` or consult the provider docs.
- **Auth Errors:** verify your API key/token and environment variables.
- **Network Issues:** check VPN/proxy, firewall, or SSL certs.
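For stubborn timeouts, the retry knobs mentioned above can be combined in one call. The sketch below assumes `.post_func` receives the request URL plus further arguments to forward (that signature is an assumption, not documented here) and uses `httr::POST()` with `httr::timeout()`, since the package is powered by httr:

```r
library(httr)

res <- call_llm(
  prompt   = "Summarise the plot of Hamlet in two sentences.",
  provider = "openai",
  n_tries  = 5,   # retry up to 5 times on failure
  backoff  = 2,   # wait between attempts (seconds)
  .post_func = function(url, ...) {
    # Same POST, but allow up to 120 seconds per request
    httr::POST(url, ..., httr::timeout(120))
  }
)
cat(res)
```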
## Contributing & Support

Issues and PRs are welcome at <https://github.com/knowusuboaky/chatLLM>.
## License

MIT © Kwadwo Daddy Nyame Owusu - Boakye
## Acknowledgements

Inspired by RAGFlowChainR, powered by httr and the R community. Enjoy!