SynergiZing ML and LLMs in R | Presale — AthlyticZ
Presale ends May 7, 2026  ·  Save $350
Presale Open // Launch May 2026

Ship production R + LLM
workflows that actually work

The bridge between tidymodels and modern LLMs — tool-calling with ellmer, RAG with ragnar, production deployment via Shiny and vetiver, rigorous evaluation with vitals.

Regular $1,149 · Save $350 · Ends May 7
Free Preview · SynergiZing ML and LLMs in R · Module 02
21 Modules · 110+ Lessons · 6 Capstone Systems · Lifetime Access

Every enrollment includes 1 month of prebuilt VM access with R and every package pre-configured. Zero setup — sign up, start coding. Explore membership for annual VM access across our platform.

The Gap We're Closing

Most R users get stuck between
tidymodels and LLMs

You know how to fit a model. You’ve played with ChatGPT. But the bridge between them — tool-calling, RAG, production deployment, evaluation — is where everything breaks.

Where people get stuck

Python tutorials don’t translate

Every LLM course is in Python. You spend hours translating LangChain only to discover half of it doesn’t have an R equivalent.

Tool-calling feels like black magic

You read about "agents" and "tools" but can’t figure out how to register a tidymodels prediction function as something an LLM can actually call.

RAG tutorials skip the hard parts

They show chunking on a toy example. Then you try it on your PDFs and retrieval quality is 20%. Nobody teaches evaluation.

Deployment is an afterthought

Your LLM app works in RStudio. Getting it behind a Shiny app that doesn’t leak API keys or crash under load? That’s where projects die.

What you’ll build here

R-native from end to end

ellmer, ragnar, querychat, mall, vitals, vetiver. The modern R AI stack taught by the people who use it in production.

Tool-calling that calls your real models

Register a tidymodels workflow as a tool. NL interfaces to ML pipelines. Users ask questions, get predictions, never leave R.
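As a sketch of what that registration looks like (the fitted workflow `fitted_wf`, the column names, and the model ID are hypothetical; check `?ellmer::tool`, whose signature changed in ellmer 0.2.0):

```r
library(ellmer)
library(tidymodels)

# Assume `fitted_wf` is a fitted tidymodels workflow (hypothetical)
predict_win_prob <- function(down, ydstogo, yardline_100) {
  new_play <- tibble::tibble(
    down = down, ydstogo = ydstogo, yardline_100 = yardline_100
  )
  predict(fitted_wf, new_play, type = "prob")$.pred_win
}

chat <- chat_openai(model = "gpt-4.1-mini")
chat$register_tool(tool(
  predict_win_prob,
  name = "predict_win_prob",
  description = "Win probability for the current game state.",
  arguments = list(
    down         = type_integer("Current down (1-4)"),
    ydstogo      = type_integer("Yards to go for a first down"),
    yardline_100 = type_integer("Yards from the opponent's end zone")
  )
))

# The LLM decides when to call the tool and folds the result into its answer
chat$chat("What's our win probability on 3rd and 8 from our own 40?")
```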

Production RAG with proper eval

Hybrid retrieval (BM25 + embeddings). Chunking strategies that work. RAG evaluation with vitals so you know when your system is good.

Full Shiny + vetiver deployment

Package models with vetiver, wrap in Shiny, monitor cost and latency, add observability. Your app in production, not your laptop.
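The deployment path is roughly this (a sketch using a hypothetical fitted workflow `fitted_wf` and a temporary pin board; swap in `board_connect()` or `board_s3()` for real deployments):

```r
library(vetiver)
library(pins)
library(plumber)

# Bundle a fitted tidymodels workflow with its metadata and input prototype
v <- vetiver_model(fitted_wf, model_name = "xg-model")

# Version it on a board (board_temp() is demo-only)
board <- board_temp(versioned = TRUE)
vetiver_pin_write(board, v)

# Serve it as a REST API with a /predict endpoint
pr() |>
  vetiver_api(v) |>
  pr_run(port = 8080)
```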

Real Code · Live Typing

Watch the modern R AI stack
write itself

Most courses teach packages in isolation. Here you learn how ellmer, ragnar, tidymodels, and Shiny compose into real systems.

The terminal on the right is typing real R code, live. Three examples cycle automatically — or click any tab to jump to it.

01 // ellmer
Chat + Tool-Calling
Register R functions, call them from natural language
02 // ragnar
RAG Pipelines
BM25, embeddings, hybrid retrieval in R
03 // vitals
Model Evaluation
Statistical model comparison, RAG-specific evals
04 // vetiver
ML Deployment
Package models, version them, deploy as APIs
Retrieval-Augmented Generation

See your RAG pipeline come alive

Watch documents flow through chunking, embedding, hybrid retrieval, and context augmentation — into an LLM that answers with sources. This is Module 19 in one diagram.

[Animated diagram — the Module 19 pipeline: documents (report_01.pdf, report_02.pdf, report_03.pdf) → chunking (ragnar_chunks, chunk_size=512) → vector search (embed_openai, dim=1536) and keyword search (bm25_rank, k1=1.2, b=0.75) → hybrid reranking (top_k=8) → ellmer::chat answers the user query ("Why did Arsenal lose?") with a grounded response and cited sources.]
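In code, the same flow is only a few ragnar calls (a sketch with the report file names above; function names follow the ragnar documentation, but the package is young, so verify the current API):

```r
library(ragnar)

# DuckDB-backed store with an OpenAI embedding function
store <- ragnar_store_create(
  "reports.duckdb",
  embed = \(x) embed_openai(x, model = "text-embedding-3-small")
)

# Convert each PDF to markdown, chunk it, and insert the chunks
for (path in c("report_01.pdf", "report_02.pdf", "report_03.pdf")) {
  chunks <- read_as_markdown(path) |> markdown_chunk(target_size = 512)
  ragnar_store_insert(store, chunks)
}
ragnar_store_build_index(store)

# Hybrid retrieval: BM25 plus embeddings, reranked
ragnar_retrieve(store, "Why did Arsenal lose?", top_k = 8)

# Or hand retrieval to the LLM as a tool it can call itself
chat <- ellmer::chat_openai()
ragnar_register_tool_retrieve(chat, store)
chat$chat("Why did Arsenal lose? Cite your sources.")
```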
What You’ll Actually Build

Six portfolio-worthy systems

Not toy examples. Complete, deployable R applications that demonstrate production-grade ML + LLM integration. The kind recruiters actually want to see.

01 · Phase I

Full tidymodels ML pipeline

Recipes, workflow sets, regularized regression, random forests, neural nets, cross-validation, hyperparameter tuning.

tidymodels · recipes · workflow_sets

02 · Phase II

NL → SQL database analyst

Natural language interface to a relational database. Tool-calling, safe query execution, structured output extraction.

ellmer · querychat · duckdb

03 · Phase III

Document Q&A system

Hybrid retrieval over multiple documents. BM25 plus embeddings, chunking strategies, retrieval tool registration.

ragnar · BM25 · embeddings

04 · Phase III

LLM evaluation harness

Eval datasets, statistical model comparison, RAG-specific evaluation design. Know when your system is actually good.

vitals · evals · RAG eval

05 · Phase I

Deployed ML model API

Package a tidymodels workflow with vetiver, version it, deploy as an API, monitor drift in production.

vetiver · orbital · plumber

06 · Phase IV

Production Shiny chatbot

Full LLM chatbot in Shiny. Tool-calling, RAG, prediction explanation, cost/latency monitoring, proper UX.

shinychat · Shiny · observability
Your Instructors

Built by R’s most respected
LLM practitioners

Two instructors. One deep in package development and production R. One at the intersection of modeling, deployment, and modern AI workflows. Both writing R in production every day.

Dr. Nic Crane
Director, NC Data Labs
Apache Arrow Core Maintainer
“LLMs open up new ways to extend your analyses in R. I’m excited to teach how to use LLMs with tidymodels to take your modelling further.”

Independent R consultant with 15 years in R and 10+ years shipping R in production, plus a PhD in Statistics. Core maintainer of Apache Arrow for R and author of Scaling Up with R and Arrow (CRC Press, 2025). Deep experience across pharma, public health, academia, and startups.

Apache Arrow tidymodels ragnar Production R Pharma
Dr. Christoph Scheuch
Associate Researcher, Humboldt University of Berlin
Co-Creator of Tidy Finance & EconDataverse
“LLMs are reshaping modeling. Exploration is faster, communication clearer, workflows more powerful. Let’s pair tidymodels with modern AI.”

Financial economist turned data scientist and product manager. Builds interactive dashboards, automated reporting, and custom R/Python packages. Extensive experience with CI/CD and Shiny deployments — Docker, ShinyProxy, Posit Connect, GCR.

ellmer Shiny vetiver Docker CI/CD
The Full Curriculum

21 modules. 110+ lessons.
One complete path.

From tidymodels foundations through production LLM apps. Click any module to see every lesson.

I
Phase I · Machine Learning with tidymodels
From Visualization to Production ML
10 modules · 47 lessons

Build a rigorous ML foundation using tidymodels — from exploratory analysis through deployment. Sports data throughout. Real workflows, not toy examples.

01
Foundations of AI in R
2 lessons
1.1 ML vs LLMs
1.2 AI Ecosystem in R
02
From Viz to Models
3 lessons
2.1 Play-by-Play Data
2.2 Exploration via Visualization
2.3 Logistic Regression
03
Model Evaluation and Testing
6 lessons
3.1 Training and Test Sets
3.2 Model Specifications and Workflows
3.3 Making Predictions
3.4 Confusion Matrix and Basic Metrics
3.5 Advanced Evaluation Techniques
3.6 Overfitting and Test Evaluation
04
Resampling and Model Tuning
6 lessons
4.1 Introduction to Cross-Validation
4.2 Working with Resampling Results
4.3 Comparing Models with Cross-Validation
4.4 Introduction to Hyperparameter Tuning
4.5 Selecting and Finalizing Models
4.6 Tuning Multiple Parameters
05
Predicting Continuous Outcomes
4 lessons
5.1 From Classification to Regression
5.2 Linear Regression in Tidymodels
5.3 Random Forests for Regression
5.4 Evaluating Regression Models
06
Preprocessing with Recipes
4 lessons
6.1 Why Preprocessing Matters
6.2 Essential Recipe Steps
6.3 Preparing and Applying Recipes
6.4 Recipes in Workflows
07
Regularized Regressions
4 lessons
7.1 The Problem of Overfitting
7.2 Ridge, Lasso, and Elastic Net
7.3 Tuning Regularized Models
7.4 Interpreting Regularized Coefficients
08
Workflow Sets for Model Comparison
4 lessons
8.1 The Challenge of Fair Model Comparison
8.2 Creating Workflow Sets
8.3 Tuning Workflow Sets
8.4 Selecting and Finalizing the Best Workflow
09
Neural Networks
4 lessons
9.1 Why Neural Networks
9.2 Building Your First Neural Network
9.3 Tuning Neural Networks
9.4 Deeper Networks and Practical Considerations
10
Deployment of ML Models
4 lessons
10.1 Why Deployment Matters
10.2 Deploying Models with Vetiver
10.3 Exporting Models with Orbital
10.4 Monitoring and Maintaining Deployed Models
Why this matters

You don’t need a separate ML course after this

Most LLM courses handwave the ML side. Here, you get the full tidymodels treatment — preprocessing with recipes, tuning with workflow_sets, deployment with vetiver. By the time you hit Phase II, you’ve built production ML pipelines. The LLM layer sits on top of real modeling foundations.
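That treatment compresses into one recurring workflow pattern (a sketch — `train` and the `win` outcome are hypothetical stand-ins for the course's play-by-play data):

```r
library(tidymodels)

# Preprocessing as a recipe
rec <- recipe(win ~ ., data = train) |>
  step_dummy(all_nominal_predictors()) |>
  step_normalize(all_numeric_predictors())

# A tunable regularized model (lasso)
spec <- logistic_reg(penalty = tune(), mixture = 1) |>
  set_engine("glmnet")

wf <- workflow() |>
  add_recipe(rec) |>
  add_model(spec)

# Cross-validated tuning, then a final fit on the full training set
folds <- vfold_cv(train, v = 5)
res   <- tune_grid(wf, resamples = folds, grid = 20)

final_fit <- wf |>
  finalize_workflow(select_best(res, metric = "roc_auc")) |>
  fit(data = train)
```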

II
Phase II · LLM Foundations in R
Chat, Prompts, Structured Output, Multimodal
6 modules · 22 lessons

Build the core LLM skill stack in R using ellmer. Prompts, conversations, structured data extraction, multimodal input, and model selection including local models via Ollama.

11
From ML to LLMs
2 lessons
11.1 A Gentle Introduction to LLMs
11.2 Data Science Principles
12
Chat Basics
4 lessons
12.1 Introduction to LLMs
12.2 Getting Started with Ellmer
12.3 Conversations, Roles, and Tokens
12.4 System Prompts
13
Prompt Engineering
5 lessons
13.1 Introduction to Prompt Engineering
13.2 Adding Context and Instructions
13.3 Handling Missing Data
13.4 Few-shot Prompting
13.5 Structuring and Storing Prompts
14
Structured Output
4 lessons
14.1 Introduction to Structured Output
14.2 Extracting Data Frames
14.3 Handling Missing Fields
14.4 Batch Processing
15
Multimodal Input
3 lessons
15.1 Multimodal Input
15.2 Hallucination with Plot Interpretation
15.3 Extracting Data from Images
16
Model Selection
5 lessons
16.1 Model Selection
16.2 Working with Proprietary Models
16.3 Local Models Intro
16.4 Running Local Models with Ollama
16.5 Local Models in R
The real skill

Prompt engineering isn’t magic — it’s structured thinking

You’ll go from basic chat to few-shot prompting, structured output that returns clean data frames, and multimodal workflows that extract tables from images. This is the skill stack that turns LLMs into actual R tools instead of toys.
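Structured output is the hinge of that stack. A sketch of the ellmer pattern (the text and fields are hypothetical; `chat_structured()` replaced the older `extract_data()` in recent ellmer releases, so check your version):

```r
library(ellmer)

# Describe the shape of the data you want back
player_stats <- type_object(
  name    = type_string("Player name"),
  goals   = type_integer("Goals scored"),
  assists = type_integer("Assists")
)

chat <- chat_openai(model = "gpt-4.1-mini")

# Returns data matching the spec, not free text
chat$chat_structured(
  "Saka scored twice and added an assist in the 3-1 win over Spurs.",
  type = player_stats
)
```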

III
Phase III · Production LLM Systems
Tool-calling, RAG, NLP, and Evaluation
4 modules · 25 lessons

The core differentiator of this course. Register R functions as tools. Build RAG systems with proper retrieval evaluation. Run NLP pipelines with mall. Measure everything with vitals.
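The mall piece of that stack is deliberately simple: one verb per NLP task, applied down a data-frame column. A sketch, assuming a local llama3.2 model via Ollama and a hypothetical `reviews` data frame with a `review` column:

```r
library(mall)

# Point mall at a local model (or llm_use("openai", ...) for a hosted one)
llm_use("ollama", "llama3.2", seed = 100)

reviews |>
  llm_sentiment(review) |>                 # adds a sentiment column
  llm_summarize(review, max_words = 10) |> # adds a short summary column
  llm_classify(review, labels = c("tactics", "transfers", "injuries"))
```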

17
Tool Calling
5 lessons
17.1 Introduction to Tools
17.2 Data Query Tools
17.3 Data Query Tools (continued)
17.4 Bio Retrieval Tools
17.5 User Control and Safety
18
NLP with Mall
4 lessons
18.1 Introduction to Mall
18.2 Text Summarisation
18.3 Extracting Information
18.4 Text Classification
19
Retrieval-Augmented Generation
8 lessons
19.1 Introducing RAG and Ragnar
19.2 Document Processing
19.3 Retrieval via BM25
19.4 Retrieval via Embeddings
19.5 Hybrid Retrieval
19.6 Registering Retrieval Tools
19.7 Context Augmentation
19.8 Multiple Documents
20
Model Evaluation with Vitals
8 lessons
20.1 Introduction to Evals
20.2 Evaluation Data Sets
20.3 Running First Evaluation
20.4 Viewing Eval Results
20.5 Comparing Models
20.6 Statistical Comparison
20.7 RAG Evaluation Design
20.8 Running RAG Evaluation
The part nobody else teaches in R

Build a working RAG pipeline and learn how to evaluate it

Eight lessons on RAG covering BM25, embeddings, hybrid retrieval, tool registration, and context augmentation. Then eight lessons introducing evaluation with vitals, including RAG-specific eval design. By the end, you’ll have a working RAG system and a principled starting framework for evaluating it — with clear pointers to the deeper evaluation literature so you can go further when you need to.
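The evaluation loop in vitals centers on a Task object that pairs a dataset with a solver and a scorer. A sketch (the Q&A dataset is hypothetical, and exact constructor arguments may differ between vitals versions — treat this as the shape, not the letter, of the API):

```r
library(vitals)
library(ellmer)
library(tibble)

# An eval dataset pairs inputs with reference targets
qa <- tibble(
  input  = c("Which team led the league in xG?", "Who scored the opener?"),
  target = c("Arsenal", "Saka")
)

tsk <- Task$new(
  dataset = qa,
  solver  = generate(chat_openai(model = "gpt-4.1-mini")),
  scorer  = model_graded_qa()  # an LLM judge grades answers against `target`
)

tsk$eval()       # run solver and scorer over the dataset
vitals_view(tsk) # inspect the graded results interactively
```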

IV
Phase IV · Shipping Real LLM Apps
Shiny + querychat + LLM Application Patterns
1 module · 10 lessons

Put it all together. Build complete LLM applications with Shiny, querychat for NL-to-SQL, and shinychat for production chatbots. Tool-calling, RAG, model explanation — all wired into a real deployable app.
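A minimal version of that wiring, following the shinychat README pattern (sketch — the system prompt is a placeholder, and an API key is assumed to be set in the environment):

```r
library(shiny)
library(shinychat)
library(ellmer)

ui <- bslib::page_fillable(
  chat_ui("chat")
)

server <- function(input, output, session) {
  chat <- chat_openai(system_prompt = "You are a concise football analytics assistant.")

  # shinychat exposes each user message as input$<id>_user_input
  observeEvent(input$chat_user_input, {
    stream <- chat$stream_async(input$chat_user_input)
    chat_append("chat", stream)
  })
}

shinyApp(ui, server)
```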

21
LLM Apps
10 lessons
21.1 LLMs for Data Exploration
21.2 Text to SQL
21.3 Querychat
21.4 Querychat System Prompt
21.5 Customizing Querychat
21.6 Chatbots with Shinychat
21.7 Tool Calling in Shiny (Part 1)
21.8 Tool Calling in Shiny (Part 2)
21.9 RAG in Chatbots
21.10 Explaining Model Predictions
The finish line

You leave with a deployable app — not a notebook

By Module 21, you’re wiring everything together. Tool-calling inside Shiny. RAG inside chatbots. NL-to-SQL with querychat. Model explanation in-app. This is the portfolio project recruiters actually want to see — not another Kaggle notebook.

Enroll Today

Two ways to get in

Presale pricing ends May 7, 2026 at 11:59pm ET. Same course, later date, higher price.

Regular Price // After May 7
SynergiZing ML & LLMs in R
$1,149
Same course · Same content · Later date
  • All presale content included
  • Launch access at course release
  • Same 1-month VM access
  • Same lifetime access
Presale saves you $350.
Why pay more to wait?
Get Presale Price Instead

AthlyticZ Members enroll at $639 · Explore Membership →

Before You Enroll

Quick answers

What should I know before enrolling?
Comfort with base R and the tidyverse is recommended. Basic Shiny familiarity helps for Phase IV but isn’t required. If you’re rusty, check out BreeZing Through the Tidyverse by Dr. Paul Sabin first.

When does the course go live?
Full course goes live May 2026. As a presale customer, you’ll get access on launch day, plus early syllabus updates, pilot lesson drops, and a behind-the-scenes look as modules are finalized.

What’s the format?
Async. 110+ lessons of short, focused video clips (typically 5–15 minutes each) plus case study projects. Each enrollment includes 1 month of prebuilt VM access with R and every package pre-configured, so you can code along from day one. Explore membership for annual VM access across our platform.

Which models does the course use?
Both proprietary (OpenAI, Anthropic, Google) and local models via Ollama. Module 16 walks through model selection, tradeoffs, and running local models in R. You’ll bring your own API keys or use free local models during the course.

Why buy during presale?
You save $350 (30%). You get access to syllabus updates and pilot lessons as they’re produced. And you lock in the rate before any price increases. Same course, lower price, earlier visibility.

What’s the refund policy?
3-day money-back guarantee from course launch day. If the course isn’t what you expected after launch, we’ll refund you — no questions.

How is this different from Python LLM courses?
This is R-native end to end. Built around ellmer, ragnar, vitals, querychat, shinychat, mall, and vetiver — the modern R LLM stack. Taught by an Apache Arrow core maintainer and a production R practitioner. You won’t be translating LangChain code. You’ll be writing R.
SynergiZing ML & LLMs in R

Build the R + LLM workflows
the rest of the field is still Googling

21 modules. 110+ lessons. Full tidymodels + modern LLM stack. Six portfolio systems. 1 month of prebuilt VM access included.

$799
$1,149 after May 7 // You save $350
Presale closes May 7, 2026 at 11:59pm ET
Enroll Now