Overview
Fine-tune LLMs on your Lightning Rod datasets. The training API supports LoRA fine-tuning with configurable base models, training steps, batch size, and rank.
Early Access — The training API is in preview and not fully stable. Join the early access waitlist to get access and stay updated.
Workflow
Generate Dataset — Use the dataset generation pipeline to create labeled forecasting samples
Prepare Data — Run filter_and_split() to filter, deduplicate, and split into train/test datasets
Configure Training — Set base model, training steps, and optional LoRA parameters
Train — Submit a training job and monitor progress
Evaluation — Run evals against your test dataset
Inference — Use the trained model via lr.predict() or the OpenAI-compatible API
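The Prepare Data step above — filter, deduplicate, split — can be sketched in plain Python. This is a conceptual stand-in, not the Lightning Rod filter_and_split() implementation; the field names ("prompt", "label"), the 80/20 split, and the helper name are assumptions for illustration.

```python
import random

def filter_and_split_sketch(samples, test_fraction=0.2, seed=7):
    """Conceptual sketch of a filter/dedupe/split step.

    NOT the Lightning Rod filter_and_split() implementation; field
    names ("prompt", "label") and the 80/20 split are assumptions.
    """
    # Filter: drop samples missing a prompt or a label.
    kept = [s for s in samples if s.get("prompt") and s.get("label") is not None]

    # Deduplicate on the prompt text, keeping the first occurrence.
    seen, unique = set(), []
    for s in kept:
        if s["prompt"] not in seen:
            seen.add(s["prompt"])
            unique.append(s)

    # Shuffle deterministically, then split into train/test.
    rng = random.Random(seed)
    rng.shuffle(unique)
    n_test = max(1, int(len(unique) * test_fraction))
    return unique[n_test:], unique[:n_test]

samples = [
    {"prompt": "Will it rain tomorrow?", "label": 1},
    {"prompt": "Will it rain tomorrow?", "label": 1},  # duplicate, dropped
    {"prompt": "Will the index close up?", "label": 0},
    {"prompt": "", "label": 1},                        # empty prompt, filtered
    {"prompt": "Will the launch slip?", "label": 1},
]
train, test = filter_and_split_sketch(samples)
print(len(train), len(test))  # → 2 1
```

The real pipeline may filter on additional criteria (e.g. label quality); the key property sketched here is that train and test never share a deduplicated sample.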
Key Capabilities
LoRA fine-tuning with configurable rank and batch size
Configurable base models (e.g. Qwen, Llama)
Cost estimation before running
Live progress monitoring in notebooks
OpenAI-compatible inference with your trained model
Built-in evals against test datasets
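Because the trained model is served behind an OpenAI-compatible API, inference requests follow the standard chat-completions shape. A minimal sketch of building such a request payload — the model identifier is a placeholder, not a value from this document:

```python
import json

# Placeholder — substitute the model id returned by your training job.
MODEL_ID = "my-finetuned-model"

def build_chat_request(question: str) -> dict:
    """Build a standard OpenAI-compatible /v1/chat/completions payload."""
    return {
        "model": MODEL_ID,
        "messages": [
            {"role": "user", "content": question},
        ],
        "temperature": 0.0,  # deterministic output for forecasting evals
    }

payload = build_chat_request("Will the index close up tomorrow?")
print(json.dumps(payload, indent=2))
```

Any OpenAI-compatible client (including the official SDKs pointed at a custom base URL) can send this payload unchanged.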
Next Steps
Data Preparation — Use filter_and_split() to get training-ready datasets
Training — Configure and run training jobs
Evaluation — Evaluate your trained model
Inference — Use your trained model for predictions