# Evaluation

## Methods
### create

```python
eval_job = lr.evals.create(
    model_id=job.model_id,
    dataset=test_dataset,
    benchmark_model_id="openai/gpt-5.2",
    temperature=0.0,
)
```
### run

```python
eval_job = lr.evals.run(
    model_id=job.model_id,
    dataset=test_dataset,
    benchmark_model_id="openai/gpt-5.2",
    temperature=0.0,
    poll_interval=15.0,
)
```

### get

### list

### print_eval
## Example
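Judging from its extra `poll_interval` argument, `run` appears to be a blocking variant of `create` that submits the evaluation and then polls its status until it reaches a terminal state. Below is a minimal sketch of that create-then-poll pattern. Everything here is an assumption, not the real SDK: `StubEvals`, the job dict shape, and the status values merely stand in for whatever `lr.evals` actually returns.

```python
import time

class StubEvals:
    """Stand-in for lr.evals (hypothetical): the job finishes on the second poll."""

    def __init__(self):
        self._polls = 0

    def create(self, **params):
        # Submit an evaluation job; return its initial record.
        return {"id": "eval-123", "status": "queued", **params}

    def get(self, eval_id):
        # Fetch the current job record; flips to "completed" after two polls.
        self._polls += 1
        status = "completed" if self._polls >= 2 else "running"
        return {"id": eval_id, "status": status}

def run_eval(evals, poll_interval=0.01, **params):
    """Sketch of what a blocking run likely does: create, then poll until done."""
    job = evals.create(**params)
    while True:
        job = evals.get(job["id"])
        if job["status"] in ("completed", "failed"):
            return job
        time.sleep(poll_interval)

result = run_eval(StubEvals(), model_id="my-model", dataset=[], temperature=0.0)
print(result["status"])  # completed
```

With the real client, `run` would presumably save you from writing this loop yourself; use `create` plus `get` when you want to submit the job and check on it later without blocking.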
