# Groq Provider
Configure and use Groq for ultra-fast inference with Ondine.
## Setup
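Groq requires an API key, which Groq clients conventionally read from the `GROQ_API_KEY` environment variable. Whether Ondine picks this variable up automatically is an assumption; check the authentication docs for your version. A minimal sketch:

```python
import os

# Groq clients conventionally read the API key from GROQ_API_KEY;
# whether Ondine reads this exact variable is an assumption.
# "gsk_your_key_here" is a placeholder, not a real key.
os.environ["GROQ_API_KEY"] = "gsk_your_key_here"
```

In production, prefer exporting the variable in your shell or a secrets manager rather than setting it in code.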
## Basic Usage
```python
from ondine import PipelineBuilder

pipeline = (
    PipelineBuilder.create()
    .from_csv("data.csv", input_columns=["text"], output_columns=["result"])
    .with_prompt("Process: {text}")
    .with_llm(provider="groq", model="llama-3.3-70b-versatile")
    .build()
)

result = pipeline.execute()
```
## Available Models
- `llama-3.3-70b-versatile` - Best performance
- `llama-3.1-70b-versatile` - Fast and capable
- `mixtral-8x7b-32768` - Long context window
## Configuration Options
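The source does not enumerate the options here. As an illustrative sketch only, common LLM generation parameters such as `temperature` and `max_tokens` might be supplied when configuring the model; these keyword names are assumptions, not confirmed parameters of Ondine's `with_llm()`:

```python
from ondine import PipelineBuilder

# Hypothetical sketch: the temperature and max_tokens keywords are
# assumed, not confirmed Ondine API; verify against your version.
pipeline = (
    PipelineBuilder.create()
    .from_csv("data.csv", input_columns=["text"], output_columns=["result"])
    .with_prompt("Process: {text}")
    .with_llm(
        provider="groq",
        model="llama-3.3-70b-versatile",
        temperature=0.2,  # assumed keyword
        max_tokens=512,   # assumed keyword
    )
    .build()
)
```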
### Performance
Groq is optimized for low-latency inference, so pipelines can safely run with a high request concurrency. Recommended concurrency: 30-50.
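To see why the concurrency setting matters, here is a back-of-envelope estimate of batch wall-clock time, assuming requests are issued in waves of `concurrency` and an illustrative (assumed, not measured) per-request latency:

```python
import math

def estimate_batch_seconds(n_rows: int, concurrency: int,
                           secs_per_request: float) -> float:
    """Rough wall-clock estimate: rows processed in waves of `concurrency`."""
    waves = math.ceil(n_rows / concurrency)
    return waves * secs_per_request

# Assuming ~0.5 s/request (illustrative figure, not a benchmark),
# 10,000 rows at concurrency 40 take roughly 125 seconds.
print(estimate_batch_seconds(10_000, 40, 0.5))
```

This simple wave model ignores rate limits and retries, but it shows that raising concurrency from, say, 10 to 40 cuts batch time roughly fourfold until provider limits dominate.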